This is exciting. Having another player in the cutting edge fabrication space is a good thing, and can help keep pressure on Samsung and TSMC and Intel to innovate and continue to make progress.
soldered RAM across the board
More than soldering, it’s actually packaged, physically placed into the same single black package as the CPU and GPU and storage, and then that whole single package is soldered to the board.
Lots and lots of people went on the 98 to XP to 7 to 10 path, skipping Me, Vista, and 8. No reason that pattern can’t continue.
So I used THIS GUIDE to try to make a persistent USB stick.
Yeah you’re gonna want to use an external SSD instead. USB thumb drives can vary considerably in speed, and bottlenecking yourself there is gonna be a miserable experience.
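If you want a quick sanity check on whatever drive you already have, something like this rough Python sketch gives you a ballpark sustained-write number (the mount path is just a placeholder, and dd or fio will do the same thing better):

```python
# Quick-and-dirty sequential write test. The path below is just an example
# mount point; change it to wherever your stick/SSD is mounted.
import os, time

TEST_PATH = "/media/usb/speedtest.bin"   # hypothetical mount point
CHUNK = b"\0" * (4 * 1024 * 1024)        # 4 MiB per write
TOTAL_MB = 512

start = time.time()
with open(TEST_PATH, "wb") as f:
    for _ in range(TOTAL_MB // 4):
        f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())             # force the data to actually hit the drive
elapsed = time.time() - start
os.remove(TEST_PATH)

print(f"{TOTAL_MB / elapsed:.1f} MB/s sustained write")
```

Cheap thumb drives often land in the single or low double digits on a test like this, which is where the misery comes from.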
Intel’s packaging doesn’t seem to be that far behind TSMC’s, just with different strengths and weaknesses, at least on the foundry side. On the design side they were slow to actually implement chiplet based design in the actual chips, compared to AMD who embraced it full force early on, and Apple who rely almost exclusively on System-in-a-Package designs (including their “ultra” line of M-series chips that are two massive Max chips stitched together) where memory and storage are all in one package.
So what IS their strategy now?
I think they need to bet the company on regaining their previous lead in actual cutting edge fabrication of semiconductors.
TSMC basically prints money, but the next stage is a new paradigm where TSMC doesn’t necessarily have a built-in advantage. Samsung and Intel are gunning for that top spot with their own technologies in actually manufacturing and packaging chips, hoping to leapfrog TSMC as the industry tries to scale up mass production of chips using backside power and gate all around FETs (GAAFETs).
If Intel 18A doesn’t succeed, the company is done.
Foxconn had two groups of engineers leave and create Intel and AMD when they were dissatisfied with how management was running the company.
You’re thinking of Fairchild, not Foxconn.
William Shockley led the team that invented the transistor while at Bell Labs, and then went on to move back to his home state of California to found his own company developing silicon transistors, ultimately resulting in the geographical area becoming known as Silicon Valley. Although a brilliant scientist and engineer, he was an abrasive manager, so 8 of his key researchers left the company to form Fairchild Semiconductor, a division of a camera and imaging company with close ties to military contracting.
The researchers at Fairchild developed the silicon integrated circuit (Texas Instruments developed the first integrated circuit using germanium, but that semiconductor material turned out not to scale and hit a dead end early on) and grew the company into a powerhouse. Infighting between engineers and management (especially east coast management dictating what the west coast lab was doing), plus Fairchild's policy of not sharing equity with employees, led Gordon Moore and Robert Noyce (two of the 8 who had left Shockley for Fairchild) to go found Intel, poaching a talented young engineer named Andy Grove.
Intel originally focused on memory, but Grove recognized that the future value would be in processors, so they bet the company on that transition to logic chips, just in time for the computer memory market to get commoditized and for Japanese competition to crush the profit margins in that sector. By the 90’s, Intel became known as the dominant company in CPUs. Intel survived more than one generation on top because they knew when to pivot.
They had untouchable market dominance from the mid 80’s through the mid 2010’s, so probably closer to 30 years.
AMD and Apple caught up on consumer PC processors, as the consumer PC market as a whole kinda started to fall behind tablets and phones as the preferred method of computing. Even in the data center, the importance of the CPU has lost ground to GPU and AI chips in the past 5 years, too. We’ll see how Intel protects its current position in the data center.
I’m personally excited about the actual engineering challenges that come next and think that all 3 big foundries have roughly equal probability of coming out on top in the next stage, as the transistors become more complex three dimensional structures, and as the companies try to deliver power from the back side of the wafer rather than the crowded front side.
Samsung and Intel have always struggled to manufacture finFETs with the yields/performance TSMC gets. Intel's struggles to move on from 14nm led to some fun memes, but also reflected the fact that they hit a plateau they couldn't get around. Samsung and Intel have been eager to get off the finFET paradigm and tried to jump early to gate-all-around FETs (GAAFETs, which Samsung calls MBCFET and Intel calls RibbonFET), while TSMC stuck with finFET for another generation.
Samsung switched to GAAFET for its 3nm node, which began production in 2022, but the reports are that it took a while to get yields up to an acceptable level. Intel introduced GAAFET with its 20A node, but basically abandoned that node before commercial production and put all of its resources into 18A, which they last reported should be ready for mass production in the first half of 2025 and ready for external customers to start taping out their own designs.
Meanwhile, TSMC's 3nm node is still all finFET, basically the end of the line for the technology that catapulted TSMC way ahead of its peers. Its 2nm node will be the first TSMC node to use GAAFET, and they have quietly abandoned plans to introduce backside power in the follow-on N2P node. Their 1.6nm node is going to have backside power, though. They'll be the last to market with these two technologies, but maybe they're going to release a more polished process that still produces better results.
So you have the three competitors, with Samsung being the first to market, Intel likely being second, and TSMC being third, but with no guarantees that they’ll all solve the next generation challenges in the same amount of lead time. It’s a new season, and although past success does show some advantages and disadvantages that may still be there, none of it is a guarantee that the leader right now will remain a leader into the next few generations.
How do you wake a Mac Mini? Is it enough to just press a keyboard button? If so, does the keyboard have to be wired, or does Bluetooth work?
They don’t need to limit themselves to HDMI power. They could do like the Chromecast and Fire stick do, and have an external power source for 7.5W or whatever over USB. At that power level, I’m sure Apple could develop or repurpose their existing silicon lineup to be able to make a passively cooled stick design for under $100.
This meant on traditional forums everyone’s position was not only presented equally
No, the earlier web forums based on phpbb or vbulletin or whatever prioritized the most recent posts. That means that plenty of good content was drowned out by fast moving threads, and threads were sorted by most recent activity, which would allow some threads to fall off quickly unless “bumped.”
That recency-first model was inherently limited in scale. Voting made such a difference for the forums that implemented it (Slashdot, Hacker News, eventually Reddit) because it made the most popular stuff more visible rather than just the most recent stuff, and the local site culture could prioritize whatever characteristics were popular in that particular place. That's why tech support almost entirely switched to Reddit or similar places: the helpfulness of a comment was generally what drove its popularity.
And the biggest problem with the older forums was that they didn't allow for threading. With threading, any particular comment can spawn its own discussion without taking the rest of the thread off on that tangent.
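As a toy sketch of both ideas, with all names made up rather than taken from any real forum's code: threading plus voting basically turns the flat, timestamp-sorted list into a tree of replies sorted by score.

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str
    text: str
    score: int = 0                       # net up/down votes
    replies: list["Comment"] = field(default_factory=list)

def render(comment: Comment, depth: int = 0) -> None:
    """Print a comment and its replies, best-scored branches first."""
    print("  " * depth + f"[{comment.score:+d}] {comment.author}: {comment.text}")
    for reply in sorted(comment.replies, key=lambda c: c.score, reverse=True):
        render(reply, depth + 1)

# A tangent can live under its parent comment without derailing the main thread.
root = Comment("op", "Anyone else seeing this bug?", 12)
fix = Comment("helper", "Update the driver, that fixed it for me.", 30)
tangent = Comment("pedant", "Technically it's firmware, not a driver.", -2)
fix.replies.append(tangent)
root.replies += [fix, Comment("me_too", "Same problem here.", 3)]
render(root)
```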
This limitation comes up sometimes when people try to build out a zero-trust cable where they can get a charge but not necessarily transfer data to or from an untrusted device on the other side.
Well, the last two monitors I bought didn’t come with any signal cables at all, probably because the manufacturers don’t need to presume whether the consumer prefers HDMI or DP, or whether the other side is full size, mini-DP/mini-HDMI, or USB-C alt mode. Just right there, that’s 5 possibilities, each about as common as the others.
You can even get PD capable USB-C cables that don’t transmit data at all.
I don’t think this is right. The PD standard requires the negotiation of which side is the source and which is the sink, and the voltage/amperage, over those data links. So it has to at least support the bare minimum data transmission in order for PD to work.
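For what it's worth, the negotiation itself is conceptually simple. Here's a toy model in Python (simplified message flow, not the real wire protocol or any real library): the source advertises the voltage/current profiles it can supply, the sink requests one, and the source confirms before switching.

```python
# Toy model of a PD-style power negotiation. The voltage/current profiles are
# the common fixed ones; the message names are simplified, not the real spec.

SOURCE_CAPABILITIES = [
    {"volts": 5,  "max_amps": 3.0},
    {"volts": 9,  "max_amps": 3.0},
    {"volts": 15, "max_amps": 3.0},
    {"volts": 20, "max_amps": 5.0},   # 100 W profile
]

def sink_request(capabilities, watts_needed):
    """Sink picks the lowest-voltage profile that covers what it needs."""
    for cap in capabilities:
        if cap["volts"] * cap["max_amps"] >= watts_needed:
            return cap
    return capabilities[-1]            # settle for the biggest on offer

def negotiate(watts_needed):
    chosen = sink_request(SOURCE_CAPABILITIES, watts_needed)
    # In real PD the source then sends Accept and PS_RDY before switching voltage.
    print(f"Sink requested {chosen['volts']} V @ {chosen['max_amps']} A "
          f"({chosen['volts'] * chosen['max_amps']:.0f} W available)")

negotiate(watts_needed=45)   # -> 15 V profile
negotiate(watts_needed=87)   # -> 20 V profile
```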
Adapting from USB-A to B is not adapting anything other than the physical connector.
Neither are the DisplayPort cables I’m talking about, where one end is just USB-C but the signal actually transmitted through the USB-C connector and the cable itself is the same HBR/UHBR transmission mode as any other DisplayPort cable (whatever the combination of physical connectors on the two ends: full DisplayPort, mini DisplayPort, or USB-C). It’s not “adapted” because the data signals aren’t converted in any way.
So it’s as much an “adapter” as a DP cable that is mini on one side and full size on the other.
Precisely, you need to use an adapter
No, it’s not an adapter. It’s literally just a cable with two different ends.
Unless you consider an ordinary USB-A to USB-B (or mini B or micro B) to be an “adapter,” too.
Dual 4k120 would already saturate the bandwidth.
What would you use to drive dual 4k/120 displays over a single cable, if not Thunderbolt over USB-C? And what 2017 laptops were capable of doing that?
Even if we’re talking about two different cables over two different ports, that’s still a pretty unusual use case that not a lot of laptops would’ve been capable of in 2017.
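Some rough back-of-the-envelope numbers, assuming 8-bit RGB and ignoring blanking and protocol overhead (real figures run higher):

```python
# Rough data-rate math for dual 4K120, ignoring blanking/protocol overhead.
width, height, refresh_hz, bits_per_pixel = 3840, 2160, 120, 24  # 8-bit RGB

per_display_gbps = width * height * refresh_hz * bits_per_pixel / 1e9
print(f"One 4K120 stream:  ~{per_display_gbps:.1f} Gbps")      # ~23.9 Gbps
print(f"Two 4K120 streams: ~{2 * per_display_gbps:.1f} Gbps")  # ~47.8 Gbps

# For comparison (payload after 8b/10b encoding, 4 lanes):
print("DP 1.2 (HBR2):  17.28 Gbps")
print("DP 1.4 (HBR3):  25.92 Gbps")
print("Thunderbolt 3:  40 Gbps total link")
```

Even before overhead, two uncompressed streams don’t fit in a 40 Gbps Thunderbolt 3 link without compression or chroma subsampling, which is why this wasn’t really a single-cable thing on 2017 hardware.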
Well, Bell Labs isn’t the magical place that it used to be, and that was originally an R&D shop basically enabled by the economics of the AT&T monopoly.
That dude was fired.