Intel’s problems, IMO, have not been an issue of strategy but of engineering. Trying to do 10nm without EUV was a forgivable error, but refusing to change course when the node failed over and over and over to generate acceptable yield was not, and that willful ceding of process leadership has put them in a hole relative to their competition, and arguably lost them a lucrative sole-source relationship with Apple.
If Intel wants to chart a course that lets them meaningfully outcompete AMD (and everyone else fighting for capacity at TSMC) they need to get their process technology back on track. 18A looks good according to rumors, but it only takes one short-sighted bean counter of a CEO to spin off fabs in favor of outsourcing to TSMC, and once that’s out of house it’s gone forever. Intel had an engineer-CEO in Gelsinger; they desperately need another, but my fear is that the board will choose to “go another direction” and pick some Welchian MBA ghoul who’ll progressively gut the enterprise to show quarterly gains.
I want that to be true, but judging from their financials, their big problem seems to be just massive costs. They have more revenue than AMD despite their technical challenges, but they spend way, way more to get it.
Part of it is carrying their own fabs without having many outside fab customers; part of it is a whole bunch of Intel projects you’ve never heard of that will never amount to anything but still cost lots of money. Part of it is essentially bribing vendors to favor Intel products even where AMD makes more sense for a lot of those products.
Better engineering would certainly help, but they are just bleeding money all over the place largely owing to bad business bets.
Trying to do 10nm without EUV was a forgivable error
How so? Literally no one uses EUV for 10nm, and this wasn’t the problem. Isn’t SMIC even pushing DUV to produce 5nm?
My limited understanding is that they were too ambitious, e.g. with using cobalt interconnects, and at the same time they tied their chip designs to specific nodes, meaning that when the process side slipped they couldn’t just take a design and use it on a different node without a lot of effort.
Also, I think they were always going to lose Apple at some point. With better products they might have delayed it further, but Apple fundamentally has an interest in vertical integration and control, and they were already designing processors for their phones and tablets.
Keep in mind that when 10nm was in planning, EUV light sources looked very exotic relative to the then-current tech, and even though we can see in hindsight that the tech works, it is still expensive to operate – TSMC’s wafer costs increased 2x-3x for EUV nodes. If I were running Intel and my engineers told me they thought they could extend the runway for DUV lithography for a node or two without sacrificing performance or yields, I’d take that bet in a heartbeat. Continuing to commit resources to 10nm DUV for years after it didn’t pan out and competitors moved on to smaller nodes just reeks of sunk-cost fallacy, though.
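To make the economics of that bet concrete, here’s a rough back-of-envelope sketch in Python. Every number here is made up except the 2x-3x EUV wafer-cost multiple mentioned above:

    # Effective cost of one sellable die: wafer cost spread over good dies.
    def cost_per_good_die(wafer_cost, dies_per_wafer, yield_rate):
        return wafer_cost / (dies_per_wafer * yield_rate)

    duv_wafer = 6000               # hypothetical DUV wafer cost, USD
    euv_wafer = duv_wafer * 2.5    # mid-point of the 2x-3x EUV premium

    # If DUV multi-patterning yields hold, staying on DUV is the cheap bet:
    print(cost_per_good_die(duv_wafer, 600, 0.90))  # ~$11 per good die
    # But if yields crater (as they did on Intel's 10nm), even a far more
    # expensive EUV wafer wins on cost per good die:
    print(cost_per_good_die(duv_wafer, 600, 0.30))  # ~$33 per good die
    print(cost_per_good_die(euv_wafer, 600, 0.80))  # ~$31 per good die

On those (invented) numbers the DUV bet looks great ex ante and terrible ex post, which is exactly the shape of the sunk-cost trap described above.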
had the issue that they tied their chip designs to specific nodes.
In fairness to Intel, every modern semi design house has that same issue: a chip is designed and laid out for a specific node, so this isn’t really a failing so much as a how-it-works.
Of course, Intel was taking a very, very big risk by designing for a process that basically didn’t exist, assuming that, hey, they’d have it done by the time the design work was complete and they were ready to RTM.
couldnt just take the design and use it on a different node without a lot of effort
Which is what they had to do once they failed to ship newer nodes on schedule with the new CPU designs, and we’ve seen how that ultimately cost them a whole hell of a lot, if not their entire business.
In fairness to Intel, every modern semi design house has that same issue: a chip is designed and laid out for a specific node, so this isn’t really a failing so much as a how-it-works.
I thought I read somewhere that either their design was particularly tailored to a specific node, or that afterwards they made it a higher priority to be less bound to one. But I can’t find a source for it, so I might be mistaken.
Trying to do 10nm without EUV was a forgivable error, but refusing to change course when the node failed over and over
You say this, but 14nm was also awful for them at the start, and they eventually cracked it. I don’t think it’s unreasonable that they thought they’d work out the kinks of 10nm (which they by and large did, actually). Besides, it’s not like they had much other choice: they didn’t have any better-than-14nm fabs ready to go.