This is what a lack of competition looks like.
However.... Twice the price of 4nm? The gains are fairly marginal from what I gather. I don't think many will bother.
It's both a lack of competition and the end of Moore's law. We've effectively reached the limit of silicon gate sizes, and the tooling complexity required to keep shrinking process nodes and increasing transistor density is growing exponentially, so semiconductors no longer get cheaper... and that's starting to push these cutting-edge nodes outside of economic viability for consumer products. I'm sure TSMC is taking a very healthy profit margin, but the sheer magic they have to work to make 2nm work at all is beginning to be too much.
I was under the impression that anything under like 10nm was just marketing and doesn't actually refer to transistor density in any meaningful way?
It is marketing, but it does have a meaningful connection to the litho features; the connection just isn't absolute. For example, Samsung's 5nm is noticeably more power hungry than TSMC's 5nm.
The number has some connection to transistor density, in the sense that a lower number means generally higher density. However there is not any physical feature on the chip that is actually 3nm in length.
This has been true since the late 90s probably.
Late 90s was 350nm down to 180nm (known as 0.35um and 0.18um respectively). Things were still pretty honest around then.
2010s is probably where most of the shenanigans started.
"the end of Moore's law"
It's been talked about a lot. Lots of people have predicted it.
It does eventually have to end though. And I think even if this isn't the end, we're close to the end. At the very least, we're close to the point of diminishing returns.
Look at the road to here: we got to the smallest features the wavelength of light could produce (and people said Moore's Law was dead), so we used funky multilayer masks to make things smaller, and Moore lived on. Then we hit the limits of masking and again people said Moore's Law was dead, so ASML created a whole new kind of light with a shorter wavelength (EUV), and Moore lived on.
But there is a very hard limit that we won't work around without a serious rethink of how we build chips: the width of the silicon atom. Today's chips have pathways that are in many cases well under 100 atoms wide. Companies like ASML and TSMC are pulling out all the stops to make things smaller, but we're getting close to the limit of what's possible with the current concept of chip production (using photolithography to etch transistors onto silicon wafers). Not possible as in "can we do it", but possible as in what the laws of physics will let us do.
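To put "under 100 atoms wide" in rough numbers, here is a back-of-the-envelope sketch; the ~0.2 nm silicon atomic diameter is an approximation used only for scale:

```python
# Back-of-the-envelope: how many silicon atoms span a given feature size?
# The ~0.2 nm atomic diameter is an approximation, used only for scale.
SILICON_ATOM_NM = 0.2

def atoms_across(feature_nm: float) -> float:
    """Approximate number of silicon atoms spanning a feature."""
    return feature_nm / SILICON_ATOM_NM

# A ~20 nm fin or wire is on the order of 100 atoms wide:
for feature in (20, 10, 5):
    print(f"{feature} nm is roughly {atoms_across(feature):.0f} atoms wide")
```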
That's going to be an interesting change for the industry: it will mean slower growth in processing power. That won't be a problem for the desktop market, since most people only use a fraction of their CPU's power. It will mean the end of the 'more efficient chip every year' improvement for cell phones and mobile devices, though.
There will of course be customers calling for more, bigger, better, and I think that will be served by more and bigger. Chiplets will become more common, complete with higher TDPs. That'll help squeeze more yield out of an expensive wafer, since the discarded parts will contain fewer mm^2. I wouldn't be surprised to see watercooling become more common in high-performance workstations, and I expect we'll start to see more interest in centralized watercooling in the server market. The most efficient setup I've seen so far basically hangs server mainboards on hooks and dunks them in a pool of non-conductive liquid. That might even lead to a rethink of the typical vertical rack setup to something horizontal.
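The yield argument for chiplets can be sketched with the classic Poisson die-yield model, Y = exp(-D * A); the defect density below is invented purely for illustration, not a real fab figure:

```python
import math

def poisson_yield(die_area_mm2: float, defects_per_mm2: float) -> float:
    """Classic Poisson die-yield model: Y = exp(-D * A)."""
    return math.exp(-defects_per_mm2 * die_area_mm2)

# Illustrative numbers only; the defect density is made up.
D = 0.002  # defects per mm^2
big_die = poisson_yield(600, D)  # one monolithic 600 mm^2 die
chiplet = poisson_yield(150, D)  # one 150 mm^2 chiplet

# A defect kills a whole die, so a small chiplet wastes far less
# wafer area than one big die when a defect lands.
print(f"600 mm^2 monolithic die yield: {big_die:.0%}")  # ~30%
print(f"150 mm^2 chiplet yield: {chiplet:.0%}")         # ~74%
```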
It's gonna be an interesting next few years...
I mean, technically Moore's law has been dead for 15 years. The main reason we went to multi-core was that we couldn't keep up otherwise.
The reason we went multicore was that frequencies weren't scaling, but the number of transistors was. We've been around a 200-300 ps clock cycle for a long time now.
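For reference, converting those cycle times to clock frequencies is simple arithmetic; a quick sketch:

```python
def period_ps_to_ghz(period_ps: float) -> float:
    """Convert a clock period in picoseconds to a frequency in GHz."""
    return 1000.0 / period_ps  # a 1 GHz cycle lasts 1000 ps

# A 200-300 ps cycle is the roughly 3.3-5 GHz range CPUs have sat in:
for ps in (300, 250, 200):
    print(f"{ps} ps cycle -> {period_ps_to_ghz(ps):.2f} GHz")
```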
And now chiplet systems and 3dvcache
If we're talking about what Moore originally formulated, the law isn't just about transistors. He actually claimed that the cost per integrated component halves every x months. The exact value of x was tweaked over the years, but we settled on 18 months.
If we were just talking about transistor count, the industry has kept up. When we bring price into the mix, then we're about an order of magnitude behind where we "should" be.
When he wrote it, the first integrated circuit had only been invented about 6 years prior. He was working from only 6 years of data and figured the price per integrated component would continue to drop for another decade. It's remarkable that it lasted as long as it did, and I wish we could find a way to be happy with that. We've done amazing things with the ICs we have, and probably haven't found everything we can do with them. If gate sizes hit a limit, so what? We'll still think of new ways to apply the technology.
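To see how aggressive the cost-halving claim becomes once it compounds, here's a quick sketch (the 60-year span is an illustrative assumption, roughly Moore's 1965 paper to the mid-2020s):

```python
def expected_cost_factor(years: float, halving_months: float = 18.0) -> float:
    """Factor by which cost per component drops if it halves
    every `halving_months` months."""
    return 2 ** (years * 12.0 / halving_months)

# About 60 years of halving every 18 months is 40 halvings:
print(f"Predicted cost drop over 60 years: {expected_cost_factor(60):.2e}x")
# Being "an order of magnitude behind" means actual costs sit ~10x
# above this compounding curve, after a roughly trillion-fold predicted drop.
```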
I'm of the opinion that this is why liquid cooling is so important to next-gen hardware. I think they're going to start spreading out the chips more and sandwiching them, like with the dh200s Nvidia is working on.
Liquid cooling has become more necessary because processors and GPUs have become outrageous power hogs. Desktops needing 1,000-watt PSUs is just absurd.
That's not really true, except at the ultra high end. My 4070 barely draws more than my old 1070. The 4080 draws the same as a 3080 with double the performance.
I would argue water cooling is far less needed today. What has changed is Nvidia selling chips that would have been considered extreme aftermarket overclocking 10 years ago.
Absolutely. 3D stacking is becoming viable too, as AMD has proven with their X3D chips with massive gobs of L3 cache stacked on top of the logic dies. Vertical stacking and sheer die size is going to make total power density only continue to go up.
It's not even entirely a tooling issue; the gates are now getting so small that interference from quantum effects is becoming a genuine problem.
Yup, that's basically what I mean. Free transistor density increases via node shrink to improve processor performance are long gone, and the cost to get usable yield out of the smaller nodes is now increasing exponentially due to the limits of physics
No, there's still competition. Samsung and Intel are trying, but are just significantly behind. So leading the competition by this wide of a margin means that you can charge more, and customers decide whether they want to pay way more money for a better product now, whether they're going to wait for the price to drop, or whether they'll stick with an older, cheaper node.
And a lot of that will depend on the degree to which their customers can pass on increased costs to their own customers. During this current AI bubble, maybe some of those can. Will those manufacturing desktop CPUs or mobile SoCs be as willing to spend? Maybe not as much.
Or, if the AI hype machine crashes, so will the hardware demand, at which point TSMC might see reduced demand for their latest and greatest node.
Dangerous game, considering Intel should be coming out with their 18A node pretty soon, and according to rumors it will supposedly be competitive with TSMC's 3nm or 2nm. If they're competitive in performance they only need to compete on price, and with TSMC raising their prices this much, it would be a good way for Intel to take some of that market share.
You have practical, working TSMC chips plus next-gen R&D versus theoretical chips from Intel, a company that has not fared well over 30 years of trying to catch up with TSMC.
They're not worried yet.
And with the issues Intel had with their processors...
It does sound like most of that was not actually manufacturing, but design.
If you're referring to the 13&14th Gen chips then yes, Intel is saying it's on the software side.
But if you're talking about the 10th Gen chips that took forever to get out of the gate due to issues with sub-14nm lithography, then no, it's a hardware issue. Intel has had issues in recent years with actual die shrinks.
Regardless, it feels like what we saw with Boeing: a company culture that prioritized marketing and time to market over everything else, consequences be damned.
Move fast and break stuff is probably not the best strategy if you are building airplanes or processors or other PhD level stuff... Or maybe it's just never a good strategy.
Yeah, move fast and break things is a great way to maximize short-term profits at the expense of the long term. But fuck it, I got mine in the short term, so it works.
Intel blew their R&D spending on share buybacks... Now we're giving them money🤡
That should never happen without equity. Controlling equity for the government.
Preach, my man... But I got destroyed by the reddit normie brigade for such claims before.
They said that would make it communism... But giving rich clowns money is somehow capitalism?
Their brains get broken by this 🐸
I think it's just very capitalistic.
Look, you need money and/or protection, or else you go bankrupt. We can provide these, but there is a cost. And since you cannot refuse, we get to dictate terms... Our terms are "a controlling interest".
Alternatively, we let you go bankrupt and then buy the bankrupt organisation for even less.
Yes 🐸
"If you're referring to the 13&14th Gen chips then yes, Intel is saying it's on the software side."
Yes, I was, but there was also some initial manufacturing issue with oxidation. That wasn't the bulk of the issues that they were running into, though.
Intel has only been behind for the last 7 years or so, because they were several years delayed in rolling out their 10nm node. Before 14nm, Intel was always about 3 years ahead of TSMC. Intel got leapfrogged at that stage because it struggled to implement the finFET technology that is necessary for progressing beyond 14nm.
The forward progress of semiconductor manufacturing tech isn't an inevitable march towards improvement. Each generation presents new challenges, and some of them are quite significant.
In the near future, the challenge is in certain three dimensional gate structures more complicated than finFET (known as Gate All Around FETs) and in backside power delivery. TSMC has decided to delay introducing those techniques because of the complexity and challenges while they squeeze out a few more generations, but it remains to be seen whether they'll hit a wall where Samsung and/or Intel leapfrog them again. Or maybe Samsung or Intel hit a wall and fall even further behind. Either way, we're not yet at a stage where we know what things look like beyond 2nm, so there's still active competition for that future market.
Edit: this is a pretty good description of the engineering challenges facing the semiconductor industry next:
Good article, thanks
"Intel has only been behind for the last 7 years or so"
what is your source for this?
At what point was Intel even at par with TSMC in semiconductor/fab quality and production?
I've heard this twice now, but as far as I understand, Intel has never matched the fabrication technology or demand that TSMC has, and has been playing catch-up for three decades.
I'm very willing to read a sourced article offering more historical context.
As for the article you've linked, it's a more technical iteration of the "yeah, but maybe?" articles.
There's zero refutation of TSMC's dominance and zero evidence of a true emergent competitor.
"but it remains to be seen whether they'll hit a wall where Samsung and/or Intel leapfrog them again. Or maybe Samsung or Intel hit a wall and fall even further behind. Either way, we're not yet at a stage where we know what things look like beyond 2nm"
Their point is "hey, we don't know", but: if TSMC's next-gen R&D and production fails, and if another company is able to close the distance to TSMC's current advantage, and if that theoretical company is then able to pull ahead with theoretical technologies, then TSMC might not be in first place in semiconductor manufacturing.
"But what if..." isn't exactly a compelling or revelatory argument.
If a new zero-emissions concrete dropped tomorrow, and if a company secured the funding to produce it commercially, and if they partnered with a next-gen 3D-printing company and a real-estate developer exclusively committed to low-income housing, then they could build a national chain of economically viable housing units.
None of that has happened and there's no evidence of it happening, so it's just a hypothetical series of events.
"what is your source for this?"
Familiarity with the industry, and knowledge that finFET was exactly what caused Intel to stall, what made Global Foundries just give up and quit trying to keep up, and where Samsung fell behind TSMC. TSMC's dominance today all goes through its success at mass-producing finFET and being able to iterate on it while everyone else was struggling to get those fundamentals figured out.
Intel launched its chips using its 22nm process in 2012, its 14nm process in 2014, and its 10nm process in 2019. At each ITRS "nm" node, Intel's performance and density were better than TSMC's at the equivalent node, but worse than TSMC's at the next. Intel's 5-year lag between 14nm and 10nm is when TSMC passed them, launching 10nm and even 7nm before Intel got its 10nm node going. And even though Intel's 14nm was better than TSMC's 14nm, and arguably comparable to TSMC's 10nm, it was left behind by TSMC's 7nm.
You can find articles from around 2018 or so trying to compare Intel's increasingly implausible claims that Intel's 14nm was comparable to TSMC's 10nm or 7nm processes, reflecting that Intel was stuck on 14nm for way too long, trying to figure out how to continue improving while grappling with finFET related technical challenges.
You can also read reviews of AMD versus Intel chips from around the mid-2010s to see that Intel had better fab techniques then, and that AMD had to pioneer innovative packaging techniques, like chiplets, to make up for that gap.
If you're just looking at superficial developments at the mass production stage, you're going to miss out on the things that are in 20+ year pipelines between lab demonstrations, prototypes, low yield test production, etc.
Whoever figures out GAA and backside power is going to have an opportunity to lead for the next 3-4 generations. TSMC hasn't figured it out yet, and there's no real reason to assume that their finFET dominance would translate to the next step.
This sounds like it's confirming my original comment with more specificity: Intel was consistently playing catch-up to TSMC, and the only thing that might happen in the future is that maybe TSMC doesn't progress at the rate they have been and Intel develops a theoretical technology.
lots of maybes and ifs.
Maybes and ifs are not evidence of TSMC's downfall; they're playthings that may or may not happen, with no reasonable data to interpret.
I don't have a horse in this race, but I am allegiant to facts and logical consistency.
juggling what ifs is not very interesting for me.
"Intel was consistently playing catch up to tsmc"
Yes, this has been true since about 2017, because 10nm was about 3 years late, losing its previous 3-year lead.
The future is uncertain, but the past is already set.
"losing its previous 3-year lead."
what three-year lead?
"The future is uncertain, but the past is already set."
Or you think it is.
They aren't going to be competitive in their foundry while they're laying off so many experienced operators. I work at a fab down the street from Intel, and our hiring classes went from 10 every other week to 20-30 now.
They can drop the price the day Intel actually puts a chip on the market... They're capturing maximum profit while they can.
Capitalism baby!
I wonder if it has anything to do with the hurricane.