> In a frontside design, the silicon substrate can be as thick as 750 micrometers. Because silicon conducts heat well, this relatively bulky layer helps control hot spots by spreading heat from the transistors laterally. Adding backside technologies, however, requires thinning the substrate to about 1 mm to provide access to the transistors from the back.
This is a typo here, right? 1mm is thicker, not thinner, than 750 micrometers. I assume 1µm was meant?
I think you're right that 1µm was meant given the orders of magnitude in other sources e.g. 200µm -> 0.3µm in this white paper:
https://www.cadence.com/en_US/home/resources/white-papers/th...
Wafers on some semiconductor processes are 0.3m in diameter. You could not practically handle a 1um thick wafer 0.3m in diameter without shattering it. 0.75mm is a reasonable overall wafer thickness.
Who's gonna pull the trigger on beryllium oxide mounting packages first?
It's the holy grail: thermal conductivity somewhere between aluminum and copper, while being as electrically insulating as a ceramic. You can put the silicon die directly on it.
Problem is that the dust from it is terrifyingly toxic, but in its finished form it's "safe to handle".
> Who's gonna pull the trigger on beryllium oxide mounting packages first?
Nobody, presumably :)
Why mess with BeO when there is AlN, with higher thermal conductivity, no supply limitations and no toxicity?
Edit: I've just checked, practically available AlN substrates still seem to lag behind BeO in terms of thermal conductivity.
https://en.wikipedia.org/wiki/Aluminium_nitride For anyone else who wasn't familiar with the compound.
""" Aluminium nitride (AlN) is a solid nitride of aluminium. It has a high thermal conductivity of up to 321 W/(m·K)[5] and is an electrical insulator. Its wurtzite phase (w-AlN) has a band gap of ~6 eV at room temperature and has a potential application in optoelectronics operating at deep ultraviolet frequencies.
...
Manufacture
AlN is synthesized by the carbothermal reduction of aluminium oxide in the presence of gaseous nitrogen or ammonia or by direct nitridation of aluminium.[22] The use of sintering aids, such as Y2O3 or CaO, and hot pressing is required to produce a dense technical-grade material.[citation needed] Applications
Epitaxially grown thin film crystalline aluminium nitride is used for surface acoustic wave sensors (SAWs) deposited on silicon wafers because of AlN's piezoelectric properties. Recent advancements in material science have permitted the deposition of piezoelectric AlN films on polymeric substrates, thus enabling the development of flexible SAW devices.[23] One application is an RF filter, widely used in mobile phones,[24] which is called a thin-film bulk acoustic resonator (FBAR). This is a MEMS device that uses aluminium nitride sandwiched between two metal layers.[25] """
Speculation: its present use suggests that at commercially viable quantities it might be challenging to use as a thermal interface compound. I've also never previously considered the capacitive properties of packaging components, and of course realize that's required. Use of AlN as a heat conductor is so far outside of my expertise...
Could a materials expert elaborate on how viable / expensive this compound is, for the rest of us?
I'm not much of an expert, but maybe this can be useful: AlN is a somewhat widely used insulating substrate that is chosen where sapphire is insufficient (~40 W/mK), but BeO (~300 W/mK) is too expensive or toxic. The intrinsic conductivity of single-crystal AlN is very high (~320 W/mK), but the material is extremely difficult to grow into large single crystals, so sintered substrates are used instead. This reduces thermal conductivity to 170-230 W/mK depending on grade. Can't comment on pricing though.
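To put those numbers in context, here's a rough 1-D conduction sketch in Python. The 1 mm substrate thickness, 2 cm x 2 cm footprint and 100 W load are made-up example values, and the conductivities are just the ballpark figures above:

    # dT across a substrate, treating it as a simple 1-D slab:
    # R_th = t / (k * A), dT = P * R_th. Geometry and load are assumptions.
    t = 1e-3             # substrate thickness: 1 mm
    A = 4e-4             # footprint: 2 cm x 2 cm = 4 cm^2
    P = 100.0            # heat flowing through the substrate, watts

    materials = {        # rough W/(m*K) values from the comment above
        "sapphire": 40.0,
        "AlN (sintered)": 200.0,   # 170-230 depending on grade
        "BeO": 300.0,
    }

    for name, k in materials.items():
        r_th = t / (k * A)         # thermal resistance, K/W
        print(f"{name:>14}: {r_th*1000:5.1f} mK/W -> dT = {P*r_th:.2f} K at {P:.0f} W")

In this toy geometry sintered AlN gives up only a kelvin or so versus BeO, which presumably is why it's good enough for all but the special cases mentioned below.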
I think diamond is even more thermally conductive than either. A quick google finds a number of companies working on silicon-on-diamond.
Most packages with beryllium oxide have been abandoned long ago, beryllia being replaced with aluminum nitride.
Because aluminum nitride is not as good as beryllia, packages with beryllia have survived for some special applications, like military, aerospace or transistors for high-power radio transmitters.
Those packages are not dangerous, unless someone attempts to grind them, but their high price (caused by the difficult manufacturing techniques required to avoid health risks, and also by the rarity of beryllium) discourages their use in any other domains.
> Problem is that the dust from it is terrifyingly toxic, but in it's finished form it's "safe to handle".
Doesn't that mean it would be problematic for electronics recycling?
I don't think toxicity levels of compounds used in electronics have ever been a stopper for furthering humanity.
I know it is hyperbole. The first things I thought of were cadmium, mercury, lead and CFCs. I was slightly annoyed about the Cd and Hg.
Or getting berylliosis from putting a drill through your electronic device before throwing it out
Won't you have conductivity issues if the oxide layer is damaged?
The article mentions backside (underside) power distribution, capacitors to help regulate voltage (thus allowing tighter tolerances and lower voltage / operating power), voltage regulation under the chip, and finally dual-layer stacking with the above as potential avenues to spread heat dissipation.
I can't help but wonder, where exactly is that heat supposed to go on the underside of the chip? Modern CPUs practically float atop a bed of nails.
a second heatsink mounted on the back of the chip? maybe socket the chip in such a way that the back touches a copper plate attached to some heatpipes? plenty of options
I mean, there's no real reason a chip has to be a wafer.
A toroidal shape would allow more interconnects to be interspersed throughout the design, as well as more heat-transfer points alongside the data-transfer interconnects.
Something like chiplet design where each logical section is a complete core or even an SOC with a robust interconnect to the next and previous section.
If that were feasible, you could build it onto a hollow tube structure so that heat could be piped out from all sides once you sandwich the chip in a wraparound cooler.
I guess the idea is more scifi than anything, though. I doubt anyone other than ARM or RISC-V would ever even consider the idea until some other competitor proves the value.
We could also explore the idea that the Von Neumann architecture isn't the best choice for the future. Having trillions of transistors just waiting their turn to hand off data as fast as possible doesn't seem sane to me.
What's your solution then?
Start with an FPGA: they're optimized for performance, but too optimized, and very hard to program.
Rip out all the special purpose bits that make it non-uniform, and thus hard to route.
Rip out all of the long lines and switching fabric that optimizes for delays, and replace it all with only short lines to the neighboring cells. This greatly reduces switching energy.
Also have the data needed for every compute step already loaded into the cells, eliminating the memory/compute bottleneck.
Then add a latch on every cell, so that you can eliminate race conditions, and the need to worry about timing down to the picosecond.
This results in a uniform grid of Look Up Tables (LUTs) that get clocked in two phases, like the colors of a chessboard. Each cell thus has stable inputs, as they all come from the other phase, which is latched.
I call it BitGrid.
I'd give it a 50/50 chance of working out in the real world. If it does, it'll mean cheap PetaFlops for everyone.
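To make the two-phase clocking concrete, here's a toy sketch in Python. The single output bit per cell, the random LUT contents and the wraparound edges are simplifications for illustration, not part of any real spec:

    # Toy model: a grid of 4-input LUT cells clocked in two phases like a
    # chessboard. Each cell reads only the latched outputs of its four
    # neighbors, which always belong to the other phase, so there are no
    # race conditions and no routing line longer than one cell.
    import random

    W, H = 8, 8
    random.seed(0)
    # Each cell holds a 16-entry truth table: 4 input bits -> 1 output bit.
    luts = [[[random.randint(0, 1) for _ in range(16)] for _ in range(W)]
            for _ in range(H)]
    out = [[0] * W for _ in range(H)]      # latched output of every cell

    def step(phase):
        # Update only the cells whose checkerboard color matches this phase.
        for y in range(H):
            for x in range(W):
                if (x + y) % 2 != phase:
                    continue
                n = [out[(y - 1) % H][x], out[y][(x + 1) % W],
                     out[(y + 1) % H][x], out[y][(x - 1) % W]]
                idx = (n[0] << 3) | (n[1] << 2) | (n[2] << 1) | n[3]
                out[y][x] = luts[y][x][idx]

    for _ in range(4):       # four full clock cycles
        step(0)              # "black" cells compute from latched "white" outputs
        step(1)              # "white" cells compute from latched "black" outputs

Because a cell only ever reads latched outputs from the opposite color, the longest combinational path is one cell, which is the whole point.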
You should be working for Intel!
I tried, more than a decade ago, to get the idea to them, but I didn't know the right insiders.
programming for anything other than the Von Neumann architecture is very hard.
Generally true.
But neural networks are non-Von Neumann, and we 'program' them using backprop. This can also be applied to cellular automata.
One game that can be played is to use isotopically pure Si-28 in place of natural silicon. The thermal conductivity of Si-28 is 10% higher than natural Si at room temperature (but 8x higher at 26 K).
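For a sense of scale at room temperature, a quick Python sketch; the 100 W/cm² heat flux and the ~150 W/(m·K) figure for natural silicon are assumed ballpark values:

    # Temperature drop across a silicon slab: dT = q'' * t / k
    q_flux = 1e6           # assumed heat flux: 100 W/cm^2 = 1e6 W/m^2
    t = 750e-6             # substrate thickness: 750 um
    k_nat = 150.0          # natural Si, roughly, at 300 K
    k_28 = k_nat * 1.10    # isotopically pure Si-28: ~10% higher at 300 K

    for label, k in [("natural Si", k_nat), ("Si-28", k_28)]:
        print(f"{label}: dT across {t*1e6:.0f} um = {q_flux * t / k:.2f} K")
    # At ~26 K the conductivity gap is ~8x, so the isotope effect matters far
    # more for cryogenic hardware than for room-temperature chips.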
How difficult is the purification process? Is it as difficult as uranium hexafluoride gas?
Yes, gas centrifuge appears to be a leading method.
'The purification starts with “simple” isotopic purification of silicon. The major breakthrough was converting this Si to silane (SiH4), which is then further refined to remove other impurities. The ultra-pure silane can then be fed into a standard epitaxy machine for deposition onto a 300-mm wafer.'
https://www.eejournal.com/article/silicon-purification-for-q...
Doesn’t silane like catching fire when it sees an oxygen molecule? The other day I heard about it being used as rocket fuel for lunar ISRU applications.
A rocket and a sandblaster at the same time.
This is no worse than before. All electronic grade silicon is already produced starting from silane or trichlorosilane, and both are about equally hazardous to handle. See this overview of producing purified silicon:
"Chemistry of the Main Group Elements - 7.10: Semiconductor Grade Silicon"
https://chem.libretexts.org/Bookshelves/Inorganic_Chemistry/...
Thanks. I completely forgot how evil semiconductor manufacturing is.
How much does it cost to manufacture? Are there any other benefits from using isotopically pure Si-28? Are there any other isotopes used in common thermally conductive materials that are more conductive?
The point of improving the thermal conductivity of silicon is that silicon is what chips are made of instead of, say, diamond.
Of course cost would have to be acceptable.
I was thinking more about isotopes of copper than carbon but I can't find data about thermal conductivity of isotopically enriched copper.
I don't think there would be much difference because much of the conductivity of copper is from the conduction electrons, not phonons. Isotopic purification increases thermal conductivity in silicon because it decreases phonon scattering.
Isotopically pure diamond, now there's something to look at.
https://en.wikipedia.org/wiki/Isotopically_pure_diamond
"The 12C isotopically pure, (or in practice 15-fold enrichment of isotopic number, 12 over 13 for carbon) diamond gives a 50% higher thermal conductivity than the already high value of 900-2000 W/(m·K) for a normal diamond, which contains the natural isotopic mixture of 98.9% 12C and 1.1% 13C. This is useful for heat sinks for the semiconductor industry."
I understand isotopically pure Si-28 may be preferred for quantum computing devices. The Si-28 has no spin or magnetic moment, reducing the rate of decoherence of certain implementations of qubits.
https://spectrum.ieee.org/silicon-quantum-computing-purified...
With AI, both GPUs and CPUs are pushed to the absolute limit, and we will be putting 750 W to 1000 W per unit with liquid cooling in datacenters within the next 5-8 years.
I wonder if we can actually use that heat for something useful.
It's going to be at too low a temperature for power production, but district heating should be viable!
The mainstream data center GPUs are already at 700 W and Blackwell sits at ~1 kW.
We are looking at 600kW per rack, and liquid cooling is already deployed in many places.
So, one power plant per aisle of a data center?
Well...
https://www.powermag.com/the-smr-gamble-betting-on-nuclear-t...
Attempts to use the waste heat for anything in a data center are likely very counterproductive to actually cooling the chips.
Pentium 4, GeForce FX 5800, PS3, Xbox 360, Nintendo Wii, MacBook 20??-2019: "First time?"
This checks out. If y'all haven't specced a modern PC: coolers for GPU and CPU are huge, watercooling is now officially recommended for new CPUs, and cases are ventilated on all sides. Disk bays are moved out of the main chamber to improve airflow. Fans everywhere. Front-panel surfaces are completely covered in fans.
> watercooling is now officially recommended for new CPUs
First I'm hearing of this. Last I checked, air coolers had basically reached parity with any lower-end water cooled setup.
I built a PC last year and saw a bunch of the CPUs were recommending water cooling. There were a few high-end air coolers that were compatible. I went with an AIO water cooler. It was cheap and easy. It should give as good or better temperature control than air coolers that are 5x more expensive.
My guess is manufacturers don't want to tell people they should air cool if it requires listing specific models. It's easy to just say they recommend water cooling since basically all water coolers will provide adequate performance.
I hope you're correct. I'm in the middle of building a replacement PC (it's been like 10 years) and went with a ~80 USD air cooler that's got two fans and a bunch of heat pipes. The case is also a consideration, I selected one that can hold a BUNCH of fans and intend to have them all always push at least a little air through, more as it gets warmer.
In my case two fans on the CPU, pointing towards the rear exhaust fan to suck, and 6 fans 120mm or larger pushing air through otherwise, will _hopefully_ remain sufficient.
For most workloads it's probably fine. If you're doing any CPU heavy work it might thermally limit you if the cooler can't keep up. But that should rarely be an issue for most people.
The Noctua CPU coolers are as good as liquid cooling and quieter, because of the pump noise on the liquid side.
That said, I think liquid cooling has reached critical mass. AIOs are commonplace.
I think it would be (uh) cool to have an extra huge external reservoir and fan (think motorcycle or car radiator, plus maybe a tank) that could be nearly silent and cool the CPU and GPU.
IMO, Noctua coolers are overpriced these days. You can get nearly identical thermal performance to their $150 NH-D15 G2 from a $40 Thermalright Peerless Assassin 120 or 140.
I am sure that they are overpriced, but the reason is that they can get away with it.
Even though I think it is very likely that a $40 cooler like the one you mention would work well enough, when I build a new computer with a top-model AMD Ryzen CPU, which dissipates up to 200 W in steady-state conditions, I will certainly buy a Noctua cooler for it. A computer with an Intel Arrow Lake S CPU would be even more demanding, as those can dissipate much more than 250 W in steady-state conditions.
The reason is that by now I have experience with many Noctua coolers that have been working for 10 years or more, some of them 24/7, with perfect reliability, low noise and low temperatures.
I am not willing to take the risk of experimenting with a replacement, so for my peace of mind I prefer the proven solutions, both for coolers and for power supply units (for the latter I use Seasonic).
Noctua knows that many customers think like this, so they charge accordingly.
I think the reason they mentioned both those specific air coolers is because Noctua made a state-of-the-art cooler for the time, then rested on their laurels, and are now outgunned by a very specific brand (Thermalright) and their stand-out product, the Peerless Assassin. Not just any $40 cooler these days will do the trick.
There are lots of cheap buzzy coolers. Many CPUs came with one.
But the Noctua fans are reliable and really quiet.
Your ears are worth it.
This was my understanding as well, which is pretty much unshaken from the replies I've received.
I was surprised too, but that's from the AMD label!
You are behind the times. The latest and fastest PowerMac that Apple released so far* is water-cooled.
*Technically the truth
Wait, is it? The first G5 one was, but I thought they scrapped that towards the end.
Will there be an official “cleared for frying eggs” badge? We'll have to do something with all that heat.
mfw you forget AMD Thunderbird
Sometimes the solution is worse than the problem. My favorite example is the TRS-80 Model II and its descendants, with the combination of the fan and disk drives so loud that users experience physical discomfort. <https://archive.org/details/80-microcomputing-magazine-1983-...>
Modern computers should come with built in piezo, haptic and rumble motors that can emulate HDD, FDD and CD-ROM sounds whenever you start a game or app. Change my mind.
- Inner voice: "You don't miss the old PC noises, you just miss those times".
- Shut up!
<https://tryklack.com/>
But this only simulates keyboard and mouse click sounds. In any case, you wrote "whenever you start a game or app" (my emphasis). The Model II's fan and drive noises are 100% present from start to finish, with the combination enough to drive users insane (or, at least, not want to use the $5-10,000 computer).
I kind of miss the bass drop the Model 16's "Thinline" drives did when they were accessed. That was a cool sound.
The Model II was a loud beast. Its floppy drive drew directly from mains power, not a DC rail off the power supply, and spun all the time. The heads engaged via a solenoid that was so powerful it made a loud "thunk" sound and actually changed the size of the display on the built-in CRT.
The Model 12 and 16 improved on the design, sporting Tandon "Thinline" 8" drives that ran on DC and spun down when not in use, leaving fan noise that was quite tolerable.
The hardest-to-cool CPU I've ever owned was an AMD Athlon 3200+. I remember moving to a P4, and life got a lot easier. It still ran very hot, but it could do so without frequent crashing. This was before the giant coolers that we have today were commonplace. I was far too afraid of water cooling back then.
The most power hungry P4 didn’t top 115W.
The 90 nm Prescott Pentium 4 was much more power hungry than the previous 130 nm Northwood Pentium 4.
Even worse than the TDP was the fact that the 90 nm Pentium 4 had huge leakage current, so its idle power consumption was about half of the maximum power consumption, e.g. in the range 50 to 60 W for the CPU alone.
Moreover, at that time (2004) the cooler makers were not prepared for such a jump in the idle power consumption and maximum power consumption, so the only coolers available for Pentium 4 were extremely noisy when used with 90 nm Pentium 4 CPUs.
I remember that at the company where I worked, we had a great number of older Pentium 4 CPUs, which were acceptable, and then we got a few upgrades with new Prescott Pentium 4s. The noise, even when the computers were completely idle, was tremendous. We could not stand it, so we returned the computers to the vendor.
The die was much smaller…
Die size: 135mm²
A current AMD CCD is ~70 mm² and can drop around 120 W or so on that area. E.g. the 9700X has one CCD and up to a 142 W PPT; ~20 W goes to the IOD and ~120 W into the CCD.
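Spelled out as areal power density (Python, using the approximate figures in this thread: 115 W over 135 mm² for Prescott, ~120 W into a ~70 mm² CCD):

    prescott_w, prescott_mm2 = 115.0, 135.0   # Pentium 4 Prescott
    ccd_w, ccd_mm2 = 120.0, 70.0              # one Zen CCD, e.g. in a 9700X

    print(f"Prescott: {prescott_w / prescott_mm2:.2f} W/mm^2")   # ~0.85
    print(f"Zen CCD:  {ccd_w / ccd_mm2:.2f} W/mm^2")             # ~1.71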
edit: (1) this account/IP-range is limited to a handful of comments per day so I cannot reply directly, having exhausted my allotment of HN comments for today (2) I do not understand what you take offense at, because I did not "change [my] original argument" - you claimed, a P4 die is much smaller, I gave a counter example, and made the example more specific in response to your comment (by adding the "E.g. ..." bit with an example of a SKU and how the power would approximately split up).
The tdp is for the whole cpu with multiple ccds and iod…
Since Milan the IOD consumes up to 40 W during extended PPT loads (PPT being the right term for the numbers you are talking about, which are more akin to the Turbo behavior of the older P4s, i.e. 130 W TDP on Prescott). It's also important that PPT refers to power delivered to the socket, not directly to the CPU, and shouldn't be confused with TDP. Editing comments to change your original argument is cowardly behavior, so I'm ending this discussion.
You added wrong numbers and shifted the metric from tdp to ppt. There seems to be a reason for your restrictions. Goodbye.
Which was huge in the era when CPUs didn't underclock themselves at idle to save power and coolers looked like this: https://www.newegg.com/cooler-master-air-cooler-series-a73/p...
Some coolers today still look like that but they're on chips drawing 35W or so while idling at <2W.
I mean, if what you want is P4-class performance, the modern semiconductor industry is excellent at delivering that with low TDP. An Apple A18 Pro [1] gives you over 7x the single thread performance of a Pentium 4 Extreme Edition [2] at 8 W TDP, compared to 115 W for the latter.
[1]: https://www.cpubenchmark.net/cpu.php?cpu=Apple+A18+Pro&id=62...
[2]: https://www.cpubenchmark.net/cpu.php?cpu=Intel+Pentium+4+3.7...
Is there a reason we can’t put heat pipes directly into chips? Or underneath
Speaking of dissipation, how is the progress in reversible computing going?
Isn’t heat just wasted energy?
I guess future designs will have a cooling ring integrated into the chiplets: the dark silicon starts up, finds the instructions and cache in the memory shared with the previous hot silicon, computes till heat death, stores all it did in the successor chiplet, and it's all on a ring-like structure that is always boiling in some cooling liquid it's directly immersed in, going forever round and round. It reminds me of the Iain M. Banks setup of the fire planet Echronedal in The Player of Games.
Seems my M1 Macbook Air generates almost no heat.
Heat and hotness don't mean the same thing in this context [1]. It doesn't help that the article seems to use heat (energy) and heat (temperature) interchangeably, but the principles of backside power delivery exacerbating hot spots and increasing peak temperatures will apply regardless of wattage or efficiency. A hypothetical (poorly designed) future M-series chip with bspdn could actually emit less heat specifically because the hotspots get hotter faster and cause throttling sooner.
[1] https://en.wikipedia.org/wiki/Heat#Heat_vs._temperature
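A minimal lumped-RC sketch of that point in Python; the thermal resistances, capacitance, power and throttle limit are all invented illustrative values, not measurements of any real chip:

    # One hotspot, lumped model: C * dT/dt = P - (T - T_amb) / R_th.
    # Less lateral spreading in a thinned substrate shows up here as a higher
    # effective R_th for the hotspot, so it crosses the throttle limit sooner.
    def time_to_throttle(p, r_th, c_th=0.05, t_amb=40.0, t_max=100.0, dt=1e-3):
        temp, elapsed = t_amb, 0.0
        while temp < t_max:
            temp += dt * (p - (temp - t_amb) / r_th) / c_th
            elapsed += dt
            if elapsed > 60.0:        # settles below the limit: never throttles
                return None
        return elapsed

    p = 30.0                          # watts into this hotspot (assumed)
    for label, r_th in [("thick substrate (frontside)", 1.5),
                        ("thinned substrate (backside)", 2.5)]:
        t_thr = time_to_throttle(p, r_th)
        msg = "never throttles" if t_thr is None else f"throttles after {t_thr:.2f} s"
        print(f"{label}: {msg}")

With the higher resistance the hotspot crosses 100 °C in a fraction of a second, while the lower-resistance case settles around 85 °C and never throttles, even though both dissipate the same 30 W.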
good for a laptop. what would the clocks be on a desktop part that was liquid cooled?
I have an M4 Max MacBook Pro and it generates plenty of heat, especially when gaming, compiling, or transcoding. I think that's still far less heat than it could've generated if it weren't Apple Silicon, though.
I have an M3 Max, and since I don't do a lot of close to the metal work, I can almost always use my fan spinning as a metric for a poorly designed app.
Nice :) I upgraded from an M3 Pro about a month ago. (My $2,000 mistake is not saving up for this from the start. Apple machines can't be upgraded anymore :<)
<looks at the arm macs> You sure?
Are you dismissing a technical article with detailed explanations and arguments about the future of CPUs by simply mentioning some piece of current consumer hardware?
Yes, because I think they’re ecstatic about going the wrong way.
Edit: At a quick search, if you undervolt and set power limits to reduce an Intel CPU’s consumption to 50%, you only lose 20-30% performance.
So the industry is being extremely inefficient in the name of displaying higher numbers in benchmarks.
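Quick arithmetic on what that implies in Python, taking the midpoint of the quoted 20-30% performance loss:

    power_ratio = 0.50    # consumption cut to 50%
    perf_ratio = 0.75     # ~25% performance lost
    print(f"perf/W vs stock: {perf_ratio / power_ratio:.2f}x")   # 1.50x

So roughly 1.5x the performance per watt, at the cost of peak benchmark numbers.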
The Apple Silicon chips are indeed running hotter on every new generation, no?
That has not been my experience, do you have a source?
Just my personal experience, but I've recently upgraded from a MBP with the M1 Max to a new MBP with the M4 Max, and it does get hotter when doing heavy tasks (e.g. video transcoding). It gets to 95-100°C faster, uses more power, and the default fan curve is also more aggressive, something that Apple usually avoids doing.
It's still very efficient and doesn't run hot under normal load (right now my CPU average is 38°C with Firefox and ~15 tabs open, fans not spinning), but it definitely generates more heat than the M1 Max under load. Apple still seems to limit them to ~100°C though.