Aardvark Daily
New Zealand's longest-running online daily news and commentary publication, now in its 25th year. The opinion pieces presented here are not purported to be fact but reasonable effort is made to ensure accuracy.
Content copyright © 1995 - 2019 to Bruce Simpson (aka Aardvark), the logo was kindly created for Aardvark Daily by the folks at aardvark.co.uk
We've been told for quite some time now that the fabrication of CPUs from silicon is reaching its endpoint.
Apparently, on several occasions, we've reached the limits that physics allows when it comes to making things smaller and faster.
Yet, strangely enough, CPUs and GPUs just keep getting smaller, faster and more energy efficient.
How can that be?
And, as if to prove the point, we've seen some pretty impressive new launches in recent months.
Nvidia have blown our socks off with the 3000 series of RTX cards by delivering massive improvements in raw performance whilst at the same time reducing prices and power use.
AMD has thrown the Zen 3 architecture at us which also produces significant IPC increases as well as lower power use and higher clock speeds when compared to previous generations.
Just this week, Apple also rolled out its M1 processor to demonstrate even more MIPS and FLOPS per watt.
How can this all be, when we've apparently reached "the end of the road" so often already?
If we'd seen only tiny increments in performance, power and miniaturisation then I could understand that we were simply squeezing the last drops of goodness out of existing technology -- but some of these improvements are significant.
The Apple M1 has wowed the industry by not only slashing power consumption but also blowing away both Intel and AMD low-power CPUs in terms of performance. What magic is this?
Well, ARM-based CPUs have always been pretty energy-efficient and I guess that, having been focused mainly on the mobile phone arena, ARM has put a lot of work into further improving that efficiency. However, that doesn't fully explain how they've been able to outperform the more traditional x86 architecture.
So have we been fed a line of BS by those claiming that we're hitting the limits of conventional silicon fabrication?
Even at 14nm we were told that things simply could not get much smaller or more dense -- but now there's talk of a 4nm process just down the road.
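It's worth noting just how big a jump that claims to be. As a rough illustration only — modern "nm" node names are marketing labels rather than literal feature sizes, and the function below is a naive model I'm assuming for the sake of argument — treating the node name as a linear dimension suggests the scale of the claimed density gains:

```python
def density_gain(old_nm: float, new_nm: float) -> float:
    """Approximate transistor-density multiplier for a process shrink,
    naively assuming density scales with the inverse square of the
    quoted feature size (real nodes don't scale this cleanly)."""
    return (old_nm / new_nm) ** 2

# On this crude model, moving from 14nm to 4nm would pack
# (14/4)^2 = 12.25x more transistors into the same die area.
print(density_gain(14, 4))  # 12.25
```

Even if the real-world gain is a fraction of that, it's a long way from "squeezing the last drops" out of silicon.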
What's going on?
What about the claims that at these very small sizes the operation of traditional semiconductor elements was going to be upset by quantum effects?
What about the claims that above a certain density of active elements there would be no way to sink the heat out of the die effectively enough to avoid melt-down?
Could it be that silicon still has a lot more to give? Might we be down to the sub-nm level within a decade?
If we can't keep shrinking silicon, what are the alternatives?
Real quantum computers are still simply "proof of concept" devices with a tiny number of qubits that do little except demonstrate that perhaps one day we'll be able to build practical machines based on this technology. That day would seem to be an awfully long way off however.
Optical processors are also just a lab-based curiosity at this stage and there's no indication that they'll actually solve the density/power problems. Indeed, the wavelengths of light suitable for such CPUs are likely to be so long that computers based on this tech would see a dramatic increase in CPU size.
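The wavelength problem can be put in rough numbers. A hedged back-of-envelope sketch — the diffraction-limit rule of thumb (smallest feature ≈ half the wavelength) and the telecom-band wavelength used here are generic textbook figures, not specs from any actual optical-processor design:

```python
def min_feature_nm(wavelength_nm: float) -> float:
    """Approximate smallest resolvable feature under the diffraction
    limit, using the rule of thumb: feature ~ wavelength / 2."""
    return wavelength_nm / 2

near_ir = 1550  # nm, a common telecom-band wavelength (illustrative choice)
print(min_feature_nm(near_ir))  # 775.0
```

A 775nm optical feature is more than a hundred times larger than the single-digit-nanometre features of current silicon processes, which is why an optical CPU of comparable complexity would be expected to be physically much bigger.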
One thing's for sure however. Computers will continue to get faster, more powerful, more energy efficient and cheaper (per MIP). Human ingenuity and our ability to invent and develop new technologies seems to know no bounds.
Thank goodness for that.
What do you think the average CPU will look like in 10 years' time? Will there still be silicon involved? Will Moore's Law have applied for that entire decade? Or is there some breakthrough tech likely to appear that will change the way we build CPUs?
Have your say in the Aardvark Forums.