We’ve grown used to computers getting faster and more powerful every year. If you buy a new computer at the same price point (say, $2,000) every two years, you’ll get roughly double the power each time.
This phenomenon is known as Moore’s Law, even though it isn’t a scientific law at all. It’s simply a trend, first observed by Intel co-founder Gordon Moore back in 1965. His original prediction was that the number of electronic components in an integrated circuit would double every year. In 1975, the pace was reduced to every two years. Ever since then, Moore’s Law has been an accurate barometer of computing power – not to mention a neat way for chip companies like Intel and AMD to plan their production pipeline. But after more than fifty years of driving technology innovation, Moore’s Law is itself about to get disrupted.
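The doubling trend compounds remarkably fast. As a rough illustration (the starting figure and years below are toy assumptions, not actual Intel data), here is what a fixed two-year doubling does to a transistor budget:

```python
# Toy illustration of Moore's Law: counts double every two years.
# The 1990 starting figure of one million transistors is an assumption
# for illustration, not a real product number.
def transistors(start_count, start_year, year, doubling_period=2):
    """Projected transistor count at `year` under a fixed doubling period."""
    doublings = (year - start_year) / doubling_period
    return start_count * 2 ** doublings

for year in (1990, 2000, 2010, 2020):
    print(year, f"{transistors(1_000_000, 1990, year):,.0f}")
```

Thirty years of doubling turns one million into over thirty billion, which is why even a slightly slower doubling period still delivers enormous gains.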
The end is nigh for Moore’s Law mainly because the components of computer chips are pushing the boundaries of how small they can go. Intel’s latest chips are built on a ten-nanometer process – features about forty silicon atoms wide. It’s debatable how much lower they can go, although AMD is already talking about seven nanometers. Roboticist Rodney Brooks predicts that five nanometers, which on the current timeline will be attained by 2020 or 2021, will be the absolute limit. To go below five nanometers would mean “the material starts to be dominated by quantum effects and classical physical properties really start to break down,” wrote Brooks in a recent blog post.
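The “forty atoms” figure is easy to sanity-check. Using roughly 0.235 nanometers as an approximate spacing between silicon atoms (an assumption for this back-of-the-envelope calculation):

```python
# Back-of-the-envelope check of the "about forty atoms wide" claim.
# ~0.235 nm is an approximate silicon atom spacing, assumed for illustration.
feature_nm = 10       # ten-nanometer process feature
si_atom_nm = 0.235    # rough spacing between silicon atoms

atoms_across = round(feature_nm / si_atom_nm)
print(atoms_across)   # on the order of forty atoms
```

At five nanometers that drops to around twenty atoms – few enough that the quantum effects Brooks describes start to dominate.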
So what’s going to happen to the pace of computer improvements once Moore’s Law finally runs its course?
One way to keep progressing is to find new ways to build the chips. IBM, Samsung and others have been experimenting with so-called 3D chips, which add a third dimension. The concept is much like 3D printing, in which a computer builds a three-dimensional product from a digital file – only for computer chips the features are incredibly small, which makes the process that much more expensive.
3D chips are an intriguing idea, because modern chips are two-dimensional and flat. By switching to 3D, chip manufacturers could stack components on top of each other. It’s similar to building a skyscraper instead of a single-story house. According to IBM, “3D chips could allow designers to shrink a supercomputer that currently fills a building to something the size of a shoebox.” But while adding a third dimension sounds like a logical move, the process does have one pretty big drawback: overheating. Stacking components on top of each other inevitably concentrates the heat they give off.
It may sound like sacrilege, but there’s even a call to move beyond silicon. At this year’s South by Southwest conference in Austin, Greg Yeric, an ARM Research Fellow at the University of Texas, suggested that it’s time for transistors to move beyond silicon.
“This whole fifty years, we’ve been using what we call the MOSFET in silicon,” he said. MOSFET stands for metal–oxide–semiconductor field-effect transistor; basically, it’s the most common transistor in digital circuits today. But according to Yeric, there’s an opportunity “to use different materials than silicon – kind of opening up that Periodic Table – and make better transistors.”
Indeed, there’s already been speculation that Intel might move to non-silicon technologies.
Yet another solution to the end of Moore’s Law would be to fundamentally change how computing is done. One of the most talked-about approaches is quantum computing. Traditionally, computer chips have been built around binary code: “bits” of ones and zeros, representing on and off. But quantum computing is different, because it introduces the concept of a “quantum bit,” known as a “qubit.”
Rather than being a certain one or a certain zero, a qubit exists as a mixture of both – a state known as “superposition.”
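Superposition can be sketched with a little plain Python – no real quantum SDK required. In this toy model (the representation and function names are illustrative assumptions), a qubit is a pair of amplitudes whose squares give the probability of measuring a zero or a one:

```python
import math
import random

# Toy model: a qubit is a pair of amplitudes (a, b) with a^2 + b^2 = 1.
# a^2 is the probability of measuring 0; b^2 the probability of measuring 1.
def measure(a, b):
    """Collapse the superposition into an ordinary classical bit."""
    return 0 if random.random() < a * a else 1

# An equal superposition: until measured, the qubit is "both" 0 and 1,
# and each measurement yields 0 or 1 with 50/50 probability.
a = b = 1 / math.sqrt(2)
random.seed(42)
samples = [measure(a, b) for _ in range(10_000)]
print(sum(samples) / len(samples))  # close to 0.5
```

Real qubits involve complex amplitudes and entanglement between qubits, which is where the extra computing power comes from; this sketch only shows the superposition-and-measurement idea.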
There are other quirks to quantum computing which we needn’t get into here (read this comic if you’d like to know more). But in a nutshell, quantum computing is potentially far more powerful than binary computing… if it works. And there’s the rub: quantum computing is currently very expensive, highly unstable, and has so far only been shown to work in a few specialist areas (such as codebreaking). There’s no evidence yet that quantum computing is an answer for general-purpose computers, like your PC or smartphone.
Perhaps the most promising solution to the end of Moore’s Law, at least in the short term, is software optimisation. After all, it doesn’t really matter to us consumers how small the chips are, because they’re already able to fit into our smartphones, smart watches, and any other internet device we might own. Instead, we’re mainly interested in performance – speed, the ability to render the rich graphics of games or virtual reality, energy efficiency, and other factors that directly impact our “user experience.” If chip makers can improve the performance of their chips using software optimisation – improving the runtime, compilers, and so on – then that’s fine by us.
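A classic demonstration of how much software alone can deliver: computing the same result with a better algorithm. The example below (a standard textbook illustration, not anything specific to chip vendors) speeds up a Fibonacci calculation by caching results, with no faster hardware involved:

```python
import time
from functools import lru_cache

# Naive recursion recomputes the same subproblems exponentially many times.
def fib_naive(n):
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

# The same function with memoisation: each value is computed once.
@lru_cache(maxsize=None)
def fib_fast(n):
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

t0 = time.perf_counter(); fib_naive(30); slow = time.perf_counter() - t0
t0 = time.perf_counter(); fib_fast(30);  fast = time.perf_counter() - t0
print(f"naive: {slow:.4f}s  memoised: {fast:.6f}s")
```

The memoised version is orders of magnitude faster on identical hardware – the kind of headroom software optimisation can unlock even after transistors stop shrinking.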
Regardless of how chip companies get around the atomic limits of silicon chips, it’s a fair bet that performance improvements to our computing devices are far from over. Rodney Brooks thinks we’ll see “a golden new era of computer architecture.” Given the recent history of technological progress, it’s hard to argue against that.
Whether it’s 3D chips, quantum computing, non-silicon transistors, software optimisation, or something entirely new that nobody has yet predicted, computing power will likely continue to double (or near to it) for the foreseeable future.