I caught up with Bonner & Partners’ tech guru Jeff Brown when he was in the US last week. It was a quick visit as he literally made his way around the world, but I got a chance to ask him one big question that’s been on my mind.
— James Wells, Editorial Manager
James: I’ve been wondering about the growth of technology because I’ve heard that there are physical limitations to just how far it can advance. Some people are even saying that we’re slowly getting to that point.
Do you think that’s true? Is there a ceiling, or are we just going to continue to find ways around it?
Jeff: So, it’s a really important question because it’s widely misunderstood.
The issue that's been written about is the end of Moore's Law (the observation that computer processing power doubles about every 18 to 24 months). The problem is that, as you continue to shrink the space between transistors on an integrated circuit, you run into very serious problems with the behaviour of electrons. They're not so cooperative when they get into really tight spaces, and things get very hot.
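As a rough, back-of-the-envelope sketch of what that doubling rate implies (the figures and function names below are illustrative assumptions, not numbers from the interview):

```python
# Rough illustration of Moore's Law: processing power doubling
# every 18 to 24 months. Illustrative only, not industry data.

def doublings(years, months_per_doubling):
    """How many times capacity doubles over the given period."""
    return (years * 12) / months_per_doubling

def growth_factor(years, months_per_doubling):
    """Overall multiplier on processing power after `years`."""
    return 2 ** doublings(years, months_per_doubling)

# A decade of progress at the slow and fast ends of the range:
print(growth_factor(10, 24))  # 32x at a 24-month doubling pace
print(growth_factor(10, 18))  # roughly 100x at an 18-month pace
```

Even at the slower 24-month pace, a decade of doublings multiplies processing power 32 times over, which is why the end of that curve matters so much.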
And so, as transistor density increases, the heat created becomes one of the major issues with packing more and more transistors into the same amount of space. Basically, when you get down to a distance of an atom between transistors, you really can’t go any further — you reach the physical limits.
And so, in the semiconductor industry, we talk about process nodes. So, for example, they’re typically referred to in nanometres. Older tech from several years ago was 45 or 90 nanometres.
We’re now down to about 16 nanometres. Some of the leading companies are working on 10-nanometre products. And, generally speaking, we think the theoretical limit is probably at about 7 nanometres. Go smaller than that and you start to hit the wall, and my calculations show that we’ll probably be there by 2021 or 2022.
So there is actually a deadline. We know when it is; we know how long it takes to get there. It’s very real.
However, what is often ignored is that there are other forms of computing that will not only make the end of Moore’s Law irrelevant in terms of silicon-based microprocessors, but will accelerate the growth — the exponential growth — of computer processing power.
And so, a very simple example is quantum computing. In fact, just late last year, Google was able to demonstrate a ‘quantum annealer’, a form of quantum computing. Google, using technology from a company called D-Wave, demonstrated a quantum computer that ran 100,000,000 times faster than the fastest conventional microprocessors today.
So not 100 times…not 1,000 times…not 100,000 times… but 100,000,000 times faster.
Now, the caveat is that the quantum computer was used for a very specific problem, not for a general computing application. The quantum computers we have today are for very specific applications. You have to write the algorithms in a very special way, and it has to be a specific kind of problem that you use the quantum computer to solve.
It’s not great for running Microsoft Excel. It doesn’t function like a standard microprocessor. But we’re still in the early stages of quantum computing.
The point is that there will be a few different new forms of computing that will actually start to accelerate this exponential growth in computing power.
One of them will be quantum computing.
There are also some lines of research built around synaptic computing. In other words, modelling computers in a way that mimics or mirrors the structure of the human brain. IBM is doing some really interesting work in that field.
GPUs (graphics processing units) are also becoming much more relevant. Think of them like a graphics version of a microprocessor.
Historically, they were used for gaming platforms. But, today, they’re being used for things like self-driving cars and artificial intelligence and very complex problem sets that involve a lot of different variables. Things like climatology or cosmology, the origins of the universe or anything that requires a lot of visual or image processing.
My absolute favourite company in that space is NVIDIA, which I presented on at the Bonner Family Office event in Nicaragua last month.
So, I hope that answered your question.
James: Yes, it did. But it brings up another.
It seems from what you said that they haven’t built a ‘real’ quantum computer yet, just because those algorithms have to be so specific. Is that correct?
Jeff: The form of quantum computing that I mentioned earlier, the quantum annealer, is definitely a real quantum computer. It is the first stage of quantum computing.
The quantum annealers are unique in the sense that they can only be used for a limited, very specific set of problems. As we move toward the next stage of quantum computing, called ‘analogue quantum’ (which should arrive within five years), there will be a much broader set of applications that we can use quantum computers for.
The most likely applications for an analogue quantum computer will be scientific. For example: quantum dynamics, quantum chemistry, and materials science.
And, within 10 years, we’ll have full-blown ‘universal quantum’ computers that can be used for anything, and we will be living in a very, very different world at that time.
James: Like how different? What’s it going to be like?
Jeff: A few years back, Google purchased DeepMind, a London-based artificial intelligence (AI) company.
Now, if you remember, back in 1997, IBM was able to beat the world chess champion with Deep Blue. But chess is a relatively simple game by comparison. Its branching factor is small enough that a computer can win by brute force, searching millions of positions per second many moves ahead. That is well within our capacity today; in fact, a modern personal computer already has that much processing power.
So Deep Blue could effectively search its way through the game. But what happened just a few weeks ago in South Korea is light years beyond that.
DeepMind used an algorithm it developed, called AlphaGo, to play the ancient Chinese game Go against the world champion in South Korea.
DeepMind’s AlphaGo AI had already beaten the European champion last year. He was ranked 275th worldwide. The score was 5 to 0.
Now, what’s really interesting about that is that AI experts did not think this was possible. They estimated that an AI could not beat a human Go master for another 10 years.
These are the experts. These are the people in the industry who focus on nothing but AI. And even they did not think this could happen.
The main reason is the game’s complexity. There are 361 possible opening moves in Go, compared to 20 in chess, and the branching continues from there. In fact, there are more possible board positions in Go than there are atoms in the observable universe. It’s such a large number that we don’t have the computer processing power to enumerate all the possible outcomes.
That means the only way an algorithm can actually play the game is by using AI. The AI has to be able to apply pattern recognition, and optimise that pattern recognition in a way that might lead to a winning outcome.
Essentially, it has to think.
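To get a feel for why Go overwhelms brute force where chess does not, here is a toy comparison of how the two game trees grow. It is a simplified sketch, not an exact count: it assumes Go's branching shrinks by one point per stone placed and holds chess at a flat 20 legal moves per turn.

```python
from math import prod

# Toy comparison of game-tree growth in Go versus chess.
# Both branching factors are deliberate simplifications.

def go_sequences(moves):
    """Opening sequences in Go: 361 points, one fewer per stone."""
    return prod(361 - i for i in range(moves))

def chess_sequences(moves, branching=20):
    """Opening sequences in chess at a flat branching factor."""
    return branching ** moves

# After just six moves in this toy model:
print(f"chess: {chess_sequences(6):,}")  # 64,000,000
print(f"go:    {go_sequences(6):,}")     # already in the quadrillions
```

After only six moves, the chess tree in this sketch holds 64 million sequences while the Go tree is already in the quadrillions, and the gap widens with every further move.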
So, in the match in South Korea, the AlphaGo AI won the first three games. The world champion was able to take the fourth game, apparently thanks to a highly unorthodox strategy designed to throw off the AI. AlphaGo won the final game, bringing the final score to 4 to 1. It wasn’t even a competition.
I have been joking that the AlphaGo AI ‘felt’ bad after winning the first three games, so it gave the fourth game to the human as a gimme and then finished him off in the final match.
Now, in the world of technology, everybody is completely shocked that the AI basically destroyed the world champion in the most complex game on Earth…more than a decade sooner than any expert thought possible.
James: I heard about this. The world champion — Lee Sedol, I think — was shocked, right? He thought he was going to win.
Jeff: He thought he was going to win.
And so, the reason I’m using this example is that, within 10 years, we will have the computer processing power to be able to know and conceptualise hundreds of billions of possible outcomes.
So, if you apply it to really complex problems, like nuclear fusion, climatology, space exploration, or even financial markets, it pretty much changes everything.
Think about the health applications…
Think about analysing the entire human genome…
Think about taking a dataset of 200 million people and all of their DNA sequences and then developing targeted and personalised drugs based on all of that information…
Those are the types of things that you can do — you know, aside from really simple things like self-driving cars.
For Markets and Money, Australia