Stephen Hawking was one of the world’s most brilliant scientists.
He was well known for his theories on black holes…and also for suffering from a rare disease.
At age 21, he was diagnosed with ALS, also known as Lou Gehrig’s disease. Doctors told him he had only a couple of years left to live.
As you probably know, he defied the doctors’ estimates and this week, at age 76, he passed away.
Unable to speak due to the disease, he used technology to communicate.
As Hawking himself explained in a post, he did this with a small sensor mounted on his glasses. The sensor detected his cheek movements; by twitching his cheek, he could select characters and control the cursor on his computer.
To make his speech faster, he relied on artificial intelligence (AI). As Hawking explained, his computer software included, ‘A word prediction algorithm provided by SwiftKey, trained on my books and lectures, so I usually only have to type the first couple of characters before I can select the whole word.’
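The idea behind that word-prediction step can be sketched in a few lines. SwiftKey’s actual model is proprietary and far more sophisticated (it uses context, not just prefixes), but a minimal, illustrative version, trained on a toy corpus rather than Hawking’s books and lectures, might look like this:

```python
from collections import Counter

def build_model(corpus):
    """Count word frequencies in a training corpus (toy stand-in for real training data)."""
    return Counter(corpus.lower().split())

def predict(model, prefix, n=3):
    """Return the n most frequent words that start with the typed prefix."""
    matches = [(word, count) for word, count in model.items()
               if word.startswith(prefix.lower())]
    matches.sort(key=lambda wc: -wc[1])
    return [word for word, _ in matches[:n]]

# Toy corpus; a real system would train on large volumes of the user's own text.
model = build_model(
    "the black hole the black hole radiation the universe the universe began"
)
print(predict(model, "bl"))  # typing 'bl' is enough to suggest 'black'
```

Typing only the first couple of characters narrows the candidate list enough that the user can select the whole word, which is exactly the speed-up Hawking described.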
While AI helped him to communicate with the world, Hawking was also quite wary of AI.
As he warned the BBC in 2014, ‘The development of full artificial intelligence could spell the end of the human race.’
His reasoning was that AI is evolving at a faster pace than humans are. At some point it could reach a level where it takes control of itself…and then starts competing with humans for resources.
As he wrote in his last Reddit post, ‘The real risk with AI isn’t malice but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.’
You see, human intelligence grows at a roughly fixed rate, while non-biological intelligence continues to grow exponentially. And smarter machines, in turn, speed up the rate at which AI develops.
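The contrast between a fixed growth rate and an exponential one can be made concrete with a toy calculation. The starting values and rates below are purely illustrative, not forecasts; the point is only that multiplicative growth overtakes additive growth quickly, even from far behind:

```python
# Arbitrary 'capability' units and rates, chosen only to illustrate the shape
# of the curves: linear (fixed gain) versus exponential (compounding gain).
human, machine = 100.0, 1.0
human_gain = 1.0        # fixed additive gain per period
machine_factor = 1.5    # multiplicative gain per period

period = 0
while machine <= human:
    human += human_gain
    machine *= machine_factor
    period += 1

print(period)  # the exponential curve crosses the linear one after 12 periods
```

Despite starting 100 times behind and the human line never standing still, the compounding curve crosses over in just a dozen steps. That is the dynamic Hawking and others were pointing at.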
Hawking was part of a small group that worries about a future where humans have to compete against AI.
The small group also includes Tesla [NASDAQ:TSLA] CEO, Elon Musk.
In fact, Musk said this week that AI was more dangerous than nukes. As reported by CNBC:
‘I am not normally an advocate of regulation and oversight — I think one should generally err on the side of minimizing those things — but this is a case where you have a very serious danger to the public.
‘It needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely. This is extremely important. I think the danger of AI is much greater than the danger of nuclear warheads by a lot and nobody would suggest that we allow anyone to build nuclear warheads if they want. That would be insane.
‘And mark my words, AI is far more dangerous than nukes. Far. So why do we have no regulatory oversight? This is insane.’
Like Hawking, what scares Musk is that we could lose control over the machines. If we program computers to improve themselves, we could end up creating AI that is as smart as us, or even smarter.
So, we could end up becoming second-class citizens to AI — or obsolete.
Artificial intelligence is progressing rapidly… should we be concerned?
The truth is that artificial intelligence is developing at an accelerated rate.
But Musk, being Musk, has already come up with a solution to tame AI.
For one, he is trying to get us to Mars as fast as possible so that it can act as a ‘fail-safe for the human population.’
For another, he wants to create cyborgs…
That is, he wants to merge AI with humans.
As he said last year to Vanity Fair, ‘We’re already cyborgs…Your phone and your computer are extensions of you, [though I would say more like zombies, walking or riding the subway while staring at the screen] but the interface is through finger movements or speech, which are very slow.’
His solution? To create a faster connection by connecting computers directly to the brain. That is, he wants to get rid of all communication intermediaries, such as your keyboard and mouse, and merge AI with humans.
During the World Government Summit in Dubai last February, he said the best way to avoid humans becoming obsolete may be by ‘having some sort of merger of biological intelligence and machine intelligence.’
He wants to create an ‘AI human symbiote’. That is, a human with ‘amplified intelligence’ that could be more powerful than any AI we could produce.
And by amplifying human intelligence, we could get a leg up against powerful AI — and avoid the apocalyptic future.
And Musk is not just talking about it, but taking action.
Last January, Musk purchased Neuralink Corp. Neuralink’s website says it’s ‘Developing ultra-high bandwidth brain-machine interfaces to connect humans and computers.’
The project is ambitious. Then again, all of Musk’s projects are.
The AI sceptics group also includes Sir Tim Berners-Lee, the inventor of the World Wide Web.
He is also concerned AI could become the new ‘master of the universe’ if it starts creating and running its own companies. At a recent summit, he said, as reported by Tech World:
‘So when AI starts to make decisions such as who gets a mortgage, that’s a big one. Or which companies to acquire and when AI starts creating its own companies, creating holding companies, generating new versions of itself to run these companies.
‘So you have survival of the fittest going on between these AI companies until you reach the point where you wonder if it becomes possible to understand how to ensure they are being fair, and how do you describe to a computer what that means anyway?’
They have a point.
There is a bit of an arms race going on in every field to create the best and smartest AI to revolutionise the industry, from hospitality and health to defence, automotive and even finance. No industry is immune.
We are increasingly using these technologies to analyse big data sets to give loans, hire for jobs, prepare food, drive cars and even make investment decisions.
So, as AI gets more complex and becomes involved in decision-making, who will be responsible for the decisions it makes?
Editor, Markets & Money