
Elon Musk: AI is far more dangerous than nukes

In a wide-ranging interview at the annual South by Southwest conference in Austin, Musk sounded the alarm on AI, advocating for a public body that gains insight into AI and provides oversight so that artificial intelligence can be developed safely.


Elon Musk takes questions onstage during SXSW on Sunday, March 11. (Photo by Diego Donamaria/Getty Images for SXSW)

Artificial intelligence is the hottest healthcare buzzword these days, but a foremost tech executive warned Sunday that AI is improving exponentially and poses real perils if allowed to advance unfettered.

“Fools,” declared Elon Musk of self-avowed AI experts who are not concerned about the potential dangers of AI.

Musk, founder and CEO of SpaceX and co-founder and CEO of electric car company Tesla, was speaking to a packed audience gathered in Austin for the South by Southwest conference at the Moody Theater, which seats 2,750. He was being interviewed by Jonathan Nolan, co-creator of HBO’s thought-provoking and fascinating Westworld, which has its own take on AI. The question on AI came from an audience member and was aimed at the concerns Musk has widely communicated about AI in the past and how his view differs from that of other experts on the subject.

Musk dismissed the experts and their predictions.

“The biggest issue with self-described experts is that they think they know more than they do and they think they are smarter than they actually are,” he said with a visible sigh. “In general we are dumber than we think we are. By a lot. But these people define themselves by their intelligence. And they don’t like the idea that a machine can be smarter than them. I am really quite close to, very close to the cutting-edge AI and it scares the hell out of me. It’s capable of vastly more than almost anyone knows and the rate of improvement is exponential.”

As an example, Musk shared the well-documented story of how AI bested humans in the complex game of Go over a period of six to nine months. News broke in January 2016 that AlphaGo, an AI system built by Google’s DeepMind, had beaten a top professional player at the game.

“In a span of maybe six to nine months, AlphaGo went from being unable to beat even a reasonably good Go player to then beating the European champion, who was ranked 600th in the world, to then beating Lee Se-dol 4-to-1, who had been the world champion for five years, to then beating the current world champion, and then everyone while playing simultaneously.”

Don’t even think of comparing this to IBM’s Deep Blue beating Grandmaster Garry Kasparov at chess back in 1997. Developed in China, Go is vastly more complex because of the enormous number of possible moves. The story goes that before this seminal moment in AI history in May 2017, when AlphaGo defeated 19-year-old Ke Jie, experts had predicted that no AI would be able to topple a top human player at the game for at least a decade.

And then AlphaZero, which is AlphaGo repurposed for chess, beat the strongest chess engine, Stockfish, in a 100-game match. It was given only the rules and then played against itself for about four hours before overpowering Stockfish. That was in December 2017. Musk added that AlphaZero also defeated AlphaGo at the game of Go.

“AlphaZero just learned by playing itself,” Musk explained. “It can basically play any game that you give it the rules for. Whatever rules you give it, it literally reads the rules, plays the game [and beats] a human. For any game. Nobody expected that rate of improvement.”
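For readers curious what “learning by playing itself” can look like in practice, here is a minimal sketch in Python. To be clear, this is not DeepMind’s AlphaZero, which pairs deep neural networks with Monte Carlo tree search; it is a deliberately tiny self-play learner for the pile game Nim, and every name in it (RULES, VALUES, play_self_play_game and so on) is an assumption made up for this illustration.

    # Toy illustration of improving at a game purely by self-play.
    # NOT AlphaZero: a tabular learner for Nim (take 1-3 stones each turn,
    # whoever takes the last stone wins), given nothing but the rules.
    import random
    from collections import defaultdict

    RULES = {"start_stones": 21, "max_take": 3}  # the only game knowledge supplied

    VALUES = defaultdict(float)  # VALUES[s]: learned outlook for the player to move at s
    LEARNING_RATE = 0.1
    EXPLORATION = 0.2            # fraction of random moves while training

    def legal_moves(stones):
        return list(range(1, min(RULES["max_take"], stones) + 1))

    def choose_move(stones, explore=True):
        # Prefer the move that leaves the opponent in the worst-looking state.
        if explore and random.random() < EXPLORATION:
            return random.choice(legal_moves(stones))
        return min(legal_moves(stones), key=lambda m: VALUES[stones - m])

    def play_self_play_game():
        # Play one full game against itself, then credit every visited state
        # with the final result, flipping sign as the turn alternates.
        stones, history = RULES["start_stones"], []
        while stones > 0:
            history.append(stones)
            stones -= choose_move(stones)
        reward = 1.0  # the player who made the last move won
        for state in reversed(history):
            VALUES[state] += LEARNING_RATE * (reward - VALUES[state])
            reward = -reward

    if __name__ == "__main__":
        for _ in range(20000):
            play_self_play_game()
        # After enough self-play the agent should usually take 1 stone from 21,
        # leaving a multiple of 4 -- the known optimal strategy for this game.
        print(choose_move(RULES["start_stones"], explore=False))

The particular game does not matter; the point is that the program is handed only the rules, plays thousands of throwaway matches against itself, and its sense of what counts as a good position emerges from the outcomes. That, in broad strokes, is the idea Musk was describing, scaled up enormously in AlphaZero.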

The same rate of improvement is happening with self-driving cars.

“I think by the end of next year, self-driving cars will encompass essentially all modes of driving and be at least 100 to 200 percent safer than a person,” Musk declared. [Earlier he did concede that his timelines for when things will happen are often described by others as too ‘optimistic.’]

So, what’s to be done given that the AI horse has already left the barn?

“So the rate of growth is really dramatic and we have to figure out some way to ensure that the advent of digital superintelligence is one which is symbiotic with humanity,” he said. “I think that is the single biggest existential crisis that we face and the most pressing one.”

Later he added that the danger posed by runaway artificial intelligence is bigger than the threat of nuclear warheads:

“Mark my words. AI is far more dangerous than nukes,” he said.

Which raises the question: what is the solution to this existential crisis? He gave an unexpected answer.

“I am not normally an advocate of regulation and oversight. I think one should generally err on the side of minimizing those things, but this is a case where you have a very serious danger to the public,” Musk warned. “And so, therefore, there needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely.”

Musk is not the only person advocating for containing AI’s most harmful impulses. MIT physicist Max Tegmark, who has penned the book Life 3.0: Being Human in the Age of Artificial Intelligence, advocates for robust and informed public discussion of how this power can be wielded for humanity’s benefit: a kind of benevolent AI.

And to come up with a framework for how to develop and grow this technology, it may not be a bad idea to emulate the Three Laws of Robotics that novelist and scientist Isaac Asimov laid out in his famous Robot series:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

And now to boldly go — albeit with some regulation — where no man or woman has gone before.

