I just read an article about a clash between Elon Musk and Steven Pinker. I don't know either of these two gentlemen personally, but the article probably does a good job of explaining their positions and the issues.
The general gist is that Elon Musk has serious concerns about unchecked general AI doing various “bad things”. Steven Pinker seems to think that although we are still “15-25 years away from human level AI”, when we do get there, it will be sufficiently evolved not to pose a threat to humanity. The article discusses who might be right, but concludes that we need to figure this one out before switching Skynet on.
Which one of these guys could be right?
As many reading this blog will know, Elon Musk is a successful serial entrepreneur with a great feel for engineering – but perhaps not much hands-on machine learning experience. One of his companies, Tesla, is well known for trying to make its electric cars self-driving – a poster-child machine-learning task – although by some accounts, other car-makers are quietly getting on and doing a much better job than Tesla.
Steven Pinker is a cognitive psychologist, but I know him best from books like The Language Instinct. I read it twice. Last year he made headlines with a book about how amazingly peaceful our planet has become – at least compared to what we know of recorded history. He is a good writer, regardless of whether his optimism is deserved. And judging from the articles and (at least one) book he has published on the matter, he probably has a better hands-on grasp of AI and machine learning than Musk.
Either way, both these guys have good insight.
As per my blog post here, I would side with Pinker that human-level AI is still a ways off. But I would absolutely side with Musk that the potential for things going to custard if and when that happens is high. To be fair, Musk is also well aware that general AI is not going to happen tomorrow either.
The reasons I would side with Musk are:
- Both of these men see their worlds through an evolutionary lens. And according to the article, many machine-learning models are also trained using evolutionary algorithms – even if they are also being guided (for now) by human engineers (see the sketch after this list). But inherent in evolution is the concept of survival of the fittest. Last time I checked, human beings outnumbered gorillas. Of course, there are quite a few bacteria out there as well, but I think the idea still stands. In fact, it makes it worse. The only reason we don’t kill all the bacteria we don’t want is because we can’t. A super-AI that doesn’t like/want/need us is unlikely to have that problem – as the article also intimates.
- Even if these machines are being guided (somewhat) by human engineers, what guarantee do we have that this will make them any safer? If humans were safe, we wouldn’t have jails. Mr. Pinker, life on this planet now may be significantly less short and violent than it was x hundred years ago, but try telling that to the parents of those killed in the latest high-school mass killing or terrorist bombing. If you haven’t noticed, all people do ‘bad’ things – albeit to varying extents. And those with more power tend to be a lot more dangerous.
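To make the “survival of the fittest” point concrete, here is a minimal sketch of the kind of evolutionary loop the article alludes to, written in Python. Everything in it – the toy fitness function, the population size, the mutation rate – is a made-up illustration of the general technique, not how any real AI or self-driving system is actually trained:

```python
import random

# Toy fitness: how close a candidate "genome" (a list of numbers) sums to a target.
TARGET = 42.0

def fitness(genome):
    return -abs(sum(genome) - TARGET)  # higher is better

def mutate(genome, rate=0.1):
    # Randomly nudge some genes; unmutated genes are copied unchanged.
    return [g + random.gauss(0, 1) if random.random() < rate else g for g in genome]

def evolve(pop_size=50, genome_len=5, generations=100):
    population = [[random.uniform(-10, 10) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Survival of the fittest: keep the top half, discard the rest.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Refill the population with mutated copies of the survivors.
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)

best = evolve()
print(sum(best))  # should land close to 42
```

Note what the loop optimises for: whatever the fitness function rewards, and nothing else. The “discard the rest” line is exactly the part that worries Musk.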
My point here is that whether we create “AI in our own image” or just let the blind forces of digital evolution (whatever that looks like) take the wheel, the chances of a human-level AI being good for us are low.
I think Musk (and also perhaps Stephen Hawking) would say that we should not try to build a human-level AI until we are sure we are able to control it. But looking through the pages of history, I can’t see too many people paying much attention to that suggestion.
To use the closing words of the article, how then do we “program machines with enlightenment before we program machines with consciousness?”
Even if we are a long way from being able to digitally encode it, if we were able to create an AI made not in our own imperfect image, but in the image of someone not flawed, someone not bound by the imperfect shackles of sin (to use the Christian jargon), we might do better. But would imperfect humans even be able to do that? Will we ever hear “Drum roll, please, for the Jesus-bot XYP-2000” at a CES conference in Vegas 25 years from now?
But in the absence of supernatural help, could machines somehow be programmed to filter out the bad examples of human behaviour they witness every day by listening in and watching us on our cell phones, and instead use the good examples of people around the world as their learning data? Just as Google turns zillions of data points into accurate search results, relevant ads and more-or-less accurate GPS directions, could acts of kindness, generosity, sacrifice, bravery, justice and compassion create a new digital Adam and Eve? (A toy sketch of the filtering idea follows below.) Or would such a machine fall at the first sign of temptation and turn against its creator?
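Here is that equally toy sketch of the filtering idea, again in Python. The examples, the labels, and above all the assumption that behaviour could be reliably labelled “good” or “bad” in the first place are mine, invented for illustration – and that labelling is exactly the hard part:

```python
# Toy sketch: keep only the examples labelled as "good" behaviour as learning data.
# The observations and labels below are invented; deciding what counts as "good"
# is precisely the unsolved problem.

observations = [
    {"text": "stranger returns a lost wallet", "label": "good"},
    {"text": "online mob piles onto a teenager", "label": "bad"},
    {"text": "neighbours rebuild a flood-damaged house", "label": "good"},
    {"text": "commuter shoves past an elderly man", "label": "bad"},
]

# Filter the learning data down to the good examples only.
training_data = [obs["text"] for obs in observations if obs["label"] == "good"]

for example in training_data:
    print("learn from:", example)
```

The filter itself is trivial; the question the paragraph above is really asking is who writes the labels, and whether they can be trusted.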
Is there really nothing new under the sun?