Is Innovation in Advanced AI the Apocalypse?

Deepak Amirtha Raj

Deepak is a Research Analyst with expertise in business strategy and technology. His research covers artificial intelligence, virtual and augmented reality, machine learning, big data analytics, and market strategy. You can find him playing guitar and writing gospel songs in his band "The Brotherhood". Follow him on LinkedIn and Twitter.

If you envision the future, it can look bleak. The world is under immense political, economic, and environmental stress, and it is hard to know which threat to fear most. Even human existence is uncertain: danger could emerge from many directions, including global warming, an asteroid strike, a new disease, or machines turning everything to dust. Artificial intelligence is often counted among these existential threats. "The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded," Stephen Hawking told the BBC. Last year, he added that "AI is likely either the best or worst thing ever to happen to humanity".

Leading technologists, including Bill Gates, Elon Musk, and Steve Wozniak, have made similar predictions about AI. Yet billions of dollars are being invested in AI research, and tremendous advances are being made. In March 2016, the AlphaGo program triumphed in its final game against South Korean Go grandmaster Lee Sedol to win the series 4-1, in what was described as "one of the most incredible games ever". In many other areas, from driving cars on the ground to winning dogfights in the air, computers are starting to take over from humans.

Hawking's fears revolve around the idea of the technological singularity: the point in time at which machine intelligence starts to take off, and a new, more intelligent species starts to inhabit Earth. The idea of the technological singularity can be traced back to a number of thinkers, including John von Neumann, one of the founders of computing, and the science fiction author Vernor Vinge. The idea is roughly the same age as research into AI itself. In 1958, mathematician Stanisław Ulam wrote a tribute to von Neumann in which he recalled: "One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity… beyond which human affairs, as we know them, could not continue" (Bulletin of the American Mathematical Society, vol 64, p 1).

More recently, the idea of a technological singularity has been popularised by Ray Kurzweil, who predicts it will happen around 2045, and Nick Bostrom, who has written a bestseller, Superintelligence, on its consequences. There are several reasons to be fearful of machines overtaking us in intelligence. Humans have become the dominant species on the planet largely because we are so intelligent. Many animals are bigger, faster, or stronger than us, but we used our intelligence to invent tools, agriculture, and amazing technologies such as steam engines, electric motors, and smartphones. These have transformed our lives and allowed us to dominate the planet.

It is therefore not surprising that machines that think – and might even think better than us – threaten to usurp us. Just as elephants, dolphins, and pandas depend on our goodwill for their continued existence, our fate in turn may depend on the decisions of these superior thinking machines.

The idea of an intelligence explosion, when machines recursively improve their intelligence and thus quickly exceed human intelligence, is not a particularly wild idea. The field of computing has profited considerably from many similar exponential trends. Moore’s law predicted that the number of transistors on an integrated circuit would double every two years, and it has pretty much done so for decades. So it is not unreasonable to suppose AI will also experience exponential growth.
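To get a feel for the scale of that trend, here is a rough back-of-the-envelope sketch in Python. The 1971 starting figure (roughly 2,300 transistors, the Intel 4004) and the strict two-year doubling period are illustrative assumptions rather than precise history:

```python
# A rough illustration of Moore's law: transistor counts doubling
# every two years. The starting point (~2,300 transistors, Intel 4004,
# 1971) and the exact doubling period are illustrative assumptions.

def transistors(year, base_year=1971, base_count=2_300, doubling_years=2):
    """Projected transistor count under a strict two-year doubling schedule."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1991, 2011, 2021):
    print(f"{year}: ~{transistors(year):,.0f} transistors")

# Output:
# 1971: ~2,300 transistors
# 1991: ~2,355,200 transistors
# 2011: ~2,411,724,800 transistors
# 2021: ~77,175,193,600 transistors
```

Fifty years of doubling every two years gives a growth factor of about 33 million – the kind of runaway curve that singularity arguments extrapolate to intelligence itself.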

Like many of my connections working in AI, I predict we are just 30 or 40 years away from AI achieving superhuman intelligence. But there are several strong reasons why a technological singularity is improbable.

THE “LIMITS OF INTELLIGENCE” ARGUMENT

There are many fundamental limits within the universe. Some are physical: you cannot accelerate past the speed of light, know both position and momentum with complete accuracy, or know when a radioactive atom will decay. Any thinking machine that we build will be limited by these physical laws. Of course, if that machine is electronic or even quantum in nature, its limits are likely to be beyond the biological and chemical limits of our human brains. Nevertheless, AI may well run into some fundamental limits of its own. Some of these may be due to the inherent uncertainty of nature: no matter how hard we think about a problem, there may be limits to the quality of our decision-making. Even a superhuman intelligence is not going to be any better than you at predicting the result of the next EuroMillions lottery.

THE “FAST-THINKING DOG” ARGUMENT

Silicon has a significant speed advantage over our brain’s wetware, and this advantage doubles every two years or so according to Moore’s law. But speed alone does not bring increased intelligence. Even if I can make my dog think faster, it is still unlikely to play chess. It doesn’t have the necessary mental constructs, the language, and the abstractions. Steven Pinker put this argument eloquently: “Sheer processing power is not a pixie dust that magically solves all your problems.”

Intelligence is much more than thinking faster or longer about a problem than someone else. Of course, Moore's law has helped AI: we can now learn faster, and from bigger data sets. Speedier computers will certainly help us to build artificial intelligence. But, at least for humans, intelligence depends on many other things, including years of experience and training. It is not at all clear that we can short-circuit this in silicon simply by increasing the clock speed or adding more memory.

THE “COMPUTATIONAL COMPLEXITY” ARGUMENT

Finally, computer science already has a well-developed theory of how difficult different problems are to solve. There are many computational problems for which even exponential improvements in speed are not enough to solve them practically. A computer cannot analyse some arbitrary code and know for sure whether it will ever stop – this is the "halting problem". Alan Turing, the father of both computing and AI, famously proved that such a problem is not computable in general, no matter how fast or smart we make the computer analysing the code. Switching to other types of device, such as quantum computers, will help with some problems. But at best these offer exponential improvements over classical computers, which is not enough to crack something like Turing's halting problem. There are hypothetical hypercomputers that might break through such computational barriers; however, whether such devices could exist remains controversial.
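Turing's argument is short enough to sketch in code. The following is a minimal, purely illustrative Python sketch of the standard diagonalization proof; `halts` and `paradox` are hypothetical names, and `halts` cannot actually be implemented – which is exactly the point:

```python
# A sketch of Turing's diagonalization argument. `halts` is a
# hypothetical oracle that supposedly decides, for any program and
# input, whether that program halts. No such function can be written.

def halts(program, program_input):
    """Hypothetical perfect halting oracle (assumed, for contradiction)."""
    ...

def paradox(program):
    # Ask the oracle: does `program` halt when fed its own source?
    if halts(program, program):
        while True:  # Oracle says "halts"? Then loop forever.
            pass
    # Oracle says "loops forever"? Then halt immediately.

# Now consider feeding `paradox` to itself:
#   - If halts(paradox, paradox) returns True, paradox(paradox) loops forever.
#   - If it returns False, paradox(paradox) halts.
# Either way the oracle is wrong, so no perfect `halts` can exist –
# no matter how fast the machine running it is.
```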

THE FUTURE

So there are many reasons why we might never witness a technological singularity. But even without an intelligence explosion, we could end up with machines that exhibit superhuman intelligence – we might just have to program much of that intelligence painfully ourselves. If this is the case, the impact of AI on our economy and our society may arrive less quickly than people like Hawking fear. Nevertheless, we should start planning for that impact.

Even without a technological singularity, AI is likely to have a large impact on the nature of work. Many jobs, such as taxi and truck driving, are likely to disappear in the next decade or two, which will further increase the inequalities we already see in society. Even quite limited AI is likely to have a large influence on the nature of war: robots will industrialise warfare, lowering the barriers to conflict and destabilising the current world order, and they will be used by terrorists and rogue nations against us. If we don't want to end up with Terminator, we had better ban robots on the battlefield soon. If we get it right, AI will help make us all healthier, wealthier, and happier. If we get it wrong, it may well be one of the worst mistakes we ever make.

What are your views on AI?

Do you have an opinion on the future of the artificial intelligence industry? Leave a comment below on how you think AI will affect humans in the future!


