This summer, Tesla and SpaceX CEO Elon Musk, physicist Stephen Hawking, Apple co-founder Steve Wozniak and over 1,000 leading experts in artificial intelligence (AI) and robotics signed an open letter warning of a “military artificial intelligence arms race” and calling for a ban on “offensive autonomous weapons”. Such weapons have been described as “the third revolution in warfare, after gunpowder and nuclear arms”. Musk has also donated $10 million to the Future of Life Institute, an organization dedicated to keeping AI safe.

Offensive autonomous weapons use AI to select and kill targets without human intervention – in other words, they are robots that decide when to kill, independently of humans. They do not include cruise missiles or drones controlled by humans. Think rather of armed quadcopters seeking out and eliminating people based on pre-selected criteria – and who would be held accountable? Many AI experts worry that intelligent killing machines could fall into the wrong hands, and that a new arms race could spell disaster.
Artificial intelligence can in theory make the battlefield safer for soldiers and civilians, and prove useful in natural disasters, rescue missions and humanitarian work. However, the potential for misuse of AI technology is high. Progress in the field is very rapid right now, and many scientists agree that if we wait too long, it may become too late to prevent certain uses.
“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.” – Elon Musk
Who has these weapons?
Several countries are funding and developing autonomous weapons, including China, Germany, India, Israel, Republic of Korea, Russia, and the United Kingdom. Some robots operating under various degrees of autonomy have already been deployed by the US, UK, Israel and the Republic of Korea.
It’s not just governments that are investing in this type of technology: in 2013, Google bought Boston Dynamics, a company famous for creating eerily realistic warrior robots, which has also developed robots for DARPA, a research agency of the US Department of Defense. Imagine Boston Dynamics’ robots combined with Google’s self-driving (autonomous) car technology!
Google also purchased the UK company DeepMind in 2014, a firm dedicated to “solving intelligence” by creating AI capable of “deep learning” through general-purpose learning algorithms. Facebook, Apple, Microsoft and Baidu, to name just a few, are also hiring scores of AI researchers and investing hundreds of millions of dollars.
There is now an investor rush toward AI, and it has become one of the hottest trends in start-up investing. The result is a “wild west” mentality in which entrepreneurs race to apply AI to every problem they can think of. Successful startups are acquired by larger companies, which can turn the AI technology into marketable products.
What is artificial intelligence?
AI is the study and design of intelligent machines that can perceive their environment and take actions to maximize their chances of success. The field combines many disciplines, including computer science, mathematics, psychology, linguistics, philosophy and neuroscience. Central goals of AI include reasoning, knowledge, planning, learning, natural language processing (communication), perception, and the ability to move and manipulate objects.
AI is already seeping into our daily lives, and development is accelerating at a rapid pace. “Applied AI” is used extensively in today’s technology, and research in the area is well funded in both academia and industry. Here are some commonly used products and services that currently incorporate AI:
Google Voice Search
Email spam filtering
Credit card fraud detection
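To make one of these examples concrete: spam filtering is commonly done with a naive Bayes classifier, which compares how likely a message's words are under "spam" versus "not spam". Below is a minimal sketch; the training messages are entirely made up for illustration, and real filters use far larger corpora and more features.

```python
from collections import Counter
import math

# Toy training data (entirely made up for illustration)
spam = ["win money now", "free money offer", "claim your free prize"]
ham = ["meeting at noon", "project update attached", "lunch tomorrow"]

def word_counts(msgs):
    c = Counter()
    for m in msgs:
        c.update(m.split())
    return c

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
vocab = set(spam_counts) | set(ham_counts)

def log_prob(msg, counts, total):
    # Laplace-smoothed log-likelihood of the message under one class
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in msg.split())

def is_spam(msg):
    # Classify by whichever class makes the message more likely
    return log_prob(msg, spam_counts, spam_total) > log_prob(msg, ham_counts, ham_total)

print(is_spam("free money"))       # True on this toy data
print(is_spam("project meeting"))  # False on this toy data
```

The same "learn statistics from labeled examples, then score new inputs" pattern underlies fraud detection as well, just with transaction features instead of words.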
How far have we come with AI?
Computers are incredibly fast and powerful at mathematical calculation and at processing large amounts of data. However, they cannot yet remotely compete with human cognitive abilities, which are extremely complex.
The ultimate goal of AI is general intelligence (known as strong AI): a machine with the full range of human cognitive abilities, able to perform any task at least as well as a human.
A 2014 survey of AI experts found an estimated 50% probability that high-level machine intelligence will be developed by 2030–40, and a 90% probability by 2075. The majority of respondents expected machines to become superintelligent within 30 years of that point, and assigned roughly a one-in-three chance that this will be bad or extremely bad for humanity.
“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.”– Stephen Hawking
What could happen?
If machines were to become superintelligent, our future would be shaped by the preferences of the AI: the fate of humanity may depend on what a superintelligence decides to do. What would those preferences be? From a human perspective, this is very difficult to predict. We should not assume that intelligent machines would follow the same morals and ethics as we do, many of which are rooted in human biology.
“Humans, limited by slow biological evolution, couldn’t compete and would be superseded by A.I.” – Stephen Hawking
"We cannot quite know what will happen if a machine exceeds our own intelligence, so we can't know if we'll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it" – Stephen Hawking
Intelligent machines, by definition, would excel at using available resources to achieve their goals. What they would choose to do in pursuit of those goals, however, may be completely unanticipated. For example, if a computer’s job is to solve a mathematical problem, it might decide that the most efficient approach is to take over the world and turn it into a giant computer dedicated to the calculation – and there may be no way to switch off a widespread AI, especially once we have grown dependent on it. Where, for example, is the off-switch for the Internet?
Many scientists believe that a technological singularity will occur as AI technology advances. An AI could theoretically improve and redesign itself (recursive self-improvement), or design and build successive generations of machines, each better than the last. This would produce a runaway effect – an intelligence explosion exceeding human capacity, comprehension and control.
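The "runaway" character of recursive self-improvement can be illustrated with a toy model – not a prediction, just arithmetic. Assume each generation improves its successor in proportion to its own capability; the `gain` parameter and the capability numbers are invented for illustration:

```python
# Toy model of recursive self-improvement (illustrative only):
# each generation designs a successor whose improvement is
# proportional to its own capability, giving exponential growth.
def self_improve(capability=1.0, gain=0.5, generations=20):
    history = [capability]
    for _ in range(generations):
        capability += gain * capability  # successor improves on its designer
        history.append(capability)
    return history

trajectory = self_improve()
# Capability multiplies by (1 + gain) each step, so after 20
# generations it has grown by a factor of 1.5**20, roughly 3325x.
print(trajectory[-1] / trajectory[0])
```

The point of the sketch is that even a modest per-generation gain compounds into explosive growth – the mathematical core of the "intelligence explosion" argument.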
The singularity is the point at which the ever-accelerating progress of technology passes a point of no return: beyond it, we can neither predict nor comprehend what will happen, nor go back to the way we lived before. Computer scientist, inventor, futurist and author Ray Kurzweil, who now works for Google full time, predicts the singularity will occur around 2045.
Intelligent robots may come to manage our resources and infrastructure, but as they become more intelligent they could also become self-protective, seeking resources to better attain their goals. Some argue that they could fight us to survive and refuse to be switched off.
AI and consciousness
It is clear that many human biological limits are absent in robots. However, perhaps robots will forever be limited by an inability to be conscious, aware or sentient. What if machines “awaken”? Could the Internet become conscious?
Some of the fundamental questions in this debate include:
Is our brain just a computer?
Is general intelligence sufficient to acquire consciousness?
How do we know if a computer is conscious?
Do we need a “soul” in order to have consciousness?
What is consciousness?
So far, autonomous machines exhibit no creativity, emotions or free will: they are programmed to execute certain functions and are essentially slaves to their programming. In recent years, consciousness has become a significant topic of research in psychology and neuroscience, with the main focus on determining the neural and psychological correlates of consciousness.
Generally, humans seem to share a broad intuitive understanding of what consciousness is. One of the problems with experimental research on consciousness, however, is that we lack an operational definition of the concept. Consciousness can be loosely defined as the quality of being aware of an external object or of something within oneself. It has also been defined as awareness, sentience, wakefulness, having a sense of self, or exercising executive control within the mind.
"Anything that we are aware of at a given moment forms part of our consciousness, making conscious experience at once the most familiar and most mysterious aspect of our lives." - Max Velmans and Susan Schneider in The Blackwell Companion to Consciousness
The future of AI: reality or science fiction?
Author and biochemistry professor Isaac Asimov wrote a series of short stories between 1940 and 1950 that were compiled into the book "I, Robot". The stories share the theme of the interaction between humans, robots and morality, set within a fictional history of robotics. In the short story "Runaround", Asimov introduced the "Three Laws of Robotics", quoted from the fictional "Handbook of Robotics, 56th Edition, 2058 A.D.":
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
In later stories, Asimov's robots take responsibility for governing entire planets and human civilizations. Asimov subsequently added a fourth law, the Zeroth Law, to precede the others:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
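The laws form a strict priority ordering: a lower-numbered law always overrides a higher-numbered one. That structure can be sketched as a rule filter over candidate actions – the action representation below (a dictionary of boolean flags) is entirely hypothetical, invented for illustration:

```python
# Hypothetical sketch: Asimov's laws as a strict priority ordering.
# An action is a dict of boolean flags; each law can veto it, and
# lower-numbered laws take precedence over higher-numbered ones.
LAWS = [
    ("Zeroth", lambda a: not a.get("harms_humanity", False)),
    ("First",  lambda a: not a.get("harms_human", False)),
    ("Second", lambda a: a.get("obeys_order", True)),
    ("Third",  lambda a: not a.get("self_destructive", False)),
]

def permitted(action):
    """Return (allowed, violated_law) for a candidate action."""
    for name, rule in LAWS:
        if not rule(action):
            return False, name  # first (highest-priority) law violated
    return True, None

print(permitted({"harms_human": True}))       # vetoed by the First Law
print(permitted({"self_destructive": True}))  # vetoed by the Third Law
print(permitted({}))                          # allowed
```

Of course, the hard part Asimov's stories explore is exactly what this sketch glosses over: deciding whether a real-world action "harms a human" in the first place.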
Even in the 1940s, Asimov argued that robots and AI would not inherently obey the Three Laws of Robotics: their human creators would have to devise a way to program the laws into them. In Asimov's later books, even robots that follow the Three Laws perfectly still inflict long-term damage on humanity, by depriving it of creativity as well as inventive and risk-taking behaviour.
At this point, we simply do not know what consciousness is or what the limits of AI's capabilities are. The majority of AI and robotics experts are extremely concerned about the safety of the technology, but, like any powerful new technology (e.g. fossil fuels, nuclear fusion, GMOs), it brings both opportunities and risks. Asimov's laws are a great starting point for the debate over governance and oversight of AI.