From hospital robots and Siri to Google's search algorithms and self-driving cars, artificial intelligence, or AI, is steadily making its way into daily human life. Science fiction, however, has a funny way of portraying AI as renegade computers or human-like androids out to destroy humanity, as in films such as The Terminator, The Matrix, and The Machine. Even setting aside those fantasies, there are still real risks to be concerned about when AI is pushed to its extremes.
In its simplest form, artificial intelligence is intelligent behavior exhibited by a machine without human assistance, although that behavior is usually programmed into the machine by humans in the first place. In computer science, an AI is any device that perceives its environment and takes actions that maximize its chance of success toward a certain goal.
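That textbook definition can be made concrete with a small sketch. Everything here is illustrative: the states, actions, and the `success_probability` table are invented for the example, and a real system would learn or estimate these values rather than look them up.

```python
# A minimal sketch of the definition above: an agent perceives a state
# and picks the action that maximizes its chance of success.
# All names and numbers are hypothetical, for illustration only.

def success_probability(state, action):
    # Toy model: each (state, action) pair has a known chance of
    # reaching the goal. Real systems estimate this from data.
    table = {
        ("low_battery", "recharge"): 0.9,
        ("low_battery", "keep_working"): 0.2,
        ("charged", "keep_working"): 0.8,
        ("charged", "recharge"): 0.1,
    }
    return table.get((state, action), 0.0)

def choose_action(state, actions):
    # Perceive the environment (state), then act to maximize the
    # estimated chance of success toward the goal.
    return max(actions, key=lambda a: success_probability(state, a))

print(choose_action("low_battery", ["recharge", "keep_working"]))  # → recharge
```

The agent has no opinions or feelings about recharging; it simply selects whichever action scores highest under its model, which is all "intelligent behavior" means in this narrow sense.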
But the artificial intelligence we know and use today is properly called "narrow" or "weak" AI, because each system is designed to perform only a single narrow task, or a series of tasks done one at a time. Google's search algorithms, for instance, handle only internet searches; a facial-recognition system only matches faces; a chess engine, however formidable, only plays chess. "Strong" or "general" AI, which could tackle any intellectual task a human can, is being pursued but has not yet been achieved.
Unlike the "superintelligent" AI machines of sci-fi movies, researchers believe that even a superintelligent AI is unlikely to exhibit human emotions like love and hate, and there is no reason to expect an AI to become intentionally benevolent or malevolent. There are, however, two scenarios that make using AI risky:
The AI is programmed to do something devastating
Autonomous weapons are AI systems programmed to kill. In the wrong hands, these weapons could easily cause mass casualties. An AI arms race is also a possibility, and it could lead to an all-out AI war resulting in mass destruction. Such weapons may be designed to be extremely difficult to "turn off," so humans could plausibly lose control of them. This risk grows as AI systems become more intelligent and more autonomous.
The AI is programmed to do something beneficial, but develops a destructive method for achieving its goal
This can happen whenever the AI's goals are not fully aligned with ours, and achieving that alignment is genuinely difficult. For instance, if you ask a self-driving car to take you to the airport as fast as possible, it might do exactly that: breaking the speed limit and ending up chased by police. It did what you asked for, not what you wanted.
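The airport example can be sketched in a few lines. This is a toy with invented numbers, not a real planner: the point is only that an optimizer told to minimize travel time, and nothing else, will happily pick a plan that violates a constraint no one wrote down.

```python
# Toy illustration of a misspecified objective: the planner is told only
# "minimize travel time," so the unstated human constraint (the speed
# limit) plays no role in its choice. All figures are hypothetical.

SPEED_LIMIT = 65  # mph; a constraint the objective never mentions

routes = [
    {"name": "legal",    "speed_mph": 60,  "distance_mi": 30},  # 30 min
    {"name": "reckless", "speed_mph": 120, "distance_mi": 30},  # 15 min
]

def travel_time_min(route):
    # The entire objective: minutes to the airport. Nothing about safety
    # or legality appears here, so the optimizer cannot care about them.
    return route["distance_mi"] / route["speed_mph"] * 60

best = min(routes, key=travel_time_min)
print(best["name"])                        # → reckless
print(best["speed_mph"] > SPEED_LIMIT)     # → True: goal met, intent violated
```

The fix is not a smarter optimizer but a better objective: until the speed limit (and everything else we implicitly care about) is part of the goal, the system is optimizing the instruction rather than the intention.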