It was once the stuff of science fiction: robots programmed to kill anyone in furtherance of a mission, and nothing could stop them. Even worse, they were controlled by other machines, computers with artificial intelligence that had become sentient and decided that mankind was unnecessary and had to be eliminated.
The most well-known instance of these rogue robots can be found in the “Terminator” movies, which portrayed a cyborg sent back through time to kill the woman who would give birth to the leader of the human resistance movement in the future.
You probably remember the final scene from the first “Terminator.”
Many of us watched that movie and thought, “That’s a long way off in the future.”
But it turns out that the future is now, as Vox reported in a recent article on killer robots:
“Experts in machine learning and military technology say it would be technologically straightforward to build robots that make decisions about whom to target and kill without a ‘human in the loop’ — that is, with no person involved at any point between identifying a target and killing them. And as facial recognition and decision-making algorithms become more powerful, it will only get easier.
“Called ‘lethal autonomous weapons’ — but ‘killer robots’ isn’t an unreasonable moniker — the proposed weapons would mostly be drones, not humanoid robots, which are still really hard to build and move. But they could be built much smaller than existing military drones, and they could potentially be much cheaper.”
Drones, also known as unmanned aerial vehicles (UAVs), have been used in warfare by the United States since 2000. They are quiet, carry lethal armaments, and are incredibly accurate; a drone can take out an entire enemy camp with a few well-placed missiles. But each one is controlled by a human with a computer and a joystick, so it does only what it is told to do.
Now think about a day in which drones are sent off to do their deadly work without human oversight. Stuart Russell, a computer science professor at UC Berkeley, says that day is less than two years away:
“Technologically, autonomous weapons are easier than self-driving cars. People who work in the related technologies think it’d be relatively easy to put together a very effective weapon in less than two years.”
Countries across the world will probably begin to use such killer machines in the very near future, and the military case for removing human pilots from the loop is clear:
“From a military perspective, the most straightforward argument for autonomous weapons is that they open up a world of new capabilities. If drones have to be individually piloted by a human who makes the crucial decisions about when the drone could fire, you can only have so many of them in the sky at once.
“Furthermore, current drones need to transmit and receive information from their base. That introduces some lag time, limits where they can operate, and leaves them somewhat vulnerable — they are useless if communications get cut off by enemies who can block (or ‘jam’) communication channels.”
Perhaps you’re wondering: How afraid should I be of these killer robots? As Vox notes:
“Killer robots have the potential to do a lot of harm, and make the means of killing lots of people more available to totalitarian states and to non-state actors. That’s pretty scary.
“But in many ways, the situation with lethal autonomous weapons is just one manifestation of a much larger trend.
“AI is making things possible that were never possible before, and doing so quickly, such that our capabilities frequently get out ahead of thought, reflection, and strong public policy. As AI systems become more powerful, this dynamic will become more and more destabilizing.”
Translation: To paraphrase Arnold Schwarzenegger in the first “Terminator” movie, they’ll be back. Are we ready for that?