Killer robots that can carry out attacks without human intervention will become a reality within years unless there is a global agreement to ban them, warns a leading scientist. Wendell Wallach, an ethicist at Yale University, will today call on the US government to outlaw such machines on the basis that they violate international humanitarian law. Wallach also warns that technology has become so advanced that a robot capable of killing humans of its own volition will soon become a possibility, much like the rogue machines seen in Arnold Schwarzenegger's hit film, The Terminator.
For many military planners, the answer is straightforward. Unmanned drones were particularly successful for the U.S. in killing leaders of al-Qaeda hidden in remote locations of Afghanistan and Pakistan. Some analysts believe unmanned aerial vehicles (UAVs) were the only game in town, the only tool the U.S. and its allies had to successfully combat guerrilla fighters. Furthermore, drones killed a good number of al-Qaeda leaders without jeopardizing the lives of soldiers. Another key advantage: reducing the loss of civilian lives through the greater precision that can be achieved with UAVs in comparison to more traditional missile attacks. The successful use of drones in warfare has been accompanied by the refrain that we must build more advanced robot weapons before "they" do.
Roboticizing aspects of warfare gained steam in the U.S. during the administrations of George W. Bush and Barack Obama. As country after country follows the U.S. military's lead and builds its own force of UAVs, it is clear that robot fighters are here to stay. This represents a shift in the way future wars will be fought, comparable to the introduction of the crossbow, the Gatling gun, aircraft, and nuclear weapons.
"Open the pod bay doors please, Hal." "I'm sorry, Dave. I'm afraid I can't do that." So said the sentient computer HAL, at the controls of the Discovery One spacecraft, to astronaut Dave Bowman in the 1968 Stanley Kubrick classic, "2001: A Space Odyssey."
While computer-versus-human conflict has been depicted in many films since then, a leading artificial intelligence researcher is now making the case that we need to start planning for the day when artificial intelligence combined with lethal capabilities will pose a real challenge to humanity.
Roman V. Yampolskiy, a respected artificial intelligence researcher and the director of the Cybersecurity Laboratory at the University of Louisville, is the author of a new study, "Taxonomy of Pathways to Dangerous AI," due to be presented for the first time Saturday at the Association for the Advancement of Artificial Intelligence conference in Phoenix, Arizona. His paper is an attempt to spark a serious, intellectual discussion about what controls humans can put on machines that don't exist yet.
"In the next five to ten years we're going to see a lot more intelligent non-human agents involved in serious incidents," Yampolskiy said. "Science fiction is useful in showing you what is possible, but it's unlikely that any representations we've seen, whether it's the Terminator or 'Ex Machina,' would be accurate. But something similar, when it comes to the possible damage they'd cause, could happen."