I’m Paula Santolaya, reporting live from SANTOLIVE in my virtual reality. Have comments, opinions, or suggestions? Post them in the comments section or email me at email@example.com.
A few years ago, alarms went off in the digital world. Thousands of scientists and experts in technology and Artificial Intelligence, including Stephen Hawking, Elon Musk and Steve Wozniak (Apple co-founder), signed an open letter against autonomous weapons, warning that the technology could drive a “third revolution in warfare”.
Warfare waged by armies of human soldiers may soon be a thing of the past. New wars have moved to a new battlefield: automation. Lethal Autonomous Weapons Systems (LAWS) are killing machines capable of operating on their own, without any human intervention.
Although it may sound like science fiction, countries such as Russia, China, the United States of America, Israel, South Korea and the United Kingdom have begun to develop ‘killer robots’ in the form of tanks, ships, fighter aircraft, submarines and other weapons capable of tracking, identifying and attacking targets without the need for a human controller.
The use of this technology for warfare has sparked a controversial international debate that questions the principles of law and ethics. Currently, there is no legislation in this regard, although the United Kingdom and the USA have highlighted the need for human supervision or judgment in operations involving lethality. The United Nations is also working to regulate this new scenario and ensure that the digital future is secure for all.
It seems inevitable that lethal autonomous weapons systems will be built, so it is necessary to establish a formal consensus on the limits of their use and stop this new ‘Terminator’ that threatens our peace and security. As Stephen Hawking once said, “the development of full artificial intelligence could spell the end of the human race”.