According to the United Nations Office for Disarmament Affairs, there is no agreed definition of Lethal Autonomous Weapons Systems (LAWS). However, they can be broadly characterized as weapons systems that select and apply force to targets and enemy combatants without direct human intervention.
Although there is agreement that International Humanitarian Law (IHL) applies to the regulation of these weapon systems, there is also broad agreement that they pose unique risks and challenges that raise profound ethical and legal questions. Some of these risks and challenges, such as loss of control of the system, the risk of proliferation, and the risk of acquisition by terrorist groups, could result in catastrophic threats to international peace and security.
Given that these technologies are increasingly being developed and tested, and might soon be deployed, there is an urgent need for the international community not only to commit to upholding and strengthening compliance with IHL, but also to develop effective, multilaterally agreed instruments that limit the development, deployment, and use of LAWS and that prohibit those that cannot be used in compliance with IHL.
Under IHL, the lawful use of autonomous weapon systems, as broadly defined, requires that combatants retain a level of human control such that a human operator can cancel an attack that would violate IHL. Human combatants must be held accountable if these obligations are violated; consequently, such legal obligations cannot be transferred to a machine, computer program, or weapon system.
In 2019, United Nations Secretary-General António Guterres stated that "machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law."
He has called on Member States to agree, by 2026, on a legally binding instrument that will prohibit lethal autonomous weapon systems that function without any human control or oversight.