Lethal Autonomous Weapons Systems (LAWS) are one of the most pressing and controversial issues in international security today. They raise legal, security, moral, and ethical concerns about the future of warfare.
While earlier weapons systems—such as landmines—could act without further human input, they did not choose their targets. Modern LAWS are far more advanced. Using artificial intelligence and new technologies, these systems can actively select and engage targets across land, air, and sea. Militaries around the world are investing heavily in these technologies, sparking global debate about whether there should be limits on the role of machines in decisions about life and death.
The debate centers on a fundamental question: Should machines be allowed to decide when to take a human life?
Supporters of LAWS argue that they could make warfare more precise. Machines may react faster than humans, follow orders without emotion, and potentially reduce civilian casualties by adhering more strictly to the laws of war.
Critics warn that LAWS create serious dangers: they may make mistakes due to flawed programming or biased algorithms, blur responsibility when things go wrong, and remove essential human judgment from decisions about lethal force.
The United Nations has been addressing LAWS since 2013, when the issue was first raised by the UN Special Rapporteur on extrajudicial executions. Since then, groups of experts and Member States have met regularly to discuss possible regulation. Most states agree that international humanitarian law (IHL) applies to LAWS, and that no autonomous system should ever be used if it cannot comply with these rules. The UN Secretary-General, António Guterres, has gone further, calling LAWS “politically unacceptable and morally repugnant” and urging governments to agree on a legally binding treaty to regulate them by 2026.
Yet deep divisions remain. Countries disagree on the definition of key terms such as “autonomy,” “meaningful human control,” and even what qualifies as a LAWS. Some states want a total ban on fully autonomous weapons, especially those that can target humans without oversight. Others prefer ensuring that human judgment is maintained at critical points, such as activating a system, selecting targets, and authorizing strikes.
Many states and NGOs argue that “meaningful human control” must be preserved so that responsibility remains firmly tied to human decision-makers. Courts rely on proving intent or recklessness in wrongful killings, but if a machine makes an unpredictable decision, it becomes nearly impossible to prove human intent. This opens the door to impunity: serious harm could occur without anyone being held legally accountable. If civilians see autonomous systems killing without justice or accountability, it could erode trust in international law and norms.
As the General Assembly’s First Committee considers this issue, delegates will face difficult questions. Should the international community ban fully autonomous weapons outright? Should it regulate them more strictly under existing law? Or should states retain flexibility to pursue these technologies while limiting their misuse?