The Legality of Autonomous Weapons Systems under International Humanitarian Law
Abstract
Technological developments constitute an integral part of modern reality. Present and future technological advances have a potentially far-reaching impact on our standard of living. Within a few decades, improved algorithms may replace humans on the battlefield during armed conflict, making a human presence less of a military necessity. This seemingly irreversible process of technological advance has made a discussion of the legal, ethical, and political challenges posed by autonomous weapons systems unavoidable. While artificial intelligence can undeniably reduce civilian casualties, it is also highly plausible that technologies designed for civilian use could be transformed into lethal weapons over which humans may lose control on the battlefield.
This Article reviews key issues raised by autonomous weapons systems under international humanitarian law. An analysis of their advantages and disadvantages indicates that weapons unlimited in time and space are per se illegal; that fully autonomous weapons systems should be banned; that the scope of international humanitarian law and human rights law should be expanded to regulate autonomous weapons systems without excluding human control; that the rights and obligations of States should be clearly defined; and that the accountability gap should be closed.
License
Copyright (c) 2020 Levan Alexidze Journal of International Law (LAJIL)
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.