
Overview
Remote warfare is far from a new concept in human history. Over the centuries, militaries have developed successive technologies to reduce their own casualties while inflicting maximum damage on the adversary. With the advent of Artificial Intelligence (AI)-based weapon systems, such as autonomous weapons and drones, the landscape of remote warfare has been transformed.
Proponents argue that these AI-powered systems can reduce civilian casualties through enhanced precision. However, recent conflicts, such as those between Russia and Ukraine and between Iran and Israel, suggest that this precision remains largely a myth. The use of AI-based weaponry has resulted in significant civilian harm and destruction of civilian infrastructure, raising serious concerns under International Humanitarian Law (IHL).
The key principles of IHL, namely distinction, proportionality, and necessity, form the bedrock of civilian protection during armed conflict. However, it is unclear whether AI-based weapon systems can consistently meet these requirements. Furthermore, such systems risk misuse by both state and non-state actors, potentially amplifying the dangers to civilians.
This discussion will cover:
1. The evolution and current state of AI-based weapon systems.
2. The challenges these systems pose to the application and enforcement of IHL.
3. Real-world examples of AI-based weapons in use, highlighting their effects on civilian protection.
4. The future of these technologies and how the legal frameworks must adapt to mitigate their risks.

