The introduction of autonomous weapon systems, which have gained widespread use and coverage in the war in Ukraine, is considered by many to be the third revolution in warfare, after gunpowder and the atomic bomb. As Artificial Intelligence (AI) technology continues to advance at breakneck speed, it is reshaping our world in manifold ways, and the battlefield is no exception.
According to a report by Human Rights Watch, the number of countries developing autonomous weapons has risen significantly in recent years, with 30 countries working on such systems as of November 2020.
Drones have become an integral part of modern warfare, providing valuable intelligence and support to troops on the ground. However, the rapid development of autonomous drone technology, and the more recent introduction of “loitering munitions”, also known as suicide drones, alongside systems such as the Bayraktar TB2 and REX-1 deployed in the recent Armenia-Azerbaijan conflict and the war in Ukraine, have raised serious questions about the morality and safety of their application.
Brad Smith, President of Microsoft, has called for more in-depth discussion on the topic of autonomous weapons, stating that “the question of autonomous weapons requires a much deeper conversation than is happening today”. While drones currently in deployment still require human input for control and decision-making, there has been a growing push by international bodies such as NATO to develop fully autonomous drones that can operate independently in combat situations.
Autonomous drones have a significant impact on reconnaissance and logistics. They are used to deliver supplies and equipment to troops in the field, reducing the need for human transport vehicles and potentially saving lives in the process. Equipped with advanced sensors and imaging technology, the drones also provide real-time intelligence to military commanders, allowing them to make more informed decisions about the deployment of troops and resources. The most controversial application of autonomous drone technology, however, is in combat.
Stuart Russell, Professor of Electrical Engineering and Computer Science at UC Berkeley, has warned that “autonomous weapons aren’t merely a problem of accountability; they’re a problem of avoiding catastrophic outcomes”. Although it can be argued that they could reduce the number of human casualties, at the current state of AI technology they run the risk of errors and malfunctions, potentially causing civilian casualties or unintended damage to infrastructure. A study by the RAND Corporation found that in scenarios where AI-controlled machines make decisions on the battlefield, human casualties could increase by up to 150%. And because much of what occurs inside an AI system is a black box, as with self-driving cars, another question arises: who would be responsible in the event of unintended harm?
Apart from ethical issues, certain challenges of incorporating AI must be acknowledged. The first is access to data, which can be difficult to obtain in an industry that prefers to classify information and restrict who can see it. Without a sufficient dataset, an effective AI system cannot be built. Moreover, a training model would in principle need a practically unlimited amount of data to be completely accurate, which is unachievable with current technology given limited computing power, time, and resources.
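To make the point concrete, the toy experiment below is a minimal sketch, assuming nothing more than scikit-learn’s small open “digits” dataset (not military data): the same classifier is trained on progressively larger slices of data, and accuracy climbs quickly at first, then plateaus well short of perfection no matter how much data is added.

```python
# A minimal sketch of the data-sufficiency problem: accuracy improves with
# more training data but shows diminishing returns and never reaches 100%.
# Uses scikit-learn's small "digits" dataset purely for illustration.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

for n in [50, 100, 200, 400, 800, len(X_train)]:
    # Train on a progressively larger slice of the training set.
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train[:n], y_train[:n])
    acc = model.score(X_test, y_test)
    print(f"{n:5d} training samples -> test accuracy {acc:.3f}")
```

The shape of that curve, steep gains early and shrinking returns later, is exactly why a classified, access-restricted data environment is so costly for AI development.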
As Andrew Ng, former Vice President and Chief Scientist at Baidu, notes, “data is the new oil”. But training an AI system to recognise images of every possible weapon system in existence would involve thousands of categories, to say nothing of factors such as lighting conditions, oblique angles, or images partially obscured by obstacles. Unless such images are represented in the training data, the model may misidentify what it sees and struggle with the sheer dimensionality of the problem. According to a report by Wired, IBM’s Watson weather-prediction tool struggled with similar issues, resulting in incorrect predictions.
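One common partial mitigation is data augmentation: synthetically varying lighting, angle, and occlusion at training time so the model encounters conditions the raw dataset lacks. The sketch below is illustrative only, built on the open-source torchvision library with arbitrary parameter values of our choosing, not any fielded configuration.

```python
# Illustrative data-augmentation pipeline: simulates varied lighting
# (ColorJitter), viewing angle (RandomRotation), framing and distance
# (RandomResizedCrop), and partial occlusion (RandomErasing).
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.5, contrast=0.5),  # lighting changes
    transforms.RandomRotation(degrees=30),                 # off-axis views
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),   # distance/framing
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.5),                        # partial occlusion
])

# Example: apply the pipeline to a dummy image.
img = Image.new("RGB", (256, 256), color="gray")
augmented = augment(img)
print(augmented.shape)  # torch.Size([3, 224, 224])
```

Augmentation broadens coverage of known variations, but it cannot conjure genuinely novel cases: a class of weapon absent from the data remains invisible to the model.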
Despite these concerns, abandoning advanced AI technology in weapon systems would be akin to abandoning electricity during the Industrial Revolution. As countries continue to seek a technological edge over their adversaries, and as a form of deterrence, the use of autonomous drones in combat is likely to persist.
As we witness the rise of AI technology, it falls to policymakers to ensure that the use of autonomous drones and other weapon systems remains responsible and ethical, with human accountability at the forefront. Ultimately, it is not just about winning the war, but about upholding the integrity and dignity of humanity.
Access Partnership is closely monitoring AI and emerging tech developments. For more information regarding such developments or engagements, please contact Shayna Lee at [email protected].