Opinions

LAHIRI: Artificial intelligence stands as looming military threat


While the use of artificial intelligence (AI) in areas like healthcare, self-driving cars and surveillance might stir unrest among concerned members of the public, its increased use in the military should be deeply unsettling for anyone with the capacity to anticipate the long-term consequences of AI-assisted warfare.

Chinese strategists see the future of warfare as “intelligentized,” and China has transformed its military accordingly, according to the Center for a New American Security. Similarly, the United States’ Joint Artificial Intelligence Center, an organ of the Department of Defense, is increasingly determined to adopt and integrate AI militarily, with help from a $268 million 2019 budget request.

Across the world, military AI is proliferating both vertically and horizontally as states struggle to build up defenses against one another in an escalating arms race akin to the nuclear arms race of the Cold War era. But while nuclear weapons posed their own set of ethical challenges, the development of military AI opens the floodgates to new questions about ethical warfare.

The possibility of autonomous weapons systems poses a serious threat to humanity: These weapons could erase any semblance of accountability for the permanent altering, ruining and taking of human lives in a way that no other kind of weapon can.

The widespread use of machine guns in the First World War began a gradual erosion of accountability for the harm people inflict on one another. The use of drones in more recent years has had horrific effects on the psychology of accountability and anonymity.

A 2014 Government Accountability Office report regarding drones found that “Having the sense of inflicting danger on others while not being in danger oneself could have psychological ramifications on operators that are not yet well understood.” 

Though fully autonomous weapons would seem to eliminate the human component of killing altogether, they are created, programmed and implemented by humans, who ultimately carry the onus of responsibility for the weapons’ actions.

Beyond the psychological implications, the ethics of AI-controlled military technology are especially murky with regard to legal culpability. Weapons acting unilaterally may inflict damage beyond the scope of lawful warfare, such as attacking people fleeing disabled aircraft, which Protocol I of the Geneva Conventions prohibits, and may therefore commit war crimes that must be prosecuted under international law.

In these instances, it is not clear who — or what — is criminally liable. Bonnie Docherty, senior researcher in the Arms Division at Human Rights Watch, said of the matter: “No accountability means no deterrence of future crimes, no retribution for victims, no social condemnation of the responsible party.”

The personnel involved in making and implementing AI-controlled weapons could be tried and found guilty of war crimes due to negligence, but this would hardly deliver justice in a meaningful way. Properly addressing war crimes committed at the hands of AI would require comprehensive reform of, or amendments to, existing international law on warfare, as well as a redefinition of which actors are to be held responsible in such cases.

Our fixation on AI is evident in the success of movies, shows and literature based on science fiction. In assessing the future risks of AI, it might not be too far off to wonder whether militaries might use this technology to help soldiers justify the killing of humans, as in the "Black Mirror" episode "Men Against Fire," or whether autonomous weapons will ever become self-aware.

For now, AI's military uses largely involve data retrieval, cybersecurity, logistics, combat simulation and threat monitoring. Various countries, including China and Russia, have robust plans to fortify their AI technology.

In 2017, China released a plan to become the global leader in AI development by 2030. Such concentrated and widespread efforts in AI development will inevitably force humans to confront the aforementioned implications of autonomous weapons.

AI experts have already urged international institutions like the United Nations to address this issue, and 22 countries have called for a multilateral ban on autonomous weapons. Yet the various committees and groups established to regulate military AI have struggled to make cohesive decisions about the categorization, legality and morality of these weapons.

The only thing everyone can agree on is that AI can and will revolutionize the nature of warfare, possibly to the detriment of society. At this point, there is no going back.

 Anuska Lahiri is a School of Arts and Sciences junior majoring in political science. Her column, “Ethical Questions,” runs on alternate Mondays.


*Columns, cartoons and letters do not necessarily reflect the views of the Targum Publishing Company or its staff.

YOUR VOICE | The Daily Targum welcomes submissions from all readers. Due to space limitations in our print newspaper, letters to the editor must not exceed 500 words. Guest columns and commentaries must be between 700 and 850 words. All authors must include their name, phone number, class year and college affiliation or department to be considered for publication. Please submit via email to [email protected] by 4 p.m. to be considered for the following day’s publication.

