Modern Warfare: US publishes declaration on responsible use of AI in military operations

Feb 20, 2023 - 17:30

The US State Department published a “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy” on Thursday, urging states developing military AI to use the technology in military operations in an ethical and responsible manner.

The paper outlines 12 recommended practices for developing military AI capabilities while emphasising human responsibility.

Declaration a result of the recent summit at The Hague
The announcement coincides with the United States’ participation in an international summit on the appropriate use of military AI in The Hague, Netherlands. According to Reuters, the summit was “the first of its type.” “We ask all parties to join us in adopting international rules as it applies to military research and use of AI,” said US Assistant Secretary of State for Arms Control Bonnie Jenkins at the meeting.

In its preamble, the US declaration notes that a growing number of countries are developing military AI capabilities, which may include the use of autonomous systems. It acknowledges concerns about the potential hazards of deploying such technology, particularly with regard to compliance with international humanitarian law.

Military use of AI can and should be ethical, responsible, and enhance international security. The use of AI in armed conflict must be in accord with applicable international humanitarian law, including its fundamental principles. Military use of AI capabilities needs to be accountable, including through such use during military operations within a responsible human chain of command and control. A principled approach to the military use of AI should include careful consideration of risks and benefits, and it should also minimize unintended bias and accidents. States should take appropriate measures to ensure the responsible development, deployment, and use of their military AI capabilities, including those enabling autonomous systems.

Assigning accountability (and blame) seems to be the agenda
The document’s 12 best practices cover nuclear weapons safety, responsible system design, personnel training, and auditing techniques for military AI capabilities. The document also emphasises the importance of testing to verify the safety and efficacy of military AI capabilities, as well as the need to reduce unintended bias and accidents.

When it comes to autonomous systems, and nuclear weapons in particular, the text offers a few notable examples of keeping responsible humans in the chain of command: “States should preserve human control and engagement in all acts important to informing and carrying out sovereign choices about the use of nuclear weapons.”

It also addresses the issue of unintended behaviour in military systems, which has recently become a concern with consumer deep-learning systems: “States should design and engineer military AI capabilities so that they possess the ability to detect and avoid unintended consequences, as well as the ability to disengage or deactivate deployed systems that demonstrate unintended behaviour.”

The declaration does not specify what kinds of autonomous or AI-powered systems are covered by it, but because there is no common definition of “artificial intelligence,” it defines the term in a footnote. “For the purposes of this Declaration, artificial intelligence may be understood to refer to the ability of machines to perform tasks that would otherwise require human intelligence—for example, recognising patterns, learning from experience, drawing conclusions, making predictions, or taking action—whether digitally or as the smart software behind autonomous physical systems,” according to the document.

Wide adoption of the US’s declaration possible
On Thursday, almost 60 countries signed a “call to action” supporting the appropriate military use of AI. According to Reuters, human rights experts and academics expressed concern that the declaration is not legally binding and “failed to address concerns such as AI-guided drones, ‘slaughterbots’ that may murder without human participation, or the potential that an AI could exacerbate a military confrontation.”

The complete declaration document, prepared by the Bureau of Arms Control, Verification, and Compliance, is available on the US Department of State website.

