
Last Thursday, the U.S. State Department outlined a new vision for developing, testing, and validating military systems, including weapons, that use artificial intelligence.
The Political Declaration on the Responsible Military Use of AI and Autonomy represents an attempt by the United States to guide the development of military AI at a technologically critical time. The document is not legally binding on the U.S. military, but the hope is that allies will agree to its principles, setting a global standard for building artificial intelligence systems responsibly.
Among other things, the declaration states that military AI needs to be developed in accordance with international law, that countries should be transparent about the principles behind their technology, and that they should implement high standards for verifying the performance of AI systems. It also says that decisions about using nuclear weapons should rest with humans alone.
When it comes to autonomous weapons systems, U.S. military leaders often reassure people that humans will be “involved” in decisions about the use of lethal force. But the Department of Defense’s official policy, first issued in 2012 and updated this year, doesn’t require that.
Attempts to ban autonomous weapons internationally have so far failed. Campaign groups such as the International Committee of the Red Cross and Stop Killer Robots have pushed for a deal at the United Nations, but some major powers, including the US, Russia, Israel, South Korea, and Australia, have proven unwilling to commit.
One reason is that many inside the Pentagon see the growing use of artificial intelligence in the military, including in non-weapon systems, as critical and inevitable. They argue that a ban would slow U.S. progress and hamper its technology relative to rivals such as China and Russia. The war in Ukraine has shown that autonomy, in the form of cheap, disposable drones that are becoming more capable thanks to machine-learning algorithms that help them sense and act, can provide an edge in a conflict.
Earlier this month, I wrote about ex-Google CEO Eric Schmidt’s personal mission to bolster the Pentagon’s artificial intelligence to ensure the U.S. doesn’t fall behind China. That story emerged from months of reporting on efforts to adopt AI in critical military systems and on how the technology is becoming central to U.S. military strategy, even though many of the technologies involved remain nascent and untested in any crisis.
Lauren Kahn, a fellow at the Council on Foreign Relations, welcomed the new U.S. statement as a potential building block for more responsible use of military artificial intelligence around the world.
Some nations already have weapons that can operate without direct human control in limited circumstances, such as missile defense systems that need to react at superhuman speeds to function. Greater use of AI could mean systems acting autonomously in more situations, such as when drones operate outside communication range or in swarms too complex for anyone to manage.
Some claims that weapons need artificial intelligence, especially from companies developing the technology, still seem far-fetched. There have been reports of fully autonomous weapons being used in recent conflicts and of artificial intelligence assisting in targeted military strikes, but these have not been verified, and in practice many soldiers may be wary of systems that rely on algorithms that are far from foolproof.
If autonomous weapons cannot be banned, however, their development will continue. That makes it crucial to ensure that the AI involved behaves as expected, even if the engineering needed to fully realize intentions like those in the new U.S. declaration has yet to be perfected.