Pentagon’s Chief Tech Officer Clashes with Anthropic Over Autonomous Warfare Development

The US Department of Defense’s chief technology officer, Emil Michael, has revealed a dispute with artificial intelligence company Anthropic over the use of its technology in autonomous warfare.

Disagreement Over Chatbot Use

According to Michael, the disagreement arose from Anthropic’s restrictions on the use of its chatbot, Claude, in fully autonomous weapons.

Michael stated that the Pentagon requires a reliable partner that can provide AI technology for autonomous systems, such as swarms of armed drones and underwater vehicles. The Pentagon, however, deemed Anthropic's restrictions on using Claude for mass surveillance and fully autonomous weapons too limiting.

Dispute and Consequences

The dispute began when Michael took over the military’s AI portfolio in August and started scrutinizing Anthropic’s contracts. He questioned the company’s terms of use, which he deemed too restrictive, and sought to negotiate exceptions for specific use cases. However, Anthropic resisted, arguing that its technology was not reliable enough to power fully autonomous weapons.

The Pentagon ultimately designated Anthropic as a supply chain risk, cutting off its defense work, and ordered federal agencies to stop using Claude. Anthropic has vowed to sue over the designation.

Autonomous Warfare and AI Development

Michael revealed that the Pentagon is developing procedures for enabling different levels of autonomy in warfare, depending on the risk posed. He cited a hypothetical scenario in which the US would have only 90 seconds to respond to a Chinese hypersonic missile, and an autonomous counterattack would be necessary.

Anthropic’s Response and Implications

Anthropic has disputed parts of Michael's account of the talks, emphasizing that the protections it sought were narrow and did not affect existing uses of Claude. The company has also stated that it understands that the Department of Defense, not private companies, makes military decisions, and that it has never raised objections to particular military operations.

The dispute highlights the challenges of developing and deploying AI technology in autonomous systems, particularly in the context of national security. As the use of AI in warfare continues to evolve, the need for clear guidelines and regulations on its development and deployment will become increasingly important.

Broader Implications

The Pentagon’s designation of Anthropic as a supply chain risk has significant implications for the company’s business partnerships with other military contractors. The dispute is also likely to have broader implications for the development and deployment of AI technology in autonomous systems, both in the military and civilian contexts.

Related Development

In a related development, President Trump has ordered federal agencies to phase out the use of Anthropic’s technology, giving the Pentagon six months to do so. The move has sparked controversy, with some arguing that it will hinder the development of AI technology in the US.

Conclusion

The dispute between the Pentagon and Anthropic illustrates the friction that can arise when a commercial AI developer's usage policies meet military requirements. As AI adoption in defense grows, similar conflicts are likely to recur, underscoring the need for clear rules governing how such technology is built and fielded.
