Anthropic refuses Pentagon’s demand in AI safeguards dispute

Anthropic says it will not agree to Pentagon demands to remove safeguards on its artificial intelligence systems, despite threats to deem the company a "supply chain risk" and remove it from Department of Defense systems, putting a multi-million-dollar contract at risk. The dispute stems from the AI startup's refusal to lift safeguards that prevent its technology from being used in the United States for autonomous weapons targeting and surveillance.

Anthropic CEO Dario Amodei stressed in a statement Thursday that the company opposes the use of its AI models for mass domestic surveillance, and said that "cutting-edge artificial intelligence systems are simply not reliable enough to power fully autonomous weapons."

Earlier in the day, Pentagon spokesman Sean Parnell said on X that the department has no interest in using AI to conduct mass surveillance of Americans, nor in developing autonomous weapons that can operate without human involvement. The department's request, he said, was to "allow the Pentagon to use the models for all lawful purposes." Parnell said the company has until 5:01 p.m. ET on Friday to make a decision.

Anthropic, backed by Google and Amazon, has contracts worth up to $200 million with the department. More than 200 Google and OpenAI employees supported its position in an open letter. None of the companies immediately responded to requests for comment.

