Anthropic says it will not agree to Pentagon demands to remove safeguards on its artificial intelligence systems, despite threats to deem the company a "supply chain risk" and remove it from the Department of Defense's systems, putting a multi-million-dollar contract at risk. The dispute stems from the AI startup's refusal to lift safeguards that prevent its technology from being used in the United States to autonomously target weapons and conduct surveillance.

Anthropic CEO Dario Amodei stressed in a statement Thursday that the company opposes the use of its AI models for mass domestic surveillance. He also said that "cutting-edge artificial intelligence systems are simply not reliable enough to power fully autonomous weapons."

Earlier in the day, Pentagon spokesman Sean Parnell said on X that the department has no interest in using AI to conduct mass surveillance of Americans, nor does it want to use AI to develop autonomous weapons that can operate without human involvement. Its request, he said, was to "allow the Pentagon to use the models for all lawful purposes." Parnell said the company has until 5:01 p.m. ET on Friday to make a decision.

Anthropic, backed by Google and Amazon, has contracts worth up to $200 million with the department. More than 200 Google and OpenAI employees supported its position in an open letter. None of the companies immediately responded to requests for comment.