Technology Shout

Why Anthropic is beefing with the Pentagon

00:00 Speaker A

Well, tensions between Silicon Valley and Washington have now entered a new phase. The dispute between the Pentagon and Anthropic has cast a spotlight on how, and whether, artificial intelligence tools should be used in areas such as surveillance and autonomous weapons. For more, we're joined by Axios' tech policy reporter Maria Curry. Maria, nice to have you. Why don't we start at a high level, because not everyone is following this story closely. What is this story about? What's at the heart of this debate?

00:46 Speaker B

This is what is at the heart of the current debate. The Pentagon essentially wants to be able to use Anthropic’s AI model Claude however it sees fit. It doesn’t want to be in a position where every time it has to conduct an operation or do something that involves national security, it has to check with a company and make sure it’s following that company’s specific safeguards.

01:21 Speaker B

The problem is that Anthropic's usage policy has two very clear red lines it doesn't want crossed, not even by the Pentagon: mass surveillance of Americans and autonomous weapons. The Pentagon rejects that. It has said, we should be able to use your technology the way we want, and it set a Friday deadline for Anthropic to drop those restrictions.

01:50 Speaker A

You know, Maria, when I read your story, and feel free to say, Josh, you've really misunderstood this, you simply don't understand, my feelings won't be hurt. But when I read it, Maria, I found myself sympathizing with Hegseth at the War Department, because I felt, if you choose, as a technology company, to work with the War Department, listen, that's a choice you've made.

02:26 Speaker A

It seems to me that you have to say, yes, we allow all legitimate uses of our technology, because I don't understand, Maria, how else it can work. Like, if Hegseth is going to send SEAL Team Six into a war zone, do those guys, before they jump out of the C-130, have to check to make sure they're complying with Anthropic's software agreement and terms of use?

03:00 Speaker A

That seems impossible to me. Or would you say, no, I've misread this?

03:07 Speaker B

So I thought a little background would be helpful here. We know Claude was used during the attack on Baghdadi, right? This was a very successful operation by the Pentagon, and Claude was used without any problems. Of course, the public has no idea exactly how the technology was used in a classified environment, but we can surmise that no mass surveillance was required and no autonomous weapons were required.

03:41 Speaker B

You know, Anthropic's usage policy was not violated, and yet the Pentagon was able to conduct a very successful operation. The second thing here, about the "all legitimate purposes" standard that the Pentagon is trying to impose on not just Anthropic but all AI labs, is that the law right now doesn't necessarily account for every application of AI. So, for example, in the case of mass surveillance, it is legal today for the government to collect publicly available data.

04:23 Speaker B

For example, social media posts, concealed-carry permits, whether you attend a rally or protest, voter registration rolls. The government can legally collect all of this data. You can then imagine how injecting artificial intelligence could enable real-time, continuous analysis of that data to specifically target and monitor people. That's the thing that is alarming, you know, not just to Anthropic but to civil liberties groups and people on Capitol Hill.

05:07 Speaker A

Maria, if the Pentagon does decide to blacklist Anthropic, how unusual a decision would that be?

05:16 Speaker B

Yes. Well, there's another point that builds on the previous one. The Pentagon has not explicitly said it wants to conduct mass surveillance or use the technology for autonomous weapons. It frames this as simply a matter of being able to do business with the Pentagon, something it considers vital in the tech race with China. So that's the argument they would make. Now, as for how unusual it would be to label a U.S. company a supply chain risk.

05:58 Speaker B

That is a very severe punishment, one usually reserved for foreign adversaries, such as companies from China. So, you know, Anthropic's $200 million contract with the Pentagon is a drop in the bucket compared to its $380 billion valuation. But blacklisting it would mean any company doing business with the Pentagon would have to prove it no longer does business with Anthropic, which could be a much bigger blow to the company.

06:40 Speaker B

We reported yesterday that the Pentagon has begun the process for this supply chain risk designation. It contacted Boeing and Lockheed Martin and asked them to evaluate what their business with Anthropic looks like.
