General | 14 Feb 2026, 03:53 PM

US military used Anthropic’s AI model Claude in Venezuela raid, report says

Source: The Guardian

Did the Pentagon use Anthropic’s Claude AI to capture Nicolas Maduro?

That’s what a report in the Wall Street Journal has claimed. The United States captured the long-time Venezuelan leader in a lightning raid on Caracas in early January.

The outlet reported that Claude was deployed via Anthropic’s partnership with Palantir, which is used by the Defence Department and federal law enforcement.

But what do we know? How did the Pentagon use Claude to capture Maduro?

The Wall Street Journal quoted people in the know as saying that US military forces used Anthropic’s Claude AI model during the classified operation in Venezuela that led to Maduro’s capture and transfer to the US on federal charges.

The exact role Claude played in the operation remains under wraps. The AI was likely used to analyse intelligence, synthesise data, offer real-time decision-making support, or interpret satellite data and imagery. However, the specifics of Claude’s use in the raid, which left dozens of Venezuelans and Cubans dead, remain classified. Claude can be used for everything from summarising documents to controlling autonomous drones, the outlet noted.

This appears to be one of the first publicly known cases of a major AI model being used in a classified Pentagon military operation. Axios quoted sources as saying that Claude was used during the operation itself, not just in the lead-up. Defence Secretary Pete Hegseth has pushed for American forces to use AI to stay ahead of China. The US military has previously used Claude to analyse satellite imagery and intelligence.

The Pentagon is pushing top AI companies, including OpenAI and Anthropic, to make their artificial intelligence tools available on classified networks without many of the standard restrictions that the firms apply to users.

Many AI companies are building custom tools for the US military, most of which are available only on unclassified networks typically used for military administration. Anthropic is the only one that is available in classified settings through third parties, but the government is still bound by the company’s usage policies.

The Pentagon is “moving to deploy frontier AI capabilities across all classification levels,” an official who requested anonymity has said.

However, the company is now facing pushback. A senior Trump administration official said the Pentagon will re-examine its partnership with Anthropic after the news broke.

“Anthropic asked whether their software was used for the raid to capture Maduro, which caused real concerns across the Department of War, indicating that they might not approve if it was,” the official said. “Any company that would jeopardise the operational success of our warfighters in the field is one we need to re-evaluate our partnership with going forward.”

The Wall Street Journal previously reported that such concerns from the company caused officials at the Pentagon to consider cancelling a $200 million (around Rs 1,811 crore) contract with the firm.

In January, Hegseth said the Defence Department would not “employ AI models that won’t allow you to fight wars,” referring to discussions administration officials have had with Anthropic, The Wall Street Journal reported. In December, Hegseth said, “The future of American warfare is here, and it’s spelled AI.” “As technologies advance, so do our adversaries,” he added. “But here at the War Department, we are not sitting idly by.”

The usage policies of Anthropic, which raised $30 billion (around Rs 2.72 lakh crore) in its latest funding round and is now valued at $380 billion (around Rs 34.41 lakh crore), forbid using Claude to support violence, design weapons or carry out surveillance.

A source in the know told Fox News Digital that Anthropic is aware of the use of its AI in both classified and unclassified settings. The company remains confident that all usage has complied with its usage policy, as well as its partners’ own compliance policies, the source added.

An Anthropic spokesperson told Axios, “Anthropic didn’t make any such call to the Department of War.”

“We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise,” an Anthropic spokesperson told Axios.

“Any use of Claude — whether in the private sector or across government — is required to comply with our Usage Policies, which govern how Claude can be deployed. We work closely with our partners to ensure compliance.”

Claude AI is made by Anthropic and competes with OpenAI’s ChatGPT and Google’s Gemini. The firm was founded in 2021 by former OpenAI executives, including CEO Dario Amodei. The company, backed by Google and Amazon, has rapidly built its revenue base — it says its current run-rate revenue is $14 billion (around Rs 1.27 lakh crore).

The company has also taken a different approach to AI regulation. While tech companies have pushed for less regulation, Anthropic has drawn up plans to donate $20 million (around Rs 181 crore) to back US political candidates who support regulating the AI industry.
