Anthropic says it ‘cannot in good conscience’ allow Pentagon to remove AI checks

By ShubkaAi
February 27, 2026


Anthropic said Thursday it “cannot in good conscience” comply with a demand from the Pentagon to remove safety precautions from its artificial intelligence model and grant the US military unfettered access to its AI capabilities.

The Department of Defense had threatened to cancel a $200m contract and deem Anthropic a “supply chain risk”, a designation with serious financial implications, if the company did not comply with the request by Friday.

Chief executive Dario Amodei said in a statement that the threats from the defense secretary, Pete Hegseth, would not change the company’s position, and that he hoped Hegseth would “reconsider”.

“Our strong preference is to continue to serve the Department and our warfighters – with our two requested safeguards in place,” he said. “We remain ready to continue our work to support the national security of the United States.”

At the core of the standoff between the Department of Defense and Anthropic is a disagreement over how the AI company will permit its product, Claude, to be used. The Pentagon has demanded that Anthropic turn off safety guardrails and allow any lawful use of Claude, while Anthropic has pushed back against allowing Claude to be used for mass domestic surveillance or in autonomous weapons systems that can kill people without human input.

After months of dispute and pressure from the government, Hegseth reportedly gave Amodei until Friday evening to agree to the Pentagon’s demands or face punitive action.

Whether Anthropic would concede was seen as a high-profile test both of its claim to be the most safety-conscious of the major AI firms and of whether any part of the AI industry would push back against government desires to use the technology for controversial, potentially lethal purposes.

In his statement, Amodei said using AI for autonomous weapons and mass domestic surveillance is “simply outside the bounds of what today’s technology can safely and reliably do”.

The Department of Defense has handed a number of lucrative deals to tech firms in recent years to build or integrate AI technology into US military systems. In July of last year, Anthropic was one of several big tech companies, including Google and OpenAI, to receive contracts worth up to $200m from the DoD. What set Anthropic apart, and has intensified its conflict with the Pentagon, is that until this week Claude was the only AI model approved for use in the military’s classified systems. (Elon Musk’s xAI reached an agreement earlier this week for its model to also be used in classified systems.)

Anthropic’s technology has reportedly already been used for military applications, including the US capture of Venezuelan leader Nicolás Maduro last month, highlighting the growing use of AI in conflict. The growth of autonomous weapons technology, such as drones that can carry out operations even after their connection to a human operator has been severed, has also intensified longstanding concerns around how AI will be used in life-and-death situations.

Anthropic and Amodei have long been some of the industry’s most prominent advocates for regulation and safety precautions in developing AI, even as they have struck deals with the military and this week watered down a core policy to not release new AI models without first guaranteeing their safety. Amodei’s calls for regulation, and history of political opposition to Donald Trump, have run up against Hegseth’s vows to remove “wokeness” from the armed forces and pursue aggressive military policies.

If Hegseth follows through with his threat to categorize Anthropic as a supply chain risk, it would be a huge blow to the AI company. The designation, which is more commonly intended to be used for foreign adversaries, would prohibit other vendors that do business with the US military from using Anthropic’s products.



