AI Shubka
Anthropic boss rejects Pentagon demand to drop AI safeguards

ShubkaAi by ShubkaAi
February 27, 2026
in AI & Future Tech, AI breakthroughs (GPT updates, generative models), Best AI tools for creators, Robotics & automation, Tech forecasts


At issue for Anthropic is the potential use of its AI tools like Claude for two purposes: “Mass domestic surveillance” and “Fully autonomous weapons.”

Anthropic chief executive Dario Amodei said "such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now."

The Department of War is a secondary name for the Defense Department under an executive order signed by US President Donald Trump in September.

“Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider,” Amodei said.

An Anthropic spokeswoman added on Thursday that while the company received updated wording from the DoD for its contract on Wednesday night, it represented “virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons.”

“New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will,” she said. “Despite [the Department of War’s] recent public statements, these narrow safeguards have been the crux of our negotiations for months.”

A representative of the Defense Department could not be reached for comment.

Before Amodei’s comments, Sean Parnell, a spokesman for the Pentagon, wrote on X, “This narrative is fake and being peddled by leftists in the media”, referring to claims the DoD wanted to use Anthropic for mass surveillance or fully autonomous weapons.

“Here’s what we’re asking: Allow the Pentagon to use Anthropic’s model for all lawful purposes”, Parnell added.

A Pentagon official previously told the BBC that should Anthropic not comply, Defence Secretary Pete Hegseth would ensure the Defense Production Act was invoked against the company.

The act essentially gives a US president the authority to deem a given company or its product so important that the government can require it to meet defence needs.

But Hegseth also threatened to label Anthropic a “supply chain risk”, meaning the company would be designated as not secure enough for government use.

A former DoD official who asked not to be named told the BBC on Thursday that Hegseth’s grounds for either measure were “extremely flimsy”.

A person familiar with the negotiations who asked not to be named said tensions between Anthropic and the Pentagon “go back several months,” before it was publicly known that Claude was used as part of a US operation to seize Venezuelan President Nicolás Maduro.

While Amodei did not specify exactly how Anthropic could be or had been used by the DoD for mass surveillance or fully autonomous weapons, he wrote in a company blog post that AI can be used to “assemble scattered, individually innocuous data into a comprehensive picture of any person’s life – automatically and at massive scale.”

“We support the use of AI for lawful foreign intelligence and counterintelligence missions,” Amodei said. “But using these systems for mass domestic surveillance is incompatible with democratic values.”

As for AI being used in weapons, Amodei said even today’s most advanced and capable AI systems “are simply not reliable enough to power fully autonomous weapons.”

“We will not knowingly provide a product that puts America’s warfighters and civilians at risk,” Amodei said. “Without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. They need to be deployed with proper guardrails, which don’t exist today.”

He added that Anthropic had “offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer.”

Hegseth had demanded the Tuesday meeting with Amodei, a source previously told the BBC.



© 2026 aishubka - Smarter Business. & Automated Future. by aishubka.