Using LLMs to distill rivals • The Register

By ShubkaAi
February 14, 2026
in AI & Future Tech, AI breakthroughs (GPT updates, generative models), Best AI tools for creators, Robotics & automation, Tech forecasts

Two of the world’s biggest AI companies, Google and OpenAI, both warned this week that competitors including China’s DeepSeek are probing their models to steal the underlying reasoning and replicate those capabilities in their own AI systems.

“This is coming from threat actors throughout the globe,” Google Threat Intelligence Group chief analyst John Hultquist told The Register, adding that the perpetrators are “private-sector companies.” He declined to name specific companies or countries involved in this type of intellectual property theft.

“Your model is really valuable IP, and if you can distill the logic behind it, there’s very real potential that you can replicate that technology – which is not inexpensive,” Hultquist said. “This is such an important technology, and the list of interested parties in replicating it are endless.”

Google calls this process of using prompts to clone its models “distillation attacks,” and in a Thursday report said one campaign used more than 100,000 prompts to “try to replicate Gemini’s reasoning ability in non-English target languages across a wide variety of tasks.”

American tech giants have spent billions of dollars training and developing their own LLMs. Abusing legitimate access to mature models like Gemini, and then using this information to train newer models, makes it significantly cheaper and easier for competitors to develop their own AI chatbots and systems.
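In machine-learning terms, "distillation" usually means training a smaller student model to imitate a larger teacher. The article doesn't describe any company's actual training pipeline, but the classic objective can be sketched in a few lines of plain Python: the student is penalized for diverging from the teacher's softened output distribution. The function names and the temperature value here are illustrative, not anything from Google or OpenAI.

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with a temperature; a higher temperature exposes more of
    the teacher's relative ranking of wrong answers ("dark knowledge")."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy of the student's softened distribution against the
    teacher's -- the core objective of classic knowledge distillation.
    It is minimized when the student exactly matches the teacher."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))
```

The API-level campaigns described in the article work one step removed from this: the attacker never sees logits, only prompt/response pairs (or reasoning traces), and trains on those instead. The objective is analogous, which is why access to a mature model's outputs is valuable training signal.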

Google says it detected this probe in real time and protected its internal reasoning traces. However, distillation appears to be yet another AI risk that is extremely difficult – if not impossible – to eliminate.

Distillation from Gemini models without permission violates Google’s terms of service, and Google can block accounts that do this, or even take users to court. While the company says it continues to develop better ways to detect and stop these attempts, the very nature of LLMs makes them susceptible to this kind of extraction: any model that answers queries leaks information about how it reasons.

Public-facing AI models are widely accessible, and enforcement against abusive accounts can turn into a game of whack-a-mole.
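Neither company says how its detection works, but the report's detail that one campaign issued over 100,000 prompts hints at the simplest class of signal: query volume far beyond ordinary use. Purely as a toy illustration (the function, threshold, and log format below are invented for this sketch), a first-pass detector might just flag accounts by volume:

```python
from collections import Counter

def flag_probing_accounts(prompt_log, volume_threshold=10_000):
    """Toy heuristic: flag accounts whose prompt volume looks like
    systematic extraction rather than ordinary use.

    prompt_log is a list of (account_id, prompt) pairs; the threshold
    is arbitrary and would be tuned (and combined with many other
    signals, e.g. prompt templating or coverage of task categories)
    in any real abuse-detection system.
    """
    counts = Counter(account for account, _ in prompt_log)
    return {acct for acct, n in counts.items() if n >= volume_threshold}
```

This also illustrates why enforcement turns into whack-a-mole: an attacker can stay under any per-account threshold by spreading the same campaign across many accounts, which pushes defenders toward correlating behavior across accounts rather than policing each one in isolation.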

And, as Hultquist warned, the risk from distillation attacks will spread as other companies develop their own models and train them on internal, sensitive data.

“We’re on the frontier when it comes to this, but as more organizations have models that they provide access to, it’s inevitable,” he said. “As this technology is adopted and developed by businesses like financial institutions, their intellectual property could also be targeted in this way.”

Meanwhile, OpenAI, in a Thursday memo [PDF] to the House Select Committee on China, blamed DeepSeek and other Chinese LLM providers and universities for copying ChatGPT and other US firms’ frontier models. It also noted some occasional activity from Russia, and warned illicit model distillation poses a risk to “American-led, democratic AI.” 

According to OpenAI, China’s distillation methods have grown more sophisticated over the last year, moving beyond chain-of-thought (CoT) extraction to multi-stage operations that include synthetic-data generation, large-scale data cleaning, and other stealthy techniques.

OpenAI also notes that it has invested in stronger detections to prevent unauthorized distillation. It bans accounts that violate its terms of service and proactively removes users who appear to be attempting to distill its models. Still, the company admits that it alone can’t solve the model distillation problem.

It’s going to take an “ecosystem security” approach to protect against distillation, and this will require some US government assistance, OpenAI says. “It is not enough for any one lab to harden its protection because adversaries will simply default to the least protected provider,” according to the memo.

The AI company also suggests that US government policy “may be helpful” when it comes to sharing information and intelligence, and working with the industry to develop best practices on distillation defenses. OpenAI also called on Congress to close API router loopholes that allow DeepSeek and other competitors to access US models, and to restrict “adversary” access to US compute and cloud infrastructure. ®


