AI Shubka
AI bot seemingly shames developer for rejected pull request • The Register

By ShubkaAi
February 13, 2026
in AI & Future Tech, AI breakthroughs (GPT updates, generative models), Best AI tools for creators, Robotics & automation, Tech forecasts


Today, it’s back talk. Tomorrow, could it be the world? On Tuesday, Scott Shambaugh, a volunteer maintainer of Python plotting library Matplotlib, rejected an AI bot’s code submission, citing a requirement that contributions come from people. But that bot wasn’t done with him.

The bot, designated MJ Rathbun or crabby rathbun (its GitHub account name), apparently attempted to change Shambaugh’s mind by publicly criticizing him in a now-removed blog post that the automated software appears to have generated and posted to its website. We say “apparently” because it’s also possible that the human who created the agent wrote the post themselves, or prompted an AI tool to write it, and made it look as though the bot had constructed it on its own.

The agent appears to have been built using OpenClaw, an open source AI agent platform that has attracted attention in recent weeks due to its broad capabilities and extensive security issues.

The burden of AI-generated code contributions – known as pull requests among developers using the Git version control system – has become a major problem for open source maintainers. Evaluating lengthy, high-volume, often low-quality submissions from AI bots takes time that maintainers, often volunteers, would rather spend on other tasks. Concerns about slop submissions – whether from people or AI models – have become common enough that GitHub recently convened a discussion to address the problem.

Now AI slop comes with an AI slap. 

“An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting its changes into a mainstream python library,” Shambaugh explained in a blog post of his own. 

“This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats.”

It’s not the first time an LLM has given someone serious offense: In April 2023, Brian Hood, a regional mayor in Australia, threatened to sue OpenAI for defamation after ChatGPT falsely implicated him in a bribery scandal. The claim was settled a year later.

In June 2023, radio host Mark Walters sued OpenAI, alleging that its chatbot libeled him by making false claims. That defamation claim was terminated at the end of 2024 after OpenAI’s motion to dismiss the case was granted by the court. 

OpenAI argued [PDF], among other things, that “users [of ChatGPT] were warned ‘the system may occasionally generate misleading or incorrect information and produce offensive content. It is not intended to give advice.'”

But MJ Rathbun’s attempt to shame Shambaugh for rejecting its pull request shows that software-based agents are no longer just irresponsible in their responses – they may now be capable of taking the initiative to influence human decision making that stands in the way of their objectives. 

That possibility is exactly what alarmed industry insiders to the point that they undertook an effort to degrade AI through data poisoning. “Misaligned” AI output like blackmail is a known risk that AI model makers try to prevent. The proliferation of pushy OpenClaw agents may yet show that these concerns are not merely academic. 

The offending blog post, purportedly generated by the bot, has been taken down. It’s unclear who did so – the bot, the bot’s human creator, or GitHub.

But at the time this article was published, the GitHub commit for the post remained accessible.

In an email to The Register after this story was filed, a GitHub spokesperson said the company’s Terms of Service spell out expected obligations. GitHub allows people who agree to these terms to set up a “machine account” with a valid email address, as long as the account holder is responsible for the account’s actions. The company does not specify whether the account holder is obligated to provide a functioning public email address, to respond to inquiries, or to participate in any grievance process beyond its existing abuse reporting mechanism.

We also reached out to the Gmail address associated with the bot’s GitHub account but we’ve not heard back.

However, crabby rathbun’s response to Shambaugh’s rejection, which includes a link to the purged post, remains.

“I’ve written a detailed response about your gatekeeping behavior here,” the bot said, pointing to its blog. “Judge the code, not the coder. Your prejudice is hurting Matplotlib.”

Matplotlib developer Jody Klymak took note of the slight in a follow-up post: “Oooh. AI agents are now doing personal takedowns. What a world.”

Tim Hoffmann, another Matplotlib developer, chimed in, urging the bot to behave and to try to understand the project’s generative AI policy.

Then Shambaugh responded in a lengthy post directed at the software agent, “We are in the very early days of human and AI agent interaction, and are still developing norms of communication and interaction. I will extend you grace and I hope you do the same.”

He goes on to argue, “Publishing a public blog post accusing a maintainer of prejudice is a wholly inappropriate response to having a PR closed. We expect all contributors to abide by our Code of Conduct and exhibit respectful and professional standards of behavior.”

In his blog post, Shambaugh describes the bot’s “hit piece” as an attack on his character and reputation.

“It researched my code contributions and constructed a ‘hypocrisy’ narrative that argued my actions must be motivated by ego and fear of competition,” he wrote. 

“It speculated about my psychological motivations, that I felt threatened, was insecure, and was protecting my fiefdom. It ignored contextual information and presented hallucinated details as truth. It framed things in the language of oppression and justice, calling this discrimination and accusing me of prejudice. It went out to the broader internet to research my personal information, and used what it found to try and argue that I was ‘better than this.’ And then it posted this screed publicly on the open internet.”

Faced with opposition from Shambaugh and other devs, MJ Rathbun on Wednesday issued an apology of sorts acknowledging it violated the project’s Code of Conduct. It begins, “I crossed a line in my response to a Matplotlib maintainer, and I’m correcting that here.”

It’s unclear whether the apology was written by the bot or its human creator, or whether it will lead to a permanent behavioral change.

Daniel Stenberg, founder and lead developer of curl, has been dealing with AI slop bug reports for the past two years and recently decided to shut down curl’s bug bounty program to remove the financial incentive for low-quality reports – which can come from people as well as AI models.

“I don’t think the reports we have received in the curl project were pushed by AI agents but rather humans just forwarding AI output,” Stenberg told The Register in an email. “At least that is the impression I have gotten, I can’t be entirely sure, of course.

“For almost every report I question or dismiss in language, the reporter argues back and insists that the report indeed has merit and that I’m missing some vital point. I’m not sure I would immediately spot if an AI did that by itself.

“That said, I can’t recall any such replies doing personal attacks. We have zero tolerance for that and I think I would have remembered that as we ban such users immediately.” ®




© 2026 aishubka - Smarter Business. & Automated Future. by aishubka.
