AI Shubka
Mind launches inquiry into AI and mental health after Guardian investigation

By ShubkaAi
February 20, 2026
in AI & Future Tech, AI breakthroughs (GPT updates, generative models), Best AI tools for creators, Robotics & automation, Tech forecasts


Mind is launching a significant inquiry into artificial intelligence and mental health after a Guardian investigation exposed how Google’s AI Overviews gave people “very dangerous” medical advice.

In a year-long commission, the mental health charity, which operates in England and Wales, will examine the risks and safeguards required as AI increasingly influences the lives of millions of people affected by mental health issues worldwide.

The inquiry – the first of its kind globally – will bring together the world’s leading doctors and mental health professionals, as well as people with lived experience, health providers, policymakers and tech companies. Mind says it will aim to shape a safer digital mental health ecosystem, with strong regulation, standards and safeguards.

The launch comes after the Guardian revealed how people were being put at risk of harm by false and misleading health information in Google AI Overviews. The AI-generated summaries are shown to 2 billion people a month, and appear above traditional search results on the world’s most visited website.

After the reporting, Google removed AI Overviews for some but not all medical searches. Dr Sarah Hughes, chief executive officer of Mind, said “dangerously incorrect” mental health advice was still being provided to the public. In the worst cases, the bogus information could put lives at risk, she said.

Hughes said: “We believe AI has enormous potential to improve the lives of people with mental health problems, widen access to support, and strengthen public services. But that potential will only be realised if it is developed and deployed responsibly, with safeguards proportionate to the risks.

“The issues exposed by the Guardian’s reporting are among the reasons we’re launching Mind’s commission on AI and mental health, to examine the risks, opportunities and safeguards needed as AI becomes more deeply embedded in everyday life.

“We want to ensure that innovation does not come at the expense of people’s wellbeing, and that those of us with lived experience of mental health problems are at the heart of shaping the future of digital support.”

Google has said its AI Overviews, which use generative AI to provide snapshots of essential information about a topic or question, are “helpful” and “reliable”.

But the Guardian found some AI Overviews served up inaccurate health information and put people at risk of harm. The investigation uncovered false and misleading medical advice across a range of issues, including cancer, liver disease and women’s health, as well as mental health conditions.

Experts said some AI Overviews for conditions such as psychosis and eating disorders offered “very dangerous advice” and were “incorrect, harmful or could lead people to avoid seeking help”.

Google is also downplaying safety warnings that its AI-generated medical advice may be wrong, the Guardian found.

Hughes said vulnerable people were being served “dangerously incorrect guidance on mental health”, including “advice that could prevent people from seeking treatment, reinforce stigma or discrimination and in the worst cases, put lives at risk”.

She added: “People deserve information that is safe, accurate and grounded in evidence, not untested technology presented with a veneer of confidence.”

Quick Guide: Contact Andrew Gregory about this story

If you have something to share about this story, you can contact Andrew using one of the following methods.

The Guardian app has a tool to send tips about stories. Messages are end-to-end encrypted and concealed within the routine activity that every Guardian mobile app performs. This prevents an observer from knowing that you are communicating with us at all, let alone what is being said.

If you don’t already have the Guardian app, download it (iOS/Android) and go to the menu. Select ‘Secure Messaging’.

Email (not secure)

If you don’t need a high level of security or confidentiality you can email andrew.gregory@theguardian.com

SecureDrop and other secure methods

If you can safely use the tor network without being observed or monitored you can send messages and documents to the Guardian via our SecureDrop platform.

Finally, our guide at theguardian.com/tips lists several ways to contact us securely, and discusses the pros and cons of each. 

Illustration: Guardian Design / Rich Cousins


The commission, which will run for a year, will gather evidence on the intersection of AI and mental health, and provide an “open space” where the experience of people with mental health conditions will be “seen, recorded and understood”.

Rosie Weatherley, information content manager at Mind, said that although Googling mental health information “wasn’t perfect” before AI Overviews, it usually worked well. She said: “Users had a good chance of clicking through to a credible health website that answered their query, and then went further – offering nuance, lived experience, case studies, quotes, social context and an onward journey to support.

“AI Overviews replaced that richness with a clinical-sounding summary that gives an illusion of definitiveness. They give the user more of one form of clarity (brevity and plain English), while giving them less of another form of clarity (security in the source of the information, and how much to trust it). It’s a very seductive swap, but not a responsible one.”

A Google spokesperson said: “We invest significantly in the quality of AI Overviews, particularly for topics like health, and the vast majority provide accurate information.

“For queries where our systems identify a person might be in distress, we work to display relevant, local crisis hotlines. Without being able to review the examples referenced, we can’t comment on their accuracy.”


© 2026 aishubka - Smarter Business. & Automated Future. by aishubka.