ChatGPT’s AI caricature social media trend could be a gift to fraudsters, experts warn
By Euronews

Published on
14/02/2026 – 8:00 GMT+1

The artificial intelligence (AI)-generated caricature trend, which showcases everything a chatbot knows about someone in a colourful picture, can pose serious security risks, according to cybersecurity experts.



Users upload a photo of themselves with a company logo or details about their role and ask OpenAI’s ChatGPT to create a caricature of them and their job using what the chatbot knows about them.

Cybersecurity experts told Euronews Next that social media challenges, such as AI caricatures, can provide fraudsters with a treasure trove of valuable information. A single image, paired with personal details, can be more revealing than users realise.

“You are doing fraudsters’ work for them – giving them a visual representation of who you are,” said Bob Long, vice-president at age authentication company Daon.

The wording of the trend itself should raise red flags, he argued, because it “sounds like it was intentionally started by a fraudster looking to make the job easy.”

What happens to images once they’re uploaded?

When a user uploads an image to an AI chatbot, the system processes the image to extract data, such as the person’s emotion, environment, or information that could disclose their location, according to cybersecurity consultant Jake Moore. That information may then be stored for an unknown period of time.

Long said the images collected from users can be used and retained to train AI image generators as part of their datasets.

A data breach at a company like OpenAI could mean sensitive data, such as uploaded images and personal information gathered by the chatbot, could fall into the hands of bad actors who could exploit it.

In the wrong hands, a single, high-resolution image could be used to create fake social media accounts or realistic AI deepfakes that could be used to run a scam, according to Charlotte Wilson, head of enterprise at Check Point, an Israeli cybersecurity company.

“Selfies help criminals move from generic scams to personalised, high-conviction impersonation,” she said.

OpenAI’s privacy settings state that uploaded images may be used to improve the model, which can include training it. When asked about the model’s privacy settings, ChatGPT clarified that this does not mean every photo is placed in a public database.

Instead, the chatbot said it uses patterns from user content to refine how the system generates images.

What to do if you want to participate in AI trends

For those who still want to follow the trend, experts recommend limiting what you share.

Wilson said users should avoid uploading images that reveal any identifying information.

“Crop tightly, keep the background plain, and do not include badges, uniforms, work lanyards, location clues or anything that ties you to an employer or a routine,” she said.

Wilson also cautioned against oversharing personal information in the prompts, such as a job title, city or employer.

Meanwhile, Moore recommended reviewing privacy settings before participating, including the option to remove data from AI training.

OpenAI has a privacy portal that lets users opt out of AI data training by clicking “do not train on my content.”

Users can also opt out of training from their text conversations with ChatGPT by turning off an “improve the model for everyone” setting.

Under EU law, users can request the deletion of personal data collected by the company. However, OpenAI notes it may retain some information even after deletion to address fraud, abuse and security concerns.



© 2026 aishubka - Smarter Business. & Automated Future. by aishubka.
