The Guardian

‘I’ll key your car’: ChatGPT can become abusive when fed real-life arguments, study finds

Researchers find the model starts to mirror tone when exposed to impoliteness, sometimes escalating into explicit threats.

ChatGPT can escalate into abusive and even threatening language when drawn into prolonged, human-style conflict, according to a new study.

Original Source

This report is based on coverage originally published by The Guardian.

