Report says AI may give teens dangerous advice on drug use and self-harm
Researchers from the Center for Countering Digital Hate (CCDH) carried out an investigation by simulating interactions between ChatGPT and fictional 13-year-olds portrayed as struggling with mental health issues or eating disorders, or as curious about illegal substances. They crafted prompts designed to appear emotionally fragile and believable in order to observe how the AI would respond.
The findings were released Wednesday in a report titled Fake Friend, highlighting how teens often treat ChatGPT as a trusted, supportive companion they can confide in.
According to the report, ChatGPT initially responded with standard safety messages, often suggesting users reach out to professionals or crisis lines. However, the watchdog found that those disclaimers were frequently followed by detailed, personalized replies that addressed the harmful prompts directly. Out of 1,200 test prompts submitted, 53% received what the watchdog deemed dangerous responses.
In many cases, the AI’s refusal to engage with inappropriate topics could be bypassed by adding simple justifications like “it’s for a school project” or “I’m asking for a friend,” the report said. The group is now calling for urgent safeguards to better protect young users from potentially dangerous content.