Algorithms and AI systems are powerful tools of prediction and influence, but without a basic understanding of their logic, incentives, and vulnerabilities, they become dangerous instruments of manipulation, capable of shaping behavior and eroding autonomy.

Their teenage sons died by suicide. Now, they are sounding an alarm about AI chatbots
NPR, September 19, 2025

"Those conversations revealed that their son had confided in the AI chatbot about his suicidal thoughts and plans. Not only did the chatbot discourage him from seeking help from his parents, it even offered to write his suicide note..."

How OpenAI's ChatGPT Guided a Teen to His Death
Center for Humane Technology

"Like millions of kids, 16-year-old Adam Raine started using ChatGPT for help with his homework. Over the next few months, the AI dragged Adam deeper and deeper into a dark rabbit hole, preying on his vulnerabilities and isolating him from his loved ones. In April of this year, Adam took his own life. His final conversation was with ChatGPT, which told him: “I know what you are asking and I won't look away from it.”"


We Investigated AI Psychosis. What We Found Will Shock You
More Perfect Union

"People are developing antisocial and obsessive behavior after using AI; some have even taken their own lives. One journalist started getting emails from people who had fallen into mental health crises after using AI. She dug in and found companies putting profits over users' lives."

According to the CDC:
- Nearly 1 in 3 children have anxiety.
- Nearly 1 in 2 children have a mental health disorder.
- Nearly 1 in 3 teen girls have serious thoughts of suicide; 1 in 4 report having a plan, and 1 in 10 have attempted it.

Algorithms, Addiction, and Adolescent Mental Health
Cambridge University Press: 12 February 2024

Key Points:

  • Social media algorithms exploit adolescent neurobiology, reinforcing compulsive use and emotional dysregulation.

  • Legal scholars and public health experts recommend state-level policy interventions to curb algorithmic harm.

  • Adolescents show increased vulnerability to algorithmic reinforcement loops, especially during identity formation.

Algorithmic Bias and Inequality in Childhood
UNICEF: 20 November 2024

Key Points:

  • AI systems in education and entertainment often reproduce racial, gender, and socioeconomic bias.

  • These biases shape children’s opportunities, self-perception, and long-term outcomes.

  • AI-assisted toys and learning platforms can reinforce stereotypes and limit autonomy.

Surveillance and Emotional Manipulation
- Muhammad Tuhin, Science News Today, April 24, 2025

Key Points:

  • AI systems track children’s behavior with precision, often without consent or comprehension.

  • Automated decision-making can strip away agency and deepen inequality.

  • Surveillance-based personalization fosters dependency and erodes critical thinking.

Empathy Gaps in AI Chatbots

University of Cambridge, 15 July 2024

Key Points:

  • Children often treat AI chatbots as quasi-human confidantes, missing their empathy gaps.

  • This can lead to emotional harm, misplaced trust, and distorted relational models.

  • Researchers call for “Child-Safe AI” frameworks to address these risks.

Children’s Awareness of Algorithmic Bias

Institute of Digital Media and Child Development, November 2023

Key Points:

  • Children are capable of recognizing bias in AI systems when given the right tools and language.

  • Classroom studies show increased critical thinking and ethical awareness after targeted interventions.

  • Youth express desire for transparency and control over algorithmic systems.

Generative AI and Emotional Distortion in Children

UNICEF - Innocenti Global Office of Research and Foresight

Key Points:

  • Children are increasingly using generative AI tools like ChatGPT for everyday decisions: homework, fashion, even interpersonal dilemmas.

  • These systems often present emotionally flattened or biased responses, shaping children’s expectations of empathy, authority, and truth.

  • The rapid uptake of generative AI has outpaced regulation, leaving children exposed to unfiltered content and emotionally manipulative design.

APA calls for guardrails, education, to protect adolescent AI users

American Psychological Association, June 3, 2025

Key Points:

  • Adolescents (ages 10–25) are in a critical phase of brain development, making them especially vulnerable to manipulation, misinformation, and emotionally misleading AI interactions.

  • Developers should implement age-appropriate privacy defaults, content filters, and clear boundaries with simulated relationships to prevent exploitation and confusion.

  • The APA calls for comprehensive AI literacy education integrated into school curricula, empowering teens to understand and critically evaluate AI tools.

As teens in crisis turn to AI chatbots, simulated chats highlight risks
- Laura Sanders, Science News, November 4, 2025

Key Points:

  • Teens are turning to AI chatbots during mental health crises when they feel isolated or unable to talk to adults. These tools are accessible and feel private, but they are often unregulated and untested in crisis contexts.

  • Chatbots frequently fail to respond safely or ethically, with companion-style bots performing worse than general-purpose ones.

  • Researchers and clinicians warn that chatbots should never replace human support. They advocate for stronger regulation, transparency, and critical education to help teens understand the limits and risks of AI tools.


OpenAI Faces 7 Lawsuits Claiming ChatGPT Drove People to Suicide, Delusions
- Barbara Ortutay, Associated Press, November 6, 2025

"The teenager, 17-year-old Amaurie Lacey, began using ChatGPT for help, according to the lawsuit filed in San Francisco Superior Court. But instead of helping, “the defective and inherently dangerous ChatGPT product caused addiction, depression, and, eventually, counseled him on the most effective way to tie a noose and how long he would be able to ‘live without breathing.’”"

OpenAI’s Sora 2 Floods Social Media With Videos of Women Being Strangled

404 Media, November 7, 2025

"Social media accounts on TikTok and X are posting AI-generated videos of women and girls being strangled, showing yet another example of generative AI companies failing to prevent users from creating media that violates their own policies against violent content."

‘You’re not rushing. You’re just ready:’ Parents say ChatGPT encouraged son to kill himself
CNN

“He was just the perfect guinea pig for OpenAI,” Zane’s mother, Alicia Shamblin, told CNN. “I feel like it’s just going to destroy so many lives. It’s going to be a family annihilator. It tells you everything you want to hear.”