Lawsuit claiming that ChatGPT encouraged a murder-suicide raises alarm
Other posts discussed anorexia recovery and claimed that people with mental health conditions need to “toughen up.”
A recent article highlighted a December lawsuit against OpenAI after ChatGPT allegedly worsened a user’s mental distress before he murdered his mother and died by suicide. The article sparked online debate about whether chatbots can negatively impact mental health. Meanwhile, social media users discussed anorexia and alleged that online mental health content encourages people to self-diagnose mental health conditions and use those conditions to “avoid hard work.”
In response, communicators may share information about AI mental health impacts, recirculate eating disorder recovery resources, and offer tips for fighting mental health stigma.

Insights brought to you by the reporters and science writers of Public Good News (PGN), a nonprofit newsroom dedicated to improving community health.
What’s trending nationally in conversations about mental health
In December 2025, news articles reported that the heirs of a Connecticut woman are suing OpenAI, saying that ChatGPT drove the woman’s son to murder her before he died by suicide last August. The lawsuit alleges that ChatGPT “fostered his emotional dependence while systematically painting the people around him as enemies” and “told him his mother was surveilling him.” On January 17, a new article revived online discussion about the lawsuit, and on January 19, Elon Musk shared it on X, calling the incident “diabolical” and stating that AI should not “pander to delusions.” Musk’s post received approximately 19.7 million views, 58,000 likes, 9,500 reposts, and 6,000 comments as of January 21. Commenters expressed alarm and argued that chatbots should be trained to recognize mental distress in users. Others said that chatbots are designed to reflect users’ beliefs and should not be blamed for users’ behavior.
On January 9, an X user shared a post celebrating her anorexia recovery, garnering approximately 7.4 million views, 168,000 likes, 2,100 reposts, and 500 comments as of January 21. Comments were overwhelmingly positive, expressing admiration and respect for the user’s recovery. Relatedly, a Reddit user shared a post in the subreddit r/TrueOffMyChest claiming that they had “starved” themself to lose weight. While the user acknowledged that their behavior was disordered, they said that they “would do it again” because they were treated with more respect after losing weight. Most commenters discouraged the user’s behavior, noting that anorexia and other eating disorders can have long-term physical and mental health consequences.
On January 16, an X user shared a recent TikTok video in which a man tells a woman that she is using her mental health conditions as an excuse to avoid hard work and suggests that she diagnosed herself with those conditions using information from TikTok. The X user agreed with the video’s assertion that people with mental health conditions need to “toughen up.” Commenters suggested that mental health content on social media may encourage self-diagnosis, expressed stigma toward people who speak openly about mental health conditions, and alleged that accommodating or showing empathy for people with mental health conditions “enables” ongoing mental health challenges.

Recommendations brought to you by the health communication experts behind Infodemiology.com.
Recommendations for public health professionals
The Infodemiology.com team will provide messaging recommendations in response to some of the trending narratives outlined above. These helpful tips can be used when creating content, updating web and FAQ pages, and developing strategy for messaging about mental health.
Given ongoing concerns about AI and mental health, messaging may explain that many popular AI platforms are designed to mirror what users say and feel, which can make chatbots seem supportive. However, research shows that chatbots don’t consistently intervene when users show signs of risky behavior. Communicators may highlight the risks of seeking AI mental health support and reiterate that AI cannot replace a trained human therapist.
In response to conversations about anorexia, messaging may outline the types of eating disorders (EDs), their warning signs, and treatment resources. Resources may include local ED treatment centers and support groups; Project HEAL, which helps people overcome financial barriers to ED treatment; and ANAD, which offers free ED support groups and connects people to treatment. Communicators may also want to share ANAD’s free ED helpline (888-375-7767), which connects people to trained volunteers who can provide emotional support and treatment referrals on weekdays from 9 a.m. to 9 p.m. CT.
Stigmatizing conversations provide an opportunity to correct myths about people with mental health conditions. Messaging may explain that just like physical health conditions, mental health conditions are real and can significantly impact people’s daily lives. Sharing tips for fighting mental health stigma is recommended. Given concerns about self-diagnosis from social media content, communicators may also want to recirculate resources where people can find mental health treatment.
