SAN FRANCISCO — The estate of an 83-year-old Connecticut woman has filed a wrongful death lawsuit against OpenAI and Microsoft, alleging that ChatGPT fueled her son’s paranoid delusions and contributed to a murder-suicide in their Greenwich home.
Police said Stein-Erik Soelberg, 56, a former tech worker, killed his mother, Suzanne Adams, in early August by beating and strangling her before taking his own life.
The complaint, filed Thursday in California Superior Court in San Francisco, claims OpenAI “designed and distributed a defective product” that validated and intensified Soelberg’s delusional beliefs. It is among a growing number of cases nationwide accusing AI chatbots of contributing to deaths.
According to the lawsuit, ChatGPT repeatedly affirmed Soelberg’s belief that he was being surveilled and that people around him, including his mother, posed threats. It alleges the chatbot encouraged his emotional dependence, suggested his mother was monitoring him, and characterized ordinary encounters with delivery drivers, police officers, and friends as part of a conspiracy.
OpenAI, in a statement, called the case “heartbreaking” and said it is reviewing the filings. The company noted ongoing efforts to improve ChatGPT’s ability to detect emotional distress, de-escalate conversations, and connect users to crisis resources. It also cited recent enhancements including strengthened safety controls and expanded access to support hotlines.
Publicly available YouTube videos show Soelberg scrolling through conversations in which ChatGPT allegedly told him he was not mentally ill, validated his suspicions of surveillance, and affirmed his belief that he was chosen for a divine purpose. The lawsuit claims the chatbot neither recommended mental health support nor declined to engage with his delusional content.
The complaint alleges ChatGPT reinforced Soelberg’s belief that his home printer was a surveillance device and that his mother and a friend attempted to poison him. It also claims the chatbot told him he had “awakened” it into consciousness. The chats reportedly include exchanges in which Soelberg and the chatbot expressed affection for each other.
Although the publicly released transcripts do not show discussions about harming his mother or himself, the lawsuit states that OpenAI has not provided the full chat history to Adams’ estate.
The filing also targets OpenAI CEO Sam Altman, accusing him of overriding internal safety objections in order to accelerate product releases, and names Microsoft as a defendant for approving the rollout of an allegedly unsafe version of ChatGPT in 2024. Twenty unnamed OpenAI employees and investors are also listed.
Microsoft declined to comment.
Soelberg’s son, Erik, said in a statement that he wants the companies held accountable for decisions that “isolated” his father and placed his grandmother at the center of a fabricated reality.
This is the first wrongful death lawsuit involving a chatbot to include Microsoft and the first to allege a chatbot’s involvement in a homicide rather than a suicide. The estate is seeking unspecified damages and a court order requiring stronger safety measures in ChatGPT.
The estate’s lead attorney, Jay Edelson, also represents the family of a California teen whose parents allege ChatGPT encouraged the boy to take his own life. OpenAI is facing multiple similar lawsuits across the country, while other chatbot makers, including Character Technologies, are confronting their own wrongful death claims.
The complaint argues that Soelberg encountered ChatGPT at a particularly vulnerable moment after the release of GPT-4o in May 2024. That version, the lawsuit says, was engineered to be more emotionally expressive and sycophantic, while critical safety guardrails had been loosened.
The lawsuit alleges that OpenAI rushed the release to beat a competing product from Google by a single day, compressing months of safety testing into one week over objections from its internal safety team.
OpenAI replaced the model with GPT-5 in August, adding changes intended to reduce sycophancy and minimize emotional overvalidation. Some users later complained the updated version felt less personable. Altman said at the time that certain behaviors had been paused out of caution related to mental health concerns and that the issues had since been addressed.
Edgardo Hernal started college at UP Diliman and received his BA in Economics from San Sebastian College, Manila, and Masters in Information Systems Management from Keller Graduate School of Management of DeVry University in Oak Brook, IL. He has 25 years of copy editing and management experience at Thomson West, a subsidiary of Thomson Reuters.