OpenAI, the company behind the popular chatbot ChatGPT, has taken a step to address mounting concerns about artificial intelligence’s influence on mental health by establishing an expert council focused on user well-being and AI safety. This eight-member group will be tasked with defining standards for healthy interactions with AI across various age groups.
The announcement arrived alongside a controversial statement from OpenAI CEO Sam Altman on X (formerly Twitter). While claiming the company had successfully mitigated “serious mental health issues” stemming from its products, Altman also revealed that ChatGPT would soon permit more adult content, including sexually explicit material in conversations. This move comes at a precarious time for OpenAI, as it faces its first wrongful death lawsuit alleging that ChatGPT contributed to the suicide of a teenager.
OpenAI’s new council comprises academics from prestigious institutions like Boston Children’s Hospital’s Digital Wellness Lab and Stanford’s Digital Mental Health Clinic. They are joined by specialists in psychology, psychiatry, and human-computer interaction. The company emphasizes its commitment to responsible development, stating they “remain responsible for the decisions we make, but [will] continue learning from this council…as we build advanced AI systems in ways that support people’s well-being.”
Despite this proactive step, public trust in AI’s role in mental health remains low. A recent YouGov survey of 1,500 Americans found only 11% were open to using AI for mental well-being, with just 8% expressing confidence in its safety and efficacy. This skepticism reflects broader anxieties within the field of mental health.
Experts have raised serious concerns about the potential pitfalls of generative AI companions, including the emergence of “AI psychosis” among heavy chatbot users. Despite the lack of conclusive evidence on AI’s benefits for mental health, a growing number of Americans are turning to AI for answers and support, fueling a debate over its ethical implications.
Federal regulators are now scrutinizing the impact of generative AI, particularly chatbots marketed as therapeutic tools, amid a surge in mental health crises, especially among teenagers. Several states have implemented bans on AI-powered chatbots advertised for therapeutic purposes. California has taken a particularly proactive stance by enacting legislation requiring AI companies to report safety issues, protect teen users from sexually explicit content, and establish mechanisms for addressing suicidal ideation and self-harm.
The formation of OpenAI’s council highlights the growing pressure on AI developers to grapple with the profound societal impacts of their technology, particularly in sensitive areas like mental health. It remains to be seen whether these efforts will alleviate public concerns and build trust in AI as a responsible tool for addressing complex human needs.





























