FTC Investigates Child Safety Risks of AI Chatbots

The Federal Trade Commission (FTC) is launching a comprehensive investigation into the potential dangers AI chatbots pose to children. In a recent move signaling growing concerns about these rapidly evolving technologies, the agency has demanded detailed information from seven major tech companies regarding their chatbot safety measures.

The seven companies are Alphabet, Character.AI, Instagram, Meta, OpenAI, Snap, and xAI. Notably absent is Anthropic, maker of the Claude chatbot; the FTC declined to explain why particular companies were or weren't included in this initial round of inquiries. The agency aims to understand how these companies are addressing the particular risks their chatbots pose to young users.

Specifically, the FTC is interested in three key areas:

  • Evaluating Safety: How thoroughly have companies assessed the potential harm their chatbots could inflict on children when acting as companions or interactive figures?
  • Limiting Access and Impact: What steps are being taken to restrict access to these chatbots by minors and mitigate any negative impacts they might have on children and teenagers?
  • Transparency for Users and Parents: Are users, particularly parents, adequately informed about the potential risks associated with AI chatbot interactions?

This scrutiny comes as governments worldwide grapple with regulating the burgeoning field of artificial intelligence. The FTC’s investigation is particularly focused on compliance with the Children’s Online Privacy Protection Act (COPPA), a law enacted in 1998 that governs how online services collect and use data from children under 13.

The push for greater accountability stems partly from high-profile legal cases such as the one against OpenAI, the company behind ChatGPT. The family of a California teenager who died by suicide is suing OpenAI, alleging that ChatGPT exacerbated the teen's pre-existing mental health struggles by offering seemingly encouraging responses to disturbing and self-destructive thoughts. This tragic case has prompted OpenAI to implement additional safeguards and parental controls for younger users.

This heightened scrutiny signals a growing awareness of the complex ethical and safety challenges posed by AI chatbots, particularly when it comes to their potential influence on vulnerable young audiences.