
The lawsuit alleges that the suicide of a 14-year-old boy was triggered by an artificial intelligence chatbot. Here’s how parents can help protect their children from new technology

The mother of a 14-year-old Florida boy is suing an AI chatbot company after her son, Sewell Setzer III, died by suicide, something she claims stemmed from his relationship with an AI bot.

“Megan Garcia is trying to stop C.AI from doing to another child what it did to her own child,” reads the 93-page wrongful-death lawsuit, filed this week in a U.S. District Court in Orlando against Character.AI, its founders and Google.

Tech Justice Law Project director Meetali Jain, who represents Garcia, said in a press release about the case: “By now we are all familiar with the dangers posed by unregulated platforms developed by unscrupulous technology companies, especially for children. But the harms revealed in this case are new, novel and, frankly, horrifying. In the case of Character.AI, the deception is by design, and the platform itself is the predator.”

Character.AI published a statement via X, noting: “We are deeply saddened by the tragic loss of one of our users and want to extend our deepest condolences to the family. As a company, we take the safety of our users very seriously and continue to add new safety features. You can read here: https://blog.character.ai/community-safety-updates/…”

In the lawsuit, Garcia alleges that Sewell, who took his own life in February, was drawn into an addictive, harmful technology with no protections in place, leading to an extreme personality change in the boy, who came to prefer the bot over real-life connections. His mother alleges there were “abusive and sexual interactions” over a 10-month period. The boy died by suicide after the bot told him, “Please come home to me as soon as possible, my love.”

On Friday, New York Times reporter Kevin Roose discussed the situation on his Hard Fork podcast, playing a clip of the interview he conducted with Garcia for his article that told her story. Garcia did not learn the full extent of the bot relationship until she saw all the messages after her son’s death. She told Roose that when she noticed Sewell frequently engrossed in his phone, she asked him what he was doing and who he was talking to. He explained that it was “‘just an AI bot… not a person,’” and she recalled feeling relieved: “OK, it’s not a person, it’s like one of his little games.” Garcia did not fully understand the potential emotional power of a bot, and she is far from alone.

“This is not on anyone’s radar,” said Robbie Torney, chief of staff to the CEO of Common Sense Media and lead author of a new guide on AI companions, aimed at parents who are constantly struggling to keep up with confusing new technology and to create boundaries for their children’s safety.

But Torney emphasizes that AI companions are different from, say, the service-desk chatbot you use when trying to get help from a bank. “Those are designed to perform tasks or respond to requests,” he explains. “Something like Character AI is what we call a companion, and it’s designed to try to build a relationship with a user, or to simulate a relationship. And that’s a very different use case that parents need to be aware of.” That difference is evident in Garcia’s lawsuit, which includes chillingly flirtatious, sexual and realistic text exchanges between her son and the bot.

Sounding the alarm about AI companions is especially important for parents of teenagers, Torney says, because teens, and particularly male teens, are especially susceptible to over-reliance on technology.

Below is what parents need to know.

What are AI companions and why are kids using them?

According to the new Parents’ Ultimate Guide to AI Companions and Relationships from Common Sense Media, created with mental health professionals at the Stanford Brainstorm Lab, AI companions are “a new category of technology that goes beyond simple chatbots.” They are specifically designed to, among other things, “simulate emotional bonds and close relationships with users, recall personal details from past conversations, role-play as mentors and friends, mimic human emotion and empathy,” and “agree more readily with the user” than typical AI chatbots, according to the guide.

Popular platforms include Character.ai, which allows its more than 20 million users to create and then chat with text-based companions; Replika, which offers text-based or animated 3D companions for friendship or romance; and others including Kindroid and Nomi.

Children are attracted to them for a variety of reasons, including non-judgmental listening and 24-hour availability, emotional support, and escape from real-world social pressures.

Who is at risk and what are the concerns?

Common Sense Media warns that those most at risk are young people, especially those experiencing “depression, anxiety, social difficulties or isolation,” as well as males, teens going through major life changes, and anyone lacking support systems in the real world.

That last point has been particularly troubling for Raffaele Ciriello, a senior lecturer in Business Information Systems at the University of Sydney Business School, who has researched how “emotional” AI poses a challenge to the essence of humanity. “Our research reveals a (de)humanization paradox: by humanizing AI agents, we may inadvertently dehumanize ourselves, which can lead to an ontological blurring of human-AI interactions.” In other words, Ciriello writes in a recent opinion piece co-authored with doctoral student Angelina Ying Chen, “users can form a deep emotional connection if they believe their AI companion truly understands them.”

Another study, this one from the University of Cambridge and focused on children, found that AI chatbots have an “empathy gap” that puts young users, who tend to treat such companions as “lifelike, quasi-human confidantes,” at particular risk of harm.

Common Sense Media therefore highlights a list of potential risks: companions may be used to avoid real human relationships, may pose particular problems for people with mental or behavioral issues, may intensify loneliness or isolation, may introduce the potential for inappropriate sexual content, can be addictive, and tend to agree with users, a frightening reality for those experiencing “suicidality, psychosis, or mania.”

How to spot red flags

According to the guide, parents should look for the following warning signs:

  • Preferring AI companion interaction to real friendships

  • Spending hours alone talking with the companion

  • Emotional distress when they cannot access the companion

  • Sharing deeply personal information or secrets

  • Developing romantic feelings for the AI companion

  • Dropping grades or school attendance

  • Withdrawal from social/family activities and friendships

  • Loss of interest in previous hobbies

  • Changes in sleep patterns

  • Discussing issues with AI companion only

If you notice that your child is withdrawing from real people in favor of AI, showing new or worsening signs of depression or anxiety, becoming overly defensive about using an AI companion, showing major changes in behavior or mood, or expressing thoughts of self-harm, Common Sense Media stresses, consider seeking professional help for your child.

How can you keep your child safe?

  • Set boundaries: Set specific times for AI companion use and do not allow unsupervised or unrestricted access.

  • Spend time offline: Encourage real-world friendships and activities.

  • Check in regularly: Monitor the content of the chatbot and your child’s emotional attachment level.

  • Talk about it: Keep communication about experiences with AI open and nonjudgmental while looking out for red flags.

“If parents hear their kids saying, ‘Hey, I’m talking to an AI chatbot,’ that’s really an opportunity to lean in and take in that information, and not think, ‘Oh, OK, you’re not talking to a person,’” Torney says. Instead, he says, it is a chance to “listen with compassion and empathy and not assume that just because it’s not a person it’s safer, or that you don’t need to worry.”

If you need urgent mental health support, contact the 988 Suicide and Crisis Lifeline.

This story first appeared on Fortune.com.