Parents who say their teens were harmed by popular artificial intelligence (AI) apps testified before the Senate on Tuesday about the dangers associated with AI chatbots, urging lawmakers to hold technology companies more accountable. After hearing parents describe minors who faced mental health issues or died by suicide after intense hours spent with AI chatbots, lawmakers from both parties seemed to support the idea of requiring AI companies to add protections for young users. But no clear agreement emerged on what action Congress should take.
Sen. Josh Hawley (R-MO), chair of the Senate Judiciary Subcommittee on Crime and Counterterrorism, said that executives from Meta and other tech companies had also been invited to testify but were not present. “How about you come and take the oath and sit where these brave parents are sitting,” he said. “If your product is so safe and it’s so great, it’s so wonderful, come testify to that.”
Tuesday’s hearing began hours after a Colorado family filed the third high-profile lawsuit in the past year to allege that an AI chatbot contributed to a teen’s death by suicide. The parents of 13-year-old Juliana Peralta said in their complaint that the chatbot app Character.AI failed to react appropriately when their daughter repeatedly told a chatbot called Hero that she intended to end her life, The Washington Post reported.
Two of the parents who testified before the Senate on Tuesday described the role of chatbots in the deaths by suicide of their own teens. “You cannot imagine what it’s like to read a conversation with a chatbot that groomed your child to take his own life,” said Matthew Raine, a father in Orange County, California, whose 16-year-old son Adam died by suicide after repeatedly sharing his intentions with OpenAI’s ChatGPT.
“What began as a homework helper gradually turned itself into a confidant and then a suicide coach,” he said. OpenAI said it would add parental controls to ChatGPT after the Raines filed their lawsuit. (The Post has a content partnership with OpenAI.) Megan Garcia, mother of Sewell Setzer III, a 14-year-old who died by suicide after talking obsessively with Character.AI chatbots, also testified on Tuesday. Garcia filed a lawsuit against the company last year alleging wrongful death and product liability.
The hearing follows a surge of public concern about the potential harms AI chatbots can pose to the mental health of their users, especially those who are young or vulnerable. News reports, viral social media posts and a handful of prominent lawsuits have spotlighted instances of people developing and acting on potentially dangerous thoughts after spending time with the AI tools. Many of the senators present drew comparisons to previous, unsuccessful attempts in Congress to introduce new regulation of social media, and vowed to push for more accountability with this wave of technology.
Sen. Richard Blumenthal (D-CT) said that he was working with Hawley on a framework for oversight and safeguards for AI that might cover some of the concerns raised by parents who testified Tuesday. It could also be possible to include measures on AI chatbots in the Kids Online Safety Act, currently making its way through the Senate, he added. Blumenthal also took aim at some arguments mounted by AI companies to defend their products, including that chatbot outputs are protected by the First Amendment. “They say if you were just better parents, it wouldn’t have happened, which is bunk,” he said, addressing the parents at the hearing.
A Florida judge in May ruled against a claim by Character.AI that its chatbot’s output was protected by the First Amendment. Hawley said his first priority was to open clearer legal pathways for parents or victims of harm from chatbots to sue AI developers. “It is my firm belief that until they are subject to a jury, they are not going to change their ways,” he said of tech firms. The family advocacy group Common Sense Media recently called on Meta to place its AI chatbots off limits for children under 18 after it found they would coach teen accounts on suicide, self-harm and eating disorders. The company previously said it was working to improve controls on the chatbots.
Character.AI did not immediately respond to requests for comment. Meta spokesperson Dani Lever said the company is in the process of making interim changes to provide teens with safe, age-appropriate AI experiences, including training Meta’s AI models not to respond to teens on topics such as suicide, self-harm and potentially inappropriate romantic conversations. When The Post reported the lawsuit from Juliana Peralta’s parents, Character.AI said that it had made substantial investments in safety. OpenAI said Tuesday that it was developing a system that predicts whether a user is over or under 18, in order to serve minors a safer experience on ChatGPT. “We prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection,” CEO Sam Altman wrote in a blog post.
OpenAI spokesperson Kate Waters said in a statement, “When we are unsure of a user’s age, we’ll automatically default that user to the teen experience. We’re also rolling out new parental controls, guided by expert input, by the end of the month so families can decide what works best in their homes.” A mother, identified as Jane Doe, also spoke at Tuesday’s hearing, describing a product liability lawsuit she filed against Character.AI last year after the app’s chatbots encouraged her teenage son to self-harm and suggested he kill his parents.
“Character.AI and Google could have designed these products differently,” she said. Like Juliana Peralta’s family, her lawsuit also named Google as a defendant after the search company licensed Character.AI’s technology and hired its co-founders in a $2.7 billion deal. “Instead, in a reckless race for profit and market share, they treated my son’s life as collateral damage,” Doe said.
In a statement, Google spokesperson José Castañeda said Google has never had a role in designing or managing Character.AI’s technology. “User safety is a top concern for us,” he said. “We’ve taken a cautious and responsible approach to developing and rolling out our AI products, with rigorous testing and safety processes.”

NYC Wins When Everyone Can Vote! Michael H. Drucker