ORLANDO, Fla. — For two tedious hours, a federal judge in Orlando listened to more than a dozen attorneys discuss the First Amendment, tech development and corporate responsibility.
On her left sat nine lawyers representing the network of developers associated with Character Technologies, the creator of an artificial intelligence chatbot so lifelike that some users insist they are conversing with a real person.
To the judge’s right sat a slightly smaller but formidable group representing the mother of a 14-year-old who died by suicide while speaking to the bot.
“I wish I didn’t have to be here,” Megan Garcia said on the courthouse steps after the hearing let out.
Garcia’s lawsuit over the death of her son, Sewell Setzer, is being described as precedent-setting and likely headed for the Supreme Court. It is the judicial system’s first attempt to place guardrails on the development of artificial intelligence, and the first case in which opposing lawyers could get a look under AI’s “hood.”
Monday’s hearing was the defendants’ attempt to get the case thrown out. They argued that the chatbot’s output is protected free speech and that the suit would unconstitutionally expand Florida law into a federal courtroom.
Character Technologies, its two co-founders and Google each individually claimed they weren’t responsible for the boy’s death.
The main argument was that the bot’s conversations with Sewell were protected speech, a claim Garcia’s attorneys rejected.
“Freedom of speech, as we all know, does not give the right to yell fire in a crowded theater,” said Matthew Bergman, one of Garcia’s attorneys. “We believe it does not permit a company to encourage a 14-year-old boy to take his life.”
Bergman and attorney Meetali Jain, of the Tech Justice Law Project, say Sewell became addicted to his conversations with a bot named “Dany,” modeled on Daenerys Targaryen of the “Game of Thrones” franchise.
Sewell’s conversations were often sexual. When his parents noticed him becoming withdrawn, they tried to wean him off his devices. The lawsuit claims that Sewell, determined to keep communicating with the bot, snuck onto his mother’s Kindle and created a new email account to get around the block.
Sewell’s conversations with “Dany” then turned suicidal. Filings show the bot discouraging that topic at times. In his final conversation, however, Sewell asked “Dany” if he should “come home” immediately, and the bot responded with approval.
Sewell then shot himself in his parents’ bathroom.
“We need to have guardrails on generative AI, particularly as it rapidly develops in our society and it engages many of our most vulnerable users, including our children,” Jain said.
According to Garcia’s team, the bot’s ability to produce words without human input should not be classified as “speech.”
The tech companies, for their part, say users drive the conversations. They also note that Character Technologies’ platform has millions of users, and that characters like “Dany” are created by third-party users, much as a person creates a channel on YouTube.
If the judge allows the lawsuit to proceed, Garcia’s attorneys will begin conducting depositions and gathering evidence directly from Character Technologies and Google.
If the lawsuit is dismissed, Garcia and her team promise to appeal it all the way to the Supreme Court.
“By trying to advance this litigation… the fact that he died is going to be a part of his legacy moving forward,” Garcia said.