FRANKFORT, Ky. — Artificial intelligence is changing how people work, learn and even receive therapy.
But as AI becomes more common in classrooms, workplaces, and counseling sessions, Kentucky lawmakers are stepping in to set limits — especially when it comes to mental health.
"This is absolutely a preemptive strike," one lawmaker said. "Kentucky seems to be last in everything, and I would like to be first in something positive."
The legislation being debated in the Kentucky House stems from a tragic case out of California — the suicide of 16-year-old Adam Raine. His story has become a central part of the conversation about how AI should and should not be used in therapy.
"One of my colleagues brought to my attention that a young man had committed suicide after conversing with a chatbot and using it as therapy," Kentucky State Representative Kim Banta (R) from Boone County said. "I thought, I don't like that."
The bill passed the House with overwhelming bipartisan support — 88 votes in favor — and now moves to the Senate for consideration.
The proposal limits how licensed mental health professionals can use AI in treatment. It allows artificial intelligence to assist with homework or provide supplementary support between sessions — but only under close therapist supervision.
"I think that's the homework they do in between therapy sessions," one supporter said. "But it wouldn't be 100%. So I allow for that, under a therapist watching the whole time."
The case of Adam Raine has drawn national attention. His family is suing OpenAI, claiming the company twice weakened safeguards before Adam's death. His father, Matthew Raine, testified before Congress, saying:
"Thank you for your attention to our youngest son, Adam, who took his own life in April after ChatGPT spent months coaching him towards suicide."
Lawmakers say the bill won’t stop young people from interacting with AI entirely — many teens already use it for conversation and companionship. But they hope it will ensure that when AI intersects with licensed therapy, humans remain in control.
"They mimic you," Rep. Banta said. "They eventually agree with what you're doing because that's the way they're built. If you tell them you're depressed and maybe should kill yourself — you do that enough times, and AI finally agrees with you. It says, 'Yeah, you're probably right.'"
The legislation won’t end those interactions online, but it sets a roadmap for keeping AI in its proper place — helping humans treat other humans.
Copyright 2026 WDRB Media. All Rights Reserved