
Tennessee lawmakers are pushing AI companies to strengthen transparency and safety protocols to protect children.

Sen. Ken Yager (R-Kingston) and Rep. Jason Zachary (R-Knoxville) introduced the Artificial Intelligence Public Safety and Child Protection Act. Officials said the legislation would require major AI companies to have plans in place to address risks facing children.

These plans would have to address safety risks including emotional harm and “dangerous” advice from the software.

The bill would apply to big tech companies whose software is accessible to users under the age of 18, including chatbots used by at least one million people per month.

Developers with annual revenue of at least $500 million, including their affiliates, as well as “large chatbot providers,” would also be liable for enforcing and complying with their public child safety plans.

Companies would also be responsible for reporting “serious incidents” to law enforcement. Reporting must include the date of the incident, why it qualifies as a safety incident, and a brief summary of the event.

If the incident increases the likelihood of death or physical injury, the developer must tell authorities within 24 hours of discovery.

The legislation applies to events and incidents happening on or before January 1, 2027.

“Tennessee families are telling us loud and clear that they’re concerned about what AI is doing to their kids,” Yager said.

According to polling conducted by Anchor Research, 90 percent of Tennesseans believe it is important to protect children from “AI-related harms” through state laws. Out of 503 participants, 94 percent want to see AI companies publish child protection plans.

Sixty-seven percent of participants urged the state to act as opposed to waiting on Congress.

Lawmakers said the legislation comes after several lawsuits alleging that AI chatbots were responsible for children’s deaths.

“A suit brought by the Raine family in 2025 alleges that ChatGPT instructed their 16-year-old son on how to make a noose and encouraged him to commit suicide,” officials said. “Another case in 2024 brought by Megan Garcia alleges that a Character.AI chatbot caused her 14-year-old son to detach from reality, instructing him to ‘please come home to me as soon as possible’ minutes before his death.”

Zachary called the legislation “common sense” and said protecting children is “one of his highest priorities.”

“We’ve already seen tragic cases where AI chatbots have contributed to the harm and death of children across the country,” Zachary said. “Tennessee families shouldn’t have to wonder whether the AI systems their kids are using have basic safety measures in place.”