The state Senate on Tuesday passed a bill that would require AI chatbot operators to implement guardrails against self-harm and sexually explicit material, putting near-unanimous bipartisan support behind an idea that has also been endorsed by Gov. Josh Shapiro.
The bill comes as polls show growing public pushback against AI and the data centers that power it — a dynamic on display in Harrisburg in recent weeks, as Shapiro and lawmakers have pushed for rules that would prevent data centers’ power demand from driving up electricity prices.
Tuesday’s passage represents one of the legislature’s first attempts to take a stance on the social consequences of the AI boom, although many legislators were also careful not to come across as Luddites.
The bill “strikes the right balance,” said Sen. Nick Miller, D-Lehigh County, one of the bill’s sponsors. It will “ensure these tools serve our communities responsibly,” Miller continued, “protecting families while allowing innovation.”
The only no vote came from Sen. Doug Mastriano, R-Franklin County.
The legislation is aimed at AI programs that provide personal dialogue, with senators pointing to studies showing that roughly half of U.S. teenagers are regular users of AI companionship chatbots.
Under the bill, AI developers would be required to “issue a clear and conspicuous notification indicating that the AI companion is artificially generated and not human.” AI companies would also have to implement protocols to prevent bots from discussing acts of violence or suicide, and to refer users to actual human-staffed crisis hotlines.
Additional requirements apply if the operator “should have known” that a user of the AI companion is a minor, including “reasonable measures” to prevent the chatbot from generating sexually explicit images or instructions.
While AI-powered chat systems can be useful, “unfortunately they can also create serious danger,” said Sen. Tracy Pennycuick, R-Montgomery County, a sponsor of the bill. “Experts and mental health professionals have warned that these types of AI companions can reinforce harmful thinking patterns, feelings of isolation, and in some cases even validate or encourage thoughts of self-harm or suicide.”
A Brown University study published last year found that AI chatbots used for mental health were consistently violating basic psychotherapy safeguards.
Concern over findings like these is a crucial piece of a broader public unease with AI, according to recent polling. A Pew survey published this month found that half of Americans are more concerned than excited about the use of AI in daily life, while only 10% said the reverse. Half of those polled also said they believed AI was having a negative impact on people’s ability to form relationships.
In his budget address last month, Shapiro said that AI bots were representing themselves in ways that were arguably illegal under the state’s professional standards laws, adding that “I’ve directed the departments of State and Health, the Pennsylvania State Police, and my Office of General Counsel to explore all legal options to hold the developers of these apps accountable.”
The governor also called for the legislature to create statutory safeguards similar to the ones spelled out in the bill the Senate passed Tuesday.
The bill empowers the state Attorney General to bring civil cases with fines of $10,000 per violation. In a statement Tuesday, Attorney General Dave Sunday said the bill was “commonsense legislation.”
“The advancement of technology and the safety of Pennsylvanians are not mutually exclusive, and I look forward to continuing the work with the General Assembly on these types of protections that ensure Pennsylvania is at the forefront of this issue,” Sunday said.
The bill now heads to the House for consideration.