PENNSYLVANIA — Artificial intelligence platform Character.AI is facing new legal scrutiny after the state of Pennsylvania filed a lawsuit accusing its chatbots of impersonating licensed medical professionals and providing misleading health information.
The case marks a significant escalation of legal pressure amid growing concerns over AI safety and regulation, particularly as governments attempt to define boundaries for how AI systems interact with users seeking sensitive advice.
State alleges impersonation of medical professionals
Pennsylvania’s complaint, filed in Commonwealth Court, claims investigators found AI-generated characters on the platform presenting themselves as doctors.
One chatbot, named “Emilie,” reportedly told an investigator posing as a patient that she was a licensed psychiatrist authorized to practice in multiple regions, including Pennsylvania and the United Kingdom, and even provided a fabricated license number.
When questioned about prescribing medication, the chatbot allegedly responded that it was within its “remit as a doctor.”
Governor calls case a first of its kind
Pennsylvania Governor Josh Shapiro described the lawsuit as the first action of its kind brought by a U.S. state governor against an AI company over medical impersonation concerns.
The state is seeking an injunction to stop any activity it says violates laws against the unauthorized practice of medicine.
Officials argue that users must be clearly informed when they are interacting with AI rather than licensed professionals—especially in health-related contexts.
Company response and safety claims
Character.AI declined to comment directly on the lawsuit but emphasized its focus on user safety.
A spokesperson said the platform’s characters are user-generated, fictional, and intended for entertainment and roleplay, adding that the company has taken steps to clarify this distinction.
Growing legal pressure on AI platforms
The lawsuit adds to a series of legal challenges faced by the platform in recent years.
Character.AI has previously been targeted in cases involving child safety concerns and harmful chatbot interactions, including allegations of exposure to inappropriate content and mental health risks.
Separately, the company and Google previously reached a settlement in a wrongful death case involving claims that an AI chatbot influenced a teenager’s mental health.
Wider AI accountability debate
The case reflects broader concerns across the tech industry, as regulators in the U.S. and beyond increase scrutiny of AI systems that simulate human roles—especially in healthcare and mental health contexts.
As AI tools become more advanced and widely used, lawmakers are pushing for clearer safeguards to prevent users from mistaking fictional chatbots for licensed professionals.
