
A single mother in Queens asks an AI chatbot about her child's symptoms at 2 a.m. and gets clear explanations, until New York makes that interaction grounds for a lawsuit against the AI company. The state Senate's Internet and Technology Committee just unanimously advanced Senate Bill S7263, which imposes civil liability on chatbot operators whenever an AI delivers "substantive" responses in more than a dozen licensed fields, effectively pricing ordinary people out of fast, free knowledge that once belonged only to high-hourly-rate professionals.
The bill, sponsored by Sen. Kristen Gonzalez, targets any AI output that—if given by a human—would count as unauthorized practice in professions including medicine, law, engineering, dentistry, nursing, psychology, social work, architecture, veterinary medicine, physical therapy, pharmacy, podiatry, optometry, and more. Operators face lawsuits from users who claim harm after relying on the advice, even when the chatbot discloses it is not human. Disclaimers provide no shield.
Core Mechanism Targets Access, Not Just Impersonation
The legislation bars AI from providing "substantive response, information, or advice" that violates professional licensing laws. It applies broadly across fields where gatekeeping has long kept knowledge behind paywalls or appointment calendars.
A tenant asking whether their landlord's eviction notice complies with local rules could trigger liability if the AI explains relevant statutes.
A small business owner uploading blueprints for quick structural feedback risks exposing the AI company to damages claims.
Asking an AI to interpret routine bloodwork results, a task medical residents already perform under supervision, suddenly becomes a prohibited "substantive" interaction.
Reason magazine reports the bill "would hold AI companies liable specifically for harm caused by chatbots performing tasks that, if carried out by a human, would constitute unauthorized practice," covering everything from medical diagnoses to legal counsel. The scope extends far beyond mental health, despite the sponsor's initial emphasis on therapy risks.
Licensed Professions Gain New Legal Firewall
Critics see protectionism dressed as consumer safety. Industries that bill hundreds of dollars per hour now have a state-backed mechanism to suppress tools that democratize their expertise. AI already drafts basic contracts faster than junior associates, analyzes lab results with pattern recognition sharper than a fatigued resident's, and reviews engineering plans overnight without coffee breaks. By making companies liable for ordinary Q&A, the bill raises the cost of offering such capabilities, likely forcing most consumer-facing AIs to block or heavily censor responses in these domains.
StateScoop notes the measure "bars [chatbots] from providing 'substantive response, information, or advice' that would violate professional licensing laws," while allowing private lawsuits for violations. Reuters highlights Gonzalez's concern that no current law stops an AI from claiming to be a licensed professional and dispensing advice accordingly. But the bill's reach goes well beyond outright impersonation, capturing factual explanations and reasoned guidance.
Power Shift Blocked at the Edge
This arrives as AI literacy spreads and people increasingly turn to tools for first-pass answers before—or instead of—costly consultations. The bill does not require proof of widespread harm; it preempts the possibility by attaching liability to the act itself. Users lose free access to explanations that could inform better decisions or help them ask sharper questions of human professionals.
The legislation advanced 6-0 from committee in late February 2026 and now heads to the full Senate. If passed and signed, New York would become an early mover in using civil liability to preserve professional monopolies over knowledge in the AI era, all while claiming the mantle of public protection.

