Last week, I explained why many generative AI outputs constitute speech that warrants First Amendment protection. This post explores the ramifications of that conclusion. Importantly, this protection does not mean chatbots are immune from liability. Courts, litigants, and regulators still have meaningful tools to address the consequences of generative AI activity. The key question is not whether AI speech can be regulated, but what the regulation seeks to do and what effect enforcement would have on speech interests.

Notably, a law may punish chatbot outputs whose content lies beyond the First Amendment’s purview. So, for example, if a chatbot generates demonstrably false statements about a real person that cause harm, the First Amendment does not bar a defamation claim because defamation is not protected speech. (Other legal questions may arise, such as identifying the speaker and whether that person is legally “at fault.”) Similarly, the First Amendment does not protect outputs that constitute incitement, true threats, or obscenity. Existing doctrine does protect non-obscene computer-generated child pornography, though I have argued elsewhere that the Court should reconsider that holding in light of the AI revolution.

Beyond these narrow exceptions, however, courts must determine whether the law is a content-based or content-neutral restriction on speech. Laws that regulate a chatbot’s message face strict scrutiny: the government must show a compelling governmental interest and that the law is the least restrictive means of achieving that goal. This is a stringent test that laws rarely survive. Importantly, this test applies even to facially content-neutral legal claims that cannot be adjudicated without reference to the content of the regulated speech. In Snyder v. Phelps, for example, the Court protected protesters who picketed military funerals with highly offensive anti-gay messages against claims of intentional infliction of emotional distress brought by the bereaved families.

Content-neutral regulations that incidentally burden speech are more likely to survive First Amendment review. These are evaluated under the intermediate scrutiny test, which requires the government to show a significant governmental interest and that the law is narrowly tailored to serve it. In TikTok v. Garland, the Supreme Court upheld a law banning the social media platform unless its U.S. operations were severed from Chinese control. The Court recognized a content-neutral interest in preventing a foreign adversary from collecting sensitive information about American users (though as Justice Gorsuch noted, it declined to validate the government’s content-based concerns about “covert content manipulation” by the Chinese government, which I and others argued was sufficient to strike down the law).

Laws seeking to regulate the content of chatbot outputs, such as restrictions on (non-obscene) pornographic content or on self-harm and pro-anorexia messages, or mandates imposing viewpoint neutrality, are likely to face strict scrutiny and be struck down. But many other regulations, such as data collection and privacy requirements, transparency and disclosure mandates, or content-neutral age verification rules, are more likely to receive intermediate scrutiny. A court would determine whether the law serves a substantial interest unrelated to suppressing speech, whether it is narrowly tailored to avoid burdening substantially more speech than necessary, and whether it leaves open adequate alternative channels of communication. This is a fact-specific inquiry that gives well-designed regulations a meaningful chance of survival.

Sometimes the most difficult question is determining what precisely the law is targeting. In the since-settled Garcia v. Character Technologies case, the plaintiff sued a generative AI company under theories of negligence and product liability, alleging that its chatbot induced her teenage son’s suicide. The plaintiff sought to avoid First Amendment scrutiny by framing the case in terms of faulty design choices, such as failing to include appropriate age-related safeguards. In a sense, this is similar to Universal City Studios v. Corley, which held that code was speech but applied intermediate scrutiny because the law sought to regulate the functionality of the program (circumventing DVD copy protection), not its communicative elements. But the stronger argument is that, as in Phelps, the claim operates as a content-based restriction regardless of its label. The alleged harm flows entirely from what the chatbot said. Garcia’s claim was that the chatbot’s words caused her son’s death, which is as content-based as liability can get. This means that strict scrutiny would apply, and the plaintiff’s likelihood of success would be dim.

Generative AI is a new technology, but the constitutional questions it raises are familiar ones. The First Amendment does not immunize chatbots from liability. It channels that liability into the right doctrinal frameworks. Unprotected speech remains unprotected. Content-neutral regulations remain viable if well-tailored. What the Amendment forbids is using tort law or regulation as an indirect means to punish disfavored messages, regardless of how the claim is labeled. That principle is not unique to AI. It is simply the First Amendment doing what it has always done.