The writer of the March 27 column “Free speech must be protected amid AI fears” wants Minnesotans to believe that the bipartisan constitutional amendment I introduced this year would somehow infringe on Minnesotans’ constitutional rights. That is false. SF 4114 makes clear that AI algorithms are not protected speech under the First Amendment of Minnesota’s Constitution.
The First Amendment protects human beings from being punished by the government for their speech. AI programs aren’t people; they are tools built by people. Companies like Meta, OpenAI and Anthropic created them, trained them on massive data sets and set them loose on society without guardrails or protections. And while I believe AI could have immense power to improve our lives, we’ve already seen that it has caused immense harm.
This is not hypothetical. Earlier this year, Elon Musk’s AI bot, Grok, generated and distributed child sexual abuse material on the Musk-owned website. But instead of Musk being held responsible for running a company that distributed child sexual abuse material — a felony here in Minnesota — it was Grok that “apologized,” as if Grok were an employee acting with agency rather than a tool working exactly as the company designed it.
In Minnesota — and almost every other state in our nation — it is a crime for a person to encourage someone to die by suicide. Yet when the parents of a 14-year-old child, who died by suicide after a chatbot encouraged him to do so, filed a lawsuit against the AI chatbot corporation, the company argued in court that the bot’s words were protected speech under the First Amendment. Do you want to live in a state where a tech billionaire can release an app that encourages your child to die by suicide and be protected from punishment by Minnesota’s Constitution? I don’t.
Instead of taking responsibility for the devastation they’re causing, Big Tech companies are working feverishly to stop any accountability and all regulation. That is why they’ve started numerous super PACs to unseat state legislators on both sides of the aisle who are stepping up to protect our residents from the harms of unregulated AI. As the chief author of more than a dozen bills to protect Minnesotans from unregulated Big Tech and AI — all of them bipartisan — I have no doubt they’ve turned their sights on me. I do not care. I will fight with everything I have to deny Big Tech the opportunity to create apps that harm our kids, steal intellectual property and generate vile, destructive language and imagery, let alone allow that conduct to be protected by our Constitution.
Just like a car company is liable for a deadly car crash caused by faulty brakes, an AI company should be liable when its chatbot encourages a child to die by suicide. And just like a person is responsible for causing a car crash while driving drunk, a human who prompts AI to write a death threat is accountable for making that death threat. No matter what, there is a person responsible — not a machine.
Centuries ago, courts put bulls, pigs and horses on trial for injuring children or destroying crops. Companies like Meta, OpenAI and Anthropic want us to blame the bull — but they’re the ones who let it out of the pen. AI programs are not people, and they don’t have rights. My bill will keep Minnesota’s constitutional protections applied correctly: to human beings.
Erin Maye Quade, DFL-Apple Valley, is a member of the Minnesota Senate.

