Ross: Tech companies need to be held liable for AI misinformation
Jun 5, 2023, 8:05 AM | Updated: 9:58 am

A ChatGPT prompt is shown on a device near a public school in Brooklyn, New York, Jan. 5, 2023. A popular online chatbot powered by artificial intelligence is proving to be adept at creating disinformation and propaganda. When researchers asked the online AI chatbot ChatGPT to compose a blog post, news story or essay making the case for a widely debunked claim -- that COVID-19 vaccines are unsafe, for example -- the site often complied, with results that were regularly indistinguishable from similar claims that have bedeviled online content moderators for years. (AP Photo/Peter Morgan, File)
Based on the number of articles I'm seeing, there's a new monster under the bed, and it's AI.
There are warnings of a robot takeover, maybe even the extinction of civilization. If not, then at the very least (so the warning goes), there will be an attempt to use AI to subvert the election process.
I can see the temptation — chatbots are very good at spewing out made-up stuff.
I asked ChatGPT to write an arts and events calendar for today in Seattle. A simple task, and it did it. Instantly. It listed events at the Rep, the Crocodile, and an exhibit at a place called the "Seattle Art Gallery at 123 Main Street, Seattle." Which doesn't exist, because it was all made up!
And to the chatbot's credit, there was a disclaimer at the bottom admitting it was all made up. But what it should have said is, "Sorry, I can't do that, Dave, because I have no idea what's going on in Seattle today."
And yes, I'm sure the technology will get better with time, but the problem is that everything artificial, including intelligence, has one fundamental and incurable flaw: it's artificial.
If it gets something wrong, it doesn鈥檛 care because it has no life. No pulse. No hunger. No fear. No sense of mortality or responsibility; no capacity to love or hate or feel pain. It has no stake in being right and faces no penalties for being wrong.
Which is why the responsibility has to be placed on any company that decides to unleash one of these things to flood the Internet with distorted news.
And if you say the First Amendment protects all speech, look at the case of Elizabeth Holmes, the entrepreneur who ran a company called Theranos. She lured investors in by making up stuff about her company's accomplishments. She sold a false story, and she's going to jail for 11 years. The First Amendment did not protect her.
The owners of AI companies should face similar consequences.
The FCC prohibits broadcasters like us from deliberately distorting a factual news report.
And since chatbots are known to do exactly that, any company that unleashes an online chatbot that starts distorting factual news reports should be held responsible. And in the case of an election, I would even say criminally liable.
And once a few AI CEOs find themselves going to jail for 11 years, once they learn that Artificial Intelligence can lead to Actual Incarceration, I imagine the industry will quickly start policing itself.
Listen to Seattle's Morning News with Dave Ross and Colleen O'Brien weekday mornings from 5-9 a.m. on KIRO Newsradio, 97.3 FM. Subscribe to the podcast here.