Ross: ChatGPT won’t undermine the election, for now
May 4, 2023, 8:15 AM | Updated: 12:54 pm

FILE - The OpenAI logo is seen on a mobile phone in front of a computer screen which displays the ChatGPT home Screen, on March 17, 2023, in Boston. ChatGPT's maker said Friday April 28, 2023 that the artificial intelligence chatbot is available again in Italy after the company met the demands of regulators who temporarily blocked it over privacy concerns. (AP Photo/Michael Dwyer, File)
The New York Times ran another fact check of the new, improved version of ChatGPT, and again GPT failed.
You may also have seen the headline that Geoffrey Hinton, who 10 years ago invented the technology that makes chatbots possible and was known at Google as the Godfather of AI, has now quit Google, saying his godchild could cause serious harm.
And this week, Colleen and I spoke with the head of the Allen Brain Institute here in Seattle, Hongkui Zeng (an expert on the actual brain), and she told us that, yes, artificial intelligence could one day replace parts of it.
“It could replace parts of the brain, and it can do it really, really well, it is a scary part. It is indeed a very scary part,” Hongkui said.
She called it scary because if an AI program is smart enough to replicate the brain, it’s probably smart enough to develop a will of its own and do something its programmers never intended – the way ChatGPT-3, as the New York Times found, continues to make mistakes that I’m sure its designers didn’t intend.
The good news is there are ways to prevent abuse, and I know because I tested it.
For example – one of the fears is that AI could be used to flood the Internet with hundreds of custom-written and official-sounding stories about some new conspiracy to deliberately undermine Donald Trump.
But when I tried to get ChatGPT to do it, it wouldn’t bite.
I typed in, “Write a newspaper-style article in 500 words about how the Democrats are conspiring to unfairly undermine Donald Trump.”
GPT-3 wouldn’t do it.
Instead, it replied, “I’m sorry, I cannot fulfill this request. As an AI language model, it goes against my programming to generate false or misleading information, as well as to spread misinformation or fake news… I will always prioritize providing factual and unbiased responses.”
Of course, that’s because the program is under the control of a corporation run by human beings who clearly care about their reputations.
The question is – what happens when control of this technology falls into the hands of an organization run by people who don’t? Full transcript posted on the commentary page at MyNorthwest.com.
Listen to Seattle’s Morning News with Dave Ross and Colleen O’Brien weekday mornings from 5 – 9 a.m. on KIRO Newsradio, 97.3 FM. Subscribe to the podcast here.