Many of you know that I’m an Artificial Intelligence skeptic. I don’t believe machines can achieve consciousness, and I don’t believe they can think in the human sense, because they don’t have human needs.
I do believe that if you feed them enough raw text they can mimic an interview, but ultimately, they’re just assembling the written thoughts of actual people.
And that’s what University of Washington Professor Yejin Choi is trying to change – by bringing some common sense to AI programs. She was awarded a MacArthur grant for recognizing that you can’t simply feed raw text into these programs and expect them to think straight.
“Currently the AI just consumes any data out there but that’s not safe. That’s dangerous even for humans,” Choi said.
So she came up with a solution.
“One way to fix this, to make an analogy, is to write textbooks for machines to learn from, in the same way that humans also learn from textbooks, not just any random data from the internet,” Choi said.
Yes – her idea is to prepare a textbook to help the computer teach itself to recognize information that might be compromised by sexism, racism, or outright lies.
“And then that textbook could consist of examples of what’s right from wrong,” Choi explained.
But – here’s the catch – a lot depends on who writes the textbook.
“I mean, there are cases that are so obvious that it’s a clear case of sexism and racism,” Choi said. “But then there are cases where two people disagree depending on their upbringing, or depending on their depth of understanding of the issue. Someone might think that oh, ‘that’s freedom of speech,’ while others think that that’s a clear case of microaggression.”
What that tells me is that an AI program is going to take on the personality and prejudices of its creator.
“That’s right. The creator should be not just one person, but a diverse set of people representing it, but even if so, it’s going to reflect some biases,” Choi confirmed.
So the idea is to get a spectrum of smart, well-adjusted people to feed the AI program examples of right and wrong so that the computer can teach itself which information to accept and which information to avoid.
And some of you may be thinking – that’s great! In fact, if you can come up with a universal truth filter, why limit it to computers? Why not teach it to humans?
The answer, of course, is that it’s been tried for thousands of years and we’re still fighting over things like pronouns and bathrooms.
I say we test it on the computers first, and see if they really learn to behave, or just try to unplug each other.
Listen to Seattle’s Morning News with Dave Ross and Colleen O’Brien weekday mornings from 5 – 9 a.m. on KIRO Newsradio, 97.3 FM.