
Feature

The Advance of AI: Should We Be Worried?

Artificial intelligence is here to stay. We look behind the hype

By Bruce Grierson, '86 BA(Spec)

Illustration by Daniel Hertzberg
May 09, 2018

Last spring, Anthony Levandowski, a Silicon Valley engineer, filed papers with the U.S. Internal Revenue Service to register his new non-profit. Its mission: "the realization, acceptance and worship of a Godhead based on Artificial Intelligence developed through computer hardware and software." The IRS granted Levandowski's brainchild, called "The Way of the Future," tax-exempt status. And just like that, the first church of AI, with its own "dean," disciples and holy book (called "the manual") was born - and set to commence communion with the new god on Earth, when she comes.

"Not a god in the sense that it makes lightning or hurricanes," Levandowski told Wired magazine not long ago. "But if there is something a billion times smarter than the smartest human, what else are you going to call it?"

Now, we've heard this kind of talk for at least a half-century. A balky machine called the Perceptron was supposedly poised to take our lunch money in 1958. The device, which ran on an IBM mainframe, could "reproduce itself and be conscious of its own existence," bugled the New York Times.

But a recent event has many observers believing we've finally cracked the nut. In the second game of its match with the legendary world Go champion Lee Se-dol, the computer program AlphaGo (developed by Google's AI project DeepMind, led by University of Alberta grad David Silver, '09 PhD, and former U of A post-doctoral fellow Aja Huang) made a move that flummoxed all the analysts. It made no sense. Yet it broke the back of the human champion and sent murmurs through the culture that a kind of tipping-point moment may finally be close: a machine can think in moves that people don't understand. It smacked of the cognitive jump that AI enthusiasts have been waiting for - what AI theorist Eliezer Yudkowsky calls the "intelligence explosion." When machines can learn from their own mistakes and then go wide, creatively connecting dots as we do - but astronomically faster - it's a whole new ball game. We're talking, according to "explosionists" like Yudkowsky and cryptologist I.J. Good, about an evolutionary leap at least as big as the one from water to land, or from Earth to interplanetary life.

That's a good thing if you think the new intelligence we're uncorking is friendly. That we're heading for a happy synergy of human and machine, in which our messes will be cleaned up by a higher level of thinking than the one that created them in the first place. "The Singularity," as its chief pitchman, the futurist Ray Kurzweil, has hailed it. (Or "the Rapture of the Nerds," in the mocking coinage of its critics.)

For his part, Levandowski, a pioneer of self-driving cars, envisions our relationship with AI as more like humanity's relationship with a New Testament God. Equal partners in this arrangement we are not. He favours the term "The Transition" for the moment when machines finally outsmart us and we hand over power. The only question is whether this god will treat us as pets or livestock.

"The level of existential fear in the machine-learning community is quite a bit less than among the public."
- Mike Bowling, U of A professor of computing science

The Swedish philosopher Nick Bostrom ups the ante. Let's say you program machines to produce paper clips. Off they merrily go, mining minerals until they deplete the Earth. Then they turn to us. "The AI does not hate you, nor does it love you," Yudkowsky said of this thought experiment. "But you are made of atoms, which it can use for something else."

The rest of us could dismiss all this talk as loopy dorm-room catastrophizing if it weren't for some of the names in the conversation. Bill Gates, Elon Musk and the late Stephen Hawking have said they're worried. Should we be?

Last year, dozens of high-impact social visionaries - physicists, roboticists, philosophers, tech CEOs and the odd Nobel laureate economist among them - gathered on California's Monterey Peninsula to put some rules in place while humans still have the upper hand. They aimed to create a kind of founding document of guiding principles, a road map to Friendly AI. Ferocious debates erupted around the ethical dimensions of AI research. Strategies to foster AI's best pro-social self. Worst-case scenarios. Big questions: how to prevent an arms race of AI-enabled weapons? How to steer this thing without unduly constraining it?

Few would disagree that we need to build into AI the ability for a human referee to step in and intervene, to prevent a program from improvising on the instructions we whisper in its ear and charging off to pursue its own agenda. But how?

"Stop button," says Mike Bowling, U of A professor of computing science.

"You wouldn't build an escalator without a stop button. You don't build a robot without a stop button. If something unpredictable were to happen - just as with any other tool - we'd stop using it, turn it off, investigate, fix, and return it to working order." (This reasoning applies to an AI system we might deploy in the near future, Bowling says. Beyond that, things get pretty speculative - which isn't to say we shouldn't be speculating.)

If you've sat through one too many viewings of The Terminator, a conversation with Bowling is guaranteed to bring your blood pressure back down. "The level of existential fear in the machine-learning community is quite a bit less than among the public," says Bowling, whose research focus is machine learning, games and robotics. (Machine learning is an area in which U of A researchers are considered to be at the forefront.)

Bowling is untroubled by reports that have made some people turn pale and draw the shades. Like the news two years ago about Microsoft's self-learning bot, Tay. The company equipped Tay with a sunny disposition and parachuted it into the Twitterverse. Within 24 hours, Tay turned into a jackleg neo-Nazi, churning out poisonously racist tweets. But Tay wasn't revealing some innate germ of malevolence. It was learning how people talk on the internet. "When your children start parroting things they hear on the playground," Bowling notes, "it just looks foolish."

Plenty of AI's foremost researchers, including Richard Sutton, openly doubt computers will assume apex-predator status any time soon. Then again, we've seen this movie before. When there's a big technological disruption, the Cassandras line up against the skeptics who insist the unimaginable can't possibly happen … until it does and the whole culture heaves. Is this revolution different?

"Well, it's different in the sense that computers are the first machines designed to control other machines," says Andrew Ede, a professor who lectures on the history of technology. "In earlier technological shifts, more jobs were created than were lost. This one's going to take jobs without replacing them, and that's a valid worry." Indeed, Ede says, there's lots to be troubled by as computing power accelerates. At the top of the list is the disappearance of privacy. The "algorithms that keep such close tabs on us," as he puts it, AI will put all of that into overdrive.

Bowling worries about bad human habits so ingrained that we pass them on to machines without even realizing it. For instance, "the gender bias baked into the language manifests as sexism or racism." Then we proudly teach AI everything we know. "What if [that programming] is then used to process university admissions or mortgage applications?" But as for a hostile takeover by robots in our lifetime, these two don't buy it.

"I think the true believers in the Singularity kind of misunderstand how different the human mind is from computers," Ede says. "[Futurist, Ray] Kurzweil, in particular, thinks that once we've mapped out all the connections in the brain we can just download ourselves onto some kind of software or hardware. But our brains don't work on a binary system - ones and zeros. They work on a much more complicated electrochemical system."

At present, the fanciest computer in the world is roughly as bright as a cockroach. No one has yet made a robot that can successfully fold laundry and cook an egg. AlphaGo's victory, while remarkable, was a triumph in an extremely narrow space. We're still a long way from creating a kind of intelligence that mimics even the worst of us on our worst day.

"Neil Postman argued that information isn't knowledge - and I'd add that knowledge isn't wisdom," says Ede. "That's something people who are waiting for the Singularity don't seem to grasp."
