When bots go bad: study sheds light on the dark underbelly of artificial intelligence

To understand how to get AI right, we need to know how it can go wrong: researcher

EDMONTON — You can find artificial intelligence in many parts of everyday life, including medical diagnostics, driverless cars, personalized shopping, and facial recognition. But when AI fails, it does so “quite spectacularly,” says Vern Glaser with the Alberta School of Business.

In a recent study, “When Algorithms Rule, Values Can Wither,” Glaser illustrates why the costs can be high when human values are left out of AI design: “If you don't actively try to think through the value implications, it's going to end up creating bad outcomes,” he explains.

Examples of bad outcomes include Microsoft’s Tay, a chatbot that trolls taught to spew racist language soon after its introduction in 2016, and Australia’s “robodebt” scandal, in which an algorithm wrongfully identified more than 730,000 overpayments of unemployment and disability benefits.

“The idea was that by eliminating human judgment, which is shaped by biases and personal values, the automated program would make better, fairer and more rational decisions at much lower cost,” says Glaser. Instead, the program caused emotional trauma and stress for those it affected. Even so, Glaser believes AI still promises to bring enormous benefits to society.

To guard against bad outcomes, Glaser offers several principles to keep in mind:

Algorithms are mathematical, so they rely on digital representations of real phenomena: For example, Facebook gauges friendship by how many friends a user has or how many likes they receive on a post. “Is that really a measure of friendship? It's a measure of something, but whether it's actually friendship is another matter,” says Glaser.

The intensity, nuance and complexity of human relationships can easily be overlooked in AI: “When you're digitizing phenomena, you're essentially representing something as a number. And when you get this kind of operationalization, it's easy to forget it’s a stripped-down version of whatever the broader concept is,” explains Glaser.

AI designers should insert human oversight into algorithmic decision-making: “There's a tendency when people implement algorithmic decision-making to do it once and then let it go,” says Glaser. But AI that embodies human values requires continuous oversight to prevent its ugly potential from emerging.

AI is simply a reflection of who we are, at our best and our worst, and the worst can take over all too easily: “We want to make sure we understand what's going on, so the AI doesn't manage us,” Glaser says. “It's important to keep the dark side in mind. If we can do that, it can be a force for social good.”

To speak with Vern Glaser, please contact: Sarah Vernon | University of Alberta communications associate | svernon@ualberta.ca