

Does ChatGPT Really Understand Us?

Generative AI models may sound like us, but truly understanding language requires human experience

By Sandrine Camminga

October 31, 2023

This story will appear in the 2023 issue of Contours, the Faculty of Science magazine.

Ask an AI language model like ChatGPT to generate a travel itinerary, and you’ll receive a surprisingly human-like reply. Because language models sound human, it may be tempting to think they understand words like we do — but do they?

Alona Fyshe, ’05 BSc(Spec), ’07 MSc, is one of the researchers investigating this question. An assistant professor in the Faculty of Science, she co-authored a study assessing whether nine- to 12-month-old infants and language models process language similarly. Fyshe discussed both her research and the basics of language models at a Science Talks webinar.

Thoughts can be mapped

First, Fyshe’s team recorded how infant brains and machine neural networks responded to a number of simple words such as banana and spoon.

To collect data from the infants, researchers prompted them with one word at a time. As the babies heard the word, their brain activity was recorded using an electroencephalogram, or EEG.

The team entered the same words the babies had heard into the language model and analyzed the response from the model’s neural network. Language models compute information using a series of digital neurons that take in user input and transform it into numbers before spitting out a prediction. The numbers generated by the neural network create a sort of map of the model’s ‘thought’ process.
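To make that concrete, here is a minimal sketch of how a single word can be turned into such a set of numbers. It uses an off-the-shelf pretrained model ("gpt2" via the Hugging Face transformers library) purely as a stand-in; the study’s actual models and procedure may differ.

```python
# Illustrative sketch only: pulling a word's internal vector out of a
# pretrained language model. "gpt2" is a stand-in, not necessarily the
# kind of model used in Fyshe's study.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")
model.eval()

def word_vector(word: str) -> torch.Tensor:
    """Return the model's hidden-state vector for a single word."""
    inputs = tokenizer(word, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # Average over sub-word tokens so each word gets one vector.
    return outputs.last_hidden_state.squeeze(0).mean(dim=0)

banana_vec = word_vector("banana")
spoon_vec = word_vector("spoon")
print(banana_vec.shape)  # e.g. torch.Size([768]) for GPT-2's hidden size
```

Each word ends up as a long list of numbers, and it is these numbers that the researchers compared with the infants’ brain responses.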

Similar doesn’t mean equal

Next, Fyshe and her colleagues fed the infants’ EEG data into a separate AI model that was tasked with predicting the numbers the neural network had come up with for that same word. If the neural network’s and the brain’s responses had nothing in common, the predictions would fare no better than chance.
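A rough sketch of that prediction step is below. It uses made-up placeholder arrays and an off-the-shelf ridge regression with a simplified nearest-neighbour check, not the study’s actual data, features or evaluation.

```python
# Illustrative sketch only: fit a linear (ridge) mapping from EEG features
# to a language model's word vectors, then check whether held-out
# predictions beat chance. The data here is random placeholder noise.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
n_words, n_eeg_features, n_vec_dims = 8, 60, 300
eeg = rng.standard_normal((n_words, n_eeg_features))     # EEG response per word
word_vecs = rng.standard_normal((n_words, n_vec_dims))   # network's vector per word

correct = 0
for train_idx, test_idx in LeaveOneOut().split(eeg):
    reg = Ridge(alpha=1.0).fit(eeg[train_idx], word_vecs[train_idx])
    pred = reg.predict(eeg[test_idx])[0]
    # Simplified check: is the prediction closer to the correct word's
    # vector than to any other word's vector?
    dists = np.linalg.norm(word_vecs - pred, axis=1)
    correct += int(np.argmin(dists) == test_idx[0])

print(f"held-out accuracy: {correct / n_words:.2f} (chance ~ {1 / n_words:.2f})")
```

With real EEG and word vectors, accuracy reliably above chance is what signals that the two kinds of responses share structure.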

Ultimately, their model predicted the neural network’s computations with above-chance accuracy, suggesting both the infants’ and neural networks’ responses to words were more similar than different.

“What a neural network is doing is not exactly what the brain is doing, but it’s not completely random, either,” says Fyshe.

It comes down to humanity

If a model can predict a neural network’s computed numbers based on an infant’s brain activity with some accuracy, does that mean neural networks get what we’re saying?

Not exactly. A neural network may produce a similar result to an infant brain, but that doesn’t indicate true language comprehension. Not only do neural networks differ from the brain in structure and complexity, they also lack the experience with a banana or a spoon that the infant brings.

“Neural networks don’t exist in the world. They’ve never opened a door, they’ve never seen a sunset. Can an AI that’s never had real-world experiences actually understand language that is about the world? A lot of people would say no.”

Fyshe is one of many speakers to share expertise at alumni events. For more, visit ualberta.ca/alumni/events.

