Simultaneous localization and mapping (SLAM) enables robots to explore environments that are completely unknown to them.
- Jonathan Klippenstein, M.Sc. Computing Science, University of Alberta
Graduate students Jonathan Klippenstein and Jing (Cathy) Wu.
Every day, your brain does nifty things for you that you don’t even notice. For example, as you read this, you know exactly where you are. If you were to get up and walk around, you would continue to know exactly where you are, even as you change positions.
Knowing where you are might seem ridiculously obvious, but for a robot, it isn’t so straightforward. Robots don’t have our inherent ability to instantaneously analyze our surroundings and know where we are in the context of these surroundings. In order to localize itself (to know where it is), a robot needs a map of sorts.
Though localization doesn’t come naturally to robots, it is a research problem that has essentially been solved, says Dr. Hong Zhang, a professor of computing science at the University of Alberta (U of A).
“With localization, if you are given a precise map, determining the robot’s location is easy. You just make reference to known locations in the map to figure out where you are,” he says. “Mapping is also easy if you know exactly where the robot is all the time.”
Where it gets tricky is getting the robot to figure out where it is when it doesn’t have a map. In this scenario, the robot has to construct a map of its surroundings as it moves around, and at the same time, it has to figure out where it is within this map.
This problem is called SLAM (simultaneous localization and mapping), and it is the key to enabling robots to navigate autonomously, to move through the world as effortlessly as humans do. It’s a hot research area, and an area that Zhang and several of his students work in.
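The interlocking nature of the problem can be sketched in a few lines of code. This is a deliberately tiny, hypothetical 1-D illustration (not the U of A group's actual algorithm): the robot predicts its position from odometry, and each time it re-sights a landmark it splits the disagreement between its pose estimate and its map, a crude stand-in for the Kalman-filter update used in real SLAM systems.

```python
def slam_step(pose_est, landmark_map, odom, observations, gain=0.5):
    """One step of a toy 1-D SLAM loop: predict the pose from odometry,
    then correct pose and map jointly from range observations.

    observations: list of (landmark_id, measured_range) pairs, where
    range is the landmark's position minus the robot's position.
    """
    pose_est += odom  # dead-reckoning prediction (errors accumulate here)
    for lid, rng in observations:
        if lid not in landmark_map:
            # New landmark: place it on the map relative to the current pose.
            landmark_map[lid] = pose_est + rng
        else:
            # Known landmark: the gap between predicted and measured range
            # is split between correcting the pose and correcting the map.
            innovation = rng - (landmark_map[lid] - pose_est)
            pose_est -= gain * innovation
            landmark_map[lid] += gain * innovation
    return pose_est, landmark_map
```

With noise-free inputs the loop is self-consistent: a landmark first seen at range 4 from pose 1 lands at map position 5, and a later sighting at range 3 from pose 2 produces zero correction.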
The U of A spin on SLAM
“Our niche in SLAM research is to use the camera (as the robot’s vision sensor),” says Zhang.
“The camera is challenging because you get a 2D projection of a 3D world. You’ve lost a dimension in the projection process and you have to recover that. But camera vision is very important because it has information that you don’t get from proximity sensors.”
Laser range scanners have often been used as the “vision” sensor for robots, says master’s student Jonathan Klippenstein. However, they are typically expensive, whereas cameras keep getting cheaper and cheaper.
“You get a lot more information from cameras,” adds Klippenstein, “because you can look at the appearance of something… Whereas the laser has no way of identifying things. It just says along this angle there’s an object this far away.”
Research on the camera as a SLAM sensor
Klippenstein’s research is on feature extraction, the process of identifying landmarks for the robot’s map. These landmarks are points of interest in the images provided by the camera sensor.
“The visual processing community has been working with this sort of thing for years, but no one’s examined what the best solution is for SLAM. That’s what I’ve been working on,” says Klippenstein.
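The intuition behind feature extraction can be shown with a toy corner detector, a hypothetical sketch rather than any method from Klippenstein's work: a pixel makes a good landmark only if the image intensity changes strongly in both directions, since a point on a uniform wall or along a plain edge looks the same as its neighbours and cannot be re-identified later. (Real detectors such as Harris corners or SIFT follow the same principle with far more sophistication.)

```python
def corner_features(img, threshold=1.0):
    """Toy corner detector on a 2-D list of intensities: flag pixels whose
    intensity gradient is strong in BOTH x and y, so they can serve as
    re-identifiable landmarks for a map."""
    h, w = len(img), len(img[0])
    features = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ix = (img[y][x + 1] - img[y][x - 1]) / 2.0  # horizontal gradient
            iy = (img[y + 1][x] - img[y - 1][x]) / 2.0  # vertical gradient
            score = min(abs(ix), abs(iy))  # weak in either direction -> no corner
            if score >= threshold:
                features.append((x, y, score))
    return features
```

On an image containing a bright square, only the square's corner fires: points along its edges have a strong gradient in one direction but not the other, so they are rejected as landmarks.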
Another master’s student, Jing (Cathy) Wu, is working on a camera sensor model that predicts errors the camera sensor makes in measuring and depicting the world. Predicting these errors helps the robot compensate for them, Wu says.
Dealing with uncertainty
The errors made by the camera sensor are part of the uncertainty inherent in all real-world systems. Robot researchers use the term uncertainty to refer to the unpredictable nature of environments—for example, the varying speeds of cars on a highway—and errors made by the robot’s software and hardware.
“One thing you learn very quickly when you start doing robotics is that the hardware you use is never accurate,” says Klippenstein.
Uncertainty is arguably the number one obstacle in the way of robots moving independently through the world without human help. The problem of uncertainty has spawned a whole field of research called probabilistic robotics, which seeks solutions for managing the uncertainty that robots must deal with. One such solution is Wu’s camera sensor model.
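The core trick of probabilistic robotics can be illustrated with a one-dimensional fusion step, a generic textbook sketch rather than Wu's actual sensor model: instead of trusting either the robot's prior estimate or a noisy measurement outright, each is weighted by how certain it is, and the fused estimate is always more certain than either input.

```python
def fuse(est, var, meas, meas_var):
    """1-D Kalman-style fusion: blend a prior estimate (est, with variance
    var) and a noisy measurement (meas, with variance meas_var), each
    weighted by its certainty (inverse variance)."""
    k = var / (var + meas_var)   # gain: how much to trust the measurement
    new_est = est + k * (meas - est)
    new_var = (1 - k) * var      # fused variance is smaller than either input
    return new_est, new_var
```

For example, fusing a prior of 0 and a measurement of 2, each with variance 4, gives an estimate of 1.0 with variance 2.0, so every measurement, however noisy, leaves the robot a little less uncertain about where it is.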
Thanks to probabilistic robotics, the ability of robots to navigate autonomously has improved dramatically in recent years. Robots can play soccer, drive cars, vacuum and mop your floor, and explore the bottom of the ocean.
With the research efforts of people like Wu, Klippenstein, and Zhang, robots will undoubtedly get better and better at finding their way around. And some day, robots will probably know where they are just as well as you do.
Article and photos by Erin Ottosen, 2007.