Computing Science

Research Profile: The U of A's Camera Array

“Tele” means “at a distance.” So television brings vision from a distance to you. What we are trying to do is bring not just that vision, but the whole scene to you… I think it will bring about a new way of sensing the environment, a new way of actually knowing our environment, which goes beyond just two-dimensional.

- Herb Yang, Professor of Computing Science, University of Alberta


Cheng Lei explaining the uses of the camera array.

It is a movie moment that has been imitated often but is still stunning. Actress Carrie-Anne Moss, mid-fight with a policeman, jumps straight up in the air to deliver an almighty kick to his chest. As she hangs in mid-air, ready to strike, the scene freezes. That in itself isn't special, but what happens next is: our viewpoint wheels rapidly around her in a half-circle, showing her from all the angles you would see if you walked around her. Yet time stands still, and she remains frozen in mid-air. In real life, seeing such a thing would be impossible.

This shot, along with the other time-defying, roving-camera shots from The Matrix, the 1999 hit movie, was created with the help of a camera array: a system of cameras used to capture a scene simultaneously from different angles.

The Matrix’s camera array consisted of 122 still cameras (cameras for shooting still pictures, not video) set up in a circle. As the action unfolded, the cameras fired one after another in rapid sequence. Afterwards, the special effects team used painstaking image-based rendering (IBR) techniques to blend the pictures into one seamless panning shot.
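How might software blend a ring of still photographs into motion? The Python fragment below is only a toy illustration with hypothetical function names, assuming the crudest possible blending, a plain cross-dissolve between neighbouring cameras; real IBR techniques, including those used on the film, warp images using scene geometry or optical flow before blending.

    # A toy stand-in for the blending step: synthesize "in-between" views
    # from each pair of neighbouring cameras by cross-dissolving. Real IBR
    # warps pixels using scene geometry or optical flow before blending;
    # this only shows the basic idea.
    import numpy as np

    def interpolate_view(img_a, img_b, t):
        """Blend two neighbouring camera images; t runs from 0 (A) to 1 (B)."""
        if img_a.shape != img_b.shape:
            raise ValueError("adjacent views must share a resolution")
        mixed = (1.0 - t) * img_a.astype(np.float64) + t * img_b.astype(np.float64)
        return mixed.astype(img_a.dtype)

    def render_sweep(views, steps_between=8):
        """Turn a ring of still frames into one smooth panning sequence."""
        frames = []
        for a, b in zip(views, views[1:]):
            for k in range(steps_between):
                frames.append(interpolate_view(a, b, k / steps_between))
        frames.append(views[-1])
        return frames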

These techniques were revolutionary at the time. The next revolution could come from the University of Alberta (U of A), says Herb Yang, a U of A professor of computing science.

One of the many examples of versatile camera array work.

Yang, Ph.D. student Cheng Lei, and other U of A researchers are developing a camera array that could be superior to all existing camera arrays, including an impressive eight-camera array that Microsoft unveiled in 2004. Unlike every other camera array they know of, theirs will be asynchronous, readily portable, and dynamic.

This means a team of videographers could easily pack it up and take it on location. They could set up individual cameras anywhere they like, shoot a scene at different times rather than all at once, and even walk around and shoot with handheld cameras if they so desired. The U of A camera array will also be able to operate with varying numbers of cameras and varying camera models, further traits that set it apart from other arrays.

The versatile U of A camera array will be able to do all this because of cutting-edge algorithms designed by Lei and other team members. “The reason we can use our array asynchronously is we have developed a very nice algorithm for synchronizing video cameras,” says Yang. “Cheng is very close to completing an algorithm that can synchronize and do all the other calibrations in one swoop… That’s something rarely done because there are many challenges, but I think we are meeting these challenges with interesting results.”
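The article leaves the algorithm's details to the team's publications. As a rough illustration of the underlying problem, one common approach to aligning two unsynchronized videos, sketched below in Python and not to be read as the team's method, reduces each stream to a per-frame motion signal and searches for the time offset that best correlates the two signals.

    # One standard approach (an assumption here, not the team's published
    # method): reduce each video to a per-frame "motion energy" signal,
    # then pick the whole-frame offset that best aligns the two signals.
    import numpy as np

    def motion_energy(frames):
        """frames: (T, H, W) grayscale video as a NumPy array.
        Returns a length T-1 signal of mean frame-to-frame change."""
        diffs = np.abs(np.diff(frames.astype(np.float64), axis=0))
        return diffs.reshape(diffs.shape[0], -1).mean(axis=1)

    def estimate_offset(frames_a, frames_b):
        """Return the lag k (in frames) such that frame n of stream B
        shows roughly the same instant as frame n + k of stream A."""
        a = motion_energy(frames_a)
        b = motion_energy(frames_b)
        a = (a - a.mean()) / (a.std() + 1e-12)   # normalize so correlation
        b = (b - b.mean()) / (b.std() + 1e-12)   # compares shape, not scale
        corr = np.correlate(a, b, mode="full")
        return int(np.argmax(corr)) - (len(b) - 1)

A real system would refine this to sub-frame accuracy and fold in the geometric calibration Yang mentions; this sketch only recovers the offset to the nearest frame.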

Some excellent stereo algorithms have also been developed for this camera array. A stereo algorithm uses the visual information collected by two or more cameras to determine distances, much as a pair of human eyes does. U of A stereo algorithms were ranked seventh on the Middlebury College stereo vision research page for quite some time, and Yang says researchers at other universities have been paying attention to the U of A's stereo algorithm work.
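For readers curious what a stereo algorithm actually computes, the following textbook sketch, which is not the team's Middlebury-ranked method, matches small blocks along the scanlines of a rectified image pair to recover each pixel's disparity (its horizontal shift between the left and right images), then converts disparity to depth with Z = f × B / d, where f is the focal length in pixels and B the distance between the two cameras. All parameters are illustrative.

    # A textbook sketch, not the U of A method: for rectified stereo pairs,
    # brute-force block matching recovers disparity, and depth follows from
    # Z = f * B / d (f: focal length in pixels, B: camera baseline).
    import numpy as np

    def disparity_map(left, right, max_disp=64, block=5):
        """SAD block matching on rectified grayscale images of equal size."""
        h, w = left.shape
        half = block // 2
        L, R = left.astype(np.float64), right.astype(np.float64)
        disp = np.zeros((h, w))
        for y in range(half, h - half):
            for x in range(half, w - half):
                patch = L[y - half:y + half + 1, x - half:x + half + 1]
                best_d, best_cost = 0, np.inf
                for d in range(min(max_disp, x - half) + 1):
                    cand = R[y - half:y + half + 1, x - d - half:x - d + half + 1]
                    cost = np.abs(patch - cand).sum()   # sum of absolute differences
                    if cost < best_cost:
                        best_cost, best_d = cost, d
                disp[y, x] = best_d
        return disp

    def depth_from_disparity(disp, focal_px, baseline_m):
        """Convert disparity (pixels) to depth (metres); 0 disparity -> infinity."""
        out = np.full_like(disp, np.inf, dtype=np.float64)
        np.divide(focal_px * baseline_m, disp, out=out, where=disp > 0)
        return out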

It all adds up to a camera array that reaches beyond the world of moviemaking to be useful in scientific research and everyday applications.

For example, Yang is working with Sally Leys from the U of A Department of Biological Sciences to develop a small camera array for observing ocean life. He would also like to explore how camera arrays could help surgeons. “In surgery, a surgeon may be able to see one view of the patient while doing surgery. Now what if the surgeon was able to look at other views of parts he is operating on without moving his head? Maybe the doctor will be able to do the surgery more accurately. If we were able to provide an unobstructed view of surgery, medical students would also benefit a lot.”

Another possible application is crime scene investigation. “Right now people take pictures in multiple locations. But we could set up the camera array at a crime scene and capture as many possible views as available and use the information to provide investigators with a more complete look-around view of the area,” says Yang. “We could populate Whyte Avenue with many cameras and find out who’s making trouble.”

If researchers like Yang and Lei continue to streamline camera arrays, which they undoubtedly will, it may become commonplace for movies and TV to be shot three-dimensionally rather than two-dimensionally, giving directors the freedom to choose from virtually any angle in a scene. Viewers at home might even be able to change the angle with their remote control, Yang says.  

John Gaeta, the visual effects supervisor for the Matrix trilogy, agrees. In his DVD commentary for The Matrix, he says, “It’s just a matter of time [before] standardized techniques for shooting a scene will result in that scene becoming a complete three-dimensional entity just by way of the number of cameras you use, the number of perspectives you use… All the customized software related to that is a clear evolutionary track towards dimensional filmmaking.”

Article and photo by Erin Ottosen, 2007.