CS Doctoral student uses image fusion to enhance photos
How many times has a great portrait turned into a silhouette because your subject was standing in front of a window? Or perhaps you have taken a photo in bright sunlight and lost all of the details to a blinding white spot? Well, maybe someday soon your camera will be able to correct this for you!
Both of these scenarios have to do with over- (too bright) or under- (too dark) exposed regions of the photograph. Rui Shen, a Ph.D. student in the Department of Computing Science, is exploring image fusion techniques to correct problems with image exposure.
“We propose a novel probabilistic model-based fusion technique for multi-exposure images to combine the scene details revealed under different exposures,” says Rui.
The technique combines multiple images of the same scene, captured at varying exposure settings, into a single image that best represents the scene's details. Unlike previous multi-exposure fusion methods, the one Rui is working on also aims to achieve an optimal balance between local contrast and color consistency.
“A generalized random walks framework is proposed to calculate a globally optimal solution subject to the two quality measures by formulating the fusion problem as probability estimation,” explains Rui. “Experiments demonstrate that our algorithm generates high-quality images at low computational cost.”
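To give a flavor of the basic idea, the sketch below shows a much simpler form of multi-exposure fusion: each pixel in each input image gets a "well-exposedness" weight (high near mid-gray, low where the pixel is blown out or crushed), the weights are normalized into per-pixel probabilities, and the fused image is the weighted average. This is only an illustrative sketch, not Rui's generalized random walks algorithm, and the function names and the `sigma` parameter are our own assumptions.

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    # Weight each pixel by its closeness to mid-gray (0.5).
    # Under-exposed (near 0) and over-exposed (near 1) pixels
    # receive small weights. (sigma is an illustrative choice.)
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_exposures(images, sigma=0.2, eps=1e-12):
    """Fuse aligned grayscale images (float arrays in [0, 1]) taken
    at different exposures into a single image via a per-pixel
    weighted average -- a simplified stand-in for probabilistic fusion."""
    stack = np.stack(images)                   # shape: (n_images, H, W)
    weights = well_exposedness(stack, sigma)   # per-pixel quality weights
    weights /= weights.sum(axis=0) + eps       # normalize into probabilities
    return (weights * stack).sum(axis=0)       # weighted average per pixel
```

For example, fusing an under-exposed, a mid-exposed, and an over-exposed shot of the same scene would let the mid-exposed values dominate wherever the other two shots lose detail.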
In addition to potentially being incorporated into digital cameras or photo processing software, image fusion techniques also have applications in medical image enhancement.
Article & photos, 2011.