Thursday, October 1, 2009

Stay focused

The Computer Graphics Group sharpens photographs by capturing multiple low-quality images instead of a single higher-quality image.

In the image on the bottom, the eye is in the foreground and the text is in the background; both are blurry because the photographer has focused on a point between the two. A new MIT system instead captures multiple images at several focal depths and stitches them into a sharper composite (top).

Courtesy Sam Hasinoff

For photographers, it's sometimes difficult to keep both the foreground and background of an image in focus. Focusing somewhere between the two can ensure that neither is hopelessly out of focus, but neither will be particularly sharp, either. On Friday, at the IEEE International Conference on Computer Vision in Kyoto, Japan, members of the MIT Graphics Group will show that combining several low-quality exposures with different focal depths can yield a sharper photo than a single, higher-quality exposure.
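To make the compositing idea concrete, here is a minimal focal-stacking sketch in Python. It assumes a set of already-aligned grayscale exposures and uses local Laplacian energy as the sharpness cue; this is a generic illustration of the technique, not the MIT group's own algorithm.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_stack(images):
    """Merge aligned grayscale exposures focused at different depths.

    For each pixel, keep the value from whichever exposure is locally
    sharpest, scoring sharpness by windowed Laplacian energy. A toy
    illustration of focal stacking, not the authors' method.
    """
    stack = np.stack([img.astype(np.float64) for img in images])
    # Local sharpness: squared Laplacian response, averaged over a window.
    sharpness = np.stack(
        [uniform_filter(laplace(img) ** 2, size=9) for img in stack]
    )
    # Per-pixel index of the sharpest exposure, then gather those pixels.
    best = np.argmax(sharpness, axis=0)
    return np.take_along_axis(stack, best[None, ...], axis=0)[0]
```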

Given enough time, a digital camera could take a dozen well-exposed photos, and software could stitch them into a perfectly focused composite. But if the scene is changing, or if the photographer is trying to hold the camera steady by hand, there may not be time for a dozen photos. When time is short, says postdoc Sam Hasinoff, lead author on the paper, "there's a trade-off between blur, on the one hand — not having an image which is in focus — and noise, on the other. If you take an image really fast, it's really dark; it's not going to be of high quality."
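Hasinoff's trade-off can be put in back-of-the-envelope terms: splitting a fixed time budget into more exposures collects the same total light but pays a read-noise penalty on every extra readout. The model below, including the photon_rate and read_noise figures, is an illustrative assumption, not the paper's analysis.

```python
import numpy as np

def merged_snr(total_time, n_exposures, photon_rate=1000.0, read_noise=5.0):
    """Rough SNR of merging n equal exposures within a time budget.

    Assumes Poisson photon noise plus Gaussian read noise per readout;
    all parameter values here are made up for illustration.
    """
    signal = photon_rate * total_time          # total photons collected
    shot_var = signal                          # Poisson: variance = mean
    read_var = n_exposures * read_noise ** 2   # one readout per exposure
    return signal / np.sqrt(shot_var + read_var)
```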

Hasinoff, MIT professors Frédo Durand and William Freeman, and Kiriakos Kutulakos of the University of Toronto devised a mathematical model that determines how many exposures will yield the sharpest image given a time limit, a focal distance, and a light-meter reading. Hasinoff says that experiments in the lab, where the number and duration of digital-camera exposures were controlled by laptop, bore out the model's predictions.
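The published model is more involved, but a toy version conveys the shape of the optimization: a defocus term that shrinks as more focal slices tile the depth range, traded against a noise term that grows with each additional readout. Both cost terms below are illustrative stand-ins, not the paper's derivation.

```python
import numpy as np

def best_exposure_count(time_budget, depth_range, light_level, n_max=16):
    """Pick the exposure count minimizing a combined blur + noise cost.

    A hypothetical stand-in for the model described in the article:
    both cost terms are assumptions, not the published formulas.
    """
    n_values = np.arange(1, n_max + 1)
    # Residual defocus shrinks as n focal slices tile the depth range.
    blur_cost = depth_range / n_values
    # Darker per-frame captures and extra readouts make noise grow with n.
    noise_cost = n_values / (light_level * time_budget)
    return int(n_values[np.argmin(blur_cost + noise_cost)])
```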

A digital camera could easily store a table that specifies the ideal number of exposures for any set of circumstances, Hasinoff says, and the camera could have a distinct operational setting that invokes the table. The multiple-exposure approach, he says, offers particular advantages in low light or when the scene covers a large range of distances.
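In code, that in-camera table could be as simple as a keyed lookup; the keys and entries here are placeholders, not values from the paper:

```python
# Hypothetical table: (time budget in seconds, scene brightness in EV)
# -> recommended exposure count. All values are placeholders.
EXPOSURE_TABLE = {
    (0.1, 6): 8,    # dim scene, short budget: many fast frames
    (0.1, 12): 2,   # bright scene: fewer, cleaner frames
    (0.5, 6): 12,
    (0.5, 12): 4,
}

def exposures_for(time_budget, brightness_ev):
    # Snap to the nearest tabulated setting; a real camera would
    # quantize its metering to the table's grid.
    key = min(
        EXPOSURE_TABLE,
        key=lambda k: abs(k[0] - time_budget) + abs(k[1] - brightness_ev),
    )
    return EXPOSURE_TABLE[key]
```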

For the time being, however, the technique is limited by the speed of camera sensors. Today's fastest consumer cameras can capture about 60 images in a second, Hasinoff says. If the MIT researchers' model determined that, under certain conditions, the ideal number of exposures in a tenth of a second would be eight, the fastest cameras could manage only six. "But there's still a big gain to be had," Hasinoff says.
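The sensor-speed cap is simple arithmetic: the frame rate bounds how many exposures fit in the budget, whatever the model recommends.

```python
import math

def achievable_exposures(ideal_count, budget_s, sensor_fps=60):
    """Clamp the model's recommendation to what the sensor can deliver:
    at 60 frames per second, a tenth of a second fits floor(60 * 0.1) = 6
    frames, even when the ideal number is 8."""
    return min(ideal_count, math.floor(sensor_fps * budget_s))
```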

The Graphics Group's work on multiple-exposure composites uses an analytical approach first presented at this summer's Siggraph — the major conference in the field of computer graphics. There, Anat Levin, who was a postdoc at the time, Durand, Freeman, and colleagues described their "lattice-focal lens," an ordinary lens filter with what look like 12 tiny boxes of different heights clustered at its center. Each box is in fact a lens with a different focal length, which projects an image onto a different part of the camera's sensor. The raw image would look like gobbledygook, but the same type of algorithm that can combine multiple exposures into a coherent composite can also recover a regular photo from the raw image.
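The recovery step the article alludes to belongs to the deconvolution family. One standard instance is Wiener filtering, sketched below under the simplifying assumption of a single known, spatially uniform blur kernel; the authors' actual reconstruction may differ.

```python
import numpy as np

def wiener_deconvolve(raw, psf, noise_to_signal=1e-2):
    """Recover a sharp image from a blurred capture by Wiener filtering.

    Assumes one known, spatially uniform point-spread function (psf),
    a simplification of what a lattice-focal capture would need.
    """
    # Embed the PSF in a full-size kernel and center it at the origin.
    kernel = np.zeros_like(raw, dtype=np.float64)
    kh, kw = psf.shape
    kernel[:kh, :kw] = psf
    kernel = np.roll(kernel, (-(kh // 2), -(kw // 2)), axis=(0, 1))

    H = np.fft.fft2(kernel)
    G = np.fft.fft2(raw.astype(np.float64))
    # Wiener filter: invert the blur, damping frequencies where the
    # lens response is weak relative to the noise floor.
    F = np.conj(H) * G / (np.abs(H) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft2(F))
```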

"Only time will tell whether that new, proposed piece of hardware will be better than the others, but I think their way of analyzing the whole thing is brilliant," says Marc Levoy, a professor of computer science and electrical engineering at Stanford University. "There's been a lot of work on different ways of extending the depth of field, and what this paper did was, it tried to analyze all of them together. And I actually think that it's a seminal paper. I think it's a landmark paper."
