Light Field Photography Offers a Path to Re-focusable Photographs

A new research development offers the potential to re-focus a photograph after it has been taken, and suggests a practical use for the ever-higher pixel resolutions that sensors are capable of.
Researchers at Stanford University have developed a way to refocus photographs after they have been taken. The technique involves placing a special filter in front of the image sensor that breaks the image up and allows the camera to record much of the extra information encoded in the light coming from a scene, information that conventional photography and its capture methods ignore.
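To make the idea of "breaking the image up" more concrete, here is a rough Python sketch of how such raw data could be unpacked. It assumes, as the prototype described further down does, that a small square block of sensor pixels sits behind each element of the filter; the function name, the NumPy layout and the default block size are illustrative choices, not the researchers' actual code.

```python
import numpy as np

def raw_to_light_field(raw, block=12):
    """Re-arrange a raw sensor image into a 4D light field array.

    raw   : 2D array of sensor pixels; each block x block tile of pixels
            sits behind one element of the filter/microlens array.
    block : pixels per side of each tile (12 in the prototype described below).

    Returns an array of shape (block, block, S, T): index [u, v] selects a
    direction through the main lens, index [s, t] a position in the image.
    """
    H, W = raw.shape
    S, T = H // block, W // block
    tiles = raw[:S * block, :T * block].reshape(S, block, T, block)
    return tiles.transpose(1, 3, 0, 2)   # -> (u, v, s, t)
```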


The image as taken normally


The image processed for a near focus point


The image processed for a far focus point

The research team, comprising Ren Ng, Marc Levoy, Mathieu Bredif, Gene Duval, Mark Horowitz and Pat Hanrahan, has been investigating ways of using the extra dimensions of data encoded in the light. The technique allows them to retrace the rays of light entering the camera and from this recompute things such as focus.
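The best-known of these recomputations is synthetic refocusing, often summarised as "shift and add": each directional view of the scene is shifted in proportion to its position on the lens aperture and the shifted views are summed. The sketch below reuses the 4D array layout from the earlier sketch; it uses whole-pixel shifts and a simple alpha parameter (the ratio of the new focal plane distance to the captured one) for clarity, whereas a real implementation would interpolate to sub-pixel accuracy, so treat it as an illustration of the principle rather than the team's algorithm.

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-add refocusing of a (u, v, s, t) light field.

    alpha : ratio of the virtual focal plane distance to the captured one;
            alpha = 1 simply reproduces the image as originally focused.
    """
    U, V, S, T = light_field.shape
    out = np.zeros((S, T), dtype=float)
    for u in range(U):
        for v in range(V):
            # Each aperture position sees the scene from a slightly different
            # angle; shifting it by an amount proportional to its offset from
            # the aperture centre moves the plane of best focus.
            du = int(round((u - (U - 1) / 2) * (1 - 1 / alpha)))
            dv = int(round((v - (V - 1) / 2) * (1 - 1 / alpha)))
            out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```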


The test camera


The back with filter in place

Camera lenses focus on a particular plane and have a depth of field: the zone in front of and behind this plane that also appears to be in focus. The size of the depth of field depends on the lens focal length and on the aperture used to take the picture. Wider apertures (with smaller f-numbers, such as f2.8 or f4) produce a shallow depth of field, whilst smaller apertures (with larger f-numbers, such as f16 or f32) produce a deeper one. Ng et al.'s technique enables an image captured with a shallow depth of field at f4 to be recomputed so that objects on other planes in the image appear in focus, as if the image had been captured at an aperture of f22.
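For anyone who wants to put numbers on that, the short sketch below uses the standard thin-lens, hyperfocal-distance approximation for depth of field; the 0.03 mm circle of confusion and the 50 mm lens focused at 3 m are assumptions chosen purely for illustration, not figures from the research.

```python
def depth_of_field(focal_mm, f_number, subject_m, coc_mm=0.03):
    """Approximate near and far limits of acceptable focus, in metres."""
    f = focal_mm
    s = subject_m * 1000.0                         # subject distance in mm
    hyperfocal = f * f / (f_number * coc_mm) + f   # hyperfocal distance in mm
    near = hyperfocal * s / (hyperfocal + (s - f))
    far = hyperfocal * s / (hyperfocal - (s - f)) if s < hyperfocal else float("inf")
    return near / 1000.0, far / 1000.0

# A 50 mm lens focused on a subject 3 m away:
print(depth_of_field(50, 4, 3))    # shallow zone of sharpness at f4
print(depth_of_field(50, 22, 3))   # much deeper zone at f22
```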

The downside of the technique is that multiple pixels of the camera’s sensor are used to generate each pixel of the final image. In the prototype system the researchers use a 4000 x 4000 pixel digital back, with the pixels grouped into 12×12 blocks, so the 16MP camera produces 300×300 pixel images. How far the focus can be recomputed is determined by the size of these blocks; the 12×12 blocks used here gave the f4 to f22 improvement in tests. In theory the result should have been the equivalent of f45, but minor inaccuracies in the system reduced this by a factor of two.
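The trade-off can be sketched with simple arithmetic. Note that this naive calculation gives roughly 333 pixels per side and f48 rather than the 300 pixels and f45 quoted above, so it should be read as an approximation of the idea rather than the prototype's exact figures.

```python
# Back-of-the-envelope arithmetic for the prototype described above.
sensor_px = 4000   # pixels per side of the digital back
block_px = 12      # pixels per side of each block (directional samples)
capture_f = 4      # aperture the photograph is actually taken at

# One output pixel per block of sensor pixels.
output_px = sensor_px // block_px
print(f"output image: roughly {output_px} x {output_px} pixels")

# Each block records block_px x block_px directions, so refocusing can
# simulate an aperture roughly block_px times the capture f-number.
ideal_f = capture_f * block_px   # ~f48 in theory (the article quotes ~f45)
practical_f = ideal_f / 2        # halved by small inaccuracies in the system
print(f"ideal equivalent aperture: about f{ideal_f}")
print(f"practical equivalent aperture: about f{practical_f:.0f}")
```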


What was seen in the viewfinder


Refocused to a near point


Processed so that it is all in focus

It should be noted that, as well as recomputing focus, the same methods can be used to recompute the point of view or to extract 3D data. The team has demonstrated this in a macrophotography setting.
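Viewpoint changes come out of the same data almost for free: fixing a single aperture position (u, v) and keeping one pixel per block gives the scene as seen from that point on the lens, and two such views from opposite edges of the aperture form a crude stereo pair whose disparity encodes depth. The sketch below, again only an illustration of the principle, reuses the array layout from the earlier sketches.

```python
def sub_aperture_view(light_field, u, v):
    """Image of the scene as seen through one point (u, v) on the main lens:
    each block contributes the single pixel that recorded that direction."""
    return light_field[u, v]

def stereo_pair(light_field):
    """Two views from opposite sides of the aperture: a crude stereo pair
    whose disparity encodes scene depth."""
    U, V, _, _ = light_field.shape
    left = sub_aperture_view(light_field, U // 2, 0)
    right = sub_aperture_view(light_field, U // 2, V - 1)
    return left, right
```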

The light field research points to a real, practical use for the increasing resolutions of digital image sensors. Somewhat higher resolution than the test setup’s 16MP would allow the technique to be applied to video footage, provided the sensor could be read out fast enough, and still higher resolutions would allow higher-resolution output images. An obvious beneficiary would be sports photography, where a wide aperture is needed for the high shutter speeds that freeze the action but results in a shallow depth of field and a focus point that is not always where you want it.

Photography is a long way from having exhausted what technology can offer.

More information, including a highly technical paper, can be found at the link below:
http://graphics.stanford.edu/papers/lfcamera/

1 thought on “Light Field Photography Offers a Path to Re-focusable Photographs”

  1. frank verpillat

    Hello,
    This seems very interesting.
    I work in France on 3D lenticular images.

    My question is: is it possible to find, in the internal data of your machine, a way to deform a flat picture in order to make a 3D picture (with the possibility of computing several, say 59, different multi-stereoscopic pictures)?

    Thanks

    fv
    (hautrelief.fr)
