I don't think this is from a single shot. The work of Mark Levoy and
the other Computational Photography people use many, many shots and
"choose" the rays they'd like from the large, virtual aperture created
by moving the camera. What I'd like to know is, how is he "choosing"
the focal plane in this setup?
It also appears to be distinct from Tufuse, which blends focus-bracketed
shots; this technique does not require focus bracketing.
Hopefully he does a more comprehensive writeup detailing his
technique, or someone here has a little more info.
So.. here's how I do it (and why)...
I use Hugin, a program designed to stitch panoramas, for its ability to precisely remap photographs into the same plane in three dimensions. It's a shortcut that avoids a lot of programming and debugging I'd rather not do for what was meant to be a simple experiment to satisfy my curiosity.
The use of Hugin also means that I don't have all of the nice equations and machinery to precisely do what I want to do next: dynamically move the focal plane. I may go there and do the grunt work, for this and other reasons... but not right now.
The general point of it all is to generate a virtual focal plane that allows several views to be combined into a new, coherent composite. This has the benefit of forcing anything not in the focal plane out of focus, and into the bokeh of the photo.
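A toy sketch of the idea (not my actual pipeline, which uses Hugin): simulate a few hand-held shots of a scene with one feature on the chosen focal plane and one "foreground" feature (like a chain) that shows more parallax, undo each camera offset, and average. The numbers and the 2x-parallax model are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, N = 64, 64, 8

# Simulated random camera offsets (pixels) for N hand-held shots.
offsets = rng.integers(-5, 6, size=(N, 2))

frames = []
for dy, dx in offsets:
    img = np.zeros((H, W))
    # Feature on the chosen focal plane: its apparent position shifts
    # exactly with the camera offset.
    img[32 + dy, 32 + dx] = 1.0
    # Foreground feature (closer to the camera, e.g. a chain link):
    # more parallax, modelled here as moving at 2x the camera offset.
    img[16 + 2 * dy, 16 + 2 * dx] = 1.0
    # Realign each frame by undoing the camera offset; this aligns
    # only the chosen plane, not the foreground.
    frames.append(np.roll(img, shift=(-dy, -dx), axis=(0, 1)))

composite = np.mean(frames, axis=0)
# The in-plane feature stays at full strength at (32, 32); the
# foreground feature is smeared across positions and fades into "bokeh".
```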
I saw this window on my way to work last Wednesday. I thought it would be interesting to try the technique to see how well it worked at getting rid of the annoying chains while still showing the merchandise. I took several photos being sure to move in both the horizontal and vertical directions while keeping the same distance from the window.
I then used Hugin to manually place pairs of "control points" on the drawing on the notepad on the left display. I then told Hugin to allow each image its own X and Y offset to compensate for the motion of the camera. (The feature was originally added to help stitch together scanned images, but it turns out to be quite usable for my purposes as well.)
Once you've entered a sufficient number of control point pairs, you can have the program optimize the merge parameters and output a series of images, all remapped so they can be blended into a panorama.
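Hugin does the optimization for you, but under a pure X/Y-offset model the math is trivial: every control point pair between a frame and the reference should differ by the same (dx, dy), so the least-squares offset is just the mean displacement. The coordinates below are hypothetical, for illustration only.

```python
import numpy as np

# Hypothetical control points: pixel coordinates of the same scene
# features (marks on the notepad) in a reference frame and another frame.
ref_pts = np.array([[120.0, 200.0], [140.0, 260.0], [180.0, 230.0]])
img_pts = np.array([[123.5, 196.0], [143.5, 256.0], [183.5, 226.0]])

# Under a translation-only model, the least-squares estimate of the
# per-image (dx, dy) offset is the mean of the point displacements.
offset = (img_pts - ref_pts).mean(axis=0)
# Shifting the frame by -offset aligns it with the reference.
```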
I then take the results, combine them into a layered image in Paint Shop Pro, and set the transparency of each layer so that the result is the average pixel value of all layers.
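One way to get an average out of an editor that only composites layers top over bottom with "normal" blending (I'm not certain this is exactly what Paint Shop Pro does internally, so treat this as a sketch): give the k-th layer, counting from the bottom starting at 1, an opacity of 1/k. By induction, each composite step folds the new layer into a running mean:

```python
import numpy as np

rng = np.random.default_rng(1)
layers = rng.random((5, 4, 4))   # 5 toy grayscale "frames"

# Composite bottom-up with normal alpha blending, giving the k-th
# layer (1-indexed from the bottom) an opacity of 1/k:
# after layer k the result is the mean of the first k layers.
out = layers[0]                       # bottom layer, fully opaque
for k, layer in enumerate(layers[1:], start=2):
    alpha = 1.0 / k
    out = alpha * layer + (1.0 - alpha) * out

# out now equals the plain per-pixel mean of all layers.
```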
I've done this with 10 frames taken a bit closer to the window shown above, and you can see it as a slide show.
I hope this answers your questions, and gives others a bit of a jumping off point.