Photography algorithm alters perspective after images are captured

The quick video below gives an example of the kinds of results you can get with this technique:

As the research paper notes, the woman in the images didn't move at all during the photo shoot, yet the software was able to transform the shot's perspective from a wide-angle view into a close-up after the fact.

Of course, those images have to come from somewhere. To get computational zoom to work, you'll need a "stack" of images captured at a fixed focal length from different distances. In layman's terms, that means you'll need to use your feet and move through the scene; you can't cheat by using a zoom lens. So no, computational zoom can't magically create scenes without having the image data to start with — but once it does have those images, it can do some pretty creative things.
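To see why physically moving the camera matters even though the focal length never changes, consider the basic pinhole-camera relationship: an object of height h at distance d projects to a size of f·h/d on the sensor. The sketch below (with made-up numbers, not figures from the paper) shows that walking the camera back shrinks a nearby subject much faster than the distant background — which is exactly the perspective information the image stack records.

```python
# A minimal sketch of why a stack shot at a fixed focal length but from
# different distances captures different perspectives. Under the pinhole
# model, the projected size of an object of height h at distance d is
# f * h / d, so moving the camera changes the size ratio between near
# and far objects even though f stays constant.

def projected_size(f_mm: float, height_m: float, distance_m: float) -> float:
    """Image-plane size (mm) of an object under the pinhole model."""
    return f_mm * height_m / distance_m

F = 35.0  # fixed focal length in mm (chosen for illustration)

# A 2 m tall subject, with a 10 m tall background feature 10 m behind her.
for cam_to_subject in (2.0, 5.0, 10.0):
    subj = projected_size(F, 2.0, cam_to_subject)
    bg = projected_size(F, 10.0, cam_to_subject + 10.0)
    print(f"camera at {cam_to_subject:4.1f} m: "
          f"subject/background size ratio = {subj / bg:.2f}")
```

Running this, the subject-to-background ratio falls as the camera retreats (1.2, then 0.6, then 0.4 at these distances), so each frame in the stack genuinely encodes a different perspective rather than just a different crop.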

Once those photos are shot, they're fed into the computational zoom system, whose algorithm works out each camera's position and orientation from the rest of the stack. From there, it can reconstruct the entire scene in 3D across a range of viewpoints, which lets the photographer create a final image that combines multiple perspectives. There's no word on when this technology might be available for photographers to try themselves, but it's easy to imagine professionals using it to give themselves a lot more flexibility in adjusting image composition after the fact.
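The pose-recovery step above can be illustrated with a toy example. This is not the paper's algorithm — the real system estimates full position and orientation from image features — but a deliberately simplified sketch in which the camera is assumed to slide only along the z-axis and the 3D scene points are already known, so recovering where the camera stood for a given frame reduces to a one-unknown inversion of the projection equation.

```python
# Toy sketch of pose recovery: given known 3D scene points and their 2D
# projections in one frame of the stack, solve for how far back the
# camera was when that frame was shot. Assumption for brevity: the camera
# only translates along the z-axis (no rotation, no unknown structure).

def project(f, point, cam_z):
    """Pinhole projection of a 3D point for a camera at (0, 0, cam_z)."""
    X, Y, Z = point
    depth = Z - cam_z
    return (f * X / depth, f * Y / depth)

def recover_cam_z(f, points, observations):
    """Estimate the camera's z position from point/projection pairs."""
    estimates = []
    for (X, Y, Z), (x, y) in zip(points, observations):
        # From x = f * X / (Z - cam_z), solve: cam_z = Z - f * X / x
        estimates.append(Z - f * X / x)
    # Average the per-point estimates (they agree exactly without noise).
    return sum(estimates) / len(estimates)

F = 1.0  # normalized focal length
scene = [(1.0, 0.5, 10.0), (-2.0, 1.0, 15.0), (0.5, -1.5, 8.0)]
true_cam_z = -3.0
obs = [project(F, p, true_cam_z) for p in scene]
print(recover_cam_z(F, scene, obs))  # recovers roughly -3.0
```

Repeating this for every frame pins down where along the path each photo was taken; with the poses known, the depths of scene points follow from the same projection relations, which is the 3D information the system needs before it can synthesize a new, combined perspective.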
