Faezeh Sadat Zakeri: Research project 11, “Lightfield imaging for cinema quality media production” (Fraunhofer IIS)
Faezeh Sadat Zakeri holds a Bachelor’s degree in Computer Software Engineering from the University of Kashan, Iran, and a Master’s degree from CIMET (Color in Informatics and Media Technology), an Erasmus Mundus programme run by a consortium of four European universities. She completed her Master’s thesis, “Light Field Super Resolution”, at the Media Computing Lab of Technicolor R&D in Rennes, France.
The new capabilities that lightfield imaging provides allow us to go well beyond conventional imaging. Today, immersive media production and post-production are highly interested in what lightfield can offer, since it provides features such as a wide field of view in the spatial domain, circumventing physical limitations of cameras such as their aperture size.
I spent the first several months testing different stitching software packages available on the market in order to assess their quality issues. I became familiar with Facebook Surround 360, Nuke by Foundry, Nokia OZO, and Kolor Autopano. The quality issues found were catalogued, and the state of the art in classic image stitching was studied, which gave me insight into the sources of the problems and how far it is possible to go beyond them. I therefore decided to focus on methods for stitching multiple camera views under the assumption that geometry information is available only for parts of the scene. Insufficient overlap between views is a major challenge for stitching; consequently an approximate solution becomes necessary, and disparity becomes key. Reusing the current lightfield pipeline from Fraunhofer IIS to estimate disparity has been considered for this step. Moreover, I assumed scenes with high parallax, since it is one of the main causes of visible artifacts in the stitched image. So far I have been working on a concept for stitching different views from a non-planar camera array by view rendering.
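To illustrate the disparity-based view rendering idea, the sketch below forward-warps a reference view towards a virtual camera position by shifting each pixel horizontally by (a fraction of) its disparity. This is a minimal illustration under simplifying assumptions (rectified views, horizontal-only disparity, no occlusion ordering or hole filling), not the actual Fraunhofer IIS pipeline:

```python
import numpy as np

def warp_view(ref, disparity, alpha=1.0):
    """Forward-warp a reference view towards a virtual camera position.

    ref:       (H, W) grayscale reference image
    disparity: (H, W) horizontal disparity in pixels (reference -> target)
    alpha:     fraction of the baseline to move (0 = reference, 1 = target)

    Returns the warped image and a mask of filled pixels; unfilled
    pixels are disocclusion holes a real renderer would have to fill.
    """
    h, w = ref.shape
    out = np.zeros_like(ref)
    filled = np.zeros((h, w), dtype=bool)
    ys, xs = np.mgrid[0:h, 0:w]
    # Shift each pixel horizontally by a fraction of its disparity.
    xt = np.round(xs + alpha * disparity).astype(int)
    valid = (xt >= 0) & (xt < w)
    # Note: colliding pixels simply overwrite each other here; a real
    # renderer would resolve such occlusions by depth ordering.
    out[ys[valid], xt[valid]] = ref[valid]
    filled[ys[valid], xt[valid]] = True
    return out, filled
```

With a constant disparity of one pixel and alpha = 1, the output is the input shifted right by one column, with a one-column disocclusion hole on the left.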
Current state of the project:
- A proof-of-concept document has been drafted.
- Under the assumptions mentioned above, stereo setups for planar and non-planar camera arrays, together with a test scene, have been simulated using the Heidelberg lightfield rendering plugin and Blender CGI.
- Images have been rendered and the depth and disparity maps have been estimated.
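In a rectified stereo setup, depth and disparity are related by d = f·B/Z, with focal length f in pixels, baseline B, and depth Z; a rendered depth map can thus be converted into the disparity map needed for view rendering. The helper below is a generic sketch of this conversion (the camera parameters in the example are illustrative assumptions, not values from the simulated setup):

```python
import numpy as np

def depth_to_disparity(depth, focal_px, baseline):
    """Convert a depth map (same units as the baseline) into
    horizontal disparity in pixels for a rectified stereo pair,
    using d = f * B / Z."""
    depth = np.asarray(depth, dtype=float)
    disparity = np.zeros_like(depth)
    valid = depth > 0  # guard against invalid / background depth values
    disparity[valid] = focal_px * baseline / depth[valid]
    return disparity
```

For example, with an assumed focal length of 100 px and a baseline of 0.1 m, a point at 2 m depth yields a disparity of 5 px.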
Work in progress:
- Revising and completing the proof of concept;
- Implementing the concept for the planar setup.
I attended Training School 3 in Kiel in March 2017 and participated in the 119th MPEG and 76th JPEG meetings. In November 2017 I will attend the workshop on visual data capture, and in January 2018 I will attend Training School 4 in Newcastle.