Overview of Lightfield Rendering
In traditional 3D computer graphics, scenes are drawn with respect to a camera. This camera is an idealized pinhole camera, whose viewable volume is known as a frustum. The frustum is computed
from the camera's field of view, together with the nearest and farthest distances the camera can see from its current position. From this data, a perspective projection matrix can be computed, which is later used to render geometry. The frustum for this type of camera is symmetric: when divided in half vertically or horizontally, the two halves are the same size. Objects are then projected onto the near plane by this projection matrix. The entire scene is drawn this way, potentially with many objects. When finished, the final frame shows everything in focus, unless extra effort is spent on visual effects (such as blurring or lighting changes) to make the scene resemble what the world looks like through a human eye.
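The symmetric perspective matrix described above can be sketched in a few lines. The following is a minimal Python illustration assuming an OpenGL-style, right-handed convention (camera looking down -z, depth mapped to [-1, 1]); other APIs use different conventions.

```python
import math

def perspective(fov_y_deg, aspect, near, far):
    """Symmetric perspective projection matrix from a vertical field of view,
    aspect ratio, and near/far clip distances (OpenGL-style convention,
    returned as a row-major nested list)."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0,                         0.0],
        [0.0,        f,   0.0,                         0.0],
        [0.0,        0.0, (far + near) / (near - far), (2.0 * far * near) / (near - far)],
        [0.0,        0.0, -1.0,                        0.0],
    ]
```

Note that the symmetry shows up in the matrix: the third column's first two entries are zero, so the frustum's center axis coincides with the camera's view axis.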
When rendering for the Leia display, which uses lightfield technology, everything becomes more interesting. In the current version, Leia suggests having a total of 4 cameras. While drawing objects with these cameras is similar to the traditional method, the requirements of Leia rendering force us to change some aspects.
The human brain is able to perceive depth because the two eyes see very similar views of the world from slightly different positions. The brain takes the image from each eye and combines them, and in doing this combination we are able to perceive some objects as closer or more distant than others.
In order for Leia rendering to accomplish the same job, we need more than the normal, single camera. The more cameras used, the smoother the 3D effect. The current Leia display technology recommends a 4-by-1 camera setup: a total of 4 cameras placed side by side. These cameras are centered around one point, and the distance between neighboring cameras is called the camera baseline.
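Laying out the 4-by-1 rig is just arithmetic: the cameras are spread along one axis, centered on the rig's origin, with neighbors one baseline apart. A small sketch (the function name and the choice of the horizontal axis are illustrative assumptions):

```python
def camera_offsets(num_views, baseline):
    """Horizontal offsets of a 1-D camera rig, centered on the rig origin.
    Neighboring cameras are exactly `baseline` apart."""
    half = (num_views - 1) / 2.0
    return [(i - half) * baseline for i in range(num_views)]
```

For a 4-view rig with a 0.02-unit baseline this yields offsets of roughly -0.03, -0.01, 0.01, and 0.03, which sum to zero since the rig is centered.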
This is different from traditional computer graphics because the cameras must be parallel, yet the frustums of the 4 cameras must also intersect at a single plane. We must apply a shear to the perspective matrices to achieve this. If the scene contained only a single object, say a square, lying exactly on the plane of intersection, then all of the cameras would see exactly the same view. This plane of intersection is called the convergence plane. When an application is running, the elements the developer wants the user to pay attention to should be on or very near this plane, and if necessary new perspective matrices should be created as the application's main focus moves around in the frustum.
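The shear mentioned above amounts to adding one off-axis term to an otherwise symmetric perspective matrix. The sketch below is self-contained and assumes the same OpenGL-style convention as before; the sign of the shear term depends on the chosen handedness and may need flipping for other APIs.

```python
import math

def sheared_perspective(fov_y_deg, aspect, near, far, eye_offset, convergence):
    """Off-axis (sheared) perspective matrix: the frustum is skewed so that
    the view volumes of all cameras coincide on the plane z = -convergence.
    `eye_offset` is this camera's horizontal distance from the rig center."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    # The shear shifts the projection window horizontally so that the
    # frusta of all cameras intersect at the convergence plane.
    shear = -eye_offset * f / (aspect * convergence)
    return [
        [f / aspect, 0.0, shear,                       0.0],
        [0.0,        f,   0.0,                         0.0],
        [0.0,        0.0, (far + near) / (near - far), (2.0 * far * near) / (near - far)],
        [0.0,        0.0, -1.0,                        0.0],
    ]
```

A quick sanity check of the geometry: a point at the rig center on the convergence plane sits at view-space x = -eye_offset for a camera shifted by eye_offset, and the shear term cancels that offset exactly, so every camera projects the point to the same screen position.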
When an object is in front of the convergence plane or behind it, it will look slightly different in each view. The difference, in pixels, between the projections of a single point of an object in neighboring views is called disparity. The further apart the cameras are, the more disparity the projected objects will have, and the more different the neighboring views will be.
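Under the sheared-frustum geometry described above, disparity has a closed form: it is zero on the convergence plane and grows with the baseline and with the point's distance from that plane. A hedged sketch (the sign convention, positive behind the plane and negative in front, is an assumption):

```python
import math

def disparity_px(baseline, depth, convergence, fov_y_deg, aspect, screen_width_px):
    """Horizontal disparity, in pixels, between two neighboring views for a
    point at `depth` (positive distance in front of the rig). Zero at the
    convergence plane; magnitude grows with baseline and with distance
    from that plane."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    # NDC shift between neighboring views, then scaled to pixels
    # (NDC spans [-1, 1], i.e. half the screen width per unit).
    ndc_shift = (baseline * f / aspect) * (1.0 / convergence - 1.0 / depth)
    return ndc_shift * screen_width_px / 2.0
```

The 1/convergence - 1/depth factor captures the behavior described in the text: objects on the plane have zero disparity, and disparity increases the further the object is from it, in either direction.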
When these 4 views are appropriately combined, a process called view interlacing, a 3D effect appears on the Leia display. From a correctness perspective, everything is now working.
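To give a rough intuition for interlacing, the toy sketch below weaves four views together by assigning each pixel column to one view in round-robin order. This is only an illustration of the idea; the actual Leia interlacing pattern operates per sub-pixel and is calibrated to the specific display, which this sketch does not model.

```python
def interlace_columns(views):
    """Toy column interlacing: build one frame by taking pixel column x
    from view (x mod number_of_views). Each view is a list of rows of
    pixel values; all views must share the same dimensions."""
    n = len(views)
    height = len(views[0])
    width = len(views[0][0])
    return [[views[x % n][y][x] for x in range(width)]
            for y in range(height)]
```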
In order to create a stronger 3D effect, we need to place objects further from the convergence plane, or move the cameras further apart. In either case, the views become more different. When the disparity becomes “too large”, humans have a harder time mentally fusing the images, and the scene becomes confusing or even uncomfortable. At this point we have to either:
- Keep the views fairly similar in order to keep the brain believing in the depth shown
- Blur each view, applying more blur the further a pixel is from the convergence plane, based on its depth.
Leia recommends applying a DOF (depth-of-field) effect to each view before the view interlacing occurs. Without DOF, a general guideline is that up to 6 pixels of disparity is acceptable. However, when DOF is properly added, the disparity can become significantly larger, giving the developer and designer the opportunity to push the 3D effect as far as possible. This allows the views to be significantly more different, creates a more impressive 3D effect, and gives the end-user a specific and obvious location to focus on, all for a minimal amount of computational effort.
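One simple way to drive such a DOF pass is to blur each pixel in proportion to how far its disparity exceeds the comfort budget. The sketch below is a heuristic illustration only: the 6-pixel default reflects the guideline above, but the linear falloff and the `strength` knob are illustrative assumptions, not Leia-specified constants.

```python
def blur_radius_px(disparity, comfort_px=6.0, strength=0.5):
    """Heuristic blur radius for a per-view DOF pass: pixels whose
    inter-view disparity stays within the comfort budget remain sharp;
    beyond it, the blur radius grows linearly with the excess disparity.
    `strength` is an illustrative tuning knob."""
    excess = abs(disparity) - comfort_px
    return 0.0 if excess <= 0.0 else strength * excess
```

In a real renderer this radius would feed a depth-aware blur kernel applied to each view before interlacing, keeping content at the convergence plane crisp while softening everything the viewer is not meant to fuse.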