When integrating the Leia Native SDK, you may need to update your OpenGL pipeline. The modifications described below are the recommended approach to making this update work efficiently.
One of the most important parts of the rendering setup is the size of the render targets that each part of the pipeline will be filling. It is common to have the main camera render into a view the size of the screen, which fills every pixel of the display.
One of the most important changes when rendering with the Leia Native SDK is rendering four views instead of a single view from the main camera. The Leia-enabled display shows the four views, each of which varies depending on how the device is positioned in front of you. Because the display is split into multiple views, the resolution of each view is reduced; in this case the reduction is to one fourth the width and one fourth the height. Effectively, each view renders one sixteenth the number of pixels, so the four views combined render one fourth the total number of pixels the original full-screen application would render. It is important to note that the CPU effort increases because the number of draw calls increases by a factor of four (the same draws from the original full-screen game are processed once per view).
This same optimization works for the Depth of Field (DOF) effect. The DOF effect operates on each of the four views, allowing it to render at one fourth the width and one fourth the height of the display resolution.
The end of the Leia Native SDK pipeline operates on the larger view size, which is equivalent to the final resolution of the application. Even with the view interlacing and view sharpening passes rendering at the full screen resolution, the total rendering load is potentially lower than that of the original application because overdraw has been reduced, while the CPU effort increases slightly.
One final possible optimization is to render the view interlacing and view sharpening passes at one fourth the height. The current implementation of the view interlacing shader copies a set of four pixels (one for each view) into a column four pixels high (one for each potential view), which means the shader writes the exact same color into a block four pixels high for every view. If the shader were modified to render only one fourth the height, and the view sharpening pass were then run, a final pass could scale the result back up in height. Alternatively, instead of scaling the height, the default render target which appears on the display could be declared at one fourth the height, and the display hardware could upsample the final output directly, saving even more rendering and CPU time.
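As a sketch of the first variant, the interlaced and sharpened result could be written into a quarter-height offscreen framebuffer and then stretched to the display with a blit. This is only an illustration of the idea, not SDK code: the function and the `quarter_fbo` handle are hypothetical, and it assumes an OpenGL 3.0 / OpenGL ES 3.0 context where `glBlitFramebuffer` is available.

```c
/* Sketch: upscale a quarter-height offscreen result to the full-height
 * default framebuffer. Assumes quarter_fbo already holds the interlaced
 * and sharpened output at screen_w x (screen_h / 4). */
void present_quarter_height(GLuint quarter_fbo, int screen_w, int screen_h) {
    glBindFramebuffer(GL_READ_FRAMEBUFFER, quarter_fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);          /* default framebuffer */
    glBlitFramebuffer(0, 0, screen_w, screen_h / 4,     /* source rect */
                      0, 0, screen_w, screen_h,         /* destination rect */
                      GL_COLOR_BUFFER_BIT, GL_LINEAR);  /* stretch in height */
}
```

The second variant would skip the blit entirely and rely on the display hardware to upsample, as described above.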
One of the most common pieces of GL state to update is the depth test. For normal rendering (the rendering of objects in a scene), the depth test is usually enabled to limit overdraw, keeping at each pixel the object closest to the camera as what is finally rendered. However, this test becomes less important when rendering a quad which covers the full view, as occurs for the Depth of Field, view interlacing, and view sharpening effects. These functions explicitly disable the depth test due to potential FBO management issues, which means two things for your pipeline. First, if you want the depth test enabled after calling one of the rendering functions from the Leia Native SDK, you must re-enable it from your own pipeline. Second, because the depth test is disabled during the depth of field, view sharpening, and view interlacing passes, a depth attachment on those framebuffers is useless. Since it will not be used, taking the time to prepare the depth target for the framebuffer wastes CPU resources unnecessarily.
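Both points can be sketched in GL calls. The `leiaDOF` call below stands in for the SDK function with its parameters omitted, and `viewW`/`viewH` are assumed view dimensions; the framebuffer setup shows a color-only target with no depth attachment:

```c
/* Sketch, assuming a valid GL context. The full-screen Leia passes
 * disable the depth test, so restore it before drawing the scene again. */
void example_frame(int viewW, int viewH) {
    glEnable(GL_DEPTH_TEST);   /* scene rendering: depth test on */
    /* ... draw scene geometry here ... */

    leiaDOF(/* ... SDK parameters omitted ... */);  /* disables GL_DEPTH_TEST */
    glEnable(GL_DEPTH_TEST);   /* re-enable for the next scene pass */

    /* A framebuffer used only by the full-screen passes needs no depth
     * attachment; a color attachment alone is enough. */
    GLuint fbo, colorTex;
    glGenFramebuffers(1, &fbo);
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, viewW, viewH, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);
    /* Note: no GL_DEPTH_ATTACHMENT is created or attached. */
}
```

Skipping the depth attachment avoids allocating and clearing a depth buffer that the full-screen passes would never read or write.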
It is important to note that the Leia rendering functions (leiaDOF, leiaViewInterlacing, leiaViewSharpening) interact with specific uniform and vertex data. The uniform values are expected to remain available under the same string names. The shaders are provided so that experienced users can change the functionality to better fit their own needs, but for the render functions provided by Leia to work correctly, the uniform and input variables already in use must remain.
When using the leiaDOF function, a kernel is applied to the input image data, which affects the output. The kernel size can be fairly large, and near the edges this causes some taps of the convolution to fall outside the texture bounds. When this occurs, the way the texture is created and the parameters set on it (or on its sampler) become important. For example, when the texture/sampler wrap mode (GL_TEXTURE_WRAP_S/T) is not set to GL_CLAMP_TO_EDGE or GL_CLAMP_TO_BORDER, odd rendering artifacts appear. It is recommended that wrapping modes which read from the opposite edge of the texture be avoided to reduce artifacts when rendering near the edges.
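A minimal sketch of the recommended setup follows; `dofInputTex` is a hypothetical handle for whatever texture leiaDOF samples in your pipeline:

```c
/* Sketch: clamp the DOF input texture so convolution taps that fall
 * outside the image read the edge texel instead of wrapping around
 * to the opposite edge. Assumes a valid GL context. */
void clamp_dof_input(GLuint dofInputTex) {
    glBindTexture(GL_TEXTURE_2D, dofInputTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    /* GL_CLAMP_TO_BORDER also avoids the artifact where supported
     * (desktop GL, or GLES with EXT_texture_border_clamp). */
}
```

With GL_REPEAT (the GL default), taps past the left edge would sample from the right edge of the image, producing the artifacts described above.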