My deferred shading engine is almost complete. I've been reading the two articles from GPU Gems 2 & 3 on the subject, and I started work on it earlier this year. It's written in C++ and DirectX 9. Mesh material data is stored in the G-Buffer (G for geometry). I chose four 64-bit render targets to construct the buffer. The following data is stored:
- World space position.
- World space normals.
- Diffuse colour.
- Specular colour.
- Specular power.
- Specular intensity.
- Material ID.
Below are the four textures that make up the G-Buffer:
World space positions
World space normals
Diffuse colour
Specular colour
There is also support for normal mapping and parallax occlusion mapping with offset limiting. When normal mapping is applied, the render target texture that stores the normals looks dramatically different.
World space normals with normal mapping enabled
The material ID will eventually be used as the z coordinate in a volume texture that stores lighting results, which means the engine will be able to apply different lighting models without the use of branching.
Lighting in deferred shading is handled in a completely different way than in a traditional rendering engine. A light volume is projected onto the screen: for a directional light this is a full-screen quad, for a point light a sphere, and for a spot light a cone. This makes the directional light the most expensive light type to draw, as it affects every pixel in the G-Buffer. Spot and point lights are very cheap to draw, since the light volume culls out the vast majority of the G-Buffer. It is possible to render 50-100 point lights in a deferred shading engine! Below is a view of how the lights and light volumes are rendered, captured with the help of NVIDIA's PerfHUD.
A directional light drawn with a full screen quad.
A point light drawn with a sphere light volume.
A spotlight drawn with a cone light volume.
The scene is also rendered with HDR lighting. This is simple to achieve because the data in the G-Buffer is already stored in 64-bit render targets: after lighting is applied, one simply creates a Gaussian-blurred version of the scene, combines it with the original, and then applies some tone mapping.
There is also a particle engine in there which is simulated in the GPU's pixel shader; it is implemented in a similar way to the GPU-based particle engine I created in DirectX 10.
Final scene.