Shadow Mapping


As a side project for the Geneticist game, I developed a demo application that implements shadow mapping using OpenGL and GLSL. This implementation formed the basis of the shadow rendering in the Geneticist project. As with the edge-detection algorithm, it takes advantage of frame buffer objects to perform multiple render passes per frame.
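As a rough sketch of the kind of setup involved, the depth pass can render into an off-screen frame buffer object with a depth-texture attachment. The identifiers below (createShadowFbo, shadowMapSize, depthTex) are illustrative assumptions, not names taken from the demo source.

```cpp
#include <GL/glew.h>  // or any loader that exposes the FBO entry points

// Create a depth-only FBO for the light's depth pass and return its id.
// The attached depth texture is written during the first pass and sampled
// as the shadow map during the second pass.
GLuint createShadowFbo(GLsizei shadowMapSize, GLuint &depthTex)
{
    glGenTextures(1, &depthTex);
    glBindTexture(GL_TEXTURE_2D, depthTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24,
                 shadowMapSize, shadowMapSize, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    GLuint depthFbo;
    glGenFramebuffers(1, &depthFbo);
    glBindFramebuffer(GL_FRAMEBUFFER, depthFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, depthTex, 0);
    glDrawBuffer(GL_NONE);   // the depth pass needs no color output
    glReadBuffer(GL_NONE);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return depthFbo;
}
```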

Simple Shadow Mapping demonstration

The main concept behind shadow mapping is to render the scene's depth from the point of view of the light source. These depth values are stored in a “depth buffer”, which is later passed to a shader as a texture. When the engine renders the scene from the camera, it transforms each visible point into the light's view space. If that point's distance to the light source is greater than the depth stored in the depth texture at the corresponding location, the point must be shadowed, because the light cannot see it. Because the depth map is re-rendered every frame, dynamic objects, such as the character, can cast shadows in real time.

The main drawback of this method is the loss of precision when mapping between light space and view space. If you look at the shadows in this screenshot from Genetics, you can see that the edges can become very aliased. To reduce this problem, many implementations multi-sample the shadow map or use algorithms that increase the effective resolution at the shadow edges.
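The depth comparison at the heart of the second pass can be sketched as a small piece of fragment-shader code. The names below (uShadowMap, vLightSpacePos) and the bias value are illustrative assumptions rather than code from the demo.

```glsl
// Sketch of the per-fragment shadow test (GLSL 1.20-era syntax).
uniform sampler2D uShadowMap;   // depth texture rendered from the light's view
varying vec4 vLightSpacePos;    // fragment position in the light's clip space

float shadowFactor()
{
    // Perspective divide, then remap from [-1, 1] to the [0, 1] depth range.
    vec3 proj = vLightSpacePos.xyz / vLightSpacePos.w;
    proj = proj * 0.5 + 0.5;

    // Depth of the closest surface the light can see at this texel.
    float closestDepth = texture2D(uShadowMap, proj.xy).r;

    // Small bias to reduce self-shadowing artifacts ("shadow acne").
    float bias = 0.005;

    // If the fragment is farther from the light than the stored depth,
    // the light cannot see it, so darken it.
    return (proj.z - bias) > closestDepth ? 0.5 : 1.0;
}
```

In a setup like this, vLightSpacePos would be produced in the vertex shader by multiplying the world-space position by the light's view-projection matrix, and the returned factor would then be multiplied into the lighting term.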


Genetics Project showing the shadow mapping in use. Both the static terrain and dynamic objects cast shadows.
The source for the demo application is available on GitHub.