
Tiled Deferred Rendering with MSAA

 

Forward and deferred rendering are subjects that could fill an entire book.

With RJE, I wanted to implement a feature that both exercised DirectX 11 capabilities and was used in current or "next-gen" rendering pipelines.

Current and upcoming techniques tend to rely on the same principle: tiled rendering with GPGPU.

Intel published a paper on tiled deferred rendering at SIGGRAPH 2010 (you can find it here).

AMD recently published the Leo tech demo, showcasing Forward+ (you can find it here).

 

Tiled deferred and tiled forward (a.k.a. Forward+) both use the GPU to break the viewport into screen tiles, making it possible to work per tile instead of per pixel (a small HLSL sketch of the per-tile pass follows the list below). It is a significant improvement over previous rendering techniques. I decided to start from the article written by Andrew Lauritzen (Intel) for two reasons:

    - it uses DirectX 11

    - the full source code is available, so the details are easy to understand
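
To make the per-tile idea concrete, here is a minimal HLSL sketch of the culling pass in the spirit of Lauritzen's demo. This is an assumption-laden sketch, not RJE's actual code: the resource and constant names are illustrative, the lights are assumed to be point lights already transformed into view space, and the intersection test is deliberately crude (depth bounds only), whereas the real demo also tests the four side planes of each tile's frustum.

#define TILE_DIM 16
#define MAX_LIGHTS_PER_TILE 256

struct PointLight
{
    float3 positionView;   // light position in view space
    float  radius;
};

StructuredBuffer<PointLight> gLights      : register(t0);
Texture2D<float>             gDepth       : register(t1);
RWTexture2D<float4>          gDebugOutput : register(u0);

cbuffer PerFrame : register(b0)
{
    float4x4 gProjection;
    uint     gLightCount;
};

groupshared uint sLightIndices[MAX_LIGHTS_PER_TILE];
groupshared uint sLightCount;
groupshared uint sMinDepthBits;   // float depth reinterpreted as uint for atomics
groupshared uint sMaxDepthBits;

// Recover linear view-space Z from a [0,1] depth value
// (valid for a standard row-vector D3D perspective projection).
float LinearDepth(float d)
{
    return gProjection._43 / (d - gProjection._33);
}

[numthreads(TILE_DIM, TILE_DIM, 1)]
void TileCullCS(uint3 dtid : SV_DispatchThreadID,
                uint  gi   : SV_GroupIndex)
{
    if (gi == 0)
    {
        sLightCount   = 0;
        sMinDepthBits = 0x7F7FFFFF;   // +FLT_MAX bit pattern
        sMaxDepthBits = 0;
    }
    GroupMemoryBarrierWithGroupSync();

    // 1) The whole group reduces the tile's min/max depth with atomics;
    //    asuint preserves the ordering of non-negative floats.
    float depth = gDepth[dtid.xy];
    InterlockedMin(sMinDepthBits, asuint(depth));
    InterlockedMax(sMaxDepthBits, asuint(depth));
    GroupMemoryBarrierWithGroupSync();

    float minZ = LinearDepth(asfloat(sMinDepthBits));
    float maxZ = LinearDepth(asfloat(sMaxDepthBits));

    // 2) The 256 threads cooperatively test the whole light list and append
    //    survivors to the shared per-tile list.
    for (uint i = gi; i < gLightCount; i += TILE_DIM * TILE_DIM)
    {
        PointLight light = gLights[i];
        if (light.positionView.z + light.radius >= minZ &&
            light.positionView.z - light.radius <= maxZ)
        {
            uint slot;
            InterlockedAdd(sLightCount, 1, slot);
            if (slot < MAX_LIGHTS_PER_TILE)
                sLightIndices[slot] = i;
        }
    }
    GroupMemoryBarrierWithGroupSync();

    // 3) Here the shading pass would loop over sLightIndices for this pixel;
    //    this sketch just visualizes the light count per tile instead.
    float t = saturate(sLightCount / 32.0);
    gDebugOutput[dtid.xy] = float4(t, 1.0 - t, 0.0, 1.0);
}

On the CPU side this maps to one thread group per tile, i.e. context->Dispatch((width + TILE_DIM - 1) / TILE_DIM, (height + TILE_DIM - 1) / TILE_DIM, 1).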

 

The first obstacle I had to deal with was the deferred renderer itself: filling the G-buffer, working with MSAA, and resolving everything in the lighting accumulation pass. I had to change many things in my engine to make it work. After that, the "serious work" could start: dividing the screen into tiles and computing lighting per tile instead of per pixel.
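
Regarding MSAA, the key idea in Lauritzen's article is to detect which pixels actually lie on geometric edges and shade only those per sample; everything else is shaded once per pixel. Below is a hedged sketch of such an edge test, assuming a multisampled G-buffer plane that stores a view-space normal in xyz and view-space depth in w; the texture name and thresholds are made up for illustration.

#define MSAA_SAMPLES 4

// Multisampled G-buffer plane: xyz = view-space normal, w = view-space depth.
Texture2DMS<float4, MSAA_SAMPLES> gNormalDepth : register(t2);

// True when the samples of a pixel diverge enough that shading only the
// first sample would leave visible artifacts on geometric edges.
bool RequiresPerSampleShading(uint2 pixel)
{
    float4 first = gNormalDepth.Load(pixel, 0);
    bool edge = false;

    [unroll]
    for (uint s = 1; s < MSAA_SAMPLES; ++s)
    {
        float4 other = gNormalDepth.Load(pixel, s);
        // Flag the pixel when normals disagree or depth jumps between samples.
        edge = edge
            || dot(first.xyz, other.xyz) < 0.95
            || abs(first.w - other.w)   > 0.1;
    }
    return edge;
}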

I struggled with world-space/view-space conversion issues for quite a long time before I finally understood why nothing was working. Working with compute shaders is difficult when you're inexperienced, but I learned quickly and got to grips with GPGPU problems such as synchronization between threads, shared memory, unordered access views, shader resource views, and so on.
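
For reference, this is the kind of conversion that caused me trouble: rebuilding a view-space position from the depth buffer. The sketch below assumes a row-vector D3D projection matrix (the D3DXMatrixPerspectiveFovLH layout) and a [0,1] hardware depth value; it is not lifted from RJE.

// Rebuild a view-space position from screen UV + hardware depth, assuming a
// row-vector D3D projection (D3DXMatrixPerspectiveFovLH layout) and [0,1] depth.
float3 ViewPositionFromDepth(float2 uv, float depth, float4x4 proj)
{
    // Screen UV [0,1] -> NDC XY [-1,1], with Y flipped for D3D conventions.
    float2 ndcXY = float2(uv.x * 2.0 - 1.0, 1.0 - uv.y * 2.0);

    // Linear view-space Z from the projection terms (see LinearDepth above).
    float viewZ = proj._43 / (depth - proj._33);

    // Undo the perspective scale: ndc.x = view.x * proj._11 / view.z, etc.
    return float3(ndcXY.x * viewZ / proj._11,
                  ndcXY.y * viewZ / proj._22,
                  viewZ);
}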

Screenshots:

    - G-buffer: albedo, normals, position, depth

    - Edge detection for MSAA samples

    - Final result

    - Light count per tile
