Wednesday, January 29, 2014

Deferred Lighting

A few months ago I gave a talk at the National University about game programming.
I asked the attendees a few questions, and was amazed that none of them had
heard about Deferred Lighting. They were still students, so it's completely
normal they didn't have a clue about it, but since Deferred Lighting is
such a cool thing, I thought about posting an introduction to it, for all
those game programmers that still haven't heard about it.
In OpenGL and DirectX there is a limit to how many lights can
light an object; on PCs and mobile phones this limit tends to be 8 (on SGI
machines it used to be 16). Though you can have more than 8 lights in your scene,
the system has to decide, at every frame and for every object, which 8 to render.
For complex scenes like an indoor scene, where you have candles and torches,
8 is never enough. Also, rendering 8 lights is VERY expensive, because
every lit object has to be rendered once for each light. So you
end up rendering your scene 8 times.
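
To make that cost concrete, here is a minimal sketch (the scene, light and
helper names are hypothetical, not from any real engine) of what multi-pass
forward lighting boils down to:

    // Hypothetical multi-pass forward lighting loop: the same geometry is
    // rasterized once per light, so the cost grows as objects * lights.
    for (const Object& object : scene.objects) {
        // The engine must pick the 8 most relevant lights for this object.
        for (const Light& light : pickClosestLights(object, 8)) {
            bindLight(light);    // set up this light's parameters
            drawObject(object);  // draw the same geometry again for this light
        }
    }
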
This is where Deferred Lighting comes in. Deferred Lighting is a technique
where lights are not rendered through the normal fixed-function pipeline, but
as a postprocessing effect AFTER the scene has been rendered with no lights.

To give you an estimate, without Deferred Lighting I used to render a scene
with 2 lights at 60 FPS; with Deferred Lighting that scene had 120 lights
at 60 FPS.

So Deferred Lighting is very cool. BUT it's quite difficult to implement. It took
me a whole week in PhyreEngine to get it running the way it should. And it
has a few drawbacks: for example, alpha objects must be rendered separately,
to a different render texture, and then composited at the end of the
rendering pipeline.

How does it work?

Well, actually, it's very clever how it works.

First, the scene is drawn WITHOUT lights (that means everything is lit as if
it had a WHITE Ambient Light) to a render texture, the COLOR BUFFER
(that means, we render it to a separate texture, NOT the back buffer).
We also render the NormalDepthBuffer to ANOTHER texture. So after the
scene render we end up with two textures, the COLOR BUFFER and the
NORMALDEPTHBUFFER: the Color Buffer holds the diffuse color of the pixels
rendered, and the NormalDepthBuffer holds the DEPTH and NORMAL information
of every pixel written to the Color Buffer.
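
As a rough sketch, a first-pass fragment shader can write both textures at
once with multiple render targets (MRT). The GLSL below is illustrative, not
PhyreEngine code; the names (kScenePassFS, u_diffuseTex) are assumptions, and
a real implementation would store depth at higher precision than 8 bits:

    // Illustrative first-pass fragment shader writing two render targets.
    const char* kScenePassFS = R"(
    #version 120
    uniform sampler2D u_diffuseTex;
    varying vec3 v_normal;   // view-space normal, from the vertex shader
    varying vec2 v_uv;

    void main()
    {
        // Render target 0, the COLOR BUFFER: plain diffuse color, as if
        // the scene were lit by a white ambient light.
        gl_FragData[0] = texture2D(u_diffuseTex, v_uv);
        // Render target 1, the NormalDepthBuffer: normal packed into RGB
        // (remapped from [-1,1] to [0,1]), window-space depth in alpha.
        gl_FragData[1] = vec4(normalize(v_normal) * 0.5 + 0.5, gl_FragCoord.z);
    }
    )";
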

Once we have these two textures, we render the lights on top of them.
A Point Light is rendered as a SPHERE, a Spot Light is rendered as a CONE,
and a Directional Light is rendered as a PLANE.

We render these objects on top of the COLOR BUFFER, accessing the
NormalDepthBuffer in the light's shader. With the information in the
NormalDepthBuffer we know the Depth and Normal of the pixel being
rendered to, so we can calculate how much we have to LIGHT that pixel.
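
The key render state for drawing "on top of" the Color Buffer is additive
blending, so each light adds its contribution to what is already there.
A minimal sketch, assuming hypothetical scene and drawSphere helpers:

    // Hypothetical light-pass setup: additive blending accumulates every
    // light's contribution on top of the already-rendered Color Buffer.
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);  // add each light's output to the buffer
    glDepthMask(GL_FALSE);        // light volumes must not write depth
    for (const PointLight& light : scene.pointLights)
        drawSphere(light.position, light.radius);  // runs the light shader
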

For example, if we are rendering a Point Light, we render a sphere
on top of the Color Buffer, and in the Point Light fragment shader
(which is executed for every pixel written) we read the NORMALDEPTHBUFFER
so we can calculate how much that particular pixel is being lit
(if it's in the Point Light's attenuation range), and whether it should
be brighter because of its normal, because of specular effects.
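
A minimal sketch of such a Point Light fragment shader follows. The uniform
names and the depth reconstruction are assumptions (it assumes the depth
stored above is standard window-space depth), and specular is left out for
brevity:

    // Illustrative Point Light fragment shader, run for every pixel the
    // sphere covers; its output is blended additively onto the Color Buffer.
    const char* kPointLightFS = R"(
    #version 120
    uniform sampler2D u_normalDepthBuffer;
    uniform vec2      u_screenSize;
    uniform mat4      u_invProjection;  // inverse of the camera projection
    uniform vec3      u_lightPosView;   // light position, in view space
    uniform vec3      u_lightColor;
    uniform float     u_lightRadius;    // attenuation range

    // Rebuild the view-space position of the pixel from its stored depth.
    vec3 viewPositionFromDepth(vec2 uv, float depth)
    {
        vec4 clip = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
        vec4 view = u_invProjection * clip;
        return view.xyz / view.w;
    }

    void main()
    {
        // Which pixel of the buffers is under this fragment of the sphere?
        vec2 uv = gl_FragCoord.xy / u_screenSize;
        vec4 nd = texture2D(u_normalDepthBuffer, uv);
        vec3 normal = nd.rgb * 2.0 - 1.0;              // unpack the normal
        vec3 pos    = viewPositionFromDepth(uv, nd.a); // unpack the depth

        vec3  toLight = u_lightPosView - pos;
        float dist    = length(toLight);
        // Linear falloff: zero contribution outside the attenuation range.
        float atten   = max(1.0 - dist / u_lightRadius, 0.0);
        float diffuse = max(dot(normal, toLight / dist), 0.0);

        gl_FragColor = vec4(u_lightColor * diffuse * atten, 1.0);
    }
    )";
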

After rendering all the lights, we end up with a COLOR BUFFER
that is lit by all the lights.

We then copy or postprocess that COLOR BUFFER to the Back Buffer,
and voilà.
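
Assuming the lit Color Buffer lives in a framebuffer object, the plain copy
case can be a single blit (the variable names here are placeholders):

    // Hypothetical final step: blit the lit Color Buffer to the back buffer.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, colorBufferFBO);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);  // 0 is the default back buffer
    glBlitFramebuffer(0, 0, screenWidth, screenHeight,
                      0, 0, screenWidth, screenHeight,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
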

Why is it so FAST?

Well, with normal lighting, we end up rendering:

Number of Objects × Number of Lights (max 8)

With Deferred Lighting:

Number of Objects + Number of Lights (any number)
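
In loop form, mirroring the forward-lighting sketch at the beginning (same
hypothetical names), the deferred version renders geometry and lights in two
separate, non-nested passes:

    // Deferred: geometry once, then one screen-space light volume per
    // light, so the cost is objects + lights instead of objects * lights.
    for (const Object& object : scene.objects)
        drawObjectToGBuffer(object);  // one pass: Color + NormalDepth buffers
    for (const Light& light : scene.lights)
        drawLightVolume(light);       // sphere / cone / plane per light
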


Cheers!


Deferred Lighting: 120 lights running at 60 FPS
NormalDepthBuffer texture
Normal Lighting: 1 light running at 60 FPS
