DEFERRED RENDERING IN OPENGL
In this blog post, I will go over the implementation of my CENG469 Computer Graphics II term project.
![]() |
| G-buffer visualization |
Initial Goal
Starting my project, I had set the following goals:
- Implement deferred rendering in OpenGL using a complex scene
- Compare deferred rendering results with forward rendering (FPS comparison)
Unlike the previous assignments, I implemented this program using modern OpenGL, since it is more compatible with macOS.
Deferred Rendering Implementation Process
1. Creating the G-buffer
The first step in my project was to set up the G-buffer successfully so that I could safely proceed with the deferred rendering pipeline implementation. I planned to have the following textures to record geometry information of the scene:
- gPosition: Stores the fragment positions
- gNormal: Stores normal information of each fragment
- gAlbedoSpec: Stores the diffuse and specular color information of fragments
- gTexDepth: Stores the depth value of fragments (copied from depth buffer)
Practicing Texture Mapping
To check the success of my implementation, I needed to render each texture in the G-buffer to a screen-filling quad. Since I didn't have previous experience with texture mapping in OpenGL (other than creating a skybox), I practiced by rendering a textured quad with a JPG image. This step was very informative and ensured that I would be able to visualize the G-buffer content once it was initialized.
Even though this is a basic task, I ran into some trouble and it took a while to complete. I had problems such as getting a solid color on the quad or a diagonally skewed texture; a minimal quad-setup sketch follows the images below.
![]() |
| Diagonally skewed texture |
![]() |
| Solution of skewing problem |
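For reference, a screen-filling textured quad can be set up along the lines below. This is a minimal sketch with illustrative names, not my exact code; a mismatched stride or offset in the attribute setup is one typical cause of this kind of skewing.

```cpp
// Minimal sketch of a screen-filling textured quad (illustrative names, not the exact project code).
// Interleaved layout: position (x, y) followed by texture coordinates (u, v).
float quadVertices[] = {
    // positions   // texcoords
    -1.0f,  1.0f,  0.0f, 1.0f,
    -1.0f, -1.0f,  0.0f, 0.0f,
     1.0f,  1.0f,  1.0f, 1.0f,
     1.0f, -1.0f,  1.0f, 0.0f,
};

GLuint quadVAO, quadVBO;
glGenVertexArrays(1, &quadVAO);
glGenBuffers(1, &quadVBO);
glBindVertexArray(quadVAO);
glBindBuffer(GL_ARRAY_BUFFER, quadVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(quadVertices), quadVertices, GL_STATIC_DRAW);

// A wrong stride or offset here easily produces a diagonally skewed texture.
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)(2 * sizeof(float)));

// Drawn later with: glBindVertexArray(quadVAO); glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
```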
Initializing G-buffer
Once I made sure I could render a texture in my program, I proceeded with initializing the G-buffer. To avoid running into complex bugs and errors, I added textures to the G-buffer one by one, rendering each one to check for errors. My testing scene was very simple, with just two armadillo objects and one point light. I positioned the armadillos to have varying depth and position values, which would be enough to verify the G-buffer content I would create.
![]() |
| Test scene |
For the G-buffer, I first performed the required framebuffer and texture initializations.
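In outline, the setup follows the pattern below. This is a hedged sketch with illustrative names (`width`, `height`, the attachment order), not my exact code; the depth texture (gTexDepth) is handled separately, as described next.

```cpp
// Sketch of the G-buffer framebuffer and texture setup (illustrative, not the exact project code).
GLuint gBuffer;
glGenFramebuffers(1, &gBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, gBuffer);

GLuint gPosition, gNormal, gAlbedoSpec;

// gPosition: high-precision fragment positions
glGenTextures(1, &gPosition);
glBindTexture(GL_TEXTURE_2D, gPosition);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, gPosition, 0);

// gNormal: per-fragment normals, same high-precision format
glGenTextures(1, &gNormal);
glBindTexture(GL_TEXTURE_2D, gNormal);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, gNormal, 0);

// gAlbedoSpec: diffuse color in RGB, specular intensity in A
glGenTextures(1, &gAlbedoSpec);
glBindTexture(GL_TEXTURE_2D, gAlbedoSpec);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, gAlbedoSpec, 0);

// Tell OpenGL which color attachments this framebuffer renders into.
GLenum attachments[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
glDrawBuffers(3, attachments);

// A depth attachment is still needed for correct depth testing during the geometry pass.
GLuint rboDepth;
glGenRenderbuffers(1, &rboDepth);
glBindRenderbuffer(GL_RENDERBUFFER, rboDepth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rboDepth);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    std::cerr << "G-buffer framebuffer is not complete!" << std::endl;
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```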
I initially considered creating the depth texture as a color attachment and recording the z values of fragments there, but from the course material I learned and applied the following method, where the existing depth buffer contents are copied into the texture:
![]() |
| Depth texture initialization |
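In code terms, the copy boils down to something like the sketch below; the names are illustrative and the exact source of the copy depends on how the framebuffer is set up.

```cpp
// Sketch of the depth-copy idea (illustrative names, not the exact project code).
// gTexDepth is a GL_DEPTH_COMPONENT texture; after the geometry pass, the depth buffer
// of the G-buffer FBO is copied into it so the lighting pass can sample depth values.
glBindFramebuffer(GL_READ_FRAMEBUFFER, gBuffer);   // FBO whose depth buffer was just written
glBindTexture(GL_TEXTURE_2D, gTexDepth);
// Reads from the depth buffer because the texture's internal format is GL_DEPTH_COMPONENT.
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 0, 0, width, height, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
```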
Next, I created the shader files for writing to the G-buffer. The vertex shader (geometry_vert.glsl) converts world-space positions to clip-space coordinates and computes the desired values (position, normal, color), which are then passed to the fragment shader and written to the corresponding G-buffer textures.
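The fragment shader side of this pass writes into the color attachments through multiple render targets. A minimal sketch of the idea follows; the in/uniform names are illustrative, not my exact shader.

```glsl
// geometry_frag.glsl -- sketch of writing into the G-buffer via multiple render targets
#version 330 core
layout (location = 0) out vec4 gPosition;
layout (location = 1) out vec4 gNormal;
layout (location = 2) out vec4 gAlbedoSpec;

in vec3 fragWorldPos;   // world-space position from the vertex shader
in vec3 fragNormal;     // world-space normal from the vertex shader

uniform vec3 objectColor;        // per-object diffuse color
uniform float specularIntensity;

void main()
{
    gPosition   = vec4(fragWorldPos, 1.0);
    gNormal     = vec4(normalize(fragNormal), 1.0);
    gAlbedoSpec = vec4(objectColor, specularIntensity);  // diffuse in RGB, specular in A
}
```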
Here are the separate G-buffer textures I rendered:
![]() |
| gPosition |
![]() |
| gNormal |
![]() |
| gAlbedoSpec |
![]() |
| gTexDepth |
Initializing the G-buffer and writing to it concluded the geometry pass of the deferred shading pipeline. After successfully obtaining these textures, I moved on to implementing the lighting pass.
2. Lighting pass
The main task in the lighting pass is writing the lighting shader, which renders the final scene using only the values stored in the G-buffer.
The vertex shader is very simple: it passes the texture coordinates of the screen-filling quad to the fragment shader. Since I had already experimented with drawing a screen-filling quad and rendering a texture onto it, an important component of the lighting pass was already done.
The fragment shader takes the G-buffer textures as sampler2D uniforms and uses their values in the lighting computation.
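A hedged sketch of such a fragment shader, with a single light and diffuse shading only, is shown below; the names are illustrative, not my exact code.

```glsl
// lighting_frag.glsl -- sketch of the lighting pass fragment shader (diffuse only, single light)
#version 330 core
out vec4 fragColor;
in vec2 texCoord;                 // from the fullscreen-quad vertex shader

uniform sampler2D gPosition;
uniform sampler2D gNormal;
uniform sampler2D gAlbedoSpec;

uniform vec3 lightPos;
uniform vec3 lightColor;

void main()
{
    // Reconstruct the per-fragment data from the G-buffer instead of rasterizing geometry again.
    vec3 worldPos = texture(gPosition, texCoord).rgb;
    vec3 normal   = normalize(texture(gNormal, texCoord).rgb);
    vec3 albedo   = texture(gAlbedoSpec, texCoord).rgb;

    vec3 lightDir = normalize(lightPos - worldPos);
    float diff    = max(dot(normal, lightDir), 0.0);
    vec3 ambient  = 0.1 * albedo;

    fragColor = vec4(ambient + diff * lightColor * albedo, 1.0);
}
```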
In my first test, I used a simple lighting computation with no specular component, just to see if the G-buffer content would successfully come together to draw the same scene I observed with forward rendering. The first lighting pass result I got is as follows:
![]() |
| First deferred rendering result |
When comparing the deferred rendering result with the forward-rendered scene, I noticed that deferred rendering produced a slightly better-lit image even though the lighting computations were identical. I believe the reason is that the G-buffer stores position and normal data at high precision (GL_RGBA16F), which results in more accurate lighting calculations.
![]() |
| Forward vs. Deferred rendering |
Creating a complex scene
Now that I had the deferred rendering pipeline implemented, I needed to create a complex scene to see an actual performance improvement over forward rendering. To benefit from deferred rendering, I aimed for a scene with many objects and moving point lights. I took inspiration from a deferred rendering assignment description published by Texas A&M University for their Computer Graphics course, an idea I had also included in my initial project proposal.
![]() |
| Texas A&M CSCE 441 - Computer Graphics |
For the ground, I rendered a 40x40 white quad centered at (0, 0, 0). I defined an object array populated with the object types Teapot, Bunny, and Armadillo, and used it to repeatedly transform and draw different objects on the quad given a displacement factor and an object count. I created separate VBOs for the different objects along with functions to bind and draw each one, and I generated a color array populated with a randomized color for each object.
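Roughly, the placement loop works like the sketch below, assuming GLM for the transforms; `drawObject`, the `setMat4`/`setVec3` shader wrappers, and `randomColors` are hypothetical stand-ins for my actual code.

```cpp
// Sketch of the object placement loop (illustrative names, not the exact project code).
enum ObjectType { TEAPOT, BUNNY, ARMADILLO };

const int   gridSize     = 8;      // objects per row/column on the 40x40 ground quad
const float displacement = 5.0f;   // spacing between neighboring objects

for (int i = 0; i < gridSize; ++i) {
    for (int j = 0; j < gridSize; ++j) {
        int idx = i * gridSize + j;
        ObjectType type = static_cast<ObjectType>(idx % 3);   // cycle Teapot, Bunny, Armadillo

        // Place each object on the ground quad, offset from the corner at (-20, 0, -20).
        glm::mat4 model = glm::translate(glm::mat4(1.0f),
            glm::vec3(-20.0f + i * displacement, 0.0f, -20.0f + j * displacement));

        geometryShader.setMat4("modelingMatrix", model);       // hypothetical shader wrapper
        geometryShader.setVec3("objectColor", randomColors[idx]);
        drawObject(type);   // binds the object's VAO/VBO and issues the draw call
    }
}
```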
For the point lights, I manually defined 20 different light positions, all white. The light position array is passed to the lighting pass shader as a uniform variable.
Since adding a lot of lights resulted in a scene that was way too bright, I had to significantly decrease the light intensity:
![]() |
| Scene with 20 point lights with high intensity |
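In the lighting shader, the per-light contributions are accumulated in a loop. The excerpt below sketches that part; the light count macro, the helper function, and the intensity value are illustrative, and it stands in for the single-light computation from the earlier sketch.

```glsl
// Sketch of the multi-light accumulation in the lighting pass fragment shader (illustrative names).
#define NR_LIGHTS 20
uniform vec3 lightPositions[NR_LIGHTS];
uniform float lightIntensity;   // kept well below 1.0 so 20 white lights do not wash out the scene

vec3 shade(vec3 worldPos, vec3 normal, vec3 albedo)
{
    vec3 result = 0.1 * albedo;                       // small ambient term
    for (int i = 0; i < NR_LIGHTS; ++i) {
        vec3 lightDir = normalize(lightPositions[i] - worldPos);
        float diff    = max(dot(normal, lightDir), 0.0);
        result       += diff * lightIntensity * albedo;
    }
    return result;
}
```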
Here are some intermediate results I obtained while creating my scene and playing around with object and light positions. Note that there is no specular effect in these examples:
Performance Comparison
After creating a scene with a varying number of objects and lights, I wanted to make an FPS comparison between the forward-rendered and deferred-rendered scenes. I added functions to my program that use the forward rendering pipeline with shaders different from those used for deferred shading.
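Conceptually, the main loop switches between the two paths and counts frames per second. The sketch below assumes a GLFW-style loop; `useDeferred`, `renderGeometryPass`, `renderLightingPass`, and `renderForward` are illustrative names, not my actual functions.

```cpp
// Sketch of switching between the deferred and forward paths while measuring FPS.
bool useDeferred = true;
int frameCount = 0;
double lastTime = glfwGetTime();

while (!glfwWindowShouldClose(window)) {
    if (useDeferred) {
        renderGeometryPass();   // fill the G-buffer
        renderLightingPass();   // shade the fullscreen quad using the G-buffer
    } else {
        renderForward();        // shade every object with all lights directly
    }

    // Simple FPS counter: print the number of frames completed each second.
    ++frameCount;
    double now = glfwGetTime();
    if (now - lastTime >= 1.0) {
        std::cout << "FPS: " << frameCount << std::endl;
        frameCount = 0;
        lastTime = now;
    }

    glfwSwapBuffers(window);
    glfwPollEvents();
}
```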
In my first runs, I did not observe any performance improvement; forward rendering actually performs better for smaller numbers of objects and lights. I knew that I needed complex geometry and a larger number of lights to see deferred rendering pay off. However, when testing with more objects and lights, I hit a bottleneck: my personal computer is seemingly not capable of handling such scenes, and the FPS dropped significantly for all renderings. I also decided to make the lights rotate around the center of the quad, hoping to gain a further advantage for deferred rendering.
In my demo, a slight FPS improvement of deferred rendering over forward rendering can be seen; however, the FPS values fluctuate a lot, and in some runs it is difficult to tell whether there is a significant improvement. The demo also shows how the different renderings can be tested by modifying the main loop of my program.
I also recorded a demo that shows the visualization of G-buffer content created for the complex scene. This visualization can also be tested by modifying the main loop in my program.
Conclusion
At the end of my project, I have completed the goals I initially set:
- Implement deferred rendering in OpenGL using a complex scene (Completed)
- Compare deferred rendering results with forward rendering (FPS comparison) (Completed)
My final observation is that the drop in performance for large scenes is due to hardware limitations, and the next step in improving this project would be to test it on a more powerful computer.
I also wanted to improve the scene visually by rendering small spheres at the light positions. I had created another G-buffer texture to record emissive color values for the light spheres, but this extra emissive color caused an error in my lighting pass calculations and resulted in a rainbow-colored scene. This is another improvement I am planning for this project. I have removed the feature from my program for now, but here is a successful intermediate result:
![]() |
| Light spheres rendered at the light positions (intermediate result) |