Environment Mapping in OpenGL
In this blog post I will go over my implementation for the second homework of CENG469 Computer Graphics II.
Initial Goal vs. Progress
This assignment consisted of several different concepts:
- Equirectangular to cubemap texture generation
- Environment mapping (Skybox creation, reflection and refraction)
- Image based lighting (Median Cut Sampling)
- HDR tonemapping
Unfortunately, due to my slow progress and the time constraints, I wasn't able to accomplish the image-based lighting and tonemapping steps, which are the heart of the task. However, I believe I've improved my knowledge of some important OpenGL concepts that needed work, such as VBO generation, texture generation, and using shader programs. I have also learned new concepts, including the usage and generation of HDRI textures and skybox creation.
Equirectangular to cubemap texture generation
The first step of my implementation was to take a spherical HDR image and accurately produce from it the 6 cubemap faces that will be used to create a skybox. To check whether I was generating the faces correctly, I initially wrote code to output 6 PNG images, one per face. I used LodePNG, a PNG image encoder and decoder.
The steps I followed for the face generation are as follows:
1. Set the camera's field of view to 90 degrees and define a square projection plane.
Although it felt like an odd constraint, I defined a square projection plane because I needed direction vectors that fill a square area, and my starting point was to obtain one such vector per pixel of the projection plane (as stated in the course material). Since these direction vectors are only used to map pixels from a square area onto a spherical image, I suspect that any square plane of sufficient size placed in front of the camera (not necessarily the projection plane) would produce the same result, but I haven't tested this idea in my implementation.
2. Derive a 3D direction vector for every pixel in the camera projection.
3. Rotate vectors according to the cubemap face we're dealing with.
This step initially confused me because, at that point, I couldn't yet see what exactly the direction vectors were being generated for. I eventually figured out that the initial direction vectors (from the camera toward the -z direction) represent the front face of the cubemap. So, while processing each face, we can reuse the same set of -z oriented direction vectors by rotating all of them 90 degrees left, right, and so on to get the other faces' direction vectors.
4. Compute the spherical angles from the direction vectors.
5. Compute the corresponding spherical image index.
6. Copy the color from the spherical image to the cubemap face.
When creating PNGs for debugging, I had to multiply the HDR values by 255 to scale them to an acceptable range, which produced some weird pixels but nonetheless helped me check the image orientation and accuracy. Since HDR values span a wider range, the faulty pixels are the ones that exceed 255 after being multiplied. For my cubemap texture, however, I copied the raw HDR values directly into a float array for each face.
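The overflow described above can be avoided by clamping before the cast; a minimal sketch (the function name is my own, not from my actual code):

```cpp
#include <algorithm>
#include <cstdint>

// Convert one linear HDR channel (a float that may exceed 1.0) into an
// 8-bit PNG channel. Without the clamp, values above 1.0 scale past 255
// and wrap around on the cast, producing the "weird pixels" noted above.
std::uint8_t hdr_to_png(float v) {
    return static_cast<std::uint8_t>(std::clamp(v * 255.0f, 0.0f, 255.0f));
}
```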
Implementing the steps and the math, I obtained the 6 cubemap faces without trouble.
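Steps 4 and 5 above can be sketched as follows. The axis conventions here (y-up, azimuth measured around y, phi = 0 looking down -z) are my own assumptions and may differ from the course material's:

```cpp
#include <cmath>

struct Pixel { int i, j; };

// Map a 3D direction vector to equirectangular (spherical) image
// coordinates: azimuth phi selects the column, polar angle theta the row.
Pixel directionToEquirect(float x, float y, float z, int width, int height) {
    const float PI = 3.14159265358979f;
    float phi   = std::atan2(x, -z);                          // azimuth in [-PI, PI]
    float theta = std::acos(y / std::sqrt(x*x + y*y + z*z));  // polar angle in [0, PI]
    int i = (int)((phi + PI) / (2.0f * PI) * (width - 1));    // column
    int j = (int)(theta / PI * (height - 1));                 // row
    return {i, j};
}
```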
Environment mapping
With the face data generated, creating the cubemap texture in OpenGL takes three calls:
- glGenTextures to create a texture name.
- glBindTexture to bind the generated name to the cubemap texture target.
- glTexImage2D to upload each face's data to the corresponding cubemap face.
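The calls above can be sketched as follows (a fragment, not a complete program, since it needs a live GL context; faceData and faceSize are hypothetical names for the face pixel arrays generated earlier):

```cpp
GLuint cubemap;
glGenTextures(1, &cubemap);                  // create a texture name
glBindTexture(GL_TEXTURE_CUBE_MAP, cubemap); // bind it to the cubemap target

// Upload each face; the face targets are consecutive enum values, so
// GL_TEXTURE_CUBE_MAP_POSITIVE_X + i walks +X, -X, +Y, -Y, +Z, -Z.
for (int i = 0; i < 6; ++i) {
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB16F,
                 faceSize, faceSize, 0, GL_RGB, GL_FLOAT, faceData[i]);
}
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
```

A float internal format such as GL_RGB16F keeps the HDR range intact instead of clamping it to 8 bits.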
Reflection and Refraction
After setting up a cubemap environment, it's fairly straightforward to render mirror- and glass-like objects. For the mirror reflections, I created a shader program that uses GLSL's built-in reflect function, which returns the reflection vector, and directly samples the cubemap texture with it to set the fragment color.
Similarly, for the glass refraction, I used the refract function, which returns a vector bent according to the given ratio of refractive indices. I used the air/glass ratio in my implementation, but I also tested how the refraction would look with air/water and air/diamond ratios.
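Both built-ins are simple vector math; a C++ sketch replicating their behavior (following the formulas in the GLSL specification, assuming unit-length inputs):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 operator*(float s, Vec3 v) { return {s*v.x, s*v.y, s*v.z}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }

// Mirror reflection of incident I about unit normal N (matches GLSL reflect).
Vec3 reflect(Vec3 I, Vec3 N) { return I - 2.0f * dot(N, I) * N; }

// Refraction with eta = n_incident / n_transmitted (matches GLSL refract).
// Returns the zero vector on total internal reflection.
Vec3 refract(Vec3 I, Vec3 N, float eta) {
    float d = dot(N, I);
    float k = 1.0f - eta * eta * (1.0f - d * d);
    if (k < 0.0f) return {0.0f, 0.0f, 0.0f};
    return eta * I - (eta * d + std::sqrt(k)) * N;
}
```

In the fragment shader the returned vector is used directly as the cubemap sampling direction.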
(Figure: Air/Glass refraction)
(Figure: Air/Water refraction)
(Figure: Air/Diamond refraction)
As the refractive index of the object gets larger, the Air/Object refraction ratio (n_air / n_object) decreases and the refracted rays are bent more toward the surface normal. We can see that the diamond object has the clearest environment refraction and water has the blurriest, which checks out because diamond (2.42) > glass (1.52) > water (1.33) is the refractive index relationship among them.
Trying to accomplish Image Based Lighting...
- From I, J, calculate theta and phi (spherical angles).
- From theta and phi, calculate x, y and z (3D texture coordinates).
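The two steps above are the inverse of the equirectangular mapping used for face generation; a sketch under my own axis conventions (y-up, phi = 0 looking down -z), which may differ from the course's:

```cpp
#include <cmath>

struct Dir { float x, y, z; };

// Map an equirectangular pixel (i, j) back to a direction on the unit
// sphere: the column gives the azimuth phi, the row the polar angle theta.
Dir equirectToDirection(int i, int j, int width, int height) {
    const float PI = 3.14159265358979f;
    float phi   = (i + 0.5f) / width  * 2.0f * PI - PI; // azimuth in [-PI, PI)
    float theta = (j + 0.5f) / height * PI;             // polar angle in [0, PI]
    return { std::sin(theta) * std::sin(phi),   // x
             std::cos(theta),                   // y
            -std::sin(theta) * std::cos(phi) }; // z
}
```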