Environment Mapping in OpenGL

In this blog post I will go over my implementation for the second homework of CENG469 Computer Graphics II.

Initial Goal vs. Progress

This assignment consisted of several different concepts:

  • Equirectangular to cubemap texture generation
  • Environment mapping (skybox creation, reflection and refraction)
  • Image-based lighting (median cut sampling)
  • HDR tonemapping

Unfortunately, due to my slow progress and the time constraints, I wasn't able to accomplish the image-based lighting and tonemapping steps, which are the heart of the task. However, I believe I've improved my grasp of some important OpenGL concepts that needed work, such as VBO generation, texture generation, and the use of shader programs. I have also learned new concepts, including the usage and generation of HDRI textures and skybox construction.

Equirectangular to cubemap texture generation

The first step of my implementation was to take a spherical (equirectangular) HDR image and accurately produce the 6 cubemap faces from it that would be used to create a skybox. To check whether I was generating the faces correctly, I initially wrote code that outputs 6 PNG images, one per face. I used LodePNG, a PNG image encoder and decoder.

The steps I followed for the face generation are as follows:

1. Set the camera's field of view to 90 degrees and define a square projection plane.

Although it felt like an odd constraint, I defined a square projection plane because I needed direction vectors that fill a square area, and my starting logic was to obtain one such vector per pixel of the projection plane (as stated in the course material). Since these direction vectors are only used to map pixels from a square area to a spherical image, I suspect that any square plane of sufficient size placed in front of the camera (not necessarily the projection plane) would produce the same result, but I haven't tested this idea in my implementation.

2. Derive a 3D direction vector for every pixel in the camera projection.

3. Rotate vectors according to the cubemap face we're dealing with.

This step initially confused me because at that point I couldn't yet see what exactly the direction vectors were being generated for. I figured out that the initial direction vectors (pointing from the camera toward the -z direction) represent the front face of the cubemap. So, while processing each face, we can reuse the same set of -z oriented direction vectors by rotating all of them 90 degrees left, right, up, or down to obtain the other faces' direction vectors.

4. Compute the spherical angles from the direction vectors.

5. Compute the corresponding spherical image index.

6. Copy the color from the spherical image to the cubemap face (a code sketch of these steps follows the list).
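To make the math concrete, here is a minimal C++ sketch of steps 2, 4, 5 and 6 for the front face only; step 3, the per-face rotation, is omitted, and the function name and array layout are illustrative rather than my exact code:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Build the front (-z) cubemap face from an equirectangular RGB float image.
std::vector<float> buildFrontFace(const std::vector<float>& hdr,
                                  int imgW, int imgH, int faceSize)
{
    const float PI = 3.14159265f;
    std::vector<float> face(faceSize * faceSize * 3);
    for (int j = 0; j < faceSize; ++j) {
        for (int i = 0; i < faceSize; ++i) {
            // Step 2: direction through this pixel of a square plane at z = -1;
            // a 90-degree FOV makes that plane exactly 2 x 2 units.
            float x = 2.0f * (i + 0.5f) / faceSize - 1.0f;
            float y = 1.0f - 2.0f * (j + 0.5f) / faceSize;
            float z = -1.0f;
            float len = std::sqrt(x * x + y * y + z * z);
            x /= len; y /= len; z /= len;

            // Step 4: spherical angles from the direction vector.
            float theta = std::atan2(x, -z);  // azimuth in [-pi, pi]
            float phi   = std::asin(y);       // elevation in [-pi/2, pi/2]

            // Step 5: corresponding pixel index in the spherical image.
            int u = std::min(imgW - 1, (int)((theta + PI) / (2.0f * PI) * imgW));
            int v = std::min(imgH - 1, (int)((PI / 2.0f - phi) / PI * imgH));

            // Step 6: copy the HDR color into the face.
            for (int c = 0; c < 3; ++c)
                face[(j * faceSize + i) * 3 + c] = hdr[(v * imgW + u) * 3 + c];
        }
    }
    return face;
}
```

The 90-degree field of view is what keeps the pixel-to-direction mapping this simple: at unit distance the projection plane spans exactly -1 to 1 in both axes.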

When creating PNGs for debugging, I had to multiply the HDR values by 255 to bring them into the 8-bit range, which resulted in some weird pixels but nonetheless helped me verify the image orientation and accuracy. Since HDR images have a wider range, the faulty pixels are the ones that exceed 255 after the multiplication. For the actual cubemap texture, however, I copied the HDR values directly into a float array for each face.

After implementing the steps and the math, I obtained the 6 cubemap faces without trouble.

Environment mapping

After generating the data for the 6 cubemap faces, it was time to build a cubemap texture from them. The difficulty I had in this step was binding the cubemap texture: I hadn't used textures in OpenGL before, and I initially thought that binding the cubemap texture right before the draw call would be enough. Through trial and error I arrived at the correct steps (a sketch follows the list):

  1. glGenTextures to create a texture name.
  2. glBindTexture to bind the generated name to the GL_TEXTURE_CUBE_MAP target.
  3. glTexImage2D to upload each face's data to the corresponding cubemap face target.
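Putting these together, a minimal sketch of the texture creation, assuming each face's HDR pixels already sit in a float RGB vector (faces[0] through faces[5], ordered +X, -X, +Y, -Y, +Z, -Z):

```cpp
GLuint cubemap;
glGenTextures(1, &cubemap);                   // step 1: create a texture name
glBindTexture(GL_TEXTURE_CUBE_MAP, cubemap);  // step 2: bind it as a cubemap

for (int i = 0; i < 6; ++i) {
    // step 3: upload one face; the six face targets are consecutive enum values
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB16F,
                 faceSize, faceSize, 0, GL_RGB, GL_FLOAT, faces[i].data());
}

// Without filtering/wrapping parameters the texture is incomplete and samples black.
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
```

The uploads only need to happen once at creation; binding the texture again before the skybox draw call is what makes it available for sampling.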

I also needed to refresh my knowledge of VBO usage, so it took a while of debugging before I could see the textured cubemap environment on the screen. I finally obtained this result:


When I tested my cubemap with a texture that had the respective face names written on it, I saw that the images were flipped and looked like they were written backwards. This happened because I had forgotten to negate the z component of my direction vectors to account for cubemaps using a left-handed coordinate system. After fixing this problem I got the desired result:


Reflection and Refraction

After setting up a cubemap environment, it's fairly straightforward to render mirror- and glass-like objects. For the mirror reflections, I created a shader program that uses GLSL's built-in reflect function, which returns the reflection vector, and directly samples the cubemap texture with it to set the fragment color.
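For reference, a sketch of what such a fragment shader looks like, stored as a C++ string the way it would be handed to glShaderSource; the variable names (vPos, vNormal, eyePos, skybox) are illustrative, not my exact ones:

```cpp
const char* reflectFrag = R"(
#version 330 core
in vec3 vPos;            // world-space fragment position
in vec3 vNormal;         // world-space normal
uniform vec3 eyePos;     // world-space camera position
uniform samplerCube skybox;
out vec4 fragColor;

void main() {
    vec3 I = normalize(vPos - eyePos);       // view direction
    vec3 R = reflect(I, normalize(vNormal)); // mirror direction
    fragColor = vec4(texture(skybox, R).rgb, 1.0);
})";
```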

Similarly, for the glass refraction I used the refract function, which returns a vector bent according to the given ratio of refractive indices. I used the air/glass ratio in my implementation, but I also tested how the refraction would look with the air/water and air/diamond ratios.
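The refraction shader only changes how the sampling direction is bent; again a sketch with assumed names:

```cpp
const char* refractFrag = R"(
#version 330 core
in vec3 vPos;
in vec3 vNormal;
uniform vec3 eyePos;
uniform samplerCube skybox;
out vec4 fragColor;

void main() {
    // eta is the ratio of refractive indices of the two media:
    // air/glass here; use 1.00/1.33 for water, 1.00/2.42 for diamond.
    float eta = 1.00 / 1.52;
    vec3 I = normalize(vPos - eyePos);
    vec3 R = refract(I, normalize(vNormal), eta);
    fragColor = vec4(texture(skybox, R).rgb, 1.0);
})";
```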

Air/Glass refraction

Air/Water refraction

Air/Diamond refraction

As the refractive index of the object gets larger, the air/object ratio n1/n2 decreases and, by Snell's law (sin θt = (n1/n2) · sin θi), the transmitted rays are bent more strongly toward the surface normal. We can see that the diamond object has the clearest environment refraction and water the blurriest, which checks out, because diamond (2.42) > glass (1.52) > water (1.33) is the refractive index relationship among them.

Trying to accomplish Image Based Lighting...

At this point in my implementation I had limited time left, and the required median cut algorithm seemed daunting, since it could demand a lot of debugging after being implemented. I wanted to obtain a result that used at least some amount of environment lighting, so I simplified my goal: split the HDR image into only 4 regions, calculate the centroid of each region, and add the 4 resulting light probes to my lighting calculations. In a non-reusable, hardcoded fashion I split my image data into 4. I calculated each centroid by accumulating the region's total luminance and the luminance-weighted pixel coordinates:

centroid position = (sum of luminance-weighted pixel coordinates) / (total luminance)
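A sketch of how one region's centroid can be computed this way, assuming RGB float data and the common Rec. 709 luminance weights; the struct and function names are illustrative:

```cpp
#include <vector>

struct Centroid { float i, j; };

// Luminance-weighted centroid of the pixel region [i0, i1) x [j0, j1).
Centroid regionCentroid(const std::vector<float>& hdr, int imgW,
                        int i0, int i1, int j0, int j1)
{
    double sumL = 0.0, sumI = 0.0, sumJ = 0.0;
    for (int j = j0; j < j1; ++j) {
        for (int i = i0; i < i1; ++i) {
            const float* p = &hdr[(j * imgW + i) * 3];
            double L = 0.2126 * p[0] + 0.7152 * p[1] + 0.0722 * p[2];
            sumL += L;       // total luminance
            sumI += L * i;   // luminance-weighted coordinates
            sumJ += L * j;
        }
    }
    if (sumL == 0.0)  // fully black region: fall back to the geometric center
        return { (i0 + i1) * 0.5f, (j0 + j1) * 0.5f };
    return { float(sumI / sumL), float(sumJ / sumL) };
}
```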

I then implemented the reverse of the pixel coordinate operations from the equirectangular-to-cubemap generation. Instead of taking direction vectors and finding I, J in spherical image coordinates, I took I, J (in this case the centroid pixel coordinates) and found the corresponding cubemap texture coordinate (which is the same as a direction vector). These steps follow the initial math in a straightforward way (sketched after the list):

  1. From I, J, calculate theta and phi (spherical angles).
  2. From theta and phi, calculate x, y and z (3D texture coordinates).
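A sketch of this inverse mapping, mirroring the forward math from the face-generation sketch above; the function name is illustrative:

```cpp
#include <cmath>

// Map a centroid pixel (I, J) in the spherical image back to a 3D direction.
void pixelToDirection(float I, float J, int imgW, int imgH,
                      float& x, float& y, float& z)
{
    const float PI = 3.14159265f;
    // Step 1: spherical angles from the pixel coordinates.
    float theta = (I / imgW) * 2.0f * PI - PI;   // azimuth in [-pi, pi]
    float phi   = PI / 2.0f - (J / imgH) * PI;   // elevation in [-pi/2, pi/2]
    // Step 2: 3D texture coordinates from the angles.
    x = std::cos(phi) * std::sin(theta);
    y = std::sin(phi);
    z = -std::cos(phi) * std::cos(theta);        // z negated, as in the skybox fix
}
```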

I passed the light positions and colors to the object shader for the lighting calculations. I hardcoded the light colors as red, green, blue and white for testing purposes. However, I saw that all 4 lights clustered around a similar position and produced the following erroneous result:


This came quite late in the time window I had left, so I didn't get to fix it and implement image-based lighting. Comparing the lighting results across different cubemap textures, I saw that the light positions differ only slightly and that all the light comes from the brightest direction in the image. From this I concluded that, despite an error somewhere in my calculations, the pixel-to-direction conversion probably returns the 3D position of the high-luminance pixels correctly, and that the real problem lies in the centroid calculations.



Conclusion

Since I couldn't implement HDR tonemapping, I could not get the benefit of a high-quality skybox in my implementation. However, being able to view a seemingly infinite natural environment surrounding my object was very motivating. Even though I could not complete the desired implementation, I have learned a lot, even about the concepts I couldn't successfully cover, and I hope to finish this project in my personal time.




