VROOM 3D (short for Volume Rendering Object Oriented Machine) is a volume rendering engine. It was also my university graduation project.
The traditional (mainstream) approach is to store and use only data describing the visible surfaces of objects – i.e. vertices and faces, also known as meshes. This is a very compact yet powerful approach, especially when combined with finer relief information (through normal maps).
The purpose of the project is to explore an alternative method for rendering 3D objects stored in a different format – volumes. A volume is analogous to a three-dimensional raster image (it can also be thought of as a stack of 2D images): its elements, called voxels, are pixels with three coordinates. Current hardware renders meshes and two-dimensional images very efficiently, but has very limited built-in support for volumes.
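To make the analogy concrete, here is a minimal sketch (not VROOM's actual code – the names are mine) of a volume stored as a flat array, where each voxel is addressed by three coordinates just like a pixel is by two:

```cpp
#include <vector>
#include <cstddef>

// A volume as a flat array of density samples, one value per voxel.
struct Volume {
    std::size_t w, h, d;        // dimensions in voxels
    std::vector<float> data;    // w * h * d samples

    Volume(std::size_t w, std::size_t h, std::size_t d)
        : w(w), h(h), d(d), data(w * h * d, 0.0f) {}

    // A voxel is the 3D analogue of a pixel: address it with (x, y, z).
    float& at(std::size_t x, std::size_t y, std::size_t z) {
        return data[x + y * w + z * w * h];
    }
};
```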
The project was written in C++ and GLSL, using the OpenGL API.
The primary goal of my research was to answer the question “can volume data be rendered in real-time and, if so, would it be a feasible alternative to traditional mesh rendering?”. It was mainly driven by personal curiosity about esoteric rendering techniques in the context of game development.
The objectives were:
- compare and contrast direct volume rendering and mesh rendering
- compare and contrast the different techniques for rendering volume data
- deliver a software solution that can render volume data at 30 FPS or more
- employ advanced rendering techniques (such as lighting, blending, etc.)
- demonstrate the advantages over mesh rendering
The project was discontinued. There is much potential for future development; however, better solutions already exist.
Algorithm- and technology-wise there were quite a few interesting bits. I explored two alternative techniques – volume ray-tracing and 3D texture mapping – both of which involved some fun tricks with maths.
Ray-tracing required working out the intersection of a ray and a box, then sampling the volume along the resulting ray segment. Unfortunately, I was not ready to implement it as a GPU program, and it was rather slow on the CPU.
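The ray/box intersection is the standard "slab" method; a minimal sketch (names and layout are mine, not VROOM's) that returns the entry and exit distances along the ray, between which the volume would then be sampled:

```cpp
#include <algorithm>
#include <cmath>

// Slab method: intersect ray origin + t*dir with the axis-aligned box
// [bmin, bmax]. On a hit, tNear/tFar hold the entry and exit distances.
// Division by a zero direction component yields IEEE infinities, which
// the min/max below handle correctly in the common cases.
bool intersectBox(const float origin[3], const float dir[3],
                  const float bmin[3], const float bmax[3],
                  float& tNear, float& tFar) {
    tNear = -INFINITY; tFar = INFINITY;
    for (int i = 0; i < 3; ++i) {
        float inv = 1.0f / dir[i];
        float t0 = (bmin[i] - origin[i]) * inv;
        float t1 = (bmax[i] - origin[i]) * inv;
        if (t0 > t1) std::swap(t0, t1);     // order the slab distances
        tNear = std::max(tNear, t0);        // latest entry
        tFar  = std::min(tFar, t1);         // earliest exit
    }
    return tNear <= tFar && tFar >= 0.0f;   // miss if intervals don't overlap
}
```

The volume is then sampled at fixed steps for t in [tNear, tFar], which is exactly the part that is expensive on a CPU and trivially parallel on a GPU.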
3D texture mapping on the other hand required efficient generation of proxy geometry. This could be as simple as slicing a box along its cardinal axes and rendering the volume as a set of quads mapped to a 3D texture, but that resulted in some skewing of the volume at certain camera angles, followed by a sudden pop when the quad set changed. To solve this I implemented an alternative proxy geometry generator which slices the volume bounding box with planes perpendicular to the viewing direction – I do not take credit for the idea (Rezk-Salama and Kolb).
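The first step of that view-aligned slicing can be sketched as follows (a simplified illustration, not VROOM's code): project the bounding-box corners onto the view direction to find the depth range, then place evenly spaced cutting planes within it. Intersecting each plane with the box edges yields a polygon with 3 to 6 vertices, which is rendered as proxy geometry mapped to the 3D texture.

```cpp
#include <algorithm>
#include <cstddef>

// Compute the signed distances (along viewDir, assumed unit length) at
// which numSlices view-aligned cutting planes should be placed so that
// they span the whole bounding box.
void slicePlaneDistances(const float corners[8][3], const float viewDir[3],
                         std::size_t numSlices, float* distances) {
    float dmin = 1e30f, dmax = -1e30f;
    for (int c = 0; c < 8; ++c) {
        // Project each box corner onto the view direction.
        float d = corners[c][0] * viewDir[0]
                + corners[c][1] * viewDir[1]
                + corners[c][2] * viewDir[2];
        dmin = std::min(dmin, d);
        dmax = std::max(dmax, d);
    }
    // Place slices at the centre of each of numSlices equal intervals.
    for (std::size_t i = 0; i < numSlices; ++i)
        distances[i] = dmin + (dmax - dmin) * (i + 0.5f) / numSlices;
}
```

Because the planes rotate with the camera, the slice set changes continuously and the popping of the axis-aligned approach disappears.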
Another technical challenge was the creation of a 3D normal map to be used for lighting. At first the algorithm was implemented on the CPU, which was quite simple to achieve but not optimal – it involved a RAM-to-VRAM transfer which could be avoided. My second take on the problem was to use render-to-texture (which happened to work for 3D texture targets as well) with a fragment shader program that calculates the normals from alpha gradients. At some point after moving the algorithm to the GPU I tried on-the-fly normal computation, but that was predictably slow and unnecessary.
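The alpha-gradient idea itself is compact; a CPU-side sketch (the shader does the same per texel – `alphaAt` and the surrounding wrapper are illustrative, not VROOM's API): central differences of the alpha channel give a gradient, and its negated, normalized value serves as the surface normal.

```cpp
#include <cmath>

// Estimate a normal at voxel (x, y, z) from the alpha gradient using
// central differences. alphaAt is any sampler of the volume's alpha channel.
void normalFromAlpha(float (*alphaAt)(int, int, int),
                     int x, int y, int z, float n[3]) {
    n[0] = alphaAt(x + 1, y, z) - alphaAt(x - 1, y, z);
    n[1] = alphaAt(x, y + 1, z) - alphaAt(x, y - 1, z);
    n[2] = alphaAt(x, y, z + 1) - alphaAt(x, y, z - 1);
    float len = std::sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
    if (len > 0.0f)
        for (int i = 0; i < 3; ++i)
            n[i] = -n[i] / len;   // negate so the normal points out of denser material
}
```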
Shading itself was simple to add once normals had been taken care of. I implemented basic Blinn-Phong and a toon shader for demo purposes, but at that point any of the more advanced shading models could have been applied, as the rendering equation remains the same.
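For reference, the Blinn-Phong term per sample looks like this (a minimal sketch of the standard model, equivalent to what the fragment shader computes; the function and its scalar return are my simplification):

```cpp
#include <algorithm>
#include <cmath>

// Blinn-Phong intensity for unit vectors: n = normal, l = direction to
// light, v = direction to eye. Returns diffuse + specular, unscaled;
// a real shader multiplies these by material and light colours.
float blinnPhong(const float n[3], const float l[3], const float v[3],
                 float shininess) {
    float h[3] = { l[0] + v[0], l[1] + v[1], l[2] + v[2] };  // half vector
    float hl = std::sqrt(h[0]*h[0] + h[1]*h[1] + h[2]*h[2]);
    for (int i = 0; i < 3; ++i) h[i] /= hl;
    float diff = std::max(0.0f, n[0]*l[0] + n[1]*l[1] + n[2]*l[2]);
    float spec = std::pow(std::max(0.0f, n[0]*h[0] + n[1]*h[1] + n[2]*h[2]),
                          shininess);
    return diff + spec;
}
```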
The project was a moderate success in that, for the time available, I was able to achieve most of the objectives. The resulting application uses OpenGL 3D textures to store data on the GPU and maps it to multiple polygonal slices from a cube, rendered back to front to display a reconstruction of the volume. It is able to construct a normal map of the volume, which is used to apply basic Blinn-Phong lighting. The number of slices can be adjusted on the fly to reduce the rendering time, which allows it to run at higher FPS at the expense of quality. The bigger constraint turned out to be not rendering speed but limited memory: volumes require a lot of space on the graphics card (a 512³ volume with four 8-bit channels alone occupies 512 MB), and my solution did not address this issue. Furthermore, it fails to demonstrate the rendering of a semitransparent volume (proper alpha blending was never implemented), which is one of the biggest advantages of volume rendering over mesh rendering.
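The missing blending is the standard back-to-front "over" operator – in OpenGL terms, `glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)` with the slices drawn back to front. A sketch of what it computes per pixel (my illustration, not code from the project):

```cpp
struct RGBA { float r, g, b, a; };

// Composite src (the nearer slice) over dst (the accumulated image so far).
// Applying this per slice, back to front, yields correct semitransparency.
RGBA over(RGBA dst, RGBA src) {
    RGBA out;
    out.r = src.r * src.a + dst.r * (1.0f - src.a);
    out.g = src.g * src.a + dst.g * (1.0f - src.a);
    out.b = src.b * src.a + dst.b * (1.0f - src.a);
    out.a = src.a + dst.a * (1.0f - src.a);
    return out;
}
```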
Halfway through the project I found out that many more people were enthusiastic about the idea, and someone had already been working on the same problem and had achieved far more impressive results. This is why VROOM is now discontinued, but it has been an important milestone in my understanding of graphics hardware and low-level graphics programming.