The core of the thesis is the scalable volume renderer shader system. The volume is rendered using raycasting, and the fragment shader itself is split into multiple functions, each with a well-defined interface and task. There are functions for clipping the traced ray, clipping inside the volume, reconstruction, classification, shading, gradient estimation and illumination. For every shader function there are multiple alternative implementations, which makes it possible, for example, to switch the illumination model from the traditional Blinn-Phong to Cook-Torrance. Each function implementation can provide its shader source in multiple shader languages; currently GLSL as well as Cg versions are provided.
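The interface between these shader functions might look roughly like the following GLSL sketch. The signatures below are invented for illustration and are not the actual PixelLight ones:

```glsl
// Illustrative interface only -- these signatures are hypothetical
bool  ClipRay(inout vec3 rayOrigin, inout float maximumDistance); // clip the traced ray, e.g. against clip planes
bool  ClipPosition(vec3 position);                                // clipping inside the volume
float Reconstruction(vec3 position);                              // filter the scalar value, e.g. trilinear or cubic B-spline
vec4  Classification(float scalar);                               // map the scalar to color/opacity via the transfer function
vec3  GradientEstimation(vec3 position);                          // e.g. central differences
vec4  Shading(vec4 color, vec3 gradient, vec3 viewDirection);     // apply the illumination model, e.g. Blinn-Phong
```

A concrete implementation of, say, `Shading` can then be swapped without touching the rest of the raycaster.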
A shader compositor analyses the current scene setup and visualization parameters in order to dynamically generate a shader that can be processed by the GPU. Some shader functions are template based, meaning that, for example, a ray can be clipped by using multiple clip planes and/or the depth buffer. The result is a scalable volume rendering system which can easily be extended with new algorithms for gradient estimation and so on, without the need to rewrite the complete raycasting shader. It also means that one and the same system can cover many use cases, from interactive rendering up to high-quality rendering using cubic B-splines for reconstruction and a high sampling rate.
Here are some videos of the volume rendering plugin. The videos were recorded on a notebook with a Radeon HD 6970M.
I started with a first experiment on a desktop PC using a procedurally generated pyroclastic cloud in order to get a feel for volume rendering (only 12 triangles are rendered in order to achieve the following results)
and then tested the same on Android 2.3 using an LG Optimus Speed P-990 smartphone
The following video uses the well-known Visible Male dataset
while the next video uses a CT scan of a PixelLight team member... the team consists of...
He will probably never ever give me more of his scans due to horrible misuse
For testing purposes I received an 876 MiB CT scan of a pig containing 512×512×1743 voxels
- There's a downloadable demo showing volume rendering within PixelLight using GPU shader-based volume raycasting: http://www.pixellight.org/site/index.php/page/25.html
- Screenshots at http://www.gamedev.net/gallery/album/104-pixellight/page__sortby__idate and http://www.pixellight.org/site/index.php/photos/album/3.html
- The volume rendering plugins are now within the PixelLight Git repository