After a futile exploration of sparse voxel octree ray casting using Alchemy (which was fun but hopeless), I turned towards another technique for volume rendering, using view-aligned slices. The approach is not much different from the rendering of this older experiment, in which the slices were aligned to the object itself. Again, we're using the same technique to create and read from the 3D texture (which is static in this case): i.e. a set of cross sections placed next to each other. CT scans are wonderful for this:
Note that the image above is just a crop-out; we need a lot more cross sections to make it look decent (I used 32).
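Packing the cross sections into one wide 2D texture can be sketched as follows. This is an illustrative NumPy version, not the original ActionScript; the sizes mirror the textures used in the demos below.

```python
import numpy as np

def build_atlas(slices):
    """Pack equally sized cross sections (H x W arrays) side by side
    into one wide 2D atlas that stands in for a 3D texture,
    e.g. 32 slices of 128x128 -> a 4096x128 texture."""
    return np.concatenate(slices, axis=1)

# Hypothetical data: 32 grayscale cross sections of 128x128 each.
slices = [np.random.rand(128, 128) for _ in range(32)]
atlas = build_atlas(slices)
print(atlas.shape)  # (128, 4096)
```

Slice index then maps to a horizontal offset in the atlas, which is how the shader later reads "3D" coordinates from a 2D texture.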
Rendering the slices
When using view-aligned slices, they typically won't be aligned to the texture's slices, as illustrated in the image to the right (yes, my graphic skills are EPIC!). The point p is any point on any view-aligned slice. We need to know where it is in the texture's 3D space. This is simply a change-of-basis transformation, where both bases are defined to have the same origin. In our specific case, eye space is world space, so all we have to do is multiply p by the inverse of the object's delta transformation matrix. Since the result will usually lie between 2 slices of the 3D texture (as in the illustration), we sample both texture slices with constant x and y coordinates and interpolate the colour values. This approach is not 100% correct, since the interpolation should also be aligned to the view. However, for this purpose, it's a good trade-off for some extra performance.
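The lookup described above can be sketched like this. It's a hedged Python/NumPy illustration of the idea, not the actual Pixel Bender kernel; names like `inv_model` and the normalized [0, 1] local coordinates are assumptions.

```python
import numpy as np

def sample_volume(p_eye, inv_model, atlas, slice_size, num_slices):
    """Map a point on a view-aligned slice from eye/world space into
    the texture's local space, then blend the two nearest texture
    slices at constant (x, y) -- the simplification from the text."""
    # Change of basis: multiply p by the inverse model transformation.
    p_tex = inv_model @ np.append(p_eye, 1.0)
    u, v, w = p_tex[:3]  # assumed normalized to [0, 1]^3

    # Depth w falls between two texture slices; find them and the blend factor.
    z = w * (num_slices - 1)
    z0 = int(np.clip(np.floor(z), 0, num_slices - 2))
    t = z - z0

    # Same (x, y) in both slices of the atlas.
    x = int(np.clip(u, 0.0, 0.999) * slice_size)
    y = int(np.clip(v, 0.0, 0.999) * slice_size)
    c0 = atlas[y, z0 * slice_size + x]
    c1 = atlas[y, (z0 + 1) * slice_size + x]
    return (1 - t) * c0 + t * c1
```

With the identity as `inv_model`, a point halfway through the volume blends the two middle slices equally, which is exactly the (view-unaligned) interpolation trade-off discussed above.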
As this needs to be done for every pixel on every slice, we're doing these calculations through Pixel Bender. And that's how it works in broad strokes. There are also some translation and scaling steps to ensure a uniform and properly centred transformation; if you're still interested, you can check the source for that. It's important to note that half the slices in the back are actually culled for a worthwhile performance boost. They don't really contribute all that much to the final image after all.
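The back-slice culling amounts to something like the sketch below: generate the view-aligned slice positions, then drop the rear half. This is my own reconstruction of the idea, not code from the linked source; the centred [-0.5, 0.5] depth range is an assumption.

```python
def visible_slice_depths(num_slices, cull_back_half=True):
    """Evenly spaced view-aligned slice depths around the object's
    centre, ordered front to back; optionally cull the rear half,
    which contributes little to the final image (assumed layout)."""
    depths = [i / (num_slices - 1) - 0.5 for i in range(num_slices)]
    if cull_back_half:
        depths = depths[: num_slices // 2]  # keep only the front half
    return depths

print(len(visible_slice_depths(32)))  # 16
```

That is where the "16 visible slices" in the demos below comes from: 32 slices generated, half of them culled.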
Click and drag to rotate the pitbull skull in all demos:
- High quality: using a 4096×128 (i.e. 128×128×32) texture with 16 visible slices
- Low quality: using a 2048×64 texture with 16 visible slices
- Ultra quality: using a 4096×128 texture with 50 visible slices (slow!)
- Ultra high quality static rendering with low quality dynamic rendering: getting the best of two worlds