Convolution Light Probes in Away3D 4.0

As mentioned in the previous blog post, the dev branch of Away3D 4.0 now has light probes. Light probes are essentially a special sort of light source that contains the global lighting info for the scene location it’s placed at, encoded as cube maps. For diffuse lighting, these cube maps are simply indexed by the surface normal of the shaded fragment; for specular lighting, the view reflection vector is used. As such they’re a fast, basic way of faking global illumination. Because cube maps are used, the lighting is only physically correct at the precise point where it was sampled, but the results can still be convincing regardless. Several light probes can be placed in a scene; the material’s light picker assigns a weight to each probe, based on the rendered object’s position relative to it, to determine its importance in the lighting step.
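
In shader terms both lookups are just a single cube map fetch; the only slightly non-trivial part is the specular lookup direction. Purely as a reference sketch (plain AS3 maths, nothing Away3D-specific), the view reflection vector works out as:

    // Direction used for the specular probe lookup: the view vector reflected
    // around the surface normal, R = 2*(N.V)*N - V, with V pointing from the
    // surface towards the eye and N assumed to be normalized.
    // Uses flash.geom.Vector3D.
    function reflectionVector(normal : Vector3D, toEye : Vector3D) : Vector3D
    {
        var d : Number = 2*normal.dotProduct(toEye);
        return new Vector3D(d*normal.x - toEye.x,
                            d*normal.y - toEye.y,
                            d*normal.z - toEye.z);
    }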

You can use normal lights alongside the light probes, and in the case of the default material system, you can specify which lighting component (diffuse or specular) is affected by which light types (material.diffuseLightSources and material.specularLightSources accept LightSources.LIGHTS, LightSources.PROBES, or LightSources.ALL). This can be useful if you want to use light probes for diffuse global lighting, but want specular highlights from traditional light sources without those affecting the diffuse lighting.
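
In code that could look something like the sketch below (treat the exact class and constructor names as indicative only and check the dev branch source for the real API; the *LightSources properties and LightSources constants are the ones discussed above):

    // Indicative only - assumes the dev branch's LightProbe, PointLight and
    // StaticLightPicker classes, with pre-convolved diffuse/specular cube maps.
    var probeA : LightProbe = new LightProbe(diffuseCubeA, specularCubeA);
    var probeB : LightProbe = new LightProbe(diffuseCubeB, specularCubeB);
    var pointLight : PointLight = new PointLight();

    material.lightPicker = new StaticLightPicker([pointLight, probeA, probeB]);

    // diffuse global lighting from the probes, specular highlights from the point light
    material.diffuseLightSources = LightSources.PROBES;
    material.specularLightSources = LightSources.LIGHTS;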


Pre-process step

A downside of this approach compared to conventional light sources is that it requires a pre-process step to generate the cube maps. To start with, you need to generate (HDR) cube map renders for every point where you’ll want to put a light probe. From these, two processing steps can be applied: a diffuse convolution and a specular convolution, depending on which type of lighting the probe needs to support. These can be generated in the eminent Paul Debevec’s HDRShop. I’ll warn you, however, that these convolutions can be very time-consuming – but the results are very accurate. If you’re willing to sacrifice some accuracy and get your hands dirty, you can write your own “convolutor”. Be warned, however, that Flash doesn’t support HDR in BitmapData or textures, so if you’re using BitmapData and Stage3D, you’ll end up with under-lit results because “bright” light gets clipped by the low dynamic range. Having said that, I’ll quickly outline the principles behind it, just in case it interests anyone.

Lighting functions are spherical functions (they map the unit sphere to scalar values), so for every surface normal (or view reflection vector) you’ll need to integrate the incoming light, weighted by the diffuse or specular lighting function, over the unit sphere of incident directions. In fact, integrating over the hemisphere is usually enough, since in our case there’s no light coming from inside objects. But as I was lazy and it was much easier to just implement it for the entire sphere, I’ll use that approach ;) For basic (Lambertian) diffuse reflections, this semi-formally boils down to:

D(N) = ∫_S max(N · ω_i, 0) L(ω_i) dω_i

Where N is the surface normal, ω_i is the incident light vector, and L(ω_i) is the light colour coming from the direction ω_i. The part max(N · ω_i, 0) L(ω_i) is in fact just the same as the diffuse lighting calculation done for point and directional lights (where L is a constant). In this case, L(ω_i) is simply a sample from the HDR cube map we rendered before. The integral just means that we’re taking the diffuse contributions for all directions (i.e. all vectors ω_i in the unit sphere S) and accumulating them. Finally, to be able to look up the incoming light for any normal, we need to perform this calculation for every texel in the destination cube map (remember, cube map coordinates are simply 3D vectors, in this case representing the surface normal N).

In theory, the integral means we’d be summing an infinite number of infinitesimal samples. In practice, we’ll solve it in discrete steps. That’s okay, since we only have a finite number of samples to work with anyway; 6*n² pixels to be exact, n being the size of the HDR cube map. So, the most accurate way of doing things would be to add up all the contributions from the source map, and do that for each pixel in the destination map. If we’re rendering to a cube map of size m, that means on the order of 6*m² * 6*n² = 36m²n² samples (6 faces of m*m, each pixel sampling 6 faces of n*n). This can easily become a very expensive operation. Luckily, there are numerical methods to solve this faster (at the cost of some accuracy, but what did you expect?). The most useful in this case is definitely Monte Carlo integration, which allows fewer samples from the source map per pixel. There are many articles written about the subject, and a great introduction is found in this article by Robin Green.
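
To make the brute-force version a bit more concrete: assuming you’ve already unpacked the source cube map into flat lists of sample directions, colours and solid angles (hypothetical helpers, and keep the LDR clipping issue from earlier in mind), the per-texel sum looks roughly like this. I’ve thrown in a 1/π normalisation so a constant white environment stays white:

    // Brute-force diffuse convolution for one destination texel.
    // "normal" is the direction the destination cube map texel represents;
    // sampleDirections[i] is the (unit) direction of source texel i,
    // sampleColours[i] its RGB radiance, solidAngles[i] the solid angle it covers.
    function convolveDiffuse(normal : Vector3D,
                             sampleDirections : Vector.<Vector3D>,
                             sampleColours : Vector.<Vector3D>,
                             solidAngles : Vector.<Number>) : Vector3D
    {
        var result : Vector3D = new Vector3D();
        var len : uint = sampleDirections.length;

        for (var i : uint = 0; i < len; ++i) {
            // same term as for a directional light: max(N.w_i, 0) * L(w_i), weighted by dw
            var w : Number = normal.dotProduct(sampleDirections[i]);
            if (w <= 0) continue;
            w *= solidAngles[i];
            result.x += w*sampleColours[i].x;
            result.y += w*sampleColours[i].y;
            result.z += w*sampleColours[i].z;
        }

        // 1/pi normalisation: a constant environment keeps its original brightness
        result.scaleBy(1/Math.PI);
        return result;
    }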

Enough to get you cracking, I think ;)


Demo

See the Cornell Boxness of it all!

Use the arrow keys to move the head, space bar to toggle the texture (shows the lighting impact), and finally click and drag to rotate the head.

The head model and textures are still by Lee Perry-Smith (who by now must feel pretty awkward appearing in random tech demos). Source for the demo can be found in the texture-refactor branch of the examples repo (which will be merged to master together with the dev branch).

The environment maps were generated with a custom tool, as outlined above, and the probes were placed close to each corner of the box, at the points where the maps were generated. I was using Stage3D (and thus a low dynamic range) to generate them, so I had to do some touch-ups in Photoshop to get them to look right. I’m not done investigating the options for further development of such map generation tools, but who knows, they may be made public some day. In any case, HDRShop is more reliable! Diffuse lighting is set to use the light probes only, while specular lighting uses the point light only.

The demo also shows how to use point light shadows, rim lighting and light mapping for added static shading.

8 thoughts on “Convolution Light Probes in Away3D 4.0”

  1. Eco: Yep, it’s in the Away3D 4.0 version, which is FP11 only.

    Sebastiano: You could change the light *positions* in real time, but generating the convolved maps in real time would be too slow, so the moved light probes would end up giving incorrect lighting. However, I suppose some dynamic effects could be faked that way. And in some cases, perhaps downsampling a real-time rendered cube map could substitute for a convolved map; but don’t get your hopes up ;)

    Cheers,
    David



  2. Are these lightmaps really encoded as cubemaps, or are they spherical harmonics encoded vector data?

  3. Chris: They’re processed (convolved) cube maps – no spherical harmonics. The article explains how they’re made.
