Away3D 4.1 (dev) Dynamic Reflections

One of the features we considered important for the next release of the Away3D engine (4.1) was real-time dynamic reflections, allowing for more realism and precision than the common static environment maps. In the dev branch of the engine, you can now find two flavours: reflections based on dynamic environment maps, and planar reflections.

Dynamic Environment Maps

This technique simply uses cube maps that are rendered to on the fly. Usually, they’re used for non-planar surfaces, and while they can look convincing enough for complex models, they suffer from the same flaws as normal environment maps. Since the scene is rendered for each face of the cube from a single point, the calculated reflections are obviously only correct for that single point in space, but for relatively small and complex objects the approximation can look convincing enough. However, since the scene is rendered 6 times, it can be slow in more complex situations.

The necessary functionality is exposed through CubeReflectionTexture, a class that can be used wherever a CubeTextureBase is expected: EnvMapMethod and variations, or even as a Skybox texture. I have yet to come up with a use for the latter case, though ;) To get the best results, it’s usually a good idea to set the CubeReflectionTexture’s position to the centre of your reflective object. The cube map will be generated from this point and on average will yield the best results for all other points.
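A minimal set-up sketch, assuming the dev-branch API: the constructor argument, the render call, and the EnvMapMethod alpha value shown here are assumptions on my part, so check the class source for the exact signatures.

```actionscript
// Hypothetical usage sketch; exact signatures may differ in the dev branch.
var reflectionTexture : CubeReflectionTexture = new CubeReflectionTexture(256);

// Generate the cube map from the centre of the reflective object,
// where the reflection is on average most correct.
reflectionTexture.position = reflectiveMesh.position;

// Use it wherever a CubeTextureBase is expected, e.g. EnvMapMethod.
var material : TextureMaterial = new TextureMaterial(diffuseTexture);
material.addMethod(new EnvMapMethod(reflectionTexture, .5));
reflectiveMesh.material = material;

// Every frame, render the six cube faces before the main render.
reflectionTexture.render(view);
```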

Check out the demo.
Source in the “dev” examples repository

Planar Reflections

For planar surfaces, a much cheaper and very precise approach can be used. This means flat mirrors, polished floors, water surfaces and the like, which are quite common in games, can be rendered much more efficiently. Since the rules for reflection are the same across the entire surface, we can simply render the scene from a mirrored camera perspective. The only thing we need to make sure of is that objects (or parts of objects) behind the mirror aren’t rendered this way. In OpenGL or DirectX, you’d simply introduce a user-defined clip plane. Flash, however, doesn’t support anything of the sort. Instead, the projection matrix needs to be adapted so that the near plane becomes oblique, aligned with the mirror plane, effectively clipping any straddling geometry along the mirror. Unfortunately, this also wreaks havoc on the far plane, which, being at a different angle, will start cutting geometry that should be in the mirrored view. Eric Lengyel describes the issue and cleverly solves it in his Oblique View Frustum paper. (And while I’m at it, his book is awesome too.)
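For the curious, the gist of Lengyel’s trick can be sketched like this. This is an illustrative sketch only, not the engine’s actual ObliqueNearPlaneLens code; it assumes a row-major projection matrix and an OpenGL-style clip volume with z/w in [-1, 1] (Stage3D’s [0, 1] depth range needs the Direct3D variant from the paper).

```actionscript
// Illustrative sketch of the oblique near-plane modification, not engine code.
// m: row-major 4x4 projection matrix as 16 Numbers (element (i, j) at m[i*4 + j]).
// (a, b, c, d): the mirror plane in camera space, with the camera on its negative side.
function makeOblique(m : Vector.<Number>, a : Number, b : Number, c : Number, d : Number) : void
{
	var sgnA : Number = a > 0 ? 1 : (a < 0 ? -1 : 0);
	var sgnB : Number = b > 0 ? 1 : (b < 0 ? -1 : 0);

	// Q: the camera-space frustum corner opposite the clip plane.
	var qx : Number = (sgnA + m[2]) / m[0];
	var qy : Number = (sgnB + m[6]) / m[5];
	var qz : Number = -1;
	var qw : Number = (1 + m[10]) / m[11];

	// Scale the plane so the new far plane contains Q, minimising depth damage.
	var s : Number = 2 / (a * qx + b * qy + c * qz + d * qw);

	// Replace the third row: the near plane now coincides with the mirror plane.
	m[8] = a * s;
	m[9] = b * s;
	m[10] = c * s + 1;
	m[11] = d * s;
}
```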

The texture target is provided as PlanarReflectionTexture. Similar to the cube map texture, it needs some information about where its reflective surface is. In this case, that’s the plane property, which references a Plane3D object. Furthermore, it has a scale property that lets you define how much the texture should be scaled down, to trade quality for rendering speed. Due to the different math and texture types involved, PlanarReflectionTexture can only be used with specific material methods. Currently, these are PlanarReflectionMethod and FresnelPlanarReflectionMethod. Except for the internals and texture type, these function pretty much identically to their EnvMap counterparts.
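A minimal sketch for a reflective floor, again assuming the dev-branch API: the render call and the default constructor are assumptions, so check the class source for the exact signatures.

```actionscript
// Hypothetical usage sketch; exact signatures may differ in the dev branch.
var reflectionTexture : PlanarReflectionTexture = new PlanarReflectionTexture();

// The reflective surface: here a floor plane at y = 0 with normal (0, 1, 0).
reflectionTexture.plane = new Plane3D(0, 1, 0, 0);

// Render the reflection at half resolution to trade quality for speed.
reflectionTexture.scale = .5;

// Only the planar reflection methods accept this texture type.
var floorMaterial : TextureMaterial = new TextureMaterial(floorDiffuse);
floorMaterial.addMethod(new PlanarReflectionMethod(reflectionTexture));
floorMesh.material = floorMaterial;

// Every frame, render the mirrored view before the main render.
reflectionTexture.render(view);
```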

Check out the demo
Source in the “dev” examples repository.

In closing, this is of course only in the dev branch, which means it’s still subject to change!

15 thoughts on “Away3D 4.1 (dev) Dynamic Reflections”

  1. Pingback: full reflection « 3dflashlo

  2. So, you used ObliqueNearPlaneLens to clip the geometry in Away3D? Or must the plane be specified in PlanarReflectionTexture, with the method merely killing geometry that lies behind the supplied Plane3D?

  3. Glidias: The ObliqueNearPlaneLens is indeed used for this purpose internally, but it all happens behind the scenes. If you set the plane in PlanarReflectionTexture, everything should be set to go.

  4. I created a plane on the floor with my 3d object standing on it. I think I did everything as in the example, but…

    it shows a reflection, but a rather odd one. The object in the floor mirror is not upside down, as it should be when mirrored from below; it’s not mirrored at all. It’s just like a copy of the original object lying beneath it.
    Also, the reflection is drawn with some strange graphic glitches. I’m not quite sure, but it looks to me as if the object is drawn from within (you see only the textures from the inside and not the outside).

    Has this error occurred to anyone so far?

  5. I’ve got a question. If you switched lenses on the fly behind the scenes, to an ObliqueLens for certain objects that need clipping and back to a PerspectiveLens for other objects, would they write different z-buffer values at different precision, and thus run the risk of not depth sorting correctly against each other?

    If I need z-buffer accuracy, can I use a vertex shader with 2 matrices, the oblique and the perspective one, such that if the oblique-transformed z value is within the visible plane range, the perspective matrix can then be used to retrieve the accurate perspective z value to use instead? Likewise, what happens to far clipping? What if I still need perspective-correct far clipping (the exact same far clipping distance value) to be considered? An accurate perspective clip-space z value can be determined separately with a dp4 operation (instead of a full m44 op) on the 3rd row vector of the perspective matrix, right?

    I tried to implement your code in Alternativa but the depth order is reversed unless I swap the near and far plane values.

  6. Glidias: The clipping is in fact a direct result of the altered depth values, so yes, the depth values will be incorrect with respect to those rendered with a normal PerspectiveLens.
    Also, you can’t set your own depth value based on a second projection matrix, as Stage3D doesn’t allow fragment shaders to output fragment depth. The clip-space vertex positions used for clipping are the same values written to the depth buffer.

  7. Yeah, after writing the post I realised you can’t just swap z values in the vertex shader, since they are linearly interpolated and fragment depth control isn’t available in AGAL (and across various hardware). So such a method is only useful for rendering reflected images of small regions that don’t need much z-buffer precision, right? The real R2 bot still uses the regular z-buffer, relying on the mirror plane to have it clipped, correct? Only the reflected image needs to use the oblique lens. Thus using kil in a fragment shader is still a valid option for the mirrored object image, but that may have performance implications and is material dependent.

  8. Pingback: Reconstructing positions from the depth buffer | Der Schmale - David Lenaerts's blog
