Sunday, October 23, 2011

Maya - How to manually generate a custom depth pass

There are several methods of generating depth passes in Maya; this tutorial illustrates how to generate one primarily with utility nodes. The technique works with both the Maya Software and mental ray renderers. A true depth buffer test doesn't consider material properties; it considers only geometry and samples each pixel only once. In some cases, however, you may want material properties (such as transparency) to be respected in your depth pass. Note that this pass will require its own render layer or, if you prefer, a separate file. Along the way, you'll learn how to set appropriate render settings, all the way through applying the pass to drive depth-of-field blurring in Nuke or another compositing application.

Before starting, set the output to a high dynamic range format, such as 32-bit floating point: set your output file type to HDR or EXR in the Common tab of the Render Settings. A 16-bit unsigned integer format, such as SGI16, is okay (it gives 2^16 = 65,536 depth values), but floating-point files (even 16-bit half-precision float) give millions of potential grayscale values, which is vastly superior. To set up the framebuffer in mental ray, again in the Render Settings dialog, go to the Quality tab and scroll down to the Framebuffer section; there, change the Data Type to RGBA (Float) 4x32 Bit, and remember that the output file type must be compatible with the framebuffer data type. If you render a depth pass with the default 8-bit framebuffer (saving out TGAs, for example), you'll only have 256 levels of depth in your image, which is nowhere near enough information for a quality depth pass.

Now generate a very simple scene composed of some primitive objects, or load up one of your own scenes. Let's create the necessary nodes for the custom depth pass this tutorial is all about: in the Hypershade's Create Bar, create a samplerInfo, multiplyDivide, setRange, Ramp, and Surface Shader. The place2dTexture1 node that Maya automatically generates for the Ramp makes no difference here, so it may be deleted. Then turn your attention to the Sampler Info node's attributes, shown below.
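
If you prefer scripting, the same nodes can be created with a few MEL commands. This is just a sketch using Maya's default auto-generated names (samplerInfo1, multiplyDivide1, and so on), which the rest of this tutorial assumes; creating the ramp this way typically skips the automatic place2dTexture, which is fine here.

// Create the nodes for the custom depth pass network (MEL).
shadingNode -asUtility samplerInfo;
shadingNode -asUtility multiplyDivide;
shadingNode -asUtility setRange;
shadingNode -asTexture ramp;
shadingNode -asShader surfaceShader;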

The Sampler Info utility node gives the user access to certain rendering data for use in a shading network. Note that this data is only available during the rendering process and is generated per pixel, so whatever effect you're attempting to achieve will not be visible until render time; you won't see an accurate representation of the result in the viewports.

Now onto a few interesting attributes. Flipped Normal produces a 0 or 1 (a boolean), per pixel, depending on which side of a polygon the camera sees; this also applies to NURBS geometry, since those mathematically defined curves and surfaces are only approximated as triangles by raster-based rendering software. Combined with a Condition node, the Flipped Normal attribute can drive a two-sided material. Facing Ratio is also a highly interesting attribute: based on the angle at which the camera views a pixel, from 0 to 90 degrees, a floating-point number from 0 to 1 is generated. The result, when connected to a Surface Shader's outColor attribute and applied to scene geometry, is an image with many uses, such as helping to introduce light wrapping, creating velvet-like effects, or even interactively and approximately adjusting the BRDF (Bidirectional Reflectance Distribution Function) of a reflection pass at the compositing stage (by using a matte to isolate the object and applying an exponent-style operation that retains the high and low values, such as gamma, to the Facing Ratio pass). Of course, all of these attributes may also be used within the rendering process itself to affect the shading of surfaces. These are just a few examples of how useful the Sampler Info node is.

Now let's look at the attribute that will be used in this tutorial: Point Camera. As the name suggests, it returns each pixel's position relative to the camera as floating-point coordinates. We're interested in the third one, the Z coordinate, which is effectively the Z depth of each pixel. No values in the Sampler Info node need to be adjusted, since all of this data is generated at render time.

Simply connect the samplerInfo's pointCameraZ to the multiplyDivide's Input1X using the Connection Editor. This may also be accomplished with a single-line MEL command:

connectAttr samplerInfo1.pz multiplyDivide1.i1x;  // pz = pointCameraZ, i1x = input1X (short attribute names)

Also, set the multiplyDivide node's Input2X to -1. The multiplyDivide node is in this network because the samplerInfo's Point Camera Z comes back as a negative number (the camera looks down its negative Z axis); multiplying by -1 flips it to a positive number, which keeps the rest of the network more intuitive.
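
In MEL, again assuming the default node name multiplyDivide1, that's simply:

setAttr multiplyDivide1.input2X -1;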

Now let's take a look at the Set Range attributes. Set Range takes an input value and linearly remaps it to a newly defined range: Value is the incoming floating-point connection, and Min and Max are the new minimum and maximum that the Old Min and Old Max are mapped to. For example, an incoming 0-1 value could easily be remapped to 0-15 with the Set Range utility node. In this tutorial, the node will remap the range of world units coming from the Sampler Info node to a range of 0 to 1, so the result can drive a ramp that assigns a grayscale color value to each pixel in relation to its Z depth from the camera. First, connect the multiplyDivide's outputX to the setRange's valueX. Next, let's derive the maximum Z-depth distance from the camera's Far Clip Plane. If you want this to be automated, simply type the following in the Old Max X field of the Attribute Editor:

=cameraShape1.farClipPlane

Replace cameraShape1 with whichever camera's shape node you're deriving the value from, such as perspShape. Since this is a single-line MEL expression, you don't need to include a terminator (;) at the end. The attribute field will turn purple, indicating an incoming expression connection, and it will update automatically as you change the camera's Far Clip Plane attribute. Of course, feel free to connect the attributes with the Connection Editor instead, or simply type the value of the Far Clip Plane attribute, located on the camera's shape node, into the Old Max X field. Leave the Old Min at 0. Using the world grid along with the camera's own view as a guide, decide on the smallest value necessary for the camera's Far Clip Plane; by default, cameras in Maya 2011 are created with a Far Clip Plane of 10000, so you'll likely want to bring it within a reasonable range of the scene's depth. So what about the Min and Max values? Simply set Min X to 0 and Max X to 1. I'll explain what's going on throughout the network once it's set up, so it will make more sense later on.
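
For reference, the same connections and values can be set with MEL. The node and camera names below are the defaults assumed throughout this tutorial, and the farClipPlane line makes a direct connection rather than an expression, which accomplishes the same thing:

connectAttr multiplyDivide1.outputX setRange1.valueX;
connectAttr cameraShape1.farClipPlane setRange1.oldMaxX;  // or use the expression shown above
setAttr setRange1.oldMinX 0;
setAttr setRange1.minX 0;
setAttr setRange1.maxX 1;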

Connect the outValueX of the setRange node into the vCoord (located in the uvCoord double attribute) of the Ramp node. By default a Ramp's colors are measured along the V coordinate, and though the custom depth pass won't rely on UV coordinates at all, it's important that the right coordinate connection is made (vCoord for a V-type Ramp) to produce the proper result. Remove the middle color swatch of the ramp, set the Selected Color at the bottom of the Ramp to black (RGB 0,0,0), and set the Selected Color at the top of the ramp to a value of 5.000 (RGB 5,5,5). The ramp extends into a high dynamic range, so you won't see a smooth gradient after the ramp reaches 1.0, about 1/5 of the way up, since the Interpolation is set to Linear. Leave it as is; this is exactly what you'll want. The reason 0 to 5 is used instead of 0 to 1 is that the ZBlur node in Nuke has a slider that goes from 0 to 5, and this maps perfectly onto that slider. Now simply connect the Ramp's outColor to the Surface Shader's outColor attribute and apply the shader to the geometry in your scene. Set the camera's Background Color to white (a value of 1 is fine) and render. You probably won't see what you expect; this is because the ramp's value of 5 pushes the colors out of displayable range. Feel free to change the ramp's color value from 5 to 1 for testing purposes, or keep it at 1 if you'll be compositing in After Effects or most other programs; however, switching back to 5 for the final render will ease your time with the ZBlur node in Nuke. If you're using the HDR Render View in Maya 2011 and newer (enabled by selecting 32-bit floating-point under the Display menu of the Render View window), you'll be able to adjust the exposure and view the out-of-displayable-range values generated by the Ramp node, but it isn't necessary.
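
The ramp side of the network can be scripted as well. This is only a sketch: the colorEntryList indices below assume the ramp's default three entries (index 0 at the bottom, 1 in the middle, 2 at the top), so check the Attribute Editor if your entries sit at different indices, and hyperShade -assign operates on whatever geometry is currently selected:

connectAttr setRange1.outValueX ramp1.vCoord;
setAttr ramp1.interpolation 1;  // 1 = Linear
removeMultiInstance -break true ramp1.colorEntryList[1];  // delete the middle swatch
setAttr ramp1.colorEntryList[0].position 0;
setAttr ramp1.colorEntryList[0].color -type double3 0 0 0;  // black at the bottom
setAttr ramp1.colorEntryList[2].position 1;
setAttr ramp1.colorEntryList[2].color -type double3 5 5 5;  // HDR value of 5 at the top
connectAttr ramp1.outColor surfaceShader1.outColor;
hyperShade -assign surfaceShader1;  // assigns the shader to the selected geometry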

If you're sure you're doing everything right and there's still a problem with the render, make sure the camera's scale is 1, 1, 1; the Z-value distance scales along with the camera, and some renderers, like mental ray, take this into account (the Maya Software renderer doesn't). If your camera must stay at whatever scale it's at (say, 8, 8, 8), then compensate for it in the multiplyDivide node; for example, to compensate for a camera scale of 8, 8, 8, type -8 into multiplyDivide1.input2X. If you don't want to deal with a scaled camera, simply Parent Constrain a new camera to the already animated one; match the initial translation and rotation first, then parent constrain (don't parent, as the new camera would just inherit the parent camera's world-unit scaling), and use the new camera for this custom depth pass.

The entire network resulting in a custom depth pass.
Okay, let's go over what this network does by following one pixel's Z value through it at render time. Say a particular pixel's pointCameraZ value is -20 at render time; the multiplyDivide node makes it positive. Next, 20 enters the setRange's valueX. oldMaxX is controlled by the camera's Far Clip Plane, and whatever value falls between 0 and oldMaxX is remapped to the 0-1 range. In this example my camera's Far Clip Plane is set to 50, so 20 becomes 0.4. That 0.4 then enters the ramp node and samples the 0.4 position of the ramp. Since the ramp's color runs from 0 to 5, the pixel picks up a value of 2.0, sampled at the 0.4 position on the ramp. In the rendered image the pixel will appear "white" on your monitor, but in actuality it's twice as bright as white and not displayable; this extra data is only viewable with exposure controls or tone mapping, but seeing it doesn't matter, because in Nuke you'll be able to harness it.
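
If it helps, here's a tiny MEL sketch that mirrors the network's math for that one pixel. It's only for sanity-checking values by hand (the real work happens in the shading network at render time), and the numbers are just the example values above:

float $pointCameraZ = -20.0;  // raw samplerInfo pointCameraZ at render time
float $farClip = 50.0;        // camera's Far Clip Plane (the setRange oldMaxX)
float $rampTop = 5.0;         // value of the ramp's top color entry
float $normalized = (-1 * $pointCameraZ) / $farClip;  // multiplyDivide then setRange: 20 becomes 0.4
float $pixelValue = $normalized * $rampTop;           // linear ramp sample: 0.4 becomes 2.0
print ("depth pixel value: " + $pixelValue + "\n");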

The ramp's color entry at the 1.0 position has a value of 1.0, for ease of viewing.

In the image on the right, you'll see the ramp's color entry at position 1.0 set to a value of 1.0. Remember to set it to 5.0 for the final render if you want the best experience in Nuke.

Now, a few words about anti-aliasing, which is usually needed when a raster-based screen represents a resolution-independent set of data (such as a 3D computer graphics scene). A depth buffer is defined with one sample per pixel, which means no anti-aliasing at all. If you introduce anti-aliasing into a depth map, you'll quickly get edge artifacts in your depth-of-field blur: an anti-aliased edge pixel averages the depths of the foreground and background surfaces, producing an in-between distance that belongs to neither, which the blur then treats as real depth. To achieve an aliased (one sample per pixel) render in mental ray, open the Render Settings dialog and, in the Quality tab under Raytrace/Scanline Quality, change the Sampling Mode to either "Fixed Sampling" or "Custom Sampling"; either one lets you set the Min Sample Level and Max Sample Level to 0. If you're setting this up on a render layer, be sure to apply this change as a Render Layer Override. Also, make sure you don't have any lens or environment shaders interfering with your depth pass render layer; use render layer overrides to disable, and if necessary break, connections to such shaders and settings on a per-layer basis. Now let's take the rendered image and the custom camera depth render into Nuke.
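
If you'd rather script the sampling override, something along these lines should work. The miDefaultOptions attribute names are my assumption from Maya 2011-era mental ray and may differ between versions, so verify them (for example with listAttr miDefaultOptions) before relying on this:

// On the depth render layer, override mental ray sampling to one sample per pixel.
editRenderLayerAdjustment miDefaultOptions.minSamples;
editRenderLayerAdjustment miDefaultOptions.maxSamples;
setAttr miDefaultOptions.minSamples 0;
setAttr miDefaultOptions.maxSamples 0;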

It doesn't matter how you apply the depth pass; it could be used as a mask in a ZBlur node, but I'll go with the channel workflow Nuke offers. First select the Read node of the depth pass image, then the Read node of the main image. Now press K, which inserts a Copy node into the tree, connected according to your selection order. Nuke offers a depth.Z channel by default, so we won't be creating a new channel; copy the depth pass's rgba.red (any color channel will do) into depth.Z on the Copy node's B input.

Depth pass color copied to the start of a very basic composite path.
Now you're ready to use the ZBlur node. Select the Copy1 node and press Tab. Type "ZBlur" and press Enter. Click the image to the left to see the settings of the ZBlur node.

I'll quickly go over this node in the order of its attributes. The channels control sets which channels the operation is performed on (all is the default), and the depth channel control sets which channel is used as the depth multiplier/mask (depth.Z is the default). There are a few mathematical interpretations you can use if your depth map isn't typical, though the default, depth, will work fine for this setup. Hover your mouse over the math drop-down for a detailed tool-tip list of what each mode does if you want to try another.


Begin by clicking focal-plane setup. This gives a simple, easy-to-understand view of how much blurring will be performed on your image. Adjust the focus plane parameter and watch the line of focus change: red is near, blue is far, and the dark gradient toward the mid-line is where focus will be. The depth-of-field parameter is also available to adjust and shows up as green, designating the total in-focus area with no blurring. However, it's recommended to keep depth-of-field at 0 (which is consistent with real camera lenses) unless you have a specific reason to adjust it; I'm doing so here just for this demonstration. When you're finished, toggle focal-plane setup off and watch the viewer redraw the image, complete with blurring beyond the near and far range of focus. You might want to adjust the size and maximum parameters for further control of the blur as well, keeping in mind photographic concepts such as circle of confusion, f-stop, and so on, to mimic the way a real camera would behave; hover your mouse over the parameters to see their tool-tips. Filter shape at 0 (the default) gives a Gaussian blur, while at 1 it gives a disc blur. Another really important parameter to be aware of is the occlusion toggle: when it's on, farther objects won't blur over ones closer to the camera, based on the math setting. This is a much more accurate (and of course slower) way of computing the blur, but it's worth it, especially if the blur in your image is going to be significant.


Multi-pass rendering in Maya has made manual methods like the one presented here almost obsolete, but the trick is good to know for situations where the more automated depth-pass tools don't give you the option to, for example, respect transparency or certain other material properties. In such a case, you might want to use a setup like this and edit each material of the scene on a new render layer, or write a script to do it. For example, to respect transparency in the mia_material, disable all lights (or simply don't add them to the render layer), plug the depth network's ramp texture into the material's Additional Color attribute, and set Reflectivity to 0. If all your reflections are raytraced, a simpler option is to disable raytracing in the renderer rather than zeroing Reflectivity on every material; in mental ray this is done by unchecking Raytracing in the Features tab of the Render Settings window. A MEL sketch of that material edit follows, and below it is an example of the custom depth pass respecting material transparency:
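
This is only a sketch, assuming a hypothetical material named mia_material1 and the ramp from the depth network; additional_color and reflectivity are the mia_material attribute names as I recall them, so double-check them in the Attribute Editor:

connectAttr -force ramp1.outColor mia_material1.additional_color;  // depth ramp drives the material's Additional Color
setAttr mia_material1.reflectivity 0;                               // kill reflections on the depth layer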

Custom depth respecting material transparency. No lights in the render layer; notice the red diffuse color has no effect.
Regardless of how you use it, the workflow presented here will give you more control in the creation of a depth pass for use in adding depth-of-field blurring and other effects in the compositing stage.

9 comments:

  1. You are an excellent teacher; I would like to view some of your videos on teaching Maya.
    Thanks.

    get a facebook fanpage

  2. This is a great guide, thanks for taking the time to write it. I'm making a short film using only the depth pass, so this tutorial is perfect. Don't suppose you have any idea if rendering smoke/cloud effects is possible with this setup? I have a sinking feeling that it's impossible :-(

    Replies
    1. I have very little knowledge of Maya's dynamics, though I'll be getting into those sorts of things eventually. With that in mind, to answer your question, I'd imagine there's a way (for example with sprite particles) to simply plug this shader into the color aspect of it (for each ramp position on the lifespan). Maybe particle opacity could be the alpha channel and depth could be the color, and you might have to render separately and then combine these in compositing into a single RGBA frame. The Maya Software renderer has some settings for depth, but you'd probably want to use Mental Ray and 32-bit floating point frame buffers to capture depth information. I'm confident that there are many methods to do what you're asking, but what I've just said here in this comment would possibly be one way, even though it's not elegant and would require manual setup.

  3. This comment has been removed by the author.

  4. Thank you for making the time to put this tutorial together ... I'm exploring depth passes and this has by far, been the most helpful tutorial.

    In the Multi-Pass Rendering paragraph, you make no mention of the "Cutout Opacity" connection that is displayed in the screenshot. What did you plug into this node slot?

    In the interim, I have created an "mia_material_x_passes", applied the Glass preset, manually adjusted the value of Cutout Opacity to .500, and enabled Propagate Alpha in the material. I did not apply the depth "Surface Shader" to the glass objects in the render layer; the remaining objects have the depth shader, which leads me to ask ...

    I'm assuming that the rendered layer pass serves as the Depth pass for my compositing tool (ex: After Effects, Nuke, etc.) - i.e. the Camera Depth Remapped MR pass isn't necessary. Is my assumption correct?

    Cheers,

    hgagne
    hilaire[dot]gagne[at]gmail.com

  5. Thanks for the awesomeness!
