Before starting, you should set the output to a high-dynamic-range format, such as 32-bit floating point: set your output file type to HDR or EXR in the Common tab of the Render Settings. A 16-bit unsigned integer format such as SGI16 is acceptable (it gives 2^16 = 65,536 depth values), but floating-point files (even 16-bit half-precision float) give millions of potential grayscale values, which is vastly superior. In Mental Ray, to set up the framebuffer, again in the Render Settings dialog, go to the Quality tab and scroll down to the Framebuffer tab at the bottom. There, change the Data Type to RGBA (Float) 4x32 Bit; remember that the output file type must be compatible with the framebuffer data type. If you render a depth pass with the default 8-bit framebuffer (saving out TGAs, for example), you'll only have 256 levels of depth in your image, which is nowhere near enough information for a quality depth pass.
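As a quick sanity check on those numbers, a short Python sketch of how bit depth translates into distinct depth levels:

```python
# Count of distinct levels an unsigned-integer image format can store.
def depth_levels(bits):
    return 2 ** bits

print(depth_levels(8))   # 256 -- the default 8-bit framebuffer
print(depth_levels(16))  # 65536 -- SGI16 and similar 16-bit integer formats

# Floating-point formats aren't limited to a fixed ladder of levels:
# even a 16-bit half float spreads thousands of values across each
# power-of-two interval, and 32-bit float gives millions in [0, 1].
```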
Now generate a very simple scene composed of some primitive objects, or load one of your own scenes. Let's create the nodes for the custom depth pass this tutorial is all about: in the Hypershade, within the Create Bar, create a samplerInfo, multiplyDivide, setRange, Ramp, and a Surface Shader. The place2dTexture1 node that is automatically generated for the Ramp won't make a difference here, so it may be deleted. Turn your attention to the Sampler Info node's attributes, shown below.
Bidirectional Reflectance Distribution Function) of a reflection pass in the compositing stage (achieved by using a matte to isolate the object, and using an exponential operation on the Facing Ratio pass that retains the high and low values, such as gamma). Of course, all these attributes may also be used within the rendering process itself to affect the shading of surfaces. These are just a few examples of how useful the Sampler Info node is. Now let's look at the attribute this tutorial uses: Point Camera. As the name suggests, it returns each sampled point's location relative to the camera as floating-point coordinates. We're interested in the Z coordinate, the third component of Point Camera: it is effectively the Z depth of each pixel. No values in the Sampler Info node need to be adjusted, since all this data is generated at render time.
Simply connect the samplerInfo's pointCameraZ to the multiplyDivide's Input1X using the Connection Editor. This may also be accomplished with a single-line MEL command:
connectAttr samplerInfo1.pz multiplyDivide1.i1x
Also, set the multiplyDivide node's Input2X to -1. The multiplyDivide node is in this network because the samplerInfo's Point Camera Z is returned as a negative number; multiplying it by -1 makes it positive, which keeps the rest of the network more intuitive.
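The arithmetic the node performs is trivial, but a sketch makes the sign convention concrete (the -25.0 here is a hypothetical value for a point 25 units in front of the camera; Maya cameras look down the negative Z axis):

```python
point_camera_z = -25.0   # samplerInfo1.pointCameraZ for a point 25 units away
input2x = -1.0           # the constant we set on multiplyDivide1.input2X

# multiplyDivide (in its default Multiply operation) outputs input1 * input2:
output_x = point_camera_z * input2x   # multiplyDivide1.outputX
print(output_x)          # 25.0 -- a positive distance from the camera
```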
Replace cameraShape1 with the shape node of whatever camera you're deriving the value from, such as perspShape. Since this is a single-line MEL expression, you don't need to include a terminator (;) at the end. The attribute field will turn purple, indicating an incoming expression connection, and it will update automatically as you change the camera's Far Clip Plane attribute. Alternatively, connect the attributes using the Connection Editor, or simply type the value of the Far Clip Plane attribute, located on the camera's shape node, into the Old Max X field. Leave Old Min at 0. Using the world grid along with the camera's view itself as a guide, decide on the smallest value necessary for the camera's Far Clip Plane. By default, cameras in Maya 2011 are created with a Far Clip Plane of 10000, so you'll probably want to bring it within a reasonable range of the scene's depth. As for the new range, simply set Min X to 0 and Max X to 1. I'll explain what's going on throughout the network once it's set up, so it will make more sense later on.
If you're sure you're doing it right and there's still a problem with the render, make sure the camera's scale is 1, 1, 1; the Z value scales along with the camera, and some renderers, such as Mental Ray, take this into account (the Maya software renderer doesn't). If your camera must stay at whatever scale it's at (say, 8, 8, 8), compensate for it in the multiplyDivide node: to compensate for a camera scale of 8, 8, 8, type -8 into multiplyDivide1.input2X. If you don't want to deal with a scaled camera, simply Parent Constrain a new camera to the already animated one; match the initial translation and rotation first, then parent constrain (don't parent, as the new camera would just inherit the parent camera's world unit scaling), and use the new camera for this custom depth pass.
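The compensation can be sketched numerically, assuming (as described above) that a scaled camera divides the reported camera-space Z by its scale; the distances here are hypothetical:

```python
camera_scale = 8.0       # uniform camera scale from the example above
true_distance = 25.0     # actual world-space distance to the sampled point

# With a renderer that honours camera scale (e.g. Mental Ray), the reported
# camera-space Z comes back divided by the scale, and negative as usual:
reported_z = -true_distance / camera_scale   # -3.125

# Setting multiplyDivide1.input2X to -camera_scale (-8 here) both negates
# the value and undoes the scaling in one multiply:
compensated = reported_z * -camera_scale
print(compensated)       # 25.0 -- the true distance recovered
```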
|The entire network resulting in a custom depth pass.|
|The ramp's color entry at 1.0 position is of value 1.0, for ease of viewing.|
Now, a few words about anti-aliasing, which is usually needed when a raster-based screen represents a resolution-independent set of data (such as a 3D computer graphics scene). A depth buffer is defined with one sample per pixel, meaning no anti-aliasing at all. If you introduce anti-aliasing into a depth map, you'll quickly get edge artifacts in your depth-of-field blur; if you take a moment to think about what happens when pixels are anti-aliased, you'll understand why. To achieve an aliased (one sample per pixel) render in Mental Ray, open the Render Settings dialog and, in the Quality tab, under Raytrace/Scanline quality, change the Sampling Mode to either "Fixed Sampling" or "Custom Sampling"; either one will let you set the Min Sample Level and Max Sample Level to 0. If you're setting this up on a Render Layer, be sure to apply this setting as a Render Layer Override. Also, make sure no lens or environment shaders interfere with your depth pass render layer; use render layer overrides to disable, and if necessary break, connections to such shaders and settings on a per-render-layer basis. Now let's take the rendered image and the custom camera depth render into Nuke.
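What goes wrong with anti-aliased depth can be shown with a single edge pixel; the depth values below are hypothetical:

```python
# An edge pixel straddling a near object (depth 0.1) and the far background
# (depth 0.9). Anti-aliasing averages the samples that fall in the pixel:
near, far = 0.1, 0.9
antialiased = (near + far) / 2.0
print(antialiased)   # 0.5 -- a depth belonging to NEITHER surface

# A depth-of-field filter focused on the near object would keep it sharp
# but treat this edge pixel as mid-distance geometry and blur it, which is
# exactly the halo artifact you see around silhouettes.
```

Color channels can be averaged this way because a blend of two colors is still a plausible color; a blend of two depths is a location where nothing exists.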
It doesn't matter how you apply the depth pass; it could be used as a mask in a ZBlur node, but I'll go with the channel workflow Nuke offers. First select the Read node of the depth pass image, then the Read node for the main image file. Now press K, which inserts a Copy node into the tree, connecting the nodes based on your selection order. Nuke offers a depth.Z channel by default, so we won't be creating a new channel; copy the depth pass's rgba.red (any color channel will do, since the image is grayscale) into the depth.Z of the Copy node's B input.
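Conceptually the Copy node just moves one channel from the A input into a channel of the B input; a minimal sketch with pixels modeled as dictionaries (all values hypothetical):

```python
# B input: the beauty render's channels for one pixel.
beauty = {"rgba.red": 0.8, "rgba.green": 0.4, "rgba.blue": 0.2}

# A input: the depth pass is grayscale, so red carries the depth.
depth_pass = {"rgba.red": 0.25}

# What the Copy node does: write A's rgba.red into B's depth.Z,
# leaving B's color channels untouched.
beauty["depth.Z"] = depth_pass["rgba.red"]
print(beauty["depth.Z"])   # 0.25
```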
|Depth pass color copied to the start of a very basic composite path.|
I'll quickly go over the ZBlur node in order of its attributes: set the channels to perform the operation on (all is the default), based on a channel (depth.Z is the default) used as a multiplier/mask. There are a few mathematical interpretations you can use if your depth map isn't typical, though the default, depth, will work fine for this setup. Hover your mouse over the math drop-down field for a detailed tool-tip list of what each mode does if you want to try another.
Begin by clicking focal-plane setup. This gives a simple, easy-to-understand view of how much blurring will be applied to your image. Adjust the focus plane parameter and watch the line of focus change: red is near, blue is far, and the dark gradient toward the mid-line is where focus will fall. The depth-of-field parameter is available to adjust and shows up as green, designating the total focus area with no out-of-focus blurring. However, it's best to keep depth-of-field at 0 (which is consistent with real camera lenses) unless you have a specific reason to adjust it; I'm doing so here just for this demonstration. When you're finished, toggle off focal-plane setup and watch the viewer redraw the image, complete with blurring beyond the near and far range of focus. You may want to adjust the size and maximum parameters for further control of the blur as well, keeping in mind photographic concepts such as circle of confusion, f-stop, etc., to mimic the way a real camera works. Hover your mouse over the parameters to see their tool-tips. Filter shape at 0 (the default) gives a Gaussian blur, while at 1 it blurs the image with a disk. Another important parameter to be aware of is the occlusion toggle: when it's on, farther objects won't blur over ones closer to the camera, based on the math setting. This is a much more accurate (and of course slower) way of computing the blur, but it's worth it, especially if the blur in your image is going to be significant.
Multi-pass rendering in Maya has made manual methods like the one presented here almost obsolete, but these tricks are good to know when the more automated depth-pass tools don't let you, for example, respect transparency or certain other material properties. In such a case, you might use a setup like this and edit each material of the scene on a new render layer, or set up a script to do it. For example, to respect transparency in the mia_material, disable all lights (or simply don't add them to the render layer) and plug the depth network's ramp texture into the Additional Color attribute; also set Reflectivity to 0. If all your reflections are raytraced, a simpler option is to disable raytracing in the renderer instead of zeroing Reflectivity on every material; in Mental Ray, uncheck Raytracing in the Features tab of the Render Settings window. Below is an example of the custom depth pass respecting material transparency:
|Custom depth respecting material transparency. No lights in the render layer; notice the red diffuse color has no effect.|