Tuesday, March 20, 2012

Maya - Full linear workflow for Viewport 2.0

A gamma of 2.2 is, for practical purposes, sRGB encoding, and it's important to be aware of when gamma is being applied so it can be removed for proper 3D lighting calculations, then re-applied when required. If you're confused, here's an excellent page on Understanding Gamma Correction. I won't go into the details of "linear workflow" in Maya, since there's plenty of information about it elsewhere. This post focuses on a problem I was confronted with when using the "Gamma Correction" option in Maya 2012's Viewport 2.0: gamma-encoded color textures being displayed with their gamma doubled up. I wanted linearized sRGB textures in the viewport too, not only in the rendering process. Color Management doesn't work for the viewports, and gammaCorrect nodes aren't supported in Viewport 2.0 yet (as of Maya 2012), but the problem can be solved by using the Mental Ray image conversion utility imf_copy.
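To make the relationship concrete, here's a small Python sketch (my own illustration, not anything Maya ships) comparing the exact piecewise sRGB transfer function with the simple 2.2 power-law approximation used throughout this post:

```python
def srgb_encode(linear):
    """Exact piecewise sRGB encoding (what a 1/2.2 gamma approximates)."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def srgb_decode(encoded):
    """Inverse transform: sRGB-encoded value back to linear light."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4

def gamma_decode(encoded):
    """The approximation this post uses: a "0.4545 gamma" operation,
    i.e. raising each value to 1/0.4545 (about 2.2), linearizes the image."""
    return encoded ** 2.2

# Mid-gray at sRGB-encoded 0.5 is roughly 0.21 in linear light --
# which is why lighting math done directly on encoded values goes wrong.
print(round(srgb_decode(0.5), 3), round(gamma_decode(0.5), 3))
```

The two curves differ by well under 1% of full range over most of the input, which is why the 0.4545 shortcut is good enough for texture work.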

By default, the folder containing imf_copy.exe (C:\Program Files\Autodesk\Maya2012\bin) already exists in the system PATH variable (under "System > Advanced system settings" in Windows). You usually don't need to check, but if you encounter a "program not found"-related error, add that path. To open the command-line interpreter (cmd.exe) with the working directory set to the folder of the currently focused (active) Explorer window: hold Shift, right-click in the window, and choose "Open command window here".

For scalar textures (such as bump maps, scalar and vector displacement maps, specular amount maps, reflection amount maps, normal maps, etc.) along with HDRs (which should already be linearly mapped), you can use the following command, which produces a memory-mappable image file (.map) with the same bit depth as the input file:

imf_copy -p "input_image" "output_image.map"

However, for color textures (diffuse color and 8-bit reflection color textures), you should use this:

imf_copy -p -e -g 0.4545 "input_image" "output_image.map" map rgba_16

The main difference here is the 0.4545 gamma operation on the input image, which approximates the inverse of the sRGB gamma correction (0.4545 ≈ 1/2.2) and brings the image into a linear representation of its color values. This is essentially the same as using a gammaCorrect node with the same setting, so don't also use a gammaCorrect node in Maya, and if you're using Color Management (you don't have to), set the file node for the .map image to "Linear sRGB". An important aspect of the newly produced image is its bit depth: 16-bit integer per channel. Converting an 8-bit image with a gamma of 0.4545 while keeping it at 8 bits would expose posterization (most easily seen in the dark colors) when the display gamma is applied; by converting to 16-bit, you remove the potential for color banding artifacts. The -e (error diffusion) option is also enabled, which dithers the color values as they are remapped to 16-bit.
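The banding argument can be demonstrated with a quick Python sketch (an illustration of the principle, not a simulation of imf_copy itself): linearize all 256 possible 8-bit input levels with the 2.2 power curve and count how many distinct output levels survive at each bit depth.

```python
def to_linear(v):
    # The 0.4545 "gamma" operation: raise to 1/0.4545, about 2.2
    return v ** 2.2

# Quantize every possible 8-bit input level after linearizing.
levels_8bit = {round(to_linear(i / 255) * 255) for i in range(256)}
levels_16bit = {round(to_linear(i / 255) * 65535) for i in range(256)}

# At 8 bits the dark levels collapse together (posterization);
# at 16 bits nearly every input level stays distinct.
print(len(levels_8bit), len(levels_16bit))
```

The 8-bit set loses a large fraction of the original 256 levels, almost all of them in the shadows, which is exactly where the banding shows up once the display gamma stretches those values back out.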

Note that you don't have to convert your scalar and HDR images at all, unless you're interested in the memory performance benefits the .map format offers. You also don't have to use that format; you could, for example, use Photoshop actions to apply the gamma edits to a set of images and save them as 16-bit output files. Once you've output the image files, you can import them into file nodes as usual. However you then apply the gamma to the rendered image or the Render View display (using mia_exposure_photographic, mia_exposure_simple, or Color Management's view manager in the Render View), you'll have a setup that is perceptually correct in Viewport 2.0, so lights will give you a close approximation of the rendered result, at least for relatively simple scenes.

Viewport 2.0 closely matching Mental Ray software render.
To verify the idea, I tested a basic scene using a gradient image in sRGB produced in Photoshop. The Stanford bunny has a real-time reflection map (an HDR image) applied through an envBall node in the "Reflection Color" of the mia_material_x_passes, and the floor is a Substance procedural texture converted to a color texture, with no gamma change applied since procedurals are already linearly color mapped. The photographic lens shader is used purely for the view gamma, and I reduced the exposure slightly under "Render View > Display > Color Management". If you want the Render View color manager to handle the view gamma instead of an exposure shader, set the "Image Color Profile" to Linear sRGB and the Gamma on the exposure node to 1.0. Color Management in the Render Settings was disabled, as all images are already linearly color mapped (or approximated as linear sRGB with the 0.4545 gamma baked into the .map images). There is a bit of noticeable color banding in the gradient image (even with the Viewport 2.0 floating-point render target enabled), but that's fine and doesn't appear in the rendered result.

When working in a real scene, you won't bind yourself to the lighting results the viewport shows you (unless you're rendering with the Hardware 2.0 renderer), and you'll probably want tone mapping applied; "Burn Highlights" and "Crush Blacks" at their defaults will push the rendered image even further from the viewport, but that's a good thing and mimics the human perceptual response better. Before doing the final renders for compositing purposes, remember to remove all tone mapping and gamma effects. For example, with mia_exposure_photographic, setting Burn Highlights to 1.0, Crush Blacks to 0.0, Vignetting to 0.0, and Gamma to 1.0 will give you a non-tonemapped image; all the other settings in that shader, such as cm2_factor, are simple multipliers and won't alter the masterBeauty pass in a non-linear way. In Nuke or other compositing software, you can then re-apply tone mapping effects.

As a concept, however, you can see how closely the render matches the viewport, which was the goal of this setup. It's at least useful for properly displaying color maps, and for rendering correctly with Hardware 2.0 for pre-visualization work, regardless of how the exposure or lighting eventually diverges from the viewport setup in a software rendering. If you're working in a scene with dozens or even hundreds of textures at 4K resolution, eventually your graphics card won't be able to handle all the textures; that's fine, simply work without textures being displayed.

To simplify all this image converting, you can loop over the files from the command line. To avoid editing commands in a text editor every time, save the following batch scripts:

Save the text below as "_img-map_scalar.bat" and drag your scalar and HDR images to it:
:convertfile
@IF "%~1" == "" GOTO end
imf_copy -p "%~1" "%~d1%~p1%~n1.map"
@SHIFT
@GOTO convertfile
:end
@ECHO.
@ECHO Done!
@pause

Save the text below as "_img-map_color16.bat" and drag color and reflection color images to it:
:convertfile
@IF "%~1" == "" GOTO end
imf_copy -p -e -g 0.4545 "%~1" "%~d1%~p1%~n1_linear.map" map rgba_16
@SHIFT
@GOTO convertfile
:end
@ECHO.
@ECHO Done!
@pause

The "color" script is only for applying a 0.4545 gamma correction to 8-bit color and 8-bit reflection maps and converting them to 16-bit (integer) so they render without banding artifacts. In general, you can use the "scalar" script for everything else; a typical scene might consist of mostly scalar textures and a few HDR color maps, all of which would be converted using the "scalar" script. Note that the "scalar" script outputs a .map with the exact same bit depth as the input, because no "rgba_*" format was specified.

If you paint and export 16-bit color maps from Mari or Photoshop, use the "color" script; however, if you know you've worked gamma-compensated or fully linear while texture painting, use the "scalar" script instead. The same concepts apply to 32-bit color maps; again, be aware of gamma being "baked in" if you're using typical 8-bit images as painting sources. If you're painting with 8-bit sRGB gamma-encoded images in a non-color-managed view (which can be changed in Mari under "View > Palettes > Color Manager"), that gamma is now baked into your 32-bit floating-point color map. This is fine; simply be aware of it so you can apply a 0.4545 gamma (with imf_copy) to the 32-bit image exported from Mari. To make a batch script for 32-bit floating-point color maps that need a 0.4545 gamma correction, just replace "rgba_16" with "rgba_fp" in the "color" script above and save it as a new script, perhaps named "_img-map_color32".

Again, it all depends on how you're working: if a texture image looks a bit washed out or darkened, you know that somewhere in the image pipeline you've not compensated for the gamma. Remember, the goal is for all color data sent to the renderer to be linear, and you can achieve that using gammaCorrect nodes, Color Management, the imf_copy utility, Photoshop, Nuke, and many other options.
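If you'd rather drive imf_copy from a script than from drag-and-drop, the decision logic above can be sketched in Python (a hypothetical helper; the function name and structure are mine, and it assumes imf_copy is on your PATH):

```python
import pathlib

def build_imf_copy_command(image_path, is_8bit_color):
    """Return the imf_copy argument list for one texture.

    is_8bit_color=True  -> bake the 0.4545 gamma and promote to 16-bit
                           (the "color" script above).
    is_8bit_color=False -> straight conversion, same bit depth as input
                           (the "scalar" script above).
    """
    src = pathlib.Path(image_path)
    if is_8bit_color:
        dst = src.with_name(src.stem + "_linear.map")
        return ["imf_copy", "-p", "-e", "-g", "0.4545",
                str(src), str(dst), "map", "rgba_16"]
    dst = src.with_suffix(".map")
    return ["imf_copy", "-p", str(src), str(dst)]

# Example: pass the list to subprocess.run(cmd, check=True) to convert.
print(build_imf_copy_command("diffuse.tga", True))
```

Building the argument list separately from running it also makes it easy to fan the conversions out across multiple processes, the same parallel speed-up described for FileMenu Tools below.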

You can name the batch files however you like, but starting the name with an underscore ( _ ) will alphabetically sort them to the top of the file browser (with alphabetical sorting enabled). Simply drag the image file(s) onto the batch script and it will take each image in the selection and output a .map file to the input's source folder, regardless of where the batch script is located. The Windows command line will fail if you have too many files in the selection (I can't say exactly how many, since I believe the limit is based on the combined length of all the file paths, something like 2048 total characters), so keep that in mind before you drag twenty or so images onto the batch script and wonder why it refuses to produce results.

If you're not into dragging files onto batch scripts and would prefer a more familiar menu-based approach, you can add these scripts to the right-click context menu in Windows with the free and highly useful program "FileMenu Tools" from LopeSoft. Install the program, then in its settings: add a command, set the action to "Run program", give it a name, set the Element Types for Drives and Folders to "No", and set the path to the batch script. Repeat for both scripts, and you'll be able to right-click a selection of image files and convert them right from the Windows Explorer context menu. You can set it to spawn multiple instances simultaneously, which speeds up the task significantly since each selected image gets its own imf_copy instance on its own CPU thread, rather than converting each image sequentially on a single thread. Remember that it won't work if you select too many files, as mentioned in the paragraph above.

Using "FileMenu Tools" to use the batch scripts in a familiar menu interface; select images, right click, choose option.


8-bit files converted to 16-bit memory-mapped files will be significantly larger than their sources. You could remove the "-p" flag to disable filtered image pyramid creation, which will reduce the output file size slightly; it's not strictly necessary (it's useful for efficient texture loading into memory). In future versions of Maya, if the gammaCorrect node becomes supported in Viewport 2.0, and/or Color Management works in the viewport (I sent this as a feature request to Autodesk), then you won't have to "bake" the gamma into the .map files. For now, this method works very well.

Update: Maya 2013 has significant improvements in Viewport 2.0, along with support for the gammaCorrect node. If you don't want to use the Color Management feature, you can just use gammaCorrect nodes set to 0.4545 on typical 8-bit sRGB-encoded color file textures, and the images will be transformed into linear color space not only for rendering, but for viewport display too (with Gamma Correction enabled in Viewport 2.0), which was the point of this post: getting file textures to look correct in Viewport 2.0. By doing the gamma corrections with gammaCorrect nodes, you'll only need one batch script for all your textures (the "scalar" one, which doesn't alter the gamma). I'll explore further possibilities as I get into rendering assets for my current project, and I'll post updates here if necessary.

4 comments:

  1. So wait, the scalar batch will convert HDR images to .map 16bit, clamping them?

    Replies
    1. The "scalar" batch script will convert the source image to the exact same bit-depth that it was already in. For example: give it a source 8-bit bump map texture, and you'll get a .map file in 8-bit. For .hdr in 32-bit floating point, you'll get a .map in 32-bit floating point; this is because no "rgba_*" is specified in that script, which is nice. Use the "scalar" script for everything (I should probably rename it to something more appropriate) except "8-bit" color and reflection maps. For 8-bit color and reflection maps, you'll want to use the "color" batch script which will convert them to 16-bit and 0.4545 gamma for rendering without banding (that's the whole purpose of that "color" batch script). So what about if you're using a .hdr as a color map (which should already be linear)? Use the "scalar" batch script in that case. I'll make it all more clear on the post as it's not very obvious, thanks for asking!

    2. Hey Gary,
      I have been using a linear workflow with the Photographic exposure node, but the problem is that as soon as I bring in a surface shader it renders as black (of course it has a texture mapped into it)
      I use mostly mia shaders (with individual gamma nodes at .455) and set the RGB conversion to .318 and the Cm2 is left on 1. All the shaders other than the surface shader work fine. The only way that I can get everything to render is to pump up the cm2 to 50000 and take the RGB down to .00032 with its multiplier now set to 2 to balance the Cm2 increase.
      Can you tell me where I am going wrong as all the tutorials I have seen are either using the simple exposure, or are not discussing the use of surface shaders with mia photographic exposure node.
      BTW love the tips/blog.
      Cheers

    3. From what I generally understand, the Surface Shader should always work in a predictable way regardless of which exposure shader you're using. There's a way to bypass the problem you're having; I don't use Surface Shaders when rendering with Mental Ray. I use a mia_material_x_passes and reduce "Weight" and "Reflectivity" to 0, then plug in the color texture to the "Additional Color" slot, effectively this makes the material act just like a Surface Shader.

      In this way, all my materials are always Mental Ray materials, it just makes everything work better. If you have a render layer that has, for example, mattes, then sure, some Surface Shaders for that one render layer are fine, but the main render layer where the scene is being rendered, I do my very best to have every single material a Mental Ray material that can support render passes.

      And I apologize for the lengthy amount of time since I've logged into my blog and responded to comments. I'm doing a lot right now on other tasks in life, but I'll eventually find time to continue my learning of and production of 3D content. Thanks for reading and I'm glad you've been able to learn something here!
