This page is outdated; please see the new Intro to AOVs page and the Integrated AOVs page for up-to-date information.

Introduction

AOV stands for "arbitrary output variables" and refers to the different types of per-pixel information Redshift can produce. While Redshift typically produces only a color for each pixel of the frame, you can configure it to also produce depth information, individual shading elements, and so on.

For example, a depth AOV can help combine the Redshift-rendered frame with graphics produced by other renderers. Individual shading elements (diffuse, reflections, etc.) allow fast, small-scale color adjustments without having to re-render the entire frame.

Managing AOVs


AOV clamping

In Softimage's Redshift Output tab and in Maya's Redshift AOV tab there is an option to clamp the color/AO AOVs. Redshift can perform sub-sample intensity clamping during unified sampling, which limits the noise (grain) that can be produced when depth of field and motion blur are combined with bright light sources. AOV clamping offers the same type of sub-sample control for AOVs. Please refer to the AOV tutorial page for more information about how and when these options should be used.

The "Apply Color Processing" option

Several AOVs have an "apply color processing" option which is enabled by default. These AOVs are:

  • Diffuse Lighting
  • Diffuse Lighting Raw
  • Specular Lighting
  • Subsurface Scatter
  • Subsurface Scatter Raw
  • Reflections
  • Reflections Raw
  • Refractions
  • Refractions Raw
  • Emission
  • Global Illumination
  • Global Illumination Raw
  • Caustics
  • Caustics Raw
  • Volume Lighting
  • Translucency Lighting Raw
  • Translucency GI Raw

The "apply color processing" option means that gamma correction (if gamma is other than 1.0) and photographic exposure (if a photographic exposure lens shader is present) will both be applied to the AOVs.
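Conceptually, the option amounts to an exposure multiply followed by a gamma encode. The sketch below is a minimal single-channel illustration assuming a plain exposure multiplier and a power-curve gamma; Redshift's actual photographic exposure response is more elaborate, so this only shows the order of operations:

```python
# Minimal single-channel sketch of the "apply color processing" step,
# assuming a plain exposure multiplier followed by power-curve gamma
# encoding. Redshift's actual photographic exposure response is more
# elaborate; this only illustrates the order of operations.

def apply_color_processing(value, exposure=1.0, gamma=2.2):
    """Apply photographic exposure, then gamma-encode a linear AOV value."""
    exposed = value * exposure        # exposure scales linear light
    return exposed ** (1.0 / gamma)   # gamma correction for display

# With gamma 1.0 and no exposure lens shader, the AOV is unchanged:
print(apply_color_processing(0.25, exposure=1.0, gamma=1.0))  # 0.25
```

With gamma other than 1.0, the AOV values are no longer linear light, which is why the option exists per AOV: raw lighting passes destined for comp math are often kept linear instead.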

The Available AOVs

An Example Scene

We'll demonstrate the use of the AOVs using the following example scene which combines reflections, refractions, bump mapping, SSS, ambient occlusion (architectural shader AO ambient lighting on the rightmost sphere), a spherical direct light, global illumination and caustics.

 

World position

The world position AOV produces per-pixel XYZ world coordinates in the R, G, B channels respectively.

Due to effects like antialiasing, depth of field and motion blur, Redshift has to generate multiple samples per pixel ('sub-samples'). The filter option lets the user define how these sub-samples are combined into the final per-pixel world position. The 'full' filter option averages the world positions for the pixel using the same filter used for unified sampling. The 'min depth' option returns the world position of the sub-sample closest to the camera. The 'max depth' option returns the world position of the sub-sample farthest from the camera. The 'center sample' option picks the position that corresponds to the middle of the pixel using a single sample. Which option to choose depends on how you plan to use the world positions: if you need antialiased results, select 'full' filtering; if, on the other hand, you need non-antialiased results, the 'center sample' option will provide the best results.
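The four filter modes can be sketched as follows. This is a minimal illustration, assuming a plain box average stands in for the actual unified-sampling filter and the middle list element stands in for the center sub-sample:

```python
# Illustrative sketch of the sub-sample filter modes described above.
# A plain average stands in for the real unified-sampling filter, and
# picking the middle list element stands in for the center sub-sample.

def filter_subsamples(values, mode):
    """Combine a pixel's sub-sample values into one output value."""
    if mode == "min depth":
        return min(values)               # closest to the camera
    if mode == "max depth":
        return max(values)               # farthest from the camera
    if mode == "center sample":
        return values[len(values) // 2]  # stand-in for the center sub-sample
    return sum(values) / len(values)     # 'full': filtered (antialiased)

depths = [1.0, 2.0, 3.0, 4.0]
print(filter_subsamples(depths, "min depth"))  # 1.0
print(filter_subsamples(depths, "max depth"))  # 4.0
print(filter_subsamples(depths, "full"))       # 2.5
```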

The world position AOV also has options to scale the X, Y, Z coordinates by user-defined factors. This can help convert them to other unit spaces (inches to meters, for example), if required.

The following image is an RGB representation of the world position AOV for the test scene. The origin is around the cyan sphere near the middle of the frame.


Depth

The depth AOV outputs a single channel EXR file which contains per-pixel depth information. The channel is named 'Z' as this helps certain comp packages (e.g. Nuke) to automatically assume the image contains depth information.

Due to effects like antialiasing, depth of field and motion blur, Redshift has to generate multiple samples per pixel ('sub-samples'). The filter option lets the user define how these sub-samples are combined into the final per-pixel depth. The 'full' filter option averages the depths for the pixel using the same filter used for unified sampling. The 'min depth' option returns the depth of the sub-sample closest to the camera. The 'max depth' option returns the depth of the sub-sample farthest from the camera. The 'center sample' option picks the depth from the middle of the pixel using a single sample. Which option to choose depends on how you plan to use the depths: if you need antialiased results, select 'full' filtering; if, on the other hand, you need non-antialiased results, the 'center sample' option will provide the best results.

By default, the depths returned range between the near and far clip values of the camera (in world units). Sometimes though, what is needed is a "0 to 1" range where 0 should be returned for pixels closest to the camera and 1 returned for pixels farthest from the camera. Enabling the 'normalize 0 to 1' option will produce such values.

Finally, the depth AOV has an option to scale the final depth result. This can help convert the depth values to other unit spaces, if required.
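The normalize and scale steps described above amount to a simple per-pixel remap. The sketch below is illustrative; the names and the plain linear remap are assumptions, not Redshift's internals:

```python
# Sketch of the depth post-processing described above: world-space depth
# is optionally remapped to a 0-1 range between the camera's near and far
# clip planes, and then multiplied by a user-defined scale factor.

def process_depth(depth, near, far, normalize=False, scale=1.0):
    """Remap a world-space depth value as the depth AOV options describe."""
    if normalize:
        depth = (depth - near) / (far - near)  # 0 at near plane, 1 at far plane
    return depth * scale

print(process_depth(5.0, near=0.0, far=10.0, normalize=True))             # 0.5
print(process_depth(5.0, near=0.0, far=10.0, normalize=True, scale=2.0))  # 1.0
```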

The following image shows the depth AOV for the test scene. The camera far plane was brought in and 'normalize 0 to 1' was enabled in order to demonstrate the full 0->1 range. Objects closer to the camera appear darker than faraway objects.


Motion Vectors

If the scene contains animation, the Motion Vectors AOV will contain per-pixel motion information. This includes any motion due to object transforms, object deformation and camera transforms. These motion vectors are two-dimensional and are therefore stored in the red and green channels of the Motion Vector AOV.

We recommend storing Motion Vector AOVs in EXR files, as they provide enough precision and work correctly with all the options described below.

The 'output raw vectors' option produces AOV values that contain the absolute motion that a pixel moved between frames. In other words, if a pixel moved 10 pixels to the left and 5 pixels up, the AOV red and green channels will be (-10.0, 5.0). Please note that this option overrides all other options, except 'Filtering'.

The 'max motion in pixels' option specifies the maximum motion that will be encoded into the AOV image. The default setting is 8, meaning any pixel that moves by more than 8 pixels is treated as if it moved by 8 pixels. Apart from clamping, this value also acts as a normalization factor, i.e. the motion vectors are divided by it.

If 'no clamp' is enabled, then no motion clamping will happen but computed motion vectors will still be divided by the 'max motion in pixels' value. This is explained with examples below.

The 'image output min/max' options control how the motion vector should be encoded into the final image's red and green channels. Typically, only two valid combinations exist: either [0, 1], which is our default, or [-1, 1], which some comp packages need.

With the [0, 1] default setting, a maximum negative motion will get an R or G value of zero. No motion will get an R/G value of 0.5 (or 128 for 8-bit formats). A maximum positive motion will get an R/G value of 1.0 (or 255 for 8-bit formats). In other words, the vectors are going to be 'centered' around 0.5 (or 128 for 8-bit formats).

With the [-1, 1] setting, a maximum negative motion will get an R or G value of minus one. No motion will get an R/G value of 0.0. A maximum positive motion will get an R/G value of 1.0. Please note that only EXRs can represent negative numbers, so if you use a [-1, 1] range, make sure to avoid 8-bit formats!

Here are a few examples to explain the 'max motion', 'no clamp' and 'image output min/max' options:

Motion vector (-8, 20) with 'max motion' set to 40 and 'image output min/max' set to [0, 1]

The max motion setting first divides both axes by 40, so (-8, 20) becomes (-0.2, 0.5). The [0, 1] range multiplies these values by 0.5 and adds 0.5; this fits the [-1, 1] range into a [0, 1] range. So (-0.2, 0.5) becomes (0.4, 0.75), which is what is stored in the final AOV.

Motion vector (-80, 200) with 'max motion' set to 40 and 'image output min/max' set to [0, 1]

This is the same example as above but with much stronger motion. Because the motion is greater than 'max motion', it will be clamped to 40 and then divided by 40. So (-80, 200) becomes (-1,1). The [0, 1] range multiplies these values by 0.5 and adds 0.5. So (-1, 1) becomes (0.0, 1.0), which is what is stored in the final AOV.

Motion vector (-80, 200) with 'max motion' set to 40 and 'image output min/max' set to [0, 1]. But with 'no clamp'!

This is the same example as above but with no clamping. The same division by 40 happens but no clamping, so (-80, 200) becomes (-2,5). The [0, 1] range multiplies these values by 0.5 and adds 0.5. So (-2,5) becomes (-0.5, 3), which is what is stored in the final AOV.

Motion vector (-8, 20) with 'max motion' set to 40 and 'image output min/max' set to [-1, 1]

This is the same as the first example but with a different min/max range. The max motion setting first divides both axes by 40, so (-8, 20) becomes (-0.2, 0.5). The [-1, 1] range means that no offset/scale is applied to these values. So (-0.2, 0.5) is stored in the final AOV.

Motion vector (-80, 200) with 'max motion' set to 40 and 'image output min/max' set to [-1, 1]

Same example as before but with stronger motion. Because the motion is greater than 'max motion', it will be clamped to 40 and then divided by 40. So (-80, 200) becomes (-1,1). The [-1, 1] range means that no offset/scale happens to these values. So (-1,1) is stored in the final AOV.

Motion vector (-80, 200) with 'max motion' set to 40 and 'image output min/max' set to [-1, 1]. But with 'no clamp'!

Same example as before but with no clamping. The same division by 40 happens but no clamping, so (-80, 200) becomes (-2,5). The [-1, 1] range means that no offset/scale happens to these values. So (-2,5) is stored in the final AOV.
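The worked examples above can be reproduced with a short function. This is an illustrative sketch of the encoding steps (optional clamp, normalize, remap), not Redshift's actual implementation:

```python
# Illustrative sketch of the motion-vector encoding steps described in the
# examples above: optional clamp to +/- max_motion, divide by max_motion,
# then remap the [-1, 1] result into the [out_min, out_max] image range.

def encode_motion(mv, max_motion=8.0, out_min=0.0, out_max=1.0, no_clamp=False):
    """Encode a raw 2D pixel-space motion vector into AOV red/green values."""
    encoded = []
    for axis in mv:
        if not no_clamp:
            axis = max(-max_motion, min(max_motion, axis))
        axis /= max_motion                     # normalize to [-1, 1]
        t = (axis + 1.0) * 0.5                 # remap [-1, 1] -> [0, 1]
        encoded.append(out_min + t * (out_max - out_min))
    return tuple(encoded)

r = encode_motion((-8, 20), max_motion=40)
print(tuple(round(v, 6) for v in r))                            # (0.4, 0.75)
print(encode_motion((-80, 200), max_motion=40))                 # (0.0, 1.0)
print(encode_motion((-80, 200), max_motion=40, no_clamp=True))  # (-0.5, 3.0)
```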

Finally, the 'Filtering' option enables or disables antialiasing for the Motion Vector AOV.

The following image shows the Motion Vector AOV for our test scene. We made the leftmost (bumpy) sphere move upwards and the middle (glass) sphere move to the right; the camera is static. We used the default AOV settings (8 pixels and a [0, 1] minmax range). As can be seen, most pixels in the scene don't move and have an R=0.5, G=0.5 (mid-yellow) color. Given the [0, 1] minmax range, this translates to a zero vector, i.e. no motion. If the minmax range were [-1, 1], these pixels would have been black instead of yellow. The bumpy sphere is moving up, which means a stronger Y (green channel) component. The glass sphere is moving to the right, which means a stronger X (red channel) component.


Puzzle Matte

Redshift allows assigning IDs to objects and materials. The puzzle matte AOV allows each of the R, G, B channels to contain the per-pixel contribution of a single object or material. Traditional objectID/materialID AOVs typically hold a single value per pixel, which can produce aliasing-related issues when the same pixel can 'see' multiple objects or materials. This is especially true when the scene uses depth of field and motion blur. Additionally, having a single ID per pixel prevents the production of matte masks through reflections or refractions.

The puzzle matte AOV, on the other hand, holds per-pixel contributions of object/material IDs on separate RGB channels, which means the same pixel can 'mix' contributions from multiple (up to 3) objects/materials and be workable even in the presence of depth of field, motion blur and reflection/refraction.

The first step in using the puzzle matte involves assigning material IDs or object IDs to your materials and objects respectively.

To assign material IDs in Maya: go to the material's shading group and set a value to the "Material ID" box.

To assign material IDs in Softimage: select the required material, click Get -> Property -> Redshift Material ID and then set the required value.

To assign object IDs in Maya: go to the object's shape options -> Redshift and then set the required object ID value.

To assign object IDs in Softimage: select the required object, click Get -> Property -> Redshift Object ID and then set the required value.

Then, depending on whether you decided to work with object IDs or material IDs, the puzzle matte AOV has to be set to the appropriate mode.

Then, each puzzle matte AOV (there can be multiple puzzle matte AOVs) needs its Red/Green/Blue channels filled with the appropriate object/material IDs. Say we had three objects with object IDs 11, 21, 33 and we set the Red/Green/Blue channels to 11, 21, 33. This means that the Red channel will contain object ID 11, the Green channel object ID 21 and the Blue channel object ID 33.

If you have more than 3 objects/materials you want to create matte masks for, you can create multiple puzzle mattes and use different combinations of the IDs for red/green/blue.

The 'reflect/refract IDs' option defines whether the puzzle matte channels should also be filled using reflection/refractions. This allows the creation of object/material ID masks not only as seen from the camera but also as seen through reflections and refractions.
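As a minimal illustration of why fractional coverage matters, the sketch below applies a color gain through one puzzle matte channel; the per-pixel blend keeps antialiased, motion-blurred and defocused edges intact. The function and variable names are illustrative, not any particular comp package's API:

```python
# Illustrative sketch of grading through one puzzle matte channel. The
# fractional coverage values that appear on antialiased, motion-blurred
# or defocused edges blend the adjustment smoothly instead of producing
# hard, aliased mask boundaries.

def grade_through_matte(beauty, coverage, gain):
    """Apply a gain only where the matte channel covers the pixel."""
    graded = beauty * gain
    return beauty * (1.0 - coverage) + graded * coverage

print(grade_through_matte(0.5, coverage=1.0, gain=2.0))  # 1.0  (fully inside the matte)
print(grade_through_matte(0.5, coverage=0.0, gain=2.0))  # 0.5  (outside: untouched)
print(grade_through_matte(0.5, coverage=0.5, gain=2.0))  # 0.75 (soft edge: half graded)
```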

The following image shows what the puzzle matte looks like for our test scene. We assigned object IDs 1, 2, 3 to the yellow, cyan, magenta spheres respectively and made the puzzle matte's red channel contain object ID 1, its green channel object ID 2 and its blue channel object ID 3. We enabled 'reflect/refract IDs' so that the puzzle matte also shows reflection/refraction coverage.


ObjectID

The ObjectID AOV stores the mesh's object ID at each pixel. Assigning object IDs is explained in the "Puzzle Matte" section above.

Using the ObjectID AOV instead of the Puzzle Matte can be convenient because the setup can be a bit easier and because it can hold lots of object IDs in the same image, instead of only 3 (like the Puzzle Matte does).

The drawback is that it doesn't properly support antialiasing, depth of field or motion blur and can therefore generate aliasing artifacts in the AOV data. This is because each pixel can hold only one object ID and, by definition, usage of DOF and motion blur means that multiple objects can 'share' the same pixel. For a solution to this, please use Deep EXRs (deep rendering) instead. With Deep EXRs, ObjectIDs can be extremely useful.

Depending on the image format, the ObjectID values are encoded differently in the pixels:

  • For 8-bit formats, the ObjectIDs can range between 0-255, saved as integer numbers, i.e. 1, 2, 3 and so on.
  • For 16-bit floating point formats, the ObjectIDs can range between 0 and several thousand, saved as floating point numbers, i.e. 1.0, 2.0, 3.0 and so on.
  • For deep EXRs, ObjectIDs are stored as integer numbers.

Diffuse Lighting (and raw lighting)

The diffuse lighting AOV contains the diffuse lighting component of the material's final shaded result. Typically, diffuse lighting reaching a material is multiplied by the material's diffuse color. The diffuse lighting AOV returns this multiplied result. If this multiplication is not desired, you can use the Diffuse Lighting Raw AOV, instead, which returns the non-multiplied diffuse lighting.

The following two images show diffuse lighting and raw diffuse lighting. The first picture is tinted by each object's material diffuse color. The second picture looks black and white because it contains only the light (without the multiplication with each material's diffuse color) and the light was white.

 

Diffuse filter

Any material that performs diffuse lighting typically has a diffuse color port, which can be a texture, a constant color, or the output of an elaborate shading graph. The contents of the diffuse color can be returned via the Diffuse Filter AOV. The diffuse filter AOV, in conjunction with the 'raw' lighting AOVs, allows the user to tweak the lighting results separately from each material's diffuse color.
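The filter/raw split implies (per pixel and per channel) that the composited diffuse AOV is the diffuse filter multiplied by the raw lighting, which is what makes retinting in comp possible. A minimal single-channel sketch with illustrative values:

```python
# Single-channel sketch of the filter/raw relationship: the diffuse AOV is
# the diffuse filter (the material's diffuse color) multiplied by the raw
# (untinted) lighting. Regrading the filter in comp therefore retints the
# lighting without a re-render. Values are illustrative.

def composite_diffuse(diffuse_filter, diffuse_lighting_raw, tint=1.0):
    """Rebuild the diffuse lighting AOV, optionally retinting the filter."""
    return (diffuse_filter * tint) * diffuse_lighting_raw

print(composite_diffuse(0.5, 1.5))            # 0.75 (matches the Diffuse Lighting AOV)
print(composite_diffuse(0.5, 1.5, tint=2.0))  # 1.5  (retinted in comp)
```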

The following picture shows the diffuse color of each object. Notice that the yellow, cyan, blue spheres are not purely diffuse but actually have a bit of reflection on their silhouettes (Fresnel effect). For this reason, the diffuse component is a bit dimmer around their silhouettes, since reflection 'takes away energy' from diffuse lighting.

 

Specular lighting

The specular lighting AOV contains the specular lighting component of the final shaded result. Please note that by 'specular lighting' we mean the reflections of lights only. This is demonstrated in the following picture.

 

Subsurface scattering

The subsurface scattering AOV contains the subsurface scattering component of the final shaded result.

 

Reflections

The reflections AOV contains the reflection component of the final shaded result. Please note that the reflections do not contain the reflections of lights: these are contained in the specular AOV. That's why, in the following picture, the reflection of the spherical light is black. Also note that most objects in the scene have a slight reflection on their silhouettes which matches the darkening of the diffuse filter around the same areas.

 

Refractions

The refractions AOV contains the refraction component of the final shaded result.

 

Emission

The emission AOV contains the emissive component of the final shaded result. The only emissive object in the test scene is the spherical light.

 

Global illumination (and raw GI)

The global illumination (GI) AOV contains the GI lighting component of the material's final shaded result. The GI lighting reaching a material is multiplied by the material's diffuse color. The GI AOV returns this multiplied result. If this multiplication is not desirable, you can use the Global Illumination Raw AOV, instead, which returns the non-multiplied GI.

The following two pictures show global illumination and raw global illumination. In the first case, the GI is multiplied by each object's diffuse color while the second image shows the GI lighting without any multiplication.

 


Caustics (and raw caustics)

The caustics AOV contains the caustics lighting component of the material's final shaded result. The caustic lighting reaching a material is multiplied by the material's diffuse color. The caustic AOV returns this multiplied result. If this multiplication is not desirable, you can use the Caustics Raw AOV, instead, which returns the non-multiplied caustic lighting.

The following two pictures show caustics and raw caustics. On the first picture, the caustic has been multiplied by the diffuse lighting of the floor (yellow tint) while the second picture contains the caustic lighting alone and without any multiplication.

 

Ambient Occlusion

The ambient occlusion AOV contains the ambient occlusion component of the final shaded result.

Shadows

The shadows AOV highlights the pixels that are shadowed by objects. Please note that it won't highlight pixels that are in shadow merely because the pixel's normal points away from the light. The bumpy sphere shows micro-shadows: these pixels would have received lighting if the sphere geometry itself didn't self-shadow, which is why they appear in the shadow AOV.

 

Normals

The normals AOV contains the per-pixel surface normal, in world space. Please note that these normals are non-bumped, i.e. they don't contain bump or normal mapping.

The green in the following picture appears because the normal is pointing up (0, 1, 0), which corresponds to the green color (R=0, G=1, B=0).

 

Bump Normals

The bump normal AOV contains the per-pixel bumped surface normal.

The following picture shows the bumped normal on the bump-mapped sphere.

 

Matte

The Matte AOV contains 'matte' surfaces that are typically constant colors not affected by lighting. Current shader nodes that output information for this channel are the 'Matte Shadow Caster' and Softimage 'Constant' shader nodes.

The following picture shows the beauty pass of a scene demonstrating the 'Matte Shadow Caster' shader. The green sphere in the foreground is using a 'Constant' shader node to drive its material.

 

 

The following picture shows the 'matte' components of the scene.

 

Volume Lighting / Volume Fog Tint / Volume Fog Emission

These AOVs describe all the components of volume scattering in a scene. The 'Volume Lighting' AOV comes from lights illuminating the medium, while the 'Volume Fog Tint' and 'Volume Fog Emission' AOVs describe the effect of fogging through the medium, i.e. how objects become obscured by the fogging effect.

 

A simple example of volumetric scattering in a scene. It's the same scene as above, but with a red spot light and global volume scattering enabled, with attenuation:


Below you can see the same image decomposed into volume scattering AOVs:

 

Volume Lighting

Volume Fog Tint

Volume Fog Emission

 


Both the Volume Lighting and Volume Fog Emission AOVs are additive effects, whereas the Volume Fog Tint is a multiplicative effect that should be applied to the resulting material lighting AOVs in your scene.
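Based on that description, a single pixel can be sketched as follows, with illustrative single-channel values: the surface lighting is multiplied by the (multiplicative) fog tint, and the two additive volume terms are then added on top:

```python
# Per-pixel sketch of combining the volume AOVs as described above: the
# surface lighting is multiplied by the (multiplicative) fog tint, then
# the (additive) volume lighting and fog emission terms are added.
# Values are illustrative single-channel pixels.

def combine_volume_aovs(surface_lighting, fog_tint, volume_lighting, fog_emission):
    return surface_lighting * fog_tint + volume_lighting + fog_emission

# A partially fogged pixel: the surface is dimmed by the tint while the
# in-scattered light and emission are added on top.
print(combine_volume_aovs(0.5, fog_tint=0.5, volume_lighting=0.25, fog_emission=0.125))  # 0.625
```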

 

An example node graph in Nuke showing how to combine the volume AOVs with other lighting AOVs to get the final beauty image:

Translucency Filter, Translucency GI Raw, Translucency Direct Lighting Raw

In Redshift, 'translucency' describes diffuse lighting through the back face of a surface, i.e. 'back-lighting'. It is a cheap, fake version of sub-surface scattering that is useful for very thin but translucent objects such as paper or leaves. In order to complete the lighting equation in comp, Redshift offers three AOVs for translucency that become necessary when compositing with 'raw' lighting AOVs.

A simple example showing a scene lit by red and blue spherical area lights at the bottom of the plane, with the rest of the lighting coming from primary GI rays (i.e. no secondary GI bounces) and a self-illuminating green faceted sphere towards the top of the plane. The vertical plane has a front-lighting diffuse tint of grey, with a back-lighting translucency tint of white.



Below you can see the front-facing diffuse lighting broken down as AOVs:

 

Diffuse Filter

Diffuse Lighting Raw

Diffuse GI Raw

 


Below you can see the back-facing diffuse translucency lighting broken down as AOVs:

 

Translucency Filter

Translucency Lighting Raw

Translucency GI Raw

 


 

An example node graph in Nuke showing how to combine the diffuse lighting AOVs to get the final beauty image:
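Numerically, that combination can be sketched as below, assuming the same filter * (raw direct + raw GI) convention as the other raw AOVs. The function name and single-channel values are illustrative:

```python
# Sketch of recombining the front-facing diffuse and back-facing
# translucency AOV breakdowns shown above, assuming the same
# filter * (raw direct + raw GI) convention as the other raw AOVs.
# The function name and single-channel values are illustrative.

def rebuild_diffuse_with_translucency(
        diffuse_filter, diffuse_lighting_raw, diffuse_gi_raw,
        translucency_filter, translucency_lighting_raw, translucency_gi_raw):
    front = diffuse_filter * (diffuse_lighting_raw + diffuse_gi_raw)
    back = translucency_filter * (translucency_lighting_raw + translucency_gi_raw)
    return front + back

print(rebuild_diffuse_with_translucency(0.5, 1.0, 0.25, 1.0, 0.5, 0.25))  # 1.375
```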

Hair Min Pixel Width and AOVs

The hair 'Min Pixel Width' (aka MPW) feature reduces aliasing artifacts on fine hair strands by automatically making the strands partially transparent below a certain pixel-width threshold. When this feature is enabled, primary-ray 'MPW' hair is not treated as transparent as far as AOVs are concerned and thus will not write its shading results to the 'Refractions' AOV. Instead, 'MPW' hair shading results are written directly to the 'DiffuseLighting', 'SpecularLighting', etc. AOV channels.

In order to generate correct 'MPW' AOVs, all 'raw' AOVs are disabled when 'MPW' is enabled. You should use the equivalent compound AOVs instead when MPW is enabled, for example:

  • DiffuseLighting should be used instead of DiffuseFilter * DiffuseLightingRaw
  • Reflections should be used instead of ReflectionsFilter * ReflectionsRaw
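The comp-side consequence can be sketched as follows, with illustrative pixel values: when MPW is enabled, the comp graph must read the compound AOV directly, since the raw AOVs are unavailable:

```python
# Sketch of the comp-side switch: with MPW enabled the raw AOVs are not
# written, so the compound AOV must be used directly. Pixel values are
# illustrative single-channel numbers.

def diffuse_contribution(aovs, mpw_enabled):
    if mpw_enabled:
        return aovs["DiffuseLighting"]  # raw AOVs unavailable under MPW
    return aovs["DiffuseFilter"] * aovs["DiffuseLightingRaw"]

aovs = {"DiffuseFilter": 0.5, "DiffuseLightingRaw": 1.5, "DiffuseLighting": 0.75}
print(diffuse_contribution(aovs, mpw_enabled=False))  # 0.75
print(diffuse_contribution(aovs, mpw_enabled=True))   # 0.75
```

Either path yields the same diffuse contribution for non-MPW pixels; the compound path simply remains valid when MPW disables the raw AOVs.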