During its execution, Redshift prints a multitude of useful messages to your 3d app's script/console window. To avoid clutter, Redshift's default behavior is to print only a subset of all the messages it generates. If you want to view all the messages in the script window, you can set Redshift's verbosity level to "Debug". This option can be found in the System tab.
Apart from the 3d app's script/console window, Redshift stores all messages in log files.
Learning the structure and messages of the log file can be useful when diagnosing rendering errors and warnings such as an aborted render, unrecognized shading nodes, missing textures, etc. It can also help better understand the rendering process and know how (and when) to optimize your renders.
Log File Location
Redshift writes log files to a subdirectory of the log root path which is specified as follows:
Linux and macOS
If the environment variable REDSHIFT_LOCALDATAPATH is not defined, the default location is:
Linux and macOS
Within the Redshift log folder, you'll find a list of folders that might look like this:
Each of these log folders represents a "render session". Assuming Redshift is installed and set to automatically initialize when your 3d app is launched, a new log folder will be generated each time you launch your 3d app. The most recent Redshift session is in the "Log.Latest.0" folder. The other folders will contain previous sessions.
The first part of the folder name is the date in Year-Month-Day form and the second part is the time. So, on the list shown above, the first folder with the name Log.20150904_1236.0 was created on 4 Sept 2015 at 12:36.
The number at the end of the folder name is used when multiple log folders 'conflict' within the same hour-and-minute timestamp. For example, if you open and close your 3d app very quickly, multiple log folders will be created in the same minute. When Redshift detects that, it increments this last number of the folder name, so you'll see folder names ending in .1 or .2 and so on. Such log folder conflicts can also happen when you're rendering with multiple instances of your 3d app, for example when simultaneously rendering multiple frames via multiple instances of Maya.
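The naming scheme described above can be decomposed mechanically. Below is a minimal sketch, assuming folder names follow the Log.YYYYMMDD_HHMM.N pattern (the function name is illustrative, not part of Redshift):

```python
import re
from datetime import datetime

def parse_log_folder(name):
    """Split a Redshift log folder name like 'Log.20150904_1236.0'
    into its creation time and conflict counter."""
    m = re.fullmatch(r"Log\.(\d{8}_\d{4})\.(\d+)", name)
    if m is None:
        return None  # e.g. the special 'Log.Latest.0' folder
    created = datetime.strptime(m.group(1), "%Y%m%d_%H%M")
    return created, int(m.group(2))

print(parse_log_folder("Log.20150904_1236.0"))
# (datetime.datetime(2015, 9, 4, 12, 36), 0)
```

Note that the "Log.Latest.0" folder intentionally falls outside the pattern, since its name carries no timestamp.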
Structure of a typical log file
The first part of the log file prints out info about the Redshift version, the path locations and some basic CPU and GPU information. These messages are printed as part of Redshift's initialization stage.
A quick diagnostic is run on each GPU to measure its PCIe (PCI Express) performance. PCIe (also known as 'the bus') is the computer component that connects your GPU to the rest of the computer, so it can affect rendering performance. A bad motherboard driver or a hardware issue can adversely affect PCIe performance which, in turn, can reduce rendering performance. Ideally, the pinned memory bandwidth should be close to 5 or 6 GB/s or higher.
The last line printed during the initialization stage is also very important! It tells us how much GPU memory (aka "VRAM") Redshift will be able to use while rendering. If you have other 3d apps running at the same time, Redshift will be able to use less VRAM. As we can see here, even though our video card is equipped with 4GB of memory, only 3.2GB of it could be used by Redshift. This is because this computer had other 3d apps as well as the Chrome web browser running, both of which can consume large amounts of VRAM. For more information about VRAM usage, please read this document.
Scene extraction is the process during which Redshift is getting data from your 3d app into its own memory. That data includes your polygons, lights, materials, etc. If the scene has lots of objects or polygons the extraction stage can be long. If that's the case with your scene, we recommend using Redshift proxies.
Redshift also prints out the frame number which can be useful if the log file contains messages from several frames.
Redshift for Maya 2015
Version 1.2.90, Sep 3 2015
Rendering frame 1
Scene extraction time: 0.01s
Rendering – Preparation Stage
This is the first step of Redshift's rendering stage. Redshift does a first-pass processing of the scene's meshes for the ray tracing hierarchy. The ray tracing hierarchy is a data structure used during ray tracing in order to make rendering fast. Therefore, the building of the ray tracing hierarchy is a very important stage! If the scene contains many objects and/or polygons, the ray tracing hierarchy construction can take some time. Certain stages of tessellation/displacement can also happen here.
After the initial ray tracing hierarchy construction, Redshift does a pass over the scene textures and figures out if it needs to convert any of them. Redshift, by default, will convert all textures to its own internal format. If you want to avoid this kind of automatic processing, please use the texture processor and preprocess your textures offline.
For information on the GPU memory allocation, please read the next section.
Rendering – Prepasses
Before Redshift can render the final frame, it has to execute all the prepasses. In this example, it computes the irradiance point cloud.
Other possible prepasses are:
At the start of each prepass, Redshift has to configure the GPU's memory (VRAM). Notice that, in this case, Redshift can use roughly up to 2.8GB of VRAM. You might ask: "hold on, a few paragraphs above you said Redshift could use up to 3.2GB!". The lower figure is due to two reasons:
- Redshift doesn't use 100% of the GPU's free VRAM. By default, it uses 90%, but you can raise or lower that percentage in the Redshift Memory tab.
- Once Redshift initializes, it needs to use some VRAM (a few tens of MBs) for its operation. That memory is not counted here.
After VRAM has been allocated, Redshift prints out how much VRAM it didn't allocate and is now free. In this case it's 336MB. It's important to leave some VRAM free so that the 3d app and the operating system can function without issues.
At the end of each prepass, Redshift prints out how long the prepass took. In this case, the irradiance point cloud took 1.36 seconds.
Irradiance point cloud...
Allocating VRAM for device 0 (Quadro K5000)
Redshift can use up to 2860 MB
Geo/Tex: 2028 MB / 256 MB
NRPR: 262144
Done! (0.03s). CUDA reported free mem: 336 MB
Total num points before: 55 (num new: 55)
Total num points before: 85 (num new: 37)
Total num points before: 144 (num new: 59)
Total num points before: 209 (num new: 65)
Total num points before: 318 (num new: 109)
Total num points before: 442 (num new: 124)
Total num points before: 560 (num new: 118)
Total num points before: 678 (num new: 118)
Total num points before: 799 (num new: 121)
Total irradiance point cloud construction time 1.36s
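Timing lines like the one at the end of the excerpt can be pulled out of a log with a simple scan. The sketch below matches only the "Total irradiance point cloud construction time" message shown above; other prepasses print their own messages and would need their own patterns:

```python
import re

# A small slice of the log excerpt shown above.
LOG_EXCERPT = """\
Done! (0.03s). CUDA reported free mem: 336 MB
Total num points before: 799 (num new: 121)
Total irradiance point cloud construction time 1.36s
"""

def prepass_time_seconds(log_text):
    """Return the irradiance point cloud construction time, if present."""
    m = re.search(r"irradiance point cloud construction time ([\d.]+)s", log_text)
    return float(m.group(1)) if m else None

print(prepass_time_seconds(LOG_EXCERPT))  # 1.36
```

Summing such per-prepass timings across a log file is a quick way to see where a slow frame spends its time.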
Rendering – Final image
This is the final rendering stage where Redshift renders the image block-by-block. Blocks are also known as "buckets".
Notice that Redshift has to reallocate VRAM for this stage too, similar to what it did for the prepasses.
While blocks are rendered, Redshift will print out a line for each block so you can track the general progress. It also prints the time it took to render the block. Depending on the scene complexity, some blocks can take a longer time to render than others.
Like with the prepasses, Redshift finishes by printing out the total time taken to render the final image.
This is a very important part of the log file! It provides various scene statistics such as the number of proxies, meshes, lights, unique polygons, polygons with instancing, etc. Knowing such statistics can be useful when profiling a scene. For example, scenes with many lights can take longer to render than scenes with few lights. Using the statistics, you can quickly find out if the scene contains many lights.
You can also quickly find out how many unique polygons or strand segments (for hair) there are in the scene. Please note that "unique" means "non-instanced". If you're modelling a forest and have the same 1-million-polygon tree instanced 1000 times, you will see something like "1 million unique triangles" and "1 billion total triangles".
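The forest example works out as plain arithmetic; the helper below just restates the unique-versus-total distinction from the statistics described above (the function name is illustrative):

```python
def triangle_stats(unique_triangles, instance_count):
    """Unique (non-instanced) triangles are counted once; the total
    multiplies the unique count by the number of instances."""
    return unique_triangles, unique_triangles * instance_count

# One 1-million-polygon tree instanced 1000 times:
unique, total = triangle_stats(1_000_000, 1000)
print(f"{unique:,} unique triangles, {total:,} total triangles")
# 1,000,000 unique triangles, 1,000,000,000 total triangles
```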
Perhaps the most important data entries are the ones referring to GPU memory.
At the very bottom of the log file we can see the "Ray acceleration and geometry memory breakdown". These figures refer to the polygons and splines (primitives) in our scene. The sum of these figures is how much GPU memory would be required to fit all the primitives without any out-of-core access.
A few lines higher, we can see the "GPU Memory" section. The "uploads" figures in it tell us how much PCIe traffic was required to send the data to the GPU. Those figures might be smaller than the "Ray acceleration and geometry memory breakdown" total. This is because Redshift only sends the GPU data it absolutely needs. In this example, less PCIe traffic was required because a percentage of the polygons wasn't visible to the camera. There are other cases, however, where data might have to be re-sent to the GPU due to out-of-core access (or for other reasons). In such cases, the 'uploads' can potentially be (much) larger than the sum of the 'Ray acceleration and geometry memory breakdown'.
When profiling performance, you should mostly worry about the 'upload' figures, especially if they go into the "many gigabytes" range. This applies to both geometry and texture uploads.
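The comparison described in the last two paragraphs can be summarized as follows. This is a sketch of the reasoning, not a Redshift API; the function and parameter names are invented for illustration:

```python
def classify_uploads(geometry_total_mb, uploads_mb):
    """Compare the 'Ray acceleration and geometry memory breakdown' total
    against the 'uploads' figure from the GPU Memory section.
    (Names are illustrative, not part of Redshift.)"""
    if uploads_mb < geometry_total_mb:
        return "partial upload: some primitives were never needed on the GPU"
    if uploads_mb > geometry_total_mb:
        return "re-uploads: data was re-sent, possibly out-of-core access"
    return "single full upload"

print(classify_uploads(2048, 1500))
# partial upload: some primitives were never needed on the GPU
```

Uploads well into the "many gigabytes" range with a modest geometry total are the pattern to watch for when profiling.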
End Of Frame
When Redshift has finished rendering the frame, it saves the image and prints out how much time the frame took in total. The total time includes extraction, preparation, prepasses and final rendering times.