DRQueue & Blender: Render Farm

February 3, 2012 Update: Heavy duty Work In Progress! I have filled in most of the commands and code needed, but I plan on elaborating with pictures and a diagram of my infrastructure. Also, installation on Ubuntu 11.10 has shown some special nuances that I haven’t added yet. Stay tuned!

Spend enough time on the internet researching render farms and it is inevitable that you will come upon the open source distributed render farm manager, DRQueue. Sounds great! Works with a variety of platforms and renderers, including Maya and Blender. It won’t take you long, however, to find yourself drowning in a sea of confusion and wires.

Contrary to popular belief, DRQueue doesn’t have to be a complicated setup. We’ll go so far here as to set up a render farm with DRQueue and get it on track with Blender 2.61, r43385+. If you don’t know what those numbers mean, don’t worry; we’ll get you up to speed on the power and flexibility of versioning as well.

The concept of a RenderFarm is an interesting implementation of the Client/Server Model. In the RenderFarm scenario, the Clients (here referred to as “slaves”) do the bulk of the heavy lifting and the Server (here referred to as the “Master”) does some lightweight delegation of the renders to the clients via another specialized Client known as the Queue Manager (here referred to as “DrQMan”).

Disclaimer: Perform anything mentioned in this blog at your own risk.


I will be building a Render Farm with a single Master Server and 10 Slave Clients. We are going to assume that they have already been wired up to the same network. The Master Server has the name “rfserver” and each of the slaves is named “render###”, with the number indicating which slave it is.

We will be approaching this in 7 Steps.

1. Prerequisites: Libraries and Environment Variables


Ubuntu 11.04+ on i686 or x86_64 architecture.


sudo mkdir -p /mnt/que/python2.7.2/src 
sudo mkdir -p /mnt/que/python3.2.2/src
sudo mkdir -p /mnt/que/blender-svn
sudo chown -R username /mnt    # replace "username" with your login
sudo chmod -R 777 /mnt


sudo apt-get install vim subversion git-core build-essential g++ gcc pkg-config libgtk2.0-dev libglib2.0-dev libpango1.0-dev gettext libxi-dev libsndfile1-dev libpng12-dev libfftw3-dev libopenexr-dev libopenjpeg-dev libopenal-dev libalut-dev libvorbis-dev libglu1-mesa-dev libsdl1.2-dev libfreetype6-dev libtiff4-dev libavdevice-dev libavformat-dev libavutil-dev libavcodec-dev libjack-dev libswscale-dev libx264-dev libmp3lame-dev python3.2-dev scons libspnav-dev zlib1g-dev libncurses5-dev tcl8.5 tk8.5 swig csh tcsh fping samba smbfs samba-common system-config-samba nfs-kernel-server openssh-client openssh-server libncursesw5-dev libreadline-gplv2-dev libssl-dev libgdbm-dev libc6-dev libsqlite3-dev tk-dev gedit 


sudo gedit /etc/environment

-Add to PATH, before the closing quotation mark, the directory holding the DRQueue binaries (the same path exported as DRQUEUE_BIN below).

-Then add the following lines after the PATH line:

export DRQUEUE_ROOT=/mnt/que/drqueue 
export DRQUEUE_TMP=/mnt/que/drqueue/tmp 
export DRQUEUE_MASTER=rfserver 
export DRQUEUE_BIN=/mnt/que/drqueue/bin 
export DRQUEUE_ETC=/mnt/que/drqueue/etc 
export DRQUEUE_DB=/mnt/que/drqueue/db 
export DRQUEUE_LOGS=/mnt/que/drqueue/logs 
export DISPLAY=:0

-Initiate the environment variables:

source /etc/environment
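A quick sanity check never hurts; re-read the file and list what is actually set (if nothing prints, the export lines haven’t taken effect in this shell yet):

```shell
# Re-read /etc/environment and list the DRQueue variables it defines.
. /etc/environment
env | grep '^DRQUEUE_' || echo "DRQUEUE_* variables not set yet"
```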

Here’s what your /etc/environment file should look like, in theory:

[Screenshot: /etc/environment with the DRQueue variables]


We’ll map the hostnames to their respective IP addresses in the “hosts” file. We will allow our current hostname to remain connected to its loopback address, 127.x.x.x.

The “hosts” file is part of the Name Resolution system and is usually consulted before DNS.

“rfserver” is the hostname of the server, or “master”, of the renderfarm. We’ll make reference to it often.

The following example shows the “hosts” file for a single master Server and ten slave Clients (substitute the IP addresses for your own network; the machine’s own hostname stays on loopback):

/etc/hosts (ex: for the render001 machine):

127.0.0.1       localhost
127.0.1.1       RENDERFARM-render001
<ip-address>    rfserver
<ip-address>    render002
<ip-address>    render003
...
<ip-address>    render010

Here’s what mine looks like for the render005 machine:

[Screenshot: /etc/hosts on the render005 machine]

Yes, it would’ve been easier to put “rfserver” on a later ip address so that I could keep a consistent numbering scheme, but whatever…
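Since fping was installed with the prerequisites, a one-liner can confirm that every entry in the hosts file both resolves and answers (-a prints only the hosts that respond):

```shell
# List the renderfarm nodes that currently answer pings
# (fping exits non-zero when any host is down, hence the || true).
fping -a rfserver $(seq -f 'render%03g' 1 10) || true
```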


At this point we need to just make a few adjustments to make the installations run:

sudo cp /usr/lib/libtk8.5.so.0 /usr/lib/libtk8.5.so
sudo cp /usr/lib/libtcl8.5.so.0 /usr/lib/libtcl8.5.so

2. Render Farm Node Management: OpenSSH

OpenSSH provides us a means of ssh’ing into the Slave Client machines to finish their installations, and to run and update each slave as needed.


cd ~/.ssh 
ssh-keygen -q -t rsa -N '' -f ~/.ssh/id_rsa.render001 
ssh-keygen -q -t rsa -N '' -f ~/.ssh/id_rsa.render002 
ssh-keygen -q -t rsa -N '' -f ~/.ssh/id_rsa.render003 
ssh-keygen -q -t rsa -N '' -f ~/.ssh/id_rsa.render004 
ssh-keygen -q -t rsa -N '' -f ~/.ssh/id_rsa.render005 
ssh-keygen -q -t rsa -N '' -f ~/.ssh/id_rsa.render006 
ssh-keygen -q -t rsa -N '' -f ~/.ssh/id_rsa.render007 
ssh-keygen -q -t rsa -N '' -f ~/.ssh/id_rsa.render008 
ssh-keygen -q -t rsa -N '' -f ~/.ssh/id_rsa.render009 
ssh-keygen -q -t rsa -N '' -f ~/.ssh/id_rsa.render010 
ssh-copy-id -i ./id_rsa.render001 render001@render001 
ssh-copy-id -i ./id_rsa.render002 render002@render002 
ssh-copy-id -i ./id_rsa.render003 render003@render003 
ssh-copy-id -i ./id_rsa.render004 render004@render004 
ssh-copy-id -i ./id_rsa.render005 render005@render005 
ssh-copy-id -i ./id_rsa.render006 render006@render006 
ssh-copy-id -i ./id_rsa.render007 render007@render007 
ssh-copy-id -i ./id_rsa.render008 render008@render008 
ssh-copy-id -i ./id_rsa.render009 render009@render009 
ssh-copy-id -i ./id_rsa.render010 render010@render010 
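Those twenty commands are mechanical enough to collapse into a loop. Here is a sketch assuming the render001..render010 naming used above; failures are reported rather than aborting the loop:

```shell
# Generate one key per slave and push it over; continue past any failures.
mkdir -p ~/.ssh
for i in $(seq 1 10); do
    host=$(printf 'render%03d' "$i")
    ssh-keygen -q -t rsa -N '' -f ~/.ssh/id_rsa."$host"
    ssh-copy-id -i ~/.ssh/id_rsa."$host" "$host@$host" || echo "key copy to $host failed"
done
```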


OpenSSH will use the .ssh/config file to make private key management much easier. Note: the “echo” line below is all one line.

touch ~/.ssh/config 
chmod 600 ~/.ssh/config
echo -e "Host render001\n User render001\n IdentityFile ~/.ssh/id_rsa.render001\n\nHost render002\n User render002\n IdentityFile ~/.ssh/id_rsa.render002\n\nHost render003\n User render003\n IdentityFile ~/.ssh/id_rsa.render003\n\nHost render004\n User render004\n IdentityFile ~/.ssh/id_rsa.render004\n\nHost render005\n User render005\n IdentityFile ~/.ssh/id_rsa.render005\n\nHost render006\n User render006\n IdentityFile ~/.ssh/id_rsa.render006\n\nHost render007\n User render007\n IdentityFile ~/.ssh/id_rsa.render007\n\nHost render008\n User render008\n IdentityFile ~/.ssh/id_rsa.render008\n\nHost render009\n User render009\n IdentityFile ~/.ssh/id_rsa.render009\n\nHost render010\n User render010\n IdentityFile ~/.ssh/id_rsa.render010\n" >> ~/.ssh/config


Biggest thing: do these steps in order!

chmod 700 ~/
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa*
chmod 644 ~/.ssh/*.pub
chmod 644 ~/.ssh/authorized_keys
chmod 644 ~/.ssh/known_hosts
sudo restart ssh


In /etc/ssh/sshd_config:

-Make sure the following lines are uncommented (not preceded by a #):

RSAAuthentication yes
PubkeyAuthentication yes
Banner /etc/issue.net

In /etc/issue.net you can set a personal banner message that appears every time you log in. Or, you can just modify /etc/motd, the post-login message: the Message Of The Day. This is mine:

[Screenshot: my Utah-teapot MOTD]


Test it by logging into any slave:

ssh render####

3. Render Farm Python Language Setup

We’ll be setting up the necessary versions of Python here: Python 3.2.2 and Python 2.7.2. From there, we’ll register both as alternatives for /usr/bin/python and show you how to switch between them, depending on our needs.


cd /mnt/que/python3.2.2/src 
wget http://www.python.org/ftp/python/3.2.2/Python-3.2.2.tgz && tar -xvf Python-3.2.2.tgz 
cd ./Python-3.2.2 
./configure --prefix="/mnt/que/python3.2.2"
sudo make altinstall


cd /mnt/que/python2.7.2/src
wget http://python.org/ftp/python/2.7.2/Python-2.7.2.tgz && tar -xvf Python-2.7.2.tgz
cd ./Python-2.7.2
./configure --prefix="/mnt/que/python2.7.2"
sudo make altinstall


sudo update-alternatives --install /usr/bin/python python /mnt/que/python3.2.2/bin/python3.2 1 


sudo update-alternatives --install /usr/bin/python python /mnt/que/python2.7.2/bin/python2.7 1 


sudo update-alternatives --config python

4. Render Farm Queue Manager: DRQueue


You will need the Python 2.7.2 alternative selected under “update-alternatives”, as described in the previous section.

cd /mnt/que/ 
git clone https://ssl.drqueue.org/git/drqueue.git 
cd ./drqueue/src
sudo scons PREFIX=/mnt/que install


Now, we will configure Drqueue to work with our created system. We already set the proper environment variables back in section 1!

-Master [/mnt/que/drqueue/etc/master.config]:


[Screenshot: master.config]

-Slave [/mnt/que/drqueue/etc/slave.config]:

The “pools” here will be discussed in the last section, where we will learn how to create Jobs & Tasks. For now, leave them as “test1,test2,test3”.

pool=yourpool1, yourpool2, ...

[Screenshot: slave.config]

-Queue Manager (“Client”) [/mnt/que/drqueue/etc/drqman.config]:

And now we configure our Queue Manager, often referred to as the Render Farm Client.


[Screenshot: drqman.config]

5. Render Farm Renderer: Blender 2.61

By far, this step takes the longest. Blender will be the largest build in this process, so I advise using multiple threads when compiling with scons. We want to stay up to date on the latest version of Blender, so we will use Blender’s svn versioning to grab the latest and, hopefully, best of Blender. Pay attention to what architecture you are using (64- or 32-bit); the lscpu command can help you out.


cd /mnt/que/blender-svn
svn co https://svn.blender.org/svnroot/bf-blender/trunk/blender
svn co https://svn.blender.org/svnroot/bf-blender/trunk/lib/linux64 lib/linux64   # for 64-bit machines
svn co https://svn.blender.org/svnroot/bf-blender/trunk/lib/linux lib/linux   # for 32-bit machines
cp ./blender/build_files/scons/config/linux-config.py ./blender/user-config.py 


vim ./blender/user-config.py

—In “user-config.py” file in blender-svn/blender, replace the following:

BF_PYTHON = '/usr/local'
BF_PYTHON_LIB = 'python${BF_PYTHON_VERSION}' #BF_PYTHON+'/lib/python'+BF_PYTHON_VERSION+'/#config/libpython'+BF_PYTHON_VERSION+'.a'
BF_PYTHON_LINKFLAGS = ['-Xlinker', '-export-dynamic']

—with this:

BF_PYTHON_ABI_FLAGS_SUFFIX = "m" # may be any combination of 'dmu' or empty
BF_PYTHON = '/mnt/que/python3.2.2'
BF_PYTHON_LINKFLAGS = ['-Xlinker', '-export-dynamic', '-ltk8.5', '-ltcl8.5', '-lz']

—You also need to replace the following:

BF_FFMPEG = LIBDIR + '/ffmpeg'
if os.path.exists(LCGDIR + '/ffmpeg'):

—With (obviously “linux64” would be “linux” on 32-bit machines):

BF_FFMPEG = '/mnt/que/blender-svn/lib/linux64/ffmpeg'
if os.path.exists('/mnt/que/blender-svn/lib/linux64/ffmpeg'):


The “-j 3” tells scons how many threads to use for the build; in this case, three. If you’re not sure how many cores/threads you have, use the lscpu command.

cd /mnt/que/blender-svn/blender
sudo python scons/scons.py -j 3
ln -s ../install/linux/blender ./blender

6. Render Farm File Server & Client: Samba & smbfs


Let’s collect some information and configure our server. The information we get here will also be used in the Slaves’ Client configurations.

serverusername is the user that you are creating the Samba Server on.


id -u serverusername
id -g serverusername


Go into the /etc/samba/smb.conf file:

sudo vim /etc/samba/smb.conf

Change/create the following lines:

-Under “global” section:

workgroup = RENDERFARM
 server string = %h RenderFarm DrQueue Server

-Under “Authentication” section:

security = user
username map = /etc/samba/smbusers
encrypt passwords = true
map to guest = bad user

-Now, at the bottom, add the following Samba Share (the share name, “masterrenderfarm”, is what the slaves will mount later):

[masterrenderfarm]
path = /mnt/que
comment = DRQUEUE Master Share
valid users = serverusername
guest account = nobody
browsable = no
writable = yes
printable = no
create mask = 777
directory mask = 777
read only = no


The serverusername that we use below will be the user that you use on the master machine to create the Samba Server. NOTE: It must be a user that already exists on the machine. I recommend it be the same user we are using to set this up on the Master Server machine. The serverpassword will be its password. The serverpassword does not have to be the same as the machine user’s password.

sudo smbpasswd -a serverusername
sudo smbpasswd -e serverusername 

Now create/edit the /etc/samba/smbusers file and add the following line:

serverusername = "serverusername"

Restart the Samba Server:

sudo restart smbd
sudo restart nmbd


Fill in the username, password, uid and gid fields below with your appropriate information. Take note: the information pertains to the Server/Master machine, so the username and password are those of the user on the Master Server.

Create Credentials File for Client:


Add the following lines to /etc/fstab:

//rfserver/masterrenderfarm/drqueue/tmp /mnt/que/drqueue/tmp smbfs username=serverusername,password=serveruserpassword,uid=####,gid=#### 0 0

//rfserver/masterrenderfarm/drqueue/logs /mnt/que/drqueue/logs smbfs username=serverusername,password=serveruserpassword,uid=####,gid=#### 0 0

Now mount the Drqueue Shares. The UID & GID we got in the Server section:

sudo mount -t smbfs //rfserver/masterrenderfarm/drqueue/logs /mnt/que/drqueue/logs -o username=serverusername,password=serveruserpassword,uid=####,gid=####
sudo mount -t smbfs //rfserver/masterrenderfarm/drqueue/tmp /mnt/que/drqueue/tmp -o username=serverusername,password=serveruserpassword,uid=####,gid=####

Housekeeping: prevent hanging:

sudo update-rc.d -f umountnfs.sh remove
sudo update-rc.d umountnfs.sh stop 15 0 6 .

If you need to unmount a share from a folder, use umount on the folder upon which the share is mounted. You may need to “sudo” in.

7. Render Farming

A lot more info to come!

Here I’ll discuss actual render jobs and how to go about a typical Blender rendering with DRQueue. The “&” after the commands signifies that the process should run in the background so that you can continue to work in the terminal. Press Enter after the process header comes up and you’ll be back at the prompt.

architecture = i686 or x86_64

Start Master: master.Linux.architecture -l3 -o &

Start Slave: slave.Linux.architecture -l3 -o &

Start Manager Client: drqman.Linux.architecture -l3 -o &

End Clients: killall -9 slave.Linux.architecture

End Master: killall -9 master.Linux.architecture

End Manager Client: killall -9 drqman.Linux.architecture
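Once the per-node commands are familiar, a small wrapper script saves typing. This is only a sketch: farm_start and farm_stop are names I made up, and it assumes x86_64 nodes, the DRQUEUE_BIN variable from step 1, and the passwordless ssh set up in step 2.

```shell
#!/bin/sh
# Hypothetical helper for driving the whole farm from the master.
ARCH=x86_64

# Print the slave hostnames, one per line.
slave_hosts() { seq -f 'render%03g' 1 10; }

farm_start() {
    "$DRQUEUE_BIN/master.Linux.$ARCH" -l3 -o &
    for host in $(slave_hosts); do
        ssh "$host" "$DRQUEUE_BIN/slave.Linux.$ARCH -l3 -o &" \
            || echo "could not start slave on $host"
    done
}

farm_stop() {
    for host in $(slave_hosts); do
        ssh "$host" "killall -9 slave.Linux.$ARCH" || true
    done
    killall -9 "master.Linux.$ARCH"
}
```

Run farm_start before opening drqman to submit jobs, and farm_stop to bring everything down.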


If you can get into the right frame of mind when it comes to designing, building, configuring and executing a render farm before you ever type “sudo apt-get install…”, then you might save yourself a good deal of grief.

So, why do things break down when it comes to setting up our render farms? 5 W’s time:

1. What?

To build a program, the source code for that program needs all of the necessary libraries (external code used at compile/run-time). We’ll try to grab all of these in the first section, Prerequisites.

2. Where?

The source code and built programs need their libraries and configuration files to work. If it cannot find them, then the system fails or produces unexpected results. We combat this through careful organization of directory structures and file paths.

3. Who?

Permissions, in my experience, are often the number one issue for properly compiled programs. We’ll cover our bases on making sure that we have full access when setting up our render farm. The trade-off with permissions is security. I won’t be covering security here.

4. When?

Programs and their libraries are often upgraded rather frequently. Sometimes

5. How?

Blender to PRMan: Chapter 2 [RiSpec or Welcome to Renderman]

January 11, 2012 Update: Work in Progress. Extending RSL information. Continuing to fill in temp pictures until a full overview of spec complete. Currently completing section [RSL ShadeOp: Function Shadeops: Shading & Lighting]


What is Renderman?
Cracking the RiBs
The Ri
Reyes Rendering


What is Renderman?

Renderman is a specification for rendering three-dimensional geometry in the context of a scene. This scene will contain the geometry itself, data necessary for shaders (mini-programs which calculate how a point on the object will look) to shade/alter the object, lights in the scene and all of the data concerning the camera with which to view the scene. Here, we will focus on Renderman Pro Server 16. The RiSpec is 3.2.

Rendering, in its simplest form, is taking all of this scene data in scalar & vector form and running calculations which will create a 2D image. From here, the image can be encoded/transcoded to various formats (still & motion), each with its own specification.

Renderman works to create photorealistic renders with a foundation on the three major cornerstones of rendering:

[Figure: the three cornerstones of rendering]

Comprehensive detail from each of these cornerstones is wrapped up into the Renderman Interface. The Interface and the Renderer each answer a specific question regarding rendering:

What is the desired picture to be rendered? -Renderman Interface
How is the desired picture to be rendered? -Pixar’s Photorealistic Renderman (PRMan) Renderer

There are two main ways that the Renderman Interface can be utilized to render the scene. These methods are represented as “bindings.” There is also a PRMan specific python binding, but I won’t discuss that here.

Method One (RiB Binding): These scene description values can be stored in a file known as the Renderman Interface Bytestream, or RiB. Consider the RiB a “metafile” containing information for the Ri API procedure calls (more later). The renderer here, PhotoRealistic Renderman (or PRMan), takes this RiB file, imports the necessary images (.tex files) and shaders (.slo files) and runs the render via Ri API procedure calls.

Method Two (C Binding): The Modeling Application can directly make the Ri API procedure calls to the renderer, internally invoking the renderer to run the Graphics State and Geometric Primitives. The RiB file is completely bypassed in this approach.

We will use Method One for our Blender to PRMan protocol.

RiB files (.rib) can be binary or ASCII. We will be focusing on ASCII. PRMan’s catrib can translate between the two.

As you can tell from the figure above, the first thing we need to work out is the implementation of RiB creation through Blender, our modeling application.

Here’s the game plan: from inside the python addon module, we will make the calls necessary to create the RiB file and fill those calls with the necessary scene description data from Blender. This will happen upon export or render.

Two questions now.

-What scene description data goes into the RiB file?
-How do we get data from Blender into the RiB file?

This chapter will focus on answering the first question. Before we can know where we will get the data to create the RiB file, we must know what data is required or optional in a RiB.

Cracking the RiBs

Renderman Interface Bytestreams are language-independent files that are full scene descriptions. As mentioned above, they can be in ASCII text format or in binary. They can also be created with or without gzip compression. The binary format is useful for compressing/saving space and when transferring between servers. Obviously, the binary format is not human readable.

The structure of a RiB file is simple:

Frame Begin
–World Begin
—-Attributes Begin
——Geometric Primitives
—-Attributes End
–World End
Frame End

The PreFrame region encapsulates most of the metadata of the RiB file, including user comments.

The FrameBegin to FrameEnd Region contains all of the Graphics State’s information for the Renderer for the given frame and the Geometric Primitives subjected to the Graphics State (more on these later).

Options appear right after FrameBegin and are “frozen” for the given frame once WorldBegin is called.

All Geometric Primitives and their respective Attributes are called in the WorldBegin to WorldEnd Region.
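The skeleton above is easier to see in a concrete file. Here is a minimal ASCII RiB written from the shell; this is just a sketch (a red sphere, no lights or shaders), with the Options sitting between FrameBegin and WorldBegin as described:

```shell
# Write out a bare-bones RiB matching the Frame/World/Attribute nesting.
cat > minimal.rib <<'EOF'
FrameBegin 1
  Display "minimal.tiff" "file" "rgb"
  Projection "perspective" "fov" [30]
  WorldBegin
    AttributeBegin
      Color [1 0 0]
      Translate 0 0 5
      Sphere 1 -1 1 360
    AttributeEnd
  WorldEnd
FrameEnd
EOF
# prman minimal.rib   # render it (assumes PRMan's prman is on PATH)
```

Note how the Color and Translate attributes appear before the Sphere they apply to.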

A bit of a “gotcha” in Renderman is that you define an object’s attributes (like color, transform, etc.) BEFORE you define the actual geometry of the object. This is actually a good case-in-point for how the RiB is read into the renderer. If you aren’t a programmer, think of it as working “outside-in”: the renderer accumulates state from the outermost blocks inward, so that by the time it reaches the geometry it has gathered all of the information needed to render the object. This is in line with the Imaging Pipeline discussed in the Ri section on Options.

An extremely powerful aspect of RiBs is that of RiB Archives. In short, you don’t need to keep ALL of the scene description in a single RiB. A single object alone might have a polygon count upwards of tens of millions, so why clutter your main RiB with points, making it hard to locate other important information? These heavy objects can be saved in their own RiB file and injected into their specific areas of the main RiB with a single “ReadArchive” command. That’s nifty.

This represents the very basics of the RiB file format. How to fill out a RiB file will become a lot clearer once we understand the Ri Structure. In the mean time, let’s discuss the files external to the RiB that are also necessary for the renderer’s success.

Two external file types accompany the RiB family:

-Shaders
-Textures


Shaders are miniature programs (or, more accurately, self-standing routines/functions). These are the “plug-ins” of the RiB file/API C-program. Shaders are most known for their use in surface shading; dictating how a surface responds to the lights around it. However, there are technically eight different shader types:

➀-Surface: These shaders compute the color and opacity of an object’s surface. You would be familiar with the computations/algorithms used in these shaders if you have heard of Lambert, Blinn, Phong, or Cook-Torrance shading.
➁-Light: Lights in modeling are essentially light shaders. They are called from the other shaders (namely, surface & volume) to determine values associated with lights (i.e. intensity, distance, color, etc.).
➂-Volume: Volume shaders work as interior or exterior shaders. Interior shaders shade the interior of a transparent object (translucency is a factor of surface shading; this is subsurface scattering). Exterior volumetric shading attenuates raytraced reflected/refracted light.
➃-Atmosphere: Atmospheric shaders shade the area between the object’s point and the camera. Consider it the “volume” between the object and camera.
➄-Displacement: Displacement shaders move the actual geometric points of the object. Bump Mapping moves the surface normals along a single axis. Normal Mapping displaces the surface normals in three-dimensional (3 axes) space.
➅-Deformation: Deformation is broader than Displacement. It alters the entire surface of a geometric primitive.
➆-Imager: These shaders perform filtering and compositing on the rendered 2D raster image.
➇-Projection: A shader which maps the camera space to the screen space.

Shaders don’t work in a vacuum. Often, they rely on the values of other shaders. For instance, the surface shaders often rely on the values of the light shaders and volumetric shading often needs the color from surface shaders.

All shaders for Renderman are written in Renderman Shading Language (RSL) in ‘.sl’ files. For PRMan, these shaders are compiled to ‘.slo’ files. PRMan’s shader executable can compile shaders written in RSL.

Shaders often come with a list of user-set parameters called the parameter list. These are settings passed into the Renderman Interface via Ri Procedures; they can be values for things like surface color, displacement amount, mapping value, etc.
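To make the parameter-list idea concrete, here is a tiny RSL surface shader with a single parameter, written and compiled from the shell (a sketch; “simple” and “tint” are names I chose):

```shell
# A constant-color surface shader; "tint" is its one parameter.
cat > simple.sl <<'EOF'
surface simple(color tint = color(1, 0, 0))
{
    Oi = Os;
    Ci = Os * Cs * tint;
}
EOF
# shader simple.sl   # PRMan's shader compiler emits simple.slo
```

A RiB would then bind it with something like `Surface "simple" "tint" [0 1 0]`, overriding the default in the parameter list.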

A fully comprehensive look at shaders can be found in the RSL section below.


Textures are images. Textures are maps. In CGI, “textures” often refer to color maps for surface shading, but textures can also serve as maps for displacement (Displacement, Bump & Normal Maps), shadows (Shadow & Deep Shadow Maps) and as caches (Irradiance Cache). While I placed textures alongside shaders, textures are technically a subgroup of files under shaders. Shaders essentially import these files for use in their calculations. Hence, the term maps/mapping: they map a value to the given point being shaded, and the shader takes this value and performs its calculations based on the type of shader it is. Mapping a 2D texture onto a point of the object in the first place is done via parameterization (“UV mapping”).

Textures are obviously created from 2D images. But, for Renderman to render correctly, these textures need to be optimized for use. The main optimization is the creation of the image as a MIP-map image. “Multum in parvo”, MIP, means “much in little.” Essentially, this means creating subsequent copies of the image at 1/4 the resolution area of the previous copy. For PRMan, the executable txmake is used. The extension is typically .tex for texture images, .env for environment maps and .shd for shadow maps. However, this is just good practice and not a strict rule.

A note on texture images, whether you use Renderman or not: Texture images should follow the “powers of 2” Rule:

-Square image (Width = Height) unless an environment/reflection map.
-Dimensions at a power of 2 (2, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, …).
-If tiling for OpenEXR, 32×32.
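The power-of-two rule is cheap to script before batch-running txmake over a texture directory; a number n is a power of two exactly when n & (n-1) is zero:

```shell
# Return success only for positive powers of two.
is_pow2() {
    n=$1
    [ "$n" -gt 0 ] && [ $(( n & (n - 1) )) -eq 0 ]
}

is_pow2 1024 && echo "1024: ok"
is_pow2 1000 || echo "1000: not a power of two"
# txmake wood.tif wood.tex   # then MIP-map the image for PRMan
```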

Point Clouds & Brick Maps

Wait. These aren’t listed as the two file types used with RiB. But they are used. Point Clouds are pretty much radiance caches used as a lookup cache for shaders’ computations. Brick Maps offer the 3D version of Textures. Much more on these later…

The Ri

Finally, we move to the meat of the matter. The Renderman Interface lays out the entire specification on which the RiB stands. The name tells all. The Renderman Interface is just that. An Interface. It is a means of communicating between the modeling program with all of its objects, animation, lights, etc. and the renderer. In this case, it will be the interface layer between Blender and PRMan. And this layer will be passed between the two via RiB.

The Renderman Interface maintains what is known as the Graphics State and its Geometric Primitives. These two classes encapsulate all of the Renderman Interface. We’ll begin with the Graphics State.

Graphics State

The Graphics State consists of all parameters needed to render the given frame in the context of a sequence of primitives (read: Geometric Primitives). The Graphics State is composed of three elements:

-Interface Modes
-Options
-Attributes

These three elements compose the major sections of a RiB.

I mentioned earlier that we make “Calls” to the Renderman Interface. From here on, we will refer to these as Ri API Procedures. They make up the C Binding of the Renderman Interface. In code, you will recognize them by RiSomeCallGoesHere(). These procedures create, enter, alter, or exit from the three elements described above. I’ll make mention of them in this text so that you can see the parallels between the RiB binding and the C binding. There are 128 non-deprecated Ri Procedures in the Renderman Interface.

There are 5 main Ri Procedure types which affect the Graphics State:

➀. Geometric Primitives Procedures
➁. (Interface) Mode Procedures
➂. Options Procedures
➃. Attributes Procedures
➄. Maintenance Procedures (Take care of texture mapping and various other elements of bookkeeping the Graphics State)


The Interface Mode of the Renderman Interface is the head honcho that oversees the context of all other Options/Attributes. For those with programming experience: the Interface Mode manages the Graphics State’s stack, making Options analogous to global variables and Attributes analogous to local variables. Typically, the Interface Mode defines the change in scope that you’ll see in a RiB file. The scope keeps current states regarding the Graphics State. In the C Binding, the Interface Mode has 15 Ri Procedures.

Interface Mode Ri Procedures {RiB Binding}:
RiDeclare {Declare name class type}
RiFrameBegin {FrameBegin int}
RiFrameEnd {FrameEnd}
RiWorldBegin {WorldBegin}
RiWorldEnd {WorldEnd}


Options control the scene’s global rendering parameters. Once you get to the inner levels of the Graphics State (in World), these cannot change. They affect the parameters which are applied to all objects in the scene and are independent of the local parameters which are affecting those individual objects.

Options are composed of three sections:

-The Camera
-The Display
-The Run-Time Renderer Controls
The Camera is the first major section of Options in the Graphics State. The Camera settings are as follows.

-Resolution (Horizontal/Vertical Resolution, Frame Aspect Ratio & Pixel Aspect Ratio)
-Windows (Crop Window, Screen Window & Clipping Planes[near & far])
-Coordinate System Transformations (World->Camera Transformation, Camera Projection [Camera Coordinates to Screen Coordinates])
-Camera (F/Stop, Focal Length, Focal Distance, & Shutter Open/Close)

Essentially, almost all of these parameters are for setting up the coordinate system transformations which ultimately lead to what is and what isn’t seen in the final rendered image.

One area of rendering that at times may seem esoteric and confusing is the area of coordinates. Coordinate systems can seem perplexing but a clear view of coordinate systems and the transformations between them can open up a world of enlightenment for fixing rendering issues. There are six coordinate systems or “Spaces” used in Renderman:

➀-World Space: Non-Perspective. Global coordinates of scene.
➁-Object Space: Non-Perspective. Local coordinates of object in scene. “Origin” of object.
➂-Shader Space: Non-Perspective. Local coordinates of a point on an object in a scene. Normal = z, Binormal = y, Tangent = x. The binormal and tangent are dependent upon the current respective (u,v) coordinates. This makes texture mapping possible.
➃-Camera Space: Perspective. Coordinates relate to location of camera in World Space, z corresponds to depth in the frustum (0: near clipping, 1: far clipping).
➄-Screen Space: Perspective. x & y correspond to pixel locations.
➅-Raster Space: Perspective. 2D coordinates of final pixel width x pixel height dimensions.

Perspective spaces are relative to the Camera coordinates. Non-Perspective are independent of the Camera.

A closer look at the imaging transformations might help reveal a good deal more about the inner workings of coordinate systems in Renderman:

To get from one coordinate system to another coordinate system, Transformations are used. Matrices dictate how the Transformations calculations are carried out (Transformations themselves are discussed in more detail in Attributes). The parameters set in the camera directly change the matrices’ values.

For Imaging (bringing the object into the raster image) the imaging transformations go from inside the circle to the outer Raster ring. We’ll talk about Imaging in the Display section.

For setting the Camera, the reverse takes place. The Raster coordinates of the image (file or framebuffer) are set depending on the parameters requested. From here, a Screen Transformation is set in the current transformation matrix. This is the matrix that is currently in scope while creating the RiB. A projection matrix is appended to this matrix to go from Screen to Camera and from Screen to Raster. Then the current transformation matrix is changed to the camera transformation, which dictates transformations to and from the camera coordinate system. Once camera coordinates are established, all future transformations move the world coordinates relative to the camera coordinates. In other words, the Camera doesn’t move; the World moves. When the “world” begins, the world coordinates are established from this.
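Written out (with matrix names of my own choosing), the chain the paragraph above describes is just a sequence of matrix multiplications applied to a homogeneous point:

```latex
p_{\text{camera}} = M_{\text{world}\to\text{camera}}\, p_{\text{world}}, \qquad
p_{\text{screen}} = M_{\text{proj}}\, p_{\text{camera}}, \qquad
p_{\text{raster}} = M_{\text{screen}\to\text{raster}}\, p_{\text{screen}}
```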

Other than coordinate systems, the camera settings dictate Motion Blur & Depth of Field.


The Display is the second major section of Options in the Graphics State.

All renderers should have the capability to produce Color, Opacity and Depth values for a given image. The Display dictates how the color, alpha and depth images are converted into a displayable form. The Display outputs to two forms: An Image File or the Framebuffer of a display device.

Display runs through a particular Imaging Pipeline. We touched on this pipeline briefly in the Camera section. Coming out of the Hidden Surface Algorithm (more on this later), a color Image is available. But there is much work yet to be accomplished. This is where the Display section of the Graphics State kicks in. Color, Opacity and Depth each are processed in separate sections of the pipeline before coming together for the final image.

Often, the color image coming out of the Hidden Surface Algorithm is at a much higher sample rate than the resolution of the Raster space. The color values are first filtered with a selected Filter and then Sampled. The sampled color image then moves on to the Exposure level, where the gain and gamma of the pixel values are adjusted. At this point an Imager Shader can be implemented to further process the image before it is quantized with the Color Quantizer (reduced to integer values and dithered). Depth values are essentially Screen space z-values (remember, left-handed, so from 0-1.0) and usually just go through an optional Imager Shader and the Depth Quantizer. Alpha essentially uses the same Pipeline as Color.
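A rough sketch of the exposure and quantization stages. The exposure formula follows the Ri spec, color' = (color * gain)^(1/gamma); the quantizer is a simplified scale-dither-round-clamp, and the parameter names are illustrative, not the Ri API:

```python
import random

def expose(value, gain=1.0, gamma=1.0):
    """Exposure stage: color' = (color * gain)^(1/gamma)."""
    return (value * gain) ** (1.0 / gamma)

def quantize(value, one=255, mn=0, mx=255, dither=0.5):
    """Simplified quantizer: scale a float to integer range, add a
    dither offset, round, and clamp to [mn, mx]."""
    q = round(value * one + dither * (2 * random.random() - 1))
    return max(mn, min(mx, q))

# Mid-grey run through a 2.2 gamma, then quantized to 8 bits:
v = expose(0.5, gain=1.0, gamma=2.2)
print(quantize(v, dither=0.0))   # 186, the familiar 8-bit mid-grey
```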

A thing to note: Color values (RGB), Opacity (Alpha) and Depth (Z-Values) can be stored in separate channels in an image. Other custom display channels can be created and used in Display.

A thorough knowledge of digital image processing helps to clear portions of this section up if you are left with any confusion.

Run-Time Renderer Controls

The Run-Time Renderer Controls is the third major section of Options in the Graphics State.

These Renderer Controls control three aspects of the renderer:

➀-Hiders
➁-Operations
➂-Performance


To get a good understanding of the Renderman renderer controls and many Attributes which follow, it helps to understand REYES. We’ll cover that sometime…

We’ve mentioned the Hidden-Surface Algorithms. These are known as Hiders. Hiders are essential in determining what objects/surfaces should be considered in the given section of the rendering pipeline and which should be discarded. They are Renderman’s version of Hidden Surface Determination. This is necessary to determine surfaces that are “hidden” from a certain viewpoint. There are eight Hiders which assist in selectively rendering objects in the context of the scene sequence:

➀-Hidden (aka Stochastic)

Operations control how the renderer (PRMan, in this case) performs the render. This is where a background in REYES is super handy.

-Bucket Size: Determines pixel x pixel size of buckets to be used in the Reyes rendering.
-Grid Size: Determines grid size in micropolygon count in dicing.
-Arbitrary Bucket Order: Dictates scanline order of bucket to bucket processing (default: left to right, top to bottom)
-Ray Tracing
-Visible Point (VP) Options
-Opacity Threshold
-Opacity Culling
-Shadow Maps
-Deep Texture Compression
-Texture Filtering
-RiB Output
-RiB Authoring
-Hair Length

Performance controls directly affect the performance vs. quality tug-of-war in the renderer.

-Search Paths
-Directory Mapping

Options Ri Procedures

There are 26 Options Ri Procedures.

Options Ri Procedures:



Render Controls:


While Options are global, Attributes are considered local. They are dependent upon their assigned geometric primitives and those alone. They can be altered throughout the course of the Graphics State. Two special Attributes, Transformations & Motion, will be discussed later.

The entirety of renderman Attributes is composed of two major Attribute types:

➀-Shading
➁-Geometry

Individual renderers may have their own implementation specific Attributes and these can be assigned via RiAttribute.


Shading Attributes define the current shading states in the Graphics State. The Graphics State maintains a current color, current opacity, current surface shader, current atmosphere shader, current interior/exterior volume shader, current list of light sources, current area light and current displacement shader. This means that the only shaders which affect the Shading Attributes are Surface Shaders, Atmosphere Shaders, Volume Shaders, Light Shaders and Displacement Shaders.

The Shading Attributes can be broken down into four subgroups:

➀-Shaders
➁-Texture Mapping
➂-Lights (also a Shader, but discussed separately)
➃-Renderer Options


The Shading Attributes handle five of the major Shaders.

The color & opacity of a geometric primitive’s surface can be dictated by direct calls to Color & Opacity or via the Surface Shader call. Displacement Shaders alter the geometric primitive’s points before the lighting stage. Volume Shaders define the Interior & Exterior volumetric properties of the geometric primitives. Atmosphere Shaders are defined along with volume as well. Light Shaders are handled by the Shading Attributes as well, but we will cover them later in Light Sources.

Co-Shaders are Shaders which are defined in the interface but not called directly. They are often called by other Shaders.

The Surface, Interior Volume and Atmosphere Shaders can be run as Visible Point Shaders. This means that the shaders can be run after all “visible points” have been determined (see REYES). This can eliminate some motion blur issues on volumes.

Geometric objects can also be used as Mattes. When given a Matte Attribute, an object will hide whatever it visibly covers up and will leave a transparent hole where it was in the scene.

In addition to the Renderer Options for Shading Attributes, there are some special Attributes that exist for Volume Shading specifically. We won’t discuss those here.

Texture Mapping:

Now would be a good time to discuss texture mapping. Texture Mapping is the process that gets a 2D Image Texture (as described in the Textures section) mapped onto the coordinates of a 3D geometric object (aka geometric primitive). Each geometric primitive owns a set of surface parameters (u,v) which correspond to its parametric surface (x,y,z). Texture coordinates on the 2D Texture (s,t) have a mapping to these (u,v) coordinates. It is important to understand that while the default maps (s,t) & (u,v) as the same values, this doesn’t have to be the case. Many people refer to renderman’s (s,t) coordinates as its version of (u,v) coordinates. As we can see from above, this isn’t entirely true. We define these mappings by assigning (s,t) coordinates to the corners of (u,v) space. They are set with RiTextureCoordinates.
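A sketch of what that corner mapping amounts to, assuming the four (s,t) pairs given to RiTextureCoordinates sit at the parameter-space corners (u,v) = (0,0), (1,0), (0,1), (1,1) and are bilinearly interpolated in between. This is a Python illustration, not the Ri call itself:

```python
def st_from_uv(u, v, corners):
    """Bilinearly interpolate (s,t) at (u,v) from four corner values
    ((s1,t1),(s2,t2),(s3,t3),(s4,t4)) placed at (u,v) = (0,0), (1,0),
    (0,1), (1,1) respectively."""
    (s1, t1), (s2, t2), (s3, t3), (s4, t4) = corners
    s = (1-u)*(1-v)*s1 + u*(1-v)*s2 + (1-u)*v*s3 + u*v*s4
    t = (1-u)*(1-v)*t1 + u*(1-v)*t2 + (1-u)*v*t3 + u*v*t4
    return s, t

# Default mapping: (s,t) comes out equal to (u,v)...
default = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(st_from_uv(0.25, 0.75, default))   # (0.25, 0.75)

# ...but it doesn't have to: here t runs opposite to v.
flipped = [(0, 1), (1, 1), (0, 0), (1, 0)]
print(st_from_uv(0.25, 0.75, flipped))   # (0.25, 0.25)
```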

(Diagram: RiTextureCoordinates mapping of (s,t) to the (u,v) corners)

Light Sources:

Lights illuminate surfaces. Lights in renderman are technically Shaders which are accessed from other Shaders. The Graphics State begins with no light sources in its current light source list. There are two main light source types in the Ri’s light source list: current light source & current area light source. The Renderman Interface comes with four light source types for current light sources:

➀-Ambient
➁-Distant
➂-Point
➃-Spotlight


The current area light source is a single area light defined by the geometric primitives included in its Attribute’s definition.

Renderer Options:

Renderer Options for Shading Attributes essentially set certain parameters for the renderer. These are not Options in the hierarchical sense because they are dependent on the given context of the geometric primitive’s Shading Attributes.

Displacement Bounds: Dictates bounding boxes for primitives to account for displacement of surface points.
Shading Rate: Measured in pixel area. Defines frequency of primitive’s shading.
Shading Interpolation: “Constant” shading (aka Flat) or “Smooth” shading.
Derivatives & Normals: Defines how Derivatives & Normals are calculated to reduce artifacts.
Ray Tracing: Sets Ray Tracing controls for any Ray Tracing that is used in the renderer
Irradiance: Sets Irradiance controls for any Irradiance that is used in the renderer
Photon: Sets Photon Mapping controls for any Photon Mapping that is used in the renderer
Shading Strategy: Defaults to using “Grid” strategy of shading. Volume VP shading handled by separate calls now.
Shading Hit Mode: Defines what shaders are actually executed when shading points generated by the renderer are “hit.”
Motion Factor: Increases shading rate for objects that Motion Blur along a larger space
Focus Factor: Adjusts Dicing Rate for blurred objects from Depth of Field.
Shading Frequency: How often the object goes through shading through a duration of the object’s Motion Blur.


Geometry makes up the second portion of Attributes.

Geometry Attributes definitely pop the hood on the rendering engine. They expose the inner workings on a technical level regarding the rendering system that can go much deeper than that required for most of the Shading Attributes. Warning: It will be ridiculously easy to glaze over here, but I encourage you to pull through and get a firm understanding on the following concepts. If you do, it will open a world of rendering opportunities in your work (and may even get you work!).


The Graphics State maintains a current bound which specifies the bounding box for the current object primitives. This bounding box is critical to the rendering engine’s work. It defines the boundaries for the subsequent primitives in the Attribute’s section. Any primitives outside the bounding box are clipped or culled.

Level of Detail (AKA “LOD”):

The Graphics State also maintains what is known as detail. Detail, in the case of Geometry Attributes, defines whether a primitive is “drawn.” The detail is the area of the object’s bounding box when projected into Raster Space. If the detail area is within a specified detail range, then that primitive is drawn. So, why is it called detail? Well, if the range given only allows a primitive to be drawn if under a certain detail amount, then only a “low detail” version will ever be drawn. Likewise, if the range given only allows a primitive to be drawn if over a certain detail amount, then only a “high detail” version will ever be drawn. It helps if you think about it as a filter.
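That filtering idea can be sketched as follows, patterned after the four values RiDetailRange takes (minvisible, lowertransition, uppertransition, maxvisible). The ramped “importance” value here is an illustration of the filter, not PRMan’s exact behavior:

```python
def lod_importance(detail, minvisible, lowertransition, uppertransition, maxvisible):
    """How 'important' (0..1) a representation is at a given detail,
    where detail is the bounding box area projected into Raster Space.
    Outside [minvisible, maxvisible] the representation is not drawn;
    inside the transition zones it fades in or out."""
    if detail <= minvisible or detail >= maxvisible:
        return 0.0
    if detail < lowertransition:                      # fading in
        return (detail - minvisible) / (lowertransition - minvisible)
    if detail > uppertransition:                      # fading out
        return (maxvisible - detail) / (maxvisible - uppertransition)
    return 1.0                                        # fully drawn

# A hypothetical "low detail" version, drawn only at small on-screen areas:
print(lod_importance(50,  0, 0, 100, 200))   # 1.0 -> drawn
print(lod_importance(500, 0, 0, 100, 200))   # 0.0 -> filtered out
```

A “high detail” version would use the mirror-image range, so the two representations cross-fade as the object grows on screen.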


Geometric primitives have an orientation. Just like the coordinate spaces down to the Camera Space are a left-handed system, the primitives can have their own “handedness” defined by Transformations. The coordinate system implemented by the primitives affects how the normals on the surface of the object are calculated. If the handedness of the primitive is reversed, then the normal will point in the opposite direction. This will change whether the surface is outside or inside and facing the viewpoint or hidden. This directly affects Shading, Culling and Solids.


Objects can have 1 side or 2 sides. If the object is one-sided, its outside surface is front-facing when facing the viewer and back-facing when facing away from the viewer. Only its outside is visible. If the object is two-sided, both sides are visible. Simple.


The Visibility of a primitive to certain aspects of the renderer can be defined: visibility to the camera, diffuse rays, specular rays, transmission/shadows, photon mapping and a special attribute known as mid-point visibility (shadow receiving but not shadow casting objects).


Culling removes points from being shaded. Backfacing & Hidden surfaces can be forced into shading by turning off their respective culling attributes. This is useful for point cloud baking, occlusion and indirect illumination.

Dicing Strategy:

The Dicing Rate (see REYES) is determined based on the screen space coordinates of a primitive’s area projected onto either a plane or sphere. One of two reference cameras can be used for the determined dicing strategy: World Camera & Frame Camera. There are some special restrictions on setting up these cameras and using them that are not described here.

Strategies for off-screen primitives can be defined for dicing as well. The original strategy is to never split offscreen objects and clamp their dicing rates. New strategies exist that can treat out of viewing frustum objects with the spherical dicing rate strategy touched on above, or a good middle ground strategy which splits the out of viewing frustum objects less and reduces dicing rate based on distance from view.

The raster metric used for dicing can be the standard screen oriented raster metric or an unoriented raster metric. The unoriented metric can be useful for primitives which shouldn’t change dicing rate when the camera position moves.

Other dicing strategy Attributes can be defined for surface patches and curves. The lowest levels of surface patches can be diced into grids in powers of 2 to help eliminate patch cracking.

Attribute Ri Procedures

Attributes have 26 Ri Procedures.

Attribute Ri Procedures:

Shading Procedures:



Transformations are a special breed of Attributes (1 of 3). We don’t put them in Attributes because they also directly alter the Graphics State. The Interface Mode will often dictate a main current transformation and these Transformation attributes alter the coordinate systems or transform points between them. Transformations have 14 Ri Procedures.

Transformation Ri Procedures:


Motion is the second type of special Attributes (2 of 3). Motion is created from two things:

➀-Moving Transformations

➁-Moving Geometric Primitives

Motion provides us with two things as well: Motion Blur & Temporal Anti-aliasing.

Motion has 2 Ri Procedures. The Procedures on this page with a ⎈ symbol can appear within the RiMotionBegin-End block.

Motion Ri Procedures:


Resource is the third special Attribute (3 of 3). It basically encapsulates or “saves” a current part of the Graphics State and can be “restored” at a later time. Resource can restore the following five subsets: shading, transform, geometrymodification, geometrydefinition, & hiding. They are not subject to any scope rules. Resource has 3 Ri Procedures.

Resource Ri Procedures:

Geometric Primitives

Whew. The previous sections of The Ri lay out the Graphics State of the Ri Scene Description. While the Graphics State defines how your scene and everything within will be rendered, the Geometric Primitives supply the what.

Renderman supports polygons, bilinear and bicubic patches, non-uniform rational B-spline patches, quadric surfaces, and retained objects for its geometric primitives.

Ri NOTE: Renderman Graphics Environment

A few points need to be addressed considering the environment in which the Graphics State resides.

-Coordinate System: left-handed. There are 6 coordinate systems used in Renderman: object, shader, world, camera, screen, raster.

-Transformation: Transformation procedures work in the given coordinate system that is set.

-Cameras: Cameras are not objects in Renderman. Transformation procedures are called before the World begins that define all the parameters of the camera.

-Lights: As mentioned briefly in the shaders section, lights are turned into Light Shaders in the Renderman Interface. The positioning is set in the parameterlist of the shader.

Reyes – Render Everything You Ever Saw.

So, what does PRMan do with all of this information in the Renderman Interface passed to it via a RiB??

Well. It renders it.

And it does so through the Reyes (Render Everything You Ever Saw) set of algorithms.

The original REYES paradigm has been enhanced with extended algorithms defining the use of buckets, additional culling attributes and selective dicing and shading options. A high-level view of REYES looks like the following:


Hopefully this section will help bring all of the Ri into perspective.



The geometric primitives have their bounding boxes calculated in Camera Space. This bounding box is then calculated into Screen Space. The bounding box is a volume enclosing all of the primitive. Those geometric primitives that do not have any of their bounding box inside the camera’s viewing space (frustum for perspective, rectangle for orthographic) are culled, removed from the renderer for the current frame. Cull-testing then eliminates those primitives that are back-facing (this can be turned off).

Displacement bounds increase the bounding box by a specified “padding” amount for all primitives which have displacement shading. If the bounds are not accounted for, then displacement may leave holes in objects whose vertices have moved out of bounds and were not properly rendered. Displacement bounds can leave a lot of primitives hanging around waiting for their respective bucket (read: increase in rendering time). It’s best to make the displacement bounding box as tight as possible.

Bounding boxes also need to cover the entire motion (motion blur) that the object goes through in the frame. Depth of Field needs to be considered for bounding as well.


The 2D image space, Raster Space, is then divided into equal sized “buckets.” The buckets are measured in pixels (width x height). The primitives then go through splitting. Splitting cuts the primitives up to a small enough size to be placed into the buckets where they belong. The resulting primitives are then placed back at the top of the loop. These sub-primitives go through the Bounding phase to determine if they are in the viewing volume and, if so, to which bucket they belong. Splitting continues until all primitives are small enough, designated to a particular bucket, and no back-facing/outside viewing volume primitives remain. Occlusion culling can keep track of primitives which are depth-sorted in a bucket and cull those primitives that fall behind fully opaque areas in the bucket. This keeps hidden primitives from going through the expensive Dicing/Shading/Hiding phases.

Default bucket sizes are usually 16×16 pixels.

Eye Splits:

A unique case which pops up in REYES is that of eye splits. The near clipping plane of the camera’s viewing volume exists for its name’s sake: it is there to clip all objects that fall before the near or beyond the far clipping plane. REYES, however, doesn’t use typical “clipping” algorithms, because clipping doesn’t leave primitives that are cleanly cut for dicing; REYES uses its usual splitting for this. For primitives to be set up for dicing and shading they must be projectable. They must also be projectable for determining bounding. That means that the primitive must lie completely forward of the eye plane (imagine the point where the viewing frustum comes to a point).

If a primitive lies entirely before the clipping plane, that’s easy: it’s culled, even if part of it lies before the eye plane. If a primitive lies both before and after the clipping plane, but not before the eye plane, then it’s split and carried on through the renderer. However, what if a primitive spans from before the eye plane all the way forward of the clipping plane? Well, that means that the forward part of the primitive needs to be carried through, but part of the primitive needs to be culled. Since part of the primitive lies before the eye plane, a proper bounding on the primitive cannot be calculated. Thus, the renderer has to shoot splits in the dark, hoping to eventually classify a section of good vs. bad primitives which are carried through or culled.

The number of attempts made at these eye splits is a predetermined value. For instance, if you set eye splits to 6, you are splitting the primitive up to 2^6 or 64 times, hoping that you’ll create a split in the “safety zone” between the eye plane and clipping plane, from where you can discard the back parts of the primitive.
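The arithmetic behind that bound: each split attempt can halve every surviving piece, so the attempt count limits the piece count exponentially:

```python
def max_eye_split_pieces(max_eye_splits):
    """Upper bound on pieces after n eye-split attempts: each attempt
    can halve every remaining piece, giving at most 2**n pieces."""
    return 2 ** max_eye_splits

print(max_eye_split_pieces(6))   # 64, as in the example above
```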

The shaded area represents the area where primitives are not projectable. “A”, although not projectable, does not lie forward of the near clipping plane, so we discard it altogether. “B” has parts before the near clipping plane, but it is all projectable so we can easily split it and save what we need. We cannot fully discern how to split “C” to find out what to keep or discard, so we have to, as smartly as possible, split what we do have and hope we get a split which lies between the eye line and clipping plane. If we do, we can just discard everything which comes before the split line and continue splitting the rest of “C”, treating it like we treated “B.” If splitting fails, then the whole of “C” is discarded, leaving a transparent hole where it was.



Dicing dices up the remaining primitives into a tessellation of quadrilateral facets that are tiny bilinear patches. These facets are known as micropolygons and are usually about 1 square pixel in area. The vertices of these micropolygon “grids” are what go through shading.

The size (micropolygon count) of the grids is dictated by the shading rate. The bucket area divided by the shading rate gives the grid size per bucket. Default grid sizes are usually 256 micropolygons.


The shading attributes given are now calculated for the grid vertices. There is a specific order of operations for shading:

➀-Displacement Shading
➁-Surface Shading
➂-Light Shading (run as a co-routine the first time called, then cached for later calls)
➃-Volume Shading
➄-Atmosphere Shading

After shading, each vertex on the grid has at least a color and opacity value. The results of shading, other than displacement, have a relatively small effect on the Reyes Engine. Transparency can slightly increase rendering time, because fewer primitives are culled when a fully opaque opacity might not be reached on a sample. Other than that, the Renderman Shading Language (RSL) dictates the actual operations which are performed in this section. We will discuss that in the RSL section and maybe even dispel some confusion as to how Reyes also can perform some Ray Tracing (which is a technique altogether different).

The default shading rate is 1.0.

It is important to note that anti-aliasing is a separate process from the Dicing & Shading.



The micropolygon grids are then “busted” into individual micropolygons.

Hiding & Sampling/Filtering:

These micropolygons are bound- and cull-tested. They are checked to see if their Bounding is still on-screen (displacement shading can move primitives off-screen). The remaining micropolygon primitives are then cull-tested to keep front-facing primitives (optional).

The bounding then tests the micropolygon primitives to determine in which pixel they belong. Once the micropolygon has been sorted into its appropriate pixel, a predetermined number of “samples” over the area of the pixel is tested to see which samples overlap the micropolygon primitive. Each of these pixel samples is recorded as a visible point, which is a depth-sorted list of color and opacity values. How these visible points are recorded depends on the shading interpolation method chosen (i.e. “smooth” or “flat”). These visible point lists are then resolved to final pixel values. This means that they are composited and filtered to be computed into the final pixels. Once the visible point list for a bucket is resolved, the dicing/shading/busting/hiding for the next bucket is performed in the bucket scanline order (can be altered with arbitrary bucket order).
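Resolving a depth-sorted visible point list is essentially front-to-back “over” compositing. A minimal sketch, not PRMan’s actual resolver; the entries are hypothetical (depth, color, alpha) tuples with a single color channel:

```python
def resolve_visible_points(vp_list):
    """Composite a visible point list into a final (color, alpha) by
    front-to-back 'over' accumulation, nearest sample first."""
    vp_list = sorted(vp_list)               # depth-sort: nearest first
    color, alpha = 0.0, 0.0
    for _, c, a in vp_list:
        color += (1.0 - alpha) * c * a      # light surviving the nearer layers
        alpha += (1.0 - alpha) * a
        if alpha >= 1.0:                    # fully opaque: deeper samples hidden
            break
    return color, alpha

# A half-transparent white surface in front of an opaque black one:
print(resolve_visible_points([(2.0, 0.0, 1.0), (1.0, 1.0, 0.5)]))  # (0.5, 1.0)
```

The early-out on full opacity is the same observation that lets occlusion culling discard primitives behind opaque areas earlier in the pipeline.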

Bucketing Note: Bucketing allows visible point lists to be sampled and resolved on a per-bucket basis, thus eliminating the gargantuan amount of memory that visible point lists can consume. The image-wide database of visible point lists is replaced by a much more compact per-bucket inventory of high-level primitives.


grid size = (bucket size X * bucket size Y) / shading rate

micropolygons per pixel = 1/shading rate
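Plugging in the defaults quoted in this section (16×16 buckets, shading rate 1.0):

```python
def grid_size(bucket_x, bucket_y, shading_rate):
    """Grid size in micropolygons for one bucket."""
    return (bucket_x * bucket_y) / shading_rate

def micropolygons_per_pixel(shading_rate):
    """Micropolygon density implied by the shading rate."""
    return 1.0 / shading_rate

print(grid_size(16, 16, 1.0))           # 256.0 -> the default grid size
print(micropolygons_per_pixel(1.0))     # 1.0 micropolygon per pixel
```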

RSL – Renderman Shading Language

A major section of the Reyes Pipeline is that of Shading, which occurs after dicing. The RSL opens up this section of the renderer and exposes the actual computations that are performed on each vertex of each grid.

As you dive deeper into the abyss, I encourage you to remember the “Big Point”; the reason for Shading:

Big Point: The ultimate goal of the entire Shading Pipeline of a renderer is to produce a color at a specific point in an image.

That’s it. All of the opacity manipulation, lighting, object modeling, etc etc etc. is to eventually create the desired mix of red, green & blue at the desired pixel.

Get on. Let’s go.

Shading in RSL 2.0 follows a very specific pipeline:

1. Call Displacement Methods in Surface Shader if any
2. Execute Displacement Shader
3. Call Opacity Method in Surface Shader if any
4. Execute Surface Shader
5. Execute Interior, Exterior then Atmosphere Shaders
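The ordering can be sketched as a tiny dispatcher. This is a simplification: it skips the in-surface-shader displacement and opacity method calls (steps 1 and 3 above) and just runs whichever shaders are present, in order:

```python
def run_shading_pipeline(displacement=None, surface=None,
                         interior=None, exterior=None, atmosphere=None):
    """Run the supplied shader callables in RSL pipeline order and
    return the names of the stages actually executed (illustration only)."""
    stages = [("displacement", displacement), ("surface", surface),
              ("interior", interior), ("exterior", exterior),
              ("atmosphere", atmosphere)]
    executed = []
    for name, shader in stages:
        if shader is not None:
            shader()                 # a real renderer passes grid state here
            executed.append(name)
    return executed

print(run_shading_pipeline(surface=lambda: None, atmosphere=lambda: None))
# ['surface', 'atmosphere']
```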

We will now dissect each of the Shader Types and their methods. We’ll classify each shader in the following fashion:

1. Mechanics: What the shader does and of what it is constructed.
2. Function: How the shader works.
3. Goal: Why the shader exists.

Shader Types



The Surface Shaders define the way the point on a surface, P, responds to the environment (Lights & Objects) surrounding it. The PRMan renderer gets this point, P, from the preceding Reyes technique. (Real quick: if the point is discovered with Reyes and ray tracing is commenced via the shaders, does Renderman truly offer full ray tracing? We’ll see. All in time).

Color & Opacity are usually attached to what are known as “light rays.” These are essentially vectors. They are not normalized at the start. At some point, I might do a vector math tutorial.

Two major “rays” drive the surface shader. These are the I-ray and the L-ray. The I-ray, or incidence ray, comes from E, the “eye location.” A more advanced definition might be the entrance pupil of the imaging lens mechanism at which all incoming rays converge. The L-ray points from P in the direction of a given light, object, etc. which will drive the incoming light coming onto P. The I-ray contains the Color & Opacity values that we are seeking. Let’s see that:



The function of these shaders takes predefined input variables and delivers result variables to the renderer. Most of the predefined variables are read-only, but a few are read/write including the result variables, of course.

Some of the values are known as “derivatives.” Remember what a derivative is? It’s the measurement that describes how much the function (read: method. read again: Shader) changes due to the change of some input variable. [Ever heard of differentiation? That’s just a fancy word for the way you find the value of this so-called derivative]. The binormal & tangent derivatives, (dPdu,dPdv), are technically geometric values in the Surface Shader.

So let’s see how the Surface Shader function is set up:

(Diagram: Surface Shader function inputs and outputs)

For derivatives concerning position: The actual change of P’s position in each direction is P(u+du)=P+dPdu*du and P(v+dv)=P+dPdv*dv.
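That formula is just a first-order (tangent-line) step. A quick numerical check on a circle parameterized by u; this is an illustration in Python, not shader code:

```python
import math

def move_along_u(P, dPdu, du):
    """P(u+du) ~= P + dPdu*du, applied component-wise."""
    return tuple(p + d * du for p, d in zip(P, dPdu))

# Unit circle: P(u) = (cos u, sin u), so dPdu = (-sin u, cos u).
u, du = 0.0, 0.01
P = (math.cos(u), math.sin(u))
dPdu = (-math.sin(u), math.cos(u))
approx = move_along_u(P, dPdu, du)
exact = (math.cos(u + du), math.sin(u + du))
print(approx)   # (1.0, 0.01)
print(exact)    # very close: the error shrinks as du does
```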


The goal of the Surface Shader is to compute the Color & Opacity of the -I-ray, the accumulation of light coming back from the surface along the I-ray.



The Light Source Shader defines the Opacity, Ol, and color of light, Cl, coming from the light’s origin point, P, to a point in space, Ps, along the L-ray. The value of the Color is also the intensity of the light. If this doesn’t make much sense, I recommend googling colorimetry, radiometry and photometry. For now, just trust that the Cl value is color and intensity inclusive.

The L-ray, light ray, represents the vector which points to the point in space, Ps, in question. The geometric related variables, other than Ps, define the Light Source itself and not any other geometric primitives.



The function has relatively few variables that it needs to juggle around compared to the Surface Shader. Remember, the geometric variables other than Ps are in relation to the light source itself and not the geometry being shaded. Through this, the light source can be independent or attached to a geometric primitive.

(Diagram: Light Source Shader function inputs and outputs)


The goal of the Light Source Shader is to define the amount & color of light and its direction.



The Volume Shaders attenuate/alter the Color & Opacity of the I-ray coming from the ray’s origin point, P. The input variables which drive the Volume Shader are the same variables which it outputs. In this respect, the mechanics of the Volume Shader work very much like a filter. It’s important to know that the Volume Shader is agnostic to its own location and the location of primitives in space. It is just fed the I-ray and its Color & Opacity from the renderer.

Also, the Volume Shader includes all volumetric shading: Interior, Exterior and Atmosphere.



The function of the Volume Shader can be seen as a filter. The output is in the same form as the input and can be “transparent” both in the literal sense and in the sense of signal filtration.

(Diagram: Volume Shader function inputs and outputs)


The goal of a Volume Shader is to attenuate the light coming into the camera to simulate the volumetric qualities of the space or objects between the light ray’s origin and its final destination at the camera.



The Displacement Shader alters the geometric variables of the vertex position, P, the surface normal, N, and/or the displacement of P across time, dPdtime.



The function of the Displacement Shader seems pretty similar to the Surface Shader. The inputs are slightly different as no ray Color & Opacity values are used, nor the L-ray. Output is purely restricted to changing geometric properties.

(Diagram: Displacement Shader function inputs and outputs)


The goal of the Displacement Shader is to alter the perceived location of the surface (Normal) of an object or the actual location (Point) of the point. It takes the displacement across time into account for motion blur consideration.



The Imager Shader provides access to post-processing, in Screen Space, of the Color & Opacity values produced from the combination of all other shaders.



Keep in mind that the geometric properties here are now in reference to the Screen Window and the pixels it generates. Like the Volume Shader, the Imager Shader also acts as a filter and, in many respects, is a digital image processing filter.

(Diagram: Imager Shader function inputs and outputs)


The goal of the Imager Shader is to provide further processing to the Color & Opacity of a pixel before the Reyes Algorithm leaves the Shading stage to move on to the next stage.

Shading Functions…AKA ShadeOps

Shading Types and their input/output variables show us what they need and what they provide. They do not show us how they process the information nor what goes on inside the Shader itself (as diagrammed as a circle in the function sections above).

This is where shadeops come in. Shader Operations equals ShadeOps. The five main Block Statement Constructs of Renderman Shading provide the foundation for retrieving, processing and returning the necessary values/variables discussed in the previous section.


Construct Shadeops

The Construct Shadeops are block statements; they “loop” through their functions and you define what they bring/send back.


Gather shoots a given number of rays from the shaded point, P, in the direction of vector dir. The number of rays that are shot are called samples and they are shot within a given cone angle from the point. The sample rays can return with values relating to the surface point that the ray intersects or we can simply use the values of the rays themselves.

Gather utilizes Ray Tracing to shoot its sample rays. To “shoot a ray” for sampling means to not only create the ray, but also calculate the values for it to return.

(Diagram: the gather() shadeop shooting sample rays from P)

Gather can perform computations for when the sample ray hits a surface and when a sample ray misses and hits nothing.

There are two categories of Gather. Each designates the intent of the information gathered from the samples:

➀-illuminance: for gathering illuminance-related data regarding the samples. The rays are created and shot to return values concerning the shading of the point that the ray intersects.
➁-samplepattern: for gathering informational data regarding the samples. Does not perform ray tracing, but delivers information about the rays set up for possible shooting for the shader. Since the rays aren’t actually fired, they are considered “missed” and the computations assigned for missed rays are performed.


There are a number of parameters available for fine-tuning the Gather shadeop, and they often go with little or no explanation. I’ll try to cover all of them here.

Pixar PRMan Renderman Shader shadeop Gather parameters

First of all, we start with…yep. A point, P, on the surface. The grey rectangular area patch represents the samplebase. It is the area from which the jittered ray samples have their origins. It defaults to the size of the micropolygon, so it fits perfectly into the micropolygon area in the picture.

A bias actually pushes the origin of all sample rays slightly off of the surface so we don’t run the risk of self-intersection. In other words, we don’t want a ray to accidentally hit the same point from which it originates.
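To make that concrete, here’s a quick plain-Python sketch (an illustration only, not RSL or PRMan’s actual implementation; the helper name is mine) of nudging a sample ray’s origin along the normal:

```python
# Illustration only (plain Python, not RSL): offsetting a sample ray's
# origin along the unit surface normal so the ray can't immediately
# re-intersect the very point it started from.

def biased_origin(P, N, bias):
    """Return point P pushed off the surface along unit normal N."""
    return tuple(p + bias * n for p, n in zip(P, N))

P = (1.0, 2.0, 0.0)    # shaded surface point
N = (0.0, 0.0, 1.0)    # unit surface normal
origin = biased_origin(P, N, 0.01)   # ray now starts just above the surface
```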

A max distance sets the maximum distance the sample ray is allowed to go before returning missed. This is infinity by default.

An opacity threshold can make the ray continue to go forward collecting hit surfaces until the Opacity, Oi, of the accumulated intersections reaches the threshold. An opacity hit threshold can determine whether a surface point has even been hit by the sample ray depending on whether or not the point has passed this threshold.

Other objects’ surfaces in the scene can be selectively turned on or off to the visibility of the sample rays by setting a subset which defines what points may or may not be hit by the sample rays. Furthermore, a hitsides parameter can dictate whether the sample rays can hit the front side, back side or both on a one-sided surface point.

Not all sample rays are necessarily created equally. Well, uniform distribution would make the weight of each ray the same, but cosine distribution of the samples shot out in Gather would weigh the value of each ray against the cosine of the angle the sample ray makes with the center directional ray or surface normal.  This is analogous to the cosine of the angles between vectors in classic shading models like Lambertian Shading.
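Here’s a hedged plain-Python sketch of that weighting idea (the helper and vector names are mine, not RSL):

```python
import math

# Illustration only (plain Python, not RSL): uniform weighting counts
# every sample ray the same, while cosine weighting scales each ray by
# the cosine of its angle with the central direction (here, the normal).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine_weight(ray_dir, normal):
    """Weight of a unit sample ray against a unit normal (clamped at 0)."""
    return max(0.0, dot(ray_dir, normal))

N = (0.0, 0.0, 1.0)
aligned = (0.0, 0.0, 1.0)                                     # 0 degrees
angled = (0.0, math.sin(math.pi / 3), math.cos(math.pi / 3))  # 60 degrees
grazing = (1.0, 0.0, 0.0)                                     # 90 degrees
```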

Uniform vs Cosine Distribution PRMan Pixar Renderman Shading

Illuminance Gathered Data

The sample rays for illuminance gathering can bring back the output variables for Surface, Volume and Displacement Shaders. It is important to know that these values are retrieved by first executing the shaders on the point intersected by the sample ray. These values are labeled as shadertypes output parameters.

The sample rays for illuminance gathering can also bring back the input variables for the Surface, Volume and Displacement shaders that are available before the shader’s execution. These are known as Gather’s primitives output parameters.

The third data type available to Gather is Attributes assigned to the intersected point. Yes, Attributes as in Graphics State Attributes. These are the attribute output parameters.

Samplepattern Gathered Data

The rays in this category are created, but do not bring back information about the points they intersect. The information created gives us each ray’s origin, direction and length.

These are Gather’s ray output parameters.


The second block statement construct shadeop is Illuminance. Compared to the Gather shadeop, Illuminance should feel like a cake-walk.

Pixar PRMan Renderman Shader shadeop Illuminance

The Illuminance shadeop takes the integral of (combines) all of the light sources which appear in the three-dimensional cone created by an input axis (typically the surface normal), an apex at position P, and an angle which defines the width of the cone. Essentially, it obtains the L-rays and the Cl values of those L-rays, making them available for further computations. An angle of PI/2 would sample a hemisphere, PI would be all-encompassing and 0 would be an infinitely small ray along the given axis.
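A plain-Python sketch of that cone test (my own helper names, not RSL) might look like this:

```python
import math

# Illustration only (plain Python, not RSL): a light is "seen" by the
# Illuminance cone when the direction from P to the light lies within
# the cone's half-angle of the axis. PI/2 covers the hemisphere above P,
# PI covers everything, and 0 collapses to a single ray along the axis.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    m = math.sqrt(dot(v, v))
    return tuple(x / m for x in v)

def in_cone(P, light_pos, axis, angle):
    """True if light_pos falls inside the cone (apex P, given axis/angle)."""
    L = normalize(tuple(lp - p for lp, p in zip(light_pos, P)))
    return dot(L, normalize(axis)) >= math.cos(angle)

P = (0.0, 0.0, 0.0)
N = (0.0, 0.0, 1.0)          # cone axis, e.g. the surface normal
overhead = (0.0, 0.0, 5.0)   # light directly above P
behind = (0.0, 0.0, -5.0)    # light behind the surface
```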

Notice that light sources outside of the defined integral space/cone are not included.

Additionally, you can define which light sources are allowed to be included in the integral. How the integral is computed is left up to the shader writer.

Take note that the Light Source’s own direction and spread can play into whether or not it is included in the integration.


Illuminate creates a Light Source and its Color & Opacity (Cl & Ol). The parameters you pass to it create the position, P, of the light source. Optionally, you can also create its direction vector, axis, and its coverage cone, dictated by an angle from the given axis.

The L-ray in the illuminate shadeop is the same L-ray going to the surface point being shaded.

Pixar PRMan Renderman Shader Illuminate Shadeop

A quick note on Ol, the L-ray’s Opacity. It is deprecated. That means, for the most part, you’ll never have to worry about it or figure out its universal meaning.


The Solar shadeop is simple in design, but its mechanics can be a bit tricky to fully understand. It’s important to realize that solar doesn’t have a position. Rather, its position is at infinity. The two parameters that you can input are a directional vector and an angle. This can be confusing when you throw in the fact that you can define a cone like in the previous shadeops. So, if we do not specify a position, or apex, of a cone, what does the cone represent?

Pixar PRMan Renderman Shading shadeop Solar

When the angle is not specified (or zero), the Solar shadeop is treated like a lightdome completely surrounding the point being shaded (See the greyed rays above). When the angle is specified, the cone signifies that the Solar Light Source is only coming from the given directions in the cone. Think about it for a second: a single, infinitely small ray signifying the Solar Light Source is easily blocked. What if we want some flexibility? We increase the possible directions that the source could come from. Still confused? Imagine that the cone creates a disc sized light source at a distant infinity.

It is important to realize that the parameters for Solar are for the direction and cone of the Light Source. The visualization above might appear as if the cone angle is dependent on the surface point being shaded; it is not.

The Solar shadeop does dictate its Color, Cl, as well.

Something interesting to note: “wrapping” occurs when the angle is so large that objects which normally wouldn’t be shaded by a single point light at infinity are in fact seen and shaded.

Pixar PRMan Renderman Shading shadeop Solar

There will be Case Study posts that will go up shortly after this Chapter is finished. They will further explain situations like these.


So, what if a Light Source Shader contains neither an Illuminate nor a Solar shadeop? Then no light rays are generated by the shader. Unless…you create an Ambience shadeop. Now, what would you do with Ambience? Set a Cl value, I suppose. With global illumination techniques available in large numbers, this isn’t the most-grabbed tool in the shed nowadays.

Function Shadeops

These shadeops directly return a specific value of a specific type. In theory, you could perform most of the Shading & Lighting functions with the Construct Shadeops, but the internal code for the Function Shadeops can provide finer control or better efficiency for the very specific task for which the shadeop was created.

There are four main types of Function Shadeops:

➀-Shading & Lighting
➁-Texture Mapping
➃-General Type



[Returns Color]

Gives the ambient color value of the shaded surface point. The point must be lit by an ambient Light Source. (see the “Ambience” shadeop)


[Returns Color]

Caustics are the result of a specular light ray reflecting off of a surface onto a diffuse surface, OR the result of a specular light ray refracting through a surface and landing on a diffuse surface.

Pixar PRMan Renderman Shading shaderop Caustic

Caustic needs two phases.  The first is to create a caustic photon map (.cpm) of the scene.  This would be all of the points, P, in the illustration above.  Then the shadeop Caustic can read the position and normal for the respective point in the photon map to shade the diffused points.

Caustic uses ray tracing to fire the rays from the light source to its final destination.


[Returns Color]

Diffuse wraps up the Illuminance shadeop in a form which produces a Lambertian shading model of the point being shaded.

Pixar PRMan Renderman Shading shadeop Diffuse Lambertian

The “^” above the Normal and the L-ray signify that these vectors are “normalized.”  I’ll post a vector math post sometime in the future.

The Diffuse shadeop loops through a hemispherical region above the shaded point, P, to find all of the Light Sources. For each Light Source, it calculates the dot product between the L-ray and the Normal, a value between 0.0 and 1.0. This is multiplied by the Cl of the L-ray and gives the Color for the shaded point. This Color is often used for Ci.
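The per-light math boils down to a couple of lines. A plain-Python sketch (not RSL; names are mine), assuming N and L are already normalized:

```python
# Illustration only (plain Python, not RSL): the Lambertian term the
# Diffuse shadeop computes per light. N and L are assumed to be unit
# ("normalized") vectors; the dot product is clamped at zero.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert(N, L, Cl):
    """One light's diffuse contribution: max(0, N.L) scaled into Cl."""
    k = max(0.0, dot(N, L))
    return tuple(k * c for c in Cl)

N = (0.0, 0.0, 1.0)
Cl = (1.0, 0.5, 0.25)
overhead = (0.0, 0.0, 1.0)   # light straight above: full contribution
grazing = (1.0, 0.0, 0.0)    # light at 90 degrees: no contribution
```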

Notice that the viewing angle, created by the I-ray, does not affect how the point is shaded.  This is an important property of Lambertian shading.

Indirectdiffuse & Occlusion:

[Returns Color]

Consider Indirectdiffuse & Occlusion as specialized Gather shadeops.  They shoot sample rays into a hemispherical region above the surface, centered around the surface Normal defined for it.

Indirectdiffuse returns the diffused shading of all the points that the samples hit.  This shadeop can be run in a ray tracing mode or point based mode.  Point Based is a shading technique we haven’t described yet.

Let me stop here for an important note about sample rays in general, for shadeops which require them.  It is most efficient to use a number of sample rays equal to 4 times a squared integer (4n²). Examples: 4, 16, 36, 64, 100, 144, 196, …
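A quick helper (my own, not part of PRMan) spelling out that 4n² rule:

```python
# A quick helper (my own, not part of PRMan) generating the efficient
# sample-ray counts mentioned above: 4 times a squared integer, 4*n^2.

def efficient_sample_counts(n_max):
    """Recommended sample counts 4 * n^2 for n = 1..n_max."""
    return [4 * n * n for n in range(1, n_max + 1)]
```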


[Returns Color]

Phong is a type of Specular Light Model.  Unlike the Diffuse Light Model, the Specular Light Model is very much dependent upon the viewing angle, I.

Pixar PRMan Renderman Shading shadeop Phong Specular

Phong specifically takes the surface Normal, N, and uses it to create a normalized Reflection Ray, the R-ray, from the I-ray.  The Dot Product of the R-ray and each L-ray is taken in an Illuminance shadeop loop.  If the Dot Product is greater than 0.0, it is raised to a power; otherwise, no Color is returned.  This power exponent dictates the fall-off, or size, of the specular highlight.  How does it do this?

Well, the Dot Product of two normalized vectors will be between 0.0 – 1.0.  A number in this range raised to an exponent yields interesting results.  If the exponent is positive and < 1.0, it raises the Dot Product above itself, approaching 1.0 as the exponent approaches 0.0.  If the exponent is positive and > 1.0, it lowers the Dot Product, and the Dot Product approaches 0.0 as the exponent climbs.
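A tiny numeric check of that exponent behavior, using a dot product of 0.5 (two normalized vectors 60 degrees apart):

```python
# Tiny numeric check of the exponent behavior: a value between 0.0 and
# 1.0 is lifted by exponents below 1.0 and pushed down by exponents
# above 1.0.

d = 0.5
raised = d ** 0.5    # exponent < 1.0 lifts the value toward 1.0
lowered = d ** 8.0   # exponent > 1.0 pushes the value toward 0.0
```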

Pixar PRMan Renderman Exponent Power Phong Specular

The final output number calculated from all of this is multiplied by the Cl from the L-ray.  And this provides the return Color with which you can shade your surface point, if you so desire.
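Putting the pieces together, here is a plain-Python sketch of the Phong term for one light (an illustration under my own naming, not PRMan’s code):

```python
# Illustration only (plain Python, not PRMan's code): the Phong specular
# term for a single light. I points from the eye toward P, N and L are
# unit vectors, and `size` is the highlight fall-off exponent.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(I, N):
    """Reflect direction I about unit normal N: I - 2(N.I)N."""
    k = 2.0 * dot(N, I)
    return tuple(i - k * n for i, n in zip(I, N))

def phong(I, N, L, Cl, size):
    """One light's specular contribution; black when R.L <= 0."""
    R = reflect(I, N)
    d = dot(R, L)
    if d <= 0.0:
        return (0.0, 0.0, 0.0)
    return tuple((d ** size) * c for c in Cl)

# A light sitting exactly on the mirror direction gets the full Cl back.
mirror = phong((0.0, 0.0, -1.0), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0),
               (1.0, 1.0, 1.0), 10)
```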

It may take some time to let the exponent section sink in.  That’s ok.


[Returns Color; Float]

SpecularBRDF utilizes the half-vector method of finding the specular highlight.  If a view angle, I, is a perfect reflection of a light source, L, then the angle of incidence equals the angle of reflection.  In this case, the Normal vector is halfway between the I-ray and the L-ray, which gives a full value of the specular highlight reflected.  We create a half-vector which sits halfway between the I-ray and the L-ray.  If it is dead-on with the surface Normal, N, then we have full reflection.  As the half-vector deviates from N, the value of the specular highlight reflected decreases.

Pixar PRMan Renderman Shading shadeop Specular BRDF

Some new vector math here: to get the half-vector, we add the normalized L-ray to the normalized I-ray, then normalize the result before using it.
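A quick plain-Python sketch of that construction (my own names, not RSL; assuming both input vectors are normalized and point away from the surface point):

```python
import math

# Illustration only (plain Python, not RSL) of the half-vector
# construction: add two unit vectors, then normalize the sum.

def normalize(v):
    m = math.sqrt(sum(x * x for x in v))
    return tuple(x / m for x in v)

def half_vector(L, I):
    """Normalized half-vector between unit vectors L and I."""
    return normalize(tuple(l + i for l, i in zip(L, I)))

# Perfect-reflection geometry: L and I mirrored about the z axis, so the
# half-vector should line up with the surface normal (0, 0, 1).
L = normalize((1.0, 0.0, 1.0))
I = normalize((-1.0, 0.0, 1.0))
H = half_vector(L, I)
```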

The SpecularBRDF includes a parameter similar to Phong’s size.  It is known as roughness.  Roughness is essentially the inverse of size in how it is computed.  As roughness goes up, the final power exponent for the Dot Product approaches zero.


[Returns Color]

Specular essentially uses the SpecularBRDF shadeop and runs it in an Illuminance shadeop loop for each L-ray.

Pixar PRMan Renderman Shading shadeop Specular Classic


To be continued…


[Returns Color; Float]

Trace shoots a ray tracing ray.  It shoots from the current shaded point, P, in the direction of a specified vector, dir.   It returns the Color of the surface point that it hits while traveling.

Pixar PRMan Renderman Shading shadeop Trace Ray Tracing

As you may have noticed, Trace can also return a Float value.  This value is the distance of the ray to the surface that it hit.


[Returns Color]

Transmission returns the amount of transmission allowed between a source point, Psrc, and a destination point, Pdst.  Transmission is the complement of opacity [ Transmission = 1 − Opacity ].  The result of Transmission can be multiplied by the Light Source Color, Cl, to determine how much light makes it to a certain surface point if there are transparent/translucent items in the way.
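As a hedged sketch (plain Python, not PRMan’s implementation, and assuming each surface along the ray passes 1 minus its opacity), the accumulated transmission is just a running product:

```python
# Illustration only (plain Python, not PRMan's implementation), assuming
# each surface between Psrc and Pdst passes (1 - opacity) of the light:
# the surviving fraction is a running product of per-surface transmissions.

def transmission(opacities):
    """Fraction of light surviving a chain of semi-transparent surfaces."""
    t = 1.0
    for o in opacities:
        t *= 1.0 - o
    return t

def attenuated(Cl, opacities):
    """Light color Cl scaled by the transmission along the ray."""
    t = transmission(opacities)
    return tuple(t * c for c in Cl)
```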

Pixar PRMan Renderman Shading shadeop Transmission

Where Psrc & Pdst go is entirely up to you.  You can even specify a cone angle and a number of samples to fire.  Why would you do this?

If you set Psrc up as the surface point and the Pdst as the Light Source: by specifying a cone sample you effectively create soft shadows on the surface point because this makes the Light Source a type of area light, in a sense.

Also, this all looks oddly similar to the Solar shadeop, minus the transmission gathering.




Blender to Pixar’s PRMan: Chapter I [Destination]

The goal is simple:

Create a way to fully utilize Pixar’s Photorealistic Renderman (PRMan) with the open source application Blender.

The “way” should provide the full Renderman Specs, including the ability to fully & manually control the RIB created for rendering, and have full access to PRMan’s shading capabilities.

The “way” I have decided to do this is to implement it as an Add-On Render Engine utilizing Blender’s Python API.  A good, thorough understanding of the following items will be necessary to pull this off:

-Blender’s Internal Structure (DNA/RNA/Python API)


-RiSpec (Renderman Interface Spec)/RSL (Renderman Shading Language)

-A version of Blender and PRMan (obtained honestly)

The extremely high level view will look like this:


I’ll first begin by taking the next step deeper into the goals of the RenderEngine Add-On itself.  Once we know what we need to accomplish, we can better determine what we need to learn about how to accomplish it.


Constantly Revised...

Film Amount:

    • Diameter of Spool/Reel of Film:

D = 2 * sqrt{ [C + F] / π }
D: Diameter of Spool + Core
F (Film Area) = Length of Film * Thickness of Film
C (Core or Barrel Area) = π * (Radius of Core)²
π = 3.14…
Notes: Basically you’re just adding the area of the film plus the area of the core and finding the diameter of the combined area. ( AREA OF A CIRCLE = πr² ) Motion Picture Acetate based film thickness is around .006″ (152.4 μm).
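To sanity-check the formula, here’s a small Python version with an assumed (purely illustrative) 1000 ft roll of 0.006 in acetate film wound on a core of 1 in radius:

```python
import math

# Worked check of the spool-diameter formula above. All lengths are in
# inches; the roll length and core radius are illustrative assumptions.

def spool_diameter(film_length, film_thickness, core_radius):
    """D = 2 * sqrt((C + F) / pi), with F = length * thickness, C = pi*r^2."""
    F = film_length * film_thickness      # film cross-section area
    C = math.pi * core_radius ** 2        # core (barrel) area
    return 2.0 * math.sqrt((C + F) / math.pi)

D = spool_diameter(1000.0 * 12.0, 0.006, 1.0)  # 1000 ft in inches; ~9.8 in
```

Note that with zero film the formula collapses to the bare core diameter, which is a handy way to check the algebra.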

Motion Picture Film Technology [Volume I: Prologue] : Take a seat

Motion Picture Film. Pretty self-explanatory and a revelation in itself as to how it works. There are plenty of sites devoted to the art of crafting a story through these moving images. Yet, when you begin to search for the science behind the form, you are given a pretty mess of disheveled facts with no apparent cohesion. And why should you? To understand all you could about motion picture film technology would require several lifetimes unwrapping a complete understanding of chemistry, physics, mechanics, optics, signal processing, computer engineering, programming; we’re only just starting to discuss the elementary avenues involved.
That’s what this blog will attempt to do: to provide a comprehensive look at the technology of motion picture film-making.
Why should you understand as many technological angles as you can in the creation of screened stories?
You might shouldn’t. You might should just make a damn film.
But also ask why film? Why should you tell your story on film, as opposed to writing a novel, poem, song?
If you can, then buy a pen, some paper and -if a song- the instruments you need. Go tell your story in the form of a play. Create a giant art installation piece of your favorite fruit and attach it to the side of a building.
Do not waste your time telling your story on film if you haven’t the guts to face what is required of you to pull it off.
Let’s continue to filter.
If you’ve got endless cash to throw at your problems, stop reading.
No? If you’ve got all the right friends in all the right places who love you enough to do an awesome job for no monetary compensation, stop reading.
No? If papa pays your rent and bought your car…stop reading.
If you’re broke, stop reading.
All right. Now that I’m here by myself writing to myself, I can continue.
Let’s make pretty pictures.

Finally…we begin

So. It took long enough to get started, didn’t it? Let me make this brief and unwrap the purpose of this blog.
This blog serves to inform the public on the various avenues of digital and analog imaging with particular respect to motion picture film.
At the moment, I am assembling a team of artists, technicians and enthusiasts to begin working on the currently titled film, History and Tragedy of Doctor John Faustus. I’ll attempt to chronicle the technical explorations of post-production techniques in this blog as well….
Here goes!

Antigone Teaser Trailer

The teaser trailer to Antigone is up!

The film premieres at Anderson University later this month.


Antigone Film Stills!

Antigone, which screens at Anderson University January 27-30, now has some film stills up online!

The film follows the events leading to the climax of Sophocles’ Oedipus trilogy.  The film will be presented with the Theatrical Presentation of Antigone at Anderson University, SC  in late January.   The play is directed by my beautiful (!) wife Phyllis Jackson.  She has assembled an absolutely all star cast:

Elyse LeRoy will be playing Antigone with Marcus Salley hard opposite as Creon.  Steven Bailey, Lizzie Porcari, Robbie McCracken, Cassie Burton, Zach Bryant and Audrey Reed bring together a cast that is simply too powerful to miss.

The cast of the play will also be appearing in the film, following the series of events which lead to the unexpected climax.  It will change you and it will challenge you.

Let yourself be moved.

Film stills: Reuben Slife; Cody Moore; Elyse LeRoy; Cody Moore and Vic Aviles; Jessie Davis; Steven Bailey.



Antigone screens at Anderson University January 27-30!

Antigone to Screen at Anderson University!

Joshua Jackson Anderson University Antigone  

Antigone, the last segment in Sophocles’ Oedipus series, is just a couple of weeks from being wrapped. The film will be screening at Anderson University with the Theatrical Presentation for an expected audience of 550 people over the course of 4 days in late January!
I’m stoked about this film and really hope that it “catches on.”
The film has been building anticipation in the area with the support of many of Anderson’s local businesses and institutions. McGee’s Irish Pub, Dead Horse Productions, LLC and Anderson University’s Theatre Department have provided us with many fantastic shooting locations.

The film’s Principal Photography is very near completion, having run a course of about 1.5 months. The offline editing and sound editing have already begun. We expect final picture changes to be settled around the first week of January, with a final mix to 5.1 Surround by January 15th.
My beautiful wife, Phyllis Jackson, is directing the theatrical performance of Antigone. 

A teaser trailer of Antigone is expected to be released at the turn of the year!

Be on the lookout! 


