Blender to PRMan: Chapter 2 [RiSpec or Welcome to Renderman]

January 11, 2012 Update: Work in Progress. Extending RSL information. Continuing to fill in temp pictures until a full overview of spec complete. Currently completing section [RSL ShadeOp: Function Shadeops: Shading & Lighting]

Springboard:

What is Renderman?
Cracking the RiBs
The Ri
Reyes Rendering
The RSL


What is Renderman?

Renderman is a specification for rendering three-dimensional geometry in the context of a scene. This scene will contain the geometry itself, data necessary for shaders (mini-programs which calculate how a point on the object will look) to shade/alter the object, lights in the scene and all of the data concerning the camera with which to view the scene. Here, we will focus on Renderman Pro Server 16. The RiSpec is 3.2.

Rendering, in its simplest form, is taking all of this scene data in scalar & vector form and running calculations which will create a 2D image. From here, the image can be encoded/transcoded to various formats (still & motion), each with their own specification.

Renderman works to create photorealistic renders with a foundation on the three major cornerstones of rendering:

[PICTURE: the three cornerstones of rendering]

Comprehensive detail from each of these cornerstones is wrapped up into the Renderman Interface. The Interface and the Renderer each answer a specific question regarding rendering:

What is the desired picture to be rendered? -Renderman Interface
How is the desired picture to be rendered? -Pixar’s Photorealistic Renderman (PRMan) Renderer

There are two main ways that the Renderman Interface can be utilized to render the scene. These methods are represented as “bindings.” There is also a PRMan specific python binding, but I won’t discuss that here.

Method One (RiB Binding): These scene description values can be stored in a file known as the Renderman Interface Bytestream, or RiB. Consider the RiB a “metafile” containing information for the Ri API procedure calls (more later). The renderer here, PhotoRealistic Renderman (or PRMan), takes this RiB file, imports the necessary images (.tex files) and shaders (.slo files) and runs the render via Ri API procedure calls.

Method Two (C Binding): The Modeling Application can directly make the Ri API procedure calls to the renderer, internally invoking the renderer to run the Graphics State and Geometric Primitives. The RiB file is completely bypassed in this approach.

We will use Method One for our Blender to PRMan protocol.

RiB files (.rib) can be binary or ASCII. We will be focusing on ASCII. PRMan’s catrib can translate between the two.

Given the two methods above, the first thing we need to work out is the implementation of RiB creation through Blender, our modeling application.

Here’s the game plan: from inside the python addon module, we will make the calls necessary to create the RiB file and fill those calls with the necessary scene description data from Blender. This will happen upon export or render.

Two questions now.

-What scene description data goes into the RiB file?
-How do we get data from Blender into the RiB file?

This chapter will focus on answering the first question. Before we can know where we will get the data to create the RiB file, we must know what data is required or optional in a RiB.

Cracking the RiBs

Renderman Interface Bytestreams are language-independent files that are full scene descriptions. As mentioned above, they can be in ASCII text format or in binary. They can also be created with or without gzip compression. The binary format is useful for compressing/saving space and when transferring between servers. Obviously, the binary format is not human readable.

The structure of a RiB file is simple:

PreFrame
Frame Begin
–Options
–World Begin
—-Attributes Begin
——Geometric Primitives
—-Attributes End
–World End
Frame End
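As a hedged sketch (the scene values, file names and shader names are purely illustrative), a minimal ASCII RiB that follows this structure might look like the following. Each region of the outline above maps directly onto a block in the file.

    ##RenderMan RIB
    # PreFrame: metadata and user comments
    FrameBegin 1
      # Options: frozen for this frame once WorldBegin is called
      Display "sphere.tiff" "file" "rgba"
      Format 640 480 1
      Projection "perspective" "fov" [40]
      PixelSamples 4 4
      WorldBegin
        AttributeBegin
          # attributes come BEFORE the geometry they affect
          Color [1 0 0]
          Surface "plastic"
          Translate 0 0 5
          Sphere 1 -1 1 360
        AttributeEnd
      WorldEnd
    FrameEnd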

The PreFrame region encapsulates most of the metadata of the RiB file, including user comments.

The FrameBegin to FrameEnd Region contains all of the Graphics State’s information for the Renderer for the given frame and the Geometric Primitives subjected to the Graphics State (more on these later).

Options appear right after FrameBegin and are “frozen” for the given frame once WorldBegin is called.

All Geometric Primitives and their respective Attributes are called in the WorldBegin to WorldEnd Region.

A bit of a “gotcha” in Renderman is the idea that you define an object’s attributes (like color, transform, etc.) BEFORE you define the actual geometry of the object. This is actually a good case-in-point for how the RiB is read into the renderer: the file is read top-down, so by the time the renderer reaches a piece of geometry it has already accumulated all of the state needed to render it. To work out what applies to a given object, start at the geometry in the inner-most part of the RiB and work your way backward/outward, accumulating all of the necessary information. This is actually in line with the Imaging Pipeline discussed in the Ri section on Options.

An extremely powerful aspect of RiBs is that of RiB Archives. In short, you don’t need to keep ALL of the scene description in a single RiB. A single object alone might have polygon counts upward of tens of millions, so why clutter your main RiB with points, making it hard to locate other important information? These heavy objects can be saved in their own RiB file and injected into their specific areas of the main RiB with a single “ReadArchive” command. That’s nifty.
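For instance (the file name is purely illustrative), the main RiB might pull a heavy object in like this:

    AttributeBegin
      Translate 12 0 -4
      ReadArchive "archives/hero_tree.rib"   # millions of points live in their own RiB
    AttributeEnd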

This represents the very basics of the RiB file format. How to fill out a RiB file will become a lot clearer once we understand the Ri Structure. In the meantime, let’s discuss the files external to the RiB that are also necessary for the renderer’s success.

Two external file types accompany the RiB family:

-Shaders
-Textures

Shaders

Shaders are miniature programs (or, more accurately, self-standing routines/functions). These are the “plug-ins” of the RiB file/API C-program. Shaders are most known for their use in surface shading: dictating how a surface responds to the lights around it. However, there are technically eight different shader types:

➀-Surface: These shaders compute the color and opacity of an object’s surface. You would be familiar with the computations/algorithms used in these shaders if you have heard of Lambert, Blinn, Phong, or Cook-Torrance shading.
➁-Light: Lights in modeling are essentially light shaders. They are called from the other shaders (namely, surface & volume) to determine values associated with lights (i.e. intensity, distance, color, etc.).
➂-Volume: Volume shaders work as interior or exterior shaders. Interior shaders shade the interior of a transparent object (translucency is a factor of surface shading; this is subsurface scattering). Exterior volumetric shading attenuates raytraced reflected/refracted light.
➃-Atmosphere: Atmospheric shaders shade the area between the object’s point and the camera. Consider it the “volume” between the object and camera.
➄-Displacement: Displacement shaders move the actual geometric points of the object. Contrast this with Bump Mapping, which only perturbs the surface normal using a single height value, and Normal Mapping, which replaces the surface normal with a full three-dimensional (3-axis) vector; neither of those moves the geometry itself.
➅-Deformation: Deformation is broader than Displacement. It alters the entire surface of a geometric primitive.
➆-Imager: These shaders perform filtering and compositing on the rendered 2D raster image.
➇-Projection: A shader which maps the camera space to the screen space.

Shaders don’t work in a vacuum. Often, they rely on the values of other shaders. For instance, the surface shaders often rely on the values of the light shaders and volumetric shading often needs the color from surface shaders.

All shaders for Renderman are written in Renderman Shading Language (RSL) in ‘.sl’ files. For PRMan, these shaders are compiled to ‘.slo’ files. PRMan’s shader executable can compile shaders written in RSL.

Shaders often come with a list of parameters that the user sets, called the parameterlist. These are settings which are passed into the Renderman Interface via Ri Procedures. They can be values set for things like the surface color, displacement amount, mapping value, etc.
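As a hedged sketch of what such a parameterlist looks like in practice, here is a minimal RSL surface shader modeled on the standard “plastic” shader (the shader name, parameter names and defaults are illustrative):

    surface simpleplastic(
        /* the parameterlist: user-settable values passed in via Ri Procedures */
        float Ka = 1, Kd = 0.5, Ks = 0.5, roughness = 0.1;
        color specularcolor = 1; )
    {
        normal Nf = faceforward(normalize(N), I);   /* make the normal face the viewer */
        vector V = -normalize(I);
        Ci = Cs * (Ka * ambient() + Kd * diffuse(Nf))
           + specularcolor * Ks * specular(Nf, V, roughness);
        Oi = Os;
        Ci *= Oi;                                   /* premultiply color by opacity */
    }

From a RiB, that parameterlist would be filled in when the shader is attached, e.g. Surface "simpleplastic" "Kd" [0.8] "roughness" [0.2].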

A fully comprehensive look at shaders can be found in the RSL section below.

Textures

Textures are images. Textures are maps. In CGI, “textures” often refer to color maps for surface shading, but textures can also serve as maps for displacement (Displacement, Bump & Normal Maps), shadows (Shadow & Deep Shadow Maps) and as caches (Irradiance Cache). While I placed textures alongside shaders, textures are technically a subgroup of files under shaders. Shaders essentially import these files for use in their calculations; hence the term maps/mapping. They map a value to the given point being shaded, and the shader takes this value and performs its calculations based on the type of shader it is. Getting a 2D texture onto a point on the object in the first place is done via parameterization (“UV mapping”).

Textures are obviously created from 2D images. But, for renderman to render correctly, these textures need to be optimized for use. The main optimization is the creation of the image as a MIP-map image. “Multum in parvo”, MIP, means “much in little.” Essentially, this means creating subsequent copies of the image at 1/4 resolution area of the previous copy. For PRMan, the executable txmake is used. The extension is typically .tex for texture images, .env for environment maps and .shd for shadow maps. However, this is just good practice and not a strict rule.

A note on texture images, whether you use Renderman or not: texture images should follow the “powers of 2” Rule:

-Square image (Width = Height) unless an environment/reflection map.
-Dimensions at a power of 2 (2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, …).
-If tiling for OpenEXR, 32×32.

Point Clouds & Brick Maps

Wait. These aren’t listed as the two file types used with RiB. But they are used. Point Clouds are pretty much radiance caches used as a lookup cache for shaders’ computations. Brick Maps offer the 3D version of Textures. Much more on these later…

The Ri

Finally, we move to the meat of the matter. The Renderman Interface lays out the entire specification on which the RiB stands. The name tells all. The Renderman Interface is just that. An Interface. It is a means of communicating between the modeling program with all of its objects, animation, lights, etc. and the renderer. In this case, it will be the interface layer between Blender and PRMan. And this layer will be passed between the two via RiB.

The Renderman Interface maintains what is known as the Graphics State and its Geometric Primitives. These two classes encapsulate all of the Renderman Interface. We’ll begin with the Graphics State.

Graphics State

The Graphics State consists of all parameters needed to render the given frame in the context of a sequence of primitives (read Geometric Primitives). The Graphics State is composed of three elements: the Interface Mode, the Options and the Attributes.

These three elements compose the major sections of a RiB.

I mentioned earlier about making “Calls” to the Renderman Interface. From here on, we will refer to these as Ri API Procedures. They make up the C Binding of the Renderman Interface. In code, you will recognize them by RiSomeCallGoesHere(). These procedures create, enter, alter, or exit from the three elements described above. I’ll make mention of them in this text so that you can see the parallels between the RiB binding and the C binding. There are 128 non-deprecated Ri Procedures in the Renderman Interface.

There are 5 main Ri Procedure types which affect the Graphics State:

➀. Geometric Primitives Procedures
➁. (Interface) Mode Procedures
➂. Options Procedures
➃. Attributes Procedures
➄. Maintenance Procedures (Take care of texture mapping and various other elements of bookkeeping the Graphics State)

INTERFACE MODE

The Interface Mode of the Renderman Interface is the head honcho that oversees the context of all other Options/Attributes. For those with programming experience: the Interface Mode manages the Graphics State’s stack, making Options analogous to global variables and Attributes analogous to local variables. Typically, the Interface Mode defines the changes in scope that you’ll see in a RiB file; the scope keeps the current states of the Graphics State. In the C Binding, the Interface Mode has 15 Ri Procedures.

Interface Mode Ri Procedures {RiB Binding}:
RiBegin
RiEnd
RiDeclare {Declare name class type}
RiObjectBegin
RiObjectEnd
RiContext
RiGetContext
RiFrameBegin {FrameBegin int}
RiFrameEnd {FrameEnd}
RiWorldBegin {WorldBegin}
RiWorldEnd {WorldEnd}
RiAttributeBegin
RiAttributeEnd
RiSolidBegin
RiSolidEnd

OPTIONS

Options control the scene’s global rendering parameters. Once you get to the inner levels of the Graphics State (in World), these cannot change. They affect the parameters which are applied to all objects in the scene and are independent of the local parameters which are affecting those individual objects.

Options are composed of three sections: the Camera, the Display and the Run-Time Renderer Controls.

Camera

The Camera is the first major section of Options in the Graphics State. The Camera settings are as follows.

-Resolution (Horizontal/Vertical Resolution, Frame Aspect Ratio & Pixel Aspect Ratio)
-Windows (Crop Window, Screen Window & Clipping Planes[near & far])
-Coordinate System Transformations (World->Camera Transformation, Camera Projection [Camera Coordinates to Screen Coordinates])
-Camera (F/Stop, Focal Length, Focal Distance, & Shutter Open/Close)

Essentially, almost all of these parameters are for setting up the coordinate system transformations which ultimately lead to what is and what isn’t seen in the final rendered image.

One area of rendering that at times may seem esoteric and confusing is the area of coordinates. Coordinate systems can seem perplexing but a clear view of coordinate systems and the transformations between them can open up a world of enlightenment for fixing rendering issues. There are six coordinate systems or “Spaces” used in Renderman:

➀-World Space: Non-Perspective. Global coordinates of scene.
➁-Object Space: Non-Perspective. Local coordinates of object in scene. “Origin” of object.
➂-Shader Space: Non-Perspective. Local coordinates of a point on an object in a scene. Normal = z, Binormal = y, Tangent = x. The binormal and tangent are dependent upon the current respective (u,v) coordinates. This makes texture mapping possible.
➃-Camera Space: Perspective. Coordinates relative to the location of the camera in World Space; z corresponds to depth along the viewing direction into the frustum.
➄-Screen Space: Perspective. Normalized 2D coordinates after projection; z runs from 0 at the near clipping plane to 1 at the far clipping plane.
➅-Raster Space: Perspective. 2D pixel coordinates of the final pixel-width x pixel-height image.

Perspective spaces are relative to the Camera coordinates. Non-Perspective are independent of the Camera.

A closer look at the imaging transformations might help reveal a good deal more about the inner workings of coordinate systems in Renderman:

To get from one coordinate system to another coordinate system, Transformations are used. Matrices dictate how the Transformations calculations are carried out (Transformations themselves are discussed in more detail in Attributes). The parameters set in the camera directly change the matrices’ values.

For Imaging (bringing the object into the final raster image), the transformations run from the inner-most spaces (Object, World) outward to Raster Space. We’ll talk about Imaging in the Display section.

For setting the Camera, the reverse takes place. The Raster coordinates of the image (file or framebuffer) are set depending on the parameters requested. From here, a Screen transformation is set in the current transformation matrix (the matrix that is currently in scope while creating the RiB). A projection matrix is appended to this matrix to go from Screen to Camera and from Screen to Raster. Then the current transformation matrix is changed to the camera transformation, which dictates transformations to and from the camera coordinate system. Once camera coordinates are established, all future transformations move the world coordinates relative to the camera coordinates. In other words, the Camera doesn’t move; the World moves. When the “world” begins, the world coordinates are established from this.

Other than coordinate systems, the camera settings dictate Motion Blur & Depth of Field.

Display

The Display is the second major section of Options in the Graphics State.

All renderers should have the capability to produce Color, Opacity and Depth values for a given image. The Display dictates how the color, alpha and depth images are converted into a displayable form. The Display outputs to two forms: An Image File or the Framebuffer of a display device.

Display runs through a particular Imaging Pipeline. We touched on this pipeline briefly in the Camera section. Coming out of the Hidden Surface Algorithm (more on this later), a color Image is available. But there is much work yet to be accomplished. This is where the Display section of the Graphics State kicks in. Color, Opacity and Depth each are processed in separate sections of the pipeline before coming together for the final image.

Often, the color image coming out of the Hidden Surface Algorithm is sampled at a much higher rate than the resolution of Raster space. The color values are first filtered with a selected Filter and then sampled. The sampled color image then moves on to the Exposure stage, where the gain and gamma of the pixel values are adjusted. At this point an Imager Shader can be applied to further process the image before it is quantized with the Color Quantizer (reduced to integer values and dithered). Depth values are essentially Screen-space z-values (ranging from 0 at the near clipping plane to 1.0 at the far plane) and usually just go through an optional Imager Shader and the Depth Quantizer. Alpha uses essentially the same pipeline as Color.

A thing to note: Color values (RGB), Opacity (Alpha) and Depth (Z-Values) can be stored in separate channels in an image. Other custom display channels can be created and used in Display.

A thorough knowledge of digital image processing helps to clear portions of this section up if you are left with any confusion.

Run-Time Renderer Controls

The Run-Time Renderer Controls is the third major section of Options in the Graphics State.

These Renderer Controls control three aspects of the renderer:

➀-Hiders
➁-Operation
➂-Performance

To get a good understanding of the Renderman renderer controls and many of the Attributes which follow, it helps to understand REYES. We cover that in the Reyes section below.

We’ve mentioned the Hidden-Surface Algorithms. These are known as Hiders. Hiders are essential in determining what objects/surfaces should be considered in the given section of the rendering pipeline and which should be discarded. They are Renderman’s version of Hidden Surface Determination. This is necessary to determine surfaces that are “hidden” from a certain viewpoint. There are eight Hiders which assist in selectively rendering objects in the context of the scene sequence:

➀-Hidden (aka Stochastic)
➁-Raytrace
➂-Depthmask
➃-Null
➄-Opengl
➅-Paint
➆-Photon
➇-Z-Buffer

Operations control how the renderer (PRMan, in this case) performs the render. This is where a background in REYES is super handy.

-Bucket Size: Determines the pixel x pixel size of the buckets to be used in Reyes rendering.
-Grid Size: Determines the grid size, in micropolygon count, used in dicing.
-Arbitrary Bucket Order: Dictates the order of bucket-to-bucket processing (default: scanline order, left to right, top to bottom)
-Ray Tracing
-Shading
-Visible Point (VP) Options
-Opacity Threshold
-Opacity Culling
-Shadow Maps
-Deep Texture Compression
-Texture Filtering
-RiB Output
-RiB Authoring
-Hair Length

Performance controls directly affect the performance vs. quality tug-of-war in the renderer.

-Threads
-Memory
-Statistics
-Search Paths
-Directory Mapping

Options Ri Procedures

There are 26 Options Ri Procedures.

Options Ri Procedures:

Camera:
RiCamera
RiFormat
RiFrameAspectRatio
RiScreenWindow
RiCropWindow
RiProjection
RiClipping
RiClippingPlane
RiDepthOfField
RiShutter

Display:
RiDisplay
RiDisplayChannel
RiPixelVariance
RiPixelSamples
RiPixelFilter
RiExposure
RiQuantize
RiColorSamples
RiRelativeDetail
RiImager
RiPixelSampleImager
RiMitchellFilter
RiSeparableCatmullRomFilter
RiBlackmanHarrisFilter

Render Controls:
RiOption
RiHider

ATTRIBUTES

While Options are global, Attributes are considered local. They are dependent upon their assigned geometric primitives and those alone. They can be altered throughout the course of the Graphics State. Two special Attributes, Transformations & Motion, will be discussed later.

The entirety of Renderman Attributes is composed of two major Attribute types: Shading and Geometry.

Individual renderers may have their own implementation specific Attributes and these can be assigned via RiAttribute.

SHADING:

Shading Attributes define the current shading states in the Graphics State. The Graphics State maintains a current color, current opacity, current surface shader, current atmosphere shader, current interior/exterior volume shader, current list of light sources, current area light and current displacement shader. This means that the only shaders which affect the Shading Attributes are Surface Shaders, Atmosphere Shaders, Volume Shaders, Light Shaders and Displacement Shaders.

The Shading Attributes can be broken down into four subgroups:

➀-Shaders
➁-Texture Mapping
➂-Lights (also a Shader, but discussed separately)
➃-Renderer Options

Shaders:

The Shading Attributes handle five of the major Shaders.

The color & opacity of a geometric primitive’s surface can be dictated by direct calls to Color & Opacity or via the Surface Shader call. Displacement Shaders alter the geometric primitive’s points before the lighting stage. Volume Shaders define the Interior & Exterior volumetric properties of the geometric primitives. Atmosphere Shaders are defined along with volume as well. Light Shaders are handled by the Shading Attributes as well, but we will cover them later in Light Sources.

Co-Shaders are Shaders which are defined in the interface but not called directly. They are often called by other Shaders.

The Surface, Interior Volume and Atmosphere Shaders can be run as Visible Point Shaders. This means that the shaders can be run after all “visible points” have been determined (see REYES). This can eliminate some motion blur issues on volumes.

Geometric objects can also be used as Mattes. When given a Matte Attribute, an object will hide whatever it visibly covers up and will leave a transparent hole where it was in the scene.

In addition to the Renderer Options for Shading Attributes, there are some special Attributes that exist for Volume Shading specifically. We won’t discuss those here.

Texture Mapping:

Now would be a good time to discuss texture mapping. Texture Mapping is the process that gets a 2D Image Texture (as described in the Textures section) mapped onto the coordinates of a 3D geometric object (aka geometric primitive). Each geometric primitive owns a set of surface parameters (u,v) which correspond to its parametric surface (x,y,z). Texture coordinates on the 2D Texture (s,t) have a mapping to these (u,v) coordinates. It is important to understand that while the default mapping makes (s,t) and (u,v) the same values, this doesn’t have to be the case. Many people describe Renderman’s (s,t) coordinates as simply being Renderman’s version of (u,v) coordinates; as we can see from the above, this isn’t entirely true. We define these mappings by assigning (s,t) values to the four corners of (u,v) parameter space. These are set with RiTextureCoordinates.

[PICTURE: mapping (s,t) texture coordinates onto the four (u,v) corners]
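A hedged RiB sketch of the idea (values illustrative); each pair is the (s,t) value assigned to one of the four (u,v) corners:

    # default: (s,t) equals (u,v) at the corners (0,0) (1,0) (0,1) (1,1)
    TextureCoordinates 0 0  1 0  0 1  1 1
    # flip t, so the texture is applied upside down on the patch
    TextureCoordinates 0 1  1 1  0 0  1 0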

Light Sources:

Lights illuminate surfaces. Lights in Renderman are technically Shaders which are accessed from other Shaders. The Graphics State begins with no light sources in its current light source list. There are two main light source types in the Ri’s light source list: the current light source & the current area light source. The Renderman Interface comes with four light source types for current light sources:

➀-Point
➁-Spot
➂-Ambient
➃-Distant

The current area light source is a single area light defined by the geometric primitives included in its Attribute’s definition.

Renderer Options:

Renderer Options for Shading Attributes essentially set certain parameters for the renderer. These are not Options in the hierarchical sense because they are dependent on the given context of the geometric primitive’s Shading Attributes.

-Displacement Bounds: Dictates bounding boxes for primitives to account for displacement of surface points.
-Shading Rate: Measured in pixel area. Defines frequency of primitive’s shading.
-Shading Interpolation: “Constant” shading (aka Flat) or “Smooth” shading.
-Derivatives & Normals: Defines how Derivatives & Normals are calculated to reduce artifacts.
-Ray Tracing: Sets Ray Tracing controls for any Ray Tracing that is used in the renderer
-Irradiance: Sets Irradiance controls for any Irradiance that is used in the renderer
-Photon: Sets Photon Mapping controls for any Photon Mapping that is used in the renderer
-Shading Strategy: Defaults to using “Grid” strategy of shading. Volume VP shading handled by separate calls now.
-Shading Hit Mode: Defines what shaders are actually executed when shading points generated by the renderer are “hit.”
-Motion Factor: Increases shading rate for objects that Motion Blur along a larger space
-Focus Factor: Adjusts Dicing Rate for blurred objects from Depth of Field.
-Shading Frequency: How often the object goes through shading through a duration of the object’s Motion Blur.

GEOMETRY

Geometry makes up the second portion of Attributes.

Geometry Attributes definitely pop the hood on the rendering engine. They expose the inner workings on a technical level regarding the rendering system that can go much deeper than that required for most of the Shading Attributes. Warning: It will be ridiculously easy to glaze over here, but I encourage you to pull through and get a firm understanding on the following concepts. If you do, it will open a world of rendering opportunities in your work (and may even get you work!).

Bounding:

The Graphics State maintains a current bound which specifies the bounding box for the current object primitives. This bounding box is critical to the rendering engine’s work. It defines the boundaries for the subsequent primitives in the Attribute’s section. Any primitives outside the bounding box are clipped or culled.

Level of Detail (AKA “LOD”):

The Graphics State also maintains what is known as detail. Detail, in the case of Geometry Attributes, defines whether a primitive is “drawn.” The detail is the area of the object’s bounding box when projected into Raster Space. If the detail area is within a specified detail range, then that primitive is drawn. So, why is it called detail? Well, if the range given only allows a primitive to be drawn if under a certain detail amount, then only a “low detail” version will ever be drawn. Likewise, if the range given only allows a primitive to be drawn if over a certain detail amount, then only a “high detail” version will ever be drawn. It helps if you think about it as a filter.

Orientation:

Geometric primitives have an orientation. Just as the coordinate spaces down to Camera Space form a left-handed system, the primitives can have their own “handedness” defined by Transformations. The coordinate system implemented by the primitives affects how the normals on the surface of the object are calculated. If the handedness of the primitive is reversed, then the normal will point in the opposite direction. This will change whether the surface is outside or inside, and facing the viewpoint or hidden. This directly affects Shading, Culling and Solids.

Sides:

Objects can have 1 side or 2 sides. If the object is one-sided, its outside surface is front-facing when facing the viewer and back-facing when facing away from the viewer. Only its outside is visible. If the object is two-sided, both sides are visible. Simple.

Visibility:

The Visibility of a primitive to certain aspects of the renderer can be defined: visibility to the camera, diffuse rays, specular rays, transmission/shadows, photon mapping, and a special attribute known as mid-point visibility (shadow-receiving but not shadow-casting objects).

Culling:

Culling removes points from being shaded. Backfacing & hidden surfaces can be forced into shading by turning off their respective culling attributes. This is useful for point cloud baking, occlusion and indirect illumination.

Dicing Strategy:

The Dicing Rate (see REYES) is determined based on the screen space coordinates of a primitive’s area projected onto either a plane or sphere. One of two reference cameras can be used for the determined dicing strategy: World Camera & Frame Camera. There are some special restrictions on setting up these cameras and using them that are not described here.

Strategies for off-screen primitives can be defined for dicing as well. The original strategy is to never split offscreen objects and to clamp their dicing rates. Newer strategies can treat objects outside the viewing frustum with the spherical dicing rate strategy touched on above, or use a good middle-ground strategy which splits out-of-frustum objects less and reduces the dicing rate based on distance from view.

The raster metric used for dicing can be the standard screen oriented raster metric or an unoriented raster metric. The unoriented metric can be useful for primitives which shouldn’t change dicing rate when the camera position moves.

Other dicing strategy Attributes can be defined for surface patches and curves. The lowest levels of surface patches can be diced into grids in powers of 2 to help eliminate patch cracking.

Attribute Ri Procedures

Attributes have 26 Ri Procedures.

Attribute Ri Procedures:

Shading Procedures:
RiAttribute
RiColor
RiOpacity
RiTextureCoordinates
RiLightSource
RiAreaLightSource
RiIlluminate
RiSurface
RiDisplacement
RiShader
RiAtmosphere
RiInterior
RiExterior
RiVPSurface
RiVPInterior
RiVPAtmosphere
RiShadingRate
RiShadingInterpolation
RiMatte
RiGeometricApproximation

Geometry:
RiBound
RiDetail
RiDetailRange
RiOrientation
RiReverseOrientation
RiSides

Transformations

Transformations are a special breed of Attributes (1 of 3). We don’t put them in Attributes because they also directly alter the Graphics State. The Interface Mode will often dictate a main current transformation and these Transformation attributes alter the coordinate systems or transform points between them. Transformations have 14 Ri Procedures.

Transformation Ri Procedures:
RiIdentity
RiTransform⎈
RiConcatTransform⎈
RiPerspective⎈
RiTranslate⎈
RiRotate⎈
RiScale⎈
RiSkew⎈
RiCoordinateSystem
RiScopedCoordinateSystem
RiCoordSysTransform
RiTransformPoints
RiTransformBegin
RiTransformEnd

Motion

Motion is the second type of special Attributes (2 of 3). Motion is created from two things:

➀-Moving Transformations

➁-Moving Geometric Primitives

Motion provides us with two things as well: Motion Blur & Temporal Anti-aliasing.

Motion has 2 Ri Procedures. The Procedures on this page with a ⎈ symbol can appear within the RiMotionBegin-End block.

Motion Ri Procedures:
RiMotionBegin
RiMotionEnd
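A hedged RiB sketch (times and values illustrative): a transform sampled at two shutter times produces motion blur on the primitive that follows. Note that the procedures marked with ⎈ above are the ones allowed inside the block.

    MotionBegin [0 1]          # shutter-open and shutter-close sample times
      Translate 0.0 0 5
      Translate 0.5 0 5        # the object moves 0.5 units in x over the frame
    MotionEnd
    Sphere 1 -1 1 360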

Resource

Resource is the third special Attribute (3 of 3). It basically encapsulates or “saves” a current part of the Graphics State so that it can be “restored” at a later time. Resource can restore the following five subsets: shading, transform, geometrymodification, geometrydefinition, & hiding. They are not subject to any scope rules. Resource has 3 Ri Procedures.

Resource Ri Procedures:
RiResource
RiResourceBegin
RiResourceEnd

Geometric Primitives

Whew. The previous sections of The Ri lay out the Graphics State of the Ri Scene Description. While the Graphics State defines how your scene and everything within it will be rendered, the Geometric Primitives supply the what.

Renderman supports polygons, bilinear and bicubic patches, non-uniform rational B-spline patches, quadric surfaces, and retained objects for its geometric primitives.
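As a hedged sketch of the simplest case, a single polygon might be declared in a RiB like this (coordinates illustrative):

    # one quad; the first array gives the vertex count per face,
    # the second indexes into the "P" positions
    PointsPolygons [4] [0 1 2 3]
      "P" [0 0 0  1 0 0  1 1 0  0 1 0]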

Ri NOTE: Renderman Graphics Environment

A few points need to be addressed considering the environment in which the Graphics State resides.

-Coordinate System: left-handed. There are 6 coordinate systems used in Renderman: object, shader, world, camera, screen, raster.

-Transformation: Transformation procedures work in the given coordinate system that is set.

-Cameras: Cameras are not objects in Renderman. Transformation procedures called before the World begins define all the parameters of the camera.

-Lights: As mentioned briefly in the shaders section, lights are turned into Light Shaders in the Renderman Interface. The positioning is set in the parameterlist of the shader.

Reyes – Render Everything You Ever Saw.

So, what does PRMan do with all of this information in the Renderman Interface passed to it via a RiB?

Well. It renders it.

And it does so through the Reyes (Render Everything You Ever Saw) set of algorithms.

The original REYES paradigm has been enhanced with extended algorithms defining the use of buckets, additional culling attributes and selective dicing and shading options. A high-level view of REYES looks like the following:

[PICTURE COMING SOON]

Hopefully this section will help bring all of the Ri into perspective.

BOUNDING & SPLITTING LOOP

Bounding:

The geometric primitives have their bounding boxes calculated in Camera Space. This bounding box is then converted into Screen Space. The bounding box is a volume enclosing all of the primitive. Those geometric primitives that do not have any part of their bounding box inside the camera’s viewing space (frustum for perspective, rectangle for orthographic) are culled, i.e. removed from the renderer for the current frame. Cull-testing then eliminates those primitives that are back-facing (this can be turned off).

Displacement bounds increase the bounding box by a specified “padding” amount for all primitives which have displacement shading. If displacement is not accounted for in the bounds, it may leave holes in objects whose vertices have moved out of bounds and were not properly rendered. On the other hand, generous displacement bounds can leave a lot of primitives hanging around waiting for their respective bucket (read: increased rendering time). It’s best to make the displacement bounding box as tight as possible.

Bounding boxes also need to cover the entire motion (motion blur) that the object goes through in the frame. Depth of Field needs to be considered for bounding as well.

Splitting:

The 2D image space, Raster Space, is then divided into equal-sized “buckets.” The buckets are measured in pixels (width x height). The primitives then go through splitting. Splitting cuts the primitives up to a small enough size to be placed into the buckets to which they belong. The resulting primitives are then placed back into the top of the loop. These sub-primitives go through the Bounding phase to determine if they are in the viewing volume and, if so, to which bucket they belong. Splitting continues until all primitives are small enough, designated to a particular bucket, and no back-facing/outside-viewing-volume primitives remain. Occlusion culling can keep track of primitives which are depth-sorted in a bucket and cull those primitives that fall behind fully opaque areas in the bucket. This keeps hidden primitives from going through the expensive Dicing/Shading/Hiding phases.

Default bucket sizes are usually 16×16 pixels.

Eye Splits:

A unique case which pops up in REYES is that of eye splits. The near clipping plane of the camera’s viewing volume exists for its name’s sake: it is there to clip away geometry that lies in front of it (just as the far plane clips what lies beyond it). However, REYES doesn’t use typical “clipping” algorithms, because clipping doesn’t leave primitives that are cleanly cut for dicing; REYES uses its usual splitting for this instead. For primitives to be set up for dicing and shading they must be projectable, and they must also be projectable for determining bounding. That means the primitive must lie completely forward of the eye plane (the plane through the apex where the viewing frustum comes to a point).

If a primitive lies entirely on the near side of the clipping plane, that’s easy: it’s culled, even if part of it also lies behind the eye plane. If a primitive lies both before and after the clipping plane, but entirely forward of the eye plane, then it’s split and carried on through the renderer. However, what if a primitive spans from behind the eye plane all the way forward of the clipping plane? That means the forward part of the primitive needs to be carried through, but part of the primitive needs to be culled. Since part of the primitive lies behind the eye plane, a proper bounding on the primitive cannot be calculated. Thus, the renderer has to shoot splits in the dark, hoping to eventually separate the good primitives from the bad, carrying the good ones through and culling the rest.

The number of attempts made at these eye splits is a predetermined value. For instance, if you set eye splits to 6, you allow up to six rounds of splitting (up to 2^6, or 64, sub-primitives), hoping that one of the splits lands in the “safety zone” between the eye plane and the clipping plane, from which you can discard the back parts of the primitive.

[PICTURE]

In the figure, the shaded area represents the region where primitives are not projectable. “A”, although not projectable, does not lie forward of the near clipping plane, so we discard it altogether. “B” has parts before the near clipping plane, but it is all projectable, so we can easily split it and save what we need. We cannot fully discern how to split “C” to find out what to keep or discard, so we have to split what we do have, as smartly as possible, and hope we get a split which lies between the eye line and the clipping plane. If we do, we can discard everything which comes before the split line and continue splitting the rest of “C”, treating it like we treated “B.” If splitting fails, then the whole of “C” is discarded, leaving a transparent hole where it was.

DICING & SHADING

Dicing:

Dicing dices up the remaining primitives into a tessellation of quadrilateral facets that are tiny bilinear patches. These facets are known as micropolygons and are usually about 1 square pixel in area. The vertices of these micropolygon “grids” are what go through shading.

The size (micropolygon count) of the grids is dictated by the shading rate. The bucket area divided by the shading rate gives the grid size per bucket. Default grid sizes are usually 256 micropolygons.

Shading:

The shading attributes given are now calculated for the grid vertices. There is a specific order of operations for shading:

➀-Displacement Shading
➁-Surface Shading
➂-Light Shading (run as a co-routine the first time it is called, then cached for later calls)
➃-Volume Shading
➄-Atmosphere Shading

After shading, each vertex on the grid has at least a color and an opacity value. The results of shading, other than displacement, have a relatively small effect on the Reyes engine. Transparency can slightly increase the rendering time due to fewer primitives being culled, because fully opaque opacity might not be reached on a sample. Other than that, the Renderman Shading Language (RSL) dictates the actual operations which are performed in this section. We will discuss that in the RSL section and maybe even dispel some confusion as to how Reyes can also perform some Ray Tracing (which is an altogether different technique).

The default shading rate is 1.0.

It is important to note that anti-aliasing is a separate process from the Dicing & Shading.

HIDING

Busting:

The micropolygons grids are then “busted” into individual micropolygons.

Hiding & Sampling/Filtering:

These micropolygons are bound- and cull-tested. They are checked to see if their Bounding is still on-screen (displacement shading can move primitives off-screen). The remaining micropolygon primitives are then cull-tested to keep front-facing primitives (optional).

The bounding then tests the micropolygon primitives to determine in which pixel they belong. Once the micropolygon has been sorted into its appropriate pixel, a predetermined number of “samples” over the area of the pixel is tested to see which samples overlap the micropolygon primitive. Each of these pixel samples is recorded as a visible point, which is a depth-sorted list of color and opacity values. How these visible points are recorded depends on the shading interpolation method chosen (i.e. “smooth” or “flat”). These visible point lists are then resolved to final pixel values. This means that they are composited and filtered to compute the final pixels. Once the visible point list for a bucket is resolved, the dicing/shading/busting/hiding for the next bucket is performed in bucket scanline order (which can be altered with arbitrary bucket order).

Bucketing Note: Bucketing allows visible point lists to be sampled and resolved on a per-bucket basis, thus eliminating the gargantuan amount of memory that visible point lists can consume. The image-wide database of visible point lists is replaced by a much more compact database of per-bucket high-level primitives inventory.

REYES RELATED FORMULAS:

grid size = (bucket size X * bucket size Y) / shading rate

micropolygons per pixel = 1/shading rate
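Plugging in the defaults mentioned earlier (16×16 buckets, shading rate 1.0): grid size = (16 * 16) / 1.0 = 256 micropolygons per grid, and micropolygons per pixel = 1 / 1.0 = 1, which is why micropolygons end up roughly one pixel in area.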

RSL – Renderman Shading Language

A major section of the Reyes Pipeline is that of Shading, which occurs after dicing. The RSL opens up this section of the renderer and exposes the actual computations that are performed on each vertex of each grid.

As you dive deeper into the abyss, I encourage you to remember the “Big Point”; the reason for Shading:

Big Point: The ultimate goal of the entire Shading Pipeline of a renderer is to produce a color at a specific point in an image.

That’s it. All of the opacity manipulation, lighting, object modeling, etc., is there to eventually create the desired mix of red, green & blue at the desired pixel.

Get on. Let’s go.

Shading in RSL 2.0 follows a very specific pipeline:

1. Call Displacement Methods in Surface Shader if any
2. Execute Displacement Shader
3. Call Opacity Method in Surface Shader if any
4. Execute Surface Shader
5. Execute Interior, Exterior then Atmosphere Shaders

We will now dissect each of the Shader Types and their methods. We’ll classify each shader in the following fashion:

1. Mechanics: What the shader does and of what it is constructed.
2. Function: How the shader works.
3. Goal: Why the shader exists.

Shader Types

SURFACE SHADERS

Mechanics:

The Surface Shaders define the way the point on a surface, P, responds to the environment (Lights & Objects) surrounding it. The PRMan renderer gets this point, P, from the preceding Reyes technique. (Real quick: if the point is discovered with Reyes and ray tracing is commenced via the shaders, does Renderman truly offer full ray tracing? We’ll see. All in time).

Color & Opacity are usually attached to what are known as “light rays.” These are essentially vectors. They are not normalized at the start. At some point, I might do a vector math tutorial.

Two major “rays” drive the surface shader. These are the I-ray and the L-ray. The I-ray, or incidence ray, comes from E, the “eye location.” A more advanced definition might be the entrance pupil of the imaging lens mechanism at which all incoming rays converge. The L-ray points from P in the direction of a given light, object, etc. which will drive the incoming light coming onto P. The I-ray contains the Color & Opacity values that we are seeking. Let’s see that:

[PICTURE]

Function:

These shaders take predefined input variables and deliver result variables to the renderer. Most of the predefined variables are read-only, but a few are read/write, including, of course, the result variables.

Some of the values are known as “derivatives.” Remember what a derivative is? It’s the measurement that describes how much a function (read: method; read again: Shader) changes due to the change of some input variable. [Ever heard of differentiation? That’s just a fancy word for the way you find the value of this so-called derivative]. The binormal & tangent derivatives, (dPdu, dPdv), are technically geometric values in the Surface Shader.

So let’s see how the Surface Shader function is set up:

[PICTURE: Surface Shader function, input and output variables]

For derivatives concerning position: The actual change of P’s position in each direction is P(u+du)=P+dPdu*du and P(v+dv)=P+dPdv*dv.

Goal:

The goal of the Surface Shader is to compute the Color & Opacity of the -I-ray, the accumulation of light coming back from the surface along the I-ray.

LIGHT SOURCE SHADERS

Mechanics:

The Light Source Shader defines the Opacity, Ol, and color of light, Cl, coming from the light’s origin point, P, to a point in space, Ps, along the L-ray. The value of the Color is also the intensity of the light. If this doesn’t make much sense, I recommend googling colorimetry, radiometry and photometry. For now, just trust that the Cl value is color and intensity inclusive.

The L-ray, light ray, represents the vector which points to the point in space, Ps, in question. The geometric related variables, other than Ps, define the Light Source itself and not any other geometric primitives.

[PICTURE]

Function:

The function has relatively few variables that it needs to juggle compared to the Surface Shader. Remember, the geometric variables other than Ps are in relation to the light source itself and not the geometry being shaded. Through this, the light source can be independent or attached to a geometric primitive.

[PICTURE: Light Source Shader function, input and output variables]

Goal:

The goal of the Light Source Shader is to define the amount & color of light and its direction.

VOLUME SHADERS

Mechanics:

The Volume Shaders attenuate/alter the Color & Opacity of the I-ray coming from the ray’s origin point, P. The input variables which drive the Volume Shader are the same variables which it outputs. In this respect, the mechanics of the Volume Shader work very much like a filter. It’s important to know that the Volume Shader is agnostic to its own location and the location of primitives in space. It is just fed the I-ray and its Color & Opacity from the renderer.

Also, the Volume Shader includes all volumetric shading: Interior, Exterior and Atmosphere.

[PICTURE]

Function:

The function of the Volume Shader can be seen as a filter. The output is in the same form as the input and can be “transparent” both in the literal sense and in the sense of signal filtration.

[PICTURE: Volume Shader function, input and output variables]

Goal:

The goal of a Volume Shader is to attenuate the light coming into the camera to simulate the volumetric qualities of the space or objects between the light ray’s origin and its final destination at the camera.
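A hedged sketch of this filter-like behavior, modeled on the classic fog atmosphere shader (the shader and parameter names are illustrative):

    volume simplefog(float distance = 1; color background = 0;)
    {
        /* attenuate the incoming Ci/Oi based on how far the I-ray travelled */
        float d = 1 - exp(-length(I) / distance);
        Ci = mix(Ci, background, d);
        Oi = mix(Oi, color(1, 1, 1), d);
    }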

DISPLACEMENT SHADERS

Mechanics:

The Displacement Shader alters the geometric variables of the vertex position, P, the surface normal, N, and/or the displacement of P across time, dPdtime.

[PICTURE]

Function:

The function of the Displacement Shader seems pretty similar to the Surface Shader. The inputs are slightly different as no ray Color & Opacity values are used, nor the L-ray. Output is purely restricted to changing geometric properties.

[PICTURE: Displacement Shader function, input and output variables]

Goal:

The goal of the Displacement Shader is to alter the perceived location of the surface (Normal) of an object or the actual location (Point) of the point. It takes the displacement across time into account for motion blur consideration.
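A hedged sketch (names illustrative) of a displacement shader that pushes P along the normal by a texture-driven amount and then recomputes N:

    displacement simplebump(float Km = 0.1; string texname = "";)
    {
        float amp = 0;
        if (texname != "")
            amp = float texture(texname, s, t);   /* height value from a texture map  */
        P += Km * amp * normalize(N);             /* move the actual geometric point  */
        N = calculatenormal(P);                   /* recompute the normal afterwards  */
    }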

IMAGER SHADERS

Mechanics:

The Imager Shader provides access, in Screen Space, to post-processing of the Color & Opacity values produced by the combination of all other shaders.

[PICTURE]

Function:

Keep in mind that the geometric properties here are now in reference to the Screen Window and the pixels it generates. Like the Volume Shader, the Imager Shader also acts as a filter and, in many respects, is a digital image processing filter.

[PICTURE: Imager Shader function, input and output variables]

Goal:

The goal of the Imager Shader is to provide further processing to the Color & Opacity of a pixel before the Reyes Algorithm leaves the Shading stage to move on to the next stage.
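A hedged sketch, modeled on the classic background imager shader, which fills the uncovered portion of each pixel with a solid color (names illustrative):

    imager simplebackground(color bgcolor = 1;)
    {
        /* alpha here is the pixel's coverage; composite bgcolor behind it */
        Ci += (1 - alpha) * bgcolor;
        Oi = 1;
        alpha = 1;
    }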

Shading Functions…AKA ShadeOps

Shading Types and their input/output variables show us what they need and what they provide. They do not show us how they process the information nor what goes on inside the Shader itself (as diagrammed as a circle in the function sections above).

This is where shadeops come in. Shader Operations equals ShadeOps. The five main Block Statement Constructs of Renderman Shading provide the foundation for retrieving, processing and returning the necessary values/variables discussed in the previous section.

➀-Gather
➁-Illuminance
➂-Illuminate
➃-Solar
➄-Ambience

Construct Shadeops

The Construct Shadeops are block statements; they “loop” through their functions and you define what they bring/send back.

GATHER

Gather shoots a given number of rays from the shaded point, P, in the direction of vector dir. The number of rays that are shot are called samples and they are shot within a given cone angle from the point. The sample rays can return with values relating to the surface point that the ray intersects or we can simply use the values of the rays themselves.

Gather utilizes Ray Tracing to shoot its sample rays. To “shoot a ray” for sampling means to not only create the ray, but also calculate the values for it to return.

[PICTURE: the gather() construct]

Gather can perform computations for when the sample ray hits a surface and when a sample ray misses and hits nothing.

There are two categories of Gather. Each designates the intent of the information gathered from the samples:

➀-illuminance: for gathering illuminance-related data regarding the samples. The rays are created and shot to return values concerning the shading of the point that the ray intersects.
➁-samplepattern: for gathering informational data regarding the samples. Does not perform ray tracing, but delivers information about the rays set up for possible shooting for the shader. Since the rays aren’t actually fired, they are considered “missed” and the computations assigned for missed rays are performed.

Parameters:

There are a number of parameters available for fine-tuning the Gather shadeop which often go with little or no explanation. I’ll try to cover all of them here.

[PICTURE: Gather shadeop parameters]

First of all, we start with…yep. A point, P, on the surface. The grey rectangular area patch represents the samplebase. It is the area from which the jittered ray samples have their origins. It defaults to the size of the micropolygon, so it fits perfectly into the micropolygon area in the picture.

A bias actually pushes the origin of all sample rays slightly off of the surface so we don’t run the risk of self-intersection. In other words, we don’t want the rays to accidentally hit the same point from which they originate.

A max distance sets the maximum distance the sample ray is allowed to go before returning missed. This is infinity by default.

An opacity threshold can make the ray continue to go forward collecting hit surfaces until the Opacity, Oi, of the accumulated intersections reaches the threshold. An opacity hit threshold can determine whether a surface point has even been hit by the sample ray depending on whether or not the point has passed this threshold.

Other objects’ surfaces in the scene can be selectively turned on or off to the visibility of the sample rays by setting a subset which defines what points may or may not be hit by the sample rays. Furthermore, a hitsides parameter can dictate whether the sample rays can hit the front side, back side or both on a one-sided surface point.

Not all sample rays are necessarily created equally. Well, uniform distribution would make the weight of each ray the same, but cosine distribution of the samples shot out in Gather would weigh the value of each ray against the cosine of the angle the sample ray makes with the center directional ray or surface normal.  This is analogous to the cosine of the angles between vectors in classic shading models like Lambertian Shading.

[PICTURE: uniform vs. cosine sample distribution]

Illuminance Gathered Data

The sample rays for illuminance gathering can bring back the output variables for Surface, Volume and Displacement Shaders. It is important to know that these values are retrieved by first executing the shaders on the point intersected by the sample ray. These values are labeled as shadertypes output parameters.

The sample rays for illuminance gathering can also bring back the input variables for the Surface, Volume and Displacement shaders that are available before the shader’s execution. These are known as Gather’s primitives output parameters.

The third data type available to Gather is Attributes assigned to the intersected point. Yes, Attributes as in Graphics State Attributes. These are the attribute output parameters.
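A hedged illuminance-category sketch (variable names illustrative), using the shadertype output parameter "surface:Ci" described above:

    color hitcolor = 0, indirect = 0;
    uniform float samples = 64;
    gather("illuminance", P, normalize(N), PI/2, samples, "surface:Ci", hitcolor) {
        indirect += hitcolor;     /* the ray hit something: its surface shader ran */
    } else {
        /* the ray missed: an environment lookup could go in here */
    }
    indirect /= samples;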

Samplepattern Gathered Data

The rays here are created in this category, but do not bring back information about the points they intersect with. The information created gives us the ray’s origin, direction and length.

These are Gather’s ray output parameters.

ILLUMINANCE

The second block statement construct shadeop is Illuminance. Compared to the Gather shadeop, Illuminance should feel like a cake-walk.

[PICTURE: the Illuminance shadeop’s cone of integration]

The Illuminance shadeop takes the integral of (combines) all of the light sources which appear in the three-dimensional cone created by an input axis (typically the surface normal), the apex of the cone (position P), and an angle which defines the width of the cone. Essentially, it obtains the L-rays and the Cl values of those L-rays, making them available for further computations. An angle of PI/2 would be a hemisphere sampling, PI would be all-encompassing, and 0 would be an infinitely thin ray along the given axis.

Notice that light sources outside of the defined integral space/cone are not included.

Additionally, you can define which light sources are allowed to be included in the integral. How the integral is computed is left up to the shader writer.

Take note that the Light Source’s own direction and spread can play into whether or not it is included in the integration.
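A hedged sketch of the construct (variable names illustrative); this loop is essentially what the diffuse() shadeop performs:

    color C = 0;
    normal Nf = faceforward(normalize(N), I);
    illuminance(P, Nf, PI/2) {        /* loop over lights in the hemisphere above P  */
        vector Ln = normalize(L);     /* L and Cl are provided for each light source */
        C += Cl * max(0, Ln . Nf);    /* Lambert's cosine term                       */
    }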

ILLUMINATE

Illuminate creates a Light Source and its Color & Opacity (Cl & Ol). The parameters you pass to it create the position, P, of the light source. Optionally, you can also create its direction vector, axis, and its coverage cone, dictated by an angle from the given axis.

The L-ray in the illuminate shadeop is the same L-ray going to the surface point being shaded.

[PICTURE: the Illuminate shadeop]

A quick note on Ol, the L-ray’s Opacity. It is deprecated. That means, for the most part, you’ll never have to worry about it or figure out its universal meaning.
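A hedged sketch, modeled on the standard pointlight shader from the spec (parameter names are the conventional ones):

    light simplepointlight(
        float intensity = 1;
        color lightcolor = 1;
        point from = point "shader" (0, 0, 0); )     /* the light position, P */
    {
        illuminate(from) {                           /* cast light in every direction from 'from' */
            Cl = intensity * lightcolor / (L . L);   /* inverse-square falloff with distance      */
        }
    }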

SOLAR

The Solar shadeop is simple in design, but it can be a bit tricky to fully understand the mechanics. It’s important to understand that Solar doesn’t have a position; rather, its position is at infinity. The two parameters that you can input are a directional vector and an angle. This can be confusing when you throw in the fact that you can define a cone like in the previous shadeops. So, if we do not specify a position, or apex, of a cone, what does the cone represent?

[PICTURE: the Solar shadeop]

When the angle is not specified (or zero), the Solar shadeop is treated like a lightdome completely surrounding the point being shaded (See the greyed rays above). When the angle is specified, the cone signifies that the Solar Light Source is only coming from the given directions in the cone. Think about it for a second: a single, infinitely small ray signifying the Solar Light Source is easily blocked. What if we want some flexibility? We increase the possible directions that the source could come from. Still confused? Imagine that the cone creates a disc sized light source at a distant infinity.

It is important to realize that the parameters for Solar are for the direction and cone of the Light Source. The visualization above might appear as if the cone angle is dependent on the surface point being shaded; it is not.

The Solar shadeop does dictate its Color, Cl, as well.
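
A distant-light sketch along these lines uses Solar with a direction and, here, a zero cone angle (the shader name and defaults are illustrative):

    light simpleDistant(float intensity = 1; color lightcolor = 1;
                        point from = point "shader" (0, 0, 0);
                        point to   = point "shader" (0, 0, 1))
    {
        /* no position to speak of: only a direction and a cone angle (0 = single direction) */
        solar(to - from, 0.0) {
            Cl = intensity * lightcolor;
        }
    }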

Something interesting to note: “wrapping” occurs when the angle is so large that objects which normally wouldn’t be shaded by a single point light at infinity are in fact seen and shaded.

[Figure: Solar shadeop wrapping]

There will be Case Study posts that will go up shortly after this Chapter is finished. They will further explain situations like these.

AMBIENCE

So, what if a Light Source shader doesn’t contain an Illuminate or Solar shadeop? Then no directional light gets cast by the shader. Unless…you create an Ambience shadeop. Now, what would you do with Ambience? Set a Cl value, I suppose. With global illumination techniques widely available, this isn’t the most reached-for tool in the shed nowadays.

Function Shadeops

These shadeops directly return a specific value of a specific type. In theory, you could perform most of the Shading & Lighting functions with the Construct Shadeops, but the internal code for the Function Shadeops can provide finer control or better efficiency for the very specific task for which the shadeop was created.

There are four main types of Function Shadeops:

➀-Shading & Lighting
➁-Texture Mapping
➂-Shadowing
➃-General Type

SHADING & LIGHTING FUNCTION SHADEOPS

Ambient:

[Returns Color]

Gives the ambient color value at the shaded surface point. The point must be lit by an ambient Light Source (see the “Ambience” shadeop above).

Caustic:

[Returns Color]

Caustics are the result of a specular light ray reflecting off of a surface onto a diffuse surface, OR the result of a specular light ray refracting through a surface and landing on a diffuse surface.

[Figure: the Caustic shadeop]

Caustic needs two phases.  The first is to create a caustic photon map (.cpm) of the scene.  This would be all of the points, P, in the illustration above.  Then the shadeop Caustic can read the position and normal for the respective point in the photon map to shade the diffused points.

Caustic uses ray tracing to fire the rays from the light source to its final destination.
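
In use, the second phase is just a call in the surface shader. The photon-map attribute shown in the comment is the usual way to point PRMan at the .cpm file, but treat that exact RiB spelling, and the file name, as assumptions here:

    /* phase two: read the caustic photon map for the point being shaded */
    /* (phase one builds the map, e.g. Attribute "photon" "causticmap" ["scene.cpm"] in the RiB) */
    normal Nn = normalize(N);
    Ci += Cs * caustic(P, Nn);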

Diffuse:

[Returns Color]

Diffuse wraps up the Illuminance shadeop in a form which produces a Lambertian shading model of the point being shaded.

[Figure: the Diffuse shadeop (Lambertian shading)]

The “^” above the Normal and the L-ray signifies that these vectors are “normalized.”  I’ll put up a vector math post sometime in the future.

The Diffuse shadeop loops through a hemispherical region above the shaded point, P, to find all of the Light Sources.  For each Light Source, it calculates the dot product between the L-ray and the Normal.  This is a value between 0.0 – 1.0.  This is multiplied by the Cl of the L-ray and gives the Color for the shaded point.  This Color is often used for Ci.

Notice that the viewing angle, created by the I-ray, does not affect how the point is shaded.  This is an important property of Lambertian shading.
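
A matte-style surface shader built on Diffuse might look like the sketch below (the shader and parameter names are mine); the diffuse(Nf) call is roughly the Illuminance loop weighted by normalize(L).Nf that was just described:

    surface simpleMatte(float Ka = 1, Kd = 1)
    {
        normal Nf = faceforward(normalize(N), I);
        Oi = Os;
        /* ambient() + Lambertian diffuse(); note the I-ray never enters the math */
        Ci = Oi * Cs * (Ka * ambient() + Kd * diffuse(Nf));
    }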

Indirectdiffuse & Occlusion:

[Returns Color (Indirectdiffuse); Float (Occlusion)]

Consider Indirectdiffuse & Occlusion as specialized Gather shadeops.  They shoot sample rays into a hemispherical region above the surface, centered around the surface Normal that is passed to them.

Indirectdiffuse returns the diffused shading of all the points that the samples hit.  This shadeop can be run in a ray tracing mode or a point-based mode.  Point Based is a shading technique we haven’t described yet.  Occlusion, by contrast, returns how much of that hemisphere is blocked by surrounding geometry.

Let me stop here for an important note about sample rays in general, for shadeops which require them.  It is most efficient to use a number of sample rays that is 4 times a squared number (Integer). Examples: 4, 16, 36, 64, 100, 144, 196, …
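
As a sketch of how the pair might be used inside a surface shader (the sample count, the "maxdist" argument and the simple blend at the end are assumptions for illustration):

    /* sketch: occlusion darkens, indirectdiffuse adds bounced color */
    normal Nn = normalize(N);
    uniform float samples = 64;                /* 4 x 16, per the note above */
    float  occ      = occlusion(P, Nn, samples, "maxdist", 10);
    color  indirect = indirectdiffuse(P, Nn, samples, "maxdist", 10);
    Ci = Cs * (1 - occ) + indirect;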

Phong

[Returns Color]

Phong is a type of Specular Light Model.  Unlike the Diffuse Light Model, the Specular Light Model is very much dependent upon the viewing angle, I.

[Figure: the Phong specular model]

Phong specifically takes the surface Normal, N, and uses the I-ray to create a normalized Reflection Ray, the R-ray.  The Dot Product of the R-ray and each L-ray is taken in an Illuminance shadeop loop.  If the Dot Product is greater than 0.0, it is raised to a power; otherwise, no Color is returned.  This power exponent dictates the fall-off, or size, of the specular highlight.  How does it do this?

Well, the Dot Product of two normalized vectors will be between 0.0 – 1.0.  A number in this range raised to an exponent yields interesting results.  If the exponent is positive and < 1.0, it raises the Dot Product above itself, approaching 1.0 as the exponent approaches 0.0.  If the exponent is positive and > 1.0, it lowers the Dot Product, which approaches 0.0 as the exponent climbs.

[Figure: exponent/power behavior in the Phong specular model]

The final number calculated from all of this is multiplied by the Cl from the L-ray, and this provides the return Color with which you can shade your surface point, if you so desire.

It may take some time to let the exponent section sink in.  That’s ok.
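
Here is a hand-rolled sketch of roughly what the phong() shadeop computes (the size value is arbitrary); the built-in call in the last comment does the same job in one line:

    /* sketch: Phong-style highlight via a reflected view direction */
    normal Nn = normalize(N);
    vector V  = -normalize(I);                 /* from P back toward the eye */
    vector R  = normalize(reflect(-V, Nn));    /* the R-ray: the view direction mirrored about N */
    uniform float size = 20;                   /* bigger size = tighter highlight */
    color hi = 0;
    illuminance(P, Nn, PI/2) {
        float rl = normalize(L) . R;
        if (rl > 0.0)
            hi += Cl * pow(rl, size);          /* the Dot Product raised to the power */
    }
    /* the built-in equivalent: hi = phong(Nn, V, size); */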

SpecularBRDF

[Returns Color; Float]

SpecularBRDF utilizes the half-vector method of finding the specular highlight.  If the view angle, I, is a perfect reflection of a light source, L, then the angle of incidence equals the angle of reflection.  In this case, the Normal vector lies halfway between the I-ray and the L-ray, and the full value of the specular highlight is reflected.  We create a half-vector which sits halfway between the I-ray and the L-ray.  If it is dead-on with the surface Normal, N, then we have full reflection.  As the half-vector deviates from this, the value of the specular highlight reflected decreases.

[Figure: the SpecularBRDF half-vector]

Some new vector math here:  to get the half-vector, we add the normalized L-ray to the normalized view vector (the I-ray flipped to point back toward the eye), and then normalize the result before using it.

The SpecularBRDF includes a parameter similar to Phong’s size.  It is known as roughness.  Roughness is essentially the inverse of size in how it is computed.  As roughness goes up, the final power exponent for the Dot Product approaches zero.
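
A sketch of the half-vector math (the roughness value is arbitrary, and the snippet assumes it sits inside an Illuminance loop so that L is defined), with the shadeop call alongside for comparison:

    /* sketch: half-vector specular term for one L-ray (inside an illuminance loop) */
    normal Nn = normalize(N);
    vector V  = -normalize(I);                    /* toward the eye */
    vector Ln = normalize(L);                     /* toward the light */
    vector H  = normalize(Ln + V);                /* the half-vector */
    uniform float roughness = 0.1;
    float spec = pow(max(0.0, Nn . H), 1 / roughness);
    /* the built-in does much the same: specularbrdf(Ln, Nn, V, roughness) */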

Specular

[Returns Color]

Specular essentially uses the SpecularBRDF shadeop and runs it in an Illuminance shadeop loop for each L-ray.

[Figure: the classic Specular shadeop]
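
Dropping Specular into a plastic-style shader ties the diffuse and specular models together (the shader name and default values are mine):

    surface simplePlastic(float Ka = 1, Kd = 0.5, Ks = 0.5, roughness = 0.1)
    {
        normal Nf = faceforward(normalize(N), I);
        vector V  = -normalize(I);
        Oi = Os;
        Ci = Oi * (Cs * (Ka * ambient() + Kd * diffuse(Nf))
                   + Ks * specular(Nf, V, roughness));
    }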

Subsurface

To be continued…

Trace

[Returns Color; Float]

Trace shoots a ray tracing ray.  It shoots from the current shaded point, P, in the direction of a specified vector, dir.   It returns the Color of the surface point that it hits while traveling.

[Figure: the Trace shadeop]

As you may have noticed, Trace can also return a Float value.  This value is the distance the ray traveled to reach the surface it hit.
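
A common sketch is a mirror-reflection lookup; the reflection direction is computed from I and N, and the mix-in weight is an arbitrary choice for illustration:

    /* sketch: trace a single ray along the mirror direction and use what it hits */
    normal Nn   = normalize(N);
    vector dir  = normalize(reflect(normalize(I), Nn));   /* mirror the I-ray about N */
    color  refl = trace(P, dir);
    Ci += 0.5 * refl;    /* arbitrary mix-in weight */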

Transmission

[Returns Color]

Transmission returns the amount of transmission allowed between a source point, Psrc, and a destination point, Pdst.  Transmission is the complement of opacity [ Transmission = 1 - Opacity ].  The result of Transmission can be multiplied against the Light Source Color, Cl, to determine how much light makes it to a certain surface point when there are transparent/translucent items along the way.

[Figure: the Transmission shadeop]

Where Psrc & Pdst are placed is entirely up to you.  You can even specify a cone angle and a number of samples to fire.  Why would you do this?

If you set Psrc as the surface point and Pdst as the Light Source, then specifying a sample cone effectively creates soft shadows on the surface point, because it turns the Light Source into a kind of area light.

Also, this all looks oddly similar to the Solar shadeop, minus the transmission gathering.
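
Following the soft-shadow idea above, a light shader sketch might attenuate Cl with Transmission. The "samples" and "samplecone" arguments are the optional cone/sample controls mentioned above, and the shader name and values are illustrative:

    light shadowedPoint(float intensity = 1; color lightcolor = 1;
                        point from = point "shader" (0, 0, 0))
    {
        illuminate(from) {
            Cl = intensity * lightcolor / (L . L);
            /* Psrc = the surface point Ps, Pdst = the light position "from" */
            Cl *= transmission(Ps, from, "samples", 16, "samplecone", 0.05);
        }
    }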

TEXTURE MAPPING FUNCTION SHADEOPS

SHADOWING FUNCTION SHADEOPS

GENERAL TYPE FUNCTION SHADEOPS
