Blinn-Phong reflectance model

L_d = k_d \frac I{r^2} \max(0, \mathbf n\cdot \mathbf l),

where k_d is the diffuse coefficient (color), I the intensity of the light, r the distance to the light, \mathbf n the surface normal, and \mathbf l the direction toward the light.

Specular Term

Intensity depends on the view direction. If the half vector \mathbf h of the view vector \mathbf v and the light vector \mathbf l is close to \mathbf n, a specular highlight should appear. The specularly reflected light L_s is
L_s = k_s\frac I{r^2} \max(0, \cos\alpha)^p = k_s \frac I{r^2} \max(0, \mathbf n \cdot \mathbf h)^p.
\mathbf h = \frac{\mathbf v+\mathbf l}{|\mathbf v+\mathbf l|}.
The exponent p makes the specular term fall off quickly when \mathbf n and \mathbf h diverge, so the highlight stays narrow. For most situations, p is around 100 to 200.

Ambient Term

L_a = k_a I_a

The ambient term does not depend on the light direction or the view direction. This is an assumption and simplification.

Blinn-Phong Reflection Model

The full expression is
L = L_a + L_d + L_s.
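
The three terms above can be sketched as a single scalar-intensity function (a minimal sketch with pure-Python vectors; the function name and default p=100 are my choices, not from the notes):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def blinn_phong(kd, ks, ka, I, Ia, r, n, l, v, p=100):
    """L = ka*Ia + kd*(I/r^2)*max(0, n.l) + ks*(I/r^2)*max(0, n.h)^p."""
    n, l, v = normalize(n), normalize(l), normalize(v)
    # half vector between view and light directions
    h = normalize(tuple(a + b for a, b in zip(v, l)))
    falloff = I / (r * r)
    ambient = ka * Ia
    diffuse = kd * falloff * max(0.0, dot(n, l))
    specular = ks * falloff * max(0.0, dot(n, h)) ** p
    return ambient + diffuse + specular
```

In practice kd, ks, ka are RGB triples and the function is applied per channel.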

Shading Frequencies

  • Flat shading (Shade each triangle)
  • Gouraud shading (Shade each vertex): Interpolate colors from vertices across triangle

  • Phong shading (Shade each pixel): Interpolate normal vectors across each triangle

If the model is detailed enough, flat shading looks almost the same as the other two.

Per-Vertex Normal Vectors

N_v = \frac{\sum_i N_i}{|\sum_i N_i|}

where N_i are the normal vectors of the adjacent faces (they can also be weighted by face area).
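
The formula is just "sum, then normalize" (a minimal sketch; the function name is mine):

```python
import math

def vertex_normal(face_normals):
    """Average the adjacent face normals, then normalize the result."""
    s = [sum(n[i] for n in face_normals) for i in range(3)]
    length = math.sqrt(sum(c * c for c in s))
    return tuple(c / length for c in s)
```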

Per-Pixel Normal Vectors

Barycentric interpolation of vertex normal vectors

Graphics (Real-time Rendering) Pipeline

The pipeline is already defined in the GPU hardware.

Texture Mapping

If a wooden floor has grain (texture): to apply the texture, the surface is unwrapped to a 2D image with coordinates in [0,1]\times [0,1]. Each triangle "copies" a piece of the texture onto the surface.

The same texture can be used multiple times, in which case it should be tileable.

Interpolation Across Triangles: Barycentric Coordinates

To obtain smoothly varying values across triangles.

A barycentric coordinate is a tuple (\alpha, \beta, \gamma) with \alpha+\beta+\gamma = 1. If the point is inside the triangle, then \alpha,\beta,\gamma > 0. The point satisfies:
P = \alpha A+\beta B+\gamma C.

The barycentric coordinate inside a triangle can be defined as such:

\alpha =A_a/S,
\beta =A_b/S,
\gamma = A_c/S,
where S is the area of the whole triangle and A_a, A_b, A_c are the areas of the sub-triangles opposite vertices A, B, C, respectively.

The barycentric coordinate of centroid is (1/3,1/3,1/3).

When only the Cartesian coordinates are known,
\alpha = \frac{-(x-x_B)(y_C-y_B)+(y-y_B)(x_C-x_B)}{-(x_A-x_B)(y_C-y_B)+(y_A-y_B)(x_C-x_B)},
\beta = \frac{-(x-x_C)(y_A-y_C)+(y-y_C)(x_A-x_C)}{-(x_B-x_C)(y_A-y_C)+(y_B-y_C)(x_A-x_C)},
\gamma = 1-\alpha-\beta.
Barycentric coordinates are not invariant under projection, so attributes should be interpolated in 3D before projecting.
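
The Cartesian formulas above translate directly to code (a sketch; points are 2D tuples):

```python
def barycentric(x, y, A, B, C):
    """Barycentric coordinates of (x, y) w.r.t. triangle with 2D vertices A, B, C."""
    (xA, yA), (xB, yB), (xC, yC) = A, B, C
    alpha = (-(x - xB) * (yC - yB) + (y - yB) * (xC - xB)) / \
            (-(xA - xB) * (yC - yB) + (yA - yB) * (xC - xB))
    beta = (-(x - xC) * (yA - yC) + (y - yC) * (xA - xC)) / \
           (-(xB - xC) * (yA - yC) + (yB - yC) * (xA - xC))
    return alpha, beta, 1.0 - alpha - beta
```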

Apply textures

  • Find the barycentric coordinates of the point inside the object's triangle

  • Find the corresponding place in the texture

  • Apply the texture color

Texture Magnification

Texture too small

If the texture is too small, its resolution is insufficient: a screen pixel maps to a non-integer location between texels.

A pixel of a texture is called a texel.

Define the linear interpolation function

{\rm lerp}(x,v_0 ,v_1) = v_0 +x(v_1 - v_0).

Bilinear interpolation

The color at a non-integer point near the 4 texels u_{00},u_{01},u_{10},u_{11} is
f(x,y) = {\rm lerp}(t,u_0,u_1),
u_0 = {\rm lerp}(s, u_{00}, u_{10}),
u_1 = {\rm lerp}(s, u_{01},u_{11}),
where s,t are the fractional offsets from u_{00} along the x-axis and y-axis.
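
The two-then-one lerp structure looks like this (a minimal sketch; argument order u00, u10, u01, u11 is my convention):

```python
def lerp(x, v0, v1):
    return v0 + x * (v1 - v0)

def bilinear(s, t, u00, u10, u01, u11):
    """s, t in [0,1]: fractional offsets from u00 along x and y."""
    u0 = lerp(s, u00, u10)  # interpolate along the bottom edge
    u1 = lerp(s, u01, u11)  # interpolate along the top edge
    return lerp(t, u0, u1)  # interpolate between the two edges
```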

Bicubic interpolation

The bicubic interpolation uses 16 texels near a point.

Texture too large

Moiré patterns and jaggies will appear. This is because a single pixel covers a very large area of the texture. Supersampling can deal with it, but it is costly.

The solution: do not sample at all; instead, just get the average value within a range.

Point Query and Range Query


Use a mipmap, which allows fast, approximate, square range queries.

A mipmap stores the texture at multiple levels; each level has half the pixels of the previous level along each edge. This costs some extra memory, but not much, since \sum_{i=1}^\infty 4^{-i}=1/3.

Map a pixel and its neighbour at distance 1 (above or to the right) into texture space; if their distance there becomes L, apply the level-\log_2 L mipmap to the pixel.

If \log_2 L is not an integer, do bilinear interpolation on the two nearest integer levels and blend. Since we already do bilinear interpolation within each single level, the whole process is called trilinear interpolation.
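
Picking the two levels and the blend weight can be sketched like this (the function name and clamping behaviour are my assumptions):

```python
import math

def trilinear_levels(L, max_level):
    """Continuous mipmap level D = log2(L), clamped to [0, max_level].
    Returns the two integer levels to sample and the blend weight D - floor(D)."""
    D = min(max(math.log2(max(L, 1e-9)), 0.0), max_level)
    d0 = int(D)
    d1 = min(d0 + 1, max_level)
    return d0, d1, D - d0
```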

If the pixel's footprint in texture space is not square, the mipmap overblurs.

Anisotropic Filtering

Mipmaps only handle square footprints; however, the pixel's footprint in texture space may be irregular. Ripmaps (anisotropic filtering) handle axis-aligned rectangular footprints, but are still ineffective for diagonal (sloped) footprints.

The additional memory cost is 3 times the original texture.

EWA Filtering

EWA filtering uses multiple lookups to cover the irregular footprint of a pixel; it gives much higher quality, but at a higher cost.

Applications of Textures

texture = memory + range query

Texture can be used in storing light from the environment.

Spherical Environment Map

Store environment light in a sphere

The problem is: spherical map may distort the top and bottom parts.

Cube Map

Store the environment in a cube instead of a sphere. The top and bottom are not distorted, but a direction-to-face computation is needed.

Texture can affect shading

Bump Mapping

Textures can also store height (bumpiness) information, to fake detailed geometry without adding triangles.

How to perturb the normal in 3D: for a height map p(u,v), the perturbed (unnormalized) normal in the local tangent frame is
{\bf n} = (-{\rm d} p/{\rm d} u,\, -{\rm d} p/{\rm d} v,\, 1),
which is then normalized.

Displacement mapping

A more advanced approach. It uses the same texture as bump mapping, but actually moves the vertices.

Displacement mapping needs more (finer) triangles, enough to follow the height map.

3D texture

3D procedural noise + solid modeling

Define a function f(x,y,z) that gives the noise value of the color at location (x,y,z).


Motivation: how do we represent the edges / surfaces of objects?

Implicit Representations

Defined by f(x,y,z)=0. It is not easy to enumerate the points on the surface, but it is easy to judge whether a point is inside / on / outside the surface.

Explicit Representations

f:\R^2\to \R^3, (u,v)\mapsto (x,y,z).

Sampling is easy with an explicit representation. It is difficult to know whether a point is inside / on / outside the surface.

There is no "best" representation.

More Implicit Representations

  • Algebraic

  • Constructive solid geometry (CSG)

    Combine implicit geometry via boolean operations (intersection, union, diff of sets in \R^3)

  • Distance functions

    Instead of booleans, gradually blend surfaces together using distance functions

    Define a function giving the distance to an object: negative inside, positive outside.

    Blend the distance functions of two objects into a new one (e.g. by averaging them); the blended geometry is then defined by the new distance function.

  • Level set

    Store a grid of values approximating function. Surface is found where interpolated values equal zero. It can be defined in \R^3, to define 3D textures.

  • Fractals (分形)

    Exhibit self-similarity, detail at all scales.

Pros and Cons of Implicit Representations

Pros:

  • Compact description
  • Certain queries are easy (inside / outside / on / distance)
  • Good for ray-to-surface intersection
  • For simple shapes, exact description / no sampling error
  • Easy to handle changes in topology (e.g. fluids)

Cons:

  • Difficult to model complex shapes

Explicit Representations in Graphics

Point Cloud

List of points (x,y,z).

Easily represent any kind of geometry, often converted into polygon mesh.

Difficult to draw in undersampled regions.

Polygon Mesh

Stores vertices and polygons (often triangles and quads).

Easier to do processing / simulation, adaptive sampling

Perhaps most common representation

.obj Format

Wavefront Object File (.obj) Format: a text file that specifies vertices, normals, texture coordinates and their connectivities.

  • v: coordinate of point (x,y,z)
  • vt: texture coordinates
  • vn: normal vectors
  • f: the connectivity of vertices: f v1/vt1/vn1 v2/vt2/vn2 v3/vt3/vn3 (indices are 1-based)
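
A minimal (hypothetical) .obj file for a single textured triangle, using the four record types above:

```
# one triangle in the z = 0 plane
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
vt 0.0 0.0
vt 1.0 0.0
vt 0.0 1.0
vn 0.0 0.0 1.0
f 1/1/1 2/2/1 3/3/1
```

Note that all three vertices of the face share normal index 1.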


Bezier Curves

The Bezier curve can be defined by a sequence of control points.

Evaluating Bezier Curves - de Casteljau Algorithm

Consider b_0, \dots, b_n and time t \in [0,1].

Define b_i^0 = b_i and b_i^k = b_i^{k-1}+t(b_{i+1}^{k-1}-b_i^{k-1}); the sequence \{b_i^k\}_i has one fewer point than \{b_i^{k-1}\}_i. The final remaining point is the point on the Bezier curve at time t.

The closed-form expression is:
{\bf b}(t) = \sum_{i=0}^n B_i^n (t)\, {\bf b}_i,
B_i^n(t) = \binom{n}{i} t^i (1-t)^{n-i}.
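
The de Casteljau recursion is a short loop of repeated lerps (a minimal sketch; control points are tuples of any dimension):

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t by repeatedly
    lerping adjacent control points until one point remains."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [tuple(a + t * (b - a) for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]
```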

  • Invariant under affine transformation (obvious)

  • The start and end points are the first and last control points

  • Cubic case: {\bf b}'(0) = 3({\bf b}_1-{\bf b}_0), {\bf b}'(1) = 3({\bf b}_3-{\bf b}_2) (cubic case uses 4 points)
  • Convex hull property: the whole curve is inside the convex hull of the control points (obvious)

Piecewise Bezier Curves

If the desired curve is not smooth (i.e. has sharp corners), use a piecewise Bezier curve to represent it.

Piecewise cubic Bezier (4 control points per piece) is the most common. Using the derivative formula for the cubic case, each piece can be defined by its start point, end point, and the derivatives at those two points. In this way, a sequence of points with corresponding tangent directions defines a C^1-smooth piecewise Bezier curve.

Other Types of Splines

A spline is a continuous curve constructed so as to pass through a given set of points and to have a certain number of continuous derivatives.

B-splines:

  • Short for basis splines
  • Require more information than Bezier curves
  • Satisfy all the important properties that Bezier curves have (i.e. they are a superset of Bezier curves)
  • More complicated


Bezier Surfaces

Extended from Bezier curve

Use 4\times 4 = 16 control points to define 4 cubic Bezier curves horizontally, then define a vertical Bezier curve through the 4 curve points at each parameter value.

Evaluation of Bezier Surfaces

Requires a two-dimensional parameter (u,v)\in[0,1]^2.

  • Find Bezier curve w.r.t u at v=0,1/3,2/3,1;
  • Get 4 points at time u on 4 curves;
  • Find the Bezier curve according to the 4 points;
  • Get the point at time v.
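
The four steps above can be sketched as nested de Casteljau evaluations (a minimal scalar version for one coordinate; apply it per coordinate for 3D points — the function names are mine):

```python
def de_casteljau(values, t):
    """1D de Casteljau: repeatedly lerp adjacent values until one remains."""
    vals = list(values)
    while len(vals) > 1:
        vals = [a + t * (b - a) for a, b in zip(vals, vals[1:])]
    return vals[0]

def bezier_surface(grid, u, v):
    """grid: 4x4 control values. Evaluate each row at u,
    then evaluate the resulting 4-point column curve at v."""
    return de_casteljau([de_casteljau(row, u) for row in grid], v)
```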

Mesh Operations: Geometry Processing

  • Mesh subdivision (upsampling). If the surface is too coarse in some regions, use more triangles (more accurate).
  • Mesh simplification (downsampling). If the surface is smooth in some regions, use fewer triangles to represent it (less storage).
  • Mesh regularization. Make the triangles closer to regular (as in finite element methods).
Subdivision Surfaces

Loop Subdivision

("Loop" is the inventor's surname, not a loop.)

  • Split each triangle into four (e.g. at middle points)

  • Assign new vertex positions according to weights

A new (edge) point is set to 3/8\,(A+B)+1/8\,(C+D), where A,B are the endpoints of the edge the new point lies on, and C,D are the remaining vertices of the two triangles sharing that edge.

An old point is updated to (1-nu)\cdot{\rm original\ position}+u\cdot{\rm neighbour\ position\ sum}, where n is the vertex degree, u=3/16 if n=3, otherwise u=3/(8n).
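
The two update rules can be written directly from the weights (a minimal sketch for interior vertices only; the function names are mine):

```python
def loop_new_edge_point(A, B, C, D):
    """New point on edge AB shared by triangles ABC and ABD:
    3/8 (A + B) + 1/8 (C + D). Points are tuples."""
    return tuple(0.375 * (a + b) + 0.125 * (c + d)
                 for a, b, c, d in zip(A, B, C, D))

def loop_old_vertex(p, neighbours):
    """Updated position of an old vertex with degree n = len(neighbours):
    (1 - n*u) * p + u * sum(neighbours), u = 3/16 if n == 3 else 3/(8n)."""
    n = len(neighbours)
    u = 3.0 / 16.0 if n == 3 else 3.0 / (8.0 * n)
    sums = [sum(q[i] for q in neighbours) for i in range(len(p))]
    return tuple((1 - n * u) * pi + u * si for pi, si in zip(p, sums))
```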

Catmull-Clark Subdivision (General Mesh)

Non-quad face: a face that is not a quadrilateral.

Extraordinary vertex: a vertex with degree \neq 4.

After one subdivision, the midpoints of all faces and edges become new vertices. Each non-quad face introduces one new extraordinary vertex, and then all non-quad faces disappear (every face becomes a quad). When the procedure repeats, the number of extraordinary vertices no longer increases, since no new non-quad faces appear.

face point: f = 1/4 \sum_{i=1}^4 v_i

edge point: e = 1/4 (v_1+v_2+f_1+f_2)

vertex point: v = 1/16 (4p+\sum_{i=1}^4(f_i+2m_i)), where p is the old vertex, m_i the midpoints of its incident edges, and f_i the adjacent face points.

Loop subdivision only works on triangle meshes; Catmull-Clark works on general meshes (including quads).

Mesh Simplification

Goal: reduce the number of mesh elements while maintaining the overall shape.

Similar to mipmap

Collapsing Edge

Collapse an edge to a single point. Use quadric error metrics to decide which edge to collapse.

Quadric error metric: the sum of squared distances from the new point to the planes of the original faces.

Each edge gets a corresponding minimum quadric error. Choose the edge with the minimum error to collapse, then recompute the errors of the edges affected by the collapse (using a priority queue).

This is a greedy algorithm.


Shadow Mapping

An Image-space Algorithm.

  • No knowledge of scene's geometry during shadow computation
  • must deal with aliasing artifacts

Key idea: the points NOT in shadow must be seen both by the light and by the camera.

  • Pass 1: Render from Light

    "See" from the light source and record the depth of all visible points (a depth buffer).

  • Pass 2A: Render from Eye

    "See" from eye.

  • Pass 2B: Project to light

    Project the points seen from the eye back to the light and compare against the light's depth buffer. If the depths are not equal, the point is not "seen" by the light, so it is in shadow.

Most used method for generating shadows.
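
The Pass 2B comparison boils down to one depth test per shaded point (a minimal sketch; the dictionary-as-depth-buffer and the bias parameter are my simplifications — real implementations need a depth bias to avoid self-shadowing acne):

```python
def in_shadow(depth_map, texel, depth_from_light, bias=1e-3):
    """A point is shadowed if the light recorded something strictly
    nearer than this point at the texel it projects to."""
    return depth_from_light > depth_map[texel] + bias
```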

Problems with shadow maps

  • Floating-point precision when comparing against the light's depth buffer (a bias is needed)
  • The limited resolution of the render from the light source

  • Hard shadows vs. soft shadows. The light source is not always a point; if the light source has area, the shadow edge is not sharp (a soft shadow).

Ray Tracing

Why use ray tracing?

Rasterization cannot handle global effects well:

  • Soft shadows
  • Glossy reflection
  • Indirect illumination

Rasterization is fast, but quality is relatively low. Ray tracing is accurate, but is very slow.

Rasterization: real-time, ray tracing: offline

10K CPU core hours to render one frame in production

Basic Ray Tracing Algorithm

Three ideas about light rays:

  • Light travels in straight lines
  • Light rays do not collide with each other when they cross
  • Light rays travel from the light sources to the eye (and, by reciprocity, can be traced in reverse)

Ray tracing considers the inverse of light ray, from eye to light source

  • Start from the eye, shoot a ray through each pixel, and find the first object it intersects
  • Perform the shading calculation for that pixel (e.g. Blinn-Phong)

Recursive (Whitted-Style) Ray Tracing

In 1979, rendering one image took 74 minutes; a PC in 2006 took 6 seconds; a GPU in 2012 took 1/30 second.

Whitted-style ray tracing also considers refraction. Every intersection point is shaded once.

Primary ray: the ray from eye.

Secondary ray: the ray generated from refractions and reflections of primary ray.

Shadow ray: a ray from an object point to the light source, used to test visibility.

Ray Surface Intersection

A ray is defined by its origin and a direction vector:
{\bf r}(t) = {\bf o}+t{\bf d}.
A sphere with center {\bf c} and radius R satisfies
({\bf p-c})^2-R^2=0.
Solve for the intersection:
({\bf o}+t{\bf d}-{\bf c})^2-R^2=0
General implicit surface:
f({\bf o}+t{\bf d})=0,
where f({\bf p})=0 is the surface.

For triangle mesh (explicit): first consider how to intersect ray with triangle.

Just intersect with the plane of triangle, and test if intersection point is inside triangle.

To get the plane, just use a vertex {\bf p}' of triangle and the normal vector {\bf N}.
{(\bf p-p')\cdot N}=0.
Substitute {\bf p} = {\bf o}+t{\bf d} and solve for t.

A faster approach to intersect a ray with a triangle, the Moller-Trumbore algorithm, writes the hit point directly in barycentric form:
{\bf o}+t{\bf d} = (1-b_1-b_2){\bf P}_0+b_1 {\bf P}_1+b_2 {\bf P}_2.
Solve the 3\times 3 linear system for t, b_1, b_2. If t < 0 or any of b_1, b_2, 1-b_1-b_2 is negative, there is no intersection.
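
The linear system is usually solved with Cramer's rule via cross products (a sketch; the epsilon for parallel rays is my choice):

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def moller_trumbore(o, d, p0, p1, p2, eps=1e-9):
    """Return (t, b1, b2) if ray o + t*d hits triangle p0 p1 p2, else None."""
    e1, e2 = sub(p1, p0), sub(p2, p0)
    s = sub(o, p0)
    s1, s2 = cross(d, e2), cross(s, e1)
    det = dot(s1, e1)
    if abs(det) < eps:
        return None  # ray is parallel to the triangle plane
    t = dot(s2, e2) / det
    b1 = dot(s1, s) / det
    b2 = dot(s2, d) / det
    if t <= 0 or b1 < 0 or b2 < 0 or b1 + b2 > 1:
        return None  # behind the ray, or outside the triangle
    return t, b1, b2
```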

Accelerate Ray Surface Intersection

Use bounding volumes.

Bounding volumes: the object is fully contained in the volume. If the ray doesn't hit the bounding volume, it cannot hit the object.

A box can be considered the "interior" (intersection) of 3 pairs of parallel slabs. We usually use an Axis-Aligned Bounding Box (AABB).

Consider the times of entering and leaving the "interior" of each pair of parallel planes. Intersect the 3 entering-to-leaving intervals to get the interval during which the ray is inside the box: the box entering time is the max of the three entering times, and the box exiting time is the min of the three exiting times. If the exit time is < 0, the box is behind the ray and there is no intersection. If the exit time is \ge 0 and the enter time is < 0, the ray's origin is inside the box.

The ray intersects the AABB iff t_enter <= t_exit and t_exit >= 0.

Use axis aligned parallel planes to simplify the calculation.

Uniform Spatial Partitions (Grids)

How to find the object from a lot of objects intersecting with ray?

  • Find the bounding box
  • Create a grid
  • Store each object in the cells it overlaps
  • For each grid cell along the ray, test intersection with the objects stored in that cell

How to divide grid?

With only one cell, there is no acceleration.

With too many cells, traversing the cells costs too much time.

Heuristic: #cells = C \cdot #objects, with C \approx 27 in 3D.

Grids work well on large collections of objects that are distributed evenly in size and space.

They fail in the "teapot in a stadium" case, i.e. a small object in a large, mostly empty space.

Spatial Partitions

  • Oct-Tree: cut space into 2\times 2 \times 2 equal boxes; if a box contains more than one object, apply the cutting procedure to it recursively.
  • KD-Tree: cut the box with an axis-aligned plane each time, alternating axes; each cut divides the objects into two groups.
  • BSP-Tree: similar to a KD-tree, but uses arbitrary planes to divide.

KD-tree is mostly used.

When constructing the tree, objects only stored in leaf nodes.

Traversing a KD-Tree:

  • Test whether the ray intersects the current node's box.
  • If there is no intersection, the children cannot intersect either. If there is, recurse to find which leaf nodes the ray passes through.
  • Test the objects stored in those leaf nodes against the ray.

A drawback: it is difficult to test whether a triangle intersects a box (needed when building the tree).

Object Partitions & Bounding Volume Hierarchy (BVH)

Each step, divide objects into two groups and find their new bounding box. Stop when necessary.

How to subdivide a node?


  1. Always split along the longest axis of the node.
  2. Split at the location of the median object (sorted by centroid), so the two halves are balanced.

Termination criterion: stop when a node contains few objects (e.g. 5).

Basic Radiometry

How do we quantify the loss of light intensity at each refraction / reflection?

Radiometry performs lighting calculations in a physically correct manner.

Radiant Energy and Flux

Def. Radiant energy is the energy of electromagnetic radiation. It is measured in joules and denoted by the symbol Q.

Def. Radiant flux is the energy emitted, reflected, transmitted or received, per unit time.
\Phi \equiv \frac{{\rm d} Q}{{\rm d} t}

Radiant Intensity

Def. The radiant (luminous) intensity is the power per unit solid angle emitted by a point light source.
I(\omega) = \frac{{\rm d} \Phi}{{\rm d} \omega}
Def. A solid angle is the area of the corresponding patch on the unit sphere, measured in steradians (sr).

Parameterize directions by (\theta, \phi). The area element on a sphere of radius r is {\rm d} A = r^2 \sin \theta \,{\rm d} \theta\, {\rm d}\phi, so the solid angle element is
{\rm d} \omega = \frac{{\rm d} A}{r^2} = \sin \theta \,{\rm d} \theta \,{\rm d} \phi.

Isotropic Point Source

\Phi = \int_{S^2} I\,{\rm d} \omega = 4\pi I
I = \frac{\Phi}{4\pi}


Irradiance

Def. The irradiance is the power per (perpendicular / projected) unit area incident on a surface point.
E({\bf x}) = \frac{{\rm d} \Phi({\bf x})}{{\rm d} A}.
Unit: {\rm lm}/{\rm m}^2, or {\rm lux}.


Radiance

Def. The radiance (luminance) is the power emitted, reflected, transmitted or received by a surface, per unit solid angle, per projected unit area.
L({\rm p}, \omega) = \frac{{\rm d}^2 \Phi({\rm p}, \omega)}{{\rm d} \omega \, {\rm d} A \cos \theta}.

Incident Radiance

Incident radiance is the irradiance per unit solid angle arriving at the surface, i.e. the light arriving at the surface along a given ray (surface point and incident direction).
L({\rm p}, \omega) = \frac{{\rm d} E({\rm p})}{{\rm d} \omega \cos \theta}

Exiting Radiance

Exiting surface radiance is the intensity per unit projected area leaving the surface. e.g. for an area light it is the light emitted along a given ray (point on surface and exit direction).
L({\rm p}, \omega) = \frac{{\rm d} I({\rm p},\omega)}{{\rm d} A \cos \theta}

Irradiance vs. Radiance

E({\rm p}) = \int_{H^2} L_i({\rm p}, \omega) \cos \theta \,{\rm d} \omega.

Bidirectional Reflectance Distribution Function (BRDF)

Reflection at a Point

Radiance from direction \omega_i becomes the power E that the patch {\rm d} A receives; this power is then redistributed as radiance toward each outgoing direction \omega_r.


Def. The bidirectional reflectance distribution function (BRDF) represents how much light is reflected into each outgoing direction \omega_r from each incoming direction.
f_r(\omega_i,\, \omega_r) = \frac{{\rm d} L_r(\omega_r)}{{\rm d} E_i(\omega_i)} = \frac{{\rm d} L_r(\omega_r)}{L_i(\omega_i) \cos \theta_i \,{\rm d} \omega_i}

The Reflection Equation

L_r({\rm p}, \omega_r) = \int_{H^2} f_r({\rm p}, \omega_i, \omega_r) L_i({\rm p}, \omega_i) \cos \theta_i \,{\rm d} \omega_i.

Note: the equation is recursive: L_i can also be L_r, the light can reflect many times.

The Rendering Equation

L_o({\rm p}, \omega_o) = L_e({\rm p}, \omega_o)+\int_{\Omega^+}L_i({\rm p}, \omega_i) f_r({\rm p}, \omega_i, \omega_o)({\bf n}\cdot \omega_i)\,{\rm d} \omega_i

The key equation of rendering.

To be discrete (several isotropic point sources):
L_r(x,\,\omega_r) = L_e(x,\, \omega_r) + \sum L_i(x,\,\omega_i) f(x,\,\omega_i,\,\omega_r) (\omega_i\cdot {\bf n})

Use Linear Operator to Represent Rendering Equation

Simplify the rendering equation to the form (supposing such a form exists)
l(u) = e(u)+\int l(v)\,K(u,v)\,{\rm d} v.
Regard K as the linear operator (Kl)(u) = \int l(v)\,K(u,v)\,{\rm d} v. Rewrite the equation:
L = E+KL.
Then (this requires \|K\|<1)
L = (I-K)^{-1}E = (I+K+K^2+\dots)E.
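
The Neumann series can be demonstrated numerically with a tiny finite-dimensional K (a toy sketch, not a renderer; the 2\times 2 matrix is an arbitrary operator with norm < 1):

```python
def matvec(K, x):
    """Apply the linear operator K (a matrix) to the vector x."""
    return [sum(K[i][j] * x[j] for j in range(len(x))) for i in range(len(K))]

def neumann(K, E, terms):
    """Approximate L = (I - K)^-1 E by the series E + K E + K^2 E + ...
    Each added term corresponds to one more light bounce."""
    L = list(E)
    term = list(E)
    for _ in range(terms):
        term = matvec(K, term)
        L = [a + b for a, b in zip(L, term)]
    return L
```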

Ray Tracing

L = E+KE+K^2 E+\dots

First term: emission directly from light sources

Second term: Direct illumination on surfaces

Third and other: indirect illumination (Bounced once, and more than once)

Probability Theory


Monte Carlo Path Tracing

Whitted-style ray tracing always performs specular reflections and refractions, and stops bouncing at diffuse surfaces.

Whitted-style cannot handle glossy reflection or global illumination.

Whitted-style is physically wrong, but the rendering equation is correct.

Path tracing traces back along only one random reflection / refraction direction per bounce. If we traced N directions, each bounce would multiply the number of rays by N, giving N^{ {\rm number\ of\ bounces} } rays in total, which cannot be evaluated quickly (tracing multiple directions per bounce is called distributed ray tracing).

However, this is noisy (the shading depends on the random directions chosen). The solution: trace more than one path per pixel and average them.

Also, the recursive algorithm might never stop; for example, a ray could bounce from a to b and back forever. We introduce Russian Roulette (RR) to solve this: with probability p the ray is traced further, otherwise it is terminated. To keep the expected shading result correct, divide the traced ray's contribution by p.
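
Why dividing by p keeps the expectation correct: E = p \cdot (f/p) + (1-p)\cdot 0 = f. A toy sketch of this (not a path tracer; the estimator function and seed are my choices):

```python
import random

def rr_estimate(f, p, n, seed=0):
    """Estimate E[f] with Russian roulette: with probability p evaluate f
    and divide by p, otherwise contribute 0. Unbiased since
    p * (f/p) + (1-p) * 0 = f."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        if rng.random() < p:
            total += f() / p
    return total / n
```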

Introduce samples per pixel (SPP). If SPP is low, the result is very noisy.

Sampling the Light

The Monte Carlo method above does not control which rays reach the light, so many are wasted. Instead, we can sample the light directly. The rendering equation integrates over {\rm d}\omega; we transform it to an integral over the light's area {\rm d} A. Clearly
{\rm d} \omega = \frac{\cos \theta'\,{\rm d} A}{|x'-x|^2},
where \theta' is the angle between the light's surface normal and the ray direction. (The Jacobian of an area transformation is always positive, so \cos \theta'\ge 0.)

Split the shading of a point into two parts: light arriving directly from the light source, and light arriving from other objects. The ray directly from the light source does not use RR.

How do we know the light is not blocked by other objects?

Just test whether the direct light path exists (i.e. shoot a shadow ray to check whether the light source is blocked). If blocked, skip the direct-light contribution.

Ray Tracing: Previous vs. Modern Concepts

Previous: "ray tracing" meant Whitted-style ray tracing.

Modern: general solution of light transport

Multiple importance sampling (MIS): sample both the hemisphere and the light.

Is path tracing still "Introductory"?

Yes. Fear the science, my friends.

Hikikomori, absolute justice!