This project provides a detailed implementation of a fundamental technique in rendering, Bidirectional Path Tracing (BDPT), following Veach's thesis. The whole project is written in pure C++ and is based on the architecture of the path tracing assignment from GAMES101 by Prof. Lingqi Yan (UCSB).
Bidirectional Path Tracing (BDPT) is an advanced global illumination algorithm that combines the strengths of path tracing and light tracing. In BDPT, paths are traced from both the camera (eye) and the light sources, and these paths are connected to form complete light transport paths. By sampling light paths from both ends, this method handles complex lighting scenarios, including caustics and indirect lighting, more effectively than unidirectional path tracing.
- Path Generation:
  - Generate a path starting from the camera (eye path).
  - Generate a path starting from a light source (light path).
- Path Connection:
  - Attempt to connect vertices between the eye path and the light path to form complete paths.
- Multiple Importance Sampling (MIS):
  - Use MIS to weight the different paths based on their contribution to the final image, reducing variance and noise.
- Accumulation:
  - Accumulate the contributions of all valid paths to determine the final pixel color.
```
function BidirectionalPathTracing(scene, camera, lightSources, maxDepth):
    image = initialize_image(camera.resolution)
    for each pixel in image:
        radiance = vec3(0)
        for each of num_samples samples:
            eyePath = generate_eye_path(camera, scene, maxDepth)
            lightPath = generate_light_path(lightSources, scene, maxDepth)
            for i in range(len(eyePath)):
                for j in range(len(lightPath)):
                    connection = connect_paths(eyePath[i], lightPath[j], scene)
                    if is_valid_connection(connection, scene):
                        weight = MIS_weight(eyePath, lightPath, i, j)
                        radiance += weight * evaluate_path_contribution(eyePath, lightPath, connection)
        image[pixel] = radiance / num_samples
    return image
```
- Eye Path Generation: The camera emits rays into the scene, and each intersection with a surface is recorded, creating a path.
- Light Path Generation: Similarly, rays are emitted from the light sources, and their intersections are recorded.
- Path Connection: The key step, where vertices from the eye and light paths are connected. The algorithm checks whether each connection is valid (e.g., not blocked by other geometry).
- MIS Weight: Multiple Importance Sampling assigns weights to the different sampling strategies, reducing noise by balancing the contributions of the individual paths.
- Path Contribution: If the connection is valid, the contribution of the path is calculated by evaluating the material properties (BRDF) along the path (a sketch of such a connection step follows after this list).
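To make the connection step concrete, here is a minimal, self-contained C++ sketch of connecting one eye vertex to one light vertex. The types (`Vec3`, `PathVertex`) and the `occluded` flag are illustrative stand-ins for the scene and shadow-ray machinery of the actual project; the repository's real identifiers may differ.

```cpp
#include <algorithm>
#include <cmath>

// Illustrative stand-ins; not the actual types used in this repository.
struct Vec3 {
    float x = 0, y = 0, z = 0;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(const Vec3& o) const { return {x * o.x, y * o.y, z * o.z}; }
    Vec3 operator*(float s)       const { return {x * s, y * s, z * s}; }
};
inline float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
inline Vec3  normalize(const Vec3& v) { float len = std::sqrt(dot(v, v)); return v * (1.0f / len); }

constexpr float PI = 3.14159265358979f;

struct PathVertex {
    Vec3 position;  // surface point
    Vec3 normal;    // shading normal at the point
    Vec3 beta;      // throughput accumulated along the subpath up to this vertex
    Vec3 albedo;    // Lambertian reflectance of the surface
};

// Lambertian BRDF: albedo / pi, independent of the incoming/outgoing directions.
inline Vec3 lambertianBRDF(const PathVertex& v) { return v.albedo * (1.0f / PI); }

// Geometry term G(a <-> b) = cos(theta_a) * cos(theta_b) / |a - b|^2.
inline float geometryTerm(const PathVertex& a, const PathVertex& b) {
    Vec3 d = b.position - a.position;
    float dist2 = dot(d, d);
    d = normalize(d);
    float cosA = std::max(0.0f, dot(a.normal, d));
    float cosB = std::max(0.0f, dot(b.normal, d * -1.0f));
    return cosA * cosB / dist2;
}

// Unweighted contribution of connecting one eye vertex with one light vertex.
// 'occluded' stands for the result of a shadow-ray query between the two points.
Vec3 connectVertices(const PathVertex& eyeV, const PathVertex& lightV, bool occluded) {
    if (occluded) return {};  // connection blocked by geometry: no contribution
    return eyeV.beta * lambertianBRDF(eyeV) * lambertianBRDF(lightV) * lightV.beta
           * geometryTerm(eyeV, lightV);
}
```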
Figure 1: Bidirectional Path Tracing, spp = 25.
Figure 2: Path Tracing, spp = 25.
For comparison, Figure 2 is generated by the standard Path Tracing algorithm, while Figure 1 is generated by the Bidirectional Path Tracing algorithm. Both images are rendered with 25 samples per pixel. At the same sample count, the Bidirectional Path Tracing algorithm produces a cleaner, more realistic image with noticeably less noise.
Create a build folder in the project directory:

```bash
mkdir build
```

Enter the build folder:

```bash
cd build
```

Run CMake:

```bash
cmake ..
```

Compile the project:

```bash
make
```

Run the project:

```bash
./BidirectionalPathTracing
```

An image file named `binary.ppm` will appear in the directory.
The solution to the rendering equation can be mathematically expressed in path-integral form as

$$I_j = \int_\Omega f_j(\bar{x})\, d\mu(\bar{x}),$$

where $\bar{x} = x_0 x_1 \cdots x_k$ is a path of length $k$, $f_j$ is the measurement contribution function of pixel $I_j$, and $\mu$ is the area-product measure on the path space $\Omega$.

In Bidirectional Path Tracing (BDPT), a path is formed by concatenating a light subpath $y_0 \cdots y_{s-1}$ with an eye subpath $z_{t-1} \cdots z_0$,

$$\bar{x}_{s,t} = y_0 \cdots y_{s-1}\, z_{t-1} \cdots z_0,$$

where $s$ and $t$ are the numbers of light and eye vertices, and the density of sampling $\bar{x}_{s,t}$ is the product of the densities of the two subpaths, denoted $p_{s,t}(\bar{x}_{s,t})$.

Substituting this into the Monte Carlo sampling formula, the contribution of each sample is

$$C_{s,t} = \frac{f_j(\bar{x}_{s,t})}{p_{s,t}(\bar{x}_{s,t})}.$$
However, the above formula from Veach’s paper seems inconsistent with the formula for standard path tracing. To make both consistent, I made the following modifications:
Multiple Importance Sampling (MIS) is a technique used in rendering algorithms, including Bidirectional Path Tracing, to improve the efficiency of sampling light transport paths. It combines multiple sampling strategies by weighting each sample according to how likely the different strategies are to generate it, thereby reducing noise and improving convergence.
For a specific path of length $k$ there are $k+2$ different sampling strategies, corresponding to the $k+2$ ways of splitting the path into a light subpath with $s$ vertices and an eye subpath with $t$ vertices ($s + t = k + 1$). Typically, samples generated with lower probability have higher variance, so they should receive a smaller weight; this heuristic aligns with efficient sampling of rare events. Therefore, the final weight for the strategy $(s,t)$ is

$$w_{s,t}(\bar{x}) = \frac{p_{s,t}(\bar{x})}{\sum_{i} p_{i,\,k+1-i}(\bar{x})},$$

where $p_{i,\,k+1-i}(\bar{x})$ is the probability density with which the strategy using $i$ light vertices and $k+1-i$ eye vertices would have generated the same path $\bar{x}$.
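As a small illustration of this weighting (not the code actually used in this repository), the function below evaluates the balance-heuristic weight from a vector of per-strategy densities; the function name and its arguments are hypothetical.

```cpp
#include <cstddef>
#include <vector>

// Balance-heuristic MIS weight for the strategy with index s, given the
// densities p[i] with which each strategy i = 0..k+1 would have generated
// the same path. Illustrative sketch only.
double misWeightBalance(const std::vector<double>& p, std::size_t s) {
    double sum = 0.0;
    for (double pi : p) sum += pi;           // total density over all strategies
    return (sum > 0.0) ? p[s] / sum : 0.0;   // fraction contributed by strategy s
}
```

Replacing `p[s] / sum` with the squared densities would give the power heuristic with exponent 2, which Veach reports to work well in practice.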
In unidirectional path tracing we start only from the camera, but in Bidirectional Path Tracing we also need to sample an initial point and an initial ray from the light source. Here is a brief description of how to sample the light source:

- Uniformly select a light source in the scene.
- Sample a point on the surface of the light source as the initial vertex of the light subpath, with probability density function (PDF) $p_A(y_0)$.
- Sample a direction for the initial ray with PDF $p(y_0\rightarrow y_1|y_0)$.

For instance, in the case of an area light source:

- Randomly select a point on the area light source and set $p_A(y_0)$ to $\frac{1}{A}$, where $A$ is the area of the light source.
- Sample the direction with a cosine-weighted hemisphere sampler, so that $p(y_0\rightarrow y_1|y_0) = \frac{\cos\theta}{\pi}$, where $\theta$ is measured from the light's surface normal (a sketch of this procedure follows below).
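The sketch below shows how the initial light vertex and ray might be sampled for an area light with a cosine-weighted hemisphere sampler. `Vec3`, `LightSample`, and the function names are illustrative, not identifiers from this repository; the returned direction is expressed in the light's local frame and would still need to be rotated into world space.

```cpp
#include <algorithm>
#include <cmath>

// Illustrative sketch only; Vec3 and LightSample are stand-ins for the
// project's own types.
struct Vec3 { float x = 0, y = 0, z = 0; };

constexpr float PI = 3.14159265358979f;

struct LightSample {
    Vec3  origin;     // y0: point on the light surface
    Vec3  direction;  // direction toward y1, here in the light's local frame
    float pdfArea;    // p_A(y0) = 1 / A for uniform area sampling
    float pdfDir;     // p(y0 -> y1 | y0) = cos(theta) / pi
};

// Cosine-weighted hemisphere sample in the local frame whose +z axis is the
// light normal; u1, u2 are uniform random numbers in [0, 1).
static Vec3 cosineSampleHemisphere(float u1, float u2, float& pdfDir) {
    float r = std::sqrt(u1);
    float phi = 2.0f * PI * u2;
    float z = std::sqrt(std::max(0.0f, 1.0f - u1));
    pdfDir = z / PI;  // cos(theta) / pi
    return {r * std::cos(phi), r * std::sin(phi), z};
}

// Sample the first vertex and ray of the light subpath. 'pointOnLight' is a
// uniformly chosen point on the light and 'lightArea' is its total area A.
LightSample sampleLightSubpathStart(const Vec3& pointOnLight, float lightArea,
                                    float u1, float u2) {
    LightSample s;
    s.origin = pointOnLight;
    s.pdfArea = 1.0f / lightArea;                           // p_A(y0) = 1 / A
    s.direction = cosineSampleHemisphere(u1, u2, s.pdfDir);
    // A full implementation would rotate s.direction into world space using
    // an orthonormal basis built around the light's surface normal.
    return s;
}
```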
For simplicity, only the pinhole camera model is considered, as lens models require complex sampling.
For sampling eye vertices, there are two cases that need different handling:

- Standard case: We generate the eye subpath from the initial eye vertex. Since we uniformly sample the region of the image plane corresponding to pixel $I_j$, we set $p_A(z_0)=1$ and $p(z_0\rightarrow z_1|z_0)=1$.
- Special case: This requires more careful handling. When connecting a light subpath to an eye subpath that contains only the camera vertex, the resulting ray generally does not pass through pixel $I_j$. To make efficient use of this sample, we sample another eye vertex and add the contribution to the corresponding pixel of the light map. To do so, we need to find $f_s(z_1\rightarrow z_0\rightarrow z_{-1})=W^{(1)}_e(z_0,\omega)$ such that its integral over the directions through a pixel equals the proportion of the image area covered by that pixel.
Although the computation is complex (honestly, I don't know how to compute it), fortunately, the book "Physically Based Rendering: From Theory to Implementation" provides the computation result for the pinhole camera. The final result is

$$W^{(1)}_e(z_0,\omega) = \frac{1}{A\cos^4\theta},$$

where $A$ is the area of the image plane placed at unit distance in front of the camera, and $\theta$ is the angle between the ray and the normal at the eye point (for the pinhole model, this is the camera's viewing direction). The corresponding probability density function for the ray direction is

$$p(z_0\rightarrow z_1|z_0) = \frac{1}{A\cos^3\theta}.$$
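For illustration, the small function below evaluates these two quantities for a pinhole camera; `imagePlaneArea` (the area $A$ of the image plane at unit distance) and the function name are hypothetical, not taken from this repository.

```cpp
#include <cmath>

// Importance and ray-direction PDF of a pinhole camera, as given above.
// cosTheta is the cosine of the angle between the ray and the viewing
// direction; imagePlaneArea is the area A of the image plane at unit distance.
struct CameraImportance {
    float We;      // W_e^(1)(z0, w) = 1 / (A * cos^4(theta))
    float pdfDir;  // p(z0 -> z1 | z0) = 1 / (A * cos^3(theta))
};

CameraImportance pinholeImportance(float cosTheta, float imagePlaneArea) {
    CameraImportance c{0.0f, 0.0f};
    if (cosTheta <= 0.0f) return c;  // ray points away from the camera: zero importance
    float cos3 = cosTheta * cosTheta * cosTheta;
    c.pdfDir = 1.0f / (imagePlaneArea * cos3);
    c.We     = c.pdfDir / cosTheta;  // one extra cosine factor in the denominator
    return c;
}
```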
This project is still under development; more features might be added in the future, and there may still be some bugs in the implementation of the BDPT algorithm.

- In this project, only diffuse surfaces and reflection are considered, so only the Lambertian BRDF is used in the implementation.
- In the standard path tracing algorithm, I use the Russian Roulette technique to terminate paths. In the BDPT algorithm, however, paths are terminated by a maximum path length, and Russian Roulette is not used (see the sketch after these notes).
- In the standard path tracing algorithm, I sample the light sources directly to obtain direct illumination, which increases the probability of sampling direct lighting. As a result, the standard path tracing results may look better than those of some traditional implementations.
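To make the difference between the two termination rules concrete, here is a tiny, purely illustrative contrast; `maxDepth` and `survivalProbability` are hypothetical parameters, not the project's actual ones.

```cpp
#include <random>

// Fixed path-length cutoff, as used for the BDPT subpaths in this project.
bool terminateByDepth(int depth, int maxDepth) {
    return depth >= maxDepth;
}

// Russian Roulette, as used in the standard path tracer: the path survives
// with probability 'survivalProbability', and a surviving path's throughput
// must be divided by that probability to keep the estimator unbiased.
bool terminateByRussianRoulette(std::mt19937& rng, float survivalProbability) {
    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    return u(rng) > survivalProbability;
}
```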
- Veach, E. "Robust Monte Carlo Methods for Light Transport Simulation," Ph.D. thesis, Stanford University, Department of Computer Science, 1998.
- Arvo, J. "Analytic Methods for Simulated Light Transport," Ph.D. thesis, Yale University, 1995.