\documentclass{rtg}

\usepackage{graphicx}
\usepackage{xspace}
\usepackage{xcolor}
\usepackage{subcaption}
\newcommand{\OpenGL}{OpenGL\xspace}
\newcommand*\diff{\mathop{}\!\mathrm{d}}
\newcommand{\f}[1]{\operatorname{#1}}
\newcommand{\todo}[1]{\textcolor{red}{\textbf{#1}}}

\title{Assignment 1: Monte Carlo Integration and Path Tracing}
\deadline{2021-04-18 23:59}%2020-05-13 23:59
\teaser{
\hspace*{\fill}
\includegraphics[trim={1cm 1cm 1cm 1cm},clip,width=0.32\linewidth]{figures/cbox_ao_uniform.png}
\includegraphics[trim={1cm 1cm 1cm 1cm},clip,width=0.32\linewidth]{figures/cbox_direct_mesh_surface.png}
\includegraphics[trim={1cm 1cm 1cm 1cm},clip,width=0.32\linewidth]{figures/cbox_path_tracer_mesh.png}
\hspace*{\fill}
\label{fig:figintro}
}

\setcounter{section}{0}

\begin{document}

\maketitle

In this assignment you will implement all of the crucial parts of a Monte Carlo based rendering system.
The result will be 1. an ambient occlusion integrator, 2. a direct light renderer, and 3. a simple path tracer.
The assignments build upon each other, so be sure to test everything before continuing.
For most points in this assignment you can ignore the material BRDF and just assume white diffuse materials ($\rho = \{1,1,1\}$).

\textbf{We have updated the \texttt{assignments} repository. Please merge all upstream changes before starting to work.}
\begin{verbatim}
git checkout master
git pull
git merge submission1       # just to be sure
git push                    # just in case something fails, make a backup
git remote add upstream git@submission.cg.tuwien.ac.at:rendering-2020/assignments.git
git pull upstream master
# resolve any merge conflict, or just confirm the merge. 
git push
\end{verbatim}

\textbf{Important:} As you have seen in assignment 0, you have to register a name for your integrators (and any other additions) with the Nori framework. Our test system expects pre-defined names and attributes when invoking Nori via your solution. Please study the given scene XML files and choose the correct names for registration. It is recommended that you run the test files yourself before submission.

\section{Completing Nori's MC Intestines}
Nori is an almost complete Monte Carlo integrator.
But we have left out some crucial parts for you to complete.
By doing so, you'll get a short tour of the main MC machinery.
The main loop structure of our renderer looks something like this:
\begin{verbatim}
/* For each pixel and pixel sample */
for (y=0; y<height; ++y) {
   for (x=0; x<width; ++x) {
      for (i=0; i<N; ++i) {        // N = target sample count per pixel
         ray = compute_random_camera_ray_for_pixel(x, y)
         value = Li(ray, other, stuff)
         pixel[y][x] += value
      }
      pixel[y][x] /= N
   }
}
\end{verbatim}

Obviously, the code will be slightly different and longer in practice due to parallelisation, filtering (something we will learn about later) and general architectural design.
Look into the code, try to understand how things are done, and complete the following functions (all changes are a single line):
\begin{description}
	\item[main.cpp, renderBlock()] Iterate over all required samples (target count stored in \texttt{sampler})
	\item[block.cpp, ImageBlock::put(Point2f, Color3f)] Accumulate samples and sample count
	\item[block.cpp, ImageBlock::toBitmap()] Divide the RGB colour by the accumulated sample count (look at \texttt{Color4f}; if the count is in member \texttt{.w}, there is a function you can use)
\end{description}
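To illustrate the accumulation logic, here is a minimal self-contained sketch of the idea only; Nori's actual \texttt{ImageBlock} additionally handles filtering and block offsets, and \texttt{Color4} below is merely a stand-in for \texttt{Color4f}:

\begin{verbatim}
// Sketch of per-pixel sample accumulation, assuming the accumulated
// sample count is stored in the .w component (as in Nori's Color4f).
struct Color4 {
    float r = 0, g = 0, b = 0, w = 0;

    // cf. ImageBlock::put: accumulate the radiance sample and the count
    void put(float sr, float sg, float sb) {
        r += sr; g += sg; b += sb;
        w += 1.0f;
    }

    // cf. ImageBlock::toBitmap: divide the colour by the sample count
    Color4 normalized() const {
        if (w == 0) return Color4{};
        return Color4{ r / w, g / w, b / w, 1.0f };
    }
};
\end{verbatim}

Averaging this way is exactly the Monte Carlo mean from the main loop, just computed lazily at output time.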

For the normals integrator from last time, these changes shouldn't make a difference. 
However, for the techniques that you will implement in this assignment, they provide the basis for proper MC integration to resolve the noise in your images.
Beyond implementing them, make sure that you understand how they interconnect and how Nori converts ray samples into output pixel colors. 

As mentioned during the lecture, beyond the main loop you do not need another sample-generating loop inside the integrators.
If you were to add one in a path tracer, you would face an ever-exploding number of samples (curse of dimensionality).
\section{Ambient occlusion (3 easy points)}
Implement ambient occlusion!
Its rendering equation is
\begin{align}
	L_i(x) = \int_{\Omega} \frac{1}{\pi} \f{V}(x, x + \alpha \omega) \cos(\theta) \diff \omega,
where $L_i$ is the brightness, $x$ a position on the surface, $\f{V}$ the visibility function, $\alpha$ a constant, and $\theta$ the angle between $\omega$ and the surface normal at $x$.
The visibility function is 1 or 0, depending on whether the ray from $x$ to $x+\alpha \omega$ reaches its destination without interference. This is also commonly referred to as a shadow ray.
$\alpha$ should be configurable via XML and default to \texttt{scene->getBoundingBox().getExtents().norm()} if no value is provided (experiment with it!).
$\frac{1}{\pi}$ represents a simple white diffuse BRDF, as we explained in the lecture about light when we talked about the furnace test.

For integration, you should sample the hemisphere surface around point $x$ uniformly. 
Since Nori's main loop already takes care of computing the mean for MC integration, the function should return one sample of the integrand divided by $\f{p}(x)$. The proper value of $\f{p}(x)$ for uniform sampling was discussed in the lecture.
In addition, you will need a function that can turn a uniformly random 2D value between 0 and 1 into a uniform hemisphere sample $\omega$.
This transformation is called warping.
You can draw the 2D random values from the \texttt{sampler}.
Apply the formulas from the lecture or look at \texttt{Vector3f Warp::squareToUniformHemisphere(const Point2f \&sample)} inside \texttt{warp.cpp} and \texttt{warp.h} to generate $\omega$.
Make sure to bring $\omega$ to world space before tracing (\texttt{.shFrame.toWorld}).
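If you want to check your own derivation, the standard inversion-method warp for the uniform hemisphere looks roughly like this (a self-contained sketch, not a drop-in replacement for Nori's \texttt{Warp} class):

\begin{verbatim}
struct Vec3 { float x, y, z; };

// Map a uniform 2D sample in [0,1)^2 to a direction on the unit
// hemisphere around +z, with constant density 1/(2*pi).
Vec3 squareToUniformHemisphere(float u1, float u2) {
    const float kPi = 3.14159265358979f;
    float z = u1;                          // cos(theta), uniform in [0,1)
    float r = std::sqrt(std::max(0.0f, 1.0f - z * z));
    float phi = 2.0f * kPi * u2;           // azimuth, uniform in [0,2*pi)
    return Vec3{ r * std::cos(phi), r * std::sin(phi), z };
}
\end{verbatim}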
Pay attention to the individual mathematical factors (including those inside $\f{p}(x)$); some of them cancel out and don't need to be computed at all!
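For example, with the uniform hemisphere pdf $\f{p}(\omega) = \frac{1}{2\pi}$, one sample of the estimator reduces to
\begin{align}
	\frac{\frac{1}{\pi} \f{V}(x, x + \alpha \omega) \cos(\theta)}{\frac{1}{2\pi}} = 2 \, \f{V}(x, x + \alpha \omega) \cos(\theta),
\end{align}
so no factor of $\pi$ has to be evaluated in code at all.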
Altogether, this should be about 20 lines in a new \texttt{integrator\_ao.cpp} file (not counting boilerplate code).
Compare results with different sample counts (16, 64, 256...), do you see an improvement?
If not, go back to Completing Nori's MC Intestines!
\section{Direct lighting (up to 9 Points)}
Check the slides about light and the recaps in the Monte Carlo integration and path tracing lectures for the correct integrals.
There are two possibilities for implementing direct lighting: hemisphere sampling and light source sampling.
Hemisphere sampling only works well for very large lights (e.g., the sky), while light source sampling works especially well with small lights.
To make sure that both methods can be used, our scenes will contain area lights.
If we had point or directional lights, hemisphere sampling would not work and we could only use light source sampling (can you guess why?).
All these sampling methods can be combined using MIS (you will learn about that later).

You should start with uniform hemisphere sampling (it's very similar to ambient occlusion in terms of code structure).
Once hemisphere sampling works, you can continue with light source sampling and check whether the two methods converge to the same image when using a high number of samples.
If they don't, you have a bug, since both rendering methods are based on the same physical concepts and should eventually produce the same image (although one might be noisier than the other with low sample counts).
You may also try our provided unit tests locally (you might have to edit the Python script to correct the scene file lookup path).
\subsection{Hemisphere sampling (3 easy points)}
You should base your code on \texttt{integrator\_ao.cpp} and implement it in \\
\texttt{integrator\_direct\_lighting.cpp}.
\paragraph*{Task 1} Implement the emitter interface (create either a \texttt{parallelogram\_emitter} or \texttt{mesh\_emitter} class) and the supporting machinery.
Emitters need to read their brightness (radiance) and colour from the scene file and store them (the minimum requirements for an emitter).
A name and debug info might also be useful.
If you don't plan to implement light source sampling, you can use a dummy implementation for \texttt{Emitter::pdf()} and \texttt{Emitter::sample()}.

\paragraph*{Task 2}
Implement the integrator.
First, you need to check whether the camera ray directly hits a light source (emitter).
If so, return its colour and be done.
This is not completely correct, but you can ignore direct illumination of light sources for now.
If you hit a regular surface instead, cast a random ray according to uniform hemisphere sampling, similar to ambient occlusion (no maximum ray length this time!).
If the closest intersected object is a light, compute its contribution using the equations from the lecture, otherwise return zero (black).
This should only require a small edit from the \texttt{ao} integrator.
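In pseudocode, the hemisphere-sampling estimator could be structured roughly as follows (names are illustrative, not prescribed by the framework):
\begin{verbatim}
Li(scene, ray) {
    its = intersect(scene, ray)
    if (no intersection)      return black
    if (its is on an emitter) return its.radiance
    w = squareToUniformHemisphere(sampler.next2D())  // pdf = 1/(2*pi)
    w = its.shFrame.toWorld(w)
    its2 = intersect(scene, Ray(its.p, w))           // no maximum length!
    if (its2 is on an emitter)
        return its2.radiance * (1/pi) * cos(theta) / (1/(2*pi))
    return black
}
\end{verbatim}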
\subsection{Light surface sampling (up to 6 points)}
Light surface sampling is important for performant path tracers (there it is referred to as "next event estimation" or "direct light sampling").
In contrast to hemisphere sampling, you do not simply shoot rays into the hemisphere and hope to find light.
Instead, you try to connect hit points directly to light sources and check whether that connection is possible.
Once you implement it, you should see improvements immediately.
You will need to sample area light surfaces, i.e., you need a function to pick uniformly random points on the surface of each light.
There are 2 options, of which you should choose \textbf{one} for your implementation:
\begin{enumerate}
	\item \textbf{Parallelogram lights (3 points)}
	Parallelograms are very easy to sample uniformly, just use a linear combination $k_1 a + k_2 b$ of its side vectors $a, b$ with coefficients $k_1,k_2$ where $0\leq k_1, k_2 < 1$. Obviously, this option will restrict you to using rather basic light source shapes in your scene.

	\item \textbf{Triangle mesh lights (6 points)}
	This can give very cool results, e.g., imagine a glowing mesh.
	Mesh sampling is not that hard either: select a triangle according to its surface area (larger triangles are selected more often).
	The implementation in \texttt{nori/dpdf.h} will be useful here.
	Once you have selected a triangle, sample a point on it (\url{http://mathworld.wolfram.com/TrianglePointPicking.html}).
	
	Be careful when you reuse random numbers! Example: with 2 triangles, \texttt{s = rand(0, 1) < 0.5} would give you the first triangle.
	If you then reuse \texttt{s} for sampling the position (after using it to discretely sample the triangle), you will clearly only ever sample the first half of the first and the second half of the second triangle.
	To avoid such artefacts, \texttt{s} needs to be shifted and scaled;
	\texttt{DiscretePDF::sampleReuse} does precisely that.
	Later on, you could use it to sample the light as well (it's enough to query one random light per sample if you normalise properly).
	But if you are uncertain, you can always just draw additional fresh random numbers from \texttt{sampler}.
	%More complex samplers would be needed for large meshes, for instance such that do importance sampling based on distance, cosine, etc.
	%Please don't go that far for now.
\end{enumerate}
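A self-contained sketch of the triangle mesh option follows; the CDF walk below hand-rolls what \texttt{DiscretePDF} in \texttt{nori/dpdf.h} provides, including the \texttt{sampleReuse}-style rescaling, and all names are illustrative:

\begin{verbatim}
struct Vec3 { float x, y, z; };

// Uniformly sample a point on triangle (a,b,c) via the square-root
// parameterisation (cf. the TrianglePointPicking link above).
Vec3 sampleTriangle(const Vec3& a, const Vec3& b, const Vec3& c,
                    float u1, float u2) {
    float su1 = std::sqrt(u1);
    float k1 = 1.0f - su1;          // barycentric weight of a
    float k2 = u2 * su1;            // barycentric weight of b
    float k3 = 1.0f - k1 - k2;      // barycentric weight of c
    return Vec3{ k1*a.x + k2*b.x + k3*c.x,
                 k1*a.y + k2*b.y + k3*c.y,
                 k1*a.z + k2*b.z + k3*c.z };
}

// Pick a triangle index proportionally to its area, then shift and
// scale s so it can be reused as a fresh uniform number afterwards.
int sampleTriangleIndex(const std::vector<float>& areas, float& s) {
    float total = 0;
    for (float a : areas) total += a;
    float t = s * total, acc = 0;
    for (std::size_t i = 0; i < areas.size(); ++i) {
        if (t < acc + areas[i]) {
            s = (t - acc) / areas[i];   // rescaled back into [0,1)
            return (int)i;
        }
        acc += areas[i];
    }
    return (int)areas.size() - 1;
}
\end{verbatim}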
You can get 3 points for parallelogram or 6 points for triangle mesh lights, \textbf{but not both}.

\paragraph*{Task 3}
Implement sampling.
The parallelogram, mesh, or emitter classes would be good places (your choice).
You need to implement something like \texttt{samplePosition} (taking random numbers, returning a position and its surface normal) and \texttt{pdf} (taking a position and returning the sample probability density).

\paragraph*{Task 4}
To pick one of the available light sources for sampling, you will need a list of emitters in the scene.
Hook into \texttt{Scene::addChild}.
In our assignments, surface emitters are always children of meshes.
The emitter case in the switch statement is for point lights and other emitters without a physical surface; you can ignore it for now.
Additionally, the emitter object needs a reference to its geometry (mesh or parallelogram), otherwise the sampling code has no data.
Don't be afraid to add stuff to headers or create new ones; it's your design now.

\paragraph*{Task 5}
Implement the direct lighting integrator for light source sampling.
Pick a light, either uniformly or according to the emitted light (importance sampling), and then sample a point on its surface.
Once you have a point, cast a shadow ray and compute the contribution, if any ($\f{f}(x)$ divided by the joint pdf).
If there are multiple lights, make sure to compensate for the fact that you chose a particular one!
Add a boolean property to allow switching between hemisphere sampling and surface sampling.
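Putting Tasks 3 to 5 together, the estimator could take roughly this shape (illustrative pseudocode; the exact function names are your design choice):
\begin{verbatim}
// after intersecting the camera ray at point x with surface normal n:
light    = pickUniformLight(scene)          // pdf_light = 1 / numLights
(y, ny)  = light.samplePosition(sampler)    // pdf_pos   = 1 / lightArea
if (shadowRayBlocked(scene, x, y)) return black
d = normalize(y - x)
G = cos(n, d) * cos(ny, -d) / |y - x|^2     // geometry term
return brdf * light.radiance * G / (pdf_light * pdf_pos)
\end{verbatim}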
\section{Simple Path Tracing (15 Points + 15 Bonus)}
\subsection{Implement the recursive path tracing algorithm (8 points)}
Create a new integrator and call it \texttt{path\_tracer\_recursive}(\texttt{.cpp}).
Start with a copy of the direct lighting integrator.
It might pay off to keep your code clean so you can easily make small adjustments when we improve it in future assignments.
\paragraph*{Task 1, Start (5 easy points)}
Start with the pseudocode from the path tracing lecture slides.
Since Nori's main loop has no \texttt{depth} parameter, let \texttt{Li} be a stub that calls an additional recursive function that keeps track of the current depth.
For the first task, you only have to implement a fixed depth recursion.
You can choose to use a constant in code or a parameter in the scene file, but the default, if no parameter is given, must be a depth of 3.
During development, you should experiment with this number and can observe how the image becomes more realistic as you increase the depth.
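The stub-plus-recursion structure described above might be organised like this (pseudocode; the bounce computation itself follows the lecture slides):
\begin{verbatim}
Li(scene, ray) {
    return Li(scene, ray, 0)      // public stub, Nori calls this
}

Li(scene, ray, depth) {
    // ... intersect, add emitted light, sample a bounce direction ...
    if (depth + 1 < maxDepth)     // maxDepth defaults to 3
        value += brdfFactor * Li(scene, bounceRay, depth + 1)
    return value
}
\end{verbatim}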

\paragraph*{Task 2, Russian Roulette (1 easy and 2 normal points)}
Implement Russian Roulette, with a minimum guaranteed depth of 4, according to the slides.
Russian Roulette must be parameterisable from the scene file.
It's probably easiest to start with a version that uses a fixed continuation probability for each bounce (1 point). Check the slides for details.
However, the proper way to do it is to keep track of the \textit{throughput}.
With every bounce, the importance emitted from the camera is attenuated, and the probability for continuation should become lower.
You should keep track of this throughput in a \texttt{Color3f} vector and use its largest coefficient for Russian Roulette (2 points). Check the slides for details.
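The continuation test can be sketched as follows (a sketch under the stated assumptions: a guaranteed depth of 4 and a throughput-driven probability; \texttt{throughputMax} would be \texttt{throughput.maxCoeff()} with Eigen-based colours):

\begin{verbatim}
// Russian Roulette: survive always below the guaranteed depth, else
// continue with probability q derived from the path throughput.
// On survival, the caller must divide the throughput by q.
bool survivesRoulette(int depth, float throughputMax, float xi, float& q) {
    if (depth < 4) {                     // minimum guaranteed depth
        q = 1.0f;
        return true;
    }
    q = std::min(throughputMax, 0.99f);  // clamp so q stays a probability
    return xi < q;                       // xi is uniform in [0,1)
}
\end{verbatim}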

\subsection{Implement and use the Diffuse BRDF / BSDF (2 points)}
Encapsulate uniform hemisphere sampling of diffuse materials in \texttt{diffuse.cpp}.
The test cases already use it, so you can store and use its albedo to generate colour!
These 2 points are only awarded in conjunction with a working path tracer.
Check slides for details.
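The maths inside such a class is tiny; here is a hedged stand-alone sketch (interface and names are illustrative, not Nori's actual BSDF interface):

\begin{verbatim}
// Diffuse BRDF paired with uniform hemisphere sampling: the BRDF value
// is albedo/pi, independent of directions; the sampling pdf is 1/(2*pi).
struct Color3 { float r, g, b; };

struct Diffuse {
    Color3 albedo;
    static constexpr float kPi = 3.14159265358979f;

    Color3 eval() const {           // BRDF value (Lambertian)
        return Color3{ albedo.r / kPi, albedo.g / kPi, albedo.b / kPi };
    }
    float pdf() const {             // uniform hemisphere density
        return 1.0f / (2.0f * kPi);
    }
};
\end{verbatim}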
\subsection{Implement path tracing in a loop (5 points)}
Every recursive algorithm can be written in a loop as well.
Sometimes a stack is needed, but in the path tracer that is not necessary.
The loop form is much friendlier to the processor, and you can avoid stack overflows (which could happen with very deep recursions).

The code should be pretty similar.
You already keep track of the throughput, if you implemented Russian roulette.
Now you should get roughly something like this:
\begin{verbatim}
Li(Scene scene, Ray ray, int depth) {
    Color value = 0;
    Color throughput = 1;
    // .. some other stuff
    while (true) {
        // stuff
        throughput *= "something <= 1"
        // stuff
        value += throughput * something
        if (something)
            break;
    }
    return value;
}
\end{verbatim}

You might \textit{break}, or add things to \textit{value} in more than one place, or in a different order.
This is just the basic idea.

\subsection{Implement a higher-dimensional path tracing effect (15 bonus points)}

Implement either motion blur or depth-of-field effects. For motion blur, you will need to give something in your scene the ability to move (scene objects, camera). For each path, you will need an additional uniformly random time variable \texttt{t} and consider it when you perform intersection with your scene. To implement depth-of-field, you will need two additional uniformly random \texttt{u,v} variables for each path and consider them in the setup of your camera ray. You can gain 15 bonus points for either effect, \textbf{but not for both}.
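As an illustration of the depth-of-field variant, here is a self-contained thin-lens sketch in camera space (the camera model and all names are assumptions for illustration, not part of Nori's camera code): the ray origin is jittered on the lens disk and re-aimed at the in-focus point of the original pinhole ray.

\begin{verbatim}
struct Vec3 { float x, y, z; };

// Thin-lens camera ray: u, v are the extra uniform random numbers in
// [0,1), dir is the normalized pinhole ray direction (camera space,
// looking down +z), lensRadius and focalDist are camera parameters.
void thinLensRay(Vec3 dir, float lensRadius, float focalDist,
                 float u, float v, Vec3& origin, Vec3& newDir) {
    const float kPi = 3.14159265358979f;
    // uniform polar sample on the lens disk
    float r = lensRadius * std::sqrt(u);
    float phi = 2.0f * kPi * v;
    origin = Vec3{ r * std::cos(phi), r * std::sin(phi), 0.0f };
    // the point on the plane of focus that the pinhole ray would hit
    float ft = focalDist / dir.z;
    Vec3 focus = { dir.x * ft, dir.y * ft, focalDist };
    Vec3 d = { focus.x - origin.x, focus.y - origin.y, focus.z - origin.z };
    float len = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    newDir = Vec3{ d.x / len, d.y / len, d.z / len };
}
\end{verbatim}

All lens samples agree on the plane of focus, so geometry at \texttt{focalDist} stays sharp while everything else blurs with increasing aperture size.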
\subsection*{Submission format}

%To be announced.
\input{submission.tex}


\subsection*{Words of wisdom}
\begin{itemize}
\item Remember that you don't need all points to get the best grade. The workload calculation for the 3 ECTS includes taking the exam, which gives a lot of points.
\item Nori provides you with a \texttt{Sampler} that is passed in to the functions that produce the integrator input. Use this class to draw values from a canonical random variable.
\item Be careful of so-called "self-intersections". These happen when you immediately hit the
same surface that you started your ray from, due to
inaccuracies in floating point computations. You can avoid these by
offsetting rays in the normal direction of the surface with a small $\epsilon$.
Use \texttt{Epsilon} defined in \texttt{nori/common.h}.
\item Hemisphere sampling and light source sampling are two methods to compute the same integral. Therefore, given enough samples, they both should converge to the same result.
\item The framework is using Eigen under the hood for vectors and matrices etc. Be careful when using \texttt{auto} in your code \href{https://eigen.tuxfamily.org/dox/TopicPitfalls.html}{(Read here why)}.
\item Please use TUWEL for questions, but refrain from posting critical code sections.
\item You are encouraged to write new test cases to experiment with challenging scenarios.
\item Tracing rays is expensive. You don't want to render high-resolution images or complex scenes for testing. You may also want to avoid the \texttt{Debug} mode if you don't actually need it (use a release-with-debug-info build!).
\item To reduce the waiting time, Nori runs multi-threaded by default. To make debugging easier, you will want to set the number of threads to 1. To do so, simply execute Nori with the additional arguments \texttt{-t 1}.
\end{itemize}

\end{document}