\documentclass{rtg}
\usepackage{graphicx}
\usepackage{xspace}
\usepackage{xcolor}
\usepackage{subcaption}
\newcommand{\OpenGL}{OpenGL\xspace}
\newcommand*\diff{\mathop{}\!\mathrm{d}}
\newcommand{\f}[1]{\operatorname{#1}}
\newcommand{\todo}[1]{\textcolor{red}{\textbf{#1}}}
\title{Assignment 1: Monte Carlo Integration and Path Tracing}
\teaser{
\hspace*{\fill}
\includegraphics[trim={1cm 1cm 1cm 1cm},clip,width=0.32\linewidth]{figures/cbox_ao_uniform.png}
\hfill
\includegraphics[trim={1cm 1cm 1cm 1cm},clip,width=0.32\linewidth]{figures/cbox_direct_mesh_surface.png}
\hfill
\includegraphics[trim={1cm 1cm 1cm 1cm},clip,width=0.32\linewidth]{figures/cbox_path_tracer_mesh.png}
\hspace*{\fill}
\label{fig:figintro}
}
\setcounter{section}{0}
\begin{document}
\maketitle
In this assignment you will implement all of the crucial parts to get a Monte Carlo-based rendering system.
The result will be 1. an ambient occlusion integrator, 2. a direct light renderer, and 3. a simple path tracer.
The assignments build upon each other, so be sure to test everything before continuing.
For the first few points in this assignment, you can ignore the material BRDF and just assume white diffuse materials ($\rho = \{1,1,1\}$).
\textbf{We might have updated the Wienori \texttt{base} repository. You can pull the changes from GitLab (but you might get breaking changes if you started Assignment 0 without pulling Wienori).}
\textbf{Important:} As you have seen in assignment 0, you have to register a name for your integrators (and any other additions) with the framework. Our test system expects pre-defined names and attributes when invoking your solution. Please study the given scene \texttt{xml} files and choose the correct names for registration. We recommend that you run the test files for yourself before submission.
\section{Completing Wienori's MC Intestines (1 Point)}
Wienori is an almost complete Monte Carlo integrator.
But we have left out some crucial parts for you to complete.
By doing so, you'll get a short tour of the main MC machinery.
The main loop structure of our renderer looks something like this:
\begin{verbatim}
/* For each pixel and pixel sample */
for (y = 0; y < height; ++y) {
    for (x = 0; x < width; ++x) {
        for (i = 0; i < N; ++i) { // N = target sample count per pixel
            ray = compute_random_camera_ray_for_pixel(x, y)
            value = Li(ray, other, stuff)
            pixel[y][x] += value
        }
        pixel[y][x] /= N
    }
}
\end{verbatim}
Obviously, the code will be slightly different in practice due to parallelisation, filtering (something we will learn later) and general architectural design.
Look into the code, try to understand how things are done and complete the following functions so they work together to perform Monte Carlo integration (all changes are a single line):
\begin{description}
\item[main.cpp, renderBlock()] Iterate over all required samples (target count stored in \texttt{sampler})
\item[block.cpp, ImageBlock::put(Point2f, Color3f)] Accumulate samples and sample count
\item[block.cpp, ImageBlock::toBitmap()] Divide the RGB colour by the accumulated sample count (look at \texttt{Color4f}; if the count is in member \texttt{.w}, there is a function you can use)
\end{description}
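As a rough orientation, here is a minimal sketch of what the three one-line completions might look like. The names \texttt{getSampleCount()} and \texttt{divideByFilterWeight()} are taken from the public Nori base and might differ slightly in your copy of Wienori:
\begin{verbatim}
// main.cpp, renderBlock(): loop over the target sample count
for (uint32_t i = 0; i < sampler->getSampleCount(); ++i) {
    /* existing body: generate camera ray, call Li(), put() the result */
}

// block.cpp, ImageBlock::put(Point2f, Color3f): accumulate the sample;
// Color4f carries the sample count in its .w component
coeffs += Color4f(value); // adds (r, g, b, 1)

// block.cpp, ImageBlock::toBitmap(): divide by the accumulated count
result = coeffs.divideByFilterWeight();
\end{verbatim}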
For the normals integrator from last time, these changes shouldn't make a difference.
However, for the techniques that you will implement in this assignment, they provide the basis for proper MC integration to resolve the noise in your images.
Beyond implementing them, make sure that you understand how they interconnect and how Wienori converts ray samples into output pixel colors.
As mentioned during the lecture, apart from the main loop and the summing/averaging that happens there, you do not need additional sample/integrate loops inside your integrator functions.
If you were to do that in a path tracer, there would be the problem of an ever-exploding number of samples (curse of dimensionality).
\section{Ambient occlusion (2 Points)}
Implement ambient occlusion!
Its rendering equation is
\begin{align}
L_r(x) = \int_{\Omega} \frac{1}{\pi} \f{V}(x, x + \alpha \omega) \cos \theta \diff \omega,
\end{align}
where $L_r$ is the reflected radiance, $x$ a position on the surface, $\f{V}$ the visibility function, $\alpha$ a constant, and $\theta$ the angle between $\omega$ and the surface normal at $x$.
The visibility function is 1 or 0, depending on whether the ray from $x$ to $x+\alpha \omega$ reaches its destination without interference. This is also commonly referred to as a shadow ray.
$\alpha$ should be configurable via XML and default to \texttt{scene->getBoundingBox().getExtents().norm()} if no value is provided (experiment with it!).
$\frac{1}{\pi}$ represents a simple white diffuse BRDF, as we explained in the lecture about light when we talked about the furnace test.
For integration, you should sample the directions in the hemisphere around point $x$ uniformly.
Since Wienori's main loop already takes care of computing the mean for MC integration, the function should return one sample of the integrand $\f{f}(x)$, divided by $\f{p}(x)$. The proper value of $\f{p}(x)$ for uniformly sampling the hemisphere is $\frac{1}{2\pi}$.
In addition, you will need a function that can generate uniform samples for directions on the hemisphere. This is not trivial, so Wienori takes something that is easy to get (a uniformly random 2D value between 0 and 1) and turns it into a uniform hemisphere direction $\omega$.
This transformation is called warping.
You can draw the 2D random values from \texttt{sampler}, and then use \texttt{Vector3f Warp::squareToUniformHemisphere(const Point2f \&sample)} inside \texttt{warp.cpp} to generate $\omega$.
Make sure to bring this $\omega$ from local space to world space before tracing along it, by using \texttt{.shFrame.toWorld}.
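For reference, one common mapping from the unit square to the uniform hemisphere (with $z$ pointing ``up'' in the local shading frame) is
\begin{align}
\omega = \left(\sqrt{1 - u_1^2}\,\cos(2\pi u_2),\; \sqrt{1 - u_1^2}\,\sin(2\pi u_2),\; u_1\right),
\end{align}
where $u_1, u_2 \in [0, 1)$ are the two random values; every direction produced this way has density $\frac{1}{2\pi}$ with respect to solid angle.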
Altogether, this should be about 20 lines in a new \texttt{integrator\_ao.cpp} file (not counting boilerplate code).
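A minimal sketch of such an integrator's \texttt{Li()} follows; the framework types are those from Nori's headers, and \texttt{m\_alpha} is a hypothetical member holding the XML parameter $\alpha$ from above:
\begin{verbatim}
Color3f Li(const Scene *scene, Sampler *sampler, const Ray3f &ray) const {
    Intersection its;
    if (!scene->rayIntersect(ray, its))
        return Color3f(0.0f);

    // Uniform direction on the local hemisphere, warped from a 2D sample
    Vector3f wLocal = Warp::squareToUniformHemisphere(sampler->next2D());
    Vector3f wWorld = its.shFrame.toWorld(wLocal);

    // Shadow ray of length alpha; Epsilon avoids self-intersections
    Ray3f shadowRay(its.p, wWorld, Epsilon, m_alpha);
    float V = scene->rayIntersect(shadowRay) ? 0.0f : 1.0f;

    // One MC sample: f/p with f = (1/pi) * V * cos(theta), p = 1/(2*pi)
    return Color3f(INV_PI * V * Frame::cosTheta(wLocal) * 2.0f * M_PI);
}
\end{verbatim}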
Compare results with different sample counts (16, 64, 256, ...); do you see an improvement?
If not, go back to Completing Wienori's MC Intestines!
\section{Direct lighting (4 Points)}
Check the slides about Monte Carlo integration and the rendering equation for the correct integrals.
It's possible to compute direct lighting using two methods: hemisphere sampling and light surface sampling. Points for light surface sampling will be counted in Assignment 3, even though we covered it already in the lecture. You can implement it anyway if you want to experiment and have more time for the project later.
Hemisphere sampling works nicely for very large lights (sky), but not so well for smaller lights (it takes a long time to give smooth results). Small lights are where surface sampling shines. Think about why surface sampling is not optimal for very large lights. In a future lecture, we will learn about MIS, a method to combine surface and hemisphere samples in an optimal way.
You should base your code on \texttt{integrator\_ao.cpp} and implement it in \\
\texttt{integrator\_direct\_lighting.cpp}.
To make sure that both methods can be used, our scenes will contain area lights.
If we had point or directional lights, hemisphere sampling would not work at all and we could only use light source sampling (remember why?).
You should start with uniform hemisphere sampling (it's very similar to ambient occlusion in terms of code structure).
Once hemisphere sampling works, you can choose to continue with light source sampling and check whether the two methods converge to the same image when using a high number of samples.
If they don't, you have a bug, since both rendering methods are based on the same physical concepts and should eventually produce the same image (although one might be noisier than the other with low sample counts).
\subsection{Hemisphere sampling (4 points)}
\paragraph*{Task 1} Implement the emitter interfaces. The test cases you receive have two types of emitters: \texttt{parallelogram\_emitter} and \texttt{area}. Some objects in your input scenes will be assigned these types and corresponding parameters.
A \texttt{parallelogram\_emitter} should only be tied to meshes in the shape of a parallelogram, while \texttt{area} may turn any mesh into a (complex) light source.
However, in the first assignment, there is no real difference between the two.
Both types of emitters need to read their brightness (radiance) and colour from the scene file, and store it.
The emitter interface has multiple parts to it, but for now, all you need to do for both types is make sure that you can read their radiance from the scene file and access it during rendering via their \texttt{eval} function.
You can use a dummy implementation for \texttt{Emitter::pdf()} and \texttt{Emitter::sample()} for now.
You will complete these in a later task, and they will be different for \texttt{parallelogram\_emitter} and \texttt{area}.
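Since the \texttt{Emitter} interface is largely yours to design, the class layout below is only one possible sketch (the \texttt{eval()} signature is an assumption; \texttt{props.getColor} and \texttt{NORI\_REGISTER\_CLASS} are as in the Nori base):
\begin{verbatim}
class AreaLight : public Emitter {
public:
    AreaLight(const PropertyList &props) {
        // Radiance (brightness and colour) comes from the scene XML
        m_radiance = props.getColor("radiance");
    }
    // Lambertian emitter: constant radiance in all directions
    Color3f eval() const { return m_radiance; }
    // sample() and pdf() stay dummies until the light sampling tasks
private:
    Color3f m_radiance;
};
NORI_REGISTER_CLASS(AreaLight, "area");
// register the "parallelogram_emitter" type analogously
\end{verbatim}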
\paragraph*{Task 2}
Implement the direct lighting integrator.
First, check whether the camera ray directly hits a light source (emitter).
If so, return its colour and be done (this is not completely correct, but for this task it is fine).
If you hit a regular, non-emitting surface instead, cast a new, random ray according to uniform hemisphere sampling, similar to ambient occlusion (but no maximum ray length this time!).
If the closest object intersected by this new ray is an emitter, use your emitter implementation from Task 1 to compute its contribution using the rendering equation; otherwise return zero (black). We assume that our light sources are all Lambertian, so the radiance $L_i$ coming from a light in the rendering equation is just its radiance.
This should only require a small edit from the \texttt{ao} integrator.
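A sketch of the resulting \texttt{Li()}, under the same assumptions as the AO and emitter sketches above (in Nori, \texttt{Ray3f(o, d)} defaults to \texttt{mint = Epsilon} and an infinite \texttt{maxt}):
\begin{verbatim}
Color3f Li(const Scene *scene, Sampler *sampler, const Ray3f &ray) const {
    Intersection its;
    if (!scene->rayIntersect(ray, its))
        return Color3f(0.0f);

    // Camera ray hits a light source directly: return its radiance
    if (its.mesh->isEmitter())
        return its.mesh->getEmitter()->eval();

    // One uniform hemisphere sample, this time without a maximum length
    Vector3f wLocal = Warp::squareToUniformHemisphere(sampler->next2D());
    Ray3f secondary(its.p, its.shFrame.toWorld(wLocal));

    Intersection lightIts;
    if (!scene->rayIntersect(secondary, lightIts)
        || !lightIts.mesh->isEmitter())
        return Color3f(0.0f);

    // f/p: white diffuse BRDF (1/pi) times L_e times cos, over 1/(2*pi)
    Color3f Le = lightIts.mesh->getEmitter()->eval();
    return INV_PI * Le * Frame::cosTheta(wLocal) * 2.0f * M_PI;
}
\end{verbatim}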
\subsection{Light surface sampling (points will be counted in Assignment 3)}
\textbf{0 points for now, points will be counted in Assignment 3. So this one is completely optional. We covered it in the lecture, so you might want to try it out for yourself already (and you will have more time for other stuff later on).}
Light surface sampling is important for performant path tracers.
It's referred to as ``next event estimation'' or ``direct light sampling'' there (you will have the chance to implement it in a later assignment).
In contrast to hemisphere sampling, you are not simply shooting rays around the hemisphere and hoping to find light.
Instead, you try to connect hit points directly to light sources and check if that connection is possible.
If you implement it, you should see improvements immediately.
You will need to sample area light surfaces, i.e., you need a function to pick uniformly random points on the surface of each light.
There are two options, of which you should choose \textbf{one} for your implementation:
\begin{enumerate}
\item \textbf{Parallelogram lights (2 points)}
Parallelograms are very easy to sample uniformly: starting from a corner point, just add a linear combination $k_1 a + k_2 b$ of its side vectors $a, b$ with coefficients $0 \leq k_1, k_2 < 1$. Obviously, this option will restrict you to using rather basic light source shapes in your scene.
\item \textbf{Triangle mesh lights (4 points)}
This can give very cool results, i.e., imagine a glowing mesh.
Mesh sampling is not that hard either: select a triangle according to its surface area (larger triangles are selected more often).
The implementation in \texttt{nori/dpdf.h} will be useful here.
Once you have selected a triangle, sample a point on it (\url{http://mathworld.wolfram.com/TrianglePointPicking.html}).
Be careful when you reuse random numbers! Example: with 2 triangles, \texttt{s = rand(0, 1) < 0.5} would give you the first triangle.
If you want to reuse \texttt{s} for sampling the position (after using it for discretely sampling the triangle), clearly you will only ever sample the first half of the first and the second half of the second triangle.
In order to avoid artefacts, \texttt{s} needs to be shifted and scaled!
\texttt{DiscretePDF::sampleReuse} is precisely for that (see the sketch after this list).
Later on, you could use it for sampling the light as well (it's enough to query one random light per sample if you normalise properly).
But if you are uncertain, you can always just draw additional fresh random numbers from \texttt{sampler}.
More complex samplers would be needed for large meshes, for instance ones that do importance sampling based on distance, cosine, etc.
Please don't go that far for now.
\end{enumerate}
You can get 2 points for parallelogram or 4 points for triangle mesh lights, \textbf{but not both}.
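For the triangle option, here is a small sketch of how \texttt{DiscretePDF} from \texttt{nori/dpdf.h} can drive the triangle selection with sample reuse (mesh accessor names as in the public Nori base):
\begin{verbatim}
DiscretePDF dpdf;
for (uint32_t i = 0; i < mesh->getTriangleCount(); ++i)
    dpdf.append(mesh->surfaceArea(i)); // larger triangles picked more often
dpdf.normalize();

float s = sampler->next1D();
// sampleReuse() picks a triangle index AND rescales s back to [0,1),
// so s can safely be reused for picking the point on the triangle
size_t triIdx = dpdf.sampleReuse(s);
\end{verbatim}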
\paragraph*{Task 3}
Implement sampling.
The parallelogram, mesh, or emitter classes would be good places (your choice).
You need to implement something like \texttt{samplePosition} (taking random numbers, returning a position and its surface normal) and \texttt{pdf} (taking a position and returning the sample probability density).
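A minimal sketch for the parallelogram case, assuming hypothetical members \texttt{m\_origin}, \texttt{m\_a} and \texttt{m\_b} for the corner point and the two side vectors:
\begin{verbatim}
// Uniform point on the parallelogram, plus its surface normal
Point3f samplePosition(const Point2f &sample, Normal3f &n) const {
    n = Normal3f(m_a.cross(m_b).normalized());
    return m_origin + sample.x() * m_a + sample.y() * m_b;
}

// Uniform density over the surface: 1 / area
float pdf(const Point3f &p) const {
    return 1.0f / m_a.cross(m_b).norm(); // |a x b| = parallelogram area
}
\end{verbatim}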
\paragraph*{Task 4}
To pick one of the available light sources for sampling, you will need a list of emitters in the scene.
Hook into \texttt{Scene::addChild}.
In our assignments, surface emitters are always children of meshes.
The emitter case in the switch statement is for point lights or other emitters without a physical surface; you can ignore it for now.
Additionally, the emitter object needs a reference to the geometry (mesh or parallelogram, otherwise the sampling code has no data).
Don't be afraid to add stuff to headers or create new ones, it's your design now.
\paragraph*{Task 5}
Implement the direct lighting integrator for light source sampling.
Pick a light, either uniformly or according to the emitted light (importance sampling), and then sample a point on its surface.
Once you have a point, cast a shadow ray and compute the contribution, if any ($\f{f}(x)$ divided by the joint pdf; see the estimator sketch below).
If there are multiple lights, make sure to compensate for the fact that you chose a particular one!
Add a boolean property to allow switching between hemisphere sampling and surface sampling.
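For reference, a sketch of the per-sample contribution when sampling a point $y$ on a light with radiance $L_e$ (assuming the white diffuse BRDF from before, and writing $\f{p}_{\mathrm{light}}$ for the probability of having picked this particular light):
\begin{align}
\frac{\f{f}}{\f{p}} = \frac{\rho}{\pi} \, L_e \, \f{V}(x, y) \, \frac{\cos \theta_x \cos \theta_y}{\| x - y \|^2} \cdot \frac{1}{\f{p}(y) \, \f{p}_{\mathrm{light}}},
\end{align}
where $\theta_x$ and $\theta_y$ are the angles between the connection direction and the surface normals at $x$ and $y$, and $\f{p}(y)$ is the density from Task 3 (e.g.\ $1/\mathrm{area}$).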
\section{Simple Path Tracing (12 Points + 30 Bonus)}
This will be the first version of your path tracer. Based on the rendering equation, you will get your first images with indirect lighting, shadows and multiple light sources.
\subsection{Implement the recursive path tracing algorithm (8 points)}
Create a new integrator and call it \texttt{path\_tracer\_recursive}(\texttt{.cpp}).
Start with a copy of the direct lighting integrator.
It might pay off to keep your code clean so you can easily make small adjustments when we improve it in future assignments.
\paragraph*{Task 1, Start (4 Points)}
Start with the pseudocode from the path tracing lecture slides.
Since Wienori's main loop has no \texttt{depth} parameter, let \texttt{Li} be a stub that calls an additional, recursive function that can keep track of the current depth.
For the first task, you only have to implement a fixed depth recursion.
You can choose to use a constant in code, or a parameter in the scene files, but the default if no parameters are given must be a depth of 3.
During development, you should experiment with this number and can observe how the image becomes more realistic as you increase the depth.
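A sketch of the recursive structure, under the same assumptions as the earlier sketches (\texttt{recursiveLi} and \texttt{m\_maxDepth} are hypothetical names):
\begin{verbatim}
Color3f Li(const Scene *scene, Sampler *sampler, const Ray3f &ray) const {
    return recursiveLi(scene, sampler, ray, 0);
}

Color3f recursiveLi(const Scene *scene, Sampler *sampler,
                    const Ray3f &ray, int depth) const {
    Intersection its;
    if (!scene->rayIntersect(ray, its))
        return Color3f(0.0f);

    // Light hit: account for the emitted radiance
    Color3f emitted(0.0f);
    if (its.mesh->isEmitter())
        emitted = its.mesh->getEmitter()->eval();

    if (depth >= m_maxDepth) // default depth of 3
        return emitted;

    // Recurse along one uniform hemisphere sample
    Vector3f wLocal = Warp::squareToUniformHemisphere(sampler->next2D());
    Ray3f next(its.p, its.shFrame.toWorld(wLocal));
    Color3f incoming = recursiveLi(scene, sampler, next, depth + 1);

    // White diffuse for now: f * L_i * cos / pdf with pdf = 1/(2*pi)
    return emitted + INV_PI * incoming * Frame::cosTheta(wLocal) * 2.0f * M_PI;
}
\end{verbatim}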
\paragraph*{Task 2, Implement and use the Diffuse BRDF / BSDF (2 Points)}
Encapsulate uniform hemisphere sampling for diffuse materials in the BSDF implementation \texttt{diffuse.cpp}.
The test scenes already apply this material to the objects in the scene, so you can read and use the \texttt{albedo} member to render in colour!
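A sketch of the corresponding \texttt{BSDF::sample()} in \texttt{diffuse.cpp} (signature as in Nori's \texttt{BSDF} interface; the return value is $f \cdot \cos\theta / p$, which the integrator then multiplies into its estimate):
\begin{verbatim}
Color3f sample(BSDFQueryRecord &bRec, const Point2f &sample) const {
    bRec.wo = Warp::squareToUniformHemisphere(sample);
    // f = albedo/pi and p = 1/(2*pi), so f * cos / p = 2 * albedo * cos
    return 2.0f * m_albedo * Frame::cosTheta(bRec.wo);
}
\end{verbatim}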
\paragraph*{Task 3, Russian Roulette (max 2 Points)}
Implement Russian Roulette, with a minimum guaranteed depth of 4.
Whether or not Russian Roulette is used must be parameterisable via a boolean parameter \texttt{rr} in the scene file.
If it is not used, fall back to a fixed number of recursions.
You can start with a version that uses a fixed continuation probability in each bounce (1 Point).
The generated test outputs you get in your reports will actually use a fixed continuation probability of 0.7. Check the slides for details.
However, the proper way to do it is to keep track of the \textit{throughput}.
With every bounce, the importance emitted from the camera is attenuated, and the probability for continuation should become lower.
You should keep track of this throughput in a Color3f vector, and use its largest coefficient for Russian Roulette (2 Points). Check the slides for details. Note that if you do this, your solution will look slightly different to the report reference. This is fine!
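A sketch of the throughput-based test, as it might appear inside the bounce recursion or loop:
\begin{verbatim}
if (depth >= 4) { // minimum guaranteed depth
    // Survival probability from the current path throughput
    float q = std::min(throughput.maxCoeff(), 0.99f);
    if (sampler->next1D() > q)
        return emitted; // terminate the path (break in the loop form)
    throughput /= q;    // compensate the surviving paths
}
\end{verbatim}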
Feel free to also explore ideas that we didn't describe here (rays that miss are black by default, but you could use a sky colour or an environment map).
These things do not go unseen :)
\subsection{Implement path tracing in a loop (4 Points)}
Every recursive algorithm can be written in a loop as well.
Sometimes a stack is needed, but in the path tracer that is not necessary.
The loop form is much friendlier to the processor, and you can avoid stack overflows (which could happen with very deep recursions).
The code should be pretty similar.
You already keep track of the throughput, if you implemented Russian Roulette.
Now you should get roughly something like this:
\pagebreak
\begin{verbatim}
Li(Scene scene, Ray ray, int depth) {
    Color value = 0;
    Color throughput = 1;
    // .. some other stuff
    while (true) {
        // stuff
        throughput *= "something <= 1"
        // stuff
        value += throughput * something
        if (something)
            break;
    }
    return value;
}
\end{verbatim}
You might \textit{break}, or add things to \textit{value} in more than one place, or in a different order.
This is just the basic idea.
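Under the same assumptions as the earlier sketches, a more concrete version combining the throughput bookkeeping with the Russian Roulette test from the previous section might look like this:
\begin{verbatim}
Color3f Li(const Scene *scene, Sampler *sampler, const Ray3f &cameraRay) const {
    Color3f value(0.0f);
    Color3f throughput(1.0f);
    Ray3f ray = cameraRay;

    for (int depth = 0;; ++depth) {
        Intersection its;
        if (!scene->rayIntersect(ray, its))
            break; // missed rays are black (or a sky colour)

        if (its.mesh->isEmitter())
            value += throughput * its.mesh->getEmitter()->eval();

        if (depth >= 4) { // Russian Roulette after the guaranteed depth
            float q = std::min(throughput.maxCoeff(), 0.99f);
            if (sampler->next1D() > q)
                break;
            throughput /= q;
        }

        // BSDF sampling returns f * cos / pdf, attenuating the throughput
        BSDFQueryRecord bRec(its.shFrame.toLocal(-ray.d));
        throughput *= its.mesh->getBSDF()->sample(bRec, sampler->next2D());
        ray = Ray3f(its.p, its.shFrame.toWorld(bRec.wo));
    }
    return value;
}
\end{verbatim}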
\subsection{Implement a higher-dimensional path tracing effect (15 bonus points)}
Implement either motion blur or depth-of-field effects. For motion blur, you will need to give something in your scene the ability to move (scene objects, camera). For each path, you will need an additional uniformly random time variable \texttt{t} and consider it when you perform intersection with your scene. To implement depth-of-field, you will need two additional uniformly random \texttt{u,v} variables for each path and consider them in the setup of your camera ray. You get 9 points for motion blur and 6 for depth of field.
\subsection{Standard deviation images (3 bonus points) and adaptive sampling (up to 9 bonus points)}
Standard deviation (SD) can be computed directly from the samples.
You can then colour map and store it in an extra output image (3 bonus points).
You have the option to compute the SD of the samples or of the rendered pixel estimate (the two differ only by a scaling factor of $\sqrt{N}$, where $N$ is the number of samples).
The SD gives you an estimate on the error, and you can use it for adaptive sampling, i.e., throw more samples at pixels that have a high SD.
However, this simple algorithm is biased.
You get 3 points if you implement that.
You get 3 extra points, if you explain why it's biased and implement an easy fix.
Another 3 extra points, if you implement a fix that reuses all samples (your own research).
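For reference, a sketch of the standard accumulator approach: keep per-pixel running sums of the samples and of their squares, then
\begin{align}
\sigma^2 \approx \frac{1}{N - 1}\left(\sum_{i=1}^{N} x_i^2 - \frac{1}{N}\Big(\sum_{i=1}^{N} x_i\Big)^{2}\right),
\end{align}
which is the sample variance; the SD of the pixel estimate itself is this $\sigma$ divided by $\sqrt{N}$.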
\subsection{Be patient (2 Bonus Points)}
The path-traced images you get with the provided test scene configurations are very noisy. How long does it take on your machine to compute them? How much longer do you think it would take until you get a quality that you are happy with? Experiment with the number of samples and report if the development matches your expectations. Given that our scenes are extremely simple, do you think that with this kind of performance it is feasible to render entire \emph{movies}?
\begin{itemize}
\item Remember that you don't need all points to get the best grade. The workload estimate of 3 ECTS already accounts for taking the exam, which gives a lot of points.
\item Wienori provides you with a \texttt{Sampler} that is passed to the functions that produce the integrator input. Use this class to draw values from a canonical random variable.
\item Be careful of so-called ``self-intersections''. These happen when you immediately hit the same surface that you started your ray from, due to inaccuracies in floating point computations. You can avoid these by offsetting rays in the normal direction of the surface with a small $\epsilon$. Use \texttt{Epsilon} defined in \texttt{nori/common.h}.
\item Hemisphere sampling and light source sampling are two methods to compute the same integral. Therefore, given enough samples, they both should converge to the same result.
\item The framework is using Eigen under the hood for vectors and matrices etc. Be careful when using \texttt{auto} in your code \href{https://eigen.tuxfamily.org/dox/TopicPitfalls.html}{(Read here why)}.
\item Please use Discord or TUWEL for questions, but refrain from posting critical code sections.
\item You are encouraged to write new test cases to experiment with challenging scenarios.
\item Tracing rays is expensive. You don't want to render high resolution images or complex scenes for testing. You may also want to avoid the \texttt{Debug} mode if you don't actually need it (use a release with debug info build!).
\item To reduce the waiting time, Wienori runs multi-threaded by default. To make debugging easier, you will want to set the number of threads to 1. To do so, simply execute Wienori with the additional arguments \texttt{-t 1}.