"...assignment3_sampling_materials_bits_and_bytes/main.tex" did not exist on "7e615ea58b041bc9118c37b885174eed9fc01164"
Newer
Older
\documentclass{rtg}
\usepackage{graphicx}
\usepackage{xspace}
\usepackage{subcaption}
\newcommand{\OpenGL}{OpenGL\xspace}
\newcommand*\diff{\mathop{}\!\mathrm{d}}
\newcommand{\f}[1]{\operatorname{#1}}
\newcommand{\todo}[1]{\textcolor{red}{\textbf{#1}}}
\title{Assignment 1: Monte Carlo Integration and Path Tracing}
\deadline{2020-05-24 23:59}%2020-05-13 23:59
\teaser{
\hspace*{\fill}
\includegraphics[trim={1cm 1cm 1cm 1cm},clip,width=0.32\linewidth]{figures/ajax-ao.png}
\hfill
\includegraphics[trim={1cm 1cm 1cm 1cm},clip,width=0.32\linewidth]{figures/ajax-arealight.png}
\hfill
\includegraphics[trim={1cm 1cm 1cm 1cm},clip,width=0.32\linewidth]{figures/ajax-2arealights.png}
\hspace*{\fill}
\label{fig:figintro}
}
\setcounter{section}{0}
\begin{document}
\maketitle
In this assignment you will implement all of the crucial parts of a Monte Carlo based rendering system.
The result will be 1.\ an ambient occlusion integrator, 2.\ a direct lighting renderer, and 3.\ a simple path tracer.
The parts build on each other, so be sure to test everything before continuing.
For this assignment you can ignore the material BRDF and just assume white diffuse materials ($\rho = \{1,1,1\}$).
\textbf{We have updated the \texttt{assignments} repository. Please merge all upstream changes before starting to work.}
\begin{verbatim}
git checkout master
git pull
git merge submission1 # just to be sure
git push # just in case something fails, make a backup
git remote add upstream git@submission.cg.tuwien.ac.at:rendering-2020/assignments.git
git pull upstream master
# resolve any merge conflict, or just confirm the merge.
git push
\end{verbatim}
\section{Completing Nori's MC Intestines}
Nori is an almost complete Monte Carlo integrator.
We have left out some crucial parts for you to complete.
At the same time you'll get a short tour of the MC machinery.
The basic loop of a renderer is the following:
\begin{verbatim}
/* For each pixel and pixel sample */
for (y=0; y<height; ++y) {
    for (x=0; x<width; ++x) {
        for (i=0; i<N; ++i) { // N = number of samples per pixel
            ray = compute_random_camera_ray_for_pixel(x, y)
            value = compute_brightness(ray, scene, other, stuff)
            pixel[y][x] += value
        }
        pixel[y][x] /= N
    }
}
\end{verbatim}
Obviously the code will be longer in practice due to parallelisation, filtering (something we will learn about later), and general architectural design.
Look into the code, try to understand how things are done and complete the following functions (all changes are a single line):
\begin{description}
\item[main.cpp, renderBlock()] (iterate over all samples)
\item[block.cpp, ImageBlock::put(Point2f, Color3f)] (accumulate samples and sample count)
\item[block.cpp, ImageBlock::toBitmap()] divide by sample count (look at Color4f, there is a function you can use)
\end{description}
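To illustrate the idea, here is a minimal sketch of the accumulate-then-normalise pattern (not Nori's actual code; \texttt{Pixel}, \texttt{accum}, and \texttt{weight} are made-up names, and Nori instead keeps the count in the fourth channel of \texttt{Color4f}):
\begin{verbatim}
// Hypothetical sketch of per-pixel sample accumulation:
struct Pixel { Color3f accum{0.f}; float weight = 0.f; };

void put(Pixel &p, const Color3f &value) {
    p.accum  += value; // accumulate the sample
    p.weight += 1.f;   // and count it
}

Color3f toBitmap(const Pixel &p) {
    // Divide by the sample count; Color4f has a helper for this step.
    return p.weight > 0.f ? Color3f(p.accum / p.weight) : Color3f(0.f);
}
\end{verbatim}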
With the normals renderer you shouldn't see a difference, but the AO integrator will not work properly without these changes.
You do not need another sample generating loop inside the integrators.
If you did that in a path tracer, the number of samples would explode with every bounce (curse of dimensionality).
\section{Ambient occlusion (3 easy points)}
Implement ambient occlusion!
The rendering equation is
\begin{align}
L_e(x) = \int_{\Omega} \frac{1}{\pi} \f{V}(x, x + \alpha \omega) \cos(\theta) \diff \omega,
\end{align}
where $L_e$ is the brightness, $x$ a position on the surface, $\f{V}$ the visibility function, $\alpha$ a constant, and $\theta$ the angle between the surface normal and $\omega$.
The visibility function is 1 or 0, depending on whether the ray reaches its destination; such a ray is also called a shadow ray.
$\alpha$ should default to \texttt{scene->getBoundingBox().getExtents().norm()}, and be configurable via XML (experiment with it!).
$\frac{1}{\pi}$ is a white diffuse BRDF, as we explained in the lecture about light when we talked about the furnace test.
You will need a function that turns a 2d uniform sample between 0 and 1 into a uniform hemisphere sample.
This transformation is called warping.
Look at \texttt{Vector3f Warp::squareToUniformHemisphere(const Point2f \&sample)} inside \texttt{warp.cpp} and \texttt{warp.h}.
Be careful with all the factors (including those inside $\f{p}(x)$); some of them cancel out and don't need to be computed!
Altogether, this should be about 20 lines in a new \texttt{integrator\_ao.cpp} file (not counting boilerplate code).
Compare results with different sample counts (16, 64, 256...), do you see an improvement?
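For reference, a minimal sketch of the warp via the inversion method, assuming Nori's convention that the hemisphere is oriented around the local $+z$ axis:
\begin{verbatim}
// Map a uniform sample in [0,1)^2 to the unit hemisphere around +z,
// with constant density 1/(2*pi) (Archimedes' hat-box theorem).
Vector3f Warp::squareToUniformHemisphere(const Point2f &sample) {
    float z   = sample.x(); // cos(theta), uniform in [0,1)
    float r   = std::sqrt(std::max(0.f, 1.f - z * z));
    float phi = 2.f * M_PI * sample.y();
    return Vector3f(r * std::cos(phi), r * std::sin(phi), z);
}
\end{verbatim}
Remember that this direction lives in the local frame; transform it to world space around the surface normal (Nori's \texttt{Frame} class helps) before tracing the shadow ray.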
\section{Direct lighting (9 Points)}
Check the slides about light and the recap in Monte Carlo integration for the correct integrals.
There are two possibilities on how to implement direct lighting: hemisphere sampling and light source sampling.
Hemisphere sampling works well only for very large lights (e.g., the sky), while light source sampling works especially well for small lights.
If we had point or directional lights, they could only be sampled directly.
All these sampling methods can be combined using MIS (you will learn about that later).
In any case, you should start with hemisphere sampling (code-wise, it's quite similar to ambient occlusion).
Once you have that and you start with light source sampling, you can test whether the methods converge to the same image.
If they don't, you have a bug.
You can also use our provided unit tests locally (maybe you have to edit the python script to correct the path).
\subsection{Hemisphere sampling (3 easy points)}
You should base your code on \texttt{integrator\_ao.cpp} and implement it in \\
\texttt{integrator\_direct\_lighting.cpp}.
\paragraph*{Task 1} Implement the emitter interface (make either a \texttt{parallelogram\_emitter} or \texttt{mesh\_emitter}) and the supporting machinery.
Emitters need to read their brightness (radiance) and colour from the scene file and store them (this is the bare minimum).
A name and debug info might also be good.
If you don't plan to implement direct light sampling, you can use dummy implementations for \texttt{Emitter::pdf()} and \texttt{Emitter::sample()}.
\paragraph*{Task 2}
Implement the integrator.
First, you need to check whether the camera ray hits a light (emitter).
If so, return its colour and be done.
This is not completely correct, but you can ignore direct illumination of light sources for now.
If you hit a regular surface instead, make a random ray cast using uniform hemisphere sampling, similar to ambient occlusion (no maximum length this time!).
If the closest intersected object is a light, compute its contribution using the equations from the lecture, otherwise return zero (black).
This is only a small edit from the \texttt{ao} integrator.
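For orientation, the single-sample estimator should end up looking roughly like this sketch (pseudocode with placeholder names; the factor of $2$ comes from $\frac{\rho}{\pi} \cos\theta \,/\, \frac{1}{2\pi}$):
\begin{verbatim}
// f(x)/p(x) = (rho/pi) * Le * cos(theta) / (1/(2*pi))
//           = 2 * rho * Le * cos(theta)
if cameraRayHitsEmitter: return Le
wo  = squareToUniformHemisphere(sample)  // local frame
hit = trace(its.p, toWorld(wo))          // no maximum length!
if hit is emitter: return 2 * rho * Le(hit) * cos(theta)
else:              return black
\end{verbatim}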
\subsection{Direct light sampling (6 points)}
Direct light sampling is also important for performant path tracers (there it is called ``next event estimation'').
You will need to sample area lights directly, i.e., you need a function to randomly pick points on the surface of the light.
There are 2 options here:
\begin{enumerate}
\item \textbf{Parallelogram lights (3 points)}
Parallelograms are very easy to sample (just take a linear combination of the side vectors with random factors in $(0, 1)$).
Another advantage is that you can apply sampling patterns (stratified, Halton, we'll cover those later).
\item \textbf{Triangle mesh lights (6 points)}
This can give very cool results, e.g., a glowing mesh.
Mesh sampling is not that hard either: select a triangle with probability proportional to its surface area (larger triangles are selected more often).
The implementation in \texttt{nori/dpdf.h} will be useful here.
Once you have selected a triangle, sample a point on it (\url{http://mathworld.wolfram.com/TrianglePointPicking.html}).
Be careful about random numbers! Example: with 2 equally large triangles, \texttt{s = rand(0, 1) < 0.5} would select the first triangle.
If you then reuse \texttt{s} for sampling the position (after discretely sampling the triangle), you will clearly only sample the first half of the first and the second half of the second triangle.
\texttt{s} needs to be shifted and scaled!
\texttt{DiscretePDF::sampleReuse} does precisely that (see the sketch after this list).
Later on, you should use it for sampling the light as well (it's enough to query one random light per sample if you normalise properly).
More complex samplers would be needed for large meshes, for instance ones that importance sample based on distance, cosine, etc.
Please don't go that far for now.
\end{enumerate}
You can get either 3 points for parallelogram lights or 6 points for triangle mesh lights.
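A sketch of the triangle-mesh variant, using \texttt{DiscretePDF} from \texttt{nori/dpdf.h} and the mapping from the MathWorld link (the per-triangle area helper and the names around it are assumptions, not prescribed):
\begin{verbatim}
DiscretePDF m_dpdf; // one entry per triangle
for (uint32_t i = 0; i < mesh->getTriangleCount(); ++i)
    m_dpdf.append(mesh->surfaceArea(i)); // weight by area
m_dpdf.normalize();

Point3f samplePosition(Point2f s, Normal3f &n) {
    // sampleReuse picks a triangle and rescales s.x() for reuse:
    size_t t = m_dpdf.sampleReuse(s.x());
    float su = std::sqrt(s.x());        // uniform point on triangle
    float u = 1.f - su, v = s.y() * su;
    n = faceNormal(t);                  // hypothetical helper
    return u * p0(t) + v * p1(t) + (1.f - u - v) * p2(t);
}
// pdf(x) is then simply 1 / totalSurfaceArea.
\end{verbatim}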
\paragraph*{Task 3}
Implement sampling.
The parallelogram, mesh, or emitter classes would be good places (your design).
You need to implement something like \texttt{samplePosition} (taking random numbers, returning a position and its normal) and \texttt{pdf} (taking a position and returning the sample probability).
This is similar to warping functions.
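For the parallelogram variant, a minimal sketch, assuming the shape is stored as an origin \texttt{o} with side vectors \texttt{a} and \texttt{b} (the names are again up to your design):
\begin{verbatim}
// Uniform point on the parallelogram o + u*a + v*b, u,v in [0,1).
Point3f samplePosition(const Point2f &s, Normal3f &n) {
    n = a.cross(b).normalized();
    return o + s.x() * a + s.y() * b;
}
float pdf(const Point3f &) const {
    return 1.f / a.cross(b).norm(); // uniform density, area = |a x b|
}
\end{verbatim}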
\paragraph*{Task 4}
You will need a list of emitters in the scene.
Hook into \texttt{Scene::addChild}.
In our assignments, surface emitters are children of meshes.
The emitter case in the switch statement is meant for point lights or other emitters without a physical surface (that code was made for the EPFL course; our assignments differ).
Additionally, the emitter object needs a pointer to the geometry (mesh or parallelogram, otherwise the sampling code wouldn't have any data).
Don't be afraid of adding things to the headers or creating new ones; it's your design now.
\paragraph*{Task 5}
Implement the direct lighting integrator for light sampling.
Instead of taking samples on the hemisphere, you have to sample a light source.
That consists of first picking a light (uniformly, or by importance sampling according to the emitted light) and then sampling a point on its surface.
Once you have a point, cast a shadow ray and compute the contribution ($\f{f}(x)$ divided by the joint pdf).
Use a boolean property to switch between hemisphere sampling and surface sampling.
It shouldn't be hard to see the difference in quality.
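Keep in mind that sampling a position instead of a direction changes the integration domain from solid angle to surface area, which brings in the geometry term from the lecture. Roughly (pseudocode with placeholder names; \texttt{pdfLight} is the joint pdf of picking this light and this point on it):
\begin{verbatim}
// f/p = (rho/pi) * Le * cos(theta_x) * cos(theta_y) / dist^2 / pdfLight
d     = y - x                  // x: shaded point, y: point on light
dist2 = dot(d, d)
wi    = normalize(d)
if shadowRayBlocked(x, y): return black
cosX  = max(0, dot(n_x,  wi))  // at the surface
cosY  = max(0, dot(n_y, -wi))  // at the light
return (rho / pi) * Le * cosX * cosY / (dist2 * pdfLight)
\end{verbatim}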
\section{Simple Path Tracing (15 Points)}
\subsection{Implement the recursive path tracing algorithm (8 points)}
Create a new integrator and call it \texttt{path\_tracer\_recursive}(\texttt{.cpp}).
Start with a copy of the direct lighting integrator.
It might pay off to keep your code clean and make small refactorings while working on it.
\paragraph*{Task 1, starter code (5 easy points)}
Create an additional \texttt{Li} function that also keeps track of the current depth.
Now you can use the pseudo code from the lecture as a template to implement a simple path tracer.
\begin{verbatim}
Li(Scene scene, Ray ray, int depth) {
    Color value = 0;
    if (!findIntersection(scene, ray))
        return value;
    Intersection its = getIntersection(scene, ray);
    // Take care of emittance
    if (isLightSource(its))
        value += getLightRadiance(its);
    if (depth >= 3)
        return value;
    // Generally, the BRDF should decide on the next ray (e.g. for specular
    // reflections). For now you can assume white diffuse BRDF and uniform
    // hemisphere sampling. Therefore, replace the code as you see fit.
    BRDF brdf = getBRDF(its);
    Color brdfValue = sampleBrdf(brdf, -ray, wo);
    // Call recursively for indirect lighting
    value += brdfValue * Li(scene, wo, depth + 1);
    return value;
}
\end{verbatim}
You can then observe how the image becomes more realistic as you increase the depth.
\paragraph*{Task 2, Russian Roulette (1 easy and 2 normal points)}
Implement Russian Roulette with a minimum depth of 4 according to the slides.
It's probably easier to first implement a version with a fixed continuation probability (1 point).
But the proper way to do it is to keep track of the \textit{throughput}.
With every bounce, the importance emitted from the camera is attenuated, and the probability for continuation should become lower.
You should keep track of this throughput in a Color3f vector, and use its largest coefficient for Russian Roulette (2 Points).
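A sketch of the throughput-based variant (the cap of $0.99$ is our arbitrary choice here, not prescribed by the slides):
\begin{verbatim}
// Russian roulette, only after a minimum depth of 4:
if (depth >= 4) {
    float q = std::min(throughput.maxCoeff(), 0.99f);
    if (sampler->next1D() > q)
        return value;    // terminate the path
    throughput /= q;     // compensate to keep the estimator unbiased
}
\end{verbatim}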
\subsection{Implement path tracing in a loop (5 bonus points)}
Every recursive algorithm can be written in a loop as well.
Sometimes a stack is needed, but in the path tracer that is not necessary.
The loop form is much friendlier to the processor, and you can't get stack overflows (which could happen with deep recursion, e.g., through many refractions).
The code should be pretty similar.
If you implemented Russian roulette, you already keep track of the throughput.
Now you should get roughly something like this:
\begin{verbatim}
Li(Scene scene, Ray ray, int depth) {
    Color value = 0;
    Color throughput = 1;
    // .. some other stuff
    while (true) {
        // stuff
        throughput *= "something <= 1"
        // stuff
        value += throughput * something
        if (something)
            break;
    }
    return value;
}
\end{verbatim}
\end{verbatim}
You might \textit{break}, or add things to \textit{value} in more than one place, or in a different order.
This is just the basic idea.
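Filled in for the diffuse-only setting of this assignment, it might look roughly like the following (a sketch under the same assumptions as before, not the reference solution):
\begin{verbatim}
Color3f Li(scene, ray) {
    Color3f value(0.f), throughput(1.f);
    int depth = 0;
    while (true) {
        its = intersect(scene, ray);
        if (!its) break;
        if (isEmitter(its))
            value += throughput * radiance(its);
        if (++depth >= 4) { // Russian roulette as above
            float q = std::min(throughput.maxCoeff(), 0.99f);
            if (sampler->next1D() > q) break;
            throughput /= q;
        }
        wo = toWorld(squareToUniformHemisphere(sampler->next2D()));
        // (rho/pi) * cos(theta) / (1/(2*pi)) = 2 * rho * cos(theta)
        throughput *= 2.f * rho * cosTheta(wo);
        ray = Ray3f(its.p, wo);
    }
    return value;
}
\end{verbatim}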
\subsection{Use the BRDF / BSDF interface (2 bonus points)}
You will have to implement that later anyway, but you can do it early and gather bonus points.
These 2 bonus points are only available in conjunction with a working path tracer.
\subsection*{Submission format}
%To be announced.
\input{submission.tex}
\subsection*{Words of wisdom}
\begin{itemize}
\item Remember that you don't need all the points to get the best grade. The workload calculation for the 3 ECTS assumes you take the exam, which gives a lot of points.
\item Nori provides you with a \texttt{Sampler} that is passed to the functions that produce the integrator input. Use this class to draw values from a canonical random variable.
\item Be careful of so-called ``self-intersections''. These happen when, due to inaccuracies in floating point computations, you immediately hit the same surface that you started your ray from. You can avoid these by offsetting the ray origin by a small epsilon. The \texttt{mint} parameter of the ray can help you there!
\item Hemisphere sampling and light source sampling are two methods to compute the same integral. Therefore, given enough samples, both should converge to the same result.
\item The framework is using Eigen under the hood for vectors and matrices etc. Be careful when using \texttt{auto} in your code \href{https://eigen.tuxfamily.org/dox/TopicPitfalls.html}{(Read here why)}.
\item Please use TUWEL for questions, but refrain from posting critical code sections.
\item You are encouraged to write new test cases to experiment with challenging scenarios.
\item Tracing rays is expensive. You don't want to render high resolution images or complex scenes for testing. You may also want to avoid the \texttt{Debug} mode if you don't actually need it (use a release with debug info build!).
\item To reduce the waiting time, Nori runs multi-threaded by default. To make debugging easier, you will want to set the number of threads to 1. To do so, simply execute Nori with the additional arguments \texttt{-t 1}.
\end{itemize}
\end{document}