Commit 9f5467bb authored by Celarek, Adam

update assignment 1

parent 8183d868
@@ -10,7 +10,7 @@
\newcommand{\todo}[1]{\textcolor{red}{\textbf{#1}}}
\title{Assignment 1: Monte Carlo Integration and Path Tracing}
-\deadline{2023-04-16 23:59}%2020-05-13 23:59
+\deadline{2024-04-23 23:59}%2020-05-13 23:59
\teaser{
\hspace*{\fill}
\includegraphics[trim={1cm 1cm 1cm 1cm},clip,width=0.32\linewidth]{figures/cbox_ao_uniform.png}
@@ -183,7 +183,7 @@ Once you have a point, cast a shadow ray and compute the contribution, if any ($
If there are multiple lights, make sure to compensate for the fact that you chose a particular one!
Add a boolean property to allow switching between hemisphere sampling and surface sampling.
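To make the multi-light compensation concrete, here is a minimal sketch (the `Light` struct and the flat emitted value are placeholder assumptions, not the framework's actual types): picking one of $N$ lights uniformly has pdf $1/N$, so the sampled contribution must be multiplied by $N$.

```cpp
#include <vector>

// Placeholder light type; in the real framework this would be a scene light.
struct Light { double emitted; };

// Pick one light uniformly with a single uniform random number u01 in [0,1).
// The pdf of picking any particular light is 1/N, so we multiply by N.
double sampleOneLight(const std::vector<Light>& lights, double u01) {
    if (lights.empty()) return 0.0;
    int n = static_cast<int>(lights.size());
    int i = static_cast<int>(u01 * n);
    if (i >= n) i = n - 1;                    // guard against u01 == 1.0
    double contribution = lights[i].emitted;  // stand-in for the shadow-ray result
    return contribution * n;                  // compensate for pdf = 1/N
}
```

Averaged over many uniform picks, this estimator recovers the sum of all light contributions, which is exactly the compensation the text asks for.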
-\section{Simple Path Tracing (12 Points + 17 Bonus)}
+\section{Simple Path Tracing (12 Points + 30 Bonus)}
This will be the first version of your path tracer. Based on the rendering equation, you will get your first images with indirect lighting, shadows and multiple light sources.
@@ -249,10 +249,22 @@ Li(Scene scene, Ray ray, int depth) {
You might \textit{break}, or add things to \textit{value} in more than one place, or in a different order.
This is just the basic idea.
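The recursive shape of the pseudocode can be seen in isolation with a toy scalar version (purely illustrative: a constant emitted term and a constant albedo stand in for the real radiance and BRDF, nothing here is from the actual framework):

```cpp
// Toy model of the Li recursion: each bounce adds the emitted radiance and
// attenuates the rest of the path by a constant albedo, stopping at maxDepth.
double toyLi(double emitted, double albedo, int depth, int maxDepth) {
    if (depth >= maxDepth) return 0.0;  // recursion cut-off
    return emitted + albedo * toyLi(emitted, albedo, depth + 1, maxDepth);
}
```

With emitted = 1 and albedo = 0.5 the result approaches the geometric series $1/(1-0.5) = 2$ as the depth limit grows, which is a handy sanity check for the recursion structure.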
-\subsection{Implement a higher-dimensional path tracing effect (15 Bonus Points)}
+\subsection{Implement a higher-dimensional path tracing effect (15 bonus points)}
Implement either motion blur or depth-of-field effects. For motion blur, you will need to give something in your scene the ability to move (scene objects, camera). For each path, you will need an additional uniformly random time variable \texttt{t} and consider it when you perform intersection with your scene. To implement depth-of-field, you will need two additional uniformly random \texttt{u,v} variables for each path and consider them in the setup of your camera ray. You get 9 points for motion blur and 6 for depth of field.
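For depth of field, the two extra uniforms can for instance be mapped to a point on a disk-shaped aperture with a polar mapping (a sketch with an assumed `LensSample` struct; a concentric disk mapping would work just as well):

```cpp
#include <cmath>

struct LensSample { double x, y; };  // offset on the lens plane

// Map two uniform [0,1) variables to a uniformly distributed point on a
// disk-shaped aperture of the given radius (polar mapping).
LensSample sampleLens(double u, double v, double radius) {
    const double kPi = 3.14159265358979323846;
    double r = radius * std::sqrt(u);  // sqrt keeps the area density uniform
    double theta = 2.0 * kPi * v;
    return { r * std::cos(theta), r * std::sin(theta) };
}
```

The camera ray origin is then offset by this sample and the direction re-aimed at the focal plane; for motion blur the analogous step is feeding the extra uniform \texttt{t} into the intersection routine.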
\subsection{Standard deviation images (3 bonus points) and adaptive sampling (up to 9 bonus points)}
Standard deviation (SD) can be computed directly from the samples.
You can then colour map and store it in an extra output image (3 bonus points).
You can compute either the SD of the samples or the SD of the rendered estimate (the two differ only by a factor of $\sqrt{N}$, where $N$ is the number of samples).
The SD gives you an estimate on the error, and you can use it for adaptive sampling, i.e., throw more samples at pixels that have a high SD.
However, this simple algorithm is biased.
You get 3 points if you implement it.
You get 3 extra points if you explain why it is biased and implement an easy fix.
Another 3 extra points if you implement a fix that reuses all samples (your own research).
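One way to get the per-pixel SD without storing all samples is Welford's online algorithm (a sketch; the struct name and storage layout are assumptions, not part of the framework):

```cpp
#include <cmath>

// Running per-pixel statistics: Welford's online algorithm keeps a running
// mean and sum of squared deviations, so mean and SD are available at any
// time without storing the individual samples.
struct PixelStats {
    long long n = 0;
    double mean = 0.0, m2 = 0.0;
    void add(double x) {
        ++n;
        double d = x - mean;
        mean += d / n;
        m2 += d * (x - mean);
    }
    double sampleVariance() const { return n > 1 ? m2 / (n - 1) : 0.0; }
    // SD of the pixel estimate (the mean of n samples); this is the
    // quantity that shrinks with sqrt(n) and drives adaptive sampling.
    double sdOfMean() const { return std::sqrt(sampleVariance() / n); }
};
```

An adaptive sampler would then keep shooting extra samples at pixels whose `sdOfMean()` is still above a target threshold.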
\subsection{Be patient (2 Bonus Points)}
The path-traced images you get with the provided test scene configurations are very noisy. How long does it take on your machine to compute them? How much longer do you think it would take until you reach a quality you are happy with? Experiment with the number of samples and report whether the development matches your expectations. Given that our scenes are extremely simple, do you think that with this kind of performance it is feasible to render entire \emph{movies}?
@@ -5,3 +5,4 @@
- first assignment: git user must be studi mail
- make a comment that you should put if/switches for direct/brdf sampling strategies etc. before the code, not interleaved with other logic (easier)
- stress to ask on discord
+- mc assignment: adaptive sampling