diff --git a/assignments_2022/assignment3_sampling_materials_bits_and_bytes/main.tex b/assignments_2022/assignment3_sampling_materials_bits_and_bytes/main.tex
index 9cbed2d9a7902b8c33a8ffb596b46d21b3c577c6..bcef666bc927ec8a1709fe14945915e44b7eecbf 100644
--- a/assignments_2022/assignment3_sampling_materials_bits_and_bytes/main.tex
+++ b/assignments_2022/assignment3_sampling_materials_bits_and_bytes/main.tex
@@ -9,8 +9,8 @@
 \newcommand{\f}[1]{\operatorname{#1}}
 \newcommand{\todo}[1]{\color{red}{\textbf{#1}}}
 
-\title{Assignment 3: Importance Sampling}
-\deadline{2022-06-02 23:59}%2020-05-13 23:59
+\title{Assignment 3: Materials and Importance Sampling}
+\deadline{2022-05-24 23:59}%2020-05-13 23:59
 \teaser{
 \hspace*{\fill}
 \includegraphics[width=0.32\linewidth]{figures/ajax_pt_uniform.png}
@@ -28,7 +28,10 @@
 
 \maketitle
 
-In this assignment you will extend the Monte Carlo rendering system from the last assignment with importance sampling of various functions and next event estimation. In the above image, we see what these methods can do: the left scene is rendered with uniform hemisphere sampling. In the center, we use cosine-weighted hemisphere sampling (importance sampling). On the right, we use next event estimation to perform surface sampling in a recursive path tracer. Images were rendered with the same number of samples per pixel (32).
+In this assignment you will extend the Monte Carlo rendering system from the last assignment with basic materials, importance sampling of various functions, and next event estimation. In the above image, we see what these methods can do: the left scene is rendered with uniform hemisphere sampling. In the center, we use cosine-weighted hemisphere sampling (importance sampling). On the right, we use next event estimation to perform surface sampling in a recursive path tracer. All images were rendered with the same number of samples per pixel (32).
+
+This assignment is quite long (more than 50 points in total), but don't worry: you don't have to implement everything.
+Pick and choose whatever is most interesting to you.
 
 \textbf{We have updated the \texttt{assignments} repository. Please merge all upstream changes before starting to work.}
 \begin{verbatim}
@@ -43,7 +46,8 @@ git push
 \end{verbatim}
 We also provide a reference implementation for assignment 2; you can download it from TUWEL.
 
-\section{Sample Warping (7 easy points, 9 bonus points)}
+
+\section{Sample Warping (3 easy points, 7 bonus points)}
 Random numbers are often generated uniformly in the range between 0 and 1. We can combine multiple such random numbers to sample Cartesian domains uniformly, but different distributions are needed, e.g., to get a uniform distribution in a non-Cartesian domain (for recursive rendering, for instance, we need to sample the hemisphere) or for importance sampling techniques. This task can be fully solved in \texttt{warp.cpp}.
 
 The process of \emph{changing} an existing distribution is called warping.
@@ -61,8 +65,8 @@ For an introduction on how to use \texttt{warptest} and what each distribution i
 \texttt{SquareToUniformHemisphere} is already there, some of you were already cleverly using it in the first assignment to do uniform hemisphere sampling. 
 
 \begin{description}
-	\item[squareToTent] 2 points, test your basic Monte Carlo sampling knowledge, \textbf{bonus}
-	\item[squareToUniformDisk] 3 points, \textbf{required}
+	\item[squareToTent] 1 point, tests your basic Monte Carlo sampling knowledge, \textbf{bonus}
+	\item[squareToUniformDisk] 1 point, \textbf{required}
 	
 	\textbf{Sampling:} 
 	Use the input canonic variables to generate samples $(r,\theta)$ in polar coordinates where $r \in [0, 1)$ and $\theta \in [0, 2\pi)$, such that they are uniformly distributed when transformed to Cartesian coordinates $(x,y)$. 
@@ -80,9 +84,9 @@ For an introduction on how to use \texttt{warptest} and what each distribution i
 	Note: For a uniform distribution, the PDF is constant. 
 	Just make sure that the sample location is valid!
 	
-	\item[squareToUniformSphere] 2 points, can use it to implement spherical lights, \textbf{bonus}
+	\item[squareToUniformSphere] 1 point, can be used to implement spherical lights, \textbf{bonus}
 	
-	\item[squareToCosineHemisphere] 4 points if inversion method, 1 point if Malley's, \textbf{required}
+	\item[squareToCosineHemisphere] 2 points, \textbf{required}
 	
 	\textbf{Sampling:} The input is a 2D vector \texttt{sample} that holds values of two canonical random variables $\xi_1, \xi_2$. 
 	Use them to generate samples $(\theta, \phi)$ on the unit hemisphere such that they have a distribution proportional to $\cos(\theta)$ (i.e., more samples the closer we get to the pole of the hemisphere) and convert them to $\omega$ with the transformation for spherical coordinates.
@@ -104,12 +108,99 @@ For an introduction on how to use \texttt{warptest} and what each distribution i
 	
 \end{description}
 
-\section{Importance Sampling (?? easy points, ?? normal points and ?? hard points)}
+\section{Light Surface Sampling (3--6 points)}
+Extend your direct lighting integrator with surface sampling.
+This is a prerequisite for some of the tasks in this assignment.
+
+Light surface sampling is essential for performant path tracers, where it is referred to as ``next event estimation'' or ``direct light sampling''.
+In contrast to hemisphere sampling, you do not simply shoot rays around the hemisphere and hope to find light.
+Instead, you try to connect hit points directly to light sources and check whether that connection is possible.
+
+You will need to sample area light surfaces, i.e., you need a function to pick uniformly random points on the surface of each light.
+There are 2 options, of which you should choose \textbf{one} for your implementation:
+\begin{enumerate}
+	\item \textbf{Parallelogram lights (3 points)}
+	Parallelograms are very easy to sample uniformly: just use a linear combination $k_1 a + k_2 b$ of its side vectors $a, b$ with coefficients $k_1, k_2$ where $0 \leq k_1, k_2 < 1$. Obviously, this option restricts the light source shapes in your scene.
+	
+	\item \textbf{Triangle mesh lights (6 points)}
+	This can give very cool results, e.g., imagine a glowing mesh.
+	Mesh sampling is not that hard either: select each triangle with probability proportional to its surface area (larger triangles are selected more often).
+	The implementation in \texttt{nori/dpdf.h} will be useful here.
+	Once you have selected a triangle, sample a point on it (\url{http://mathworld.wolfram.com/TrianglePointPicking.html}).
+	
+	Be careful when you reuse random numbers! Example: with 2 triangles, \texttt{s = rand(0, 1) < 0.5} would give you the first triangle.
+	If you then reuse \texttt{s} for sampling the position (after using it to discretely sample the triangle), you will clearly only ever sample the first half of the first triangle and the second half of the second.
+	To avoid such artefacts, \texttt{s} needs to be shifted and scaled!
+	\texttt{DiscretePDF::sampleReuse} does precisely that (see the sketch after this list).
+	Later on, you could use it for picking the light source as well (querying one random light per sample is enough if you normalise properly).
+	But if you are uncertain, you can always just draw additional fresh random numbers from \texttt{sampler}.
+	
+	%More complex samplers would be needed for large meshes, for instance such that do importance sampling based on distance, cosine, etc.
+	%Please don't go that far for now.
+\end{enumerate}
+You can get 3 points for parallelogram lights or 6 points for triangle mesh lights, \textbf{but not both}.
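+
+To make the pieces concrete, here is a minimal sketch of uniform triangle mesh sampling. It assumes hypothetical helpers \texttt{getTriangleCount}, \texttt{getTriangleArea} and \texttt{getVertexPosition} as well as an assumed member \texttt{m\_distr}; only \texttt{DiscretePDF} is taken from \texttt{nori/dpdf.h}:
+\begin{verbatim}
+// Build once: a DiscretePDF over triangle areas.
+void buildDistribution() {
+    for (uint32_t i = 0; i < getTriangleCount(); ++i)
+        m_distr.append(getTriangleArea(i));
+    m_distr.normalize();
+}
+
+// Pick a triangle proportionally to its area, then a uniform point on it.
+Point3f samplePosition(float &s, const Point2f &sample) {
+    // sampleReuse rescales s, so it can be reused as a fresh random number
+    uint32_t idx = (uint32_t) m_distr.sampleReuse(s);
+
+    // Warp the unit square to uniform barycentric coordinates
+    float su = std::sqrt(sample.x());
+    float u = 1.0f - su, v = sample.y() * su;
+
+    return u * getVertexPosition(idx, 0)
+         + v * getVertexPosition(idx, 1)
+         + (1.0f - u - v) * getVertexPosition(idx, 2);
+}
+\end{verbatim}
+The corresponding \texttt{pdf} is simply one over the total surface area, since the sampling is uniform over the whole mesh.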
+
+\paragraph*{Task 1}
+Implement sampling.
+The parallelogram, mesh, or emitter classes would be good places (your choice).
+You need to implement something like \texttt{samplePosition} (taking random numbers, returning a position and its surface normal) and \texttt{pdf} (taking a position and returning the sample probability density).
+
+\paragraph*{Task 2}
+To pick one of the available light sources for sampling, you will need a list of emitters in the scene.
+Hook into \texttt{Scene::addChild}.
+In our assignments, surface emitters are always children of meshes.
+The emitter case in the switch statement is meant for point lights and other emitters without a physical surface; you can ignore it for now.
+Additionally, the emitter object needs a reference to its geometry (mesh or parallelogram), otherwise the sampling code has no data to work with.
+Don't be afraid to add stuff to headers or create new ones, it's your design now.
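+
+A possible shape for this hook, as a sketch only: \texttt{m\_emitters} is an assumed new member of \texttt{Scene}, and \texttt{setMesh} is a hypothetical helper on your emitter class.
+\begin{verbatim}
+// In Scene::addChild, the mesh case (sketch, adapt to your class layout):
+case EMesh: {
+    Mesh *mesh = static_cast<Mesh *>(obj);
+    m_meshes.push_back(mesh);
+    if (mesh->isEmitter()) {
+        // Give the emitter a reference to its geometry, so that
+        // the sampling code has data to work with.
+        mesh->getEmitter()->setMesh(mesh);
+        m_emitters.push_back(mesh->getEmitter());
+    }
+    break;
+}
+\end{verbatim}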
+
+\paragraph*{Task 3}
+Implement the direct lighting integrator for light source sampling.
+Pick a light, either uniformly or according to the emitted light (importance sampling), and then sample a point on its surface.
+Once you have a point, cast a shadow ray and compute the contribution, if any ($\f{f}(x)$ divided by the joint pdf).
+If there are multiple lights, make sure to compensate for the fact that you chose a particular one!
+Add a boolean property to allow switching between hemisphere sampling and surface sampling.
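+
+Putting it together, one surface-sampling estimate could look roughly like the sketch below. \texttt{getEmitters}, \texttt{samplePosition}, \texttt{pdfPosition} and \texttt{getRadiance} are illustrative assumptions, not given Nori API:
+\begin{verbatim}
+Color3f Li(const Scene *scene, Sampler *sampler, const Ray3f &ray) const {
+    Intersection its;
+    if (!scene->rayIntersect(ray, its))
+        return Color3f(0.0f);
+
+    // Pick one of the lights uniformly at random ...
+    const std::vector<Emitter *> &lights = scene->getEmitters();
+    size_t i = std::min((size_t) (sampler->next1D() * lights.size()),
+                        lights.size() - 1);
+    const Emitter *light = lights[i];
+
+    // ... and sample a point on its surface.
+    Point3f p; Normal3f n;
+    light->samplePosition(sampler->next2D(), p, n);
+
+    Vector3f d = p - its.p;
+    float dist2 = d.squaredNorm();
+    d /= std::sqrt(dist2);
+
+    // Shadow ray: is the connection possible at all?
+    if (scene->rayIntersect(Ray3f(its.p, d, Epsilon,
+                                  std::sqrt(dist2) - Epsilon)))
+        return Color3f(0.0f);
+
+    // Cosines at the shading point and at the light.
+    float cosS = std::max(0.0f, its.shFrame.n.dot(d));
+    float cosL = std::max(0.0f, n.dot(-d));
+
+    BSDFQueryRecord bRec(its.shFrame.toLocal(-ray.d),
+                         its.shFrame.toLocal(d), ESolidAngle);
+    Color3f f = its.mesh->getBSDF()->eval(bRec);
+
+    // Joint pdf: (1 / #lights) * area pdf of the sampled point.
+    float pdf = light->pdfPosition(p) / lights.size();
+    return f * light->getRadiance() * cosS * cosL / (dist2 * pdf);
+}
+\end{verbatim}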
+
+\section{Materials (15 Points)}
+\label{sec:materials}
+\subsection{Mirror BSDF (3 easy Points)}
+The mirror BSDF reflects the incoming ray using the normal.
+All light (and importance) is reflected in exactly this direction.
+This has several implications:
+\begin{itemize}
+	\item All light is reflected, which means that there is no cosine factor on the incoming light. Technically, this means that the BRDF of a mirror is actually $1/\cos$. In Nori, however, the cosine term is computed in the \texttt{BSDF::sample} function for all materials. We can therefore omit the cosine computation and just return $1$.
+	\item The PDF is a Dirac delta function (a spike of infinite height that integrates to one). We can't use $\infty$ in code as it would produce \texttt{NaN}s. We work around that issue in the following way: when sampling, we don't actually have to divide by the PDF, so there is no problem. When the \texttt{pdf} or \texttt{eval} functions are queried, we just return $0$ (in theory it is almost surely impossible to generate such a sample by chance; in practice it's super unlikely).
+	\item We specify \texttt{bRec.measure = EDiscrete} so that the rendering code can deal with this special case transparently, regardless of whether the BSDF is a mirror or a dielectric.
+\end{itemize}
+
+Implementing the mirror gives you \textbf{3 points} and enables you to gather more points for MIS and friends.
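+
+For orientation, the core of such a \texttt{sample} method is tiny; a minimal sketch in Nori's local shading frame (normal at $(0,0,1)$):
+\begin{verbatim}
+Color3f sample(BSDFQueryRecord &bRec, const Point2f &) const {
+    if (Frame::cosTheta(bRec.wi) <= 0)
+        return Color3f(0.0f);
+
+    // Perfect reflection: flip the tangential components, keep z.
+    bRec.wo = Vector3f(-bRec.wi.x(), -bRec.wi.y(), bRec.wi.z());
+    bRec.measure = EDiscrete;
+    bRec.eta = 1.0f;  // no change of medium
+
+    // No cosine and no pdf division: everything cancels for a mirror.
+    return Color3f(1.0f);
+}
+\end{verbatim}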
+
+\subsection{The Dielectric BSDF (9 normal Points and 3 hard ones)}
+A dielectric BSDF can be used to model transparent objects like glass, diamonds or water.
+Implement it according to the lecture on materials or perhaps the course book \href{http://www.pbr-book.org/3ed-2018/Reflection_Models/Specular_Reflection_and_Transmission.html}{PBRT}.
+Use the BSDF in \texttt{dielectric.cpp} to make your solution accessible from scene files.
+Note that different sources use different conventions for the directions and indices of refraction they reference. You can use any convention you like, but the setup of Nori prefers \texttt{bRec.wi} to be the negative view ray direction.
+The dielectric BSDF cannot tell you which medium the view ray is coming from and which one it goes to; you have to figure this out yourself. It only provides the indices of refraction on the exterior and the interior of the object with the given material.
+
+One important note: so far, we have offset our rays along the surface normal before continuing with the next bounce. But if you actually \textbf{enter} an object, this is not a good idea! Instead, offset your rays along the negative surface normal. Also, if you want your dielectrics to work with next event estimation, you basically have to treat a hit with them like a hit with a mirror material, because they only reflect / refract in a single direction.
+
+Implementing up to this point gives you \textbf{9 points}.
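+
+To give you a starting point, here is a sketch of the branching logic in \texttt{sample}. It assumes Nori's \texttt{fresnel} helper from \texttt{common.cpp} (check its exact convention) and members \texttt{m\_intIOR}, \texttt{m\_extIOR}; treat it as a sketch, not a complete solution:
+\begin{verbatim}
+Color3f sample(BSDFQueryRecord &bRec, const Point2f &sample) const {
+    float cosThetaI = Frame::cosTheta(bRec.wi);
+    bool entering = cosThetaI > 0;             // which medium are we in?
+    float etaI = entering ? m_extIOR : m_intIOR;
+    float etaT = entering ? m_intIOR : m_extIOR;
+    float F = fresnel(std::abs(cosThetaI), etaI, etaT);
+
+    bRec.measure = EDiscrete;
+    if (sample.x() < F) {
+        // Reflection, exactly as in the mirror BSDF.
+        bRec.wo = Vector3f(-bRec.wi.x(), -bRec.wi.y(), bRec.wi.z());
+        bRec.eta = 1.0f;
+        return Color3f(1.0f);
+    }
+
+    // Refraction via Snell's law. Total internal reflection is already
+    // covered above, because then F == 1 and we never reach this branch.
+    float eta = etaI / etaT;
+    float cosThetaT =
+        std::sqrt(1.0f - eta * eta * (1.0f - cosThetaI * cosThetaI));
+    bRec.wo = Vector3f(-eta * bRec.wi.x(), -eta * bRec.wi.y(),
+                       entering ? -cosThetaT : cosThetaT);
+    bRec.eta = eta;
+    // Radiance changes density when switching media (see below).
+    return Color3f(1.0f / (eta * eta));
+}
+\end{verbatim}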
+
+While working on dielectrics, you might wonder what \texttt{BSDFQueryRecord::eta} is for. It is only really necessary when you perform Russian Roulette with throughput.
+When light switches media (e.g., vacuum $\rightarrow$ glass), the radiance it carries changes (see the slides for details).
+This change of density should be included in the BSDF weight that you return from the BSDF \texttt{sample} method.
+But if you use Russian Roulette with throughput, this may erroneously affect your decision to stop, since the throughput no longer strictly decreases with every bounce, but may in- or decrease somewhat randomly as you switch between media.
+We can counter this by keeping track of the relative eta in addition to the throughput.
+After each sampling / evaluation of the BSDF, update \texttt{eta *= bRec.eta} and use it to modify the Russian Roulette survival probability so that switching between media no longer influences the estimated throughput. For this to remain stable in all scenes, make sure that the other supported materials (diffuse, mirror) set a proper \texttt{bRec.eta = 1} to avoid unexpected behavior.
+
+Implementing this \texttt{bRec.eta} business, including RR, gives you \textbf{3 hard points}. You have to demonstrate the effectiveness of the eta-aware RR improvements using example scenes (include renderings).
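+
+As a small sketch of the bookkeeping (variable names assumed): \texttt{throughput} already contains the $1/\eta^2$ factors from the BSDF weights, and \texttt{eta} is the running product of \texttt{bRec.eta} along the path.
+\begin{verbatim}
+// Eta-aware Russian Roulette decision (sketch).
+bool survive(Color3f &throughput, float eta, Sampler *sampler) {
+    // Undo the purely geometric radiance scaling from media
+    // transitions before deciding whether to stop.
+    float q = std::min(throughput.maxCoeff() * eta * eta, 0.99f);
+    if (sampler->next1D() >= q)
+        return false;     // terminate the path
+    throughput /= q;      // keep the estimator unbiased
+    return true;
+}
+\end{verbatim}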
+
+\section{Importance Sampling (1 easy point, 9 normal points)}
 \label{sec:is}
-\subsection{Use cosine-weighted hemisphere samples for diffuse materials (5 easy points)}
+\subsection{Use cosine-weighted hemisphere samples for diffuse materials (1 easy point)}
 Use the cosine-weighted hemisphere sampling method, as described in the lecture. First make sure that your direct lighting and path tracing integrators use the diffuse BSDF class appropriately, then extend the diffuse BSDF with cosine-weighted hemisphere sampling. Ideally, you can reuse your warping solutions from the first part of this assignment! The BSDF should switch between using cosine-weighted and uniform hemisphere sampling, depending on the value of the \texttt{use\_cosine} flag provided by each object's material (default: \texttt{false}). Note that this affects both the sampling and PDF computation! Confirm for yourself that cosine-weighted hemisphere sampling can reduce the noise in your scenes. To test this, compare the output of the test scenes that end in \texttt{uniform} with the ones that end in \texttt{cosine}. The latter use cosine-weighted hemisphere sampling and should give slightly cleaner results.
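+
+A sketch of how the switch could look in the diffuse BSDF, assuming \texttt{m\_useCosine} is the member backing the \texttt{use\_cosine} property (remember that \texttt{pdf} must match the same choice):
+\begin{verbatim}
+Color3f sample(BSDFQueryRecord &bRec, const Point2f &sample) const {
+    if (Frame::cosTheta(bRec.wi) <= 0)
+        return Color3f(0.0f);
+    bRec.measure = ESolidAngle;
+    bRec.eta = 1.0f;
+
+    if (m_useCosine) {
+        bRec.wo = Warp::squareToCosineHemisphere(sample);
+        // pdf = cos(theta)/pi cancels against the BRDF and the cosine:
+        // (albedo/pi) * cos(theta) / (cos(theta)/pi) = albedo
+        return m_albedo;
+    } else {
+        bRec.wo = Warp::squareToUniformHemisphere(sample);
+        // pdf = 1/(2*pi); here the cosine does not cancel
+        return m_albedo * 2.0f * Frame::cosTheta(bRec.wo);
+    }
+}
+\end{verbatim}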
 
-\subsection{Next Event Estimation with diffuse materials (5 points)}
+\subsection{Next Event Estimation with diffuse materials (4 points)}
 Implement next event estimation (NEE) for your diffuse path tracer using the 0/1 strategy, i.e., no mixing of sampling strategies.
 
 It should be active depending on a boolean \texttt{nee} in the test file (default \texttt{false}).
@@ -134,15 +225,13 @@ The \texttt{Mirror} class accounts for that by always returning 0 in the \texttt
 
 If you want mirror materials to play nice with NEE, you need to take special care: for any direction that is not explicitly the reflection vector, the sampling probability is 0, so you simply can't do light source sampling on mirrors. But if you just ignore direct light on mirror materials, the light sources will be missing in mirror reflections! You can achieve 5 points if you make mirror materials work with NEE.
 
-
-
 Hence, you need to treat this as a special case:
 \begin{enumerate}
 	\item Do not perform NEE when on such a surface.
 	\item If the previously hit surface was Dirac, then do add the emittance of the current surface. \texttt{BSDFQueryRecord::measure} and \texttt{EMeasure::EDiscrete} were made for this purpose.
 \end{enumerate}
 
-\section{Multiple Importance Sampling (MIS)}
+\section{Multiple Importance Sampling (MIS, 5 normal points, 10 hard points)}
 \label{sec:mis}
 MIS is a bit hard to wrap your head around, but once you do, you can get quite a light-bulb moment.
 We will try to take it slow and divide the implementation into several parts.
@@ -182,8 +271,7 @@ This is, because those materials have a Dirac delta probability functions: all r
 MIS requires a similar special treatment.
 Implement it!
 
-\section{Materials (15 Points + 15 Bonus Points)}
-\label{sec:materials}
+
 \begin{figure}[h!]
 	\hspace{\fill}
 	\begin{subfigure}{0.48\linewidth}
@@ -198,96 +286,6 @@ Implement it!
 	\hspace{\fill}
 \end{figure}
 
-
-
-\section{Sampling and Appearance (10 Points + 15 Bonus Points)}
-\subsection{Low-Discrepancy Sampling (9 Points + 10 Bonus Points)}
-\begin{figure}[h!]
-	\hspace{\fill}
-	\begin{subfigure}{0.45\linewidth}
-		\includegraphics[width=\linewidth]{figures/ajax_dl_surface_ind}
-		\caption{Direct lighting with independent sampler}
-	\end{subfigure}
-	\hfill
-	\begin{subfigure}{0.45\linewidth}
-		\includegraphics[width=\linewidth]{figures/ajax_dl_surface}
-		\caption{Direct lighting with Halton sampler}
-	\end{subfigure}
-	\hspace{\fill}
-\end{figure}
-Add an additional Halton-based sample generator named \texttt{halton} to your solution. The sampler should be able to produce 2D and 1D sequences, based on Halton low-discrepancy sequences that use the radical inverse. For 2D samples, use a combined base-11,13 Halton sequence and for 1D, use a base-7 Halton sequence. To minimize repeating patterns, you should initialize your Halton sampler states (use three separate state variables) to random values (\texttt{rand()}). Try your implementation on the light surface sampling scene. Usually, such simple samplers should only be used for individual effects (e.g., picking subpixel coordinates for rays), not the full rendering procedure, but direct lighting is simple enough, so it actually works out ok. If you want to break it, you can try it on full path tracing scenes or change 2D sampling to use base-2,3 instead and see what happens!
-
-For 10 bonus points, implement a sophisticated Halton-based sampling strategy that actually can replace the independent sampler completely! Hints and suggestions for making it work are described in the  \href{https://www.pbr-book.org/3ed-2018/Sampling_and_Reconstruction/The_Halton_Sampler}{course book}. Make sure that your renderings with Halton converge to the same result as with the independent sampler!
-
-\subsection*{Antialiasing (2 Points)}
-\begin{figure}[h!]
-	\hspace{\fill}
-	\begin{subfigure}{0.48\linewidth}
-		\includegraphics[width=\linewidth]{figures/cbox_pt_low_alias}
-		\caption{Sampling pixel center only}
-	\end{subfigure}
-	\hfill
-	\begin{subfigure}{0.48\linewidth}
-		\includegraphics[width=\linewidth]{figures/cbox_pt_low}
-		\caption{Sampling over entire pixel}
-	\end{subfigure}
-	\hspace{\fill}
-\end{figure}
-Before we get down to business, let's first get rid of aliasing in our renderings. Until now, we have only ever shot our rays through the center of the pixels. If you have a lower-resolution display or zoomed in on your renderings, you probably saw that they are somewhat jaggy because of this (look at sharp edges, like the bottom of the front box)! We can quickly fix that by running minimalistic antialiasing for the whole pixel: in \texttt{main.cpp}, instead of shooting rays always through the pixel center, make it so that the rays can sample the full pixel width and height!
-Also make sure that your changes are stored in \texttt{pixelSample} and then passed to \texttt{block.put}. This will be important later. 
-
-
-
-\subsection{Support for Filtering (1 Point)}
-When you fixed aliasing and computed output colors by integrating values over the whole pixel, you basically used a pixel-sized box filter.
-This is easy to implement, but really not a good choice for filtering: the box filter is sometimes jokingly referred to as the worst filter available. 
-To get support for a few different filters, you need to implement the corresponding suppport in Nori.
-Once done, you should experiment with different filters and sample counts, to see what a difference they can make.
-
-Apart from the theory behind it, which is not too complex, the \textbf{implementation} for supporting separable filters in a tiled renderer is not trivial (it's not that hard either), so we provide the missing code here:
-\begin{verbatim}
-	void ImageBlock::put(const Point2f &_pos, const Color3f &value) {
-		if (!value.isValid()) {
-			/* If this happens, go fix your code instead of removing this warning ;) */
-			cerr << "Integrator: computed an invalid radiance value: "
-			<< value.toString() << endl;
-			return;
-		}
-		
-		/* Convert to pixel coordinates within the image block */
-		Point2f pos(
-		_pos.x() - 0.5f - (m_offset.x() - m_borderSize),
-		_pos.y() - 0.5f - (m_offset.y() - m_borderSize));
-		
-		/* Compute the rectangle of pixels that will need to be updated */
-		BoundingBox2i bbox(
-		Point2i((int)  std::ceil(pos.x() - m_filterRadius),
-		(int)  std::ceil(pos.y() - m_filterRadius)),
-		Point2i((int) std::floor(pos.x() + m_filterRadius),
-		(int) std::floor(pos.y() + m_filterRadius)));
-		bbox.clip(BoundingBox2i(Point2i(0, 0),
-		Point2i((int) cols() - 1,
-		(int) rows() - 1)));
-		
-		/* Lookup values from the pre-rasterized filter */
-		for (int x=bbox.min.x(), idx = 0; x<=bbox.max.x(); ++x)
-		m_weightsX[idx++] = m_filter[(int) (std::abs(x-pos.x()) * m_lookupFactor)];
-		for (int y=bbox.min.y(), idx = 0; y<=bbox.max.y(); ++y)
-		m_weightsY[idx++] = m_filter[(int) (std::abs(y-pos.y()) * m_lookupFactor)];
-		
-		/* Add the colour valuel after filtering to the current estimate.
-		* Color4f extends the Color3f value by appending a 1. Therefore,
-		* in the 4th component we are automatically accumulating the filter
-		* weight. */
-		for (int y=bbox.min.y(), yr=0; y<=bbox.max.y(); ++y, ++yr) 
-		for (int x=bbox.min.x(), xr=0; x<=bbox.max.x(); ++x, ++xr) 
-		coeffRef(y, x) += Color4f(value) * m_weightsX[xr] * m_weightsY[yr];
-	}
-\end{verbatim}
-
-\subsection{Tone Mapping (5 Bonus Points)}
-There is already a basic type of tone mapping in Nori (computations are in float, output is in 8bit integer). Identify that code and extend it with the \href{http://www.cmap.polytechnique.fr/~peyre/cours/x2005signal/hdr_photographic.pdf}{Reinhard operator} for tone mapping, or use something similarly effective.
-
 \section*{Submission format}
 
 \textbf{Put a short PDF or text file called \texttt{submission<X>} into your git root directory and state all the points that you think you should get. This does not need to be long. Also mention the code files, where you implemented something if it is not obvious.}
@@ -327,8 +325,9 @@ Make sure to keep the directory structure in your submitted archive the same as
 \end{itemize}
 
 \section*{Appendix: The Phong BSDF}
-The Phong reflection model is one of the oldest ones.
-The original Phong was not even energy conserving, therefore we will implement the modified Phong \href{https://www.cs.princeton.edu/courses/archive/fall03/cs526/papers/lafortune94.pdf}{(Lafortune and Willems, 1994)}.
+The Phong reflection model is one of the oldest, but it is not physically plausible.
+Hence, we have banished it to this appendix.
+The original Phong was not even energy-conserving; therefore, we will present the modified Phong \href{https://www.cs.princeton.edu/courses/archive/fall03/cs526/papers/lafortune94.pdf}{(Lafortune and Willems, 1994)}.
 That report might be a bit hard to read (but doable, and there are some additional variance reducing improvements), so we will distil everything important into a summary.
 
 Phong is a glossy BSDF, consisting of a diffuse and specular part. The BSDF equation is:
diff --git a/assignments_2022/assignment4_project/main.tex b/assignments_2022/assignment4_project/main.tex
index ebd00033013c832873d70b927f4f0c480245f908..53dd033cb19fa88825f6a33c55d60c26ee54e06a 100644
--- a/assignments_2022/assignment4_project/main.tex
+++ b/assignments_2022/assignment4_project/main.tex
@@ -10,7 +10,7 @@
 \newcommand{\f}[1]{\operatorname{#1}}
 \newcommand{\todo}[1]{\color{red}{\textbf{#1}}}
 
-\title{Assignment 4: Materials and Appearance}
+\title{Assignment 4: Bonus and Projects}
 \deadline{2022-07-02 23:59}%2020-05-13 23:59
 \teaser{
 \hspace*{\fill}
@@ -48,6 +48,10 @@ git push
 
 \section{Create your own Scene (5 Easy Points + 30 Bonus Points)}
 
+\begin{itemize}
+	\item 5 points $\rightarrow$ only XML edits
+	\item 10 points $\rightarrow$ Blender export / edits
+	\item 15 points $\rightarrow$ showcase your own features
+\end{itemize}
+
 We would ask you to support us and future participants by preparing comprehensive test scenes that we can use in the next year, for features that you particularly liked. Perhaps you just want to get a little variation in (some of you may not want to see any more bunnies...). Preparing a useful scene will earn you 5 points (awarded only once per person). Scenes should only require a reasonable amount of processing power to be useful for students during development.
 
 If, however, you also want to go beyond, to the realm where samples don't matter, feel free to get artistic! We will be holding a competition for who can come up with the most impressive scene: you can prepare scenes by combining individual models (please make sure they are not heavily copyrighted) and features that you implemented (mandatory or bonus) in custom Nori test scenes. Aim to get the most impressive renderings that you can manage! We will pick a winner, whose work will earn her/him 30 bonus points, as well as the honor of being exhibited on the course homepage. If you want to participate, this should be an extra scene in addition to the "useful" scene for future students.
@@ -61,6 +65,93 @@ If you would like to go for something really ambitious but need an incentive, ta
 
 Another thing to keep in mind: if you stick out of the crowd, it is likely we would recommend you for a PhD position either here or at one of the more specialised labs.
 
+\section{Sampling and Appearance (10 Points + 15 Bonus Points)}
+\subsection{Low-Discrepancy Sampling (9 Points + 10 Bonus Points)}
+\begin{figure}[h!]
+	\hspace{\fill}
+	\begin{subfigure}{0.45\linewidth}
+		\includegraphics[width=\linewidth]{figures/ajax_dl_surface_ind}
+		\caption{Direct lighting with independent sampler}
+	\end{subfigure}
+	\hfill
+	\begin{subfigure}{0.45\linewidth}
+		\includegraphics[width=\linewidth]{figures/ajax_dl_surface}
+		\caption{Direct lighting with Halton sampler}
+	\end{subfigure}
+	\hspace{\fill}
+\end{figure}
+Add a Halton-based sample generator named \texttt{halton} to your solution. The sampler should be able to produce 2D and 1D sequences, based on Halton low-discrepancy sequences that use the radical inverse. For 2D samples, use a combined base-11,13 Halton sequence, and for 1D, use a base-7 Halton sequence. To minimize repeating patterns, you should initialize your Halton sampler states (use three separate state variables) to random values (\texttt{rand()}). Try your implementation on the light surface sampling scene. Usually, such simple samplers should only be used for individual effects (e.g., picking subpixel coordinates for rays), not the full rendering procedure, but direct lighting is simple enough that it actually works out fine. If you want to break it, you can try it on full path tracing scenes, or change the 2D sampling to use bases 2,3 instead and see what happens!
+
+For 10 bonus points, implement a sophisticated Halton-based sampling strategy that can actually replace the independent sampler completely! Hints and suggestions for making it work are described in the \href{https://www.pbr-book.org/3ed-2018/Sampling_and_Reconstruction/The_Halton_Sampler}{course book}. Make sure that your renderings with Halton converge to the same result as with the independent sampler!
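+
+The core building block is the radical inverse, which mirrors the base-$b$ digits of the sample index around the decimal point; a minimal sketch:
+\begin{verbatim}
+// Radical inverse of index i in the given base (e.g., base 7 for the
+// 1D sequence, bases 11 and 13 for the two 2D dimensions).
+float radicalInverse(uint32_t i, uint32_t base) {
+    float invBase = 1.0f / base, factor = invBase, result = 0.0f;
+    while (i > 0) {
+        result += factor * (i % base);  // append the next digit
+        i /= base;
+        factor *= invBase;
+    }
+    return result;
+}
+\end{verbatim}
+Your sampler's \texttt{next1D} / \texttt{next2D} can then return radical inverses of incrementing (randomly initialized) state variables.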
+
+\subsection*{Antialiasing (2 Points)}
+\begin{figure}[h!]
+	\hspace{\fill}
+	\begin{subfigure}{0.48\linewidth}
+		\includegraphics[width=\linewidth]{figures/cbox_pt_low_alias}
+		\caption{Sampling pixel center only}
+	\end{subfigure}
+	\hfill
+	\begin{subfigure}{0.48\linewidth}
+		\includegraphics[width=\linewidth]{figures/cbox_pt_low}
+		\caption{Sampling over entire pixel}
+	\end{subfigure}
+	\hspace{\fill}
+\end{figure}
+Before we get down to business, let's first get rid of aliasing in our renderings. Until now, we have only ever shot our rays through the centers of the pixels. If you have a lower-resolution display or zoomed in on your renderings, you probably noticed that they are somewhat jaggy because of this (look at sharp edges, like the bottom of the front box)! We can quickly fix that with minimalistic antialiasing over the whole pixel: in \texttt{main.cpp}, instead of always shooting rays through the pixel center, make it so that the rays can sample the full pixel width and height!
+Also make sure that your changes are stored in \texttt{pixelSample} and then passed to \texttt{block.put}. This will be important later. 
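+
+The change itself is small; a sketch of the relevant render-loop fragment in \texttt{main.cpp} (exact variable names may differ in your version):
+\begin{verbatim}
+// x, y are the integer pixel coordinates of the current sample.
+// Jitter the position uniformly over the whole pixel:
+Point2f pixelSample = Point2f((float) x, (float) y) + sampler->next2D();
+Color3f value = camera->sampleRay(ray, pixelSample, apertureSample);
+value *= integrator->Li(scene, sampler, ray);
+block.put(pixelSample, value);  // store the jittered position!
+\end{verbatim}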
+
+
+
+\subsection{Support for Filtering (1 Point)}
+When you fixed aliasing and computed output colors by integrating values over the whole pixel, you basically used a pixel-sized box filter.
+This is easy to implement, but really not a good choice for filtering: the box filter is sometimes jokingly referred to as the worst filter available. 
+To get a few different filters working, you need to implement the corresponding support in Nori.
+Once done, you should experiment with different filters and sample counts, to see what a difference they can make.
+
+Apart from the theory behind it, which is not too complex, the \textbf{implementation} of separable filter support in a tiled renderer is not trivial (it's not that hard either), so we provide the missing code here:
+\begin{verbatim}
+void ImageBlock::put(const Point2f &_pos, const Color3f &value) {
+    if (!value.isValid()) {
+        /* If this happens, go fix your code instead of removing this warning ;) */
+        cerr << "Integrator: computed an invalid radiance value: "
+             << value.toString() << endl;
+        return;
+    }
+
+    /* Convert to pixel coordinates within the image block */
+    Point2f pos(
+        _pos.x() - 0.5f - (m_offset.x() - m_borderSize),
+        _pos.y() - 0.5f - (m_offset.y() - m_borderSize));
+
+    /* Compute the rectangle of pixels that will need to be updated */
+    BoundingBox2i bbox(
+        Point2i((int) std::ceil(pos.x() - m_filterRadius),
+                (int) std::ceil(pos.y() - m_filterRadius)),
+        Point2i((int) std::floor(pos.x() + m_filterRadius),
+                (int) std::floor(pos.y() + m_filterRadius)));
+    bbox.clip(BoundingBox2i(Point2i(0, 0),
+                            Point2i((int) cols() - 1,
+                                    (int) rows() - 1)));
+
+    /* Lookup values from the pre-rasterized filter */
+    for (int x = bbox.min.x(), idx = 0; x <= bbox.max.x(); ++x)
+        m_weightsX[idx++] =
+            m_filter[(int) (std::abs(x - pos.x()) * m_lookupFactor)];
+    for (int y = bbox.min.y(), idx = 0; y <= bbox.max.y(); ++y)
+        m_weightsY[idx++] =
+            m_filter[(int) (std::abs(y - pos.y()) * m_lookupFactor)];
+
+    /* Add the colour value after filtering to the current estimate.
+     * Color4f extends the Color3f value by appending a 1. Therefore,
+     * in the 4th component we are automatically accumulating the filter
+     * weight. */
+    for (int y = bbox.min.y(), yr = 0; y <= bbox.max.y(); ++y, ++yr)
+        for (int x = bbox.min.x(), xr = 0; x <= bbox.max.x(); ++x, ++xr)
+            coeffRef(y, x) += Color4f(value) * m_weightsX[xr] * m_weightsY[yr];
+}
+\end{verbatim}
+
+\subsection{Tone Mapping (5 Bonus Points)}
+There is already a basic type of tone mapping in Nori (computations are done in float, the output is an 8-bit integer image). Identify that code and extend it with the \href{http://www.cmap.polytechnique.fr/~peyre/cours/x2005signal/hdr_photographic.pdf}{Reinhard operator} for tone mapping, or use something similarly effective.
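+
+The simplest global variant of the operator compresses luminance $L$ to $L/(1+L)$; a sketch (ignoring the key and average-luminance statistics of the full paper):
+\begin{verbatim}
+Color3f reinhard(const Color3f &c) {
+    float L = c.getLuminance();   // luminance of the linear RGB value
+    float Ld = L / (1.0f + L);    // displayable luminance in [0, 1)
+    return L > 0.0f ? Color3f(c * (Ld / L)) : c;
+}
+\end{verbatim}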
 
 \section{Bonus Tasks (Loads of Points)}
 \subsection{Adding a Microfacet BSDF (10 Bonus Points)}
@@ -68,22 +159,6 @@ Another thing to keep in mind: if you stick out of the crowd, it is likely we wo
 For some bonus points, you can implement a more complex Microfacet material model, according to the steps outlined in Assignment 5, Part 1 found on the \href{https://wjakob.github.io/nori/#pa5}{Nori webpage}.
 This BSDF should give you a linear blend between a diffuse and a Torrance-Sparrow-based specular model. Note that some of the notes on the webpage do not apply: first, there is no default Fresnel implementation in our framework; adding it is part of the assignment for implementing dielectrics. Second, the microfacet BRDF and its distribution will not be tested automatically on the server.
 
-\subsection{The Dielectric BSDF (10 Points)}
-A dielectric BSDF can be used to model transparent objects like glass, diamonds or water.
-Implement it according to the lecture on materials or perhaps the course book \href{http://www.pbr-book.org/3ed-2018/Reflection_Models/Specular_Reflection_and_Transmission.html}{PBRT}.
-Use the BSDF in \texttt{dielectric.cpp} to make your solution accessible from scene files.
-Note that different source use different conventions for the directions and indices of refraction that they reference. You can use any convention you like, but the setup of Nori prefers that \texttt{bRec.wi} should be the negative view ray direction. 
-The dielectric BSDF cannot give you the medium the view ray is coming from and the one it goes to, you should figure this out yourself. It only provides the index of refraction on the exterior and the interior of the object with the given material.
-
-One important note: before, we offset our rays before continuing with the next bounce in the direction of the surface normal. But, if you actually \textbf{enter} an object, this is not a good idea! Instead, offset your rays along the negative surface normal. Also, if you want your dielectrics to work with next event estimation, you basically have to treat a hit with them like a hit with a mirror material, because it only reflects / refracts in a single direction. 
-
-While working on dielectrics, you might wonder what the \texttt{BSDFQueryRecord::eta} is for. This is only really necessary when you perform Russian Roulette with throughput.
-When light switches media (e.g. vacuum $\rightarrow$ glass), the radiance it carries changes (see slides for details). 
-This change of density should be included in the BSDF weight that you return from the BSDF \texttt{sample} method.
-But, if you use Russian Roulette with throughput, then this may erroneously affect your decision to stop, since the throughput is now no longer strictly going down with every bounce, but may in- or decrease somewhat randomly as you switch between media. 
-We can counter this by keeping track of the relative eta in addition to the throughput.
-After each sampling / evaluation of the BSDF, we can update \texttt{eta *= bRec.eta}, and use it to modify the Russian Roulette survival probability to remove the influence on the estimated throughput from switching between media. For this to remain stable in all scenes, make sure that the other supported materials (diffuse, mirror) set a proper \texttt{bRec.eta = 1} to avoid unexpected behavior.
-
 \subsection{More Materials}
 The topic of materials does not stop at the microfacet model. There is a wide range of more complex aspects of objects' physical appearance, and the resulting rendering solutions can become very sophisticated. \href{http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.468.9506&rep=rep1&type=pdf}{Background: Physics and Math of Shading by Naty Hoffman} is a nice didactic introduction and contains a lot of in-depth information.
 It is a good read even if you don't want to implement anything!