Corinne Alini HW 2

I have used 0 late days on this assignment


Question 1: Sphere Tracing

The algorithm begins with the points at the ray origins; these are the origins of the rays that will be cast. Then, for each iteration until we reach the maximum number of iterations, we evaluate the implicit function at the current points, which gives the radius of a sphere around each point that is guaranteed not to cross the surface, i.e. the distance to the nearest surface point. We then move along each ray by that radius. This moves us dynamically along the ray until we either reach the maximum number of iterations or hit a surface. The points are updated as points = points + sphere radius * ray directions.
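
A minimal sketch of this loop, assuming implicit_fn returns the signed distance at each query point, and that origins, directions, max_iters, and eps are hypothetical names for the ray origins, unit ray directions, iteration cap, and hit threshold:

```python
import torch

def sphere_trace(implicit_fn, origins, directions, max_iters=64, eps=1e-5):
    points = origins.clone()                                  # start each ray at its origin
    hit = torch.zeros(points.shape[0], dtype=torch.bool, device=points.device)
    for _ in range(max_iters):
        dist = implicit_fn(points).view(-1)                   # distance to the nearest surface
        hit = hit | (dist.abs() < eps)                        # rays that have reached a surface
        # march the non-converged rays forward along their directions by the sphere radius
        step = (~hit).float().unsqueeze(-1) * dist.unsqueeze(-1)
        points = points + step * directions
    return points, hit
```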

Question 2

[Question 2 input] [Question 2]

I ended up not needing to tune the parameters to get a good result. I used n_harmonic_functions_xyz = 4, and my network had the following shape:

Harmonic Embedding: xyz (3) --> embedded xyz
Linear: embedded xyz --> 128
ReLU
Linear: 128 --> 128
ReLU
Linear: 128 --> 128
ReLU
Linear: 128 --> 128
ReLU
Linear: 128 --> 128
ReLU
Linear: 128 --> 128
ReLU
Linear: 128 --> 3
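
A PyTorch sketch of this architecture. The embedding size is an assumption: for 3D inputs and 4 harmonic functions I take 3 * 2 * 4 = 24 features per point (the starter code's harmonic embedding may also append the raw input, which would change this number):

```python
import torch.nn as nn

n_harmonic_functions_xyz = 4
# assumed embedding size: 3 input dims * 2 (sin, cos) * 4 harmonic functions = 24
embedding_dim = 3 * 2 * n_harmonic_functions_xyz

mlp = nn.Sequential(
    nn.Linear(embedding_dim, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 3),  # final layer size as listed above
)
```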
The eikonal loss was computed following a similar equation to the paper: I take the mean, over the sampled points, of the squared difference between the norm of the SDF gradient at each point and 1.
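
In code this is roughly the following, where gradients holds the gradient of the predicted signed distance with respect to each sampled point (the function name is my own):

```python
import torch

def eikonal_loss(gradients):
    # gradients: (N, 3) gradient of the predicted signed distance at each sampled point
    return ((gradients.norm(dim=-1) - 1.0) ** 2).mean()
```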

Question 3

[Question 3 color] [Question 3 geometry]

Alpha: Alpha is a scaling factor on our densities; it sets the overall magnitude of the density that the signed distance is converted into.

Beta: Beta is the scale parameter of the Laplace distribution (the L1 analogue of a standard deviation). It sets the scale of the Laplace CDF used to convert the signed distance into density, i.e. how quickly the density falls off around the surface.
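A small sketch of this conversion as I understand it from VolSDF, where the density is alpha times the Laplace CDF (scale beta) evaluated at the negated signed distance; the variable names are my own rather than the starter code's:

```python
import torch

def sdf_to_density(signed_distance, alpha, beta):
    s = -signed_distance
    # CDF of a zero-mean Laplace distribution with scale beta, written to avoid overflow
    laplace_cdf = 0.5 + 0.5 * torch.sign(s) * (1.0 - torch.exp(-s.abs() / beta))
    return alpha * laplace_cdf
```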

How does high beta bias your learned SDF? What about low beta?
A high beta gives the density a higher variance around the surface, which biases the learned SDF towards smoother, more spread-out surfaces and allows for better generalization. This generalization causes the image to be blurrier, as seen in the images below.
[Question 3 color] [Question 3 geometry]
A low beta decreases the variance, concentrating the density tightly around the surface, so the network fits the given training examples more closely and learns the given surface better.

Would an SDF be easier to train with volume rendering and low beta or high beta? Why?
It would be easier to train with a higher beta because it is less prone to overfitting. A high beta spreads the density (and therefore the learning signal) further away from the surface; with a low beta the signal is concentrated in a thin band near the surface, so the optimization could get stuck at a local minimum and not learn as well.

Would you be more likely to learn an accurate surface with high beta or low beta? Why?
We are more likely to learn an accurate surface with a low beta. This is because we give the system less variance at the boundary, which keeps generalization low and makes training harder, but lets us learn a more accurate surface from the given training examples.

Comment on the settings you chose, and why they seem to work well.
The result above uses the original parameters. I decided to lower beta to 0.003, which allowed me to get a more accurate surface.

[Question 3 color] [Question 3 geometry]

The next thing I tried was lowering the number of epochs to 50. I noticed an overfitting problem, and reducing the number of epochs helped.

Question 4.2

These gifs were run on 20 views with 250 epochs. You can see that the model generalizes well to few views; however, the images are definitely blurrier with fewer views. There are also differences when comparing to Q3.
[Question 4.2 color] [Question 4.2 geometry]
If you compare our results in 4.2 to NeRF, we can see that NeRF learns details better, while VolSDF generalizes much better from fewer views. The color image from VolSDF is blurrier than the one from NeRF.

Question 4.3

I decided to implement the logistic density distribution from the NeuS paper. I ran it with s = 30 and got these results.
[Question 4.3 color] [Question 4.3 geometry]
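
A sketch of this conversion, assuming the NeuS-style logistic density phi_s(d) = s * exp(-s * d) / (1 + exp(-s * d))^2, i.e. the derivative of the sigmoid with sharpness s (s = 30 in my runs); the function name is my own:

```python
import torch

def logistic_density(signed_distance, s=30.0):
    # s * sigmoid(s*d) * sigmoid(-s*d) == s * exp(-s*d) / (1 + exp(-s*d))^2
    return s * torch.sigmoid(s * signed_distance) * torch.sigmoid(-s * signed_distance)
```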