Assignment 4

NAME: Hiresh Gupta

ANDREW ID: hireshg

Late days used: 0

1. Sphere Tracing (30pts)

Usage:

python -m a4.main --config-name=torus

Writeup:

I implemented sphere tracing following the update rule explained in the lectures, running for at most max_iters iterations per ray. Starting at the ray origin, each iteration queries the SDF at the current point and marches forward along the ray by that distance; this step is safe because the sphere of that radius around the point is guaranteed to be empty. When the SDF value at the current point drops below eps, the point is reported as an intersection; rays that never get within eps of a surface in max_iters iterations are treated as misses.

Hyperparameters used:

  1. max_iters: 64
  2. eps: 1e-6
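The loop described above can be sketched as follows. This is a minimal single-ray NumPy sketch with a hypothetical `sphere_sdf` and an illustrative `far` bound; the actual implementation operates on batched rays.

```python
import numpy as np

def sphere_sdf(p, center=np.zeros(3), radius=1.0):
    # SDF of a sphere: distance to the center minus the radius
    return np.linalg.norm(p - center) - radius

def sphere_trace(origin, direction, sdf, max_iters=64, eps=1e-6, far=10.0):
    """March along the ray, stepping by the SDF value each iteration."""
    t = 0.0
    for _ in range(max_iters):
        p = origin + t * direction
        d = sdf(p)
        if d < eps:        # within eps of the surface: intersection
            return p, True
        t += d             # safe step: a sphere of radius d is empty
        if t > far:        # ray left the scene bounds: miss
            break
    return origin + t * direction, False
```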

Visualization:

2. Optimizing a Neural SDF (30pts)

Usage:

python -m a4.main --config-name=points

Writeup:

Similar to the NeRF architecture, I used an MLP that first computes the harmonic embeddings of the input points and then passes them through a backbone of 6 linear layers. To predict the SDF value, I added one additional linear layer at the end with no squashing activation, since an SDF is not constrained to lie between 0 and 1 and can take any real value.
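As a concrete illustration, the harmonic embedding step can be sketched as below; the number of frequencies and the octave spacing here are illustrative, not the assignment's actual config.

```python
import numpy as np

def harmonic_embedding(points, n_freqs=4):
    """Map each coordinate to sin/cos at octave-spaced frequencies,
    as in NeRF's positional encoding (illustrative frequencies)."""
    x = np.asarray(points, dtype=float)
    freqs = 2.0 ** np.arange(n_freqs)            # 1, 2, 4, 8, ...
    args = x[..., None] * freqs                  # (..., dim, n_freqs)
    emb = np.concatenate([np.sin(args), np.cos(args)], axis=-1)
    return emb.reshape(*x.shape[:-1], -1)        # (..., dim * 2 * n_freqs)
```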

We add the Eikonal loss as a geometric regularizer that constrains the network to represent a signed distance function: the norm of the SDF gradient should be 1 everywhere. To implement this, I first computed the L2 norm of the gradient at each point, then applied a mean reduction over |L2 norm - 1| across all the points.
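The reduction described above looks like the following sketch, which uses central finite differences as an autograd-free stand-in for the gradients (in training they would come from automatic differentiation):

```python
import numpy as np

def eikonal_loss(points, sdf, h=1e-4):
    """Mean of | ||grad f(p)||_2 - 1 | over the sampled points.
    Gradients here come from central finite differences."""
    losses = []
    for p in points:
        grad = np.array([(sdf(p + h * e) - sdf(p - h * e)) / (2 * h)
                         for e in np.eye(len(p))])
        losses.append(abs(np.linalg.norm(grad) - 1.0))
    return float(np.mean(losses))
```

A true SDF (e.g. of a sphere) has gradient norm 1 away from the center, so its loss is near zero; scaling the SDF by 2 makes the gradient norm 2 and the loss 1.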

Visualizations:

I experimented with the eikonal_weight hyperparameter and found that eikonal_weight=0.02 gave me the best results. The result of that run can be found below:

3. VolSDF (30 pts)

Usage:

python -m a4.main --config-name=volsdf

Writeup:

Questions:

Q1. How does high beta bias your learned SDF? What about low beta?

Ans: For a very high beta, a nearly constant density is predicted for almost all SDF values. The network therefore receives no strong signal at the boundary and is biased toward smoother, blurrier surfaces. A low beta concentrates the density transition near the zero level set, biasing the network toward sharper boundaries.

Q2. Would an SDF be easier to train with volume rendering and low beta or high beta? Why?

Ans: It would be easier to train an SDF with a higher beta, since the density then varies smoothly with the SDF and the gradients are well behaved. With a low beta the density changes almost like a step function near the surface, producing very large gradients in a narrow band and near-zero gradients elsewhere, which makes optimization unstable.

Q3. Would you be more likely to learn an accurate surface with high beta or low beta? Why?

Ans: To learn a more accurate surface we would want to use a lower beta value because it would ensure a clearer distinction between the inside and outside of the surface.
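Concretely, VolSDF converts the SDF to density through the CDF of a zero-mean Laplace distribution with scale beta. A sketch (written in an overflow-safe form; the alpha and beta defaults are illustrative):

```python
import numpy as np

def sdf_to_density(sdf_vals, alpha=1.0, beta=0.05):
    """VolSDF mapping: sigma(x) = alpha * Psi_beta(-sdf(x)), where
    Psi_beta is the CDF of a zero-mean Laplace with scale beta."""
    s = -np.asarray(sdf_vals, dtype=float)
    half = 0.5 * np.exp(-np.abs(s) / beta)   # overflow-safe exponential
    return alpha * np.where(s <= 0, half, 1.0 - half)
```

At the surface (SDF = 0) the density is alpha/2; a small beta makes the transition step-like, while a large beta flattens it toward alpha/2 everywhere, matching the behavior discussed in the answers above.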

Hyperparameter Tuning & Visualizations:

| Hyperparameter value (beta) | Geometry | Colored output |
| --- | --- | --- |
| 0.01 | | |
| 0.05 | | |
| 0.1 | | |
| 1.0 | | |

Explanation: As discussed above, smaller values of beta give a sharper boundary. Increasing beta produces a smoother mesh up to a point, after which the network fails to learn the structure properly.

4. Neural Surface Extras

4.1. Render a Large Scene with Sphere Tracing (10 pts)

Usage:

python -m a4.main --config-name=multi-objects

Writeup:

For this question I rendered a collection of seven spheres, seven cubes, and seven tori (21 shapes in total) to generate the following output:
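The scene SDF for such a collection is simply the pointwise minimum over the individual shape SDFs (their union), which sphere tracing then handles unchanged. A sketch with hypothetical shape placements:

```python
import numpy as np

def sphere_sdf(p, center, radius):
    return np.linalg.norm(p - center) - radius

def box_sdf(p, center, half_size):
    # standard exact SDF of an axis-aligned box
    q = np.abs(p - center) - half_size
    return np.linalg.norm(np.maximum(q, 0.0)) + min(q.max(), 0.0)

def scene_sdf(p, spheres, boxes):
    """Union of all shapes: the minimum over their SDFs."""
    ds = [sphere_sdf(p, c, r) for c, r in spheres]
    ds += [box_sdf(p, c, h) for c, h in boxes]
    return min(ds)
```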

4.2 Fewer Training Views (10 pts)

Usage:

python -m a4.main --config-name=volsdf_20_views

Visualizations:

I experimented by modifying the number of views in the following setting and the results can be found below:

| Model | Number of views | Visualization |
| --- | --- | --- |
| NeRF | 10 | |
| NeRF | 20 | |
| NeRF | 100 | |
| VolSDF | 10 | |
| VolSDF | 20 | |
| VolSDF | 100 | |

Explanation: While NeRF does a good job of predicting the 3D object given a large number of views, it fails to generalize to unseen views when trained on very few views. This is not the case with VolSDF, which performs well even with fewer views, since the SDF parameterization and its geometric regularization constrain the recovered surface.

4.3 Alternate SDF to Density Conversions (10 pts)

Usage:

python -m a4.main --config-name=neus

Visualizations:

I have implemented the SDF to density conversion equation mentioned in the NeuS paper and the results can be found below:

| Hyperparameter value (s) | Geometry | Colored output |
| --- | --- | --- |
| 10 | | |
| 50 | | |
| 100 | | |

Explanation: A higher value of s produces a sharper change in density at the surface. We observe the same in the experiments above, with the results getting sharper as s increases.
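For reference, the NeuS s-density is the logistic density phi_s applied to the SDF. A sketch (overflow-safe, exploiting the symmetry of phi_s; the default s is illustrative):

```python
import numpy as np

def neus_s_density(sdf_vals, s=50.0):
    """NeuS s-density: phi_s(x) = s * e^{-s x} / (1 + e^{-s x})^2,
    a bell curve peaked at the zero level set with peak height s / 4."""
    x = np.asarray(sdf_vals, dtype=float)
    e = np.exp(-s * np.abs(x))       # phi_s is symmetric in x
    return s * e / (1.0 + e) ** 2
```

A larger s narrows the bell and raises its peak, concentrating density at the surface, which is consistent with the sharper outputs observed for larger s.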