Assignment 4

Bunny geometry

1. Sphere Tracing (30pts)

You can run the code for part 1 with:

python -m a4.main --config-name=torus

Output:

Torus
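The sphere-tracing loop can be sketched as follows. This is a minimal single-ray NumPy version, not the assignment's batched implementation; the `sphere_sdf` helper and the iteration/epsilon constants are illustrative assumptions:

```python
import numpy as np

def sphere_sdf(p, center=np.zeros(3), radius=1.0):
    # Signed distance to a sphere: negative inside, positive outside.
    return np.linalg.norm(p - center) - radius

def sphere_trace(origin, direction, sdf, max_iters=64, eps=1e-4, max_dist=20.0):
    """March along the ray, stepping by the SDF value each iteration.
    Returns the hit point, or None if the ray misses."""
    t = 0.0
    for _ in range(max_iters):
        p = origin + t * direction
        d = sdf(p)
        if d < eps:      # close enough to the surface: report a hit
            return p
        t += d           # the SDF guarantees this step cannot overshoot
        if t > max_dist: # left the scene bounds: miss
            break
    return None
```

The key property is that the SDF value at any point is a safe step size: the nearest surface is at least that far away in every direction.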

2. Optimizing a Neural SDF (30pts)

After this, you should be able to train a NeuralSurface representation by:

python -m a4.main --config-name=points

This should save part_2_input.gif and part_2.gif in the images folder. The former visualizes the input point cloud used for training, and the latter shows your prediction, which you should include on the webpage along with brief descriptions of your MLP and eikonal loss. You might need to tune hyperparameters (e.g. number of layers, epochs, weight of regularization, etc.) for good results.
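The eikonal regularizer mentioned above penalizes the SDF gradient norm for deviating from 1. A minimal NumPy sketch of the loss term is below; in the actual training loop the gradients would come from autograd on the NeuralSurface MLP at sampled points, which this sketch takes as a precomputed array:

```python
import numpy as np

def eikonal_loss(grads):
    """Eikonal regularizer: mean((||grad f(x)|| - 1)^2).
    `grads` has shape (N, 3), one SDF gradient per sample point."""
    norms = np.linalg.norm(grads, axis=-1)
    return np.mean((norms - 1.0) ** 2)
```

A true SDF has unit gradient norm everywhere, so this term pushes the MLP toward a valid distance field rather than an arbitrary implicit function.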

Input Point Cloud:

Bunny geometry

Output Geometry:

Bunny geometry

3. VolSDF (30 pts)

  1. How does high beta bias your learned SDF? What about low beta?
    1. A higher beta spreads density over a wide band, treating a range of distances around the surface almost uniformly. A lower beta produces a very sharp drop-off around distance 0.
    2. In other words, beta controls the sharpness of the transition from high density to low density at the surface.
    3. A higher beta yields the following curve:

       Bulldozer geometry

    4. A lower beta yields the following curve:

       Bulldozer geometry

    5. A low beta value is able to capture the details of the geometry, whereas a high beta value is not, and usually produces a blurry output.
  2. Would an SDF be easier to train with volume rendering and low beta or high beta? Why?
    1. An SDF would be easier to train with a higher beta. With a low beta, the density changes very sharply near the surface, so even a slightly incorrect distance produces a large error and unstable gradients. A higher beta gives the SDF more leniency and converges faster, though it will most likely yield an over-smoothed result.
  3. Would you be more likely to learn an accurate surface with high beta or low beta? Why?
    1. A lower beta can learn a more accurate surface, since it has a sharp drop-off at the surface. However, it is much harder to train and can lead to an invalid result.
    2. A high beta would still learn the surface, but likely not as accurately.

Run with:

python -m a4.main --config-name=volsdf

This will save part_3_geometry.gif and part_3.gif. Experiment with hyperparameters and attach your best results on your webpage. Comment on the settings you chose, and why they seem to work well.

Alpha = 10, Beta = 0.01

Bulldozer geometry Bulldozer color

Alpha = 10, Beta = 1.0

Bulldozer geometry Bulldozer color

Alpha = 10, Beta = 0.001

Bulldozer geometry Bulldozer color

4. Neural Surface Extras (CHOOSE ONE! More than one is extra credit)

4.1. Render a Large Scene with Sphere Tracing (10 pts)

Run with:

python -m a4.main --config-name=composite

I have created a new class CompositeSDF, which takes any list of SDFs specified in a config file and renders their union. To do this, for every query point we simply return the minimum distance across all SDFs.
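The union operation described above can be sketched in a few lines of NumPy (the `sphere_sdf` helper and the lambda-based scene list are illustrative, not the CompositeSDF class itself):

```python
import numpy as np

def sphere_sdf(p, center, radius):
    # Signed distance to a sphere: negative inside, positive outside.
    return np.linalg.norm(p - np.asarray(center), axis=-1) - radius

def composite_sdf(p, sdfs):
    """Union of several SDFs: the signed distance to the combined
    scene is the minimum over the individual distances."""
    return np.min([sdf(p) for sdf in sdfs], axis=0)
```

Taking the minimum is an exact union for points outside the shapes, which is all sphere tracing needs to step safely.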

Result for a simple scene with many tori and spheres. Bulldozer geometry

4.2 Fewer Training Views (10 pts)

I compared VolSDF output to NeRF for 100, 20, and 5 views.

| Number of Views | NeRF | VolSDF Geometry | VolSDF Color |
| --- | --- | --- | --- |
| 100 | Bulldozer geometry | Bulldozer geometry | Bulldozer geometry |
| 20 | Bulldozer geometry | Bulldozer geometry | Bulldozer geometry |
| 5 | Bulldozer geometry | Bulldozer geometry | Bulldozer geometry |

As shown above, NeRF produced higher-quality renderings than VolSDF when the number of views was low.

4.3 Alternate SDF to Density Conversions (10 pts)

I tried the naive SDF-to-density conversion, which can be visualized below:
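This writeup does not spell out the exact naive conversion used; one common simple choice is a scaled logistic sigmoid of the signed distance, sketched below as an assumption (the scale parameter `s` is illustrative):

```python
import numpy as np

def naive_sdf_to_density(d, s=10.0):
    # Logistic conversion: density rises smoothly from ~0 outside the
    # surface (d > 0) to ~s inside (d < 0), with s controlling both the
    # peak density and the sharpness of the transition.
    return s / (1.0 + np.exp(s * d))
```

Unlike the VolSDF Laplace-CDF form, this couples the peak density and transition width into a single parameter, which is part of why it is considered naive.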

Bulldozer geometry Bulldozer color