Learning for 3D Vision: Assignment 4

1. Sphere Tracing

The implementation iteratively marches each 3D point along its ray, starting from the ray origin, using the update point = point + sdf_value * direction, until sdf_value < epsilon (a small threshold), i.e. the point of intersection. A boolean mask is maintained to track which points have already reached an intersection and therefore no longer need to be updated.
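A minimal sketch of this loop in PyTorch (the `sdf` callable, batch shapes, and iteration/epsilon values here are assumptions for illustration, not the exact assignment code):

```python
import torch

def sphere_trace(sdf, origins, directions, max_iters=64, eps=1e-5):
    # origins, directions: (N, 3) ray origins and unit ray directions.
    # `sdf` is assumed to map (M, 3) points to (M,) signed distances.
    points = origins.clone()
    active = torch.ones(origins.shape[0], dtype=torch.bool)  # rays still marching
    for _ in range(max_iters):
        dist = sdf(points[active])
        # march each active point forward by its SDF value along the ray
        points[active] = points[active] + dist.unsqueeze(-1) * directions[active]
        # freeze rays whose SDF value has fallen below epsilon (intersection)
        still_active = active.clone()
        still_active[active] = dist >= eps
        active = still_active
        if not active.any():
            break
    return points, ~active  # intersection points and a hit mask
```

For example, tracing a ray from (0, 0, -3) along +z against a unit sphere at the origin converges to the intersection point (0, 0, -1) in two steps.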

2. Optimizing a Neural SDF

(Renders: ground truth vs. optimized neural SDF)

3. VolSDF

  1. Lower beta biases the density to increase sharply to a very high value at the point of intersection, while higher beta leads to a smoother increase in density near the surface. At very high beta the density becomes almost the same everywhere. Alpha determines the maximum density value in the complete volume.

  2. With volume rendering techniques a low beta is suitable (but not too low, for numerical stability: beta=0.01 resulted in a NaN loss and unstable gradients), as it gives a sharper contrast between the density inside the surface and outside it.

  3. For an accurate surface representation a lower beta is more desirable, since the density then concentrates tightly around the zero level set of the SDF.
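The behavior described above follows from the VolSDF density, which applies the CDF of a zero-mean Laplace distribution with scale beta to the negated SDF. A short sketch (parameter defaults mirror the alpha=10, beta=0.05 setting used above; the function name is my own):

```python
import torch

def volsdf_density(sdf_vals, alpha=10.0, beta=0.05):
    # VolSDF: sigma(x) = alpha * Psi_beta(-d(x)), where Psi_beta is the CDF
    # of a zero-mean Laplace distribution with scale beta.
    s = -sdf_vals
    psi = torch.where(
        s <= 0,
        0.5 * torch.exp(s / beta),          # outside the surface (d > 0)
        1.0 - 0.5 * torch.exp(-s / beta),   # inside the surface (d < 0)
    )
    return alpha * psi
```

At the surface (sdf = 0) the density is alpha/2; deep inside it saturates at alpha, and far outside it decays to zero, with beta controlling how sharp that transition is.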

(Renders at epochs 30 and 60 for alpha=10, beta=0.05; alpha=10, beta=0.1; alpha=10, beta=1; alpha=50, beta=0.05)

4. Neural Surface: Alternate SDF to Density Conversions

I implemented the NeuS formulation for SDF-to-density conversion with s=90.
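In the NeuS formulation, density comes from the logistic density phi_s, the derivative of the sigmoid with steepness s, applied to the SDF. A hedged sketch of that conversion (function name is my own; the assignment's exact variant may differ, e.g. in how it handles occlusion-aware weighting):

```python
import torch

def neus_density(sdf_vals, s=90.0):
    # NeuS logistic density: phi_s(d) = s * e^{-s d} / (1 + e^{-s d})^2,
    # i.e. the derivative of the sigmoid Phi_s(d) = 1 / (1 + e^{-s d}).
    sig = torch.sigmoid(s * sdf_vals)
    return s * sig * (1.0 - sig)
```

The density peaks at the zero level set (value s/4) and falls off symmetrically on either side; larger s concentrates it more tightly around the surface, analogous to a smaller beta in VolSDF.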

(Renders at epochs 30 and 60)

5. Neural Surface: Fewer Training Views

I trained with 10 views both the SDF-to-density model (VolSDF formulation) and the NeRF from the previous assignment, using similar architectures.

In the SDF case with the default parameters, even though the geometry is recovered to a great extent, the rendering is of poor quality compared to NeRF.

(SDF rendering, SDF geometry, and NeRF rendering at epochs 30 and 60)

6. Neural Surface: Render a Large Scene with Sphere Tracing

For SDF rendering of multiple objects, the scene SDF value at a point is the minimum of the SDF values returned by all the objects (the union of their surfaces).
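This union can be sketched as a single composite SDF that sphere tracing can query directly (function name is my own):

```python
import torch

def union_sdf(points, sdf_fns):
    # Scene SDF for multiple objects: the union of surfaces is the
    # pointwise minimum over the per-object signed distances.
    # points: (N, 3); each fn in sdf_fns maps (N, 3) -> (N,).
    return torch.stack([f(points) for f in sdf_fns], dim=0).min(dim=0).values
```

For example, with two unit spheres centered at the origin and at (3, 0, 0), the union SDF at the origin is -1 (inside the first sphere), even though the second sphere's SDF there is +2.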

Number of late days