1. Sphere Tracing (30pts)

To implement sphere tracing, I follow the pseudocode from the lecture slides:

marched_point = origin
marched_dist = 0
iter = 0
while iter < MaxIter:
    marched_point = origin + marched_dist * direction
    sdf = implicit_fn(marched_point)
    if sdf < 1e-5:
        break
    marched_dist += sdf
    iter += 1

My implementation is essentially a batched version of the above loop that processes all rays at once, which runs more efficiently. To determine whether a ray has already intersected the surface, I test whether the SDF value at the marched point is smaller than 1e-5.
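As a minimal sketch of what this batched version can look like (assuming PyTorch, with origins and directions of shape (N, 3) and an implicit_fn returning per-point SDF values of shape (N, 1); the names are illustrative, not the exact ones in the starter code):

import torch

def sphere_trace(implicit_fn, origins, directions, max_iters=64, eps=1e-5):
    # Distance marched so far along each ray, shape (N, 1).
    t = torch.zeros(origins.shape[0], 1, device=origins.device)
    hit = torch.zeros(origins.shape[0], 1, dtype=torch.bool, device=origins.device)
    for _ in range(max_iters):
        points = origins + t * directions
        sdf = implicit_fn(points)
        hit = hit | (sdf < eps)
        # Advance only the rays that have not yet converged to the surface.
        t = torch.where(hit, t, t + sdf)
    return origins + t * directions, hit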

My part1


2. Optimizing a Neural SDF (30pts)

Description: My MLP follows the suggested hyperparameters in points.yaml: it has 6 layers with hidden dimension 128 and no skip connections. Finally, I append one more linear layer that maps the hidden vector to a single scalar, with no ReLU after it so that the output can take negative values.
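As a minimal sketch of this architecture (assuming PyTorch and raw 3D coordinates as input; the class and argument names are illustrative):

import torch.nn as nn

class SDFMLP(nn.Module):
    def __init__(self, in_dim=3, hidden_dim=128, n_layers=6):
        super().__init__()
        layers = []
        d = in_dim
        for _ in range(n_layers):
            layers += [nn.Linear(d, hidden_dim), nn.ReLU()]
            d = hidden_dim
        # Final linear head without ReLU so the SDF can be negative.
        layers.append(nn.Linear(hidden_dim, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)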

My eikonal loss simply enforces eikonal_gradient to have unit norm along dimension 1; I use an L1 penalty on the deviation of the norm from 1.
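A sketch of this loss, assuming the gradients come in as an (N, 3) tensor:

def eikonal_loss(gradients):
    # gradients: (N, 3) spatial gradients of the SDF at sampled points.
    # L1 penalty on the deviation of the gradient norm from 1.
    return (gradients.norm(dim=1) - 1.0).abs().mean()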

I used a learning rate of 0.0001 and weighted the point loss by 1.0 and the eikonal loss by 0.02. I trained for 5000 epochs and scaled the learning rate by 0.8 every 50 epochs. Other hyperparameters follow the original yaml file.
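In PyTorch, this setup can be sketched as follows (the choice of Adam is an assumption on my part; SDFMLP refers to the sketch above):

import torch

model = SDFMLP()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Multiply the learning rate by 0.8 every 50 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.8)

for epoch in range(5000):
    # ... compute loss = 1.0 * point_loss + 0.02 * eikonal_loss, backprop, step ...
    scheduler.step()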

My part2


3. VolSDF (30 pts)

In your write-up, give an intuitive explanation of what the parameters alpha and beta are doing here:

Beta controls the smoothness of the SDF-to-density mapping (i.e., how sensitive the density is to the signed distance). If beta is small, then small changes in distance can cause large changes in density, and as beta approaches zero the mapping converges to a step function.
Alpha controls the overall scale of the density. In the limit where beta approaches zero, alpha is the constant density inside the object.
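For reference, VolSDF defines the density as sigma(x) = alpha * Psi_beta(-d(x)), where Psi_beta is the CDF of a zero-mean Laplace distribution with scale beta. A minimal sketch of this conversion, assuming PyTorch and an SDF that is positive outside the surface:

import torch

def sdf_to_density_volsdf(sdf, alpha, beta):
    # Laplace CDF evaluated at s = -sdf:
    #   0.5 * exp(s / beta)       for s <= 0 (outside the surface)
    #   1 - 0.5 * exp(-s / beta)  for s >  0 (inside the surface)
    s = -sdf
    psi = torch.where(
        s <= 0,
        0.5 * torch.exp(s / beta),
        1.0 - 0.5 * torch.exp(-s / beta),
    )
    return alpha * psi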


Also, answer the following questions:

How does high beta bias your learned SDF? What about low beta? Answer:

A higher beta means the resulting density is less sensitive to the surface boundary, which results in blurrier renderings (as shown below). Conversely, a lower beta concentrates the density near the surface and biases the learned SDF toward sharper boundaries.


Would an SDF be easier to train with volume rendering and low beta or high beta? Why?

We need to have the right beta -- if beta is too high, we are essentially allowing density for points lying far away from the surface, so the exact surface may not be well optimized. Yet if beta is too low, then only points on (or very near) the surface have nonzero density, and this may make training harder. The best solution might be to make beta learnable, as the reference paper does.


Would you be more likely to learn an accurate surface with high beta or low beta? Why?

For a more accurate surface (I assume a regular opaque surface, not glass etc.), it is better to use a lower beta so that the density decays faster when moving away from the surface. This forces the model to make a more accurate prediction of the surface; otherwise the rendering will be off.


I attach two results: one with the default beta of 0.05 and one with 10 x 0.05 = 0.5.

My part3

Part 3 with 10 x Beta


4.2 Fewer Training Views (10 pts)

I tried 20 views! It can be seen that NeRF with 20 views produces some artifacts at the corners and blurs the renderings. The neural SDF also suffers from fewer views, but with no visible artifacts.

Part 3 with 100 views

Part 3 with 20 views

NeRF with 100 views

NeRF with 20 views


4.3 Alternate SDF to Density Conversions (10 pts)

I tried the naive solution from the NeuS paper and compared it with the original result. I also tried out different values of s, and it seems that increasing s is essential for achieving a reasonable result. Overall, I think that with a well-chosen s it produces results similar to the default SDF-to-density function:
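For reference, the naive solution applies the scaled logistic density directly to the SDF value, with larger s concentrating the density near the zero level set. A minimal sketch, assuming PyTorch (the function name is illustrative):

import torch

def sdf_to_density_neus_naive(sdf, s):
    # Scaled logistic density: s * e^{-s*d} / (1 + e^{-s*d})^2,
    # written with sigmoids for numerical stability.
    return s * torch.sigmoid(s * sdf) * torch.sigmoid(-s * sdf)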

My part3

NeuS Naive s = 0.01

NeuS Naive s = 1

NeuS Naive s = 100
