Name: Yuqing Qin
Andrew ID: yuqingq

Assignment 4

1. Sphere Tracing

To find the ray-surface intersections, sphere tracing iteratively steps forward along each ray until the ray either hits the surface or leaves the scene. At each step, it queries the SDF for the distance to the closest surface and steps forward by exactly that distance, which is guaranteed not to overshoot the surface. I also vectorized the sphere-tracing process to speed it up: I maintain three "masks" recording each ray's state (not finished, hit the surface, or out of the scene), and only update the rays still marked "not finished".
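A minimal NumPy sketch of this masked marching loop (the function names and the `eps`/`far` thresholds are illustrative choices, not the assignment's exact code):

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance from each point to a sphere."""
    return np.linalg.norm(points - center, axis=-1) - radius

def sphere_trace(origins, directions, sdf, max_iters=64, eps=1e-4, far=10.0):
    """Vectorized sphere tracing.

    Rays that hit the surface (distance < eps) or leave the scene
    (marched distance > far) are frozen; only active rays keep stepping.
    """
    t = np.zeros(origins.shape[0])                 # distance marched per ray
    hit = np.zeros(origins.shape[0], dtype=bool)   # "hit the surface" mask
    active = np.ones(origins.shape[0], dtype=bool) # "not finished" mask
    for _ in range(max_iters):
        if not active.any():
            break
        pts = origins[active] + t[active, None] * directions[active]
        d = sdf(pts)                 # distance to closest surface
        t[active] = t[active] + d    # safe step: cannot overshoot
        idx = np.where(active)[0]
        newly_hit = d < eps
        out = t[idx] > far
        hit[idx[newly_hit]] = True
        active[idx[newly_hit | out]] = False
    return t, hit
```

For example, a ray from the origin along +z toward a unit sphere at (0, 0, 3) converges to t ≈ 2, while a ray that misses the sphere is marked out of the scene.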

The visualization is shown below:

(Figure 1.1: sphere tracing rendering)

2. Optimizing a Neural SDF

MLP: My implementation is similar to the MLP in NeRF: six stacked hidden linear layers, followed by one fully connected layer at the end that outputs the distance. The difference is the output range. In NeRF, a ReLU forces the density to be a non-negative real number, but an SDF must output any real number (negative inside the surface, positive outside), so there is no ReLU on the output in this case.
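A sketch of such a network in PyTorch; the hidden width and the names `NeuralSDF`/`hidden_dim` are illustrative assumptions, not the assignment's exact configuration:

```python
import torch
import torch.nn as nn

class NeuralSDF(nn.Module):
    """MLP mapping a 3D point to a signed distance.

    Six hidden Linear+ReLU layers, then a final Linear layer with no
    activation so the output can be any real number.
    """
    def __init__(self, hidden_dim=128, n_hidden=6):
        super().__init__()
        layers = [nn.Linear(3, hidden_dim), nn.ReLU()]
        for _ in range(n_hidden - 1):
            layers += [nn.Linear(hidden_dim, hidden_dim), nn.ReLU()]
        # no ReLU here: the SDF must be negative inside the surface
        layers.append(nn.Linear(hidden_dim, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```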

Loss: The loss I used is simply MSE loss plus an Eikonal constraint. The Eikonal constraint forces the norm of the SDF's gradient to be 1, so it can be measured as the absolute value of the difference between the gradient norm and 1.
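The Eikonal term can be sketched with PyTorch autograd (a minimal version; the weighting against the MSE term is left out):

```python
import torch

def eikonal_loss(model, points):
    """Penalize | ||grad f(x)|| - 1 | so the network stays a valid SDF."""
    points = points.clone().requires_grad_(True)
    d = model(points)
    # gradient of the predicted distance w.r.t. the input points
    grad = torch.autograd.grad(d.sum(), points, create_graph=True)[0]
    return (grad.norm(dim=-1) - 1.0).abs().mean()
```

As a sanity check, f(x) = x₀ has gradient (1, 0, 0) everywhere, so its Eikonal loss is zero.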

The geometry before and after training is shown below. To get better results, I increased the number of training epochs to 8000.

(Figures: geometry before and after training)

3. VolSDF

Intuitively, alpha is the constant density inside the object; the density decreases near the boundary of the surface and is supposed to be zero outside. Beta determines how fast the density transitions from inside to outside. If beta is close to zero, the conversion approaches a scaled indicator function with a sharp drop at the boundary, which also pushes the SDF to learn a smaller range of values (it shrinks the range). A high beta, in contrast, gives a smooth transition from inside to outside, which makes the SDF learn a larger range of values to compensate.
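This conversion, as in the VolSDF paper, is alpha times the CDF of a zero-mean Laplace distribution with scale beta evaluated at −d; a NumPy sketch (the default alpha/beta here are just the values discussed below):

```python
import numpy as np

def sdf_to_density(d, alpha=10.0, beta=0.03):
    """VolSDF density: alpha * LaplaceCDF(-d; scale=beta).

    Deep inside the object (d << 0) the density approaches alpha; far
    outside it decays to zero; beta sets how sharp the transition is.
    """
    return alpha * np.where(
        d > 0,
        0.5 * np.exp(-d / beta),        # outside: decays to 0
        1.0 - 0.5 * np.exp(d / beta),   # inside: saturates at 1
    )
```

At the surface itself (d = 0) the density is exactly alpha / 2, and shrinking beta sharpens the step between the two regimes.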

It is easier to train with a higher beta. As mentioned above, with a high beta the SDF's value range is larger and the values are more spread out, so training is more stable. If beta is small, every small change in the SDF leads to a large change in the density, so the error and the gradient updates are affected a lot. Also, as shown in the paper, maintaining a similar error bound with a lower beta requires more samples per ray to give an accurate estimate.

With a lower beta, we could possibly learn a more accurate surface, since beta determines the sharpness of the density near the boundary: a small beta makes the recovered surface cleaner and more accurate.

I chose alpha = 10, beta = 0.03, n_ptrs_per_ray = 192, batch_size = 1024, chunk_size = 8192. I made beta smaller and, at the same time, used more sample points per ray to maintain a low error bound (as proved in the paper). The paper shows one example where beta converges to ~0.02 by the end of training, so I chose 0.03 as my beta while increasing the number of sample points. I also had to decrease chunk_size considerably to fit into memory.

(Figures: VolSDF renderings)

4. Neural Surface Extras

4.1. Render a Large Scene with Sphere Tracing

The large scene is formed from three primitive components: a box, a sphere, and a torus. I place these three components at 21 randomly chosen locations. The two renderings below use two different randomizations of the locations.
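The scene SDF is the union of the placed primitives, i.e. the minimum over their individual distances. A NumPy sketch using standard primitive SDF formulas (the `placements` structure and function names are illustrative assumptions):

```python
import numpy as np

def sphere_sdf(p, center, radius):
    return np.linalg.norm(p - center, axis=-1) - radius

def box_sdf(p, center, half_sizes):
    q = np.abs(p - center) - half_sizes
    return (np.linalg.norm(np.maximum(q, 0.0), axis=-1)
            + np.minimum(q.max(axis=-1), 0.0))

def torus_sdf(p, center, R, r):
    q = p - center
    xz = np.sqrt(q[..., 0] ** 2 + q[..., 2] ** 2) - R
    return np.sqrt(xz ** 2 + q[..., 1] ** 2) - r

def scene_sdf(p, placements):
    """Union of primitives: the scene distance is the min over all parts."""
    dists = [sdf(p, *args) for sdf, args in placements]
    return np.min(np.stack(dists, axis=0), axis=0)
```

Each of the 21 placements is then a (primitive, parameters) pair with a randomly drawn center, and the same sphere tracer from part 1 renders the composite scene unchanged.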

(Figures: two renderings of the large scene)

4.3 Alternate SDF to Density Conversions

I tried the naive density conversion from NeuS: high density near the surface, and near zero elsewhere. This naive solution gives a worse rendering than the Laplace-CDF conversion used in VolSDF. The rendering below shows the result with the NeuS solution (s = 5); as you can see, it is not accurate and is blurry everywhere. I increased the number of epochs from 250 to 400 and kept n_ptrs_per_ray = 128.
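The naive NeuS conversion applies the logistic density of scale s directly to the SDF value, producing a density bump centered on the zero level set; a NumPy sketch:

```python
import numpy as np

def naive_sdf_to_density(d, s=5.0):
    """Naive NeuS-style conversion: logistic density of the SDF value.

    Density peaks at the zero level set (the surface) and decays on
    both sides; s controls how concentrated the peak is.
    """
    e = np.exp(-s * d)
    return s * e / (1.0 + e) ** 2
```

The peak value at the surface is s / 4, so with s = 5 the maximum density is only 1.25, which spreads opacity thinly around the surface and is consistent with the blurry renderings observed.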

(Figure: rendering with the NeuS density conversion)