Neural Surfaces

1. Sphere Tracing

To implement sphere tracing, I followed the pseudocode from the lecture slides:


		while f(p) > epsilon:
			t=t+f(p)
			p=x0+td
						
where \(x_0\) is the ray origin, \(f\) is the implicit function, \(t\) is the current distance along the ray, and \(d\) is the ray direction. In each iteration we update the marched point and signed distance for every ray according to this pseudocode. After a fixed number of iterations, we stop updating and check the distance between each ray's point and the object: if the distance is below a threshold, the ray is determined to intersect the surface.
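The loop above can be sketched in NumPy for a single ray; the function and parameter names here (`sphere_trace`, `eps`, `far`) are illustrative, not the assignment's exact interface:

```python
import numpy as np

def sphere_trace(f, x0, d, eps=1e-5, max_iters=64, far=10.0):
    """March a ray p = x0 + t*d until f(p) < eps or t exceeds `far`.

    f  : SDF, maps a 3-vector to a signed distance
    x0 : ray origin; d : unit ray direction
    Returns (hit, p): whether the ray hit the surface, and the final point.
    """
    t = 0.0
    p = x0
    for _ in range(max_iters):
        dist = f(p)
        if dist < eps:       # close enough to the surface: report a hit
            return True, p
        t = t + dist         # safe step: the SDF bounds the empty space
        p = x0 + t * d
        if t > far:          # ray left the scene bounds without hitting
            break
    return False, p

# Test scene: a unit sphere at the origin, ray marching in from z = -3.
sphere_sdf = lambda p: np.linalg.norm(p) - 1.0
hit, p = sphere_trace(sphere_sdf, np.array([0.0, 0.0, -3.0]),
                      np.array([0.0, 0.0, 1.0]))
```

In the batched version used for rendering, the same update is applied to all rays at once, with a mask tracking which rays have already converged.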

2. Optimizing a Neural SDF

MLP My implementation of the MLP in this assignment is similar to the one from the last assignment: it takes Harmonic Embeddings of the points as input and has 6 hidden linear layers followed by one head at the end that outputs the distance. For VolSDF, since the distances are real numbers (and can be negative), we do not apply a ReLU activation at the output layer.
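The Harmonic Embedding input can be sketched as follows; the frequency count and feature ordering here are illustrative, not the assignment's exact configuration:

```python
import numpy as np

def harmonic_embedding(x, n_freqs=4):
    """Map each coordinate to sin/cos features at octave frequencies.

    x : (..., D) array of points.
    Returns (..., D * 2 * n_freqs) features, which serve as the MLP input.
    """
    freqs = 2.0 ** np.arange(n_freqs)   # 1, 2, 4, 8, ...
    angles = x[..., None] * freqs       # (..., D, n_freqs)
    feats = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return feats.reshape(*x.shape[:-1], -1)

emb = harmonic_embedding(np.zeros((5, 3)), n_freqs=4)
```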

Eikonal Loss The Eikonal loss forces the norm of the SDF's spatial gradient to be 1: \(\mathcal{L}_{\text{Eikonal}}=(\|\nabla_x f(x)\|-1)^2\).

Loss: The total loss I used is simply the MSE loss on the predicted distances plus the Eikonal constraint described above.
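The Eikonal term is straightforward given the gradients at the sampled points (in training these come from autograd; the NumPy version below is a minimal sketch):

```python
import numpy as np

def eikonal_loss(grads):
    """Eikonal penalty: push the SDF gradient norm toward 1.

    grads : (N, 3) array of spatial gradients df/dx at sampled points.
    Returns the mean of (||grad|| - 1)^2 over the batch.
    """
    norms = np.linalg.norm(grads, axis=-1)
    return np.mean((norms - 1.0) ** 2)

# A true SDF has unit-norm gradients everywhere, so the penalty vanishes.
unit_grads = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
```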

3. VolSDF

Intuitively, \(\beta\) controls how smoothly the density changes near the surface, while \(\alpha\) is a scaling factor that controls the magnitude of the density.
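Concretely, VolSDF converts the signed distance \(d\) to density via \(\sigma(x)=\alpha\,\Psi_\beta(-d(x))\), where \(\Psi_\beta\) is the CDF of a zero-mean Laplace distribution with scale \(\beta\). A small sketch (the default values are illustrative):

```python
import numpy as np

def sdf_to_density(d, alpha=10.0, beta=0.05):
    """VolSDF conversion: sigma = alpha * Psi_beta(-d), with Psi_beta the
    CDF of a zero-mean Laplace distribution of scale beta.

    alpha scales the overall density; beta sets how sharply the density
    saturates as we cross the surface (d = 0).
    """
    s = -np.asarray(d, dtype=float)
    psi = np.where(s <= 0,
                   0.5 * np.exp(s / beta),          # outside: density decays
                   1.0 - 0.5 * np.exp(-s / beta))   # inside: saturates at 1
    return alpha * psi

dens = sdf_to_density(np.array([1.0, 0.0, -1.0]), alpha=10.0, beta=0.05)
```

With a small \(\beta\) the transition from 0 to \(\alpha\) is nearly a step at the surface; a large \(\beta\) spreads it out, which is exactly the smoothing/blurring effect discussed below.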

As shown in the results below, a high \(\beta\) increases the smoothness of the surface and makes the results blurry. A low \(\beta\), in contrast, encourages more detail and sharpness in the results.

An SDF will be easier to train with a higher \(\beta\), as the density transition (and therefore the gradient) is smoother. With a low \(\beta\), in contrast, the gradients may explode.

To generate a more accurate surface, a lower \(\beta\) should be used: the model is penalized more heavily when density leaks away from the surface, leading to a sharper result. Still, if we set \(\beta\) too low, the gradients may explode and the model may fail to converge.

Result

I basically followed all the settings in the given configuration files, and conducted experiments with different \(\beta\) values.

  • \(\beta=0.05\)

  • \(\beta=0.5\)

  • \(\beta=0.01\)

  • \(\beta=0.05\)

  • \(\beta=0.5\)

  • \(\beta=0.01\)

4. Neural Surface Extras

4.1 Render a Large Scene with Sphere Tracing

I created a scene with 15 stacked spheres, 15 tori, 15 rounded boxes, and 15 cubes. The results are shown below.
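Composing a scene of many primitives reduces to taking the pointwise minimum of their SDFs (the union operation). A minimal sketch with two placeholder objects; the full scene repeats this pattern for all 60 primitives (positions and sizes here are hypothetical):

```python
import numpy as np

def sphere(p, center, r):
    """SDF of a sphere of radius r."""
    return np.linalg.norm(p - center) - r

def rounded_box(p, center, half_size, radius):
    """SDF of an axis-aligned box with edges rounded by `radius`
    (standard rounded-box formulation)."""
    q = np.abs(p - center) - half_size
    return (np.linalg.norm(np.maximum(q, 0.0))
            + min(max(q[0], q[1], q[2]), 0.0) - radius)

def scene_sdf(p):
    """Union of primitives = pointwise minimum of their SDFs."""
    return min(
        sphere(p, np.array([0.0, 0.0, 0.0]), 1.0),
        rounded_box(p, np.array([3.0, 0.0, 0.0]),
                    np.array([0.5, 0.5, 0.5]), 0.1),
    )

d = scene_sdf(np.array([0.0, 0.0, 0.0]))
```

The composite `scene_sdf` is still a valid SDF (up to the usual caveats of the min operation), so the same sphere-tracing loop renders the whole scene unchanged.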

4.2 Fewer Training Views

The results with 20, 10, and 5 training views for both NeRF and VolSDF are shown below. The VolSDF results with 10 and 20 views are blurrier than those of NeRF. However, VolSDF can still generate a smooth rendering even with only 5 views, as it is better regularized, while NeRF cannot.

  • VolSDF, views = 20

  • VolSDF, views = 10

  • VolSDF, views = 5

  • NeRF, views = 20

  • NeRF, views = 10

  • NeRF, views = 5

4.3 Alternate SDF to Density Conversions

I implemented the naive solution from the NeuS paper, which computes density as the logistic density of the SDF value, \(\sigma=\frac{se^{-sx}}{(1+e^{-sx})^2}\), with \(s=100\).
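This conversion is a one-liner; the sketch below evaluates the formula above (the variable names are mine):

```python
import numpy as np

def neus_density(x, s=100.0):
    """Naive NeuS conversion: the logistic density of the SDF value x.

    phi_s(x) = s * exp(-s*x) / (1 + exp(-s*x))**2, which peaks at the
    surface (x = 0) with value s/4 and decays symmetrically on both sides.
    """
    e = np.exp(-s * np.asarray(x, dtype=float))
    return s * e / (1.0 + e) ** 2

vals = neus_density(np.array([0.0, 0.05, -0.05]), s=100.0)
```

Unlike the Laplace-CDF density of VolSDF, which saturates inside the object, this density is a bump concentrated at the surface, and larger \(s\) makes the bump narrower and taller.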