1. Sphere Tracing
The way I implemented it is to iteratively update the point positions along each ray, where each iteration does the following (see the sketch after this list):
- check whether the current SDF value is below a threshold, e.g., 1e-3
- if yes, the point is considered to already be on the surface
- if not, the point position is updated to current_position + sdf * ray_direction
- check whether the maximum number of iterations has been reached
- if not, continue iterating
- if yes, points whose SDF values are below the threshold are assumed to be on the surface, and points whose SDF values are still above the threshold are assumed to not intersect the surface
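Below is a minimal PyTorch sketch of this loop; `implicit_fn`, the tensor shapes, and the iteration/threshold defaults are assumptions for illustration rather than the exact interface I used.

```python
import torch

def sphere_trace(implicit_fn, origins, directions, max_iters=64, eps=1e-3):
    # implicit_fn: maps (N, 3) points to (N, 1) signed distances (assumed interface).
    points = origins.clone()                      # (N, 3) current positions on each ray
    for _ in range(max_iters):
        sdf = implicit_fn(points)                 # (N, 1) current signed distances
        converged = sdf < eps                     # points already on the surface
        if converged.all():
            break
        # March each unconverged point forward by its current signed distance.
        points = torch.where(converged, points, points + sdf * directions)
    mask = implicit_fn(points) < eps              # final surface hit vs. miss
    return points, mask
```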
2. Optimizing a Neural SDF

3. VolSDF
- Alpha controls the scale of the density function: with beta fixed, a larger alpha gives a larger density. Beta controls how smoothly the density transitions from inside the surface to outside. In the limit beta -> 0, the density approaches alpha inside the surface and 0 outside; in the limit beta -> +inf, the density approaches alpha/2 everywhere. (The conversion itself is sketched after this list.)
- As beta gets larger, the converted density becomes smoother and less sensitive to the input SDF. Suppose the ideal density is alpha everywhere inside the surface and 0 everywhere outside; then a large beta biases the learned SDF toward jumps across the boundary, i.e., points just inside the boundary are pushed further inside and points just outside are pushed further outside (in other words, the SDF of a point at -0.1 might be learned as -1, and the SDF of a point at 0.1 might be learned as 1), because with a large beta the transformed density only decays smoothly near the boundary, so the network compensates by inflating the SDF magnitudes. A low beta lets the learned SDF stay accurate without these jumps.
- Since the gradient of \psi with respect to s is scaled by 1/beta, a larger beta keeps the gradient bounded and stable, while a smaller beta can cause large, unstable gradients, especially when using SGD. So I would guess a slightly larger beta makes training easier.
- As explained above, if the ideal density function is alpha inside the surface and 0 outside, I think a small beta makes the learned SDF more accurate.
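For reference, the conversion these points refer to is sigma(x) = alpha * Psi_beta(-d(x)), where Psi_beta is the CDF of a zero-mean Laplace distribution with scale beta. Below is a minimal sketch of that formula; the function name, argument names, and default values are placeholders, not the assignment's API.

```python
import torch

def volsdf_density(signed_distance, alpha=10.0, beta=0.05):
    # sigma = alpha * Psi_beta(-d), where Psi_beta is the CDF of a zero-mean
    # Laplace distribution with scale beta (the alpha/beta defaults are placeholders).
    s = -signed_distance
    psi = torch.where(
        s <= 0,
        0.5 * torch.exp(s / beta),
        1.0 - 0.5 * torch.exp(-s / beta),
    )
    return alpha * psi
```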
For training, I implemented the distance MLP with hidden layers of sizes [128, 128, 128] and ReLU activations. The color MLP has hidden layers of sizes [128, 128, 128, 128], with ReLU activations and a sigmoid as the final non-linearity. I used the default training hyperparameters.
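The sketch below shows how these two heads could be assembled; the embedding dimension and the way the color head consumes its input are simplifying assumptions rather than my exact code.

```python
import torch.nn as nn

def mlp(in_dim, hidden_dims, out_dim, final_activation=None):
    # Stack Linear + ReLU blocks with the given hidden widths.
    layers, d = [], in_dim
    for h in hidden_dims:
        layers += [nn.Linear(d, h), nn.ReLU()]
        d = h
    layers.append(nn.Linear(d, out_dim))
    if final_activation is not None:
        layers.append(final_activation)
    return nn.Sequential(*layers)

embed_dim = 39  # placeholder size of the positionally-encoded input
distance_mlp = mlp(embed_dim, [128, 128, 128], 1)                  # predicts the SDF value
color_mlp = mlp(embed_dim, [128, 128, 128, 128], 3, nn.Sigmoid())  # predicts RGB in [0, 1]
```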
Result:


4.2 Fewer Training Views
I used only 1/5 of the original number of training views. In terms of RGB reconstruction, NeRF still gives a reasonable result, while VolSDF's reconstruction quality drops noticeably. In terms of geometry, with these fewer views, the VolSDF result seems to be better.
VolSDF result:


NeRF result:
4.3 Alternate SDF to Density Conversions
I tried the SDF-to-density function from the NeuS paper, setting s = 1. The reconstruction result is much worse.
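For context, the conversion I tried is the naive mapping discussed in the NeuS paper, which plugs the SDF into the logistic density sigma(d) = s * e^{-s d} / (1 + e^{-s d})^2. A minimal sketch with s = 1 follows; the function name is a placeholder.

```python
import torch

def neus_naive_density(signed_distance, s=1.0):
    # Logistic density phi_s(d) = s * exp(-s*d) / (1 + exp(-s*d))**2,
    # used here directly as the density, with s = 1 as in my experiment.
    e = torch.exp(-s * signed_distance)
    return s * e / (1.0 + e) ** 2
```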