Implementation: in each iteration, I compute the implicit value of the point p.
If this implicit value is larger than 1e-5, I increment t (initialized to 0) by this implicit value and recompute p = origin + t * direction; otherwise I break out of the loop.
The maximum number of iterations is 64. Points whose implicit value is still larger than 1e-5 after 64 iterations are treated as having no intersection with the surface.
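A minimal sketch of this sphere-tracing loop is below; the function name, tensor shapes, and the `implicit_fn` interface are assumptions for illustration, not the exact code used.

```python
import torch

def sphere_trace(implicit_fn, origins, directions, max_iters=64, eps=1e-5):
    """Hypothetical sketch: origins/directions are (N, 3); implicit_fn(points)
    is assumed to return an (N, 1) implicit (distance) value."""
    t = torch.zeros(origins.shape[0], 1, device=origins.device)
    points = origins + t * directions
    for _ in range(max_iters):
        dist = implicit_fn(points)          # implicit value at the current points
        converged = dist < eps              # rays already close enough to the surface
        if converged.all():
            break
        # march only the unconverged rays forward by their current implicit value
        t = torch.where(converged, t, t + dist)
        points = origins + t * directions
    mask = implicit_fn(points) < eps        # rays still above the threshold count as misses
    return points, mask
```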
MLP structure: 8 linear layer + ReLU activation blocks, with harmonic embeddings as input and a skip connection at the 5th layer.
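A rough sketch of such an MLP is below; the layer width, embedding dimension, and output head are assumptions rather than the actual network used.

```python
import torch
import torch.nn as nn

class SDFMLP(nn.Module):
    """Assumed sketch: 8 linear+ReLU layers, harmonic-embedded input,
    skip connection re-injecting the embedding at the 5th layer (index 4)."""
    def __init__(self, embed_dim=39, hidden=256, n_layers=8, skip=4):
        super().__init__()
        self.skip = skip
        layers = []
        for i in range(n_layers):
            in_dim = embed_dim if i == 0 else hidden
            if i == skip:                      # skip connection: concatenate the embedding again
                in_dim = hidden + embed_dim
            layers.append(nn.Linear(in_dim, hidden))
        self.layers = nn.ModuleList(layers)
        self.out = nn.Linear(hidden, 1)        # scalar signed distance

    def forward(self, x_embed):
        h = x_embed
        for i, layer in enumerate(self.layers):
            if i == self.skip:
                h = torch.cat([h, x_embed], dim=-1)
            h = torch.relu(layer(h))
        return self.out(h)
```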
Eikonal loss: the squared (L2) difference between the norm of the input points' gradients and 1, i.e. encouraging the gradient norm to be 1.
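A small sketch of this penalty, assuming the gradients at the sampled points are already computed as an (N, 3) tensor:

```python
import torch

def eikonal_loss(gradients):
    """Push the per-point gradient norm of the SDF toward 1 (gradients: (N, 3))."""
    grad_norm = gradients.norm(dim=-1)          # per-point gradient magnitude
    return ((grad_norm - 1.0) ** 2).mean()      # squared deviation from 1
```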
1. A higher beta would bias the SDF more, since a higher beta makes the density vary more smoothly with distance from the surface.
2. A higher beta would make the SDF easier to train, since it makes the density values smoother and therefore the training loss smoother.
3. A lower beta would be more likely to learn an accurate surface, since the density then peaks sharply near the surface rather than being smoothed out.
I use the default settings of alpha=10 and beta=0.25, which work very well: these moderate values balance training difficulty against density accuracy (see the sketch of the SDF-to-density conversion below).
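The sketch below shows the VolSDF-style SDF-to-density mapping assumed in the discussion above, density = alpha * Laplace_CDF(-sdf; scale=beta); the function name and exact formulation are illustrative, but it makes the role of beta concrete: a larger beta spreads the density smoothly around the surface, a smaller beta concentrates it there.

```python
import torch

def sdf_to_density(signed_distance, alpha=10.0, beta=0.25):
    """Assumed sketch: density = alpha * CDF of a zero-mean Laplace(scale=beta)
    evaluated at the negated signed distance."""
    s = -signed_distance
    half_tail = 0.5 * torch.exp(-torch.abs(s) / beta)   # Laplace tail mass
    cdf = torch.where(s <= 0, half_tail, 1.0 - half_tail)
    return alpha * cdf
```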
Generally speaking, the reconstruction result of VolSDF is much worse than NeRF's.
With only 20 views, the reconstruction quality of VolSDF is much worse than with 100 views; only the general colors are recovered. The NeRF result is much more robust to the reduced number of views.