For the implementation of sphere tracing, I have done the following:
I define placeholders for the intersections and masks tensors, along with a small threshold value eps: if a ray's distance to the surface falls below eps, the ray is considered to have intersected the surface.
I then calculate the first set of distances to the surface from the origin.
After this, in a loop, I find the new ray points (marching each ray along its direction by the closest distance to the surface, which serves as the sphere radius), compute the distance to the surface at the new points, check for intersection with the surface using the threshold, update the intersections and masks tensors, and finally update the origins to the new points, from which the next sphere is computed.
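The loop above can be sketched as follows. This is a minimal PyTorch sketch, not the assignment's exact interface: the `sdf` callable, `max_iters`, and the tensor shapes are assumptions.

```python
import torch

def sphere_tracing(sdf, origins, directions, max_iters=64, eps=1e-5):
    # origins, directions: (N, 3) ray origins and unit ray directions.
    # sdf: callable mapping (N, 3) points -> (N, 1) signed distances.
    # Placeholders for the intersection points and the hit mask.
    points = origins.clone()
    mask = torch.zeros(origins.shape[0], dtype=torch.bool)

    # First set of distances to the surface from the ray origins.
    dist = sdf(points).squeeze(-1)
    for _ in range(max_iters):
        # March each ray forward by its closest distance to the surface
        # (the distance acts as the radius of the sphere along the ray).
        points = points + dist.unsqueeze(-1) * directions
        dist = sdf(points).squeeze(-1)
        # A ray has intersected once its distance falls below eps.
        mask = mask | (dist < eps)
    return points, mask
```

For a unit sphere at the origin, a ray starting at (0, 0, -3) and pointing along +z converges to the intersection point (0, 0, -1) in one step, since the first distance estimate is exactly 2.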
The amount of smoothing of the density at the boundaries of a homogeneous object is controlled by β. A higher β makes the network predict a smoother object, which may lose some finer details. A lower β gives more voxelized results.
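To make β's role concrete: VolSDF converts a signed distance d to density as α times the CDF of a zero-mean Laplace distribution with scale β, evaluated at −d. The helper below is my own NumPy sketch of that mapping (the defaults mirror the α = 10, β = 0.05 used later, and are not part of the original code):

```python
import numpy as np

def sdf_to_density(d, alpha=10.0, beta=0.05):
    # Laplace-CDF density from VolSDF: alpha * Psi_beta(-d).
    # Outside the surface (d > 0) the density decays toward 0;
    # inside (d < 0) it saturates toward alpha. Beta sets how
    # smoothly the density transitions across the boundary.
    return alpha * np.where(
        d >= 0,
        0.5 * np.exp(-d / beta),
        1.0 - 0.5 * np.exp(d / beta),
    )
```

At the surface (d = 0) the density is α/2 regardless of β; a smaller β makes the density fall off much faster just outside the surface, which is exactly the "less smoothing, sharper boundary" behavior described above.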
Training works better with a lower beta value, as SDFs are smooth surfaces by nature.
The lower the value of beta, the more accurate the learned surface, as the density concentrates tightly around the zero level set and more intricate details are learned.
I chose to train with a learning rate of 0.0005 for 250 epochs, with alpha = 10 and beta = 0.05. This seems to work well because beta is fairly low.
I have added 4 tori and 20 spheres to the scene. These renders were created from 20 views instead of 100.
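A composite scene like this can be expressed as a single SDF by taking the minimum over the per-primitive distances (a min-union). The sketch below uses standard analytic sphere and torus SDFs; the function names, centers, and radii are illustrative, not the ones from my actual scene.

```python
import torch

def sphere_sdf(points, center, radius):
    # Signed distance to a sphere: distance to center minus radius.
    return (points - center).norm(dim=-1) - radius

def torus_sdf(points, center, radii):
    # radii = (R, r): major and minor radius of a torus lying in the xz-plane.
    p = points - center
    ring = p[..., [0, 2]].norm(dim=-1) - radii[0]
    q = torch.stack([ring, p[..., 1]], dim=-1)
    return q.norm(dim=-1) - radii[1]

def scene_sdf(points, spheres, tori):
    # Union of all primitives: the scene distance at a point is the
    # minimum of the individual signed distances.
    dists = [sphere_sdf(points, c, r) for c, r in spheres]
    dists += [torus_sdf(points, c, rr) for c, rr in tori]
    return torch.stack(dists, dim=-1).min(dim=-1).values
```

Because the union is just a min, this composite SDF plugs directly into the same sphere-tracing loop used for a single object.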
VolSDF is definitely more robust to fewer views; the NeRF solution lacks finer details when trained on fewer views.