You can run the code for part 1 with:
python -m a4.main --config-name=torus
Output:
After this, you should be able to train a NeuralSurface representation by running:
python -m a4.main --config-name=points
This should save `part_2_input.gif` and `part_2.gif` in the `images` folder. The former visualizes the input point cloud used for training, and the latter shows your prediction, which you should include on the webpage along with brief descriptions of your MLP and eikonal loss. You might need to tune hyperparameters (e.g., number of layers, epochs, weight of regularization) for good results.
Input Point Cloud:
Output Geometry:
1. How does high `beta` bias your learned SDF? What about low `beta`?
2. Would an SDF be easier to train with volume rendering and low `beta` or high `beta`? Why?
3. Would you be more likely to learn an accurate surface with high `beta` or low `beta`? Why?
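To ground these questions: VolSDF converts a signed distance s to density as σ = α · Ψ_β(−s), where Ψ_β is the CDF of a zero-mean Laplace distribution with scale β, so β controls how sharply density concentrates around the surface. A minimal sketch of that conversion (parameter names and defaults are illustrative, not the starter code's exact interface):

```python
import torch

def sdf_to_density(signed_distance, alpha=10.0, beta=0.05):
    """VolSDF-style density: alpha * Psi_beta(-s), with Psi_beta the CDF of a
    zero-mean Laplace distribution of scale beta. Large beta smooths the
    density around the surface; small beta makes it sharper."""
    s = signed_distance
    inside = alpha * (1.0 - 0.5 * torch.exp(s / beta))   # s < 0 (inside the surface)
    outside = alpha * 0.5 * torch.exp(-s / beta)         # s >= 0 (outside the surface)
    return torch.where(s < 0, inside, outside)
```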
Run with:
python -m a4.main --config-name=volsdf
This will save `part_3_geometry.gif` and `part_3.gif`. Experiment with hyperparameters and attach your best results on your webpage. Comment on the settings you chose and why they seem to work well.
Alpha = 10, Beta = 0.01
Alpha = 10, Beta = 1.0
Alpha = 10, Beta = 0.001
Run with:
python -m a4.main --config-name=composite
I created a new class, `CompositeSDF`, which takes a list of SDFs specified in a config file and renders their union. For every query point, it simply returns the minimum distance across all the SDFs; a sketch of the idea is shown below.
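A minimal sketch of that idea (simplified relative to my config-driven implementation; here the constructor takes an explicit list of SDF modules):

```python
import torch

class CompositeSDF(torch.nn.Module):
    """Union of several SDFs: the distance to the union of shapes is the
    minimum of the individual distances at each query point."""

    def __init__(self, sdfs):
        super().__init__()
        self.sdfs = torch.nn.ModuleList(sdfs)

    def forward(self, points):
        # points: (N, 3) -> stacked distances: (num_sdfs, N, 1)
        distances = torch.stack([sdf(points) for sdf in self.sdfs], dim=0)
        # Union of shapes = elementwise minimum over all SDFs
        return distances.min(dim=0).values
```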
Result for a simple scene with many tori and spheres.
I compared VolSDF output to NeRF for 100, 20, and 5 views.
| Number of Views | NeRF | VolSDF Geometry | VolSDF Color |
|---|---|---|---|
| 100 | ![]() | ![]() | ![]() |
| 20 | ![]() | ![]() | ![]() |
| 5 | ![]() | ![]() | ![]() |
As shown above, NeRF produced higher-quality renderings when fewer views were available.
I also tried the naive SDF-to-density approach, which is visualized below.
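For reference, one simple naive formulation is a scaled logistic sigmoid of the negated SDF; a minimal sketch assuming that variant (the exact formulation and scale used for the result below may differ):

```python
import torch

def naive_sdf_to_density(signed_distance, scale=100.0):
    """Naive SDF-to-density conversion: a logistic sigmoid of the negated
    signed distance, so density is high inside the surface and falls off
    smoothly outside. The scale value here is illustrative only."""
    return torch.sigmoid(-scale * signed_distance)
```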