Intuition:
With reference to the paper, alpha can be intuitively understood as the constant density assigned to the interior of the object, while beta is a smoothness parameter controlling how quickly the density decays near the boundary.
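This intuition can be sketched numerically. Assuming a VolSDF-style conversion where the density is alpha times the CDF of a zero-mean Laplace distribution (scale beta) evaluated at the negated signed distance (the function names here are illustrative, not from the assignment code):

```python
import numpy as np

def laplace_cdf(s, beta):
    # CDF of a zero-mean Laplace distribution with scale beta
    return np.where(s <= 0,
                    0.5 * np.exp(s / beta),
                    1.0 - 0.5 * np.exp(-s / beta))

def sdf_to_density(d, alpha=10.0, beta=0.1):
    # VolSDF-style density: sigma(x) = alpha * Psi_beta(-d(x)),
    # where d is the signed distance (negative inside the object)
    return alpha * laplace_cdf(-d, beta)

# Deep inside (d << 0) the density saturates near alpha,
# at the surface (d = 0) it equals alpha / 2,
# and far outside (d >> 0) it decays toward 0.
print(sdf_to_density(np.array([-1.0, 0.0, 1.0])))
```

This makes the roles of the two parameters concrete: alpha sets the interior density level, and beta sets the width of the transition band around the surface.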
- How does high beta bias your learned SDF? What about low beta?
A high beta spreads the density over a wider region around the surface, which can lead to blurrier/smoother results. A low beta concentrates the density near the surface; as beta approaches 0, the density converges to a scaled indicator function of the object's interior.
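The indicator-function limit can be checked with a small numeric experiment (a sketch assuming the same alpha-scaled Laplace-CDF density as in the paper; the function name is illustrative):

```python
import numpy as np

def density(d, alpha, beta):
    # alpha-scaled Laplace CDF of the negated signed distance (VolSDF-style)
    s = -d
    return alpha * np.where(s <= 0,
                            0.5 * np.exp(s / beta),
                            1.0 - 0.5 * np.exp(-s / beta))

# Signed distances just inside and just outside the surface
d = np.array([-0.2, -0.05, 0.0, 0.05, 0.2])
for beta in (1.0, 0.1, 0.01):
    print(f"beta={beta}:", np.round(density(d, alpha=1.0, beta=beta), 3))
```

With beta = 1.0 the density varies gently across the boundary, while with beta = 0.01 it is essentially alpha inside and 0 outside, i.e. a scaled indicator function.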
- Would an SDF be easier to train with volume rendering and low beta or high beta? Why?
It would be easier to train with a slightly higher beta: the density varies smoothly over a wider band around the surface, so more sample points receive useful gradients. With a very low beta, gradients vanish away from the surface and optimization can get stuck. We need to balance the two, as there is a tradeoff between trainability and surface sharpness.
- Would you be more likely to learn an accurate surface with high beta or low beta? Why?
A low beta is more likely to yield an accurate surface: the density falls off sharply at the boundary, so the rendered surface is localized tightly around the zero level set of the SDF, giving sharper results.
Regarding the hyperparameters, I increased the number of neurons to 200 to increase the representational capacity of the network, and reduced the number of epochs to 50 to avoid overfitting.
I also changed the lr_scheduling rate to step every 20 steps for better convergence within 50 epochs. The default alpha and beta values worked well for me, so I kept them the same.