Since the SDF outputs the distance to the closest surface, one can directly use the SDF value to perform sphere tracing. The tracing runs for `max_iter` iterations, and a threshold on the SDF value is used to decide whether the ray intersects the surface or not.
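Below is a minimal sketch of such a sphere-tracing loop (the `sdf_model` callable, tensor shapes, and the `eps`/`far` defaults are assumptions for illustration, not the exact assignment code):

```python
import torch

def sphere_trace(sdf_model, origins, directions, max_iters=64, eps=1e-5, far=10.0):
    """March each ray forward by the predicted SDF value until |sdf| < eps (a hit)
    or the ray exceeds the far bound (a miss).

    origins, directions: (N, 3) tensors; directions are assumed normalized.
    """
    t = torch.zeros(origins.shape[0], device=origins.device)        # distance marched per ray
    hit = torch.zeros(origins.shape[0], dtype=torch.bool, device=origins.device)
    points = origins.clone()

    for _ in range(max_iters):
        with torch.no_grad():
            dist = sdf_model(points).squeeze(-1)                     # signed distance at current points
        hit = hit | (dist.abs() < eps)                               # close enough to the surface
        step = torch.where(hit, torch.zeros_like(dist), dist)        # freeze rays that already hit
        t = t + step
        points = origins + t.unsqueeze(-1) * directions              # advance along each ray

    hit = hit & (t < far)                                            # rays that marched too far are misses
    return points, hit
```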
A simple 4-hidden-layer MLP is used to predict the SDF (no ReLU on the distance output). Again, harmonic embedding is used. To compute the Eikonal loss, I take the L2 norm of the SDF gradient and apply an MSE loss to encourage it to be 1. Results are shown further below.
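First, a minimal sketch of this Eikonal term, assuming a hypothetical `sdf_model(points)` that returns per-point signed distances:

```python
import torch
import torch.nn.functional as F

def eikonal_loss(sdf_model, points):
    """Penalize deviation of the SDF gradient norm from 1 at the sampled points."""
    points = points.clone().requires_grad_(True)
    sdf = sdf_model(points)                                   # (N, 1) signed distances
    grad = torch.autograd.grad(
        outputs=sdf, inputs=points,
        grad_outputs=torch.ones_like(sdf),
        create_graph=True,                                    # keep the graph so the penalty is trainable
    )[0]                                                      # (N, 3) spatial gradient
    grad_norm = grad.norm(dim=-1)                             # L2 norm of the gradient
    return F.mse_loss(grad_norm, torch.ones_like(grad_norm))  # encourage ||grad|| = 1
```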
*(Figures: input | sdf)*
Similarly, a simple 4-hidden-layer MLP is used to predict both the SDF and the color at the same time. The output dimension is 4: the first dimension is the SDF value, and the remaining three are passed through a sigmoid to get RGB values. Hyperparameters are the default ones.
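A minimal sketch of this joint head (the hidden width, embedding dimension, and class/argument names are assumptions, not the exact assignment code):

```python
import torch
import torch.nn as nn

class SDFColorMLP(nn.Module):
    """4-hidden-layer MLP whose 4-d output is [sdf, r, g, b]."""

    def __init__(self, embed_dim=39, hidden=128):
        super().__init__()
        layers, in_dim = [], embed_dim
        for _ in range(4):                        # four hidden layers
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        layers.append(nn.Linear(in_dim, 4))       # raw 4-d output, no activation here
        self.net = nn.Sequential(*layers)

    def forward(self, embedded_points):
        out = self.net(embedded_points)
        sdf = out[..., :1]                        # signed distance, left unconstrained (no ReLU)
        rgb = torch.sigmoid(out[..., 1:])         # colors squashed into [0, 1]
        return sdf, rgb
```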
1. A lower beta approximates a surface better, since the absolute values inside the exponents become larger, making the density behave more like a step function at the surface. A higher beta produces a smoother density transition along the ray (the SDF-to-density conversion that beta controls is sketched after this list).
2. Intuitively, a higher beta is easier to train, because the gradients are more stable (less risk of exploding or vanishing gradients).
3. As discussed in (1), if the model trains properly, a lower beta can model surfaces more accurately. However, there is a risk of unstable training if beta is too low.
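For context, beta here is the scale of a VolSDF-style Laplace CDF that converts signed distance into volume density; the sketch below is my reading of that conversion (the exact parameterization and the `alpha`/`beta` defaults are assumptions, not necessarily the assignment's code):

```python
import torch

def sdf_to_density(signed_distance, alpha=10.0, beta=0.05):
    """VolSDF-style density: alpha * LaplaceCDF(-sdf; scale=beta).

    Small beta -> the exponent's magnitude grows, so the density jumps almost like
    a step function at the surface; large beta -> a smoother, easier-to-optimize ramp.
    """
    s = -signed_distance                                    # positive inside the surface
    psi = torch.where(
        s <= 0,
        0.5 * torch.exp(s.clamp(max=0.0) / beta),           # Laplace CDF, branch s <= 0
        1.0 - 0.5 * torch.exp(-s.clamp(min=0.0) / beta),    # branch s > 0 (clamps avoid overflow)
    )
    return alpha * psi
```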
*(Figures: geometry | rendering)*
I experimented with fewer training views (20 views) by directly changing the training indices (e.g. `train_idx = train_idx[:20]`). Interestingly, both NeRF and the neural surface model perform reasonably well with 20 views, with only slightly blurrier results.
*(Figures: Neural Surface geometry (full view) | Neural Surface rendering (full view) | Neural Surface geometry (20 view) | Neural Surface rendering (20 view))*
*(Figures: NeRF rendering (full view) | NeRF rendering (20 view))*