Late days used: 1
The sphere_tracing() function implements the sphere tracing algorithm. It was implemented following this pseudo-code:
```python
import torch

def sphere_tracing(implicit_fn, origins, directions, far=100.0, eps=1e-5):
    # Distance marched along each ray, shape (N, 1); start at the ray origins
    z = torch.zeros((origins.shape[0], 1), device=origins.device)
    # True for rays that may still hit the surface
    mask = torch.ones((origins.shape[0], 1), dtype=torch.bool, device=origins.device)
    points = origins + z * directions
    dists = implicit_fn(points)  # SDF value at the current points
    # March each active ray forward by its SDF value until it converges
    while torch.any(mask & (dists > eps)):
        z = z + dists
        points = origins + z * directions
        dists = implicit_fn(points)
        # Mask out rays whose SDF value exceeds the far threshold
        mask = mask & (dists < far)
    return points, mask
```
At each iteration, rays whose SDF value exceeds the far threshold are masked out, since they are assumed to have missed the surface; the remaining rays step forward by their current SDF value until it falls below eps.
Input gif
Output gif
Implementing the MLP Network
The network implemented for SDF prediction was similar to the NeRF coordinate MLP used for density prediction, with the final layer producing a scalar signed distance instead of a density.
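A minimal sketch of such a network is below; the layer count, hidden width, and number of positional-encoding frequencies are illustrative assumptions rather than the exact configuration used:

```python
import torch
import torch.nn as nn

class SDFNetwork(nn.Module):
    def __init__(self, n_freqs=6, hidden=256, n_layers=6):
        super().__init__()
        in_dim = 3 + 3 * 2 * n_freqs  # xyz plus sin/cos harmonic encodings
        layers, dim = [], in_dim
        for _ in range(n_layers):
            layers += [nn.Linear(dim, hidden), nn.ReLU()]
            dim = hidden
        layers.append(nn.Linear(dim, 1))  # scalar signed distance
        self.mlp = nn.Sequential(*layers)
        self.register_buffer("freqs", 2.0 ** torch.arange(n_freqs))

    def forward(self, x):
        # Harmonic embedding of the input coordinates, as in NeRF
        enc = [x] + [fn(x * f) for f in self.freqs for fn in (torch.sin, torch.cos)]
        return self.mlp(torch.cat(enc, dim=-1))
```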
Eikonal loss
The eikonal constraint was applied on the gradient of the outputs with respect to the inputs. The condition enforced was $\lVert \nabla_x f(x) \rVert_2 = 1$, implemented as the penalty $\mathcal{L}_{\text{eikonal}} = \mathbb{E}_x\big[(\lVert \nabla_x f(x) \rVert_2 - 1)^2\big]$.
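A sketch of how this penalty can be computed with autograd (the function name and batching here are assumptions):

```python
import torch

def eikonal_loss(implicit_fn, points):
    # points: (N, 3) locations sampled in the volume
    points = points.clone().requires_grad_(True)
    sdf = implicit_fn(points)
    # Gradient of the SDF outputs with respect to the input coordinates
    grad = torch.autograd.grad(
        outputs=sdf,
        inputs=points,
        grad_outputs=torch.ones_like(sdf),
        create_graph=True,
    )[0]
    # Penalize deviation of the gradient norm from 1
    return ((grad.norm(dim=-1) - 1.0) ** 2).mean()
```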
Theory questions
How does high $\beta$ bias your learned SDF? What about low $\beta$?
A low $\beta$ places a larger weight near the region with low SDF values: the region near the true 3D surface has a high density, which falls off sharply as we move away from it. This can be seen in the VolSDF models trained with low $\beta$, which give sharper reconstructions. For higher $\beta$, a high density is placed over a larger range of SDF values, so the transmittance is distributed across a wider range of points along each ray; this can reduce the image reconstruction error during training, but it makes the reconstructions lose fidelity.
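For reference, VolSDF converts the signed distance $d(x)$ to density through the CDF of a zero-mean Laplace distribution with scale $\beta$, which makes the role of $\beta$ explicit:

$$
\sigma(x) = \alpha\,\Psi_\beta\!\left(-d(x)\right),
\qquad
\Psi_\beta(s) =
\begin{cases}
\frac{1}{2}\exp\!\left(\frac{s}{\beta}\right) & s \le 0,\\[2pt]
1 - \frac{1}{2}\exp\!\left(-\frac{s}{\beta}\right) & s > 0.
\end{cases}
$$

As $\beta \to 0$ the density approaches a scaled indicator of the object interior, while a larger $\beta$ smears density over a band of width proportional to $\beta$ around the surface.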
Would an SDF be easier to train with volume rendering and low $\beta$ or high $\beta$? Why?
An SDF with an image reconstruction loss would be easier to train with a higher $\beta$, since a high $\beta$ allows the network to distribute the radiance across more points along a ray, whereas a low $\beta$ forces the network to place radiance correctly at a point close to the true surface, making optimization harder.
Would you be more likely to learn an accurate surface with high $\beta$ or low $\beta$? Why?
An accurate surface would be learned better with a lower $\beta$. With lower $\beta$ values, the density is concentrated near the true surface boundary, so radiance must also be accumulated near the true surface region. This forces the network to learn a more accurate representation of the underlying 3D shape.
part_3.gif and part_3_geometry.gif
The scene below was created with 10 unique SDF shapes. The SDF shapes created were:
I hope creating 10 unique SDF shapes is sufficient for this question rather than creating a scene with 20 shapes with multiple copies :)
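As an illustration of how such a scene can be assembled, here is a sketch of composing analytic SDF primitives with a union; the primitives and their placements are hypothetical, not the actual ten shapes used:

```python
import torch

def sdf_sphere(p, center, radius):
    # Signed distance from points p (N, 3) to a sphere
    return (p - center).norm(dim=-1, keepdim=True) - radius

def sdf_box(p, center, half_sides):
    # Exact signed distance to an axis-aligned box
    q = (p - center).abs() - half_sides
    outside = q.clamp(min=0.0).norm(dim=-1, keepdim=True)
    inside = q.max(dim=-1, keepdim=True).values.clamp(max=0.0)
    return outside + inside

def scene_sdf(p):
    # Union of shapes = pointwise minimum of their SDFs
    shapes = [
        sdf_sphere(p, torch.tensor([0.0, 0.0, 0.0]), 0.5),
        sdf_box(p, torch.tensor([1.0, 0.0, 0.0]), torch.tensor([0.3, 0.3, 0.3])),
    ]
    return torch.min(torch.cat(shapes, dim=-1), dim=-1, keepdim=True).values
```

Since the union is just a pointwise minimum, a composite like scene_sdf can be passed directly to sphere_tracing as implicit_fn.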
Because it uses a surface-based SDF representation, VolSDF is able to work with fewer views than NeRF. With NeRF, the results look blurrier and have weird color artifacts.
VolSDF
NeRF (w/ hierarchical sampling)
The NeuS method uses the derivative of the sigmoid function $\Phi_s(x) = (1 + e^{-sx})^{-1}$, i.e. the logistic density $\phi_s(x) = s e^{-sx} / (1 + e^{-sx})^2$ with scale parameter $s$, to convert SDF values to density, which concentrates rendering weight near the SDF zero level set.
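A minimal sketch of this SDF-to-density conversion (the scale value here is an arbitrary assumption):

```python
import torch

def neus_s_density(sdf, s=64.0):
    # Derivative of the sigmoid with scale s: s * sigma(s*x) * (1 - sigma(s*x)),
    # which peaks at the SDF zero level set and sharpens as s grows.
    sig = torch.sigmoid(s * sdf)
    return s * sig * (1.0 - sig)
```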