yenchich
===================================
Please refer to README.md for how to run the code and output all results.
I implement `sphere_tracing` by first specifying a threshold that decides whether a point is close enough to the surface to be considered an intersection, and the maximum number of sphere-tracing steps. Then, in each iteration, starting from the ray origins, I advance each ray by the distance from its current point to the closest surface, and record whether each ray has intersected the surface. The implementation is shown below:
thresh = 1e-5
max_steps = self.max_iters
device = origins.device
N_rays, _ = origins.shape

# Start every ray at its origin with travel distance t = 0.
points = origins.clone()
t = torch.zeros(N_rays, 1, device=device)
mask = torch.zeros(N_rays, 1, device=device, dtype=torch.bool)

for _ in range(max_steps):
    # March each ray forward by its distance to the closest surface,
    # clamping the travel distance to the [near, far] range.
    t = torch.clamp(t + implicit_fn(points), self.near, self.far)
    points = origins + t * directions
    # A ray has hit the surface once its SDF value falls below the threshold.
    mask = implicit_fn(points) < thresh

return points, mask
And the result:
The input and the optimized SDF are shown below.
I use an MLP similar to the one in Assignment 3: a 6-layer MLP with a skip connection. Given an input point in space (x, y, z coordinates), we first compute the positional embedding of the location.
We then pass the embedding through the MLP. Finally, a head predicts the signed distance to the closest surface at each location. There is no ReLU activation on this output, as the signed distance can be any real number, positive or negative.
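A rough sketch of this kind of architecture is shown below; the hidden width (128), the exact skip position, and the embedding implementation are assumptions rather than the exact code.

```python
import torch
import torch.nn as nn

class SDFNetwork(nn.Module):
    # Sketch: 6-layer MLP with a skip connection and a distance head.
    def __init__(self, n_freqs=4, hidden=128):
        super().__init__()
        self.n_freqs = n_freqs
        embed_dim = 3 + 3 * 2 * n_freqs  # xyz plus sin/cos for each frequency
        self.stage1 = nn.Sequential(
            nn.Linear(embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Skip connection: the embedding is re-injected halfway through.
        self.stage2 = nn.Sequential(
            nn.Linear(hidden + embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Distance head without activation: the signed distance may be negative.
        self.dist_head = nn.Linear(hidden, 1)

    def embed(self, x):
        # Standard sin/cos positional embedding of the xyz coordinates.
        freqs = 2.0 ** torch.arange(self.n_freqs, device=x.device)
        angles = x[..., None] * freqs                   # (..., 3, n_freqs)
        enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
        return torch.cat([x, enc.flatten(-2)], dim=-1)  # (..., embed_dim)

    def forward(self, points):
        e = self.embed(points)
        h = self.stage1(e)
        h = self.stage2(torch.cat([h, e], dim=-1))
        return self.dist_head(h)
```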
Eikonal loss: the norm of the gradient of the SDF must be 1 everywhere, i.e. the loss penalizes `|| ||grad(f(x))|| - 1 ||`, which is a property that a signed distance function must satisfy.
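A minimal sketch of how this constraint can be turned into a loss with autograd (I use a squared penalty here rather than an absolute value; the batch of `points` is assumed to be sampled elsewhere):

```python
import torch

def eikonal_loss(implicit_fn, points):
    # Enforce ||grad f(x)|| = 1 at the sampled points.
    points = points.clone().requires_grad_(True)
    sdf = implicit_fn(points)
    (grad,) = torch.autograd.grad(
        outputs=sdf,
        inputs=points,
        grad_outputs=torch.ones_like(sdf),
        create_graph=True,  # keep the graph so the loss can be backpropagated
    )
    return ((grad.norm(dim=-1) - 1.0) ** 2).mean()
```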
How does a high `beta` bias your learned SDF? What about a low `beta`?
A higher `beta` will bias the SDF toward a smoother surface, while a lower `beta` will learn an SDF with a sharper surface.
Would the SDF be easier to train with a low `beta` or a high `beta`? Why?
The SDF would be easier to train with a high `beta`. A higher `beta` makes the density decrease more smoothly near the boundary, so regions farther from the surface still have non-negligible density rather than values close to zero.
Therefore, even points far away can contribute to the loss and produce gradients for the model. However, this can also lead to a smoother surface and blurrier results.
Would you be more likely to learn an accurate surface with a high `beta` or a low `beta`? Why?
We are more likely to learn an accurate surface with a low `beta`. As `beta` approaches zero, the density converges to `alpha` near the surface and to 0 far away.
This results in a more accurate surface and sharper renderings.
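For reference, this `alpha`/`beta` behavior comes from the VolSDF-style conversion from signed distance to density, `sigma(x) = alpha * Psi_beta(-d(x))`, where `Psi_beta` is the CDF of a zero-mean Laplace distribution with scale `beta`. A minimal sketch of that conversion (the sign convention, distance positive outside the surface, is an assumption):

```python
import torch

def sdf_to_density(signed_distance, alpha, beta):
    # VolSDF-style density: sigma = alpha * Psi_beta(-d), where Psi_beta is
    # the CDF of a zero-mean Laplace distribution with scale beta.
    s = -signed_distance
    psi = torch.where(
        s <= 0,
        0.5 * torch.exp(s / beta),
        1.0 - 0.5 * torch.exp(-s / beta),
    )
    return alpha * psi
```

As `beta` shrinks, `Psi_beta` approaches a step function, which matches the behavior described above.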
I use a 6-layer MLP, 4 frequencies for the positional embedding, `alpha = 10`, and `beta = 0.005`. I mainly tune `beta`; a smaller `beta` gives me much sharper results and more stable training.
I render the union of a torus and a sphere:
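A minimal sketch of how such a composed SDF can be written, taking the pointwise minimum of the two primitive SDFs (the parameter names and the torus orientation are assumptions, not the exact scene setup):

```python
import torch

def sphere_sdf(points, center, radius):
    # Signed distance to a sphere: ||p - c|| - r.
    return (points - center).norm(dim=-1, keepdim=True) - radius

def torus_sdf(points, center, radii):
    # Signed distance to a torus lying in the xz-plane.
    R, r = radii  # major and minor radius
    p = points - center
    q = torch.stack([p[..., [0, 2]].norm(dim=-1) - R, p[..., 1]], dim=-1)
    return q.norm(dim=-1, keepdim=True) - r

def union_sdf(points, sphere_center, sphere_radius, torus_center, torus_radii):
    # Union of two shapes = pointwise minimum of their signed distances.
    return torch.minimum(
        sphere_sdf(points, sphere_center, sphere_radius),
        torus_sdf(points, torus_center, torus_radii),
    )
```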
I use ~30 views for training the neural surface model. The comparison between the surface representation and the volumetric representation is shown below.
The left image is the surface representation, while the right is a NeRF trained with similar views. NeRF still performs quite well in this case; I think the surface model still requires some fine-tuning.
I try the `naive` solution from the NeuS paper, which converts the SDF to density with the logistic density function: `s * exp(-s * sdf) / (1 + exp(-s * sdf)) ** 2`.
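A minimal sketch of that conversion (the function name is mine; `s` is the sharpness parameter of the logistic density):

```python
import torch

def naive_sdf_to_density(sdf, s):
    # "Naive" NeuS conversion: the logistic density of the SDF,
    # sigma = s * exp(-s * sdf) / (1 + exp(-s * sdf)) ** 2
    e = torch.exp(-s * sdf)
    return s * e / (1.0 + e) ** 2
```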