I visualize part_1.gif and the depth map below:
I use the equation from the slides, i.e., the sphere-tracing update

$$p_{i+1} = p_i + f(p_i)\,\vec{d},$$

where $f$ is the SDF and $\vec{d}$ is the ray direction. I implement it as a for loop with the default max_iters, and stop marching a ray once $f(p)$ falls below a small threshold $\epsilon$.
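A minimal sketch of that loop (the function and argument names here are my own shorthand, not necessarily those of the starter code):

```python
import torch

def sphere_trace(implicit_fn, origins, directions, max_iters=64, eps=1e-5):
    # origins, directions: (N, 3). March each point along its ray by the
    # current SDF value; a ray stops once the SDF falls below eps.
    points = origins.clone()
    hit = torch.zeros(origins.shape[0], dtype=torch.bool, device=origins.device)
    for _ in range(max_iters):
        dist = implicit_fn(points).squeeze(-1)                  # (N,) signed distance
        hit = hit | (dist < eps)                                # converged rays
        step = dist.unsqueeze(-1) * directions                  # p <- p + f(p) * d
        points = points + step * (~hit).unsqueeze(-1).float()   # freeze finished rays
    return points, hit
```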
The part_2_input.gif (left) and part_2.gif (right) are shown below:
My MLP for predicting distance is very similar to the NeRF network used to predict density in Assignment 3, except that the ReLU in the last layer is removed: NeRF uses ReLU to keep the density non-negative, but a signed distance is just a real number. My MLP has 10 linear layers; the input points are first passed through a harmonic embedding function, and the resulting encoding is fed into the 1st and 6th layers (a skip connection) for better performance. The other hyper-parameters, such as the number of hidden neurons (128) and epochs (5000), are left at their defaults.
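A sketch of that architecture, assuming a standard sin/cos harmonic embedding (layer sizes follow the description above; the class and argument names are mine):

```python
import torch
import torch.nn as nn

class HarmonicEmbedding(nn.Module):
    # Standard positional encoding: [sin(2^k x), cos(2^k x)] for k = 0..n_freqs-1.
    def __init__(self, n_freqs=4):
        super().__init__()
        self.register_buffer("freqs", 2.0 ** torch.arange(n_freqs))

    def forward(self, x):                          # x: (..., 3)
        xb = (x[..., None] * self.freqs).flatten(-2)
        return torch.cat([xb.sin(), xb.cos()], dim=-1)

class DistanceMLP(nn.Module):
    def __init__(self, n_layers=10, hidden=128, n_freqs=4, skip_at=5):
        super().__init__()
        self.embed = HarmonicEmbedding(n_freqs)
        in_dim = 3 * n_freqs * 2
        self.skip_at = skip_at                     # 0-indexed: the 6th layer
        layers = []
        for i in range(n_layers):
            d_in = in_dim if i == 0 else hidden
            if i == skip_at:
                d_in = hidden + in_dim             # skip connection re-reads the embedding
            d_out = 1 if i == n_layers - 1 else hidden
            layers.append(nn.Linear(d_in, d_out))
        self.layers = nn.ModuleList(layers)

    def forward(self, x):
        emb = self.embed(x)
        h = emb
        for i, layer in enumerate(self.layers):
            if i == self.skip_at:
                h = torch.cat([h, emb], dim=-1)
            h = layer(h)
            if i < len(self.layers) - 1:
                h = torch.relu(h)                  # no ReLU on the last layer:
        return h                                   # distance is a real number
```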
For the eikonal loss, I use the squared L2 penalty below to force the norm of the gradient, taken along the second (xyz) dimension, to be close to 1.
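A sketch of that loss, computing the SDF gradient with autograd (names are mine):

```python
import torch

def eikonal_loss(points, implicit_fn):
    # points: (N, 3) samples in space. A true SDF has unit-norm gradient
    # everywhere, so penalize (||grad f(x)||_2 - 1)^2.
    points = points.detach().requires_grad_(True)
    dist = implicit_fn(points)                              # (N, 1)
    (grad,) = torch.autograd.grad(
        dist, points, torch.ones_like(dist), create_graph=True
    )
    return ((grad.norm(dim=1) - 1.0) ** 2).mean()           # norm over dim 1 (xyz)
```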
From the density equation in the VolSDF paper,

$$\sigma(x) = \alpha\,\Psi_\beta\bigl(-d_\Omega(x)\bigr), \qquad \Psi_\beta(s) = \begin{cases} \frac{1}{2}\exp\!\left(\frac{s}{\beta}\right) & s \le 0,\\[2pt] 1 - \frac{1}{2}\exp\!\left(-\frac{s}{\beta}\right) & s > 0, \end{cases}$$

we can see that $\alpha$ is a scale factor that determines the maximum density inside the object: the density decreases from $\alpha$ (inside the object) to zero (outside the object). $\beta$ controls how fast the density decreases near the boundary; we can also see it as the smoothness of the density as a function of distance. A large or small $\alpha$ simply means a large or small maximum density. A large $\beta$ makes the density vary smoothly near the object boundary, while a small $\beta$ leads to a sharp density transition there.
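In code, that conversion looks like this (a sketch; the function name is mine):

```python
import torch

def sdf_to_density(signed_distance, alpha=10.0, beta=0.05):
    # VolSDF: sigma(x) = alpha * Psi_beta(-d(x)), where Psi_beta is the CDF
    # of a zero-mean Laplace distribution with scale beta.
    s = -signed_distance
    psi = torch.where(
        s <= 0,
        0.5 * torch.exp(s / beta),
        1.0 - 0.5 * torch.exp(-s / beta),
    )
    return alpha * psi
```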
I ran experiments with $\beta$ set to 0.01, 0.05, 0.1, 0.25, and 0.5; the results are shown below in that order (there is no geometry.gif for $\beta = 0.01$ since training failed):
How does high $\beta$ bias your learned SDF? What about low $\beta$?

A high $\beta$ makes the density transition smooth and the rendering result blurry, since the density varies over a larger range of distances and is less sensitive to the exact SDF value. A low $\beta$ makes the density transition sharp near the boundary and produces a sharper, clearer rendering, since the density varies over a smaller range and is more sensitive to the distance. But when $\beta$ is too small, the SDF is hard to learn with a limited number of views, and training can fail.
Would an SDF be easier to learn with volume rendering and low $\beta$ or high $\beta$? Why?

An SDF is easier to learn with a high $\beta$, since the density transition is smoother and less sensitive to the distance, so learning is more stable. If $\beta$ is low, the density transition is sharper and more sensitive to the distance: a small change in the SDF causes a large change in density, so it is harder to learn and may not even converge at the beginning of training.
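To make this concrete, differentiating the VolSDF density at the surface shows that the sensitivity scales like $1/\beta$:

$$\left.\frac{\partial\sigma}{\partial d}\right|_{d=0} = -\alpha\,\Psi_\beta'(0) = -\frac{\alpha}{2\beta}.$$

With $\alpha = 10$, the slope magnitude is 10 at $\beta = 0.5$ but 100 at $\beta = 0.05$, so the same small SDF error produces a 10x larger density change.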
Would you be more likely to learn an accurate surface with high $\beta$ or low $\beta$? Why?

I would rather learn an accurate surface with a low $\beta$, since it gives a sharp density transition near the surface, which localizes the surface accurately. The trade-off is that it may be harder to train and may need more training views.
I chose the following settings: $\alpha$ is 10, $\beta$ is 0.05, and n_pts_per_ray is 128, with everything else at its default. I chose $\beta = 0.05$ because, for the reasons above, $\beta$ cannot be too small or too large. Large $\alpha$ and n_pts_per_ray also affect learning efficiency. Other settings affect performance as well; I tested them and picked values in a reasonable range, trading off quality against training time.
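For reference, the non-default values as a plain dictionary (the structure is my own shorthand, not the actual course config file):

```python
# Non-default hyper-parameters used for this part (shorthand; the actual
# config format may differ):
volsdf_settings = {
    "alpha": 10.0,          # max density inside the object
    "beta": 0.05,           # sharpness of the density transition
    "n_pts_per_ray": 128,   # samples per ray
}
```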
The result is shown below:
I render three types of shapes (box, sphere, and torus), 25 primitives in total, using the standard SDF equations; a sketch is given below.
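A sketch of the three primitive SDFs in their standard closed forms (composing 25 of them into one scene is a union, i.e., a pointwise minimum):

```python
import torch

def sphere_sdf(p, center, radius):
    return (p - center).norm(dim=-1) - radius

def box_sdf(p, center, half_sides):
    q = (p - center).abs() - half_sides
    return q.clamp(min=0.0).norm(dim=-1) + q.max(dim=-1).values.clamp(max=0.0)

def torus_sdf(p, center, radii):
    # radii = (R, r): ring radius R in the xz-plane, tube radius r.
    q = p - center
    xz = q[..., [0, 2]].norm(dim=-1)
    return torch.stack([xz - radii[0], q[..., 1]], dim=-1).norm(dim=-1) - radii[1]

def union(*sdfs):
    # A scene of many primitives is the pointwise minimum of their SDFs.
    return torch.stack(sdfs, dim=-1).min(dim=-1).values
```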
I use 20 training views here. The surface representation result (left and middle) and the NeRF result (right) are shown below:
We can see that NeRF has better rendering quality than VolSDF. This may be because VolSDF is more easily affected by hyper-parameters such as $\alpha$ and $\beta$.
I use the equation below, whose curve is similar to the one in the VolSDF paper; the other settings are left at their defaults.
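Since the equation itself is not reproduced above, here is one conversion with that property, purely as an assumption: a scaled logistic sigmoid, whose S-shaped curve closely tracks the Laplace-CDF density from VolSDF.

```python
import torch

def sdf_to_density_sigmoid(signed_distance, alpha=10.0, beta=0.05):
    # Hypothetical stand-in for the equation used above: a scaled logistic
    # sigmoid. Like the VolSDF density, it rises from 0 outside the object
    # to alpha inside, with beta controlling the sharpness at the boundary.
    return alpha * torch.sigmoid(-signed_distance / beta)
```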
The result is shown below: