In this part, I implement the same logic as the pseudocode from the lecture:

```
while f(p) > epsilon:
    t = t + f(p)
    p = x0 + t * d
```
There are a few differences in my implementation. Instead of a while loop, I use a maximum number of iterations: the algorithm runs a fixed-length for loop, repeatedly updating p = x0 + t * d, where t is the accumulated signed distance. Finally, I generate a mask by thresholding the remaining SDF value to identify background points (rays that never converged onto the surface).
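The loop described above can be sketched as follows. This is a minimal NumPy version, not the actual assignment code (which operates on PyTorch tensors); the function name and argument layout are my own.

```python
import numpy as np

def sphere_trace(sdf, origins, directions, n_iters=64, eps=1e-5):
    """Sphere tracing with a fixed iteration budget instead of a while loop.

    sdf:        callable mapping (N, 3) points to (N,) signed distances
    origins:    (N, 3) ray origins x0
    directions: (N, 3) unit ray directions d
    Returns the final points p = x0 + t * d and a foreground mask.
    """
    t = np.zeros(origins.shape[0])
    points = origins.copy()
    for _ in range(n_iters):
        dist = sdf(points)                           # f(p)
        t = t + dist                                 # t = t + f(p)
        points = origins + t[:, None] * directions   # p = x0 + t * d
    # Threshold the remaining distance: converged rays hit the surface,
    # the rest are background.
    mask = sdf(points) < eps
    return points, mask
```

For a ray starting at (0, 0, -3) aimed at a unit sphere, the trace converges to the surface point z = -1 in two steps, while a ray that misses the sphere accumulates distance forever and is masked out.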
I used the default setup of our assignment: an L1 loss on the point loss (pushing the predicted SDF toward zero at surface points), plus the eikonal loss, which enforces the norm of the SDF gradient to be 1.
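The two losses can be sketched as below. This is an illustrative NumPy version with names of my own choosing; in the real training code the gradients come from PyTorch autograd rather than being passed in directly.

```python
import numpy as np

def point_loss(sdf_values):
    """L1 loss: surface points should have SDF value zero."""
    return np.mean(np.abs(sdf_values))

def eikonal_loss(gradients):
    """Eikonal regularizer: encourage |grad f| = 1 at sampled points.

    gradients: (N, 3) SDF gradients at randomly sampled points.
    """
    grad_norm = np.linalg.norm(gradients, axis=-1)
    return np.mean((grad_norm - 1.0) ** 2)
```

Gradients of unit norm give zero eikonal loss; any deviation from unit norm is penalized quadratically.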
Below is the VolSDF result with the default setup.
To analyze how beta biases the result, I show results for three values of beta:
| beta = 0.1 | beta = 0.05 | beta = 0.02 |
| :---: | :---: | :---: |
| ![]() | ![]() | ![]() |
Ideally, a lower beta leads to a "sharper" density function, which helps capture fine details in the image and geometry. A higher beta leads to a "smoother" density function: the image is blurrier, but smoother, than with a low beta.

A higher beta is much easier to train, since the gradients are smoother and gradient descent converges more readily. A low beta, by contrast, is more difficult to train.

Ideally, we want a low beta for the reason given above: a low beta helps depict the details of the image.
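The role of beta is easiest to see in the SDF-to-density conversion VolSDF uses, where the density is an alpha-scaled Laplace CDF of the negated SDF. Below is a NumPy sketch; the alpha and beta defaults are illustrative, not the assignment's values.

```python
import numpy as np

def sdf_to_density(sdf, alpha=10.0, beta=0.05):
    """VolSDF density: sigma(x) = alpha * Psi_beta(-sdf(x)),
    where Psi_beta is the CDF of a zero-mean Laplace with scale beta.

    Smaller beta -> sharper transition from free space (density ~ 0)
    to the interior (density ~ alpha), hence sharper surfaces but
    harder optimization; larger beta smooths the transition.
    """
    s = -np.asarray(sdf)
    psi = np.where(
        s <= 0,
        0.5 * np.exp(s / beta),          # outside the surface
        1.0 - 0.5 * np.exp(-s / beta),   # inside the surface
    )
    return alpha * psi
```

On the surface (sdf = 0) the density is alpha / 2; far outside it decays to 0, and far inside it saturates at alpha, with beta controlling how quickly.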
The following graph plots the image loss (y-axis) against training epochs (x-axis).
| method | views = 100 | views = 50 | views = 20 |
| --- | --- | --- | --- |
| VolSDF | ![]() | ![]() | ![]() |
| VolSDF | ![]() | ![]() | ![]() |
| NeRF | ![]() | ![]() | ![]() |