Q1

In this part, I implement the same logic as the pseudocode from the lecture:

```python
while f(p) > epsilon:      # stop once the SDF value is within epsilon of the surface
    t = t + f(p)           # the SDF value is a safe step size along the ray
    p = x0 + t * d         # advance along the ray with origin x0 and unit direction d
```

Torus

There are some differences in my implementation. Instead of a while loop, I run a for loop for a fixed maximum number of iterations. In each iteration the algorithm advances the points along the rays with p = x0 + t * d, where t accumulates the signed distance returned by the SDF. I also generate a mask from a distance threshold to identify the background points (rays that never reach the surface). A minimal sketch of this variant is shown below.
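The sketch below assumes a batched PyTorch SDF; the names `implicit_fn`, `origins`, `directions`, `max_iters`, and `eps` are illustrative placeholders, not necessarily those used in the starter code.

```python
import torch

def sphere_trace(implicit_fn, origins, directions, max_iters=64, eps=1e-5):
    """Fixed-iteration sphere tracing (sketch under the assumptions above).

    origins, directions: (N, 3) ray origins and unit ray directions.
    Returns the final points (N, 3) and a boolean foreground mask (N,).
    """
    t = torch.zeros(origins.shape[0], 1, device=origins.device)
    points = origins.clone()
    for _ in range(max_iters):
        dist = implicit_fn(points)          # signed distances at current points, (N, 1)
        t = t + dist                        # step forward by the signed distance
        points = origins + t * directions   # p = x0 + t * d
    # points whose final |SDF| is below the threshold are surface hits;
    # everything else is treated as background
    mask = implicit_fn(points).abs() < eps
    return points, mask.squeeze(-1)
```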

Q2

I used the default setup of the assignment, applying an L1 loss as the point loss (the SDF should evaluate to zero at the sampled surface points), together with the eikonal loss, which encourages the norm of the SDF gradient to be 1. A sketch of both losses is shown below.
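The helper names and exact reductions in this sketch are my assumptions, not necessarily the starter-code API:

```python
import torch

def point_loss(sdf_on_surface_points):
    # L1 point loss: the SDF should be zero on the sampled surface points
    return sdf_on_surface_points.abs().mean()

def eikonal_loss(gradients):
    # Eikonal loss: the SDF gradient should have unit norm everywhere
    return ((gradients.norm(dim=-1) - 1.0) ** 2).mean()

def sdf_gradient(implicit_fn, points):
    # Gradient of the SDF w.r.t. the query points, computed via autograd
    points = points.detach().requires_grad_(True)
    sdf = implicit_fn(points)
    (grad,) = torch.autograd.grad(sdf.sum(), points, create_graph=True)
    return grad
```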

Torus

Q3

The following is the VolSDF result with the default setup.

Bulldozer geometry Bulldozer color

To analyze how beta biases the result, I make the following observations (a small sketch of the SDF-to-density conversion is given after this list):

  1. Ideally, a lower beta leads to a "sharper" density function, which helps represent fine details in the image and geometry. A higher beta leads to a "smoother" density function: the rendered image is blurrier but smoother than with a low beta.

  2. A higher beta is much easier to train with: the density, and therefore the gradients, are smoother, so gradient descent converges more easily. Conversely, a low beta is more difficult to train.

  3. Ideally, we want a low beta for the reasons given above: a low beta helps capture the fine details of the image and the surface.
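To make the role of beta concrete, here is a sketch of the VolSDF-style SDF-to-density conversion (the Laplace CDF form); the `alpha` and `beta` defaults are placeholders, and the starter code may implement this slightly differently.

```python
import torch

def sdf_to_density(signed_distance, alpha=10.0, beta=0.05):
    """VolSDF-style density: alpha * Psi_beta(-sdf), where Psi_beta is the CDF
    of a zero-mean Laplace distribution with scale beta.

    A small beta makes the transition from 0 to alpha near the surface very
    sharp (crisp details, harder optimization); a large beta spreads the
    density out (smoother, blurrier, easier to train).
    """
    s = -signed_distance  # inside the surface -> positive -> high density
    psi = torch.where(
        s <= 0,
        0.5 * torch.exp(s / beta),
        1.0 - 0.5 * torch.exp(-s / beta),
    )
    return alpha * psi
```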

This is the graph where the y-axis is the image loss and the x-axis is the training epochs.

Q4.2

| view points | view points = 100 | view points = 50 | view points = 20 |
| ----------- | ----------------- | ---------------- | ---------------- |
| VolSDF      | (image)           | (image)          | (image)          |
| VolSDF      | (image)           | (image)          | (image)          |
| NeRF        | (image)           | (image)          | (image)          |

Q4.3

  1. The same density function as in Q3.

Bulldozer geometry Bulldozer color

  2. Apply only one branch of the density function, $\alpha\,(1 - 0.5\exp(s/\beta))$, for all signed distances $s$.

Bulldozer geometry Bulldozer color

  3. Use the absolute value of the SDF as the density function.

Bulldozer geometry Bulldozer color
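For reference, here is a sketch of the three density variants listed above. It assumes `sdf` holds signed distances and that $s$ in the formula above denotes the signed distance; if the sign convention in the actual code differs, the expressions flip accordingly.

```python
import torch

def density_volsdf(sdf, alpha=10.0, beta=0.05):
    # 1. The same Laplace-CDF density as in Q3
    s = -sdf
    psi = torch.where(s <= 0, 0.5 * torch.exp(s / beta),
                      1.0 - 0.5 * torch.exp(-s / beta))
    return alpha * psi

def density_single_branch(sdf, alpha=10.0, beta=0.05):
    # 2. Only one branch of the Laplace CDF, applied to every signed distance
    #    (note this can go negative far outside the surface)
    return alpha * (1.0 - 0.5 * torch.exp(sdf / beta))

def density_abs(sdf, scale=1.0):
    # 3. The absolute value of the SDF used directly as the density
    return scale * sdf.abs()
```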