16889_hw4

Problem 1

For the implementation, I followed the method described in the lecture slides.

I initialized the points at the ray origins and computed the distance to the surface with the implicit function.

Then, I iteratively marched the points along the ray directions until the distances fell under a threshold or the maximum number of steps was reached.

When the loop terminated, I calculated the mask by checking which distances were under the threshold.
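A minimal sketch of this loop (assuming an `implicit_fn` that maps points to signed distances, and hypothetical tensor shapes with N rays):

```python
import torch

def sphere_trace(implicit_fn, origins, directions, max_iters=64, eps=1e-5):
    # origins, directions: (N, 3); implicit_fn(points) -> (N, 1) signed distances
    points = origins.clone()            # start every ray at its origin
    dist = implicit_fn(points)          # initial distance to the surface
    for _ in range(max_iters):
        # march each point forward along its ray by the current distance estimate
        points = points + dist * directions
        dist = implicit_fn(points)
        if (dist.abs() < eps).all():    # every ray has converged
            break
    mask = dist.abs() < eps             # hit mask: rays whose distance is under the threshold
    return points, mask
```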


Problem 2

Input:


Predictions:


Description:

I largely reused the MLP architecture that predicted density in assignment 3:

generate the positional encoding and feed it through several FC layers,

then concatenate the output with the positional encoding and feed it through several more FC layers.

Then, I turned the density network into a distance network by removing the ReLU at the end, so the output can go negative inside the surface.
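A minimal sketch of that architecture (the positional encoding is assumed to be computed outside the module; the layer sizes here are hypothetical):

```python
import torch
import torch.nn as nn

class SDFNetwork(nn.Module):
    def __init__(self, embed_dim=39, hidden=128):
        super().__init__()
        # first stack of FC layers applied to the positional encoding
        self.block1 = nn.Sequential(
            nn.Linear(embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # second stack takes [features, positional encoding] (skip connection);
        # the final layer has no ReLU so the output can be a signed distance
        self.block2 = nn.Sequential(
            nn.Linear(hidden + embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x_embed):
        # x_embed: (N, embed_dim) positional encoding of 3D points
        feat = self.block1(x_embed)
        return self.block2(torch.cat([feat, x_embed], dim=-1))
```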

I computed the eikonal loss following the equation in the lecture slides:

compute the norm of the gradient, subtract one, take the absolute value, and finally take the mean.
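A minimal sketch of that loss, assuming `gradients` holds the SDF gradients ∇f(x) at sampled points with shape (N, 3):

```python
import torch

def eikonal_loss(gradients):
    grad_norm = gradients.norm(dim=-1)       # ||∇f(x)|| per point
    return (grad_norm - 1.0).abs().mean()    # |‖∇f‖ − 1|, averaged over points
```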

Problem 3


α is the constant density inside the object. β controls how quickly the density falls off around the object's boundary.

An SDF is easier to train with a high β, because a higher β smooths the density transition around the object's boundary, leaving fewer fine details to fit.

It is more likely to learn an accurate surface with a low β, since a low β yields a sharper, more detailed boundary.

Settings: I used the default settings, and I believe both α and β are in a reasonable range.
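For reference, a minimal sketch of the SDF-to-density conversion these parameters enter (the Laplace-CDF form from VolSDF), assuming the signed distance is positive outside the surface:

```python
import torch

def sdf_to_density(signed_distance, alpha, beta):
    # VolSDF-style conversion: sigma(x) = alpha * Psi_beta(-d(x)),
    # where Psi_beta is the CDF of a zero-mean Laplace distribution with scale beta
    s = -signed_distance
    psi = torch.where(
        s <= 0,                                   # outside the surface (d >= 0)
        0.5 * torch.exp(s / beta),                # density decays away from the boundary
        1.0 - 0.5 * torch.exp(-s / beta),         # inside: density saturates toward alpha
    )
    return alpha * psi
```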

Problem 4.2

SDF with 20 views:


Training is significantly faster, and the result is still very reasonable.

NeRF with 20 views:


Comparison: For NeRF with 20 views, even though we can still get results similar to the SDF with 20 views,

the training process is not as stable (especially with fewer views such as 10, 15, or 20). Sometimes the results are black images.

Problem 4.3

I implemented the SDF-to-density conversion from the NeuS paper. The parameter s is set to 100.
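Assuming this refers to the logistic ("naive") SDF-to-density conversion described in the NeuS paper, a minimal sketch:

```python
import torch

def neus_sdf_to_density(signed_distance, s=100.0):
    # Logistic density: phi_s(d) = s * e^{-s d} / (1 + e^{-s d})^2,
    # written as s * sigmoid(s d) * sigmoid(-s d) for numerical stability.
    # It peaks at the surface (d = 0) and decays on both sides; larger s gives a sharper peak.
    sd = s * signed_distance
    return s * torch.sigmoid(sd) * torch.sigmoid(-sd)
```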

1.1 1.1