ASSIGNMENT-4

Submitted by Ambareesh Revanur (arevanur)

Number of late days used is 0.


Q1

*(figure: sphere tracing render)*

I implemented sphere tracing as discussed in the lecture. Specifically, I marched along the ray direction with a step size equal to the implicit function value at the current point, iterating until convergence or until `max_iters` was reached. Rays that did not converge (implicit function value still greater than 0) were used to decide the mask. I am not pasting code here, as requested by the TAs.
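For reference, a minimal sketch of the idea (the `implicit_fn` signature, batch shapes, and `eps` threshold here are my assumptions, not the starter-code interface):

```python
import torch

def sphere_trace(implicit_fn, origins, directions, max_iters=64, eps=1e-5):
    # March each ray forward by the implicit function (SDF) value at the current point.
    t = torch.zeros(origins.shape[0], 1, device=origins.device)
    points = origins.clone()
    for _ in range(max_iters):
        dist = implicit_fn(points)         # (N, 1) signed distance at current points
        t = t + dist                       # step size = current SDF value
        points = origins + t * directions
    # Rays whose implicit value is still above the threshold did not converge
    mask = implicit_fn(points) < eps
    return points, mask
```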


Q2

*(figures: neural SDF results)*

Brief descriptions of your MLP and eikonal loss

I created an MLP that outputs the SDF value. I used a 6-layer MLP architecture with ReLU activations, and made sure not to include a final sigmoid layer, since the distances do not have to lie in [0, 1]. The eikonal loss forces the network's gradient to have unit norm; I implemented it as an L1 loss on the deviation of the gradient norm from 1. I am not pasting code here, as requested by the TAs.
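A minimal sketch of the eikonal term (`model` here stands for any MLP mapping (N, 3) points to (N, 1) SDF values; the names are illustrative):

```python
import torch

def eikonal_loss(model, points):
    # Sample points need gradients so we can differentiate the SDF w.r.t. them.
    points = points.clone().requires_grad_(True)
    sdf = model(points)
    grad, = torch.autograd.grad(
        outputs=sdf,
        inputs=points,
        grad_outputs=torch.ones_like(sdf),
        create_graph=True,  # keep the graph so the loss itself is differentiable
    )
    # L1 penalty on the deviation of the gradient norm from 1
    return (grad.norm(dim=-1) - 1.0).abs().mean()
```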


Q3

Results with the original hyperparameters:

*(figures: renders with the original hyperparameters)*

How does high beta bias your learned SDF? What about low beta?

A higher beta increases the apparent density, so the occupancy looks higher (the geometry appears inflated). A lower beta decreases the apparent density, so the occupancy drops in magnitude and the renders can show gaps (discontinuities) in the surface.
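For reference, the conversion that beta controls (a sketch assuming VolSDF's sigma = alpha * Psi_beta(-d) parameterization, where Psi_beta is the Laplace CDF; the `alpha` and `beta` argument names are illustrative):

```python
import torch

def sdf_to_density(signed_distance, alpha, beta):
    # Laplace CDF (zero mean, scale beta) evaluated at the negated SDF
    s = -signed_distance
    psi = torch.where(
        s <= 0,
        0.5 * torch.exp(s / beta),
        1.0 - 0.5 * torch.exp(-s / beta),
    )
    return alpha * psi
```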

Would an SDF be easier to train with volume rendering and low beta or high beta? Why?

A higher beta is easier to train. A low beta has two issues: (1) the exponential in the density blows up to NaN (since the denominator beta is very small), so the model fails to learn; (2) the model is likely to have vanishing-gradient issues, since d(psi)/ds is very small for low beta.
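Concretely, the slope of the Laplace CDF is its density,

```latex
\frac{d\Psi_\beta}{ds} = \frac{1}{2\beta}\, e^{-|s|/\beta},
```

which for small beta is sharply peaked at s = 0 and essentially zero everywhere else, so samples away from the surface receive almost no gradient.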

Would you be more likely to learn an accurate surface with high beta or low beta? Why?

A lower beta is better poised to represent surfaces faithfully, since the density values it produces are close to 0 or 1 (a near step function at the surface), which is desirable for localizing the surface.

beta = 1

*(figures: renders with beta = 1)*

The model trained as expected. As expected for such a large beta, the geometry is not sharp.

beta = 0.0005. This hyperparameter setting did not train.

beta = 0.005. This hyperparameter setting did not train.

beta = 0.01. This hyperparameter setting did not train.

beta = 0.03. This is the best hyperparameter setting. It worked well because it strikes the right balance between trainability and faithfulness of the volumetric representation.

*(figures: renders with beta = 0.03)*

Q4.1

Please run `python -m a4.main --config-name=composite`; the result will be generated in `part_1.gif`.

*(figure: part_1.gif)*
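For context, a composite scene like this is usually defined as a union of primitive SDFs via a pointwise min; a hypothetical sketch (not necessarily how the `composite` config builds the scene):

```python
import torch

def union_sdf(points, primitive_fns):
    # Distance to the nearest primitive = min over the per-primitive SDFs
    dists = torch.stack([f(points) for f in primitive_fns], dim=0)  # (P, N, 1)
    return dists.min(dim=0).values
```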

Q4.2

10 views (left is VolSDF and right is NeRF)

*(figures: 10-view results)*

20 views (left is VolSDF and right is NeRF)

*(figures: 20-view results)*

Full views (left is VolSDF and right is NeRF)

*(figures: full-view results)*

Q4.3

I used a sigmoid function instead of the cumulative distribution function of the Laplace distribution, since the sigmoid has similar desirable properties to the original function: sigmoid(0) = 0.5, sigmoid(+inf) = 1, and sigmoid(-inf) = 0. A sketch of the conversion follows the figures below.

*(figures: renders with the sigmoid-based density)*
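A minimal sketch of this conversion (the `alpha` and `scale` parameters are illustrative; the exact scaling in my implementation may differ):

```python
import torch

def sdf_to_density_sigmoid(signed_distance, alpha, scale):
    # sigmoid(0) = 0.5 at the surface, -> alpha inside (sdf < 0), -> 0 outside
    return alpha * torch.sigmoid(-signed_distance / scale)
```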