**Assignment 4: Neural Surfaces**

Student name: Kangle Deng

## Sphere Tracing

I maintain a mask that tracks which rays have already intersected the surface; the loop ends once all rays have intersected or the maximum number of iterations is reached. (A code sketch of this loop is included at the end of this page.)

## Optimizing a Neural SDF

Below are the input point cloud and the optimized SDF mesh. The network uses an MLP structure similar to NeRF's, and for the Eikonal loss I use the L2 loss between the gradient magnitude and 1.

## VolSDF

Alpha is the maximum value of the density, i.e., the density value deep inside the object. Beta controls how quickly the density transitions from 0 to alpha across the object boundary (a higher beta means a slower, smoother transition).

1. A high beta stretches the density transition of the learned SDF over a wide band around the surface, while a low beta compresses it into a thin shell.
2. The SDF is easier to train with a high beta, since the density (and hence the gradient signal) is spread over a wider region, making optimization less sensitive to errors in the SDF.
3. We are more likely to learn an accurate surface with a low beta, since the density then closely approximates a sharp, opaque boundary.

I use the default setup of alpha = 10 and beta = 0.05.

## Neural Surface Extras

### Render a Large Scene with Sphere Tracing

I include 22 objects in this scene.

### Fewer Training Views

I sub-sample 10 views from the training set and train an SDF model and a NeRF model, respectively.

Below is the SDF result.

Below is the NeRF result.

### Alternate SDF to Density Conversions

I implemented the 'naive' solution from the NeuS paper with $s = 100$. Specifically, $\sigma = s \cdot \mathrm{Sigmoid}(sx) \cdot (1 - \mathrm{Sigmoid}(sx))$. Below is the result. The RGB rendering looks good, but the geometry is not as good as in the previous experiments.
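## Code Sketches

Below is a minimal sketch of the masked sphere-tracing loop described in the Sphere Tracing section. It assumes an `implicit_fn` that maps points to signed distances; the function name, thresholds, and tensor shapes are illustrative rather than the exact starter-code interface.

```python
import torch

def sphere_trace(implicit_fn, origins, directions, max_iters=64, eps=1e-5, far=10.0):
    """Masked sphere tracing: march each ray by its current SDF value until convergence.

    origins, directions: (N, 3) tensors; implicit_fn maps (N, 3) points to (N, 1) signed distances.
    Returns the final points along each ray and a boolean hit mask.
    """
    t = torch.zeros(origins.shape[0], 1, device=origins.device)
    mask = torch.zeros(origins.shape[0], dtype=torch.bool, device=origins.device)

    for _ in range(max_iters):
        points = origins + t * directions
        dist = implicit_fn(points)

        # Mark rays whose SDF value has dropped below the threshold as intersected.
        mask = mask | (dist.abs().squeeze(-1) < eps)

        # End the loop early once every ray has either hit the surface or left the scene.
        active = (~mask) & (t.squeeze(-1) < far)
        if not active.any():
            break

        # Advance only the still-active rays by their current SDF value.
        t = t + dist * active.unsqueeze(-1).float()

    return origins + t * directions, mask
```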
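The Eikonal loss mentioned in the Neural SDF section penalizes deviations of the SDF gradient norm from 1. Here is a sketch of one way to compute it with autograd, assuming a differentiable `sdf_model` (the name is illustrative):

```python
import torch

def eikonal_loss(sdf_model, points):
    """L2 penalty pushing the norm of the SDF gradient toward 1 at the sampled points."""
    points = points.clone().requires_grad_(True)
    dist = sdf_model(points)

    # Gradient of the SDF with respect to the input points, computed via autograd.
    (grad,) = torch.autograd.grad(
        outputs=dist,
        inputs=points,
        grad_outputs=torch.ones_like(dist),
        create_graph=True,
    )

    # A true signed distance function satisfies ||grad d(x)|| = 1 everywhere.
    return ((grad.norm(dim=-1) - 1.0) ** 2).mean()
```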
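For reference, here is a sketch of the two SDF-to-density conversions discussed above: the VolSDF Laplace-CDF form parameterized by alpha and beta, and the 'naive' NeuS-style form with scale $s$. The sign convention assumes the SDF is negative inside the object; the starter code's exact formulation may differ.

```python
import torch

def volsdf_density(signed_distance, alpha=10.0, beta=0.05):
    """VolSDF conversion: alpha times the CDF of a zero-mean Laplace(beta) evaluated at -d,
    so the density approaches alpha inside the object and 0 outside; beta sets the width
    of the transition band around the surface."""
    s = -signed_distance
    # Numerically stable Laplace CDF: 0.5*exp(s/beta) for s <= 0, 1 - 0.5*exp(-s/beta) for s > 0.
    half_exp = 0.5 * torch.exp(-s.abs() / beta)
    laplace_cdf = torch.where(s <= 0, half_exp, 1.0 - half_exp)
    return alpha * laplace_cdf


def neus_naive_density(signed_distance, s=100.0):
    """'Naive' conversion used in the last section:
    sigma = s * Sigmoid(s*x) * (1 - Sigmoid(s*x)), a bell curve peaked at the zero level set."""
    sig = torch.sigmoid(s * signed_distance)
    return s * sig * (1.0 - sig)
```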