16-889 Assignment 4
1. Sphere Tracing
while count < count_max:
    dist = f(p)                      # SDF value at the current points
    t = t + dist                     # march each ray forward by the SDF value
    p = origin + t * directions
    count += 1
mask = f(p) < threshold              # rays that converged onto the surface

Pic: Q1.gif
Each iteration steps the current point along the ray by the SDF value f(p). Because the signed distance at a point is at most the distance to the closest surface along any direction (equal only when the ray is normal to the surface), stepping by f(p) can never cross the surface.
For the mask, the final point-to-surface distance is compared against a small threshold: rays whose distance falls below it are marked as hits.
2. Optimizing a Neural SDF
Distance
For the MLP, the network consists of a main_structure and a distance_structure. The main structure has 4 linear layers, taking embedding_dim_xyz from HarmonicEmbedding as the input size, with 128 hidden neurons and a ReLU activation after each layer.
The distance structure has 2 layers that map the 128-dimensional feature down to 1 value; there is no ReLU at the final layer, so the SDF can output negative distances for points inside the surface.
The forward function applies main_structure first, then the distance head.
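A minimal sketch of the architecture described above (layer sizes as stated; the exact module and argument names are assumptions based on the description, not the course starter code):

```python
import torch
import torch.nn as nn

class NeuralSDF(nn.Module):
    # 4-layer trunk with ReLU after each layer, then a 2-layer distance
    # head with no final activation so the SDF output can be negative.
    def __init__(self, embedding_dim_xyz, hidden=128):
        super().__init__()
        self.main_structure = nn.Sequential(
            nn.Linear(embedding_dim_xyz, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.distance_structure = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # no ReLU: SDF distances can be negative
        )

    def forward(self, x_embedded):
        return self.distance_structure(self.main_structure(x_embedded))
```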
Eikonal loss:
It is an L2 loss computed as the mean of (||∇f|| − 1)^2. It enforces that the gradient norm of the SDF stays close to 1 and encourages the network to produce a valid distance field instead of arbitrary values.
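The eikonal loss above can be sketched as (the function name and input shape are assumptions):

```python
import torch

def eikonal_loss(gradients):
    # gradients: (N, 3) spatial gradients of the SDF at sampled points.
    # Penalize deviation of the gradient norm from 1 so the network
    # learns a valid distance field rather than arbitrary values.
    return torch.mean((gradients.norm(dim=-1) - 1.0) ** 2)
```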
Pic: Input pointcloud and rendered result
3. VolSDF
For color prediction, the structure is similar to the get_distance function: a 3-layer MLP with 128 neurons per hidden layer and an output size of 3 (RGB) at the end.
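A sketch of such a color head, assuming it consumes the 128-d trunk feature; the final sigmoid (to keep RGB values in [0, 1]) is a common choice but an assumption here, not stated in the write-up:

```python
import torch
import torch.nn as nn

# Hypothetical 3-layer color head mapping the 128-d feature to RGB.
# The trailing sigmoid is assumed, to constrain colors to [0, 1].
color_structure = nn.Sequential(
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 3), nn.Sigmoid(),
)
```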
1: explain the parameter of the alpha and beta
Alpha controls the overall scale of the function and represents the magnitude of the density inside the object. Beta controls the smoothness of the transition: a small beta makes the density change sharply when the distance changes only slightly near the surface.
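For reference, the VolSDF paper converts signed distance to density through a scaled Laplace CDF, which matches the roles of alpha and beta described above. A sketch (function name and sign convention of positive-outside distances are assumptions):

```python
import torch

def sdf_to_density(signed_distance, alpha=10.0, beta=0.03):
    # VolSDF-style density: alpha scales the density magnitude,
    # beta controls how sharply density rises across the surface.
    s = -signed_distance  # positive inside the object
    return alpha * torch.where(
        s <= 0,
        0.5 * torch.exp(s / beta),
        1.0 - 0.5 * torch.exp(-s / beta),
    )
```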
2: how does high beta bias your learned SDF? what about low beta
A high beta makes the density less sensitive to changes in distance near the surface, which blurs the rendering. A low beta makes the density more sensitive as points approach the surface, which gives a more accurate rendering of the surface.
3: would an SDF be easier to train with volume rendering and low or high beta? why?
It depends on the requirement. A high beta spreads the rendered density away from the actual surface and blurs it, but it makes the network easier to train because deviations near the surface are penalized less sharply. A low beta renders the surface accurately, but makes the network harder to train.
4: would you likely to learn an accurate surface with high beta or low beta? why?
A lower beta means the density becomes more sensitive as a point approaches the actual surface, so the rendered surface has lower variance; the surface representation is therefore more accurate compared to a high beta.
For the rendering, the parameters are set to alpha = 10 and beta = 0.03. The network is roughly the same as in Q2.
For the visualization:
4.2 Fewer training views:
By setting train_idx to a range of 20, both VolSDF and NeRF receive fewer views for training. In comparison, NeRF seems to have slightly better quality in the details; this could be caused by the VolSDF density parameters alpha and beta.
Pic: the left two are results from VolSDF and the right is NeRF
Pic:VolSDF on the left NeRF on the Right
Comparing the two, both are very blurry when zoomed in after the same number of training epochs, but NeRF seems to have a clearer representation of the edges.
4.3 Naive Density function
For the naive density function, the function is density = s * e^(-s*x) / (1 + e^(-s*x))^2 with s = 90; the result is shown below.
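The naive density above (the logistic density of the SDF value, scaled by s) can be sketched as (function name is an assumption):

```python
import torch

def naive_density(signed_distance, s=90.0):
    # Logistic-distribution density of the SDF value:
    # s * e^(-s*x) / (1 + e^(-s*x))^2, peaking at the surface (x = 0).
    x = signed_distance
    return s * torch.exp(-s * x) / (1.0 + torch.exp(-s * x)) ** 2
```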
Pic: with s =90
late day: 1