Name: Sri Nitchith Akula
Andrew ID: srinitca
Run Command
python -m a4.main --config-name=torus
Implementation :
Starting from the ray origins, we march each point along its ray direction by the distance to the surface returned by implicit_fn. We repeat this update until the number of iterations reaches max_iters. A mask is then generated based on the final distance of each point to the surface: I kept a threshold of 1e-3, and a point is considered to be on the surface if its final distance is below this threshold.
We can exit the loop early by adding a check for whether all points are within the 1e-3 threshold, or whether all points have marched past the far distance.
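A minimal sketch of this sphere-tracing loop (the tensor shapes and the implicit_fn interface here are assumptions, not the exact starter-code API):

```python
import torch

def sphere_trace(implicit_fn, origins, directions, max_iters=64, eps=1e-3, far=10.0):
    """Sphere tracing: march each ray forward by the SDF value until convergence.

    origins, directions: (N, 3) tensors; implicit_fn(points) -> (N, 1) signed distances.
    """
    points = origins.clone()
    t = torch.zeros_like(origins[:, :1])              # distance travelled along each ray
    for _ in range(max_iters):
        dist = implicit_fn(points)                     # signed distance to the surface
        points = points + dist * directions            # step along the ray by that distance
        t = t + dist
        # Optional early exit: every ray has either hit the surface or left the scene.
        if bool(((dist.abs() < eps) | (t > far)).all()):
            break
    mask = implicit_fn(points).abs() < eps             # on-surface if final distance < threshold
    return points, mask
```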
Run Command
python -m a4.main --config-name=points
Point Cloud | SDF Geometry |
---|---|
![]() | ![]() |
MLP Description :
```yaml
implicit_function:
  type: neural_surface
  n_harmonic_functions_xyz: 4
  n_layers_distance: 8
  n_hidden_neurons_distance: 256
  append_distance: [4]
  n_layers_color: 2
  n_hidden_neurons_color: 256
  append_color: []
```
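Roughly, this config corresponds to an MLP like the sketch below (class and function names are my own placeholders, not the starter-code API; the skip connection re-appends the harmonic embedding at layer 4, following append_distance: [4]):

```python
import torch
import torch.nn as nn

def harmonic_embedding(x, n_harmonics=4):
    # Positional encoding: concatenate xyz with sin/cos at powers-of-two frequencies.
    freqs = 2.0 ** torch.arange(n_harmonics, dtype=torch.float32)
    ang = x[..., None] * freqs                          # (..., 3, n_harmonics)
    emb = torch.cat([ang.sin(), ang.cos()], dim=-1)     # (..., 3, 2 * n_harmonics)
    return torch.cat([x, emb.flatten(-2)], dim=-1)      # (..., 3 + 3*2*n_harmonics)

class NeuralSurface(nn.Module):
    def __init__(self, n_harmonics=4, hidden=256, n_layers=8, skip=(4,)):
        super().__init__()
        in_dim = 3 + 3 * 2 * n_harmonics                # 27 for 4 harmonics
        self.skip = set(skip)
        dims = [in_dim] + [hidden] * n_layers
        self.dist_layers = nn.ModuleList(
            nn.Linear(dims[i] + (in_dim if i in self.skip else 0), dims[i + 1])
            for i in range(n_layers)
        )
        self.dist_out = nn.Linear(hidden, 1)            # signed-distance head
        self.color_head = nn.Sequential(                # 2-layer color head + sigmoid
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, points):
        x = h = harmonic_embedding(points)
        for i, layer in enumerate(self.dist_layers):
            if i in self.skip:
                x = torch.cat([x, h], dim=-1)           # re-append embedding at layer 4
            x = torch.relu(layer(x))
        return self.dist_out(x), self.color_head(x)     # (signed distance, RGB)
```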
Eikonal Loss :
A valid SDF has a gradient of unit norm everywhere, i.e. ||gradient|| = 1. The eikonal loss tries to enforce this by penalizing | ||gradient|| - 1 |; the loss is minimal when this difference is as small as possible.
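A small sketch of how such an eikonal penalty can be computed with autograd (the sampled points and the distance network are assumed to come from elsewhere in the pipeline):

```python
import torch

def eikonal_loss(implicit_fn, points):
    """Penalize deviation of the SDF gradient norm from 1 at the sampled points."""
    points = points.detach().requires_grad_(True)
    sdf = implicit_fn(points)                          # (N, 1) signed distances
    grad = torch.autograd.grad(
        outputs=sdf, inputs=points,
        grad_outputs=torch.ones_like(sdf), create_graph=True,
    )[0]                                               # (N, 3) spatial gradient
    return ((grad.norm(dim=-1) - 1.0) ** 2).mean()     # mean squared deviation from unit norm
```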
Set alpha and beta in volsdf.cfg and run the command below.
Run Command
python -m a4.main --config-name=volsdf
alpha | beta | Geometry Output | Scene Output |
---|---|---|---|
10 | 0.001 | ![]() | ![]() |
10 | 0.05 | ![]() | ![]() |
10 | 0.5 | ![]() | ![]() |
10 | 1.0 | ![]() | ![]() |
alpha can be thought of as the constant density inside the object, while beta controls how quickly the density decays (how much it is smoothed) as a point crosses the object boundary.
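For reference, a sketch of the standard VolSDF SDF-to-density conversion that these two parameters control: the density is alpha times the CDF of a zero-mean Laplace distribution with scale beta, evaluated at the negated signed distance (I'm assuming this is the form used here; the function name is mine):

```python
import torch

def sdf_to_density(signed_distance, alpha, beta):
    """VolSDF-style density: alpha * Laplace_CDF(-sdf; scale=beta).

    Deep inside the object (sdf << 0) the density approaches alpha;
    outside it decays exponentially with rate 1/beta.
    """
    s = -signed_distance
    cdf = torch.where(
        s <= 0,
        0.5 * torch.exp(s / beta),
        1.0 - 0.5 * torch.exp(-s / beta),
    )
    return alpha * cdf
```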
**How does high beta bias your learned SDF? What about low beta?**

Low beta enforces the network to learn sharp surfaces, while high beta makes the learned surface smoother.

**Would an SDF be easier to train with volume rendering and low beta or high beta? Why?**

High beta should be easier to train. If beta is high, then more points in the scene can have non-zero density and contribute to the color of a pixel, so the photometric loss of the rendered image will be low. In the end you will be able to render back the color images, but the learned surface will be a smoothed version.

**Would you be more likely to learn an accurate surface with high beta or low beta? Why?**

Low beta is more likely to learn an accurate surface. Low beta ensures that the density at a point is high only when the point is close to the surface. To render the image, the network has to learn the SDF accurately; otherwise many points will have low density and the rendered image will be empty.

Architecture: We are using the same architecture defined in the previous question. After the 8 distance layers, we have 2 more layers + sigmoid to get the color outputs. Low values of beta perform well compared to high values, as seen in the figures above.
Run Command
python -m a4.main --config-name=large_scene
The above scene is inspired by the Tower of Hanoi and the Rubik's Cube.
Tower of Hanoi :
I added a new CapsuleSDF class to render the cylindrical poles, using the implementation from this website. I created multiple TorusSDF and CapsuleSDF objects and positioned them appropriately to build the Tower of Hanoi. I also added some cubes just for fun.
Rubik's Cube :
First, I added a black cube to the scene. I then created a new CubeFaceSDF class that creates 9 small cubes with appropriate offsets to form a single face of the Rubik's cube. Finally, I created 5 instances of CubeFaceSDF to render the five faces of the Rubik's cube.
Total Objects : 11 (Torus) + 3 (Capsules) + 4 (Cubes) + 9 * 5 (Rubik's cube faces) = 63 objects
Coloring : Each object instance is associated with a color from the given color palette. For every queried point, we assign the color of its closest object.
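A minimal sketch of this closest-object compositing (the object list and color tensor shapes are assumptions):

```python
import torch

def composite_scene(objects, colors, points):
    """Scene SDF is the min over object SDFs; each point takes its nearest object's color.

    objects: list of K SDF modules, each mapping (N, 3) points -> (N, 1) distances.
    colors:  (K, 3) palette, one RGB color per object.
    """
    dists = torch.cat([obj(points) for obj in objects], dim=-1)  # (N, K)
    min_dist, idx = dists.min(dim=-1, keepdim=True)              # union of all objects
    point_colors = colors[idx.squeeze(-1)]                       # (N, 3) color of closest object
    return min_dist, point_colors
```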
Num Views | VolSDF | NeRF |
---|---|---|
100 | ![]() | ![]() |
20 | ![]() | ![]() |
10 | ![]() | ![]() |
Observations:
We can make out the structure of the bulldozer even from just 10 views in the case of VolSDF.
Set alpha and s in volsdf_ners.cfg and run the command below.
Run Command
python -m a4.main --config-name=volsdf_ners
I have implemented the SDF-to-density function from the NeuS paper. I experimented with various values of s to increase/decrease the variance of the logistic distribution, and found that when s > 100 the loss becomes NaN. To counter this, I introduced the alpha scale factor. Below is the final formula.
```python
density = alpha * s * torch.exp(-s * signed_distance) / (1 + torch.exp(-s * signed_distance)) ** 2
```
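Since this expression is just the logistic density, it can equivalently be written with torch.sigmoid, which I believe is better behaved for large s because it avoids the inf/inf that produces NaN (a sketch under that assumption; the function name is mine):

```python
import torch

def sdf_to_density_neus(signed_distance, alpha, s):
    """Logistic-density form of the formula above: alpha * s * phi_s(signed_distance).

    Using sigmoid keeps the value finite for large s instead of overflowing exp().
    """
    sig = torch.sigmoid(-s * signed_distance)        # equals exp(-s*d) / (1 + exp(-s*d))
    return alpha * s * sig * (1.0 - sig)             # = alpha * s * exp(-s*d) / (1 + exp(-s*d))**2
```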
alpha | s | Geometry Output | Scene Output |
---|---|---|---|
0.1 | 100 | ![]() | ![]() |
0.5 | 100 | ![]() | ![]() |
1.0 | 100 | ![]() | ![]() |
10 | 100 | ![]() | ![]() |