Learning for 3D Vision: Assignment 3

Deepti Upmaka

1. Differentiable Volume Rendering

1.3. Ray sampling (10 points)
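Ray sampling converts each pixel into a world-space ray through the camera center. A minimal sketch of this step, assuming a pinhole camera with a -z forward convention (the function name and signature are illustrative, not the starter code's API):

```python
import torch

def get_rays(H, W, focal, c2w):
    """Generate one ray (origin, direction) per pixel.

    H, W: image size; focal: focal length in pixels;
    c2w: 3x4 camera-to-world matrix [R | t].
    """
    # Pixel-center grid in image coordinates.
    j, i = torch.meshgrid(
        torch.arange(H, dtype=torch.float32),
        torch.arange(W, dtype=torch.float32),
        indexing="ij",
    )
    # Camera-space directions (pinhole model, -z forward).
    dirs = torch.stack(
        [(i - W * 0.5) / focal, -(j - H * 0.5) / focal, -torch.ones_like(i)],
        dim=-1,
    )
    # Rotate into world space and normalize to unit length.
    rays_d = dirs @ c2w[:3, :3].T
    rays_d = rays_d / rays_d.norm(dim=-1, keepdim=True)
    # All rays share the camera origin.
    rays_o = c2w[:3, 3].expand(rays_d.shape)
    return rays_o, rays_d
```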

1.4. Point sampling (10 points)
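Point sampling places depth samples along each ray between the near and far planes; jittering the samples within their bins (stratified sampling) reduces banding. A hedged sketch, with illustrative names:

```python
import torch

def sample_points(rays_o, rays_d, near, far, n_pts, stratified=True):
    """Sample n_pts depths per ray between near and far.

    rays_o, rays_d: (n_rays, 3) origins and directions.
    Returns points (n_rays, n_pts, 3) and depths (n_rays, n_pts).
    """
    t = torch.linspace(near, far, n_pts)          # (n_pts,)
    t = t.expand(rays_o.shape[0], n_pts).clone()  # (n_rays, n_pts)
    if stratified:
        # Jitter each sample within its bin to reduce banding artifacts.
        bin_width = (far - near) / (n_pts - 1)
        t = t + (torch.rand_like(t) - 0.5) * bin_width
    # Points along each ray: x = o + t * d.
    pts = rays_o[:, None, :] + t[..., None] * rays_d[:, None, :]
    return pts, t
```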

1.5. Volume rendering (30 points)
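The renderer composites the sampled densities and colors along each ray with the standard emission-absorption model: each sample's opacity is alpha_i = 1 - exp(-sigma_i * delta_i), weighted by the transmittance accumulated in front of it. A minimal sketch of that compositing step (names are illustrative):

```python
import torch

def composite(sigmas, colors, deltas):
    """Emission-absorption compositing along each ray.

    sigmas: (n_rays, n_pts) densities
    colors: (n_rays, n_pts, 3) per-sample RGB
    deltas: (n_rays, n_pts) distances between adjacent samples
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i).
    alphas = 1.0 - torch.exp(-sigmas * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded.
    trans = torch.cumprod(1.0 - alphas + 1e-10, dim=-1)
    trans = torch.roll(trans, shifts=1, dims=-1)
    trans[:, 0] = 1.0  # nothing blocks the first sample
    weights = alphas * trans  # (n_rays, n_pts)
    # Weighted sum of per-sample colors gives the pixel color.
    rgb = (weights[..., None] * colors).sum(dim=-2)
    return rgb, weights
```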

2. Optimizing a basic implicit volume

2.2. Loss and training (5 points)

Box center: (0.25, 0.25, -0.00)
Box side lengths: (2.00, 1.50, 1.50)
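The box parameters above come from minimizing a mean-squared error between rendered and ground-truth pixel colors. A toy sketch of that loop, assuming a direct parameter target in place of the differentiable renderer so the example stays self-contained (the target values are illustrative, not the assignment's):

```python
import torch

# Hypothetical stand-in for the box parameters being optimized.
center = torch.zeros(3, requires_grad=True)
target = torch.tensor([0.25, 0.25, 0.0])  # illustrative target

optimizer = torch.optim.SGD([center], lr=1.0)
for _ in range(50):
    optimizer.zero_grad()
    # In the assignment the loss compares rendered colors against
    # ground-truth images; the structure of the step is the same.
    loss = torch.mean((center - target) ** 2)  # MSE loss
    loss.backward()
    optimizer.step()
```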

2.3. Visualization

3. Optimizing a Neural Radiance Field (NeRF) (30 points)
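A key ingredient of the NeRF MLP is the positional encoding applied to the input coordinates, which lets the network represent high-frequency detail. A minimal sketch of the standard sin/cos frequency encoding (the exact frequency count is a hyperparameter, shown here with an illustrative default):

```python
import torch

def positional_encoding(x, n_freqs=6):
    """NeRF-style positional encoding: map each coordinate to
    [sin(2^k * pi * x), cos(2^k * pi * x)] for k = 0..n_freqs-1."""
    out = []
    for k in range(n_freqs):
        freq = (2.0 ** k) * torch.pi
        out.append(torch.sin(freq * x))
        out.append(torch.cos(freq * x))
    return torch.cat(out, dim=-1)
```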

4. NeRF Extras

4.1 View Dependence (10 pts)

View dependence on low resolution lego.



Comparing the output of the high-resolution lego with view dependence (top) and without it (bottom), we can see that view dependence makes the wheels in the back sharper.





If we seek finer detail through stronger view dependence, we lose some ability to generalize. The result depends on the training process and on which images we train with, since the model must fit the specularity changes it observes. If we run inference on a new scene or an unseen viewpoint, the model may not accurately reproduce the view dependence, because it never learned the color from that viewing angle.
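Structurally, view dependence is added by conditioning only the color branch on the viewing direction, while density stays a function of position alone. A toy sketch of such a head (layer sizes and names are illustrative, not the starter code's architecture):

```python
import torch
import torch.nn as nn

class ViewDependentHead(nn.Module):
    """Toy NeRF-style head: density from positional features only,
    color from positional features plus the viewing direction."""

    def __init__(self, feat_dim=64, dir_dim=3):
        super().__init__()
        # Density must not depend on view direction, or geometry
        # would change as the camera moves.
        self.density = nn.Linear(feat_dim, 1)
        # Color sees the view direction, so specular highlights
        # can vary with viewpoint.
        self.color = nn.Sequential(
            nn.Linear(feat_dim + dir_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 3),
            nn.Sigmoid(),
        )

    def forward(self, feat, view_dir):
        sigma = torch.relu(self.density(feat))
        rgb = self.color(torch.cat([feat, view_dir], dim=-1))
        return sigma, rgb
```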

4.3 High Resolution Imagery (10 pts)

In the experiments below, I varied the number of points sampled along each ray and the number of hidden neurons used when computing color and density. Increasing these parameters increases the runtime, since the network must sample more points and map them into a higher-dimensional feature space during training. As you can see, the output for n_pts_per_ray: 256, n_hidden_neurons_xyz: 256 is the sharpest, and the lowest-capacity setting, n_pts_per_ray: 32, n_hidden_neurons_xyz: 128, is the blurriest. Sampling more points along each ray captures finer details. In the first image, you can also observe that the model not only missed the fine details but also failed to learn certain colors, such as the red lights.
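The runtime growth can be sketched with a back-of-the-envelope cost model, assuming time grows linearly with samples per ray and quadratically with MLP width (an assumption for illustration, not a measurement):

```python
# The four configurations tested below, as (n_pts_per_ray, width) pairs.
configs = [(32, 128), (128, 128), (128, 256), (256, 256)]

# Relative cost ~ n_pts_per_ray * width^2, normalized to the cheapest run.
base = configs[0][0] * configs[0][1] ** 2
ratios = {cfg: (cfg[0] * cfg[1] ** 2) / base for cfg in configs}

for (n_pts, width), r in ratios.items():
    print(f"n_pts_per_ray={n_pts}, n_hidden_neurons_xyz={width}: ~{r:.0f}x")
```

Under this rough model, the sharpest setting (256, 256) costs about 32x the cheapest one, which matches the observed trade-off between quality and runtime.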

n_pts_per_ray: 32, n_hidden_neurons_xyz: 128



n_pts_per_ray: 128, n_hidden_neurons_xyz: 128



n_pts_per_ray: 128, n_hidden_neurons_xyz: 256



n_pts_per_ray: 256, n_hidden_neurons_xyz: 256