Assignment 3

Anirudh Chakravarthy (achakrav)

Late days: 0

Question 1

Question 1.3

Grid/ray visualizations:

(figures: grid visualization, ray visualization)
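For reference, the grid/ray generation can be sketched as below. This is a minimal pinhole-camera sketch, not the assignment's starter code: the name `get_rays`, the pixel-centering convention, and the `c2w` (camera-to-world) layout are my own assumptions.

```python
import numpy as np

def get_rays(H, W, focal, c2w):
    """Generate world-space rays through each pixel of an H x W image.

    `c2w` is a 3x4 camera-to-world matrix [R | t]; the camera looks
    down -z with y up (OpenGL-style convention, assumed here).
    """
    # Pixel-center grid in image coordinates.
    i, j = np.meshgrid(np.arange(W) + 0.5, np.arange(H) + 0.5, indexing="xy")
    # Camera-space ray directions through each pixel.
    dirs = np.stack(
        [(i - W / 2) / focal, -(j - H / 2) / focal, -np.ones_like(i)], axis=-1
    )
    # Rotate directions into world space; origins are the camera position.
    rays_d = dirs @ c2w[:3, :3].T
    rays_o = np.broadcast_to(c2w[:3, 3], rays_d.shape)
    return rays_o, rays_d
```

With an identity pose, the center pixel's ray points straight down the -z axis.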

Question 1.4

Point samples:

(figure: point samples along rays)
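The point sampling can be sketched as stratified sampling along each ray, as in NeRF: split [near, far] into equal bins and jitter one sample per bin. `sample_points` and its signature are illustrative, not the starter code's API.

```python
import numpy as np

def sample_points(rays_o, rays_d, near, far, n_samples, stratified=True):
    """Sample points along each ray between `near` and `far`."""
    # Bin edges, then one sample per bin (jittered, or the bin center).
    t = np.linspace(near, far, n_samples + 1)
    lower, upper = t[:-1], t[1:]
    if stratified:
        u = np.random.rand(rays_o.shape[0], n_samples)
    else:
        u = np.full((rays_o.shape[0], n_samples), 0.5)
    z_vals = lower + (upper - lower) * u               # (n_rays, n_samples)
    # Points: o + t * d for every sampled depth.
    pts = rays_o[:, None, :] + z_vals[..., None] * rays_d[:, None, :]
    return pts, z_vals
```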

Question 1.5

Color/depth visualizations:

(figures: color render, depth render)
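Both the color and depth images come from the same per-sample weights. A sketch of the NeRF volume-rendering quadrature, where alpha_i = 1 - exp(-sigma_i * delta_i) and w_i = T_i * alpha_i (function name and tensor layout are my own):

```python
import numpy as np

def composite(sigmas, colors, z_vals):
    """Alpha-composite per-sample densities/colors into pixel color and depth."""
    # Distances between adjacent samples; pad the last interval as "infinite".
    deltas = np.diff(z_vals, axis=-1)
    deltas = np.concatenate([deltas, np.full_like(deltas[..., :1], 1e10)], axis=-1)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance T_i: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(1.0 - alphas + 1e-10, axis=-1)
    trans = np.concatenate([np.ones_like(trans[..., :1]), trans[..., :-1]], axis=-1)
    weights = trans * alphas                            # (n_rays, n_samples)
    color = (weights[..., None] * colors).sum(axis=-2)  # expected color
    depth = (weights * z_vals).sum(axis=-1)             # expected depth
    return color, depth, weights
```

A fully opaque first sample should absorb all the weight, returning its color and depth.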

Question 2

Question 2.2

Box center: (0.2502, 0.2506, -0.0005)
Box side lengths: (2.0051, 1.5036, 1.5034)
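The reported center and side lengths follow directly from the fitted box's extreme corners: the center is their midpoint and the side lengths are their difference. A small sketch (the corner values below are illustrative, not the fitted ones):

```python
import numpy as np

def box_params(min_corner, max_corner):
    """Center and side lengths of an axis-aligned box from its extreme corners."""
    min_corner, max_corner = np.asarray(min_corner), np.asarray(max_corner)
    center = (min_corner + max_corner) / 2.0
    sides = max_corner - min_corner
    return center, sides
```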

Question 2.3

Volume visualization:

(figure: volume render of the box)

Question 3

I used positional encoding but no view dependence.

(GIF: NeRF render of the bulldozer)
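The positional encoding used here can be sketched as the NeRF-style gamma(x) that lifts each coordinate onto sines and cosines at geometrically spaced frequencies. The function name and the choice to omit the raw input from the encoding are my own; some implementations also append x itself.

```python
import numpy as np

def positional_encoding(x, n_freqs=6):
    """Map each coordinate to [sin(2^k * pi * x), cos(2^k * pi * x)], k < n_freqs."""
    freqs = 2.0 ** np.arange(n_freqs) * np.pi   # (n_freqs,)
    angles = x[..., None] * freqs               # (..., dim, n_freqs)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)       # (..., dim * 2 * n_freqs)
```

A 3D input with 6 frequencies yields a 36-dimensional encoding per point.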

Question 4

Question 4.1

I followed the view dependence implementation from the NeRF paper.

(GIF: render with view dependence)
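The structural change view dependence introduces can be sketched as a tiny forward pass: density depends only on the position features, while colour additionally conditions on the encoded viewing direction. The layer sizes and random weights below are placeholders for illustration, not the paper's full architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical layer sizes; the real network is much deeper and wider.
feat_dim, dir_dim = 8, 4
W_feat = rng.normal(size=(3, feat_dim))           # position -> features
W_sigma = rng.normal(size=(feat_dim, 1))          # density from features only
W_rgb = rng.normal(size=(feat_dim + dir_dim, 3))  # colour also sees the direction

def forward(pts, view_enc):
    """Density is view-independent; colour conditions on the viewing direction."""
    h = relu(pts @ W_feat)
    sigma = relu(h @ W_sigma)                     # (n, 1), non-negative density
    rgb_in = np.concatenate([h, view_enc], axis=-1)
    rgb = 1.0 / (1.0 + np.exp(-(rgb_in @ W_rgb))) # sigmoid -> colours in [0, 1]
    return sigma, rgb
```

Keeping sigma independent of the direction is what lets the geometry stay consistent while the appearance varies with viewpoint.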

On adding view dependence, I observed two key differences:

  1. The patterns at the base of the bulldozer appear sharper in the rendered views. The NeRF paper reports a similar observation: view dependence lets the network model specularities more faithfully.
  2. Without view dependence, the renders contain artifacts. Specifically, when the bulldozer (in Q3) is turned away from the camera, a spurious reflection momentarily appears below its base; this artifact disappears once view dependence is added.

Based on these observations, I believe that adding view dependence improves generalization to unseen views, allowing the network to render realistic colours consistent with the viewing direction.

Question 4.3

For training a high-resolution NeRF, I experimented with three settings:

Baseline

In this setting, I trained the same network as in the previous questions on high-resolution images.

(GIF: baseline high-resolution render)

This resulted in a blank render with a black background, which led me to conclude that the network did not have sufficient capacity to fit the higher-resolution scene.

Deeper network

In this setting, I followed the exact network from the NeRF paper (depth, skip connections, hidden dimensions, etc.). I used a batch size of 256 and a chunk size of 8192.

(GIF: high-resolution render with the deeper network)

As seen in the GIF above, the network is able to render fine-grained details (the dotted pattern) on the base of the bulldozer, which were absent in the previous results.
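The trunk of the deeper network can be sketched as below, using the paper's values: 8 layers of width 256, with the encoded input re-concatenated before layer 4 (the skip connection). The weight initialization and encoded-input dimension here are placeholders for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

D, WIDTH, IN = 8, 256, 60  # depth, hidden width, encoded-input dim
SKIP = 4                   # the encoded input is re-injected before this layer

# One weight matrix per layer; the skip layer takes [hidden, input] concatenated.
weights = [
    rng.normal(scale=0.05,
               size=(IN if i == 0 else WIDTH + IN if i == SKIP else WIDTH, WIDTH))
    for i in range(D)
]

def backbone(x):
    """Forward pass of the MLP trunk with a skip connection at layer SKIP."""
    h = x
    for i, w in enumerate(weights):
        if i == SKIP:
            h = np.concatenate([h, x], axis=-1)  # re-inject the encoded input
        h = np.maximum(h @ w, 0.0)               # linear + ReLU
    return h                                      # (n, WIDTH) feature vector
```

The skip connection keeps the high-frequency encoded input available to the later layers, which otherwise tend to wash it out.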