Box center: (0.249, 0.250, -0.001)
Box side lengths: (2.004, 1.500, 1.503)
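The box center and side lengths above define an axis-aligned bounding box, which can be used to pick the near/far sampling bounds for each ray. A minimal sketch using the standard slab method (the exact bounds used in this project are an assumption on my part):

```python
import numpy as np

# Scene bounding box reported above (assumed axis-aligned).
center = np.array([0.249, 0.250, -0.001])
half = np.array([2.004, 1.500, 1.503]) / 2.0
box_min, box_max = center - half, center + half

def ray_aabb_near_far(origin, direction, eps=1e-9):
    """Slab-method ray/AABB intersection: returns (near, far) ray
    distances, or None if the ray misses the box."""
    # Avoid division by zero for axis-parallel rays.
    inv = 1.0 / np.where(np.abs(direction) < eps, eps, direction)
    t0 = (box_min - origin) * inv
    t1 = (box_max - origin) * inv
    near = np.max(np.minimum(t0, t1))   # latest entry across the three slabs
    far = np.min(np.maximum(t0, t1))    # earliest exit
    return (near, far) if near < far and far > 0 else None

# Example: a ray marching along +z through the box center.
o = np.array([0.249, 0.250, -3.0])
d = np.array([0.0, 0.0, 1.0])
print(ray_aabb_near_far(o, d))
```

Points outside (near, far) contribute nothing, so restricting samples to this interval spends the per-ray sample budget inside the scene.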
Increasing view dependence reduces the network's generalization quality, because the density (and not just the color) effectively becomes a function of the viewing direction. Injecting w earlier in the network amplifies this effect: the network weights the view direction more heavily and develops a stronger bias toward the training views.
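This is why NeRF's MLP injects the view direction only after the density head. A minimal PyTorch sketch of that architecture (layer widths and encoding dimensions here are illustrative assumptions, not the exact model trained above):

```python
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Sketch of the late view-direction injection: the view direction w
    is concatenated only AFTER the density head, so sigma depends on
    position alone and only the emitted color is view-dependent."""

    def __init__(self, pos_dim=63, view_dim=27, width=256):
        super().__init__()
        # Trunk sees only the (positionally encoded) 3D point x.
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(width, 1)      # density: position only
        self.feature = nn.Linear(width, width)
        self.color_head = nn.Sequential(           # color: position + view
            nn.Linear(width + view_dim, width // 2), nn.ReLU(),
            nn.Linear(width // 2, 3), nn.Sigmoid(),
        )

    def forward(self, x, w):
        h = self.trunk(x)
        sigma = torch.relu(self.sigma_head(h))     # w never reaches sigma
        rgb = self.color_head(torch.cat([self.feature(h), w], dim=-1))
        return rgb, sigma
```

Moving the `torch.cat([..., w], ...)` into the trunk would let density vary with viewpoint, which is the generalization failure described above.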
Result from 9-layer NeRF model with default parameters
I trained the model four times, varying the number of point samples per ray and the network capacity. The results are presented below:
| Trial | No. of layers in NeRF | No. of samples per ray | Loss |
|---|---|---|---|
| 1 | 5 | 128 | 0.0046 |
| 2 | 9 | 128 | 0.0031 |
| 3 | 5 | 200 | 0.0066 |
| 4 | 5 | 128 | 0.0038 |
Adding more layers (with dropout) reduces the loss and gives better results (trial 2 vs. trial 1). However, increasing the number of samples per ray from 128 to 200 does not improve performance (trial 3).
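The "samples per ray" knob varied between trials 1 and 3 can be sketched with NeRF's standard stratified sampling scheme (this is a generic sketch, assuming the usual split-into-bins approach rather than the exact sampler used in this project):

```python
import numpy as np

def stratified_samples(near, far, n_samples, rng=None):
    """Stratified sampling along a ray: split [near, far] into
    n_samples equal bins and draw one uniform sample per bin."""
    rng = rng or np.random.default_rng(0)
    edges = np.linspace(near, far, n_samples + 1)
    lower, upper = edges[:-1], edges[1:]
    # One random offset inside each bin keeps samples ordered
    # while still covering the whole interval stochastically.
    return lower + (upper - lower) * rng.uniform(size=n_samples)

t128 = stratified_samples(2.0, 6.0, 128)   # trial 1/2/4 setting
t200 = stratified_samples(2.0, 6.0, 200)   # trial 3 setting
```

Since the bins already cover the full interval, raising the count from 128 to 200 mostly adds redundant samples in empty space, which is consistent with trial 3 showing no improvement.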