Improving Few-Shot NeRF Performance using Data Augmentations and Self Consistency Losses

Submitted for 16-889 course project by Aarush Gupta (aarushg3), Ayush Jain (ayushj2) and Mayank Singh (mayanks2).

1. CycleGAN Augmentation Results




The leftmost gifs are from the pixelNeRF baseline. The center gifs are from the pixelNeRF model trained on the augmented training dataset with the RGB loss. The rightmost gifs are from the pixelNeRF model trained on the augmented training dataset with the RGB + perceptual loss.
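The combined objective above can be sketched as an RGB reconstruction term plus a weighted perceptual term. This is a minimal NumPy sketch, not the project's implementation: the feature maps would in practice come from a pretrained network (e.g. VGG), and the weight `lam` is an assumed hyperparameter, not a value from the report.

```python
import numpy as np

def rgb_loss(pred, gt):
    # Mean-squared error over rendered pixels.
    return np.mean((pred - gt) ** 2)

def perceptual_loss(pred_feats, gt_feats):
    # L2 distance between feature maps of the rendered and ground-truth
    # images (in practice, activations of a pretrained network such as
    # VGG), averaged over the chosen layers.
    return np.mean([np.mean((p - g) ** 2) for p, g in zip(pred_feats, gt_feats)])

def total_loss(pred, gt, pred_feats, gt_feats, lam=0.1):
    # lam balances the perceptual term against the RGB term
    # (assumed value; the report does not state one here).
    return rgb_loss(pred, gt) + lam * perceptual_loss(pred_feats, gt_feats)
```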

2. Self-Supervised Depth Prediction Results


Results on Lego scene:



Results on Drums scene:



When the DietNeRF model is trained on 8 views, we qualitatively observe that the depth consistency loss helps the model learn better depth maps.
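A depth consistency term of this kind compares the expected depth rendered by the NeRF against the output of a self-supervised depth predictor. The sketch below is an assumption about the general shape of such a loss (a plain MSE); the project may use a scale-invariant or otherwise modified variant.

```python
import numpy as np

def rendered_depth(weights, z_vals):
    # Expected ray-termination depth from NeRF volume-rendering weights:
    # a weighted average of the sample depths along each ray.
    return np.sum(weights * z_vals, axis=-1)

def depth_consistency_loss(nerf_depth, pred_depth):
    # Penalize disagreement between the NeRF's rendered depth and the
    # self-supervised depth network's prediction (simple MSE sketch).
    return np.mean((nerf_depth - pred_depth) ** 2)
```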

3. Self-Distillation Results


We notice that DistillNeRF with the RGB + Depth MSE loss gives slightly better results on a few frames. Please see Fig. 4 in the report for further discussion.
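The RGB + Depth MSE distillation objective can be sketched as the student NeRF matching both the color and depth renderings of a teacher NeRF. This is a hedged sketch: the relative weight `w_depth` is an assumed parameter, not a value taken from the report.

```python
import numpy as np

def distillation_loss(student_rgb, teacher_rgb,
                      student_depth, teacher_depth, w_depth=1.0):
    # The student NeRF is trained to match the teacher's rendered
    # colors (RGB MSE) and rendered depths (Depth MSE).
    rgb_term = np.mean((student_rgb - teacher_rgb) ** 2)
    depth_term = np.mean((student_depth - teacher_depth) ** 2)
    # w_depth balances the two terms (assumed hyperparameter).
    return rgb_term + w_depth * depth_term
```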