andrew id: tianyuaz
late days: 6
For grid:
For ray visualization:
For 360 visualization:
For depth map:
It is weird. I implement the loss as loss = F.mse_loss(rgb_gt, out["feature"]), but when I train the box, the loss easily becomes NaN, while training NeRF goes fine. I checked several places where numerical instability could occur, but it still doesn't work, so I don't have final visualizations for this problem.
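For reference, below is a minimal sketch of how the rendering loss could be guarded against NaNs (clamping the raw densities and adding a small epsilon to the transmittance product). The names densities, colors, deltas, and rgb_gt are illustrative assumptions and may not match the starter code exactly.

```python
import torch
import torch.nn.functional as F

def guarded_render_loss(densities, colors, deltas, rgb_gt, eps=1e-10):
    """Volume-render along each ray and compute an MSE loss with NaN guards.

    densities: (R, S, 1) raw density output, colors: (R, S, 3),
    deltas: (R, S, 1) distances between samples, rgb_gt: (R, 3).
    All shapes/names are assumptions, not the exact starter-code interface.
    """
    # Clamp density so exp() stays well behaved; softplus would also work.
    sigma = torch.relu(densities).clamp(max=1e3)

    # Alpha compositing: alpha_i = 1 - exp(-sigma_i * delta_i)
    alpha = 1.0 - torch.exp(-sigma * deltas)                        # (R, S, 1)
    trans = torch.cumprod(1.0 - alpha + eps, dim=1)                 # (R, S, 1)
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=1)
    weights = alpha * trans                                         # (R, S, 1)

    rgb_pred = (weights * colors).sum(dim=1)                        # (R, 3)
    loss = F.mse_loss(rgb_gt, rgb_pred)

    # Fail loudly instead of silently training on NaNs.
    assert torch.isfinite(loss), "loss became NaN/inf"
    return loss
```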
I implement the two-part MLP as in the NeRF paper: the first part predicts density from the positional embedding, and the second part predicts color from the concatenation of the directional embedding and the features from the first part (a minimal sketch is shown below).
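As a rough illustration of that structure, here is a minimal two-part MLP sketch; the embedding sizes, layer widths, and depth are assumptions and not the exact values used in this assignment.

```python
import torch
import torch.nn as nn

class TwoPartNeRFMLP(nn.Module):
    """Minimal sketch of the two-part NeRF MLP described above.

    pos_dim / dir_dim are the sizes of the positional and directional
    embeddings; these numbers are assumptions, not the assignment's values.
    """
    def __init__(self, pos_dim=63, dir_dim=27, hidden=256):
        super().__init__()
        # Part 1: positional embedding -> intermediate features (+ density).
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)

        # Part 2: [trunk features, directional embedding] -> RGB.
        self.color_head = nn.Sequential(
            nn.Linear(hidden + dir_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, pos_embed, dir_embed):
        feat = self.trunk(pos_embed)
        density = torch.relu(self.density_head(feat))   # non-negative density
        rgb = self.color_head(torch.cat([feat, dir_embed], dim=-1))
        return density, rgb
```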
The visualization of the Lego scene:
For high-resolution training, it seems we need more samples per ray to recover the high-frequency details, so I compare training the model with different values of num_sampled_pts_per_ray (a sketch of the per-ray sampling is included after the comparison).
sample 16 points per ray:
sample 32 points per ray:
sample 64 points per ray:
We see that increasing the number of sampled points recovers more high-frequency details.
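For completeness, here is a minimal sketch of the stratified per-ray sampling that num_sampled_pts_per_ray controls; the function and argument names are illustrative assumptions, not the assignment's actual API.

```python
import torch

def stratified_sample_points(origins, directions, near, far,
                             num_sampled_pts_per_ray=64):
    """Sample points along each ray with one jittered sample per depth bin.

    origins, directions: (R, 3). Returns points (R, N, 3) and depths (R, N).
    """
    R = origins.shape[0]
    # Evenly spaced bin edges in depth, then jitter inside each bin.
    t = torch.linspace(near, far, num_sampled_pts_per_ray + 1,
                       device=origins.device)
    lower, upper = t[:-1], t[1:]
    jitter = torch.rand(R, num_sampled_pts_per_ray, device=origins.device)
    depths = lower + (upper - lower) * jitter                       # (R, N)

    # Point along each ray: o + t * d for every sampled depth t.
    points = origins[:, None, :] + depths[..., None] * directions[:, None, :]
    return points, depths
```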