Homework 3 Tianyuan

Andrew ID: tianyuaz

Late days used: 6

1.3 Ray sampling

For the grid visualization:

Grid

For the ray visualization:

Ray

1.4 Point sampling

Point

1.5 Volume rendering

For the 360° visualization:

volume

For the depth map:

depth

2.2 and 2.3 Train box

Something weird happens in this part.

I implemented the loss as loss = F.mse_loss(rgb_gt, out["feature"]). When I train the box, the loss quickly becomes NaN, but when I train the NeRF it converges fine.

I checked several places where numerical instability could occur, but the problem persists.

So I do not have final visualizations for this problem.
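For reference, this is a minimal sketch of the loss I used, plus a guard against non-finite renderer outputs that might help isolate the NaN issue. The helper name rgb_loss and the nan_to_num guard are my own illustrative additions, not part of the starter code; out["feature"] is assumed to hold the rendered RGB as in my training loop.

```python
import torch
import torch.nn.functional as F

def rgb_loss(rgb_gt, out):
    # Rendered colors from the volume renderer (assumed key, as in my code).
    pred = out["feature"]
    # If the renderer already produced NaNs/Infs, the MSE will be NaN no matter
    # what; replacing them here makes it easier to tell whether the instability
    # comes from the renderer or from the optimization itself.
    if not torch.isfinite(pred).all():
        pred = torch.nan_to_num(pred, nan=0.0, posinf=1.0, neginf=0.0)
    return F.mse_loss(pred, rgb_gt)
```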

3

I implemented the two-part MLP as in the NeRF paper: the first part predicts density from the positional embedding, and the second part predicts color from the concatenation of the directional embedding and the features of the first MLP.
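A minimal sketch of this architecture in PyTorch (layer widths, embedding dimensions, and names are illustrative, not the exact values from my implementation):

```python
import torch
import torch.nn as nn

class TwoPartNeRFMLP(nn.Module):
    def __init__(self, pos_dim=63, dir_dim=27, hidden=128):
        super().__init__()
        # Part 1: positional embedding -> intermediate features and density.
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)
        # Part 2: [trunk features, directional embedding] -> RGB.
        self.color_head = nn.Sequential(
            nn.Linear(hidden + dir_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, pos_embed, dir_embed):
        feat = self.trunk(pos_embed)
        # Density must be non-negative; ReLU enforces that.
        sigma = torch.relu(self.sigma_head(feat))
        rgb = self.color_head(torch.cat([feat, dir_embed], dim=-1))
        return sigma, rgb
```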

The visualization of lego:

volume

4.3 High resolution

For high-resolution training, it seems we need more samples per ray to recover the high-frequency details, so I compared models trained with different values of num_sampled_pts_per_ray. The results for 16, 32, and 64 points per ray are shown below, after a short sketch of what this parameter controls.
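As context for the comparison, here is a minimal sketch of how the per-ray sample count enters the point sampler, assuming stratified sampling between near/far bounds. The function name, argument names, and the near/far values are illustrative, not the assignment's exact API; n_pts_per_ray plays the role of num_sampled_pts_per_ray in the config.

```python
import torch

def sample_points_along_rays(origins, directions, n_pts_per_ray,
                             near=0.1, far=6.0, stratified=True):
    # origins, directions: (n_rays, 3) tensors.
    n_rays = origins.shape[0]
    # Evenly spaced depths between near and far, shared across rays.
    t = torch.linspace(near, far, n_pts_per_ray, device=origins.device)
    t = t.expand(n_rays, n_pts_per_ray).clone()
    if stratified:
        # Jitter each sample within its bin so samples do not always land
        # at the same depths across training iterations.
        bin_width = (far - near) / n_pts_per_ray
        t = t + torch.rand_like(t) * bin_width
    # Points along each ray: o + t * d, shape (n_rays, n_pts_per_ray, 3).
    return origins[:, None, :] + t[..., None] * directions[:, None, :]
```

With more points per ray, the samples are spaced more finely along each ray, so thin structures and sharp edges are less likely to fall between samples, at the cost of proportionally more network evaluations.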

sample 16 points per ray:

volume

sample 32 points per ray:

volume

sample 64 points per ray:

volume


We see that increasing the number of sampled points per ray recovers more high-frequency details.