two.png

1 Differentiable Volume Rendering

1.3 Ray Sampling

Code to generate outputs: python main.py --config-name=box

rays_vis.png

grid_vis.png
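
For reference, here is a minimal sketch of the pixel-to-ray step behind rays_vis.png. It assumes a plain pinhole camera given by a focal length in pixels and a 4x4 camera-to-world matrix, with the camera looking down its -z axis; the function name and conventions below are mine, not the starter code's.

```python
import torch

def get_rays(H, W, focal, c2w):
    """Generate one ray (origin, direction) per pixel for a pinhole camera.

    H, W  -- image height and width in pixels
    focal -- focal length in pixels
    c2w   -- (4, 4) camera-to-world transform
    """
    # Pixel-centre grid: i indexes columns (x), j indexes rows (y).
    i, j = torch.meshgrid(
        torch.arange(W, dtype=torch.float32) + 0.5,
        torch.arange(H, dtype=torch.float32) + 0.5,
        indexing="xy",
    )
    # Direction through each pixel in camera coordinates (camera looks down -z).
    dirs = torch.stack(
        [(i - W / 2) / focal, -(j - H / 2) / focal, -torch.ones_like(i)], dim=-1
    )
    # Rotate directions into world space; all rays share the camera origin.
    rays_d = dirs @ c2w[:3, :3].T            # (H, W, 3)
    rays_o = c2w[:3, 3].expand(rays_d.shape)
    return rays_o, rays_d
```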

1.4 Point Sampling

Code to generate outputs: python main.py --config-name=box

1.4.png
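
The sampler behind this figure is conceptually simple: for every ray, pick n_pts_per_ray depth values between the near and far planes and turn each depth into a 3D point via origin + depth * direction. A hedged sketch of that idea (uniform spacing only; any per-ray stratified jitter used during training is omitted, and the names are mine):

```python
import torch

def sample_points_along_rays(rays_o, rays_d, near, far, n_pts_per_ray):
    """Sample n_pts_per_ray depths in [near, far] along each ray.

    rays_o, rays_d -- (N, 3) ray origins and directions
    Returns points of shape (N, n_pts_per_ray, 3) and depths (N, n_pts_per_ray).
    """
    n_rays = rays_o.shape[0]
    # Evenly spaced depth values, shared across all rays.
    z_vals = torch.linspace(near, far, n_pts_per_ray)
    z_vals = z_vals.expand(n_rays, n_pts_per_ray)
    # Point = origin + depth * direction.
    points = rays_o[:, None, :] + z_vals[..., None] * rays_d[:, None, :]
    return points, z_vals
```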

1.5 Volume Rendering

Code to generate outputs: python main.py --config-name=box

Depth map still

depth.png

Depth map GIF

part_1_depth.gif

Autogenerated output

part_1.gif
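
Both the colour render and the depth map above come from the same compositing step: convert per-sample densities into termination weights along each ray, then take the weighted sum of the sampled colours (for RGB) or of the sample depths (for the depth map). A sketch of that step, with my own function name and tensor conventions:

```python
import torch

def volume_render(densities, colors, z_vals, eps=1e-10):
    """Composite per-sample densities and colours into per-ray RGB and depth.

    densities -- (N, S) sigma at each sample
    colors    -- (N, S, 3) RGB at each sample
    z_vals    -- (N, S) sample depths along each ray
    """
    # Distance between adjacent samples (pad the final interval).
    deltas = z_vals[:, 1:] - z_vals[:, :-1]
    deltas = torch.cat([deltas, 1e10 * torch.ones_like(deltas[:, :1])], dim=-1)
    # Probability that the ray terminates within each interval.
    alphas = 1.0 - torch.exp(-densities * deltas)
    # Transmittance: probability of reaching sample i without terminating earlier.
    trans = torch.cumprod(1.0 - alphas + eps, dim=-1)
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=-1)
    weights = alphas * trans
    # Expected colour and expected depth along each ray.
    rgb = (weights[..., None] * colors).sum(dim=1)
    depth = (weights * z_vals).sum(dim=1)
    return rgb, depth
```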

2 Optimizing a basic implicit volume

Code to generate: python main.py --config-name=train_box

part_2.gif

Box center: (0.25, 0.25, -0.00)
Box side lengths: (2.01, 1.50, 1.50)
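
The numbers above are read straight off the two learnable parameters of the box volume after fitting it to the rendered images. The sketch below is my own reconstruction of how such a box density can be parameterised (a box SDF turned into a soft occupancy indicator); the starter code's exact formulation and the 0.05 sharpness constant may differ.

```python
import torch

class BoxVolume(torch.nn.Module):
    """Axis-aligned box with a learnable centre and learnable side lengths."""

    def __init__(self):
        super().__init__()
        self.center = torch.nn.Parameter(torch.zeros(3))
        self.side_lengths = torch.nn.Parameter(torch.ones(3))

    def forward(self, points):
        # Signed distance from each point to the box surface.
        q = (points - self.center).abs() - self.side_lengths / 2
        sdf = q.clamp(min=0.0).norm(dim=-1) + q.max(dim=-1).values.clamp(max=0.0)
        # Soft indicator: density is ~1 inside the box (sdf < 0), ~0 outside,
        # which keeps both parameters differentiable through the renderer.
        return torch.sigmoid(-sdf / 0.05)
```

Because the density is a smooth function of self.center and self.side_lengths, both parameters receive gradients from the image reconstruction loss, which is what produces the fitted values reported above.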

3 NeRF

Code to generate: python main.py --config-name=nerf_lego

Remember to set the parameter nerf_lego.implicit_function.view_dep = False

part_3_128_no_view_dep.gif
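
With view_dep = False the implicit function is essentially a coordinate MLP: harmonically embed xyz, push it through a few hidden layers, and read off a non-negative density plus an RGB colour in [0, 1]. The sketch below shows that shape of network; the layer count, width, and number of frequencies are illustrative, not my exact config.

```python
import torch

class HarmonicEmbedding(torch.nn.Module):
    """Positional encoding: map each coordinate x to [sin(2^k pi x), cos(2^k pi x)]."""

    def __init__(self, n_freqs=6):
        super().__init__()
        self.register_buffer("freqs", 2.0 ** torch.arange(n_freqs).float() * torch.pi)

    def forward(self, x):                       # x: (..., 3)
        xb = x[..., None] * self.freqs          # (..., 3, n_freqs)
        return torch.cat([xb.sin(), xb.cos()], dim=-1).flatten(-2)

class NeRFNoViewDep(torch.nn.Module):
    """MLP predicting (density, rgb) from position only, i.e. view_dep = False."""

    def __init__(self, n_freqs=6, hidden=128):
        super().__init__()
        self.embed = HarmonicEmbedding(n_freqs)
        in_dim = 3 * 2 * n_freqs
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(in_dim, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
        )
        self.density_head = torch.nn.Linear(hidden, 1)
        self.rgb_head = torch.nn.Linear(hidden, 3)

    def forward(self, points):
        h = self.mlp(self.embed(points))
        density = torch.relu(self.density_head(h))   # non-negative sigma
        rgb = torch.sigmoid(self.rgb_head(h))        # colours in [0, 1]
        return density, rgb
```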

More NeRF

4.1 View Dependence

Code to generate: python main.py --config-name=nerf_lego

Remember to set the parameter nerf_lego.implicit_function.view_dep = True

part_3_[128, 128]_view_dep_100.gif

The resolution of this render is too low for the view-dependence effects to really come to the fore; however, if you pay attention you'll see that the same pixels on the front of the lego bulldozer appear darker on the back swing of the truck compared to the GIF in #3. I was not satisfied with this result, so I attempted view dependence again after switching to high res; stay tuned.

View Dependence vs Generalisation: If the direction-related information is provided too early, the network can become overly view dependent, which is undesirable because the object itself remains the same no matter where it is viewed from. Hence we introduce the directional information later in the network (for me, at the 6th layer). This acts as an inductive bias: the basic geometry depends almost entirely on the positional information, while the viewing direction has only a smaller effect on the final RGB values produced.
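
A sketch of that layout, assuming the position and direction embeddings are computed elsewhere (the feature sizes, widths, and exact split point below are illustrative, not my actual architecture). The key point is that the density is read off before the direction ever enters, so the geometry cannot depend on the viewpoint:

```python
import torch

class NeRFViewDep(torch.nn.Module):
    """Position-only trunk; the view direction is injected late, into the colour branch only."""

    def __init__(self, pos_dim=36, dir_dim=24, hidden=128):
        super().__init__()
        # Early layers see position only, so density (geometry) is view independent.
        self.trunk = torch.nn.Sequential(
            torch.nn.Linear(pos_dim, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
        )
        self.density_head = torch.nn.Linear(hidden, 1)
        # Direction features are concatenated only here (for me, around the 6th
        # layer), so they can modulate colour but not geometry.
        self.rgb_branch = torch.nn.Sequential(
            torch.nn.Linear(hidden + dir_dim, hidden // 2), torch.nn.ReLU(),
            torch.nn.Linear(hidden // 2, 3), torch.nn.Sigmoid(),
        )

    def forward(self, pos_feat, dir_feat):
        h = self.trunk(pos_feat)
        density = torch.relu(self.density_head(h))
        rgb = self.rgb_branch(torch.cat([h, dir_feat], dim=-1))
        return density, rgb
```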

4.3 High res

Code to generate: python main.py --config-name=nerf_lego_highres

Remember to set the parameter nerf_lego_highres.implicit_function.view_dep = False

Remember to set the parameter nerf_lego_highres.sampler.n_pts_per_ray = 128

Remember to set the parameter nerf_lego_highres.training.batch_size = 1024

part_3_[400, 400]_from_pretrain_150_to_240.gif

At 240 iterations the finer details of the lego bezels have not fully come through, although the result is already much clearer than in #3.

4.3 + 4.1 High res with view dependence

Code to generate: python main.py --config-name=nerf_lego_highres

Remember to set the parameter nerf_lego_highres.implicit_function.view_dep = True

Remember to set the parameter nerf_lego_highres.sampler.n_pts_per_ray = 256

Remember to set the parameter nerf_lego_highres.training.batch_size = 512
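
Note that, relative to 4.3, doubling n_pts_per_ray while halving batch_size keeps the number of network queries per training step unchanged; reading this as a memory-for-sample-density trade is my own interpretation, and the snippet below is just that bookkeeping.

```python
# Network queries per training step = n_pts_per_ray * batch_size.
queries_4_3 = 128 * 1024   # config in 4.3 (no view dependence)
queries_4_1 = 256 * 512    # config in 4.3 + 4.1 (view dependence)
assert queries_4_3 == queries_4_1 == 131072
```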

part_3_400_at_250_view_dep.gif

After just 250 iterations the view-dependence effects clearly shine through: the extrusions of the lego board appear different from different views, depending on how the shadows fall.

More Training

Using the exact same config as above but training for another 100 epochs results in an even more accurate rendering:

part_3_400_view_dep_340_iter.gif

Parameters played with