Name: Caroline Ai
Grid visualization:
Ray visualization:
Sample points from the first camera:
The `get_random_pixels_from_image` method is implemented in `ray_utils.py`.
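A minimal sketch of what such a sampler can look like; the actual function in the starter code may also take the camera and use a different coordinate convention, so treat the signature below as an assumption:

```python
import torch

def get_random_pixels_from_image(n_pixels, image_size):
    # image_size = (H, W); draw n_pixels random pixel locations.
    H, W = image_size
    x = torch.randint(0, W, (n_pixels,))
    y = torch.randint(0, H, (n_pixels,))
    # Convert pixel centers to normalized coordinates in [-1, 1].
    xy = torch.stack(
        [(x.float() + 0.5) / W * 2.0 - 1.0,
         (y.float() + 0.5) / H * 2.0 - 1.0],
        dim=-1,
    )
    return xy  # (n_pixels, 2)

# Example: 1024 random pixel locations from a 128x128 image.
pixels = get_random_pixels_from_image(1024, (128, 128))
```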
The loss is implemented in `main.py`.
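The training loss for NeRF is typically the photometric term, i.e. the mean squared error between rendered ray colors and the corresponding ground-truth pixel colors. A minimal, self-contained sketch of that term (names are illustrative):

```python
import torch

def photometric_loss(rgb_pred: torch.Tensor, rgb_gt: torch.Tensor) -> torch.Tensor:
    # Mean squared error between rendered ray colors and ground-truth pixel colors.
    return torch.mean((rgb_pred - rgb_gt) ** 2)

# Example with dummy (n_rays, 3) color tensors.
loss = photometric_loss(torch.rand(1024, 3), torch.rand(1024, 3))
```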
Box center: (0.25, 0.25, -0.00)
Box side lengths: (2.00, 1.50, 1.50)
I used the original experimental settings in `configs/nerf_lego.yaml`.
I've added another experimental settings file, `nerf_lego_4_1.yaml`, in `configs`.
Train the NeRF model with view dependence:

`python main.py --config-name=nerf_lego_4_1`
| NeRF without view dependence | NeRF with view dependence |
|---|---|
| ![]() | ![]() |
The NeRF model with view dependence shows more detail on the Lego scene than the model without it, because the predicted color is conditioned on the viewing direction. In this particular case, overfitting to the training views is not obvious; however, depending on the amount of specularity in the inputs, the view-dependent model might reconstruct some unintended points or discolor parts of the surface.
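Concretely, view dependence is usually added by feeding an embedding of the ray direction into the color branch only, while the density stays view-independent. A sketch of such a color head, with parameter names borrowed from the config but the wiring (feature and embedding sizes) being an assumption rather than the starter code's exact architecture:

```python
import torch
import torch.nn as nn

class ViewDependentColorHead(nn.Module):
    # Color head conditioned on the viewing direction; density is predicted
    # elsewhere and does not see the direction.
    def __init__(self, n_hidden_neurons_xyz=128, n_hidden_neurons_dir=64, dir_embed_dim=24):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(n_hidden_neurons_xyz + dir_embed_dim, n_hidden_neurons_dir),
            nn.ReLU(),
            nn.Linear(n_hidden_neurons_dir, 3),
            nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, xyz_features, dir_embedding):
        # Concatenate per-point features with the embedded view direction
        # so the predicted color can vary with viewing angle.
        return self.layers(torch.cat([xyz_features, dir_embedding], dim=-1))

# Example: per-point features + embedded directions -> per-point colors.
head = ViewDependentColorHead()
colors = head(torch.randn(4096, 128), torch.randn(4096, 24))
```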
After revising some parameters in the settings (`chunk_size: 16384`, `render_interval: 50`), I get the output:
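As a side note, `chunk_size` mainly bounds peak memory during rendering rather than changing the rendered result: rays are processed in chunks of that size and the per-chunk colors are concatenated (`render_interval` presumably just controls how often images are rendered during training). A minimal sketch of the chunking idea, with a hypothetical `render_fn` and a flat ray tensor standing in for the starter code's renderer and ray bundle:

```python
import torch

def render_in_chunks(render_fn, rays, chunk_size=16384):
    # Render the rays in pieces of at most `chunk_size` so a full-image
    # render never holds every sample in memory at once.
    colors = [render_fn(rays[i:i + chunk_size])
              for i in range(0, rays.shape[0], chunk_size)]
    return torch.cat(colors, dim=0)

# Dummy example: 100k "rays" mapped to colors 16384 at a time.
rays = torch.randn(100_000, 3)
image = render_in_chunks(lambda r: torch.sigmoid(r), rays)
```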
Changing `n_pts_per_ray`:
| n_pts_per_ray = 256 | n_pts_per_ray = 128 | n_pts_per_ray = 64 |
|---|---|---|
| ![]() | ![]() | ![]() |
The accuracy and sharpness of the renders decrease as the number of point samples per ray decreases. More samples per ray capture finer detail, but at the cost of more memory and slower rendering.
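For context, `n_pts_per_ray` is the number of depth samples at which each ray's color and density are evaluated before compositing. A simplified sampler sketch; the near/far bounds and the uniform (non-stratified) spacing are assumptions:

```python
import torch

def sample_points_along_rays(origins, directions, n_pts_per_ray, near=2.0, far=6.0):
    # Uniformly spaced samples between `near` and `far` along each ray;
    # n_pts_per_ray sets how many samples each ray's color integral uses.
    depths = torch.linspace(near, far, n_pts_per_ray)                       # (n_pts,)
    points = origins[:, None, :] + depths[None, :, None] * directions[:, None, :]
    return points, depths                                                   # (n_rays, n_pts, 3)

# 1024 dummy rays sampled with 64 vs. 256 points per ray.
o = torch.zeros(1024, 3)
d = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
pts_64, _ = sample_points_along_rays(o, d, n_pts_per_ray=64)
pts_256, _ = sample_points_along_rays(o, d, n_pts_per_ray=256)
```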
After testing different network capacities, I found that larger `n_hidden_neurons_xyz` and `n_hidden_neurons_dir` values increase the network capacity and produce clearer, sharper output.
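These two fields set the hidden-layer widths of the position (density) branch and the direction (color) branch respectively, so increasing them grows the number of weights roughly quadratically. A hypothetical builder showing how such a width parameter typically maps onto the MLP:

```python
import torch.nn as nn

def make_mlp(in_dim, hidden_dim, n_layers, out_dim):
    # `hidden_dim` plays the role of n_hidden_neurons_xyz / n_hidden_neurons_dir:
    # the width of every hidden layer, i.e. the "network capacity" knob.
    layers = [nn.Linear(in_dim, hidden_dim), nn.ReLU()]
    for _ in range(n_layers - 1):
        layers += [nn.Linear(hidden_dim, hidden_dim), nn.ReLU()]
    layers.append(nn.Linear(hidden_dim, out_dim))
    return nn.Sequential(*layers)

# Illustrative example: a wider density branch (the input dim 63 assumes a
# harmonic embedding of xyz; both numbers are made up for the sketch).
density_mlp = make_mlp(in_dim=63, hidden_dim=256, n_layers=6, out_dim=1)
```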