David Russell (davidrus)
Assignment 3

4 Late days used


Collaboration credit: I received help from Gerard Maggiolino.

1.3 Ray Sampling

Both the grid and the rays match the expected result.

*(images: grid visualization and ray visualization)*
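The ray generation step can be sketched as below. This is a minimal numpy version, not the assignment's exact API: `get_rays`, the pinhole-intrinsics convention (camera looking down −z), and the argument names are all illustrative assumptions.

```python
import numpy as np

def get_rays(H, W, K, c2w):
    """Generate world-space rays for every pixel of an H x W image.

    K is a 3x3 intrinsics matrix; c2w is a 3x4 camera-to-world matrix.
    Conventions here (pixel centers, camera looking down -z) are one
    common choice, not necessarily the assignment's.
    """
    # Pixel-center grid in image coordinates.
    i, j = np.meshgrid(np.arange(W) + 0.5, np.arange(H) + 0.5, indexing="xy")
    # Unproject through the intrinsics into camera-space directions.
    dirs = np.stack([(i - K[0, 2]) / K[0, 0],
                     -(j - K[1, 2]) / K[1, 1],
                     -np.ones_like(i)], axis=-1)
    # Rotate directions into world space; all origins are the camera center.
    rays_d = dirs @ c2w[:3, :3].T
    rays_o = np.broadcast_to(c2w[:3, 3], rays_d.shape)
    return rays_o, rays_d
```

Every pixel shares the same origin, so visualizing `rays_o + t * rays_d` for a few values of `t` reproduces the expected fan of rays through the grid.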

1.4 Point Sampling

Below is the result from the point sampler. It looks as expected.

*(image: point sampler visualization)*
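The point sampler amounts to picking distances along each ray and evaluating `o + t * d`. A minimal sketch, assuming stratified sampling between near and far planes (the function name and signature are illustrative):

```python
import numpy as np

def sample_points(rays_o, rays_d, near, far, n_samples, rng=None):
    """Sample points along a ray between the near and far planes.

    With rng=None this returns bin midpoints; with an rng it jitters
    one sample uniformly inside each bin (stratified sampling).
    """
    # Evenly spaced bin edges between near and far.
    t = np.linspace(near, far, n_samples + 1)
    lower, upper = t[:-1], t[1:]
    if rng is not None:
        t_vals = lower + (upper - lower) * rng.random(n_samples)
    else:
        t_vals = 0.5 * (lower + upper)
    # Points along the ray: shape (n_samples, 3) for a single ray.
    pts = rays_o[..., None, :] + t_vals[:, None] * rays_d[..., None, :]
    return pts, t_vals
```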

1.5

Here is the visualization of the cube, which looks correct.

Cube render

In the reference image, the depth of transparent regions is set to zero. My implementation differs: rather than using the weighted depth, I took the weighted distance along the ray. As a result, the transparent region in my render appears 5 units away.

*(image: depth render)*
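The weighted-distance depth can be sketched from the standard volume-rendering weights. This is a sketch, not my exact code; in particular, compositing the leftover transmittance against the far plane is one assumed way to make fully transparent rays read as the far distance rather than zero.

```python
import numpy as np

def render_depth(sigmas, deltas, t_vals):
    """Expected ray distance from volume-rendering weights.

    sigmas: densities at the samples; deltas: segment lengths;
    t_vals: distances of the samples along the ray.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: probability the ray survives up to each sample.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    # Weighted distance along the ray. For an empty ray the weights sum
    # to ~0, so this alone gives depth ~0 (the reference behavior).
    depth = np.sum(weights * t_vals)
    return depth, weights
```

For a transparent ray, `weights.sum()` is near zero, so adding `(1 - weights.sum()) * far` composites the background at the far plane, which is consistent with the transparent region reading as 5 units away.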

2.2

Box center: (0.25, 0.25, 0.00)
Box side lengths: (2.00, 1.50, 1.50)

2.3

This result looks correct when compared to the reference gif.

*(gif: rendered result)*

3

Here is the result without any view-dependent effects. I used the default parameters with positional embedding. Note that fine structure is recovered fairly well.

*(image: NeRF render without view dependence)*
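The positional embedding mentioned above maps each coordinate through sines and cosines at increasing frequencies so the MLP can fit high-frequency detail. A minimal sketch (the frequency schedule and function name are assumptions, not the assignment's exact defaults):

```python
import numpy as np

def positional_encoding(x, n_freqs=6):
    """Sin/cos positional embedding of coordinates.

    x: (..., D) array of coordinates.
    Returns (..., D * 2 * n_freqs) with each coordinate expanded into
    sin/cos pairs at geometrically increasing frequencies.
    """
    freqs = (2.0 ** np.arange(n_freqs)) * np.pi
    angles = x[..., None] * freqs                    # (..., D, n_freqs)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)
```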

4.1

This is the result with view-dependent effects enabled. Instead of directly regressing color from the output of the shared layers, I concatenated the viewing direction to the feature, added one more layer of the same size as the output dimension, and then predicted the color. This seemed to be a good tradeoff: the network didn't appear to overfit, but it still produced specular flares on the ground and on the bucket of the vehicle. However, it lost the reflection that was captured by the initial result.

*(image: render with view-dependent effects)*
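The color head described above can be sketched as follows. This is a numpy stand-in, not my training code: the weight matrices, sizes, and `color_head` name are placeholders, and a real implementation would learn these parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
feat_dim, dir_dim, hidden = 128, 3, 128

# Placeholder (untrained) weights for the extra layer and the RGB head.
W_extra = rng.normal(size=(feat_dim + dir_dim, hidden)) * 0.01
W_rgb = rng.normal(size=(hidden, 3)) * 0.01

def color_head(features, view_dirs):
    """Predict color from shared features plus the viewing direction.

    Mirrors the scheme above: concatenate the normalized direction to
    the feature, pass through one extra layer of the same width, then
    regress RGB through a sigmoid.
    """
    d = view_dirs / np.linalg.norm(view_dirs, axis=-1, keepdims=True)
    h = np.concatenate([features, d], axis=-1)
    h = np.maximum(h @ W_extra, 0.0)               # extra layer + ReLU
    return 1.0 / (1.0 + np.exp(-(h @ W_rgb)))      # sigmoid -> RGB in [0, 1]
```

Keeping the direction out of the shared trunk and injecting it only into this small head is what limits how much capacity the network can spend on view-dependent effects.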

In general, spatial consistency is a strong regularizer. As far as I understand, any surface that is not a perfect mirror has some view-independent appearance due to the Lambertian portion of the reflection model. If the network instead relied heavily on the view direction, there would be no strong constraint on the feasible colors of a previously-seen point under novel views. Relying primarily on the spatial input therefore promotes better generalization, even for spatially close points that haven't been seen, since materials are often locally uniform in color. However, without enough expressive capacity given to the view-dependent branch, the network cannot represent strong view-dependent effects like specularities at all.