16-889: Assignment 4

Ben Eisner

1. Sphere Tracing (30pts)

Here is an image of the torus I rendered:

My general implementation strategy was to follow the slides as closely as possible. However, it was important to me that I vectorize things for maximum efficiency. Basically, I:

  1. Calculated the current distance from the implicit function.
  2. Determined whether we had reached the surface for each point.
  3. Stored surface + mask data based on this information.
  4. Updated the current t along each ray.
  5. Repeated until every ray was either out of bounds or at the surface, or the maximum number of steps was exceeded.
Initially, I thought it would be most efficient to progressively filter the rays down via slicing, so that we could stop computing for any ray that had either hit the surface or escaped the bounds. In practice, however, I found this was significantly slower than just batching everything and wasting some computation on converged rays, so I stuck with the wasteful-but-batched approach. There is probably an even more efficient way to do this.
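A minimal sketch of that batched loop, assuming a PyTorch `implicit_fn` that maps (N, 3) points to (N, 1) signed distances (the function and parameter names here are illustrative, not my exact implementation):

```python
import torch

def sphere_trace(implicit_fn, origins, directions, near=0.0, far=10.0,
                 max_iters=64, eps=1e-5):
    # origins, directions: (N, 3) ray origins and (unit) directions.
    t = torch.full((origins.shape[0], 1), near, device=origins.device)
    mask = torch.zeros_like(t, dtype=torch.bool)  # rays that have hit the surface

    for _ in range(max_iters):
        points = origins + t * directions      # current point along every ray
        dist = implicit_fn(points)             # signed distance at each point
        mask = mask | (dist.abs() < eps)       # mark rays that reached the surface
        # Advance every ray by its distance; converged rays take a zero step,
        # so re-evaluating them is wasted but harmless computation.
        t = t + torch.where(mask, torch.zeros_like(dist), dist)
        # Stop early once every ray has either converged or left the bounds.
        if bool((mask | (t > far)).all()):
            break

    points = origins + t * directions
    return points, mask & (t <= far)
```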

2. Optimizing a Neural SDF (30pts)

Here is the bunny that I trained and rendered:

MLP Architecture: I used a variant of the skip-connection architecture from last week's assignment, with the design changes from the NeuS paper (https://arxiv.org/pdf/2106.10689.pdf). Specifically, I used 8 hidden layers, each of width 256, with a skip connection from the input to the 4th layer, and positional encoding with dimension 6. Finally (and this helped the loss get lower), I used the Softplus activation with beta = 100 for every layer.

Eikonal Loss: I used the Eikonal loss described in the NeuS paper: the mean squared error between the norm of the SDF gradient and 1.
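A minimal sketch of that loss, assuming `sdf_model` maps (N, 3) points to (N, 1) signed distances (the names are illustrative):

```python
import torch

def eikonal_loss(points, sdf_model):
    # points: (N, 3) sample locations at which to enforce the eikonal constraint.
    points = points.clone().requires_grad_(True)
    sdf = sdf_model(points)
    # Gradient of the SDF with respect to the input points, via autograd.
    grad = torch.autograd.grad(
        outputs=sdf, inputs=points,
        grad_outputs=torch.ones_like(sdf),
        create_graph=True,
    )[0]
    # Penalize deviation of the gradient norm from 1.
    return ((grad.norm(dim=-1) - 1.0) ** 2).mean()
```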

3. VolSDF (30 pts)

To answer the questions about beta:
  1. How does high beta bias your learned SDF? What about low beta?

    High beta biases the SDF to be smoother (because density is distributed more evenly around the surface), whereas low beta biases the SDF to be sharper (because all the density is concentrated at the surface).

  2. Would an SDF be easier to train with volume rendering and low beta or high beta? Why?

    Higher beta is probably easier to train with, because the density is spread over a wider band around the surface, so more samples along each ray receive non-zero density (and therefore useful gradients), especially early in training.

  3. Would you be more likely to learn an accurate surface with high beta or low beta? Why?

    A low beta is more likely to yield an accurate surface, because the density is concentrated tightly around the true surface rather than spread out around it.
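For reference, here is a minimal sketch of the SDF-to-density conversion that beta controls, following the VolSDF formulation sigma = alpha * Psi_beta(-d), where Psi_beta is the CDF of a zero-mean Laplace distribution with scale beta (the function name is mine):

```python
import torch

def sdf_to_density(signed_distance, alpha, beta):
    # signed_distance: positive outside the surface, negative inside.
    s = -signed_distance
    # CDF of a zero-mean Laplace distribution with scale beta.
    psi = torch.where(
        s <= 0,
        0.5 * torch.exp(s / beta),
        1.0 - 0.5 * torch.exp(-s / beta),
    )
    # Small beta concentrates density near the surface; large beta spreads it out.
    return alpha * psi
```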

Here are the visualizations:

Geometry Render

I actually used the stock settings, and they worked pretty well on the first try: they provide a good tradeoff between sharpness and ease of training. However, some artifacts do remain, suggesting that I could probably tune the parameters/architecture a bit further.

4.1. Render a Large Scene with Sphere Tracing (10 pts)

I rendered a not-to-scale model of the solar system, with more than 20 torus and sphere primitives. I created a class to compose these primitives by taking the minimum over all of their SDFs. Here's the render:
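A minimal sketch of that composition, assuming each primitive is a module mapping (N, 3) points to (N, 1) distances (the class name is illustrative):

```python
import torch

class SDFUnion(torch.nn.Module):
    # Composes a list of primitive SDFs into one scene by taking the pointwise
    # minimum distance over all primitives, i.e. the union of the shapes.
    def __init__(self, primitives):
        super().__init__()
        self.primitives = torch.nn.ModuleList(primitives)

    def forward(self, points):
        dists = torch.stack([p(points) for p in self.primitives], dim=0)
        return dists.min(dim=0).values
```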