Assignment 4

Name: Manikandtan Chiral Kunjunni Kartha

Andrew ID: mchiralk

Late days used: 2


1. Sphere Tracing (30pts)

run

python -m a4.main --config-name=torus

(Torus render)

Implementation

Given the ray origins, we iteratively update the position of each point along the direction of its ray. The step size is the distance to the surface returned by implicit_fn. Points are updated continuously for max_iter iterations. We then generate a mask marking the pixels that reached the surface: my threshold was 0.1, and if a point's distance to the surface falls below that, we consider it to lie on the surface.
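A minimal sketch of this loop (the function name and the max_iter/eps defaults here are illustrative; implicit_fn is assumed to return the distance to the surface per point):

def sphere_trace(origins, directions, implicit_fn, max_iter=64, eps=0.1):
    # origins, directions: (N, 3) ray origins and unit ray directions
    points = origins.clone()
    for _ in range(max_iter):
        # march each point along its ray by the current distance to the surface
        points = points + implicit_fn(points) * directions
    # pixels whose final distance is below the threshold count as surface hits
    mask = implicit_fn(points).abs() < eps
    return points, mask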

2. Optimizing a Neural SDF (30pts)

run

python -m a4.main --config-name=points
(Renders: Point Cloud SDF geometry, Grid Rays)

MLP details:

Similar architecture to the one used with NeRF: 6 Linear + ReLU layers, with one skip connection at the 3rd layer (the embedded input is re-appended there). The distance head has no activation at the end, so the output is unbounded in both directions (positive and negative).

implicit_function:
  type: neural_surface

  n_harmonic_functions_xyz: 4

  n_layers_distance: 6
  n_hidden_neurons_distance: 128
  append_distance: [3]

  n_layers_color: 2
  n_hidden_neurons_color: 128
  append_color: []
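A minimal sketch of the distance network this config describes (class and variable names are assumptions; with 4 harmonic functions the xyz embedding has 3 × 2 × 4 = 24 features):

import torch
import torch.nn as nn

class NeuralSDF(nn.Module):
    # 6 Linear + ReLU layers with the embedded input re-appended at layer 3
    def __init__(self, embed_dim=24, hidden=128, n_layers=6, skip=(3,)):
        super().__init__()
        self.skip = skip
        layers = []
        for i in range(n_layers):
            in_dim = embed_dim if i == 0 else hidden
            if i in skip:
                in_dim += embed_dim  # skip connection: re-append the embedding
            layers.append(nn.Linear(in_dim, hidden))
        self.layers = nn.ModuleList(layers)
        self.head = nn.Linear(hidden, 1)  # no activation: unbounded distances

    def forward(self, embed):
        x = embed
        for i, layer in enumerate(self.layers):
            if i in self.skip:
                x = torch.cat([x, embed], dim=-1)
            x = torch.relu(layer(x))
        return self.head(x)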

Eikonal Loss:

Implemented the loss as the deviation of the gradient norm from 1, i.e. ||∇f(x)|| − 1, penalized toward zero at the sampled points (the Eikonal constraint satisfied by a true SDF).
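A sketch of this loss, assuming the per-point gradients are already computed (e.g. with torch.autograd.grad) and the deviation is reduced with a mean absolute value:

def eikonal_loss(gradients):
    # gradients: (N, 3) SDF gradients at sampled points
    # penalize the deviation of the gradient norm from 1
    return (gradients.norm(dim=-1) - 1.0).abs().mean()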

3. VolSDF (30 pts)

run

python -m a4.main --config-name=volsdf
alpha   beta    geometry                color
10.0    0.005   (Bulldozer geometry)    (Bulldozer color)
5.0     0.5     (Bulldozer geometry)    (Bulldozer color)

Explanation of alpha and beta

Alpha models the density value at the implicit surface, whereas beta defines how smoothly the density varies from the inside to the outside of the surface.
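Concretely, VolSDF defines the density as a scaled Laplace CDF applied to the negated signed distance, sigma(x) = alpha * Psi_beta(-d(x)); a minimal sketch of that conversion (the function name is an assumption):

import torch

def sdf_to_density(signed_distance, alpha, beta):
    # VolSDF: density = alpha * LaplaceCDF(-sdf) with scale beta
    s = -signed_distance
    psi = torch.where(
        s <= 0,
        0.5 * torch.exp(s / beta),
        1.0 - 0.5 * torch.exp(-s / beta),
    )
    return alpha * psi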

  1. How does high beta bias your learned SDF? What about low beta?

    A low beta forces the network to learn a sharp density change at the surface, while a high beta makes the density transition smooth. A very high beta would lead to a near-constant density prediction, as the gradients would vanish, while a very low beta would drive the gradients near the surface toward infinity, and the network would not learn anything meaningful.

  2. Would an SDF be easier to train with volume rendering and low beta or high beta? Why?

    A higher beta would make training easier: more sample points around the surface receive non-zero density, so more pixels get a non-background color value and contribute useful gradients.

  3. Would you be more likely to learn an accurate surface with high beta or low beta? Why?

    A low beta would likely learn a more accurate surface, as the density at a point is high only when it is very close to the surface.


Network details

Same network as Q2, with additional layers and a sigmoid at the end for color prediction. A low beta (0.05 or lower) gives visually better results, as described above.
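A sketch of that color head (layer sizes follow the config in Q2; the sigmoid bounds RGB to [0, 1]):

import torch.nn as nn

# two-layer color head on top of the 128-d distance features
color_head = nn.Sequential(
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 3), nn.Sigmoid(),
)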

4. Neural Surface Extras (CHOOSE ONE! More than one is extra credit)

4.1. Render a Large Scene with Sphere Tracing (10 pts)

I tried to make a tornado out of torus shapes: I added 21 tori with rainbow coloring, each with a position and radius offset, to simulate a tornado.
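A sketch of how such a scene can be composed (the torus SDF formula is standard; the specific offsets and radii here are illustrative, not my exact values; the union of SDFs is a pointwise minimum):

import torch

def torus_sdf(points, center, radii):
    # signed distance to a torus in the xy-plane (major radius R, tube radius r)
    R, r = radii
    p = points - center
    ring = p[..., :2].norm(dim=-1) - R   # distance to the ring in the xy-plane
    q = torch.stack([ring, p[..., 2]], dim=-1)
    return q.norm(dim=-1) - r

def tornado_sdf(points):
    # union of 21 stacked tori: pointwise minimum over the individual SDFs
    dists = []
    for i in range(21):
        center = torch.tensor([0.0, 0.0, 0.15 * i])   # stack along z
        radii = (0.3 + 0.05 * i, 0.08)                # widen toward the top
        dists.append(torus_sdf(points, center, radii))
    return torch.stack(dists, dim=-1).min(dim=-1).values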

run

python -m a4.main --config-name=tornado

tornado

4.2 Fewer Training Views (10 pts)

I chose 20 views (equally spaced among the original 100) and rendered both VolSDF and NeRF.

num views   geometry                VolSDF                  NeRF
20          (Bulldozer geometry)    (Bulldozer geometry)    (Bulldozer color)

VolSDF should be able to render the scene well with far fewer views.

4.3 Alternate SDF to Density Conversions (10 pts)

I implemented the SDF to density equation from the NeuS paper in the following way:

import torch

def sdf_to_density_neus(signed_distance):
    # Q4.3: NeuS-style conversion -- logistic density with scale parameter s
    s = 50
    density = s * torch.exp(-s * signed_distance) / (1 + torch.exp(-s * signed_distance)) ** 2
    return density
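This expression is the logistic density phi_s(d) = s * e^(-s*d) / (1 + e^(-s*d))^2, i.e. the derivative of the sigmoid 1 / (1 + e^(-s*d)). It peaks at the zero level set, so density concentrates at the surface, and a larger s makes the peak sharper (in the NeuS paper s is a trainable parameter; here it is fixed at 50).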
s    geometry
50   (Grid Rays render)