Name: Manikandtan Chiral Kunjunni Kartha
Andrew ID: mchiralk
Late days used: 2
```bash
python -m a4.main --config-name=torus
```
Given the ray origins, we iteratively march each point along its ray direction. The step size is the current distance to the surface returned by `implicit_fn`, so points far from the surface take large steps and points near it take small ones. The points are updated for `max_iters` iterations. We then generate a mask marking which pixels reached the surface: if the final distance to the surface is below a threshold (I used 0.1), the point is considered to be on the surface.
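A minimal sketch of that loop, assuming `implicit_fn` returns the signed distance for a batch of points (the names `origins`, `directions`, and `max_iters` here are illustrative):

```python
import torch

def sphere_trace(implicit_fn, origins, directions, max_iters=64, eps=0.1):
    # March each point along its ray; the step size equals the current
    # signed distance to the surface, so steps shrink near the surface.
    points = origins.clone()
    for _ in range(max_iters):
        dist = implicit_fn(points)           # (N, 1) distance to surface
        points = points + dist * directions  # step along the ray
    # Points whose final distance is below the threshold count as hits.
    mask = implicit_fn(points) < eps
    return points, mask
```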
```bash
python -m a4.main --config-name=points
```
Point Cloud | SDF geometry |
---|---|
The architecture is similar to the one used for NeRF: 6 Linear + ReLU layers with one skip connection at the 3rd layer. The distance head has no activation at the end, so the output is unbounded in both directions (positive and negative).
```yaml
implicit_function:
  type: neural_surface
  n_harmonic_functions_xyz: 4
  n_layers_distance: 6
  n_hidden_neurons_distance: 128
  append_distance: [3]
  n_layers_color: 2
  n_hidden_neurons_color: 128
  append_color: []
```
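A minimal sketch of a network matching this config (the class and argument names are illustrative, not the exact ones from the starter code; the input is assumed to be the harmonic embedding of xyz):

```python
import torch
import torch.nn as nn

class NeuralSurface(nn.Module):
    def __init__(self, embed_dim, hidden=128, n_layers=6, skip=(3,)):
        super().__init__()
        self.skip = set(skip)
        layers = []
        for i in range(n_layers):
            in_dim = embed_dim if i == 0 else hidden
            if i in self.skip:
                in_dim += embed_dim  # re-append the embedded input (skip connection)
            layers.append(nn.Linear(in_dim, hidden))
        self.layers = nn.ModuleList(layers)
        self.distance_head = nn.Linear(hidden, 1)  # no activation: unbounded SDF
        self.color_head = nn.Sequential(           # 2-layer color branch
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),    # colors in [0, 1]
        )

    def forward(self, x_embed):
        h = x_embed
        for i, layer in enumerate(self.layers):
            if i in self.skip:
                h = torch.cat([h, x_embed], dim=-1)
            h = torch.relu(layer(h))
        return self.distance_head(h), self.color_head(h)
```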
Implemented the eikonal loss as ||∇f(x)|| - 1, penalizing deviation of the SDF gradient norm from 1 at sampled points.
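As a sketch, assuming `gradients` holds ∇f(x) for a batch of sampled points (e.g. from `torch.autograd.grad`); whether the deviation is penalized with an absolute value or a square is a choice, the absolute version is shown:

```python
import torch

def eikonal_loss(gradients):
    # Encourage unit gradient norm everywhere, i.e. a valid SDF:
    # mean over points of | ||grad f(x)||_2 - 1 |.
    return (gradients.norm(dim=-1) - 1.0).abs().mean()
```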
```bash
python -m a4.main --config-name=volsdf
```
alpha | beta | Point Cloud | SDF geometry |
---|---|---|---|
10.0 | 0.005 | | |
5.0 | 0.5 | | |
Alpha seems to model the density value the field approaches inside the surface, whereas beta seems to define how smoothly the density varies from inside to outside of the surface.
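For reference, the VolSDF conversion is density = alpha * Psi_beta(-d), where Psi_beta is the CDF of a zero-mean Laplace distribution with scale beta. A sketch (the defaults here are illustrative):

```python
import torch

def sdf_to_density_volsdf(signed_distance, alpha=10.0, beta=0.05):
    # Density approaches alpha deep inside the surface and 0 far outside;
    # beta sets how smoothly it transitions across the zero level set.
    s = -signed_distance
    half_exp = 0.5 * torch.exp(-s.abs() / beta)  # numerically safe Laplace CDF
    psi = torch.where(s <= 0, half_exp, 1.0 - half_exp)
    return alpha * psi
```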
How does high beta bias your learned SDF? What about low beta?
A low beta forces the system to learn a sharp density change at the surface, while a high beta makes the density transition smooth. A very high beta leads to a nearly constant density regardless of the SDF, so the gradients carry almost no signal, while a very low beta causes the gradients near the surface to blow up, and the network won't predict anything meaningful.
Would an SDF be easier to train with volume rendering and low beta or high beta? Why?
A higher beta makes training easier: the density is non-zero in a wider band around the surface, so more samples along each ray contribute a non-background color to the pixel and receive gradient signal.
Would you be more likely to learn an accurate surface with high beta or low beta? Why?
A low beta would probably learn a more accurate surface, as the density at a point is high only when it is very close to the zero level set of the SDF.
Same network as Q2, with additional layers and a sigmoid at the end for color prediction. Low beta (0.05 or lower) gives visually better results, as described above.
I tried to make a tornado out of torus shapes: I added 21 tori with rainbow coloring, offset in position and major radius so the stack tapers like a funnel.
```bash
python -m a4.main --config-name=tornado
```
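A sketch of how such a scene might be composed, assuming a standard torus SDF and taking the union of primitives as a pointwise min (all names are illustrative; the per-torus rainbow colors would be assigned from the torus index and are omitted here):

```python
import torch

def torus_sdf(points, center, R, r):
    # Signed distance to a torus in the xz-plane: major radius R, tube radius r.
    q = points - center
    xz = torch.stack([q[..., 0], q[..., 2]], dim=-1).norm(dim=-1)
    return torch.stack([xz - R, q[..., 1]], dim=-1).norm(dim=-1) - r

def tornado_sdf(points, n_tori=21):
    # Stack tori vertically, widening the major radius with height
    # so the silhouette tapers like a funnel.
    dists = []
    for i in range(n_tori):
        center = torch.tensor([0.0, 0.1 * i, 0.0])
        R = 0.2 + 0.05 * i
        dists.append(torus_sdf(points, center, R, r=0.05))
    return torch.stack(dists, dim=-1).min(dim=-1).values  # union of SDFs
```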
I chose 20 views (equally spaced among the original 100 views) and rendered both VolSDF and NeRF.
num views | geometry | VolSDF | NeRF |
---|---|---|---|
20 | | | |
VolSDF should be able to render reasonable views from far fewer training views, since the SDF parameterization regularizes the geometry more strongly than NeRF's unconstrained density field.
I implemented the SDF-to-density equation from the NeuS paper as follows:

```python
import torch

def sdf_to_density_neus(signed_distance, s=50.0):
    # (Q 4.3): NeuS converts signed distance to density using the
    # logistic density phi_s(d) = s * e^(-s*d) / (1 + e^(-s*d))^2,
    # where s controls how sharply the density peaks at the surface.
    exp_term = torch.exp(-s * signed_distance)
    return s * exp_term / (1 + exp_term) ** 2
```
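Note that this is the logistic density function phi_s(d), the derivative of the sigmoid, which peaks at d = 0; s plays a role similar to 1/beta in VolSDF, with larger s concentrating the density more tightly around the surface.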
s | Point Cloud | SDF geometry |
---|---|---|
50 | | |