Assignment 4
Submitted by: Naveen Venkat (nvenkat)
Late Days: 2
1. Sphere Tracing (30pts)
Result
Torus | Sphere |
---|---|
![]() | ![]() |
Implementation Details
python -m a4.main --config-name=torus
python -m a4.main --config-name=sphere
The `sphere_tracing` function accepts an SDF function and rays parameterized by (origin, direction), and computes the point of intersection of each ray with the implicit surface (if it exists). At each iteration, points on the rays are marched forward by their current SDF value; a point is only updated (marched) if its SDF is above a small threshold (eps = 1e-4).
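A minimal sketch of this marching loop is shown below. The names and signature (`sphere_trace`, `implicit_fn`, `origins`, `directions`, `max_iters`) are illustrative, assuming a batched SDF that maps `(N, 3)` points to `(N, 1)` signed distances, and are not necessarily the exact code in `a4`:

```python
import torch

def sphere_trace(implicit_fn, origins, directions, max_iters=64, eps=1e-4):
    # t accumulates the distance marched along each ray; points start at the origins.
    t = torch.zeros(origins.shape[0], 1, device=origins.device)
    active = torch.ones(origins.shape[0], 1, dtype=torch.bool, device=origins.device)
    points = origins.clone()
    for _ in range(max_iters):
        sdf = implicit_fn(points)             # (N, 1) signed distances
        active = active & (sdf > eps)         # rays with sdf <= eps have hit the surface
        t = torch.where(active, t + sdf, t)   # march only the still-active rays
        points = origins + t * directions
    return points, ~active                    # intersection points and a hit mask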
2. Optimizing a Neural SDF (30pts)
Result
Point Cloud | Rendered Image |
---|---|
![]() | ![]() |
Implementation Details
python -m a4.main --config-name=points
MLP: The MLP contains 6 `Linear` layers with `ReLU` non-linearities. The point coordinates are first transformed to harmonic embeddings (with 4 frequencies) and passed to the MLP, which predicts the signed distance.

Loss: The model is trained with the SDF loss, which enforces zero distance at surface points, and the eikonal loss, which encourages the norm of the SDF gradient at the sampled points to be unity.

The number of layers and the other hyperparameters are set in `a4/configs/points.yaml`.
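The embedding and the two loss terms can be sketched as follows (a minimal illustration of the idea; the function names and the loss weighting in `a4` may differ):

```python
import torch

def harmonic_embedding(x: torch.Tensor, n_freqs: int = 4) -> torch.Tensor:
    # Map coordinates to [sin(2^k x), cos(2^k x)] for k = 0 .. n_freqs - 1.
    freqs = 2.0 ** torch.arange(n_freqs, device=x.device)
    angles = x[..., None] * freqs                      # (..., 3, n_freqs)
    emb = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return emb.flatten(start_dim=-2)                   # (..., 3 * 2 * n_freqs)

def sdf_loss(sdf_on_surface: torch.Tensor) -> torch.Tensor:
    # Points sampled from the point cloud lie on the surface,
    # so their predicted signed distance should be zero.
    return sdf_on_surface.abs().mean()

def eikonal_loss(gradients: torch.Tensor) -> torch.Tensor:
    # The gradient of a true SDF has unit norm everywhere; penalize the
    # deviation of the gradient norm from 1 at randomly sampled points.
    return ((gradients.norm(dim=-1) - 1.0) ** 2).mean()
```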
3. VolSDF (30 pts)
Result
Color Rendering | Geometry |
---|---|
![]() | ![]() |
Implementation Details
python -m a4.main --config-name=volsdf
Set `renderer.alpha = 10.0` and `renderer.beta = 0.05` in `a4/configs/volsdf.yaml` for the above result.

As described in the sections below, these settings seem to work reasonably well for this scene. An appropriate choice of `alpha` and `beta` allows the density function to be concentrated at the surfaces, while preventing gaps / holes due to low density.
Hyperparameter Tuning: alpha
Here, beta = 0.05
alpha | Color Rendering | Geometry |
---|---|---|
1.0 | ![]() | ![]() |
5.0 | ![]() | ![]() |
10.0 | ![]() | ![]() |
50.0 | ![]() | ![]() |
Observation: The lower the `alpha`, the coarser the detail and the more diffused the density function.
Hyperparameter Tuning: beta
Here, alpha = 10.0
beta | Color Rendering | Geometry |
---|---|---|
0.002 | ![]() | ![]() |
0.005 | ![]() | ![]() |
0.01 | ![]() | ![]() |
0.02 | ![]() | ![]() |
0.05 | ![]() | ![]() |
0.1 | ![]() | ![]() |
Observation: The lower the `beta`, the thinner the density function and the more detail appears.
Discussion
1. How does high `beta` bias your learned SDF? What about low `beta`?
density(s) = alpha * {
    1 - 0.5 * exp(s / beta);    if s <= 0  (inside the surface)
    0.5 * exp(-s / beta);       if s > 0   (outside the surface)
}

where s is the signed distance (positive outside the surface).
As can be seen above, `beta` controls the smoothness of the density function by adjusting the width of its falloff around the surface. A high `beta` yields a wider density function, which increases the occupancy of points in space; as a result, the learned SDF is expected to be smoother, i.e. the surface of the object is more diffused, causing a blurred reconstruction. A low `beta` yields a stricter occupancy regime (densities highly concentrated near the surface), which causes sharper surfaces (thinner, but more detailed). In this way, `beta` biases the model towards thicker surfaces: the higher the value of `beta`, the more density / occupancy there is around the surface.
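In code, this SDF-to-density conversion can be sketched as follows (a minimal version of the Laplace-CDF density used by VolSDF; the function name and the clamping are illustrative):

```python
import torch

def sdf_to_density(sdf: torch.Tensor, alpha: float, beta: float) -> torch.Tensor:
    # VolSDF density: alpha * Psi_beta(-sdf), where Psi_beta is the CDF of a
    # zero-mean Laplace distribution with scale beta. Inside the surface
    # (sdf <= 0) the density approaches alpha; outside it decays with rate 1/beta.
    inside = 1.0 - 0.5 * torch.exp(torch.clamp(sdf, max=0.0) / beta)
    outside = 0.5 * torch.exp(-torch.clamp(sdf, min=0.0) / beta)
    return alpha * torch.where(sdf <= 0, inside, outside)
```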
2. Would an SDF be easier to train with volume rendering and low `beta` or high `beta`? Why?
We expect an SDF model to learn the signed distance to the surface. With a very small `beta`, the density changes very sharply near the surface (its slope with respect to the SDF scales as 1/`beta`), which can cause large, unstable gradients during training; in this sense, it is difficult to train with a very low `beta`. On the other hand, with large values of `beta`, one observes thicker (more diffused) surfaces, which leads to the loss of fine details, so it is also difficult to train with a very large `beta`. A carefully chosen `beta` therefore helps learn fine structures without training difficulties.

Specifically, if the geometry of the object is simple (e.g., a sphere-like blob), then a higher value of `beta` regularizes the model towards smooth geometry, which is favorable. For more complex structures (with fine details), it is easier to train with a lower value of `beta` because of the aforementioned effect.
3. Would you be more likely to learn an accurate surface with high `beta` or low `beta`? Why?
One would learn a more accurate surface with a low `beta`, as discussed above. Specifically, with a narrower density regime (density concentrated near the surfaces), the model can describe the (complex) fine details of the surfaces in the scene, and it receives more meaningful gradients for learning the surface representation.
4. Neural Surface Extras (CHOOSE ONE! More than one is extra credit)
4.1. Render a Large Scene with Sphere Tracing (10 pts)
Implementation Details
python -m a4.main --config-name=composed
Here, I rendered a pyramid of 55 spheres (5^2 + 4^2 + 3^2 + 2^2 + 1^2). The class `ComposedSDF` in `a4/implicit.py` defines this scene.
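The composition itself boils down to taking the element-wise minimum over the individual sphere SDFs (a union of shapes). A minimal sketch of this idea is shown below; it is not necessarily the exact `ComposedSDF` implementation in `a4/implicit.py`:

```python
import torch

class UnionSDF(torch.nn.Module):
    """Union of several SDFs: the signed distance to the union of shapes
    is the element-wise minimum of the individual signed distances."""

    def __init__(self, sdfs):
        super().__init__()
        self.sdfs = torch.nn.ModuleList(sdfs)

    def forward(self, points):
        # points: (N, 3); each sub-SDF returns (N, 1) signed distances
        distances = torch.stack([sdf(points) for sdf in self.sdfs], dim=0)
        return distances.min(dim=0).values
```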
4.2 Fewer Training Views (10 pts)
Results
Method | num_views = 100 | num_views = 20 | num_views = 10 |
---|---|---|---|
VolSDF | ![]() | ![]() | ![]() |
NeRF | ![]() | ![]() | ![]() |
VolSDF is observed to be more robust for few-view synthesis; however, NeRF yields qualitatively better results when a larger number of views is available.
Implementation Details
python -m a4.main --config-name=volsdf_20
For the VolSDF renderings, set the configuration `data.num_views = 20` (or any other number). If the key `data.num_views` is deleted, the model is trained on all views by default. For the NeRF renderings, I used assignment 3's codebase with a modified dataset that uses only `num_views` views, chosen at random once.
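The view subsampling can be as simple as the following sketch (the names `subsample_views`, `all_images`, and `all_cameras` are hypothetical, not assignment 3's actual dataset code):

```python
import torch

def subsample_views(all_images, all_cameras, num_views, seed=0):
    # Pick a fixed random subset of training views once, so every epoch
    # sees the same reduced set of images / cameras.
    generator = torch.Generator().manual_seed(seed)
    idx = torch.randperm(len(all_images), generator=generator)[:num_views]
    return [all_images[i] for i in idx], [all_cameras[i] for i in idx]
```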
4.3 Alternate SDF to Density Conversions (10 pts)
Here, I implemented the naive solution from the NeuS paper. Specifically, the function `sdf_to_density_neus_naive` in `a4/renderer.py` yields the densities given the SDF values. The hyperparameter `s` was tuned to refine the results; however, this hyperparameter was hard to tune. The best result is shown below.
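In the naive NeuS formulation, the density is the logistic density (the derivative of the sigmoid) applied to the SDF, with `s` controlling its sharpness. A minimal sketch of this mapping (illustrative; the actual `sdf_to_density_neus_naive` may differ in details):

```python
import torch

def sdf_to_density_neus_naive(sdf: torch.Tensor, s: float) -> torch.Tensor:
    # Logistic density phi_s(x) = s * e^{-s x} / (1 + e^{-s x})^2,
    # written via the sigmoid for numerical stability.
    sig = torch.sigmoid(s * sdf)
    return s * sig * (1.0 - sig)
```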
Results and Hyperparameter Tuning: s
s | Color Rendering | Geometry |
---|---|---|
1 | ![]() | ![]() |
5 | ![]() | ![]() |
10 | ![]() | ![]() |
50 | ![]() | ![]() |
100 | ![]() | ![]() |
500 | ![]() | ![]() |
Implementation Details
python -m a4.main --config-name=volsdf_neus
In `a4/configs/volsdf_neus`, set the configuration `renderer.use_neus_naive = True`, along with an appropriate value of `s` defined by the key `renderer.s_value`.