Files in `a4/` have been merged into its root folder, so the commands to run for parts 1-3 are:
- `python main --config-name=torus`
- `python main --config-name=points`
- `python main --config-name=volsdf`
- `python main --config-name=trees`
Three hyperparameters are added to render efficiently: the maximum distance, the minimum distance, and the maximum number of iterations.
The tracing loop repeats until the maximum number of iterations is reached. After that, a ray is marked as valid (a hit) if its SDF value is smaller than the minimum distance.
One extra trick is to terminate tracing early once every SDF value is either larger than the maximum distance or smaller than the minimum distance. If a ray moves too far away from the closest surface, we assume its SDF will increase monotonically from then on. For the torus, this trick speeds up rendering by at least 2x. A minimal sketch of the loop is shown below.
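The sketch below illustrates this loop; the function and parameter names (`sphere_trace`, `max_dist`, `min_dist`, `max_iters`) are hypothetical and not the actual assignment API.

```python
import torch

def sphere_trace(sdf, origins, directions, max_dist=10.0, min_dist=1e-4, max_iters=64):
    """Marches rays along `directions` from `origins` toward the SDF zero level set.
    origins, directions: (N, 3); sdf maps (N, 3) points to (N,) signed distances."""
    t = torch.zeros(origins.shape[0], device=origins.device)
    for _ in range(max_iters):
        points = origins + t[:, None] * directions
        d = sdf(points)
        # Early termination: every ray has either converged to the surface or escaped.
        if torch.all((d < min_dist) | (d > max_dist)):
            break
        t = t + d
    points = origins + t[:, None] * directions
    mask = sdf(points) < min_dist  # rays that ended up closer than min_dist are valid hits
    return points, mask
```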
My MLP architecture for this question consists of a harmonic embedding layer, 6 hidden fully connected layers with 128 units each, and a final fully connected layer with a 1-dimensional output; a sketch is given below.
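A minimal PyTorch sketch of that architecture, assuming a simple sin/cos harmonic embedding (the class names here are hypothetical, not the course starter code):

```python
import torch
import torch.nn as nn

class HarmonicEmbedding(nn.Module):
    """Maps (N, 3) points to sin/cos features at several octave frequencies."""
    def __init__(self, n_harmonics=4):
        super().__init__()
        self.register_buffer("freqs", 2.0 ** torch.arange(n_harmonics))

    def forward(self, x):
        angles = x[..., None] * self.freqs              # (N, 3, n_harmonics)
        emb = torch.cat([angles.sin(), angles.cos()], dim=-1)
        return emb.flatten(start_dim=-2)                # (N, 6 * n_harmonics)

class NeuralSDF(nn.Module):
    """Harmonic embedding -> 6 hidden FC layers of width 128 -> 1-d signed distance."""
    def __init__(self, n_harmonics=4, hidden=128, n_hidden_layers=6):
        super().__init__()
        self.embed = HarmonicEmbedding(n_harmonics)
        dims = [6 * n_harmonics] + [hidden] * n_hidden_layers
        layers = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            layers += [nn.Linear(d_in, d_out), nn.ReLU()]
        self.hidden_layers = nn.Sequential(*layers)
        self.out = nn.Linear(hidden, 1)

    def forward(self, points):
        return self.out(self.hidden_layers(self.embed(points)))
```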
The Eikonal loss is implemented as the mean of the absolute deviation of the gradients' L2 norms from 1, for example:
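In code this amounts to a single line; here `gradients` is assumed to be an (N, 3) tensor of SDF gradients at the sampled points:

```python
# Penalize |‖∇f(x)‖ - 1| so the network stays close to a true signed distance field.
eikonal_loss = (gradients.norm(dim=-1) - 1.0).abs().mean()
```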
A: A low beta makes the density rise more sharply as a point moves from the outside of the surface to the inside, while a high beta flattens the density transition along the SDF's gradient direction.
A: The SDF should be easier to train with a high beta, because the flatter density transition spreads gradient information over a wider band around the surface, so the SDF is less likely to get stuck in local minima.
A: Surfaces should be more accurate after training with a low beta, because the density difference between the exterior and interior of solids is larger, so surface inaccuracies induce larger losses and gradient penalties.
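For context, these answers refer to the SDF-to-density conversion used in VolSDF, where the density is alpha times the Laplace CDF (with scale beta) of the negated signed distance. A minimal sketch, assuming that standard formulation:

```python
import torch

def sdf_to_density(signed_distance, alpha=10.0, beta=0.05):
    """VolSDF-style density: alpha * LaplaceCDF(-sdf; scale=beta).
    A low beta gives a sharp density jump at the surface; a high beta smooths the transition."""
    s = -signed_distance
    cdf = torch.where(
        s <= 0,
        0.5 * torch.exp(s.clamp(max=0.0) / beta),
        1.0 - 0.5 * torch.exp(-s.clamp(min=0.0) / beta),
    )
    return alpha * cdf
```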
| Geometry | Visual |
|---|---|
| ![]() | ![]() |
I tried tuning alpha, beta, and the neural network architecture. For the results above, alpha and beta stay at their default values (10.0 and 0.05 respectively), while the network is made more complex. The detailed settings are listed below: both the color and the distance networks have an input skip connection, and there are four more layers in the color network, which makes the network more expressive.
    n_layers_distance: 6
    n_hidden_neurons_distance: 128
    append_distance: 4
    n_layers_color: 6
    n_hidden_neurons_color: 128
    append_color: 4
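If I read these settings correctly, `append_distance` and `append_color` give the hidden-layer index at which the embedded input is concatenated back in (a NeRF-style skip connection). The sketch below illustrates that idea with hypothetical names, not the actual starter-code implementation:

```python
import torch
import torch.nn as nn

class MLPWithInputSkip(nn.Module):
    """MLP that re-concatenates the (embedded) input at the given layer indices."""
    def __init__(self, in_dim, hidden=128, n_layers=6, skip_at=(4,)):
        super().__init__()
        self.skip_at = set(skip_at)
        layers = []
        for i in range(n_layers):
            d_in = in_dim if i == 0 else hidden
            if i in self.skip_at:
                d_in += in_dim                 # extra width for the re-injected input
            layers.append(nn.Linear(d_in, hidden))
        self.layers = nn.ModuleList(layers)

    def forward(self, x):
        h = x
        for i, layer in enumerate(self.layers):
            if i in self.skip_at:
                h = torch.cat([h, x], dim=-1)  # skip connection from the input
            h = torch.relu(layer(h))
        return h
```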
Following the SDFs described on the recommended website, I implemented the exact Cone, Capped Cylinder, and Plane SDFs in PyTorch. Combining them, I defined a `TreesSDF` class that renders 21 objects as four pine trees on a plane.
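As an illustration, here is one possible PyTorch translation of one of those exact primitives (the capped cylinder) along with the min-based union used to combine primitives; the function names are hypothetical and not necessarily those in my code:

```python
import torch

def sd_capped_cylinder(points, height, radius):
    """Exact SDF of a y-axis-aligned capped cylinder centered at the origin.
    points: (N, 3) tensor -> (N,) signed distances."""
    d_radial = torch.norm(points[:, [0, 2]], dim=-1) - radius
    d_axial = points[:, 1].abs() - height
    d = torch.stack([d_radial, d_axial], dim=-1)
    outside = torch.norm(d.clamp(min=0.0), dim=-1)   # distance when outside the cylinder
    inside = d.max(dim=-1).values.clamp(max=0.0)     # negative distance when inside
    return outside + inside

def sdf_union(*distances):
    """Union of several shapes: the minimum signed distance at every point."""
    return torch.stack(distances, dim=-1).min(dim=-1).values
```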