Author: Zhe Huang (zhehuang@andrew.cmu.edu)
!pwd
/home/zheh/Documents/CMU_16_889_22spring/assignment3
!which python
/home/zheh/anaconda3/envs/l3d/bin/python
import torch
import pytorch3d
import imageio
import numpy as np
!pip install -q mediapy
import mediapy as media
Implementation description: for every ray, we march from z_near to z_far, at each iteration stepping forward by the SDF value at the current point. Once the SDF drops below a small threshold $\epsilon$, the ray has hit the surface and is masked off from the search. We repeat this process until every ray has either converged or reached z_far.
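A minimal sketch of this loop in PyTorch (the name sphere_trace, the implicit_fn callable, and the tensor shapes are my assumptions, not the starter code's exact interface):

import torch

def sphere_trace(implicit_fn, origins, directions, near, far,
                 eps=1e-5, max_iters=64):
    # origins, directions: (N, 3); march each ray from `near` toward `far`.
    t = torch.full(origins.shape[:1], near, device=origins.device)
    mask = torch.zeros_like(t, dtype=torch.bool)   # True once a ray has converged
    for _ in range(max_iters):
        points = origins + t[:, None] * directions
        sdf = implicit_fn(points).squeeze(-1)      # (N,); assumes an (N, 1) output
        mask = mask | (sdf.abs() < eps)            # mask converged rays off
        active = (~mask) & (t < far)               # rays still searching
        if not active.any():
            break
        t = torch.where(active, t + sdf, t)        # step by the SDF value
    return origins + t[:, None] * directions, mask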
media.show_video(media.read_video('images/part_1.gif'), codec='gif', title='torus', height=500)
Network Structure: harmonic embedding of the input points, followed by 6 FC layers and 1 output layer (a sketch follows the config below).
Hyperparameters: I use the default settings provided in the config, which are
n_harmonic_functions_xyz: 4
n_layers_distance: 6
n_hidden_neurons_distance: 128
append_distance: []
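A minimal sketch of this architecture in PyTorch (SDFNetwork and its internals are my naming; I assume a plain ReLU MLP over a sin/cos harmonic embedding):

import torch
import torch.nn as nn

class SDFNetwork(nn.Module):
    def __init__(self, n_harmonic=4, n_layers=6, n_hidden=128):
        super().__init__()
        # Harmonic embedding: sin/cos of the input at log-spaced frequencies.
        self.freqs = 2.0 ** torch.arange(n_harmonic)
        dim = 2 * 3 * n_harmonic
        layers = []
        for _ in range(n_layers):
            layers += [nn.Linear(dim, n_hidden), nn.ReLU()]
            dim = n_hidden
        self.mlp = nn.Sequential(*layers)
        self.out = nn.Linear(n_hidden, 1)  # single output: signed distance

    def forward(self, x):  # x: (N, 3)
        e = x[..., None] * self.freqs.to(x.device)             # (N, 3, F)
        e = torch.cat([e.sin(), e.cos()], dim=-1).flatten(-2)  # (N, 6F)
        return self.out(self.mlp(e))                           # (N, 1)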
Eikonal loss: $$ \mathcal{L}_{\text{eikonal}} = \frac{1}{N}\sum_{i=1}^{N}\left(\|\texttt{grad}_i\|_2 - 1\right)^2 $$ where $\texttt{grad}_i$ is the gradient of the SDF at the $i$-th sampled point; the loss pushes the gradient norm toward 1 everywhere.
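This loss is one line in PyTorch (a sketch; gradients is assumed to be an (N, 3) tensor of SDF gradients at the sampled points):

def eikonal_loss(gradients):
    # Penalize deviation of the SDF gradient norm from 1.
    return ((gradients.norm(dim=-1) - 1.0) ** 2).mean()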
p2_gifs = {
'input': media.read_video('images/part_2_input.gif'),
'result': media.read_video('images/part_2.gif'),
}
media.show_videos(p2_gifs, codec='gif', height=500)
Network Structure: the same as in Part 2, with the last FC layer adapted for color output (one possible reading is sketched after the config below).
Network hyperparameters: to learn a better model, I increase the model capacity a little bit.
n_harmonic_functions_xyz: 6
n_layers_distance: 9
n_hidden_neurons_distance: 128
append_distance: []
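A sketch of this adaptation (my interpretation, not the repo's exact code: the trunk from the SDFNetwork sketch above is shared, and a sigmoid color head is added next to the distance head):

import torch
import torch.nn as nn

class SDFColorNetwork(SDFNetwork):
    def __init__(self, n_harmonic=6, n_layers=9, n_hidden=128):
        super().__init__(n_harmonic, n_layers, n_hidden)
        self.color_out = nn.Sequential(nn.Linear(n_hidden, 3), nn.Sigmoid())

    def forward(self, x):  # x: (N, 3)
        e = x[..., None] * self.freqs.to(x.device)
        e = torch.cat([e.sin(), e.cos()], dim=-1).flatten(-2)
        h = self.mlp(e)
        return self.out(h), self.color_out(h)  # signed distance, RGB in [0, 1]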
What do the parameters $\alpha$ and $\beta$ do?
$\alpha$: scales the overall magnitude of the density.
$\beta$: controls the "sharpness" of the sdf_to_density function, i.e. how quickly the density transitions around the surface.
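For reference, a sketch of the VolSDF density conversion, which uses the CDF of a zero-mean Laplace distribution with scale $\beta$ (the function name is mine):

import torch

def sdf_to_density(sdf, alpha, beta):
    # sigma(x) = alpha * Psi_beta(-d(x)), where Psi_beta is the Laplace(0, beta) CDF.
    s = -sdf
    psi = torch.where(
        s <= 0,
        0.5 * torch.exp(s / beta),
        1.0 - 0.5 * torch.exp(-s / beta),
    )
    return alpha * psi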
How does high $\beta$ bias your learned SDF? What about low $\beta$?
High $\beta$ smooths the density transition around the surface, so the result will likely lose fine detail due to this insensitivity (effectively lower resolution). On the other hand, low $\beta$ makes the learned SDF more unstable to train and may result in discontinuities and artifacts.
Would an SDF be easier to train with volume rendering and low $\beta$ or high $\beta$? Why?
High $\beta$ makes it easier to train, as the density varies smoothly with the SDF and the VolSDF loss is smaller (i.e. less strictly enforced), so gradients are more stable.
Would you be more likely to learn an accurate surface with high $\beta$ or low $\beta$? Why?
Low $\beta$ gives us a better chance of learning an accurate surface, as the density concentrates tightly around the zero level set, so the surface location is more strongly enforced during training.
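A quick numeric check (using the sdf_to_density sketch above) shows how the falloff around the surface sharpens as $\beta$ shrinks:

sdf = torch.linspace(-0.1, 0.1, 5)
for beta in (0.5, 0.05):
    print(f"beta={beta}:", sdf_to_density(sdf, alpha=10.0, beta=beta))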
Visualization: the following visualization results use the default $\alpha$ and $\beta$ values.
p3_gifs = {
'color': media.read_video('images/part_3_100.gif'),
'mesh': media.read_video('images/part_3_geometry_100.gif'),
}
media.show_videos(p3_gifs, codec='gif', height=500)
p4_gifs = {
'volSDF_mesh': media.read_video('images/part_3_geometry_20.gif'),
'volSDF_color': media.read_video('images/part_3_20.gif'),
'NeRF_mesh': media.read_video('images/part_3_250.gif'),
}
media.show_videos(p4_gifs, codec='gif', height=500)