Voxel reconstruction.
source | target |
--- | --- |
![]() | ![]() |
Point reconstruction.
source | target |
--- | --- |
![]() | ![]() |
Mesh reconstruction.
source | target |
--- | --- |
![]() | ![]() |
Image to voxel.
input RGB | prediction | ground truth |
--- | --- | --- |
![]() | ![]() | ![]() |
![]() | ![]() | ![]() |
![]() | ![]() | ![]() |
Image to points.
input RGB | prediction | ground truth |
--- | --- | --- |
![]() | ![]() | ![]() |
![]() | ![]() | ![]() |
![]() | ![]() | ![]() |
Image to mesh.
input RGB | prediction | ground truth |
--- | --- | --- |
![]() | ![]() | ![]() |
![]() | ![]() | ![]() |
![]() | ![]() | ![]() |
Quantitative comparison (metric: Avg F1@0.05).

I analyze the w_smooth term. Its default value is 0.1, and I also trained with a higher value (10) and a lower value (0.001). Results are shown below. Note that the lower the w_smooth regularization, the higher the F1 score. However, if we plot the meshes, we find that the model trained with a low w_smooth generates meshes that are not smooth enough. An adequate smoothness weight (0.1) is therefore necessary to balance this tradeoff.
w_smooth | Avg F1@0.05 |
--- | --- |
10 | 32.109 |
0.001 | 87.750 |

input RGB | w_smooth=10 | w_smooth=0.1 | w_smooth=0.001 | ground truth |
--- | --- | --- | --- | --- |
![]() | ![]() | ![]() | ![]() | ![]() |
![]() | ![]() | ![]() | ![]() | ![]() |
![]() | ![]() | ![]() | ![]() | ![]() |
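The tradeoff above comes from training with a chamfer fit term plus a w_smooth-weighted smoothness regularizer. A minimal NumPy/SciPy sketch of such an objective is below; the function names and the uniform-Laplacian formulation are my assumptions for illustration, not the actual training code (which would typically use differentiable PyTorch3D losses).

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(pred_pts, gt_pts):
    """Symmetric chamfer: mean squared nearest-neighbor distance, both directions."""
    d_pred = cKDTree(gt_pts).query(pred_pts)[0]  # pred -> nearest gt
    d_gt = cKDTree(pred_pts).query(gt_pts)[0]    # gt -> nearest pred
    return np.mean(d_pred ** 2) + np.mean(d_gt ** 2)

def laplacian_smoothness(verts, edges):
    """Uniform Laplacian: penalize each vertex's offset from the mean of its neighbors."""
    nbr_sum = np.zeros_like(verts)
    deg = np.zeros(len(verts))
    for i, j in edges:
        nbr_sum[i] += verts[j]; nbr_sum[j] += verts[i]
        deg[i] += 1; deg[j] += 1
    lap = verts - nbr_sum / np.maximum(deg, 1)[:, None]
    return np.mean(np.sum(lap ** 2, axis=1))

def mesh_loss(pred_pts, gt_pts, verts, edges, w_smooth=0.1):
    """Total objective: chamfer fit + w_smooth * smoothness regularizer."""
    return chamfer_distance(pred_pts, gt_pts) + w_smooth * laplacian_smoothness(verts, edges)
```

With w_smooth=0.001 the chamfer term dominates (higher F1, bumpy surfaces); with w_smooth=10 the regularizer dominates (over-smoothed, low F1), matching the table above.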
I plot the spatial chamfer error map of the predicted mesh. To do this, I sample many points from both the predicted mesh and the ground-truth mesh. A chamfer error (distance to the nearest ground-truth point) is then assigned to each sampled point on the predicted mesh, and this error is visualized: red indicates high error, blue indicates low error.
input RGB | w_smooth=0.1 | w_smooth=0.001 | ground truth |
--- | --- | --- | --- |
![]() | ![]() | ![]() | ![]() |
![]() | ![]() | ![]() | ![]() |
![]() | ![]() | ![]() | ![]() |
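The per-point error computation behind these maps can be sketched as follows. This is a minimal NumPy/SciPy version under my own naming (`pointwise_chamfer_error`, `error_to_rgb` are illustrative); an actual render would pass the resulting colors to a point-cloud renderer.

```python
import numpy as np
from scipy.spatial import cKDTree

def pointwise_chamfer_error(pred_pts, gt_pts):
    """One-sided chamfer error per predicted point: distance to nearest ground-truth point."""
    return cKDTree(gt_pts).query(pred_pts)[0]

def error_to_rgb(err):
    """Map errors to a blue-to-red colormap: red = high error, blue = low error."""
    t = (err - err.min()) / max(err.max() - err.min(), 1e-8)  # normalize to [0, 1]
    return np.stack([t, np.zeros_like(t), 1.0 - t], axis=1)   # (N, 3) RGB in [0, 1]
```

Normalizing per mesh makes each map show the relative error distribution; a fixed normalization range would be needed to compare absolute error across different w_smooth settings.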