The GIF at the top shows the generated voxel grid alongside the source voxel grid.
The GIF at the top shows the generated point cloud alongside the source point cloud.
The GIF at the top shows the generated mesh alongside the source mesh.
For meshes, the chamfer loss coefficient and the ico-sphere subdivision level were varied while keeping the other parameters constant. A chamfer loss coefficient of 5.0 with an ico-sphere level of 4 gave the best results. The table is consistent with the expectation that a higher chamfer loss weight and a higher ico-sphere level (more faces and vertices) lead to better reconstruction.
Chamfer loss weight | ico-sphere level | F1 score |
---|---|---|
0.1 | 4 | 78.63 |
1 | 4 | 82.64 |
5 | 4 | 86.41 |
5 | 3 | 83.1 |
5 | 2 | 81.48 |
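As a rough sketch of the chamfer term being weighted in the table above (this is not the exact implementation used here, and the function name is my own), the symmetric chamfer distance between two point sets can be written with NumPy broadcasting:

```python
import numpy as np

def chamfer_distance(p1, p2):
    """Symmetric chamfer distance between point sets p1 (N,3) and p2 (M,3):
    mean squared distance to the nearest neighbor, summed over both directions."""
    # pairwise squared distances, shape (N, M)
    d = ((p1[:, None, :] - p2[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

In training, this term would be scaled by the chamfer loss weight from the table and combined with the other losses (e.g. a smoothness term for meshes).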
For point clouds, the number of predicted points (n_point) was varied; the F1 score improves as the number of points increases.

n_point | F1 score |
---|---|
1000 | 81.92 |
5000 | 88.27 |
10000 | 91.21 |
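The F1 metric reported in both tables can be sketched as follows: a predicted point counts as a true positive if it lies within a distance threshold of some ground-truth point, and vice versa. This is a minimal NumPy version with an assumed threshold of 0.05 (the actual threshold and implementation used for the numbers above may differ):

```python
import numpy as np

def f1_score(pred, gt, thresh=0.05):
    """F1 score (in percent) between two (N,3)/(M,3) point clouds at a
    distance threshold: precision over predicted points, recall over
    ground-truth points, combined via the harmonic mean."""
    # pairwise Euclidean distances, shape (N, M)
    d = np.sqrt(((pred[:, None, :] - gt[None, :, :]) ** 2).sum(-1))
    precision = 100.0 * (d.min(axis=1) < thresh).mean()
    recall = 100.0 * (d.min(axis=0) < thresh).mean()
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```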
In this question, I wanted to analyze the rotation invariance of the model, especially because all of the training data shows chairs/couches in their most common poses. In this experiment, I rotated a couch by 0, 90, 180, and 270 degrees and predicted the point cloud representation for each orientation. The predicted point cloud is the same for all four orientations, suggesting the model has overfitted to a single orientation. To overcome this, I would add rotation augmentation to the input images and retrain the model.
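The rotations used in this experiment can be sketched with a simple rotation matrix; the axis choice and the function name below are my own for illustration:

```python
import numpy as np

def rotate_z(points, deg):
    """Rotate an (N,3) point cloud about the z-axis by `deg` degrees."""
    t = np.deg2rad(deg)
    R = np.array([[np.cos(t), -np.sin(t), 0.0],
                  [np.sin(t),  np.cos(t), 0.0],
                  [0.0,        0.0,       1.0]])
    return points @ R.T

# applying the four orientations from the experiment
cloud = np.random.RandomState(2).rand(100, 3)
rotated = [rotate_z(cloud, deg) for deg in (0, 90, 180, 270)]
```

For the proposed fix, the same rotations would instead be applied to the training inputs as data augmentation, so the model sees every orientation during training.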