Test accuracy of best model: 0.9801
Object 1 predicted class: Chair
Object 2 predicted class: Vase
Object 3 predicted class: Lamp
Object 4 predicted class: Lamp (bad example; true label: Chair)
Object 5 predicted class: Lamp (bad example; true label: Vase)
Object 6 predicted class: Vase (bad example; true label: Lamp)
Interpretation:
We can see from objects 4, 5, and 6 that the misclassified objects do not have the typical shape of their class.
Object 4 is a folded chair, and I cannot tell what objects 5 and 6 are even when looking at the gifs.
Generally, the model performs pretty well.
Test accuracy of best model: 0.9005
Left gif: ground truth; Right gif: predictions
Object 1 accuracy: 0.9383
Object 2 accuracy: 0.9855
Object 3 accuracy: 0.8445
Object 4 accuracy: 0.5862 (bad example)
Object 5 accuracy: 0.5553 (bad example)
Interpretation:
For sofas, the seat part (red), the leg part (blue), and the armrest part (yellow) are somewhat ambiguous: it is very difficult to set a clean boundary between these three parts.
As a result, the last two examples have noticeably lower accuracy.
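
For reference, a minimal sketch of how the per-object accuracies above can be computed, assuming they are per-point accuracies (the fraction of points whose predicted part label matches the ground truth). The names `pred_logits` and `gt_labels` are hypothetical, not taken from the assignment code:

```python
# Minimal sketch, assuming per-object accuracy = fraction of correctly labeled points.
import torch

def per_object_seg_accuracy(pred_logits: torch.Tensor, gt_labels: torch.Tensor) -> float:
    """pred_logits: (N, num_parts) per-point scores; gt_labels: (N,) part indices."""
    pred_labels = pred_logits.argmax(dim=-1)                 # predicted part per point
    return (pred_labels == gt_labels).float().mean().item()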
Experiment 1:
I rotated the input point clouds by 30 and 60 degrees clockwise.
Test accuracy for classification: 0.9801 (0 degree); 0.5666 (30 degree); 0.2676 (60 degree)
Test accuracy for segmentation: 0.9005 (0 degree); 0.6909 (30 degree); 0.5518 (60 degree)
We can see that the model is not very robust to rotation. I believe this is due to the lack of transformation blocks (the T-Nets of the original PointNet) in our network, so the learned features are not invariant to rotations of the input.
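
Below is a minimal sketch of the rotation perturbation used in Experiment 1. The axis of rotation and the helper names (`rotate_point_cloud`, `evaluate`, `test_points`) are assumptions; the actual evaluation script may differ:

```python
# Minimal sketch: rotate each (N, 3) point cloud by a fixed angle about the up (y) axis
# before feeding it to the trained model.
import numpy as np

def rotate_point_cloud(points: np.ndarray, angle_deg: float) -> np.ndarray:
    """Rotate an (N, 3) point cloud by angle_deg degrees about the y axis."""
    theta = np.deg2rad(angle_deg)
    rot = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                    [ 0.0,           1.0, 0.0          ],
                    [-np.sin(theta), 0.0, np.cos(theta)]])
    return points @ rot.T

# Evaluate the trained models on rotated inputs, e.g.:
# for angle in (0, 30, 60):
#     acc = evaluate(model, rotate_point_cloud(test_points, angle))
```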
Experiment 2:
I changed the number of points per object from 10000 (default) to 100, 1000, 5000.
Test accuracy for classification: 0.9801 (10000 points); 0.9297 (100 points); 0.9779 (1000 points); 0.9800 (5000 points)
Test accuracy for segmentation: 0.9005 (10000 points); 0.8305 (100 points); 0.8954 (1000 points); 0.9001 (5000 points)
We can see that accuracy increases with the number of points per object,
but the differences are very small as long as the point count is not too low: with 1000 points the accuracy is already close to that obtained with the full 10000 points.
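
A minimal sketch of the subsampling used in Experiment 2, assuming uniform random sampling without replacement (the assignment's own data loader may sample differently; `subsample_points` is a hypothetical helper):

```python
# Minimal sketch: randomly keep num_points of the original 10000 points per object.
import numpy as np

def subsample_points(points: np.ndarray, num_points: int) -> np.ndarray:
    """points: (N, 3) full cloud; returns (num_points, 3) randomly chosen points."""
    idx = np.random.choice(points.shape[0], num_points, replace=False)
    return points[idx]

# Example: evaluate with 100, 1000, 5000, and the full 10000 points.
# for n in (100, 1000, 5000, 10000):
#     acc = evaluate(model, subsample_points(test_points, n))
```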
I implemented the PointNet++ model. Please see models.py for details. Note that I only implemented the classification branch.
Test accuracy of best PointNet model: 0.9801
Test accuracy of best PointNet++ model: 0.9496
With PointNet++, I incorporate locality into the model through farthest point sampling and ball query.
The test accuracy of PointNet++ is slightly lower, but I think this is due to limited compute resources:
training is very slow, a larger batch size was not feasible, and I only managed to train for around 30 epochs.
For the 6 cases described in Q1, the new model produces exactly the same predictions as the old model.
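
For illustration, here is a minimal sketch of the two locality operations mentioned above, farthest point sampling and ball query. This is not the exact code in models.py; tensor shapes, padding behavior, and function names are assumptions:

```python
# Illustrative sketch of FPS and ball query; models.py may differ in shapes and padding.
import torch

def farthest_point_sampling(xyz: torch.Tensor, num_samples: int) -> torch.Tensor:
    """xyz: (N, 3). Greedily pick num_samples indices so each new point is as far
    as possible from the points already selected."""
    N = xyz.shape[0]
    selected = torch.zeros(num_samples, dtype=torch.long)
    dist = torch.full((N,), float("inf"))
    farthest = torch.randint(0, N, (1,)).item()        # start from a random seed point
    for i in range(num_samples):
        selected[i] = farthest
        d = ((xyz - xyz[farthest]) ** 2).sum(dim=-1)   # sq. distance to the new centroid
        dist = torch.minimum(dist, d)                  # distance to nearest selected point
        farthest = torch.argmax(dist).item()           # next centroid: the farthest point
    return selected

def ball_query(xyz: torch.Tensor, centroids: torch.Tensor,
               radius: float, max_neighbors: int) -> torch.Tensor:
    """For each centroid (M, 3), return indices of up to max_neighbors points within
    `radius` (padded with the first in-range index, assuming at least one neighbor)."""
    N = xyz.shape[0]
    dists = torch.cdist(centroids, xyz)                       # (M, N) pairwise distances
    idx = torch.arange(N).expand(centroids.shape[0], N).clone()
    idx[dists > radius] = N                                   # mark out-of-range points
    idx = idx.sort(dim=-1).values[:, :max_neighbors]          # in-range indices come first
    first = idx[:, :1]                                        # first in-range neighbor per centroid
    return torch.where(idx == N, first.expand_as(idx), idx)   # pad the remaining slots
```

In a full PointNet++ set-abstraction layer, the points grouped around each centroid would then be passed through a shared MLP and max-pooled per group; that part is omitted in this sketch.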