
Groundtruth : Chair, Predicted : Chair

Groundtruth : Chair, Predicted : Chair

Groundtruth : Vase, Predicted : Vase

Groundtruth : Vase, Predicted : Lamp

Groundtruth : Lamp, Predicted : Lamp
Groundtruth : Chair, No failure cases

Groundtruth : Vase, Predicted : Lamp
As can be seen, this vase is structurally similar to lamp point clouds, and hence the model wrongly classifies it as a lamp.

Groundtruth : Lamp, Predicted : Vase
Similarly, some lamps have a vase-like structure, and hence the model confuses the two classes in some cases.
2. Segmentation Model
Test Accuracy of Segmentation Model : 89.9 %

Input Point Cloud

Prediction
Test accuracy : 0.992

Input Point Cloud

Prediction
Test accuracy : 0.967

Input Point Cloud

Prediction
Test accuracy : 0.946

Input Point Cloud

Prediction
Test accuracy : 0.803

Input Point Cloud

Prediction
Test accuracy : 0.7515
As can be seen from the high-accuracy predictions, the model segments typical chairs with standard armrests quite well. It is also able to segment more complex or unusual chairs such as sofa-style chairs, but it fails in some regions, e.g., extending the armrest label too far toward the top. This may be due to data imbalance and a lack of generality. As chairs deviate further from the norm (e.g., different leg styles), the model makes more errors. Overall, the model performs well, as reflected in its ~90% accuracy.
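The per-point accuracies reported above can be computed as the fraction of points whose predicted part label matches the ground truth. A minimal sketch (the function name `seg_accuracy` is illustrative, not from the original code):

```python
import numpy as np

def seg_accuracy(pred_labels, gt_labels):
    """Fraction of points whose predicted part label matches ground truth."""
    pred_labels = np.asarray(pred_labels)
    gt_labels = np.asarray(gt_labels)
    return float((pred_labels == gt_labels).mean())

# Toy example: 8 points, 6 labeled correctly -> accuracy 0.75
gt = np.array([0, 0, 1, 1, 2, 2, 3, 3])
pred = np.array([0, 0, 1, 2, 2, 2, 3, 1])
print(seg_accuracy(pred, gt))  # 0.75
```

Averaging this quantity over a single shape gives the per-object accuracy; pooling all points across the test set gives the global accuracy.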
3. Robustness Analysis
3.1 Experiment 1 (Rotation)
I trained the model on non-rotated point clouds. To test robustness, I rotated the test point clouds about the z-axis by varying angles (0, 15, 30, 45, 60 degrees) before passing them to the model. The same procedure was applied to both the classification and segmentation models; results are reported below.
As can be seen, increasing the angle reduces the accuracy, which shows that the model is not rotation invariant.
The first result in each case is the same as in Q1 and Q2, for comparison.
The following rotation matrix was used:
| cos(theta)  -sin(theta)  0 |
| sin(theta)   cos(theta)  0 |
| 0            0           1 |
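Applying this matrix to an (N, 3) point cloud can be sketched as follows (a minimal NumPy version; the original experiment presumably operated on tensors, but the math is identical):

```python
import numpy as np

def rotate_z(points, theta_deg):
    """Rotate an (N, 3) point cloud about the z-axis by theta_deg degrees."""
    t = np.deg2rad(theta_deg)
    # Rotation matrix from the report: z-coordinates are left unchanged.
    R = np.array([[np.cos(t), -np.sin(t), 0.0],
                  [np.sin(t),  np.cos(t), 0.0],
                  [0.0,        0.0,       1.0]])
    # Points are row vectors, so multiply by R transposed.
    return points @ R.T

pts = np.array([[1.0, 0.0, 0.5]])
print(rotate_z(pts, 90))  # ~[[0, 1, 0.5]]
```

At test time each cloud is passed through `rotate_z` with the chosen angle before being fed to the (unmodified) model.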

Groundtruth, Rotation = 0
test accuracy: 0.976

Groundtruth, Rotation = 15
test accuracy: 0.953

Groundtruth, Rotation = 30
test accuracy: 0.842

Groundtruth, Rotation = 45
test accuracy: 0.616

Groundtruth, Rotation = 60
test accuracy: 0.458

Groundtruth, Rotation = 0

Groundtruth, Rotation = 15

Groundtruth, Rotation = 30

Groundtruth, Rotation = 45

Groundtruth, Rotation = 60

Prediction, Rotation = 0
test accuracy: 0.946

Prediction, Rotation = 15
test accuracy: 0.893

Prediction, Rotation = 30
test accuracy: 0.751

Prediction, Rotation = 45
test accuracy: 0.651

Prediction, Rotation = 60
test accuracy: 0.480
3.2 Experiment 2 (Num Points)
I trained the model with 10000-point samples for each object. To test robustness, I sub-sampled varying numbers of points (10000, 1000, 500, 100, 50) from each test point cloud before passing it to the model.
As can be seen, reducing the number of points does not reduce accuracy much in the classification task, so the classification model is fairly robust. For segmentation, the model takes a hit at 100 and 50 points but still does fairly well.
The first result in each case is the same as in Q1 and Q2, for comparison.
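The sub-sampling step can be sketched as a random selection of point indices without replacement (a minimal NumPy version; the seed and function name are illustrative):

```python
import numpy as np

def subsample(points, num_points, seed=0):
    """Randomly sub-sample num_points rows from an (N, 3) point cloud."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(points.shape[0], size=num_points, replace=False)
    return points[idx]

cloud = np.random.rand(10000, 3)
for n in (10000, 1000, 500, 100, 50):
    print(subsample(cloud, n).shape)
```

For segmentation, the same index array would also be used to sub-sample the per-point ground-truth labels so that predictions and labels stay aligned.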

Groundtruth, Num Points = 10000
test accuracy: 0.976

Groundtruth, Num Points = 1000
test accuracy: 0.968

Groundtruth, Num Points = 500
test accuracy: 0.964

Groundtruth, Num Points = 100
test accuracy: 0.947

Groundtruth, Num Points = 50
test accuracy: 0.89

Groundtruth, Num Points = 10000

Groundtruth, Num Points = 1000

Groundtruth, Num Points = 500

Groundtruth, Num Points = 100

Groundtruth, Num Points = 50

Prediction, Num Points = 10000
Test accuracy Object: 0.946
Test accuracy Global : 0.899

Prediction, Num Points = 1000
Test accuracy Object: 0.937
Test accuracy Global : 0.893

Prediction, Num Points = 500
Test accuracy Object: 0.926
Test accuracy Global : 0.878

Prediction, Num Points = 100
Test accuracy Object: 0.84
Test accuracy Global : 0.798

Prediction, Num Points = 50
Test accuracy Object : 0.82
Test accuracy Global : 0.75