16-889 Assignment 5
Name: Adithya Sampath
Andrew ID: adithyas
Late days used:
Q1. Classification Model (40 points)
Implemented the classification model in `models.py`.

Input: point clouds from 3 classes (chair, vase, and lamp objects).

Output: a probability distribution over the classes indicating the predicted class (dimension: Batch x Number of Classes).

To train the classification model, run `python train.py --task cls`.

To evaluate the classification model, run `python eval_cls.py`.

To visualise the classification results of every test point cloud, run `python test_cls.py`.

The test accuracy of the best model: 0.9779643231899265 (i.e. ~97.8%)
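The classifier follows the standard PointNet design: a shared per-point MLP, a symmetric max-pool that makes the network invariant to point order, and a fully connected head. The actual implementation lives in `models.py`; the sketch below is only a minimal illustration of the idea, with the usual PointNet layer widths (not necessarily the exact ones used here):

```python
import torch
import torch.nn as nn

class PointNetCls(nn.Module):
    """Minimal PointNet-style classifier sketch."""

    def __init__(self, num_classes: int = 3):
        super().__init__()
        # Shared MLP applied to each point independently: (B, 3, N) -> (B, 1024, N)
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.ReLU(),
        )
        # Classification head on the pooled global feature
        self.head = nn.Sequential(
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (B, N, 3) -> (B, 3, N) for Conv1d
        feat = self.point_mlp(points.transpose(1, 2))
        # Max-pool over the point dimension: order-invariant global feature
        global_feat = feat.max(dim=2).values          # (B, 1024)
        return self.head(global_feat)                 # (B, num_classes) logits
```

A softmax over the returned logits gives the probability distribution described above.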
Graphs
| Parameters | My Output |
| --- | --- |
| Train Loss |  |
| Test Accuracy |  |
Results
Correct Predictions
| Point Cloud | Prediction | Ground Truth |
| --- | --- | --- |
|  | Chair (0) | Chair (0) |
|  | Chair (0) | Chair (0) |
|  | Vase (1) | Vase (1) |
|  | Vase (1) | Vase (1) |
|  | Lamp (2) | Lamp (2) |
|  | Lamp (2) | Lamp (2) |
Failure Cases
Chair misclassified
Analysis for Chair: There were no chairs misclassified in the test data. This may be due to the data distribution: there are more chair point clouds than lamp or vase point clouds, so the model sees chairs far more often during training.
Vase misclassified
Analysis for Vase: Although the model has high classification accuracy, it fails in some corner cases. In the 1st example, the vase's closed structure makes it easy to misclassify as a lamp. The 2nd and 3rd examples are classified as chairs since the objects have a base and a back-rest-like structure that makes them look like a chair; the projections on top are misleading as well. The 4th, 5th, and 6th examples are also very misleading since they have features similar to lamps, especially the 6th example, which even a human might mistake for a lamp.
| Point Cloud | Prediction | Ground Truth |
| --- | --- | --- |
|  | Lamp (2) | Vase (1) |
|  | Chair (0) | Vase (1) |
|  | Chair (0) | Vase (1) |
|  | Lamp (2) | Vase (1) |
|  | Lamp (2) | Vase (1) |
|  | Lamp (2) | Vase (1) |
Lamp misclassified
Analysis for Lamp: Lamps are misclassified only as vases, and only in difficult corner cases, e.g. the 3rd and 5th examples below, where they are just wide hollow structures with an opening, just like a vase. Hence the model misclassifies only the difficult examples and classifies most other point clouds correctly.
| Point Cloud | Prediction | Ground Truth |
| --- | --- | --- |
|  | Vase (1) | Lamp (2) |
|  | Vase (1) | Lamp (2) |
|  | Vase (1) | Lamp (2) |
|  | Vase (1) | Lamp (2) |
|  | Vase (1) | Lamp (2) |
|  | Vase (1) | Lamp (2) |
Q2. Segmentation Model (40 points)
Implemented the segmentation model in `models.py`.

To train the model, run `python train.py --task seg`.

The model can be evaluated on test data by running `python eval_seg.py` - this will save two GIFs, one for the ground truth and the other for the model prediction.

To visualise the segmentation results of every test point cloud, run `python test_seg.py`.

The test accuracy of the best model: 0.9046784440842788 (i.e. ~90.47%)
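The segmentation network differs from the classifier in one key way: the pooled global feature is concatenated back onto each point's local feature, so the head predicts a part label per point rather than one label per cloud. Again, this is a hedged minimal sketch with illustrative layer widths, not necessarily the exact architecture in `models.py`:

```python
import torch
import torch.nn as nn

class PointNetSeg(nn.Module):
    """Minimal PointNet-style segmentation sketch (per-point part labels)."""

    def __init__(self, num_parts: int = 6):
        super().__init__()
        # Per-point local features: (B, 3, N) -> (B, 64, N)
        self.local_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 64, 1), nn.ReLU(),
        )
        # Lift local features before pooling a global descriptor
        self.global_mlp = nn.Sequential(
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.ReLU(),
        )
        # Per-point head on [local feature ; broadcast global feature]
        self.seg_head = nn.Sequential(
            nn.Conv1d(64 + 1024, 512, 1), nn.ReLU(),
            nn.Conv1d(512, 256, 1), nn.ReLU(),
            nn.Conv1d(256, num_parts, 1),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        x = points.transpose(1, 2)                            # (B, 3, N)
        local = self.local_mlp(x)                             # (B, 64, N)
        glob = self.global_mlp(local).max(dim=2).values       # (B, 1024)
        glob = glob.unsqueeze(2).expand(-1, -1, x.shape[2])   # (B, 1024, N)
        fused = torch.cat([local, glob], dim=1)               # (B, 1088, N)
        return self.seg_head(fused).transpose(1, 2)           # (B, N, num_parts)
```

The broadcast global feature gives each point context about the whole shape, which is what lets the head separate, say, a seat point from a base point at similar local geometry.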
Graphs
| Parameters | My Output |
| --- | --- |
| Train Loss |  |
| Test Accuracy |  |
Results
Correct Predictions
| Ground Truth | Prediction | Accuracy |
| --- | --- | --- |
|  |  | 0.9434 |
|  |  | 0.9555 |
|  |  | 0.8606 |
|  |  | 0.9027 |
|  |  | 0.9762 |
|  |  | 0.9304 |
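The per-cloud accuracy reported in the table above is simply the fraction of points whose predicted part label matches the ground truth. A one-line sketch of that metric (numpy, for illustration; the eval script may compute it in torch):

```python
import numpy as np

def seg_accuracy(pred_labels: np.ndarray, gt_labels: np.ndarray) -> float:
    """Fraction of points whose predicted part label matches the ground truth."""
    return float(np.mean(pred_labels == gt_labels))
```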
Failure Cases
From the ground truth point clouds we can observe the following:
| Part | Color |
| --- | --- |
| Head rest | purple |
| Back rest | light blue |
| Seat | red |
| Arm rest | yellow |
| Base | dark blue |
| Misc external objects | white |
The analysis for the wrong segmentations of 3 parts - the head rest, the arm rest, and the base - is as follows:
1. Issue with Head rest of the chair
In most examples the head rest is never segmented as a separate part; it is always merged into the back rest (light blue).
| Ground Truth | Prediction | Accuracy |
| --- | --- | --- |
|  |  | 0.53 |
|  |  | 0.6832 |
|  |  | 0.6566 |
2. Issue with Base of the chair
Analysis: The issues with the base of the chair differ for each example. In the 1st example, the base is misinterpreted as a seat (red) throughout instead of as the base (dark blue). The 2nd example is a difficult one: the ground truth marks only a back rest, seat, and arm rest, with no base.
| Ground Truth | Prediction | Accuracy |
| --- | --- | --- |
|  |  | 0.5058 |
|  |  | 0.5224 |
3. Issue with Arm rest of the chair
Analysis: Finally, there are also issues with the arm rest segmentation. In the 1st example below, the model assumes an arm rest exists even though no arm rest is marked in the ground truth. In the 2nd example, the model is unsure where the arm rest region ends.
| Ground Truth | Prediction | Accuracy |
| --- | --- | --- |
|  |  | 0.8224 |
|  |  | 0.7813 |
Q3. Robustness Analysis (20 points)
Conduct 2 experiments to analyze the robustness of your learned model. Some possible suggestions are:
- You can rotate the input point clouds by certain degrees and report how much the accuracy falls
- You can input a different number of points per object (modify `--num_points` when evaluating models in `eval_cls.py` and `eval_seg.py`)
Experiment 1: Changing number of points in Point Cloud
| Task | Accuracy Plot |
| --- | --- |
| Classification |  |
| Segmentation |  |
Test accuracy of best classification model: 0.9779
Test accuracy of best segmentation model: 0.9071
Analysis:
- We can observe from the classification graph that even if we decrease num_points to 100 or 50, we still get a test accuracy of around 0.95 and 0.82 respectively (the structure of a chair is clearly preserved even when num_points is as low as 100 or 50). This shows that the classification model is robust even with a relatively low number of points. However, the drop in test accuracy is significant when num_points is reduced to 10 or below, where it falls to as low as 0.245; I observed misclassifications only in these cases, where chairs were misclassified as lamps. Hence, the classification model is fairly robust to changing the number of points in the point cloud, up to a certain threshold.
- We can observe from the segmentation graph that even if we decrease num_points to 500 we still get a test accuracy of 0.903, so the segmentation model is also fairly robust to reducing the number of points in the point cloud. As in the case above, a significant drop in accuracy is observed only for num_points of 10 or below. What I found surprising is that the best test accuracy of 0.9071 was obtained for num_points = 2500, which shows that a dense point cloud is not absolutely necessary for good segmentation performance. Hence the segmentation model is also fairly robust to changing the number of points, up to a certain threshold.
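For this experiment, each cloud is subsampled before inference. A typical way to do this (a sketch; the exact index-sampling call in the eval scripts may differ) is to draw `num_points` random indices without replacement:

```python
import numpy as np

def subsample_cloud(points: np.ndarray, num_points: int, seed: int = 0) -> np.ndarray:
    """Randomly keep `num_points` of the N points in an (N, 3) cloud."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(points.shape[0], size=num_points, replace=False)
    return points[idx]
```

Because the max-pool in the network aggregates over whatever points it is given, the model degrades gracefully until the sample is too sparse to preserve the object's structure.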
Classification
| Points | Test Accuracy |
| --- | --- |
| 10000 | 0.9779 |
| 7500 | 0.9758 |
| 5000 | 0.9758 |
| 2500 | 0.9748 |
| 1000 | 0.9748 |
| 500 | 0.9748 |
| 100 | 0.9548 |
| 50 | 0.8258 |
| 10 | 0.2455 |
| 5 | 0.2455 |
| Point Cloud | Ground Truth | Prediction |
| --- | --- | --- |
|  | Chair (0) | Chair (0) |
|  | Chair (0) | Chair (0) |
|  | Chair (0) | Chair (0) |
|  | Chair (0) | Chair (0) |
|  | Chair (0) | Chair (0) |
|  | Chair (0) | Chair (0) |
|  | Chair (0) | Chair (0) |
|  | Chair (0) | Chair (0) |
|  | Chair (0) | Lamp (2) |
|  | Chair (0) | Lamp (2) |
Segmentation
| Points | Test Accuracy | Ground Truth | Prediction |
| --- | --- | --- | --- |
| 10000 | 0.9046 |  |  |
| 7500 | 0.9053 |  |  |
| 5000 | 0.9061 |  |  |
| 2500 | 0.9071 |  |  |
| 1000 | 0.9072 |  |  |
| 500 | 0.9030 |  |  |
| 100 | 0.8470 |  |  |
| 50 | 0.8166 |  |  |
| 10 | 0.7282 |  |  |
| 5 | 0.7488 |  |  |
Experiment 2: Rotating the Point Cloud by certain degrees
| Task | Accuracy Plot |
| --- | --- |
| Classification |  |
| Segmentation |  |
Test accuracy of best classification model: 0.9779
Test accuracy of best segmentation model: 0.9046
NOTE: The rotation about the y-axis isn't very evident in the visualisations, since the GIFs themselves rotate about that axis.
Analysis:
- We can observe from the classification test-accuracy graphs that the model is not robust to rotation. The best accuracy is observed only when the point clouds are not rotated; the classification results are poor as soon as any rotation is applied. The drop is most severe around 90 degrees: for the x- and z-axes, test accuracy is lowest at exactly that angle. Since the model has not been trained on data with different orientations of the objects, this result is expected. The poorest results are observed for rotations about the z-axis.
- We can observe from the segmentation test-accuracy graphs that the model is not robust to rotation either. The best accuracy is observed only when the point clouds are not rotated. When the chair is flipped upside down (i.e. rotated 180 degrees), the top of the chair (now at the bottom) is segmented dark blue and the bottom of the chair (now at the top) is segmented light blue. At 60 degrees, the top is segmented yellow like an arm rest, and at 90 degrees, the top of the chair is segmented red like a seat. This shows a clear bias in the data: the model learns to predict specific chair parts at specific locations.
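The rotated inputs for this experiment can be produced by multiplying each cloud by a standard rotation matrix. A sketch for the z-axis case (the x- and y-axis matrices are built analogously; the actual eval code may construct them differently, e.g. with pytorch3d):

```python
import numpy as np

def rotate_z(points: np.ndarray, degrees: float) -> np.ndarray:
    """Rotate an (N, 3) point cloud about the z-axis by `degrees`."""
    t = np.radians(degrees)
    rot = np.array([[np.cos(t), -np.sin(t), 0.0],
                    [np.sin(t),  np.cos(t), 0.0],
                    [0.0,        0.0,       1.0]])
    # Row-vector convention: each point p becomes R @ p
    return points @ rot.T
```

Since the rotation is applied only at evaluation time, it directly probes whether the learned features are orientation-invariant; the tables below show that they are not.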
Classification
X axis
| Angle | Test Accuracy |
| --- | --- |
| 0 | 0.9779 |
| 30 | 0.8698 |
| 60 | 0.3767 |
| 90 | 0.2371 |
| 120 | 0.4407 |
| 150 | 0.7271 |
| 180 | 0.7313 |
| Point Cloud | Ground Truth | Prediction |
| --- | --- | --- |
|  | Chair (0) | Chair (0) |
|  | Chair (0) | Lamp (2) |
|  | Chair (0) | Lamp (2) |
|  | Chair (0) | Vase (1) |
|  | Chair (0) | Chair (0) |
|  | Chair (0) | Chair (0) |
|  | Chair (0) | Chair (0) |
Y axis
| Angle | Test Accuracy |
| --- | --- |
| 0 | 0.9779 |
| 30 | 0.8814 |
| 60 | 0.6327 |
| 90 | 0.6169 |
| 120 | 0.3095 |
| 150 | 0.3095 |
| 180 | 0.3284 |
| Point Cloud | Ground Truth | Prediction |
| --- | --- | --- |
|  | Chair (0) | Chair (0) |
|  | Chair (0) | Chair (0) |
|  | Chair (0) | Vase (1) |
|  | Chair (0) | Lamp (2) |
|  | Chair (0) | Vase (1) |
|  | Chair (0) | Lamp (2) |
|  | Chair (0) | Lamp (2) |
Z axis
| Angle | Test Accuracy |
| --- | --- |
| 0 | 0.9779 |
| 30 | 0.6295 |
| 60 | 0.2528 |
| 90 | 0.2318 |
| 120 | 0.2381 |
| 150 | 0.2528 |
| 180 | 0.4711 |
| Point Cloud | Ground Truth | Prediction |
| --- | --- | --- |
|  | Chair (0) | Chair (0) |
|  | Chair (0) | Vase (1) |
|  | Chair (0) | Vase (1) |
|  | Chair (0) | Vase (1) |
|  | Chair (0) | Lamp (2) |
|  | Chair (0) | Lamp (2) |
|  | Chair (0) | Lamp (2) |
Segmentation
X axis
| Angle | Test Accuracy | Ground Truth | Prediction |
| --- | --- | --- | --- |
| 0 | 0.9046 |  |  |
| 30 | 0.7337 |  |  |
| 60 | 0.4496 |  |  |
| 90 | 0.2792 |  |  |
| 120 | 0.2389 |  |  |
| 150 | 0.2860 |  |  |
| 180 | 0.3631 |  |  |
Y axis
| Angle | Test Accuracy | Ground Truth | Prediction |
| --- | --- | --- | --- |
| 0 | 0.9046 |  |  |
| 30 | 0.7601 |  |  |
| 60 | 0.6603 |  |  |
| 90 | 0.5983 |  |  |
| 120 | 0.5594 |  |  |
| 150 | 0.5678 |  |  |
| 180 | 0.6214 |  |  |
Z axis
| Angle | Test Accuracy | Ground Truth | Prediction |
| --- | --- | --- | --- |
| 0 | 0.9046 |  |  |
| 30 | 0.6925 |  |  |
| 60 | 0.5162 |  |  |
| 90 | 0.4018 |  |  |
| 120 | 0.2889 |  |  |
| 150 | 0.3085 |  |  |
| 180 | 0.3478 |  |  |