18-889 Assignment 5

xikaid

Q1: Classification

1: The best accuracy is around 97.68% after 25 epochs.

Three correct predictions: chair, vase, and lamp


Incorrect predictions:

Chair predicted as lamp

Vase predicted as lamp

Lamp predicted as vase

2: The prediction errors mostly occur between the "vase" and "lamp" classes, which could be caused by the data sample sizes of the vase and lamp classes. Also, even with a high overall prediction accuracy, the model can still misclassify an object whose geometry differs noticeably from the rest of its class. Errors can also stem from geometric similarity between different classes, such as similar height, roundness, and size.

Q2: Segmentation

The overall accuracy is around 89.78% after 25 epochs of training.
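
For reference, accuracy here is the fraction of points whose predicted part label matches the ground-truth label. A minimal sketch of that computation, assuming the predictions and labels are stored as integer arrays of equal length (the array and function names are illustrative, not from the starter code):

import numpy as np

def per_point_accuracy(pred_labels, gt_labels):
    # Fraction of points whose predicted part label matches the ground truth.
    pred_labels = np.asarray(pred_labels)
    gt_labels = np.asarray(gt_labels)
    return (pred_labels == gt_labels).mean()

# Example: 8978 correct labels out of 10000 points gives 0.8978 (89.78%).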

Good segmentation results are shown below, with the ground truth on the right and the prediction on the left.

Ground Truth and Prediction

Accuracy: 97%

Accuracy: 96%

Accuracy: 92%

Examples of poor segmentation are listed below.

Ground Truth and Prediction

Accuracy: 52.8%

Accuracy: 55%

Accuracy: 53.6%

Most of the errors occur at the boundaries between parts of the point cloud, especially where the labeled boundary is blurry, such as between the cushion and the bottom support of the chair. The model performs poorly where parts of the point cloud overlap and it has to segment them. It also has difficulty segmenting portions whose geometry differs from the rest of the point cloud, such as the pillow in the gif above.

Q3: Robustness Analysis

The first experiment lowers the number of points used at the prediction stage of both classification and segmentation. This is achieved by setting the num_points argument on the command line; a sketch of the subsampling step follows below.
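
A minimal sketch of this subsampling, assuming the test point clouds are stored as a (B, N, 3) tensor and the argument simply selects a random subset of point indices (the function name and exact argument handling are assumptions, not the starter code's API):

import torch

def subsample_points(points, num_points):
    # points: (B, N, 3) tensor of test point clouds, with N >= num_points.
    # Keep a random subset of num_points indices (shared across the batch).
    B, N, _ = points.shape
    idx = torch.randperm(N)[:num_points]
    return points[:, idx, :]  # (B, num_points, 3)

# e.g. reduced = subsample_points(test_points, 5000) before running the trained model.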

Original:

Classification accuracy is 97.7%

Segmentation accuracy is 89.8%

At num_points=5000:

Classification accuracy is 96.8%

Segmentation accuracy is 88.9%

At num_points=1000:

Classification accuracy is 96.9%

Segmentation accuracy is 89.2%

Fig: Segmentation results at num_points=5000 and num_points=1000

In conclusion, lowering the number of points hardly affects the prediction accuracy unless the number of points drops to a very low level. The model can still classify and segment the point cloud; as long as the remaining points preserve the features of the shape, the accuracy stays high.

3.2

The second robustness analysis rotates the point cloud, which is done by constructing a rotation matrix and multiplying it with the original point cloud, as sketched below. In extreme cases, such as rotating the point cloud by 180 degrees, the accuracy drops significantly.
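
A minimal sketch of that rotation, assuming the point clouds are (B, N, 3) torch tensors and the rotation is about the x axis (the function name and tensor layout are assumptions):

import math
import torch

def rotate_x(points, angle_deg):
    # Rotate a batch of point clouds about the x axis by angle_deg degrees.
    # points: (B, N, 3) tensor of xyz coordinates.
    theta = math.radians(angle_deg)
    c, s = math.cos(theta), math.sin(theta)
    # Standard rotation matrix about the x axis.
    R = torch.tensor([[1.0, 0.0, 0.0],
                      [0.0,   c,  -s],
                      [0.0,   s,   c]], dtype=points.dtype, device=points.device)
    return points @ R.T  # apply p' = R p to every point

# e.g. rotated = rotate_x(test_points, 180) before evaluating the trained model.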

As a starting point, when the point cloud is rotated 90 degrees about the x axis, the accuracy is much lower than that of the original model.

Classification accuracy = 48.6%

Segmentation accuracy = 32%

Fig: Rotation of 90 degrees about the x axis, with accuracy of 36%

Next, the point cloud is rotated 180 degrees about the x axis, so that it is upside down.

Classification accuracy = 63.9% (likely due mostly to the symmetry of the target objects)

Fig: Example error of classifying a vase as a lamp

Segmentation accuracy = 33.9%

Fig: Segmentation errors after a 180-degree rotation, with accuracy = 28.42%

Based on this comparison, when the point cloud is rotated the model can no longer capture its features well enough to perform segmentation and classification. This shows that the base model, without a pose transformation, is not robust to rotation, and even a small rotation of the test data causes a significant drop in accuracy. Furthermore, the symmetry and density of the point cloud also affect the model's predictions.