1 late day
I trained for 110 epochs with a learning rate of 1e-3. The loss has almost converged. The test accuracy is 96.85%.
From the examples below, we can see that the model classifies the three kinds of items quite well, especially when the shape of the point cloud is typical and easy for a human to classify. Failure cases usually occur when the shape of the item is unusual. For example, for idx=543 the back of the chair is too long, which makes it similar in shape to a lamp. For idx=618, the lamp is cylindrical, just like a vase.
chair, idx=0
lamp, idx=719
vase, idx=617
gt: chair; pred: lamp; idx=543
gt: vase; pred: lamp; idx=618
gt: lamp; pred: chair; idx=758
Due to the time limit, I trained for 100 epochs with a learning rate of 1e-3. The loss has almost converged. The test accuracy is 89.60%.
From the examples below, we can see that segmentation is easier when the joints of the object are small, sharp, and clear. For the failure cases, the joints are larger and smoother, which means the boundary between segments is long and hard to identify.
idx=0; accuracy: 94.02%
gt
pred
idx=20; accuracy: 98.71%
gt
pred
idx=40; accuracy: 95.05%
gt
pred
idx=140; accuracy: 83.08%
gt
pred
idx=180; accuracy: 76.7%
gt
pred
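
The per-example numbers above are point-wise accuracies. Assuming they are computed as the fraction of points whose predicted part label matches the ground truth, a minimal sketch (the function name and tensor shapes are my assumptions, not the exact eval-script code):

```python
import torch

def point_accuracy(pred_logits: torch.Tensor, gt_labels: torch.Tensor) -> float:
    # pred_logits: (N, num_parts) per-point scores for one object
    # gt_labels:   (N,) ground-truth part label per point
    pred = pred_logits.argmax(dim=-1)          # predicted part id per point
    return (pred == gt_labels).float().mean().item()  # fraction of correct points
```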
I added small Gaussian noise to the coordinates of all points. The noise follows the normal distribution $N(0, \sigma^2)$, and I tried three different values of $\sigma^2$. The results show that PointNet is robust to small random noise, but performance drops sharply when the noise is too large ($\sigma^2 > 0.01$).
$\sigma^2$ | CLS accuracy (%) | SEG accuracy (%) |
---|---|---|
0 | 96.85 | 89.60 |
0.001 | 95.70 | 89.59 |
0.01 | 95.28 | 89.31 |
0.1 | 78.59 | 54.39 |
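
The perturbation itself is simple to reproduce; a minimal sketch, assuming a (B, N, 3) point layout (the helper name add_gaussian_noise is mine, not from the eval scripts):

```python
import torch

def add_gaussian_noise(points: torch.Tensor, sigma_sq: float) -> torch.Tensor:
    # points: (B, N, 3) xyz coordinates; sigma_sq is the variance sigma^2 from the table
    sigma = sigma_sq ** 0.5
    noise = torch.randn_like(points) * sigma   # samples from N(0, sigma^2)
    return points + noise
```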
I changed num_points in the eval scripts. The model is quite robust when the number of points is reduced. For the classification model, performance does not drop until the input is reduced to 50 points; for the segmentation model, it does not drop until 200 points.
num_points | CLS accuracy (%) | SEG accuracy (%) |
---|---|---|
10000 | 96.85 | 89.60 |
5000 | 95.69 | 89.59 |
1000 | 95.90 | 88.42 |
200 | 94.43 | 80.64 |
50 | 89.30 | 66.85 |
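
A sketch of how such a reduction can be done, assuming each cloud is randomly subsampled to num_points at evaluation time (the helper name and the random-permutation choice are assumptions about the eval scripts):

```python
import torch

def subsample_points(points: torch.Tensor, num_points: int) -> torch.Tensor:
    # points: (B, N, 3); keep a random subset of num_points points
    # (the same subset is applied across the batch in this sketch)
    idx = torch.randperm(points.shape[1])[:num_points]
    return points[:, idx, :]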