Luyuan Wang (luyuanwang@cmu.edu)
Zero late days used!
This is a cow model shown in a 360-degree turntable render:
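For reference, here is a minimal sketch of how such a turntable render can be produced, assuming PyTorch3D is used; the mesh path "data/cow.obj", the image size, the camera distance, and the imageio arguments are placeholder choices, not necessarily the ones used for the result above.

# Sketch: render the mesh from a ring of azimuth angles and save a GIF.
import numpy as np
import torch
import imageio
from pytorch3d.io import load_objs_as_meshes
from pytorch3d.renderer import (
    FoVPerspectiveCameras, MeshRenderer, MeshRasterizer,
    RasterizationSettings, HardPhongShader, PointLights, look_at_view_transform,
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
mesh = load_objs_as_meshes(["data/cow.obj"], device=device)  # placeholder path

renderer = MeshRenderer(
    rasterizer=MeshRasterizer(raster_settings=RasterizationSettings(image_size=256)),
    shader=HardPhongShader(device=device),
)
lights = PointLights(device=device, location=[[0.0, 0.0, -3.0]])

frames = []
for azim in np.linspace(0, 360, 60, endpoint=False):
    # Place the camera on a circle around the object, looking at the origin.
    R, T = look_at_view_transform(dist=3.0, elev=0.0, azim=azim)
    cameras = FoVPerspectiveCameras(R=R, T=T, device=device)
    image = renderer(mesh, cameras=cameras, lights=lights)  # (1, H, W, 4)
    rgb = image[0, ..., :3].cpu().numpy()
    frames.append(np.clip(rgb * 255, 0, 255).astype(np.uint8))

imageio.mimsave("cow_360.gif", frames, fps=15)  # kwargs may vary by imageio version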
To build a tetrahedron mesh, we need 4 vertices and 4 triangle faces.
A cube needs 8 vertices and 6 x 2 = 12 triangle faces (each square face is split into 2 triangles).
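As a concrete sketch, the vertex and face lists could look like the following; the coordinates and the face winding here are example choices, not necessarily the ones used for the renders.

import numpy as np

# Tetrahedron: 4 vertices, 4 triangular faces (every choice of 3 vertices).
tetra_verts = np.array([[0, 0, 0], [1, 0, 0], [0.5, 1, 0], [0.5, 0.5, 1]], dtype=np.float32)
tetra_faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]], dtype=np.int64)

# Cube: 8 vertices, 6 square sides split into 2 triangles each -> 12 faces.
cube_verts = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], dtype=np.float32)
cube_faces = np.array([
    [0, 1, 3], [0, 3, 2],  # x = 0 side
    [4, 5, 7], [4, 7, 6],  # x = 1 side
    [0, 1, 5], [0, 5, 4],  # y = 0 side
    [2, 3, 7], [2, 7, 6],  # y = 1 side
    [0, 2, 6], [0, 6, 4],  # z = 0 side
    [1, 3, 7], [1, 7, 5],  # z = 1 side
], dtype=np.int64)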
The original image:
R = [[0, 1, 0], [-1, 0, 0], [0, 0, 1]] ; T = [0, 0, 0]
Rotate the camera about the z axis by 90 degrees:
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]] ; T = [0, 0, 2]
Move the camera by 2 along the z axis:
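As a quick sketch, the R used for the rotated view above can be reproduced from the standard axis-angle formula; whether it corresponds to +90 or -90 degrees depends on whether the renderer treats points as row or column vectors.

import numpy as np

def rot_z(deg):
    # Standard rotation about the z axis (column-vector convention).
    rad = deg * np.pi / 180
    return np.array([[np.cos(rad), -np.sin(rad), 0],
                     [np.sin(rad),  np.cos(rad), 0],
                     [0, 0, 1]])

print(np.round(rot_z(-90)).astype(int))    # [[0, 1, 0], [-1, 0, 0], [0, 0, 1]]
print(np.round(rot_z(90).T).astype(int))   # same matrix, as the transpose of +90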
import numpy as np

rad = 2 * np.pi / 180  # 2 degrees in radians
# Rotation about the y axis by 2 degrees.
R_y = np.array([[np.cos(rad), 0, np.sin(rad)], [0, 1, 0], [-np.sin(rad), 0, np.cos(rad)]])
# Rotation about the x axis by 2 degrees.
R_x = np.array([[1, 0, 0], [0, np.cos(rad), -np.sin(rad)], [0, np.sin(rad), np.cos(rad)]])
R = R_y @ R_x  # compose the two rotations
T = [0.3, -0.3, 0]
Rotate the camera about the y and x axes by 2 degrees each, then translate it by 0.3 and -0.3 along the x and y axes, respectively:
import numpy as np
rad = 90 * np.pi / 180  # 90 degrees in radians
# Rotation about the y axis by 90 degrees.
R = np.array([[np.cos(rad), 0, np.sin(rad)], [0, 1, 0], [-np.sin(rad), 0, np.cos(rad)]])
T = [-3, 0, 3]
Rotate the camera 90 degrees about the y axis, then translate it by -3 and 3 along the x and z axes, respectively:
From left to right: the point cloud from the 1st depth image; the point cloud from the 2nd depth image; the union of the two point clouds.
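For reference, here is a generic sketch of how a depth image can be lifted to a point cloud with a pinhole camera model; it is not necessarily the exact unprojection used for the renders above, and the intrinsics fx, fy, cx, cy and the depth array are placeholder inputs.

import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    # Unproject an (H, W) depth map into an (N, 3) point cloud in camera coordinates.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[depth.reshape(-1) > 0]  # drop invalid / background pixels

# The "union" of two clouds is simply their concatenation, once both are
# expressed in the same world frame:
# union = np.concatenate([pts1, pts2], axis=0)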
A point cloud of a torus (donut shape). The cloud is fairly dense, but you can still make out the individual points.
A torus mesh:
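The torus points can be sampled from the standard parametric form, as in the sketch below; the radii R_major and r_minor are example values, not necessarily the ones used above. For the mesh, one common route (not necessarily the one used here) is to evaluate the torus implicit function on a grid and extract a surface with marching cubes.

import numpy as np

def sample_torus(n=200, R_major=1.0, r_minor=0.4):
    # Parametric torus: sweep a circle of radius r_minor around a circle of radius R_major.
    theta, phi = np.meshgrid(np.linspace(0, 2 * np.pi, n), np.linspace(0, 2 * np.pi, n))
    x = (R_major + r_minor * np.cos(phi)) * np.cos(theta)
    y = (R_major + r_minor * np.cos(phi)) * np.sin(theta)
    z = r_minor * np.sin(phi)
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)  # (n*n, 3) points

points = sample_torus(n=200)  # a dense cloud of 40,000 points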
Generally, rendering a point cloud is much faster than rendering a mesh: points are very simple primitives, so each one requires little computation. However, meshes give better rendering quality. With only a few points it is hard to tell the 3D shape of an object, so a fairly dense point cloud is needed before the true shape becomes recognizable, which significantly increases the amount of data to store and process.
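A rough back-of-the-envelope comparison of the storage cost; all counts below are illustrative assumptions, not measured values.

n_points = 100_000                       # dense cloud needed to convey the shape
cloud_floats = n_points * (3 + 3)        # xyz + rgb per point -> 600,000 floats

n_verts, n_faces = 5_000, 10_000         # a modest mesh of similar visual quality
mesh_floats = n_verts * (3 + 3)          # xyz + rgb per vertex -> 30,000 floats
mesh_ints = n_faces * 3                  # vertex indices per triangle -> 30,000 ints
# Roughly an order of magnitude less data for the mesh in this made-up example.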
This is a point cloud of a gas pipe, scanned by an in-pipe inspection robot (Biorobotics Lab, Carnegie Mellon University). The anomalous region is a calibration board, which is used to measure the 3D scanning accuracy.