My mesh has 4 vertices, (0, 0, 0), (0, 0, 1), (0, 1, 0), and (1, 0, 0), and 3 triangular faces.
My cube has 8 vertices, at (1, 1, 1), (1, -1, 1), (-1, -1, 1), (-1, 1, 1), (1, 1, -1), (1, -1, -1), (-1, -1, -1), and (-1, 1, -1), and 12 triangular faces (two per side).
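For reference, here is a minimal sketch of how both meshes can be built with PyTorch3D's `Meshes` structure; the variable names are mine, and the face lists are one plausible triangulation rather than the original code:

```python
import torch
from pytorch3d.structures import Meshes

# Tetrahedron vertices; each face is a triple of vertex indices.
tetra_verts = torch.tensor(
    [[0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0]], dtype=torch.float32
)
# The 3 faces described above (a closed tetrahedron would also need [1, 2, 3]).
tetra_faces = torch.tensor([[0, 1, 2], [0, 1, 3], [0, 2, 3]], dtype=torch.int64)
tetra_mesh = Meshes(verts=[tetra_verts], faces=[tetra_faces])

# Cube vertices in the order given above; 12 triangular faces, 2 per side.
cube_verts = torch.tensor(
    [
        [1, 1, 1], [1, -1, 1], [-1, -1, 1], [-1, 1, 1],
        [1, 1, -1], [1, -1, -1], [-1, -1, -1], [-1, 1, -1],
    ],
    dtype=torch.float32,
)
cube_faces = torch.tensor(
    [
        [0, 1, 2], [0, 2, 3],  # +z side
        [4, 6, 5], [4, 7, 6],  # -z side
        [0, 4, 5], [0, 5, 1],  # +x side
        [3, 2, 6], [3, 6, 7],  # -x side
        [0, 3, 7], [0, 7, 4],  # +y side
        [1, 5, 6], [1, 6, 2],  # -y side
    ],
    dtype=torch.int64,
)
cube_mesh = Meshes(verts=[cube_verts], faces=[cube_faces])
```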
In my rendering, color1 = [0, 1, 1] (cyan) and color2 = [1, 0, 1] (magenta).
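A sketch of the retexturing these two colors feed into: each vertex is blended between color1 and color2 according to its normalized z-coordinate. The variable names (`verts`, `alpha`) are assumptions; the blend itself is standard linear interpolation:

```python
import torch

color1 = torch.tensor([0.0, 1.0, 1.0])  # cyan
color2 = torch.tensor([1.0, 0.0, 1.0])  # magenta

verts = torch.rand(100, 3)  # placeholder for the mesh's (V, 3) vertex positions

# Normalize each vertex's z-coordinate to [0, 1].
z = verts[:, 2]
alpha = (z - z.min()) / (z.max() - z.min())

# Linear blend: alpha = 0 gives color1, alpha = 1 gives color2; result is (V, 3).
colors = alpha[:, None] * color2 + (1 - alpha)[:, None] * color1
```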
For output 1: R_relative = [[0, 1, 0], [-1, 0, 0], [0, 0, 1]], T_relative = [0, 0, 0].
R_relative rotates the camera by 90 degrees (anticlockwise) about the z-axis.
For output 2: R_relative = [[1, 0, 0], [0, 1, 0], [0, 0, 1]], T_relative = [0, 0, 2].
There is no additional rotation; T_relative shifts the camera away from the cow along the z-axis.
For output 3: R_relative = [[1, 0, 0], [0, 1, 0], [0, 0, 1]], T_relative = [0.5, -0.2, 0].
There is no additional rotation; T_relative shifts the camera within a plane parallel to the XY-plane, moving it to the right and slightly downward.
For output 4: R_relative = [[0, 0, 1], [0, 1, 0], [-1, 0, 0]], T_relative = [-3.0, 0, 3].
R_relative rotates the camera by 90 degrees (anticlockwise) about the y-axis, and T_relative moves the camera within the XZ-plane so that it views the cow from the side.
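All four outputs compose the relative transform with the same base camera pose. A minimal sketch of that composition, assuming the starter code combines the poses as R = R_relative @ R_0 and T = R_relative @ T_0 + T_relative, with the base camera 3 units in front of the cow (both assumptions); output 1 is used as the example:

```python
import torch
from pytorch3d.renderer import FoVPerspectiveCameras

# Assumed base pose: identity rotation, camera 3 units in front of the cow.
R_0 = torch.eye(3)
T_0 = torch.tensor([0.0, 0.0, 3.0])

# Output 1: 90-degree rotation about the z-axis, no extra translation.
R_relative = torch.tensor([[0.0, 1.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
T_relative = torch.tensor([0.0, 0.0, 0.0])

# Compose: first apply the base pose, then the relative transform.
R = R_relative @ R_0
T = R_relative @ T_0 + T_relative

cameras = FoVPerspectiveCameras(R=R[None], T=T[None])
```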
The point cloud representation is better than the mesh in terms of ease of rendering and memory usage: it stores only per-point positions and colors, with no connectivity, and each point can be rasterized independently. The mesh representation is better in terms of rendering quality and ease of use: its faces define a continuous surface, so it renders without holes even up close.
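As a rough illustration of the memory claim (the sizes below are hypothetical, not measured from my renders):

```python
# Hypothetical sizes: float32 positions, int64 face indices.
n_points = 10_000
pointcloud_bytes = n_points * 3 * 4              # xyz only: ~120 KB

n_verts, n_faces = 10_000, 20_000                # a mesh of comparable detail
mesh_bytes = n_verts * 3 * 4 + n_faces * 3 * 8   # vertices + faces: ~600 KB
```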
This type of rendering is useful for AR applications that let you monitor the health of plants in your home or garden.
At 10 sampled points:
At 100 sampled points:
At 1000 sampled points:
At 10000 sampled points:
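These point clouds come from sampling points uniformly on the cow mesh's surface. A sketch of the standard technique (area-weighted face choice followed by uniform barycentric coordinates); the function name `sample_points` is mine, not the starter code's:

```python
import torch

def sample_points(verts: torch.Tensor, faces: torch.Tensor, n: int) -> torch.Tensor:
    """Sample n points uniformly from the surface of a triangle mesh.

    verts: (V, 3) float vertex positions; faces: (F, 3) integer vertex indices.
    """
    v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]

    # Choose faces with probability proportional to their area.
    areas = 0.5 * torch.linalg.norm(torch.cross(v1 - v0, v2 - v0, dim=1), dim=1)
    face_idx = torch.multinomial(areas, n, replacement=True)

    # Uniform barycentric coordinates (the square-root trick).
    u = torch.sqrt(torch.rand(n, 1))
    v = torch.rand(n, 1)
    w0, w1, w2 = 1 - u, u * (1 - v), u * v

    return w0 * v0[face_idx] + w1 * v1[face_idx] + w2 * v2[face_idx]

# e.g. points = sample_points(verts, faces, 1000) for the 1000-point render
```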