The mesh has 4 vertices and 4 faces. The face indices (one vertex triple per triangle) are [[0,1,2],[1,2,3],[2,3,0],[3,0,1]]
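The face list can be sanity-checked numerically: in a watertight triangle mesh every edge is shared by exactly two faces, and V − E + F = 2. A minimal sketch (only the connectivity comes from the list above):

```python
import numpy as np
from collections import Counter

# Face indices from the report; each row is one triangle.
faces = np.array([[0, 1, 2], [1, 2, 3], [2, 3, 0], [3, 0, 1]])

# Count how many faces touch each undirected edge.
edge_count = Counter()
for a, b, c in faces:
    for e in [(a, b), (b, c), (c, a)]:
        edge_count[tuple(sorted(e))] += 1

# Watertight: every edge shared by exactly 2 faces;
# Euler characteristic V - E + F = 4 - 6 + 4 = 2.
assert all(n == 2 for n in edge_count.values())
assert 4 - len(edge_count) + len(faces) == 2
```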
The mesh has 8 vertices and 12 faces. The vertex coordinates are [[-1.,-1.,-1.],[1.,-1.,-1.],[-1.,1.,-1.],[-1.,-1.,1.],[1.,1.,-1.],[1.,-1.,1.],[-1.,1.,1.],[1.,1.,1.]]
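One way to see why 8 vertices yield 12 triangular faces: each of the cube's 6 quad faces splits into 2 triangles. A sketch (the triangulation below is one plausible split, not necessarily the one the renderer used):

```python
import numpy as np

# Vertex coordinates from the report, in the same order.
verts = np.array([[-1., -1., -1.], [1., -1., -1.], [-1., 1., -1.],
                  [-1., -1., 1.], [1., 1., -1.], [1., -1., 1.],
                  [-1., 1., 1.], [1., 1., 1.]])

# Each quad face of the cube split into two triangles (6 * 2 = 12).
faces = np.array([
    [0, 1, 4], [0, 4, 2],   # z = -1
    [3, 5, 7], [3, 7, 6],   # z = +1
    [0, 2, 6], [0, 6, 3],   # x = -1
    [1, 5, 7], [1, 7, 4],   # x = +1
    [0, 1, 5], [0, 5, 3],   # y = -1
    [2, 4, 7], [2, 7, 6],   # y = +1
])

edges = {tuple(sorted(e)) for f in faces
         for e in [(f[0], f[1]), (f[1], f[2]), (f[2], f[0])]}
# Euler characteristic: V - E + F = 8 - 18 + 12 = 2.
assert len(verts) - len(edges) + len(faces) == 2
```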
The rendering uses the following colors: color1 = [0, 1, 0] (green), color2 = [1, 0, 0] (red)
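Assuming the two colors blend linearly along one axis of the mesh (a common per-vertex retexturing scheme; the z-gradient and the sample vertices below are my assumption, not read off the render itself):

```python
import numpy as np

color1 = np.array([0.0, 1.0, 0.0])  # green, assigned at the smallest z
color2 = np.array([1.0, 0.0, 0.0])  # red, assigned at the largest z

# Hypothetical vertex positions; only their z-range matters here.
verts = np.array([[0.0, 0.0, -1.0], [0.5, 0.5, 0.0], [0.0, 0.0, 1.0]])

z = verts[:, 2]
alpha = (z - z.min()) / (z.max() - z.min())     # 0 at z_min, 1 at z_max
vert_colors = alpha[:, None] * color2 + (1 - alpha[:, None]) * color1
```

Vertices at the extremes get the pure colors, and everything in between is a linear blend.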
R_relative = [[0, 1, 0], [-1, 0, 0], [0, 0, 1]], T_relative = [0, 0, 0]
R_relative rotates the camera by 90 degrees about the z-axis (a roll); the translation is left unchanged.
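This can be checked numerically: the matrix equals a standard rotation about z. Note the sign depends on convention: rotating the camera by +θ about an axis corresponds to applying the rotation by −θ to the points.

```python
import numpy as np

theta = -np.pi / 2  # -90 deg applied to points = +90 deg applied to the camera
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
R_relative = np.array([[0., 1., 0.], [-1., 0., 0.], [0., 0., 1.]])
assert np.allclose(Rz, R_relative)
```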
R_relative = [[1, 0, 0], [0, 1, 0], [0, 0, 1]], T_relative = [0, 0, 2]
With R_relative as the identity, T_relative shifts the camera 2 units away from the cow along the z-axis.
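Under the usual pinhole extrinsics x_cam = R·x + T, an identity R with T = [0, 0, 2] increases every point's camera-space depth by 2, so the object appears farther away. A sketch with a hypothetical world point:

```python
import numpy as np

R = np.eye(3)
T = np.array([0.0, 0.0, 2.0])
point = np.array([0.2, -0.1, 3.0])   # hypothetical world point
cam_point = R @ point + T            # depth grows from 3.0 to 5.0
```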
R_relative = [[1, 0, 0], [0, 1, 0], [0, 0, 1]], T_relative = [0.5, -0.2, 0]
T_relative shifts the camera in the XY plane: 0.5 units along the x-axis and -0.2 units along the y-axis.
R_relative = [[0, 0, 1], [0, 1, 0], [-1, 0, 0]], T_relative = [-3.0, 0, 3]
R_relative rotates the camera by 90 degrees about the y-axis, and T_relative moves the camera so the cow remains centered in the frame.
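The matrix matches a standard 90-degree rotation about y (as before, the sign depends on whether the rotation is read as acting on the points or on the camera):

```python
import numpy as np

theta = np.pi / 2
Ry = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
               [ 0.0,           1.0, 0.0          ],
               [-np.sin(theta), 0.0, np.cos(theta)]])
R_relative = np.array([[0., 0., 1.], [0., 1., 0.], [-1., 0., 0.]])
assert np.allclose(Ry, R_relative)
```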
The mesh reconstruction is better in terms of connectivity, surface structure, and rendering quality, because its faces define a continuous surface. The point cloud reconstruction is better in terms of rendering speed and memory usage, because it stores only unconnected points.
In this problem, I imported two pieces of furniture (sofas) into a room scene at two different locations. This could be used in an augmented-reality setting, where furniture and other objects are placed in a room at chosen locations with different textures and visualized before purchase.
At 10 sampled points:
At 100 sampled points:
At 1000 sampled points:
At 10000 sampled points:
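The point clouds above can be produced with area-weighted barycentric sampling on the mesh surface; a minimal NumPy sketch (the function name and interface are my own, not from any particular library):

```python
import numpy as np

def sample_points_on_mesh(verts, faces, n, seed=0):
    """Uniformly sample n points on a triangle mesh's surface."""
    rng = np.random.default_rng(seed)
    v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    # Pick faces with probability proportional to their area.
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    idx = rng.choice(len(faces), size=n, p=areas / areas.sum())
    # Uniform barycentric coordinates inside each chosen triangle.
    u, v = rng.random(n), rng.random(n)
    flip = u + v > 1
    u[flip], v[flip] = 1 - u[flip], 1 - v[flip]  # fold back into the triangle
    w = 1.0 - u - v
    return u[:, None] * v0[idx] + v[:, None] * v1[idx] + w[:, None] * v2[idx]
```

As the sample count grows from 10 to 10000, the cloud approximates the underlying surface more and more densely.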