Tetrahedron:
Number of vertices: 4
Number of faces: 4

Cube (each of the 6 square faces is split into two triangles):
Number of vertices: 8
Number of faces: 12
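As a sanity check, these counts can be read directly off the mesh. A minimal sketch, assuming PyTorch3D's Meshes structure (the tetrahedron coordinates below are illustrative placeholders, not the exact ones used):

    import torch
    from pytorch3d.structures import Meshes

    # Illustrative tetrahedron: 4 vertices, 4 triangular faces.
    verts = torch.tensor(
        [[0.0, 0.0, 0.0],
         [1.0, 0.0, 0.0],
         [0.0, 1.0, 0.0],
         [0.0, 0.0, 1.0]]
    )
    faces = torch.tensor(
        [[0, 1, 2],
         [0, 1, 3],
         [0, 2, 3],
         [1, 2, 3]]
    )
    mesh = Meshes(verts=[verts], faces=[faces])

    print("Number of vertices:", mesh.verts_packed().shape[0])  # 4
    print("Number of faces:", mesh.faces_packed().shape[0])     # 4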
color1 = [0, 0, 1]  # blue (RGB)
color2 = [1, 0, 0]  # red (RGB)
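These two colors serve as the endpoints of a per-vertex color gradient across the mesh. A minimal sketch of such a blend, assuming PyTorch3D's TexturesVertex; the placeholder vertices and the choice of the z axis for the gradient are my assumptions:

    import torch
    from pytorch3d.renderer import TexturesVertex

    color1 = torch.tensor([0.0, 0.0, 1.0])  # blue
    color2 = torch.tensor([1.0, 0.0, 0.0])  # red

    verts = torch.rand(100, 3)  # placeholder vertex positions, shape (V, 3)

    # Blend linearly from color1 to color2 along the z extent of the mesh.
    z = verts[:, 2]
    alpha = ((z - z.min()) / (z.max() - z.min())).unsqueeze(-1)  # (V, 1) in [0, 1]
    vertex_colors = alpha * color2 + (1 - alpha) * color1        # (V, 3)

    textures = TexturesVertex(verts_features=vertex_colors.unsqueeze(0))  # (1, V, 3)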
Description: Rotate the mesh by -90 degrees about the z axis
R_relative: [[np.cos(theta), np.sin(theta), 0], [-np.sin(theta), np.cos(theta), 0], [0, 0, 1]]  (theta = np.pi / 2; this is the transpose of the standard z-axis rotation, i.e. a rotation by -theta = -90 degrees)
T_relative: [0, 0, 0]
Description: Translate the mesh along the z axis by 2.0
R_relative: [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
T_relative: [0, 0, 2.0]
Description: Translate the mesh along the x and y axes by 0.5 and -0.5, respectively
R_relative: [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
T_relative: [0.5, -0.5, 0]
Description: Rotate 90 degrees about the y axis, and translate the mesh along the x and z axes by -3.0 and 3.0, respectively
R_relative: [[np.cos(theta), 0, np.sin(theta)], [0, 1, 0], [-np.sin(theta), 0, np.cos(theta)]]  (theta = np.pi / 2; this is the standard y-axis rotation R_y(theta))
T_relative: [-3.0, 0, 3.0]
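The relative transforms above compose with a base camera extrinsic before rendering. A minimal sketch for the last case, assuming the composition rule R = R_relative @ R_0 and T = R_relative @ T_0 + T_relative; the base extrinsic (identity rotation, camera 3 units from the object) is an assumption:

    import numpy as np
    import torch
    from pytorch3d.renderer import FoVPerspectiveCameras

    theta = np.pi / 2  # 90 degrees

    # Last case above: rotate 90 degrees about y, translate by (-3.0, 0, 3.0).
    R_relative = torch.tensor(
        [[np.cos(theta), 0, np.sin(theta)],
         [0, 1, 0],
         [-np.sin(theta), 0, np.cos(theta)]],
        dtype=torch.float32,
    )
    T_relative = torch.tensor([-3.0, 0.0, 3.0])

    # Assumed base extrinsic: identity rotation, camera 3 units back.
    R_0 = torch.eye(3)
    T_0 = torch.tensor([0.0, 0.0, 3.0])

    R = R_relative @ R_0
    T = R_relative @ T_0 + T_relative
    cameras = FoVPerspectiveCameras(R=R.unsqueeze(0), T=T.unsqueeze(0))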
The rendering speed of the point cloud is faster than that of the mesh: point-cloud rendering only has to consider the pixels onto which the 3D points project, whereas mesh rendering determines each pixel's color by interpolating over the barycentric coordinates of the corresponding face, which requires inspecting many more pixels. Conversely, the rendering quality of the mesh is better than that of the point cloud, precisely because of this interpolation: a point-cloud rendering is sparse, and it is sometimes hard to tell what object is shown. Both representations are easy to use and supported by many renderers, but implementing mesh rendering is considerably more complex, because pixel- or face-level parallelization is essential to achieve practical runtimes. Finally, in terms of memory usage, the mesh is superior: a point cloud requires many 3D points to produce a high-quality image, whereas a mesh does not, thanks to barycentric interpolation.
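A rough way to check the speed claim is to time both renderers on toy inputs. A minimal sketch using PyTorch3D's standard mesh and point-cloud pipelines; the scene contents, image size, and repetition count are placeholders, not the settings used for the figures:

    import time
    import torch
    from pytorch3d.structures import Meshes, Pointclouds
    from pytorch3d.renderer import (
        FoVPerspectiveCameras, PointLights, TexturesVertex,
        RasterizationSettings, MeshRenderer, MeshRasterizer, HardPhongShader,
        PointsRasterizationSettings, PointsRenderer, PointsRasterizer, AlphaCompositor,
    )

    cameras = FoVPerspectiveCameras()

    # Toy scene: a single white triangle and a random point cloud, both placed
    # in front of the default camera.
    verts = torch.tensor([[-1.0, -1.0, 2.0], [1.0, -1.0, 2.0], [0.0, 1.0, 2.0]])
    mesh = Meshes(
        verts=[verts], faces=[torch.tensor([[0, 1, 2]])],
        textures=TexturesVertex(verts_features=torch.ones(1, 3, 3)),
    )
    points = Pointclouds(
        points=[torch.rand(5000, 3) + torch.tensor([0.0, 0.0, 2.0])],
        features=[torch.rand(5000, 3)],
    )

    mesh_renderer = MeshRenderer(
        rasterizer=MeshRasterizer(cameras=cameras,
                                  raster_settings=RasterizationSettings(image_size=512)),
        shader=HardPhongShader(cameras=cameras, lights=PointLights()),
    )
    point_renderer = PointsRenderer(
        rasterizer=PointsRasterizer(cameras=cameras,
                                    raster_settings=PointsRasterizationSettings(image_size=512)),
        compositor=AlphaCompositor(),
    )

    def avg_time(render, scene, n=10):
        # Average wall-clock seconds per render over n repetitions.
        t0 = time.perf_counter()
        for _ in range(n):
            render(scene)
        return (time.perf_counter() - t0) / n

    print("mesh render  :", avg_time(mesh_renderer, mesh))
    print("point render :", avg_time(point_renderer, points))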
PyTorch3D can be used as a differentiable renderer (differentiable with respect to vertices and textures). I used PyTorch3D to attach the target image to the mesh constructed from the implicit surface X**6 + Y**7 + Z**8 - R**9 = 0. To this end, the L2 norm between the target image (right) and the image rendered with learnable textures is minimized, so that the mesh takes on the same texture as the target image.
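A minimal sketch of that optimization loop, assuming learnable per-vertex colors via TexturesVertex; here `mesh`, `renderer`, and `target_image` stand in for the actual implicit-surface mesh, a standard PyTorch3D mesh renderer (e.g., as constructed above), and the target image:

    import torch
    from pytorch3d.renderer import TexturesVertex

    # Learnable per-vertex colors, initialized to gray.
    V = mesh.verts_packed().shape[0]
    verts_rgb = torch.full((1, V, 3), 0.5, requires_grad=True)
    optimizer = torch.optim.Adam([verts_rgb], lr=0.01)

    for _ in range(500):
        optimizer.zero_grad()
        mesh.textures = TexturesVertex(verts_features=verts_rgb)
        rendered = renderer(mesh)[0, ..., :3]           # (H, W, 3), alpha dropped
        loss = ((rendered - target_image) ** 2).mean()  # L2 between render and target
        loss.backward()
        optimizer.step()

Since texture interpolation is differentiable, the gradient of the image-space L2 loss flows back into verts_rgb, which converges toward the target's colors wherever the mesh is visible.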