I show the cow mesh from 200 continuously changing viewpoints covering a full 360 degrees, as shown below.
I show the dolly zoom effect with the field of view varying from 5 to 120 degrees over 100 frames.
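The dolly zoom works by moving the camera as the field of view changes so the subject keeps a constant apparent size: an object of width w stays the same size on screen when the camera distance is d = w / (2 tan(fov/2)). A minimal sketch of the distance schedule (the subject width of 2 units is an assumption for illustration):

```python
import math

def dolly_zoom_distances(fov_start=5.0, fov_end=120.0, num_frames=100, width=2.0):
    """Camera distance per frame so an object of the given width keeps a
    constant apparent size as the field of view widens."""
    distances = []
    for i in range(num_frames):
        fov = fov_start + (fov_end - fov_start) * i / (num_frames - 1)
        # Distance at which the object exactly spans the field of view.
        d = width / (2.0 * math.tan(math.radians(fov) / 2.0))
        distances.append(d)
    return distances

dists = dolly_zoom_distances()
# A narrow FOV (5 degrees) needs a far camera; a wide FOV (120) a near one.
```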
My tetrahedron and the 3D coordinates of its vertices are shown below.
The 360-degree gif animation of my tetrahedron is shown below:
The mesh has 4 vertices and 4 (triangle) faces. Also, to make it easier to observe, I randomize the color of each vertex and show it below:
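For reference, one way to lay out such a tetrahedron as a vertex list plus triangle-index faces (the coordinates here are illustrative, not necessarily the ones I used):

```python
import random

# 4 vertices of a tetrahedron (illustrative coordinates).
vertices = [
    ( 1.0,  1.0,  1.0),
    ( 1.0, -1.0, -1.0),
    (-1.0,  1.0, -1.0),
    (-1.0, -1.0,  1.0),
]
# Each triangle face is a triple of vertex indices; 4 faces total.
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

# One random RGB color per vertex, as in the renders above.
colors = [tuple(random.random() for _ in range(3)) for _ in vertices]
```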
The 360-degree gif animation of my cube is shown below:
The mesh has 8 vertices and 12 (triangle) faces. Also, to make it easier to observe, I randomize the color of each vertex, adjust the camera angle, and show it below:
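The cube can be laid out the same way: each of its 6 square faces is split into 2 triangles, giving the 12 triangle faces. The vertex ordering below is one possible choice:

```python
# 8 cube corners.
vertices = [
    (-1, -1, -1), ( 1, -1, -1), ( 1,  1, -1), (-1,  1, -1),  # z = -1 face
    (-1, -1,  1), ( 1, -1,  1), ( 1,  1,  1), (-1,  1,  1),  # z = +1 face
]
# Each square face split into two triangles -> 12 triangle faces.
faces = [
    (0, 1, 2), (0, 2, 3),  # back
    (4, 6, 5), (4, 7, 6),  # front
    (0, 4, 5), (0, 5, 1),  # bottom
    (3, 2, 6), (3, 6, 7),  # top
    (0, 3, 7), (0, 7, 4),  # left
    (1, 5, 6), (1, 6, 2),  # right
]
```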
I set color1 = [0, 0, 1] and color2 = [1, 0, 0]. The gif is shown below:
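The texture linearly interpolates between color1 and color2 along one axis of the mesh (I assume the z-coordinate here). A minimal sketch of the per-vertex blend:

```python
def blend_colors(z_values, color1=(0.0, 0.0, 1.0), color2=(1.0, 0.0, 0.0)):
    """Per-vertex color: color1 at the smallest z, color2 at the largest z,
    linearly interpolated in between."""
    z_min, z_max = min(z_values), max(z_values)
    colors = []
    for z in z_values:
        alpha = (z - z_min) / (z_max - z_min)  # 0 at z_min, 1 at z_max
        colors.append(tuple((1 - alpha) * c1 + alpha * c2
                            for c1, c2 in zip(color1, color2)))
    return colors

colors = blend_colors([0.0, 0.5, 1.0])
```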
The original image, rendered using the camera extrinsics rotation R_0 and translation T_0, is shown below:
To get the first image, R_relative and T_relative should make the cow rotate 90 degrees clockwise around the z-axis relative to the camera. According to the formula of rotation around the z-axis:
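For reference, the standard right-handed rotation matrix about the z-axis, and its value at θ = -90° (a 90-degree clockwise rotation when viewed from the +z direction), are:

$$
R_z(\theta) =
\begin{bmatrix}
\cos\theta & -\sin\theta & 0 \\
\sin\theta & \cos\theta & 0 \\
0 & 0 & 1
\end{bmatrix},
\qquad
R_z(-90^\circ) =
\begin{bmatrix}
0 & 1 & 0 \\
-1 & 0 & 0 \\
0 & 0 & 1
\end{bmatrix}.
$$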
I choose
The rendering result is:
To get the second image, R_relative and T_relative should move the cow farther from the camera without rotation. I choose
This moves the camera 2 units away from its current position along the z-axis, and the rendering result is:
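Assuming the relative transform composes with the initial extrinsics as R = R_relative · R_0 and T = R_relative · T_0 + T_relative (the convention in the starter code I used; treat it as an assumption here), the translation-only case looks like this:

```python
def compose(R_rel, T_rel, R0, T0):
    """New camera extrinsics from a relative rotation/translation, under the
    assumed convention R = R_rel @ R0, T = R_rel @ T0 + T_rel."""
    R = [[sum(R_rel[i][k] * R0[k][j] for k in range(3)) for j in range(3)]
         for i in range(3)]
    T = [sum(R_rel[i][k] * T0[k] for k in range(3)) + T_rel[i] for i in range(3)]
    return R, T

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
# Identity relative rotation plus a +2 translation along z pushes the cow
# away from a camera that started at depth 3.
R, T = compose(I3, [0, 0, 2], I3, [0, 0, 3])
```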
The third image can be obtained by choosing R_relative and T_relative to make the cow move along the x-axis and y-axis without rotation relative to the camera. I choose
This moves the cow 0.5 units along the x-axis and -0.5 units along the y-axis relative to the camera. The rendering result is:
The fourth image can be obtained by choosing R_relative and T_relative to rotate the cow 90 degrees counter-clockwise around the y-axis relative to the camera first, then translate it along the x-axis and z-axis relative to the camera.
According to the formula of rotation around the y-axis:
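For reference, the standard right-handed rotation matrix about the y-axis, and its value at θ = 90° (a 90-degree counter-clockwise rotation when viewed from the +y direction), are:

$$
R_y(\theta) =
\begin{bmatrix}
\cos\theta & 0 & \sin\theta \\
0 & 1 & 0 \\
-\sin\theta & 0 & \cos\theta
\end{bmatrix},
\qquad
R_y(90^\circ) =
\begin{bmatrix}
0 & 0 & 1 \\
0 & 1 & 0 \\
-1 & 0 & 0
\end{bmatrix}.
$$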
I choose
This rotates the cow 90 degrees counter-clockwise around the y-axis relative to the camera first, then moves it -3 units along the x-axis and 3 units along the z-axis relative to the camera. The rendering result is:
The results with cameras initialized 6 units from the origin are shown below (from left to right: first image, second image, union image):
I show the results of the torus with 100 samples (left) and 1000 samples (right) below:
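The torus points come from its parametric equations with major radius R and minor radius r (the radii below are assumptions for illustration):

```python
import math
import random

def sample_torus(num_samples, R=1.0, r=0.5):
    """Sample (u, v) uniformly in [0, 2*pi) and map through the torus
    parametric equations."""
    points = []
    for _ in range(num_samples):
        u = random.uniform(0, 2 * math.pi)  # angle around the main ring
        v = random.uniform(0, 2 * math.pi)  # angle around the tube
        x = (R + r * math.cos(v)) * math.cos(u)
        y = (R + r * math.cos(v)) * math.sin(u)
        z = r * math.sin(v)
        points.append((x, y, z))
    return points

pts = sample_torus(1000)
```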
The torus mesh is shown below:
I compare the time of rendering as a mesh vs. as a point cloud. The mesh takes around 24.1 s. Rendering as a point cloud takes about 47.9 s with 1000 sampled points but only about 16.2 s with 100 points. So in terms of rendering speed, a point cloud can be faster when only a few points are sampled, though this also lowers rendering quality; sampling more points improves quality but takes longer. The mesh gives good rendering quality (curved objects like this torus are still approximated) but also takes longer to render.

In terms of memory, a point cloud must store all of its points, which can consume a lot of memory when the number of points is large. A mesh stores vertices and faces, which may need less memory than a high-resolution point cloud but is still substantial.

Different scenarios call for different 3D representations. A point cloud is fast to render with few points and directly represents a surface by sampled points, but it is just a set of points with no explicit connectivity information. A mesh provides connectivity and makes some transformations easy, but meshes of curved objects like this torus are only approximations. We should choose the representation appropriate to the task.
The Möbius strip is a famous topological surface on which a bug can crawl over the entire surface without ever stepping over an edge. Here I render a Möbius strip to give an intuitive display. I also change the elevation and azimuth simultaneously so the Möbius strip can be seen from different views. I render the Möbius strip using a parametric function with 100 samples (left) and 1000 samples (right) below:
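The strip's points come from the standard Möbius parametrization, where the u/2 terms produce the half-twist (the ring radius R and half-width w below are assumptions for illustration):

```python
import math
import random

def sample_mobius(num_samples, R=1.0, w=0.5):
    """Sample the Mobius strip parametric surface: u in [0, 2*pi) runs
    around the strip, v in [-w, w] spans its width."""
    points = []
    for _ in range(num_samples):
        u = random.uniform(0, 2 * math.pi)
        v = random.uniform(-w, w)
        x = (R + v * math.cos(u / 2)) * math.cos(u)
        y = (R + v * math.cos(u / 2)) * math.sin(u)
        z = v * math.sin(u / 2)
        points.append((x, y, z))
    return points

pts = sample_mobius(1000)
```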
To sample points on the mesh, I first compute the areas of all the faces, then sample faces with probability proportional to their areas, and finally sample a uniformly random barycentric coordinate within the chosen face using the method from the lecture.
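The pipeline above can be sketched as follows, with a unit tetrahedron standing in for the actual mesh; the square-root trick for uniform barycentric coordinates is the standard one:

```python
import math
import random

def triangle_area(a, b, c):
    """Area via the cross product of two edge vectors."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cross = (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
    return 0.5 * math.sqrt(sum(x * x for x in cross))

def sample_points(vertices, faces, num_samples):
    areas = [triangle_area(*(vertices[i] for i in f)) for f in faces]
    points = []
    for _ in range(num_samples):
        # 1) Pick a face with probability proportional to its area.
        f = random.choices(faces, weights=areas)[0]
        a, b, c = (vertices[i] for i in f)
        # 2) Uniform barycentric coordinates (sqrt avoids clustering at a corner).
        u1, u2 = random.random(), random.random()
        s = math.sqrt(u1)
        w0, w1, w2 = 1 - s, s * (1 - u2), s * u2
        points.append(tuple(w0*a[i] + w1*b[i] + w2*c[i] for i in range(3)))
    return points

verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
tets = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
sampled = sample_points(verts, tets, 100)
```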
Result of randomly sampling 10 points:
Result of randomly sampling 100 points:
Result of randomly sampling 1000 points:
Result of randomly sampling 10000 points: