16-889 Assignment 1
Name: Hiresh Gupta
Andrew ID: hireshg
Question 1
Question 1.1
Usage:
python -m code.q1_1_render_360
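For reference, the 360-degree view boils down to one camera per azimuth step. A minimal sketch of that loop, assuming PyTorch3D (the dist/elev values are illustrative, not necessarily the ones in my script):

```python
import torch
from pytorch3d.renderer import FoVPerspectiveCameras, look_at_view_transform

# One camera per azimuth step around the object (dist/elev are illustrative).
azims = torch.arange(0, 360, 10).float()  # 36 evenly spaced views
R, T = look_at_view_transform(dist=3.0, elev=0.0, azim=azims)
cameras = FoVPerspectiveCameras(R=R, T=T)
# Render one frame per camera with the mesh renderer, then stitch the frames
# into a gif (e.g. with imageio.mimsave).
```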
Question 1.2
Usage:
python -m code.q1_2_dolly_zoom --num_frames 40 --duration 2
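The dolly zoom keeps the subject's apparent size fixed while the field of view changes, which ties the camera distance to the fov. A hedged sketch of that relation (the subject width is an illustrative placeholder):

```python
import math
import torch

# An object of width w fills the view when w = 2 * d * tan(fov / 2), so to
# keep its apparent size fixed we set d = w / (2 * tan(fov / 2)) per frame.
num_frames = 40
fovs = torch.linspace(5, 120, num_frames) * math.pi / 180.0  # degrees -> radians
width = 3.0  # illustrative subject extent
distances = width / (2 * torch.tan(fovs / 2))
# Each (fov, distance) pair then defines one FoVPerspectiveCameras frame.
```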
Question 2
Question 2.1
A tetrahedron has 4 vertices and 4 faces.
Usage:
python -m code.q2_render_shapes --obj_shape tetrahedron --output_path images/q2_tetrahedron.gif
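A minimal sketch of how such a mesh can be built by hand in PyTorch3D (the exact vertex coordinates in my script may differ):

```python
import torch
from pytorch3d.structures import Meshes

# Four alternating corners of a cube form a regular tetrahedron; every
# unordered triple of vertices is one of the 4 triangular faces.
verts = torch.tensor([[ 1.0,  1.0,  1.0],
                      [ 1.0, -1.0, -1.0],
                      [-1.0,  1.0, -1.0],
                      [-1.0, -1.0,  1.0]])
faces = torch.tensor([[0, 1, 2],
                      [0, 1, 3],
                      [0, 2, 3],
                      [1, 2, 3]], dtype=torch.int64)
mesh = Meshes(verts=[verts], faces=[faces])
```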
Question 2.2
A cube has 8 vertices and 12 triangular faces: each of the 6 square sides is split into 2 triangles of 3 vertices each.
Usage:
python -m code.q2_render_shapes --obj_shape cube --output_path images/q2_cube.gif
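Likewise, a hedged sketch of a hand-built cube mesh (the vertex ordering and triangulation shown are one valid choice among several):

```python
import torch
from pytorch3d.structures import Meshes

# 8 cube corners; each of the 6 square sides is split into 2 triangles,
# giving 12 triangular faces.
verts = torch.tensor([[x, y, z] for x in (-1.0, 1.0)
                                for y in (-1.0, 1.0)
                                for z in (-1.0, 1.0)])
faces = torch.tensor([[0, 1, 2], [1, 3, 2],   # x = -1 side
                      [4, 6, 5], [5, 6, 7],   # x = +1 side
                      [0, 4, 1], [1, 4, 5],   # y = -1 side
                      [2, 3, 6], [3, 7, 6],   # y = +1 side
                      [0, 2, 4], [2, 6, 4],   # z = -1 side
                      [1, 5, 3], [3, 5, 7]],  # z = +1 side
                     dtype=torch.int64)
mesh = Meshes(verts=[verts], faces=[faces])
```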
Question 3
My choice of colors: color1 = [0, 0, 1] (blue) and color2 = [1, 0, 0] (red).
Usage:
python -m code.q3_render_textured_cow --image_size 256
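The retexturing blends the two colors along the z-extent of the mesh: alpha = (z - z_min) / (z_max - z_min) and color = alpha * color2 + (1 - alpha) * color1. A minimal sketch (the function name is mine):

```python
import torch

def recolor_by_z(verts, color1, color2):
    """Blend from color1 (smallest z) to color2 (largest z).
    `verts` is an (N, 3) tensor of mesh vertex positions."""
    z = verts[:, 2]
    alpha = ((z - z.min()) / (z.max() - z.min())).unsqueeze(1)  # (N, 1) in [0, 1]
    color1 = torch.tensor(color1, dtype=verts.dtype)
    color2 = torch.tensor(color2, dtype=verts.dtype)
    return alpha * color2 + (1 - alpha) * color1  # (N, 3) per-vertex RGB
```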
Question 4
Description of Relative Rotation: The relative rotation between the two cameras captures the additional rotation that must be pre-multiplied with the first camera's rotation matrix to obtain the second camera's rotation matrix.
Description of Relative Translation: The relative translation vector is the additional translation that takes the first camera to the second. It is computed by expressing the first camera's origin in the second camera's frame.
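Concretely, if X_c1 = R1 @ X_w + T1 maps world points into the first camera's frame (column-vector convention), then applying the relative transform X_c2 = R_rel @ X_c1 + T_rel yields the second camera's extrinsics. A small sketch of that composition:

```python
import torch

def compose(R_rel, T_rel, R1, T1):
    """Column-vector convention: X_c1 = R1 @ X_w + T1 and
    X_c2 = R_rel @ X_c1 + T_rel give the second camera's extrinsics."""
    R2 = R_rel @ R1
    T2 = R_rel @ T1 + T_rel
    return R2, T2
```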
Usage:
python -m code.q4_camera_transforms --image_size 512
Question 4.1
R_relative = torch.tensor([[0, -1, 0], [1, 0, 0], [0, 0, 1]]).T
T_relative = torch.tensor([0, 0, 0])
Description: We define the new camera coordinate system (C2) such that X points in the -Y direction, Y points in the +X direction, and the Z direction remains the same as in C1. There is no translation between the cameras.
Question 4.2
R_relative = torch.eye(3).T
T_relative = torch.tensor([0, 0, 3])
Description: There is no relative rotation between the cameras. To create a zoom-out effect, we move the camera 3 units backwards (along its -Z axis); in camera coordinates the scene shifts 3 units further along +Z, which gives T_relative = [0, 0, 3].
Question 4.3
R_relative = torch.eye(3).T
T_relative = torch.tensor([0.5, -0.7, 0])
Description: There is no relative rotation between the cameras. To achieve the desired effect, we shift the camera by 0.5 units in the negative X direction and by 0.7 units in the positive Y direction; in camera coordinates the scene shifts by +0.5 in X and -0.7 in Y, which gives T_relative = [0.5, -0.7, 0].
Question 4.4
R_relative = torch.tensor([[0, 0, -1], [0, 1, 0], [1, 0, 0]]).T
T_relative = torch.tensor([-3, 0, 3])
Description: We define the new camera coordinate system (C2) such that X points in the -Z direction, Y remains the same, and Z points in the +X direction. The relative translation vector is found by expressing the origin of C1 in the second camera's frame, which comes out to [-3, 0, 3]
(since camera 1 is displaced by 3 units in the +X direction and 3 units in the -Z direction to reach its new pose).
Question 5
Question 5.1
Usage:
python -m code.q5_1_rgbd_pcloud
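The script lifts each depth pixel back to a 3D point. A hedged sketch of the unprojection, assuming a PyTorch3D camera's unproject_points (the NDC conventions may need adjusting to match the assignment's data loader; the function name is mine):

```python
import torch

def depth_to_pcloud(depth, mask, camera):
    """Lift an (H, W) depth map to world-space points with a PyTorch3D
    camera; `mask` is an (H, W) boolean tensor of valid pixels."""
    H, W = depth.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    # Pixel coords -> NDC in [-1, 1]; PyTorch3D's +X is left, +Y is up.
    x_ndc = 1 - 2 * xs.float() / (W - 1)
    y_ndc = 1 - 2 * ys.float() / (H - 1)
    xy_depth = torch.stack([x_ndc, y_ndc, depth], dim=-1)[mask]  # (N, 3)
    return camera.unproject_points(xy_depth, world_coordinates=True)
```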
Question 5.2
Number of points = 100
Number of points = 1000
Usage:
python -m code.q5_2_parametric_func --render torus --num_samples 100
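The torus is sampled from its parametric form: x = (R + r cos θ) cos φ, y = (R + r cos θ) sin φ, z = r sin θ. A minimal sketch (radii and coloring are illustrative):

```python
import math
import torch
from pytorch3d.structures import Pointclouds

# Torus with ring radius R and tube radius r, sampled on a (theta, phi) grid.
R, r, n = 1.0, 0.4, 100
theta = torch.linspace(0, 2 * math.pi, n)
phi = torch.linspace(0, 2 * math.pi, n)
theta, phi = torch.meshgrid(theta, phi, indexing="ij")
x = (R + r * torch.cos(theta)) * torch.cos(phi)
y = (R + r * torch.cos(theta)) * torch.sin(phi)
z = r * torch.sin(theta)
points = torch.stack([x, y, z], dim=-1).reshape(-1, 3)
colors = (points - points.min()) / (points.max() - points.min())  # simple RGB
pcloud = Pointclouds(points=[points], features=[colors])
```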
Question 5.3
Comparing point clouds with meshes:
- Memory requirement: Point clouds only require storing vertex positions and per-point colors, while meshes additionally store faces. However, rendering a point cloud at high quality may require storing a very large number of points.
- Rendering speed: Point clouds are faster and simpler to render, since each point only needs to be placed at its 3D location; the trade-off is lower quality when fewer points are sampled.
- Rendering quality: Meshes let us render continuous surfaces and capture connectivity between points, so different object surfaces can be distinguished using the connectivity information.
- Ease of use: Meshes and implicit surfaces can be difficult to use for modeling complex surfaces; point clouds only require point positions and are therefore easier to work with.
Usage:
python -m code.q5_3_implicit_mesh --render torus
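For reference, the implicit torus satisfies F(x, y, z) = (sqrt(x^2 + y^2) - R)^2 + z^2 - r^2 = 0, and the mesh is extracted from a voxel grid of F with marching cubes. A hedged sketch, assuming the PyMCubes package (grid bounds and radii are illustrative):

```python
import mcubes
import torch
from pytorch3d.structures import Meshes

# Implicit torus: F(x, y, z) = (sqrt(x^2 + y^2) - R)^2 + z^2 - r^2 = 0.
R, r, n = 1.0, 0.4, 64
g = torch.linspace(-1.6, 1.6, n)
X, Y, Z = torch.meshgrid(g, g, g, indexing="ij")
F = (torch.sqrt(X ** 2 + Y ** 2) - R) ** 2 + Z ** 2 - r ** 2
# Extract the zero isosurface, then map voxel indices back to world coords.
v, f = mcubes.marching_cubes(F.numpy(), 0)
verts = torch.tensor(v, dtype=torch.float32) * (3.2 / (n - 1)) - 1.6
faces = torch.tensor(f.astype("int64"))
mesh = Meshes(verts=[verts], faces=[faces])
```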
Question 6
I read about different 3D parametric surfaces and rendered a figure-eight knot using parametric equations in a single variable.
Usage:
python -m code.q6_render_implicit
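A hedged sketch of the knot using the standard figure-eight parametrization (the exact constants and coloring in my script may differ):

```python
import math
import torch
from pytorch3d.structures import Pointclouds

# Figure-eight knot, one parameter t in [0, 2*pi):
#   x = (2 + cos 2t) cos 3t,  y = (2 + cos 2t) sin 3t,  z = sin 4t
t = torch.linspace(0, 2 * math.pi, 2000)
x = (2 + torch.cos(2 * t)) * torch.cos(3 * t)
y = (2 + torch.cos(2 * t)) * torch.sin(3 * t)
z = torch.sin(4 * t)
points = torch.stack([x, y, z], dim=-1)
colors = (points - points.min()) / (points.max() - points.min())
pcloud = Pointclouds(points=[points], features=[colors])
```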
Question 7
Usage:
python -m code.q7_mesh_sampling --num_samples 10000
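Uniform surface sampling picks faces with probability proportional to their area and then draws uniform barycentric coordinates within each chosen face. A minimal sketch (the function name is mine):

```python
import torch

def sample_mesh_surface(verts, faces, num_samples):
    """Sample points uniformly on a triangle mesh: area-weighted face
    selection followed by uniform barycentric coordinates per face."""
    v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    areas = 0.5 * torch.linalg.cross(v1 - v0, v2 - v0).norm(dim=1)
    idx = torch.multinomial(areas, num_samples, replacement=True)
    u, v = torch.rand(num_samples, 1), torch.rand(num_samples, 1)
    # Fold (u, v) back into the triangle so the samples stay uniform.
    flip = (u + v) > 1
    u, v = torch.where(flip, 1 - u, u), torch.where(flip, 1 - v, v)
    return v0[idx] + u * (v1[idx] - v0[idx]) + v * (v2[idx] - v0[idx])
```

PyTorch3D's sample_points_from_meshes implements the same idea and is what production code would normally use.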