16-889: Learning for 3D Vision

HW 1



1.1 360-degree Renders
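The 360-degree renders come from sweeping the camera azimuth around the object. As a minimal numpy sketch (the assignment itself uses pytorch3d's look_at_view_transform; radius and elevation here are placeholder values), the camera centers for one orbit can be generated as:

```python
import numpy as np

def orbit_camera_positions(radius=3.0, elevation_deg=30.0, n_views=36):
    """Camera centers evenly spaced on a circle around the object at the origin."""
    elev = np.deg2rad(elevation_deg)
    azim = np.deg2rad(np.linspace(0, 360, n_views, endpoint=False))
    x = radius * np.cos(elev) * np.sin(azim)
    y = radius * np.sin(elev) * np.ones_like(azim)
    z = radius * np.cos(elev) * np.cos(azim)
    return np.stack([x, y, z], axis=1)  # (n_views, 3)

centers = orbit_camera_positions()
```

Each center is then paired with a look-at rotation toward the origin to render one frame of the turntable GIF.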

1.2 Dolly Zoom
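The dolly zoom keeps the subject the same size in the image while the camera moves: since the visible width at distance d is 2 d tan(fov/2), holding that product constant gives the FOV for each new distance. A small sketch of that relation (fov0 and d0 are whatever initial values the render starts from):

```python
import numpy as np

def dolly_zoom_fov(fov0_deg, d0, d):
    """FOV (degrees) keeping subject size fixed as the camera moves
    from distance d0 to d: 2*d*tan(fov/2) is held constant."""
    half = np.deg2rad(fov0_deg) / 2.0
    return np.rad2deg(2.0 * np.arctan(d0 * np.tan(half) / d))
```

Moving the camera closer (smaller d) widens the FOV, producing the characteristic background stretch while the subject stays put.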


2.1 Tetrahedron

Vertices = 4, Faces = 4
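As a sketch, one choice of vertex coordinates for a regular tetrahedron (any non-degenerate placement works; these particular coordinates are an assumption, not the assignment's):

```python
import numpy as np

# 4 vertices, 4 triangular faces; every pair of vertices shares an edge.
verts = np.array([[ 1,  1,  1],
                  [ 1, -1, -1],
                  [-1,  1, -1],
                  [-1, -1,  1]], dtype=np.float32)
faces = np.array([[0, 1, 2],
                  [0, 3, 1],
                  [0, 2, 3],
                  [1, 3, 2]], dtype=np.int64)
```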

2.2 Cube

Vertices = 8, Faces = 12
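The 12 faces come from splitting each of the cube's 6 square sides into 2 triangles. A minimal sketch (unit cube; face winding here is one possible choice):

```python
import numpy as np

# 8 vertices indexed as 4x + 2y + z over the corners of the unit cube.
verts = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                 dtype=np.float32)
# 6 square sides * 2 triangles each = 12 faces.
faces = np.array([
    [0, 1, 3], [0, 3, 2],  # x = 0 side
    [4, 6, 7], [4, 7, 5],  # x = 1 side
    [0, 4, 5], [0, 5, 1],  # y = 0 side
    [2, 3, 7], [2, 7, 6],  # y = 1 side
    [0, 2, 6], [0, 6, 4],  # z = 0 side
    [1, 5, 7], [1, 7, 3],  # z = 1 side
], dtype=np.int64)
```

A quick sanity check that the mesh is watertight: every undirected edge should be shared by exactly two triangles (and V - E + F = 8 - 18 + 12 = 2, as expected for a closed surface).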

3 Re-texturing a mesh

color1=[0.8, 0.5, 0], color2=[0.2, 1, 0.8]
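The retexturing blends the two colors per vertex as a function of position. As a sketch, assuming the blend runs along the mesh's z extent (the axis choice is an assumption), each vertex gets color (1 - alpha) * color1 + alpha * color2 with alpha normalized to [0, 1]:

```python
import numpy as np

color1 = np.array([0.8, 0.5, 0.0])
color2 = np.array([0.2, 1.0, 0.8])

def vertex_colors(verts_z):
    """Linear blend from color1 to color2 along z (hypothetical axis choice)."""
    z = np.asarray(verts_z, dtype=np.float64)
    alpha = (z - z.min()) / (z.max() - z.min())  # normalize z to [0, 1]
    return (1 - alpha)[:, None] * color1 + alpha[:, None] * color2
```

Vertices at the minimum z get color1, vertices at the maximum z get color2, and everything in between interpolates smoothly.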

4 Camera Transformations

R_relative_custom =
[[ 0.707,  0,  0.707],
 [ 0,      1,  0    ],
 [-0.707,  0,  0.707]]

T_relative_custom = [-2.1213, 0.0000, 0.8787]
R_relative =
[[ 0,  1,  0],
 [-1,  0,  0],
 [ 0,  0,  1]]

T_relative = [0, 0, 0]
R_relative =
[[1, 0, 0],
 [0, 1, 0],
 [0, 0, 1]]

T_relative = [0, 0, 3]
R_relative =
[[ 1,      0,  0.087],
 [ 0,      1,  0    ],
 [-0.087,  0,  1    ]]

T_relative = [0.1385, -0.5, 0.0114]
R_relative =
[[ 0,  0,  1],
 [ 0,  1,  0],
 [-1,  0,  0]]

T_relative = [-3, 0, 3]

R_relative and T_relative move the camera to a new pose that is rotated and translated by these amounts relative to the starting extrinsics R_0 and T_0: the new extrinsics are obtained by composing the relative transform on top of the original one.
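The composition above can be sketched in numpy (assuming the column-vector convention X_cam = R @ X + T; pytorch3d's own convention uses row vectors, so the transposes differ, but the idea is the same). Taking the second case above, a 90-degree rotation about the camera's z-axis with no relative translation:

```python
import numpy as np

# Starting extrinsics: identity rotation, camera 3 units from the object.
R_0 = np.eye(3)
T_0 = np.array([0.0, 0.0, 3.0])

# Relative transform: rotation about the camera's z-axis, zero translation.
R_rel = np.array([[ 0.0, 1.0, 0.0],
                  [-1.0, 0.0, 0.0],
                  [ 0.0, 0.0, 1.0]])
T_rel = np.zeros(3)

# Compose the relative transform on top of the starting pose.
R_new = R_rel @ R_0
T_new = R_rel @ T_0 + T_rel
```

Since the rotation is about the viewing axis and T_0 lies along z, the translation is unchanged and only the image rolls.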

5.1 Rendering the point clouds from RGB-D Images
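Each RGB-D image is unprojected into a point cloud using the camera intrinsics: every pixel (u, v) with depth d maps to the 3D point ((u - cx) d / fx, (v - cy) d / fy, d). The assignment does this with pytorch3d's camera unprojection; a minimal numpy sketch of the same math (fx, fy, cx, cy are hypothetical pinhole intrinsics):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Unproject a depth map (H, W) to an (H*W, 3) point cloud in camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```

The per-pixel RGB values are flattened the same way and attached as point colors; points from the two views are brought into a common frame via each camera's extrinsics before being merged.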

5.2 Parametric Functions
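The parametric point cloud is generated by sampling the surface equations on a grid of parameters. For a torus with major radius R and minor radius r (the values below are placeholders), a sketch:

```python
import numpy as np

def torus_points(R=1.0, r=0.3, n=100):
    """Sample the parametric torus x = (R + r cos t) cos p, y = (R + r cos t) sin p,
    z = r sin t on an n-by-n (theta, phi) grid."""
    theta, phi = np.meshgrid(np.linspace(0, 2 * np.pi, n),
                             np.linspace(0, 2 * np.pi, n))
    x = (R + r * np.cos(theta)) * np.cos(phi)
    y = (R + r * np.cos(theta)) * np.sin(phi)
    z = r * np.sin(theta)
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

Denser parameter grids give visibly smoother point clouds at the cost of more points.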

5.3 Implicit Functions

A mesh requires connectivity information among its vertices, which makes it harder to construct and manipulate; a point cloud does not suffer from this. Because faces interpolate the surface between vertices, meshes generally render with higher quality than point clouds even with far fewer vertices, which also makes them more memory-efficient. On the other hand, faces only approximate the surface between samples, so in terms of exactness a point cloud is a more accurate representation of the measured geometry.
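For the implicit rendering, the surface is defined as the zero level set of a function evaluated on a voxel grid, from which marching cubes extracts a mesh. A sketch of the torus case (grid size and extent are placeholder choices; the isosurface extraction itself, e.g. via mcubes.marching_cubes, is omitted):

```python
import numpy as np

def torus_sdf_volume(R=1.0, r=0.3, n=64, extent=1.5):
    """Evaluate the implicit torus F(x,y,z) = (sqrt(x^2+y^2) - R)^2 + z^2 - r^2
    on an n^3 voxel grid; the surface is the F = 0 isosurface."""
    g = np.linspace(-extent, extent, n)
    x, y, z = np.meshgrid(g, g, g, indexing="ij")
    return (np.sqrt(x**2 + y**2) - R)**2 + z**2 - r**2

vol = torus_sdf_volume()
```

F is negative inside the tube and positive outside, so the volume must change sign for marching cubes to find a surface.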

6 Give a man a mask and he will tell the truth

7 (Extra-Credit) Sampling Points on Meshes

N = 10
N = 100
N = 1000
N = 10000
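The sampling procedure picks a face with probability proportional to its area, then draws a uniform barycentric point on that face. A minimal numpy sketch of this standard recipe (the assignment's version operates on pytorch3d tensors, but the math is the same):

```python
import numpy as np

def sample_points(verts, faces, n, seed=0):
    """Sample n points uniformly over a triangle mesh's surface."""
    rng = np.random.default_rng(seed)
    tri = verts[faces]                                  # (F, 3, 3) triangle corners
    # Face areas via the cross product; used as sampling weights.
    areas = 0.5 * np.linalg.norm(
        np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]), axis=1)
    idx = rng.choice(len(faces), size=n, p=areas / areas.sum())
    # Uniform barycentric coordinates: reflect (u, v) into the lower triangle.
    u, v = rng.random(n), rng.random(n)
    flip = u + v > 1
    u[flip], v[flip] = 1 - u[flip], 1 - v[flip]
    w = 1 - u - v
    t = tri[idx]
    return u[:, None] * t[:, 0] + v[:, None] * t[:, 1] + w[:, None] * t[:, 2]
```

With N = 10 the shape is barely recognizable, while by N = 10000 the sampled cloud closely matches the source mesh's surface.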