Excuse the visual artifacts in the gifs due to compression!
```bash
python main.py --qn 1_1 --mesh_path <path/to/mesh> --image_size <256/512/...> --output_file <path/to/file> --fps <10/24/...>
```
```bash
python main.py --qn 1_2 --mesh_path <path/to/mesh> --image_size <256/512/...> --output_file <path/to/file> --fps <10/24/...>
```
Constructing a tetrahedron requires 4 vertices and 4 faces.
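A minimal sketch of how such a tetrahedron can be assembled, assuming PyTorch3D's `Meshes` structure (the vertex coordinates below are arbitrary, not the ones used for the renders):

```python
import torch
from pytorch3d.structures import Meshes

# 4 vertices of a (non-regular) tetrahedron -- coordinates chosen arbitrarily.
verts = torch.tensor([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.5, 1.0, 0.0],
    [0.5, 0.5, 1.0],
], dtype=torch.float32)

# 4 triangular faces, each indexing 3 of the vertices above.
faces = torch.tensor([
    [0, 1, 2],
    [0, 1, 3],
    [0, 2, 3],
    [1, 2, 3],
], dtype=torch.int64)

tetra_mesh = Meshes(verts=[verts], faces=[faces])
```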
```bash
python main.py --qn 2_1 --image_size <256/512/...> --output_file <path/to/file> --fps <10/24/...>
```
Constructing a cube as a triangle mesh requires 8 vertices and 12 faces.
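Similarly, a sketch of a unit cube built as a triangle mesh (again assuming PyTorch3D; the exact vertex ordering/winding here is illustrative):

```python
import torch
from pytorch3d.structures import Meshes

# 8 corners of a unit cube centered at the origin.
verts = torch.tensor([
    [-0.5, -0.5, -0.5], [ 0.5, -0.5, -0.5], [ 0.5,  0.5, -0.5], [-0.5,  0.5, -0.5],
    [-0.5, -0.5,  0.5], [ 0.5, -0.5,  0.5], [ 0.5,  0.5,  0.5], [-0.5,  0.5,  0.5],
], dtype=torch.float32)

# 12 triangles: each of the 6 square sides is split into 2 triangles.
faces = torch.tensor([
    [0, 1, 2], [0, 2, 3],  # back
    [4, 6, 5], [4, 7, 6],  # front
    [0, 4, 5], [0, 5, 1],  # bottom
    [3, 2, 6], [3, 6, 7],  # top
    [1, 5, 6], [1, 6, 2],  # right
    [0, 3, 7], [0, 7, 4],  # left
], dtype=torch.int64)

cube_mesh = Meshes(verts=[verts], faces=[faces])
```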
```bash
python main.py --qn 2_2 --image_size <256/512/...> --output_file <path/to/file> --fps <10/24/...>
```
The colors used were `color1 = [0.0, 1.0, 0.0]` and `color2 = [1.0, 0.0, 1.0]`.
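The render blends between these two colors across the mesh; below is a minimal sketch of one way to compute the per-vertex colors, assuming a linear interpolation along the z-axis (the axis choice and function name are assumptions, not necessarily the exact scheme used):

```python
import torch

color1 = torch.tensor([0.0, 1.0, 0.0])   # green
color2 = torch.tensor([1.0, 0.0, 1.0])   # magenta

def blend_colors_along_z(verts: torch.Tensor) -> torch.Tensor:
    """Assign each vertex a color interpolated between color1 and color2 by its z-coordinate."""
    z = verts[:, 2]
    alpha = (z - z.min()) / (z.max() - z.min() + 1e-8)       # normalize z to [0, 1]
    return alpha[:, None] * color2 + (1.0 - alpha[:, None]) * color1   # (V, 3) per-vertex RGB
```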
```bash
python main.py --qn 3 --mesh_path <path/to/mesh> --image_size <256/512/...> --output_file <path/to/file> --fps <10/24/...>
```
|  | ![]() | ![]() | ![]() | ![]() |
|---|---|---|---|---|
| R_relative |  |  |  |  |
| T_relative |  |  |  |  |
```bash
python main.py --qn 5_1 --image_size <256/512/...> --fps <10/24/...>
```
```bash
python main.py --qn 5_2 --image_size <256/512/...> --output_file <path/to/file> --fps <10/24/...> --num_samples <100/1000/...>
```
```bash
python main.py --qn 5_3 --image_size <256/512/...> --output_file <path/to/file> --fps <10/24/...>
```
Rendering a mesh is easier because each point in a pointcloud must first be converted to a 3D primitive (e.g. a sphere or cube) before it can be rasterized, which incurs additional compute and memory overhead. Since a mesh comes with face information, the shading and blending step interpolates color/feature values per face, whereas for a pointcloud this step has to be repeated for the 3D primitive corresponding to every point. This implies that rendering a large mesh would be faster than rendering a large pointcloud with the same number of vertices.
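To make the extra per-point machinery concrete, here is a minimal pointcloud rendering sketch, assuming PyTorch3D's `PointsRenderer` pipeline (the camera placement and rasterization settings are illustrative):

```python
import torch
from pytorch3d.structures import Pointclouds
from pytorch3d.renderer import (
    FoVPerspectiveCameras, PointsRasterizationSettings,
    PointsRasterizer, PointsRenderer, AlphaCompositor,
)

def render_pointcloud(points, rgb, image_size=256, device="cpu"):
    # Each point is expanded to a small disk of radius `radius` at rasterization time,
    # which is the extra step that meshes do not need.
    point_cloud = Pointclouds(points=[points.to(device)], features=[rgb.to(device)])
    cameras = FoVPerspectiveCameras(T=torch.tensor([[0.0, 0.0, 3.0]]), device=device)
    raster_settings = PointsRasterizationSettings(
        image_size=image_size, radius=0.01, points_per_pixel=10
    )
    renderer = PointsRenderer(
        rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
        compositor=AlphaCompositor(),
    )
    return renderer(point_cloud)[0, ..., :3].cpu()  # (H, W, 3) image
```

The `radius` and `points_per_pixel` settings control how large each point's primitive is and how many points are composited per pixel, which is exactly the extra compute/memory cost described above.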
One can add material properties to manipulate the rendering of a mesh. Increasing the `shininess` component produces specular highlights on the mesh.
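For example, assuming PyTorch3D's `Materials` class, the shininess can be adjusted as sketched below (the values are illustrative, and the `renderer`/`mesh`/`cameras`/`lights` objects are assumed to be the usual mesh-rendering setup created elsewhere):

```python
import torch
from pytorch3d.renderer import Materials

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Higher shininess -> a tighter, brighter specular highlight (Phong-style exponent).
shiny_material = Materials(
    device=device,
    specular_color=[[1.0, 1.0, 1.0]],
    shininess=64.0,
)

# Pass the material alongside the usual rendering inputs:
# image = renderer(mesh, cameras=cameras, lights=lights, materials=shiny_material)
```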
```bash
python main.py --qn 6_1 --mesh_path <path/to/mesh> --image_size <256/512/...> --output_file <path/to/file> --fps <10/24/...>
```
To render surface normal maps of meshes, one has to write a custom shader while leaving the rasterization step unchanged. The shader retrieves the face corresponding to each pixel (assuming no blending) and computes the normal of each face by averaging the corresponding vertex normals.
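A sketch of what such a shader could look like, assuming PyTorch3D's rasterizer output (`fragments.pix_to_face`) with one face per pixel; the class name and the [-1, 1] to [0, 1] color mapping are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FaceNormalShader(nn.Module):
    """Illustrative shader: colors every pixel with the normal of the face visible there."""

    def forward(self, fragments, meshes, **kwargs):
        # Per-face normal = average of its three vertex normals (as described above).
        verts_normals = meshes.verts_normals_packed()        # (V, 3)
        faces = meshes.faces_packed()                        # (F, 3)
        face_normals = F.normalize(verts_normals[faces].mean(dim=1), dim=-1)  # (F, 3)

        # pix_to_face holds the index of the face seen at each pixel (-1 = background).
        pix_to_face = fragments.pix_to_face[..., 0]          # (N, H, W), assumes K = 1
        normals = face_normals[pix_to_face.clamp(min=0)]     # (N, H, W, 3)
        normals[pix_to_face < 0] = 0.0                       # zero out background pixels

        # Map from [-1, 1] to [0, 1] so the result can be saved directly as an RGB image.
        return (normals + 1.0) / 2.0
```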
```bash
python main.py --qn 6_2 --mesh_path <path/to/mesh> --image_size <256/512/...> --output_file <path/to/file> --fps <10/24/...>
```
```bash
python main.py --qn 7 --mesh_path <path/to/mesh> --image_size <256/512/...> --output_file <path/to/file> --fps <10/24/...> --num_samples <100/1000/...>
```