`main.py` script arguments:

```
usage: main.py [-h] [-q {q1.1,q1.2,q2.1,q2.2,q3,q4,q5.1.1,q5.1.2,q5.1.3,q5.2,q5.3,q6,q7}]
               [--obj_path OBJ_PATH] [-i IMAGE_SIZE] [-n NUM_SAMPLES] [-o OUTPUT_PATH] [-f NUM_FRAMES]
               [-d DURATION]

optional arguments:
  -h, --help            show this help message and exit
  -q {q1.1,q1.2,q2.1,q2.2,q3,q4,q5.1.1,q5.1.2,q5.1.3,q5.2,q5.3,q6,q7}
  --obj_path OBJ_PATH
  -i IMAGE_SIZE, --image_size IMAGE_SIZE
  -n NUM_SAMPLES, --num_samples NUM_SAMPLES
  -o OUTPUT_PATH, --output_path OUTPUT_PATH
  -f NUM_FRAMES, --num_frames NUM_FRAMES
  -d DURATION, --duration DURATION
```
Run command:
```
python main.py -q q1.1 --obj_path data/cow.obj -i 256 -n 100 -o output/q11.gif -f 30 -d 2
```
Run command:
```
python main.py -q q1.2 --obj_path data/cow_on_plane.obj -i 256 -n 100 -o output/q12.gif -f 30 -d 2
```
Run command:
```
python main.py -q q2.1 -i 256 -n 100 -o output/q21.gif -f 30 -d 2
```
Run command:
```
python main.py -q q2.2 -i 256 -n 100 -o output/q22.gif -f 30 -d 2
```
Run command:
```
python main.py -q q3 --obj_path data/cow.obj -i 256 -n 100 -o output/q3.gif -f 30 -d 2
```
I used `color1 = [0, 0, 1]` (blue) and `color2 = [1, 0, 0]` (red).
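For reference, here is a minimal sketch of how such a two-color gradient can be built as per-vertex colors in PyTorch3D. The z-based blend and the exact tensor names are assumptions; `main.py` may implement this differently.

```python
import torch
import pytorch3d.io
from pytorch3d.renderer import TexturesVertex

color1 = torch.tensor([0.0, 0.0, 1.0])  # blue
color2 = torch.tensor([1.0, 0.0, 0.0])  # red

# Load the cow mesh and blend color1 -> color2 along its z extent (assumed axis).
verts, _, _ = pytorch3d.io.load_obj("data/cow.obj")
z = verts[:, 2]
alpha = (z - z.min()) / (z.max() - z.min())                         # 0 at z_min, 1 at z_max
colors = alpha[:, None] * color2 + (1 - alpha[:, None]) * color1    # (N_verts, 3)

# Wrap the per-vertex colors as a texture (batch dimension added).
textures = TexturesVertex(verts_features=colors.unsqueeze(0))
```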
Run command:
```
python main.py -q q4 --obj_path data/cow.obj -i 256 -o output/q4
```
| Output | R_relative | T_relative | Description |
|---|---|---|---|
| ![]() | [[0, 1, 0], [-1, 0, 0], [0, 0, 1]] | [0, 0, 0] | Rotate the camera 90° clockwise about the z-axis. |
| ![]() | [[1, 0, 0], [0, 1, 0], [0, 0, 1]] | [0, 0, 2] | Move the camera 2 units in the -z direction to get a zoomed-out effect. |
| ![]() | [[1, 0, 0], [0, 1, 0], [0, 0, 1]] | [0.5, -0.5, 0] | Move the camera 0.5 units along the -x axis and 0.5 units along the +y axis. |
| ![]() | [[0, 0, 1], [0, 1, 0], [-1, 0, 0]] | [-3, 0, 3] | Rotate the camera 90° anti-clockwise about the y-axis, then move 3 units along the new x-axis and -3 units along the new z-axis. |
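As a rough sketch of how a relative rotation and translation can be composed with the camera: the snippet below assumes a base pose of `R_0 = I`, `T_0 = [0, 0, 3]` and the composition `R = R_relative @ R_0`, `T = R_relative @ T_0 + T_relative`; both the base pose and the composition rule are assumptions, not necessarily what `main.py` does.

```python
import torch
from pytorch3d.renderer import FoVPerspectiveCameras

# Assumed base pose: identity rotation, camera 3 units back along +z.
R_0 = torch.eye(3)
T_0 = torch.tensor([0.0, 0.0, 3.0])

# First row of the table: rotate 90 degrees clockwise about the z-axis.
R_relative = torch.tensor([[0.0, 1.0, 0.0],
                           [-1.0, 0.0, 0.0],
                           [0.0, 0.0, 1.0]])
T_relative = torch.tensor([0.0, 0.0, 0.0])

# Compose the relative transform with the base pose (assumed convention).
R = R_relative @ R_0
T = R_relative @ T_0 + T_relative

cameras = FoVPerspectiveCameras(R=R.unsqueeze(0), T=T.unsqueeze(0))
```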
Run command:
```
python main.py -q q5.1.1 -i 256 -o output/q51_a.gif
python main.py -q q5.1.2 -i 256 -o output/q51_b.gif
python main.py -q q5.1.3 -i 256 -o output/q51_c.gif
```
| Plant 1 | Plant 2 | Both |
|---|---|---|
| ![]() | ![]() | ![]() |
Run command:
```
python main.py -q q5.2 -n 100 -i 256 -o output/q52_100.gif
python main.py -q q5.2 -n 1000 -i 256 -o output/q52_1000.gif
```
| 100 points | 1000 points |
|---|---|
| ![]() | ![]() |
Run command:
```
python main.py -q q5.3 -i 256 -o output/q53.gif
```
Comparison between mesh and point cloud:

- Rendering quality: to represent a continuous 3D surface, a point cloud needs a very large number of points to fill the gaps, whereas a mesh is continuous by construction, so the mesh renders visually better.
- In the torus example above, the parametric form needed 1000^2 = 1e6 points, while the implicit representation only needed a 64^3 ≈ 2.6e5 voxel grid to construct the mesh. Rendering the point cloud also takes longer than rendering the mesh.
- Memory: for a continuous representation, a mesh stores far less (vertex + face) data than the number of points a point cloud would need, so its memory requirement is lower.
- Ease of use: I find point clouds easier to handle. Since every point is independent, deforming the object is easy to experiment with; in a mesh, neighboring vertices are strongly coupled through shared faces, so more data has to be manipulated consistently.
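To make the two sampling schemes above concrete, here is a sketch of a parametric and an implicit torus. The radii, the grid bounds, and the use of PyMCubes for marching cubes are assumptions; the 1000 x 1000 and 64^3 resolutions match the numbers quoted above.

```python
import numpy as np
import mcubes  # PyMCubes, assumed available for marching cubes

R, r = 1.0, 0.4  # assumed major / minor radii of the torus

# Parametric form: a 1000 x 1000 grid over (theta, phi) gives 1e6 surface points.
theta, phi = np.meshgrid(np.linspace(0, 2 * np.pi, 1000),
                         np.linspace(0, 2 * np.pi, 1000))
x = (R + r * np.cos(phi)) * np.cos(theta)
y = (R + r * np.cos(phi)) * np.sin(theta)
z = r * np.sin(phi)
points = np.stack([x, y, z], axis=-1).reshape(-1, 3)  # point cloud

# Implicit form: evaluate F(x, y, z) on a 64^3 grid (~2.6e5 voxels) and
# extract the zero level set with marching cubes to get a mesh.
g = np.linspace(-1.6, 1.6, 64)
X, Y, Z = np.meshgrid(g, g, g)
F = (np.sqrt(X ** 2 + Y ** 2) - R) ** 2 + Z ** 2 - r ** 2
verts, faces = mcubes.marching_cubes(F, 0.0)
```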
Run command:
```
python main.py -q q6 -i 256 -o output/q6.gif
```
I downloaded a snowflake 3D model from Free3D and duplicated the mesh 25 times at random positions. To render the scene, I moved the camera diagonally to get this snowfall effect :) Rendering at 256x256 produced pixelation artifacts, so I increased the resolution to 1024x1024.
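A minimal sketch of how such a scene could be assembled with PyTorch3D; the snowflake path `data/snowflake.obj` and the offset range are assumptions, not the exact values used here.

```python
import torch
from pytorch3d.io import load_objs_as_meshes
from pytorch3d.structures import join_meshes_as_scene

# "data/snowflake.obj" is an assumed path for the Free3D model.
base = load_objs_as_meshes(["data/snowflake.obj"])

copies = []
for _ in range(25):
    # Random offset inside a cube (the 10-unit range is an assumption).
    offset = (torch.rand(3) - 0.5) * 10.0
    copies.append(base.offset_verts(offset.expand(base.verts_packed().shape[0], 3)))

# Merge all copies into a single renderable scene.
scene = join_meshes_as_scene(copies)
```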
Here is another experiment where I moved the camera towards the snowflakes:
Run command:
```
python main.py -q q7 -i 256 -n 10 -o output/q7_10.gif
python main.py -q q7 -i 256 -n 100 -o output/q7_100.gif
python main.py -q q7 -i 256 -n 1000 -o output/q7_1000.gif
python main.py -q q7 -i 256 -n 10000 -o output/q7_10000.gif
```
| Cow mesh | 10 points | 100 points | 1000 points | 10000 points |
|---|---|---|---|---|
| ![]() | ![]() | ![]() | ![]() | ![]() |
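For reference, a sketch of one common way to sample points uniformly on a mesh surface (area-weighted face selection followed by random barycentric coordinates); `main.py` may implement this differently.

```python
import torch
import pytorch3d.io

verts, faces, _ = pytorch3d.io.load_obj("data/cow.obj")
faces = faces.verts_idx
num_samples = 1000

# Triangle vertices and areas.
v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
areas = 0.5 * torch.linalg.norm(torch.cross(v1 - v0, v2 - v0, dim=1), dim=1)

# Pick faces with probability proportional to their area.
face_idx = torch.multinomial(areas, num_samples, replacement=True)

# Random barycentric coordinates, reflected back into the triangle.
u, v = torch.rand(num_samples), torch.rand(num_samples)
reflect = u + v > 1
u, v = torch.where(reflect, 1 - u, u), torch.where(reflect, 1 - v, v)
points = ((1 - u - v)[:, None] * v0[face_idx]
          + u[:, None] * v1[face_idx]
          + v[:, None] * v2[face_idx])
```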