%cd ~/assignment3
Sphere tracing: evaluate the SDF at the current point and step along the ray by that distance (point += sdf * ray_direction), repeating until convergence. To check whether an intersection has been found, query the SDF at the final point: if it is close to zero, the ray hit the surface; if the final point is still far from the surface, the ray missed.
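A minimal sketch of the loop described above, using NumPy and a hypothetical `sdf` callable (function names and tolerances are my own, not from the assignment code):

```python
import numpy as np

def sphere_trace(sdf, origin, direction, max_steps=64, eps=1e-4, max_dist=100.0):
    """March along the ray by the SDF value at each point.
    Stop when the SDF is ~0 (hit) or the ray travels too far (miss)."""
    t = 0.0
    for _ in range(max_steps):
        point = origin + t * direction
        dist = sdf(point)
        if dist < eps:   # SDF close to zero: intersection found
            return point, True
        t += dist        # safe step: SDF lower-bounds the distance to the surface
        if t > max_dist:
            break
    return origin + t * direction, False

# Example: unit sphere at the origin (illustrative SDF)
unit_sphere = lambda p: np.linalg.norm(p) - 1.0
hit_point, hit = sphere_trace(unit_sphere,
                              np.array([0.0, 0.0, -3.0]),
                              np.array([0.0, 0.0, 1.0]))
```

The step size is always the queried SDF value itself, which is safe because a valid SDF never overestimates the distance to the nearest surface.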
!python -m a4.main --config-name=torus
!python -m a4.main --config-name=points
!python -m a4.main --config-name=volsdf
As you can see, denser sampling can introduce rendering artifacts in empty space, since the SDF in some empty regions may be low enough to be treated as dense. Overall render quality improves slightly.
We can try increasing the representational power by using more harmonic (positional-encoding) frequencies to get rid of the artifacts.
Lowering beta by a factor of two means a halved SDF value now gives the same density as before. This is to make the reconstruction crisper in thin areas. A small artifact that was orbiting in the previous part's geometry is also gone now.
A low beta means density exists only in the regions of small SDF right around the object. But with uniform sampling, those low-SDF regions have to be wide enough for samples to actually land in them. A high beta is the exact opposite: it relaxes how wide the low-SDF region around the object needs to be.
If we sample fewer points along a ray, or don't sample intelligently, we need a larger beta so that at least one sample picks up appreciable density; otherwise training is harder.
The surface won't be accurate with a higher beta, even assuming we could sample infinitely many points. But it also depends on the surface and the scene: if there is a cloud in the scene, we need a lower beta.
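The SDF-to-density mapping being tuned above can be sketched as the VolSDF Laplace-CDF form, sigma(x) = alpha * Psi_beta(-sdf(x)); the function name and default values here are illustrative, not the assignment's exact code:

```python
import numpy as np

def sdf_to_density(sdf, alpha=10.0, beta=0.05):
    """VolSDF-style density: alpha times the Laplace CDF (scale beta) of -sdf.
    Smaller beta concentrates density in a thinner shell around the surface;
    larger beta spreads it out, making the surface softer."""
    s = -sdf
    return alpha * np.where(s <= 0,
                            0.5 * np.exp(s / beta),          # outside: decays with distance
                            1.0 - 0.5 * np.exp(-s / beta))   # inside: saturates at alpha
```

Note that the density depends on sdf / beta, which is exactly why halving beta makes a halved SDF value produce the same density as before.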
!python -m a4.main --config-name=twentysphere
This uses the original SDF-to-density mapping we were asked to implement.
Sparse result using 20 images. Because the model is large, the network overfits. No geometry is recovered, since the SDF is not zero anywhere.
Using NeuS's SDF-to-density, taking advantage of its thin, tall peak at initialization (large s) and the learnability of the sdf_to_density hyperparameters.
Ignore the surface, since the loss is based on pixels and not on the zero crossing of the surface.
As the Hindi saying goes, "hume aam khane se matlab hai ki ped ginne se", which translates to "we care about eating mangoes, not counting mango trees". What we care about, at least under the present loss function, is render quality rather than the surface, and render quality is what we are getting.
Added a learnable scale s (initialized to 100) to the parameters of the model. Used the logistic density

$$y = \frac{a\, e^{-a x}}{(1 + e^{-a x})^2}$$

where y is the density, a is s, and x is the SDF value.
This works better than the previous rendering. But that could be because the hyperparameters alpha and beta were not learnable, whereas s is. It does lead to thin surfaces vanishing. Since rendering does not happen through sphere tracing, this doesn't affect the final learned renderer's output: the SDF can be low enough for the object and its colour to be rendered like a cloud.
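For concreteness, the learnable mapping above can be sketched as follows; in the real model s would be a `torch.nn.Parameter`, but this NumPy version (with s passed explicitly) shows the math. The numerically friendlier sigmoid form is algebraically identical to the equation above:

```python
import numpy as np

def logistic_density(sdf, s=100.0):
    """NeuS-style density: the logistic PDF of the SDF, peaked at the surface.
    Larger s gives a thinner, taller peak (peak height is s / 4 at sdf = 0).
    In the model, s is a learnable parameter initialized to 100."""
    sig = 1.0 / (1.0 + np.exp(-s * sdf))
    # s * sigmoid(s*x) * (1 - sigmoid(s*x))  ==  s * e^{-s x} / (1 + e^{-s x})^2
    return s * sig * (1.0 - sig)
```

Because the peak height scales with s, a large initial s concentrates density tightly around the zero level set, which is the thin, tall peak initialization mentioned above.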