In this assignment, we took several glass plate images and produced well-aligned RGB color images. Specifically, we align each image's red and green channels with its blue channel, and then stack the three channels into a color image. To align each image precisely, we developed a method that searches over a window of possible displacements, scores each candidate with the Sum of Squared Differences (SSD), and keeps the displacement that produces the best-aligned RGB image. To address the problem that some images are too large for an exhaustive search, we use a multi-level pyramid search that first performs a coarse search on down-sampled images and then refines it at finer scales, using the previous level's result as the initial displacement.
In the following sections, I first show the results obtained with both the single-scale and multi-scale channel alignment methods required by the assignment. After that, I present my bells & whistles, which improve the image generation process in speed, quality, or both.
# import some code
from main_hw1 import *
%matplotlib inline
# show the images I got
base_jpg_list = glob.glob('./data/*.jpg')
base_tif_list = glob.glob('./data/*.tif')
extra_jpg_list = glob.glob('./extra_data/*.jpg')
extra_tif_list = glob.glob('./extra_data/*.tif')
print('\n'.join(base_jpg_list + base_tif_list + extra_jpg_list + extra_tif_list))
./data/cathedral.jpg ./data/emir.tif ./data/three_generations.tif ./data/train.tif ./data/icon.tif ./data/village.tif ./data/self_portrait.tif ./data/harvesters.tif ./data/lady.tif ./data/turkmen.tif ./extra_data/5042.jpg ./extra_data/3041.jpg ./extra_data/5030.jpg ./extra_data/5032.tif ./extra_data/5014.tif
The single-scale channel alignment searches for the best displacement vector using SSD as its metric. The displacement vector is defined as (dx, dy); applying it moves a channel's pixel from (y, x) to (y + dy, x + dx), and we pick the vector that yields the lowest SSD within a given search space. By default, the search space is [-15, 15] on each axis. Note that dx is the number of pixels to shift along the width axis and dy along the height axis, whereas a pixel is indexed as (y, x), so the two orderings differ. Also, this algorithm takes the raw image channels as input, without extracting gradients/edges.
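To make this concrete, here is a minimal sketch of such an exhaustive SSD search (the helper name ssd_align and its exact signature are illustrative, not the actual internals of align_image):

import numpy as np

def ssd_align(channel, ref, bound=15):
    """Search displacements (dx, dy) in [-bound, bound]^2 and return the one
    minimizing the SSD against the reference channel `ref`."""
    best, best_score = (0, 0), np.inf
    for dy in range(-bound, bound + 1):
        for dx in range(-bound, bound + 1):
            # np.roll wraps pixels around the border, so crop the margins
            # before scoring to keep the wrapped pixels out of the metric
            shifted = np.roll(channel, (dy, dx), axis=(0, 1))
            score = np.sum((shifted[bound:-bound, bound:-bound]
                            - ref[bound:-bound, bound:-bound]) ** 2)
            if score < best_score:
                best, best_score = (dx, dy), score
    return best  # (dx, dy): shifts along the width and height axes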
Here is how it performs on the provided "./data/cathedral.jpg" and on a few of the extra .jpg images.
align_image('./data/cathedral.jpg', pyramid=False, save_as='./result/cathedral.jpg')
creating a color image of [./data/cathedral.jpg] using [single scale alignment] dispalcement vec for channel [red]: (3, 12) dispalcement vec for channel [green]: (2, 5) result saved as [./result/cathedral.jpg]
align_image('./extra_data/5042.jpg', pyramid=False, save_as='./result/5042.jpg')
creating a color image of [./extra_data/5042.jpg] using [single scale alignment] dispalcement vec for channel [red]: (-1, 9) dispalcement vec for channel [green]: (0, 2) result saved as [./result/5042.jpg]
align_image('./extra_data/5030.jpg', pyramid=False, save_as='./result/5030.jpg')
creating a color image of [./extra_data/5030.jpg] using [single scale alignment] dispalcement vec for channel [red]: (-1, 6) dispalcement vec for channel [green]: (0, 2) result saved as [./result/5030.jpg]
align_image('./extra_data/3041.jpg', pyramid=False, save_as='./result/3041.jpg')
creating a color image of [./extra_data/3041.jpg] using [single scale alignment] dispalcement vec for channel [red]: (1, 6) dispalcement vec for channel [green]: (1, 3) result saved as [./result/3041.jpg]
Since some raw images are too large for an exhaustive search over a wide search space to be computationally feasible, we developed a coarse-to-fine multi-scale channel alignment algorithm.
First, for each channel, denoted $Ch$, we build a pyramid of that channel at different scales. For each image, the number of scales, $N$, is defined as $$ N = \mathrm{int}\!\left(\mathrm{clip}\!\left(\log_2\!\left(\frac{\min(Ch.\mathrm{shape})}{256}\right),\ 1,\ +\infty\right)\right). $$
We downsample the channel by a factor of 2 at each level. That is, the channel at pyramid level $i$ (1-indexed), $Ch_i$, is computed as $$ Ch_i = \mathrm{rescale}\!\left(Ch, \frac{1}{2^{i - 1}}\right). $$
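As a minimal sketch (assuming skimage.transform.rescale is used for downsampling; build_pyramid is an illustrative helper, not the actual implementation), the pyramid can be built as:

import numpy as np
from skimage.transform import rescale

def build_pyramid(channel, base=256):
    """Return [Ch_1 (original), Ch_2, ..., Ch_N], halving the resolution
    at each level, with N defined as above."""
    n_levels = int(np.clip(np.log2(min(channel.shape) / base), 1, None))
    return [rescale(channel, 1 / 2 ** i, anti_aliasing=True)
            for i in range(n_levels)]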
For the coarsest level, the search space defaults to [-15, 15] on each axis. For each pyramid level $i$ other than the coarsest, the search space along the height axis is $$ [-15 + 2^{i - 1}dy,\ 15 + 2^{i - 1}dy], $$ and the search space along the width axis is $$ [-15 + 2^{i - 1}dx,\ 15 + 2^{i - 1}dx], $$ where $dy$ and $dx$ come from the displacement vector estimated at the previous, coarser level. The final displacement vector is the one obtained after aligning level $1$, which corresponds to the original scale of the channel.
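Putting the pieces together, here is a minimal sketch of the coarse-to-fine recursion, reusing the illustrative ssd_align and build_pyramid helpers above. For simplicity it doubles the coarser estimate at each finer level, a common simplification of the window re-centering described above; it is not the actual implementation.

import numpy as np

def pyramid_align(channel, ref, bound=15):
    """Coarse-to-fine alignment: estimate the displacement at the coarsest
    level, then refine it level by level down to the original resolution."""
    pyr_ch, pyr_ref = build_pyramid(channel), build_pyramid(ref)
    dx, dy = 0, 0
    for ch_i, ref_i in zip(reversed(pyr_ch), reversed(pyr_ref)):
        # scale the coarser-level estimate to this level's resolution,
        # then search a +/- bound window around it
        dx, dy = 2 * dx, 2 * dy
        ddx, ddy = ssd_align(np.roll(ch_i, (dy, dx), axis=(0, 1)), ref_i, bound)
        dx, dy = dx + ddx, dy + ddy
    return dx, dy  # displacement at the original scale (level 1)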
Here is how it performs on the provided images.
align_image('./data/emir.tif', save_as='./result/emir.jpg')
creating a color image of [./data/emir.tif] using [multi-scale pyramid alignment] dispalcement vec for channel [red]: (29, 103) dispalcement vec for channel [green]: (22, 51) result saved as [./result/emir.jpg]
align_image('./data/three_generations.tif', save_as='./result/three_generations.jpg')
creating a color image of [./data/three_generations.tif] using [multi-scale pyramid alignment] dispalcement vec for channel [red]: (10, 111) dispalcement vec for channel [green]: (9, 59) result saved as [./result/three_generations.jpg]
align_image('./data/train.tif', save_as='./result/train.jpg')
creating a color image of [./data/train.tif] using [multi-scale pyramid alignment] dispalcement vec for channel [red]: (29, 88) dispalcement vec for channel [green]: (-3, 43) result saved as [./result/train.jpg]
align_image('./data/icon.tif', save_as='./result/icon.jpg')
creating a color image of [./data/icon.tif] using [multi-scale pyramid alignment] dispalcement vec for channel [red]: (23, 90) dispalcement vec for channel [green]: (16, 41) result saved as [./result/icon.jpg]
align_image('./data/village.tif', save_as='./result/village.jpg')
creating a color image of [./data/village.tif] using [multi-scale pyramid alignment] dispalcement vec for channel [red]: (13, 137) dispalcement vec for channel [green]: (-3, 75) result saved as [./result/village.jpg]
align_image('./data/self_portrait.tif', save_as='./result/self_portrait.jpg')
creating a color image of [./data/self_portrait.tif] using [multi-scale pyramid alignment] dispalcement vec for channel [red]: (3, 165) dispalcement vec for channel [green]: (-1, 77) result saved as [./result/self_portrait.jpg]
align_image('./data/harvesters.tif', save_as='./result/harvesters.jpg')
creating a color image of [./data/harvesters.tif] using [multi-scale pyramid alignment] dispalcement vec for channel [red]: (13, 125) dispalcement vec for channel [green]: (17, 75) result saved as [./result/harvesters.jpg]
align_image('./data/lady.tif', save_as='./result/lady.jpg')
creating a color image of [./data/lady.tif] using [multi-scale pyramid alignment] dispalcement vec for channel [red]: (1, 113) dispalcement vec for channel [green]: (-7, 67) result saved as [./result/lady.jpg]
align_image('./data/turkmen.tif', save_as='./result/turkmen.jpg')
creating a color image of [./data/turkmen.tif] using [multi-scale pyramid alignment] dispalcement vec for channel [red]: (26, 114) dispalcement vec for channel [green]: (6, 67) result saved as [./result/turkmen.jpg]
align_image('./extra_data/5014.tif', save_as='./result/5014.jpg')
creating a color image of [./extra_data/5014.tif] using [multi-scale pyramid alignment] dispalcement vec for channel [red]: (-13, 7) dispalcement vec for channel [green]: (-6, 0) result saved as [./result/5014.jpg]
align_image('./extra_data/5032.tif', save_as='./result/5032.jpg')
creating a color image of [./extra_data/5032.tif] using [multi-scale pyramid alignment] dispalcement vec for channel [red]: (22, 86) dispalcement vec for channel [green]: (-1, 2) result saved as [./result/5032.jpg]
Instead of using raw pixel values as input, we first blur each channel with a Gaussian kernel and then apply a Sobel filter to extract gradients. We align the channels using these gradients, which are sparser and more robust to noise.
from scipy.ndimage import sobel, gaussian_filter
# blur with a Gaussian (sigma=5) to suppress noise, then take Sobel gradients
better_feat_func = lambda ch: sobel(gaussian_filter(ch, sigma=5))
align_image('./extra_data/5042.jpg', pyramid=False, compare_img='./result/5042.jpg',
save_as='./result/5042_better_feat.jpg', feat_func=better_feat_func)
creating a color image of [./extra_data/5042.jpg] using [single scale alignment] dispalcement vec for channel [red]: (11, 10) dispalcement vec for channel [green]: (9, 0) result saved as [./result/5042_better_feat.jpg]
align_image('./data/self_portrait.tif', compare_img='./result/self_portrait.jpg',
save_as='./result/self_portrait.jpg', feat_func=better_feat_func)
creating a color image of [./data/self_portrait.tif] using [multi-scale pyramid alignment] dispalcement vec for channel [red]: (47, 165) dispalcement vec for channel [green]: (41, 80) result saved as [./result/self_portrait.jpg]
We also add rotation to the search space as a third component of the displacement vector, $dr$. For the rotation component, we search three angles $\in \{-0.1, 0, 0.1\}$ to detect small rotational variations among channels. For multi-level pyramid alignment, this rotational displacement is also passed to the next level as the initial guess.
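As a minimal sketch of how this could work (assuming the angles are in degrees and reusing the illustrative ssd_align helper above; ssd_align_with_rotation is not the actual implementation):

import numpy as np
from scipy.ndimage import rotate

def ssd_align_with_rotation(channel, ref, bound=15, angles=(-0.1, 0.0, 0.1)):
    """Search a small set of rotation angles on top of the translational
    window; return (dx, dy, dr) with the lowest SSD."""
    best, best_score = (0, 0, 0.0), np.inf
    for dr in angles:
        # rotate about the image center, keeping the original shape
        rotated = rotate(channel, dr, reshape=False, order=1)
        dx, dy = ssd_align(rotated, ref, bound)
        # re-evaluate the SSD of the best translation found for this angle
        shifted = np.roll(rotated, (dy, dx), axis=(0, 1))
        score = np.sum((shifted[bound:-bound, bound:-bound]
                        - ref[bound:-bound, bound:-bound]) ** 2)
        if score < best_score:
            best, best_score = (dx, dy, dr), score
    return best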
Here are some results obtained by adding rotation to the search space; they are better than those of our vanilla solution.
align_image('./data/cathedral.jpg', compare_img='./result/cathedral.jpg', pyramid=False,
save_as='./result/cathedral_better_trans.jpg', rotation=(-0.1, 0.2))
creating a color image of [./data/cathedral.jpg] using [single scale alignment] dispalcement vec for channel [red]: (3, 12, -0.1) dispalcement vec for channel [green]: (2, 5, -0.1) result saved as [./result/cathedral_better_trans.jpg]
align_image('./extra_data/5014.tif', compare_img='./result/5014.jpg', bound=((-3, 3), (-3, 3)),
save_as='./result/5014_better_trans.jpg', rotation=(-0.1, 0.2))
creating a color image of [./extra_data/5014.tif] using [multi-scale pyramid alignment] dispalcement vec for channel [red]: (-15, -3, 0.1) dispalcement vec for channel [green]: (-6, 0, 0.0) result saved as [./result/5014_better_trans.jpg]