16726 Project 2 - Zihang Lai

Gradient Domain Fusion

Project overview - A brief description

In this project, we aim to naturally blend a source image into a target image by leveraging image gradients. An example of the blending task is the following: suppose we have an image of a child playing in a pool and another image of a bear. We want to create a new image in which the bear is blended into the pool image so that it appears the bear is also in the pool. If we naively copy the pixels of the bear image into the pool image, we get noticeable artifacts along the edge of the pasted region, and the copied pixels keep their original colors, which do not match the target. A better solution is to use a blending algorithm. In this project, specifically, we focus on a technique called Poisson blending.

The optimization goal of Poisson blending is two-fold: first, inside the patch blended into the target image, the image gradients should match those of the source image; second, on the boundary of the patch, the gradients between pixels inside and outside the patch should also match the corresponding gradients of the source image.
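Concretely, writing $s$ for the source image, $t$ for the target image, $v$ for the unknown blended pixel values over the pasted region $S$, and $N_i$ for the 4-neighbors of pixel $i$, this two-term objective can be written as the least-squares problem $$ v = \arg\min_{v} \sum_{i \in S,\ j \in N_i \cap S} \big((v_i - v_j) - (s_i - s_j)\big)^2 \;+\; \sum_{i \in S,\ j \in N_i \cap \neg S} \big((v_i - t_j) - (s_i - s_j)\big)^2. $$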

A toy example

In [1]:
%matplotlib inline
from proj2_utils import *
In [2]:
image = imageio.imread('../demo_data/toy_problem.png')
image_hat = toy_recon(image)

plt.subplot(121)
plt.imshow(image, cmap='gray')
plt.title('Input')
plt.subplot(122)
plt.imshow(image_hat, cmap='gray')
plt.title('Output')
plt.show()
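For reference, a minimal sketch of how a gradient-domain reconstruction like toy_recon could be implemented is shown below, assuming the toy problem asks to recover the image from its x/y gradients plus the intensity of the top-left pixel (the actual toy_recon in proj2_utils may differ in its details).

import numpy as np
import scipy.sparse
from scipy.sparse.linalg import lsqr

def toy_recon_sketch(image):
    # Hypothetical sketch: reconstruct the image from its gradients + one anchor pixel.
    im = image.astype(np.float64)
    h, w = im.shape
    idx = np.arange(h * w).reshape(h, w)        # pixel -> variable index

    rows, cols, vals, b = [], [], [], []
    eq = 0
    # x-gradient constraints: v(y, x+1) - v(y, x) = s(y, x+1) - s(y, x)
    for y in range(h):
        for x in range(w - 1):
            rows += [eq, eq]; cols += [idx[y, x + 1], idx[y, x]]; vals += [1.0, -1.0]
            b.append(im[y, x + 1] - im[y, x]); eq += 1
    # y-gradient constraints: v(y+1, x) - v(y, x) = s(y+1, x) - s(y, x)
    for y in range(h - 1):
        for x in range(w):
            rows += [eq, eq]; cols += [idx[y + 1, x], idx[y, x]]; vals += [1.0, -1.0]
            b.append(im[y + 1, x] - im[y, x]); eq += 1
    # anchor the absolute intensity with the top-left pixel
    rows.append(eq); cols.append(idx[0, 0]); vals.append(1.0); b.append(im[0, 0]); eq += 1

    A = scipy.sparse.csr_matrix((vals, (rows, cols)), shape=(eq, h * w))
    v = lsqr(A, np.array(b))[0]                 # solve the sparse least-squares system
    return v.reshape(h, w)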

Main results

We want to blend the following source image (a plane) into the top-right corner of the target image (a natural scene). As explained above, we use Poisson blending to get the result, optimizing two kinds of terms in a least-squares setting: first, inside the patch blended into the target image, the image gradients should match those of the source image; second, on the boundary of the patch, the gradients between pixels inside and outside the patch should also match the corresponding gradients of the source image.

In [4]:
r_fg, r_bg = 0.4, 0.8

fg = cv2.resize(imageio.imread('../data/jet.jpg'), (0, 0), fx=r_fg, fy=r_fg)
bg = cv2.resize(imageio.imread('../data/nature.jpg'), (0, 0), fx=r_bg, fy=r_bg)
mask = cv2.resize(imageio.imread('../data/jet_mask.png'), (0, 0), fx=r_fg, fy=r_fg)
offset = -np.array([140,0])

plt.figure(figsize=(14,6))
plt.subplot(121); plt.imshow(fg); plt.title('Source image')
plt.subplot(122); plt.imshow(bg); plt.title('Target image'); plt.show()
nblend, pblend = Poisson_wrapper(fg, bg, mask, offset)
show_blend(nblend, pblend, (14,6))
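For reference, the sketch below shows what the core of such a blend could look like for a single color channel, assuming the source and mask have already been shifted onto the target grid and the mask does not touch the image border. The helper name poisson_blend_channel is hypothetical; Poisson_wrapper's actual implementation may differ.

import numpy as np
import scipy.sparse
from scipy.sparse.linalg import lsqr

def poisson_blend_channel(src, tgt, mask):
    # src, tgt: float arrays of the same shape; mask: bool array, True inside the patch.
    # Assumes the mask does not touch the image border.
    h, w = tgt.shape
    var = -np.ones((h, w), dtype=int)
    var[mask] = np.arange(mask.sum())            # one unknown per masked pixel
    rows, cols, vals, b = [], [], [], []
    eq = 0
    ys, xs = np.nonzero(mask)
    for y, x in zip(ys, xs):
        for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            ny, nx = y + dy, x + dx
            g = src[y, x] - src[ny, nx]          # source gradient to reproduce
            rows.append(eq); cols.append(var[y, x]); vals.append(1.0)
            if mask[ny, nx]:                     # neighbor is also an unknown
                rows.append(eq); cols.append(var[ny, nx]); vals.append(-1.0)
                b.append(g)
            else:                                # neighbor is a known target pixel
                b.append(g + tgt[ny, nx])
            eq += 1
    A = scipy.sparse.csr_matrix((vals, (rows, cols)), shape=(eq, mask.sum()))
    v = lsqr(A, np.array(b))[0]
    out = tgt.copy()
    out[mask] = v                                # paste the solved values into the target
    return out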

Clearly, the naive blend is not natural at all: the boundary of the pasted patch is all white and does not blend into the target image. The Poisson blend gives a much better result perceptually.

More results (successful and failed examples)

This is another nice result. Notice that the edges of the polar bear blend into the target image seamlessly.

In [63]:
r_fg, r_bg = 0.4, 0.2
fg = cv2.resize(imageio.imread('../data/bear.jpg'), (0, 0), fx=r_fg, fy=r_fg)
bg = cv2.resize(imageio.imread('../data/ski.jpg'), (0, 0), fx=r_bg, fy=r_bg)
mask = cv2.resize(imageio.imread('../data/bear_mask.png'), (0, 0), fx=r_fg, fy=r_fg)
offset = -np.array([40,80])

nblend, pblend = Poisson_wrapper(fg, bg, mask, offset)
show_blend(nblend, pblend, (12,6))

This is a failed example. The doge dog looks very unnatural on the balcony. The main reason is that the pixels around the pasted patch are dark and bluish, so the patch becomes bluish as well, even though you would expect the dog to be lit up by the light (similar to the wall below). Another possible reason is that the textures are too different: the target image is an oil painting and the source image is computer generated.

In [41]:
fg = cv2.resize(imageio.imread('../data/doge.jpg'), (0, 0), fx=0.3, fy=0.3)[:,::-1]
bg = cv2.resize(imageio.imread('../data/cafe.jpg'), (0, 0), fx=0.2, fy=0.2)
mask = cv2.resize(imageio.imread('../data/doge_mask.png'), (0, 0), fx=0.3, fy=0.3)[:,::-1]
offset = -np.array([100,55])

nblend, pblend = Poisson_wrapper(fg, bg, mask, offset)
show_blend(nblend, pblend, (12,6))

Bells & Whistles

Mixed Gradient

In [5]:
r_fg, r_bg = 0.4, 0.8
fg = cv2.resize(imageio.imread('../data/jet.jpg'), (0, 0), fx=r_fg, fy=r_fg)
bg = cv2.resize(imageio.imread('../data/nature.jpg'), (0, 0), fx=r_bg, fy=r_bg)
mask = cv2.resize(imageio.imread('../data/jet_mask.png'), (0, 0), fx=r_fg, fy=r_fg)
offset = -np.array([140,0])

nblend, pblend = Poisson_wrapper(fg, bg, mask, offset)
nblend, mblend = Mixed_wrapper(fg, bg, mask, offset)
In [12]:
plt.figure(figsize=(14,6))
plt.subplot(121)
plt.imshow(pblend)
plt.title('Poisson Blend')
plt.subplot(122)
plt.imshow(mblend)
plt.title('Mixed Blend')
plt.show()
plt.figure(figsize=(14,6))
plt.subplot(121)
plt.imshow(pblend[40:100,150:250])
plt.title('Poisson Blend (detail)')
plt.subplot(122)
plt.imshow(mblend[40:100,150:250])
plt.title('Mixed Blend (detail)')
plt.show()

By comparing the results of Poisson blending and mixed blending, we notice that in the areas of the target image occluded by the pasted patch, mixed blending gives a stronger result: the mixed blend retains more detail. This is because mixed blending also considers the gradients of the target image, whereas Poisson blending ignores them (it only uses target pixel values at the boundary).
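As a hedged illustration of the difference (Mixed_wrapper's internals are not shown here), the only change relative to plain Poisson blending is the guidance value used for each pixel/neighbor pair: instead of always taking the source gradient, mixed blending keeps whichever of the source or target gradient has the larger magnitude.

def mixed_guidance(src, tgt, y, x, ny, nx):
    # Guidance value for the equation linking pixel (y, x) to its neighbor (ny, nx).
    ds = src[y, x] - src[ny, nx]   # source gradient
    dt = tgt[y, x] - tgt[ny, nx]   # target gradient
    return ds if abs(ds) > abs(dt) else dt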

More gradient domain processing

Many filters in image processing make use of gradient information. For example, the Sobel operator is a simple filter that is widely used for edge detection. The general idea is to use the image gradient in the X or Y direction as an indicator of edges. This idea is implemented as a filter that is convolved with the image to produce the result. A $3\times 3$ Sobel filter looks like this: $$ \left[ \begin{matrix} +1 & 0 & -1 \\ +2 & 0 & -2 \\ +1 & 0 & -1 \end{matrix} \right] $$ See below for an example.

In [35]:
import cv2

img = cv2.imread('../data/jet.jpg')[...,::-1]
sobelx = cv2.Sobel(img[...,0],cv2.CV_64F,1,0,ksize=5)
sobely = cv2.Sobel(img[...,0],cv2.CV_64F,0,1,ksize=5)
plt.figure(figsize=(14,6))
plt.subplot(131)
plt.imshow(img)
plt.title('Image')

plt.subplot(132)
plt.imshow(sobelx,cmap = 'gray')
plt.title('Horizontal Sobel')

plt.subplot(133)
plt.imshow(sobely,cmap = 'gray')
plt.title('Vertical Sobel')

plt.show()