16-726 21spring Assignment #5

GAN Photo Editing

Author: Zhe Huang (Andrew ID: zhehuang)


Introduction

In this assignment, we dive into GAN photo editing, an image manipulation task that uses a generative adversarial network to produce a desired image under user-specified constraints. Specifically, we explore subtopics such as inverting the generator, GAN image interpolation, and generating photorealistic images from scribbles.

Part 0: General hyperparameter settings

Unless stated otherwise, all experiments in this assignment use the following settings:

with all other hyperparameters left at the starter-file and PyTorch defaults.

Part 1: Inverting the Generator

In this section, we implement a generator-inversion optimization to recover the latent variable $z$ corresponding to an input image $x$. Specifically, our optimization objective is
$$ z^{\ast} = \arg \min_z \left\{(1 - \lambda) \cdot \mathcal{L}_2(G(z), x) + \lambda \cdot \left(\sum_i \mathcal{L}^{(i)}_{content}(G(z), x) + \sum_j \mathcal{L}^{(j)}_{style}(G(z), x)\right)\right\}, $$ where $\mathcal{L}_2$ is the L2 loss, and $\mathcal{L}_{content}$ and $\mathcal{L}_{style}$ are the content and style loss components. Since the network exposes multiple content/style loss units (one per chosen layer), we sum them to form the perceptual loss. Finally, we combine the L2 loss and the perceptual loss via the coefficient $\lambda$, and we optimize the inversion problem over this objective.
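As a concrete reference, below is a minimal PyTorch sketch of this optimization loop. The names `G`, `G.z_dim`, and `perceptual_loss` are illustrative placeholders rather than the actual starter-code API; `perceptual_loss` is assumed to sum the content and style terms over the chosen VGG layers.

```python
import torch
import torch.nn.functional as F

def invert(G, x, perceptual_loss, lam=0.5, n_iters=1000, lr=0.01):
    """Solve z* = argmin_z (1 - lam) * L2(G(z), x) + lam * perceptual(G(z), x).

    G and perceptual_loss are assumed callables: G maps a latent batch to an
    image batch; perceptual_loss sums the content/style losses over layers.
    """
    z = torch.randn(1, G.z_dim, device=x.device, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        gen = G(z)
        loss = (1 - lam) * F.mse_loss(gen, x) + lam * perceptual_loss(gen, x)
        loss.backward()
        opt.step()
    return z.detach()
```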

We can also switch from optimizing the latent variable $z$ directly to optimizing higher-dimensional intermediate embeddings such as $w$ and $w+$.
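The sketch below illustrates how the three spaces differ in shape, assuming a StyleGAN-style generator split into hypothetical `G.mapping` and `G.synthesis` submodules with `n_layers` style inputs; the exact attribute names vary between implementations.

```python
import torch

# Illustrative only; G.mapping / G.synthesis / n_layers are assumed names.
z = torch.randn(1, 512, requires_grad=True)      # latent z space
w = G.mapping(z)                                 # intermediate w space, shape (1, 512)
w_plus = w.unsqueeze(1).repeat(1, n_layers, 1)   # w+: an independent w per synthesis layer
w_plus = w_plus.detach().requires_grad_(True)    # let each layer's style be optimized freely
img = G.synthesis(w_plus)
```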

Part 1.1: The effect of perceptual loss

We show the effect of the perceptual loss by comparing images reconstructed with different $\lambda$ values while keeping all other settings identical. Specifically, we run the vanilla generator model with $\lambda$ set to $0, 0.25, 0.5, 0.75, 1$ when reconstructing $z$, covering the cases of L2 loss only, mixtures of L2 and perceptual losses, and perceptual loss only, respectively. Visualizations of the reconstructed images are as follows.
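For reference, the sweep itself is just a loop over the mixing coefficient; `invert` is the sketch from Part 1 and the file names are illustrative.

```python
from torchvision.utils import save_image

# Hypothetical sweep over the L2/perceptual mixing coefficient.
for lam in [0.0, 0.25, 0.5, 0.75, 1.0]:
    z_star = invert(G, x, perceptual_loss, lam=lam)
    save_image(G(z_star), f"recon_lambda_{lam:.2f}.png")
```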

Comment on previous results: $\lambda = 0.5$ appears to give the best result among all settings; in my opinion it strikes a proper balance between reconstructing fine details and preserving overall style similarity.

Part 1.2: The effect of w/w+ space

We further test the effect of swapping the encoding from the latent $z$ to $w$ and $w+$. To show this, we employ the provided StyleGAN generator, which has a proper $w$/$w+$ mapping network. We reconstruct images with a fixed $\lambda = 0.5$ using $z$, $w$, and $w+$, respectively. Here are the results.

Comment on previous results: First of all, even when only using the latent $z$, the result is better than its vanilla counterpart, which shows that the provided StyleGAN generator is a stronger model than the vanilla one. As for reconstruction quality across embeddings, the $w$ embedding seems to give the best result in terms of color, style, and overall similarity to the original input. For this reason, I use the StyleGAN generator with the $w$ space and $\lambda = 0.5$ in the following sections. As for optimization speed, the StyleGAN generator takes significantly longer to optimize than the vanilla model, while the choice of embedding does not noticeably affect the speed.

Part 2: Interpolate your Cats

In this section, we blend two cat images together by reweighting their corresponding latent variables $z_1$ and $z_2$. Specifically, for gif generation, our $\theta$ ranges from $0$ to $1$ with a step size of $0.02$ (i.e. torch.arange(0, 1.02, 0.02)). This gives a smooth transition from using $0\%$ of $z_2$ to using $100\%$ of $z_2$. We also repeat this process under various settings (i.e. different models and embeddings). Reconstruction results as well as gif interpolation results are as follows.
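A minimal sketch of the interpolation loop is given below; `z1` and `z2` are assumed to come from the Part 1 inversion, and writing the frames out as a gif (e.g. with imageio.mimsave) is omitted.

```python
import torch

frames = []
for theta in torch.arange(0.0, 1.02, 0.02):  # 0%, 2%, ..., 100% of z2
    z = (1 - theta) * z1 + theta * z2        # linear blend in latent space
    frames.append(G(z).clamp(0, 1).detach().cpu())
```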

Comment on previous results: The interpolation quality largely tracks the image reconstruction quality, so it is no surprise that StyleGAN with the $w$ or $w+$ space yields the best results of all the experiments. As for interpolation consistency, all settings are fairly good overall except StyleGAN with the $z$ setting on the 2->3 pair, which may be due to the poor reconstruction of the input image.

Part 3: Scribble to Image

In this section, we convert a hand-drawn cat sketch into a photorealistic cat image produced by the GAN generator. Specifically, we use color scribble constraints and try different settings to generate from the same drawing. Results are shown below.
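A minimal sketch of the color scribble constraint is shown below, assuming the scribble comes with a binary mask marking the drawn pixels (the function name is illustrative).

```python
import torch

def scribble_loss(gen, scribble, mask):
    """L2 penalty restricted to scribbled pixels.

    scribble: the colored sketch; mask: 1 on drawn pixels, 0 elsewhere,
    broadcastable to the generated image's shape.
    """
    diff = (gen - scribble) * mask
    return diff.pow(2).sum() / mask.sum().clamp(min=1)
```

This term simply replaces the full-image L2 term in the Part 1 optimization loop, so only the user-drawn regions constrain the output.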

Comment on previous results: The overall quality is not great, but the results are discernible. I notice that the generator sometimes bounces between choosing an image that better matches the scribble's colors and choosing one that better resembles the scribble's structure; this appears to be a trade-off. A possible explanation is that the single constraint we impose cannot discriminate between the two, so the model struggles to decide which way to go. We can address this problem by adding further constraints.

Part 4: Bells & whistles

Part 4.1: Texture constraint using style loss

We inject a style loss, the same $\mathcal{L}_{style}$ described in Part 1, as a texture constraint to achieve a better style transfer between hand-drawn sketches and photorealistic reconstructions. Here are the results when using a mixture of the color scribble constraint and the texture constraint.
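As a reminder of what this texture constraint computes, here is a minimal Gram-matrix style loss sketch; `gen_feats` and `ref_feats` are assumed to be lists of VGG feature maps extracted from the generated image and the scribble, respectively.

```python
import torch
import torch.nn.functional as F

def gram(feat):
    """Normalized Gram matrix of a (B, C, H, W) feature map."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(gen_feats, ref_feats):
    """Sum Gram-matrix mismatches over the chosen layers."""
    return sum(F.mse_loss(gram(g), gram(r))
               for g, r in zip(gen_feats, ref_feats))
```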

Part 4.2: Grumpy Cat with higher resolution

We upgrade the whole pipeline to optimize on higher-resolution cat images and redo selected tasks from Parts 1 through 3. Some "cherry-picked" results are demonstrated below.