16-726 Learning-Based Image Synthesis

Assignment 5 - GAN Photo Editing

Trung Nguyen

Inverting the Generator

Given an original image, we optimize a sampled noise vector in the latent z, w, or w+ space so that the generator reproduces the original image. We experiment with DCGAN and StyleGAN2, and use a perceptual loss weight of 0.2 for these experiments unless otherwise noted.
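
Below is a minimal sketch of this projection step, assuming a pretrained generator G that maps a latent code to an image and the LPIPS package for the perceptual term; the function and argument names are illustrative, not the exact assignment code.

import torch
import lpips  # pip install lpips; VGG-based perceptual distance

def project(G, target, latent_dim=100, perc_weight=0.2, iters=1000, lr=0.01, device="cuda"):
    """Optimize a latent code so that G(latent) reconstructs `target` (1x3xHxW, values in [-1, 1])."""
    perc = lpips.LPIPS(net="vgg").to(device)
    # latent shape depends on the generator; a flat (1, latent_dim) vector is assumed here
    latent = torch.randn(1, latent_dim, device=device, requires_grad=True)
    opt = torch.optim.Adam([latent], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        recon = G(latent)
        # pixel (L1) reconstruction loss plus weighted perceptual loss
        loss = (recon - target).abs().mean() + perc_weight * perc(recon, target).mean()
        loss.backward()
        opt.step()
    return latent.detach()

For StyleGAN's w and w+ spaces, the same loop optimizes the output of the mapping network (one vector per synthesis layer in the w+ case) instead of z.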

Original image

DCGAN
Project Latent Z
Perceptual Loss Variation

[Image grid: reconstructions with perceptual loss weights 0, 0.2, and 0.5, shown at iterations 250, 500, 750, and 1000.]

StyleGAN
Project Latent Z
Perceptual Loss Variation

[Image grid: reconstructions with perceptual loss weights 0, 0.2, and 0.5, shown at iterations 250, 500, 750, and 1000.]

Project Latent W
Perceptual Loss Variation

[Image grid: reconstructions with perceptual loss weights 0, 0.2, and 0.5, shown at iterations 250, 500, 750, and 1000.]

Project Latent W+
Perceptual Loss Variation

[Image grid: reconstructions with perceptual loss weights 0, 0.2, and 0.5, shown at iterations 250, 500, 750, and 1000.]

Time Performance

Both models were run for 1000 iterations on an Nvidia T4 GPU with a single input image.

Models                           Latent Z    Latent W    Latent W+
DCGAN (w/o Perceptual Loss)      5 s         N/A         N/A
DCGAN                            8.6 s       N/A         N/A
StyleGAN (w/o Perceptual Loss)   23.32 s     23.31 s     22.78 s
StyleGAN                         27.80 s     27.27 s     27.05 s

Interpolate the Cats

Given two images, we optimize a latent code (in the z, w, or w+ space) for each image so that the generator reproduces it, then linearly interpolate between the two projected latents and decode each intermediate code with the generator. We experiment with DCGAN and StyleGAN2, again using a perceptual loss weight of 0.2.
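
A short sketch of the interpolation step, assuming the two latents were already projected with a routine like the one above; G and n_frames are illustrative names.

import torch

def interpolate(G, latent_src, latent_dst, n_frames=10):
    """Linearly interpolate between two projected latents and decode each intermediate code."""
    frames = []
    with torch.no_grad():
        for t in torch.linspace(0.0, 1.0, n_frames):
            latent = (1.0 - t) * latent_src + t * latent_dst  # lerp works the same in z, w, or w+
            frames.append(G(latent))
    return frames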

Original Source
Original Target
DCGAN
Interpolate Latent Z

Projected Source

Projected Target

Interpolation between projections

StyleGAN
Interpolate Latent Z

Projected Source

Projected Target

Interpolation between projections

Interpolate Latent W

Projected Source

Projected Target

Interpolation between projections

Interpolate Latent W+

Projected Source

Projected Target

Interpolation between projections

From Scribble to Image

Given a scribble, we generate an image that keeps the scribble constraints: the latent is optimized so that the generated image matches the scribble within the drawn region, while the rest of the image is left to the generator.
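
A minimal sketch of this constrained projection, assuming the scribble image and a binary mask marking the drawn pixels (both 1x3xHxW); the names are illustrative, not the exact assignment code.

import torch

def draw(G, scribble, mask, latent_dim=100, iters=1000, lr=0.01, device="cuda"):
    """Optimize a latent so that G(latent) matches `scribble` wherever `mask` is 1."""
    latent = torch.randn(1, latent_dim, device=device, requires_grad=True)
    opt = torch.optim.Adam([latent], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        recon = G(latent)
        # only the drawn pixels contribute to the loss; elsewhere the generator's prior fills in the image
        loss = ((recon - scribble) * mask).abs().sum() / mask.sum().clamp(min=1.0)
        loss.backward()
        opt.step()
    return latent.detach()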

Scribble
DCGAN + Latent Z
StyleGAN + Latent Z
StyleGAN + Latent W
StyleGAN + Latent W+