16-726 Project 3, Jason Xu

Overview

This project implements the network layers for, and trains, two generative models: DCGAN and CycleGAN.

Padding for Discriminator Layers
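Each discriminator convolution halves the spatial resolution, and the padding follows from the standard convolution output-size formula. As a sketch, assuming the usual kernel size 4 and stride 2 (an assumption; these values are not stated above), the required padding works out to 1:

```python
def conv_out_size(in_size, kernel, stride, padding):
    # Standard convolution output-size formula:
    # out = floor((in + 2*padding - kernel) / stride) + 1
    return (in_size + 2 * padding - kernel) // stride + 1

# Solving out = in/2 with kernel=4, stride=2 gives padding=1,
# which halves the resolution at every layer:
for in_size in (64, 32, 16, 8):
    assert conv_out_size(in_size, kernel=4, stride=2, padding=1) == in_size // 2
```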

Differentiable Augmentation

Differentiable augmentation is implemented by applying random brightness, contrast, and saturation adjustments, along with random translation and cutout, to both the real and fake images. This helps prevent the discriminator from overfitting during training, enabling better results.
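A minimal sketch of this augmentation pipeline, written with NumPy for illustration (the actual implementation operates on differentiable tensors so gradients flow through the transforms; the function names and ratios below are my own):

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_brightness(x):
    # Shift each image in the batch by a random offset in [-0.5, 0.5)
    return x + (rng.random((x.shape[0], 1, 1, 1)) - 0.5)

def rand_translation(x, ratio=0.125):
    # Roll each image by a random pixel offset (a simplification
    # of the padded shift used in practice)
    _, _, h, w = x.shape
    sh = max(int(h * ratio), 1)
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        dy, dx = rng.integers(-sh, sh + 1, size=2)
        out[i] = np.roll(x[i], (dy, dx), axis=(1, 2))
    return out

def rand_cutout(x, ratio=0.5):
    # Zero out a random square patch in each image
    out = x.copy()
    _, _, h, w = x.shape
    ch, cw = int(h * ratio), int(w * ratio)
    for i in range(x.shape[0]):
        y0 = rng.integers(0, h - ch + 1)
        x0 = rng.integers(0, w - cw + 1)
        out[i, :, y0:y0 + ch, x0:x0 + cw] = 0.0
    return out

def diff_augment(x):
    # The same random policy is applied to both real and fake batches
    for f in (rand_brightness, rand_translation, rand_cutout):
        x = f(x)
    return x

batch = rng.random((4, 3, 64, 64)).astype(np.float32)
aug = diff_augment(batch)
```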

Without differentiable augmentation, the discriminator loss drops more quickly, but the generator loss does not drop at all. The results (below) are also noticeably worse.

 

Without differentiable augmentation

6400-no-diffaug.png

With differentiable augmentation

6400-diffaug.png

DCGAN Results

Discriminator and generator training loss:

Basic preprocess, no diff-aug

basic no diffaug

Basic preprocess, diff-aug

basic diffaug

Deluxe preprocess, no diff-aug

deluxe no diffaug

Deluxe preprocess, diff-aug

deluxe diffaug

If the GAN trains successfully, the discriminator and generator training losses should both trend downward over time.
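The losses plotted above can be illustrated with the least-squares GAN formulation (an assumption here, stated in terms of plain NumPy rather than the training code):

```python
import numpy as np

def d_loss(d_real, d_fake):
    # Least-squares discriminator loss:
    # push D(real) toward 1 and D(fake) toward 0
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def g_loss(d_fake):
    # Least-squares generator loss: push D(G(z)) toward 1
    return np.mean((d_fake - 1.0) ** 2)

# Sanity checks: a perfect discriminator has zero D loss,
# and a fully fooled discriminator gives zero G loss.
assert d_loss(np.ones(8), np.zeros(8)) == 0.0
assert g_loss(np.ones(8)) == 0.0
```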

 

Early sample:

200

Later sample:

6400

The early samples show some hints of the target image, but a lot of noise-like color remains. In contrast, in the later iterations, the generated image resembles the reference image much more closely, although artifacts are still present.

CycleGAN Results

Results at iteration 1000:

sample-01000-X-Y

 

sample-01000-Y-X

 

Cat dataset:

PatchDiscriminator, Cycle Consistency:

cat patch consistency xy

 

cat patch consistency yx

PatchDiscriminator, No Cycle Consistency:

cat patch no consistency xy

 

cat patch no consistency yx

DCDiscriminator, Cycle Consistency:

cat dc consistency xy

 

cat dc consistency yx

 

Apple dataset:

PatchDiscriminator, Cycle Consistency:

apple patch consistency xy

 

apple patch consistency yx

PatchDiscriminator, No Cycle Consistency:

apple patch no consistency xy

 

apple patch no consistency yx

DCDiscriminator, Cycle Consistency:

apple dc consistency xy

 

apple dc consistency yx

Discussion of results - cat:

Between the Patch and DC discriminators, PatchDiscriminator yields outputs whose shapes are more similar to the reference image, but with slightly worse features than the output of DCDiscriminator. For example, the black-and-white coloring of the grumpy cats is reproduced better with DCDiscriminator, while the cats' faces look closer to reality with PatchDiscriminator.

Using cycle consistency yields better results, as training without it distorts the output image significantly. However, the run without cycle consistency does produce colors closest to the target domain: the grey and black-and-white patterns are all very apparent.
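The cycle-consistency term penalizes the round-trip reconstruction X → Y → X (and Y → X → Y). As a sketch, assuming the common L1 formulation with a weighting factor lam (the name and default value here are my own):

```python
import numpy as np

def cycle_consistency_loss(x, x_reconstructed, lam=10.0):
    # L1 penalty between an image batch and its round-trip
    # reconstruction G_YtoX(G_XtoY(x)); lam weights this term
    # against the adversarial loss.
    return lam * np.mean(np.abs(x - x_reconstructed))

# A perfect reconstruction incurs zero penalty.
x = np.zeros((1, 3, 64, 64))
assert cycle_consistency_loss(x, x) == 0.0
```

Dropping this term leaves the generators free to map an input to any plausible target-domain image, which is consistent with the heavy distortion seen above.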

Discussion of results - apple-orange:

Patch vs. DC discriminator tells a similar story to the previous dataset. DC produces more dramatic colors than Patch. In my opinion, Patch performs much better relative to DC on this dataset than it did on the cat dataset.

Turning cycle consistency on and off also tells a similar story to the previous dataset. Without cycle-consistency loss, some results have accurate colors but are just blobs of color that do not resemble fruit in any way.