Please run the code according to the assignment page's instructions. To reproduce the results of diff_augment.py, append the argument `--use-diffaug True` to the training command.
This assignment requires us to implement two GANs: DCGAN and CycleGAN. The first generates cat images from noise, and the second transfers style from one group of pictures to another.
There are two kinds of data augmentation in this assignment. The first is implemented in data_loader.py and is enabled with the argument `-data_preprocessing 'deluxe'`. The second is Differentiable Augmentation (DiffAugment), which is applied during the training process and enabled with the argument `--use-diffaug True`.
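A minimal sketch of how both augmentations could plug in. The exact 'deluxe' pipeline below (resize factor, crop size, normalization) is my assumption rather than the graded implementation, and the policy string is the common default for DiffAugment:

```python
import torchvision.transforms as transforms
from diff_augment import DiffAugment  # the provided diff_augment.py

# Assumed 'deluxe' preprocessing: enlarge slightly, random-crop back to
# the training resolution, and flip horizontally at random.
deluxe_transform = transforms.Compose([
    transforms.Resize(int(1.1 * 64)),
    transforms.RandomCrop(64),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])

# DiffAugment is applied to BOTH the real and the fake batch right before
# the discriminator, so its differentiable transforms still pass gradients
# back to the generator.
policy = 'color,translation,cutout'
# D_real = D(DiffAugment(real_images, policy=policy))
# D_fake = D(DiffAugment(G(z), policy=policy))
```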
For the padding values, I use the output-size formula from the official PyTorch documentation for nn.Conv2d:

H_out = floor((H_in + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1)

Plugging kernel_size = 4, stride = 2, and the input/output image scales (each conv layer halves the spatial size) into this formula gives padding = 1. As for the first layer of the Generator, which does not use up_conv and maps the 1x1 noise input to a 4x4 feature map at stride 1, plugging in the values gives padding = 3.
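As a quick sanity check of both values (a minimal sketch; the channel counts are placeholders, only the spatial sizes matter here):

```python
import torch
import torch.nn as nn

# Down-sampling conv: kernel_size=4, stride=2, padding=1 halves H and W.
down = nn.Conv2d(3, 8, kernel_size=4, stride=2, padding=1)
print(down(torch.randn(1, 3, 64, 64)).shape)   # torch.Size([1, 8, 32, 32])

# First generator layer: 1x1 noise input -> 4x4 feature map needs padding=3.
first = nn.Conv2d(100, 8, kernel_size=4, stride=1, padding=3)
print(first(torch.randn(1, 100, 1, 1)).shape)  # torch.Size([1, 8, 4, 4])
```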
In the basic and DiffAug-only modes, the network overfits the dataset in late iterations, and the loss does not seem to converge. With the deluxe augmentation, the results are better. When DiffAug and deluxe are both enabled, the details become clearer and clearer over the iterations and the cat's features become recognizable: in early stages only vague outlines are generated, but later the face and eyes appear as well.
Mode | Early Iteration (200) | Mid Iteration (2000) | Late Iteration (6400) | Loss* |
---|---|---|---|---|
Basic | | | | |
Deluxe | | | | |
DiffAug | | | | |
DiffAug+Deluxe | | | | |
(*: loss curves smoothed with a factor of 0.9 in TensorBoard)
Generally speaking, the more iterations, the better the result. Adding the cycle consistency loss makes the output images more stable, and they carry more features from the source image. Take this as an example: with the cycle loss, the brown fur is kept and the output looks more like the original picture of Grumpy Cat.
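As a sketch, the cycle consistency term penalizes the round trip X -> G_XtoY -> G_YtoX with an L1 distance; the weight lambda_cycle = 10 below is an assumed value, not necessarily what was used here:

```python
import torch

def cycle_consistency_loss(real, reconstructed, lambda_cycle=10.0):
    # L1 distance between an image and its round-trip reconstruction.
    return lambda_cycle * torch.mean(torch.abs(real - reconstructed))

# Inside the generator update (sketch):
# fake_Y  = G_XtoY(real_X)
# cycle_X = G_YtoX(fake_Y)
# g_loss  = g_loss + cycle_consistency_loss(real_X, cycle_X)
```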
As for the discriminator selection, it is hard to tell which one is better. My observation is that results generated with the patch discriminator usually have the expected shape and look more like real pictures. The DC discriminator can render facial expressions very clearly, but the cat as a whole is sometimes deformed. The only difference between the two discriminators is the output layer: the patch discriminator outputs a 4x4 map while the DC discriminator outputs 1x1. So the patch discriminator resolves the overall structure well, while the DC discriminator focuses more on detailed features of the image.
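To illustrate that difference, here is a minimal sketch (the channel counts and layers are placeholders, not the actual assignment architecture): both discriminators can share the same down-sampling body, and only the head changes the output resolution:

```python
import torch
import torch.nn as nn

# Shared down-sampling body: 64x64 input -> 4x4 feature map.
body = nn.Sequential(
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
    nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
    nn.Conv2d(128, 128, 4, stride=2, padding=1), nn.ReLU(), # 8 -> 4
)
patch_head = nn.Conv2d(128, 1, kernel_size=1)  # keeps the 4x4 grid: one score per patch
dc_head = nn.Conv2d(128, 1, kernel_size=4)     # collapses 4x4 -> 1x1: one global score

feats = body(torch.randn(1, 3, 64, 64))
print(patch_head(feats).shape)  # torch.Size([1, 1, 4, 4])
print(dc_head(feats).shape)     # torch.Size([1, 1, 1, 1])
```

Because each of the 4x4 patch scores is computed from a sub-region of the image, the patch discriminator pressures every region to look realistic, which matches the observation above that it keeps the overall shape plausible.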
One interesting thing to note is that the X->Y direction usually works better than the Y->X direction.
Discriminator | Cycle Consistency Enabled? | Direction | Early Iteration (500) | Mid Iteration (5000) | Late Iteration (10000) | Generator Loss | Discriminator Loss |
---|---|---|---|---|---|---|---|
PatchDiscriminator | No | X->Y | | | | | |
PatchDiscriminator | No | Y->X | | | | | |
PatchDiscriminator | Yes | X->Y | | | | | |
PatchDiscriminator | Yes | Y->X | | | | | |
DCDiscriminator | No | X->Y | | | | | |
DCDiscriminator | No | Y->X | | | | | |
DCDiscriminator | Yes | X->Y | | | | | |
DCDiscriminator | Yes | Y->X | | | | | |
It is the same with the apple-orange dataset: the background is preserved, and the edges of the apple look great with the cycle loss.
Discriminator | Cycle Consistency Enabled? | Direction | Early Iteration (500) | Mid Iteration (5000) | Late Iteration (10000) | Generator Loss | Discriminator Loss |
---|---|---|---|---|---|---|---|
PatchDiscriminator | No | X->Y | | | | | |
PatchDiscriminator | No | Y->X | | | | | |
PatchDiscriminator | Yes | X->Y | | | | | |
PatchDiscriminator | Yes | Y->X | | | | | |
DCDiscriminator | No | X->Y | | | | | |
DCDiscriminator | No | Y->X | | | | | |
DCDiscriminator | Yes | X->Y | | | | | |
DCDiscriminator | Yes | Y->X | | | | | |