Project 5
Embedding Experiments
Original:

Here are various results on the cat:
L2 decay for delta
Here we see that L2 decay on delta makes the reconstruction fuzzier.
python main.py --model stylegan --mode project --latent w+ --l2_wgt 1e-2 --l1_wgt 10
python main.py --model stylegan --mode project --latent w+ --l1_wgt 10
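As a concrete sketch of what the two flags weight, assuming (the actual assignment code may differ) that the objective is an L1 reconstruction term plus L2 decay on delta, the offset of the optimized latent from the mean latent:

```python
import numpy as np

def projection_loss(generated, target, delta, l1_wgt=10.0, l2_wgt=1e-2):
    """Hypothetical projection objective: L1 reconstruction plus L2 decay
    on delta. Raising l2_wgt pulls delta toward 0, i.e. toward the mean
    latent, which trades detail for smoothness (hence the fuzziness)."""
    l1_term = l1_wgt * np.abs(generated - target).mean()
    l2_term = l2_wgt * np.square(delta).sum()
    return l1_term + l2_term
```

With `--l2_wgt 0` only the reconstruction term remains, matching the second command above.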
Latent z vs w vs w+
Here we see that w+ gives the best reconstruction, followed by w, then z.
python main.py --model stylegan --mode project --latent z --l1_wgt 10
python main.py --model stylegan --mode project --latent w --l1_wgt 10
python main.py --model stylegan --mode project --latent w+ --l1_wgt 10
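A minimal numpy sketch of why w+ is the most expressive of the three spaces (the dimensions and the mapping function here are stand-ins, not the real StyleGAN code): z is a single Gaussian vector, w is the mapping network's output, and w+ gives every synthesis layer its own independent copy of w to optimize.

```python
import numpy as np

rng = np.random.default_rng(0)
n_layers, dim = 14, 512  # typical StyleGAN sizes; assumed here

# z: one Gaussian noise vector
z = rng.standard_normal(dim)

# w: the mapping network's output f(z); tanh stands in for the learned MLP
w = np.tanh(z)

# w+: one copy of w per synthesis layer, each free to drift independently
# during optimization, giving many more degrees of freedom than w alone
w_plus = np.tile(w, (n_layers, 1))
w_plus[3:] += 0.1 * rng.standard_normal((n_layers - 3, dim))
```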
Vanilla GAN vs StyleGAN
Here we see little difference, since both runs use the z latent; as shown above, it is switching to w or w+ that makes the big difference.
python main.py --model vanilla --mode project --latent z --l1_wgt 10
python main.py --model stylegan --mode project --latent z --l1_wgt 10
Loss Weighting
Here we see that the loss weights make a noticeable difference in reconstruction quality.
python main.py --model stylegan --mode project --latent w+ --l2_wgt 1e-2 --l1_wgt 10
python main.py --model stylegan --mode project --latent w+ --l2_wgt 1e-1 --l1_wgt 10
python main.py --model stylegan --mode project --latent w+ --l2_wgt 0 --l1_wgt 20
python main.py --model stylegan --mode project --latent w+ --l2_wgt 0 --l1_wgt 1
I find that w+ with no L2 decay works best; since the cat is not a face, staying close to the mean latent matters less. Also, StyleGAN projection takes 14 seconds, while the vanilla GAN takes 5 seconds.
Interpolation
Pair 1
Img 0 original, embedded

Img 1 original, embedded

Img 0-1 interpolation gif
Pair 2
Img 0 original, embedded

Img 1 original, embedded

Img 0-1 interpolation gif

The quality seems good. The faces appear to morph along a path of least resistance, similar to how a least-distortion mesh morph might proceed.
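The gifs above can be produced by linearly interpolating between the two embedded latents and decoding each intermediate with the generator; a minimal sketch (the generator call itself is omitted, and the function name is mine):

```python
import numpy as np

def interpolate(w0, w1, n_frames=30):
    """Linearly interpolate between two embedded latents. Each returned
    latent would be decoded by the generator to form one gif frame."""
    ts = np.linspace(0.0, 1.0, n_frames)
    return [(1.0 - t) * w0 + t * w1 for t in ts]
```

Because the interpolation is done in latent space rather than pixel space, the intermediate frames are all valid generator outputs instead of ghostly crossfades.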
Drawing
Given Sketches
These are results on the given sketches, with perceptual weight 0.1 and L2 regularization weight 6.0; I settled on these values as the best tradeoff among the hyperparameters I tried:

Here is also a result with lower L2 regularization weight and perceptual weight:

We find that tuning hyperparameters is very important: with less perceptual weight, the images came out much duller and with less structure, and with less L2 regularization, the images were washed out and bland. This is likely because the perceptual loss is needed to bring out details, while L2 decay on delta keeps the output close to the generator's prior, so it still looks like a cat.
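A hedged sketch of how these two weights might enter the drawing objective, assuming a pixel loss masked to the sketched strokes; `percep_fn` stands in for a VGG-style perceptual distance, and all names here are mine rather than the assignment code's:

```python
import numpy as np

def drawing_loss(generated, sketch, mask, delta, percep_fn,
                 percep_wgt=0.1, l2_wgt=6.0):
    """Hypothetical drawing objective. The pixel and perceptual terms are
    applied only where the sketch has strokes (mask == 1), so unconstrained
    regions are filled in freely; L2 decay on delta keeps the latent near
    the mean so those regions stay on the cat manifold."""
    pixel_term = np.abs(mask * (generated - sketch)).mean()
    percep_term = percep_wgt * percep_fn(mask * generated, mask * sketch)
    reg_term = l2_wgt * np.square(delta).mean()
    return pixel_term + percep_term + reg_term
```

This structure matches the observation above: shrinking `percep_wgt` removes the term that sharpens details, and shrinking `l2_wgt` lets the latent drift far from the mean, washing out the result.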