Neural Transfer Implementation by Koalaliketree

Optimizing content loss at different layers

We feed random noise in as the input image and apply the content loss at different layers of VGG-19. As we can see, the "lower" the layer is, the closer the output looks to the original image at the pixel level.
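As a rough sketch of what this could look like in PyTorch (the torchvision layer indices and the omission of ImageNet normalization are simplifying assumptions, not necessarily what this repo does):

```python
import torch.nn.functional as F
from torchvision import models

# Pretrained VGG-19 feature extractor, frozen and in eval mode.
vgg = models.vgg19(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def features_at(x, layer_idx):
    """Run x through vgg up to and including layer_idx and return that activation."""
    out = x
    for i, layer in enumerate(vgg):
        out = layer(out)
        if i == layer_idx:
            break
    return out

def content_loss(generated, content, layer_idx=21):
    # Index 21 roughly corresponds to conv4_2 in torchvision's VGG-19 layout;
    # lower indices preserve more pixel-level detail, higher ones mostly layout.
    return F.mse_loss(features_at(generated, layer_idx),
                      features_at(content, layer_idx))
```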

Generating images from two different random noise inputs

We feed in two different randomly generated noise images and optimize each with respect to the content loss only. The outputs look quite similar.
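A minimal sketch of such a content-only optimization loop, assuming the `content_loss` helper above; the optimizer choice (Adam) and step count here are assumptions and may differ from the actual runs:

```python
import torch

# Optimize a random-noise image against the content loss only.
content_img = torch.rand(1, 3, 224, 224)              # placeholder; load a real image here
generated = torch.rand_like(content_img).requires_grad_(True)
optimizer = torch.optim.Adam([generated], lr=0.05)

for step in range(300):
    optimizer.zero_grad()
    loss = content_loss(generated, content_img)
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        generated.clamp_(0, 1)                         # keep pixels in a displayable range
```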

Noise 1

Noise 2

Optimizing style loss at different layers

We feed random noise in as the input image and apply the style loss at different layers of VGG-19.
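The style loss is typically computed by matching Gram matrices of the feature maps at several layers; a sketch building on the helpers above, where the specific layer indices are assumptions:

```python
def gram_matrix(feat):
    # feat: (1, C, H, W) -> C x C matrix of channel correlations, normalized by size.
    _, c, h, w = feat.size()
    f = feat.view(c, h * w)
    return f @ f.t() / (c * h * w)

def style_loss(generated, style, layer_indices=(0, 5, 10, 19, 28)):
    # Indices roughly correspond to conv1_1 ... conv5_1 in torchvision's VGG-19.
    loss = 0.0
    for idx in layer_indices:
        loss = loss + F.mse_loss(gram_matrix(features_at(generated, idx)),
                                 gram_matrix(features_at(style, idx)))
    return loss
```

Because the Gram matrix only keeps channel correlations and discards spatial layout, style-only outputs tend to look like textures rather than recognizable scenes.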

Generating images from two different random noise inputs

We feed in two different randomly generated noise images and optimize each with respect to the style loss only (the same optimization loop as above, with the style loss substituted). The outputs look quite similar.

Noise 1

Noise 2

Fine-tuning

I fine-tuned the number of epochs and the style:content weight ratio. The best style:content weight ratio I found is 100000:1, and the best number of epochs is 300.
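With those numbers, the combined objective could be sketched as follows, reusing the helpers above; exactly how the ratio maps onto absolute weights is an assumption:

```python
# Combined objective with the reported 100000:1 style:content ratio.
style_weight, content_weight = 1e5, 1.0

def total_loss(generated, content_img, style_img):
    return (style_weight * style_loss(generated, style_img)
            + content_weight * content_loss(generated, content_img))
```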

Trying different style and content images

Noise and content image

Noise output

Content output

Favorite Image