Ville-Matias Heikkilä posted four trips to YouTube, each generated by a different “DeepDream”-style neural network (based on the Caffe deep learning framework) from the same source image.
Before training my own dreaming network, I’ll need to choose a network layout that suits my needs. In order to learn about the strengths and weaknesses of different layouts, I’ve run the same guided dreaming tour with four different ImageNet-pretrained models: GoogLeNet, VGG CNN-F, VGG CNN-S and Network-in-Network (all available via the Caffe Model Zoo).
The interframe processing is the same for all except NIN, which tends to hallucinate very bright, saturated spots; I therefore coupled it with a desaturation filter, which effectively produces a gray background. Most of the artifacts you are likely to see stem from the cumulative nature of the interframe processing, not from compression.
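A minimal sketch of how such a desaturation filter might sit in a cumulative interframe loop. This is an assumption about the technique, not Heikkilä’s actual pipeline: the `dream_step` stand-in and the 0.3 blend amount are hypothetical, and the real dreaming update would be gradient ascent through the Caffe network.

```python
import numpy as np

def desaturate(frame, amount=0.3):
    """Blend an RGB frame toward its grayscale luminance.

    frame:  float array of shape (H, W, 3), values in [0, 1]
    amount: 0.0 leaves the frame unchanged, 1.0 yields full grayscale
    """
    # Rec. 601 luma weights for the grayscale component
    gray = frame @ np.array([0.299, 0.587, 0.114])
    gray = gray[..., np.newaxis]  # broadcast back over the 3 channels
    return (1.0 - amount) * frame + amount * gray

# Toy interframe loop: dream_step stands in for the network's
# hallucination update; identity here keeps the sketch self-contained.
def dream_step(frame):
    return frame

frame = np.random.rand(8, 8, 3)
for _ in range(10):
    # Desaturating every frame keeps bright saturated spots from
    # accumulating across iterations.
    frame = desaturate(dream_step(frame), amount=0.3)
```

Because the loop feeds each output back in as the next input, even a mild per-frame desaturation compounds, which is consistent with the gray background described above.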
[via Hacker News.]