Take the neural network from Google Research's "inceptionism" post on AI-based pattern matching, feed it random noise, then recurse the output over and over, and the deep doglizards of reality come out to play.
This video was made by applying a visualisation technique to a neural network trained to recognise a broad range of images. Each output frame is recursively fed back into the network, starting from a frame of random noise. Every 100 frames (4 seconds), the next layer down is targeted, until the lowest layer is reached.
Inside an artificial brain [Johan Nordberg/Vimeo]
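For the curious, here is a minimal sketch of that feedback loop in PyTorch, using torchvision's GoogLeNet as a stand-in. Only the 100-frames-per-layer schedule comes from the description above; the actual network, layer choices, step sizes and iteration counts in the video aren't stated, so every concrete value below is an assumption, not Nordberg's code.

```python
# Sketch of the recursive "inceptionism" loop described above -- illustrative only.
# Assumes PyTorch and torchvision >= 0.13; network, layers and step sizes are guesses.
import torch
import torchvision.models as models

model = models.googlenet(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)              # only the image is optimised, not the weights

# Hypothetical targets, ordered from a higher layer down to the "lowest" one.
layers = [model.inception4c, model.inception4a,
          model.inception3b, model.inception3a]

captured = {}
def hook(module, inputs, output):
    captured["feat"] = output            # grab the target layer's activations

def dream_step(img, layer, lr=0.01):
    """One gradient-ascent step that amplifies whatever the layer responds to."""
    handle = layer.register_forward_hook(hook)
    img = img.detach().clone().requires_grad_(True)
    model(img)
    captured["feat"].norm().backward()                    # maximise the layer's response
    handle.remove()
    with torch.no_grad():
        grad = img.grad / (img.grad.abs().mean() + 1e-8)  # normalised gradient
        return (img + lr * grad).detach()

frame = torch.randn(1, 3, 224, 224) * 0.1  # start from a frame of random noise
frames = []
for i in range(len(layers) * 100):         # 100 frames per layer, as in the video
    layer = layers[i // 100]               # move down a layer every 100 frames
    for _ in range(10):                    # a handful of ascent steps per frame
        frame = dream_step(frame, layer)
    frames.append(frame.squeeze(0))        # each result is fed back as the next start
```

Writing the frames out to video, plus the zoom and jitter tricks real DeepDream renders tend to use, is left out here.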
(via JWZ)