
Teaching image-recognition algorithms to produce nightmarish hellscapes


In “Inceptionism,” researchers at Google Research describe training neural nets on sets of images, then tweaking individual “layers” of the network so that whatever those layers detect gets amplified, producing weird, dreamlike results.
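For the curious, here's roughly what that amounts to in code. This is a minimal sketch, assuming PyTorch and the pretrained GoogLeNet that ships with torchvision (the "Inception" architecture the paper's name puns on); the layer choice (inception4c), step count, and step size are illustrative guesses, not the authors' settings. The idea: run an image through the net, then nudge the pixels so whatever a chosen layer responds to gets stronger.

import torch
import torchvision.models as models

model = models.googlenet(weights="DEFAULT").eval()

# Capture a mid-level layer's activations with a forward hook.
activations = {}
def hook(module, inp, out):
    activations["feat"] = out

model.inception4c.register_forward_hook(hook)

def amplify(img, steps=20, lr=0.05):
    # Nudge the pixels so the chosen layer's activations grow stronger.
    img = img.clone().requires_grad_(True)
    for _ in range(steps):
        model(img)
        loss = activations["feat"].norm()   # "turn the knob up" on that layer
        loss.backward()
        with torch.no_grad():
            img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
            img.grad.zero_()
    return img.detach()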


I’ve seen lots of TL;DRs of the paper, but the best so far comes from JWZ:

So then they reach inside to one of the layers and spin the knob randomly to fuck it up. Lower layers are edges and curves. Higher layers are faces, eyes and shoggoth ovipositors.

…But the best part is not when they just glitch an image — which is a fun kind of embossing at one end, and the “extra eyes” filter at the other — but is when they take a net trained on some particular set of objects and feed it static, then zoom in, and feed the output back in repeatedly. That’s when you converge upon the platonic ideal of those objects, which — it turns out — tend to be Giger nightmare landscapes. Who knew. (I knew.)
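That feedback loop is easy to sketch too. Reusing the amplify() helper from the snippet above (still an assumption-laden illustration, not the authors' code): start from random static, let the network enhance it, zoom in a little, crop back to size, and feed the result back in. The zoom factor and number of rounds here are guesses for illustration.

import torch
import torch.nn.functional as F

def dream_loop(size=224, rounds=50, zoom=1.05):
    img = torch.rand(1, 3, size, size)            # start from static
    for _ in range(rounds):
        img = amplify(img)                        # enhance what the net "sees"
        # Zoom in: upscale slightly, then center-crop back to the original size.
        big = F.interpolate(img, scale_factor=zoom, mode="bilinear",
                            align_corners=False)
        top = (big.shape[-2] - size) // 2
        left = (big.shape[-1] - size) // 2
        img = big[..., top:top + size, left:left + size]
    return img

Iterate that enough times and the image drifts toward whatever the network was trained to recognize, which is the "platonic ideal of those objects" convergence the quote describes.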


Inceptionism: Going Deeper into Neural Networks [Alexander Mordvintsev, Christopher Olah and Mike Tyka/Google Research]

