The machine knows what cat parts are and where they go, and can stitch them together to fit the cat drawings you provide. Whether the results charm or horrify you might depend on whether you, yourself, are part of the simulation. The Next Web interviews creator Christopher Hesse.
Hesse used a model called pix2pix, which is trained on pairs of images and learns to translate any new input into the kind of output it was trained on. On his demo site, he included examples for buildings, cats, shoes, and handbags.
For the cat model, he used a database of some 2,000 cat pictures to train the algorithm to recognize shapes and edges and fill them in with parts of those pictures. It’s far from perfect, as he notes on the site: “Some of the pictures look especially creepy, I think because it’s easier to notice when an animal looks wrong, especially around the eyes. The auto-detected edges are not very good and in many cases didn’t detect the cat’s eyes, making it a bit worse for training the image translation model.”
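Conceptually, each pix2pix training example is a pair: an auto-detected edge map and the original photo, with the model learning to go from the first back to the second. The sketch below is only an illustration of that pairing, using a crude gradient-threshold edge detector; Hesse’s actual edge detector and the pix2pix model itself are not shown here.

```python
import numpy as np

def edge_map(image, threshold=0.2):
    """Crude edge detector: mark pixels where the local intensity
    gradient exceeds a threshold. (A stand-in for the automatic
    edge detection Hesse describes, not his actual method.)"""
    gray = image.mean(axis=2) if image.ndim == 3 else image
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    if magnitude.max() > 0:
        magnitude /= magnitude.max()      # normalize to [0, 1]
    return (magnitude > threshold).astype(np.uint8)  # 1 = edge pixel

# A pix2pix training pair is (edge_map(photo), photo): the model
# learns the inverse mapping, filling outlines with plausible texture.
photo = np.zeros((64, 64, 3), dtype=np.uint8)
photo[16:48, 16:48] = 255                 # a white square as a toy "photo"
edges = edge_map(photo)
print(edges.shape, edges.sum() > 0)
```

With roughly 2,000 such pairs, the model picks up the statistical relationship between outlines and fur, which is why bad edge maps (missing eyes, for instance) degrade the results.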
edges2cats is excellent. https://t.co/hPg23BeEX2 pic.twitter.com/NXDxYzQ3X3
— Aras Pranckevičius (@aras_p) February 22, 2017
ENDLESS SUFFERING https://t.co/n3MAAgizxS pic.twitter.com/zm6QPN1Cvo
— Ross Plaskow (@rossplaskow) February 22, 2017
If you draw a realistic cat, it will actually generate a cat that is merely uncanny rather than absolutely horrific:
Working on ICML vs playing with https://t.co/ohdXT7EnX2. #choices pic.twitter.com/aUe4bLLP4J
— Oriol Vinyals (@OriolVinyalsML) February 22, 2017
I did a cat sketch, and the result looks like a painting! I must have pleased the machine: