Dango is a personal assistant that feeds your messages into a deep-learning neural net to discover new expressive possibilities for emoji, GIFs, and stickers, and then suggests novel combinations of graphic elements that add striking nuances to your text messages.
The model began life without any explicit, human-generated labels for emoji. By using a recurrent neural network, it was able to make inferences about graphic meanings and combine them in fascinating ways that its creators never anticipated.
Multiple RNNs can also be stacked on top of each other: each RNN layer takes its input sequence and transforms it into a new, more abstract representation that is then fed into the next layer, and so on. The deeper you stack these networks, the more complex the functions they can represent. Incidentally, this is where the now-popular term “deep learning” comes from. Major breakthroughs on hard problems like computer vision have come partly from simply using deeper and deeper stacks of network layers.
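As a purely illustrative sketch, here is what a stacked recurrent encoder might look like in PyTorch. The framework, layer sizes, and class names are assumptions made for the example, not a description of Dango's actual architecture:

```python
import torch
import torch.nn as nn

# A minimal sketch of a stacked ("deep") recurrent network.
# All sizes here are illustrative only.
class StackedRNNEncoder(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256, num_layers=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # num_layers > 1 stacks RNN layers: each layer's output sequence
        # becomes the next layer's input sequence.
        self.rnn = nn.LSTM(embed_dim, hidden_dim, num_layers=num_layers,
                           batch_first=True)

    def forward(self, token_ids):
        x = self.embed(token_ids)        # (batch, seq_len, embed_dim)
        outputs, (h_n, _) = self.rnn(x)  # h_n: (num_layers, batch, hidden_dim)
        # Use the top layer's final hidden state as the sentence representation.
        return h_n[-1]

encoder = StackedRNNEncoder()
dummy_ids = torch.randint(0, 10000, (1, 12))  # a fake 12-token message
sentence_vector = encoder(dummy_ids)          # shape: (1, 256)
```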
Dango’s neural network ultimately spits out a list of hundreds of numbers. The list can be interpreted as a point in a higher-dimensional space, just as a list of three numbers can be interpreted as the x-, y-, and z-coordinates of a point in three-dimensional space.
We call this high-dimensional space the semantic space; think of it as a multi-dimensional grid where different ideas live at different points. In this space, similar ideas are close together. Deep-learning pioneer Geoff Hinton evocatively refers to points in this space as “thought vectors”. What Dango learned during training was how to convert both natural-language sentences and emoji into individual vectors in this semantic space.
So when Dango receives some text, it maps that text into the semantic space. To decide which emoji to suggest, it then projects each emoji's vector onto the resulting sentence vector. Projection is a simple operation that gives a measure of similarity between two vectors. Dango suggests the emoji with the longest projections: those closest in meaning to the input text.
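Here is a toy sketch of that projection step in Python with NumPy. The emoji set, dimensionality, and random vectors are stand-ins for the purpose of the example; in Dango the vectors would come from the trained network:

```python
import numpy as np

# Toy illustration of projection-based ranking. The vectors here are
# random stand-ins for the "thought vectors" the trained network produces.
rng = np.random.default_rng(0)
dim = 300  # "hundreds of numbers", as described above

emoji_vectors = {                 # emoji -> its point in semantic space
    "🎉": rng.normal(size=dim),
    "😂": rng.normal(size=dim),
    "🍕": rng.normal(size=dim),
}
sentence_vector = rng.normal(size=dim)  # produced by encoding the input text

def projection_length(emoji_vec, sentence_vec):
    """Length of emoji_vec's projection onto sentence_vec:
    (emoji_vec . sentence_vec) / |sentence_vec|."""
    return emoji_vec @ sentence_vec / np.linalg.norm(sentence_vec)

ranked = sorted(emoji_vectors,
                key=lambda e: projection_length(emoji_vectors[e], sentence_vector),
                reverse=True)
print(ranked)  # emoji ordered from best to worst match for the input text
```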
Teaching Robots to Feel: Emoji & Deep Learning
[Xavier Snelgrove/Get Dango]
(via Four Short Links)