And it reveals how well-designed the original monsters were.
Neural networks and Pokémon Go. It was only a matter of time before some computer scientist realized this was a chocolate-and-peanut-butter combination and trained a convolutional neural network, in the vein of Google's Deep Dream, on Pokémon Go's dataset of 151 cute, collectible monsters. Now we have it, courtesy of a Japanese researcher known as Bohemia, highlighted today by Prosthetic Knowledge.
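If you're wondering what a "neural network like Deep Dream" actually does, the rough idea is gradient ascent on the pixels: you nudge an image, step by step, until some layer of the network gets really excited about what it sees. Bohemia hasn't published the details of this particular setup, so the snippet below is only a minimal PyTorch sketch of that general technique; the `pokemon_cnn` model, its `conv3` layer, and the idea of training it on the 151 sprites are assumptions for illustration, not Bohemia's actual code.

```python
# Minimal Deep-Dream-style sketch: push an image's pixels, by gradient ascent,
# toward whatever makes a chosen layer of a convolutional network fire strongly.
import torch

def dream(model, layer, steps=200, lr=0.05, size=96):
    """Gradient-ascend an image so `layer`'s activations grow large."""
    for p in model.parameters():
        p.requires_grad_(False)            # we only optimize the image, not the network
    img = torch.rand(1, 3, size, size, requires_grad=True)   # start from noise
    feats = {}
    hook = layer.register_forward_hook(lambda m, i, o: feats.update(out=o))
    for _ in range(steps):
        model(img)
        loss = feats["out"].norm()          # "how excited is this layer?"
        loss.backward()
        with torch.no_grad():
            img += lr * img.grad / (img.grad.norm() + 1e-8)   # normalized ascent step
            img.clamp_(0, 1)                # keep pixels in a displayable range
            img.grad.zero_()
    hook.remove()
    return img.detach()

# Hypothetical usage, assuming `pokemon_cnn` is a small CNN trained on the 151 sprites:
# dreamed = dream(pokemon_cnn, pokemon_cnn.conv3)
```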
The results are abstract impressions of Pokémon. The sense of a Snorlax. The premonition of a Pikachu. The hunch of a Horsea. The notion of a Nidorino. The alliteration of an Alakazam. These blurry Pokémon blobs end up looking like pocket monsters that have been viewed through a gene-slimed porthole by Professor Oak after being put through the telepods together. They're less new Pokémon than chromatic smears of the mashed-up attributes of existing Pokémon.
To me, what's fascinating about the exercise is that it shows just how well-designed the original 151 Pokémon were. Even when a neural network is hallucinating them, the core traits of various Pokémon usually come through. That's by necessity. The original 151 Pokémon, which are the ones Pokémon Go uses, were designed in the mid-'90s to appear distinct on the original Nintendo Game Boy's 160x144 screen. To be successful, a Pokémon needed its own distinct silhouette, and ideally one distinguishing feature, like the big spiral on Poliwhirl's stomach, in addition to colors that would never be seen on a Game Boy's screen (but would be seen in cartoons and merchandise).
It's a credit to how successful these designs were that even when a neural network is mashing them up, you can look at the results and say, "That's, like, 70% a Pikachu, and 25% a Bulbasaur, and 5% a Magikarp." In fact, that sounds like a pretty fun game variant of Pokémon in its own right.
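That guessing game is, incidentally, more or less what an image classifier would report anyway: a softmax over the 151 classes spits out exactly those kinds of percentages. A toy sketch, with made-up class names and scores (no real classifier was run on these blobs):

```python
# Hypothetical: how a 151-class Pokémon classifier would phrase its guess.
import torch
import torch.nn.functional as F

logits = torch.tensor([4.2, 3.2, 1.5])        # made-up scores for 3 of the 151 classes
probs = F.softmax(logits, dim=0)              # squash into percentages that sum to 1
for name, p in zip(["Pikachu", "Bulbasaur", "Magikarp"], probs):
    print(f"{name}: {p:.0%}")                 # prints roughly the 70/26/5 split above
```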