Auto? Generating?

Alfred Wang
5 min read · Jan 25, 2021


I’ve always used (and abused) the word AI for any technology that I don’t understand, and I don’t think I’m far off from that idea.

Machine learning is an application of artificial intelligence (AI) that gives systems the ability to learn and improve on their own. Machine learning is basically a collection of algorithms and statistics for finding patterns in data. That data could be numbers, words, images, how long you paused on a video ad, or how long you spent looking at certain images or articles. If it can be stored, it can be used.

Recommendations on YouTube or Netflix, suggested posts on Facebook or Twitter, the grammar linter warning me about mistakes while I try to finish this sentence, search engines, voice assistants like Siri: these are all kinds of machine learning.

However, I found something interesting you can do with this kind of machine learning.

GPT-2, from the OpenAI group, is a model trained on a huge amount of data to predict the next word in a sequence. It is a text generation tool built on a neural network, and newer versions can even generate HTML. Here is an example.


Recycling is good for the world.


Recycling is NOT good for the world. It is bad for the environment, it is bad for our health, and it is bad for our economy. I’m not kidding. Recycling is not good for the environment. It is destructive to the earth and it is a major contributor to global warming. Recycling is not good for our health. It contributes to obesity and diseases like heart disease and cancer. Recycling is bad for our economy. It increases the cost of a product, and in turn, the price of everything that is made with that product. Recycling is not good for our nation. We pay a tremendous price for the privilege of having the world’s most advanced and efficient recycling system. Recycling is a huge, colossal waste of time, energy, money, and resources. And THAT is why we need to get back to basics and get back to basics in our recycling efforts. One of the best ways to start is to look at the process of creating a paper product. When you make a paper product, it is basically a long chain of materials.
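The core idea behind that demo, predicting the next word from the words so far, can be illustrated in miniature. The sketch below is a toy bigram lookup table on a made-up corpus, not GPT-2 itself: GPT-2 learns the same kind of conditional "what word comes next" distribution, but with a large transformer trained on billions of words instead of a counting table.

```python
import random
from collections import defaultdict

# Tiny made-up corpus for illustration only.
corpus = "recycling is good for the world and recycling is good for us".split()

# Record which words follow which in the corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, n_words=5, seed=0):
    """Repeatedly sample a plausible next word, like GPT-2 in miniature."""
    random.seed(seed)
    word, out = start, [start]
    for _ in range(n_words):
        if word not in follows:
            break  # no known continuation
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(generate("recycling"))
```

In this toy corpus, "recycling" is always followed by "is", so the generated text always starts "recycling is good for…"; GPT-2's richer model is what lets it continue a prompt like the recycling one above for whole paragraphs.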

GPT-2 Simple is an easier-to-use Python library. People have done things other than plain prose with it, such as feeding in chess moves and training it to play as a chess bot, or feeding in MIDI files and getting new tracks out of it; as long as you can turn your data into text files, it works. It takes the base model and re-trains (fine-tunes) it.
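The "anything you can serialize as text" idea is easy to sketch. Below, some invented chess games (the moves and file name are illustrative, not real training data) are flattened into the one-example-per-line plain-text format that gpt-2-simple's fine-tuning expects:

```python
# Toy illustration: games are invented for this example.
games = [
    ["e4", "e5", "Nf3", "Nc6", "Bb5"],  # Ruy Lopez opening
    ["d4", "d5", "c4", "e6", "Nc3"],    # Queen's Gambit Declined
]

# One game per line, moves separated by spaces, just like words in a sentence.
training_text = "\n".join(" ".join(moves) for moves in games)
print(training_text)

# Saved to a file, this is roughly what gets handed to gpt-2-simple, e.g.
# gpt2.finetune(sess, "chess.txt", steps=1000), to re-train the base model.
```

The same trick applies to MIDI or any other data: serialize it to text, fine-tune, generate, then deserialize the output.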

So Matthew Rayfield grabbed a bunch of Pokémon images, fed them to the model to train on, and created new Pokémon out of it. Here’s how he did it.

Matthew created a script that goes through every image, pixel by pixel, and translates it into text.

The images are also flipped to give the model more data to train on. GPT-2 is normally trained on words separated by spaces, so the “~” character is used as a stand-in for a space. This ends up as over 100,000 lines of Pokémon sprites; the model trains on that data, and a few hours later it can create something new.

And then you can take that text and put it back into an image.
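The reverse step is just the mirror of the encoder. A sketch, again with an assumed character-to-palette mapping and an invented "generated" string:

```python
# Hypothetical inverse mapping of the encoder's palette.
CHAR_TO_PIXEL = {"~": 0, "a": 1, "b": 2, "c": 3}

def text_to_sprite(text):
    """Turn generated character lines back into a 2-D grid of palette indices."""
    return [[CHAR_TO_PIXEL[ch] for ch in line] for line in text.splitlines()]

generated = "~aa~\nabba\n~cc~"  # stand-in for GPT-2 output
pixels = text_to_sprite(generated)

# Each palette index can then be painted with its color, e.g. with
# Pillow's Image.putpixel, to get a viewable sprite.
```

Anything the fine-tuned model emits in the right character set round-trips back into pixels this way.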

Since this is just machine generated, any single output may not look exactly like a Pokémon; however, you can generate thousands of them and keep the ones that look most appealing to human eyes.

The longer the training, the more “realistic” they become. Some outputs look like splash paintings, some are just circles with different colors on them, and some are all over the place.

Matthew then collaborated with artist Rachel Briggs, who also draws Pokémon that don’t actually exist, and she brought those pixel sprites to life.

OpenAI has since developed its own technique for image generation with GPT, but this project was done before that came out.

So GPT with a neural network can handle not just text processing but image creation as well.

Given selected text, such as “an avocado fell on an armchair”, GPT-3 can continue it into a story in a human-like way.


GitHub link to Matthew Rayfield

The Pokémon created using the script

GPT-2 Simple GitHub