I trained a Transformer model with limited capacity on the CIFAR image dataset. Constraining the model to a small size meant the generated images captured only the most basic features of the data. These low-level features offer a glimpse into the “mind” of the machine: an opportunity to see how the model processes and understands the world through its training data.
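To make the setup concrete, here is a minimal sketch (in PyTorch) of the kind of experiment described above: a small decoder-only Transformer trained autoregressively on CIFAR-10 images flattened into sequences of 8-bit channel values. The class name, hyperparameters, and pixel-sequence tokenization are illustrative assumptions, not the exact configuration used for the original model.

```python
# Sketch of a small autoregressive Transformer over CIFAR-10 pixel sequences.
# Hyperparameters and tokenization are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

VOCAB = 256            # one token per 8-bit channel value
SEQ_LEN = 32 * 32 * 3  # a CIFAR image flattened to a channel-value sequence

class TinyPixelTransformer(nn.Module):
    def __init__(self, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.tok = nn.Embedding(VOCAB, d_model)
        self.pos = nn.Embedding(SEQ_LEN, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model,
            batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB)

    def forward(self, x):                      # x: (B, T) integer tokens
        T = x.size(1)
        pos = torch.arange(T, device=x.device)
        h = self.tok(x) + self.pos(pos)
        # Causal mask so each position only attends to earlier pixels
        mask = torch.triu(torch.full((T, T), float("-inf"), device=x.device),
                          diagonal=1)
        h = self.blocks(h, mask=mask)
        return self.head(h)                    # next-token logits

def train_step(model, batch, opt, loss_fn):
    # batch: (B, 3, 32, 32) floats in [0, 1]; quantize back to 0..255 tokens
    tokens = (batch * 255).long().flatten(1)   # (B, SEQ_LEN)
    logits = model(tokens[:, :-1])             # predict each next channel value
    loss = loss_fn(logits.reshape(-1, VOCAB), tokens[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

if __name__ == "__main__":
    data = datasets.CIFAR10("data", train=True, download=True,
                            transform=transforms.ToTensor())
    # Small batch: full attention over ~3k tokens is memory-hungry on CPU
    loader = DataLoader(data, batch_size=2, shuffle=True)
    model = TinyPixelTransformer()
    opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
    loss_fn = nn.CrossEntropyLoss()
    for imgs, _ in loader:
        print(f"loss {train_step(model, imgs, opt, loss_fn):.3f}")
        break  # single step shown for brevity
```

With roughly two layers and a 128-dimensional embedding, a model like this has far too little capacity to memorize fine detail, which is what produces samples dominated by coarse color fields rather than recognizable objects.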

When we think of machine learning models, especially Transformers, we often associate them with their ability to handle complex tasks. By deliberately limiting the model’s capacity, however, we forced it to simplify the complexity of the images it was trained on. The generated images therefore became a canvas of the most fundamental visual elements: colors stripped of their usual context.

Artists have long embraced constraints as a means to spark creativity. Whether it’s a poet working within the structure of a sonnet or a painter using a limited color palette, limitations can often lead to more innovative and expressive works. In the case of the Transformer model, the small capacity imposed a unique set of limitations that resulted in images that, while not photorealistic, possess a certain charm and intrigue.

Colors (generated image samples)