"Frozen Intelligence" - Turning the Inside of an AI Brain Into Art
by Arpan Mondal in Design > Art
I am a junior AI engineer, and there is one claim I hear all the time that annoys me: "Neural networks are black boxes. We don't know what's happening inside."
Because of this "black box" logic, people either believe AI is a conscious entity beyond our understanding, or that it is something dangerous and unpredictable.
I don't believe that. I wanted to prove that the inside of a neural net isn't magic. It is mathematics. And it is beautiful. When you strip away the "black box", a neural network is basically a single, giant equation* that can predict tomorrow's weather, recognize a face, or predict who will win a game. Yes, a single equation can do all of that. Who could have imagined?
To show this beauty to the world, I created this artwork titled "Frozen Intelligence", showing the inside of a real AI brain with all its intelligence frozen on a piece of paper.
For this artwork, I trained a custom neural network to predict who will win a game of Tic-Tac-Toe given the state of the game board at any stage of the game (even on the first turn!).
Specialties of this artwork:
Literal Visualization: This isn't just a drawing of a brain. The "pixels" making up the brain are the actual weights that make the AI work.
Functional Art: If you were patient enough to sit down with a calculator and substitute the variables (x1 to x9) into the equation printed on this art, you would get the network's actual winner prediction!
Just to be clear: this is NOT AI-generated art. It is an original artwork I created to look inside the AI's mind.
In this Instructable, I will explain the entire process: training the brain, extracting what's inside the AI brain, and managing tens of thousands of characters to create a piece of art that proves AI can be beautiful on the inside!
Let's look inside the box :)
Supplies
Hardware:
A computer (preferably with a GPU to train the neural net faster)
A large picture frame (I used this: IKEA RÖDALM frame)
Paper cutter
Long ruler or straightedge to cut the poster to size
Glue
Magnifying glass (optional, if the font is too small)
Software:
Visual Studio Code (or any Python editor)
Python
Python libraries: PyTorch, Pillow, pdfplumber, svgwrite, Pandas, NumPy
Finally, you'll need to get the artwork printed via an online or offline service.
But What Is a Neural Network?
To really understand what we'll be doing, I'm going to give a very beginner-friendly introduction to neural networks here. For my project, I designed a neural network specifically for Tic-Tac-Toe. Here is how the flow works, from start to finish:
1. The Eyes (The Input Layer): 9 Neurons
First, the AI needs to see the board. A Tic-Tac-Toe board has 9 squares, so my network has 9 input neurons.
- If there is an X, the input is 1.
- If there is an O, the input is -1.
- If it's Empty, the input is 0.
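The encoding above is simple enough to sketch in a few lines of plain Python (the function name and the space character for "empty" are my own choices, not from the attached code):

```python
def encode_board(cells):
    """Map a 9-cell Tic-Tac-Toe board to the network's 9 inputs."""
    mapping = {"X": 1, "O": -1, " ": 0}
    return [mapping[c] for c in cells]

# X in a corner, O in the centre, everything else empty:
board = ["X", " ", " ",
         " ", "O", " ",
         " ", " ", " "]
print(encode_board(board))  # [1, 0, 0, 0, -1, 0, 0, 0, 0]
```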
2. The "Thinking" (The Hidden Layers): 64 & 32 Neurons
This is the so-called "Black Box." The 9 inputs send signals to a layer of 64 neurons, which then talk to a layer of 32 neurons.
Imagine these connections as pipes with valves (we call these valves "Weights"). The AI learns to turn these valves.
- If a connection is important (like "blocking the opponent"), the weight is high. You can think of it as the valve being open fully to let more water flow.
- If a connection is useless, the weight is near zero. So the valve is mostly closed and very little water flows through.
3. The Verdict (The Output Layer): 1 Neuron
Finally, all that information gets squashed down into 1 single output neuron.
- If the number is close to 1, X wins.
- If it's close to -1, O wins.
- If it's near 0, the game is most likely headed for a draw.
Here is the beautiful part. Inside the computer, there are no actual pipes or valves. It is just multiplication and addition.
- Layer 1 is just: (Input × Weight)
- Layer 2 is just: (Layer 1 result × Weight)
If you combine them all, the entire brain is technically just one giant, nested mathematical equation.
That is what I wanted to capture. I didn't want to draw a picture of a brain; I wanted to write down that specific equation, all 300,000+ characters of it, on a single piece of paper.
Note: I've simplified quite a lot here. I haven't talked about bias terms, activation functions, or the fact that the AI collapses to a single equation only if there is a single output neuron. If you'd like to dig deeper, here's a beautiful video by 3Blue1Brown.
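You can see the "layers collapse into one equation" idea with a toy two-input network. The weights below are made up, and I've included the bias and activation terms the note above glosses over; the point is only that the layered version and the written-out nested equation compute the exact same number:

```python
import math

# A toy network: 2 inputs -> 1 hidden neuron -> 1 output neuron.
w1, b1 = [0.5, -0.3], 0.1   # hidden-layer weights and bias (made up)
w2, b2 = 0.8, -0.2          # output-layer weight and bias (made up)

def layered(x1, x2):
    h = math.tanh(w1[0] * x1 + w1[1] * x2 + b1)  # hidden layer
    return math.tanh(w2 * h + b2)                # output layer

def one_equation(x1, x2):
    # The same network written as a single nested equation:
    return math.tanh(0.8 * math.tanh(0.5 * x1 + -0.3 * x2 + 0.1) + -0.2)

print(layered(1, -1), one_equation(1, -1))  # the two values are identical
```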
Teaching the AI to Play (Data Generation)
So, how does an AI learn?
Think of this neural network as a student who knows nothing about Tic-Tac-Toe. To teach it, we need to show it thousands of examples.
We say something like, "Hey, look at this board. X is in the corner. If both players play perfectly from here, X will win. So, remember that."
If we show it enough examples, it stops guessing and starts understanding the patterns.
I didn't want my AI to just learn from random games (which would make it a bad player). I wanted it to learn from a perfect player.
I wrote a Python script using an algorithm called Minimax.
- Minimax is a classic algorithm that looks ahead at every possible future move to find the perfect strategy. It literally can't lose.
- I used this to generate over 50,000 games and label every single board state with the true mathematical outcome (Win, Lose, or Draw).
After running my custom generator (code attached!), I had a CSV file with thousands of board positions, each marked with the perfect correct answer.
I've attached the code as well as my dataset.
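To make the idea concrete, here is a minimal Minimax for Tic-Tac-Toe. This is a simplified sketch, not the attached generator: it only returns the perfect-play outcome (1 = X wins, -1 = O wins, 0 = draw) for a given board, which is exactly the label each board position gets in the dataset:

```python
from functools import lru_cache

# All eight winning lines on a 3x3 board (indices 0..8).
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != 0 and board[a] == board[b] == board[c]:
            return board[a]          # 1 = X has a line, -1 = O has a line
    return 0

@lru_cache(maxsize=None)
def minimax(board, player):
    """Perfect-play outcome: 1 (X wins), -1 (O wins), 0 (draw)."""
    w = winner(board)
    if w != 0:
        return w
    moves = [i for i, c in enumerate(board) if c == 0]
    if not moves:
        return 0                     # board full with no line: draw
    scores = []
    for i in moves:
        nxt = list(board)
        nxt[i] = player
        scores.append(minimax(tuple(nxt), -player))
    # X (1) picks the best outcome for X, O (-1) the best for O.
    return max(scores) if player == 1 else min(scores)

print(minimax((0,) * 9, 1))  # 0: perfect play from an empty board is a draw
```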
Training the AI
With my dataset ready, it was time to build the actual "brain." I used PyTorch, a popular AI library, to construct the network.
This was the hardest part of the design. I had to balance two conflicting goals:
- Too Small: If the network was too simple, it would lose games.
- Too Big: If the network was huge, the "equation" would be millions of characters long and wouldn't fit on a poster.
I experimented with several sizes and found the "Goldilocks" zone: 9 → 64 → 32 → 1.
The Code (Simplified)
Here is the core logic that defines the brain structure. You can see the layers stacking up just like in our diagram:
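In PyTorch, a 9 → 64 → 32 → 1 network might look like the sketch below (the class name, ReLU hidden activations, and tanh output are my reading of the equation shown later; the actual attached code may differ in details):

```python
import torch
import torch.nn as nn

class TicTacToeNet(nn.Module):
    """Sketch of the 9 -> 64 -> 32 -> 1 Tic-Tac-Toe network."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(9, 64),   # 9 board inputs -> first hidden layer
            nn.ReLU(),
            nn.Linear(64, 32),  # second hidden layer
            nn.ReLU(),
            nn.Linear(32, 1),   # single output neuron
            nn.Tanh(),          # squashes the verdict into [-1, 1]
        )

    def forward(self, x):
        return self.layers(x)

model = TicTacToeNet()
n_params = sum(p.numel() for p in model.parameters())
print(n_params)  # 2753 weights and biases in total
```

Counting it out by hand: 9×64 + 64×32 + 32×1 = 2656 weights, plus 64 + 32 + 1 = 97 biases, so 2753 parameters, which matches the "nearly 3,000 parameters" figure later in this article.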
Training Process
I let the AI "study" the textbook (my dataset) for up to 200 rounds (epochs).
- It would make a prediction.
- Check the answer key.
- Adjust its "valves" (weights) slightly to get closer to the right answer.
I also used a technique called Early Stopping (patience=10), which tells the AI, "If you stop improving for 10 rounds, stop studying. You're ready."
After training, I had a file called "ttt_best.pth". This small file contained the optimized numerical values for every single weight and bias in the network.
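The loop described above can be sketched like this. It is a minimal illustration, not the attached script: the data here is random stand-in noise (the real script trains on the Minimax-labelled CSV), and the optimizer, learning rate, and loss are my assumptions:

```python
import torch
import torch.nn as nn

# Stand-in model mirroring the 9 -> 64 -> 32 -> 1 structure.
model = nn.Sequential(nn.Linear(9, 64), nn.ReLU(),
                      nn.Linear(64, 32), nn.ReLU(),
                      nn.Linear(32, 1), nn.Tanh())
X = torch.randint(-1, 2, (256, 9)).float()   # random board encodings
y = torch.randint(-1, 2, (256, 1)).float()   # random outcome labels

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
best_loss, patience, stale = float("inf"), 10, 0
best_state = None

for epoch in range(200):                     # "study" for up to 200 rounds
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)              # predict, then check the answer key
    loss.backward()                          # work out how each valve should move
    optimizer.step()                         # adjust the weights slightly
    if loss.item() < best_loss - 1e-5:
        best_loss, stale = loss.item(), 0
        best_state = {k: v.clone() for k, v in model.state_dict().items()}
        # (the real script saves this best snapshot as "ttt_best.pth")
    else:
        stale += 1
        if stale >= patience:                # no improvement for 10 rounds: stop
            break
```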
I've attached the training code below.
Downloads
Extracting the Equation From the Brain
Now we have a trained brain, but it is trapped inside a binary file (.pth) that only a computer can read. To put it on a poster, we need to translate it into human language (mathematics).
This isn't as simple as hitting "Print."
I wrote a custom script to loop through every layer of the trained model and pull out the raw numbers.
- Weights: The strength of connections between neurons.
- Biases: The threshold for activating a neuron.
I quickly ran into a physical problem. The raw numbers were too precise. A single weight looked like this:
0.12345678 (10 characters).
With nearly 3,000 parameters plus all the algebraic symbols (x, +, *, tanh), the equation exploded to over 500,000 characters! It wouldn't fit on my target paper size without becoming microscopic.
The Solution
I realized I didn't need 8 decimal places for the art to be functional.
I rounded every weight to 4 decimal places (e.g., 0.1235).
- Original: 0.12345678
- Optimized: 0.1235
This seemingly small change saved over 20,000 characters of space! It was just enough to compress the equation onto an A2 sheet while keeping the calculation accurate enough to predict the winner.
My script then stitched these thousands of numbers into one massive, continuous string of text, formatting it exactly like a mathematical function:
tanh( ... + 0.5432 * ReLU( ... + -1.234 * x1 ... ) ... )
I saved this as a generic text file.
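Here is a toy version of the stitching step for a single neuron, including the rounding to 4 decimal places. The function name and exact spacing are illustrative, not taken from my script:

```python
def neuron_to_text(weights, bias, inputs):
    """Render one ReLU neuron as equation text, rounding weights to 4 d.p."""
    terms = [f"{round(w, 4)} * {x}" for w, x in zip(weights, inputs)]
    return "ReLU( " + " + ".join(terms) + f" + {round(bias, 4)} )"

weights = [0.12345678, -1.23400011, 0.54321999]
bias = 0.00012345
print(neuron_to_text(weights, bias, ["x1", "x2", "x3"]))
# ReLU( 0.1235 * x1 + -1.234 * x2 + 0.5432 * x3 + 0.0001 )
```

Repeating this for every neuron in every layer, and nesting each layer's text inside the next, produces the one giant string that ends up on the poster.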
Creating the Art
I now had the text, but turning it into art was a massive engineering challenge in itself.
Attempt 1
My first instinct was to open Inkscape, draw a brain silhouette, and paste the text inside. But standard design software is not built for 300,000+ characters of editable text. Inkscape froze and crashed every time I tried to manipulate the text block. Even when I managed a partial test, the text was too dense: to fit inside just the brain shape, the font had to be microscopic, which would have required an A0 printer (huge and expensive) just for the text to be readable.
Attempt 2
I needed a solution that was efficient (wouldn't crash) and readable (fit on a standard A2/A3 poster).
I realized I shouldn't just fill the brain; I should fill the entire page with the equation.
I wrote a second Python script to "render" the text directly to a PDF, bypassing design software entirely.
The Logic:
- The Canvas: The script treats the entire A2 page as a grid of text. It flows the equation from top-left to bottom-right, like a wall of code.
- The Brain Check: For every single word it places, the script checks coordinates against a hidden brain image (a "mask").
- Conditional Formatting:
- If inside the brain: The text is colored BLACK (0.0).
- If outside the brain: The text is colored GRAY (0.5).
The Result:
The brain silhouette emerges not because I drew a line, but because the equation itself changes color as it passes through the "mind." This technique allowed me to use a larger, readable font size while keeping the brain clearly visible.
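The mask check itself is simple. Here is a sketch using Pillow, with a stand-in rectangle instead of the real brain silhouette image (the mask size, filename-free setup, and threshold are all illustrative):

```python
from PIL import Image

# Stand-in mask: a white "brain" region on a black page-sized canvas.
# The real script loads a brain silhouette image instead.
mask = Image.new("L", (420, 594), 0)
mask.paste(255, (100, 150, 320, 420))   # white rectangle as a fake brain

def text_color(x, y):
    """Black (0.0) inside the brain mask, gray (0.5) outside."""
    inside = mask.getpixel((int(x), int(y))) > 127
    return 0.0 if inside else 0.5

print(text_color(200, 300), text_color(10, 10))  # 0.0 0.5
```

The renderer calls a check like this for every word it places, so the silhouette emerges from the text color alone.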
Key Features of the Script:
- Vector PDF: It generates a native PDF, meaning you can zoom in infinitely. The text never gets pixelated.
- Automated Typography: It handles line spacing and kerning mathematically, ensuring the "wall of text" is perfectly even.
Printing and Framing
Printing
Because the text is the art, print quality is everything.
- Paper Size: I chose A2 (420 x 594 mm). It is large enough to be impressive, but standard enough to frame easily.
- Paper Type: I recommend Matte Poster Paper (170gsm or higher). Glossy paper reflects too much light, making the tiny text hard to read.
- Before committing to a full print, I sent a small crop of the PDF to my local print shop to test the font size.
Ask your vendor specifically if 5pt text will be legible on the print size. High-resolution digital printing (1200 dpi+) is required.
Framing
A piece this technical needs a clean, modern frame.
- The Frame: I used the IKEA RÖDALM (50x70 cm).
- The RÖDALM comes with a mount (the white border inside), but my design fills the entire A2 sheet, so I removed the mount entirely. The A2 paper fits into the 50x70 cm frame with a bit of "breathing room" (or sits flush, depending on your exact print margins).
No Longer a Black Box!
From the very beginning, this project was about one thing: proving that the inside of a neural network isn't a "black box." It's not magic, and it's not unknowable. It's just math.
This piece, "Frozen Intelligence," is the literal proof. It is a functional scientific instrument and a piece of art at the same time. The fact that standard software crashed and that I had to write my own tools to create it was a bit stressful, but I really enjoyed the whole process.
I hope this project inspires other engineers to see the beauty in their code and other artists to see the creative potential hidden in technology. If you make your own version or try this technique with a different AI model, please share it! I would love to see what you create.
Thank you for following along on my journey :)