All About AI

Can Claude Code Learn To Draw In MS Paint?

2026-03-16

This started as a "what if" experiment after I saw a clip from Mo on YouTube about how nature solves complex problems. His point: nature does not "code up" a hand. It tries random things, evaluates them against natural selection, keeps what works, and over time builds something impossible to design by hand. He called it vibe-coding, applied to biology. And he argued that humans will only build truly complex software the same way: define a goal, define an evaluation metric, let LLMs mutate and search until they hit the criterion.

So I gave Claude Code a goal, a tool, and an evaluator. The goal: copy a drawing I made in JS Paint. The tool: my Chrome browser automation setup. The evaluator: visual similarity. Then I let it iterate.

The Setup

Three pieces, nothing fancy:

- The goal: fisherman.png, a drawing I made in JS Paint.
- The tool: my Chrome browser automation setup.
- The evaluator: visual similarity between a screenshot and the target image.

I started with no skills loaded, so Claude had no idea how to operate JS Paint; the first run was pure exploration. The prompt:

"Here is your challenge. You have the tools to navigate Chrome. Read the image fisherman.png. Go to JS Paint. Draw the exact image. Use screenshot to compare to the truth at all times. Build tools if you need them. When you have 95% similarity you can stop. No cheating. Do you understand?"

I added --dangerously-skip-permissions so I would not have to approve every tool call.

Drawing 1: The Fisherman

Claude started by reading the image and inspecting CDP capabilities. Then it built a drawing script that controlled the mouse via CDP to draw on the JS Paint canvas. First attempt: a rough fisherman silhouette, four legs (yes, my drawing did have four legs, because the figure was on a stool), no facial features, and an oddly shaped fishing rod.
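The script itself is not in the post, but the core trick, synthesizing mouse input over the Chrome DevTools Protocol, is easy to sketch. Assuming a transport that sends CDP commands over the DevTools WebSocket, a brush stroke is just a press, a trail of moves, and a release (the `stroke_events` helper and the coordinates are my illustration, not Claude's actual script):

```python
def stroke_events(points):
    """Turn a polyline of (x, y) canvas points into CDP
    Input.dispatchMouseEvent payloads: press, drag, release."""
    if not points:
        return []
    events = []
    x0, y0 = points[0]
    events.append({"type": "mousePressed", "x": x0, "y": y0,
                   "button": "left", "clickCount": 1})
    for x, y in points[1:]:
        # Intermediate moves with the button held down produce the drag.
        events.append({"type": "mouseMoved", "x": x, "y": y,
                       "button": "left"})
    xn, yn = points[-1]
    events.append({"type": "mouseReleased", "x": xn, "y": yn,
                   "button": "left", "clickCount": 1})
    return events

# Each payload would be sent as an Input.dispatchMouseEvent command;
# JS Paint can't tell it apart from a real brush drag.
line = stroke_events([(100, 120), (140, 120), (180, 125)])
```

From there, "draw a fisherman" reduces to generating the right polylines, which is exactly where the iteration happens.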

Second attempt: better proportions, fishing line included, but the rod still looked off. Third attempt: marginally better, added the bobber. The agent stopped iterating around the third attempt — it had hit a local minimum where its tools were not precise enough for the next jump.

Fair enough. The point was not to make it perfect — it was to see if the iterative loop worked. It did.

Drawing 2: AI Agent Text

Second test: I drew "AI Agent" in handwriting on a colored background. Claude reproduced it, but the N was backwards. I prompted it to fix the N and to work on the comparison. It flipped the N, then tried to make the letters more hand-drawn; the styling improved, but the A's came out strange.

It then built a comparison tool to measure similarity, scored 78.1%, complained that the comparison was unfair (artwork vs screenshot), tried more variations, and eventually self-reported 95%. I don't fully buy that 95% — it cheated a bit on the metric — but again, the iterative search loop is the lesson.
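The post doesn't include the comparison tool itself, but the crudest version of the idea is mean absolute pixel difference between the two images, rescaled to a percentage (the `similarity` helper below is my sketch, with flat grayscale lists standing in for real images, not Claude's script):

```python
def similarity(a, b):
    """Percent similarity between two equal-sized grayscale images,
    given as flat lists of 0-255 pixel values: 100% minus the mean
    absolute per-pixel difference, scaled to the 0-255 range."""
    assert len(a) == len(b) and a, "images must match in size"
    mean_diff = sum(abs(p - q) for p, q in zip(a, b)) / len(a)
    return 100.0 * (1 - mean_diff / 255)

target = [0, 0, 255, 255]    # toy 4-pixel "artwork"
attempt = [0, 0, 255, 0]     # one pixel fully wrong
score = similarity(target, attempt)
```

A metric this naive is also why the "unfair comparison" complaint has a point: a small canvas offset or antialiasing difference shifts every pixel and can tank the score even when the drawing is visibly close.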

Saving the Learnings: A Drawing Skill

The interesting moment came after the experiments. I distilled what worked into a draw_on_js_paint skill: brush stroke techniques, color picking, eye and face proportion patterns. Now I can prompt:

"Draw an abstract oil painting of a female on a hot summer night."

And Claude loads the skill and produces something passable in JS Paint using the brush stroke technique it learned. It is not Picasso, but it is recognizably an oil-style painting — face, stars, color blocking. I tested with "a dog playing in the snow on a winter day" and got a four-legged retriever-shaped dog in snow.
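The skill itself is nothing exotic: a markdown file with a short frontmatter header and the distilled technique notes. A sketch of the shape, with illustrative contents rather than the literal file:

```markdown
---
name: draw-on-js-paint
description: Brush and color techniques for drawing on the JS Paint canvas via CDP mouse control
---

# Drawing in JS Paint

- Paint in short, overlapping strokes; long single drags skip pixels.
- Select the palette color before each region, then screenshot to verify.
- Faces: place eyes at roughly half the head height, about one eye-width apart.
```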

I also have a pencil_charcoal_portrait technique saved separately: a different brush approach, more shading, more detail. I prompted "a pencil portrait of a female" and got a face with shadows and reasonable proportions. The hair was rough; the face was solid.

Why This Pattern Generalizes

The whole point is that "draw something in MS Paint" is silly, but the loop is not. Replace the goal with anything you can score automatically (passing tests, latency targets, visual similarity) and the same machinery applies.

This is what Mo was talking about. The unlock is not the LLM — it is the loop. Goal + tools + evaluator + iteration = complexity that would be impossible by hand. As long as the evaluator is honest (more honest than my JS Paint similarity score), the agent finds the path.
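The loop itself is tiny; all the leverage lives in the tools and the evaluator. A generic sketch with stand-in callables (everything here, including the toy goal, is illustrative):

```python
def search(propose, evaluate, threshold=95.0, max_iters=10):
    """Generic goal/evaluator loop: propose a mutation of the current
    best attempt, score it, keep improvements, stop at the threshold."""
    best, best_score = None, float("-inf")
    for _ in range(max_iters):
        candidate = propose(best)    # mutate the current best attempt
        score = evaluate(candidate)  # an honest evaluator is the whole game
        if score > best_score:
            best, best_score = candidate, score
        if best_score >= threshold:
            break
    return best, best_score

# Toy goal: reach the number 42 by nudging a guess toward it.
target = 42
result, score = search(
    propose=lambda prev: (prev or 0) + 7,
    evaluate=lambda c: 100 - abs(target - c),
)
```

Swap `propose` for "have the LLM redraw the canvas" and `evaluate` for "screenshot similarity," and you have the JS Paint experiment; swap them for "edit the code" and "run the test suite," and you have something much less silly.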

Inspiration

Mo's video was the spark for this — go watch it if you have not. The framing of "all life is essentially vibe-coded" is a useful mental model for what we are doing with autonomous agents now. The same evolutionary search that built hands is what builds working code, working strategies, working drawings — given a tight enough loop and a good enough evaluator.

Resources