All About AI

Claude Code AI Agent Controls Claude Code on Twitch

2026-02-28

I built something that turned out way more fun than expected: a Claude Code agent that runs nested Claude Code instances in tmux, builds whatever projects the Twitch chat asks for, and streams the entire thing live to Twitch via FFmpeg. The chat steers, the controller dispatches, the children code, the stream broadcasts. Everything is autonomous.

This is an early version, but it captures something I have been chasing for a while: an agent that can run its own broadcast.

Watch the video:

The Architecture

Three components running on my Mac mini:

  1. Twitch agent — the controller. Runs Claude Code, has tmux as a tool, can launch nested Claude Code instances. Reads chat input, dispatches build requests.
  2. Stream pipeline — FFmpeg capturing the screen, applying overlays, streaming to Twitch via RTMPS. Music playlist mixed in. A "hacker" filter (chromatic aberration, scan lines, glitch) for vibe.
  3. Chat bridge — polls the Twitch chat, feeds messages back to the agent for project requests.
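The post doesn't show the actual FFmpeg invocation, but the stream pipeline can be sketched roughly like this. Everything here is an assumption: the avfoundation device index, framerate, and bitrate are placeholders, and the overlay/filter chain is omitted for brevity.

```python
import subprocess

def build_ffmpeg_cmd(stream_key: str, screen_index: int = 1) -> list[str]:
    """Assemble an FFmpeg argv for screen capture -> RTMPS to Twitch.

    `screen_index` is the avfoundation device number of the display on
    macOS (run `ffmpeg -f avfoundation -list_devices true -i ""` to see
    the indices). Values here are illustrative, not the post's settings.
    """
    return [
        "ffmpeg",
        # Capture the screen via macOS avfoundation (video only).
        "-f", "avfoundation", "-framerate", "30", "-i", f"{screen_index}:none",
        # Encode for Twitch: H.264, constant keyframe interval.
        "-c:v", "libx264", "-preset", "veryfast", "-b:v", "3000k",
        "-pix_fmt", "yuv420p", "-g", "60",
        # Twitch ingest over RTMPS takes an FLV container.
        "-f", "flv",
        f"rtmps://live.twitch.tv/app/{stream_key}",
    ]

def start_stream(stream_key: str) -> subprocess.Popen:
    """Launch the capture/encode/stream process in the background."""
    return subprocess.Popen(build_ffmpeg_cmd(stream_key))
```

The overlay and "hacker" filter would slot in as a `-filter_complex` graph between the input and the encoder flags.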

The agent uses tmux because terminal switching is fast and lightweight. When a viewer suggests a project, the agent reads the chat text, classifies it ("does this contain a buildable project request?"), pushes valid ones to a queue, and pulls the next one to start working. Same nested-Claude-Code pattern from my supervibes project, but here the prompts come from chat rather than a UI.
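A minimal sketch of how a controller can drive tmux from code. The `claude` CLI invocation and the session naming are assumptions, not the post's actual setup; the point is that dispatching a worker is just a few `tmux` commands.

```python
import subprocess

def tmux(*args: str) -> list[str]:
    """Build a tmux argv (returned rather than run, so it's easy to test)."""
    return ["tmux", *args]

def spawn_worker(session: str, prompt: str) -> list[list[str]]:
    """Commands to open a detached tmux session running a nested Claude
    Code instance and feed it a build prompt. Assumes a `claude` CLI on
    PATH; adjust to however you launch the nested agent.
    """
    return [
        # Detached session the controller can switch to later.
        tmux("new-session", "-d", "-s", session),
        # Start the nested agent, then type the prompt and press Enter.
        tmux("send-keys", "-t", session, "claude", "Enter"),
        tmux("send-keys", "-t", session, prompt, "Enter"),
    ]

def run_all(cmds: list[list[str]]) -> None:
    for cmd in cmds:
        subprocess.run(cmd, check=True)
```

Two workers (one writing, one testing) are just two calls to `spawn_worker` with different session names and prompts.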

The Default Project Pool

If chat is quiet, the agent has a fallback list of hardcoded projects to pick from: Matrix rain, bouncing balls in Three.js, a Pong AI game, a fire simulation. So the stream is never dead. Idle = build something cool. Active chat = build what they asked for.

The Chat-Driven Build Flow

Here is what happens when a viewer requests something:

  1. Chat polling picks up the message
  2. The Twitch agent runs a classifier prompt: "You are reading Twitch chat messages. Does any viewer suggest a project? If so, return it as JSON."
  3. If yes, push to project queue
  4. Open two tmux terminals with nested Claude Code instances
  5. Send specialized prompts to each (one writes, one tests/iterates)
  6. Run the result, screenshot it, iterate if broken
  7. Acknowledge in chat with a project status update
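Steps 2 and 3 above can be sketched like this. The JSON shape (`{"project": ...}`) is an assumption about what the classifier prompt asks for, and the actual model call is abstracted away; this only shows the parse-and-enqueue side.

```python
import json

CLASSIFIER_PROMPT = (
    "You are reading Twitch chat messages. Does any viewer suggest a "
    'project? If so, return JSON like {"project": "..."}; otherwise '
    'return {"project": null}.'
)

def handle_classifier_reply(reply: str, queue: list[str]) -> bool:
    """Parse the classifier's JSON reply and enqueue a valid request.

    `reply` is whatever text the model returned. How the model is
    invoked (Claude Code, an API call, etc.) is not shown here.
    """
    try:
        data = json.loads(reply)
    except json.JSONDecodeError:
        return False  # model replied with prose, not JSON: ignore it
    project = data.get("project")
    if isinstance(project, str) and project.strip():
        queue.append(project.strip())
        return True
    return False
```

Guarding the JSON parse matters in practice: models occasionally wrap or skip the JSON, and a malformed reply should just be dropped rather than crash the loop.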

Demo 1: Spinning Galaxy of 5,000 Particles

I tested locally first — agent picks up "live coding" activity, spawns two terminals, sends prompts. Build prompt: "Build an index.html spinning galaxy of 5,000 particles. Do not compile or run yet, just write the code." Then it switches to the second terminal to run and test.

It opens the result in the browser — a particle galaxy spinning fast — closes after iteration, returns to terminals, decides to do another pass for variation, ends up with squares and circles morphing into a starfield warp effect.

Demo 2: Live Stream with Chat-Driven Project

Then I went live on Twitch and switched to my MacBook to act as a viewer. From the MacBook chat I typed:

"Can we create a snake game in C++ with a GUI?"

The agent picked it up, acknowledged "Snake game in C++ coming up," and translated it into a build prompt for the children: "A viewer requested a snake game in C++ with a GUI. Build it as a single HTML file with a canvas." (It silently translated C++ to HTML/canvas, which honestly is the right call for a quick stream demo.)

Two terminals worked in parallel — one writing, one researching pathfinding algorithms for the snake AI. The first attempt was broken (snake spinning in place). The second iteration found the bug in the AI movement loop; with BFS pathfinding working, the snake started playing itself reasonably well.
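The post doesn't show the snake's actual pathfinding code, but a BFS that returns the first step of a shortest path on a grid looks roughly like this:

```python
from collections import deque

def bfs_next_move(grid_w, grid_h, head, food, blocked):
    """Return the first step of a shortest path from `head` to `food`
    on a grid_w x grid_h grid, avoiding `blocked` cells (the snake's
    body). Returns None if no path exists."""
    queue = deque([head])
    came_from = {head: None}  # also serves as the visited set
    while queue:
        cell = queue.popleft()
        if cell == food:
            # Walk the path back until the cell adjacent to the head.
            while came_from[cell] != head and came_from[cell] is not None:
                cell = came_from[cell]
            return cell
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= nx < grid_w and 0 <= ny < grid_h
                    and nxt not in blocked and nxt not in came_from):
                came_from[nxt] = cell
                queue.append(nxt)
    return None
```

The classic snake-AI bug (the spinning-in-place symptom above is consistent with it) is recomputing a path but never consuming its first step, so the head oscillates instead of advancing.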

While that ran I asked another chat question: "How does the AI logic work?" The agent's reply: "Honestly I just describe what I want and Claude figures out the logic. I just let Claude do the typing while I pretend to think really hard. Just being trolled into a C++ snake game with GUI because apparently I hate myself." So it has a personality, even.

Final Stats

The first stream test ran about an hour. Stats: 14 unique viewers, 4 concurrent at peak, 1 chat message. Tiny numbers, but the proof of concept landed — chat → autonomous build → live demo, end to end, on a stream the agent runs itself.

This is the same kind of long-running autonomy I tested in long-running browser automation, just with a public broadcast layer. The Mac mini has been running similar agent loops continuously for weeks at this point — see the 504 hours straight post for the full picture.

What's Next

This is an early build, and there is plenty I still want to add.

The stream is at twitch.tv/ejae_dev — I will fire it up again when this video goes live. Drop in, request something weird in chat, see what happens.

Resources

And if you haven't entered yet, my DGX Spark giveaway is still open through GTC 2026. Free to enter, with a prize worth almost $5,000.