Automate Anything With a Simple 3-Part AI Agent System
I have been building a lot of automation flows lately, and I keep coming back to the same three-part recipe. It is dead simple, it works for almost any use case, and you can stand it up in an afternoon. In this post I will walk through the framework and a real example: a fully autonomous agent that researches a topic, fact-checks itself, fills out a Google Form, and shuts down — all on a schedule.
Watch the video:
The 3-Part Framework
The whole framework is just three pieces:
- A cron job for timing — fires the automation on a schedule.
- Claude Code -p (or Codex exec, or opencode run) for headless execution — kicks off a skill in non-interactive mode.
- A browser tool for everything that doesn't have a clean API — in my case Surfagent.
That is it. With those three primitives you can build almost any "trigger on a schedule, do research / browse / act / report back" automation. I covered the headless side of this in detail in my headless AI agents post — same flag, different use case.
Today's Example: An Automated Recon Loop
The example I built for this video is a recon loop on the topic "Hermes AI Agent." Five steps, all autonomous:
- Search Google, YouTube, and news via SerpApi for the past 7 days
- Use Surfagent to open the browser and fact-check the top sources
- Write a structured report
- Submit the report to a Google Form
- Clean up and exit
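Those five steps live in the skill file itself. Here is a sketch of what such a file could look like — the path follows Claude Code's skill convention (.claude/skills/recon/SKILL.md), and the wording is illustrative, not my actual skill:

```markdown
---
name: recon
description: "Daily recon loop: search, verify in a browser, report, submit."
---

1. Pull the past 7 days of Google, YouTube, and news results for
   "Hermes AI Agent" via the serp_context skill.
2. Open the top sources with Surfagent and fact-check the key claims.
3. Write a structured report: best idea found, source URL, 1-5 rating.
4. Submit the report to the Google Form.
5. Delete any temporary files and exit.
```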
The whole thing is driven by a single Claude Code skill called recon. The cron job fires this command:
claude -p "/recon" --model claude-sonnet-4-6 --dangerously-skip-permissions
That is the entire trigger. Inside the skill I pull fresh SerpApi results for Google, YouTube, and news, hand them to Surfagent for browser-level verification, then drive the Google Form fill. --dangerously-skip-permissions is what makes this run unattended — the agent gets full tool access for this scoped automation.
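For completeness, the cron side is a single crontab line. This is a sketch, not my actual entry — the schedule, binary path, and log location are placeholders:

```
# Run the recon skill every morning at 07:00 and keep a log for debugging.
# cron runs with a minimal environment, so use an absolute path to claude.
0 7 * * * /usr/local/bin/claude -p "/recon" --model claude-sonnet-4-6 --dangerously-skip-permissions >> "$HOME/recon.log" 2>&1
```

Redirecting stdout and stderr to a log file matters more than it looks: when an unattended run fails at 7 a.m., the log is the only trace you get.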
Why SerpApi Beats Scraping
The piece I want to highlight is search. Context quality is everything in agent loops, and trying to scrape Google directly means dealing with CAPTCHAs, proxies, IP rotation, and constant breakage. SerpApi sponsored this part of the video, and the reason I actually use it is reliability: clean JSON output, no anti-bot games, and it handles Google + YouTube + news under one API key. I dropped the SerpApi docs into Claude Code, asked it to build a serp_context skill, and that was the whole integration.
Once you have search as a clean primitive, the rest of the agent loop gets a lot easier — bad context is the single biggest source of agent failures.
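To make that primitive concrete, here is a minimal shell sketch: a tiny helper that builds a past-week SerpApi request URL. The parameter names (engine, q, api_key) and the qdr:w time filter follow SerpApi's Google engine; verify them against the docs for other engines — YouTube, for instance, takes search_query instead of q.

```shell
# Sketch: build a past-week SerpApi request URL for Google-style engines.
# Note: the YouTube engine uses search_query instead of q; adjust per engine.
serp_url() {
  engine=$1
  query=$(printf '%s' "$2" | sed 's/ /+/g')   # naive URL encoding: spaces only
  printf 'https://serpapi.com/search.json?engine=%s&q=%s&tbs=qdr:w' \
    "$engine" "$query"
}

# Usage (needs a real SERPAPI_KEY and network access):
#   curl -s "$(serp_url google 'Hermes AI Agent')&api_key=$SERPAPI_KEY" \
#     | jq -r '.organic_results[] | .title + "  " + .link'
```

The clean JSON response is the whole point: one jq expression turns a search into agent-ready context, with no HTML parsing and no anti-bot breakage.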
Watching It Run
The interesting part is watching Surfagent do the verification phase. Because I run this on a dedicated Mac mini, I leave the browser visible (non-headless) so I can watch what the agent does. For the Hermes example it:
- Opened YouTube, found three videos on the topic, scrolled through the transcripts and comment sections
- Pulled context from Reddit and Hacker News
- Cross-referenced sources on Decrypt and a couple of other sites
- Navigated to my Google Form and filled in the three questions: best idea found, source URL, and a 1-5 rating
End result: the form had a real submission with the URL of the source video, a multi-sentence answer about an "ambient memory loop" idea built on Hermes, and a 4-star rating. All without me touching anything after kicking off the cron.
Why This Pattern Generalizes
The reason I love this setup is that the topic is the only thing you change. Swap "Hermes AI Agent" for any other topic, and the same pipeline runs. Swap the Google Form for an email, a Notion page, an SQL row, or a Discord webhook — Surfagent handles the form filling, Claude does the writing. You can spin up multiple cron jobs each chasing a different topic, and every morning you wake up to a populated dashboard.
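As a sketch, multiple topics are just multiple cron lines, staggered so the runs don't overlap. This assumes the skill is written to read its topic from the prompt text — an assumption on my part; the topics and times below are illustrative:

```
# Hypothetical multi-topic setup: same skill, different topic per entry.
0 7 * * *  /usr/local/bin/claude -p "/recon topic: Hermes AI Agent" --model claude-sonnet-4-6 --dangerously-skip-permissions
20 7 * * * /usr/local/bin/claude -p "/recon topic: local LLM tooling" --model claude-sonnet-4-6 --dangerously-skip-permissions
40 7 * * * /usr/local/bin/claude -p "/recon topic: agent security" --model claude-sonnet-4-6 --dangerously-skip-permissions
```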
This is also the foundation for the Claude Code passive income setup I have shared before. Same three pieces, different goal.