Developer Stories

How a Bootstrapped Founder Built a Zero-Writer SEO Content Engine with OpenClaw

12 min read · By ButterGrow Team

Marcus Chen had a problem that most bootstrapped SaaS founders know intimately: SEO content is a compounding asset, but good writers are expensive. By early 2025, his project management tool for freelancers — FreelanceOS — was burning $6,000 a month on content. Eight to ten articles per month, each carefully researched and edited, each taking a week to produce. Organic traffic was growing, but not fast enough to justify the spend.

Then Marcus discovered OpenClaw, self-hosted it on ButterGrow, and spent a weekend rebuilding his entire content operation around four AI agents working in parallel. Three months later, he was publishing 40 articles per month, organic traffic had grown 310%, and his total content infrastructure cost — API calls, ButterGrow hosting, and his own time — was under $400 a month.

This is the technical story of exactly how he did it.

  • 40 articles per month (up from 10)
  • 310% organic traffic growth in 90 days
  • $400 total monthly cost (down from $6,000)

The $6,000/Month Problem

Marcus was not anti-freelancer. He was, after all, building a product for freelancers. But the economics of content at scale are brutal for a bootstrapped company. His previous workflow looked like this: he would spend a few hours researching topics and writing briefs, then hand them to three freelance writers who each produced two to three articles per month. A dedicated editor reviewed each draft. The whole cycle took one to two weeks per article.

The ceiling on this system was obvious: to double output, he would need to double spend, and he could not spend money he did not have. What he needed was a system that scaled horizontally, where adding more output cost almost nothing at the margin.

The key insight: AI agents are not about replacing quality with quantity. They are about removing the bottleneck between strategy and execution. Marcus still owned the strategy. The agents owned the execution.

Pipeline Overview: Four Agents, One Assembly Line

Marcus designed his pipeline as a linear handoff chain, with each OpenClaw agent consuming the output of the previous one. The entire pipeline runs autonomously on a daily cron schedule managed by ButterGrow, with one human checkpoint before any article goes live.

Here is the high-level architecture:

  1. Keyword Research Agent — discovers and prioritizes target keywords
  2. Content Brief Agent — generates structured briefs from ranked keywords
  3. Drafting Agent — writes full article drafts from briefs
  4. Publish & Interlink Agent — queues drafts in the CMS and adds internal links

Each agent is a separate OpenClaw instance with its own system prompt, tool access, and output schema. They communicate through a shared Notion database that acts as a job queue — each row represents one article at a specific pipeline stage.
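The handoff pattern above can be sketched in a few lines of plain Python. The real pipeline talks to the Notion API; here the queue is just a list of dicts, and all names (`next_status`, `claim_jobs`, the status strings) are illustrative rather than OpenClaw or Notion APIs.

```python
# Sketch of the Notion-queue handoff: each agent claims rows at its input
# stage, does its work, and advances them one stage. Statuses mirror the
# pipeline described in the article.

PIPELINE = ["keyword-approved", "brief-approved", "draft-ready", "published"]

def next_status(status: str) -> str:
    """Advance an article one stage down the pipeline."""
    i = PIPELINE.index(status)
    if i == len(PIPELINE) - 1:
        raise ValueError("already at final stage")
    return PIPELINE[i + 1]

def claim_jobs(rows: list[dict], stage: str) -> list[dict]:
    """Each agent picks up only the rows sitting at its input stage."""
    return [r for r in rows if r["status"] == stage]
```

Because every stage only reads rows at one status and writes the next, a failed stage leaves its rows untouched and can simply retry on the next run.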

Agent 1: The Keyword Research Agent

This agent runs once per day at 6 AM. Its job is to fill a backlog of approved keywords that the rest of the pipeline can draw from.

What the agent does

  • Opens a browser session via ButterGrow's Chrome DevTools MCP integration
  • Searches Google for the founder's seed topics (e.g., "freelance project management," "invoice automation")
  • Scrapes "People Also Ask" boxes and autocomplete suggestions from the SERPs
  • Cross-references each keyword against the Ahrefs API for monthly search volume and keyword difficulty
  • Filters for keywords with volume above 200 and difficulty below 40
  • Appends approved keywords to the Notion database with status "keyword-approved"
# Simplified version of the keyword research agent system prompt

You are a senior SEO strategist for FreelanceOS, a project management tool
for independent contractors. Your job is to discover low-competition,
high-intent keywords that our target audience — freelancers and
solopreneurs — would search for when facing a business problem we can solve.

Process:
1. Use browser_navigate to load Google search results for each seed topic
2. Extract PAA questions and autocomplete variants
3. For each candidate keyword, call ahrefs_keywords_explorer
4. Filter: volume >= 200, KD <= 40, intent = informational or commercial
5. Add passing keywords to notion_database_append with status "keyword-approved"
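The filtering rule in step 4 is simple enough to express as a standalone check. This is a sketch, not OpenClaw code: the metric field names (`volume`, `difficulty`, `intent`) are assumed shapes for whatever the Ahrefs lookup returns.

```python
# Minimal sketch of the keyword filter. Thresholds match the prompt:
# volume >= 200, keyword difficulty <= 40, informational or commercial intent.

MIN_VOLUME = 200
MAX_DIFFICULTY = 40
ALLOWED_INTENTS = {"informational", "commercial"}

def passes_filter(kw: dict) -> bool:
    """True if a candidate keyword should be appended as keyword-approved."""
    return (kw["volume"] >= MIN_VOLUME
            and kw["difficulty"] <= MAX_DIFFICULTY
            and kw["intent"] in ALLOWED_INTENTS)
```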

Marcus started without the Ahrefs integration — he scraped Google alone for the first few weeks. "It worked," he told us. "I was just targeting based on PAA relevance and gut. Adding Ahrefs volume data later improved the hit rate but wasn't required to get started."

Agent 2: The Content Brief Agent

The Brief Agent runs at 7 AM, one hour after the Keyword Agent. It picks up any rows with status "keyword-approved," and for each one, generates a structured content brief.

What a brief contains

  • Target keyword and 3–5 secondary keywords
  • Proposed title (H1)
  • Target word count range
  • Recommended H2 section headings with one-sentence descriptions
  • SERP analysis: top 3 ranking articles, what they cover, what they miss
  • Audience pain point this article addresses
  • Suggested internal links from existing FreelanceOS blog posts

Why the brief matters: The brief is where SEO intent is set. Without a brief, the drafting agent writes coherent prose that ranks for nothing. With a good brief, even mediocre prose has structure that matches search intent. The brief is the most important prompt in the pipeline.

The Brief Agent uses OpenClaw's browser tool to pull and summarize the top 3 ranking articles for each keyword. It does not just copy headings — it identifies gaps: what questions do the top results fail to answer? Those gaps become unique H2 sections in the brief, giving the FreelanceOS article a structural edge over existing content.
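The gap-analysis idea can be sketched as a coverage count: questions that few or none of the top-ranking articles answer become candidate H2 sections. In the real pipeline the heading lists come from the agent's browser scrape; here they are supplied directly, and the matching is naive exact-match rather than the semantic comparison an LLM would do.

```python
# Sketch of SERP gap analysis: surface candidate questions covered by at
# most `max_coverage` of the scraped competitor articles.

from collections import Counter

def find_gaps(candidate_questions: list[str],
              competitor_headings: list[list[str]],
              max_coverage: int = 1) -> list[str]:
    """Return questions most competitors miss; these become unique H2s."""
    covered = Counter()
    for headings in competitor_headings:
        seen = {h.lower() for h in headings}
        for q in candidate_questions:
            if q.lower() in seen:
                covered[q] += 1
    return [q for q in candidate_questions if covered[q] <= max_coverage]
```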

Agent 3: The Drafting Agent

This is the most resource-intensive agent in the chain. The Drafting Agent runs at 8 AM, picking up briefs with status "brief-approved" (the founder manually reviews and approves briefs in about five minutes per batch each morning).

Drafting agent configuration

Model Choice

Claude Sonnet 4.6 with long context

Marcus uses Claude Sonnet 4.6 for drafting rather than a smaller model. "The quality gap is enormous for long-form content. Haiku writes fine sentences but loses coherence across 1,500 words. Sonnet holds the thread."

Persona Injection

Writing as a "practitioner" voice

The drafting prompt instructs the agent to write as "a working freelancer who has personally dealt with this problem" — not as a generic AI assistant. This shifts the tone from explanatory to experiential, which performs better with readers and reduces the "AI smell" that triggers editor skepticism.

Output Schema

Structured JSON draft

The agent outputs a JSON object with fields for title, meta description, slug, body (Markdown), and suggested_tags. The downstream publish agent reads this schema directly — no parsing required.
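A cheap validation pass on that JSON lets a malformed draft fail fast instead of reaching the publish stage. The field names below follow the schema described above (with `meta_description` as an assumed spelling); the checks themselves are illustrative, not part of OpenClaw.

```python
# Light validation of the drafting agent's JSON output before handoff.

import json

REQUIRED_FIELDS = {"title", "meta_description", "slug", "body", "suggested_tags"}

def validate_draft(raw: str) -> dict:
    """Parse a draft and reject it if required fields are missing or malformed."""
    draft = json.loads(raw)
    missing = REQUIRED_FIELDS - draft.keys()
    if missing:
        raise ValueError(f"draft missing fields: {sorted(missing)}")
    if not isinstance(draft["suggested_tags"], list):
        raise ValueError("suggested_tags must be a list")
    return draft
```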

The entire drafting run for a batch of five articles takes roughly 12 minutes. Marcus runs it as a background task; ButterGrow's session heartbeat monitoring keeps the agent context alive across the full run without timeouts.

Agent 4: The Publish & Interlink Agent

The final agent in the chain converts approved drafts into staged posts and weaves them into the site's internal link graph.

Publish workflow

  • Reads draft JSON from Notion rows with status "draft-ready"
  • Converts Markdown to HTML using a lightweight remark pipeline
  • Calls the Webflow CMS API to create a new Collection Item in draft state
  • Sets a "human-review" custom field to true — this surfaces the post in Marcus's daily review queue
  • Scans the existing published article index for topically related posts
  • Injects 2–4 contextual internal links into the new draft's body
  • Also appends a backlink to the new post in 1–2 existing published articles where the new topic is referenced

That last step — retroactively updating existing articles to link to new ones — was the single biggest SEO accelerator Marcus discovered. "Most content operations publish and forget. My agent goes back to old articles every time something new is live. Google re-crawls the updated pages, sees a fresh signal, and tends to index the new article within 24 hours."
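In its simplest form, the retroactive step is a find-and-link pass over an older post's Markdown body. This sketch wraps the first plain-text mention of the new article's topic in a link; a production version would also need to skip code blocks and text that is already linked.

```python
# Sketch of retroactive interlinking: link the first occurrence of the new
# article's topic in an existing post's Markdown body.

import re

def inject_backlink(old_body: str, topic: str, new_url: str) -> str:
    """Wrap the first (case-insensitive) mention of `topic` in a Markdown link."""
    pattern = re.compile(re.escape(topic), re.IGNORECASE)
    def repl(m: re.Match) -> str:
        return f"[{m.group(0)}]({new_url})"
    return pattern.sub(repl, old_body, count=1)
```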

The Human Review Loop: 15 Minutes a Day

Marcus is emphatic that the pipeline does not eliminate human judgment — it compresses it. His daily review ritual:

  1. 7:55 AM (5 min): Review and approve content briefs in Notion. He checks that the brief's angle is differentiated and the target keyword makes sense. He rejects roughly one in eight briefs and adds a comment for the Brief Agent to re-run with a modified prompt.
  2. After lunch (10 min): Skim 4–5 new drafts staged in Webflow. He checks for factual accuracy, adds one or two personal anecdotes or data points, adjusts the opening paragraph if it's generic, and hits publish.

"The agents handle the 80% of content production that is mechanical — research, structuring, drafting to a brief, formatting, interlinking. I handle the 20% that requires actual judgment and lived experience. That's how it should be."

On Google's AI content guidance: Google's helpful content guidance has always rewarded accuracy and usefulness, not origin. Marcus's human review step ensures every article is factually correct and contextually relevant before going live. No penalties. No manual actions. Just compounding organic growth.

Results After 90 Days

Marcus shared his Google Search Console data from the 90-day period after launching the pipeline:

  • Total impressions: Up 280% (from ~45,000 to ~171,000 per month)
  • Total clicks: Up 310% (from ~1,900 to ~7,800 per month)
  • Keywords in top 10: Up from 38 to 194
  • Average position for target keywords: Improved from 24 to 11
  • Trial signups from organic: Up 240%

The ROI is straightforward: at $6,000/month for 10 articles, each article cost $600. At $400/month for 40 articles, each article costs $10. The content quality — measured by time-on-page, bounce rate, and conversion to trial — is comparable. Several pipeline-generated articles have outperformed the best human-written articles in the archive.

Mistakes Made Along the Way

Marcus is candid about what went wrong early. Three failures worth learning from:

1. Skipping the brief step initially

In his first version, Marcus went straight from keywords to drafts. The drafts were fluent but structurally random — the agent would write what it knew about a topic rather than what the SERP demanded. Adding the Brief Agent with SERP gap analysis doubled ranking performance almost immediately.

2. Running all four agents as one monolithic agent

His original prototype was a single OpenClaw agent trying to do keyword research, brief writing, drafting, and publishing in one session. It was slow, fragile, and expensive. The moment a step failed, the entire session failed. Splitting into four discrete agents with the Notion queue as a handoff layer made the system fault-tolerant: each stage can fail and retry independently.

3. Not setting output token limits on the Drafting Agent

Without a word count constraint in the prompt, the Drafting Agent would sometimes produce 4,000-word articles when the brief called for 1,200. This inflated costs and produced articles that needed heavy editing. A simple instruction — "Write between 1,200 and 1,500 words. Stop when you reach 1,500." — solved it.
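Prompt instructions alone are soft constraints, so a hard post-check is a natural complement: drafts outside the target range get flagged for a re-run instead of being published. This guard is an assumption on my part, not something the article describes; the thresholds mirror the prompt wording.

```python
# Hard length check mirroring the prompt's 1,200-1,500 word target.

def check_length(body: str, lo: int = 1200, hi: int = 1500) -> bool:
    """True if the draft's whitespace-delimited word count is in range."""
    n = len(body.split())
    return lo <= n <= hi
```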

How to Replicate This Pipeline

If you want to build a similar system, here is a condensed starting checklist:

Step 1

Set up ButterGrow with four OpenClaw agent instances

Each agent gets its own isolated session. Name them clearly: keyword-agent, brief-agent, draft-agent, publish-agent.

Step 2

Create your Notion job queue database

Schema: Keyword, Status (enum), Brief (rich text), Draft JSON (code), Published URL, Review Notes. The Status enum drives the pipeline: keyword-approved → brief-approved → draft-ready → published.

Step 3

Write your system prompts with explicit output schemas

Every agent should output structured data (JSON), not free-form prose. The downstream agent reads the schema, not the prose. This is the single most important architectural decision.

Step 4

Configure ButterGrow cron schedules with timezone awareness

Stagger your cron jobs by at least 30 minutes. Keyword Agent at 6 AM, Brief Agent at 7 AM, Draft Agent at 8 AM, Publish Agent at 4 PM (after your review window). ButterGrow's timezone-aware cron ensures these run at the right local time regardless of server location.
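The staggered schedule can be expressed in timezone-aware terms with the standard library's zoneinfo module. ButterGrow's own cron syntax is not shown (and not known to me); this sketch just computes each agent's next run as a concrete local-time datetime, with `America/Los_Angeles` as an illustrative timezone.

```python
# Sketch of the staggered, timezone-aware schedule from Step 4.

from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

SCHEDULE = {  # agent -> local run hour
    "keyword-agent": 6,
    "brief-agent": 7,
    "draft-agent": 8,
    "publish-agent": 16,
}

def next_run(agent: str, now: datetime, tz: str = "America/Los_Angeles") -> datetime:
    """Next occurrence of the agent's scheduled hour in the given timezone."""
    local_now = now.astimezone(ZoneInfo(tz))
    run = local_now.replace(hour=SCHEDULE[agent], minute=0,
                            second=0, microsecond=0)
    if run <= local_now:
        run += timedelta(days=1)  # already past today's slot; run tomorrow
    return run
```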

Step 5

Wire up your CMS publish connector

ButterGrow supports Webflow, WordPress, Ghost, and Contentful natively. For other CMSes, use the HTTP action node with your CMS's REST API. Always publish to draft state first — never publish directly to live.

Step 6

Design your human review ritual before you launch

Decide in advance: how will drafts reach you? (Notion queue, Slack digest, email summary?) What is your minimum bar for approving a draft? What happens to a rejected draft? Write these rules down and encode them in your review checklist before the first article runs.

Conclusion

Marcus Chen did not replace his content team because AI became magically good at writing. He replaced it because he redesigned the problem. Instead of asking "can AI write like a human?", he asked "what parts of content production are mechanical enough that an AI agent can own them reliably?" The answer turned out to be: most of it.

The keyword research, the SERP gap analysis, the structural brief, the first draft, the CMS upload, the internal linking — all of it is deterministic given good instructions. What requires human judgment is the strategy, the editorial eye, and the lived-experience detail that makes an article feel real. Marcus kept those parts. He automated the rest.

The compounding effect is the part most people underestimate. Forty articles per month is 480 per year. Each one is an asset that earns traffic indefinitely. At his previous pace of 10 per month, he would have produced the same volume in four years. The AI agents did not just cut costs — they collapsed the timeline.

If you are a bootstrapped founder looking at a content backlog that feels impossible to close, this pipeline is worth the weekend it takes to build. The tools are available on ButterGrow today. The prompts are yours to write. The results, as Marcus demonstrated, are real.

SEO Content Engine FAQ

How many OpenClaw agents did the founder run in parallel for this content pipeline?

The founder ran four specialized OpenClaw agents: a keyword research agent, a content brief agent, a drafting agent, and a publish-and-interlink agent. Each agent has a distinct role and hands off structured output to the next via a shared Notion database, forming an automated assembly line rather than a single monolithic bot.

What was the monthly cost savings compared to hiring freelance writers?

The founder's previous freelance writing budget was roughly $6,000 per month for 8–10 articles. After switching to the OpenClaw pipeline on ButterGrow, the all-in monthly cost dropped to under $400 while output scaled to 40 articles per month — a 93% cost reduction with a 4× throughput increase.

How did the pipeline avoid Google penalties for AI-generated content?

A mandatory human review step is built into the workflow: the publish agent stages every draft in Webflow with a "human-review" flag before anything goes live. The founder spends 10–15 minutes each afternoon verifying facts, adding personal context, and approving drafts. Google's helpful content guidance rewards accuracy and usefulness — not authorship origin.

Why did the founder split the pipeline into four separate agents instead of one?

His original prototype was a single monolithic agent doing all four tasks in one session. It was slow, expensive, and fragile — a failure at any step reset the entire pipeline. Splitting into four discrete agents with a Notion queue as the handoff layer made each stage independently retryable and fault-tolerant, and dramatically reduced per-run LLM costs.

What CMS integrations does ButterGrow's publish agent support?

ButterGrow ships with native connectors for Webflow CMS, WordPress (REST API), Ghost, and Contentful. For headless setups or custom CMSes, the publish agent can write to any REST or GraphQL endpoint using ButterGrow's HTTP action node. The founder in this article used the Webflow CMS integration to push drafts directly into his collection.

Does the keyword research agent require a paid SEO tool subscription like Ahrefs?

No. The founder started by having the browser agent scrape Google's "People Also Ask" boxes and autocomplete suggestions as a free source of keyword ideas. He added Ahrefs API access later to get search volume and keyword difficulty scores, which improved ranking precision — but the pipeline runs and produces results without a paid SEO tool subscription.

What was the single biggest SEO accelerator the founder discovered in the pipeline?

Retroactive internal linking: every time a new article is published, the publish-and-interlink agent goes back to 1–2 existing published articles on related topics and adds contextual links pointing to the new post. This prompts Google to re-crawl the updated pages, which typically results in the new article being indexed within 24 hours rather than weeks.

Related: How One Developer Built a Multi-Channel Content Calendar Bot with OpenClaw in a Weekend · AI-Powered SEO Automation: Keyword Research, On-Page Optimization & Link Building in 2026

Ready to try ButterGrow?

See how ButterGrow can supercharge your growth with a quick demo.

Book a Demo