
From 0 to 50K Monthly Organic Visits: A Multi-Brand SEO Automation Case Study

BP Corp Engineering
14 min read

In February 2025, we launched 13 lead generation brands simultaneously across France, Hungary, UK, and US markets. Zero existing content. Zero domain authority. Zero organic traffic.

By August 2025, those brands collectively generated 320,000 monthly organic visits. Best performer: 78,000 visits. Worst performer: 3,200 visits.

This is the full case study. The system, the results, the costs, and what actually moved the needle.

The Starting Point: 13 Brands, 2-Person Team

The Challenge

BP Corp operates a portfolio of lead generation brands across 4 countries:

France (4 brands):

  • PapaPrevoit (insurance, financial services)
  • MamanPrevoit (family insurance, estate planning)
  • GestionOpti (business insurance, retirement)
  • Plus 1 stealth brand in testing

Hungary (2 brands):

  • GondosApa (home services, renovation)
  • GondosAnya (family services)

UK (3 brands):

  • DadPlans (insurance, financial planning)
  • MomPlans (family protection, estate planning)
  • Plus 1 vertical-specific brand

US (4 brands):

  • TheSmartDad, TheSmartMom
  • Plus 2 regional brands in testing

Each brand targets 9 content verticals:

  1. Life insurance
  2. Home insurance
  3. Solar panels
  4. Home renovation
  5. Funeral services
  6. Estate planning
  7. Retirement planning
  8. Debt consolidation
  9. Energy comparison

The Traditional Approach Would Require:

  • 13 brands × 9 verticals × 50 articles = 5,850 articles minimum
  • At 4 hours per article (research, writing, optimization) = 23,400 hours
  • At $50/hour writer rate = $1,170,000 in content costs
  • At 40 hours/week = 11.7 years for one writer

We had 6 months and a 2-person team.

The Constraint That Changed Everything

When you can't scale humans, you build systems. We built GENESIS ORBIT: an AI SEO content generation platform that handles research, writing, optimization, and publishing.

Goal: 900 articles across 13 brands in 6 months. No compromise on quality. Must rank.

Month 1: Infrastructure + First 100 Articles

Week 1-2: System Build

Before writing a single article, we built the automation infrastructure:

Keyword Research Pipeline

  • Google Search Console API integration (for brands with existing data)
  • Competitor content scraping (Screaming Frog + custom Python scripts)
  • AI-powered keyword expansion (seed keywords → semantic clusters)
  • Priority scoring algorithm (search volume × difficulty⁻¹ × business value)

Output: 4,500+ target keywords across all brands, ranked by opportunity score.
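
For illustration, here's a minimal sketch of that priority scoring in TypeScript. The field names and the business-value weighting are assumptions for the example, not ORBIT's actual schema:

```typescript
// Illustrative priority scoring: (search volume / difficulty) * business value.
// Field names and the businessValue weighting are assumptions, not ORBIT's schema.
interface KeywordCandidate {
  keyword: string;
  monthlySearchVolume: number;
  difficulty: number;      // 1-100, from a keyword tool
  businessValue: number;   // 1-10, set per vertical (e.g. life insurance > energy comparison)
}

function opportunityScore(k: KeywordCandidate): number {
  // Guard against divide-by-zero on "no competition" keywords
  const difficulty = Math.max(k.difficulty, 1);
  return (k.monthlySearchVolume / difficulty) * k.businessValue;
}

function rankKeywords(candidates: KeywordCandidate[]): KeywordCandidate[] {
  return [...candidates].sort((a, b) => opportunityScore(b) - opportunityScore(a));
}
```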

Content Generation Engine

  • Claude 3.5 Sonnet for long-form articles (1,500-2,500 words)
  • GPT-4o for meta descriptions and headlines (faster inference)
  • Perplexity API for fact-checking (real-time source verification)
  • Multi-stage pipeline: Outline → Sections → Assembly → Quality gates
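
A heavily simplified sketch of that multi-stage flow, assuming the Anthropic TypeScript SDK; the model id and prompts are placeholders, and the production prompt library is far more specific:

```typescript
import Anthropic from "@anthropic-ai/sdk";

// Sketch of the outline -> sections -> assembly flow. Prompts are heavily
// abbreviated placeholders; quality gates run on the assembled draft.
const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

async function claude(prompt: string): Promise<string> {
  const msg = await anthropic.messages.create({
    model: "claude-3-5-sonnet-latest", // placeholder model id
    max_tokens: 4096,
    messages: [{ role: "user", content: prompt }],
  });
  const block = msg.content[0];
  return block.type === "text" ? block.text : "";
}

async function generateArticle(keyword: string, vertical: string): Promise<string> {
  // Stage 1: outline
  const outline = await claude(
    `Create a detailed H2/H3 outline for an article targeting "${keyword}" in the ${vertical} vertical.`
  );

  // Stage 2: sections, written in parallel for speed
  const headings = outline.split("\n").filter((l) => l.startsWith("## "));
  const sections = await Promise.all(
    headings.map((h) =>
      claude(`Write the section "${h}" for an article targeting "${keyword}". 250-400 words.`)
    )
  );

  // Stage 3: assembly (quality gates check this draft before it enters the publishing queue)
  return [outline, ...sections].join("\n\n");
}
```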

Publishing System

  • Next.js API routes for CRUD operations
  • PostgreSQL queue for scheduled publishing
  • Vercel cron jobs for automated daily publishing
  • Google Search Console webhooks for performance monitoring
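
A minimal sketch of the scheduled-publishing piece, assuming a Next.js App Router route triggered by Vercel cron and an illustrative article_queue table (not our actual schema):

```typescript
// app/api/cron/publish/route.ts: sketch of a Vercel-cron-triggered publisher.
// Table and column names (article_queue, status, publish_at) are illustrative.
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

export async function GET(request: Request) {
  // Vercel cron calls this route on a schedule (see the "crons" entry in vercel.json)
  if (request.headers.get("authorization") !== `Bearer ${process.env.CRON_SECRET}`) {
    return new Response("Unauthorized", { status: 401 });
  }

  // Claim the next due article atomically so concurrent invocations don't double-publish
  const { rows } = await pool.query(
    `UPDATE article_queue
        SET status = 'published', published_at = now()
      WHERE id = (
        SELECT id FROM article_queue
         WHERE status = 'scheduled' AND publish_at <= now()
         ORDER BY publish_at
         LIMIT 1
         FOR UPDATE SKIP LOCKED
      )
      RETURNING id, brand, slug`
  );

  return Response.json({ published: rows });
}
```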

Total build time: 80 hours (across 2 engineers).

Week 3-4: First 100 Articles

We started with our flagship brand: PapaPrevoit (France). Why? Largest market, most search volume, best test case.

Publishing Strategy:

  • Days 1-5: 3 articles/day (15 total) — testing phase
  • Days 6-10: 5 articles/day (25 total) — ramp up
  • Days 11-20: 7 articles/day (70 total) — sustained pace
  • Days 21-30: Pause publishing, monitor indexing

Total articles published Month 1: 110 across PapaPrevoit (all 9 verticals).

Month 1 Results:

Metric | Value
Articles published | 110
Articles indexed (by Day 30) | 89 (81%)
Monthly organic visits | 1,240
Keywords in Top 100 | 340
Keywords in Top 20 | 42
Keywords in Top 10 | 8
Cost (AI + infrastructure) | $420

Not impressive yet. But the system worked. That was the win.

Month 2: Multi-Brand Expansion

Scaling to 6 Brands Simultaneously

With the PapaPrevoit system validated, we added five more brands:

  • MamanPrevoit (FR)
  • GestionOpti (FR)
  • DadPlans (UK)
  • MomPlans (UK)
  • GondosApa (HU)

Each brand followed the same playbook:

  1. Keyword research (automated, 2 hours human review)
  2. Content calendar generation (ORBIT auto-schedules based on priority scores)
  3. First 50 articles (mix of all 9 verticals)
  4. GSC integration on Day 1

Publishing Cadence:

  • 7 articles/brand/day
  • 6 brands active
  • 42 articles/day total

We didn't hit 42/day immediately. Actual output:

  • Week 1: 28 articles/day average
  • Week 2: 35 articles/day average
  • Week 3: 40 articles/day average
  • Week 4: 42 articles/day consistently

Total articles published Month 2: 280 (across 6 brands).

The Quality Control Bottleneck

At 40+ articles/day, human review became the constraint. We were checking:

  • Structural coherence (does the outline make sense?)
  • Factual accuracy (via Perplexity API, but still needed human verification)
  • Brand voice consistency (does this sound like our brand?)

Solution: Tiered review system

  • 100% automated quality checks (readability, keyword density, plagiarism)
  • 100% AI fact-checking (Perplexity)
  • 10% human review (randomized sample)

This brought review time from 15 min/article to 2 min/article (including the 10% deep reviews).
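
A simplified sketch of that tiered gate, with illustrative thresholds and check names:

```typescript
// Sketch of the tiered review gate. Thresholds and field names are illustrative.
interface QualityReport {
  readabilityScore: number;   // e.g. Flesch reading ease
  keywordDensity: number;     // target keyword occurrences / total words
  plagiarismScore: number;    // 0-1, from a plagiarism API
  factCheckPassed: boolean;   // result of the Perplexity verification step
}

type ReviewDecision = "reject" | "publish" | "human_review";

function reviewDecision(report: QualityReport): ReviewDecision {
  // Hard gates: fail any of these and the article goes back for regeneration
  if (report.plagiarismScore > 0.05 || !report.factCheckPassed) return "reject";
  if (report.readabilityScore < 50 || report.keywordDensity > 0.03) return "reject";

  // Passed automated checks: route ~10% to a human for brand-voice review
  return Math.random() < 0.1 ? "human_review" : "publish";
}
```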

Month 2 Results (Combined 6 Brands):

Metric | Value
Articles published | 280 (cumulative: 390)
Articles indexed | 340 (87%)
Monthly organic visits | 12,400
Keywords in Top 100 | 2,100
Keywords in Top 20 | 380
Keywords in Top 10 | 92
Cost (AI + infrastructure) | $890

Traffic started compounding. Internal links between articles began driving discovery.

Month 3: The Inflection Point

Expanding to All 13 Brands

With the system proven across 6 brands, we activated all 13. This included:

  • GondosAnya (HU)
  • TheSmartDad, TheSmartMom (US)
  • 4 additional brands in testing phases

Publishing at Scale:

  • 10 articles/brand/day (up from 7)
  • 13 brands active
  • 130 articles/day target

Actual output: 118 articles/day average (weekdays only, paused weekends).

The Internal Linking Effect

This is when programmatic SEO started showing compound returns.

By Month 3, we had enough article density to create strong topical clusters:

  • Each vertical had 15-20 articles per brand
  • Each article linked to 2-3 related articles in the same vertical
  • ORBIT's auto-linking algorithm prioritized linking to articles that already had GSC impressions

Google's algorithm interprets dense internal linking as topical authority. Rankings improved across the board.
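
A simplified sketch of that linking logic; the scoring weights and field names are illustrative, not ORBIT's actual algorithm:

```typescript
// Sketch of ORBIT-style auto-linking: prefer same-vertical articles that
// already show GSC impressions. Weights and field names are assumptions.
interface LiveArticle {
  slug: string;
  vertical: string;
  gscImpressions: number;  // last-28-day impressions from Search Console
  keywords: string[];
}

function pickInternalLinks(
  current: { vertical: string; keywords: string[] },
  candidates: LiveArticle[],
  maxLinks = 3
): LiveArticle[] {
  return candidates
    .filter((a) => a.vertical === current.vertical)
    .map((a) => {
      const overlap = a.keywords.filter((k) => current.keywords.includes(k)).length;
      // Impressions act as a tiebreaker: link into pages Google already surfaces
      return { article: a, score: overlap * 10 + Math.log1p(a.gscImpressions) };
    })
    .sort((x, y) => y.score - x.score)
    .slice(0, maxLinks)
    .map((x) => x.article);
}
```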

Example: PapaPrevoit Life Insurance Cluster

Month 2:

  • 18 articles in life insurance vertical
  • Average position for target keywords: 32
  • Monthly traffic from cluster: 840 visits

Month 3 (same 18 articles, added internal links):

  • Average position: 18 (improved 14 positions)
  • Monthly traffic from cluster: 3,200 visits (3.8× increase)

No new articles. Just internal links.

Month 3 Results (All 13 Brands):

Metric | Value
Articles published | 520 (cumulative: 910)
Articles indexed | 780 (86%)
Monthly organic visits | 68,000
Keywords in Top 100 | 8,400
Keywords in Top 20 | 1,900
Keywords in Top 10 | 420
Cost (AI + infrastructure) | $1,640

This was the inflection point. Traffic jumped 5.5× from Month 2. Cost per visit: $0.024.

Month 4-6: Optimization + Sustained Growth

The Rewrite Queue

With 900+ articles live, we shifted focus from volume to optimization.

The GSC-Driven Rewrite Strategy:

Every Monday, ORBIT pulled Google Search Console data for all articles published >14 days ago. Articles were categorized:

Category 1: Winners (Position 1-10)

  • Action: Extract learnings. What made these articles rank?
  • Common traits: Comprehensive (2,000+ words), strong internal links (4+ per article), specific examples with data

Category 2: Close Calls (Position 11-20)

  • Action: Minor optimization. Add internal links, improve meta descriptions, refresh with 2026 data.
  • Result: 40% moved into Top 10 within 30 days.

Category 3: Underperformers (Position 21-50)

  • Action: Content quality issue. Rewrite with deeper analysis, more specific examples, better structure.
  • Result: 25% moved into Top 20 within 60 days.

Category 4: Non-Starters (Position >50 or <10 impressions)

  • Action: Keyword/topic mismatch. Either re-research or retire the article.
  • Result: 60% retired, 40% re-researched and rewritten.

We rewrote 140 articles across Months 4-6 (15% of total published).
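
A minimal sketch of that categorization logic, using the position and impression thresholds above; the data shape is an assumption about what we pull from GSC per page:

```typescript
// Sketch of the weekly categorization pass. Thresholds come from the buckets
// above; the PagePerformance shape is illustrative.
interface PagePerformance {
  url: string;
  avgPosition: number;
  impressions: number;
  publishedDaysAgo: number;
}

type RewriteBucket = "winner" | "close_call" | "underperformer" | "non_starter" | "too_new";

function categorize(page: PagePerformance): RewriteBucket {
  if (page.publishedDaysAgo < 14) return "too_new";             // wait for indexing to settle
  if (page.avgPosition > 50 || page.impressions < 10) return "non_starter";
  if (page.avgPosition <= 10) return "winner";
  if (page.avgPosition <= 20) return "close_call";              // minor optimization
  return "underperformer";                                      // deeper rewrite
}
```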

Multi-Vertical Performance Comparison

Not all verticals performed equally. Here's the breakdown by country and vertical:

Best Performers (France):

  1. Life insurance — 28,000 monthly visits (avg position: 14)
  2. Solar panels — 22,000 monthly visits (avg position: 16)
  3. Home insurance — 18,000 monthly visits (avg position: 18)

Worst Performers (Hungary):

  1. Funeral services — 800 monthly visits (avg position: 38)
  2. Debt consolidation — 600 monthly visits (avg position: 42)
  3. Estate planning — 400 monthly visits (avg position: 47)

Why the disparity?

Market size matters. The French insurance market has 10× the search volume of the Hungarian funeral services market.

Competition matters. UK solar market is saturated (high DA competitors). Hungarian home renovation market is wide open (low DA competitors).

Language matters. Claude 3.5 Sonnet produces stronger content in French and English than in Hungarian. We saw 20% lower quality scores on Hungarian articles (human review flagged more coherence issues).

Month 6 Combined Results (All 13 Brands):

Metric | Value
Total articles published | 910
Articles indexed | 830 (91%)
Monthly organic visits | 320,000
Keywords in Top 100 | 18,200
Keywords in Top 20 | 4,800
Keywords in Top 10 | 1,240
Average position (all keywords) | 28
Cost per visit (Month 6) | $0.007

Top 5 Brands by Traffic:

Brand | Country | Monthly Visits | Articles | Top Vertical
PapaPrevoit | FR | 78,000 | 120 | Life insurance
DadPlans | UK | 56,000 | 98 | Solar panels
GestionOpti | FR | 48,000 | 110 | Retirement planning
TheSmartDad | US | 42,000 | 95 | Home insurance
MamanPrevoit | FR | 38,000 | 105 | Estate planning

Bottom 3 Brands by Traffic:

Brand | Country | Monthly Visits | Articles | Issue
GondosAnya | HU | 3,200 | 82 | Low search volume market
[Stealth Brand 1] | FR | 4,800 | 65 | Niche vertical, high competition
[Regional Brand 2] | US | 6,100 | 70 | Geographic constraint

Even the "worst" performers generated positive ROI. At $0.50/lead and 2% conversion rate, 3,200 monthly visits = 64 leads = $32 monthly revenue. Cost to produce 82 articles: $28 in AI costs. Breakeven in Month 1.

The Technical System Behind the Results

ORBIT's Architecture

This isn't a tutorial, but understanding the system architecture explains why this worked:

Layer 1: Research + Planning

  • Keyword research automation (Google Keyword Planner API, competitor scraping)
  • Content calendar generation (priority scoring, vertical balancing)
  • Outline generation (Claude 3.5, multi-stage prompting)

Layer 2: Content Generation

  • Section-by-section writing (parallel API calls for speed)
  • Fact-checking (Perplexity API, source verification)
  • Internal linking (graph-based recommendation engine)
  • Meta optimization (GPT-4o for titles, descriptions, OG tags)

Layer 3: Quality Control

  • Automated checks (readability, keyword density, plagiarism)
  • AI review (structural coherence, factual accuracy)
  • Human review (10% sample, brand voice + strategic value)

Layer 4: Publishing + Monitoring

  • Scheduled publishing (Vercel cron, PostgreSQL queue)
  • GSC integration (webhook-based performance tracking)
  • Rewrite recommendations (data-driven optimization)

The entire system runs on:

  • Vercel (hosting, serverless functions)
  • Supabase (PostgreSQL database, auth)
  • Claude API, OpenAI API, Perplexity API
  • Google Search Console API

Total infrastructure cost: $280/month at 130 articles/day.

The Prompt Library

Your prompts are your moat. We maintain a version-controlled prompt library with 40+ templates:

  • Outline generation (by vertical)
  • Section writing (by content type: educational, comparison, how-to)
  • Meta optimization (by intent: informational, transactional, navigational)
  • Internal linking (by topical relevance)

Each prompt goes through A/B testing. We track:

  • Quality score (human review ratings)
  • Ranking performance (average position at 30/60/90 days)
  • Traffic generation (organic visits per article)

Best-performing prompts get promoted. Underperformers get deprecated.
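
A simplified sketch of how a prompt registry like this can be scored; the composite score, field names, and weights are illustrative, and the real promotion decision weighs 30/60/90-day ranking data:

```typescript
// Sketch of prompt version tracking and promotion. All weights are illustrative.
interface PromptVariant {
  id: string;               // e.g. "outline.life-insurance.v7"
  qualityScore: number;     // average human review rating, 0-10
  avgPositionAt60d: number; // average ranking of articles produced with this prompt
  visitsPerArticle: number; // organic visits per article at 60 days
}

function pickWinner(variants: PromptVariant[]): PromptVariant {
  const score = (v: PromptVariant) =>
    v.qualityScore * 2 + v.visitsPerArticle / 10 - v.avgPositionAt60d / 5;
  return variants.reduce((best, v) => (score(v) > score(best) ? v : best));
}
```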

Example: Our life insurance outline prompt performed 40% better than our generic insurance prompt. Why? Specificity. Life insurance articles need trust signals (certifications, guarantees). Generic insurance prompts don't emphasize those elements.

The Cost Breakdown

Total 6-Month Investment:

Category | Cost
AI inference (Claude, GPT, Perplexity) | $2,840
Infrastructure (Vercel, Supabase) | $1,680
Human oversight (2 team members, 10 hrs/week) | $12,000
Total | $16,520

Cost per article: $18.15

Cost per visit (Month 6): $0.007

For context:

Traditional agency SEO:

  • $150-300 per article (human-written)
  • 910 articles × $200 = $182,000

Traditional in-house content team:

  • 2 full-time writers at $60K/year = $60,000 for 6 months
  • Output: ~300 articles (assuming roughly 6 articles per writer per week)

GENESIS ORBIT delivered 3× the output at 28% of the cost.

What Worked Better Than Expected

1. Internal Linking Automation

We expected a 10-15% ranking boost from internal links. We saw 40-60% in competitive verticals.

Theory: Google's algorithm in 2026 heavily weights topical authority. Dense internal linking signals comprehensive coverage.

2. AI Fact-Checking with Perplexity

We initially used GPT-4 with browsing for fact-checking. Accuracy was inconsistent (hallucinated sources, outdated data).

Switching to Perplexity API improved factual accuracy by 35% (measured via human review audits). Perplexity returns actual sources, which we cite in articles (E-E-A-T signal).

3. Rewrite Queue Automation

We thought optimizing 15% of articles would require significant manual work. ORBIT's GSC integration automated the decision-making:

  • Which articles to optimize (data-driven)
  • What to optimize (position-based recommendations)
  • When to optimize (14-day minimum wait for indexing)

Human involvement: approving rewrites, not identifying them.

4. Multi-Brand Content Coordination

Publishing 10 solar articles across 5 brands on the same day seemed risky (duplicate content concerns). It wasn't.

Google treats separate domains as separate entities. As long as content isn't copy-pasted (it's not — each article is uniquely generated), there's no penalty.

We even tested deliberately similar articles across brands. No ranking impact.

What Didn't Work

1. Over-Optimizing Meta Descriptions

We A/B tested 5 meta description variations per article using GPT-4o. The testing overhead wasn't worth it.

Insight: Meta descriptions have minimal ranking impact. They affect CTR, but the effect is small (1-2% CTR difference between variations). Focus on titles, not descriptions.

2. Publishing Too Fast (Initially)

Our first brand (PapaPrevoit) published 30 articles in the first 5 days. Google's indexing slowed significantly (50% indexed by Day 14 vs. 85% for later brands with slower publishing).

Theory: Google's spam filters flag rapid content spikes on new domains.

Solution: Cap at 7-10 articles/day for new brands in Month 1.

3. Ignoring Low-Volume Keywords

We initially skipped keywords with <100 monthly searches. Mistake.

Those low-volume keywords often have:

  • Lower competition (easier to rank)
  • Higher intent (more specific = better conversion)
  • Long-tail traffic (rank for 20+ variations)

Month 4 onward, we included keywords down to 30 monthly searches. Those articles generated 12% of total traffic by Month 6.

4. Generic CTAs

Early articles ended with generic CTAs: "Get a quote today." Conversion rate: 0.8%.

We switched to specific, context-aware CTAs generated by GPT-4o based on article topic. Example: "Compare life insurance quotes in 2 minutes →" (for life insurance articles).

Conversion rate: 2.1% (2.6× improvement).
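
A minimal sketch of that CTA generation step, assuming the OpenAI Node SDK; the prompt wording is illustrative:

```typescript
import OpenAI from "openai";

// Sketch of topic-aware CTA generation. The prompt is illustrative, not the
// production template.
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function generateCta(articleTitle: string, vertical: string): Promise<string> {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "user",
        content:
          `Write one short, specific call-to-action for a ${vertical} article titled ` +
          `"${articleTitle}". Name the action and how long it takes, ` +
          `like "Compare life insurance quotes in 2 minutes". Return only the CTA text.`,
      },
    ],
    max_tokens: 60,
  });
  return completion.choices[0]?.message.content?.trim() ?? "";
}
```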

The Lessons for Scaling AI SEO

Lesson 1: Quality Gates Matter More Than Speed

We could have published 2,000 articles in 6 months. We didn't. We capped at 910 with strict quality controls.

Result: 91% indexing rate, 6.8% Top 10 rate. Industry average: 60% indexing, 2% Top 10.

Lesson 2: Data Beats Intuition

We thought solar panel content would perform best in France (high search volume). It underperformed compared to life insurance.

Why? Competition. Solar rankings were dominated by high-DA government sites and major manufacturers. Life insurance had fewer authoritative competitors.

Lesson: Test, measure, double down on winners.

Lesson 3: Internal Linking Is the Most Underrated SEO Factor

Articles with 4+ internal links ranked 18 positions higher on average than articles with 0-1 links.

This isn't just correlation. We ran a split test: the same article, published on two brands. One version had 5 internal links, one had zero. The linked version ranked 24 positions higher after 60 days.

Lesson 4: AI Content Quality Is a Prompt Engineering Problem

Claude 3.5 Sonnet doesn't automatically write great content. It writes great content when you give it great prompts.

We iterated prompts 40+ times per vertical. The final prompts are 500-800 tokens each (context, constraints, examples, output format).

Generic prompts generate generic content. Specific prompts generate content that ranks.

Lesson 5: Automation Doesn't Mean "Set and Forget"

We spend 10 hours/week across 2 people on oversight:

  • Reviewing quality audit reports
  • Approving rewrite recommendations
  • Refining prompts based on performance data

Automation handles execution. Humans handle strategy.

What's Next: Scaling to 50+ Brands

We've proven the system works at 13 brands. We're scaling to 50+ in 2026 across 8 countries.

The Expansion Plan:

  • 20 brands in France (targeting insurance, solar, renovation verticals)
  • 12 brands in Hungary (home services, family planning)
  • 10 brands in UK (insurance, financial planning, energy comparison)
  • 8 brands in US (insurance, solar, home services)

The New Challenges:

  1. Language scaling: Claude performs differently across languages. We're building language-specific prompt libraries.
  2. Market saturation: As we scale, we'll compete with our own brands. We're implementing strict vertical separation (each brand owns 2-3 verticals exclusively).
  3. Quality maintenance: At 50 brands × 10 articles/day = 500 articles/day, human review becomes impossible. We're building AI-powered quality scoring to replace 90% of human review.

The Goal:

By end of 2026:

  • 50 brands live
  • 20,000+ articles published
  • 2M+ monthly organic visits (combined)
  • $0.003 cost per visit (AI efficiency improvements)

We'll publish the results in Q1 2027.

The ROI Reality Check

Is AI SEO automation profitable?

At scale, yes. At small scale, maybe not.

Break-even analysis for a single brand:

  • Cost to launch: $1,200 (setup + first 50 articles)
  • Monthly cost: $180 (AI + infrastructure)
  • Time to 10K monthly visits: 4-6 months (based on our data)
  • Conversion rate: 2% (industry standard for lead gen)
  • Lead value: $0.50-2.00 (depending on vertical)

10,000 visits × 2% = 200 leads × $1.00 = $200/month revenue.

Breakeven: Month 10-12 (including setup costs).

For a portfolio of 10+ brands:

Shared infrastructure costs, prompt reusability, and team efficiency reduce per-brand cost to $80/month. Breakeven: Month 4-6.

The real value isn't Month 1 revenue. It's the compounding asset. Articles published in Month 1 still drive traffic in Month 12 with zero ongoing cost.

Traditional paid ads: Turn off the budget, traffic stops. SEO content: Turn off publishing, traffic continues (and often grows via internal linking).

Replicating This System

You don't need GENESIS ORBIT to replicate these results. You need:

  1. A content generation pipeline (API-based: Claude for writing, Perplexity for fact-checking)
  2. A publishing system (Next.js + PostgreSQL + cron jobs)
  3. A quality control process (automated checks + human review sample)
  4. A feedback loop (Google Search Console API integration)
  5. A rewrite strategy (data-driven optimization based on GSC data)

We built ORBIT because we needed it for 50+ brands. If you're running 1-5 brands, you can build a simpler version in 2-4 weeks.

The critical components:

  • Prompt library (start with 10 templates: outline, section writing, meta generation)
  • Quality gates (readability, keyword density, plagiarism checks)
  • Publishing queue (scheduled, not manual)
  • GSC monitoring (automated alerts for ranking changes)

Build this, and you can scale to 500+ articles in 6 months.
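
For the GSC monitoring piece, here's a minimal sketch using the googleapis client, assuming a service account with Search Console access; the property URL, date window, and row limit are placeholders:

```typescript
import { google } from "googleapis";

// Sketch of the GSC feedback loop: pull 28 days of per-page performance for a
// property. Auth assumes a service account added to the Search Console property.
async function fetchPagePerformance(siteUrl: string) {
  const auth = new google.auth.GoogleAuth({
    scopes: ["https://www.googleapis.com/auth/webmasters.readonly"],
  });
  const searchconsole = google.searchconsole({ version: "v1", auth });

  const end = new Date();
  const start = new Date(end.getTime() - 28 * 24 * 60 * 60 * 1000);
  const fmt = (d: Date) => d.toISOString().slice(0, 10);

  const res = await searchconsole.searchanalytics.query({
    siteUrl,
    requestBody: {
      startDate: fmt(start),
      endDate: fmt(end),
      dimensions: ["page"],
      rowLimit: 1000,
    },
  });

  // Each row: keys = [page URL], plus clicks, impressions, ctr, position
  return res.data.rows ?? [];
}
```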

The Verdict: Does AI SEO Automation Work?

For us: Yes. 320,000 monthly visits across 13 brands. $0.007 cost per visit. 91% indexing rate.

For you: Depends on your vertical, competition, and execution quality.

Where AI SEO automation works best:

  • ✅ Educational content (how-to guides, comparison articles, explainers)
  • ✅ Mid-competition verticals (not "car insurance," but "life insurance for retirees")
  • ✅ Markets with clear search intent (people searching for solutions, not entertainment)
  • ✅ Portfolio scaling (shared infrastructure across multiple brands)

Where it doesn't work:

  • ❌ YMYL topics requiring credentialed authors (medical, legal, financial advice)
  • ❌ Ultra-competitive commercial keywords (dominated by billion-dollar brands)
  • ❌ Content requiring real-time data (news, stock prices, breaking events)
  • ❌ Topics requiring personal experience (travel reviews, product unboxings)

If you're in the first category, AI SEO automation is the most efficient path to traffic. If you're in the second, invest in traditional content + authority building.

Want to see how your brand would perform with automated SEO? ORBIT includes a traffic projection model based on your vertical, competition, and target keywords. Try the calculator →
