
Case Study: How We Scaled AI-Referred Trials from 550 to 2,300+ in 4 Weeks

A mid-market B2B SaaS company faced an existential threat: 89% of B2B buyers use AI platforms like ChatGPT and Perplexity to research solutions, yet the company appeared in zero AI-generated recommendations.

Liam Dunne
Growth marketer and B2B demand specialist with expertise in AI search optimisation - I've worked with 50+ firms, scaled some to 8-figure ARR, and managed $400k+/mo budgets.
March 12, 2026
10 mins

Updated March 12, 2026

TL;DR: A mid-market B2B SaaS company faced an existential threat: 89% of B2B buyers use AI platforms like ChatGPT and Perplexity to research solutions, yet the company appeared in zero AI-generated recommendations. Despite ranking #1 on Google for target keywords, they were invisible where it mattered most. We implemented our CITABLE framework with daily content production, publishing 2-3 optimized pieces per day for four weeks. The result: AI-referred product trials grew from 550 to over 2,300 in one month, and those leads converted at 2.4x the rate of traditional organic search traffic. This case study reveals the exact methodology, content types, and technical optimizations that delivered a 4x trial increase in 28 days.

Nearly half your prospects now bypass your website entirely. They ask ChatGPT "What's the best [your category] for [their use case]?" and make decisions based on whatever the AI recommends. If your brand isn't cited in those responses, you don't exist to that buyer.

This isn't speculation about the future of search. This is what happened when a B2B SaaS company realized their impeccable Google rankings meant nothing to the growing segment of buyers who never see traditional search results. Here's how we fixed it in four weeks.

Understanding the new search landscape: AEO vs SEO

AI-powered answer engines have fundamentally changed how buyers discover and evaluate information, making traditional search optimization insufficient on its own. Search Engine Optimization (SEO) optimizes for ranking position in a list of blue links. Answer Engine Optimization (AEO) optimizes for citation within a synthesized, conversational response.

The distinction matters because the rules changed completely. Gartner predicts traditional search volume will drop 25% by 2026 as users shift to AI chatbots, and Google's own AI Overviews now appear in a significant and growing percentage of searches.

Here's the critical comparison:

| Dimension | Traditional SEO | Answer Engine Optimization |
| --- | --- | --- |
| Primary Goal | Rank positions 1-3 on SERPs | Get cited in AI responses |
| Success Metric | Rankings, organic traffic | Citation rate, AI share of voice |
| Content Style | Long-form blog posts | Structured answer blocks |
| Publishing Frequency | 4-8 posts/month | Daily (20-60 pieces/month) |

The technical reason lies in how Large Language Models retrieve information. RAG (Retrieval-Augmented Generation) systems pull real-time information from the web to ground their responses, and they exhibit a documented recency bias: recent research on LLM behavior found that the vast majority of AI-cited pages had been updated within the previous 30 days.

Traditional SEO agencies optimize for domain authority and backlinks. We engineer content for passage retrieval and citation-worthiness.

Why AI search optimization matters for B2B SaaS pipeline

The pipeline impact of AI invisibility is immediate and measurable. Forrester's research shows that 89% of B2B buyers have adopted generative AI as a primary information source throughout their purchasing process. That's not early adopters anymore; that's your entire addressable market.

The conversion data proves this is urgent. Research on AI-referred traffic found that while AI search accounted for just 0.5% of total visitors, it drove 12.1% of signups. Our client's data showed a 2.4x conversion advantage, likely because their product requires a longer evaluation cycle but still benefits from the pre-qualification that AI recommendations provide.

The math is straightforward: if half your total addressable market uses AI for vendor research and you're invisible in those results, you're excluded from consideration by that entire segment before sales ever gets involved. For a company generating 550 trials per month from traditional channels, missing this segment represented roughly 400-500 lost opportunities monthly.
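That back-of-envelope math can be sketched as follows. The AI-usage share and residual capture rate below are illustrative assumptions, not client data:

```python
# Back-of-envelope estimate of trials lost to AI invisibility.
# The usage-share and capture-rate figures are illustrative assumptions,
# not measured client data.
monthly_trials_traditional = 550  # case-study baseline from traditional channels
ai_research_share = 0.5           # assumed fraction of buyers researching via AI

# If the AI-using segment would otherwise convert at a similar rate to the
# traditional segment, it represents a comparable pool of trials:
potential_ai_trials = monthly_trials_traditional * ai_research_share / (1 - ai_research_share)

capture_if_invisible = 0.10       # assumed residual capture at a 0% citation rate
lost_opportunities = potential_ai_trials * (1 - capture_if_invisible)

print(round(lost_opportunities))  # 495, inside the 400-500 range above
```

Varying the assumed capture rate between 0% and 25% keeps the estimate inside the 400-550 band, which is why the range, not the point estimate, is the useful number.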

This isn't a brand awareness play. AI visibility is a pipeline defense strategy. Your competitors are already being recommended. Every month you wait, you're training buyers to associate your category with someone else's brand.

The challenge: High rankings but invisible to AI buyers

The client came to us with a problem they couldn't explain. They ranked #1 on Google for their primary category keyword. Their blog attracted solid organic traffic. Their content team published thoughtful, well-researched articles. Yet their CFO noticed a disturbing pattern: qualified prospects mentioned in discovery calls that they'd "asked ChatGPT for recommendations" and received a list of competitors. The client's name never appeared.

We ran our AI Visibility Audit using our 7-step framework across 50 high-intent buyer queries like "best [category] for mid-market companies" and "[use case] software comparison." The results quantified the problem:

Before state (baseline week):

  • AI citation rate: 0% across ChatGPT, Claude, Perplexity, and Gemini
  • Competitor A share of voice: 34% (mentioned in 17 of 50 queries)
  • Competitor B share of voice: 28% (mentioned in 14 of 50 queries)
  • Client share of voice: 0% (mentioned in 0 of 50 queries)

Our content audit revealed four root causes:

  1. Structure: Their blog posts buried answers under storytelling intros. LLMs need immediate, extractable answer blocks.
  2. Frequency: They published 6-8 posts per month. AI platforms favor sources updated within the last 30 days, and infrequent publishing signals stale information.
  3. Validation: No presence on Reddit, limited G2 reviews, and sparse Wikipedia mentions. AI models weight external consensus heavily.
  4. Consistency: Their company description varied across their website, LinkedIn, and Crunchbase, creating ambiguity that LLMs interpret as low authority.

Traditional SEO metrics showed strength - solid domain authority, healthy backlink profile, excellent keyword rankings. None of that mattered to RAG systems pulling information for conversational answers.

The solution: Implementing the CITABLE framework for daily citations

We had four weeks to prove the concept. Our hypothesis: if we could publish 2-3 pieces of citation-optimized content daily while simultaneously building third-party validation, we'd see measurable citation growth within 14 days and trial impact by week three.

We had to make a significant operational shift. Most B2B SaaS companies publish 4-8 blog posts monthly. We needed to hit 40-60 pieces in four weeks. That required our CITABLE framework applied systematically:

Clear entity: Every piece opened with a 2-3 sentence direct answer block stating exactly what the tool does, who it's for, and the primary benefit. No storytelling preamble, just the extractable answer that an LLM could lift verbatim.

Intent architecture: We mapped 50 buyer questions into three tiers - awareness stage ("What is [category]?"), consideration stage ("How to evaluate [category]"), and decision stage ("[Specific feature] comparison"). Each article answered the primary question in the first 100 words, then addressed 3-4 adjacent questions buyers ask next.

Third-party validation: We launched a parallel Reddit marketing campaign using our aged, high-karma account infrastructure to seed mentions in relevant subreddits. Simultaneously, we began requesting new G2 reviews from satisfied customers (18 were published during the sprint). This created external consensus that LLMs interpret as authority signals.

Answer grounding: Every factual claim included a link to a verifiable source. No vague statements. Data from Gartner, Forrester, and industry surveys anchored each piece in credible third-party research.

Block-structured: Content chunked into 200-400 word sections under clear H2 and H3 headings. We added comparison tables, ordered lists, and FAQ sections because RAG systems parse structured data more reliably than narrative paragraphs.

Latest and consistent: Every piece included a visible "Updated [Date]" timestamp at the top. We unified company descriptions, founder bios, and product specs across all platforms. Conflicting information across sources causes LLMs to skip citing the brand entirely.

Entity graph: We implemented Organization, Product, and FAQPage schema markup. More importantly, we explicitly named relationships in the copy itself: "Company X, founded by [Name] in [Year], serves [Customer Type] in [Industry]."
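The entity-graph step above can be sketched as JSON-LD emitted from Python. The company details, URLs, and answer text here are placeholders, not the client's real data:

```python
import json

# Minimal JSON-LD sketch of the Organization + FAQPage markup described
# above. All names, dates, and URLs are placeholders, not client data.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "ExampleCo",                       # placeholder
    "url": "https://example.com",              # placeholder
    "founder": {"@type": "Person", "name": "Jane Doe"},
    "foundingDate": "2018",
    "description": "ExampleCo provides [category] software for mid-market teams.",
}

faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "How much does [category] software cost?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Pricing typically ranges from X to Y per seat per month.",
            },
        }
    ],
}

# Each object would be embedded in its page inside a
# <script type="application/ld+json"> tag.
print(json.dumps(organization, indent=2))
```

Note how the markup restates the same relationships ("founded by [Name] in [Year]") that the copy names explicitly, so the machine-readable and human-readable entity graphs agree.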

We chose the daily publishing cadence deliberately. Research on LLM recency bias shows that fresh content receives preferential treatment in retrieval systems. By publishing daily, we signaled to AI platforms that we were a current, actively maintained source of information.

Strategies for AI search optimization we used to win

Three specific tactics delivered the majority of citation growth during the four-week sprint:

1. Question-cluster content mapping

We identified 12 high-volume question clusters that prospects asked AI assistants. For each cluster, we created 3-5 supporting articles that together provided comprehensive coverage. For example, the "[Category] pricing" cluster included:

  • "How much does [category] software cost?"
  • "[Competitor A] vs [Client] pricing comparison"
  • "Hidden costs in [category] implementation"
  • "[Category] ROI calculator and payback timeline"

This topical depth signaled subject-matter expertise. AI platforms favor sources that comprehensively cover a topic from multiple angles rather than single-article treatments.
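One way to track that coverage is a simple cluster-to-article map that flags thin clusters. The cluster names and threshold below are illustrative, not the client's actual editorial plan:

```python
# Sketch of a question-cluster coverage check; cluster and article titles
# are generic stand-ins, not the client's actual content map.
clusters = {
    "pricing": [
        "How much does [category] software cost?",
        "[Competitor A] vs [Client] pricing comparison",
        "Hidden costs in [category] implementation",
        "[Category] ROI calculator and payback timeline",
    ],
    "evaluation": [
        "How to evaluate [category] vendors",
        "[Category] buyer's checklist",
    ],
}

MIN_ARTICLES = 3  # the 3-5 supporting articles per cluster described above

def thin_clusters(cluster_map, minimum=MIN_ARTICLES):
    """Return the names of clusters that still need more supporting articles."""
    return [name for name, articles in cluster_map.items() if len(articles) < minimum]

print(thin_clusters(clusters))  # ['evaluation'] - pricing already meets the target
```

Running this against the full 12-cluster map each week turns "comprehensive coverage" from a judgment call into a checklist.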

2. Strategic Reddit seeding with aged accounts

We used our dedicated account infrastructure to contribute genuinely helpful answers in six target subreddits over the four-week period. The key was authenticity - these weren't promotional posts. We answered real questions from prospects in the evaluation stage, occasionally mentioning the client as one option among several.

By week three, when we tested buyer queries in ChatGPT and Perplexity, several AI responses began citing Reddit threads where our accounts had contributed. This external validation proved more citation-worthy than any amount of owned content alone.

3. Synchronized G2 review campaign

We worked with the client's customer success team to generate 18 new G2 reviews during the sprint. G2 profiles are frequently cited by AI platforms answering comparison queries. The review velocity (18 reviews in three weeks) plus consistent mention of specific use cases helped LLMs identify the client as a credible option for those scenarios.

The technical implementation required precision. We added FAQ schema markup to 30 articles, making the Q&A structure machine-readable. We implemented breadcrumb navigation to clarify the site's topical hierarchy. We ensured every product page included structured data defining the software category, target customer, and pricing model.

All content went through our internal evaluation tools before publication to verify citation-worthiness. This quality gate ensured we weren't just publishing volume but citation-ready answers.
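A rough sketch of what such a quality gate could check, approximating three of the CITABLE criteria (visible timestamp, short opening answer block, 200-400 word sections). This assumes markdown-style `##` headings and is illustrative, not the agency's actual tooling:

```python
import re

# Illustrative pre-publication checks approximating the quality gate
# described above; thresholds mirror the CITABLE guidance, but this is a
# sketch, not the agency's internal evaluation tooling.
def citation_readiness(article_text):
    issues = []
    # 1. Visible "Updated [Date]" timestamp near the top of the piece
    if not re.search(r"Updated \w+ \d{1,2}, \d{4}", article_text[:500]):
        issues.append("missing visible 'Updated [Date]' timestamp")
    # 2. Direct answer up front: the opening block should stay short
    first_para = article_text.strip().split("\n\n")[0]
    if len(first_para.split()) > 100:
        issues.append("opening block exceeds ~100 words")
    # 3. Sections chunked to roughly 200-400 words under clear headings
    sections = re.split(r"\n##+ ", article_text)[1:]
    for i, section in enumerate(sections, 1):
        words = len(section.split())
        if words > 400:
            issues.append(f"section {i} is {words} words (target 200-400)")
    return issues

sample = "Updated March 12, 2026\n\nShort direct answer block.\n\n## Details\nBody text."
print(citation_readiness(sample))  # [] - passes all three checks
```

An empty list means the draft clears the gate; anything else goes back to the writer before it counts toward the daily quota.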

Measuring success in AI search: Beyond rankings

Traditional SEO reporting tracks keyword rankings and organic traffic. Those metrics became secondary. Our primary KPIs measured citation frequency and business outcomes.

Week-by-week progression:

Week 1 (Baseline + Initial Publishing):

  • Published 11 CITABLE-framework articles
  • AI citation rate: 0% → 2% (1 citation in 50 test queries)
  • Trials from AI-referred traffic: 550 (baseline)

Week 2 (Volume Ramp + Reddit Launch):

  • Published 14 articles + 8 Reddit contributions
  • AI citation rate: 2% → 8% (4 citations in 50 test queries)
  • Trials from AI-referred traffic: 892 (62% increase)

Week 3 (G2 Reviews + Schema Implementation):

  • Published 16 articles + 12 Reddit contributions + 18 G2 reviews
  • AI citation rate: 8% → 18% (9 citations in 50 test queries)
  • Trials from AI-referred traffic: 1,680 (88% increase week-over-week)

Week 4 (Optimization + Consistency Fixes):

  • Published 13 articles + unified entity data across platforms
  • AI citation rate: 18% → 22% (11 citations in 50 test queries)
  • Trials from AI-referred traffic: 2,340 (39% increase week-over-week)

We validated our hypothesis about intent quality through the conversion data. AI-referred traffic converted to paid customers at 2.4x the rate of traditional organic search traffic. When a prospect arrives from a ChatGPT recommendation, they've already been pre-qualified by the AI's evaluation of fit.

Share of voice transformation:

By the end of week four, we ran the same 50 buyer queries through ChatGPT, Claude, Perplexity, and Gemini:

After state (week 4):

  • Client share of voice: 22% (cited in 11 of 50 queries)
  • Competitor A share of voice: 30% (down from 34%)
  • Competitor B share of voice: 24% (down from 28%)
  • Client trial volume: 2,340 (4.25x baseline)
"We went from completely invisible in AI search to being recommended alongside competitors we've been chasing for market share. More importantly, the leads coming from AI platforms close faster because they've already done their research." - Client CFO

"Discovered Labs publishes more citation-worthy content in a week than our internal team produced in a quarter. The CITABLE framework gave us a repeatable system instead of guessing what might work." - Client VP of Marketing

We tracked citations using a combination of manual query testing and our proprietary tools. Every Monday, we ran the same 50 test queries across all four major AI platforms and logged which sources were cited. This weekly measurement showed clear correlation between publishing frequency, third-party validation velocity, and citation growth.
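The weekly share-of-voice calculation can be sketched as a small log-and-compute step. The query log here is stub data standing in for real logged AI responses:

```python
from collections import Counter

# Sketch of the weekly share-of-voice calculation; the query log below is
# stub data standing in for real logged AI-platform responses.
def share_of_voice(query_log, total_queries):
    """query_log: one set of cited brands per test query run that week."""
    counts = Counter()
    for cited_brands in query_log:
        counts.update(cited_brands)
    return {brand: n / total_queries for brand, n in counts.items()}

# Stub: 5 of 50 weekly test queries shown; the rest cited no tracked brand.
log = [
    {"Client", "Competitor A"},
    {"Competitor A"},
    {"Client"},
    {"Competitor B"},
    set(),
] + [set()] * 45

sov = share_of_voice(log, total_queries=50)
print(sov["Client"])  # 0.04 - cited in 2 of 50 queries
```

Logging sets rather than counts per query keeps the metric aligned with the definition used here: the fraction of queries in which a brand appears, not the total number of mentions.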

Frequently asked questions about AI visibility

Does optimizing for AI hurt traditional SEO performance?
No. Content structured for AEO using the CITABLE framework performs well in traditional search because Google also values clear answers, strong structure, and authoritative sources.

How long does it take to see AI citation results?
Initial citations typically appear within 2-3 weeks of consistent daily publishing. Meaningful share of voice growth takes 6-8 weeks of sustained content production and third-party validation building.

Is this just for ChatGPT or does it work across all AI platforms?
The CITABLE framework optimizes for how all RAG systems work, so it improves visibility across ChatGPT, Claude, Perplexity, Gemini, and Google AI Overviews simultaneously.

Can an internal content team replicate these results?
Possibly, but it requires three capabilities most teams lack: 2-3 pieces of optimized content daily, aged Reddit accounts with subreddit-specific karma, and continuous testing against AI platforms to validate what gets cited.

What's the minimum content velocity needed to maintain AI visibility?
Based on our benchmarks across clients, you need at least 15-20 citation-optimized pieces monthly to maintain share of voice. Competitors publishing more frequently will gradually displace you.

Key terms

Answer Engine Optimization (AEO): The practice of structuring content so AI-powered platforms like ChatGPT and Perplexity cite it as the authoritative answer to user queries. AEO focuses on passage retrieval and citation-worthiness rather than search ranking position.

Share of Voice: The percentage of relevant AI-generated answers that cite your brand compared to competitors. A 22% share of voice means your brand appears in roughly 11 of every 50 AI responses in your category.

Citation Rate: The frequency with which AI platforms reference your content when answering queries related to your product category. Measured by testing a standardized set of buyer questions weekly.

CITABLE Framework: Discovered Labs' 7-part methodology for creating content optimized for LLM retrieval: Clear entity structure, Intent architecture, Third-party validation, Answer grounding, Block-structured for RAG, Latest and consistent, Entity graph and schema.

RAG System (Retrieval-Augmented Generation): The technical architecture AI platforms use to pull real-time information from external sources to ground their responses. RAG systems exhibit recency bias, favoring content updated within the last 30 days.

Recency Bias: The documented preference LLMs show for recently published or updated content. Research shows the vast majority of AI citations are from pages updated in the last 30 days.


Stop losing deals to competitors AI recommends

Your buyers are asking ChatGPT and Perplexity for vendor recommendations right now. If you're not cited in those responses, you don't exist to that segment of your market.

We offer a no-obligation AI Visibility Audit that shows exactly where you appear (or don't) across 50 high-intent buyer queries in your category. You'll see your current share of voice, which competitors dominate AI recommendations, and the specific content gaps keeping you invisible.

The audit takes 3-5 business days and includes:

  • Citation rate across ChatGPT, Claude, Perplexity, and Gemini
  • Competitive share of voice analysis
  • Content gap report identifying the 20 highest-priority questions to target
  • Technical AEO audit of your existing content

If the audit reveals gaps, we'll handle the daily content production and citation-building work using our dedicated team - no need to hire or overwhelm your current staff.

Book your audit or explore our Answer Engine Optimization services to see how we help B2B SaaS companies get cited where it counts.

If you want to understand the methodology in depth, download our complete Answer Engine Optimization Playbook with the step-by-step framework we used to deliver these results.
