
How a B2B SaaS client grew AI-referred trials 6x in seven weeks

This AEO ROI case study shows how a B2B SaaS client drove a 6x increase in AI-referred trials in seven weeks using an Answer Engine Optimization strategy. It proves AI visibility is a trackable pipeline channel, not just brand awareness, offering a clear path to measurable MQLs and ROI for your B2B SaaS.

Liam Dunne
Growth marketer and B2B demand specialist with expertise in AI search optimisation - I've worked with 50+ firms, scaled some to 8-figure ARR, and managed $400k+/mo budgets.
May 14, 2026
9 mins

TL;DR

  • Our AEO case study shows how a B2B SaaS client grew AI-referred trials from 550 to 3,500+ in seven weeks using Answer Engine Optimization, a 6x increase tracked through CRM attribution signals.
  • The mechanism: 66 CITABLE-framework articles published in one month, critical technical fixes, and off-page information consistency across Reddit and third-party sources.
  • In a parallel engagement, incident.io lifted AI visibility from 38% to 64% across their priority query set.

B2B buyers increasingly research software inside AI assistants before visiting a vendor's website. If your brand doesn't appear in those answers, you're missing pipeline the sales team never sees and can't measure. This case study breaks down exactly how one B2B SaaS client used the CITABLE framework to track, measure, and scale AI-referred trials by 6x in seven weeks, and what that pipeline journey looked like in the CRM. For the full AEO ROI picture, start with what AEO ROI means for B2B SaaS.

Essential AEO ROI case study insights

The two Discovered Labs case studies covered in this piece share one core finding: AI visibility is a trackable pipeline channel, not a brand awareness exercise.

The B2B SaaS client profile

The primary client is an anonymized B2B SaaS company that arrived with a recognizable problem: organic traffic that wasn't converting into trial signups at the rate needed. Content retrievability in RAG systems depends on factors like semantic structure, passage-level authority signals, and entity clarity rather than traditional search ranking signals, and the client's content had been optimized only for the latter. Technical issues were identified and resolved during the engagement.

Self-reported attribution showed 550 trials traced to AI recommendations from ChatGPT, Claude, and Perplexity. The full scope of the starting position is documented in the B2B SaaS AEO case study.

6x AI-referred trials: outcomes

In seven weeks, AI-referred trials grew from 550 to 3,500+, a 6x increase attributed to ChatGPT, Claude, and Perplexity recommendations and tracked through CRM attribution signals. Sixty-six optimized articles shipped in one month. Content started appearing in AI citations within two weeks of publication. By week four, several of the client's most-cited sources in their category were their own content. This is the kind of result that converts a skeptical CFO conversation into a budget approval, because the mechanism is visible and the numbers tie to a specific pipeline motion, not vanity traffic metrics.

Key drivers of AI pipeline

The result came from working three distinct surfaces simultaneously, rather than optimizing for Google rankings alone.

  • Web search: Pages needed to rank so AI systems could find and retrieve them.
  • Citations: Content had to be structured so LLMs could extract and quote specific passages.
  • Training data: Information about the client needed to appear consistently across independent sources, including Reddit, industry publications, and comparison content.

Working all three surfaces simultaneously separates a citation lift from a pipeline lift, as we covered in Is SEO the same as AEO? For a unit economics breakdown of AEO against SEO and paid acquisition, see AEO vs SEO vs paid: channel ROI breakdown.

The challenge: AI-referred trials stuck at baseline

The client started with 550 AI-referred trials, but their AEO strategy wasn't optimal. The existing agency was still following an outdated SEO playbook, biasing towards informational content without thinking about content structure.

SEO for clicks, not AI answers

Standard optimization work focused on signals that matter for Google's ranking algorithm rather than LLM passage selection. Dense retrieval systems outperformed keyword-based retrieval by 9 to 19 points on top-20 passage retrieval, according to Karpukhin et al.'s Dense Passage Retrieval research, which means semantic relevance and extractability matter more than link count for AI answers. Content not structured for passage extraction is less likely to appear in AI citations. Our AEO vs GEO vs SEO guide explains why this structural gap is the most common reason companies get skipped by AI systems.

How the CITABLE framework boosts AEO ROI

The CITABLE framework is our proprietary methodology for structuring content that AI systems can cite. Every component maps to a specific retrieval signal. The 4-month roadmap targets a 40% citation rate for core queries. The client case study applied key elements of that framework in a structured engagement.

Week 1-2: Visibility audit and priority mapping

The engagement opened with a full AI visibility audit across ChatGPT, Claude, Perplexity, and Google AI Overviews using our proprietary auditing platform. The audit maps where the client appears (and doesn't appear) across priority buyer queries, scores current mention rate and citation rate, and surfaces the technical issues blocking indexation and passage retrieval.

For this client, the audit identified a specific set of high-intent queries where competitors were cited and the client didn't appear. Those gaps informed the content priority list for the engagement.

Week 3-5: 66 optimized articles published

Sixty-six articles were published in one month, each structured using every component of the CITABLE framework: clear entity and structure with answer-first openings, content architecture addressing buyer questions, and block structure optimized for Retrieval-Augmented Generation (RAG). Sections ran 120-180 words with structured formats making facts easy for LLMs to extract. Content started appearing in AI citations within one to two weeks. This is the ChatGPT ranking case study approach in practice: volume and structure working together, not separately.
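The 120-180 word section target can be enforced with a rough editorial check. The sketch below assumes sections have already been split out of the article (the splitting logic is not shown, and the function name is ours, not part of the CITABLE framework):

```python
def flag_off_target_sections(sections: list[str], lo: int = 120, hi: int = 180) -> list[int]:
    """Return indices of sections whose word count falls outside the
    lo-hi range used as the RAG-extractability target."""
    return [
        i for i, section in enumerate(sections)
        if not lo <= len(section.split()) <= hi
    ]
```

Running it over a drafted article surfaces the sections an editor needs to tighten or expand before publication.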

It's worth noting that during this period our content also outperformed the incumbent SEO agency's by 3-5x in Google's SERPs, with our articles on average ranking on page 1.

Week 5-7: Technical fixes and off-page consistency

Alongside content production, we resolved the technical issues flagged in the week-one audit. Schema markup was added across Organization, Product, FAQ, and How-to templates to give LLMs explicit entity relationships. Indexation blockers were cleared so AI crawlers could access the full content library.
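As a minimal illustration of what an Organization schema block looks like, here is a sketch that emits JSON-LD. The names and URLs are placeholders (the client is anonymized), and the real engagement also covered Product, FAQ, and How-to templates:

```python
import json

# Hypothetical minimal Organization schema; values are placeholders.
org_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example SaaS Co",
    "url": "https://example.com",
    # Off-site profiles reinforce entity identity for LLM crawlers.
    "sameAs": ["https://www.linkedin.com/company/example-saas"],
}

print(json.dumps(org_schema, indent=2))
```

Embedding this as a `<script type="application/ld+json">` block gives LLMs an explicit, machine-readable statement of who the entity is and where else it appears.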

Off-page consistency ran in parallel. Our research and client work consistently shows that claims appearing across the company site, Reddit, industry publications, and comparison content carry more weight in AI responses than a single high-authority page. We built that consistency across Reddit, industry publications, and comparison content using our Reddit marketing service, placing helpful information in relevant subreddits with aged, high-karma accounts. Our Reddit content playbook for B2B SaaS covers the full tactical approach.

Measuring AI-sourced MQLs

Attribution was built into the program from day one, using a three-layer measurement model.

  1. LLM leading indicators: Mention rate, citation rate, and share of voice tracked weekly through our AI visibility platform.
  2. Traffic signals: UTM parameters used to tag and track sessions from AI-referred sources.
  3. Self-reported attribution: A "how did you hear about us?" field on demo request forms, with options for specific AI assistants.
These three layers produce a monthly narrative report: AI-referred sessions, Marketing Qualified Lead (MQL) conversion rate, pipeline contribution, and a stated confidence interval for the portions where attribution is probabilistic rather than deterministic.
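In code, combining layers two and three reduces to a precedence rule: deterministic UTM evidence wins, and the self-reported form answer catches sessions that arrived untagged. The field names below are hypothetical, not the client's actual CRM schema:

```python
# Hypothetical AI-referrer values for utm_source and the form field.
AI_SOURCES = {"chatgpt", "claude", "perplexity", "ai_overview"}

def is_ai_referred(session: dict) -> bool:
    """Layer 2: classify a session as AI-referred from its UTM tags."""
    return session.get("utm_source", "").lower() in AI_SOURCES

def attribute_trial(session: dict, form_answer=None) -> str:
    """Combine UTM evidence (layer 2) with self-reported attribution (layer 3).
    UTM tags are deterministic, so they take precedence; the form answer
    covers untagged arrivals such as copy-pasted links."""
    if is_ai_referred(session):
        return "ai_referred:utm"
    if form_answer and form_answer.lower() in AI_SOURCES:
        return "ai_referred:self_reported"
    return "other"
```

Layer one (mention and citation rate) stays outside the CRM entirely; it is a leading indicator tracked against the query set, not per-session data.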

Case study outcome: 6x AI trials in seven weeks

The full B2B SaaS case study documents the timeline and results. The headline number is 550 to 3,500+ AI-referred trials in seven weeks.

3,500+ AEO-attributed trial signups

Citation rate climbed as CITABLE articles indexed and appeared in AI answers. Several of the client's most-cited sources in their category became their own content. The final trial count of 3,500+ represents attribution traced through multiple signals, and the layers agreed directionally, which is the closest thing to clean attribution you get in a zero-click research environment.

Boosting AI citations on key queries

Citation rate on priority buyer queries moved materially across the engagement window, with the client's brand appearing as a cited source for a meaningful share of the buyer questions their target audience was asking AI assistants. Share of voice in their category shifted in their favor. Our AI visibility measurement post explains how we track these signals and why probabilistic measurement is the honest standard for this work.

Profitable AI pipeline growth

Trials are a leading indicator, not the final metric. The client tracked MQL-to-opportunity conversion on AI-referred signups against their baseline conversion rate from other organic channels. The ROI calculation guide for CMOs we published uses a formula based on search volume, AI usage rates, citation rate, click-through rate, conversion rate, and Lifetime Value (LTV). That formula was the basis for the client's internal business case before the program started.
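The formula from the ROI calculation guide chains those factors multiplicatively. The sketch below uses illustrative inputs, not the client's actual numbers:

```python
def projected_aeo_revenue(monthly_searches, ai_usage_rate, citation_rate,
                          click_through_rate, conversion_rate, ltv):
    """Chain the ROI guide's factors: searches routed through AI assistants,
    the share where the brand is cited, clicks, trial conversions, and LTV."""
    ai_queries = monthly_searches * ai_usage_rate
    cited_answers = ai_queries * citation_rate
    visits = cited_answers * click_through_rate
    trials = visits * conversion_rate
    return trials * ltv

# Illustrative only: 50k monthly searches, 30% researched via AI, 40% citation
# rate target, 5% CTR, 10% trial conversion, €5,000 LTV -> roughly €150,000/mo.
projected = projected_aeo_revenue(50_000, 0.30, 0.40, 0.05, 0.10, 5_000)
```

Because the factors multiply, the model also shows where leverage sits: doubling citation rate doubles the projection, which is why citation rate is the lead metric in the CITABLE roadmap.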

Measuring incident.io's AI visibility gains

The incident.io case study is a separate engagement applying the same methodology. Incident.io competes in the incident response category. The engagement combined SEO and AEO work across all three surface areas.

Tracked AEO visibility lift: 38% to 64%

AI visibility for incident.io moved from 38% to 64% across their priority query set, measured using our AI visibility auditing platform. That 26-point lift represents a material shift in category share of voice. Before working with us, their CMO Tom Wentworth described the starting position clearly:

"Before Discovered Labs, we were using homegrown LLM prompts, without a clear strategy for what to optimize for or exactly how best to structure content." - Tom Wentworth, CMO at incident.io

The structured approach to content architecture using the CITABLE framework contributed to the visibility lift.

AI citations converting to booked meetings

AI visibility translated directly into an increase in organic meetings booked. This is the pipeline connection that matters for a board slide: not citation rate as a standalone metric, but citation rate as a leading indicator that moves downstream conversion. The meeting lift was tracked through CRM attribution using the same three-layer model applied in the primary case study. Organic became a measurably stronger pipeline channel after the AI visibility work, not just a traffic source.

Cost & ROI timeline for AEO

Of course, money talks! Here's the honest version: initial citations appear within one to two weeks, meaningful citation rate lift takes three to four months, and full optimization across all three surfaces takes three to six months. There are no 30-day domination guarantees.

Budgeting for AEO impact

A realistic 90-day milestone map looks like this:

  • Weeks 1-2: Audit and priority query mapping
  • Weeks 3-5: Content production and technical fixes
  • Weeks 6-12: Citation rate tracking and off-page consistency building
  • Month 3-4: Meaningful citation rate lift and MQL attribution visibility

AEO team structure & skills

An AEO-capable team needs AI/ML engineers to build visibility auditing infrastructure, content editors trained on extractability and passage structure, and off-page specialists who understand information consistency across Reddit and third-party sources. At Discovered Labs, the Starter retainer team includes SEO and content specialists, off-page specialists, and content editors, backed by AI/ML engineers who built our auditing platform and knowledge graph tooling. The answer engine optimization agency service page details what our team does week to week.

AEO cost: Sprint vs. retainer

  • AEO Sprint (€6,995 one-off, no commitment): 10 optimized articles, AI visibility audit, schema structure.
  • Starter (€6,995/mo, month-to-month): up to 20 articles, AI visibility tracking, structured data, off-page work.
  • Growth (€10,995/mo, month-to-month): everything in Starter plus increased volume, landing page development, content syndication.
  • Enterprise (custom pricing, flexible commitment): programmatic content at scale, original research for category authority.

Request a baseline AI visibility audit before deciding whether a Sprint or retainer is the right fit. We look forward to speaking with you!

FAQs

How was AI-referred traffic attributed?

Attribution runs across three layers: UTM parameters on high-citation pages to capture sessions in HubSpot or Salesforce, a "how did you hear about us?" form field at demo and contact forms with options for specific AI assistants, and weekly mention rate and citation rate tracking through our AI visibility platform. No single layer is complete alone, but the three together produce a directionally reliable and board-defensible attribution model.

Will AEO ROI apply to my SaaS?

It applies most reliably to B2B SaaS categories where buyers actively use AI assistants for vendor research. The CITABLE framework targets approximately 40% citation rate over a 4-month implementation cycle, at which point AI-referred MQL volume is measurable and CRM-trackable.

What are strategies for a zero citation baseline?

A citation rate below 10% typically indicates early-stage visibility, which is a clear starting point. The priority sequence is: AI visibility audit to map competitor citation patterns, content restructuring using the CITABLE framework on your ten highest-intent pages, technical fixes for indexation and schema, then off-page consistency across Reddit and third-party sources. Initial citations can appear within one to two weeks of publishing CITABLE-structured content.

When do AEO citations appear?

Initial citations on newly optimized or published content can appear within one to two weeks for high-priority queries. A meaningful citation rate lift typically takes three to four months with consistent content publication and technical fixes in place. Full optimization across web search, citations, and training data surfaces takes three to six months, which is the timeline the CITABLE 4-month roadmap is built around.

Key terms glossary

AI visibility: The measurable presence of your brand in AI-generated answers across ChatGPT, Claude, Perplexity, and Google AI Overviews. It is tracked as citation rate (the percentage of relevant buyer queries where your brand is mentioned) and share of voice (your citation rate relative to the top competitors in your category).

Citation rate: The proportion of queries tested where an AI platform cites or mentions your domain or brand. Typically calculated as the number of queries with your brand appearing divided by total queries tested, expressed as a percentage.
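That calculation is simple enough to express directly. The sketch below assumes query test results are stored as a mapping from query to whether the brand appeared (an assumption about data shape, not our platform's actual API):

```python
def citation_rate(results: dict) -> float:
    """Citation rate as a percentage: queries where the brand appeared
    divided by total queries tested. Empty input returns 0.0."""
    if not results:
        return 0.0
    return sum(results.values()) / len(results) * 100
```

For example, a brand appearing in 2 of 4 tested queries has a 50% citation rate.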

Passage retrieval: The mechanism by which LLMs extract specific sections of text from indexed documents to build a synthesized answer. Passage retrieval is optimized by writing in 120-180 word blocks, leading each section with the answer, and avoiding topic drift within a single section.

Information consistency: The alignment of specific claims about your product or company across independent sources including your own site, Reddit threads, industry publications, and comparison content. Our research and client work consistently shows that claims appearing across the company site, Reddit, industry publications, and comparison content carry more weight in AI responses than a single high-authority page.
