The MERIT Framework for AI Search Optimization

A Practical Framework for AI Search Optimization

Published by Searchbloom

Updated May 16, 2026

Audio Overview

A conversational walkthrough of the MERIT Framework's five pillars, fifteen chapters, and the realities of AI search optimization. Roughly 44 minutes. Audio overview generated by AI.

Executive Summary

Written by Cody C. Jensen, CEO & Founder, Searchbloom

Edition updated May 14, 2026. Original publication October 2025.

Answer Engine Optimization (AEO), also called AI Search Optimization, AI SEO, or Generative Engine Optimization (GEO), is the discipline of earning citations in AI-generated answers across ChatGPT, Copilot, Perplexity, Google AI Overviews, Claude, and Gemini. What people call AEO or GEO is an evolution of SEO, not a separate discipline: traditional SEO surfaces brands in search results, and the same crawlable, authoritative, genuinely helpful content now surfaces them in the synthesized answer. What changes is the depth of execution, particularly the net-new information gain AI retrieval rewards. The MERIT Framework organizes that work across five pillars and fifteen chapters.

The MERIT Framework

The Five Pillars

M = Mentions

Third-party validation across trusted platforms in your industry where AI systems discover authoritative signals about your brand, products, services, and expertise. Verified customer reviews on review and directory platforms, authentic community engagement on forums and social communities, strategic third-party publications and media coverage, consistent web presence across multiple channels, and other external validation that builds credibility.

E = Evidence

Original, quantifiable assets that establish your brand as a primary source AI systems can reference and attribute. Proprietary research and benchmarks, case studies with measurable outcomes, expert analysis and educated opinions, evidence-based frameworks, transparent methodologies, and other authoritative resources that show thought or industry leadership.

R = Relevance

Comprehensive, intent-aligned content structured for AI retrieval in self-contained, citable segments. Answer-first content architecture, question-based headings, passage-level structure for retrieval-augmented generation (retrieval systems internally segment content; this is sometimes called chunking), semantic HTML structure with clear topical boundaries, pillar-and-cluster strategies, multi-format presentation, and related techniques that improve extractability.

I = Inclusion

Technical accessibility and semantic precision that enables AI crawlers to discover, understand, and correctly interpret your content and entities. Proper crawler configuration, entity schema, knowledge graph connections, IndexNow and Google Indexing API integration, semantic markup, server-side rendering, and other technical optimization that ensures machine readability.

T = Transformation

Systematic measurement, organizational evolution, narrative alignment, and continuing optimization that sustains AI visibility through volatility and change. Weekly monitoring with volatility awareness, monthly trend analysis on moving averages, quarterly strategic reviews, realistic expectation management, narrative consistency across owned and earned surfaces, reputation alignment, and the team structure required for sustained execution.

Corpus Engineering: The Operating Discipline Beneath the Framework

MERIT is the strategic framework. Corpus Engineering is the operating discipline beneath it: a systems-level practice for engineering a corpus for retrieval, semantic understanding, citation, ranking, and AI generation. Inclusion is Corpus Engineering at the accessibility and entity scale. Relevance, at the passage and sub-corpus scale. Evidence, at the information-gain and asset scale. Mentions, at the extended-corpus and expansion scale. Transformation, at the lifecycle and maintenance scale.

The Fifteen Chapters

Each chapter distills the strategic case and points to the corresponding chapter in the MERIT Framework Playbook for operational depth, named diagnostics, and worked examples.

Chapter 1 · Mentions

Pay-to-Play Placements

AirOps measured that 90% of third-party AI citations come from listicles and review-platform comparison pages, with 80% of cited brands appearing in the top three spots. Three operating tiers cover most situations: entry ($3K to $10K per month per platform), mid-tier ($15K to $30K), enterprise ($50K+ per year). Mid-tier on two platforms is the working pattern for most mid-market brands.

Chapter 2 · Mentions

Community Mentions and Positive Sentiment

Profound measured Reddit being cited in 1.2% to 14% of ChatGPT responses and 6.3% of Perplexity responses. The 90/10 value-to-brand-mention rule and platform thresholds (Reddit 500 karma, Quora 100 Credits, LinkedIn 90-day account age, Hacker News 200 karma) are non-negotiable. Operator-led work beats brand accounts by 30 to 50%. Full outsourcing reliably fails because community surfaces detect inauthenticity quickly.

Chapter 3 · Mentions

Third-Party Corroboration

85% of AI brand mentions come from third-party sources (AirOps March 2026). The editorial tier covers contributed pieces, listicle placement, podcasts, and analyst coverage. Working cadence: three to six contributed pieces per quarter per named expert. Tier 1 outlets carry 5x to 10x the citation weight of Tier 3. Specific pitches with substance land 30 to 50%, generic pitches below 5%.

Chapter 4 · Evidence

Original Source Asset Development

Five viable asset types earn AI citations: frameworks, opinion, research, calculators, and templates. AI cites opinion as readily as research when the source has verifiable expertise. Pick by working four constraints in order: budget, timeline, expertise, competitive landscape. The portfolio reframe almost always beats one-big-asset plans by 2 to 3x on per-dollar citation lift.

Chapter 5 · Evidence

Information Gain Architecture

AI systems filter retrieval on net-new substance. AirOps measured 6.5x citation lift for high-gain content. Two Searchbloom metrics: Information Gain Density (IGD, 5 to 7 distinct insights per page) and Information Gain Score (IGS, 0 to 1 cosine-similarity check). B grade (0.35+) is the production target. The 12-technique catalog operationalizes the discipline.
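Searchbloom's exact IGS computation is not published here, but the 0-to-1 cosine-similarity idea can be illustrated. A minimal sketch, assuming the score is one minus the candidate passage's maximum similarity to an existing corpus; bag-of-words term counts stand in for real semantic embeddings, and the function names and sample corpus are hypothetical:

```python
import math
import re
from collections import Counter

def bow(text):
    """Bag-of-words term-frequency vector (stand-in for a semantic embedding)."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def information_gain_score(candidate, corpus):
    """Toy IGS: 1 minus the candidate's max similarity to existing pages.
    Higher means more net-new substance; a real implementation would use
    semantic embeddings rather than raw term counts."""
    if not corpus:
        return 1.0
    return 1.0 - max(cosine(bow(candidate), bow(page)) for page in corpus)

corpus = ["AI search rewards original research and benchmarks."]
print(round(information_gain_score(
    "Our survey of 400 retailers found 62% saw AI-referred traffic grow.",
    corpus), 2))
```

The toy metric rewards passages whose vocabulary diverges from what the corpus already says; duplicated text scores near zero.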

Chapter 6 · Evidence

Citation Reinforcement and Topical Clusters

Single assets stall; clusters compound. Five-to-ten asset rule: one hub plus four to nine spokes. Depth before breadth wins 3 to 5x at the same budget because AI retrieval is entity-aware. Refresh cadences differ by content type: quarterly for benchmarks, annual for frameworks, reactive within 72 hours of trigger events.

Chapter 7 · Relevance

Answer-First Content Architecture

AI retrieves discrete passages, not full pages. Pages that bury the answer are invisible regardless of substance quality. Answer-first pages earn 2 to 3x the citation share of traditional SEO intros (AirOps). Four high-leverage structural elements: FAQ (+40%), lists and tables (80% of ChatGPT citations), question-based headings (2.8x), step-by-step numbered lists.
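The four structural elements above can be counted mechanically during a content audit. A minimal sketch over a markdown page, assuming question-based headings end in "?", lists use dash/asterisk/numbered markers, and tables use pipes; the `structure_audit` helper and sample page are hypothetical:

```python
import re

def structure_audit(markdown):
    """Count the structural elements the chapter calls high-leverage:
    question-based headings, list items, and pipe-table rows."""
    headings = re.findall(r"^#{1,6}\s+(.*)$", markdown, re.M)
    return {
        "question_headings": sum(1 for h in headings if h.rstrip().endswith("?")),
        "list_items": len(re.findall(r"^\s*(?:[-*]|\d+\.)\s+", markdown, re.M)),
        "tables": len(re.findall(r"^\|.*\|$", markdown, re.M)),
    }

page = """\
## What is AEO?
AEO is the practice of earning citations in AI-generated answers.

1. Audit crawler access
2. Restructure answer-first

| Engine | Cited? |
|--------|--------|
"""
print(structure_audit(page))
```

Pages scoring zero across all three counters are strong candidates for answer-first restructuring.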

Chapter 8 · Relevance

Multi-Format Surface Coverage

YouTube cited in 29.5% of Google AI Overviews (BrightEdge October 2025), roughly 200x more frequently than any other video platform. Five-format surface: text on owned domain, video with edited transcripts, images with descriptive alt, structured data (tables, schema), and audio via show notes. Multi-format pages earn 2 to 4x the citation share of single-format pages.

Chapter 9 · Relevance

Semantic HTML and Entity-Rich Language

AI does not parse schema at response time, but schema feeds the discovery layer AI retrieval pulls from indirectly. FAQPage schema lifts citation 40%. Entity-rich language at the sentence level drives correct attribution. Five schema types matter most: Organization, Person, Article, FAQPage, and HowTo or Product.
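As an illustration of the FAQPage type behind the 40% lift claim, a minimal sketch that assembles schema.org FAQPage JSON-LD; property names follow schema.org's published FAQPage vocabulary, while the helper function and sample Q&A are hypothetical:

```python
import json

def faq_page_jsonld(qa_pairs):
    """Build schema.org FAQPage JSON-LD from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in qa_pairs
        ],
    }

markup = faq_page_jsonld([
    ("What is AEO?",
     "Answer Engine Optimization: earning citations in AI-generated answers."),
])
# Embed in the page head as: <script type="application/ld+json">…</script>
print(json.dumps(markup, indent=2))
```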

Chapter 10 · Inclusion

Entity Optimization

AI retrieves by entity, not keyword. Query fan-out breaks each query into 10 to 20 subqueries before retrieval. Four entity types run in parallel: brand, people, products, topical. Wikidata is the structured-data backbone. Knowledge Panels appear when entities cross prominence thresholds; claim within 24 hours when one shows up.

Chapter 11 · Inclusion

Crawler Access

The simplest failure in AI Search is the easiest to fix. A bad robots.txt blocks AI crawlers and zero citations follow. Split training-time bots (GPTBot, anthropic-ai, Google-Extended) from retrieval bots (OAI-SearchBot, ClaudeBot, PerplexityBot). Allow retrieval bots. CDN-layer overrides (Cloudflare) can block AI bots regardless of origin robots.txt. Verify with curl.
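Alongside curl, the split policy can be sanity-checked with Python's standard-library robots.txt parser. A minimal sketch, assuming a policy that disallows training bots and allows retrieval bots; the ROBOTS_TXT text is an illustrative example, not a recommendation for every site:

```python
from urllib.robotparser import RobotFileParser

# Example policy: block training-time bots, allow retrieval bots.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: PerplexityBot
Allow: /
"""

def bot_allowed(robots_txt, user_agent, path="/"):
    """Check whether a given crawler may fetch a path under this policy."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, path)

for bot in ("GPTBot", "OAI-SearchBot", "PerplexityBot"):
    print(bot, bot_allowed(ROBOTS_TXT, bot))
```

Note that bots with no matching group are allowed by default, and a CDN-layer block would never appear in this file, which is why the chapter insists on verifying the live response.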

Chapter 12 · Inclusion

Indexing Protocols

IndexNow covers Bing, Yandex, Naver, Seznam, and (via Bing) Microsoft Copilot. The Google Indexing API covers Google Search, AI Overviews, and Gemini. Combined, they compress the gap between publishing and AI citation eligibility from days or weeks to hours. Wire both into the publish-event layer from a single hook point.
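A minimal sketch of the IndexNow half of that hook, building the batch submission the published protocol specifies (JSON with host, key, keyLocation, and urlList POSTed to the shared endpoint); the host, key, and URL are hypothetical, and the request is constructed but not sent:

```python
import json
from urllib.request import Request

INDEXNOW_ENDPOINT = "https://api.indexnow.org/indexnow"

def indexnow_request(host, key, urls):
    """Build the IndexNow submission for a batch of freshly published URLs.

    Per the IndexNow spec, the key file must be reachable at the
    keyLocation URL so participating engines can verify ownership."""
    payload = {
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",
        "urlList": urls,
    }
    return Request(
        INDEXNOW_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json; charset=utf-8"},
        method="POST",
    )

# Hypothetical publish hook: one call notifies Bing, Yandex, and (via Bing)
# Copilot; a parallel call to the Google Indexing API covers Google surfaces.
req = indexnow_request("example.com", "a1b2c3d4", ["https://example.com/new-study"])
```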

Chapter 13 · Transformation

Measurement Cadence and Expectations

AI Search is probabilistic. SE Ranking measured 9.2% URL consistency in Google AI Mode. SparkToro: AI recommendations are statistically random in over 99% of measured cases. Three cadences: weekly (technical health), monthly (citation share and cluster health), quarterly (strategic review). Realistic timeline: 3 to 6 months for initial lift, 18 to 36 months for category leadership.
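Reading trends on moving averages rather than week-to-week swings reduces to a small calculation. A minimal sketch with hypothetical weekly citation-rate samples; the four-week window is illustrative:

```python
def moving_average(series, window):
    """Trailing moving average; positions before a full window are skipped."""
    if window <= 0 or window > len(series):
        return []
    return [
        sum(series[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(series))
    ]

# Hypothetical weekly citation rates (% of tracked prompts citing the brand).
weekly = [4.0, 9.0, 3.0, 8.0, 5.0, 10.0, 6.0, 11.0]
print(moving_average(weekly, 4))  # → [6.0, 6.25, 6.5, 7.25, 8.0]
```

The raw weekly numbers whipsaw between 3 and 11; the smoothed series shows the steady upward trend the cadence is designed to surface.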

Chapter 14 · Transformation

Narrative and Reputation Alignment

AI synthesizes across every public surface where a brand appears. Inconsistent narrative produces hedged AI output. AI can also be flat wrong about a brand. The correction lever is the source layer, not the AI output itself. Training-baked errors fix on the model retraining cycle (months). Retrieval-baked errors fix in days to weeks.

Chapter 15 · Transformation

Organizational Evolution

AI search optimization is engineering grafted onto creative. Programs that treat it as a campaign do not sustain results. Five functional roles (Program Lead, Content Lead, Technical Lead, Distribution Lead, Named Expert) across three team structures by scale. Vendor selection in order: framework grounding, measurement discipline, vertical depth, operational integration.

AEO and GEO Are an Evolution of SEO

"I think what people call AEO or GEO is simply an evolution of SEO."

- Cody C. Jensen, CEO & Founder, Searchbloom

AEO and GEO are not a separate discipline. Google is explicit that its generative AI features are rooted in its core Search ranking and quality systems, so optimizing for generative AI search is optimizing for the search experience, and is still SEO. The same spam and quality policies that govern Search now explicitly govern AI responses. The discipline has evolved, not forked: the same crawlable, authoritative, genuinely helpful content that wins classic Search is what AI retrieval rewards.

What changes is the depth of execution. AirOps's March 2026 analysis found about 60% of AI Overview citations come from URLs that do not rank in the top 20 organic results, because AI retrieval favors deep pages that answer specific subqueries. MERIT is not a replacement for SEO; it is the operating model for executing modern SEO at the depth AI retrieval demands, the part most "AEO" vendors skip while relabeling legacy SEO.

How SEO Evolved For AI Search

Diagram: AI search optimization as an evolution of SEO, with near-total overlap between SEO and AI search work

Independent Research Validates the Framework

Research since the October 2025 release reinforces the framework. AirOps's March 2026 analysis (about 15 million data points) aligns directly with MERIT's pillars:

  • 85% of AI brand mentions come from third-party sources, with brands 6.5x more likely to be cited through external content. Validates Mentions.
  • FAQs lift citation odds 40%, clear headings 2.8x; lists and tables appear in about 80% of ChatGPT citations versus 29% of Google's top results. Validates Relevance.
  • About 90% of third-party citations come from listicles, comparisons, and review sites, with 80% of cited brands in the top three. Validates Pay-to-Play Placements.
  • SparkToro (January 2026) confirmed AI rank volatility: top brands still appear in 55 to 77% of responses regardless of phrasing, validating the consideration-set framing in Measurement Cadence.

Recognizing the AEO Relabeling Pattern

A significant portion of services sold as AEO, GEO, or AI Search Optimization is repackaged traditional SEO. Schema, content restructuring, entity work, and crawler configuration retain real value (schema feeds the knowledge graphs AI leverages indirectly), but relabeling SEO as AEO does not change what it is. The genuinely AEO-specific work is third-party corroboration, original source assets cited across credible third parties, narrative and reputation alignment across surfaces, and entity-level brand recognition AI can attribute correctly.

A Practical Test for Buyers

Ask any AEO vendor to map each proposed deliverable to one of the five MERIT pillars and to cite published evidence that the deliverable affects AI citation outcomes. Deliverables that map only to Inclusion (schema, robots.txt, IndexNow) and Relevance (content structure) without addressing Mentions, Evidence, or Transformation are largely repackaged SEO. The strongest proposals distribute deliverables across all five pillars and back the AI-specific ones with evidence.

Recommendations reflect the AI search landscape as of May 2026 and evolve rapidly; verify before adoption. Statistics are from published research (hyperlinked); examples are representative patterns, not specific engagement disclosures.

Conclusion

Final Considerations

AI Search Optimization requires patience and sustained investment. Volatility is high (only 9.2% URL consistency in Google AI Mode, SE Ranking, August 2025), and AI citations decay on their own citation half-life, so success is measured in trends, not week-to-week swings; expect 3 to 6 months minimum for significant impact. Because AEO and GEO are an evolution of SEO rather than a competing discipline, organizations with strong SEO foundations are best positioned to accelerate AI visibility through these strategies.

The Engineering Shift in Marketing

AI search optimization is increasingly engineering-style work: systematic measurement, version-control thinking applied to content, structured data, automated refresh workflows, and AI systems used as production tools. It adds a third dimension alongside creative and strategic judgment, a systems and engineering discipline. This is not a passing trend; the field is professionalizing, and hiring, training, and team structure should reflect that.

Who Can Execute MERIT

The framework's chapters vary in the authority and organizational embedment they require, which determines whether MERIT is something a team applies itself, hands to a partner, or must restructure to reach:

  • In-house teams with full authority can execute the entire framework across marketing, sales, product, PR, and brand. The published citation-lift case studies come from this model. Highest impact.
  • Embedded strategic agencies with budget authority and cross-functional reach can execute most of it alongside in-house teams. Uncommon; where the embedment is absent, the agency is constrained to channel-specific chapters (3, 4, 7, 10, 11, 12).
  • Consultative advisors can transfer and coach but cannot execute the framework themselves. Useful when the customer has the team and authority to act.
  • Tool-only adoption measures visibility but does not move the work. Necessary, not sufficient.

The framework does not require a particular model, but it requires buyers to be honest about which one they have and scope their AEO ambitions accordingly. MERIT closes the widening gap between AEO ambitions and execution capacity by making the dependency between strategy and execution authority explicit.

Sources & Further Reading

Core research referenced in this whitepaper. The Playbook sources page carries the full citation set.

Industry Research

Public Case Studies

  • Carta: 7x increase in AI citations, 75% citation rate on newly published pages.
  • Webflow: 5x refresh velocity, 6x conversion rate from AI-sourced traffic.
  • Chime: 89% time reduction per refresh, AI citations tripled in four weeks.
  • Docebo: 25% share of voice lead, doubled publishing velocity without adding headcount.

Tools & Vendors Mentioned

Pricing accurate as of April 2026 and subject to change. Inclusion is not endorsement; verify before adoption.

Brand Mention Monitoring

  • Alertmouse: Mention monitoring across news, blogs, and social. Co-founded by Rand Fishkin (SparkToro). Free (1 alert); Basic from $10/month.
  • Ahrefs Firehose: Real-time web monitoring API (SSE, Lucene syntax). Free during beta. For developer and automation workflows.
  • Ahrefs Alerts: Brand and keyword tracking within the Ahrefs platform.

AI Visibility Measurement

  • Profound AI: Multi-platform citation tracking. From $499/month (4 platforms).
  • Peec AI: 115+ language support. From €89/month.
  • Semrush AI Toolkit: AI visibility plus traditional SEO data. $120 to $500/month.
  • Writesonic GEO, Promptmonitor, Otterly.AI: Budget-tier options.

Indexing & Discovery

  • IndexNow: Open-source instant index notification. Free.
  • Cloudflare Crawler Hints: CDN-level IndexNow integration.

Questions & Answers

What is AI search optimization?

AI search optimization is the practice of earning visibility in AI-generated answers from systems like ChatGPT, Claude, Perplexity, Google AI Overviews, and Gemini. It is also called Answer Engine Optimization (AEO, sometimes expanded as AI Engine Optimization), AI SEO, or Generative Engine Optimization (GEO). Unlike traditional SEO, which targets ranking in search engine results, AI search optimization focuses on being cited by AI systems when they generate responses to user queries. The MERIT Framework provides a structured methodology for AI search optimization across fifteen chapters and five pillars.

How does AI search optimization differ from traditional SEO?

It does not fork from SEO; it is an evolution of it. Google is explicit that its generative AI features are rooted in its core Search ranking and quality systems, so optimizing for generative AI search is still SEO. Traditional SEO targets result rankings; AI search optimization targets being cited in AI-generated responses, but the underlying work is the same crawlable, authoritative, genuinely helpful content. Schema and E-E-A-T originated as SEO factors yet remain foundational inputs to the retrieval and grounding layer AI systems pull from (iPullRank's GEO Core chapter documents how structured signals help generative engines disambiguate entities). What changes is the depth of execution: the third-party corroboration, original source assets, and narrative and reputation alignment that MERIT operationalizes are the part most AEO vendors skip while relabeling legacy SEO.

What are AI search optimization strategies?

The MERIT Framework organizes fifteen AI search optimization chapters across five pillars. Mentions covers third-party validation through review platforms, community engagement, and earned media. Evidence covers original source assets that AI cites. Relevance covers content structured for AI retrieval. Inclusion covers technical accessibility for AI crawlers and entity recognition. Transformation covers measurement, narrative consistency, reputation alignment, and organizational evolution. Each chapter is documented with supporting research and representative examples.

How do you measure ROI from AI search optimization?

AI search optimization ROI is measured through citation rate (frequency of brand citations across major AI engines), share of voice in AI responses, sentiment in AI outputs, AI-referred traffic, and conversion rates from AI-sourced visitors. Tools like Profound AI, Peec AI, Otterly, and Semrush AI Toolkit support periodic audits across ChatGPT, Claude, Perplexity, Gemini, and Google AI Overviews. Because AI citation patterns are volatile, ROI is best evaluated as thirty-day or ninety-day moving averages rather than week-to-week measurements. Chapter 13 (Measurement Cadence) covers the full measurement methodology.

Who should do AI search optimization?

Best executed by in-house teams with full authority across marketing, sales, product, PR, and brand. Embedded strategic agencies can execute most of it with in-house teams; consultative advisors transfer methodology but do not execute; tool-only adoption measures without moving the work. The "Who Can Execute MERIT" section above details the four execution-authority levels.

Cody C. Jensen

CEO & Founder of Searchbloom

Cody C. Jensen is the Founder and CEO of Searchbloom, a results-driven search engine marketing agency. He began his career at Google and later advanced through some of the largest agencies in the digital marketing industry. During that time, he recognized the need for an agency that focused on transparency, measurable results, and ethical practices.

Searchbloom was his answer, created with the mission to be the most trusted, transparent, and results-driven search marketing agency in the industry. Cody works closely with marketing executives, digital managers, business owners, and enterprise brands to create full-funnel strategies that deliver real growth.

His leadership and innovation have led to the development of proven digital marketing methodologies that continue to help Searchbloom's partners achieve lasting ROI and sustainable success.

