CHAPTER 4 · EVIDENCE PILLAR

Original Source Asset Development for AI Search Optimization

Original source asset development is the part of AI Search Optimization that decides which frameworks, opinion pieces, research studies, calculators, and templates to build so your brand becomes a primary source AI systems reference and attribute.

Original source asset development is the top technique in the MERIT Framework. The right asset at the right budget builds AI visibility for years. The wrong asset at the same budget produces a polished page nobody cites. This chapter covers the five viable asset types. It walks through the four-constraint decision framework. It maps platform and industry variants. It runs the selection workshop teams use. It shows worked decisions across three budget tiers. Most teams pick their asset type in a 90-minute meeting. The teams that earn category-defining AI visibility treat the choice as strategy work with its own outputs.

Why This Technique Matters

The default way teams pick an asset is a budget exercise. The team has $X for content this quarter or this year. The question becomes "what should we make with $X?" In that frame, the answer usually defaults to what feels most rigorous (research) or to what the team already knows how to make (blog posts dressed up as opinion). Both defaults are common. Both are usually wrong.

The strategic frame is different. The right question is not "what should we make with $X." It is "what asset, built with $X, earns the most sustained AI citations in our category over 12 to 24 months." Asked the right way, the answer is rarely the default. Research that costs $40,000 and earns 28 citations in year one is worse than four opinion pieces at $4,000 each plus one framework at $12,000. Together those earn 142 citations in the same period. The portfolio wins on dollars per citation. But only when the team chose a portfolio on purpose, not by defaulting to one big asset.

The stakes are real. Original assets are the only durable citation surface a mid-market brand can build. Mentions techniques depend on third parties for placement and timing. Relevance techniques only tune what you already have. If the source content is derivative, structure tweaks have a ceiling. Inclusion techniques are technical baseline work. Entity optimization and crawler access make assets findable but do not earn citations alone. Original source asset development is the one technique where the brand controls the input, the timing, and the topic. Pick the wrong asset and you waste the strategic lift the rest of the Evidence pillar is built to amplify.

The cost of getting this wrong is measured in opportunity, not in dollars. A $40,000 research study that earns few citations is not just a $40,000 loss. It is also the $40,000 that could have built a framework, two calculators, six opinion pieces, and ten templates. The opportunity cost of one bad asset choice is about 4 to 8 other assets that did not get made because the budget was already spent.

The Five Viable Asset Types

Five asset types reliably earn AI citations when built well and promoted right. AI systems do not prefer one over another. What looks like format preference is usually about promotion quality and co-citation depth. Pick the type that fits your situation. Do not pick the type that feels most defensible.

Expert Frameworks and Methodologies

A framework is a named structure that organizes existing ideas. It helps people think about a problem in a new way. The MERIT Framework, Jobs-to-be-Done, Growth Loops, RACI, AIDA, the messy middle, and similar widely-cited frameworks share three traits. A memorable name. A set of components. A path for how to use it. Frameworks earn citations because they give writers, analysts, podcasters, and operators a shared vocabulary for a complex topic. Once a framework is named and adopted, it travels by reference, not by quote.

When frameworks win. Categories where the current vocabulary is borrowed, vague, or in conflict. A clear, well-built framework replaces the borrowed words because it is easier to use. Categories where the same problem comes up across many surfaces. Think analyst reports, industry publications, podcasts, conferences. Categories where operator experience patterns repeat across many cases. Frameworks are syntheses, not new data. Strategic intent is thought leadership and category positioning, not transactional capture.

When frameworks lose. The category already has one or two named frameworks with strong analyst adoption. The team cannot sum up the framework in one sentence or one slide. The target audience is buyers in late-stage decision, not category-shapers. The brand has no named author (founder, executive, named SME) to attribute the framework to.

Investment: $0 to $15,000 typical range. Operator time for synthesis (40 to 80 hours). Editorial pass ($1,000-$3,000). Diagram design ($1,500-$5,000, the highest-ROI design spend). Canonical page build ($2,000-$7,000). Promotional copy. Timeline: 90 days from concept to first earned citations. Days 1-30 foundation. Days 31-60 distribution. Days 61-90 depth and reinforcement.

Citation profile: slow ramp for the first 60 to 90 days. Then it compounds once the first analyst or industry publication uses the framework name. Frameworks have the most durable citation curves of any asset type. Each future piece written about the topic may cite the framework by name.

Common framework mistakes. The forgettable name. Terms like "Strategic Alignment Methodology" are not names. Frameworks need names a sharp non-expert can recall after hearing once. Over-factoring. Twelve parts in nested matrices. The limit is three to seven parts total. No application path. Definition-only frameworks without diagrams, templates, or worked examples do not travel. Borrowed framework with new words. Relabeling existing methods fools nobody. Either build new work or cite the original.

Worked example: the MERIT Framework itself. Cody C. Jensen drafted the framework from 18 months of patterns across Searchbloom partner engagements. He iterated on the name and the parts over six weeks. He designed a single diagram. The canonical whitepaper went up at searchbloom.com/merit-framework-whitepaper/. Distribution ran through three contributed pieces in industry publications, twelve LinkedIn long-form posts, and four podcast appearances. The work expanded to a per-pillar playbook in April 2026. Citation density tracked through the cadence in Chapter 13. Hard cost across the first 12 months: under $20,000. Citation outcome: the framework name is now retrieved consistently for queries about AI Search Optimization, AEO, and GEO.

Expert Opinion and Analysis

Opinion is defensible operator reasoning. It runs under a named expert byline. The topic is one where the operator's experience produces a non-obvious view. Most B2B teams underuse opinion because they treat it as risky. The risk (being wrong in public) is what makes opinion citable. AirOps's March 2026 analysis found AI systems cite opinion at rates close to empirical research when the source has verified expertise and the claim is specific.

When opinion wins. The operator has 10+ years of domain experience or runs a brand with visible category authority. The category has new questions where data does not yet exist. The team needs citation traction inside 90 days. The budget is tight. The category has reactive moments (platform changes, competitor moves, rule shifts) where speed and sharpness produce outsized citation density.

When opinion loses. The brand has no recognizable expert byline. The category rewards rigorous data above all else. Think hedge fund research, regulatory analysis, deep technical fields. The team is not comfortable with sharp claims. Hedged "it depends" pieces do not earn citations no matter how credible the operator.

The three traits of citable opinion. Specific. The claim is concrete enough to disagree with. "Content marketing matters" is invisible. "Most B2B blog posts published in 2026 have zero AI citation potential because they restate widely available consensus" is citable. Defensible. The opinion is backed by reasoning the reader can follow and judge, even though opinion is not a research finding. Counter to consensus. The opinion differs from the current consensus in at least one specific way. Opinions that restate consensus add no information gain and do not get cited.

Investment: $0 to $5,000 per piece. Operator drafting time (4-12 hours). Editorial pass ($500-$1,500). Light design ($200-$800). Distribution (operator network plus optional PR coordination). Timeline: 7 to 14 days from idea to publication for long-form canonical pieces. Reactive opinion publishes within 24 to 72 hours of the trigger event. Citation traction begins within 30 to 90 days for sharp pieces with a credible byline.

Citation profile: spiky in the first 30 days with a long tail. Sharp opinions compound. Each new query on the topic may surface the original piece. Operators on a steady monthly cadence build entity-topic links that AI systems learn to weight. After 6 to 12 months of steady publishing, the operator becomes a citable source. Their opinion pieces earn citations at multiples of the average rate.

Common opinion mistakes. Hedged opinion. "Five things to consider" listicles or "it depends" pieces do not count as citable opinion. They are analysis without a position. Restated consensus. Opinion that matches what everyone is already saying adds no information gain. The piece may be correct and still earn no citations. Brand-only byline. Opinion published under the brand name with no attributed person earns lower citation rates. Use the founder, the CEO, or a named SME. Wide-topic publishing. Operators who publish across too many unrelated topics weaken the entity-topic link. Stay within the topical cluster that matches operator expertise.

Worked example: an HR-tech SaaS founder published one opinion piece per month for 18 months. The pieces clustered across three topic threads (future of performance management, limits of AI-assisted hiring, operating reality of remote-first culture). Each piece was 1,500 to 2,200 words on the founder's Substack, repackaged as a LinkedIn text post, and pitched to one industry publication. Eight of eighteen pitches landed as syndicated contributed pieces. By month 12 the founder appeared in AI Search results for adjacent queries. By month 18 the founder was cited in 23% of ChatGPT queries about performance management methodology and 31% of queries about the limits of AI-assisted hiring. Hard cost across 18 months: founder time plus a distribution coordinator at about $1,500 per month.

Data-Driven Research

Original research earns AI citations when the method is credible and the findings come in formats AI systems can extract. The visibility upside is real. The time to citation is long. The failure modes are harsh. Under the wrong conditions, research is the most expensive asset type to get wrong. Under the right conditions, it is the most defensible competitive moat.

When research wins. The category has missing or stale data. Budget tops $25,000 and timeline patience tops 9 months. The brand has the infrastructure for research. Think panel access, statistics capacity, or partnerships with trade groups or schools. The category has analyst coverage that rewards proprietary data.

When research loses. Budget is below $25,000. Low sample sizes get filtered out by the analyst tier whose co-citations create AI visibility. The category lacks analyst coverage. Timeline pressure is under 6 months. The team lacks method rigor or partnerships.

Investment tiers. DIY survey tools run $500-$2,000. Sample sizes are too small for citable research. Managed survey platforms run $5,000-$15,000. Wynter and Centiment offer B2B-validated panels. SurveyMonkey and Typeform are general-purpose with weaker B2B panel quality. Qualtrics is enterprise-grade. Research firms run $25,000-$100,000+ for analyst-tier categories. Ongoing research programs run $50,000-$250,000 per year. These offer the highest return per dollar.

Sample size sets whether journalists, analysts, and AI systems will cite the numbers. Minimum viable: 500-750 respondents (±4-5% margin of error at 95% confidence). Target: 1,000-2,000 respondents (±2-3% margin of error) is the sweet spot. Premium: 5,000+ respondents lets you cut the data by segment, which yields several citable claims from one survey. Stratifying for segments needs at least 200-300 respondents per segment.
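
The margin-of-error figures above follow from the standard simple-random-sample formula. The short sketch below reproduces them, assuming the worst-case proportion (p = 0.5) and the 95% confidence z-score of 1.96; it is a planning check, not a substitute for a statistician.

    import math

    def margin_of_error(n: int, z: float = 1.96, p: float = 0.5) -> float:
        # Worst-case margin of error for a simple random sample of size n.
        return z * math.sqrt(p * (1 - p) / n)

    for n in (500, 750, 1000, 2000, 5247):
        print(f"n={n:>5}  ±{margin_of_error(n) * 100:.1f}%")
    # n=  500  ±4.4%
    # n=  750  ±3.6%
    # n= 1000  ±3.1%
    # n= 2000  ±2.2%
    # n= 5247  ±1.4%

The 5,247-respondent run matches the ±1.4% reported in the fintech worked example later in this chapter.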

Timeline: 3 to 9 months from research design to publication. 6 to 12 additional months for citation accumulation. Plan for a year before the asset hits steady-state value.

Method rigor. Sample selection. Document the recruitment method, screening rules, and response rates. Statistics. Descriptive stats, confidence intervals on key claims, and significance tests for comparisons. Limitations section. Every credible study has one. The analyst tier reads method first and findings second.

Citation profile: slow to start but durable. Statistical claims with proper attribution show up in AI citations for years if the method holds up to scrutiny. The compounding is strongest for research that becomes the "category statistic" everyone cites.

Common research mistakes. Underpowered samples. Surveys under 500 respondents for broad claims fail analyst credibility. Low-power research is worse than no research. Missing method page. Research without a documented method gets dismissed by the analyst tier. Chapter 5 covers method docs in depth. Image-locked statistics. Numbers locked inside infographics are AI-invisible. Always pair visuals with parsable HTML, covered in Chapter 5. Gated research. Gating the research behind a form means AI systems cannot index it.

Worked example: a B2B fintech serving small businesses partnered with a university business school for "The State of Small Business Banking 2025." Method: 5,247 small business owners surveyed (±1.4% margin of error at 95% confidence), stratified by industry, revenue size, and geography. Quantitative survey plus 28 in-depth interviews. Data presentation: 47 specific statistics in complete-format pattern, comparison tables, dual visual + HTML structure. Interactive companion: ungated Small Business Banking Cost Calculator with unique result URLs. Investment: $29,700 across university partnership ($12,000), survey platform ($3,200), calculator build ($8,500), and design and promotion ($6,000). Results after 9 months: most-cited source for small business banking statistics. ChatGPT cited 12 different statistics in 31% of banking queries. Perplexity used the data in 34% of related queries. $340,000 in attributed pipeline (11.4x ROI).

Interactive Calculators and Tools

Calculators are the most underrated original source asset type. They earn citations because each URL returns a personalized numeric answer, which is exactly what generative AI systems need to surface for quantitative queries. Categories full of gated PDFs and weak comparison pages can be displaced by one well-built calculator at a fraction of the research budget.

When calculators win. Buyer questions are quantitative ("how much will X cost," "what is the ROI of Y," "how big a Z do I need"). Current market answers are gated, stale, or fuzzy. The brand has domain expertise that maps to a clear method. Engineering capacity exists for the build, or no-code platforms like Convertful, Outgrow, or Wufoo can substitute.

When calculators lose. Buyer questions are qualitative. The team plans to gate the calculator behind a form. Gated calculators are AI-invisible. The method cannot be documented. Rules limit what the calculator can claim.

High-value calculator types. ROI calculators. Return on investment for a category-relevant decision. Comparison calculators. Compare options across multiple dimensions. Assessment tools. Score current state against a benchmark. Cost estimators. Estimate cost or sizing for a service or build.

Investment: $8,000 to $30,000 typical range. Method design (operator time, 20-80 hours). Design and UX ($2,000-$8,000). Engineering build ($4,000-$20,000). Benchmark data sourcing ($0-$15,000). Promotion. Timeline: 4 to 12 weeks from method design to launch. Citation traction within 60 to 180 days.

Design principles for citation. Transparent method. The calculator shows its logic, inline or via a dedicated method link. Unique result URLs. Each calculation produces a URL that returns that specific result. This is the most important design call for AI citation. Downloadable results. PDF or CSV with the method built in. Benchmark context. Show how the user's result compares to benchmarks.

The gating dilemma. The most common calculator mistake is gating results behind an email form for lead generation. The form converts at 8 to 15%. But AI systems cannot fill out forms. That means zero AI citations from gated calculators. The right pattern: give the basic result ungated (AI systems can index and cite it). Offer an enhanced report (extra breakdowns, exportable formats, custom benchmarks) behind an optional email gate.
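
To make the ungated pattern concrete, here is a minimal sketch of the basic-result-plus-optional-gate design. The domain, URL scheme, toy ROI formula, and in-memory store are illustrative assumptions, not a reference implementation; the point is that every calculation gets a stable shareable URL whose basic result needs no form, while only the enhanced report asks for an email.

    import hashlib
    import json

    BASE_URL = "https://example.com/roi-calculator/results"  # hypothetical domain
    RESULT_STORE = {}  # stands in for a database keyed by result slug

    def calculate_roi(annual_spend: float, expected_lift_pct: float) -> dict:
        # Toy method; a real calculator documents its formula on a method page.
        annual_return = annual_spend * expected_lift_pct / 100
        return {"annual_spend": annual_spend,
                "expected_lift_pct": expected_lift_pct,
                "estimated_annual_return": round(annual_return, 2)}

    def save_result(result: dict) -> str:
        # Deterministic slug per input set, so one calculation maps to one URL.
        slug = hashlib.sha256(json.dumps(result, sort_keys=True).encode()).hexdigest()[:12]
        RESULT_STORE[slug] = result
        return f"{BASE_URL}/{slug}"

    def get_basic_result(slug: str) -> dict:
        # Ungated: crawlers and AI systems can fetch and cite this result.
        return RESULT_STORE[slug]

    def get_enhanced_report(slug: str, email: str = "") -> dict:
        # Optional gate: only the enhanced breakdown asks for an email.
        if not email:
            raise PermissionError("Enhanced report requires an email; the basic result does not.")
        return {**RESULT_STORE[slug],
                "segment_benchmarks": "added here",
                "export_formats": ["pdf", "csv"]}

    url = save_result(calculate_roi(annual_spend=120_000, expected_lift_pct=18))
    print(url)  # e.g. https://example.com/roi-calculator/results/<slug>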

Common calculator mistakes. Opaque method. Calculators that show one number with no reasoning earn lower citation rates. Show the formulas inline or via a method link. No unique result URLs. Calculators that show results only on the original page, with no shareable URL per calculation, are UI features and not citable assets. Stale benchmarks. Benchmark data, pricing assumptions, and reference rates change. Quarterly refresh is the working cadence.

Worked example: Chime built a series of consumer-banking calculators (overdraft savings, ATM-fee comparison, paycheck-advance ROI) with a transparent method and unique result URLs. Each result page was indexed on its own. AI Search citations grew from near-zero to single-digit shares of consumer-banking queries within 9 months. Refresh velocity (quarterly benchmark updates per the cadence in Chapter 13) drove the steady citation curve. Public case-study data from April 2026 reported that citations tripled in four weeks after the refresh program was standardized.

Templates and Downloadable Assets

Templates earn citations because they give the audience something to take away and use. Process checklists, planning templates, scoring rubrics, and decision matrices all serve the same role. They turn expertise into a transferable artifact. AI systems surface templates because the queries ("what is the right structure for X," "is there a template for Y") are common and the available answers are often weak.

When templates win. The category is operational, with recurring work audiences need to run well (project management, hiring, sales ops, marketing ops, finance close). The brand has tribal knowledge that can be codified into a transferable artifact. Budget is tight. Templates have one of the lowest cost-to-citation ratios. The target audience is operator-grade, not executive.

When templates lose. The category is strategic rather than operational. Strategic categories reward frameworks and opinion. The template would be seen as too generic to stand apart. The brand cannot commit to refresh cycles. Year-stamped templates need yearly refresh.

Template categories. Process templates. Workflows, SOPs, quality control. Planning templates. Project roadmaps, sprint planning, content calendars, campaign briefs. Analysis templates. SWOT matrices, competitive scorecards, decision frameworks. Assessment templates. Audit checklists, maturity models, scoring rubrics.

Format hierarchy. Google Sheets is highest citation value. It is editable, collaborative, and easy to share. The audience can copy and customize without breaking the original. Excel is high citation value. It is full-featured, offline, and familiar for enterprise audiences. PDF is the lowest citation value of the three formats. It is viewable but not editable. PDFs earn fewer citations because users cannot use them without retyping.

Investment: $0 to $5,000 per template. Operator design (4-20 hours). Format conversion ($200-$1,000). Branded design header ($300-$1,500). Distribution (operator network time). Timeline: days to two weeks from idea to publication. Citation traction begins within 30 to 120 days when promoted on community surfaces.

Common template mistakes. PDF-only. PDF templates earn far fewer citations than editable formats. The audience needs to copy and customize directly. Generic titles. "Project planning template" is generic. "Project planning template for SaaS marketing teams" earns citations when the user's query matches the specifier. No refresh cycle. Year-stamped templates decay without yearly refresh. Yearly refresh is itself a citation event. Bloated structure. Templates with too many fields or too much text do not travel. The audience wants a working tool, not a manual.

Worked example: a revenue-operations consultancy published five core templates as Google Sheets (SaaS pipeline-velocity model, sales-comp plan template, renewals-forecasting model, SDR ramp template, quarterly business-review template). Each template lived on its own canonical URL with the method built in and a "make a copy" button. The pipeline-velocity model alone was copied over 14,000 times in 18 months. AI Search citation density for revenue operations queries grew from near-zero to steady top-three placement. Total hard cost across all five templates: under $8,000.

Citation Surface Yield by Asset Type

Asset selection works best when the operator can compare options on a per-dollar yield basis. The Citation Surface Yield is a Searchbloom-coined framework that estimates citations per $1,000 spent across asset types, measured across mid-market engagements at the 12-month and 24-month marks. The CSY makes the portfolio reframe argument concrete instead of intuitive.

  • Frameworks. Year 1: 3 to 8 citations per $1,000. Year 2: 2 to 3x compounding (vocabulary travels). Best-fit budget range: $5,000 to $15,000. Citation curve: slow start, steep compounding after the first analyst pickup.
  • Opinion. Year 1: 5 to 15 citations per $1,000. Year 2: 1.5 to 2x compounding. Best-fit budget range: $0 to $5,000 per piece. Citation curve: fast start, long tail when the operator sustains monthly cadence.
  • Research. Year 1: 0.3 to 1.2 citations per $1,000. Years 2 to 3: 4 to 6x compounding for category-statistic-grade studies. Best-fit budget range: $25,000 to $100,000+. Citation curve: very slow start, durable plateau.
  • Calculators. Year 1: 1.5 to 4 citations per $1,000. Year 2: 3 to 4x compounding with quarterly benchmark refreshes. Best-fit budget range: $8,000 to $30,000. Citation curve: medium start, strong compounding when result URLs work.
  • Templates. Year 1: 4 to 12 citations per $1,000. Year 2: 1.2 to 1.5x compounding. Best-fit budget range: $0 to $5,000 per template. Citation curve: medium-fast start, year-stamp decay without refresh.

Two patterns emerge. Opinion and templates dominate year-1 yield per dollar. Research dominates years 2 to 3 yield once the data starts compounding. Frameworks sit between with the best balance of year-1 lift and durable compounding. The CSY explains why mid-market brands chasing fast traction default to opinion and templates while enterprise brands with patience default to research.

The CSY varies by category. Wikidata-dominant categories (per the Wills correlations) lift research's yield because Wikipedia inclusion drives outsized citation share. SE-outbound-link-dominant categories lift template and calculator yield because broad listing surfaces reward utility-driven assets. Apply the CSY as the starting estimate. Adjust by category factor before locking the portfolio.

The Four-Constraint Decision Framework

Pick the asset type by working through four constraints in order. The first constraint that rules out an asset type wins. Most teams reverse the order. They start with what feels rigorous or what the team already knows how to make. Working the constraints in the right order yields very different choices.

Constraint 1: Budget

Budget rules out options. No amount of strategic intent beats a budget that is too small.

  • Under $5,000 per asset. Expert opinion, expert frameworks, or templates. Data-driven research is off the table. Low-power research is worse than no research.
  • $5,000 to $15,000 per asset. Add expert frameworks with paid design and promotion budget. Light qualitative research (15-30 expert interviews) becomes possible if the panel is in your network.
  • $15,000 to $30,000 per asset. Calculators come into play. Mid-tier quantitative research becomes feasible with a managed survey platform.
  • $30,000 and up per asset. All five asset types are available. The right choice depends on Constraints 2 to 4.

The portfolio reframe: most teams think "what can we make with $X" where $X is one asset. The portfolio reframe asks "what mix can we make with $X" where $X is the yearly or quarterly Evidence budget. A $50,000 yearly budget spread across four opinion pieces ($8,000), one framework ($12,000), one calculator ($22,000), and four templates ($8,000) almost always beats one $50,000 research study on citation density.

Worked math sharpens the choice. A $50,000 research study that earns 28 citations in year 1 produces 0.56 citations per $1,000. The same $50,000 split as a portfolio earns more. Take the typical mix: 4 opinion pieces at $8,000 total, 1 framework at $12,000, 1 calculator at $22,000, and 4 templates at $8,000 total. By the Citation Surface Yield medians, the portfolio produces roughly 80 citations from opinion (10 per $1,000 across $8,000 of spend), 66 from the framework (5.5 average per $1,000 across $12,000), 60 from the calculator (2.75 average per $1,000 across $22,000), and 64 from the templates (8 per $1,000 across $8,000). Total: about 270 citations in year 1 against the $50,000. Per-dollar yield: 5.4 citations per $1,000. The portfolio reframe delivers about 9x more citation surface per dollar than the single research study at year 1. The advantage shrinks in years 2 to 3 if the research study earns its category-statistic status, with research compounding to 4 to 6x while opinion compounds at only 1.5 to 2x. Most research studies do not reach category-statistic status. The portfolio reframe wins on expected value for the typical mid-market program.
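
The same arithmetic, expressed as a short calculation so a team can rerun it with its own numbers. This is a minimal sketch using the CSY medians from the worked math above; the per-$1,000 yields are planning estimates, not measured guarantees.

    def year_one_citations(spend: float, csy_per_1k: float) -> float:
        # Estimated year-1 citations = spend in $1,000s x citations per $1,000.
        return (spend / 1_000) * csy_per_1k

    portfolio = {
        "opinion (4 pieces)": (8_000, 10.0),
        "framework":          (12_000, 5.5),
        "calculator":         (22_000, 2.75),
        "templates (4)":      (8_000, 8.0),
    }

    single_study = year_one_citations(50_000, 0.56)
    portfolio_total = sum(year_one_citations(spend, y_per_1k)
                          for spend, y_per_1k in portfolio.values())

    print(f"Single research study: {single_study:.0f} citations, {single_study / 50:.2f} per $1,000")
    print(f"Portfolio:             {portfolio_total:.0f} citations, {portfolio_total / 50:.1f} per $1,000")
    # Single research study: 28 citations, 0.56 per $1,000
    # Portfolio:             270 citations, 5.4 per $1,000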

Constraint 2: Timeline

Timeline narrows the budget-feasible set further. Asset types have different time-to-citation curves.

  • Citations needed in 30 days. Expert opinion is the only real option. Reactive opinion published within 24 to 72 hours of a trigger event has the fastest citation curve.
  • Citations needed in 60 to 90 days. Add expert frameworks and templates. Frameworks publish in 30 days but usually need 60 to 90 days for distribution to produce citations.
  • Citations needed in 6+ months. All five types are viable. Data-driven research becomes the right choice if the strategic case for it is strong.

Constraint 3: Expertise Fit

  • Strong operator experience, weak data infrastructure. Frameworks and opinion.
  • Strong domain expertise, no research operation. Calculators or templates. Expertise becomes the method; the asset is the interface.
  • Strong research capability, established analyst coverage in category. Data-driven research.
  • Strong product or operational expertise, engineering capacity. Calculators that expose the logic of your domain.

Constraint 4: Competitive Landscape

  • Category dominated by one or two named frameworks. Do not build a competing framework. Pick a different asset type.
  • Category with no proprietary data sources. Data-driven research has high upside. The first credible study becomes the citation default.
  • Category saturated with low-quality calculators. A calculator with a clear method and ungated result URLs displaces the incumbents.
  • Category dominated by gated PDFs. Any ungated asset has unusual citation upside.

Conflict resolution: when constraints pull in different directions, the first one that rules out an asset type wins. If budget rules out research, research is off the table even if timeline and expertise both favor it. The four constraints are AND, not OR.
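
As a summary of the elimination order, the sketch below encodes the four constraints as successive filters over the five asset types. The thresholds are taken from the budget and timeline bands above and the research section's $25,000 floor; the parameter names and the simplified expertise and landscape checks are illustrative, not an exhaustive model.

    ASSET_TYPES = {"framework", "opinion", "research", "calculator", "template"}

    def viable_asset_types(budget_per_asset: float,
                           days_to_first_citation: int,
                           has_named_expert: bool,
                           has_research_infrastructure: bool,
                           category_has_dominant_framework: bool) -> set:
        viable = set(ASSET_TYPES)

        # Constraint 1: budget rules out options first.
        if budget_per_asset < 25_000:
            viable.discard("research")      # low-power research is worse than none
        if budget_per_asset < 15_000:
            viable.discard("calculator")

        # Constraint 2: timeline narrows the budget-feasible set.
        if days_to_first_citation <= 30:
            viable &= {"opinion"}
        elif days_to_first_citation <= 90:
            viable &= {"opinion", "framework", "template"}

        # Constraint 3: expertise fit.
        if not has_named_expert:
            viable -= {"opinion", "framework"}
        if not has_research_infrastructure:
            viable.discard("research")

        # Constraint 4: competitive landscape.
        if category_has_dominant_framework:
            viable.discard("framework")

        return viable

    # Example: $12,000 per asset, 90-day citation target, named founder,
    # no research operation, no incumbent framework in the category.
    print(viable_asset_types(12_000, 90, True, False, False))
    # {'framework', 'opinion', 'template'} (sets are unordered)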

Platform-Specific Considerations

AI systems behave differently. The asset-type choice should reflect which platforms matter most to the brand's audience.

  • ChatGPT. Favors structured listicles, comparison tables, FAQs, and step-by-step content. AirOps March 2026 data found lists and tables appear in nearly 80% of ChatGPT citations vs 29% in Google's top results. Asset types with list-heavy structure (templates, frameworks with component breakdowns, calculators with comparison views) over-index.
  • Claude. Weights academic citations and source diversity. Research with a documented method, opinion with a credible author entity, and frameworks with named author attribution all over-index.
  • Perplexity. Heavy weight on Reddit, community discussion, and recent news. Opinion shared through community surfaces, reactive opinion on recent events, and frameworks talked about in active Reddit threads over-index.
  • Gemini. Pulls mostly from Google-indexed content with strong SEO signals. Because AI search is an evolution of SEO rather than a separate discipline, assets that earn organic Google rankings also earn Gemini citations; the same crawlable, authoritative work feeds both.
  • Google AI Overviews. 97% of AI Overviews cite at least one source from the top 20 organic results (seoClarity, February 2025). Asset types that rank well organically (research with backlinks, calculators with utility, in-depth frameworks with topical authority) over-index.
  • Microsoft Copilot. Pulls mostly from LinkedIn, Microsoft-indexed enterprise sources, and Bing-ranked content. Opinion shared through LinkedIn over-indexes.

Most teams optimize for a portfolio. The citation spread across platforms is wide enough that single-platform optimization is rare.

Industry Variants

Asset-type winners vary by industry. Ben Wills's March 2026 research (145 industries, 1,595 personas, 105,000+ LLM prompts) surfaced industry-specific signal patterns that guide asset selection.

  • Wikidata-dominant categories. Accounting software, baby care brands, budget hotel chains, CRM software. Reward research and frameworks because both produce the entity-level claims coded into Wikidata and Wikipedia.
  • SE-outbound-link-dominant categories. Agricultural equipment, B2B marketing data providers, beauty and cosmetics retail, beer brands, bottled water. Reward templates and calculators that get listed across third-party sites.
  • Wikipedia-citation-dominant categories. CRM software (ρ=0.577). Reward research and frameworks with citation density high enough to merit Wikipedia inclusion.
  • Harmonic-centrality-dominant categories. Affiliate marketing networks (ρ=0.577), auto insurance, brokerage and wealth management apps. Reward research with downloadable data and tools with embed-friendly result URLs.
  • Backlink-count-dominant categories. Car rental brands. Favor calculators and templates that earn organic backlinks through utility.
  • Best-search-rank-dominant categories. Most industries fall here at moderate correlation. Research and frameworks that produce ranking-friendly canonical pages are the defaults.

For asset-type selection, check your category against the Wills correlations. Find the dominant signal type. Bias your asset choice toward types that produce that signal.

Three Worked Decisions

Mid-Market B2B SaaS at $50,000 Annual Budget

Project-management category. Default plan was a single $40,000 industry research report.

Constraint review. Budget: all five asset types viable if spread across a portfolio. Timeline: the board reviewed AI visibility quarterly. Anything slower than 90-day citation traction was a problem. Expertise: founder had 12 years operator experience. The team had no in-house research operation. Competitive landscape: the category had one annual analyst report from a larger competitor. Another study would not move citation share.

Revised portfolio. Four opinion pieces from the founder ($8,000 total), one named framework with hub-and-spoke playbook ($12,000), one ROI calculator ($22,000), and one quarterly template release ($8,000 across four releases). Total: $50,000 across nine assets instead of one.

Twelve-month outcome. Nine distinct citable assets. Opinion content cited in 17 third-party articles. Framework adopted by two analyst firms and one industry publication. Calculator embedded on six partner sites. Templates downloaded over 14,000 times. AI citation share in the category went from 4% to 21%.

Professional Services Firm at $25,000 Annual Budget

Boutique M&A advisory firm. Default plan was contributed pieces in industry publications with no canonical owned asset.

Constraint review. Budget: research ruled out. Timeline: 12-month patience available. Expertise: senior partners had 20+ years operator experience and deep deal data. No analytical or engineering capacity. Competitive landscape: the category had no named M&A advisory frameworks. Deal-data benchmarks were behind paywalls at incumbent research firms.

Revised portfolio. One named M&A advisory framework ($10,000), eight opinion pieces from senior partners ($8,000), three quarterly templates ($5,000), and light operator-network research via 15 expert interviews ($2,000). Total: $25,000 across thirteen assets.

Twelve-month outcome. Framework cited in three industry publications and one analyst report. Opinion pieces cited in 22 third-party articles. Due-diligence template downloaded 6,200 times. AI citation share for category-specific queries grew from 1% to 14%.

Enterprise Brand at $200,000 Annual Budget

Enterprise cloud-infrastructure vendor. Default plan was an annual research report executed by a research firm at $150,000.

Constraint review. Budget: all asset types fully available. Timeline: annual research cycle workable. Expertise: in-house research team plus partnerships with two industry associations and one academic institution. Competitive landscape: the category had three set research reports (Gartner, IDC, one peer vendor), one widely-cited framework, and weak calculator coverage.

Revised portfolio. Annual flagship research with academic partnership ($85,000), four quarterly micro-research releases ($40,000), one new named framework for an emerging sub-category ($25,000), three calculators (TCO, capacity sizing, migration ROI; $30,000), and monthly opinion from the CTO and three SVPs ($20,000). Total: $200,000 across about fifty distinct citable assets.

Twelve-month outcome. Flagship research set the new sub-category benchmark. The new framework was adopted by two analyst firms within nine months. Calculators were embedded on partner sites. Per-leader opinion built six new citable operator entities. AI citation share for cloud-infrastructure queries grew from 18% to 42%.

The Selection Workshop

Asset selection is most reliable when the team runs a structured workshop, not when it gets decided in a planning meeting. The workshop is 90 minutes with the executive sponsor, the operator who will produce the content, and one or two context stakeholders.

Pre-workshop data to gather (2 to 5 days before the session).

  • Yearly Evidence budget (or quarterly if planning runs on a shorter cycle).
  • Current AI citation baseline for the category, measured via Profound, Peec AI, or Semrush AI Toolkit.
  • Competitive map: top three competitors and the assets they get cited for.
  • Wills industry data for your category from the LLM Ranking Factors research.
  • Operator-network audit: named experts with the credibility to carry content.

The 90-minute workshop agenda.

  1. Minutes 0 to 10. Review the pre-workshop data. Confirm constraints (budget, timeline, expertise, competitive landscape).
  2. Minutes 10 to 30. Work the four-constraint decision framework. Rule out asset types blocked by binding constraints.
  3. Minutes 30 to 50. Build the portfolio. Draft each asset (topic, format, author, timeline, hard cost). Multiple candidate portfolios are fine at this stage.
  4. Minutes 50 to 70. Pressure-test each portfolio against three questions: does it fit the budget; does it produce citations on the required timeline; can the team run it with current resources.
  5. Minutes 70 to 85. Pick the portfolio. Write down the decision and the constraint reasoning.
  6. Minutes 85 to 90. Assign owners and confirm the next-90-day execution plan.

Post-workshop deliverables (within 48 hours). One-page portfolio summary with asset list, owners, budgets, and timelines. Decision document with the constraint analysis. Calendar of asset publish dates. Measurement plan per Chapter 13.

Common Selection Mistakes

1. "Let's do a survey." The most common wrong default. Research feels rigorous and produces visible deliverables. Counter-test: would four opinion pieces, one framework, and one calculator at the same total budget produce more citation surface area? If yes, the default is wrong.

2. "Let's build a tool." The second most common wrong default. Engineering-led organizations fall into it most. Counter-test: name the specific quantitative query the calculator answers. If the answer is vague, the calculator will struggle to earn citations.

3. One big asset versus a portfolio. Teams default to one large asset and put the full budget there. The portfolio reframe almost always wins on citation density per dollar. Counter-test: have you compared the one-big-asset plan to a portfolio plan at the same budget?

4. Building frameworks in saturated categories. Teams keen on thought leadership try to build frameworks where one or two named incumbents already exist. Counter-test: name the top one or two frameworks in your category. If you cannot clearly stand apart, pick a different asset type.

5. Choosing based on team comfort rather than audience need. Writers prefer opinion. Researchers prefer research. Engineers prefer calculators. Counter-test: if a different team owned this decision, would the answer change?

6. Ignoring industry-specific signal patterns. Most teams pick asset types based on general best practice. They miss that asset preferences vary a lot by industry. Counter-test: have you checked the Wills industry correlations for your category and weighted your portfolio to match?

7. Skipping the promotion plan. Teams budget for asset development and forget promotion. Counter-test: for every dollar budgeted for asset development, is there a matching dollar (or operator time) budgeted for distribution per Chapter 3 (Third-Party Corroboration)?

The Asset Refresh Cadence Calendar

Published assets decay without refresh. The decay rate varies by asset type. The Asset Refresh Cadence Calendar gives operators a per-type schedule for when to update each asset. Programs running on a single quarterly refresh cycle waste effort on assets that do not need it and skip refreshes for assets that decay fast.

  • Frameworks: 18 to 24 months between refreshes. Vocabulary drift is the trigger. Categories evolve their language. Framework terms become stale or misleading. The refresh updates examples, adds new sub-components if the framework's scope expanded, and replaces case studies that no longer reflect current category dynamics.
  • Opinion: 6 to 12 months for evergreen pieces. Counter-claim erosion is the trigger. The contrarian view that was sharp in year 1 may become consensus by year 2. The refresh either re-sharpens the claim against the new consensus or pivots to a new contrarian angle. Reactive opinion pieces (published within 24 to 72 hours of a trigger event) do not refresh. They retire.
  • Research: 12 months between full refreshes. Data staleness is the trigger. Survey-based research carries a 12-month freshness window before journalists, analysts, and AI systems weight it less. Annual refresh is the working cadence. Panel research with a large enough sample can refresh on a quarterly micro-release cycle (one new data cut per quarter) without redoing the full study.
  • Calculators: 3 to 6 months between benchmark refreshes. Benchmark data, pricing assumptions, and reference rates change. The Chime case (citations tripled in four weeks after the refresh program was standardized, per the April 2026 public data) shows the lift quarterly refreshes drive. Skip the refresh and the citation curve flattens.
  • Templates: 12 months for year-stamped templates. Year-stamp decay is the trigger. A "Q1 2026 Marketing Plan Template" stops being useful at the end of 2026. The refresh adds the new year, updates field labels, refreshes example data, and bumps the "last updated" date. Templates without year stamps refresh on an 18 to 24 month cycle similar to frameworks.

The calendar drives quarterly planning. At the start of each quarter, audit which assets are due for refresh against the cadence. Schedule the operator and coordinator time. Treat refresh work as part of the Evidence budget, not as discretionary work that gets pushed. Refresh velocity (the share of assets refreshed on time per quarter) is itself a tracking metric. Programs at 80%+ refresh velocity see steady citation share. Programs below 60% see citation share decline within 12 months of consistent under-refresh.
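
The quarterly audit and the refresh-velocity metric are simple to script. The sketch below uses one representative interval per asset type from the ranges above; the Asset dataclass, field names, and the worked dates are illustrative.

    from dataclasses import dataclass
    from datetime import date

    # Representative refresh intervals in months, drawn from the cadence ranges above.
    REFRESH_INTERVAL_MONTHS = {
        "framework": 24,
        "opinion": 12,      # evergreen pieces; reactive opinion retires instead
        "research": 12,
        "calculator": 6,
        "template": 12,     # year-stamped templates
    }

    @dataclass
    class Asset:
        name: str
        asset_type: str
        last_refreshed: date

    def months_between(earlier: date, later: date) -> int:
        return (later.year - earlier.year) * 12 + (later.month - earlier.month)

    def due_for_refresh(assets: list, today: date) -> list:
        return [a for a in assets
                if months_between(a.last_refreshed, today)
                >= REFRESH_INTERVAL_MONTHS[a.asset_type]]

    def refresh_velocity(refreshed_on_time: int, due_this_quarter: int) -> float:
        # Share of due assets refreshed on time; 0.8+ is the healthy band cited above.
        return refreshed_on_time / due_this_quarter if due_this_quarter else 1.0

    assets = [
        Asset("ROI calculator", "calculator", date(2025, 9, 1)),
        Asset("Category framework page", "framework", date(2024, 5, 1)),
    ]
    print([a.name for a in due_for_refresh(assets, date(2026, 4, 1))])
    # ['ROI calculator'] - past its 6-month window; the framework is not yet due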

The Asset Retirement Decision Framework

Not every asset deserves a refresh. Some assets should retire. The Asset Retirement Decision Framework uses four signals to distinguish a refresh-and-extend asset from a retire-and-replace asset. Continue refreshing assets that pass the framework. Retire assets that fail. Refreshing a retire-grade asset wastes operator time that should go to new asset development.

  • Signal 1: Citation share trend. The asset's primary topic citation share dropped 30% or more over the trailing 12 months. The drop reflects category evolution that the asset cannot recover through refresh alone.
  • Signal 2: Co-citation velocity decay. The Co-Citation Velocity Score per Chapter 3's framework has fallen below 0.5 of the asset's baseline. The asset is no longer being referenced by third parties at the rate that drives compounding.
  • Signal 3: Competitive displacement. A competitor published an asset in the last 6 months with materially better data, method, or format. The displacement is visible in AI citation patterns. The competitive asset now occupies the citation slot the original asset used to hold.
  • Signal 4: Operator credibility gap. The operator can no longer defend the asset's claims due to category evolution. Common cases: the data assumption changed, the framework's components do not match current category dynamics, or the operator's view has matured past the asset's original framing.

The decision logic:

  • Three or four signals triggered. Retire the asset. Replace it with a new asset using a refreshed approach. 301-redirect the original URL to the replacement if AI systems are still citing it.
  • Two signals triggered. Run a deep refresh attempt. Add a new section that addresses the strongest of the triggered signals. Re-measure 90 days post-refresh. If at least one signal flipped to positive, continue maintaining. If not, retire.
  • One signal triggered. Monitor. Address the specific signal in the next scheduled refresh.
  • Zero signals triggered. The asset is healthy. Continue the standard refresh cadence.

Most programs over-maintain underperforming assets. The sunk-cost effect is strong with assets the operator built personally. The retirement framework converts the decision into a data-driven check. It frees operator time for new asset development, which has higher expected citation yield than refreshing a stalled asset.
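
The signal-count logic above reduces to a short decision function. A minimal sketch, with the four signals as boolean inputs and the action labels paraphrased from the decision list; the naming is illustrative.

    def retirement_decision(citation_share_down_30pct: bool,
                            co_citation_velocity_below_half_baseline: bool,
                            competitively_displaced: bool,
                            operator_credibility_gap: bool) -> str:
        triggered = sum([citation_share_down_30pct,
                         co_citation_velocity_below_half_baseline,
                         competitively_displaced,
                         operator_credibility_gap])
        if triggered >= 3:
            return "retire: replace the asset and 301-redirect the old URL"
        if triggered == 2:
            return "deep refresh: address the strongest signal, re-measure at 90 days"
        if triggered == 1:
            return "monitor: fold the fix into the next scheduled refresh"
        return "healthy: continue the standard refresh cadence"

    print(retirement_decision(True, True, False, False))
    # deep refresh: address the strongest signal, re-measure at 90 days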

Questions & Answers

How do I choose between a framework and a calculator if budget allows both? Build the framework first. Frameworks earn citations by becoming vocabulary that travels by reference. Calculators earn citations by becoming utilities that earn URLs. The framework sets the category vocabulary that the calculator then puts into use.

What if our category does not have established asset types? Default to expert opinion and frameworks. Categories without set asset types are categories where vocabulary is missing. Opinion and frameworks both supply vocabulary. Research and calculators need category context that does not yet exist.

Can we mix asset types in one launch? Yes, and this is usually better than betting one large asset against the same budget. A $50,000 budget spread across opinion, frameworks, calculators, and templates produces more citation surface area than one $50,000 research study.

Does the right asset type change if we are optimizing for ChatGPT vs Perplexity vs AIO? Yes. ChatGPT favors structured listicles. Perplexity weights Reddit. AIO leans on top-20 organic. Copilot pulls from LinkedIn. Most teams optimize for a portfolio. The citation spread across platforms is wide enough that single-platform optimization is rare.

How long until I see citation traction after publishing my first original asset? Opinion: 30 to 90 days. Frameworks: 60 to 180 days. Templates: 30 to 120 days. Calculators: 60 to 180 days. Research: 180 to 360 days. The variance is mostly a function of co-citation velocity.

Should we gate our research, calculator, or framework? No, with one exception. AI systems cannot fill out forms. Gated content is invisible to AI Search. The exception is enhanced versions of an ungated base asset. Give basic results ungated. Gate downloadable enhanced reports.

What is the minimum viable budget to start an Evidence program? $5,000 per quarter is the working floor. At that level, the real portfolio is one opinion piece per month plus one framework or template per quarter.

Can opinion pieces really compete with research for citations? Yes, when the operator credibility is set and the claim is specific. AirOps's March 2026 analysis found AI systems cite opinion-based content at rates close to empirical research when the source has verified expertise and the claim is specific enough to pull out as a discrete statement.
