AI gathers signals about a brand across every surface it can reach. Sources include owned-domain content, third-party coverage, community talk, review sites, social posts, podcast transcripts, conference talks, and analyst reports. The model's response is its best attempt to reconcile what those surfaces say. When the surfaces agree, the picture is sharp. When they disagree, the picture is fuzzy or wrong. Narrative coherence and sentiment management work as one motion. AI treats them as one signal. This chapter covers the narrative coherence work across owned and third-party surfaces. It also covers active sentiment management, the crisis-response pattern, narrative drift detection during brand change, and worked examples. The examples show brands that kept entity coherence through inflection events.
Why This Technique Matters
Chapter 10 builds the entity. This chapter protects it once built. The split matters. Brand trust can build for years. It can erode in months when narrative drift or a crisis hits unmanaged. Mature brands invest as much in protection as in building. The downside is higher when protection is dropped.
AI makes the work easy to measure in a way old-school marketing never was. Brands now know whether AI is describing them well. Just ask. Run 20 to 50 queries on the major AI tools. The queries cover the brand, its products, its category, its rivals, and its named experts. Then read what the model says. The output is the test. The work is closing the gap between what the model says and what the brand wants it to say.
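A minimal sketch of one sampling run follows, assuming the OpenAI Python client as a single example provider. The query list, model name, and output file are placeholders; a real program would repeat the loop across each AI tool it tracks.

```python
# A minimal sketch of a brand-query sampling run against one AI tool.
# The query list, model name, and output path are placeholders; swap in
# the brand's own query set and the providers your program tracks.
import json
from datetime import date, datetime, timezone

from openai import OpenAI  # pip install openai

QUERIES = [
    "What does ExampleBrand do?",                          # hypothetical queries
    "Who are the main competitors to ExampleBrand?",
    "Is ExampleBrand a good marketing intelligence platform?",
]

client = OpenAI()  # reads OPENAI_API_KEY from the environment
samples = []
for query in QUERIES:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": query}],
    )
    samples.append({
        "query": query,
        "answer": response.choices[0].message.content,
        "captured_at": datetime.now(timezone.utc).isoformat(),
    })

# Store each run so quarter-over-quarter drift can be compared later.
with open(f"brand-samples-{date.today()}.json", "w") as f:
    json.dump(samples, f, indent=2)
```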
The mechanism that makes this work load-bearing is multi-surface aggregation. AI gathers from every surface at retrieval. Picture a brand whose owned-domain narrative is sharp but whose Trustpilot reviews trend negative. The two surfaces send mixed signals. The model pulls from both. It then produces a hedged or negative answer. The fix is not just to clean up the owned-domain narrative. It is to align the sentiment trend across the surfaces AI pulls from. Reputation and narrative work are one job.
This work also matters because crises now spread through AI faster than through old search. A Reddit thread with traction, an X thread, and a Glassdoor pattern can combine into a negative story. The model pulls it and uses it across thousands of brand queries. The crisis-response window shrank from weeks to hours. The discipline for managing these events has to match the new tempo.
The Narrative Coherence Discipline
Narrative coherence is the brand's story about itself, its products, its skill, and its category. The story has to match across every surface AI pulls from. This work runs at three levels.
Brand-Level Narrative
This level covers the brand's one-sentence pitch, category framing, founding story, and value claim. These should match across the website, LinkedIn page, Crunchbase profile, Wikidata record, Wikipedia article (where it exists), contributed pieces, and any third-party coverage the brand can shape.
The work is operational. Keep a brand narrative document. It covers the canonical one-sentence pitch, the 30-word elevator pitch, the 100-word company overview, and the 250-word boilerplate. The boilerplate goes into contributed pieces and press materials. The document is the source of truth. Every surface should match it. Wording can vary. Substance should not. When surfaces vary in substance, you get narrative drift. AI then pulls that drift into mixed signals.
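One way to keep the document usable by later audits and scripts is to store it as structured data. A minimal sketch, with hypothetical field names and content; the format is illustrative, not prescribed.

```python
# A minimal sketch of the canonical brand narrative document as structured
# data. Field names and content are illustrative; the point is a single
# machine-readable source of truth that audits and scripts can check against.
import json

BRAND_NARRATIVE = {
    "version": "2025-Q1",
    "one_sentence_pitch": "ExampleBrand is a marketing intelligence platform "
                          "for mid-market B2B teams.",   # hypothetical
    "elevator_pitch_30_words": "...",
    "company_overview_100_words": "...",
    "boilerplate_250_words": "...",
    "category": "marketing intelligence platform",
    "value_claim": "turns scattered channel data into revenue decisions",
}

with open("brand-narrative.json", "w") as f:
    json.dump(BRAND_NARRATIVE, f, indent=2)
```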
Product and Service Narrative
Each product or service entity (covered in Chapter 10) has its own narrative. It covers what the offering does, who it serves, how it differs from rivals, and what outcomes customers achieve. The story has to match across the product page, the help docs, the sales material, third-party reviews (where the brand engages), and contributed pieces that mention the offering.
This work mirrors the brand-level work but is more granular. Each product gets its own narrative document. The cadence is tighter because products change faster than brands. Feature launches, pricing updates, and positioning shifts all trigger narrative updates. These updates should reach all surfaces in the same quarter as the product change.
Expert Narrative
Each named-expert Person entity has a personal narrative. It covers background, role, expertise area, and the topics they speak on with authority. The story has to match across the expert's bio page on owned domain, LinkedIn profile, conference speaker bios, contributed piece bylines, and podcast guest profiles.
Expert narrative also covers the topics the expert claims authority on. Drift here is common. The expert's interests change faster than the formal narrative. Take an operator who has been speaking on AI Search Optimization for two years. If the LinkedIn bio still lists prior PPC expertise, the story does not match. AI then mirrors the drift in mis-attributed citations.
The Active Sentiment Management Layer
Narrative coherence aims for a positive base. Sentiment management protects that base from the noise and negative signal that builds up over time. The work has four parts.
Review-Platform Sentiment
The review-platform work from Chapter 1 sets a sentiment base that AI reads. Sustained 4-plus-star ratings with active brand replies to negative reviews send a positive signal. AI then folds that into its answers. Falling ratings, unanswered negative reviews, or below-3-star averages send a negative signal. AI surfaces that in brand queries.
The upkeep is operational. Keep the review-acceleration motion going (Chapter 1). Reply to negative reviews within 48 hours. Watch for new sites entering the brand's category. Some programs win the review work at launch and then drop it. Their sentiment base erodes over 12 to 24 months. New reviews then skew toward problem-focused posts.
Community-Surface Sentiment
The community work from Chapter 2 builds sentiment on Reddit, Quora, Hacker News, and industry forums. Community sentiment weighs heavily in Perplexity citation share, and it also feeds ChatGPT and AIO. Sentiment management here looks like steady named-operator participation, replies to negative threads under a brand or operator handle within 48 hours, and direct fixes for wrong claims about the brand.
Community work is the hardest sentiment surface to manage. The sites move fast. The cultural norms punish heavy brand replies. The 90-10 rule from Chapter 2 applies. The brand has to be a member of the community before it can manage sentiment there.
Social-Surface Sentiment
LinkedIn, X, Bluesky, and other social sites send a sentiment signal that AI reads for some queries. Copilot weights LinkedIn heavily. The work here looks like named-operator replies to industry talk, replies to brand mentions when they need a fix, and active narrative input that shapes how others talk about the brand.
Social sentiment is more volatile than review or community sentiment. The sites have shorter content half-lives. A single LinkedIn thread can drive the brand's social sentiment signal for a few weeks. Then it vanishes. The management cadence has to match the site's tempo. Brands that check social only weekly miss most of the opportunity.
Search-Surface Sentiment
The Google and Bing search pages themselves are a sentiment surface. Brand-name queries surface a mix of results that adds up to a signal. The mix has the brand's own pages, review-site pages, news mentions, community references, and complaints where present. AI Overviews and AI search summaries read this surface and pull from it. Brands with negative results filling their brand-name SERP face an uphill fight in AI Search. The model's first pass surfaces the negative content.
The work here is old-school SEO with a sentiment overlay. Own as much of the brand-name SERP as you can. That means the homepage, product pages, About page, founder bios, key resource pages, social profiles, and key review sites in good standing. Suppress weak negative results with a positive-content surge. Address real negative content head-on through correction outreach or reply posts.
The Crisis-Response Pattern
Crises now spread through AI faster than through old search. The response window shrinks to hours, not days. The pattern has four steps, all within 72 hours of the trigger event.
Step 1: Document the Specific Incorrect Content
Within the first few hours, write down what the AI is surfacing that needs a fix. Screenshot the responses from ChatGPT, Claude, Perplexity, and Google AI Overviews. Capture the queries that draw out the wrong story. Save the source citations the AI listed. Find the third-party content the AI is pulling from. These are the source URLs the model cited. The notes become the brief for the rest of the response.
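A minimal sketch of the brief as structured data, one record per tool-and-query pair. The field names, queries, and URLs are hypothetical; the point is that remediation and recovery monitoring can then target specific cited sources.

```python
# A minimal sketch of the crisis documentation brief as structured data.
# Field names, queries, and URLs are illustrative; the goal is one record
# per tool-and-query pair so remediation targets specific upstream sources.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class CrisisSample:
    tool: str                 # "ChatGPT", "Claude", "Perplexity", "Google AIO"
    query: str                # the query that draws out the wrong story
    response_excerpt: str     # the incorrect claim, quoted
    screenshot_path: str      # where the screenshot is stored
    cited_sources: list[str] = field(default_factory=list)  # upstream URLs
    captured_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


brief = [
    CrisisSample(
        tool="Perplexity",
        query="Does ExampleBrand use deceptive pricing?",        # hypothetical
        response_excerpt="...claims hidden fees at renewal...",
        screenshot_path="crisis/perplexity-pricing-01.png",
        cited_sources=["https://reddit.com/r/example/thread"],   # placeholder
    ),
]

with open("crisis-brief.json", "w") as f:
    json.dump([asdict(s) for s in brief], f, indent=2)
```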
Most brands skip this step. They react to a vague sense of "AI is saying bad things about us." A vague sense makes a vague response. It does not target the real source. A documented brief makes a focused response. One that can be tested for results.
Step 2: Remediate Upstream Sources
The third-party content the AI is pulling from is the upstream source. Put your work there first. Send correction requests to the site hosting the wrong content. Post replies on community sites where the negative story is spreading. Send factually-correct contributed pieces to industry sites that compete for the same citation surface.
Upstream work is the highest-leverage step. Fixing the source fixes the downstream pull. Owned-domain fixes without upstream work produce a confused story. The AI pulls both the original wrong source and the brand's fix. The model then gives a hedged answer. Upstream fixes collapse the answer to match the fixed source.
Step 3: Publish Authoritative Correction on Owned Domain
Within 24 to 48 hours, publish factually-correct content on owned domain. The content has to address the wrong claims head-on. It carries proper schema, a named-author byline from a known operator, evidence and citations that back the fix, and clear words AI can extract cleanly. Send the new content via IndexNow (Chapter 12) for fast index pickup.
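A minimal sketch of the IndexNow push for the correction page, using the protocol's bulk-submission endpoint. The host, key, and URLs are placeholders, and the key file must already be hosted at the stated keyLocation.

```python
# A minimal sketch of notifying search indexes of the correction page via
# the IndexNow bulk-submission endpoint. Host, key, and URLs are placeholders;
# the key file must already be reachable at the keyLocation URL.
import requests  # pip install requests

payload = {
    "host": "www.example.com",
    "key": "abc123placeholderkey",
    "keyLocation": "https://www.example.com/abc123placeholderkey.txt",
    "urlList": [
        "https://www.example.com/pricing-faq",         # the correction page
        "https://www.example.com/newsroom/statement",  # supporting content
    ],
}

resp = requests.post(
    "https://api.indexnow.org/indexnow",
    json=payload,
    headers={"Content-Type": "application/json; charset=utf-8"},
    timeout=10,
)
resp.raise_for_status()  # 200 or 202 means the submission was accepted
```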
Owned-domain fixes alone do not solve the issue. Upstream matters more. But the fix gives AI a counter-signal to pull from when building answers. Without that counter-signal, even good upstream fixes leave the index in flux while the correction spreads.
Step 4: Monitor Recovery
The recovery curve runs 30 to 90 days. Citation share for the affected queries returns to baseline over time. Indexes update. The fix spreads. Monthly cadence reports (Chapter 13) track the recovery. Some teams call the crisis "done" after the first response. They stop watching. Months later they often find the issue partly came back. Or it stuck around on certain sites or queries.
Recovery monitoring is also where lessons get learned. The post-event review runs about 2 weeks after the trigger event. It finds what went wrong upstream, what the brand could have caught sooner, and what process changes block similar events. The work matters. Crises tend to repeat in patterns. A brand that handles the first event with rigor is better set up for the second.
Narrative-Drift Detection
Narrative drift is the slow split between narrative surfaces. It shows up when the brand changes faster than its surfaces. At the retrieval layer this surfaces as semantic-relationship drift: the associations the model holds between the brand and its category, claims, and experts loosen as the surfaces stop telling one story. To catch drift, run audits on a set cadence.
The Quarterly Narrative Audit
Each quarter, the program runs a narrative audit. The audit covers five surface types. First, owned-domain narrative: homepage, About page, key product pages, key bio pages. Second, third-party-described narrative: Wikipedia article where present, Wikidata, Crunchbase, key analyst reports, recent media coverage. Third, social narrative: LinkedIn company page, executive LinkedIn bios, X profile. Fourth, review-platform narrative: G2 profile description, Clutch profile, key category-specific sites. Fifth, AI-retrieved narrative. For the fifth, run 10 to 20 sample queries across major AI tools and write down what the model says.
The audit checks the five sources against the canonical brand narrative document. Flag the drift. Which surfaces describe the brand off the canonical narrative? Which AI pulls give a mixed story? What content needs refresh to close the gap? The audit takes 4 to 8 hours per quarter for mid-market brands.
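One way to make the drift flagging less subjective is to score each surface's brand description against the canonical pitch with sentence embeddings. A minimal sketch, assuming the sentence-transformers library, hypothetical surface copy, and an illustrative 0.75 threshold.

```python
# A minimal sketch of scoring each surface's brand description against the
# canonical pitch with sentence embeddings. The model choice, surface copy,
# and 0.75 threshold are illustrative assumptions, not prescribed values.
from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

CANONICAL = ("ExampleBrand is a marketing intelligence platform "
             "for mid-market B2B teams.")  # hypothetical canonical pitch

surfaces = {
    "homepage": "ExampleBrand turns channel data into marketing intelligence.",
    "crunchbase": "ExampleBrand is a social media analytics tool.",   # stale
    "g2_profile": "Marketing intelligence platform for B2B marketers.",
}

model = SentenceTransformer("all-MiniLM-L6-v2")
canonical_vec = model.encode(CANONICAL, convert_to_tensor=True)

for name, copy in surfaces.items():
    score = util.cos_sim(canonical_vec, model.encode(copy, convert_to_tensor=True)).item()
    flag = "DRIFT" if score < 0.75 else "ok"
    print(f"{name:<12} similarity={score:.2f}  {flag}")
```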
Brand-Evolution Triggers
Certain events trigger out-of-cycle narrative audits. Product launches or major feature releases. Repositioning or category-shift moves. Executive transitions: CEO change, new CMO, or the exit of a publishing expert. Funding events. Mergers and acquisitions. Major partnership news. Each trigger calls for a narrative review across the surfaces above. Targeted updates then bring the narrative in line with the new reality.
Build narrative impact review into launch planning. Before a product launch, the team checks which owned-domain pages need refresh, which third-party surfaces need outreach, which contributed pieces are now outdated, and what AI samples should be tested post-launch. Brands that skip this step often find six months later that the AI still describes them in pre-launch terms.
Owned and Third-Party Narrative Alignment
Owned-domain narrative is fully under the brand's control. Third-party narrative is shaped through Chapter 3 contributed pieces, Chapter 2 community work, and the analyst-relations work in Chapter 1. Alignment is reachable but it takes deliberate work.
Here is the high-leverage pattern. Every contributed piece the brand publishes uses brand and product wording that matches the source-of-truth document. Every community reply uses the same terms and positioning. Every analyst briefing points to the same framework, the same value claim, the same category framing. The payoff over 18 to 24 months is that third-party sources adopt the brand's wording on their own, because the brand has repeated it at every touchpoint.
The pattern fails when contributed pieces vary the positioning. One piece says "we are a marketing automation platform." Another says "we are a revenue intelligence solution." It also fails when community replies disagree with owned-domain claims. Or when analyst briefings carry mixed stories. AI then aggregates the gap and gives hedged answers to brand queries.
Worked Examples
A mid-market B2B SaaS partner repositioned from "social media analytics" to "marketing intelligence platform." The company had moved into broader analytics. The repositioning took 6 weeks. The narrative-alignment work took 9 months.
Pattern. Updated the canonical brand narrative document. Refreshed homepage, About page, and key product pages in the first 2 weeks. Audited all 87 published contributed pieces for outdated positioning. Reissued or marked up the top 23 most-cited ones with notes. Outreach to industry sites to update author bios. Wikidata record updated. New Wikipedia article submission. The first one had been rejected for notability. The new positioning plus fresh analyst coverage met the bar on the second try. All review-site profiles updated to the new positioning.
Outcome at 9 months. AI samples described the brand as marketing intelligence, not social media analytics. The 12-month-old contributed pieces still surfaced at times. But the new notes gave context. Citation share for the new category queries grew from 1% to 14%. Citation share for the old category queries dropped to a steady 7%. The brand decided that was acceptable residual recognition.
A consumer-services brand had a viral Reddit thread claiming deceptive pricing. The thread reached r/all within 36 hours. It drew 4,200+ comments mostly backing the original critique. AI began surfacing the thread in brand queries on Perplexity and ChatGPT within 5 days.
Response pattern. Wrote down the specific wrong pricing claims in the first 6 hours. The CEO had a built-up Reddit presence at 1,400 karma. As a named expert, she posted a full reply on the thread within 18 hours. The post owned up to the places where the brand's pricing wording had been unclear. It fixed the factual errors in the original post. It also pledged doc improvements with a date. The reply was upvoted to 2,100 net and pinned by mods. Within 48 hours, the brand published a full pricing FAQ page on owned domain. The page had FAQPage schema and IndexNow push. Outreach to two major consumer-protection sites then drew updated coverage. The new coverage cited both the original concern and the brand's reply.
Outcome at 90 days. AI for pricing queries about the brand surfaced both the original concern and the brand's reply in a balanced way. The CEO's Reddit handle gained 3,800 karma during the crisis from the high-quality reply. That built up Person-entity recognition. The brand's owned-domain pricing FAQ became a citable source for AI Search answers about category pricing practices broadly. Not just the brand's own pricing.
A B2B services agency ran the Quarterly Narrative Audit pattern from this chapter. In Q3 of year 2, the audit surfaced real drift. The founding partner's LinkedIn bio still listed prior PPC expertise from before the firm's pivot to SEO. Three contributed pieces published 14 to 18 months prior carried outdated positioning. Two industry directory profiles described the firm in pre-pivot terms. AI samples gave mixed positioning. "SEO firm" in 60% of samples. "PPC firm" in 25%. "Digital marketing agency" in 15%.
Drift remediation. Updated the founder's LinkedIn bio and X profile in the first week. Outreach to the two industry directories for profile fixes. Added editor notes to the three outdated contributed pieces pointing to the current positioning. Published two new contributed pieces to back the SEO-firm positioning and add fresh canonical signal.
Outcome at 90 days. AI samples shifted to 90% describing the firm as an SEO firm. The other 10% described it as digital marketing. The firm decided that was acceptable. The agency does run a smaller paid-media practice alongside SEO. The early catch avoided a year-long drift that would have led to a major narrative-realignment job later. The quarterly audit cost about 5 hours per audit. The work it surfaced was much cheaper to fix when caught early than when allowed to compound.
Brand profile. A Series-D B2B SaaS company at $85M ARR. The founding CEO had spent 4 years building her Person entity to deep trust. The signals: a Wikipedia article, an industry voice on the company's core topic, 80-plus contributed pieces in major industry sites, steady podcast spots, and steady conference talks. The board chose to bring in a new CEO from outside to scale the company into its next growth phase. The founding CEO would stay on the board as Executive Chair. She would also keep speaking on the topic where her trust ran deepest. This was a real scale-up call, not a forced exit. The brand wanted to keep both Person entities. It also wanted to move the current-role narrative cleanly.
Pre-transition coordination. The MERIT program lead was brought in 3 months before the public news. The early start let the program map which surfaces would need updates. The map covered the LinkedIn company page leadership section, the About page on owned domain, the Crunchbase profile, several analyst-firm CRM entries that drove how analysts described the company in reports, and the Wikipedia article. New bios were drafted for the founder's Executive Chair role and the new CEO's role. Both bios went through brand-narrative review against the canonical narrative document. The program worked with the PR team on news timing. It also worked with legal on disclosure wording and with the board chair on the transition message. The framing called the change continuity, not disruption.
The transition week. The news went out with both Person entities featured. The new CEO was joining. The founder was moving to Executive Chair. The wording made clear what each leader would own. Both bios were updated across all owned-domain surfaces the same day the news went live. The Wikidata entries were updated within 48 hours. That covered the founder's entry and the new one for the incoming CEO. LinkedIn profiles for both leaders were updated within 24 hours. The Wikipedia article about the company was updated by the brand through the edit-suggest process within 7 days. The new article about the incoming CEO was sent through the notability review soon after.
Post-transition AI retrieval sampling. The program ran AI sampling at 30, 60, and 90 days. The queries covered the company and its current leadership. The 30-day sample showed AI still tying the founder to the CEO role. This is the expected training-data lag. The program noted it in the cadence report and did not panic. The 60-day sample showed the change starting to spread. About 40% of samples named the new CEO for current-role queries. 60% still named the founder. Mixed pulls were the main pattern. The 90-day sample showed the change taking hold. About 75% of current-role queries surfaced the new CEO. The founder was still cited for past-context queries. She was also cited for the topical-expertise queries where her authority stayed the gold standard.
Defensive narrative work. Two months after the news, the founder published a contributed piece. The piece framed her move to Executive Chair and the topics she would keep speaking on. It ran in a top-tier industry site. The structure made it pull-friendly as the canonical reference for the founder's new role. The brand updated its About page to feature both leaders with clear role splits. The founder's section called her Executive Chair and topical-expert lead. The new CEO's section called her company CEO and operating leader. The PR team pitched the new CEO for podcasts and contributed pieces during months 2 and 3. The push drew about 8 solid publications in that window. It sped up the entity build for the new role.
Outcome at 12 months. The new CEO had built a known entity for current-role queries. The founder's prior entity authority gave the company a head start. The new CEO did not start from zero. The founder kept authority for the topical-expertise queries where her depth was deepest in the category. The brand entity kept compounding across both Person entities rather than splitting. AI citation share for category queries kept growing through the transition. It did not plateau or fall. That had been the board's worry going into the change. The two-Person-entity pattern proved to be a strength, not a dilution. The entities had clear topical and role anchors.
Honest caveat. This clean transition took 3 months of pre-transition work. Transitions handled reactively often produce 6 to 12 months of narrative drift. The reactive pattern: the news goes out, narrative updates happen after. The drift erodes entity authority and citation share. That happens during the period when the brand is most open to rival narrative capture. The MERIT program lead has to be part of the C-suite transition planning. Not told after the fact. Most brands learn this the costly way on the first executive transition. The second one is when the pre-transition work becomes part of the org.
The Narrative Coordination Calendar
The work this chapter describes takes org-level coordination. Ad-hoc effort does not produce it. The work covers narrative coherence across surfaces, active sentiment management, crisis response, and drift detection. What sets brands that do this well apart from brands that mean to do it well is the narrative coordination calendar. The calendar ties MERIT narrative review to recurring business events. The coordination then happens whether or not the MERIT program lead remembers to surface it. Without the calendar, the work depends on the program lead learning about narrative-affecting events through informal channels. That fails at orgs larger than 30 people.
Quarterly narrative audit. The quarterly audit covered earlier is the spine of the calendar. Run the audit on a calendar cadence. Same week of the same month each quarter. The audit owner blocks time before the quarter begins. This catches drift before it compounds across the surfaces. Some brands treat the audit as something to schedule when there is bandwidth. Not as a fixed quarterly commitment. They typically skip it the first quarter. Then the second. By the third quarter the work has fallen to once-a-year retrospective drift cleanup rather than ongoing prevention. The calendar treats the audit as a fixed recurring meeting. Not a project to schedule.
Product launch planning integration. Every product launch planning session should include a MERIT narrative impact review. The review covers four checks. Which owned-domain pages need refresh? Which third-party surfaces need outreach? Which contributed pieces are now outdated? What AI samples should be tested post-launch? The MERIT program lead joins the product launch planning meeting alongside the marketing, sales, and customer success leads. Not as an observer. As a contributor whose review outputs affect the launch readiness sign-off. The deliverable is a one-page narrative impact doc. The product team and the MERIT lead sign off on it 4 weeks before launch. The 4-week buffer matters. Some surface updates have their own lead times. They need advance work. Wikipedia, analyst-report briefings, and contributed-piece notes all fit this pattern.
Executive communication review. Any executive note that affects brand or named-expert narrative gets a MERIT narrative review before public release. Covered notes include board letters that get published or quoted, earnings call talking points (for public companies), all-hands updates that may leak or be quoted, and major customer or partner news. The coordination point is operational. Not gatekeeping. The comms team copies the MERIT program lead on drafts at least 48 hours before release. The program lead flags any drift risks against the canonical brand narrative. The comms team then folds in the feedback. Or documents the choice to deviate. The 48-hour window is short enough to fit inside normal comms cycles. It is long enough to give the review real surface area.
Pricing and positioning updates. Every change to pricing, packaging, or category positioning triggers a MERIT narrative audit. The audit updates the brand-level narrative doc to match the new positioning. It refreshes the hit owned-domain pages: pricing page, product pages, comparison pages. It also covers outreach to third-party sites with old positioning. That means G2, Capterra, Clutch, and category-specific directories where the brand has profiles. The audit refreshes Wikidata and Wikipedia entries when the positioning change is big enough to affect them. Timing matters. Changes go live no sooner than 2 weeks after the narrative audit ends. The buffer keeps the surfaces aligned before the pricing change becomes the topic of talk on community and third-party sites. It blocks the worst failure mode in pricing changes. That failure mode: the AI pulls old pricing context next to community critique of the new pricing. The model then gives a confused, negative answer.
M and A and partnership announcements. Major partnership or acquisition news affects entity attribution and narrative scope in ways that are easy to underrate. An acquisition changes the brand entity boundary. AI needs to map the acquired company's surfaces into the acquiring brand's entity. That work covers Wikidata sameAs updates, Wikipedia article merges or redirects, analyst-report briefings, and owned-domain narrative updates. A major partnership creates a new shared narrative surface. Both partners need to tell the same story across their owned and third-party surfaces. The MERIT program lead checks the news narrative before public release. The lead coordinates the updates across owned and third-party surfaces during the news window. The lead also sets up AI sampling for 30, 60, and 90 days post-news to check the move is working. Without this work, AI will often surface the pre-news entity boundaries for 12 to 18 months after the news. That erodes the citation value of the deal.
Hiring and team transitions. Senior hires, exits, and role changes affect the Person entity narrative in ways that compound across the full set of named-expert surfaces. The MERIT program lead reviews bios and news wording before public release. The lead updates schema and sameAs links across owned domain, LinkedIn, and Wikidata where it fits. The lead also works with the named expert (whether joining or leaving) and the rest of the executive team on the transition story. Outgoing senior people who were named experts get a coordinated narrative update across surfaces. The update reflects their post-exit role and clears out stale current-role attributions. Incoming senior people get a coordinated entity-build effort during their first 90 days. The effort mirrors the Chapter 10 named-expert pattern packed into the onboarding window.
Crisis response overlay. The crisis-response pattern covered earlier in this chapter runs inside the broader narrative coordination calendar. It does not run beside it. The calendar is the ongoing upkeep that blocks most crises from emerging. The crisis-response pattern is the reactive overlay for the crises that emerge anyway. Brands that try crisis response without the calendar work find their replies are slower. They have to rebuild context every time. Brands running the calendar work find their crisis response is faster and more accurate. The canonical brand narrative doc, the named-expert surface map, and the AI sampling baselines are already current.
Annual brand narrative refresh. Once per year, the program runs a full review of the canonical brand narrative doc, the named-expert narrative docs, and the product narrative docs. The annual refresh catches drift built up through small changes. Each change alone was below the threshold of an out-of-cycle audit. Together they shifted the narrative. The refresh updates outdated examples. A 3-year-old reference case study gets replaced with a 12-month-old one. The refresh also updates current-year positioning. The category framing the brand uses outside aligns with the current-year market. Then it signs off the canonical version for the next year. The annual refresh usually takes 2 to 3 days of focused work. The output becomes the new source of truth for every surface update over the next 12 months.
The governance pattern. The MERIT program lead does not own all the narrative calls. The CEO owns the executive-communication narrative. The product team owns the product narrative. The pricing committee owns the pricing positioning. The legal team owns the disclosure wording. What the MERIT program lead holds is the governance role. The role makes sure narrative-affecting calls get the needed MERIT review before going public. The org-design point is structural. The MERIT program lead needs a seat in the product launch planning meeting, the executive communication review process, the pricing committee, and the M and A communications process. Without that seat, the calendar work falls to whatever surface area the program lead can reach through informal sway.
A worked example of the institutionalization. A mid-market B2B brand built the narrative coordination calendar into the org over a 6-month rollout. The rollout went in order. Quarterly audit first. Then product launch integration. Then executive communication review. Then pricing and M and A. Within the 6 months the calendar caught 4 specific narrative drift events that would have eroded entity authority if handled reactively. First, a product line rebrand where the original launch plan had no surface-update piece. The calendar caught the gap 4 weeks before launch. The team built the surface-update plan into the launch readiness checklist. Second, a senior executive exit where the original news plan would have gone out with stale current-role attributions still live on three analyst-firm CRM entries. The calendar review caught the gap 48 hours before release. The comms team updated the CRM entries before the news went live. Third, a partnership announcement where the partner's narrative for the deal conflicted with the brand's framing. The calendar review caught the conflict 2 weeks before the news. The two partners aligned on a shared narrative before public release. Fourth, a pricing simplification where the original plan had the surface updates going out 5 days after the pricing change. The calendar review pushed the prep window to 2 weeks. The surface updates happened before the pricing change went live. The community talk about the simplification was much less negative than the brand had feared.
Each of the four events was caught by the calendar coordination. Not by post-event recovery. The post-event recovery alternative would have meant 6 to 12 months of narrative drift on each event. The combined effect on entity authority compounds across many events, the way credit-score damage compounds across many late payments. The built-in calendar is the difference between a brand that protects its entity through ongoing upkeep and a brand that has to do costly narrative cleanup every 12 to 18 months when the built-up drift becomes visible.
The Narrative Coherence Score
AI retrieval aggregates narrative signals across four surfaces: owned-domain, third-party editorial, community, and review platforms. When the four surfaces tell the same story, the model retrieves a coherent narrative. When the four surfaces diverge, the model retrieves a confused or contradictory narrative. The Narrative Coherence Score is a Searchbloom-coined composite that measures alignment across the surfaces on a 0 to 100 scale.
NCS = (sum of surface-level coherence scores) / (number of surfaces) where each surface scores 0 to 100
Per-surface scoring:
- Owned-domain coherence (0 to 100). Measure consistency of the brand's positioning, category framing, and key claims across the top 30 owned-domain pages. Inconsistent positioning (the about page says one thing, the product pages say another, the blog says a third) scores below 50. Aligned positioning across all pages scores 90+.
- Third-party editorial coherence (0 to 100). Sample 20 third-party articles citing the brand from the trailing 12 months. Score how consistently they frame the brand's category, value proposition, and differentiation. Editorial drift (different categorizations across articles, different value propositions) scores below 50.
- Community coherence (0 to 100). Pull recent community discussion of the brand from Reddit, Hacker News, LinkedIn, and category-relevant forums. Score how consistently the community describes the brand. Mixed community sentiment with conflicting takes scores below 50. Aligned community discussion scores 80+.
- Review-platform coherence (0 to 100). Pull reviews from G2, Capterra, Trustpilot, and Clutch as applicable. Score how consistently reviewers describe the brand's strengths and weaknesses. Polarized review patterns (some reviewers love feature X, others say feature X is terrible) score below 50. Aligned review themes score 80+.
Reading bands. Composite NCS above 80 indicates aligned narrative. AI retrieval reads a coherent brand story. The model attributes citations consistently. Below 60 indicates narrative fragmentation. AI retrieval pulls conflicting framings into the same response. The brand reads as confused or contradictory to the model and to human readers of AI responses.
Track NCS annually as part of the Narrative Coordination Calendar work above. The score moves slowly. Quarter-to-quarter changes are within noise. Annual changes signal real narrative drift or alignment work. Programs that score below 70 should prioritize the lowest-scoring surface in the next year's narrative work. Programs above 80 maintain through the quarterly narrative audit and active sentiment management.
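A minimal sketch of the composite calculation and reading bands. The per-surface scores are illustrative audit inputs, and the middle band's label is an assumption; the text above defines only the above-80 and below-60 bands.

```python
# A minimal sketch of the Narrative Coherence Score composite. The per-surface
# scores are illustrative audit inputs (0 to 100); the band labels mirror the
# reading guidance above, with the middle band labeled as an assumption.
def narrative_coherence_score(surface_scores: dict[str, float]) -> float:
    """Simple average of surface-level coherence scores, each 0 to 100."""
    return sum(surface_scores.values()) / len(surface_scores)


scores = {
    "owned_domain": 88,           # illustrative audit results
    "third_party_editorial": 72,
    "community": 55,
    "review_platforms": 81,
}

ncs = narrative_coherence_score(scores)
if ncs > 80:
    band = "aligned narrative"
elif ncs >= 60:
    band = "partial coherence; prioritize the lowest-scoring surface"  # assumed label
else:
    band = "narrative fragmentation"

print(f"NCS = {ncs:.0f} ({band})")
print("lowest surface:", min(scores, key=scores.get))
```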
Common Mistakes That Defeat Narrative and Reputation Alignment
1. Owned-only narrative discipline. The brand keeps its owned-channel narrative pristine but ignores third-party surfaces. AI retrieval gathers the inconsistent picture. Counter-test: when did you last audit third-party narrative surfaces against your canonical brand description?
2. No canonical narrative document. Different team members and outside contributors use different versions of the brand description. Drift builds up. Counter-test: does the team have a written canonical brand narrative document everyone references for new content?
3. Crisis response without upstream remediation. The brand publishes a correction on owned domain but does not address the third-party source the AI is actually pulling from. The hedged retrieval continues. Counter-test: in your last crisis response, did you fix the upstream source as well as the owned domain?
4. Quarterly audit skipped. The discipline gets de-prioritized as the program ages. Drift builds up silently. It surfaces as a major realignment six to twelve months later. Counter-test: how many of your last 4 quarters included a documented narrative audit?
5. Brand-evolution triggers ignored. Product launches and repositioning moves happen without narrative impact assessment. Six months later the AI still describes the brand in pre-event terms. Counter-test: does your product launch process include a narrative impact assessment step?
6. Sentiment monitoring only. The brand watches sentiment trends but does not actively manage them. Negative threads pile up without brand-side engagement. Counter-test: what is your time-to-response on the last 10 negative community threads or reviews?
7. Expert narrative diverging from brand narrative. Named experts evolve their personal positioning without coordination with brand narrative. AI retrieval surfaces inconsistent expert and brand stories. Counter-test: when did you last align the expert's personal bio across surfaces with the brand's canonical positioning?
8. Treating reputation work as PR. The brand treats reputation as a press-relations function rather than as an entity-coherence discipline tied to MERIT measurement. The work gets episodic attention rather than sustained discipline. Counter-test: is reputation management built into your quarterly MERIT measurement review?
Questions & Answers
Why narrative and reputation as one chapter? AI retrieval treats them as one. The model gathers signals across owned, third-party, community, and review surfaces at the same time. Narrative consistency and sentiment trajectory feed the same entity-recognition system.
How does AI handle negative content? It pulls negative the same way it pulls positive. The model weighs information gain, source authority, and structural retrievability, not sentiment. A well-structured negative review on a high-authority site outranks a thin positive brand response.
What is narrative drift and how to detect it? Slow divergence between narrative surfaces as the brand evolves faster than propagation. Detect it through quarterly narrative audits across owned, third-party, social, review, and AI-retrieved samples.
Crisis-response pattern? Four steps within 72 hours. Document specific incorrect content. Fix upstream sources. Publish authoritative correction on owned domain with IndexNow notification. Monitor recovery over 30 to 90 days.
Different from traditional messaging consistency? Traditional aims for unified voice across owned channels. AI Search alignment extends to every surface AI retrieves from. That includes third-party sources the brand does not control directly.
Coordinate narrative changes with engineering or product? Every time. Roadmap changes, launches, and repositioning all affect narrative. Include narrative impact assessment in product launch planning.
Interaction with review platforms? Chapter 1 and Chapter 2 work feed reputation alignment directly. Strong execution there inherits positive sentiment that this chapter protects and aligns.
Brand entity vs named-expert narrative? Brand narrative covers organization, products, and category positioning. Expert narrative covers operator expertise, role, and topical authority. Both need consistency. They should align but are not identical.
