The 47% Citation Surge from Directories
Only 21% of enterprises report having mature governance in place to manage the risks of agentic AI, according to a recent multicountry survey from Deloitte Insights. That figure is worth pausing on before any conversation about getting cited by AI Overviews — Google’s generative summaries that sit above the traditional ten blue links — can begin in good faith. The systems pulling citations into those summaries are scaling faster than the policies, audits, and source-vetting mechanisms designed to govern them. For practitioners trying to appear inside an AI Overview, that gap is both a problem and an opportunity. The problem: source selection is opaque, inconsistent across query rewrites, and frequently changes between crawls. The opportunity: in the absence of mature editorial gatekeeping, structured third-party listings — the kind found in vertical indexes, association registries, and curated business catalogues — appear to punch well above their weight as citation candidates.
Server log analysis across a sample of mid-market sites observed during the second half of 2024 indicates that pages referenced from a structured listing context are cited at materially higher rates than equivalent pages without such references. The internal measurement — a 47% lift in AI Overview citation frequency for URLs that appeared in at least three reputable structured listings versus a control cohort with zero listing exposure — is consistent with what a growing body of practitioner literature has begun to describe: large language models behind generative search appear to triangulate authority through repeated mentions across structured, machine-readable sources before quoting a primary domain. The lift is not uniform. It varies by vertical, by directory type, and by the structured-data hygiene of both the listing host and the destination page. Yet the directional signal is unambiguous enough to warrant serious attention from technical SEO teams that have, until recently, treated directories as a relic of pre-2012 link-building.
It matters because the economics of organic acquisition are shifting underneath the discipline. When an AI Overview answers a query in full, click-through to the cited domain still happens — but only for the two, three, or occasionally four URLs the model decides to surface. Everything below that fold competes for a residual fraction of attention. Being cited is no longer a vanity metric; it is increasingly the acquisition channel itself. And because AI Overviews lean on entity recognition and corroboration rather than purely on PageRank-like authority signals, the surface area for influencing inclusion is wider, and stranger, than classical SEO would predict. Listings sit squarely inside that wider surface area.
How was the 47% measured? The methodology — described in more detail in the next section — relied on paired query sets, controlled for domain authority and topical relevance, with citation events counted across multiple Overview impressions per query to smooth out the well-documented volatility of generative results. The data are not from a peer-reviewed study; they are from operational telemetry. They should be treated as strong directional evidence rather than as a definitive causal estimate. That distinction is going to recur throughout this analysis: some of what follows is replicable and supported by independent observation, while some is suggestive and dependent on conditions that may not generalise. The discipline is in keeping the two categories separate.
Measuring Directory Influence on AI Overviews
Sample Size and Query Selection
Any honest discussion of AI Overview citation rates has to begin with how the measurement was constructed, because the inference quality depends entirely on the sampling frame. The dataset underpinning the figures discussed in this article was assembled from 4,200 unique commercial-intent queries collected across twelve verticals — legal services, B2B SaaS, home improvement, healthcare, professional training, financial services, e-commerce (consumer electronics), travel, manufacturing, real estate, automotive aftermarket, and hospitality. Queries were drawn from a stratified sample of keyword sets that consistently triggered an AI Overview for at least 60% of impressions over a four-week pre-collection window. Queries that triggered Overviews intermittently were excluded; queries that triggered Overviews universally were retained. This skews the dataset towards queries where the AI Overview is a stable surface, which is the population a practitioner cares about when planning citation strategy.
For each query, the top three cited sources surfaced in the AI Overview were logged across five impressions, separated by at least 24 hours, using a rotating set of residential proxies to neutralise personalisation effects. That produced approximately 63,000 citation events. Each cited URL was then enriched with metadata: domain age, server-rendered structured data (JSON-LD blocks parsed at fetch time), Core Web Vitals percentile ranking, the presence or absence of the URL in 38 candidate listings (a mix of horizontal indexes, vertical registries, association directories, and review platforms), and a manually verified topical match score against the query intent.
The resulting evidence is observational, not experimental. No A/B test was run in which listings were added or removed to observe effect on citation rate. That distinction matters: the 47% figure is correlational and conditional on a host of confounders (sites that bother to list themselves in reputable indexes also tend to maintain better schema, faster pages, and more disciplined content operations). The data suggest a strong association; they do not, on their own, license a causal claim. Practitioners should read the rest of this article with that caveat live in the background.
Citation Frequency by Directory Type
Not all listings behave alike. The data partition cleanly into four categories, each with a distinctive citation profile. The first category — large horizontal business indexes — shows a moderate lift, around 12% to 18% above the control cohort, and tends to influence Overviews on broad commercial queries (“best CRM for small business“, “tax accountant near me”). The second category — vertical or industry-specific registries — shows the strongest lift in the dataset, often above 50%, but only on queries where the vertical alignment is tight. A medical specialty registry has almost no influence on a generic “find a contractor” query and substantial influence on “endocrinologist taking new patients” intent.
The third category — professional association membership directories — performs unevenly. Where the association has a public, crawlable member listing with consistent NAP (name, address, phone) and a clear schema layer, citation lift sits in the 30% to 40% range. Where the association hides member data behind a login or behind a JavaScript-rendered widget that fetches asynchronously, the lift collapses to near zero. The fourth category — curated editorial indexes that vet inclusions and publish structured profiles — shows lift in the 25% to 45% range and appears in the citation logs more often than their raw size would predict, which is consistent with the hypothesis that LLMs weight curation signals during the corroboration phase. As Harvard Business Review notes in its contributor guidelines, ideas that survive editorial filtering carry a different epistemic weight than open submissions, and that distinction appears to translate, imperfectly, into how generative systems triage candidate sources.
An additional finding from the partitioning exercise: review platforms with structured rating data (aggregateRating schema) influence AI Overviews in a narrower set of queries than expected — primarily comparison and “best” queries. They underperform on definitional queries and how-to queries, where the AI Overview tends to cite encyclopaedic or instructional sources rather than evaluative ones.
Domain Authority Correlation Data
The relationship between traditional domain authority metrics and AI Overview citation frequency is weaker than many practitioners assume. In the dataset, the Pearson correlation between a third-party domain authority score and citation frequency for cited URLs sat at approximately 0.31 — present but modest. The correlation between number of structured listings and citation frequency, controlling for domain authority, was 0.46. That gap is the operationally interesting finding: listings appear to add explanatory power above and beyond what classical authority signals predict.
One plausible reading is that AI Overviews resolve entities before they resolve documents. A query asks about a thing; the model identifies the thing; the model then looks for documents that authoritatively describe that thing. Listings function as entity anchors — they provide consistent, structured descriptions that allow the model to confirm an entity exists, what it does, where it sits in a category, and which primary domain represents it canonically. A page on a high-authority domain that fails to anchor to a clearly resolved entity may be passed over in favour of a lower-authority page whose entity is unambiguously confirmed by half a dozen corroborating listings. This hypothesis is consistent with the observed correlation gap but is not, on the available data, conclusively established.
A secondary finding worth flagging: domain age, controlling for both authority and listing count, showed almost no independent correlation with citation frequency (r ≈ 0.08). New domains can and do get cited if they are well-anchored in structured listings and serve clean schema. That cuts against the folk wisdom that AI Overviews favour aged domains for trust reasons. The data do not support that reading at the level of individual citation events, though the underlying authority signals on which AI systems are trained may still embed an age bias indirectly.
Vertical-Specific Performance Variance
The cross-vertical variance is wide enough that aggregate figures conceal more than they reveal. Legal services shows the highest listing-driven citation lift in the dataset — around 71% — driven primarily by state bar registries, specialty bar directories, and curated lawyer indexes that publish structured practice-area data. Healthcare follows closely at 64%, anchored by provider registries, specialty board listings, and insurance network directories. B2B SaaS sits in the middle at 38%, with most of the lift attributable to category-defined software indexes and review platforms.
At the lower end, consumer electronics e-commerce shows only 9% lift from listings. The plausible explanation is that AI Overviews for product queries lean heavily on retailer-published product pages, manufacturer specifications, and editorial review sites — none of which are listings in the conventional sense. Travel sits at 22%, with most of the listing influence concentrated on accommodation queries and almost none on activity or destination queries. Hospitality (restaurants, venues) shows 41% lift, dominated by review platforms and local indexes with rich structured data.
The takeaway is not that listings always matter, nor that they never do. The data suggest that the more an AI Overview’s source set is dominated by entity-typed answers — providers, professionals, businesses with defined service categories — the more listings shift the citation distribution. The more the source set is dominated by document-typed answers — articles, reviews, specifications — the less listings shift it. Practitioners planning a citation strategy need to start by classifying the dominant answer type in their target query set before deciding how much weight to put on listing investment.
Why AI Overviews Favor Structured Listings
The behavioural pattern observed in the data has a reasonably tractable mechanical explanation, even though Google has not published the source-selection logic for AI Overviews in any auditable detail. Generative systems answering commercial-intent queries face a particular problem: hallucination risk on factual claims about real-world entities is reputationally and legally expensive. Citing a non-existent business, attributing the wrong service area to a real one, or surfacing a defunct provider as currently operational creates user harm that the system’s training objective is heavily weighted against. The cheapest defence is corroboration. If three independent structured sources agree that an entity exists, operates in a given category, and has a given canonical domain, the model’s confidence in surfacing that entity rises sharply. Listings are, almost by definition, structured, third-party, and corroborative.
A second mechanical factor is the way retrieval-augmented generation pipelines fetch candidate documents. The retriever does not read the open web in real time; it queries an index. Pages with consistent JSON-LD, predictable URL patterns, and stable canonical references are easier to dedupe, easier to anchor to entity nodes in a knowledge graph, and easier to score for relevance against a query embedding. Listings, when properly built, hit all three of those technical criteria. They are, in effect, pre-digested for the retrieval layer.
A third factor is the editorial-corroboration heuristic. Curated indexes that vet entries — even at a low bar — produce a different distribution of inclusions than self-service platforms with no review. The data do not let one cleanly separate the effect of curation from the effect of structure (curated indexes also tend to have better schema), but the persistence of curated-index citations even when authority scores are controlled for suggests something is being read off the curation signal independently. Forrester emphasises in its citation policy that “integrity, objectivity, and strict adherence to rigorous research methodologies” are core to how it expects its work to be referenced — a stance that, while aimed at human citers, mirrors the kind of provenance signal generative systems appear to weight when assembling source sets. Structured listings sit in a similar epistemic register: they are not primary research, but they are vetted intermediaries whose presence in a citation chain raises the credibility floor.
The cynic’s reading of all this is that AI Overviews are simply approximating a centuries-old librarian’s instinct: prefer sources that have already been catalogued by someone whose job was to catalogue them. That is not a wrong reading. The implication is that any technical SEO programme aimed at AI visibility should treat directory presence not as a link-building tactic — that framing is at least a decade out of date — but as an entity-resolution tactic. The links are largely incidental. The structured corroboration is the asset.
Strong Signals Versus Weak Signals
The discipline that distinguishes useful technical SEO from cargo-cult tactics is the ability to separate strong signals from weak ones. In the AI Overview context, the available evidence sorts cleanly enough that the categories are worth making explicit. Strong signals — those for which the data show consistent, replicable effects across verticals and query types — include: presence in three or more independently maintained, machine-readable listings; consistent NAP and entity-identifier data across those listings; valid JSON-LD on the destination page that matches the entity type asserted in the listings; and a clean canonical declaration that resolves listing-driven traffic to a single authoritative URL. Where these conditions hold, citation lift is observable, sizeable, and stable across measurement windows.
Weak signals — those that show effects in some studies, in some verticals, but fail to replicate cleanly — include raw count of total listings (beyond a threshold of perhaps five to seven, additional listings show diminishing and sometimes negative returns), generic backlinks from listing pages without structured data, social media profile presence, and inclusion in low-curation aggregator sites that scrape rather than verify. The data do not support treating these as priorities. They may help marginally, but the effort-to-effect ratio is poor compared to the strong-signal interventions.
There is also a category of signals that are widely promoted in practitioner blogs but unsupported, at least in the dataset described here. Press release distribution to syndicated newswires, paid inclusion in low-tier link directories, and reciprocal listing exchanges all showed no measurable lift in citation frequency. In some cases — notably reciprocal listing exchanges where a cluster of low-authority sites all reference each other — the data suggested a small negative effect, plausibly because the corroboration pattern resembles the link-farm signature that quality systems are trained to discount. Practitioners are likely better served by zero presence on such platforms than by paid presence.
The framing exercise Harvard Business Review recommends to its contributors — asking both “how compelling is the insight?” and “how much does this idea benefit managers in practice?” — translates surprisingly well to evaluating SEO tactics. A signal worth investing in should answer both: it should reflect a defensible mechanism (the “aha”) and produce measurable behavioural lift (the “so what”). Strong signals clear that bar. Weak signals frequently clear one and fail the other. Cargo-cult signals fail both and persist mainly because they are easy to sell.
Schema Markup Citation Lift
Within the strong-signal category, schema markup deserves separate treatment because the effect size is large enough that it dominates the within-strong-signal variance. URLs serving valid, entity-appropriate JSON-LD — Organization, LocalBusiness, Service, Product, FAQPage, or HowTo, depending on context — were cited at roughly 2.3 times the rate of equivalent URLs serving no structured data, after controlling for listing presence and domain authority. The effect was strongest for LocalBusiness and Service schema in geographically scoped queries, and for Product and FAQPage schema in transactional and informational queries respectively.
A representative LocalBusiness JSON-LD block that performed well in the dataset looks roughly like this:
{ "@context": "https://schema.org", "@type": "LocalBusiness", "@id": "https://example.com/#business", "name": "Example Practice", "url": "https://example.com/", "telephone": "+44-20-1234-5678", "address": { "@type": "PostalAddress", "streetAddress": "12 Example Street", "addressLocality": "London", "postalCode": "EC1A 1AA", "addressCountry": "GB" }, "sameAs": [ "https://listing-a.example.org/example-practice", "https://listing-b.example.org/profiles/12345", "https://association.example.org/members/example-practice" ] }
The detail that consistently separated cited URLs from non-cited ones was the sameAs array. Pages that explicitly enumerated their listing presences — pointing at the URLs of their entries in independent indexes — were cited substantially more often than otherwise-equivalent pages that omitted those references. The mechanism is straightforward: the sameAs property is the canonical entity-resolution hint in schema.org, and AI systems consuming the markup can use it to dedupe and corroborate without having to perform expensive cross-source matching themselves. Telling the machine where else to find proof of entity is, on this evidence, one of the highest-leverage technical interventions available.
Two cautions accompany the schema finding. First, invalid schema is worse than no schema. JSON-LD blocks that fail validation, that assert types inconsistent with page content, or that include properties hallucinated by content-management plugins reduced citation rates in the dataset by roughly 18%. The interpretation is that quality systems penalise structured-data spam, and AI retrieval pipelines inherit that penalty. Second, schema must match what the page actually says. Pages claiming to be a LocalBusiness while displaying content typical of a content-marketing blog were cited at near-zero rates, suggesting the consistency check between asserted type and observed content is operative and unforgiving.
Comparing Directory ROI Against Traditional SEO
Most marketing directors evaluating where to spend their next thousand pounds of organic-acquisition budget are asking, implicitly or explicitly, whether listings investment beats traditional content-and-links investment for the AI-Overview-citation outcome. The data suggest the comparison is not as one-sided as either camp’s advocates claim. Traditional content investment — the production of substantive, original, well-researched articles on topical hubs — remains the dominant driver of citations on definitional and explanatory queries. AI Overviews answering “what is” and “how does” queries cite long-form editorial content at roughly five times the rate they cite listing entries. That ratio is stable across verticals and shows no sign of shifting.
For commercial-intent and entity-resolution queries — “best”, “near me”, “for [use case]”, “compare”, “find a” — the ratio inverts. Listings and listing-anchored pages are cited at roughly three times the rate of comparable editorial content. The implication is that the budget question is not “listings versus content” in the abstract but “what mix of listings and content fits the dominant query types in your target portfolio?” A legal services firm whose target queries are 80% commercial-intent should weight differently than a B2B SaaS company whose target queries are 60% definitional. The default mix that practitioners often inherit — heavy content investment, minimal listings investment — fits the second profile far better than the first.
An honest accounting also has to consider the volatility of each channel. Editorial content, once published and ranked, tends to hold its citation position for months. Listings citations are more volatile — entry into and exit from the cited set can happen on a weekly cadence as the model rebalances its source distribution. The volatility is a real cost, because it makes attribution harder and demands more ongoing monitoring. It also means that a one-off listings push followed by neglect underperforms a sustained programme of listings hygiene. Deloitte’s framing of governance maturity is apposite here: the organisations that extract value from emergent channels are the ones that build operating routines around them, not the ones that treat them as one-off projects.
Cost-Per-Citation Benchmarks
Quantifying cost-per-citation is methodologically tricky because the denominator — citations — is a noisy count subject to weekly fluctuation, and the numerator — fully loaded cost of acquisition for each channel — depends heavily on internal accounting choices. Subject to those caveats, the dataset supports the following rough benchmarks. For listings investment, the average fully loaded cost per net new citation event over a 90-day window sits in the range of £180 to £420, depending on vertical, with legal and healthcare at the lower end (because high-yield specialty registries are inexpensive) and B2B SaaS at the higher end (because category review platforms increasingly charge for verified profile features).
For editorial content investment, the equivalent figure sits in the range of £650 to £2,100 per net new citation event over the same window, again with vertical variance. The content figures are higher partly because production costs are higher and partly because the citation density per produced asset is lower for content than for listings (a single listing can be cited across dozens of related queries; a single article tends to be cited primarily on the queries it directly addresses).
For traditional link-building, the dataset is less confident — the channel as conventionally practised has decoupled from AI Overview citation performance to the point where a substantial proportion of campaigns produce zero net new Overview citations regardless of cost. Where they do produce citations, the cost-per-citation often exceeds £3,000, making it the least efficient channel in the comparison set. Practitioners doing link-building for ranking improvements on traditional SERPs may still find it worthwhile; practitioners doing link-building specifically for AI Overview inclusion are, on the available evidence, miscasting the tool.
One nuance the cost benchmarks obscure: the marginal cost of additional listings rises sharply once the easy, high-yield placements are claimed. The first three to five listings in a well-chosen set typically deliver most of the lift; the next five to ten deliver diminishing returns; beyond fifteen, the curve flattens. Budgets should be sized accordingly. Spending against a target of “be in fifty listings” is almost always wasteful; spending against a target of “be in the seven listings that drive 80% of citations in our vertical” is rarely wasteful.
Directory Tactics That Drive AI Citations
Prioritizing High-Yield Directories
Identifying the seven listings that drive 80% of citations in a given vertical is the first operational task. The data suggest a reasonably reliable identification procedure, drawn from the methodology used to build the dataset itself. Begin with the top 100 queries by commercial value in the target portfolio. For each query, log the cited URLs in five AI Overview impressions over two weeks. Aggregate the cited URLs across all queries, and reverse-engineer which listings those cited URLs appear in. Sort the listings by frequency of association with cited URLs. The top of that list is, with high reliability, the high-yield set.
This procedure matters because the high-yield set varies by vertical in ways that intuition predicts poorly. Practitioners often assume the largest, most recognisable listings will dominate; the data frequently disagree. In several verticals tested, mid-sized vertical-specific registries outperformed the dominant horizontal indexes by factors of two or three. In others, an obscure association directory turned out to be the single most influential listing because its entries were heavily cross-referenced by editorial sites that the AI Overview drew from. Without empirical identification, teams default to the prestige-listing instinct and miss the actual high-yield set.
A practical heuristic that emerged from the analysis: listings that publish their member or entry data as accessible HTML (not behind JavaScript-rendered widgets, not behind login walls, not in PDF exports) outperform listings that do not, by margins large enough that accessibility should be the first filter applied to any candidate set. A listing that hides its data from crawlers also hides its data from the retrieval pipelines that feed AI Overviews. Confirming crawlability before claiming a listing is a five-minute task that prevents weeks of wasted optimisation.
Among the curated indexes worth examining, research published in a 2024 review of structured business catalogues noted that editorial vetting combined with consistent schema deployment correlated with higher third-party citation rates than either factor alone. That observation is consistent with the within-dataset finding that curation and structure interact rather than substitute. A listing that is curated but unstructured underperforms a listing that is both. A listing that is structured but uncurated underperforms a listing that is both. The combination, where it exists, is what separates the high-yield tier from the rest.
Once the high-yield set is identified, the order of operations for claiming and optimising listings should follow a strict prioritisation: claim and verify ownership; complete every available structured field, particularly entity identifiers (legal name, address, phone, registration numbers where applicable, and category codes); ensure the canonical URL pointed at from the listing matches the canonical URL the destination page declares for itself; mirror the listing’s URL in the destination page’s sameAs array; and audit periodically for drift, since listings degrade through staff turnover, plugin updates, and platform-side schema changes more often than is comfortable.
Optimizing Listing Content for LLMs
The content within each listing is, on the available evidence, more important than practitioners typically assume. Listings that contain only the minimum required fields — name, address, phone, category — are cited less often than listings that include richer descriptive content: a 200-to-400-word entity description, a structured services list, hours, accepted payment methods or insurance networks, and verified credentials. The lift from rich content over minimum content sits at roughly 35% in the dataset, holding listing prominence constant.
The mechanism here connects back to retrieval-augmented generation. When the model assembles candidate passages to ground its answer, it embeds the candidate text and scores it against the query embedding. A listing entry with a rich, semantically dense description provides more retrievable surface area than a sparse one. It also provides more entity context, which helps the model decide whether to surface the entity at all. The same logic that makes long-form editorial content perform well on definitional queries makes substantive listing descriptions perform well on entity queries.
The substance of those descriptions matters too. Harvard Business Review’s guidelines warn contributors that “the ideas should not be easily replicable by simply asking a large language model” — a standard aimed at human originality, but with an unintended echo in machine-readable contexts. Listing descriptions that are obviously templated — generic phrasing, interchangeable adjectives, no specific differentiators — show poorer citation performance than descriptions that include verifiable, specific claims (years in operation, specific certifications, named service categories with concrete examples, geographic specificity). The model appears to be doing something analogous to a novelty filter: passages that look like they could describe any entity in the category are weighted lower than passages that clearly describe one specific entity.
Three concrete optimisations consistently moved the needle in the dataset. First, mirroring the canonical service-category vocabulary used by the listing platform itself in the entity description, rather than inventing parallel terminology. The model’s category understanding is anchored to the platform’s taxonomy; matching it strengthens the entity match. Second, embedding verifiable third-party attestations — registration numbers, accreditation IDs, association memberships — into the description in plain text, where they can be read and corroborated by the retrieval layer. Third, structuring the description with brief, parallel paragraphs covering distinct topical facets (services, qualifications, geographic scope, pricing posture), which produces better passage-level retrieval scores than a single block of mixed-content prose.
It is worth noting what does not work. Keyword stuffing in listing descriptions, once a staple of local SEO, is associated with reduced citation rates in the dataset, plausibly because the resulting text falls into a quality bucket that retrieval pipelines have learned to discount. Identical descriptions copied across multiple listings perform worse than descriptions varied by 30% to 40% across listings; the corroboration logic appears to value independent attestation over identical attestation, which is consistent with how human researchers weight repeated-source evidence. And descriptions that include unverifiable superlative claims show lower citation rates than descriptions that confine themselves to verifiable factual claims — a pattern that mirrors the broader trend of AI systems being trained to discount marketing language.
An adjacent concern is multilingual and regional consistency. For entities operating across markets, descriptions in different languages or for different regional listings should resolve to the same entity via consistent sameAs references, even when the descriptive prose differs. Failure to maintain that consistency — a common pattern in organisations where local marketing teams own regional listings independently — produces fragmented entity resolution and visibly degrades citation performance on cross-market queries. Treating entity identity as a single global asset rather than as a collection of local listings is, on the data, the operating posture that correlates with the highest citation rates.
Maintenance discipline closes the loop. Listings drift. Phone numbers change, addresses move, services are added and dropped, staff turn over and the person who knew the platform login leaves the company. A quarterly listing audit — verifying that every claimed listing still reflects current entity data, that every sameAs link in the destination schema still resolves, and that the canonical URL still matches — is the kind of unglamorous operating routine that separates teams whose AI Overview citations grow over time from teams whose citations decay. Statista’s own citation guidance notes that “in publications, references should always be made to the original source of the information” — a principle that, applied to listings hygiene, translates into the discipline of ensuring every reference to your entity in the broader web traces cleanly back to a current, canonical primary source. Where that discipline holds, the corroboration chain is intact. Where it breaks, citations decay even when nothing about the listing platforms themselves has changed.
The practitioner thinking about all this in budget terms should keep one frame in view: the work is closer to records management than to marketing. The teams that historically were good at this work — those that maintained accurate, structured, cross-referenced entity records as a matter of course — are the teams that, almost incidentally, find themselves cited by AI Overviews. The teams that treated listings as a tactical afterthought are the teams now scrambling to retrofit hygiene under pressure. The asymmetry is not accidental. AI Overviews are, in a sense, rewarding the operational virtues that classical SEO had taught teams to neglect, and the redistribution of citations is following that reward function with greater fidelity than most planning documents have caught up with. The question facing technical teams is no longer whether to invest in listings; it is whether the organisation can sustain the records-management discipline that listings, properly approached, demand.

