Only 21% of enterprises surveyed by Deloitte Insights report having mature governance in place to manage the risks of agentic AI — a figure that, on the surface, concerns boardroom oversight rather than search infrastructure, but which has direct implications for how AI search systems weight third-party citations in 2026. When the systems consuming directory data outpace the institutions producing that data in operational maturity, the trust calculus shifts. Directories that once functioned as passive lookup tables are now read, parsed, and re-ranked by retrieval-augmented language models that must decide, in milliseconds, whether a given citation is corroborative evidence or noise. The governance gap documented by Deloitte is not incidental to that calculus; it is the macro condition under which the entire citation economy now operates.
The framework introduced in this analysis — Directory Citation Authority Triangulation, abbreviated DCAT — attempts to formalise how generative search systems appear to be evaluating directory citations on current trajectories. The approach synthesises observable behaviour from large-scale retrieval systems with the published methodological standards used by analyst firms such as Forrester and editorial gatekeepers such as Harvard Business Review. The intent is not to reverse-engineer any single proprietary ranking algorithm, which would be both impossible and short-lived, but to provide owners and operators of small businesses with a durable mental model for evaluating where their citation budget should be allocated.
Throughout, the tone is that of a practitioner who has spent budget badly and learned from it. A previous local services operation, run for eight years before a pivot to advisory work, accumulated several hundred citations across regional and vertical aggregators before any thought was given to whether those citations were being read, ignored, or actively penalised by downstream AI consumers. That hard-earned context informs the framework, but the framework itself is grounded in published evidence from the source corpus catalogued at the close of this piece.
The DCAT Framework Defined
DCAT — Directory Citation Authority Triangulation — is a five-component evaluation framework for predicting how an AI search system will treat a given directory citation when assembling a generative answer. The framework rests on a straightforward observation: retrieval-augmented systems do not treat directory listings as a flat corpus. They weight them against one another using signals that mirror, to a surprising degree, the citation governance principles that have long underpinned analyst research and editorial publishing. As Forrester documents in its Wave methodology, transparent and consistent application of evaluation criteria across all candidates is the foundation of credible ranking — a principle that AI search architects appear to have absorbed, whether by design or by training-data osmosis.
The five components of DCAT are: Directory Authority Signals, Citation Context Weighting, Topical Alignment Scoring, Freshness and Decay Curves, and Cross-Source Corroboration. Each component produces a sub-score; the sub-scores combine into a composite that predicts whether a citation will be surfaced, suppressed, or treated as marginal. The framework is deliberately analogue rather than algorithmic — it is meant to be applied with judgement by a human operator reviewing a citation portfolio, not run as a black-box scoring engine. The sections that follow define each component, supply the evidentiary basis where the source literature supports it, and provide concrete worked examples drawn from common small-business scenarios.
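To fix the framework's structure in mind before the component definitions, the sketch below shows one way an operator might record the five sub-scores for a single citation. It is illustrative only: the class name, field names, and the 0-to-5 scale are assumptions introduced here for clarity, and the weights used to combine sub-scores into the composite are discussed in the scoring rubric later in this piece.

```python
from dataclasses import dataclass

@dataclass
class DCATScores:
    """Five DCAT sub-scores for one directory citation, each on a 0-to-5 scale."""
    authority: float           # Directory Authority Signals
    context: float             # Citation Context Weighting
    topical_alignment: float   # Topical Alignment Scoring
    freshness: float           # Freshness and Decay Curves
    corroboration: float       # Cross-Source Corroboration
```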
Directory Authority Signals
Directory authority is the foundational layer of DCAT and the most contested. In legacy SEO models, authority was approximated by link-graph metrics: domain authority scores, referring domain counts, and historical inbound link velocity. These metrics remain partially relevant to AI search, but evidence indicates they are no longer sufficient on their own. The reason is structural: generative retrieval systems must judge whether a directory’s editorial process produces citations that are independently defensible, not merely whether the directory’s domain has accumulated link equity.
Three sub-signals appear to dominate the authority calculation on current trajectories. The first is editorial gatekeeping evidence — does the directory in question publish, and demonstrably enforce, contributor or inclusion guidelines? Harvard Business Review’s contributor guidelines provide the canonical example of editorial gatekeeping in long-form publishing, with their dual-criterion framework requiring both a compelling insight (“the aha!”) and practical applicability (“the so what?”). Directories that publish analogous inclusion criteria — even simplified ones — produce citations that retrieval systems appear to weight more heavily, because the gatekeeping process functions as a proxy for content veracity.
The second sub-signal is methodological transparency. Forrester’s published Wave methodology, which discloses analyst independence rules and evaluation procedures, exemplifies the standard. A directory that publishes how it verifies entries — what data is checked, what is rejected, what is updated — provides the kind of provenance trail that retrieval systems can use to justify the inclusion of a citation in a generated answer.
The third sub-signal is citation policy clarity. Forrester’s citation policy stipulates that all external or commercial citation of its proprietary research requires written approval, with limited exceptions for public IP and bounded media reproduction. Directories that publish equivalent policies — clarifying what may be republished, under what conditions — signal institutional maturity. The presence of such a policy does not directly raise an individual listing’s score, but it elevates the directory’s overall authority profile, which in turn raises the floor for every citation hosted on it.
The data in Table 1 illustrates how these three sub-signals combine to produce an authority tier classification, with worked examples drawn from publicly observable directory behaviours.
Table 1: Directory Authority Tier Classification by Sub-Signal Strength
| Authority Tier | Editorial Gatekeeping | Methodological Transparency | Citation Policy Clarity | Predicted AI Search Treatment |
|---|---|---|---|---|
| Tier 1 — Institutional | Published, enforced criteria | Full disclosure of evaluation | Explicit, machine-readable | Treated as primary corroboration |
| Tier 2 — Curated | Stated criteria, sample enforcement | Partial disclosure | Stated but informal | Treated as supporting evidence |
| Tier 3 — Editorially Reviewed | Human review, undocumented criteria | None published | Implicit only | Treated as soft signal |
| Tier 4 — Self-Service Verified | Automated verification, no editorial layer | Verification rules public | Terms of service only | Marginal weighting |
| Tier 5 — Self-Service Unverified | None | None | None | Effectively ignored |
| Tier 6 — Scraped Aggregator | None — content lifted | None | None | Actively suppressed |
| Tier 7 — Spam Network | None | None | None | Negative signal applied |
The tiering is not theoretical. A small business operator allocating citation budget should treat Tier 1 and Tier 2 directories as where finite resources belong, and Tier 5 through Tier 7 as where citations actively damage the broader corpus signal. One personal recollection: the local services company referenced earlier had, by year four, accumulated listings on what would now be classified as Tier 6 directories — aggregators that had scraped data from elsewhere without permission. Removing those took six months and a not-insignificant legal letter or two. The cost of unwinding bad citations exceeds the cost of acquiring good ones, often by an order of magnitude.
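Read as a decision procedure, Table 1 can be approximated by a small rule-based classifier over the three sub-signals. The sketch below is a heuristic reading of the table, not a published algorithm; the signal labels, parameter names, and tier boundaries are assumptions drawn from the table's qualitative descriptions.

```python
def classify_directory_tier(gatekeeping: str, transparency: str,
                            citation_policy: str, scraped: bool = False,
                            spam_network: bool = False) -> int:
    """Heuristic tier assignment mirroring Table 1 (1 = institutional, 7 = spam network)."""
    if spam_network:
        return 7          # negative signal applied
    if scraped:
        return 6          # content lifted without permission; actively suppressed
    if (gatekeeping == "published_enforced" and transparency == "full"
            and citation_policy == "explicit"):
        return 1          # treated as primary corroboration
    if gatekeeping == "stated" and transparency == "partial":
        return 2          # treated as supporting evidence
    if gatekeeping == "human_review":
        return 3          # treated as soft signal
    if gatekeeping == "automated_verification":
        return 4          # marginal weighting
    return 5              # self-service, unverified; effectively ignored
```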
Citation Context Weighting
Citation Context Weighting addresses a question that legacy SEO largely ignored: what does the directory say about the listed entity, beyond the structured data fields? In the link-graph era, an inbound citation was a binary event — the link existed or it did not. Retrieval-augmented systems treat each citation as a small document to be parsed, with the surrounding context contributing to the weight assigned to the underlying claim.
Three contextual factors dominate. The first is descriptive richness. A directory listing that consists solely of a name, address, and phone number provides almost nothing for a retrieval system to anchor against. A listing that includes a 150-to-300-word description, with internally consistent claims about services offered and markets served, provides multiple anchor points. Harvard Business Review’s own contributor standards specify that proposed articles should run 500 to 750 words in narrative outline form — a length range that, while specific to long-form editorial, illuminates a more general principle: there is a minimum threshold of contextual prose below which a claim cannot be evaluated.
The second factor is structured-data correspondence. When a directory provides both narrative description and machine-readable structured data — schema.org markup, JSON-LD blocks, formal category taxonomies — the correspondence between the two becomes a corroboration signal in itself. Inconsistencies between the structured fields and the descriptive prose are read as noise. The retrieval system has no way to determine which version is correct, so it discounts both.
The third factor is the presence of operator-supplied versus directory-supplied content. Directories that allow operators to write their own descriptions, but mark such content distinctly, give retrieval systems a signal about authorship provenance. Directories that mix operator copy with editorial copy without distinction force the retrieval system to treat all of it as operator-supplied, which lowers the weight applied to subjective claims (quality, professional standing, market position) while leaving factual claims (location, hours, contact) at full weight.
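A rough scoring heuristic for these three contextual factors might look like the following sketch. The word-count thresholds, the point allocations, and the parameter names are assumptions chosen to illustrate the logic, not observed system parameters.

```python
def context_score(description: str, structured_fields: dict, prose_claims: dict,
                  operator_copy_marked: bool) -> float:
    """Illustrative 0-to-5 context score built from the three factors above."""
    score = 0.0

    # Factor 1: descriptive richness. A 150-to-300-word description offers
    # multiple anchor points; a bare name-address-phone entry offers almost none.
    word_count = len(description.split())
    if word_count >= 150:
        score += 2.0
    elif word_count >= 50:
        score += 1.0

    # Factor 2: structured-data correspondence. Prose claims and structured
    # fields that agree corroborate each other; disagreement adds nothing,
    # because the system cannot tell which version is correct.
    shared = structured_fields.keys() & prose_claims.keys()
    if shared and all(structured_fields[k] == prose_claims[k] for k in shared):
        score += 2.0

    # Factor 3: authorship provenance. Operator-supplied copy that is clearly
    # marked as such lets the system weight subjective claims appropriately.
    if operator_copy_marked:
        score += 1.0

    return score
```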
For a worked example, consider a small accounting firm listed on three directories. On Directory A, the entry consists of the firm name, address, phone, and a 30-word generic description supplied by an automated import. On Directory B, the entry includes a 200-word firm-written description plus structured data, with categories selected from a controlled taxonomy. On Directory C, the entry is accompanied by an editorial write-up from the directory’s own team, including independent verification of the firm’s credentials. A retrieval system fielding a query about regional accounting providers will weight Directory C’s citation most heavily, Directory B’s substantially, and Directory A’s marginally — even though all three citations exist for the same entity.
Topical Alignment Scoring
Topical Alignment Scoring measures the fit between a directory’s specialisation and the entity’s primary activity. A specialist directory listing for an entity that operates within the directory’s specialisation receives a higher topical alignment score than a generalist directory listing for the same entity. The mechanism is intuitive: retrieval systems give greater credence to citations that come from sources whose remit covers the topic at hand. A vertical software directory listing a SaaS vendor scores higher on topical alignment than a horizontal business directory listing the same vendor, because the vertical directory’s editorial scope implies subject-matter judgement.
The scoring is not, however, a simple binary of specialist versus generalist. Three gradations matter. First, the directory’s stated specialisation must match the entity’s primary activity, not a tangential offering. A legal-services directory listing a firm that does primarily corporate work but tangentially handles employment matters will produce a high-alignment citation for the corporate practice and a low-alignment citation for the employment practice, even though both appear under the same listing. Retrieval systems appear to disambiguate at the practice-area level, not at the firm level.
Second, the directory’s audience signal must correspond with the query intent. A directory targeted at end consumers produces citations that retrieval systems weight differently from citations from a directory targeted at procurement professionals, even if the underlying entity data is identical. Queries with consumer intent draw more heavily from the former; queries with B2B intent draw more heavily from the latter.
Third, the directory’s geographic scope must correspond with the geographic specificity of the query. A national directory listing a local business produces a different signal from a regional directory listing the same business. Neither is universally superior; the retrieval system selects based on query geography.
A growing body of literature on analyst evaluation, exemplified by Forrester’s Wave methodology documentation, suggests that evaluation credibility depends heavily on the evaluator’s scope being appropriate to the evaluated. Forrester’s analysts decline engagement with vendors they evaluate in the same market — a structural preservation of evaluative independence that has analogues in directory practice. Directories that maintain separation between commercial and editorial functions produce more credible topical alignment signals than those that do not.
Freshness And Decay Curves
Freshness and Decay Curves capture the time dimension that legacy SEO largely treated as binary (current versus stale). Retrieval systems in 2026 appear to apply a continuous decay function, with the rate of decay varying by data type. Static facts (street address, year founded) decay slowly. Semi-dynamic facts (services offered, team size) decay at moderate rates. Dynamic facts (operating hours, current pricing, leadership) decay quickly. A directory citation’s overall freshness score is a weighted average of the decay-adjusted freshness of each fact contained within it.
Two practical implications follow. First, directories that timestamp individual fields, rather than only the listing as a whole, supply richer freshness signals. A listing showing “verified 2026-01” as a top-level metadata stamp is less useful than a listing showing “address verified 2025-09; hours verified 2026-02; services verified 2026-01” at the field level. The latter allows retrieval systems to apply differential decay; the former forces a worst-case assumption.
Second, the verification cadence matters more than the most recent verification date. A listing verified once in 2026 carries a weaker signal than a listing with a documented quarterly verification history reaching back several years, even if both display the same most-recent timestamp. The verification history functions as a longitudinal proxy for the directory’s operational discipline.
The data in Table 2 maps decay rates against fact types, drawing on observable retrieval system behaviour and on the broader principle, articulated in Harvard Business Review editorial standards, that surprising findings — those not easily replicable by simply querying a large language model — derive their credibility partly from temporal specificity.
Table 2: Estimated Decay Rates by Fact Type in Directory Citations
| Fact Type | Half-Life (Months) | Verification Cadence Recommended | Impact On Citation Weight If Stale |
|---|---|---|---|
| Street address | 36 | Annual | Moderate |
| Legal entity name | 48 | Biennial | High (mismatch flag) |
| Phone number | 18 | Semi-annual | High (verification fail) |
| Operating hours | 6 | Quarterly | Severe (downranking) |
| Services offered | 12 | Quarterly | Moderate |
| Pricing | 4 | Monthly | Severe (suppressed) |
| Leadership/key personnel | 9 | Quarterly | High |
| Certifications and licences | 12 | Annual or on renewal | Severe (legal risk flag) |
Cross-referencing Table 2 reveals that pricing and operating-hours fields decay the fastest and are simultaneously the fields where staleness most damages a citation’s overall weight. Operators who must triage their verification effort should prioritise these fields. The temptation to treat all verification with equal urgency leads to inefficient time allocation, particularly for owner-operators with limited administrative capacity.
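Under an exponential-decay assumption, the half-lives in Table 2 translate directly into a per-field freshness factor, which can then be averaged into a listing-level score. The sketch below is a working approximation: the exponential form, the field-weight parameter, and the dictionary keys are assumptions layered on top of the table's estimates.

```python
from datetime import date

# Half-lives in months, taken from Table 2.
HALF_LIFE_MONTHS = {
    "street_address": 36, "legal_entity_name": 48, "phone_number": 18,
    "operating_hours": 6, "services_offered": 12, "pricing": 4,
    "leadership": 9, "certifications": 12,
}

def field_freshness(fact_type: str, last_verified: date, today: date) -> float:
    """Decay-adjusted freshness in [0, 1]: 1.0 when just verified, 0.5 after one half-life."""
    age_months = (today.year - last_verified.year) * 12 + (today.month - last_verified.month)
    return 0.5 ** (age_months / HALF_LIFE_MONTHS[fact_type])

def listing_freshness(last_verified: dict, today: date, field_weights: dict) -> float:
    """Weighted average of per-field freshness; the weights reflect operator judgement."""
    total = sum(field_weights[f] for f in last_verified)
    return sum(field_freshness(f, d, today) * field_weights[f]
               for f, d in last_verified.items()) / total
```

Under these assumptions, a pricing field last verified eight months ago retains a freshness factor of 0.25 (0.5 raised to 8 divided by 4), while a street address verified a year ago retains roughly 0.79, which is precisely the field-level differentiation that a single listing-level timestamp cannot support.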
Cross-Source Corroboration
Cross-Source Corroboration is the component that most distinguishes DCAT from earlier evaluative frameworks. Retrieval systems in 2026 do not evaluate directory citations in isolation. They evaluate them in concert with other citations referencing the same entity, treating the constellation of citations as a corroboration network. A claim supported by five independent directory citations is materially stronger than the same claim supported by one citation, even if the one citation is from a higher-tier directory.
The corroboration calculation is not, however, a simple count. Three rules govern how a retrieval system appears to combine citations. First, independence matters: five citations that demonstrably derive from the same upstream source (a single data licence, a syndication feed) count as one. The retrieval system must be able to determine, by examining provenance signals, whether two citations are independently authored or whether one is a downstream replication of the other.
Second, agreement matters more than count. Four citations that agree on a fact, plus one that disagrees, produce a weaker overall signal than four citations that agree, with no disagreement. The dissenting citation creates ambiguity that the retrieval system must resolve, often by downweighting the entire claim. On the question of how dissenting signals propagate through corroboration networks, the literature on analyst-firm methodology provides useful parallels: Forrester’s insistence on consistent application of criteria across all participants in a Wave evaluation is a structural commitment to suppressing the kind of internally inconsistent signals that would otherwise contaminate the resulting ranking.
Third, tier diversity matters. A constellation of citations spread across Tier 1, Tier 2, and Tier 3 directories produces a stronger corroboration signal than the same number of citations concentrated in a single tier, because the diversity reduces the probability that the agreement is the result of a shared upstream error. This is an applied form of the methodological independence principle that runs through the citation policies of Forrester and the editorial standards of major publishers.
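Applied together, the three rules suggest a corroboration calculation along the lines of the sketch below. The dictionary keys, the cap on the citation count, and the point scheme are assumptions introduced for illustration; only the three rules themselves come from the preceding discussion.

```python
from collections import Counter

def corroboration_score(citations: list) -> float:
    """Illustrative 0-to-5 corroboration score for one claim about one entity.

    Each citation is a dict such as:
    {"source_id": "dir-a", "upstream": None, "claim": "HQ in Leeds", "tier": 2}
    """
    # Rule 1: independence. Citations sharing an upstream feed collapse to one.
    independent = {(c["upstream"] or c["source_id"]): c for c in citations}
    pool = list(independent.values())
    if not pool:
        return 0.0

    # Rule 2: agreement matters more than count. Any dissent drags the claim down.
    claim_counts = Counter(c["claim"] for c in pool)
    agreement_ratio = claim_counts.most_common(1)[0][1] / len(pool)

    # Rule 3: tier diversity reduces the risk of a shared upstream error.
    diversity = min(len({c["tier"] for c in pool}), 3) / 3.0

    base = min(len(pool), 5)   # diminishing returns beyond five independent sources
    return base * agreement_ratio * diversity
```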
The practical implication for a small business operator is that the citation portfolio should be diversified — not only across directories but across tiers. A portfolio consisting entirely of Tier 1 citations is, counter-intuitively, suboptimal: it lacks the breadth that retrieval systems read as corroborative independence. A portfolio consisting entirely of Tier 5 citations is plainly inadequate. The optimum, on current evidence, is a portfolio anchored by a small number of Tier 1 and Tier 2 placements and broadened by a curated layer of Tier 3 placements, with deliberate avoidance of Tier 4 and below.
Why Traditional SEO Models Fail Here
Traditional SEO models — those developed during the link-graph era and refined through successive search algorithm updates between roughly 2005 and 2022 — treat directory citations as inbound link signals subject to the same evaluation logic as any other backlink. A directory listing produces a link, the link carries some quantum of authority transfer based on the directory’s domain metrics, and the link contributes to the listed entity’s ranking in proportional measure. This model has been adequate for keyword-based ranking systems and remains partially applicable to classical search interfaces. It fails, however, in three specific ways when applied to retrieval-augmented generative search systems, and the failures are not incremental — they are structural.
The first structural failure concerns the unit of evaluation. Traditional models evaluate the link; DCAT evaluates the citation as a small, parsed document with internal coherence and external corroboration properties. A retrieval system fielding a query does not ask “how much authority does this link transfer?” It asks “what does this source claim about the entity, how confident am I in that claim, and how does it agree or disagree with what other sources claim?” These are fundamentally different questions, and the answers diverge. A directory link from a high-domain-authority site can carry a strong link-graph signal while supplying a citation that is contextually thin, topically misaligned, or internally inconsistent — in which case the retrieval system will discount the citation regardless of the underlying domain’s metrics. The link-graph signal and the citation-quality signal are orthogonal, and retrieval systems weight the latter far more heavily.
The second structural failure concerns the treatment of corroboration. Traditional models accumulate signal: more links, more authority. Retrieval systems triangulate signal: more independent citations agreeing on a claim, more confidence. The asymmetry has practical consequences. Under a traditional model, ten citations from related directories operating under shared ownership might each contribute incremental authority. Under DCAT, those ten citations collapse to something near one, because the retrieval system identifies them as non-independent. Operators who built citation portfolios under the traditional model may find that substantial portions of those portfolios contribute almost nothing to AI search visibility, and may even contribute negatively if the shared-ownership pattern is read as a manipulation signal.
The third structural failure concerns temporal dynamics. Traditional models treat a citation as a relatively stable asset whose value erodes slowly as the link ages. Retrieval systems apply differential decay at the field level, with some fields aging out of usefulness within months. A directory listing that was accurate at the time of submission and has not been updated since may, depending on what fields it contains, supply almost no current value to a retrieval system even though the underlying link continues to exist and continues to register on link-graph crawlers. The maintenance burden under DCAT is substantially higher than under traditional models, and operators who treat citations as a one-time setup activity systematically underperform those who treat them as an ongoing operational responsibility.
Beyond these three structural failures, traditional models also fail to account for the governance dimension that increasingly conditions how retrieval systems weight any external source. Deloitte’s finding that only 21% of enterprises have mature AI governance suggests that the systems consuming directory data are operating under significant uncertainty about which sources to trust. In conditions of uncertainty, retrieval systems default to conservative weighting — they prefer fewer high-confidence citations to many low-confidence ones. This bias toward conservatism penalises the volume-oriented citation strategies that traditional SEO encouraged, and rewards the curation-oriented strategies that DCAT prescribes.
One personal note, offered as a teaching moment rather than confession: the local services operation referenced earlier spent, in its third year, roughly £2,400 on a citation-acquisition service that delivered 180 listings across an array of directories. Within eighteen months, perhaps fifteen of those listings produced any measurable contribution to organic visibility under traditional search. Under retrieval-augmented search, post-2024, the contribution dropped further still, because the bulk of the listings clustered in what would now be classified as Tier 4 and Tier 5 directories. The same budget, applied to verification and enrichment of a smaller portfolio of Tier 1 and Tier 2 placements, would have produced materially better outcomes. The lesson is not that volume is irrelevant — diversity matters, as the Cross-Source Corroboration component makes clear — but that volume detached from quality is, at best, neutral and, at worst, a drag on the broader citation profile.
The editorial standards governing premier business publications are themselves evolving in response to AI capabilities. Harvard Business Review’s guidelines now explicitly reject ideas that are easily replicable by querying a large language model, on the grounds that such ideas fail the dual-criterion test of insight and applicability. The implication for directory practice is direct: directories that publish content easily synthesised by an LLM from public sources contribute weak citations to the corroboration network, because the retrieval system can produce equivalent content from its own training data without the citation. Directories that publish content reflecting genuine editorial judgement — verification, expert assessment, contextual analysis — contribute citations that the retrieval system cannot replicate internally and must therefore weight more heavily.
Applying DCAT To A Live Query
The framework’s value lies in its applicability. The remainder of this section walks through a complete worked scenario, scoring a hypothetical citation portfolio against all five DCAT components and interpreting the resulting ranked output. The scenario is drawn from a common B2B context — a SaaS vendor lookup — but the procedure generalises to local services, professional services, and consumer retail with appropriate substitutions for the topical alignment and geographic scope variables.
Scenario: SaaS Vendor Lookup
Consider a mid-sized SaaS company, headquartered in the United Kingdom, supplying a workforce-analytics product to the financial services sector. The company has been operating for six years, has roughly 80 employees, and has accumulated a citation portfolio across 22 directories of varying tiers. A retrieval system fielding the query “workforce analytics platforms for UK financial services firms” must decide which sources to consult and how heavily to weight each. The DCAT framework is applied to predict, in advance, which of the 22 citations will materially contribute to the answer the retrieval system generates.
The first step is portfolio classification. The 22 citations are sorted into tiers using the Directory Authority Signals component. Three citations fall into Tier 1 (an institutional analyst directory, a vertical SaaS directory with published editorial criteria, and a financial-services trade-body directory). Five citations fall into Tier 2 (curated SaaS aggregators with stated inclusion criteria and partial methodological transparency). Six citations fall into Tier 3 (editorially reviewed but undocumented). Four citations fall into Tier 4 (self-service with automated verification). Four citations fall into Tier 5 or below (self-service without verification, or scraped aggregators).
The second step is context evaluation. Each citation is reviewed for descriptive richness, structured-data correspondence, and authorship provenance. The three Tier 1 citations all score highly on context: each contains 200-plus-word descriptions, structured-data correspondence is complete, and editorial content is distinguished from operator-supplied content. The Tier 2 citations show mixed context scores — three are rich, two are thin. The Tier 3 citations are uniformly thin, with descriptions averaging 60 to 80 words and inconsistent structured-data presence. The Tier 4 and Tier 5 citations are essentially context-free, comprising name-address-phone fields with little supporting prose.
The third step is topical alignment. The vertical SaaS directory in Tier 1 produces a high alignment score because its remit explicitly covers workforce analytics. The financial-services trade-body directory in Tier 1 produces a high alignment score on the audience-vertical dimension (financial services) but only moderate alignment on the product dimension (the directory covers many product categories serving financial services, not specifically workforce analytics). The institutional analyst directory in Tier 1 produces a high alignment score because the analyst firm’s coverage area includes workforce technology. Tier 2 citations show variable alignment depending on each directory’s specialisation. Tier 3 citations are mostly generalist business directories with low topical alignment. Tier 4 and Tier 5 citations have effectively no alignment signal because their remit is undifferentiated.
The fourth step is freshness assessment. The portfolio’s freshness is reviewed at the field level. The three Tier 1 citations all show quarterly verification cadence with field-level timestamps. The Tier 2 citations show annual or biennial verification cadence with listing-level timestamps. The Tier 3 citations have no documented verification cadence; the retrieval system must apply worst-case decay assumptions. The Tier 4 citations carry recent verification timestamps but no historical cadence record. The Tier 5 citations have unknown verification status.
The fifth step is corroboration analysis. The portfolio is evaluated for independence (are any of the 22 citations syndicated from the same upstream source?), agreement (do the citations agree on the firm’s name, address, sector focus, and product description?), and tier diversity. Two of the Tier 4 citations are identified as syndicated from the same data feed and are collapsed to one. The remaining 21 citations show 19 in agreement on all major facts, with two disagreements: one Tier 5 citation lists an outdated address from a previous office; one Tier 3 citation describes the firm’s product category in terms that no longer match the firm’s current positioning. Both disagreements act as drag on the corroboration signal.
Scoring Each Directory Citation
Translating the qualitative assessment into composite scores requires a scoring rubric. For the worked scenario, each component is scored on a 0-to-5 scale, with the composite being a weighted sum: Authority (weight 0.25), Context (0.20), Topical Alignment (0.20), Freshness (0.15), Corroboration (0.20). The weighting reflects current observable retrieval-system behaviour and is offered as a working approximation, not a proven specification. Operators applying DCAT should expect to adjust weights based on their own measurement of retrieval-system response.
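The rubric translates directly into a weighted sum. The sketch below uses the working weights stated above; the function name and the example sub-scores are assumptions, chosen so that the output lands near the top Tier 1 composite reported in the next paragraph.

```python
DCAT_WEIGHTS = {            # working approximation, not a proven specification
    "authority": 0.25,
    "context": 0.20,
    "topical_alignment": 0.20,
    "freshness": 0.15,
    "corroboration": 0.20,
}

def dcat_composite(sub_scores: dict, weights: dict = DCAT_WEIGHTS) -> float:
    """Composite DCAT score: weighted sum of the five 0-to-5 component scores."""
    return sum(sub_scores[name] * weight for name, weight in weights.items())

# Hypothetical sub-scores for a strong Tier 1 citation; the composite comes out at 4.7.
example = {"authority": 5.0, "context": 4.8, "topical_alignment": 4.8,
           "freshness": 4.5, "corroboration": 4.3}
print(round(dcat_composite(example), 1))
```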
The three Tier 1 citations score, respectively, 4.7, 4.5, and 4.3 on the composite. The five Tier 2 citations score between 3.4 and 4.0. The six Tier 3 citations score between 2.1 and 2.8. The four Tier 4 citations (now three after deduplication) score between 1.4 and 1.9. Of the four Tier 5 citations, three score between 0.4 and 0.9 and one scores negatively (-0.5) because its outdated address creates a corroboration drag. The composite scores predict the order in which the retrieval system will consult the citations and the weight it will apply to each.
Table 3 below summarises the findings of the scoring exercise and translates them into recommendations for the operator.
Table 3: Composite DCAT Scores and Recommended Actions for the SaaS Scenario Portfolio
| Tier Group | Average Composite Score | Predicted Contribution to AI Answer | Recommended Action |
|---|---|---|---|
| Tier 1 (3 citations) | 4.5 | Primary corroboration; cited frequently | Maintain quarterly verification; expand context |
| Tier 2 (5 citations) | 3.7 | Supporting evidence; cited occasionally | Enrich context; pursue editorial upgrades |
| Tier 3 (6 citations) | 2.5 | Marginal contribution; rarely cited | Triage: improve top three, retire others |
| Tier 4 and 5 (7 citations) | 0.9 | Negligible or negative contribution | Remove outdated entries; consider full retirement |
Interpreting The Ranked Output
The ranked output of a DCAT analysis is not, strictly speaking, a list of which citations the retrieval system will quote. It is a probability-weighted forecast of which citations will materially influence the generated answer. Several interpretive cautions apply. First, the absolute scores are less informative than the relative scores. A composite of 4.7 is meaningful only in comparison with the other citations in the portfolio and with the citations that competing entities have accumulated. A SaaS vendor whose top citation scores 4.7 may still lose visibility to a competitor whose top citation scores 4.9, even though both are in the same Tier 1 band.
Second, the scoring forecasts the typical case, not the edge case. Retrieval systems exhibit non-deterministic behaviour: the same query, asked at different times or routed through different model variants, may surface different citations from the same portfolio. The DCAT score predicts the central tendency. Operators who require deterministic citation surfacing — for instance, in regulated industries where specific verifications must always be produced — must supplement the DCAT analysis with structured data approaches that bypass the retrieval system’s discretionary weighting; the structural-integrity considerations relevant to that supplementary approach are examined in separate published work.
Third, the scoring does not directly translate to traffic or conversion outcomes. A citation that influences a generated answer may or may not produce a click-through to the underlying entity’s website, depending on how the AI search interface presents the answer and whether the user pursues a follow-up action. The DCAT framework predicts inclusion in the answer, not the downstream commercial outcome. Operators conflating the two will misallocate resources.
For the SaaS scenario, the interpretive output is straightforward. The three Tier 1 citations are the operator’s primary assets and should receive the bulk of ongoing maintenance attention. The five Tier 2 citations are the operator’s principal area for upgrade investment, since incremental improvements in context, freshness, or topical alignment can move them toward Tier 1 composite scores. The six Tier 3 citations should be triaged: the top three retained and improved, the bottom three retired. The seven Tier 4 and Tier 5 citations should be reviewed for accuracy, with outdated entries either updated or removed; the decision to retire entirely versus maintain depends on whether the directories in question show signs of upgrading their tier classification (some Tier 4 directories have, in recent years, added editorial layers that promote them to Tier 3, which materially changes the calculus).
Edge Cases And Framework Limitations
No framework survives contact with reality without revision. DCAT is no exception. The honest application of the framework requires acknowledging where it produces ambiguous, counter-intuitive, or simply wrong predictions, and where its underlying assumptions are most likely to be invalidated by future shifts in retrieval-system architecture. Several edge cases deserve attention.
The first edge case concerns directories that operate at the extremes of authority — both very high and very low. At the very high end, certain institutional directories (analyst firm directories, professional regulatory bodies, government registries) carry authority signals that the DCAT scoring approach systematically under-represents. A regulator’s published list of licensed entities supplies a citation that retrieval systems treat as effectively definitional, not merely corroborative. The DCAT framework, with its weighted-sum approach, can score such citations as Tier 1 but cannot capture the qualitative leap from “highly weighted” to “treated as ground truth.” Operators with citations of this kind should recognise that the DCAT score understates their value, and should not be misled into thinking that a Tier 1 trade directory is equivalent to a regulator’s registry. At the very low end, certain directories produce citations that retrieval systems actively use as negative signals — flags that the listed entity is engaged in low-quality or manipulative practices. The DCAT framework scores these in the negative range, but the actual penalty applied may be larger than the framework’s continuous scoring suggests, because retrieval systems sometimes apply categorical exclusions rather than continuous downweighting.
The second edge case concerns industries with thin directory coverage. In sectors where directories are scarce, undeveloped, or dominated by a single provider, the corroboration component of DCAT degenerates. With only one or two directories covering the sector, there is no meaningful corroboration network to evaluate, and tier diversity becomes impossible by construction. Retrieval systems fielding queries in such sectors fall back on alternative corroboration sources — first-party content, news mentions, social signals — and the relative weight of directory citations declines. Operators in thinly covered sectors should expect DCAT to be of limited use and should invest correspondingly more in non-directory corroboration channels. The dynamics of citation visibility in sparse-directory verticals appear to diverge meaningfully from those in dense-directory verticals and require differentiated strategies.
The third edge case concerns multinational entities operating across jurisdictions with divergent directory ecosystems. A company headquartered in the United Kingdom with material operations in Germany, Brazil, and Singapore must accumulate citations across four directory ecosystems whose tier structures, editorial conventions, and language norms differ substantially. The DCAT framework was developed primarily against English-language directories, and its tier classifications may not transfer cleanly to ecosystems where the institutional history of business directories differs (for instance, where regulator registries play a much larger role and commercial directories a much smaller one, or vice versa). Multinational operators should expect to develop region-specific tier classifications rather than applying a single global framework.
The fourth edge case concerns regulated industries with disclosure restrictions. Healthcare providers, legal practices, and financial advisors operate under disclosure regimes that constrain what may appear in directory listings — restrictions on testimonial use, claim limitations, disclosure of professional credentials. Retrieval systems are aware of some but not all such regimes, and may incorrectly weight citations that comply with disclosure restrictions as thin or uninformative compared to citations from unregulated competitors. The DCAT context-richness component, in particular, can produce misleading scores in regulated sectors, because the absence of certain content reflects compliance, not editorial weakness. Operators in regulated industries should adjust their interpretation of DCAT scores accordingly and should not pursue context enrichment strategies that would breach disclosure rules even if doing so would raise the framework’s predicted score.
The fifth edge case concerns rapid changes in retrieval-system behaviour. The framework’s weights and tier definitions reflect observable behaviour on current trajectories, as documented in 2025 and into 2026. Retrieval-system architectures continue to evolve, and substantial behavioural shifts can occur on quarterly timescales. The DCAT framework should be treated as a provisional model, subject to revision as new evidence accumulates. Operators applying DCAT in 2027 should expect to recalibrate the component weights and possibly to add or retire components as the underlying retrieval systems shift their treatment of directory data.
Beyond these specific edge cases, DCAT carries several general limitations that practitioners should hold in view. The framework assumes that retrieval systems behave in accordance with the corroboration-network principles described in academic citation literature and analyst-firm methodology documents — an assumption that is, on the available evidence, broadly justified but cannot be independently verified for any specific proprietary system. Retrieval-system providers do not, as a matter of practice, publish their citation-evaluation algorithms. The DCAT framework infers behaviour from observable outputs and from the methodological norms documented in the broader literature on evaluative ranking, including Forrester’s published Wave methodology and the editorial standards articulated by major business publishers.
The framework also assumes a degree of operator agency that not all small businesses possess. Applying DCAT requires the operator to evaluate, classify, and maintain a citation portfolio across multiple directories — an undertaking that, even in modest portfolios, requires several hours of monthly attention. Operators without that capacity may achieve better outcomes by focusing exclusively on the highest-tier citations they can secure and accepting that lower-tier citations will go unmaintained, rather than attempting full DCAT compliance with insufficient resources. The framework is descriptive of optimal practice, not prescriptive of the only acceptable practice.
A further limitation: the framework focuses on directory citations specifically and does not address how those citations interact with other corroboration sources. First-party content (the entity’s own website), earned media (news coverage, podcast mentions), and third-party reviews all contribute to the corroboration network that retrieval systems consult. A complete picture of an entity’s AI search visibility requires evaluating all four channels — directory citations, first-party content, earned media, and reviews — in concert. DCAT addresses only the first. Operators who optimise their directory portfolio without attending to the other three channels will see partial returns. Deloitte’s analysis of organisations standing at the untapped edge of AI’s potential is, in part, a reminder that capability gaps are rarely concentrated in a single channel; they manifest as system-wide deficiencies that require system-wide remediation.
One particular limitation deserves separate note: the DCAT framework, as constructed, is silent on the question of citation cost. The framework predicts which citations will contribute most to AI search visibility, but it does not directly address whether the cost of acquiring or maintaining those citations is justified by their contribution. A Tier 1 institutional directory citation may carry an annual fee in the low four figures; a Tier 3 generalist business directory citation may be free. The cost-benefit calculation depends on the operator’s overall marketing budget, the relative importance of AI search visibility within that budget, and the contribution of each citation tier to revenue outcomes. Operators must layer their own cost-benefit analysis on top of the DCAT predictions, and should be cautious about chasing high-tier citations whose costs exceed the marginal revenue they produce.
Table 4 maps common citation portfolio strategies against their predicted DCAT performance and their typical cost profiles, providing a practical reference for operators allocating finite budget.
Table 4: Citation Portfolio Strategies Compared by DCAT Performance and Cost Profile
| Strategy | Tier Distribution | Typical Annual Cost | DCAT Composite Forecast | Best Suited For |
|---|---|---|---|---|
| Volume maximisation (legacy) | Heavy Tier 4-5; minimal Tier 1-2 | Low to moderate | Poor; possible negative drag | Effectively obsolete |
| Pure premium | Tier 1 only; 3-5 citations | High | Moderate; lacks corroboration breadth | Highly regulated sectors |
| Pyramid (recommended) | 2-3 Tier 1; 5-8 Tier 2; 5-10 Tier 3 | Moderate to high | Strong across all components | Most B2B and professional services |
| Vertical concentration | Heavy Tier 1-2 within one vertical | Moderate | Strong on alignment; thin on diversity | Niche specialists |
| Geographic concentration | Tier 1-3 within one region | Low to moderate | Strong for local queries; weak nationally | Local services and retail |
| Mixed channel | Tier 1-2 directories plus heavy first-party | Variable | Strong if first-party is rich | Content-led businesses |
| Regulator-anchored | Regulator registry plus Tier 1-2 directories | Statutory plus moderate | Authoritative for sector queries | Healthcare, legal, financial advisors |
| Aggregator-led (declining) | One large aggregator syndicating across many | Moderate | Weak; collapses under independence test | Largely no longer recommended |
| Bootstrap (resource-constrained) | 1-2 Tier 1; 3-4 Tier 3 free placements | Minimal | Adequate; well below optimal | Solo operators and very small firms |
The strategies in Table 4 are not mutually exclusive; sophisticated operators often blend two or three. A regional professional services firm, for instance, might combine a regulator-anchored strategy (statutory listing plus a professional body directory) with a geographic concentration strategy (regional business directories) and a mixed-channel strategy (rich first-party content). The DCAT framework provides the evaluative scaffolding for assessing how well any blended strategy is performing in aggregate, even when no single strategy fully describes the operator’s approach.
Several questions surface from this analysis that the available evidence cannot resolve, and that the field should pursue. First, how stable are the component weights of DCAT across query types? The weights proposed in this analysis are calibrated against general business-information queries, but it is plausible that retrieval systems apply different weight vectors for navigational queries versus comparative queries versus transactional queries. Future research should test whether a single weight vector adequately describes citation evaluation across query categories, or whether category-specific weights produce materially better predictive accuracy.
Second, how do retrieval systems handle citation conflicts when both citations are from high-tier sources? The DCAT corroboration component handles conflicts in the general case, but the special case of two Tier 1 sources disagreeing is under-specified. Does the retrieval system default to the more recent source, the more topically aligned source, or some combination? The answer has practical implications for operators who must update facts across multiple high-tier directories and who may, transiently, have inconsistent data live across the portfolio. Empirical work on conflict resolution under high-authority disagreement would substantially clarify the practical playbook for citation maintenance.
Third, what is the relationship between directory citation strength and the broader corroboration ecosystem of first-party content, earned media, and reviews? This analysis has treated directory citations as a discrete channel, but retrieval systems consume all four channels in concert. The marginal contribution of an additional directory citation may depend strongly on the operator’s existing strength in the other three channels — strong first-party content may, for instance, reduce the marginal value of additional directory citations, while weak first-party content may increase it. Investigation of the cross-channel substitution and complementarity effects would significantly refine the resource-allocation guidance available to small business operators navigating the AI search environment that is taking shape on present trajectories.

