Three directory operators rang me last quarter with the same story. Traffic down 40% year-on-year. Long-time advertisers quietly letting contracts lapse. Google Search Console showing impressions flat but clicks halved — the classic AI Overview bruise. Each of them asked the same question in slightly different words: should I sell the site while it still has value?
My answer surprised them. No. But you do need to rebuild the thing from the schema up, and you have roughly 18 months to do it.
Here’s the counterintuitive bit: the directories dying fastest aren’t being killed by AI search. They’re being killed by their own refusal to become the source material AI search depends on. That distinction matters, and it changes everything about how you respond.
The 3 AM Panic Every Directory Owner Knows
When ChatGPT answers the question your site used to rank for
You type “best commercial roofers in Manchester” into ChatGPT. It gives you six names, contact details, specialisations, and a short comparison. No click-through. No ad impression. No affiliate commission. The directory that spent seven years ranking for that exact query earned precisely nothing from the interaction — except, possibly, a citation link at the bottom that 3% of users will actually tap.
I audited a trade directory in April that had lost 61% of its organic sessions in nine months. The queries hadn’t disappeared from search demand data — SEMrush showed volume had actually grown. The users were still asking. They just weren’t arriving.
The traffic graph that broke last Tuesday
If you run GA4 alongside Search Console, you’ve probably seen the pattern: impressions holding steady or rising, click-through rate collapsing from 4–5% down to 1.2%. That’s not a ranking problem. That’s Google answering the query on the results page itself via AI Overviews and Gemini features, with your content feeding the answer.
Did you know? B2B websites are averaging 34% year-over-year traffic declines, with Gartner projecting 25–50% traditional search volume reductions by 2028 depending on vertical, according to analysis from ZipTie.dev.
Why your best advertisers stopped renewing
Advertisers don’t cancel because AI scares them. They cancel because their monthly reports show fewer qualified leads. When a plumber pays you £240/month for a featured listing and their dashboard shows calls-from-listing dropping from 22 to 8, the renewal conversation writes itself. The uncomfortable truth: most directory owners don’t know whether those calls dropped because AI intercepted the intent, or because the listing page genuinely got less useful. Both are fixable. Neither gets fixed by ignoring it.
What AI Search Actually Replaces
Transactional queries vs trust-based decisions
AI search is excellent at summarising. It’s mediocre at vouching. This distinction is the entire basis for directory survival.
“What are the top-rated wedding photographers in Bristol?” — ChatGPT handles this passably. “Which Bristol wedding photographer has actually shot at my venue, costs between £1,800 and £2,400, and won’t ghost me three weeks before the date?” — that’s a trust-based decision requiring verified, first-party signals no LLM can hallucinate into existence.
The “ten blue links” use cases that are dying
Let me be blunt about what’s already dead or dying:
| Query Type | Example | AI Handles It? | Directory Value | Honest Verdict |
|---|---|---|---|---|
| Informational | “What is a quantity surveyor?” | Yes, perfectly | Near zero | Stop competing |
| Comparison | “Xero vs QuickBooks for freelancers” | Yes, with caveats | Low unless you have first-party data | Needs unique angle |
| Navigational | “Booking.com Edinburgh” | Poorly | High | Defend aggressively |
| Verification | “Is [company] legitimate?” | Unreliably | Very high | Your new moat |
Categories where directories still win by default
Local services with licensing requirements (electricians, solicitors, medical practitioners). Trade-specific B2B verticals where buyers need insurance verification, capacity data, and references. Anywhere the cost of a bad decision exceeds £500. AI can list ten options; it cannot phone the business, confirm they’re accepting new clients, or verify their Gas Safe registration is current. That’s not a limitation of today’s models — it’s structural.
Myth: AI search will replace directories because it gives users faster answers. Reality: Gemini Deep Research takes 3–5 minutes to produce a comprehensive response. For “plumber near me at 10pm on a Sunday”, that’s an eternity. Speed is a directory advantage, not a liability.
The Citation Economy Nobody Saw Coming
How Perplexity and Google AIO pick their sources
I’ve spent the last 14 months reverse-engineering what gets cited in AI answers. The patterns are clearer than people admit:
Perplexity and Google AIO disproportionately cite sources with (a) clean schema markup, (b) high topical authority in a narrow niche, (c) structured data that matches the query’s entity type, and (d) recent freshness signals. Directories check three of these four boxes by design. That’s the opportunity most operators are missing while they panic about traffic graphs.
Why structured directory data became LLM fuel
Large language models don’t read web pages the way humans do. They consume tokens, weight entities, and lean heavily on structured data when it’s available because it reduces inference cost. A well-formed LocalBusiness schema with aggregateRating, priceRange, and areaServed is easier for an LLM to ingest than a 2,000-word blog post making the same claims in prose.
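To make that concrete, here is roughly what that basic block looks like in JSON-LD. The business name and figures are invented; in practice your CMS populates the real values per listing:

```json
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Roofing Ltd",
  "priceRange": "££",
  "areaServed": "Manchester",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.7",
    "reviewCount": "38"
  }
}
```

Those dozen lines make the same claims as a long prose description, but in a form a model can consume without guesswork.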
This is why some directories are seeing increased citation share in AI answers even as their direct traffic drops. The business model shifts from “user finds listing → clicks → converts” to “AI cites listing → brand recall → direct visit later”. Harder to measure. Still valuable, if you can prove it to advertisers.
Tracking referral patterns from AI overviews
Set up these tracking mechanisms today if you haven’t already:
- GA4 custom channel grouping for referrals from `perplexity.ai`, `chat.openai.com`, `gemini.google.com`, and `copilot.microsoft.com`
- Branded search volume tracking in Search Console (rising branded search with flat organic = AI citation working)
- Direct traffic to deep listing URLs (AI citations often drop users on specific pages, bypassing homepages)
- Server log analysis for AI crawler user agents: `GPTBot`, `PerplexityBot`, `Google-Extended`, `ClaudeBot`
Quick tip: Check your server logs for GPTBot and PerplexityBot visits over the past 90 days. If you’re getting fewer than 50 hits per week and your directory has more than 500 listings, your robots.txt or site architecture is blocking the very crawlers that decide whether AI cites you. Fix that before you touch anything else.
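If you want something more repeatable than eyeballing the raw log, here is a minimal sketch of that check in Python. It assumes a standard combined-format access log at a made-up path; swap in your own path and extend the bot list as new crawlers appear:

```python
import re
from collections import Counter
from datetime import datetime

# AI crawler user agents from the list above; extend as new bots appear.
AI_BOTS = ["GPTBot", "PerplexityBot", "Google-Extended", "ClaudeBot"]

# Hypothetical path; point this at your own access log(s).
LOG_PATH = "/var/log/nginx/access.log"

# Combined log format: pull out the timestamp and the trailing user-agent string.
LINE_RE = re.compile(r'\[(?P<ts>[^\]]+)\].*"(?P<ua>[^"]*)"\s*$')

hits_per_week = Counter()

with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = LINE_RE.search(line)
        if not match:
            continue
        bot = next((b for b in AI_BOTS if b in match.group("ua")), None)
        if bot is None:
            continue
        # Timestamp looks like "12/May/2025:10:14:03 +0000"; bucket by ISO week.
        ts = datetime.strptime(match.group("ts").split()[0], "%d/%b/%Y:%H:%M:%S")
        year, week, _ = ts.isocalendar()
        hits_per_week[(bot, year, week)] += 1

for (bot, year, week), count in sorted(hits_per_week.items()):
    print(f"{year}-W{week:02d}  {bot:<16} {count} hits")
```

Run it weekly and keep the counts; a directory with healthy citation prospects should show steady or rising crawler activity, not silence.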
Rebuilding Directories as AI-Native Assets
Schema markup that actually gets ingested
Most directories I audit have schema — technically. It validates in Google’s Rich Results Test. It also misses 60% of the fields LLMs actually use for disambiguation. The minimum viable schema for an AI-native directory listing in 2025:
| Schema Field | Why It Matters | Adoption Rate I See |
|---|---|---|
| `@id` with stable URI | Entity resolution across citations | ~15% |
| `sameAs` linking to Wikidata, Companies House, LinkedIn | Cross-reference verification | ~8% |
| `knowsAbout` / `areaServed` | Query-to-listing matching | ~25% |
| `review` with individual Person authors | Trust signals LLMs weight heavily | ~12% |
| `hasCredential` for licences/certifications | Verification-based ranking | ~3% |
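Put together, an upgraded listing looks something like the JSON-LD below. Every identifier, URL, and credential here is a placeholder (the Wikidata and Companies House links especially), and the exact property choices are worth validating against schema.org before you ship, but the shape is the point:

```json
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "@id": "https://example-directory.co.uk/listings/example-roofing-ltd#business",
  "name": "Example Roofing Ltd",
  "sameAs": [
    "https://www.wikidata.org/wiki/Q00000000",
    "https://find-and-update.company-information.service.gov.uk/company/00000000",
    "https://www.linkedin.com/company/example-roofing-ltd"
  ],
  "knowsAbout": ["commercial flat roofing", "EPDM membrane installation"],
  "areaServed": "Greater Manchester",
  "hasCredential": {
    "@type": "EducationalOccupationalCredential",
    "credentialCategory": "certification",
    "name": "Trade body certification (placeholder)",
    "recognizedBy": { "@type": "Organization", "name": "Example Trade Body" }
  },
  "review": {
    "@type": "Review",
    "author": { "@type": "Person", "name": "J. Smith" },
    "datePublished": "2025-04-12",
    "reviewBody": "Replaced the warehouse roof on schedule; responsive throughout.",
    "reviewRating": { "@type": "Rating", "ratingValue": "5" }
  }
}
```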
First-party data moats LLMs cannot replicate
This is where it gets strategic. An LLM can scrape and summarise publicly available business information. It cannot generate the following unless a directory provides it:
Actual response times for enquiries (measured, not claimed). Verified insurance and licensing status with expiry dates. Real booking availability. Project portfolios with completion dates and client verification. Price ranges derived from actual quotes, not self-reported brackets.
Every single one of those data points is defensible. None can be hallucinated. All of them command premium pricing from both users and advertisers.
Did you know? McKinsey projects $750 billion in U.S. revenue will flow through AI-powered search by 2028. The directories that capture a share of this aren’t the ones fighting AI — they’re the ones becoming its verification layer.
Verification layers as your new pricing tier
My most successful directory client (a regional trades platform I won’t name for contractual reasons) restructured pricing around verification depth last summer:
- Tier 1 (free): Basic listing, self-reported data.
- Tier 2 (£89/month): Verified insurance, licence checks quarterly.
- Tier 3 (£340/month): Monthly site visits, verified project portfolio, response-time SLA monitoring.

Tier 3 sold out in 11 weeks. The waiting list is currently 40 businesses deep. Revenue per listing tripled; the number of listings fell 35%; total revenue rose 68%. The advertisers who left were the ones who couldn’t have survived AI disruption anyway.
Community signals over aggregated listings
The directories surviving this transition are the ones that built genuine communities — not review counts, but actual ongoing relationships with both the listed businesses and the users. G2 is the obvious B2B example (more on that shortly). Locally, the pattern holds: directories with active forums, verified reviewer networks, and regular business events outperform pure-aggregation sites by roughly 3:1 on retained organic traffic, based on the 14 portfolios I’ve audited in the past year.
Myth: More listings equal more value. Reality: A directory with 400 verified, actively managed listings outperforms one with 40,000 stale entries on every metric that matters post-AI: citation rate in LLM answers, advertiser retention, user session depth, and direct traffic share.
Proof From Directories That Pivoted
Yelp’s API licensing play with OpenAI
Yelp’s approach is instructive. Rather than resist AI ingestion, they structured licensing deals with AI platforms — getting paid for the data access that was happening anyway. It’s the same playbook Reddit ran with Google (the $60M/year deal). The lesson: if your data is going to be consumed by LLMs regardless, monetise the pipe rather than fight it. Small directories can’t strike these deals individually, but you can absolutely structure your llms.txt and terms of service to require attribution and rate-limit unlicensed crawlers.
G2’s reviewer network as defensive moat
G2’s moat isn’t its listings — those are largely scrapeable. The moat is the 2.4 million verified reviewers with LinkedIn authentication, company email verification, and ongoing engagement. An LLM can summarise G2 reviews; it cannot produce new ones. When a buyer needs current sentiment on a SaaS tool released three months ago, G2 has it and ChatGPT’s training data doesn’t. That’s the structural advantage directories need to manufacture.
How Tripadvisor rewrote its content strategy in 9 months
Tripadvisor’s pivot is less publicised but more replicable. They moved from generic “top 10” lists (trivially AI-replaceable) to deep first-person experience content with verified trip data, photographer credentials, and timestamp verification. Bookings-from-content rose even as pure informational traffic fell. The reallocation was brutal — they sunsetted thousands of low-quality pages — but the commercial outcome justified it.
What if… ChatGPT launched a free, high-accuracy local business finder tomorrow with real-time availability and verified pricing? The directories that survive are the ones already providing verification data ChatGPT can’t independently produce. Everyone else becomes archive material. The good news: you probably have 12–18 months before that specific scenario becomes real. The bad news: 12–18 months is roughly how long a proper pivot takes.
Your 30-Day Repositioning Plan
Week 1: Audit which queries AI already stole
Open Search Console. Filter by past 16 months. Export queries where impressions are flat or rising but CTR has dropped more than 30%. Those are your AI-intercepted queries. Then — and this is the part most operators skip — test each one manually in ChatGPT, Perplexity, and Google AIO. Note which ones cite your site and which don’t.
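If you would rather not do the CTR filtering by hand, a short script over two Search Console query exports does the same job. This is a sketch only: the filenames are hypothetical, the column names should be checked against whatever your export actually produces, and the 30% threshold matches the rule of thumb above:

```python
import csv

# Hypothetical filenames: two Search Console query exports,
# e.g. the most recent 8 months vs the 8 months before that.
CURRENT_CSV = "gsc_queries_recent.csv"
PREVIOUS_CSV = "gsc_queries_prior.csv"

def load(path):
    """Return {query: (clicks, impressions)} from a GSC query export."""
    out = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Column headers vary between exports ("Query" vs "Top queries"); adjust.
            out[row["Query"]] = (
                int(row["Clicks"].replace(",", "")),
                int(row["Impressions"].replace(",", "")),
            )
    return out

prev, curr = load(PREVIOUS_CSV), load(CURRENT_CSV)

suspects = []
for query, (clicks_now, imps_now) in curr.items():
    if query not in prev or imps_now == 0:
        continue
    clicks_then, imps_then = prev[query]
    if imps_then == 0 or clicks_then == 0:
        continue
    ctr_then, ctr_now = clicks_then / imps_then, clicks_now / imps_now
    # Impressions flat or rising, CTR down more than 30%: likely AI-intercepted.
    if imps_now >= 0.9 * imps_then and ctr_now < 0.7 * ctr_then:
        suspects.append((query, ctr_then, ctr_now, imps_now))

for query, then, now, imps in sorted(suspects, key=lambda r: r[3], reverse=True):
    print(f"{query}: CTR {then:.1%} -> {now:.1%} on {imps} impressions")
```

The output is your candidate list for the manual ChatGPT, Perplexity, and AIO checks.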
You’ll end up with four buckets:
- Lost and uncited: AI answers this query without referencing you. These pages are dead weight. Prune or redirect.
- Lost but cited: AI answers the query and credits you. Different problem — you need to convert citation-driven brand awareness into direct traffic.
- Holding: Traffic hasn’t dropped significantly. Usually navigational or hyper-local. Defend with schema and freshness.
- Opportunity: Queries where AI answers are demonstrably wrong, outdated, or generic. These are your new ranking targets — create content AI can’t fake.
Week 2: Ship schema and llms.txt
If you don’t have an llms.txt file at your root, add it this week. It’s an emerging convention for pointing LLMs at the content you want ingested and signalling how you expect it to be attributed. Pair it with JSON-LD schema upgrades on your top 100 listings. Don’t try to fix all 40,000 at once — I’ve watched three directories paralyse themselves trying. Start with the 100 that drive the most revenue.
Test ingestion by asking Perplexity to describe one of your listings after you’ve updated it. If the summary reflects your structured data, it’s working. If it’s still pulling from three-year-old scraped prose, you’ve got a crawl-frequency problem to solve.
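A quick way to keep Week 2 honest is to audit those top 100 listings for the fields from the earlier table. The sketch below is deliberately crude: a single placeholder URL, a regex rather than a proper HTML parser, and no handling of @graph wrappers, so treat it as a starting point rather than a finished tool:

```python
import json
import re
import urllib.request

# The schema fields from the table in the previous section.
TARGET_FIELDS = ["@id", "sameAs", "knowsAbout", "areaServed", "review", "hasCredential"]

# Hypothetical input: replace with your top-100 listing URLs by revenue.
LISTING_URLS = [
    "https://example-directory.co.uk/listings/example-roofing-ltd",
]

JSONLD_RE = re.compile(
    r'<script[^>]+type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)

for url in LISTING_URLS:
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    present = set()
    for block in JSONLD_RE.findall(html):
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue
        items = data if isinstance(data, list) else [data]
        for item in items:
            present.update(f for f in TARGET_FIELDS if f in item)
    missing = [f for f in TARGET_FIELDS if f not in present]
    print(f"{url}: missing {missing or 'nothing'}")
```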
Did you know? The RecSys 2010 conference convened a panel titled “Will recommenders kill search?” — documented by IBM Research. Fifteen years later, both are still here. “Will X kill Y?” debates almost always resolve into “X changes Y”.
Week 3: Launch one feature AI cannot fake
Pick one. Just one. Options I’ve seen work:
- Response-time tracking: when users enquire through your directory, measure and publish how long each business takes to reply
- Verified availability: integrate with calendars so listings show actual booking slots, not just contact forms
- Photo verification: require date-stamped, geotagged photos from listed businesses, refreshed quarterly
- Credential expiry tracking: display when each licence, insurance policy, or certification was last verified and when it expires
Any of these creates data an LLM structurally cannot produce from public sources. Resist the urge to ship all four at once. One feature, properly executed, beats four half-built ones every time.
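For the first option, response-time tracking, the measurement itself is trivial once you log when an enquiry was sent and when the business first replied. A minimal sketch with invented data:

```python
from datetime import datetime
from statistics import median

# Hypothetical enquiry records; in practice these come from your
# enquiry form plus the first-reply event you log per thread.
enquiries = [
    {"business": "Example Roofing Ltd",
     "sent": "2025-05-01T09:12:00", "first_reply": "2025-05-01T11:40:00"},
    {"business": "Example Roofing Ltd",
     "sent": "2025-05-03T14:05:00", "first_reply": "2025-05-04T08:30:00"},
    {"business": "Example Plumbing Co",
     "sent": "2025-05-02T10:00:00", "first_reply": "2025-05-02T10:25:00"},
]

response_hours = {}
for e in enquiries:
    sent = datetime.fromisoformat(e["sent"])
    replied = datetime.fromisoformat(e["first_reply"])
    hours = (replied - sent).total_seconds() / 3600
    response_hours.setdefault(e["business"], []).append(hours)

# Median is harder to game than mean: one fast reply can't mask a slow pattern.
for business, hours in response_hours.items():
    print(f"{business}: typical response {median(hours):.1f}h over {len(hours)} enquiries")
```

Publish the figure on the listing page and refresh it monthly; that is the data point no model can scrape from the public web.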
If you’re researching how verification-layer directories structure their listings in practice, Business Directory provides a reasonable reference point for manual review standards — the kind of editorial curation AI can’t replicate at scale.
Week 4: Renegotiate advertiser contracts around qualified intent
This is where operators get squeamish, and it’s where the money is. Stop selling impressions. Stop selling “featured listing” placements that advertisers can’t tie to outcomes. Start selling qualified leads, verified calls, or booked appointments.
The conversation goes like this: “Your current £240/month listing is delivering 8 calls. Under our new structure, you’ll pay £18 per verified call with a £150 monthly minimum. If we deliver 20 calls, you pay £360 — more than you pay now. If we deliver 6, you pay £150 — less than you pay now. Risk is shared.” About 40% of advertisers say yes immediately. About 30% negotiate. About 30% leave, and those 30% were going to leave anyway.
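The billing logic behind that pitch is a single line. The sketch below just mirrors the numbers quoted above; your per-call rate and minimum will differ:

```python
def monthly_charge(verified_calls: int,
                   per_call: float = 18.0,
                   monthly_minimum: float = 150.0) -> float:
    """Performance pricing: pay per verified call, floored at a monthly minimum."""
    return max(monthly_minimum, verified_calls * per_call)

# The two scenarios from the pitch above.
print(monthly_charge(20))  # 360.0 -- busy month, advertiser pays more than the old flat fee
print(monthly_charge(6))   # 150.0 -- quiet month, the minimum applies
```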
Myth: Switching to performance pricing will collapse revenue because you can’t predict delivery. Reality: In the five directories I’ve helped through this transition, average revenue per advertiser rose 22–58% within six months. The operators who resisted are the ones whose revenue actually collapsed.
Quick tip: Before pitching performance pricing, run a 60-day shadow measurement period where you track actual calls, form submissions, and bookings per listing without charging for them. You need real delivery data to negotiate confidently. Guessing is how you either undercharge (leaving money on the table) or overpromise (losing the advertiser in month two).
The Directories Worth Building Now
I’ve been asked twice in the past month whether it still makes sense to start a new directory in 2025. My answer depends entirely on the category.
Don’t start: a generic local directory, a “top 10 SaaS tools” aggregator, or anything whose content can be reconstructed from public web data. These were marginal businesses before AI; they’re unviable now.
Do start: verification-heavy niches (regulated trades, medical specialisations, financial services), community-led vertical directories where relationships compound, and data-licensing plays where structured first-party data has clear value to both enterprise buyers and AI platforms. The economics of these are arguably better now than five years ago, because the competitive bar has risen and AI has eliminated the low-quality aggregators that used to dilute the category.
Did you know? Chegg’s stock collapsed 90% after the company acknowledged AI’s impact on its business. The cautionary detail: Chegg’s collapse wasn’t inevitable — it was the result of failing to pivot from content aggregation to verified, first-party assessment data that AI couldn’t replicate. Industry analysis suggests the same choice faces every directory operator now.
The directory owners I worry about aren’t the ones asking hard questions at 3 AM. They’re the ones still running 2019’s playbook in 2025, assuming SEO best practices will quietly reassert themselves once the AI hype cycle passes. It won’t. The shape of search has changed, the citation economy is real, and the window for repositioning is measured in quarters, not years.
Start with the server logs this week. The schema next week. The feature the week after. And by the time your competitors have finished another panicked board meeting about whether to sell, you’ll have already built the thing that makes the sale unnecessary.

