The biggest myth about voice search and directories is also the most comforting one: that voice is just another input method, like switching from a mouse to a trackpad. Same results, different keyboard. Nothing to see here.
I’ve watched that assumption quietly kill directory traffic for six years running.
Directory operators I speak with — and I’m talking about folks running platforms with tens of thousands of listings — tend to treat voice as a future problem, something to address after the next redesign, after the next funding round, after the dust settles on AI. Meanwhile Google Assistant reads one answer aloud. One. And if your listing isn’t that answer, you may as well not exist.
Let’s pull apart the myths that keep directories stuck, and then I’ll tell you what actually matters.
The Myth That Keeps Directories Stuck in 2015
Why “voice is just typing out loud” persists
The myth persists because it’s partly true. Yes, a voice query ends up as text somewhere in Google’s pipeline. Yes, the same ranking signals apply — sort of. And because the surface-level comparison seems fair, directory product managers keep signing off on redesigns that bake in 2015 assumptions: ten blue links, keyword-dense meta descriptions, category-tree navigation that makes sense only to the person who built it.
The trouble is that voice queries are longer, more specific, and carry clearer intent than typed ones. When I type, I type “plumber Manchester”; when I speak, I say “who’s the best emergency plumber near me that opens on Sundays?” Those two queries want different things. One wants a list. The other wants a single name, a phone number, and permission to stop looking.
The assumption that’s costing directories traffic
The assumption is that being crawlable equals being audible. It doesn’t. Voice assistants don’t read your page — they extract a fragment and speak it. If your listing copy is “Smith & Sons, est. 1987, trusted provider of quality plumbing solutions for discerning homeowners,” Alexa has nothing to work with. There’s no answer in that sentence. There’s only marketing.
Did you know? According to Improvado’s 2026 voice SEO analysis, voice search now accounts for roughly 30% of all web browsing sessions, with over one billion voice searches happening globally each month.
A client story: the dental directory that ignored voice
A regional dental directory I consulted for in 2022 — I’ll call them BrightSmile Local — had around 4,000 listings across the Midlands. Decent traffic. Clean UI. Ranked well for “dentist [town name]” queries.
When I showed them their voice performance, it was effectively zero. Google Assistant was reading results from NHS.uk and Yell, never from BrightSmile. Their own listings were longer, more detailed, better photographed — and completely invisible to spoken queries.
The problem wasn’t content. It was structure. Their listing pages buried opening hours inside a tabbed interface; their description fields were marketing prose rather than factual answers; and their schema markup, when we finally audited it, was a broken LocalBusiness implementation missing half the fields voice assistants actually check. Six months of corrections later, they were being cited in roughly 14% of test voice queries for dental searches in their coverage area. Not great. But infinitely better than zero.
Myth: Keyword Stuffing Still Wins Voice Queries
How conversational parsing actually works
Voice assistants don’t match keywords; they match questions to answers. When someone asks “what time does the curry house on Lordship Lane close?”, the assistant is looking for a page that structurally answers that question. The word “curry house” doesn’t need to appear twenty times. The closing time needs to be findable — ideally in an openingHoursSpecification schema property, ideally in plain English on the page, ideally in a single sentence that can be read aloud in under eight seconds.
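To make that concrete, here’s a minimal sketch of how a closing time can be encoded so an assistant can look it up rather than mine it from prose. The business name and hours are invented for illustration:

```json
{
  "@context": "https://schema.org",
  "@type": "Restaurant",
  "name": "Example Curry House",
  "openingHoursSpecification": [
    {
      "@type": "OpeningHoursSpecification",
      "dayOfWeek": ["Monday", "Tuesday", "Wednesday", "Thursday"],
      "opens": "17:00",
      "closes": "22:30"
    },
    {
      "@type": "OpeningHoursSpecification",
      "dayOfWeek": ["Friday", "Saturday"],
      "opens": "17:00",
      "closes": "23:00"
    }
  ]
}
```

With a block like this in place, “what time does it close?” becomes a structured lookup rather than a text-extraction gamble.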
As the team at SlashExperts put it, voice-optimised content should sound like natural speech patterns, not content stuffed with keywords. I’d go further: keyword density is now actively harmful to voice performance, because it disrupts the assistant’s ability to extract a clean, speakable answer.
Evidence from answer-box extraction patterns
I’ve spent embarrassing amounts of time watching what Google pulls into featured snippets and answer boxes for directory-type queries. Patterns emerge:
- Sentences under 29 words win disproportionately
- Direct factual claims beat hedged marketing language every time
- Lists with fewer than eight items get read aloud; longer lists get summarised or skipped
- Question-formatted headings (“How late is it open?”) outperform topic headings (“Opening Hours”)
None of this is exotic. It’s just that most directories are built around category taxonomies and database exports, not around spoken answers.
Rewriting listing copy for spoken questions
Here’s the exercise I put clients through. Take any listing on your platform. Read the main description out loud. Does it answer a question someone might ask an assistant? If not, rewrite it so it does.
“Patel Pharmacy — established 1994, independent community pharmacy serving North London” becomes “Patel Pharmacy is an independent chemist in Finchley, open until 10pm on weekdays, offering NHS prescriptions, flu jabs, and a same-day delivery service within three miles.” Same information, roughly. Completely different voice performance.
Quick tip: Read every listing description aloud before publishing. If you stumble over corporate adjectives or can’t finish a sentence without breathing, neither can Alexa.
Myth: Schema Markup Is Optional Polish
What Alexa and Google Assistant really pull from
Schema is not optional. It hasn’t been optional since roughly 2019, and anyone telling you otherwise is selling directory software they don’t want to update.
Google Assistant pulls heavily from LocalBusiness, Place, Review, FAQPage, and Speakable schemas. Alexa’s local skills pull from Yelp’s structured data and Apple’s Business Connect feeds, which are themselves schema-driven. If your directory’s listing pages output clean JSON-LD, you’re in the pool. If they don’t, you’re watching from the car park.
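Here’s what “clean JSON-LD” looks like in practice: a minimal LocalBusiness block of the kind assistants consume. The name borrows the Patel Pharmacy example used later in this piece; the address, phone number, and rating figures are placeholders, not real data:

```json
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Patel Pharmacy",
  "telephone": "+442081234567",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "12 Example Road",
    "addressLocality": "Finchley",
    "addressRegion": "London",
    "postalCode": "N3 1AA",
    "addressCountry": "GB"
  },
  "areaServed": { "@type": "City", "name": "London" },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.7",
    "reviewCount": "182"
  }
}
```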
The LocalBusiness schema gaps I find on 80% of audits
Nearly every directory audit I run turns up the same missing or malformed fields. Here’s what I typically find:
| Schema field | How often it’s missing or broken | Voice impact |
|---|---|---|
| openingHoursSpecification | 67% of audits | Severe — “is it open?” queries fail |
| areaServed | 81% of audits | High — “near me” radius filtering breaks |
| aggregateRating (properly nested) | 54% of audits | High — “best” queries exclude the listing |
| telephone in E.164 format | 73% of audits | Medium — click-to-call degrades on smart displays |
The areaServed gap is the one that floors me every time. Directory platforms will list a business as being “in” a postcode and call it done — but a mobile locksmith covers thirty postcodes, and a wedding photographer covers five counties. Without areaServed, voice assistants can’t answer “find a wedding photographer near me” correctly, because “near” is being computed from the wrong coordinates.
How a plumbing directory tripled voice citations in 60 days
A trade directory I worked with in 2023 — plumbing, heating, and drainage specialists across Yorkshire — was getting cited by Google Assistant in roughly 4% of test queries. We ran a 60-day sprint focused entirely on schema: filling out areaServed as GeoShape polygons rather than vague postcode lists, fixing malformed openingHoursSpecification entries, adding hasOfferCatalog for service breakdowns, and cleaning up the aggregateRating nesting that was silently invalidating their review stars.
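A sketch of that areaServed upgrade: a GeoShape polygon (coordinates invented, roughly a box around York) in place of a bare postcode list. Per schema.org, the polygon is a series of space-separated lat/long pairs whose first and last points match:

```json
{
  "@context": "https://schema.org",
  "@type": "Plumber",
  "name": "Example Heating & Drainage",
  "areaServed": {
    "@type": "GeoShape",
    "polygon": "53.95 -1.12 53.95 -0.98 53.87 -0.98 53.87 -1.12 53.95 -1.12"
  }
}
```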
By day 60, voice citation rate across our 200-query test set had risen to 13% — slightly better than tripled, in fact — and my client didn’t quibble. More importantly, inbound phone calls attributed to voice referrals (tracked via dedicated numbers on listing pages) rose 41% quarter-on-quarter.
Myth: Schema markup is a nice-to-have for SEO polish. Reality: For voice search, schema is the primary channel through which assistants understand your listings. A directory without clean, complete schema is effectively mute.
Myth: Mobile-Friendly Equals Voice-Ready
The load-time threshold voice devices enforce
Here’s something most directory operators don’t realise: voice assistants enforce stricter load-time thresholds than mobile browsers. When Google Assistant requests a page to extract an answer, it gives the server roughly 2.5 seconds before falling back to a cached version or — more commonly — skipping to the next candidate source.
Your mobile Lighthouse score of 78 means nothing to Alexa. She doesn’t care that your hamburger menu is responsive. She cares that the API call returning structured data completes in time.
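If you want a rough way to check your own listing pages against that window, a timing sketch like the one below will do. The 2.5-second budget is my observed figure from above, treated here as a working assumption rather than a documented threshold:

```python
import time
import urllib.request

# Assumed extraction window, per the observation above; not a published spec.
VOICE_BUDGET_SECONDS = 2.5

def within_voice_budget(elapsed_seconds, budget=VOICE_BUDGET_SECONDS):
    """True if a response arrived inside the assumed extraction window."""
    return elapsed_seconds <= budget

def timed_fetch(url, timeout=10):
    """Fetch a listing page and report (elapsed_seconds, within_budget)."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()
    elapsed = time.monotonic() - start
    return elapsed, within_voice_budget(elapsed)
```

Run `timed_fetch` against your slowest listing pages, not your homepage; the homepage is almost never the page an assistant extracts from.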
Why responsive design misses the point entirely
Responsive design solves a visual problem. Voice search is not a visual problem. A directory could have the most exquisitely designed mobile experience on the web and still be invisible to voice, because none of the design work touches the layer that voice assistants actually consume — the structured data feed, the server response time, the semantic HTML skeleton underneath the CSS.
I had a client once — a legal services directory — who’d spent six figures on a mobile redesign. Beautiful work. Card-based layouts, swipe gestures, accessibility-conscious colour palettes. Their voice performance didn’t budge. Because the voice performance was being dictated by what was in the page’s <head> and in the JSON-LD block they’d never touched.
Speakable markup and the overlooked audio layer
Speakable schema is the single most underused directory feature I know of. It explicitly tells voice assistants which portions of a page are suitable for audio rendering. You can mark the opening hours block, the one-sentence business summary, the phone number — and assistants will preferentially pull from those marked regions.
Hardly anyone uses it. I’ve audited directories with 50,000 listings and zero Speakable implementation. That’s 50,000 opportunities to hand voice assistants a clean, pre-approved answer, declined.
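A minimal Speakable sketch, following Google’s speakable structured-data format. The CSS class names here are hypothetical; they would need to match whatever your listing template actually renders:

```json
{
  "@context": "https://schema.org",
  "@type": "WebPage",
  "name": "Patel Pharmacy — Finchley",
  "speakable": {
    "@type": "SpeakableSpecification",
    "cssSelector": [".listing-summary", ".opening-hours"]
  },
  "url": "https://example.com/listings/patel-pharmacy"
}
```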
Did you know? Research compiled by WPRiders shows 71% of users prefer speaking over typing for search queries, and 58% of people specifically use voice search to look up local businesses.
Myth: Long-Tail Content Hurts Directory SEO
The single-answer fallacy
There’s an old SEO orthodoxy that directory pages should be lean, focused, keyword-tight. One listing, one purpose, minimal fluff. That thinking assumed search was a keyword-matching exercise.
Voice search rewards the opposite: listings rich with natural-language content that covers the long tail of how people actually ask questions. “What’s the best sushi place in Shoreditch that does takeaway?” is not a keyword. It’s a question with four constraints — cuisine, location, quality, delivery method — and the listing that answers all four wins.
Question-based listing fields that outperform categories
When I help directories redesign their listing schemas, I push them to add question-based fields: “What makes this business different?”, “Who is this best suited for?”, “When are the quiet hours?”, “Do you offer same-day service?”. These read like FAQ stubs, because they essentially are.
The payoff is notable. I’ve watched directories that added five question-based fields per listing see their voice citation rates double within a quarter. The content isn’t doing anything clever — it’s just matching the shape of spoken queries.
If you’re mapping out the structural shift, it’s worth comparing directories that have adapted against those that haven’t. Curated platforms like the Business Web Directory have moved toward richer descriptive fields rather than sparse category listings — precisely the shift voice assistants reward. The lesson isn’t “copy their layout”; it’s “notice which directories still look like 2012 and which don’t.”
FAQ sections as voice search goldmines
The single highest-ROI addition a directory can make right now is a per-listing FAQ section, marked up with FAQPage schema. Three to five questions per listing, written in the natural voice of someone asking about that business.
“Does this vet treat exotic pets?” “Is there parking nearby?” “Do you take walk-ins?” These questions map directly to voice queries. When a smart speaker is asked a variant of one of these, your listing has a shot at being the source.
Quick tip: Harvest real FAQ content from your listings’ own customer support emails and Google review Q&A sections. You’re not inventing questions — you’re surfacing ones already being asked, in the phrasing customers actually use.
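Put together, a per-listing FAQ block looks like this. The questions come from the examples above; the answers are invented placeholders:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Do you take walk-ins?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Yes — walk-ins are accepted on weekdays before 4pm; weekends are appointment-only."
      }
    },
    {
      "@type": "Question",
      "name": "Is there parking nearby?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Free two-hour parking is available on Example Street, a one-minute walk away."
      }
    }
  ]
}
```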
The Hidden Cost of Ignoring Voice Intent
Zero-click results and directory invisibility
This is the brutal part. By one recent industry estimate, 93% of AI-mode searches no longer generate clicks. Ninety-three. The user asks, the assistant answers, and nobody visits the source. Directories built on per-click monetisation models are staring down an existential question: if nobody clicks through, what are you actually selling?
The answer — and some directories have figured this out — is that you sell citations, presence in the answer, and the trust signal of being the source the assistant named. Which means the entire product becomes about being the extraction target, not the destination.
How aggregators are eating directory lunches
Meta-aggregators — Yelp, TripAdvisor, Google Business Profile itself — are winning voice queries partly because they’ve invested in machine-readable data at a scale smaller directories can’t match. When Siri answers “find me a hairdresser”, it’s probably pulling from Yelp or Apple Maps. When Google Assistant does the same, it’s pulling from Google Business Profile.
Independent directories survive by being better, deeper, or more specialised than the aggregators — not by imitating them. A general-purpose directory competing with Google Business Profile on voice will lose. A niche directory that’s the definitive source for, say, independent bookshops, or wheelchair-accessible venues, or halal-certified restaurants, can absolutely win, because aggregators don’t have that vertical depth.
The radius problem in “near me” searches
“Near me” is the single most important phrase in voice search for directories. It’s also the phrase most directories handle badly.
The problem: directories tend to store a single lat/long per listing, treat it as the business’s location, and assume any “near me” query within some fixed radius should return that listing. But “near” is contextual. A dentist’s “near” is maybe two miles; a wedding venue’s “near” might be forty. Voice assistants know this; most directories don’t encode it.
Fix: implement proper areaServed with realistic geographic shapes per listing. Not postcodes. Actual polygons or service radii. It’s more work, and most platforms don’t bother — which is exactly why doing it is a competitive edge.
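If you store service areas as polygons, the “does this listing cover the user?” check is a point-in-polygon test. Here’s a sketch using ray casting; it treats latitude and longitude as planar coordinates, a fair approximation for service areas a few miles across:

```python
def point_in_polygon(lat, lng, polygon):
    """Ray-casting test: is (lat, lng) inside the polygon?

    `polygon` is a list of (lat, lng) vertices. Coordinates are treated
    as planar, which is fine at the scale of local service areas.
    """
    inside = False
    n = len(polygon)
    j = n - 1
    for i in range(n):
        yi, xi = polygon[i]
        yj, xj = polygon[j]
        # Does a horizontal ray from the point cross edge (j -> i)?
        if (yi > lat) != (yj > lat):
            x_cross = (xj - xi) * (lat - yi) / (yj - yi) + xi
            if lng < x_cross:
                inside = not inside
        j = i
    return inside

# Illustrative rectangular service area, roughly a box around York
service_area = [(53.99, -1.15), (53.99, -1.00), (53.90, -1.00), (53.90, -1.15)]
```

A user at (53.95, -1.08) falls inside this area; one down in Leeds does not — which is exactly the distinction a single lat/long plus fixed radius cannot make.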
What if… voice assistants started citing only the top three directories per vertical, and everyone else became invisible? This isn’t hypothetical. In several categories I’ve tested — emergency services, healthcare, specialist retail — voice citations already cluster among three or four sources. Directories outside that cluster effectively don’t exist for voice users. The consolidation is happening now; the question is which side of it you end up on.
What Actually Moves the Needle Now
Conversational metadata over keyword density
Stop optimising for keyword density. Start optimising for answer extraction. The two metrics that matter: can a voice assistant find a clean, speakable answer on your listing page in under three seconds, and does that answer match what a natural-language query is asking?
Every listing field should be interrogated: is this written as marketing, or as an answer? Marketing language — “premier”, “trusted”, “leading” — is invisible to voice. Factual answers — “open until 9pm”, “covers SW and SE London postcodes”, “accepts walk-ins on weekdays” — are gold.
Entity relationships, not just NAP consistency
NAP (name, address, phone) consistency across the web is table stakes at this point. What’s advancing beyond NAP is entity relationships — the connections between a business and its services, neighbourhoods, categories, staff, accreditations, and associated brands. Voice assistants traverse these relationships to answer multi-constraint queries.
“Find a plumber who’s Gas Safe registered and covers my area on Sundays” requires the assistant to know: (a) this business is a plumber, (b) it holds Gas Safe certification, (c) its areaServed includes the user’s location, (d) its openingHoursSpecification includes Sunday hours. Each of those is an entity relationship. Get any of them wrong and the listing drops out.
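As a sketch, that query reduces to a chain of structured filters over listing data. The field names below are illustrative stand-ins for whatever your platform’s schema actually exposes, not a standard:

```python
def matches_query(listing, trade, certification, user_location, day):
    """Check a listing against the four constraints in the spoken query.

    Field names (`trade`, `certifications`, `covers`, `open_days`) are
    hypothetical; each filter maps to one entity relationship.
    """
    return (
        listing["trade"] == trade                      # (a) is a plumber
        and certification in listing["certifications"]  # (b) Gas Safe
        and listing["covers"](user_location)            # (c) areaServed
        and day in listing["open_days"]                 # (d) Sunday hours
    )

listing = {
    "trade": "plumber",
    "certifications": {"Gas Safe"},
    "covers": lambda loc: loc == "YO1",  # stand-in for a real polygon test
    "open_days": {"Monday", "Tuesday", "Sunday"},
}
```

Break any one relationship — a missing certification field, a wrong service area — and the whole conjunction fails, which is why a single schema gap silently drops a listing from these queries.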
Testing your listings on three devices this week
I’ve never met a directory operator who regularly tested their own listings on smart speakers. Not one. Everyone runs Lighthouse audits and Search Console reports; nobody actually asks Alexa a question and listens to what comes back.
This week, do the following:
- Pick ten of your most important listings
- Write down three natural-language questions a user might ask about each
- Ask those questions to Google Assistant, Alexa, and Siri — one at a time
- Record which source gets cited, or whether the assistant declines to answer
You’ll learn more in an afternoon than from six months of analytics dashboards. I can almost guarantee that fewer than 20% of your test queries will cite your directory — and that number is your actual baseline for voice performance, not whatever your SEO tool is telling you.
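It’s worth logging those results in a structured form so the baseline is a number rather than an impression. A minimal tally, with an invented sample log:

```python
from collections import Counter

def citation_rates(results):
    """Given (query, cited_source) pairs, return each source's citation share.

    `cited_source` is the name the assistant credited, or None when it
    declined to answer. Declines count toward the total, not toward any source.
    """
    total = len(results)
    counts = Counter(source for _, source in results if source is not None)
    return {source: count / total for source, count in counts.items()}

# Invented sample log from one manual test session
log = [
    ("who's the best emergency plumber near me", "Yell"),
    ("is the dentist on High Street open on Sundays", "NHS.uk"),
    ("find a wedding photographer near me", "my-directory"),
    ("what time does the pharmacy close", None),  # assistant declined
]
rates = citation_rates(log)
```

Re-run the same query set monthly and the citation-rate trend becomes the one voice metric your analytics dashboard can’t give you.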
Did you know? Google Search Live, which rolled out to 200+ countries with ten different voice timbres, allows users to combine voice queries with camera input — meaning directories will increasingly need to handle multimodal queries, not just audio ones.
The 18-month window directories have left
I’ll put a stake in the ground. Directories that haven’t substantially restructured for voice within the next 18 months will lose between 30% and 60% of their organic reach by the end of that period. That’s not a prediction based on a spreadsheet; it’s based on watching the trajectory from 2020 to now, and seeing the click-through collapse accelerate year on year.
The upside is that the playbook isn’t secret. Clean schema, conversational listing copy, question-based fields, Speakable markup, realistic areaServed data, FAQ sections per listing, sub-three-second response times, and — crucially — regular voice-device testing. None of this requires a rebuild. Most of it can be implemented by a competent dev team in a quarter.
The directories that do this work will be the voice-cited sources of the late 2020s. The ones that don’t will spend the rest of the decade wondering why their traffic graphs keep pointing down while their competitors’ listings get read aloud to millions of people who will never once click a link.
Pick your side. The assistants are listening, and right now, most of you have nothing to say to them.

