“Family dentist accepting new patients near Brunswick”
Cited by AI, but with the wrong hours, wrong insurance, and an outdated new patient policy. Answers were hurting, not helping.
What we found
This was not a visibility problem; it was an accuracy problem. AI answers already referenced the practice but pulled from directory data that was years out of date: hours wrong across three platforms, dropped insurance networks still listed as accepted, and “not accepting new patients” shown despite open capacity. Patients were self-selecting out before ever reaching the website.
What we did
- 01
Audited the top twelve directories for NAP, hours, insurance, and new patient status. Reconciled every mismatch to a single source of truth.
- 02
Added LocalBusiness and Dentist schema with explicit acceptsReservations, paymentAccepted, healthPlanNetworkId, and openingHoursSpecification fields.
- 03
Published an llms.txt declaring current hours, insurance networks, new patient availability, and emergency policy in plain language.
- 04
Updated Google Business Profile attributes and posted a current “accepting new patients” update.
- 05
Added a dedicated insurance page with each accepted plan listed as structured data.
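The schema work in step 02 can be sketched as JSON-LD. Every value below (practice name, URL, hours, network ID) is an illustrative placeholder, and the placement of acceptsReservations and healthPlanNetworkId follows the case description; schema.org formally defines healthPlanNetworkId on its HealthPlanNetwork type.

```json
{
  "@context": "https://schema.org",
  "@type": ["LocalBusiness", "Dentist"],
  "name": "Example Family Dental",
  "url": "https://example.com",
  "acceptsReservations": "True",
  "paymentAccepted": "Cash, Credit Card, Insurance",
  "healthPlanNetworkId": "EXAMPLE-PPO-NETWORK",
  "openingHoursSpecification": [
    {
      "@type": "OpeningHoursSpecification",
      "dayOfWeek": ["Monday", "Tuesday", "Wednesday", "Thursday"],
      "opens": "08:00",
      "closes": "17:00"
    },
    {
      "@type": "OpeningHoursSpecification",
      "dayOfWeek": "Friday",
      "opens": "08:00",
      "closes": "13:00"
    }
  ]
}
```

Embedded in a script tag of type application/ld+json, this gives answer engines a machine-readable statement of the same facts the directories were getting wrong.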
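The llms.txt in step 03 is a plain markdown file served at the site root, per the llms.txt convention. A minimal sketch of what it might contain, with every name, hour, plan, and phone number an illustrative placeholder:

```markdown
# Example Family Dental

> Family dentistry near Brunswick. Currently accepting new patients.

## Hours
- Mon–Thu: 8:00 AM – 5:00 PM
- Fri: 8:00 AM – 1:00 PM

## Insurance
- In network: Plan A PPO, Plan B Dental, Plan C HMO
- Out-of-network claims filed on the patient's behalf

## New patients
- Accepting new patients; first appointments typically within two weeks

## Emergencies
- Same-day emergency slots held each weekday; call (555) 000-0000
```

The point is plain-language declarations of exactly the facts AI answers were getting wrong: hours, networks, availability, and emergency policy.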
What changed
Claude and ChatGPT answers now quote accurate insurance and new patient availability.
Directory data propagated cleanly after two cycles; hours are consistent across all five platforms.
Front desk reported fewer corrective calls about insurance acceptance.
Bookings attributed to AI-referred traffic began that month.
Week by week
- Week 1
Baseline and blueprint
Query battery across insurance, hours, and new patient variants. Identified three platforms quoting stale data from two directories as the upstream source.
- Weeks 2 to 4
Foundations live
LocalBusiness and Dentist schema deployed with explicit insurance and hours fields. llms.txt published. Twelve directories reconciled. GBP attributes updated.
- Weeks 5 to 8
Citations and mentions
Insurance page published with structured data. Two local health directory citations added. Claude and ChatGPT began quoting the corrected data.
- Weeks 9 to 12
Compounding visibility
All five platforms showed consistent hours and insurance. Practice shifted focus to pediatric and emergency content for the next quarter.
“We were losing new patients to information we hadn't approved. The fix wasn't marketing, it was hygiene on data we didn't know was being read by AI.”
Questions about this case study
What was the root cause of the wrong insurance data in AI answers?
Stale entries in two widely indexed directories propagated to every answer engine. The models were not wrong to cite the data; the data was wrong at the source.
How many directories were reconciled?
Twelve: the top directories known to be retrieval sources for dental queries, including GBP, Healthgrades, Zocdoc, Yelp, and nine others, reconciled against a single source of truth.
Which schemas were deployed for the practice?
LocalBusiness and Dentist schema with openingHoursSpecification, acceptsReservations, paymentAccepted, and healthPlanNetworkId for each in-network plan. A dedicated insurance page listed each accepted plan as structured data.
How long until all five AI platforms showed consistent data?
Six weeks from deployment. Perplexity and Claude updated fastest; ChatGPT and Gemini followed after their next crawl cycles.
Did the practice publish any new content?
One dedicated insurance page with structured data for every accepted plan. The rest of the work was data reconciliation and schema, not new content.
Representative case study. Industry, location, and specifics are illustrative composites drawn from recurring patterns in our work. AI answer engines are probabilistic; actual results vary by category, competition, and baseline.
Other representative cases.
See where your business actually stands.
Thirty minutes, real queries, real findings.