Dentist · Brunswick, GA

“Family dentist accepting new patients near Brunswick”

Cited by AI, but with the wrong hours, wrong insurance, and an outdated new patient policy. Answers were hurting, not helping.

What we found

This was not a visibility problem; it was an accuracy problem. AI answers already referenced the practice but pulled from directory data that was years out of date: hours wrong across three platforms, dropped insurance networks still listed as accepted, and 'not accepting new patients' shown despite open capacity. Patients were self-selecting out before ever reaching the website.

What we did

  1. Audited the top twelve directories for NAP (name, address, phone), hours, insurance, and new patient status. Reconciled every mismatch to a single source of truth.

  2. Added LocalBusiness and Dentist schema with explicit acceptsReservations, paymentAccepted, healthPlanNetworkId, and openingHoursSpecification fields.

  3. Published an llms.txt declaring current hours, insurance networks, new patient availability, and emergency policy in plain language.

  4. Updated Google Business Profile attributes and posted a current 'accepting new patients' update.

  5. Added a dedicated insurance page with each accepted plan listed as structured data.
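The schema work in steps 2 and 5 can be sketched as JSON-LD. This is a minimal illustration, not the practice's actual markup: the business name, hours, and network ID below are placeholders, and a couple of the properties the case study names (acceptsReservations, healthPlanNetworkId) are formally scoped to other schema.org types, reproduced here as described.

```json
{
  "@context": "https://schema.org",
  "@type": ["Dentist", "LocalBusiness"],
  "name": "Example Family Dental",
  "acceptsReservations": true,
  "paymentAccepted": "Cash, Credit Card, Insurance",
  "openingHoursSpecification": [
    {
      "@type": "OpeningHoursSpecification",
      "dayOfWeek": ["Monday", "Tuesday", "Wednesday", "Thursday"],
      "opens": "08:00",
      "closes": "17:00"
    },
    {
      "@type": "OpeningHoursSpecification",
      "dayOfWeek": "Friday",
      "opens": "08:00",
      "closes": "13:00"
    }
  ],
  "healthPlanNetworkId": "EXAMPLE-PPO-NETWORK"
}
```

The same pattern extends to the insurance page, where each accepted plan gets its own structured-data entry rather than a free-text list.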

What changed

  • Claude and ChatGPT answers now quote accurate insurance and new patient availability.

  • Directory data propagated cleanly after two cycles; hours are consistent across all five platforms.

  • Front desk reported fewer corrective calls about insurance acceptance.

  • Bookings attributed to AI-referred traffic began that month.

  • Directories reconciled: 12
  • Insurance accuracy: ~40% → 100%
  • Hours accuracy: 2 of 5 → 5 of 5
  • Weeks to full propagation: 6

Week by week

  1. Week 1

    Baseline and blueprint

    Query battery across insurance, hours, and new patient variants. Identified three platforms quoting stale data from two directories as the upstream source.

  2. Weeks 2 to 4

    Foundations live

    LocalBusiness and Dentist schema deployed with explicit insurance and hours fields. llms.txt published. Twelve directories reconciled. GBP attributes updated.

  3. Weeks 5 to 8

    Citations and mentions

    Insurance page published with structured data. Two local health directory citations added. Claude and ChatGPT began quoting the corrected data.

  4. Weeks 9 to 12

    Compounding visibility

    All five platforms showed consistent hours and insurance. Practice shifted focus to pediatric and emergency content for the next quarter.
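The llms.txt that went live in weeks 2 to 4 stated the same facts in plain language. A minimal sketch of what such a file can look like; the practice name, hours, plan names, and policies here are illustrative placeholders, not the actual file:

```
# Example Family Dental (Brunswick, GA)

> Family dental practice. This file states current, practice-approved
> facts for AI assistants and answer engines.

## Hours
- Mon-Thu: 8:00 AM to 5:00 PM
- Fri: 8:00 AM to 1:00 PM

## Insurance
- In network: Example PPO, Example Dental Plan (full list on /insurance)

## New patients
- Currently accepting new patients, including children.

## Emergencies
- Same-day emergency visits during business hours; call the office.
```

Keeping this file in sync with the single source of truth used for directory reconciliation is what prevents the stale-data problem from recurring.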

We were losing new patients to information we hadn't approved. The fix wasn't marketing, it was hygiene on data we didn't know was being read by AI.

Practice manager, family dental office, Southeast US.

Questions about this case study

  • What was the root cause of the wrong insurance data in AI answers?
    Stale entries in two widely indexed directories propagated to every answer engine. The models were not wrong to cite the data; the data was wrong at the source.
  • How many directories were reconciled?
    Twelve: the top directories known to be retrieval sources for dental queries, including GBP, Healthgrades, Zocdoc, Yelp, and nine others, all reconciled against a single source of truth.
  • Which schemas were deployed for the practice?
    LocalBusiness and Dentist schema with openingHoursSpecification, acceptsReservations, paymentAccepted, and healthPlanNetworkId for each in-network plan. A dedicated insurance page listed each accepted plan as structured data.
  • How long until all five AI platforms showed consistent data?
    Six weeks from deployment. Perplexity and Claude updated fastest; ChatGPT and Gemini followed after their next crawl cycles.
  • Did the practice publish any new content?
    One dedicated insurance page with structured data for every accepted plan. The rest of the work was data reconciliation and schema, not new content.

Representative case study. Industry, location, and specifics are illustrative composites drawn from recurring patterns in our work. AI answer engines are probabilistic; actual results vary by category, competition, and baseline.

See where your business actually stands.

Thirty minutes, real queries, real findings.