Case study · Digital Agency · 3 days

Lifting our own GEO score from 27 to 48 in two working sessions

How we applied our Foundation and Content methodology to angkordigital.co before asking any client to do the same.

By William Mallett
  • 27 → 48 · Composite GEO score (+21 pts, 78% relative uplift)
  • +50 · Technical GEO (35 → 85), the largest single-category jump
  • +20 · Schema & Structured Data (65 → 85)
  • 14 · Atomic, revertable commits across two PRs
  • ~8 hrs · Total Claude Code execution time
  • 30 min · Total human input time (Phase 1B intake only)

Summary

Before selling GEO audits as a service, we ran our own methodology against angkordigital.co. Baseline: 27/100 (Critical). Two sessions and 14 atomic commits later: 48/100 (Poor, upper range) — a 78% relative improvement. This page documents what was broken, what we fixed, and what we left for Phase 2.

The journey, phase by phase

Baseline

27/100 (Critical)

Critical-rated across the board. The single biggest gap was that server-rendered HTML arrived in Khmer by default — AI crawlers saw Khmer-only body content when the bilingual team intended English to be the canonical locale.

What shipped

  • Identified 18 numbered issues across six GEO categories
  • Published the baseline audit with severity-classified findings
  • Agreed a 3-tier automation classification (🟢 fully automatable, 🟡 hybrid, 🔴 human-gated)

Phase 1A — Structural uplift

43/100 (Poor)

Pure-automation tier: 10 atomic commits, zero human input. The single highest-leverage change was flipping the SSR default locale in LanguageContext.tsx from 'km' to 'en' — a one-line change that exposed roughly 4 KB of English body content to AI crawlers.
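A minimal sketch of what that locale-default flip looks like. The real file is LanguageContext.tsx, but the names below (DEFAULT_LOCALE, resolveLocale) are illustrative assumptions, not the production code:

```typescript
type Locale = 'en' | 'km';

// Before the fix, the SSR default was 'km', so crawlers that never run
// client-side JavaScript only ever saw the Khmer body content:
// const DEFAULT_LOCALE: Locale = 'km';

// After: English is the canonical server-rendered locale. Client-side
// hydration can still switch a human visitor to Khmer on request.
const DEFAULT_LOCALE: Locale = 'en';

// Hypothetical helper: resolve the locale for a request, falling back
// to the server-rendered default for crawlers with no preference.
export function resolveLocale(requested?: string): Locale {
  return requested === 'km' ? 'km' : DEFAULT_LOCALE;
}
```

The point is not the helper itself but the default: whatever the server renders with no cookie and no client JavaScript is what AI retrievers index.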

What shipped

  • Server-rendered HomePage, blog post page, and ServicesSection (Server/Client split with use-client child components)
  • Added public/llms.txt (Jeremy Howard AI discovery standard)
  • Added explicit Allow directives for 11 AI crawlers (GPTBot, OAI-SearchBot, ChatGPT-User, ClaudeBot, Claude-SearchBot, Claude-User, PerplexityBot, Perplexity-User, Google-Extended, Applebot-Extended, CCBot)
  • BreadcrumbList component emitting JSON-LD, wired on internal pages
  • WebSite schema added to root layout; restructured JSON-LD as @graph with cross-referenced @id entities
  • html lang='en' + hreflang alternates (en, km, x-default)
  • Per-page lastmod in sitemap via git commit time (fs.statSync fallback)
  • BlogPosting + Person + WebPage schemas on all 3 blog posts
  • Generated 1200×630 PNG OG image from existing SVG via rsvg-convert
  • Fixed navigation: replaced #anchor fragments with /services/[slug] routes
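The @graph restructure in the list above can be sketched roughly as follows. The @id fragment conventions and field values here are illustrative assumptions, not the site's actual markup; the pattern is that each entity gets a stable @id and other entities reference it instead of duplicating it:

```typescript
const SITE = 'https://angkordigital.co';

// One @graph containing cross-referenced entities, rather than several
// disconnected JSON-LD blocks repeating the same organisation data.
const jsonLdGraph = {
  '@context': 'https://schema.org',
  '@graph': [
    {
      '@type': 'Organization',
      '@id': `${SITE}/#organization`,
      name: 'Angkor Digital',
      url: SITE,
    },
    {
      '@type': 'WebSite',
      '@id': `${SITE}/#website`,
      url: SITE,
      // Reference by @id — parsers resolve this to the entity above.
      publisher: { '@id': `${SITE}/#organization` },
    },
  ],
};

// Serialised once, ready to drop into a <script type="application/ld+json"> tag.
export const jsonLdScript = JSON.stringify(jsonLdGraph);
```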

Phase 1B/1C — Data layer

48/100 (Poor, upper range)

Data-dependent tier: resolved three disqualifying trust failures that AI retrievers were penalising heavily. Unblocked by a 30-minute intake form covering NAP, stats decision, and testimonials decision.

What shipped

  • Replaced placeholder contact data (+855 000 000 000 → real mobile; 123 Street 51 → actual Siem Reap address)
  • Pivoted all 'Phnom Penh' references to 'Siem Reap' across 9 source files + llms.txt
  • Expanded Organization schema to [Organization, ProfessionalService, LocalBusiness] with full PostalAddress, GeoCoordinates (13.38575, 103.85579), and OpeningHoursSpecification
  • Removed Brooklyn Simmons + Jenny Wilson stock testimonials (91 lines deleted)
  • Reconciled conflicting home (4.8k/12+/2.5k+/120+) and about (50+/30+/5+/100%) stats via src/lib/company-stats.ts single source of truth
  • Added Service + OfferCatalog schema on all 9 service category pages with provider @id back to Organization
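A hedged sketch of the multi-type Organization schema described above. The coordinates are the ones cited in this case study; the name, address fields, and opening hours are illustrative placeholders standing in for the real intake data:

```typescript
// One entity carrying all three types, so AI retrievers can match it as
// a local business without losing the organisation-level properties.
const localBusinessSchema = {
  '@context': 'https://schema.org',
  '@type': ['Organization', 'ProfessionalService', 'LocalBusiness'],
  name: 'Angkor Digital',
  address: {
    '@type': 'PostalAddress',
    addressLocality: 'Siem Reap',  // post-pivot locality
    addressCountry: 'KH',
  },
  geo: {
    '@type': 'GeoCoordinates',
    latitude: 13.38575,
    longitude: 103.85579,
  },
  openingHoursSpecification: {
    '@type': 'OpeningHoursSpecification',
    // Placeholder hours — the real values come from the Phase 1B intake form.
    dayOfWeek: ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday'],
    opens: '09:00',
    closes: '18:00',
  },
};

export default localBusinessSchema;
```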

Phase 2 — Content (in progress)

Currently shipping as part of the GEO-agency pivot; expected exit score 58–65. Scope shifts from generalist-agency support content to GEO-specific proof: FAQ blocks on GEO service pages, a team page, case studies (starting with this one), and published audit reports.

What shipped

  • GEO-first positioning across homepage, navigation, metadata, llms.txt
  • New geo-audit-and-implementation category with 4 sub-services (free audit, 97 USD report, Foundation retainer, Content retainer)
  • 45+ FAQ Q&As drafted across the GEO service category with FAQPage JSON-LD emission
  • /geo-audit landing page with 3-field form + /api/audit server proxy to the cloned n8n pipeline
  • /work/[slug] case study route (this page)
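The FAQPage JSON-LD emission mentioned above can be sketched as a small builder. The function name, interface, and sample Q&A below are assumptions for illustration, not the production code:

```typescript
interface Faq {
  question: string;
  answer: string;
}

// Map drafted Q&A pairs into the schema.org FAQPage shape, with each
// pair becoming a Question entity carrying its acceptedAnswer.
export function buildFaqPageSchema(faqs: Faq[]) {
  return {
    '@context': 'https://schema.org',
    '@type': 'FAQPage',
    mainEntity: faqs.map((f) => ({
      '@type': 'Question',
      name: f.question,
      acceptedAnswer: { '@type': 'Answer', text: f.answer },
    })),
  };
}
```

A builder like this keeps the 45+ Q&As in ordinary content files and derives the structured data from them, so copy edits never drift out of sync with the JSON-LD.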

Key lessons

  • 01

    Measure before fixing. The baseline audit was what made the 78% lift legible.

  • 02

    SSR content is the single highest-leverage technical fix. The one-line locale default change in LanguageContext.tsx moved more of the needle than any schema addition.

  • 03

    Trust failures compound. Placeholder NAP + fake testimonials + conflicting stats together drove E-E-A-T below 35. Resolving all three together (not one at a time) is what makes the score jump.

  • 04

    Atomic commits let you revert with confidence. Every change in this case study sits on an individually revertable commit — your audit trail, and your bail-out option.

  • 05

    Automation caps at about 85% of effort. The last 15% — business decisions about pricing, stats, testimonials, brand positioning — is irreducibly human. Plan intake for it, don't try to automate through it.

Want the same journey on your site?

Start with the free audit to see your baseline, then decide whether to DIY, buy the 97 USD comprehensive report, or bring us in on a Foundation or Content retainer.