Case Studies

From SEO Score 39 to 94 — AI Audit, Real Fixes, Measurable Results

We ran an AI-powered SEO audit on a Western European e-commerce site built in Laravel and React, identified the issues that mattered most, fixed them in order of impact, and watched organic traffic grow by over 60%.

  • 39 → 94: SEO score improvement
  • +60%: Organic traffic increase
  • Laravel + React: Client's existing stack, no platform change
  • Priority-first: Issues fixed by impact, not volume

Client Snapshot

  • Industry: E-commerce / retail
  • Geography: Western Europe
  • Size: Mid-size retailer, 20–50 people
  • Stack: Laravel + React
  • What they do: Online retail with an established presence, an existing customer base, and organic search as a key acquisition channel

The Challenge

The site was working. Products were listed, orders were coming in, and the team was reasonably satisfied with where things stood. But organic search performance told a different story. The SEO score sat at 39: not broken, but far below what the site could achieve given its content and domain authority. Organic search was leaving significant potential on the table.

The issue was not a lack of content or links. It was the accumulated weight of technical debt: configuration choices made years ago, image handling that had never been reviewed, metadata that was inconsistent across pages, and Open Graph tags that were simply absent. None of these problems were obvious from inside the product. All of them were visible the moment you ran a proper audit.

The client's site was built in Laravel and React — the same stack we work with daily. That mattered. Understanding the architecture meant we could scope fixes accurately and implement them without disrupting the live store.

The question was where to start. A raw audit of a mature e-commerce site produces a long list of issues. Without prioritization, teams fix the easiest things first rather than the highest-impact ones. That is how sites stay stuck at 39.

The Approach

We had already run this process on our own site — conimext.co.rs — before bringing it to clients. Our score started at 34 and reached 87 after working through the audit results systematically. That internal run validated the tool and the workflow. When we brought it to this client, we knew what to expect and what to ignore.

The tool we used was claude-seo — a Claude Code skill that runs a full technical SEO audit and does something Screaming Frog does not: it prioritizes issues by business impact and generates specific fix recommendations for each one. The output is not a CSV to interpret. It is a ranked list of what to fix, and how.

We ran the audit against the client's production site. The tool returned a structured breakdown — critical issues, medium-priority issues, and lower-priority items — with a recommended fix for each. We worked through them in order, starting with the changes that would move the score the most.

The highest-impact issues fell into three categories.

First, Open Graph tags were missing across the entire product catalogue. Every product page lacked OG image, OG title, and OG description tags. This meant every time a product link was shared on social messaging platforms, it rendered as a blank preview. No image, no title, no click incentive. The fix was systematic — a single template change in the Laravel view propagated correct OG tags to every product page automatically.
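The actual change was made in a Laravel Blade view, but the template logic is language-agnostic. A minimal Python sketch of the idea, with illustrative (not the client's actual) field names:

```python
from html import escape

def og_tags(product: dict) -> str:
    """Render Open Graph meta tags for a product page.

    One template-level helper like this, called from the shared page
    layout, propagates correct tags to every product page at once.
    """
    tags = {
        "og:title": product["name"],
        "og:description": product["summary"],
        "og:image": product["image_url"],
        "og:type": "product",
    }
    return "\n".join(
        f'<meta property="{prop}" content="{escape(value)}">'
        for prop, value in tags.items()
    )

html = og_tags({
    "name": "Walnut Desk",
    "summary": "Solid walnut standing desk, 140x70 cm.",
    "image_url": "https://example.com/img/desk.jpg",
})
```

Because the tags are rendered from product data rather than hand-written per page, a single deployment fixed the entire catalogue.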

Second, meta descriptions were either missing or duplicated across category pages. Search engines were generating their own descriptions from page content, often pulling irrelevant text. We wrote structured meta description templates per page type and pushed them through the CMS.
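The per-page-type templating can be sketched like this; the template strings and field names are hypothetical, and the length clamp reflects the common practice of keeping descriptions near 155 characters so search engines show them untruncated:

```python
TEMPLATES = {
    # One template per page type; fields are filled from CMS data.
    "product": "{name}: {summary} Order online with fast delivery.",
    "category": "Shop {name}: {count} products, compared by price and availability.",
}

def meta_description(page_type: str, limit: int = 155, **fields) -> str:
    """Fill the page-type template and clamp to a snippet-safe length."""
    text = TEMPLATES[page_type].format(**fields)
    if len(text) > limit:
        text = text[: limit - 1].rstrip() + "…"
    return text

desc = meta_description("category", name="Office Chairs", count=42)
```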

Third, images across the catalogue lacked alt text. This affected both accessibility and image search indexing. We implemented a fallback rule — product name as alt text where no manual alt had been set — which resolved the issue across thousands of product images without requiring manual editing of each one.

After each batch of fixes, a quality assurance pass verified that the changes had deployed correctly and that the score moved in the expected direction before we proceeded to the next priority tier.
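Parts of that verification pass are easy to automate. A minimal sketch, not the actual QA tooling, that checks rendered page HTML for the required Open Graph tags using only the standard library:

```python
from html.parser import HTMLParser

REQUIRED = {"og:title", "og:description", "og:image"}

class MetaCollector(HTMLParser):
    """Collect the `property` attribute of every <meta> tag."""
    def __init__(self):
        super().__init__()
        self.props = set()

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if "property" in d:
                self.props.add(d["property"])

def missing_og_tags(html: str) -> set:
    """Return the set of required OG properties absent from the page."""
    parser = MetaCollector()
    parser.feed(html)
    return REQUIRED - parser.props

page = '<head><meta property="og:title" content="Desk"></head>'
gaps = missing_og_tags(page)  # og:description and og:image still missing
```

Run against a sample of deployed pages, a check like this catches a template regression before the next audit does.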

What We Shipped

  • Full AI-powered SEO audit using claude-seo, with issues ranked by priority and fix recommendations for each
  • Open Graph tag implementation across the full product catalogue (OG image, title, description per page)
  • Meta description templates for all page types — category pages, product pages, static pages
  • Alt text fallback system for product images
  • img attribute corrections and hreflang tag implementation across all page types
  • llms.txt file added to guide AI crawlers on site structure and content priority
  • Post-fix audit run confirming score improvement and identifying any remaining items
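The llms.txt item deserves a note: it is a proposed convention rather than an enforced standard, a markdown-style file served from the site root that tells AI crawlers what the site is and which pages matter. An illustrative example with hypothetical paths, not the client's actual file:

```text
# Example Store

> Mid-size e-commerce retailer. Product and category pages are the
> primary content; checkout and account pages are not.

## Products
- [Product catalogue](https://example.com/products): all product pages

## Policies
- [Delivery and returns](https://example.com/delivery): policy pages
```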

Results

SEO score moved from 39 to 94.

That 55-point improvement came from working through a prioritized list of fixes rather than trying to address everything at once. The critical issues — the ones the AI audit flagged as highest impact — were resolved first. The score reflected that order.

Organic traffic increased by over 60% in the period following the fixes. Search engines that had been rendering product pages without proper metadata, images with no alt text, and shared links without previews now had clean, consistent signals to index and rank.

The Open Graph fix alone changed how the site appeared every time a product link was shared. That is not a ranking signal, but it is a click signal — and click-through rate feeds back into search performance over time.

The site's architecture made the fixes fast. Because the client ran Laravel and React — a stack we know precisely — we could implement template-level changes that propagated across thousands of pages in a single deployment. A site on a less familiar stack would have taken longer to audit safely and fix correctly.

The efficient coding approach here was the same as always: understand the architecture first, make targeted changes at the right level, verify each fix before moving to the next. No broad rewrites, no platform changes, no risk to the live store.

Tech Stack

  • Laravel + React — client's existing platform (no changes to core architecture)
  • claude-seo — AI-powered audit tool (Claude Code skill, github.com/AgriCircle/claude-seo)
  • PostHog — product analytics used to measure organic traffic before and after the deployment

Lessons Learned

The most important thing the AI audit changed was not the list of issues — it was the order.

Every SEO audit produces a long list. The difference between a list that gets acted on and one that sits in a folder is prioritization. When the tool tells you "fix these three things first and you will move the score more than fixing the next twenty items," it changes how a team allocates time.

The OG tags finding is a good example. It was not a ranking factor. It would not have appeared near the top of a traditional audit sorted by technical severity. But it was flagged as high-impact because of its effect on social sharing and click-through — and once we looked at the client's traffic sources, it was obvious that social referrals were a meaningful channel. Fixing it had an outsized effect relative to the effort.

We would apply this process earlier in future projects — ideally during initial site review rather than after a site has been live for years. Technical SEO debt accumulates quietly. An AI audit run on a new site takes the same time as one on an established site, but the fixes are simpler when the patterns have not had years to propagate.

Want to Know What Your Site's Score Is?

We run this audit as a standalone engagement or as part of a broader project. If your site is on Laravel, React, or a similar stack, we can tell you exactly where you stand and what to fix first.

Talk to us →