Spotting a random “Red Flag” here and there is useful, but it’s not a strategy. To truly secure your organic traffic against rendering failures, you need a systematic process: a way to prove indexability before you deploy code, not after your rankings drop.

You know the warning signs from our 8 Red Flags guide. Now let’s turn that knowledge into a rigorous audit.

Whether you’re validating a migration to Next.js, diagnosing a mysterious traffic leak, or just sanity-checking your storefront, this process is designed to be repeatable. It takes you from “I hope Google sees my content” to “I have verified byte-for-byte what the crawler sees.”

When to Audit

You don’t need to do this every day. But there are specific moments when a rendering audit is non-negotiable:

Phase 1: Preparation

Don’t boil the ocean. You can’t audit every URL on a 10k-page site manually. You need a representative sample.

Select Your Targets:
Pick 5-10 URLs that represent your unique templates:

  1. Homepage (often unique layout/logic)
  2. Product/Service Detail Page (the money page)
  3. Category/Listing Page (often heavy on client-side filtering/pagination)
  4. Content/Blog Article (text-heavy)
  5. A page with known issues (if investigating a bug)

Phase 2: The Initial Scan

Open jsbug.org. Enter your first URL.

By default, we run two checks simultaneously: one using a standard browser user agent with JavaScript enabled, and one with JavaScript disabled (simulating raw HTML crawlers or a failed render).

Look at the side-by-side comparison. Check these four critical health signs in order:

  1. Word Count Delta: Check the Content Card. If the “Body Words” count drops by >20% when JS is off, you have a dependency. A drop from 1,000 words to 50 means your content is effectively invisible until JavaScript runs.

  2. Title Drift: Look at the Content Card’s Title row. Does it match in both columns? If the raw HTML title is “React App” or “Loading…”, that’s what users will see in search results if rendering times out.
  3. Meta Description: Check the Content Card’s Meta Description row. It must be present in the raw HTML column. If it’s missing, you lose control over your search snippets.
  4. Canonical Stability: In the Indexation Card, check the Canonical URL. It must be present and correct in the raw HTML. If it’s missing or different, you risk duplicate content issues or Google ignoring your signals entirely.

If the meta robots tag or canonical relies on JavaScript to appear, you are betting your entire indexing strategy on the renderer working perfectly 100% of the time. Don’t take that bet.
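You can sanity-check the raw-HTML side of these four signals with a few lines of standard-library Python. The snippet below is an illustrative sketch, not jsbug.org’s implementation: the sample HTML, the placeholder-title list, and the URL are all hypothetical, and a real audit would feed in the actual response body your server returns with JavaScript disabled.

```python
from html.parser import HTMLParser

class HeadSignals(HTMLParser):
    """Collect the head signals that must survive without JavaScript."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.meta_description = None
        self.canonical = None
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and a.get("name") == "description":
            self.meta_description = a.get("content")
        elif tag == "link" and a.get("rel") == "canonical":
            self.canonical = a.get("href")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

# Hypothetical raw HTML, as a crawler without JS would receive it.
raw_html = """
<html><head>
  <title>React App</title>
  <link rel="canonical" href="https://example.com/widgets/blue">
</head><body><div id="root"></div></body></html>
"""

p = HeadSignals()
p.feed(raw_html)

problems = []
if p.title.strip() in ("", "React App", "Loading..."):  # illustrative placeholder list
    problems.append("placeholder or missing <title>")
if not p.meta_description:
    problems.append("meta description missing from raw HTML")
if not p.canonical:
    problems.append("canonical missing from raw HTML")
print(problems)
```

Run against this sample, the check flags the placeholder title and the missing meta description while the canonical passes, which mirrors the manual column-by-column comparison above.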

Phase 3: Deep Dive – Content Analysis

Now we get granular. It’s not just about how much content is missing, but which content.

The Word Diff Analysis:
On the Content Card, click the number in the “Body Words” row to open the Word Diff Modal.

This visualization is powerful. It highlights exactly which words are added by JavaScript (green) and which are removed (red).
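The same added/removed distinction is easy to reproduce on your own snapshots with Python’s `difflib`. The word lists below are made up for illustration; in practice you would split the visible text extracted from the raw and rendered HTML.

```python
import difflib

# Hypothetical visible text from the raw and rendered versions of a page.
raw = "Blue Widget Loading".split()
rendered = "Blue Widget Premium ceramic finish ships in 2 days".split()

# '-' tokens exist only in the raw HTML; '+' tokens are added by JavaScript.
changes = [t for t in difflib.ndiff(raw, rendered) if t.startswith(("+", "-"))]
print("\n".join(changes))
```

If the `+` side contains your product copy or pricing, that content is JavaScript-dependent and invisible to any crawler that doesn’t render.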

Heading Hierarchy:
Back on the Content Card, look at your H1s and H2s. Expand them to see the text.

Severity Check:
Calculate your dependency:
JS Dependency = (Rendered Words – Raw Words) / Rendered Words

If the result is >50%, mark this as High Severity. You are taxing the search engine’s rendering queue, and for AI crawlers like ClaudeBot or GPTBot (which often don’t run JS), your page is essentially blank.
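As a worked example of the formula, here is a small helper that computes the dependency and maps it to a severity label. The >50% High threshold comes from the text above; the 20% Medium cutoff is an assumption borrowed from the word-count-delta warning in Phase 2.

```python
def js_dependency(raw_words: int, rendered_words: int) -> float:
    """Fraction of the rendered word count that only exists after JS runs."""
    if rendered_words == 0:
        return 0.0
    return max(0.0, (rendered_words - raw_words) / rendered_words)

def severity(dep: float) -> str:
    # >50% = High per this guide; 20% Medium cutoff is an assumption.
    if dep > 0.5:
        return "High"
    if dep > 0.2:
        return "Medium"
    return "Low"

dep = js_dependency(raw_words=50, rendered_words=1000)
print(f"{dep:.0%} -> {severity(dep)}")  # 95% -> High
```

A page that renders 1,000 words but ships only 50 in raw HTML is 95% JS-dependent: firmly in High Severity territory.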

Phase 4: Technical SEO Checks

Content is king, but metadata is the crown. If the technical signals are wrong, the content doesn’t matter.

Metadata Validation:

Check the Indexation Card and Technical Card.

Structured Data (JSON-LD):

Look at the Content Card’s “Schema” row.

Does your Product, Article, or BreadcrumbList schema appear in the “With JS” column but not “Without JS”?
Google can render structured data, but it’s fragile, and AI crawlers like ClaudeBot often miss it entirely. If the script takes too long or the crawler doesn’t execute JS, you lose your Rich Snippets and answer engine citations. Best practice: put JSON-LD in the initial HTML.
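To verify this yourself, you can diff the JSON-LD blocks present in the raw versus rendered HTML. The snippet below is a regex-based sketch that works for well-formed pages (it is not a full HTML parser), and both HTML strings are hypothetical examples.

```python
import json
import re

def extract_jsonld(html: str) -> list:
    """Pull every parseable JSON-LD block out of an HTML string."""
    pattern = re.compile(
        r'<script[^>]*type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
        re.DOTALL | re.IGNORECASE,
    )
    blocks = []
    for match in pattern.findall(html):
        try:
            blocks.append(json.loads(match))
        except json.JSONDecodeError:
            pass  # malformed JSON-LD is as useless to crawlers as missing JSON-LD
    return blocks

# Hypothetical snapshots of the same page without and with JavaScript.
raw_html = '<html><head><title>Blue Widget</title></head><body></body></html>'
rendered_html = (
    '<html><head><script type="application/ld+json">'
    '{"@type": "Product", "name": "Blue Widget"}'
    '</script></head><body></body></html>'
)

raw_types = {b.get("@type") for b in extract_jsonld(raw_html)}
rendered_types = {b.get("@type") for b in extract_jsonld(rendered_html)}
print(rendered_types - raw_types)  # schema that only exists after JS runs
```

Any `@type` that shows up only in the rendered set is structured data you are gambling on the rendering queue to surface.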

Phase 5: Links & Images Audit

This is where the crawl budget goes to die.

Link Discovery:
Open the Links Modal from the Links Card. Filter by “Internal” and switch the view to “Added” (links only found after JS).
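The “Added” view is conceptually a set difference: internal links in the rendered HTML minus internal links in the raw HTML. As a sketch (the base URL and anchor tags below are hypothetical):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkCollector(HTMLParser):
    """Collect absolute link targets from <a href> tags."""
    def __init__(self, base):
        super().__init__()
        self.base = base
        self.links = set()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.add(urljoin(self.base, href))

def internal_links(html, base):
    c = LinkCollector(base)
    c.feed(html)
    host = urlparse(base).netloc
    return {u for u in c.links if urlparse(u).netloc == host}

base = "https://example.com/shop"  # hypothetical page URL
raw = '<a href="/shop/page-1">1</a>'
rendered = '<a href="/shop/page-1">1</a><a href="/shop/page-2">2</a>'

added = internal_links(rendered, base) - internal_links(raw, base)
print(added)  # pages a non-JS crawler will never discover from this URL
```

Here the client-side pagination link to page 2 only exists after JavaScript runs, so a raw-HTML crawler can never follow it.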

Image Visibility:
Open the Images Modal from the Images Card.

Phase 6: JavaScript Debugging

Why is the rendering failing? The Result Tabs at the bottom of the panel hold the answers.

Console Errors:
Switch to the Console Tab.

Network Performance:
Switch to the Timeline Tab and Network Tab.

Watch for two key events:

Phase 7: User Agent Testing

You’ve tested “With JS” vs. “Without JS”. Now test who is asking.

From our experience, bugs or developer misunderstandings often lead to a dangerous discrepancy: a site may serve full SSR content to human users but an empty CSR shell to Googlebot (or vice versa) due to misconfigured bot-detection or edge-caching rules. User agent testing is the only way to verify that your content delivery remains consistent, and that you aren’t accidentally “hiding” your SSR benefits from the very crawlers that need them most.

In the Config Modal, change your User Agent.

Googlebot (Smartphone):
This is how Google actually indexes the web.

ClaudeBot / GPTBot:
Set the User Agent to a custom string like ClaudeBot/1.0.
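A quick way to catch User-Agent cloaking outside the tool is to fetch the same URL under each identity and fingerprint the response bodies. The responses below are hard-coded stand-ins for illustration; in practice you would capture them with your HTTP client of choice, sending each User-Agent header in turn.

```python
import hashlib

def fingerprint(body: str) -> str:
    """Short, stable fingerprint of a response body for quick comparison."""
    return hashlib.sha256(body.encode()).hexdigest()[:12]

# Hypothetical bodies returned for the same URL under different User-Agents.
responses = {
    "Mozilla/5.0 (browser)": "<html><body>full SSR product content</body></html>",
    "Googlebot/2.1": "<html><body>full SSR product content</body></html>",
    "ClaudeBot/1.0": '<html><body><div id="root"></div></body></html>',
}

prints = {ua: fingerprint(body) for ua, body in responses.items()}
if len(set(prints.values())) > 1:
    print("WARNING: response body varies by User-Agent:")
    for ua, fp in prints.items():
        print(f"  {fp}  {ua}")
```

Identical fingerprints across all agents is the healthy outcome; in this illustrative case, the empty shell served to ClaudeBot is exactly the edge-rule misconfiguration described above.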

Phase 8: Reporting & Prioritization

You have your data. Now you need to fix it. Group your findings by severity:

Critical (Fix Immediately):

High (Fix Next Sprint):

Medium (Roadmap):

Low (Best Practice):

The Goal: Resilience

The web is moving toward more interactivity, not less. But the mechanism for discovery, the crawler, is still fundamentally an HTML-parsing engine.

By auditing your site with this workflow, you aren’t just “fixing SEO.” You’re building resilience. You’re ensuring that no matter how complex your application becomes, the fundamental bridge between your content and your users remains intact.

If maintaining that bridge feels like a distraction from building your product, consider EdgeComet. We open-sourced our rendering-cache engine to help teams solve these discovery problems at the infrastructure layer, so you don’t have to fight the rendering gap manually.

— Max Kurz, Product Manager / Developer