Module 035 · Expert · 26 min read

JavaScript SEO

How Google renders JavaScript, the SSR/SSG/ISR/CSR trade-offs, dynamic rendering, why most AI crawlers don't execute JS, and testing rendering reliably.

By SEO Mastery Editorial

JavaScript SEO is what happens between Googlebot fetching the HTML and Google indexing the page. Modern Googlebot uses an evergreen Chromium (Chrome 124 as of early 2026) and renders JS reliably for most sites. But most AI crawlers do not execute JavaScript at all — and a CSR-only React app that ranks fine on Google can be invisible to ChatGPT Search, Claude, and Perplexity. The right rendering strategy is now a multi-surface decision.

TL;DR

  • Googlebot renders JavaScript, but with a queue and a cost. Pages enter a render queue after the initial fetch. Render can lag the fetch by hours to days, especially for heavy bundles. Server-rendered HTML still wins on speed of indexation.
  • Most AI crawlers do not execute JS. GPTBot does not. ClaudeBot does not. OAI-SearchBot does not. PerplexityBot does limited rendering. If your content depends on JS to appear in the DOM, AI search engines see nothing — your AI Overview, ChatGPT Search, and Claude citations drop to zero.
  • SSR or SSG is the safe default in 2026. Next.js, Astro, Remix, SvelteKit, Nuxt 3 all default to server-rendered or statically generated output. Pure CSR (create-react-app style) is no longer defensible for content that needs to rank.

The mental model

JavaScript SEO is like building a stage production where some audience members read the script (the HTML), while others watch the live performance (the rendered DOM). For decades, search engines only read the script. Today’s Googlebot watches the performance — but it has a backlog of shows to attend, so it sometimes reads the script first to decide whether the show is worth seeing live.

AI crawlers, in 2026, mostly still only read the script. GPTBot opens the script, scans the lines, files them. If your page’s lines are blank in the script and only filled in during the live show, GPT trains on empty content. Your page contributed nothing.

The expert’s mental model is therefore: “Whatever you want crawlers and AI engines to know, ship it in the initial HTML response.” Hydration can layer interactivity on top, but the bones must be there at byte zero.

Deep dive: the 2026 reality

Google’s rendering pipeline since 2019 has been the Web Rendering Service (WRS), an evergreen headless Chromium. Current behavior:

  1. Fetch phase: Googlebot gets the raw HTML response. Stored, queued for indexing.
  2. Render phase: WRS picks up the URL, runs JavaScript in headless Chromium 124, captures the rendered DOM.
  3. Index phase: The rendered DOM is parsed, signals extracted, indexed.

The render queue can lag the fetch by anywhere from minutes to a few days depending on resource availability. This is the two-wave indexing problem: pages get indexed once with the initial HTML, then re-indexed with the rendered DOM when WRS catches up.
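
Because wave one indexes the raw HTML, the signals Google extracts before rendering should already be complete. Below is a minimal wave-one check, assuming Node 18+ and the cheerio HTML parser (the URL is a placeholder):

// wave-one-check.mjs: inspect what a non-rendering pass can extract
import * as cheerio from 'cheerio';

const html = await (await fetch('https://example.com/products/red-widget')).text();
const $ = cheerio.load(html);

// Signals available in wave one, before WRS runs any JavaScript
console.log({
  title: $('title').text(),
  canonical: $('link[rel="canonical"]').attr('href'),
  robots: $('meta[name="robots"]').attr('content'),
  h1: $('h1').first().text(),
  jsonLdBlocks: $('script[type="application/ld+json"]').length,
});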

The rendering trade-off matrix:

Strategy | Speed of first byte | SEO-safe out of the box | Personalization | Examples
CSR (Client-Side Rendering) | Fast HTML, slow content | No (blank initial HTML) | Easy | Old SPAs, create-react-app
SSR (Server-Side Rendering) | Slower HTML, full content | Yes | Easy | Next.js App Router, Remix, SvelteKit
SSG (Static Site Generation) | Fastest, full content | Yes | Hard | Astro, Next.js output: 'export', Hugo
ISR (Incremental Static Regeneration) | Fast, full content | Yes | Some | Next.js revalidate, Astro on-demand
Streaming SSR | Progressive | Yes | Easy | React Server Components, SvelteKit streamed
Dynamic Rendering | Variable | Yes (deprecated by Google in 2024) | None | prerender.io, Rendertron

Dynamic rendering — serving pre-rendered HTML to bots and CSR to humans — was Google’s recommended fallback from 2018–2023. Google deprecated dynamic rendering in March 2024, calling it a “workaround” that creates maintenance burden. Migrate off prerender.io to true SSR/SSG if you can.
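
For context, the deprecated pattern looked roughly like the sketch below: sniff the user-agent and proxy known bots to a rendering service while humans get the CSR bundle. This is a minimal illustration assuming an Express app; the /render endpoint stands in for whatever prerender.io or Rendertron exposed, not an exact API.

// Deprecated dynamic-rendering pattern (illustrative only; do not build new systems on this)
import express from 'express';

const app = express();
const BOT_UA = /googlebot|bingbot|gptbot|claudebot|perplexitybot/i;
const PRERENDER_ORIGIN = 'https://prerender.example.com'; // hypothetical rendering service

app.use(async (req, res, next) => {
  if (!BOT_UA.test(req.get('user-agent') ?? '')) return next(); // humans fall through to CSR
  // Bots receive HTML pre-rendered by the service: two code paths to keep in sync
  const upstream = await fetch(`${PRERENDER_ORIGIN}/render?url=https://example.com${req.originalUrl}`);
  res.status(upstream.status).type('html').send(await upstream.text());
});

app.listen(3000);

The maintenance problem is visible in the sketch itself: every SEO-critical change must be verified twice, once per audience.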

AI crawler rendering matrix (2026):

Crawler | Renders JS | Notes
Googlebot | Yes (Chrome 124) | Two-wave indexing
Bingbot | Yes (limited) | Slower than Google
GPTBot | No | Static HTML only
OAI-SearchBot | No | Uses Bing’s index downstream, so inherits Bing’s render
ClaudeBot | No | Static HTML
Claude-User | No | User-initiated fetches
PerplexityBot | Limited | Some JS executed
Perplexity-User | Yes | User-initiated browsing
Applebot | Yes | For Spotlight/Siri
Google-Extended | N/A | robots.txt token, not a crawler; controls usage of Google’s already-rendered DOM
CCBot | No | Common Crawl is HTML only
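
To see which of these crawlers actually visit you, tally user-agent tokens in your access logs. A minimal sketch, assuming Node 18+ and an access log at ./access.log (the path is a placeholder). User-agent strings can be spoofed, so verify anything load-bearing with reverse DNS:

// crawler-tally.mjs: count hits per crawler user-agent token
import { readFileSync } from 'node:fs';

const TOKENS = [
  'Googlebot', 'bingbot', 'GPTBot', 'OAI-SearchBot', 'ClaudeBot',
  'Claude-User', 'PerplexityBot', 'Perplexity-User', 'Applebot', 'CCBot',
];

const counts = Object.fromEntries(TOKENS.map(t => [t, 0]));
for (const line of readFileSync('./access.log', 'utf8').split('\n')) {
  const token = TOKENS.find(t => line.includes(t));
  if (token) counts[token] += 1;
}
console.table(counts);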

The framework picture in 2026:

  • Next.js 15 (App Router): Server Components are default; client components opt-in. SSG via generateStaticParams. Streaming SSR built in. Best general-purpose default for content sites that rank.
  • Astro 5: Static-first by default with selective hydration (“Islands”). Ships zero JS by default. Best for content-heavy sites where JS is the exception, not the rule.
  • Remix 3 (now folded into React Router 7): Loader-based SSR, no static output. Strong for app-like sites with dynamic data.
  • Nuxt 3.13: Vue’s SSR/SSG framework. Comparable to Next.js for Vue ecosystems.
  • SvelteKit 2: Loader-based, supports SSR/SSG/CSR per-route. Smaller bundles than React equivalents.
  • Gatsby: Now legacy. Most projects migrating to Next.js or Astro.

The hybrid strategy is where most sites land: SSG for content that does not change per user (blog, docs, marketing pages), SSR for personalized or commerce pages, ISR for content with predictable update cadence (product listings, news indexes). Astro and Next.js both support per-route mixing.
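
In Next.js App Router terms, that per-route mixing is just segment config; each route declares its own strategy. A sketch (the paths are illustrative):

// app/blog/[slug]/page.tsx (SSG: built once at deploy time)
export const dynamic = 'force-static';

// app/products/[slug]/page.tsx (ISR: regenerated at most once an hour)
export const revalidate = 3600;

// app/account/page.tsx (SSR: personalized, rendered on every request)
export const dynamic = 'force-dynamic';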

Visualizing it

flowchart TD
  A[User or bot requests URL] --> B{Rendering strategy}
  B -->|SSG/ISR| C[CDN serves prebuilt HTML]
  B -->|SSR| D[Server renders on demand]
  B -->|CSR| E[Server returns shell HTML]
  C --> F[Full content in initial response]
  D --> F
  E --> G[JS bundle downloads]
  G --> H[JS executes, fetches data]
  H --> I[Content appears in DOM]
  F --> J[Googlebot, Bingbot, GPTBot, ClaudeBot all see content]
  I --> K[Only Googlebot/Bingbot/Applebot see content<br/>after WRS render]

Bad vs. expert

The bad approach

A pure CSR React app with content fetched client-side after mount:

// Bad: content invisible to non-rendering crawlers
import { useEffect, useState } from 'react';

export default function ProductPage({ id }) {
  const [product, setProduct] = useState(null);

  useEffect(() => {
    fetch(`/api/products/${id}`)
      .then(r => r.json())
      .then(setProduct);
  }, [id]);

  if (!product) return <div>Loading...</div>;

  return (
    <article>
      <h1>{product.title}</h1>
      <p>{product.description}</p>
    </article>
  );
}

The initial HTML response contains <div>Loading...</div> and a script tag. GPTBot indexes “Loading…” for training. ClaudeBot sees no product. OAI-SearchBot (via Bing) eventually catches up if Bingbot rendered. Googlebot renders the JS within hours-to-days, but the indexed signal is delayed and weakened.

Diagnostic via curl with a Googlebot user-agent:

curl -s -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" \
  https://example.com/products/red-widget \
  | grep -E '<h1>|<p>'
# Output: nothing of substance

The expert approach

Astro Islands pattern — static HTML by default, hydration only where interactivity is needed:

---
// src/pages/products/[slug].astro
import Layout from '../../layouts/Layout.astro';
import AddToCart from '../../components/AddToCart.svelte';
import { getProduct } from '../../lib/products';

export async function getStaticPaths() {
  const products = await fetch('https://api.example.com/products').then(r => r.json());
  return products.map(p => ({ params: { slug: p.slug }, props: { product: p } }));
}

const { product } = Astro.props;
---
<Layout title={product.title}>
  <article>
    <h1>{product.title}</h1>
    <p class="price">${product.price}</p>
    <p class="description">{product.description}</p>

    <!-- Only this island ships JS -->
    <AddToCart client:visible product={product} />
  </article>

  <script type="application/ld+json" set:html={JSON.stringify({
    "@context": "https://schema.org",
    "@type": "Product",
    name: product.title,
    description: product.description,
    offers: { "@type": "Offer", price: product.price, priceCurrency: "USD" }
  })} />
</Layout>

The HTML response now contains the product title, price, description, and JSON-LD before any JavaScript runs. Every crawler — Googlebot, Bingbot, GPTBot, ClaudeBot, PerplexityBot — sees full content.

Next.js 15 equivalent with Server Components:

// app/products/[slug]/page.tsx
import { notFound } from 'next/navigation';
import AddToCart from './AddToCart'; // Client Component

export const revalidate = 3600; // ISR: regenerate hourly

export async function generateStaticParams() {
  const products = await fetch('https://api.example.com/products').then(r => r.json());
  return products.map((p: { slug: string }) => ({ slug: p.slug }));
}

export async function generateMetadata({ params }: { params: Promise<{ slug: string }> }) {
  const { slug } = await params; // Next.js 15: params is a Promise and must be awaited
  const product = await fetch(`https://api.example.com/products/${slug}`).then(r => r.json());
  return {
    title: product.title,
    description: product.description,
    alternates: { canonical: `https://example.com/products/${slug}` },
  };
}

export default async function ProductPage({ params }: { params: Promise<{ slug: string }> }) {
  const { slug } = await params;
  const product = await fetch(`https://api.example.com/products/${slug}`,
    { next: { revalidate: 3600 } }).then(r => r.json());

  if (!product) notFound();

  return (
    <article>
      <h1>{product.title}</h1>
      <p className="price">${product.price}</p>
      <p>{product.description}</p>
      <AddToCart product={product} />
    </article>
  );
}

Verifying server-rendered output reaches crawlers:

# Test with Googlebot UA
curl -s -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" \
  https://example.com/products/red-widget \
  | grep -E '<h1>|product"'

# Test with GPTBot UA (no JS)
curl -s -A "Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko); compatible; GPTBot/1.2; +https://openai.com/gptbot" \
  https://example.com/products/red-widget \
  | grep -E '<h1>|<p>'

# Diff rendered DOM (Chromium) vs initial HTML to find render-only content
node -e "
  const { chromium } = require('playwright');
  (async () => {
    const browser = await chromium.launch();
    const page = await browser.newPage();
    await page.goto('https://example.com/products/red-widget');
    const rendered = await page.content();
    const initial = await fetch('https://example.com/products/red-widget').then(r => r.text());
    require('fs').writeFileSync('rendered.html', rendered);
    require('fs').writeFileSync('initial.html', initial);
    await browser.close();
  })();
"
diff initial.html rendered.html | head -50

Do this today

  1. Open GSC > URL Inspection on a key template (product, article, category page). Click Test Live URL > View tested page > HTML tab. Compare against your visible page. If content is missing, JS is gating it.
  2. Run the Rich Results Test at search.google.com/test/rich-results (Google retired the standalone Mobile-Friendly Test in late 2023). Click View Tested Page to see Googlebot’s rendered DOM. Confirm key headings, body text, and links are present.
  3. Use curl with a Googlebot user-agent against your top 5 templates. The initial HTML must contain title, h1, primary content, and JSON-LD. If it doesn’t, you are CSR-only.
  4. Use curl with GPTBot and ClaudeBot user-agents against the same templates. These crawlers do not render JS. The HTML response must already contain your indexable content.
  5. Audit your framework. If you are on Create React App, Vue CLI’s CSR-only mode, or vanilla SPA setups, plan a migration to Next.js 15 (App Router), Astro 5, or SvelteKit 2 — all of which default to SSR or SSG.
  6. For Next.js: confirm your routes use Server Components or generateStaticParams. For Astro 5: confirm output: 'static' (which absorbed the old 'hybrid' mode; opt individual routes out with prerender = false), not output: 'server' for content pages. For SvelteKit: confirm prerender = true or ssr = true in your route’s +page.ts config.
  7. Eliminate dynamic rendering. If you are on prerender.io or rendertron, schedule a migration. Google deprecated this pattern in March 2024 and recommends true SSR/SSG.
  8. Test with headless Chrome via Playwright to capture the fully rendered DOM. Diff against the curl-fetched HTML. Anything in the rendered DOM but missing from initial HTML is invisible to non-rendering crawlers.
  9. Move all SEO-critical content into the initial HTML response: titles, headings, body copy, canonical link, meta robots, JSON-LD, internal links, image src (not lazy-loaded src), and structured data.
  10. Set up a CI render-parity test that runs on every deploy: fetch the URL with curl, fetch with Playwright, and assert that key selectors (h1, [data-test="content"], link[rel="canonical"]) match. Fail builds where rendered content depends on JS; a sketch follows this list.
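
A minimal render-parity test along the lines of step 10, using Playwright’s test runner; the URL and selectors are placeholders for your own templates:

// render-parity.spec.ts: fail the build if rendered content is missing from the raw HTML
import { test, expect } from '@playwright/test';

const TARGET = 'https://example.com/products/red-widget'; // replace with a key template

test('SEO-critical content exists before JavaScript runs', async ({ page }) => {
  // Raw HTML, as GPTBot or ClaudeBot sees it (no rendering)
  const initial = await (await fetch(TARGET)).text();

  // Rendered DOM, as Googlebot's WRS sees it
  await page.goto(TARGET);
  const h1 = await page.locator('h1').first().textContent();

  // Whatever the renderer shows must already be present in the raw response
  expect(h1).toBeTruthy();
  expect(initial).toContain(h1!);
  expect(initial).toContain('rel="canonical"');
  expect(initial).toContain('application/ld+json');
});

Wire it into CI so a deploy that moves content behind JavaScript fails before it ships.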

