Updated February 24, 2026
TL;DR: JavaScript frameworks like React, Vue, and Angular create serious indexation risks when configured for client-side rendering (CSR). Googlebot uses a two-wave indexing process that delays JavaScript rendering by hours or days, while AI crawlers including GPTBot, ClaudeBot, and PerplexityBot don't execute JavaScript at all, making dynamic content invisible to them. Server-Side Rendering (SSR) or Static Site Generation (SSG) resolves both problems by delivering complete HTML on the first crawl. For marketing and product pages specifically, SSG offers the highest impact at the lowest infrastructure cost.
Modern JavaScript frameworks deliver excellent interactive experiences but create a visibility problem most teams don't discover until traffic stalls and Google Search Console fills with "Discovered - currently not indexed" warnings. If your team built a React or Vue single-page application (SPA) and assumed search engines would handle the rest, this guide covers exactly why that assumption fails and what to do about it.
Why JavaScript frameworks break search visibility
Developers build for browsers, which execute JavaScript well. Search crawlers execute JavaScript poorly, slowly, or not at all. That asymmetry is the root cause of most JavaScript SEO problems, and understanding it changes how you prioritize your technical roadmap.
When a user opens your React SPA, their browser downloads a JavaScript bundle, executes it, and builds the full page in the DOM. A crawler requesting that same URL gets raw HTML from your server. With CSR, that HTML is nearly empty: a <div id="root"></div> and a reference to your JavaScript bundle. Your product descriptions, blog posts, pricing tables, and CTAs all live inside JavaScript files that haven't run yet.
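For illustration, here is roughly what that first-wave server response looks like for a typical CSR build (the bundle filename is hypothetical):

```html
<!-- The entire server response for a CSR app: no product copy, no CTAs -->
<!DOCTYPE html>
<html>
  <head>
    <title>Loading...</title>
  </head>
  <body>
    <div id="root"></div>
    <script src="/static/js/main.abc123.js"></script>
  </body>
</html>
```

Everything a crawler could rank or cite sits behind that single script tag.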
Google Search Central warns there are often significant limitations in how crawlers render JavaScript. The "application shell" your SPA loads first is a container for your content, not the content itself, and that distinction matters enormously for indexation.
The problem is more severe for AI answer engines. Research from amicited.com shows that 69% of AI crawlers can't execute JavaScript at all, making dynamic content like product listings, feature explanations, and pricing completely invisible to them. GPTBot, ClaudeBot, and PerplexityBot work entirely from your raw HTML, not the rendered page your users see. For teams trying to get recommended by AI platforms like ChatGPT and Perplexity, fixing the rendering layer is the mandatory first step.
How Google processes JavaScript
Google handles JavaScript rendering through a two-phase process. Understanding it explains why CSR pages sit in limbo for weeks and why sitemap submissions don't always produce the indexation you expect.
Wave 1: the initial HTML crawl
Googlebot first requests a URL and processes whatever HTML the server returns immediately. As The SEM Post reported, Google processes HTML resources first to conserve crawl resources, deferring JavaScript for a later rendering pass. For a CSR application, the first wave indexes almost nothing useful: the app shell and possibly a page title, but none of the content that drives rankings. To simulate what Googlebot sees during this wave, open DevTools, disable JavaScript, and reload your page. If your navigation and body content disappear, your site is fully CSR-dependent.
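To run that same check across many URLs, a rough heuristic can flag pages whose raw HTML carries almost no visible first-wave text. This is an illustrative sketch, not a substitute for manual inspection; the 50-character threshold and the sample markup are assumptions:

```javascript
// Heuristic: does this raw HTML response look like an empty CSR app shell?
// Illustrative sketch only; the 50-character threshold is an assumption.
function looksLikeEmptyShell(html) {
  const match = html.match(/<body[^>]*>([\s\S]*)<\/body>/i);
  const body = match ? match[1] : '';
  const visibleText = body
    .replace(/<script[\s\S]*?<\/script>/gi, '') // drop script tags entirely
    .replace(/<[^>]+>/g, '')                    // drop remaining markup
    .trim();
  return visibleText.length < 50;
}

// Hypothetical responses: a CSR shell vs. a server-rendered page
const csrShell = '<html><body><div id="root"></div><script src="/bundle.js"></script></body></html>';
const ssrPage = '<html><body><h1>Acme Widgets</h1><p>' + 'Full product copy. '.repeat(20) + '</p></body></html>';

console.log(looksLikeEmptyShell(csrShell)); // true, nothing for the first wave
console.log(looksLikeEmptyShell(ssrPage));  // false, crawlable content present
```

Feed it the response body from a plain HTTP fetch (no headless browser), since that is exactly what the first wave and AI crawlers see.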
Wave 2: JavaScript rendering
After the first wave, Google places the page in a rendering queue. Google's headless Chromium instance then executes the JavaScript, builds the final DOM, and updates the index with the rendered content. Onely's research found that Googlebot needs 9x more time to crawl JavaScript pages than plain HTML pages, and that pages with JavaScript elements can remain unindexed for two or more weeks after being submitted to a sitemap, particularly on newer or lower-authority domains.
The render budget and a critical mistake to avoid
Google's rendering resources are finite. Pages compete for position in the render queue based on site authority, crawl budget, and resource availability. One specific misconfiguration amplifies this problem significantly: blocking JavaScript or CSS files via robots.txt. Impressiondigital.com notes that blocking these resources prevents Googlebot from rendering on-page content at all, collapsing the second wave for every page those resources affect. This error appears frequently on sites that blocked staging resources and never cleaned up the rule before launch.
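The difference is often a single stale rule. A before-and-after sketch (directory names are hypothetical):

```text
# Harmful: Googlebot can fetch pages but cannot render them
User-agent: *
Disallow: /static/js/
Disallow: /static/css/

# Safe: rendering resources stay crawlable; only genuinely private paths are blocked
User-agent: *
Disallow: /admin/
```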
Rendering strategies compared: CSR vs. SSR vs. SSG vs. Dynamic
Choosing the right rendering strategy is the most impactful architectural decision for search visibility and AI citation rates. Here is how the four main approaches compare:
| Strategy | SEO impact | User experience | Implementation complexity | Indexation speed |
| --- | --- | --- | --- | --- |
| Client-Side Rendering (CSR) | Poor | Slow initial load (fast TTFB, high TTI) | Low | Delayed or unreliable |
| Server-Side Rendering (SSR) | Excellent | Fast initial load (good TTFB and FCP) | High | Fast |
| Static Site Generation (SSG) | Excellent | Best (instant from CDN cache) | Medium | Fast |
| Dynamic Rendering | Good | Good for users, variable for bots | Medium | Fast for bots only |
Implementation complexity assumes a greenfield project. Migrating an existing CSR application to SSR or SSG typically requires medium to high effort regardless of framework.
Client-Side Rendering (CSR) and SPAs
With CSR, your server sends a minimal HTML shell and a JavaScript bundle. As Strapi explains, the browser loads a basic HTML file with minimal content, then JavaScript renders the full page. This is the default output of create-react-app, plain Vue CLI projects, and Angular without Universal. This configuration causes the most indexation harm.
SPAs create a compounding problem: every route returns the same empty HTML shell, so Googlebot has no first-wave content to index for any page across the entire site. As Zeo documents, crawlability is particularly hard to achieve with CSR because rendering depends on client CPU availability and can time out before content loads.
Server-Side Rendering (SSR)
SSR generates fully populated HTML on the server for each request. Crystallize describes SSR with hydration as combining the benefits of both approaches: the server sends complete HTML for fast initial load and reliable indexation, then client-side JavaScript hydrates the page by attaching event listeners to make it interactive. Next.js (React), Nuxt.js (Vue), and Angular Universal are the standard SSR solutions.
Static Site Generation (SSG)
SSG builds all pages into complete HTML files at deployment time. Strapi's SSG comparison shows CDNs serve these files with near-zero Time to First Byte, maximum security because there is no live server to attack, and the lowest possible infrastructure cost. For content that doesn't change on every request, SSG is the strongest choice for both SEO and performance.
Dynamic rendering
Dynamic rendering detects the requesting user-agent and serves pre-rendered static HTML to crawlers while serving the normal client-side app to users. Ramotion's rendering comparison describes it as a workaround that provides fast bot indexation without requiring you to rebuild your application architecture. Google considers dynamic rendering a bridge solution, not a best practice. If you're planning a significant rebuild, invest in SSR or SSG instead.
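At its core, dynamic rendering is a user-agent switch. A minimal sketch follows; the bot list is illustrative and far from exhaustive, and production setups typically rely on a prerender service's maintained middleware instead:

```javascript
// Minimal dynamic-rendering switch: serve cached static HTML to known bots,
// the normal client-side app to everyone else. Bot list is illustrative only.
const BOT_PATTERN = /googlebot|bingbot|gptbot|claudebot|perplexitybot/i;

function shouldServePrerendered(userAgent) {
  return BOT_PATTERN.test(userAgent || '');
}

console.log(shouldServePrerendered('Mozilla/5.0 (compatible; GPTBot/1.0)')); // true
console.log(shouldServePrerendered('Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0')); // false
```

Note that hand-rolled lists like this drift out of date quickly, which is one reason Google treats the whole approach as a bridge rather than a destination.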
Technical implementation guide for React, Vue, and Angular
React: Next.js and React Helmet
The most reliable fix for a React application is migrating to Next.js, which provides SSR and SSG out of the box. For managing meta tags within React, React Helmet gives you declarative control over the document <head> from within your component tree:
```javascript
import React from 'react';
import { Helmet } from 'react-helmet';

const ProductPage = ({ product }) => {
  return (
    <div>
      <Helmet>
        <title>{product.name} - Your Company Name</title>
        <meta name="description" content={product.description.substring(0, 155)} />
        <link rel="canonical" href={`https://yourdomain.com/products/${product.slug}`} />
        <meta property="og:title" content={product.name} />
        <meta property="og:description" content={product.description.substring(0, 155)} />
        <meta property="og:url" content={`https://yourdomain.com/products/${product.slug}`} />
      </Helmet>
      <h1>{product.name}</h1>
      <p>{product.description}</p>
    </div>
  );
};

export default ProductPage;
```
There is a critical limitation here. As Fullstack.com explains, React Helmet modifies meta tags in the browser, but search crawlers take the first version of the HTML and don't wait for the JavaScript-modified version. React Helmet only delivers reliable SEO results when combined with SSR. Check your page source with Ctrl+U to confirm meta tags appear in the raw HTML. If <title> and <meta name="description"> are absent from the source, Googlebot and AI crawlers won't see them.
For new Next.js projects using the App Router, use the built-in generateMetadata API, which writes meta tags server-side by default and removes the Helmet dependency entirely.
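A sketch of that pattern follows. In a real project the function is exported from app/products/[slug]/page.js; getProduct here is a hypothetical stand-in for your data layer:

```javascript
// Sketch of the App Router generateMetadata pattern. getProduct is a
// hypothetical stand-in for a real CMS or database lookup.
async function getProduct(slug) {
  return { name: 'Example Widget', description: 'Server-rendered copy. '.repeat(20), slug };
}

// In a real project: export this from app/products/[slug]/page.js
async function generateMetadata({ params }) {
  const product = await getProduct(params.slug);
  return {
    title: `${product.name} - Your Company Name`,
    description: product.description.substring(0, 155),
    alternates: { canonical: `https://yourdomain.com/products/${product.slug}` },
  };
}

generateMetadata({ params: { slug: 'example-widget' } }).then((meta) => {
  console.log(meta.title); // "Example Widget - Your Company Name"
});
```

Because Next.js resolves this on the server, the resulting tags appear in the raw HTML response, which is exactly what the first-wave crawl and AI crawlers need.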
Vue.js: Nuxt.js
Vue applications face the same CSR indexation problem as React SPAs. Nuxt.js provides both SSR and SSG modes without requiring you to rewrite your Vue components. For meta tag management in Nuxt, use the built-in useHead composable, which writes tags server-side in SSR mode and at build time in SSG mode. One nuance specific to Vue: older projects using hash-based routing (#/path) need to migrate to History API routing before any other SEO work takes effect.
Angular: Angular Universal
Angular's default configuration is CSR. Angular Universal adds SSR capability, allowing the server to render initial HTML before hydrating it client-side. Angular's built-in Meta and Title services write tags correctly in the server-side rendering context.
URL structure and internal linking
For all three frameworks, use History API routing (pushState) rather than hash-based URLs. Use standard <a href="/path"> anchor tags for all internal links, not onClick handlers on <div> or <span> elements. Links built through JavaScript click handlers may not appear in the DOM during Googlebot's first-wave crawl, causing it to miss entire sections of your site's link graph. Our semantic authority guide covers how internal link architecture affects AI citation probability across every framework.
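The distinction in markup terms (paths are illustrative):

```html
<!-- Crawlable: a real anchor with an href, present in the first-wave DOM -->
<a href="/pricing">Pricing</a>

<!-- Invisible to crawlers: no href, navigation exists only inside a click handler -->
<span onclick="router.push('/pricing')">Pricing</span>
```

Router link components in modern frameworks, such as Next.js's Link, render real anchor tags with hrefs, so they remain crawlable; the problem appears when navigation is wired up with bare click handlers instead.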
Lazy-loading safely
Use the native loading="lazy" attribute for images and iframes below the fold. Web.dev specifically advises against data-src attributes that require JavaScript to render, particularly for LCP candidate images, noting that 7% of pages hide their LCP image behind data-src. Never lazy-load your hero image or primary above-the-fold content, because Googlebot needs that content available on the first wave.
JavaScript and Core Web Vitals
JavaScript execution time has a direct, measurable relationship with all three Core Web Vitals metrics, and those metrics feed directly into Google's ranking signals for both desktop and mobile.
INP: the metric most JS-heavy sites fail
Interaction to Next Paint (INP) measures how quickly your page responds to user interactions throughout the entire lifecycle, including every click, tap, and keyboard input. According to ableneo.com, fixing INP demands deep JavaScript architecture changes because long tasks exceeding 50ms block the browser's main thread and delay interactions in ways that accumulate across a session. The practical fixes:
- Code splitting: Load only the JavaScript needed for the current route, not the entire app bundle at once
- Tree shaking: Remove unused code from your production build at compile time
- Deferred non-critical scripts: Load analytics, chat widgets, and third-party scripts after the main content renders
- React Server Components: Move non-interactive components to the server to reduce client-side JavaScript shipped
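Underlying several of these fixes is one pattern: breaking long tasks into chunks and yielding between them so the main thread can respond to input. A simplified sketch (the chunk size is arbitrary, and newer browsers also offer scheduler.yield for the same purpose):

```javascript
// Break one long task into chunks, yielding to the event loop between them so
// pending user interactions can be handled: the core of many INP fixes.
async function processInChunks(items, handleItem, chunkSize = 50) {
  for (let i = 0; i < items.length; i += chunkSize) {
    items.slice(i, i + chunkSize).forEach(handleItem);
    // Yield: lets queued input events run before the next chunk starts.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
}

const processed = [];
processInChunks([1, 2, 3, 4, 5], (n) => processed.push(n * 2), 2).then(() => {
  console.log(processed); // [2, 4, 6, 8, 10]
});
```

The work still completes, but no single task holds the main thread past the 50ms long-task threshold.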
LCP, bundle size, and mobile rankings
Mobile CPUs parse and execute JavaScript more slowly than desktop hardware, which means heavy JavaScript bundles disproportionately hurt mobile users. Web.dev's Core Web Vitals guidance confirms that LCP delays caused by JavaScript-dependent image rendering are among the most common performance failures on mobile.
Success metric: Target INP under 200ms, LCP under 2.5s, and CLS under 0.1 on mobile, measured via PageSpeed Insights for your five highest-traffic pages. Core Web Vitals failures suppress ranking potential site-wide, so treat any failing key page as higher priority than most content-level optimizations.
The business cost of JavaScript errors
Share this section with your marketing leadership before the next sprint planning meeting. Every technical rendering failure creates a chain of business consequences:
- Rendering fails - CSR pages return empty HTML shells to Googlebot
- You lose indexation - Pages sit in "Discovered - currently not indexed" indefinitely
- Rankings never appear - Target keywords produce no organic visibility
- Organic traffic stays at zero - Your highest-intent, lowest-cost channel contributes nothing to pipeline
- CAC rises - You compensate with paid search, which costs more per acquisition and stops the moment you pause spend
The AI visibility dimension makes this chain significantly worse. According to searchviu.com's crawler analysis, AI crawlers like GPTBot and ClaudeBot fetch JavaScript files but don't execute them, making your dynamically rendered product features, case studies, and comparison pages invisible. Vercel's analysis of Cloudflare crawler traffic found PerplexityBot traffic grew 157,490% on their network, making it one of the fastest-growing content consumers on the internet, yet it reads only raw HTML.
Research from gpo.com puts the competitive implication plainly: "If AI agents can't read your product info, location data, return policies, or pricing in the HTML source, they can hallucinate - or worse, cite a competitor whose data is exposed in plain text." The teams that have migrated to SSR or SSG are accumulating AI citation history while CSR sites remain invisible. Understanding the strategic differences between GEO and SEO makes clear why rendering is a prerequisite for both, not just traditional search.
How Discovered Labs' CITABLE Framework helps with JavaScript SEO
Discovered Labs uses a proprietary CITABLE Framework to structure content for AI extraction and citation. Three of its seven components directly address the JavaScript rendering problem.
C - Clear entity & structure requires every page to open with a 2-3 sentence BLUF (Bottom Line Up Front) that defines the entity clearly in server-rendered HTML text. Even when JavaScript rendering is incomplete, a server-side opening paragraph containing your entity definition gives GPTBot and ClaudeBot actionable content to extract.
E - Entity graph & schema focuses on explicit schema markup in JSON-LD format, embedded directly in the initial HTML response. Structured data tells AI crawlers exactly what your product is, who it serves, and what problem it solves without requiring any JavaScript execution.
B - Block-structured for RAG addresses JavaScript-heavy sites by structuring content into 200-400 word sections with clear headings, tables, FAQs, and ordered lists. Retrieval-Augmented Generation (RAG) systems parse content in discrete blocks. A well-structured server-rendered page can compete for AI citations even when a JavaScript-heavy competitor page is skipped because the crawler couldn't parse it.
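For example, the schema component above can be satisfied with a small server-rendered JSON-LD block. This sketch uses hypothetical product values; validate real markup with the Rich Results Test:

```javascript
// Build a JSON-LD Product block server-side so crawlers receive it without
// executing any JavaScript. All values here are hypothetical.
function productJsonLd(product) {
  return JSON.stringify({
    '@context': 'https://schema.org',
    '@type': 'Product',
    name: product.name,
    description: product.description,
    url: `https://yourdomain.com/products/${product.slug}`,
  });
}

const jsonLd = productJsonLd({
  name: 'Example Widget',
  description: 'A server-rendered widget for example purposes.',
  slug: 'example-widget',
});

// Embed this string in the initial HTML response:
const scriptTag = `<script type="application/ld+json">${jsonLd}</script>`;
console.log(JSON.parse(jsonLd)['@type']); // "Product"
```

The key requirement is that the script tag is part of the server's HTML response, not injected later by client-side code.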
Diagnostic tools
- Google Search Console URL Inspection Tool (support.google.com) - Shows how Googlebot renders your page. Use "View crawled page" to inspect the raw HTML response, or "View tested page" for the rendered output and a screenshot of what Googlebot sees. As Search Engine Land explains, this tool reveals whether content injected by JS frameworks is actually visible to Google.
- Lighthouse / PageSpeed Insights - Performance audits identifying JS-related Core Web Vitals bottlenecks, including INP, LCP, and CLS, with specific recommendations for code splitting and deferred scripts.
- Rich Results Test - Live test to confirm JSON-LD schema is correctly detected and parsed, especially useful after implementing server-side structured data.
- Browser DevTools with JavaScript disabled - As gracker.ai notes, disabling JavaScript in DevTools and reloading the page simulates Googlebot's first-wave crawl. If navigation or content disappears, the site is CSR-dependent.
JavaScript SEO checklist
Prerequisites: Access to Google Search Console with verified property ownership, access to your site's source code, a browser with DevTools available, and knowledge of your JS framework's routing and build configuration.
Step-by-step:
- Verify content in initial HTML source - View the page source (Ctrl+U) on key product and landing pages. Confirm headings, body text, and primary CTAs appear in the raw HTML before JavaScript execution.
- Check meta tags and canonicals in raw source - Confirm <title>, <meta name="description">, and <link rel="canonical"> exist in the raw HTML response. JavaScript-injected tags are invisible to most AI crawlers.
- Audit internal links for proper anchor tags - Verify navigation uses standard <a href="/path"> tags, not JavaScript onClick handlers. Check that links appear in the DOM before any user interaction.
- Confirm JavaScript resources aren't blocked - Review your robots.txt file. Blocking /js/ or /css/ directories prevents Googlebot from rendering any page that loads those resources.
- Run the disabled-JS test - Disable JavaScript in DevTools and reload. If content disappears, the page requires SSR or SSG for reliable indexation.
- Test Core Web Vitals on mobile - Run PageSpeed Insights in mobile mode for key pages. Flag INP over 200ms, LCP over 2.5s, or CLS over 0.1 as high-priority fixes.
- Verify structured data - Use the Rich Results Test to confirm JSON-LD schema appears and parses correctly. If schema is JavaScript-injected and missing from the tool, move it to server-rendered HTML.
- Audit lazy-loading implementation - Confirm no above-the-fold images or LCP candidates use data-src or are lazy-loaded.
Checks and validation: After implementing SSR or SSG, re-run URL Inspection on 10+ previously unindexed pages, submit updated sitemaps, and monitor the "Discovered - currently not indexed" count in Search Console over 2-3 weeks. Citation rate improvements in AI platforms typically take 4-6 weeks to appear after rendering fixes, because AI crawlers need to revisit and re-index the corrected pages.
JavaScript frameworks are powerful tools, and there's no reason to abandon them. The architecture decision that matters is what happens between your server and the crawler. GPTBot, ClaudeBot, and PerplexityBot don't execute JavaScript; if your product pages exist only inside a JavaScript bundle, you're invisible to buyers who are asking AI assistants for vendor recommendations in your category right now. Research showing how Reddit invisibly dominates ChatGPT's sourcing is a useful reminder that AI citations are shaped by far more than your own site's technical configuration. Still, your site's renderability is the part you control directly, and fixing it first is the highest-leverage starting point.
If you want to understand exactly where your JavaScript configuration is blocking AI crawlers, Discovered Labs runs AI Visibility Audits that map your current citation rate across buyer-intent queries and identify the specific technical gaps holding you back. Book a consultation with our team and we'll be straightforward about whether and how we can help.
Frequently asked questions
Can Google index all JavaScript sites?
No, and indexation isn't guaranteed even for established sites. Google's rendering queue is resource-constrained and probabilistic, and as Onely's research shows, Googlebot needs 9x more time to crawl JavaScript pages than plain HTML pages, meaning newly added pages can remain unindexed for weeks. SSR or SSG eliminates this uncertainty by giving Googlebot complete HTML on the first wave.
Is React bad for SEO?
React itself isn't the problem. A React application using Next.js with SSR or SSG performs as well or better than plain HTML for indexation. A React SPA using the default create-react-app CSR configuration performs poorly for SEO and AI visibility. The framework matters less than the rendering strategy.
Do AI crawlers like GPTBot ever execute JavaScript?
No. Analysis documented by usehall.com found zero evidence of JavaScript execution across half a billion GPTBot fetches. GPTBot downloads JavaScript files in approximately 11.5% of requests, but only as raw text for potential training purposes. Content that exists only after JavaScript runs is invisible to GPTBot.
What's the fastest fix if I can't rebuild the whole application?
Implement prerendering for your marketing and product pages using a dedicated prerendering service. It generates and caches static HTML snapshots served to crawlers without requiring changes to your application code. Treat it as a bridge while you plan a proper SSR or SSG migration, not a permanent architecture.
Key terminology glossary
DOM (Document Object Model): The browser's in-memory tree representation of an HTML page that JavaScript manipulates to build and update page content dynamically.
Hydration: The process where client-side JavaScript attaches event listeners and interactivity to server-rendered HTML, making a static server-generated page fully functional. SSR with hydration combines fast initial load and reliable indexation with interactive framework capabilities.
Client-Side Rendering (CSR): A rendering approach where the browser downloads a JavaScript bundle and builds the page DOM client-side after the initial HTML loads. The server sends a minimal HTML shell. Default for most SPA frameworks without additional configuration.
Server-Side Rendering (SSR): The server generates fully populated HTML for each incoming request and sends it to the client. Crawlers receive complete content on the first request, enabling fast and reliable indexation.
Prerendering: A process where a headless browser executes JavaScript at build time or on-demand and saves the resulting HTML output, which is then served to crawlers as a form of static SSG or dynamic rendering.
Headless browser: A web browser without a graphical interface, operated programmatically. Google's Web Rendering Service functions as a headless browser to execute JavaScript and generate final page state for the indexation queue. Most AI crawlers do not operate as headless browsers.