How does Googlebot index and render JavaScript websites?

  1. Googlebot downloads the HTML file
  2. Googlebot downloads the CSS and JavaScript files
  3. Googlebot uses the Google Web Rendering Service (a part of the Caffeine indexer) to parse, compile, and execute the JavaScript
  4. The WRS fetches data from external APIs, databases, etc.
  5. Finally, the indexer can index the content
  6. Only then can Google discover new links and add them to Googlebot's crawling queue
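To see why step 3 matters, here is a minimal sketch of a hypothetical client-rendered page. The initial HTML that Googlebot downloads contains no article text, only an empty container and a script tag; the `render` function below is a stand-in for what executing the (assumed) `bundle.js` in a headless browser would produce:

```javascript
// The initial HTML a crawler downloads: no indexable text,
// just an empty container and a script reference.
const initialHtml = `
<html>
  <body>
    <div id="app"></div>
    <script src="/bundle.js"></script>
  </body>
</html>`;

// A crawler that only parses HTML finds nothing to index:
console.log(initialHtml.includes('Hello, world')); // false

// Stand-in for executing bundle.js in a renderer such as the WRS:
function render(html, content) {
  return html.replace('<div id="app"></div>',
                      `<div id="app">${content}</div>`);
}

const renderedHtml = render(initialHtml, '<p>Hello, world</p>');
console.log(renderedHtml.includes('Hello, world')); // true
```

Until the rendering step runs, the content simply does not exist in the document, which is why indexing has to wait for it.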

Parsing, compiling, and running JavaScript files is very time-consuming. On a JavaScript-rich website, Google has to wait for all of these steps to finish before it can index the content. And rendering is not the only thing that is slower: discovering new links is delayed too, because links added by JavaScript are invisible until the page has been rendered.
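The link-discovery delay can be sketched the same way. In this hypothetical example, the navigation links are injected by a script (assumed here to be `nav.js`), so a naive `href` extraction over the raw HTML finds nothing until after rendering:

```javascript
// Hypothetical page whose navigation is built entirely by JavaScript.
const pageHtml = '<html><body><nav id="nav"></nav>' +
                 '<script src="/nav.js"></script></body></html>';

// Naive link extraction, as done on the un-rendered HTML:
const linkPattern = /href="([^"]+)"/g;
const linksBefore = [...pageHtml.matchAll(linkPattern)].map(m => m[1]);
console.log(linksBefore); // []

// Stand-in for what nav.js would do once the page is rendered:
const renderedPage = pageHtml.replace(
  '<nav id="nav"></nav>',
  '<nav id="nav"><a href="/pricing">Pricing</a><a href="/blog">Blog</a></nav>'
);
const linksAfter = [...renderedPage.matchAll(linkPattern)].map(m => m[1]);
console.log(linksAfter); // ["/pricing", "/blog"]
```

Every URL in `linksAfter` only enters the crawling queue after the rendering step, so deep pages on a client-rendered site are discovered later than they would be on a server-rendered one.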

Here’s the kicker: the rendering of JavaScript-powered websites in Google Search is deferred until Googlebot has resources available to process that content, which can take up to two weeks.