Crawling and rendering depend upon three factors:
There are two types of rendering: client-side and server-side.
Server-side rendering is the traditional approach, and it is the simplest. The browser (or, in some cases, Googlebot) receives HTML that already contains the full content and describes the structure of the page; all the browser needs to do is download the CSS and display the content as defined. Search engines have little trouble with this approach.
As discussed earlier, traditional HTML is very straightforward. Let's see how Google crawls it.
Now you need to understand what a crawl budget is. The crawl budget is the number of pages on a website that Googlebot crawls and indexes in a given timeframe. Why is the crawl budget important? Well, if Google does not index a page, it is not going to rank for anything. In other words, if the number of pages on your website exceeds its crawl budget, some pages will not be indexed.
Google is very good at finding and indexing pages, and the vast majority of websites do not need to worry about the crawl budget. But there are a few cases where it needs to be considered.
If your website comes under the above cases, Google may struggle with crawling and indexing it.
Chrome 77 is the latest release (at the time of writing), while Google, as mentioned earlier, uses Chrome 41 for rendering. You can see the gap. For more clarity, download Chrome 41 and try to render your website; you will notice the difference I am describing.
We have been talking about Google using a browser, or Googlebot, for crawling and rendering. But Googlebot does not actually behave like a real browser. A real browser such as Chrome or Firefox downloads every file (scripts, images, and stylesheets) to render the view. Googlebot, however, only downloads the resources it considers necessary for crawling.
As the internet is so huge, Google optimizes its crawler for performance; obviously, fetching every resource on every page would be costly. Another reason Googlebot does not fetch everything is the algorithm it uses: it checks whether a particular resource is required for rendering, and if not, Googlebot ignores it.
So if something is not crawled or rendered, it may be because Googlebot’s algorithm decided it was not necessary, or simply there was a performance issue.
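Googlebot's actual selection algorithm is not public, but the idea can be sketched as a filter over requested resources. Everything below (the `shouldFetch` function, the skip patterns, the sample request list) is invented for illustration only.

```javascript
// Purely illustrative sketch: Googlebot's real algorithm is not public.
// The idea is that the crawler fetches only resources it judges
// necessary for rendering and skips the rest for performance.
function shouldFetch(resource) {
  // Hypothetical rules: skip tracking scripts and fonts,
  // fetch markup, remaining scripts, and stylesheets.
  const skipPatterns = [/analytics/, /ads/, /\.woff2?$/];
  if (skipPatterns.some((p) => p.test(resource.url))) return false;
  return ['document', 'script', 'stylesheet'].includes(resource.type);
}

const requests = [
  { url: '/index.html', type: 'document' },
  { url: '/app.js', type: 'script' },
  { url: '/analytics.js', type: 'script' },
  { url: '/hero.jpg', type: 'image' },
];

const fetched = requests.filter(shouldFetch).map((r) => r.url);
// fetched: ['/index.html', '/app.js']
```

Under rules like these, a decorative image or a tracking script is simply never requested, which is exactly why parts of a page can go unrendered.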
Actually, there is no official time limit, but it is widely believed that Google will not wait more than five seconds for a script to load. It is hard to make firm assumptions on this topic, but here are a few factors that are considered.
If the website is slow, there can be losses such as:
It is always better to create a lightweight website and keep server responses fast. Don't make Googlebot's task any harder; as we have seen, crawling and rendering are already difficult for it.
Isomorphic rendering: This is a popular approach, and it is recommended over prerendering. Here, both search engines and users receive the page with its full content on the initial load.
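The core of the isomorphic approach is that the same rendering code runs on the server (to produce the fully populated initial HTML) and again on the client. The sketch below uses invented names and a plain template function; real projects use frameworks such as Angular Universal or Next.js for this.

```javascript
// Minimal sketch of the isomorphic idea: one render function is shared
// by server and client. All names and data here are illustrative.
function renderProduct(product) {
  return `<article><h2>${product.name}</h2><p>${product.price}</p></article>`;
}

// On the server: embed the full markup in the initial response, so both
// users and search engines get complete content on first load.
function serverResponse(product) {
  return `<div id="app">${renderProduct(product)}</div>`;
}

// On the client: the same function re-renders after user interactions.
// (In a browser this would be written into document.getElementById('app').)
function clientUpdate(product) {
  return renderProduct(product);
}

const product = { name: 'Sample Widget', price: '$10' };
// Both environments produce identical markup for the same data.
```

Because server and client share one code path, there is no empty shell for the crawler and no duplicated templates to keep in sync.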
Also Read: WooCommerce SEO Tips
If there is a problem rendering a complex, script-heavy website, use the Fetch and Render tool in Google Search Console to check whether Google can still render it.
You may have noticed "hashes" (#) in URLs. They are common, and they can cause problems: Googlebot may not crawl anything after the hash fragment. For example:

https://example.com/#/products
Avoid such URLs. The following is an example of a good URL:

https://example.com/products
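The reason hash URLs are risky is that everything after "#" is a fragment: it is never sent to the server, so to a crawler every hash route can look like the same page. This can be seen with Node's built-in `URL` class (the domain and paths below are placeholders):

```javascript
// Everything after "#" is a fragment: it never reaches the server, and
// crawlers may ignore it, so hash-based routes can all look like one page.
const hashUrl = new URL('https://example.com/#/products/42');
const cleanUrl = new URL('https://example.com/products/42');

console.log(hashUrl.pathname);  // "/" – the route lives only in the fragment
console.log(hashUrl.hash);      // "#/products/42"
console.log(cleanUrl.pathname); // "/products/42" – a real, crawlable path
```

This is why client-side routers that use the History API (real paths) are generally safer for SEO than hash-based routing.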
Acowebs are developers of WooCommerce plugins that help you personalize your stores. Their feature-rich add-ons, such as WooCommerce Product Addons, are lightweight and fast. Update your store with these add-ons and enjoy a hassle-free experience.