Impact of Googlebot's User Agent on Website Traffic

Getting more traffic to your website in the age of AI search engines is a bit tricky. You have to familiarize yourself with the technicalities of website ranking in order to increase your website's visibility online. In December 2019, Google updated the Googlebot user agent strings to reflect the new browser version, and it now periodically updates the version numbers to match the Chrome release used by Googlebot. Below is how user agents appear in robots.txt, where several distinct user agents are recognized.

If you want to block or allow Google's crawlers' access to some of your content, you can do this by specifying Googlebot as the user agent. For example, if you want all your pages to appear in Google Search, and you want AdSense ads to appear on your pages, you don't need a robots.txt file at all. Similarly, if you want to block some pages from Google altogether, blocking the user agent Googlebot will also block all of Google's other user agents.

But if you want more fine-grained control, you can get more specific. For example, you might want all your pages to appear in Google Search, but you don't want images in your personal directory to be crawled. In this case, use robots.txt to disallow the user agent Googlebot-Image from crawling the files in your /personal directory (while allowing Googlebot to crawl all files), like this:

User-agent: Googlebot
Disallow:

User-agent: Googlebot-Image
Disallow: /personal
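A quick way to sanity-check rules like these is Python's built-in urllib.robotparser. One caveat: robotparser matches a group when its user agent token is a substring of the requesting agent, checking groups in file order rather than using Google's most-specific-group matching, so in this sketch the Googlebot-Image group is listed first:

```python
from urllib import robotparser

# Same rules as above, with the more specific Googlebot-Image group
# first so robotparser's first-match-wins lookup behaves like Google's
# most-specific-group matching for these two agents.
rules = """\
User-agent: Googlebot-Image
Disallow: /personal

User-agent: Googlebot
Disallow:
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("Googlebot", "/personal/photo.jpg"))        # True
print(rp.can_fetch("Googlebot-Image", "/personal/photo.jpg"))  # False
print(rp.can_fetch("Googlebot-Image", "/blog/post.html"))      # True
```

This is only a local approximation of how Google interprets the file, but it catches typos in directives before you deploy them.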

To take another example, say that you want ads on all your pages, but you don’t want those pages to appear in Google Search. Here, you’d block Googlebot, but allow Mediapartners-Google, like this:

User-agent: Googlebot
Disallow: /

User-agent: Mediapartners-Google
Disallow:

User agents in robots meta tags

Some pages use multiple robots meta tags to specify directives for different crawlers, like this:

<meta name="robots" content="nofollow">
<meta name="googlebot" content="noindex">

Googlebot user agents:

Mobile:

Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.96 Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)

Desktop:

Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)

or

Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Googlebot/2.1; +http://www.google.com/bot.html) Safari/537.36

The new evergreen Googlebot and its user agent
In December 2019, Google started periodically updating the above user agent strings to reflect the version of Chrome used in Googlebot. In the following user agent strings, "W.X.Y.Z" is substituted with the Chrome version Googlebot is running. For example, instead of W.X.Y.Z you'll see something similar to "76.0.3809.100". This version number updates on a regular basis.

Mobile:

Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/W.X.Y.Z Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)

Desktop:

Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)

or

Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Googlebot/2.1; +http://www.google.com/bot.html) Chrome/W.X.Y.Z Safari/537.36

How to test your site
Google ran an evaluation to make sure that most websites would not be affected by the change.
Sites that follow Google’s recommendations to use feature detection and progressive enhancement instead of user agent sniffing should continue to work without any changes.

If your site looks for a specific user agent, it may be affected. You should use feature detection instead of user agent sniffing.

If you cannot use feature detection and need to detect Googlebot via the user agent, then look for “Googlebot” within the user agent.
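In Python, that check is a simple substring test; a minimal sketch, with a sample user agent built from the formats above (the version number is illustrative):

```python
def is_googlebot(user_agent: str) -> bool:
    """Loose check: look for the 'Googlebot' token in the UA string.

    Works for both the old and the evergreen user agent strings, since
    both contain 'Googlebot'. Note that the header can be spoofed, so
    this is not proof the request really came from Google.
    """
    return "Googlebot" in user_agent

mobile_ua = ("Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) "
             "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 "
             "Mobile Safari/537.36 (compatible; Googlebot/2.1; "
             "+http://www.google.com/bot.html)")

print(is_googlebot(mobile_ua))                           # True
print(is_googlebot("Mozilla/5.0 (X11; Linux x86_64)"))   # False
```

Matching on the token rather than the full string means the check keeps working as the Chrome version in the user agent changes.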

Some common issues you may encounter during the change:

Pages that present an error message instead of normal page contents. For example, a page may assume Googlebot is a user with an ad-blocker, and accidentally prevent it from accessing page contents.

Pages that redirect to a robots.txt-blocked or noindexed document.
If you're not sure whether your website is affected or not, you can try loading your webpage in your browser using the new Googlebot user agent. In Chrome, you can override your user agent from the Network conditions panel in DevTools.
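If you'd rather test from a script than from DevTools, you can send a request that presents Googlebot's user agent yourself; a minimal sketch using only the standard library (example.com is a placeholder for your own page):

```python
import urllib.request

# Evergreen desktop Googlebot user agent (version number illustrative).
GOOGLEBOT_UA = ("Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; "
                "compatible; Googlebot/2.1; +http://www.google.com/bot.html) "
                "Chrome/76.0.3809.100 Safari/537.36")

# Build a request carrying Googlebot's user agent header.
req = urllib.request.Request("https://example.com/",
                             headers={"User-Agent": GOOGLEBOT_UA})
print(req.get_header("User-agent"))

# Uncomment to actually fetch the page and inspect what "Googlebot" sees:
# with urllib.request.urlopen(req) as resp:
#     html = resp.read().decode("utf-8", errors="replace")
```

Comparing the HTML you get this way against a normal browser fetch is a quick way to spot pages that serve an error or a redirect only to bot user agents.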

Getting more visits from Googlebot

Here are some tips to get Google's bots to crawl your website more often:
1. Optimize site speed
Compress your images and minify your CSS and JavaScript so that pages load faster.
2. Add more relevant content to your website
Many ecommerce websites lack a blog section. Add articles related to your niche in order to increase awareness of your products or services online. This way you will also boost sales.
3. Sitemap
Add a sitemap to your website and submit it in Google Search Console. This is how you notify Google of your site's presence and structure.
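Besides Search Console, you can also point crawlers at your sitemap directly from robots.txt via the Sitemap directive (the URL below is a placeholder for your own domain):

Sitemap: https://www.example.com/sitemap.xml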
4. Backlink
Earning more backlinks increases your website's authority. Use a combination of dofollow and nofollow backlinks.
5. Internal Links
Internal linking helps with your website's visibility and reduces bounce rate. Anchor links also give Googlebot a signal to crawl the pages associated with your posts.
6. Remove Duplicated Content
Do not put the same content on different pages; vary both the content and the meta tags. I have seen a lot of ecommerce websites do this: they put the same meta tags on similar pages. For example, if you run a website selling dresses for women, make sure the "Maxi dresses for women" category has its own meta tags, or Google will ignore that page. The whole point of creating those pages is to sell those dresses, so why make Google ignore them by giving them the same content? The same goes for used-car dealership websites. Modify the meta tags and content to make each page as important as your main page. Every single product page should have its own unique meta tags and unique content.
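A crude way to catch this kind of duplication is to compare pages' meta tags programmatically; a minimal sketch using only the standard library (the two sample pages are hypothetical):

```python
from html.parser import HTMLParser

class MetaTagParser(HTMLParser):
    """Collect <meta name="..."> tags from a page's HTML."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and "name" in attrs:
            self.meta[attrs["name"].lower()] = attrs.get("content", "")

def meta_tags(html: str) -> dict:
    parser = MetaTagParser()
    parser.feed(html)
    return parser.meta

# Two hypothetical category pages sharing an identical description.
page_a = '<meta name="description" content="Shop dresses for women">'
page_b = '<meta name="description" content="Shop dresses for women">'

a, b = meta_tags(page_a), meta_tags(page_b)
duplicated = {name for name in a if a[name] == b.get(name)}
print(duplicated)  # {'description'} - rewrite these before publishing
```

Run a check like this across category and product pages and rewrite any meta tags that come out identical.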
(See: How Artificial Intelligence Changed Data Science and Search Engines)
In summary, keep an eye on your website's performance. Use these tips to get more Google search engine traffic to your website, and get rid of unwanted bots by blocking them in your robots.txt or .htaccess file on your server.