Alert! Google will no longer support the noindex directive in robots.txt

This is big news. Google is ending support for unsupported and unpublished rules in the robots.txt file: the company announced on the Google Webmaster Central Blog that it will stop honoring the noindex directive when it is listed in a robots.txt file.

“In the interest of maintaining a healthy ecosystem and preparing for potential future open source releases, we’re retiring all code that handles unsupported and unpublished rules (such as noindex) on September 1, 2019. For those of you who relied on the noindex indexing directive in the robots.txt file, which controls crawling, there are a number of alternative options,” the company said.
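For context, noindex in robots.txt was never an official rule. A typical (now retired) entry looked roughly like the sketch below; the path is a placeholder:

    User-agent: Googlebot
    Noindex: /private-page/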

Are there any alternatives?

Yes, there are several options, and they are the ones you probably should have been using anyway:

Search Console Remove URL tool:

This is the quickest way to remove a URL from Google's search results. Note that the tool only removes the URL temporarily; for permanent removal you will need one of the other methods below.

Password protection:

Hiding a page behind a login will generally remove it from Google's index, unless markup is used to indicate subscription or paywalled content.
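As a sketch, one common way to put a page behind a login is HTTP Basic Authentication. Assuming an Apache server, an .htaccess file along these lines would do it (the realm name and file path are placeholders):

    AuthType Basic
    AuthName "Members only"
    AuthUserFile /etc/apache2/.htpasswd
    Require valid-user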

Use Disallow in robots.txt:

Search engines can only index pages that they know about, so blocking a page from being crawled usually means its content won't be indexed. A search engine may still index the URL based on links from other pages, without seeing the content itself, but Google says it aims to make such pages less visible in the future.
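For example, a minimal robots.txt sketch that blocks all crawlers from a hypothetical /private/ directory (the path is a placeholder):

    User-agent: *
    Disallow: /private/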

Use robots meta tags for noindex:

Supported both in HTML and in the HTTP response headers, this is the most popular and effective way to remove URLs from the index when crawling is allowed.
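Both variants are documented by Google. In a page's HTML, the tag looks like this:

    <meta name="robots" content="noindex">

And as an HTTP response header, which also works for non-HTML files such as PDFs:

    X-Robots-Tag: noindex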
