What is Google Crawling and Indexing

Last Updated on August 20th, 2016

Every blogger, whether professional or newbie, spends a great deal of time studying the questions: What is Google crawling? What is Google indexing? And how does Google indexing work? SEO is a vast subject, and before we start learning it, we must be aware of its basics.

The basics of SEO are as follows:

  1. What is Crawling?
  2. What is Indexing?

The World Wide Web works entirely on two concepts: crawling and indexing. If we talk about the Google search engine specifically, the questions change a bit: What is Google crawling? and What is Google indexing?

So, let's understand these concepts.

What is Crawling?

When a search engine visits each and every page of your website to gather the data available on it, that process is called crawling.

What is Indexing?

After crawling your website and gathering all the data, the search engine adds your website's data to its search results (i.e., the web search results). This process is called indexing.


What is Google Crawling?

When the Google search engine visits your website and gathers all the data posted on it, that process is called Google crawling. Now let's understand how Google crawling works.

Google crawling always follows a path, and that path is made of links. So Google's crawler follows the links available on your website. For example, the Google spider bot comes to your website's homepage and crawls that one page. On the homepage it finds 5 more links, i.e., 5 menu items, and crawls those as well. So the total number of crawled pages is 6 (1 homepage + 5 menu options).
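The link-following idea above can be sketched in code. This is only a minimal illustration of link discovery, not Google's actual crawler; the domain `example.com` and the homepage HTML are made up for the example:

```python
# Minimal sketch of link discovery during crawling (illustrative only;
# a real crawler also respects robots.txt, deduplicates, and recurses).
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects absolute URLs from the href of every <a> tag."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

# Hypothetical homepage with a 5-item menu
homepage_html = """
<nav>
  <a href="/about">About</a> <a href="/blog">Blog</a>
  <a href="/services">Services</a> <a href="/portfolio">Portfolio</a>
  <a href="/contact">Contact</a>
</nav>
"""

parser = LinkExtractor("https://example.com/")
parser.feed(homepage_html)
crawled = ["https://example.com/"] + parser.links
print(len(crawled))  # 6 pages discovered: 1 homepage + 5 menu links
```

Running this prints 6, matching the example above: the homepage itself plus the five menu links found on it.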

Here sitemaps play a vital role and guide the Google spider bot in crawling the website. A sitemap stores all the links of your website, and the Google spider bot gathers the data available at each of those links. This is the reason why we create sitemaps.
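A sitemap is typically an XML file listing every URL you want crawled. A minimal sketch (the domain and dates are hypothetical):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2016-08-20</lastmod>
  </url>
  <url>
    <loc>https://example.com/blog</loc>
    <lastmod>2016-08-15</lastmod>
  </url>
</urlset>
```

The `<loc>` element holds the page URL, and the optional `<lastmod>` element tells the crawler when the page last changed.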

Robots.txt plays another crucial role in crawling. The robots.txt file guides the Google spider bot, telling it where the sitemap is and which parts of the site may be crawled. There might be several pages which a webmaster or administrator does not want any search engine to crawl. Using the robots.txt file, you can stop the spider bot from crawling them.
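For instance, a robots.txt file placed at the root of the site can point crawlers at the sitemap and block private areas (the paths below are hypothetical examples):

```text
# Point crawlers at the sitemap
Sitemap: https://example.com/sitemap.xml

# Rules for all bots
User-agent: *
Disallow: /wp-admin/
Disallow: /private/
```

Any bot that honors robots.txt will skip the disallowed paths and can discover the rest of the site through the sitemap.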


What is Google Indexing?

When Google shows the results of a search query on the search result page using your website's data, which was gathered while crawling, that process is called Google indexing.

Indexing depends on the robots meta tag, which is used to mark a page as index or noindex. With the index value, Google will crawl the page and show it in search results. Similarly, if you don't want a post or page to be displayed in the search engine, use noindex; Google will never add those pages to its index.
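The robots meta tag goes in the `<head>` of the page. A short sketch of both values:

```html
<!-- Allow this page to be indexed (this is also the default behavior) -->
<meta name="robots" content="index, follow">

<!-- Keep this page out of Google's index -->
<meta name="robots" content="noindex">
```

Note that for noindex to take effect, the crawler must be allowed to fetch the page; a page blocked in robots.txt cannot see the meta tag.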

Important tip: Always allow the important parts of the blog/website to be crawled and indexed. Never index archives, tags, categories, index.php, etc.