The most important thing you’ll realize in maintaining your website is that you cannot leave search engine spiders and bots to their own devices when they visit your site. Left unguided, these bots can create duplicate content problems or treat your site as low quality, among other things.

For example, Googlebot has in the past moved valuable pages into the Google supplemental index, or passed rank to pages that do not need it, such as login, registration and subscription pages. As a result, you need to guide these bots, or herd them, so that they index and rank only the pages that matter. But how? There are several ways to do this, with varying degrees of success.

One way would be to use the nofollow meta tag. Placed in the head of a page, this tag instructs search bots not to follow, or pass rank through, any of the links on that page; it works at page level rather than link level. It is up to you to decide whether this is an adequate PageRank sculpting solution. One drawback shows up when page A links to page B on your site and page A carries the nofollow tag: PageRank would then not flow from page A to page B. To cut off page B completely, however, you would need to know every page that links to page B, including pages outside your site, and tag them all, which is practically impossible.
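For reference, a page-level nofollow meta tag would be placed in the head section of page A like this:

<META NAME="ROBOTS" CONTENT="NOFOLLOW">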

It follows that this method alone will not help your situation. With outside links pointing at page B, it would almost surely end up being indexed and ranked at some point. So you will still need to find a way to ensure that the incoming PageRank for page B is passed on to the most important pages of your website.

Another method is to disallow page B in your robots.txt file, as follows:

User-agent: *
Disallow: /login.php

However, the same problem persists: external links cannot be accounted for. Page B would still accumulate PageRank from them and could still come up as a result in a search, even though Googlebot never crawls it.

Google does offer a way to solve this problem. Here are two methods you can use:

1. Adding the noindex meta tag to page B, like this:

<META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW">

or

2. Adding the Noindex directive for the page to your robots.txt, like this:

User-agent: Googlebot
Noindex: /login.php

This works because Google supports the Noindex directive in robots.txt. However, this method is not currently supported by other search engines. Furthermore, while this directive lets you tell Googlebot not to index a page, or to de-index one it has already indexed, it does not stop Googlebot from following the links on page B, so PageRank still passes to any outgoing linked pages that are not protected.

Additionally, you should not use the page-level nofollow meta tag on page B, because that would turn page B into a dead end, or dangling link, for PageRank. You can, however, add the nofollow attribute to individual outgoing links on page B, provided you ensure that at least one of the links remains followable.
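For example, the outgoing links on page B could be marked up like this (the URLs here are only illustrative):

<a href="/archive.php" rel="nofollow">Archive</a>
<a href="/index.php">Home</a>

The first link passes no PageRank, while the second remains followable, so page B is not a dead end.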

In conclusion, it seems you can effectively control your PageRank flow within your website using the noindex directive. You won’t achieve 100% control, due to outside variables, but certainly enough to keep your website productive and searchable, as well as a valuable tool for you.
