Each subdomain could have many pages such as login screens, T&Cs, and "forgot your password" pages.
The client site has about 500 URLs, yet a Google site: search returns a total of over 6,000 URLs.
We see many URLs that look like: example3.clientsite.com/a/234j3k4
We still want visitors to have access to these pages - just not the search engines.
One idea our contractor proposed was to:
- use a canonical tag on every URL that looks like this: example3.clientsite.com/a/234j3k4
- include a robots NOINDEX meta tag: <META NAME="ROBOTS" CONTENT="NOINDEX">
"" When we see the noindex meta tag on a page, Google will completely drop the page from our search results, even if other pages link to it."
Wouldn't it be better to block all of these subdirectories using robots.txt?
Would this technique also signal Google to remove these URLs from the index -- but would it take longer?
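For example, since these pages all seem to sit under /a/ on each subdomain, I imagine the robots.txt at the root of each subdomain (e.g. example3.clientsite.com/robots.txt) would only need something like:

# block crawling of the /a/ pages on this subdomain
User-agent: *
Disallow: /a/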