
Should I Use Noindex And Nofollow On "cart" And "checkout"?


6 replies to this topic

#1 stevebarakat

stevebarakat

    HR 1

  • Members
  • 3 posts
  • Location:Jacksonville, FL

Posted 11 July 2014 - 07:11 PM

Hi. I am running a small WooCommerce shop on a WordPress site. I have been going through all the pages and optimizing them for SEO. I have a "Shop" page which shows the products we are selling and an individual product page for each one. I plan on those pages being indexed by search engines. But I also have a "Cart" page which displays the contents of a user's shopping cart and a "Checkout" page where they make the purchase. Should the "Cart" and "Checkout" pages be indexed? Or would that be superfluous and have a negative effect on SEO?



#2 chrishirst

chrishirst

    A not so moderate moderator.

  • Moderator
  • 7,103 posts
  • Location:Blackpool UK

Posted 12 July 2014 - 09:13 AM

Search engines cannot normally place products in a "cart", nor can they "checkout" those products, so they are NEVER likely to discover those URLs. But on the off chance of it happening, they should be blocked using robots.txt rather than with the "after the event" "robots" meta directive.
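For reference, the "robots" meta directive being discussed is a tag in the page's <head>; it can only take effect after the crawler has already fetched the page, which is why it is "after the event":

    <meta name="robots" content="noindex, nofollow">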

 

 

 

have a negative effect on SEO

No.



#3 stevebarakat

stevebarakat

    HR 1

  • Members
  • 3 posts
  • Location:Jacksonville, FL

Posted 12 July 2014 - 07:43 PM

@chrishirst Thanks for your response. That makes sense, but it brings up another thing I'm a bit confused about. Can search engines "search" archives or products on a site? If not, what is the point of configuring URL parameters?

 

From Google:

"Duplicate content often occurs when sites make the same content available via different URLs—for example, by using session IDs or other parameters, like this:

example.com/products/women/dresses/green.htm
example.com/products/women?category=dresses&color=green
example.com/shop/index.php?product_id=32&highlight=green+dress&cat_id=1&sessionid=123&affid=431

In this case, all these URLs point to the same content: a collection of real green dresses."

How do search engines get to that same page from different URLs?



#4 Jill

Jill

    Recovering SEO

  • Admin
  • 33,005 posts

Posted 13 July 2014 - 10:26 AM

 I would use rel=canonical for that.
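For example (a sketch using the URLs from the Google quote above), each parameterised variant would declare the preferred version with a canonical link element in its <head>:

    <!-- On example.com/products/women?category=dresses&color=green (and the other variants): -->
    <link rel="canonical" href="http://example.com/products/women/dresses/green.htm" />

Search engines then treat the href URL as the preferred version of the page and consolidate indexing and link signals there instead of on the parameterised duplicates.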



#5 chrishirst

chrishirst

    A not so moderate moderator.

  • Moderator
  • 7,103 posts
  • Location:Blackpool UK

Posted 13 July 2014 - 10:29 AM

 

Can search engines "search" archives or products on a site?

Not if the "search" uses a form and POSTs data on submit. Crawlers do not submit POST forms; a GET form, on the other hand, puts its fields into the URL query string, which is exactly how parameterised URLs like the ones you quoted come about.
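To illustrate (the /search path and the field name here are made up):

    <!-- POST form: data goes in the request body; crawlers will not submit it -->
    <form action="/search" method="post">
      <input type="text" name="q">
    </form>

    <!-- GET form: submission produces a crawlable URL such as /search?q=green+dress -->
    <form action="/search" method="get">
      <input type="text" name="q">
    </form>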



#6 stevebarakat

stevebarakat

    HR 1

  • Members
  • 3 posts
  • Location:Jacksonville, FL

Posted 13 July 2014 - 10:43 AM

Great! Thanks for clearing that up.

 

they should be blocked using robots.txt rather than with the "after the event" "robots" meta directive.

One more quick question: is robots.txt something that sits at the root level of the server, like the .htaccess file? If so, do you have a code snippet you can share that blocks URLs in the way you described above?



#7 chrishirst

chrishirst

    A not so moderate moderator.

  • Moderator
  • 7,103 posts
  • Location:Blackpool UK

Posted 13 July 2014 - 02:46 PM

Robots.txt
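Yes, it is a plain-text file that sits in the web root, the same place as .htaccess. A minimal example for your case (assuming WooCommerce's default /cart/ and /checkout/ page slugs; adjust the paths if yours differ):

    # Applies to all crawlers
    User-agent: *
    # Block the cart and checkout pages
    Disallow: /cart/
    Disallow: /checkout/

Note that robots.txt blocks crawling rather than indexing as such: a blocked URL that is linked from elsewhere can still show up in results as a bare URL. For pages that search engines are never likely to reach anyway, as discussed above, that trade-off doesn't matter.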





