
Internal PR Juice & PR Sculpting


42 replies to this topic

Poll: Internal PR & PR Sculpting (10 members have cast votes)

Do internal links carry less PR than external links?

  1. Yes: 1 vote (10.00%)
  2. No: 5 votes (50.00%)
  3. Don't Know: 4 votes (40.00%)

Can you perform PR sculpting with the rel='nofollow' attribute?

  1. Yes: 6 votes (60.00%)
  2. No: 2 votes (20.00%)
  3. Don't Know: 2 votes (20.00%)

Is PR sculpting considered 'Black Hat'?

  1. Yes: 1 vote (10.00%)
  2. No: 7 votes (70.00%)
  3. Don't Know: 2 votes (20.00%)


#16 NASA (HR 4, Active Members, 183 posts)

Posted 19 April 2009 - 11:46 AM

I guess the saying 'a little knowledge is dangerous' certainly applies here.

So should I just exclude them with a robots.txt file?

#17 Randy (Convert Me!, Moderator, 17,540 posts)

Posted 19 April 2009 - 06:33 PM

QUOTE
So should I just exclude them with a robots.txt file?


Do you not want those pages to show up in the search engines for any possible search term? I can't imagine a situation on a normal site where that would be critical, but if it is, you should at a minimum exclude those pages via robots.txt. If it were that critical, though, I think the argument could be made that the pages shouldn't be there in the first place. And if the pages are required for some reason, they should be reachable both through the site navigation and from specific, targeted searches.

It's a circular argument as you can see. ;)

Seriously, I cannot fathom a situation where a well-constructed site of fewer than several thousand pages should need PR sculpting. It would be easier, and make more sense, to fix the site than to lean on a crutch that depends on someone else's implementation of a non-standard.
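For reference, the robots.txt exclusion Randy describes is a one-line rule per path. A minimal sketch, assuming the W3C-compliance pages live under a hypothetical /w3c/ directory (all paths here are invented for illustration):

CODE
# robots.txt, served from the site root
User-agent: *
# Hypothetical locations of the pages to keep out of the engines
Disallow: /w3c/
Disallow: /w3c-standards.html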

#18 NASA (HR 4, Active Members, 183 posts)

Posted 20 April 2009 - 04:25 AM

QUOTE
they should be reachable both through the site navigation and from specific, targeted searches

So now you think it's worthwhile SEO'ing the 'w3c' page for long-tail keywords relating to the on-page content?

You see, I don't. They are there as part of the site content, but the likelihood of someone finding the site via these pages is tiny, not to mention undesired.

OK, some may argue that a visitor finding your site by any means is a good thing, but you cannot make the argument of thinking like your ideal visitor, appealing to them, and targeting the 'right visitor', and then say these pages are OK for bringing in traffic.

If people find the site via these links, it is most likely they will be the wrong type of visitor.

The whole SEO nightmare is, as you say:
QUOTE
It's a circular argument as you can see.


On one hand you say don't attract the wrong visitor because it just increases your bounce rate, but then you say to let these non-product-specific pages, which have nothing to do with the services being offered, bring in visitors.

These pages are useless in terms of relating to what the site is trying to offer and the keywords we want people to find the site by.

And if I made up bogus titles on these superfluous pages just to try to capture long-tail keywords, that would go against SEO ethics and the purpose of the title tag, which is to be descriptive of the page content.

These pages are there for the sole purpose of being available to the visitor once they arrive at the site, should they wish to view the W3C standards the site adheres to. For SEO and marketing purposes these pages are worthless.

I couldn't care less if they are in the index or not; what I do care about is any PR they may be leeching!

I've decided to use the rel='nofollow' attribute on these page links, and we'll just have to see how it goes. ;)
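What NASA describes is just adding the rel attribute to the existing footer links. A minimal sketch (the filename and anchor text are invented for illustration):

CODE
<!-- footer link as it presumably stands -->
<a href="/w3c-standards.html">W3C Standards</a>

<!-- the same link with the nofollow hint added; visitors are unaffected -->
<a href="/w3c-standards.html" rel="nofollow">W3C Standards</a>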


#19 Randy (Convert Me!, Moderator, 17,540 posts)

Posted 20 April 2009 - 06:28 AM

Who said anything about actively SEO'ing a w3c page? Or any other page for that matter?

Do you seriously think that allowing the spiders to find a page naturally is SEO'ing it? Or that every page that shows up for any phrase has been SEO'd?

Big, big difference between simply allowing the spiders to treat a page like they do every other page they run across and actually SEO'ing it for targeted terms.

#20 Jill (Recovering SEO, Admin, 32,982 posts)

Posted 20 April 2009 - 08:17 AM

QUOTE
I've decided to use the rel='nofollow' attribute on these page links, and we'll just have to see how it goes.


Great! Please let us know how it works out for you over time. I'm sure your info will be helpful to others who are thinking about this. We can all spout theories till the cows come home, but nobody will ever know without trying it for themselves.

#21 NASA (HR 4, Active Members, 183 posts)

Posted 20 April 2009 - 09:56 AM

QUOTE
Who said anything about actively SEO'ing a w3c page?

Every page, in a way, gets actively SEO'd, doesn't it?

Because I know SEO, I think of SEO whenever I write a page, which is what was giving me this PR dilemma.

I know you just want me to forget about these pages, let the SEs do what they want with them, and move on.

But I can't; it's not in my nature. I'm like a dog with a bone sometimes and just can't let go.

The missus calls me tenacious, which can be a good personality trait but also a bad one; a bit of a double-edged sword, I guess.

Also, because this site is so personal (as I've mentioned), I want to do the best job I possibly can, leaving no stone unturned.

I guess the proof of this proverbial pudding will be in the eating!

His main page (index.html) is No. 6 on page 1; let's see if this improves it or not.

However, it's a new site, and I know G! shows better rankings for new sites before they fade. So, as SERPs are unreliable, how will I know if it's working, Jill? (Or not, as the case may be.)



#22 Michael Martinez (HR 10, Active Members, 5,145 posts, Location: Georgia)

Posted 20 April 2009 - 01:01 PM

I take a weekend off and Ron Carnell writes a long, thoughtful post.

"You cannot improve any page's search visibility by hiding other pages".

Ron -- very funny, haha. Of course you're right on a macro level but this thread had nothing to do with how search engines can improve their search results.

When someone wants to publish data that shows how using "rel='nofollow'" on internal pages actually helps improve visibility, I'll be glad to retract my words.

In the meantime, all you do when you hide pages is diminish your site's crawlability. You don't enhance it. You don't add to it. You impede search engines' ability to find pages, and you don't speed up crawling of other pages.

I get the impression that people treat PageRank like water flowing through a hose. They have this image in their minds of water leaking out of holes all along the hose and that if they plug those holes they'll get more pressure pushing water out of the nozzle.

That works for water and hoses; it doesn't work for Web sites and PageRank. PageRank only flows as fast as the analysis a PageRank-calculating algorithm performs, and it has nothing to do with what you choose not to link to.

As Jill said (which is something I've pointed out many times before), all those so-called "unimportant" pages should be linking back to at least the main parts of your Web site, so they can and should be helping with your crawling.

In any event, I don't lose sleep over people shooting themselves in the foot by attempting to sculpt PageRank. Good luck with that. If you feel it works, I for one would appreciate as detailed an explanation as possible of why, once you have data available. That would be a first in this ridiculous two-year-old debate, and you would earn my praise and thanks.

#23 NASA (HR 4, Active Members, 183 posts)

Posted 21 April 2009 - 03:45 AM

QUOTE
all those so-called "unimportant" pages should be linking back to at least the main parts of your Web site, so they can and should be helping with your crawling.

How? Why?

They link to the main pages, yes, but do the main pages have to link to them? Remove the main pages' links to them and only allow them to link back to the main pages (and not to themselves!). I think people here aren't understanding SSI includes and footer links!

What's it got to do with crawlability? I use a sitemap (HTML and XML), don't you?

But of course, if I notice any change (not sure how), I'll be glad to share. :)

#24 Michael Martinez (HR 10, Active Members, 5,145 posts, Location: Georgia)

Posted 21 April 2009 - 11:59 AM

Sitemaps provide the least efficient, most minimal form of crawlability.

You should be managing your crawl, showing your visitors and search engines your most important pages through your internal navigation. If you're disabling links to pages in your navigation system, you're not managing crawl; you're impeding it.

The proper way to document a cause-and-effect relationship between any change you make and changes in search results is a bit tedious (this is why most people don't do it).

Let's say your current search results are in a state ALPHA. Here is how you document cause-and-effect.
  • Document search results state ALPHA
  • Make a change (on your site, in your links, etc.) and wait for the search results to change
  • Document search results state BETA
  • Undo your change and wait for the search results to change
  • IF NO CHANGE OCCURS, you're done -- there is NO cause-and-effect
  • If the search results change to state GAMMA, you're done -- there is NO cause-and-effect
  • If the search results change to state ALPHA, document the change
  • Make the same change you made in step 2 and wait for the search results to change
  • If you see search results state BETA, you probably have a cause-and-effect relationship
  • If you see search results state DELTA, you're done -- there is NO cause-and-effect
  • If the search results don't change, you're done -- there is NO cause-and-effect

The question some people ask is: "How long should I wait for the search results to change?"

The answer is: It depends.

It depends on how often your site is normally recrawled and reindexed.

It depends on how often your competitors' sites are normally recrawled and reindexed.

It depends on how often your linking resources are normally recrawled and reindexed.

It depends on how often your competitors' resources are normally recrawled and reindexed.

It depends on how often the search engine updates its index and rankings.

For most sites, at a normal (between-major-algorithm-updates) time of year, I would expect to see search results change anywhere from 2 to 6 weeks out. Outside that window, usually something else is going on. But there is no hard-and-fast rule of thumb that works for every site. You have to know your query space and how it shifts and changes (active query spaces change constantly).

If you do a test like this and then the search engine releases a significant algorithm update, your test is invalidated.

If you do a test like this and then your competitors change what they do, your test is invalidated.

So measuring how effective your changes are is very, very difficult. The people who do these kinds of tests well repeat them as many times as they practically can, with different sites, in different queries.

And they usually don't publish the details of their tests (which invalidates all claims of success or failure).

I am one of the annoying slobs who point out the flaws in all claims of cause-and-effect in the SEO community because they are so misleading and such time-wasters.

Your real priority should not be to chase the algorithm, however. It should be to provide your visitors with the best Web experience possible. Your search engine optimization should be designed to achieve the highest possible return on investment, and most people should be measuring that ROI in terms of converting traffic. How much more converting traffic do you get after making a change? If your numbers improve, you don't have to prove cause-and-effect to anyone, least of all me.

As long as you build more converting traffic, keep doing whatever you're doing. You don't have to know precisely what works in order for it to work.



#25 NASA (HR 4, Active Members, 183 posts)

Posted 21 April 2009 - 12:15 PM

QUOTE
You should be managing your crawl, showing your visitors and search engines your most important pages through your internal navigation. If you're disabling links to pages in your navigation system, you're not managing crawl; you're impeding it.

You've completely lost me. How do you 'manage' your crawl? And how does rel='nofollow' disable a link from working?

#26 Michael Martinez (HR 10, Active Members, 5,145 posts, Location: Georgia)

Posted 21 April 2009 - 07:07 PM

QUOTE(NASA @ Apr 21 2009, 10:15 AM)
You've completely lost me. How do you 'manage' your crawl? And how does rel='nofollow' disable a link from working?


I've written some pretty long articles about crawl management. It's hard to encapsulate it in a few points.

Basically, you manage crawl by placing links where they will be found and followed. You build up link pathways in tree-like structures and through redundancies to increase page visibility.

I recommend that people try to place at least three links to every page within their internal linking. You can do this through multi-layered navigation, cross-promotional links, and in-body references.

Here is an example of multi-layered navigation. Suppose you have a 500-page site. You cannot possibly squeeze all 500 pages into a human-comprehensible menu so you break up the menu into two parts. The main part only includes links to the most important pages on the site (root URL, About Us, Contact Form, HTML Sitemap, and special section roots). Every section on the site then adds a custom second-layer that is only specific to itself, so each page has navigation that takes the user to the high-level stuff and the section-level stuff.

Of course, you might still end up with 50-100 pages per section, so you can add a second navigation tool (usually in the form of a local page table or secondary nav menu positioned elsewhere on the page) to help users move around within small sub-groups.
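A rough sketch of that two-layer idea, with all section names and URLs invented for illustration; each page carries the global menu plus only its own section's menu:

CODE
<!-- layer 1: global navigation, identical on every page -->
<ul id="main-nav">
  <li><a href="/">Home</a></li>
  <li><a href="/about/">About Us</a></li>
  <li><a href="/contact/">Contact</a></li>
  <li><a href="/sitemap.html">Sitemap</a></li>
  <li><a href="/widgets/">Widgets</a></li> <!-- a section root -->
</ul>

<!-- layer 2: appears only on pages within the Widgets section -->
<ul id="section-nav">
  <li><a href="/widgets/blue/">Blue Widgets</a></li>
  <li><a href="/widgets/red/">Red Widgets</a></li>
</ul>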

Cross-promotional links are usually dressed up as intrasite advertisements: banners, special promotional links, and other "window dressing" in the margins.

Sometimes it's appropriate to have an onsite guide that tells visitors about the rest of your site. Some sites do this as mini-directories. Some sites do this as feature articles (often under the "About Us" or "Company History" categories in the main navigation).

The idea is to use several navigational tools that operate by consistent rules, staying within clear boundaries.

You manage crawl by leveraging your most frequently crawled pages to help your less frequently crawled pages.

As far as "disabling" a link goes, if you embed the "rel='nofollow'" attribute on a link, you're telling search engines not to follow that link and not to allow that link to pass any value it might otherwise pass. The search engine is expected to behave as if the link does not exist.

I would never recommend that anyone put "nofollow" on a link in primary navigation. In my opinion, if a page is useful for your visitors then it should be visible to search engines, too (allowing for some reasonable exceptions, such as "thank you" pages and private subscription content).

#27 NASA (HR 4, Active Members, 183 posts)

Posted 22 April 2009 - 04:09 AM

QUOTE
As far as "disabling" a link goes, if you embed the "rel='nofollow'" attribute on a link, you're telling search engines not to follow that link and not to allow that link to pass any value it might otherwise pass. The search engine is expected to behave as if the link does not exist.

You're getting functionality/usability mixed up with indexing.

The link still works, the user is not affected, and the page is indexed because it's in my sitemap.xml and I webmaster the site through GWMT. Nothing you have said there makes any sense to me, and I don't suffer from any of the problems you seem to indicate I would.
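For anyone unfamiliar with the format, the sitemap.xml NASA is relying on is just a flat list of URLs. A minimal sketch, with example.com and the page name standing in for the real ones:

CODE
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- one <url> entry per page -->
  <url>
    <loc>http://www.example.com/</loc>
  </url>
  <url>
    <loc>http://www.example.com/w3c-standards.html</loc>
  </url>
</urlset>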

Also, what does this mean?
QUOTE
You manage crawl by leveraging your most frequently crawled pages to help your less frequently crawled pages.

Why would G! want to, and why would I want G! to, visit a page more frequently when the content on the page never changes and never will? Again, it makes no sense to me, sorry!

Also, as I understand things, G! is the only SE for which PR is a factor, so rel='nofollow' shouldn't affect the other SEs. But someone correct me if I'm wrong; if so, I'll go create a YWMT account and add a sitemap.xml there.

Also, are we 100% sure that a page pointed to via a rel='nofollow' link doesn't get indexed? Or does it just not get any juice or trust from the link?


#28 Randy (Convert Me!, Moderator, 17,540 posts)

Posted 22 April 2009 - 06:21 AM

rel="nofollow" does not equate to Do Not Index or Do Not Crawl. All of the engines will still crawl and index pages behind a nofollow. Just that they probably won't rank well if there are no other clean links pointing to them.

If you want to exclude pages from being indexed you need to include them in your robots.txt, use a meta robots tag to instructed the engines not to archive them or put them behind a password system.
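A minimal sketch of the meta robots option Randy mentions; it goes in the <head> of each page to be kept out of the index (the page title is invented for illustration):

CODE
<head>
  <title>W3C Standards</title>
  <!-- noindex keeps the page out of the index; follow still lets its links count -->
  <meta name="robots" content="noindex, follow">
</head>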

QUOTE
Also, as I understand things, G! is the only SE for which PR is a factor, so rel='nofollow' shouldn't affect the other SEs. But someone correct me if I'm wrong


Google is the only one that uses PageRank specifically. But all of the engines incorporate some type of link popularity into their ranking algorithms. In theory rel="nofollow" is supposed to halt the passing of link popularity, so in theory it should have an effect on all of the search engines.

#29 NASA (HR 4, Active Members, 183 posts)

Posted 22 April 2009 - 06:34 AM

Thanks Randy, that's what I thought; I couldn't get my head round what Michael was saying.

I guess it's possible you and I are wrong, though. ;)

#30 Randy (Convert Me!, Moderator, 17,540 posts)

Posted 22 April 2009 - 08:39 AM

Yeah, the problem is that there are no standards for the non-standard attribute. Which makes sense, in a weird way.

I've seen where Matt Cutts himself (over on Rand's SEOmoz site, I think) said flat out that nofollow'd links are not used for discovery because they drop them from their link graph. But at the same time you can see nofollow'd links in the Webmaster Tools area, which would lead one to believe that something might be passed. Then there's the other issue: when nofollow first came out, you could nofollow a link to a previously non-existent page and use a made-up word as the anchor text. Remarkably, the test page would show up for searches on the made-up word, even if it only had a single nofollow'd link pointing at it.

I haven't tested that one lately, but I think Google at least fixed that little bug last summer. I'm still not sure I'd believe that nofollow'd links aren't used for discovery, even though I saw the quote from Matt C. I'd have to test it again to say for sure, but I've seen tests where pages that were orphans, with the exception of a single nofollow'd link, still showed up in the index.

It's just not an issue for me, since I don't much believe in rel="nofollow". I'll use other, more proven methods of link-popularity sculpting if I need to, because even if you figured out exactly how Google treats nofollow'd links, Yahoo, MSN, Ask, and all the rest are probably doing something totally different.



