Internal PR Juice & PR Sculpting
Posted 19 April 2009 - 11:46 AM
So should I just exclude them with a robots.txt file?
Posted 19 April 2009 - 06:33 PM
Do you not want those pages to show up in the search engines for any possible search term? I can't imagine a situation with a normal site where it would be that critical, but if it is you should at a minimum exclude those pages via robots.txt. If it were that critical though I think the argument could probably be made that the pages shouldn't be there at all in the first place. And if the pages are required to be there for some reason, they should be reachable both by the site navigation and from search engine searches for specific, targeted searches.
It's a circular argument as you can see.
Seriously, I cannot fathom a situation where a well-constructed site of fewer than several thousand pages should need PR sculpting. It would be easier, and make more sense, to fix the site than to rely on a crutch that depends on someone else's implementation of a non-standard.
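For reference, excluding pages via robots.txt takes only a couple of lines. The paths below are hypothetical placeholders, not anything from this site:

```
# Hypothetical example: ask all crawlers to skip a standards section
User-agent: *
Disallow: /standards/
Disallow: /css-validation.html
```

Note that robots.txt blocks crawling, not necessarily indexing; a blocked URL can still appear in results if other sites link to it.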
Posted 20 April 2009 - 04:25 AM
You see, I don't. They are there as part of the site content, but the likelihood of someone finding the site via these pages is slim, not to mention undesired.
OK, some may argue that a visitor finding your site by any means is a good thing, but you can't preach thinking like your ideal visitor, appealing to them and targeting the 'right' visitor, and then say these pages are OK to bring in traffic.
If people find the site via this link, it is most likely they will be the wrong type of visitor.
The whole SEO nightmare is just as you say: on one hand, don't attract the wrong visitor because it just increases your bounce rate; on the other, allow these non-product-specific pages, which have nothing to do with the services being offered, to bring in visitors.
These pages are useless in terms of relating to what the site is trying to offer and the keywords we want people to find the site by.
And if I made up bogus titles on these superfluous pages just to try to capture long-tail keywords, that would go against SEO ethics and the purpose of the title tag, which is to describe the page content.
These pages exist for the sole purpose of being available to the visitor once they arrive at the site, should they wish to view the W3C standards the site adheres to. For SEO and marketing purposes they are worthless.
I couldn't care less whether they are in the index or not; what I do care about is any PR they may be leeching!
I've decided to use the rel='nofollow' attribute on the links to these pages, and we'll just have to see how it goes.
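For anyone unfamiliar with the syntax, the attribute goes on the anchor tag itself. The URL and anchor text here are made-up examples, not the actual site's:

```html
<!-- Hypothetical footer link, marked so engines are asked
     not to follow it or pass any value through it -->
<a href="/w3c-standards.html" rel="nofollow">W3C Standards</a>
```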
Posted 20 April 2009 - 06:28 AM
Do you seriously think that allowing the spiders to find a page naturally is SEO'ing it? Or that every page that shows up for any phrase has been SEO'd?
Big, big difference between simply allowing the spiders to treat a page like they do every other page they run across and actually SEO'ing it for targeted terms.
Posted 20 April 2009 - 08:17 AM
Great! Please let us know how it works out for you over time. I'm sure your info will be helpful to others who are thinking about this. We can all spout theories till the cows come home, but nobody will ever know without trying it for themselves.
Posted 20 April 2009 - 09:56 AM
Because I know of SEO, I think of SEO whenever I write a page, which is what was giving me this PR dilemma.
I know you just want me to forget about these pages, let the SE's do what they want with them and move on.
But I can't, it's not in my nature, I'm like a dog with a bone sometimes and just can't let go.
The missus calls me tenacious, which can be a good personality trait but also a bad one; a bit of a double-edged sword, I guess.
Also, because this site is so personal (as I've mentioned), I want to do the best job I possibly can, leaving no stone unturned.
I guess the proof of this proverbial pudding will be in the proverbial eating!
The main page (index.html) is No. 6 on page 1; let's see if this improves it any or not.
However, it's a new site, and I know G! shows better rankings for new sites which then fade. So since SERPs are unreliable, how will I know if it's working, Jill (or not, as the case may be)?
Posted 20 April 2009 - 01:01 PM
"You cannot improve any page's search visibility by hiding other pages".
Ron -- very funny, haha. Of course you're right on a macro level but this thread had nothing to do with how search engines can improve their search results.
When someone wants to publish data that shows how using "rel='nofollow'" on internal pages actually helps improve visibility, I'll be glad to retract my words.
In the meantime, all you do when you hide pages is diminish your site's crawlability. You don't enhance it. You don't add to it. You impede search engines' ability to find pages, and you don't speed up crawling of other pages.
I get the impression that people treat PageRank like water flowing through a hose. They have this image in their minds of water leaking out of holes all along the hose and that if they plug those holes they'll get more pressure pushing water out of the nozzle.
That works for water and hoses; it doesn't work for Web sites and PageRank. PageRank only flows as fast as the analysis a PageRank-calculating algorithm performs, and it has nothing to do with what you choose not to link to.
As Jill said (which is something I've pointed out many times before), all those so-called "unimportant" pages should be linking back to at least the main parts of your Web site, so they can and should be helping with your crawling.
In any event, I don't lose sleep over people shooting themselves in the foot by attempting to sculpt PageRank. Good luck with that. If you feel it works, I for one will appreciate a detailed explanation of why, once you have data available. That would be a first in this ridiculous two-year-old debate, and you would earn my praise and thanks.
Posted 21 April 2009 - 03:45 AM
They link to the main pages, yes, but do the main pages link to them? Remove the main pages' links to them and only allow them to link back to the main page (and not to themselves!). I think people here aren't understanding SSI includes and footer links!
What's it got to do with crawlability? I use a site map (HTML & XML), don't you?
But of course, if I notice any change (though I'm not sure how I would), I'll be glad to share.
Posted 21 April 2009 - 11:59 AM
You should be managing your crawl, showing your visitors and search engines your most important pages through your internal navigation. If you're disabling links to pages in your navigation system, you're not managing crawl, you're impeding it.
The proper way to document a cause-and-effect relationship between any change you make and changes in search results is a bit tedious (this is why most people don't do it).
Let's say your current search results are in a state ALPHA. Here is how you document cause-and-effect.
1. Document search results state ALPHA.
2. Make a change (on your site, in your links, etc.) and wait for the search results to change.
3. Document search results state BETA.
4. Undo your change and wait for the search results to change.
   - If no change occurs, you're done: there is NO cause-and-effect.
   - If the search results change to a new state GAMMA, you're done: there is NO cause-and-effect.
   - If the search results change back to state ALPHA, document the change and continue.
5. Make the same change you made in step 2 and wait for the search results to change.
   - If you see search results state BETA, you probably have a cause-and-effect relationship.
   - If you see a new state DELTA, you're done: there is NO cause-and-effect.
   - If the search results don't change, you're done: there is NO cause-and-effect.
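The protocol above can be sketched as code, under the assumption that a "state" is simply a snapshot of ranked results you can compare for equality. All the helper names here are made up for illustration:

```python
def verify_cause_and_effect(snapshot, apply_change, undo_change, wait):
    """Sketch of the documentation protocol above.

    snapshot()     -- hypothetical helper returning the current search
                      results state (e.g., a list of ranked URLs).
    apply_change() -- makes the change being tested.
    undo_change()  -- reverts it.
    wait()         -- waits long enough for results to update.
    """
    alpha = snapshot()            # 1. document state ALPHA
    apply_change(); wait()        # 2. make the change
    beta = snapshot()             # 3. document state BETA
    undo_change(); wait()         # 4. undo the change
    reverted = snapshot()
    if reverted != alpha:         # stayed at BETA, or moved to GAMMA:
        return False              #   no cause-and-effect
    apply_change(); wait()        # 5. repeat the change
    final = snapshot()
    return final == beta          # BETA again => probable cause-and-effect
```

This glosses over the real difficulty, of course: deciding how long `wait()` should be, which is exactly the question addressed below.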
The question some people ask is: "How long should I wait for the search results to change?"
The answer is: It depends.
It depends on how often your site is normally recrawled and reindexed.
It depends on how often your competitors' sites are normally recrawled and reindexed.
It depends on how often your linking resources are normally recrawled and reindexed.
It depends on how often your competitors' resources are normally recrawled and reindexed.
It depends on how often the search engine updates its index and rankings.
For most sites, at a normal (between-major-algorithm-updates) time of year, I would expect to see search results change within 2 to 6 weeks. Outside that window, usually something else is going on. But there is no hard-and-fast rule of thumb that works for every site. You have to know your query space and how it shifts and changes (active query spaces change constantly).
If you do a test like this and then the search engine releases a significant algorithm update, your test is invalidated.
If you do a test like this and then your competitors change what they do, your test is invalidated.
So measuring how effective your changes are is very, very difficult. The people who do these kinds of tests well repeat them as many times as they practically can, with different sites, in different queries.
And they usually don't publish the details of their tests (which invalidates all claims of success or failure).
I am one of the annoying slobs who point out the flaws in all claims of cause-and-effect in the SEO community because they are so misleading and such time-wasters.
Your real priority should not be to chase the algorithm, however. It should be to provide your visitors with the best Web experience possible. Your search engine optimization should be designed to achieve the highest possible return on investment, and most people should be measuring that ROI in terms of converting traffic. How much more converting traffic do you get after making a change? If your numbers improve, you don't have to prove cause-and-effect to anyone, least of all me.
As long as you build more converting traffic, keep doing whatever you're doing. You don't have to know precisely what works in order for it to work.
Posted 21 April 2009 - 12:15 PM
Posted 21 April 2009 - 07:07 PM
I've written some pretty long articles about crawl management. It's hard to encapsulate it in a few points.
Basically, you manage crawl by placing links where they will be found and followed. You build up link pathways in tree-like structures and through redundancies to increase page visibility.
I recommend that people try to place at least three links to every page within their internal linking. You can do this through multi-layered navigation, cross-promotional links, and in-body references.
Here is an example of multi-layered navigation. Suppose you have a 500-page site. You cannot possibly squeeze all 500 pages into a human-comprehensible menu so you break up the menu into two parts. The main part only includes links to the most important pages on the site (root URL, About Us, Contact Form, HTML Sitemap, and special section roots). Every section on the site then adds a custom second-layer that is only specific to itself, so each page has navigation that takes the user to the high-level stuff and the section-level stuff.
Of course, you might still end up with 50-100 pages per section, so you can add a second navigation tool (usually in the form of a local page table or secondary nav menu positioned elsewhere on the page) to help users move around within small sub-groups.
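A bare-bones sketch of that two-layer navigation in HTML; the section and page names are invented purely for illustration:

```html
<!-- Main layer: most important pages only, repeated on every page -->
<ul class="nav-main">
  <li><a href="/">Home</a></li>
  <li><a href="/about/">About Us</a></li>
  <li><a href="/contact/">Contact Form</a></li>
  <li><a href="/sitemap.html">Sitemap</a></li>
  <li><a href="/widgets/">Widgets</a></li> <!-- a section root -->
</ul>

<!-- Second layer: appears only on pages within the Widgets section -->
<ul class="nav-section">
  <li><a href="/widgets/blue/">Blue Widgets</a></li>
  <li><a href="/widgets/red/">Red Widgets</a></li>
</ul>
```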
Cross-promotional links are usually dressed up as intrasite advertisements: banners, special promotional links, and other "window dressing" in the margins.
Sometimes it's appropriate to have an onsite guide that tells visitors about the rest of your site. Some sites do this as mini-directories. Some sites do this as feature articles (often under the "About Us" or "Company History" categories in the main navigation).
The idea is to use several navigational tools that operate by consistent rules, staying within clear boundaries.
You manage crawl by leveraging your most frequently crawled pages to help your less frequently crawled pages.
As far as "disabling" a link goes, if you embed the "rel='nofollow'" attribute on a link, you're telling search engines not to follow that link and not to allow that link to pass any value it might otherwise pass. The search engine is expected to behave as if the link does not exist.
I would never recommend that anyone put "nofollow" on a link in primary navigation. In my opinion, if a page is useful for your visitors then it should be visible to search engines, too (allowing for some reasonable exceptions, such as "thank you" pages and private subscription content).
Posted 22 April 2009 - 04:09 AM
The link still works, the user is not affected and the page is indexed because it's in my sitemap.xml and I webmaster the site through GWMT. Nothing you have said there makes any sense to me and I don't suffer from any of the problems you seem to indicate I would.
Also what does this mean
Also, as I understand things, G! is the only SE for which PR is a factor, so rel='nofollow' shouldn't affect the other SEs. But someone correct me if I'm wrong; if so, I'll go create a YWMT account and add a sitemap.xml there.
Also, are we 100% sure that a page pointed to via a rel='nofollow' link doesn't get indexed? Or does it just not get any juice or trust from the link?
Posted 22 April 2009 - 06:21 AM
If you want to exclude pages from being indexed, you need to include them in your robots.txt, use a meta robots tag to instruct the engines not to index them, or put them behind a password system.
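For completeness, the meta robots tag belongs in each page's &lt;head&gt;. For example, to ask engines to keep a page out of the index while still following its links:

```html
<meta name="robots" content="noindex, follow">
```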
Google is the only one that uses PageRank specifically. But all of the engines incorporate some type of link popularity into their ranking algorithms. In theory rel="nofollow" is supposed to halt the passing of link popularity, so in theory it should have an effect on all of the search engines.
Posted 22 April 2009 - 06:34 AM
I guess it's possible you and I are wrong though
Posted 22 April 2009 - 08:39 AM
I've seen where Matt Cutts himself (over on Rand's SEOMoz site I think) said flat out that nofollow'd links are not used for discovery because they drop them from their link graph. But at the same time you can see nofollow'd links in the Webmaster Tools area, which would lead one to believe that something might be passed. Then there's the other issue where when nofollow first came out you could nofollow a link to a previously non-existent page and use a made up word as the anchor text. Remarkably, the test page would show up for searches on the made up word. Even if it only had a single nofollow'd link pointing at it.
I haven't tested that one lately, but I think Google at least fixed that little bug last summer. I'm still not sure I'd believe that nofollow'd links aren't used for discovery, even though I saw the quote from Matt C. I'd have to test it again to say for sure, but I've seen tests where pages that were orphans, except for a single nofollow'd link, still showed up in the index.
It's just not an issue for me, since I don't much believe in rel="nofollow". I'll use other, more proven methods of link-pop sculpting if I need to, because even if you figured out exactly how Google treats them, Yahoo, MSN, Ask and all the rest are probably doing something totally different.