Purchasing Links For Pagerank
Posted 19 December 2005 - 11:49 AM
The main topic was "Purchasing Links for Pagerank" by Mark Daoust.
I would like to bring part of it before this forum's members, since I totally disagree with it and would like to see how wrong I am.
"It would be so easy to say that buying links is a decent practice and that you will never get in trouble for doing so......But when it all comes down to the choice you have to make, you have to realize that link buying is a risk. If a search engine catches you buying or selling a link, they will undoubtedly consider your site to be more questionable".
A few questions popped up in my mind:
1. What about Google AdSense? Aren't they selling links on your site in the very same way?
2. Yahoo! charges ~$300 to be included in their directory, so.... isn't that a paid link?
All of them are using website popularity to make money, aren't they?
Posted 19 December 2005 - 11:54 AM
I also disagree.
At worst, they simply won't allow your links (well, the ones from the page where the links are purchased) to pass link popularity. No big deal.
Posted 19 December 2005 - 02:21 PM
The Yahoo! $300 "listing fee" does not guarantee that you'll be placed in the directory. Some people feel that Yahoo! continues to exercise editorial control over their directory because of that.
The links that Google especially is concerned with are the straight hypertextual links which look and function exactly like normal unpaid links and which (in their opinion) skew the results of their link-based algorithms. They feel such links do not represent "editorial choice", as the sites probably would not carry the links if they were not paid to do so.
You have to understand that, despite all their technology, the people at Google really don't yet grasp how the Web works. Most outbound links on sites are not worth anything with respect to the site operators' opinions: they tend to grab links for the sake of completeness, and usually get their lists from search engines without understanding whether the content they link to is worth linking to.
Posted 20 December 2005 - 11:49 AM
Actually, if I were to make that statement, I'd say there's nothing wrong with buying links.
There is something wrong with buying links for PageRank, and it is in fact what got the SearchKing network severely penalized.
Not to mention that there's no sense doing anything for PageRank in and of itself.
Posted 20 December 2005 - 12:57 PM
I would like to add that there "is something wrong" with selling links with regards to PageRank which is the main reason why the SearchKing network was penalized (they were selling links based on the PR of any given page). I don't think those who bought the links were penalized but rather that the links were worthless after Google lowered the boom on the SK network.
Posted 20 December 2005 - 02:29 PM
Link Popularity - Using the number of inbound links pointing to pages returned for a query result to order the pages by greatest number of links to least number of links. Page A has 1000 links, page B has 800 links, page C has 775 links, etc. Inktomi (purchased by Yahoo! a couple of years ago and now providing the search engine for Yahoo!) used to order its results by link popularity. There is considerable evidence indicating that Yahoo! still takes link popularity into consideration.
Click Popularity - Using the number of searcher clicks on links in search results to order the pages. So, page A gets 1000 clicks, page B gets 900 clicks, page C gets 800 clicks, etc. DirectHit made this concept popular, and several services (including Yahoo!) actually combined DirectHit results into their query resolution algorithms. There is evidence that Yahoo!, Google, Ask, and MSN all count clicks but it's not clear if they are using the clicks to help order results.
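Both of those schemes reduce to sorting pages by a single count per page. A toy sketch (the page names and counts are made up for illustration, following the examples above):

```python
# Order results by a single popularity count, whether that count is
# inbound links (Link Popularity) or searcher clicks (Click Popularity).
inbound_links = {"pageA": 1000, "pageB": 800, "pageC": 775}

# Sort pages from the greatest count to the least.
ordered = sorted(inbound_links, key=inbound_links.get, reverse=True)
# ordered == ["pageA", "pageB", "pageC"]
```

The key point is that every link (or click) carries the same static weight, which is what distinguishes these schemes from PageRank below.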
PageRank - An algorithmically determined probability that a random surfer will land on a given page by randomly clicking on links. Pioneered by Google, which has historically ordered the search results in its directory by PageRank alone. They claim to use PageRank among 100+ other factors to determine the rankings of their search results. The sum total of all PageRank is supposed to be 1 (because probability distributions are represented as values between 0 and 1). A popular "proxy" value represents PageRanks as values between 0 and 10.
In theory, every document starts out with a minimal PageRank (1 divided by the number of documents being ranked). All the links are normalized (presumably, this means that any duplicate links from page A to page B are treated as one link). Links to non-indexed documents are temporarily set aside. For every document with outbound links, its PageRank is divided by the number of outbound links. Each document pointed to by one of those links then accumulates that portion of PageRank (the page linking out does not lose any PageRank by doing this).
All the PageRanks are summed up again (and adjusted within the formula by a damping factor). The process is repeated through a varying number of iterations until the differences between the PageRanks of two consecutive iterations become insignificant. At this point, the non-indexed documents that have links pointing to them from the indexed documents are assigned their PageRanks.
Documents which have no outbound links are treated as if they link to every other document in the collection.
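The iterative process described above can be sketched in code. This is a minimal illustration of the published PageRank idea, not Google's actual implementation; the damping factor, tolerance, and example graph are assumptions chosen for the sketch:

```python
def pagerank(links, damping=0.85, tol=1e-8, max_iter=100):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # every document starts out equal

    for _ in range(max_iter):
        # Each document keeps a base share adjusted by the damping factor.
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outbound in links.items():
            if not outbound:
                # A document with no outbound links is treated as if
                # it links to every document in the collection.
                share = rank[page] / n
                for p in pages:
                    new_rank[p] += damping * share
            else:
                # The document's PageRank is divided by its outbound links.
                share = rank[page] / len(outbound)
                for target in outbound:
                    new_rank[target] += damping * share
        # Stop when two consecutive iterations differ insignificantly.
        if max(abs(new_rank[p] - rank[p]) for p in pages) < tol:
            rank = new_rank
            break
        rank = new_rank
    return rank

# Hypothetical four-page web; "D" has no outbound links.
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": []}
ranks = pagerank(graph)
```

Note that the ranks sum to 1 at every iteration, matching the description of PageRank as a probability distribution.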
PageRank is a derivative of link popularity but the distinction between PageRank and Link Popularity comes down to the value of any given link. In PageRank, a link may count for more or less than others either due to its document's PageRank or due to the damping factor applied to the document (additionally, the receiving document's damping factor can reduce or magnify the value of the PageRank conferred by inbound links). In Link Popularity, all links have static value, hence there is no reason for multiple iterations to recalculate the Popularity Ranks of the documents.
Engineers at Ask and Yahoo! have been reported (by Mike Grehan) as claiming that Google has not implemented (or has not fully implemented) PageRank. Several technical papers (published by search engineers in the academic and professional communities) have suggested ways of speeding up the calculation of PageRank to reduce the time and resources involved.
Link Popularity, Click Popularity, and PageRank are all vulnerable to manipulation. PageRank researchers have proposed a variety of methods for refining the PageRank calculation process to account for (and filter out) manipulative links.
Based on my own study of the various technical papers and patents, my feeling is that Google (and probably also Yahoo!, Ask, and MSN) most likely maintains a core set or seed set of Good or Trusted Sites from which PageRank (or something like PageRank) is conferred out to other sites. Based on comments made by Matt Cutts and other people, my feeling is that Google (and probably also Yahoo!, Ask, and MSN) maintains a core set of Flagged or Suspicious Sites from which outgoing PageRank is reduced or blocked.
Other people have expressed similar assessments/opinions of how PageRank or its equivalents are being managed.
Delistings are distinguished from Penalties in that a site may be delisted (removed from a search engine's index) or it may only be penalized (it can be found in the index by searching on the title and/or URL but it won't rank for any other queries).
Moving into the murky area of semantics, many people feel that their own sites are penalized when, in fact, only some or all of the inbound links that have helped boost their sites in search results get penalized. This situation could be described as a Consequential or Sibling Penalty, in that the penalty is not applied directly to the site itself. A site suffering from a Consequential Penalty should be able to address its ranking problems by seeking links from more reputable sites.
Finally, Matt Cutts has sort of explained what he means by reputation:
...Yes, Google has a variety of algorithmic methods of detecting such links, and they work pretty well. But these links make it harder for Google (and other search engines) to determine how much to trust each link....
...Reputable sites that sell links won’t have their search engine rankings or PageRank penalized–a search for [daily cal] would still return dailycal.org. However, link-selling sites can lose their ability to give reputation (e.g. PageRank and anchortext).
A few months ago, Roy and Sharon Montero began participating in HighRankings discussions and they spoke in terms of "link reputation". I had never encountered that expression before, but after doing some research I learned that they were part of an SEO community that has developed a philosophy around "link reputation" for the past several years. The concept is not new, but I'm not the only person who has frowned upon hearing about it.
As I understand their point-of-view, they regard link anchor text to be "link reputation". Let's say my Web page has three inbound links with the following anchor texts:
"Michael's great Web site!"
"Michael's Web page"
"Michael's awful Web presence"
Each of these anchors confers what the Monteros and their associates call "link reputation". My page has three different kinds of reputation.
Is that what Matt means, too? I'm not so sure it's quite that simple in Google's eyes.
Google literally associates link anchor text with the receiving document. Everyone seems to agree on that much. I won't recap the quibbling over variations. But the inbound link anchor text helps define a document's content (as far as determining relevance). That is why the miserable failure link bomb pushes the White House and Michael Moore Web sites to the top of a perfectly irrelevant query (irrelevant in terms of what content those sites actually present with respect to the query).
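A hedged sketch of that anchor-text association, under the assumption that an engine simply folds inbound anchor terms into the receiving document's searchable terms (the index structure, link data, and scoring here are all illustrative, not Google's actual method):

```python
from collections import defaultdict

# Map each target document to the terms of its inbound anchor text,
# so a query can match a page that never uses those words itself --
# the mechanism behind the "miserable failure" link bomb.
anchor_index = defaultdict(list)

# Hypothetical (source, target, anchor text) link records.
links = [
    ("siteA", "whitehouse.gov/bio", "miserable failure"),
    ("siteB", "whitehouse.gov/bio", "miserable failure"),
    ("siteC", "example.com/frogs", "frog hunting"),
]

for source, target, anchor in links:
    anchor_index[target].extend(anchor.lower().split())

def anchor_score(query, target):
    """Count how often the query's terms appear in inbound anchors."""
    terms = anchor_index[target]
    return sum(terms.count(t) for t in query.lower().split())
```

On this toy data, `anchor_score("miserable failure", "whitehouse.gov/bio")` is 4 while the frog-hunting page scores 0, even though neither page's own content was examined.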
When Google blocks a page's ability to confer reputation, Matt says, it blocks both the PageRank and the anchor text that the page could associate with receiving documents. He only offers those as examples. In our ignorance, we can assume that they are the only valid examples or we can assume there may be others.
One possible other example of reputation could be a page's relationship to its community. That is, you and I each have a Web site about frog hunting and we link to each other. But my otherwise trusted site is caught selling text links without Google's preferred rel=nofollow attribute. So Google penalizes my site by blocking the reputation it can confer. Suddenly, your site gets no PageRank from mine and my outbound link anchor text is no longer associated with your site. But what if my site is no longer considered to be part of that community of frog hunting sites?
Would that hurt me in the rankings? I don't know. Maybe a search on RELATED SITES would differ based on whether a site has lost its ability to confer reputation.
Despite some history behind it, a Reputation Penalty is a new concept to SEOs. We are used to thinking of sites being penalized by losing their ability to rank for search results. We are not used to sites losing their ability to pass on value. The only well-documented instance is when Bob Massa's SearchKing network was penalized by Google (because he operated another network, which he claimed was separate from SearchKing, where he sold text links on the basis of the reported PageRank of the documents provided links).
The October 2005 Google Update may be the most recent example of this Reputation Penalty at work. That is, the sites so many people have depended on to help boost their own rankings may suddenly have had their outbound links devalued while suffering no other penalties. Hence, you should still be able to find many of those linking sites for their own targeted search expressions, but getting links from them no longer helps.
How did Google identify these sites? Maybe by looking at the Web communities in which they participate. While most of us can sort of visualize what Google means when it speaks about "neighborhoods" on the Web, we don't really know what criteria go into identifying a neighborhood. So, maybe Google just started assigning Reputation Penalties to certain neighborhoods, the boundaries of which only they can see. Sites outside those neighborhoods won't be affected because they don't depend on much linkage from the isolated neighborhoods.
If that is the case, then Google is starting to look beyond mere Web documents and Web sites. They are looking at sub-collections within the whole collection of indexed documents.
This goes back to what Dan Thies suggested in my Notes on October 2005 Google Update thread:
So, well, that was a long and winding path from "link popularity" to "link variety/link diversity", but this is the first time I've been able to put all those thoughts together this well. I'm still working on it.