Scammers Drop Your Google Rank Using Your Content!
Posted 18 March 2006 - 09:06 PM
Jill, maybe your site has not been affected by the cloaking because it's whitelisted? Who knows.
I also have to agree with Stephen about Google. I'm not whining, but I have noticed since last year that it's getting harder and harder to get good search results. And why all the dead links? They are becoming the AltaVista of old.
Don't get me wrong, I love using Google, but now, after so many bum searches, I have to really use the advanced features of Google and lots of double quotes to get at what I am looking for. It didn't used to be that way.
But am I the only person here who has noticed, on several occasions, that clicking on search results in Google often brings up dead links and useless pages with AdWords sponsored links on them?
Posted 18 March 2006 - 11:26 PM
So, maybe you need to figure out how to get your site "whitelisted" then, if there even is such a thing.
Then don't use it. Simple solution for you.
Posted 19 March 2006 - 08:09 AM
Certainly, links from .gov and .edu sites can't hurt you, because for a .gov to link to you, you must really be a high-quality, useful site. Luckily we get that on our car site. Now if we could just convince MIT to exchange links with our bridal tips site from their home page...
Edited by jeffostroff, 19 March 2006 - 08:28 AM.
Posted 19 March 2006 - 08:31 AM
"Site sues Google over search rankings"
I did not post the link to the story; I don't want Jill to get even more upset at me. It's probably not the first time, but the site that is suing them is also suing to get Google to reveal all the criteria it uses for ranking you.
They basically want the court to force Google to turn over their recipe.
Posted 19 March 2006 - 08:36 AM
Posted 19 March 2006 - 09:54 AM
And there's a good reason for that. Nobody but government and educational sites can get .gov or .edu. Theoretically, that would mean if you have links from them to your site, they would have to be REAL links and not scammed ones like most people are getting these days.
It's not about the .gov or the .edu. It's about counting links that are a true vote for a site and discounting those that are there to subvert the link popularity algo.
Regarding the lawsuit, please feel free to start a new thread about that; I don't want to discuss it in this one, as it's a different issue altogether. (Although I'm sure there's overlap... i.e., people who mistakenly think Google owes them a living.)
Posted 20 March 2006 - 01:14 AM
But one thing that bothers me is why the first one or two results are supplemental results. I mean, if those pages are supplemental to their own site, shouldn't the one with the main content be ranked higher (regardless of site)?
Posted 20 March 2006 - 10:15 AM
There are probably only two ways these other sites can show up in a Google search for stolen content that does not actually appear in visible form on the offending site.
1) Maybe the stolen content was on the offending site for a while, and once it was indexed, the scammers removed it, having obtained their ranking. Highly unlikely, because the next time they get crawled they will disappear.
2) Some JavaScript or some other program waits for Google's crawler to visit the site. Once it does, it feeds the stolen content to Google's crawler, which I believe is the method that people with dynamically generated HTML pages use.
The bottom line is that somehow Google is tricked into thinking the content stolen from your site is on the offending site.
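For what it's worth, this trick is usually done server-side rather than with JavaScript, since crawlers generally don't execute scripts: the web server inspects the User-Agent header of each request and serves the stolen content only to the crawler. Here is a rough, purely hypothetical sketch in Python; the content strings and port are placeholders, not anyone's actual code:

```python
# Hypothetical sketch of the cloaking trick described above: the server
# checks the User-Agent header and serves scraped content only to the
# crawler, while human visitors get a different page entirely.
from http.server import BaseHTTPRequestHandler, HTTPServer

STOLEN_CONTENT = b"<html>...content scraped from the victim site...</html>"
VISITOR_PAGE = b"<html>...ads and sponsored links...</html>"

class CloakingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        user_agent = self.headers.get("User-Agent", "")
        # Serve the scraped copy to the crawler so it gets indexed...
        if "Googlebot" in user_agent:
            body = STOLEN_CONTENT
        # ...but show ordinary visitors something else entirely.
        else:
            body = VISITOR_PAGE
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8000), CloakingHandler).serve_forever()
```

Either way, the effect is the same: the crawler indexes content that no human visitor ever sees on the offending site.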
Posted 20 March 2006 - 11:11 AM
The easiest way is to visit the page with a spider simulator, such as a Lynx viewer. You can't see cloaked content in the page source because you've already been redirected to a "clean" page by the time it loads in a regular browser.
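If you don't have a spider simulator handy, you can approximate the same check by requesting the page with two different User-Agent strings and comparing the responses. A rough sketch in Python; the URL and User-Agent strings are placeholders, and this is only a heuristic:

```python
# Rough heuristic for spotting user-agent cloaking: fetch the same URL as
# a "browser" and as a "crawler" and compare the responses.
import urllib.request

def fetch_as(url, user_agent):
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

url = "http://www.example.com/suspect-page.html"
browser_copy = fetch_as(url, "Mozilla/5.0 (Windows; U; Windows NT 5.1)")
crawler_copy = fetch_as(url, "Googlebot/2.1 (+http://www.google.com/bot.html)")

if browser_copy != crawler_copy:
    print("Responses differ -- the page may be cloaked.")
else:
    print("Responses match for these two user agents.")
```

Keep in mind that cloakers who key off the crawler's IP address rather than its User-Agent string won't be caught this way, and dynamic pages may legitimately differ between fetches; the spider-simulator approach has the same blind spots.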
Posted 20 March 2006 - 11:51 AM
Very Cool! I'm off to find it.
Posted 20 March 2006 - 01:03 PM
Big Daddy is an infrastructure update to Google's servers. There are several threads on this forum that discuss the essentials of the update; this one in particular should give you the information you are seeking: February 2006 Google Update
There have been documented issues across other forums discussing the Big Daddy update that both "GoogleGuy" and Matt Cutts have addressed. Also, lately Matt has been encouraging webmasters/SEO folks to fill out spam reports for suspected "spammy" sites as well as "spammy" SERPs.
Posted 20 March 2006 - 02:46 PM
As explained by Matt Cutts and GoogleGuy, Big Daddy is not supposed to be about changing the way Google ranks Web sites in its search results.
Now let me play devil's advocate and throw the official line out the window.
A little background:
The algorithm is the defined process that is applied to the data as it is gathered, indexed, and searched. The algorithm in this broad context would include filters for excluding content from indexing, methods for penalizing content, methods for organizing the data, methods for analyzing queries, and methods for searching the indexed data.
Source: Michael Martinez, HighRankings Forums, March 2006
With respect to Google's operation, I would say that they have three applications that we as Webmasters and searchers deal with:
- The crawler application -- variations of Googlebot, some service-specific such as for images, blogs, determining advertising context, etc.
- The indexer application -- the collection of programs that break down fetched pages and parcels out data to the various databases
- The query application -- the collection of programs that accept, manage, and process the queries we type into the search tool(s) Google provides
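To make those three pieces concrete, here is a toy sketch of how such a pipeline fits together. It is a gross simplification, and all of the names and structure are mine, not Google's:

```python
# Toy sketch of the crawler -> indexer -> query pipeline described above.
# A gross simplification of a real search engine; all names are illustrative.

def crawl(pages):
    """Crawler: 'fetch' documents. Here, pages is simply a dict of url -> text."""
    for url, text in pages.items():
        yield url, text

def index(fetched):
    """Indexer: break fetched pages into terms and build an inverted index."""
    inverted = {}
    for url, text in fetched:
        for term in set(text.lower().split()):
            inverted.setdefault(term, set()).add(url)
    return inverted

def query(inverted, terms):
    """Query processor: return the URLs that contain every search term."""
    results = None
    for term in terms.lower().split():
        matches = inverted.get(term, set())
        results = matches if results is None else results & matches
    return results or set()

pages = {
    "http://example.com/cars": "used car buying tips and scam warnings",
    "http://example.com/bridal": "bridal tips for your wedding day",
}
idx = index(crawl(pages))
print(query(idx, "car tips"))  # {'http://example.com/cars'}
```

Change the file format behind `index`, or rewrite `query` for speed, and you have changed the infrastructure without (intentionally) changing what ranks where; that is the distinction I am drawing below.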
Based solely on my ignorance (meaning, no one at Google has explained exactly what they are doing), I am guessing that they have changed the underlying file system to allow for faster processing and perhaps greater scalability. They already had immense scalability, but I gather from snippets being leaked by the press that Google's ambitions outmatch their technical capabilities. Their response is not to let that stop them, but rather to retool as required to improve efficiencies across what may already be the most efficient computer technology in the world.
Based solely on my ignorance, I am guessing that they have rewritten some of the software involved in the various applications. The rewrites might have been entailed by changing the file system, but they could also be implementing some new processing efficiencies.
Anyone who has participated in a computer programming class knows that, when you give 30 people a high-level specification (a problem to solve and a basic set of tools with which to solve it), you'll see up to 30 different ways to fulfill the specification. Just because the class assignment is to take an order and pass it to the service department doesn't mean everyone will do it the same way.
Now, think about what may be required to implement a very complex algorithm that captures, filters, organizes, penalizes, and processes data.
The new architecture could include new programs intended to do the same tasks according to the original specification, but they may be doing some things differently.
We recently saw how millions of documents became trapped in what I call the Supplemental Results Zone. That is, the documents were showing (when they showed at all) in the search results as Supplemental Results. When enough Webmasters complained loudly, Google announced they were making a change and that things should start righting themselves in a week.
Well, what sort of change would be required, given that this wasn't happening prior to the Big Daddy rollout? Usually, it's either a parameter entered into a data table or an actual set of commands stored in a program.
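To illustrate the difference, consider this contrived example; the "supplemental" threshold and fields are invented for clarity and stand in for whatever internal criteria Google actually uses:

```python
# Contrived illustration of the two kinds of change described above.
# The "supplemental" criteria here are invented, not Google's.

# Change type 1: a parameter entered into a data table. Flipping this
# value changes behavior without touching any program logic.
CONFIG = {"supplemental_link_threshold": 3}

def is_supplemental(inbound_links):
    return inbound_links < CONFIG["supplemental_link_threshold"]

# Change type 2: a set of commands stored in a program. Fixing this
# requires editing and redeploying code, not just updating a table.
def is_supplemental_v2(inbound_links, host_diversity):
    return inbound_links < 3 and host_diversity < 2

print(is_supplemental(2))        # True under the current table value
print(is_supplemental_v2(2, 5))  # False once the logic itself changes
```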
My gut feeling is that, while Google has not intentionally altered the methods it uses to capture, filter, organize, penalize, and search/process data, they seem to have altered or replaced some of the programs that implement those methods, and by doing so they have unintentionally (but within what they may deem an acceptable margin of error) altered the way the service performs.
So, put me down on the side of "Big Daddy seems to have changed the way Google does business despite their intentions to keep doing business the same way as before."
Sorry, but that's just the way it looks to me.
Posted 20 March 2006 - 03:01 PM
Also, no one has yet been able to answer how so many dead links can rank higher than a live site. And when we find sites that break virtually every SEO no-no in the book, how is it that they rank above all the other sites?
Posted 20 March 2006 - 04:00 PM
In my opinion, based strictly on ignorance, I seriously doubt that is the problem. The October 2005 Google Update took down a lot of sites. It was an algorithmic update that zeroed in on specific factors, and some innocent people reported lost rankings during the update. Most of them have come back, but not all.
Matt Cutts provides an indirect partial answer on his blog:
The following citation is edited for brevity, and the emphasis is mine, so it may not convey Matt's point with complete accuracy:
They may be ranking well in spite of violating the content guidelines. Many sites do actually stumble their way to the top of search results.
If you would prefer a more informed opinion, PM me with the URL of your site and three targeted keyword expressions.
Posted 21 March 2006 - 12:16 AM
I found nothing obviously wrong with the on-site content, so I did a quick link analysis. I feel he has weak backlinks. That is just an opinion, and not a terribly well-informed one, since I don't know how Google evaluates backlinks.
But looking back at my Notes on October 2005 Google Update, you'll see where Dan Thies wrote the following on November 11 (the update extended into the first part of November):
I eventually came to agree with part of Dan's conclusion, that "the trend is in favor of sites with a more diverse population of incoming links (links coming in from more unique web hosts)".
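That idea is easy to illustrate, if not to prove. Here is a quick sketch that scores a backlink profile by the ratio of unique linking hosts to total links. Counting hostnames is only a crude stand-in for counting unique web hosts (servers or IP blocks), but it shows the principle:

```python
# Quick sketch of the "link diversity" idea: the ratio of unique linking
# hosts to total backlinks. A crude proxy, for illustration only.
from urllib.parse import urlparse

def link_diversity(backlink_urls):
    if not backlink_urls:
        return 0.0
    hosts = {urlparse(u).netloc.lower() for u in backlink_urls}
    return len(hosts) / len(backlink_urls)

scraped_profile = ["http://scraper.example.net/p%d" % i for i in range(50)]
earned_profile = [
    "http://blog-a.example.org/review",
    "http://news-b.example.com/story",
    "http://forum-c.example.info/thread",
]
print(link_diversity(scraped_profile))  # 0.02 -- 50 links, all from one host
print(link_diversity(earned_profile))   # 1.0  -- every link from a new host
```

A site whose profile looks like `scraped_profile` has plenty of links but very little diversity, which fits Dan's observation about which sites fared worse.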
I incorporated this idea into my Google Reputation and Trusted Content paper, which I wrote in December.
What I feel, without being able to prove, is that Jeff's site may have suffered what I defined as "Rank Depression":
My definition for "Penalization" was:
If my analysis is correct, and if Jeff can do some good link building, he should recover his lost rankings (or at least improve significantly from where he is now). The types of links I suggested he go for are not the easy ones to get. He already has easy links (including, as he pointed out, a considerable number of unwanted scraper site links).
I don't believe Google is as easily deceived by the scraper sites as it was a year ago. I think they pretty much fixed that problem (with possibly a few exceptions) in their July 2005 and October 2005 updates. Most of the scraper sites vanished from the search results in those two updates.