Posted 20 September 2006 - 05:28 AM
Can anyone tell me why Wordtracker only includes Metacrawler and Dogpile search results in its database? There are many other meta search engines out there.....
Posted 20 September 2006 - 06:15 AM
There are lots of other meta engines out there.
Posted 20 September 2006 - 06:49 AM
Posted 21 September 2006 - 12:59 AM
At the time when they set the service up, there was probably a much better case to limit it to those two. With the manipulation that goes on now, I'm sure they're considering alternatives.
Posted 30 September 2006 - 12:46 AM
We get our data from metacrawlers, rather than the search engines themselves. After all, the metacrawlers contain the results from the search engines.
To be fair we do make an assumption. We assume that people will search for the same things regardless of whether they use a search engine like Yahoo (www.yahoo.com) or a metacrawler like Metacrawler. The argument can be raised that many new people who have just joined the internet wouldn't know a metacrawler if it jumped out at them. But then... it works both ways. There are many portals which use metacrawlers for searching the web (www.cnet.com for example).
We believe that a user (especially a new user) sees a search box as a search box; the technology underneath doesn't matter to them. Also remember that a metacrawler uses the major search engines for its results!
The other great thing about metacrawler results is that we do not have to contend with the skew from people using software robots checking keyword positions. All in all, it works out pretty well!
Customer Support Manager
<removed url >
Posted 04 October 2006 - 02:21 PM
This is an excellent question, and there are fundamental reasons why we use the Metacrawler data rather than the other data sources that we monitor and evaluate.
(i) We have analyzed a large number of providers, including those used by other keyword tools, and have made a conscious decision not to use their data. Many of these providers do not supply enough information to allow us to de-spam the data, so you can't filter out many of the robotic queries and have to rely on a different method of removing keyword spam: gathering as much data as you can and letting the spam fall to the bottom of the pile. We thought about adopting this model but decided against it because we wanted to keep control over the results we provide our users.
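To make the contrast concrete, here is a minimal sketch of the "volume" model described above, in which spam is never explicitly filtered and is simply expected to sink below the genuine, frequently repeated searches. The function name and sample queries are purely illustrative; this is not Wordtracker's actual code.

```python
from collections import Counter

def rank_by_volume(queries):
    # Count every raw query as-is. Genuine searches repeat across many
    # users, while one-off robotic junk ends up at the bottom of the
    # ranking -- no explicit de-spamming is ever applied.
    return Counter(queries).most_common()

# Hypothetical query log: real searches repeat, spam appears once.
log = ["cheap flights"] * 5 + ["cheap flights london"] * 3 + ["x9z casino bot"] * 1
ranking = rank_by_volume(log)
```

The weakness the post points out follows directly: the spam is still in the data, it is merely ranked low, so the counts themselves remain inflated.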
(ii) Keyword data coming from the Metacrawlers is about as clean as you can get. Metacrawlers are very useful for humans because they save time and bring back reliable results: but they are not very useful for the bots that make automated queries because these bots are interested in results from a single search engine.
So humans use the Metacrawlers and bots tend not to, which means the Metacrawler data represents real people searching.
(iii) We also protect our data sources by constantly monitoring and evaluating raw data, looking for spam and eliminating it from our results.
Any keyword must pass through a minimum of 10 different filters before it wins a place in the Wordtracker database. In addition, our developers constantly monitor our data, and any spam that survives is removed manually.
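The filter-chain idea above can be sketched as a list of predicates that a keyword must pass in full. The specific filters below are invented for illustration (Wordtracker's actual ten filters are not public); only the structure, where every filter must accept the keyword, reflects the description.

```python
import re

# Hypothetical filters for illustration -- the real ten are not public.
FILTERS = [
    lambda kw: 3 <= len(kw) <= 100,              # plausible query length
    lambda kw: not re.fullmatch(r"\d+", kw),     # not just a bare number
    lambda kw: not re.search(r"https?://", kw),  # pasted URLs aren't searches
    lambda kw: not re.search(r"(.)\1{4,}", kw),  # keyboard-mash repetition
]

def passes_all(keyword):
    # A keyword earns a place in the database only if every filter accepts it.
    return all(f(keyword) for f in FILTERS)
```

For example, `passes_all("online casinos")` would accept the phrase, while a pasted URL or a run of repeated characters would be rejected by one of the filters.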
(iv) Our data also eliminates hard-coded links. Suppose a webmaster wanted to generate some additional income from their site, say from 'online casinos'. They'd publish a live link with the text 'online casinos' that, when clicked, triggers a search on a PPC engine. This earns the webmaster a commission but does not represent a natural search, since the visitor is being led into the action by the website, and it inflates the keyword counts. This type of query does not exist inside the Metacrawler data.
(v) Duplicate results also do not exist inside the Metacrawler data. We are often asked why our keyword counts are lower than others': the main reason is that we exclude multiple counts. Many PPC search engines have partnership arrangements in place; one search engine might carry two or three different PPC engines on its results page. Because the PPC engines give you all the data in one lump, you can't filter out these duplicates, so data from two PPC engines is likely to contain a number of duplicate results.
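The de-duplication problem described here can be sketched as follows: the same underlying search shows up in several partner feeds, and counting each feed separately inflates the totals. The feeds, the (timestamp, query) matching key, and the sample data below are all assumptions made for illustration, not Wordtracker's actual scheme.

```python
def deduplicate(feeds):
    # Collapse the same search reported by multiple partner feeds into one
    # count. Here two records are treated as the same search if their
    # timestamp and query text match -- an illustrative key only.
    seen = set()
    unique = []
    for feed in feeds:
        for timestamp, query in feed:
            key = (timestamp, query)
            if key not in seen:
                seen.add(key)
                unique.append(query)
    return unique

# Two hypothetical PPC feeds that share one underlying search.
feed_a = [(1001, "online casinos"), (1002, "cheap flights")]
feed_b = [(1001, "online casinos"), (1003, "hotels paris")]
```

With these sample feeds, four records collapse to three unique searches, which is exactly why a de-duplicated count looks lower than a raw one.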
PPC engines give you the illusion of large volumes of data arriving from many engines but these highly inflated counts will simply undermine your keyword strategy.
(vi) Data quality is without doubt the most important thing: it doesn't matter what tools you build - the core data has to be right. We constantly analyze other data sources, and there is still nothing to beat the quality of the Metacrawler data.
We continue to speak to other data sources but we will only add a new source to our database if it betters the high quality of what we currently have.
It's always quality over quantity that really matters and that approach is what has brought us and the many thousands of webmasters who depend on us such consistent success.
I hope this answers your question and wish you every success.
Posted 06 October 2006 - 10:39 PM
I beg to differ... A user panel made up of search-toolbar users doing searches through their actual browsers is by far the cleaner source.
As we all know, even metacrawlers are very much subject to skew.
Posted 06 October 2006 - 11:19 PM
Any suggestions for those who can't pony up the extra $1000 for the premium database to minimize skew? I'm guessing comparative results might be in order? Any options other than WT?
Posted 07 October 2006 - 09:29 PM
Thanks for the discussion, but I beg to differ with both of you a little... either approach has its strengths and weaknesses.
Posted 08 October 2006 - 10:38 AM
What do you suggest for keyword identification? I mean, which software?
Posted 09 October 2006 - 06:18 AM
Posted 09 October 2006 - 07:07 AM
Posted 10 October 2006 - 07:16 PM