Server Lookup Tool


13 replies to this topic

#1 SEMMatt


    HR 4

  • Active Members
  • 132 posts

Posted 16 March 2009 - 05:54 PM

Hello,

Can anyone recommend a good, simple tool to see what server is hosting a particular domain like www.domain.com?

What should I be looking at when checking for SEO reasons (duplicate content across multiple sites)? Is it the MX record, or something else in the DNS?

Thanks,

#2 Jill


    Recovering SEO

  • Admin
  • 33,107 posts

Posted 16 March 2009 - 06:02 PM

Not really sure how knowing what server a domain is on helps SEO.

#3 SEMMatt


    HR 4

  • Active Members
  • 132 posts

Posted 16 March 2009 - 06:23 PM

QUOTE(Jill @ Mar 16 2009, 07:02 PM)
Not really sure how knowing what server a domain is on helps SEO.


Thanks for the quick reply, Jill. The crux of it is that I'm concerned about duplicate content. My client is a content provider for many smaller sites, all using the same content, and some of those sites might be on the same server under different domain names.

#4 Randy


    Convert Me!

  • Moderator
  • 17,540 posts

Posted 16 March 2009 - 08:00 PM

It won't matter what server they're on, Matt. The duplicate content will be enough to cause ranking problems for one site or the other.

As far as seeing what IP address a site resolves to, there are several tools. A simple one is tracert from the Windows command prompt; or if you're on a Linux system, dig will show you the IP address. Of course, one server can have several IP addresses attached to it.

MX records aren't what you're looking for; those are Mail Exchanger entries. You want the main "A" record for the hostname, which will give you the IP address.
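
If you'd rather script the lookup than use a command-line tool, here's a minimal Python sketch of the same A-record resolution (www.domain.com is just the placeholder hostname from the original question):

    import socket

    # Resolve the A record(s) for a hostname. One hostname can map to
    # several IP addresses, and several hostnames can share one address.
    hostname = "www.domain.com"  # placeholder from the question
    name, aliases, ips = socket.gethostbyname_ex(hostname)
    print(name, ips)

Note that gethostbyname_ex() goes through your system resolver, so it answers the same question dig does, minus the extra record detail.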

#5 Jill


    Recovering SEO

  • Admin
  • 33,107 posts

Posted 16 March 2009 - 09:09 PM

Right. Dupe content is dupe content, regardless of whether it's on the same server or not. Either way, it'll likely be filtered out.

#6 ogormask


    Shane O'Gorman

  • Active Members
  • 134 posts
  • Location:Eau Claire Wisconsin

Posted 16 March 2009 - 10:12 PM

I don't know if a whois lookup would help, but that's where you go for domain info. As pointed out, though, it has nothing to do with SEO.

#7 SEMMatt


    HR 4

  • Active Members
  • 132 posts

Posted 17 March 2009 - 07:01 AM

Thanks, folks.

However, the thing is that I see many of my client's partner sites (to whom he provides the content) being indexed with the duplicate content. Is it just a matter of time before all of these sites start to be de-indexed?

#8 Randy


    Convert Me!

  • Moderator
  • 17,540 posts

Posted 17 March 2009 - 07:22 AM

They won't necessarily be de-indexed at all, Matt, especially if the sites also have other content that isn't duplicated. More often than not, duplicate content will continue to be indexed.

This doesn't mean, however, that the sites won't be filtered when someone conducts a keyword search for which two or more of them might normally rank. Filtering happens well after the Indexing stage.

#9 SEMMatt


    HR 4

  • Active Members
  • 132 posts

Posted 17 March 2009 - 09:51 AM

QUOTE(Randy @ Mar 17 2009, 08:22 AM)
They won't necessarily be de-indexed at all, Matt, especially if the sites also have other content that isn't duplicated. More often than not, duplicate content will continue to be indexed.

This doesn't mean, however, that the sites won't be filtered when someone conducts a keyword search for which two or more of them might normally rank. Filtering happens well after the Indexing stage.



Thanks, Randy. Can you say a bit about what filtering is? That's the first time I've heard the term used in this context.

#10 Randy


    Convert Me!

  • Moderator
  • 17,540 posts

Posted 17 March 2009 - 11:55 AM

Well, duplicate content is such a large topic that one needs to look at the whole thing, with the possible causes and the search engines' responses. So this is going to be a longer answer than you likely expected. Hey, long answers are my specialty!

What I'm talking about with Filtering is what happens most often. It applies when there's no attempt at deception or trickery in the picture and you're dealing with normal duplication and/or content syndication. I get the sense you're basically syndicating content.

This accounts for the vast majority of cases, so Filtering is usually the response.

To get a better grasp on the different types of Duplicate Content and the responses, we first need to understand that there are different places in the process where dupes might be discovered and dealt with. It comes back to how the entire process works from the search engines' point of view. The process breaks down roughly as:
  • Crawling
  • Indexing
  • Scoring
  • Displaying the Results of a Search

Crawling is just that: when a spider comes out and grabs your page. The search engines today are sophisticated enough that they can detect both exact duplicate and near duplicate content during the Crawling stage. In certain situations they can even take action against duplicate content at this stage, such as in the case of a 302 Hijack. If they deal with the duplication at this point, it's going to look like (and be, for all intents and purposes) a penalty, because the page doing the 302 won't be Indexed. Duplicate content penalties are very, very, very rare. You have to try really hard to get one of those.
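
As a side note, if you ever want to see for yourself whether a URL is answering with a 302, a small Python sketch like this will show the raw status code without following the redirect (plain HTTP only; the URL here is just an example):

    import http.client
    from urllib.parse import urlparse

    def redirect_status(url):
        """Request a URL without following redirects; return the raw
        HTTP status code and the Location header, if any."""
        parts = urlparse(url)
        conn = http.client.HTTPConnection(parts.netloc, timeout=10)
        conn.request("HEAD", parts.path or "/")
        resp = conn.getresponse()
        return resp.status, resp.getheader("Location")

    # A 302 would print something like (302, 'http://other-site.com/page')
    print(redirect_status("http://example.com/some-page"))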

Indexing is where a page gets included in the search engine's index. Again, the engines today can check for duplicates at this point, getting into even finer detail in their examination: not only detecting exact duplication, but also near duplication and duplication of sections of a page. Duplicated sections are basically what you get when someone cites or quotes a portion of another document.

When they see duplication here, they can either ding the page, which again is going to look like a penalty (very rare), or they can simply mark it in their database as a duplicate, near duplicate, or citation-type duplication. Most times they do nothing more than mark it as a duplicate, taking no further action at this point. They only take a more proactive approach if they detect something nefarious, where someone is obviously trying to deceive the search engines. Very few cases fall into this category.
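
To make "near duplicate" a bit more concrete: one classic way to measure it is to compare overlapping word shingles from two pages. The engines' actual methods are far more sophisticated and not public, but a toy Python sketch of the idea looks like this:

    def shingles(text, k=4):
        """Break text into the set of overlapping k-word shingles."""
        words = text.lower().split()
        return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

    def jaccard(a, b):
        """Overlap between two shingle sets; 1.0 means identical."""
        return len(a & b) / len(a | b) if (a | b) else 0.0

    page_a = shingles("the quick brown fox jumps over the lazy dog today")
    page_b = shingles("the quick brown fox jumps over the lazy dog instead")
    print(jaccard(page_a, page_b))  # a high ratio flags near duplicates

Exact duplicates score 1.0, quoted sections show up as a block of shared shingles, and unrelated pages score near zero.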

Scoring is where each page receives its score, and the score is the determining factor in how well a page ranks for a given phrase or concept. At this point the engines look at everything they know about a page: what words are there, how many links point to it, how the page is structured, what anchor text is used in links to the page, and so on. All of the 200 or so factors that go into why a page ranks where it does.

The scoring algorithm contains both positive elements, those that help a page, and potential negative elements that might deflate the overall score. If two pages are duplicates, and there's no obvious pointer the search engines can use to decide which is the original version (which is often the case), it basically comes down to the score of each page when determining which will rank better for a given search.

The last step is Displaying the Results when someone searches on Google et al. This is where Filtering kicks in when necessary. Filtering is used when duplication has been detected in one of the earlier steps of the process, but no determination has been made that the duplication is an attempt at deception. So basically everything is cool, but there is some normal duplication the engines have to account for.

What they'll typically do is display the one or two duplicates that scored best for the search phrase entered, then show other results that don't contain the duplication, even if those other results don't score quite as well as the remaining duplicates.

The goal of the engines is to show their users a wide variety of possible choices, so they don't want to show the exact same document housed at multiple locations. When duplication exists, they choose a couple of the duplicates and several other options.
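
If it helps to picture that last step, display-time filtering amounts to something like this toy Python sketch (the URLs, scores, and duplicate-group labels are all invented):

    def filter_results(ranked, max_per_group=2):
        """Walk results in score order, keeping at most max_per_group
        entries from each duplicate group; ungrouped pages pass through."""
        shown, kept = {}, []
        for url, score, group in ranked:
            if group is None or shown.get(group, 0) < max_per_group:
                kept.append(url)
                shown[group] = shown.get(group, 0) + 1
        return kept

    ranked = [
        ("site-a.com/page", 0.92, "dup-1"),
        ("site-b.com/page", 0.90, "dup-1"),
        ("site-c.com/other", 0.85, None),
        ("site-d.com/page", 0.84, "dup-1"),  # third copy: filtered out
    ]
    print(filter_results(ranked))  # site-d never appears, despite its score

Nothing is penalized here; the third copy is simply held back from that one results page.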

#11 SEMMatt


    HR 4

  • Active Members
  • 132 posts

Posted 19 March 2009 - 12:03 PM

Thank you, Randy.

I suppose the search engines don't care whether the page is hosted on the main feeder site or on all of the child sites, which just pull that content through when the page is crawled. Am I correct in assuming that the SERPs only care about the final results and not where the content is drawn from?

Best,
Matt

#12 Randy


    Convert Me!

  • Moderator
  • 17,540 posts

Posted 19 March 2009 - 12:20 PM

You're correct, Matt.

The only time they care about the origination point of the data is if they suspect something nefarious. Otherwise they couldn't care less.

You are, however, also correct to be somewhat concerned when syndicating content like this. The bottom line is that you need to be very clear about what you're doing, and very clear that it's simple Content Syndication and nothing more. And realize that, nine times out of ten, the engines aren't going to list more than two instances.

#13 zephyr


    HR 3

  • Active Members
  • 62 posts
  • Location:Connecticut

Posted 19 March 2009 - 09:07 PM

ID Serve, by good ol' Steve Gibson of Gibson Research, is a simple one.

#14 Gerry White


    HR 2

  • Active Members
  • 48 posts
  • Location:UK

Posted 23 March 2009 - 06:32 AM

Not sure if this helps, but two things:

1. Look at the hostnames in Google Analytics, if you're talking about the same site resolving to multiple addresses.

2. Install SEO for Firefox and run SEO Xray, then click on the link that does an IP lookup and search. Example:

search.live.com/results.aspx?q=ip%3A146.101.138.68

There is also another FF extension, called Domain Details. I often find these things essential when looking at why a client's (primary) domain isn't ranking at all: it can frequently be because the content and links are diluted amongst loads of other hostnames (domain names)!
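
If you want to check a whole batch of hostnames at once, here's a minimal Python sketch that groups them by the IP address they resolve to (the domain names are invented):

    import socket

    # Invented example list of a client's hostnames to check.
    domains = ["site-a.example", "site-b.example", "site-c.example"]

    by_ip = {}
    for d in domains:
        try:
            ip = socket.gethostbyname(d)
        except socket.gaierror:
            ip = "unresolved"
        by_ip.setdefault(ip, []).append(d)

    for ip, hosts in by_ip.items():
        print(ip, "->", ", ".join(hosts))

Any group with more than one hostname is a candidate for the kind of dilution described above.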



