Dedicated Server: Separate IPs, One Box
Posted 02 October 2005 - 11:42 AM
One of the things you really need to consider before reaching a final decision is what you intend to do with DNS. I installed BIND on my last server, using cPanel, but perhaps others here can offer different possibilities?
Posted 02 October 2005 - 11:48 AM
DNS. Well. My ::ahem:: thought was to use the host's DNS. Would that not be possible? It would take that <huge> hassle away from me ...
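For what it's worth, my assumption is that using the host's DNS just means pointing the domain at their nameservers over at the registrar. A quick sanity check from any shell would look something like this (yourdomain.com and the nameserver names are stand-ins, obviously):

    # Ask the public DNS which nameservers the domain is delegated to
    dig +short NS yourdomain.com

    # If the registrar change took, the host's servers come back:
    # ns1.yourhost.com.
    # ns2.yourhost.com.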
Posted 02 October 2005 - 12:18 PM
I even asked them to install WordPress for me a week or two ago, and although it was a few days' wait for something like that, they did it for me. They don't support it or anything, but they don't seem to mind installing things like that.
Posted 03 October 2005 - 09:48 AM
50 is overkill, but since the anti-spamming community doesn't operate in a consistent manner, you have to have backup IP addresses, and getting them in different B-blocks (different /16 ranges) is a good idea (if you have a hosting provider who can accommodate you).
I've twice found myself in blacklisted B-blocks. The first time was because a (now hopefully former) anti-spam activist who made his blacklist publicly available engaged in economic blackmail. He blacklisted entire hosting providers and made no provision whatsoever for individual IP vetting. His stated goal was to drive all customers away from any hosting services that refused to comply with his demands. Unfortunately, many small ISPs subscribed to his blacklist, and people in obscure parts of the world started complaining that they never received email from me.
When I found out what was happening, I had no recourse but to change hosting providers.
Within a year, I found myself being blacklisted again, this time by ALL the anti-spammers because someone had fraudulently sold 32,000 IP addresses to an unsuspecting ISP, who in turn sold them to another ISP, who in turn....
The anti-spammers in general were sympathetic to the thousands of Web site operators who were being blacklisted, but they laid out a very plain-spoken and (in my opinion) well-stated case for the blacklisting. The IP addresses were indisputably stolen by someone who took advantage of the original owner's business failure.
After going through two massive IP blockings like that, however, I decided I would never again operate with fewer than 8 IP addresses across as many B-blocks as I could get. If one gets blacklisted and I run into the intransigence of an idiot like the first guy who tried to blackmail hosting providers, I hope I'll have the flexibility to switch over to one of my secondary IP addresses (at least temporarily -- it depends on how determined the blacklisters are).
The expense of a few spare IP addresses is trivial compared to the cost of being caught without one.
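Incidentally, if you ever want to check whether one of your addresses has landed on a given DNS blacklist, the usual convention is to reverse the octets and query the list's zone. A rough sketch, with 1.2.3.4 and bl.example.org standing in for a real address and a real blacklist zone:

    # Query the blacklist for address 1.2.3.4 (octets reversed)
    dig +short 4.3.2.1.bl.example.org A

    # An answer (typically 127.0.0.x) means the address is listed;
    # no answer means it is clear on that particular list.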
Posted 03 October 2005 - 09:59 AM
Are you saying the actual file is corrupted, or that due to server load it's just not being read right because it's not as "available" as it should be, or ...?
"File corruption" is a ubiquitous expression that programmers use to describe any sort of unexpected appearance in data.
If the issue is not, as Ron suggested, due to a misconfigured server, but is instead due to server load, then what happens is that the process logging data to the files ends up writing to the wrong data space. This sort of thing happens far more often than people realize, and the reasons for it are legion.
Which is not to say that it happens frequently. Just that it's not a rare, Earth-shaking event when it does happen.
If expression A is supposed to be written to file A-1 but is instead written to file B-1, which normally takes only expression B, that is a file corruption.
The cause could be a server misconfiguration, an electrical jolt, a bump against the computer as a tech moves something around, a bad disk sector corrupting the file allocation data, blah, blah, blah.
Files with complex internal structures (embedded internal indices) are more prone to corruption (usually because of poor data locking procedures in the software that manipulates the files) than straight text files (which is what server logs are), but even a straight text file can be corrupted when a server is so busy that it is unable to properly manage the file locking.
A "data lock" is an operation where a program asks the computer to restrict access to a region of the hard drive. Only the requesting application is allowed to change data in that region. Sometimes, the computer gets so busy it cannot issue the lock (the region may already be locked by other programs). If a program does not properly respond to the a lock failure, it may try to change the data anyway.
It happens. The more intricate a program becomes, the easier it is for something to go wrong.
Web server software, of course, has been tested on millions of computers over many years. It's unusual to see actual performance-caused corruption.
More than you asked for, but I just moved this weekend and am without Internet access from home for a while. I just needed to pontificate.
Posted 03 October 2005 - 10:45 AM
I'm thinking it's a "server too busy" issue because I find this in the error logs:
"server reached MaxClients setting, consider raising the MaxClients setting".
Though I haven't been able to tie that notation to any particular request, I'm going to take Apache at its word. And since I have a VPS instead of a full server, I can't raise the MaxClients setting myself -- which is one more clue that I need a beefier server setup.
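For reference, on a box where you do control httpd.conf, MaxClients lives with the rest of the prefork tuning knobs -- something like this, where the numbers are purely illustrative and not recommendations:

    <IfModule prefork.c>
        StartServers          5
        MinSpareServers       5
        MaxSpareServers      10
        MaxClients          150
        MaxRequestsPerChild 1000
    </IfModule>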
There aren't that many sites on the two VPSes, but they've been getting more and more traffic over time.
One thing I *could* do: the .conf settings currently tell Apache to check for .htaccess automatically. My understanding is that this means Apache looks for .htaccess in *every* directory along the path of *any* request -- and since most requests aren't for every file in one directory but for one file here, one file there, my guess is that this produces an awful lot of extra filesystem lookups. I hadn't wanted to manually enable .htaccess checks only in the directories that actually have/need them, as that's an extra hassle, but perhaps it's time.
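As I understand it, that change amounts to switching AllowOverride off at the top and re-enabling it per directory -- roughly like this, with the paths standing in for my actual sites:

    # Stop Apache from hunting for .htaccess on every request...
    <Directory />
        AllowOverride None
    </Directory>

    # ...then allow overrides only where a .htaccess actually lives
    <Directory /home/site/public_html/blog>
        AllowOverride All
    </Directory>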
Posted 03 October 2005 - 11:39 AM
I did eventually get to be the resource hog -- that was why I switched to dedicated servers. My last shared-hosting ISP sent me a polite message saying they would let me finish out my contract (it was six weeks away from renewal). I have heard that some other providers just pull the plug.
Unless you can look at the NETSTAT report (which only tells you who is connecting and sometimes to what), you have no real way of knowing who is bringing the server down.
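If you have shell access, one quick way to at least see who is connected is to tally established TCP connections by remote address:

    # Count established connections per remote IP, busiest first
    netstat -tn | awk '/ESTABLISHED/ {print $5}' | cut -d: -f1 |
        sort | uniq -c | sort -rn | head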
But you may have reached that threshold point where a dedicated server is the way to go.
You can get them for under $100 per month now. I pay for 24/7 technical support, so my contract is running at $132/month. But my server just crashed again, and we may need to hit them for the support (that would be the fourth time this year).
The maintenance fee has, unfortunately, paid for itself.
That's the price of generating too much traffic. Right now, I seriously doubt we could budget load-balancing or multiple server contracts, but we may have to go that route in another year or two.