April 22, 2011
This is the first post in a series about building out an entry-level infrastructure footprint. I will revisit each of these topics later from a mid-level build-out perspective.
When looking for colocation there are a couple of things to keep in mind:
- The Data Center
- The Space in the Data Center
The Data Center
The data center is like a living organism with many parts that make it whole. You will want to tour wherever you are moving into to make sure things are the way they should be. You can go through a reseller to get a good deal, but that is no reason to skip the tour. The reseller should have a large enough footprint at the data center to get you the help you might need; in fact, while on the tour, have them show you the space the reseller has already leased.
Location: You want your data center where you can get to it, or where someone you trust can. Many data centers offer “remote hands,” but they are expensive, so it’s generally more advantageous to go with a data center near you. The location will also determine the options available for your IP Transit (bandwidth), so get a list of everyone who “peers” at the location to be sure you have plenty of options. I’d also suggest calling some of them in advance and asking their opinion of the data center. For example, one data center in LA near some train tracks is notorious for having trouble getting bandwidth into the building: there is only one way in, because you can’t go under or over the railway tracks that surround it. An IP Transit vendor warned us in advance that they charge a premium there and never plan to run more bandwidth into the location because of the hassle.
Cooling: Keeping your equipment at the right temperature is extremely important. Make sure they have redundant cooling, that walking down the “hot” aisle feels like a wind tunnel, and that the instant you walk onto the floor you think, “crap, I should have brought a jacket.” If you aren’t freezing before your equipment is in there, you will be boiling once it is. Hot servers mean the fans have to work harder, you will use more power (which you are paying for), and your equipment will die sooner.
Backup Power: First, search the internet and make sure you don’t see any articles about blackouts or brownouts in the area, even during hot summer days. Your data center should be in an area exempt from such things. Also check for any articles about power issues with the building itself. On the tour, make sure they have massive generators for backup power in case of an outage. Ask where the power comes in from the street and what their emergency plan looks like if the transformer blows out; they should have a clear plan that gets it replaced in hours, not weeks.
Security: What measures does the building have to keep people out of the building, out of the elevators, off your floor, and from walking out with your equipment? The standard should be 24-hour security, no entrance to the elevator bank without a key card, no access to the data center floor without a key card, a lock on your cabinet or cage keyed specifically for you, and a policy that no equipment enters or leaves the building without authorization forms from an account manager. These are pains for you, but also for people who might want to take your stuff. It is quite easy to rack up hundreds of thousands or even millions of dollars in equipment in a data center, and you don’t want anyone taking it! Plus it will make your insurance rate go down… yes, you need insurance on everything in your data center.
The Space in the Data Center
You will likely want to start out with just a single cabinet. Try not to go with a shared cabinet: it generally saves you little money and creates huge security holes, and the last thing you need to find out is that your site is down because someone accidentally unplugged your power cord (it might still happen, but at least it will be your guy!). A standard cabinet will provide you with 42U. Unfortunately, this does not mean you can cram 42 1U servers in there; you won’t have nearly enough power density to do that, and you will likely have far too much heat. So don’t start stacking equipment directly on top of itself, give it room to breathe, and try to make it look nice with cables neatly maintained. At some point your CEO is going to want to take a look at it and pretend to geek out with you.
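To see why a full 42U of 1U servers is unrealistic, a quick back-of-the-envelope sketch helps. The per-server wattage and circuit figures below are illustrative assumptions, not measurements from any particular data center:

```python
# Rough sketch: how many 1U servers a 42U cabinet can hold when power,
# not rack units, is the limit. All figures are illustrative assumptions.

RACK_UNITS = 42
CIRCUIT_AMPS = 20          # a common entry-level circuit
VOLTS = 120                # typical cabinet voltage
SAFE_FRACTION = 0.8        # data centers enforce an 80% ceiling
SERVER_WATTS = 350         # assumed draw of one loaded 1U server

usable_watts = CIRCUIT_AMPS * VOLTS * SAFE_FRACTION   # 1920 W
servers_by_power = int(usable_watts // SERVER_WATTS)
servers = min(RACK_UNITS, servers_by_power)

print(servers)  # → 5, far fewer than 42
```

Even with generous assumptions, the power budget runs out long before the rack units do, which is why cabinets fill up with empty space.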
You should also try to get a cabinet in an area with lots of open cabinets near you so when you outgrow the one cabinet you can just daisy chain into the next one before eventually needing a cage.
When you lease a cabinet it will come with a set amount of power. It might be possible to upgrade that amount later, but generally the data center makes it cost prohibitive, since they want to lease more space, not sell more power. Also, you cannot exceed 80% of the allotted power without the data center threatening to cut you off (and they will). The most common offering is 20A of power (sometimes sold as 20A+20A redundant, but you still can’t go over the 20A limit). That equates to only 16A of usable power and will not be enough as you grow. Try to get 40A of power, and if you can, work it so you get 40A+40A, meaning 40A available on two separate circuits. This lets you plug redundant power supplies into the separate circuits, so if one blows out you are still running. The caveat is that you can never exceed the power of a single circuit, so you actually need to run at 40% on each circuit (16A), which combined gives you 32A of actual usable power. These numbers are all based on 120V, the most common voltage used in cabinets. If they offer a 208V option, go for it! You can halve your amperage needs and actually save on power consumption.
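The arithmetic above can be condensed into a couple of helpers. This is a minimal sketch of the 80% rule and the A+B redundancy constraint; the function names are my own, not any standard API:

```python
def cabinet_power(circuit_amps, volts=120, safe_fraction=0.8):
    """Usable (amps, watts) for a single feed under the 80% rule."""
    usable_amps = circuit_amps * safe_fraction
    return usable_amps, usable_amps * volts

def redundant_pair(circuit_amps, volts=120, safe_fraction=0.8):
    """For an A+B pair, the TOTAL draw must fit within the safe limit of
    ONE circuit, so a surviving side can carry everything. Each side
    should therefore run at half the safe limit."""
    total_amps = circuit_amps * safe_fraction   # e.g. 32A total on 40A+40A
    per_side = total_amps / 2                   # e.g. 16A on each circuit
    return total_amps, per_side

print(cabinet_power(20))        # → (16.0, 1920.0): a 20A feed nets 16A
print(redundant_pair(40))       # → (32.0, 16.0): 40A+40A nets 32A usable
print(cabinet_power(20, volts=208))  # → (16.0, 3328.0): 208V buys more watts
```

The 208V call shows the payoff mentioned above: the same 16 usable amps deliver 3328W instead of 1920W.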
IP Transit: This is your link to the outside world. Many resellers of cabinet space will package in IP Transit, but more often than not it’s not worth your while. Not all IP Transit is the same! As you investigate, you’ll find Tier 1 network providers, non-Tier 1 providers, and then “blends” that companies resell. See which category the bundled bandwidth falls into before going with it rather than getting your own.
Blends: Several IP Transit providers used in combination to provide a single transport. In theory this gives you higher uptime, since there are multiple paths if one line goes down. In reality, the more common occurrence is failing routes, high latency, and dropped packets. These often go unnoticed until they are really bad, and a blend rarely helps in those cases. You can have important routes monitored through your blend (like the route to your credit card processor) to help identify issues faster. What matters most is which providers are in the blend: if they aren’t Tier 1 network providers, it’s pointless, as eventually you will have problems. See below for Tier 1 providers. If you can find a good blend with all Tier 1 providers, then it’s probably a good fit.
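The route-monitoring idea above can be sketched as a simple threshold check over sampled round-trip times. The thresholds, sample data, and function name here are hypothetical; a real setup would feed this from ping/mtr probes run on a schedule:

```python
# Sketch: flag a degraded route from sampled round-trip times (ms).
# A lost probe is recorded as None. Thresholds are illustrative.

def route_degraded(rtt_samples_ms, baseline_ms, factor=2.0, loss_limit=0.05):
    """True if packet loss exceeds loss_limit or average latency
    exceeds factor * baseline_ms."""
    lost = sum(1 for s in rtt_samples_ms if s is None)
    loss_rate = lost / len(rtt_samples_ms)
    replies = [s for s in rtt_samples_ms if s is not None]
    avg = sum(replies) / len(replies) if replies else float("inf")
    return loss_rate > loss_limit or avg > baseline_ms * factor

# e.g. a (hypothetical) route to a credit card processor, 40 ms baseline
samples = [41, 39, None, 44, 180, 175, None, 190]
print(route_degraded(samples, baseline_ms=40))  # → True: loss and high latency
```

Catching the slide from 40 ms to 180 ms early is exactly the kind of slow degradation that otherwise "goes unnoticed until it is really bad."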
Tier 1 Network Providers: These are providers that can reach every other network on the internet without using someone else’s lines. Rather than listing some here, Wikipedia has a nice list here; you can include their “other major networks” in your search, as they are closer to Tier 1 than anything else. I would highly suggest using one of the providers from this list; it won’t cost that much more than going with someone not on it. Being with a Tier 1 provider by no means makes you failure-proof, but they have much higher uptime than any other network. And at least you will know that if they are down, huge portions of the internet are down with you!
So here is a bunch of information on choosing your colocation. The best advice I can give is ask around your tech community and get referrals. Also do your due diligence on the internet to check reviews. I’ll be back soon with the next installment of this series on Firewalls.