Data center design considerations – build vs. lease?

Running out of capacity and need to decide whether to upgrade, build new, or lease? Keep it simple. Having just gone through this exercise ourselves, here are some things to consider if you are facing the same challenge.

Consider hiring a professional data center design engineer – doing so will allow you to focus on the big picture, and help you avoid making any major mistakes.

With or without a professional, you will need to start by documenting/determining two factors:

  • Square footage: rack space to house current and future equipment. This should include rack types and sizes, quantity, etc.
  • Power load: current load, plus load from future equipment. This includes documenting current power types, e.g. single-phase or three-phase, 110 V or 220 V, amperage, and plug types.
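As a worked example of the power math, here is a quick sketch; the 220 V / 30 A circuit is a made-up placeholder, not our actual load:

```shell
# Back-of-the-envelope capacity per circuit: volts x amps = VA.
# For planning purposes, treat VA as watts. Figures are placeholders.
volts=220
amps=30
va=$(( volts * amps ))          # 6600 VA on a 220 V / 30 A circuit
# Size for 80% continuous load, per common electrical-code practice:
usable=$(( va * 80 / 100 ))     # 5280 VA usable
echo "circuit capacity: ${va} VA, plan for: ${usable} VA"
```

Repeat per circuit and sum, and you have the total load figure the RFP (and any build design) will need.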

With this information in hand, you can begin to shop around for leased (colocation or managed) solutions. Find at least three vendors and issue an RFP, and be prepared to answer questions. The range of options will vary from vendor to vendor, e.g. shared space vs. dedicated, bandwidth needs, special security requirements, and so on. Many colocation vendors offer options that could prove valuable to some organizations, such as managed firewalls and remote hands.

We made the decision to build. We hired a data center design professional who had helped us on several small projects in the past, and this turned out to be a great decision. Why? Because going the build route will require you to project-manage a construction effort involving contractors, architects, engineers, electricians, plumbers, HVAC techs, and so on. This is not to discourage the build route; there are many reasons why building your own data center is justified. In our case, CAPEX vs. OPEX was a big part of the decision, and we already owned the property and had the space.

Build/upgrade considerations:
The same space and power requirements used in the RFP for the leased option are also the starting point for building or upgrading your own data center. From the space and power requirements, you can determine the cooling, uninterruptible power (UPS), and backup (generator) power requirements. There will also be ancillary systems, e.g. security, fire detection/suppression, and the management systems required to monitor and administer it all.

  • Lower data center power consumption and increase cooling efficiency by grouping together equipment with similar heat load densities and temperature requirements. This allows cooling systems to be controlled to the least energy-intensive set points for each location.
  • Implement effective air management to minimize or eliminate mixing air between the cold and hot air sections. This includes configuration of equipment’s air intake and heat exhaust paths, location of air supply and air return, and the overall airflow patterns of the room. Benefits include reduced operating costs, increased IT density, and reduced heat-related processing interruptions or failures.
  • Under-floor and overhead cable management is important to minimize obstructions within the cooling air pattern.
  • Prevent mixing of hot and cold air by implementing a hot aisle/cold aisle configuration. Create barriers and seal openings to eliminate air recirculation. Supply cold air exclusively to cold aisles and pull hot return air only from hot aisles.
  • Higher return air temperatures extend the operating hours of air economizers.
  • Choose an enclosure configuration that supports your cooling method.
  • If using raised-floor cooling, carefully consider the location of perforated floor tiles to optimize air flow.
  • Managing a uniform static pressure in the raised floor by careful placement of the A/C equipment allows for even air distribution to the IT equipment.
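To size the cooling against the power figure, the usual rule of thumb is that nearly every watt delivered to IT gear comes back out as heat. A quick sketch of the conversion (the 50 kW load is a placeholder, not our actual number):

```shell
# Convert kW of IT load to BTU/hr and tons of cooling:
# 1 kW ~ 3412 BTU/hr, and 1 ton of cooling ~ 12000 BTU/hr.
load_kw=50                           # placeholder load
btu_hr=$(( load_kw * 3412 ))         # 170600 BTU/hr
tons=$(( (btu_hr + 11999) / 12000 )) # round up to whole tons
echo "${load_kw} kW -> ${btu_hr} BTU/hr -> ~${tons} tons of cooling"
```

Add headroom on top of that for growth and for the UPS and lighting loads, which also end up as heat in the room.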

Finally, having solid documentation of our existing infrastructure was a tremendous help in planning and executing the device (server, comms, storage, etc.) migration. We use a product called Device42 for our data center infrastructure management. We are now in the process of implementing the data center power management module (an add-on to the Device42 core product), which will give us visibility into power utilization and let us start optimizing power consumption in our new data center.

Happy trails!

 

WP Super Cache vs. W3 Total Cache

This post just reflects my findings with the WP Super Cache and W3 Total Cache plugins for WordPress site caching.

There has been much debate on this topic on the internet, as covered in the comments here: http://blog.tigertech.net/posts/use-wp-super-cache/

We run a few different WordPress sites for some of our content. None of these sites use a CDN or get very high traffic, but we used Apache Benchmark (ab) to simulate large loads and test which plugin worked for us. Surprisingly, the two plugins cater to different use cases, as discussed below.

For post-based sites, which make up most of the largest WordPress installations, W3 Total Cache fares very well.

An Apache Benchmark test with 10,000 requests at 500 concurrency barely breaks a sweat on a low-powered VPS (1 vCPU, 512 MB RAM).

However, for a site using only WordPress “pages” and no “posts”, we could not get W3 Total Cache to perform well. Even 1,000 requests at 100 concurrency would push the CPU used by the Apache forks to 20-30%, and system load would climb into double digits.

So for the page-based site, we tested WP Super Cache, and it fared very well: the load barely moved when hit with 10,000 requests at 500 concurrency.
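For reference, our test runs were along these lines; blog.example.com here is a placeholder for one of our sites:

```shell
# 10000 requests at 500 concurrency, as in the tests above.
# ab ships with the apache2-utils package.
ab -n 10000 -c 500 http://blog.example.com/ > ab-results.txt

# Pull the headline figure out of ab's report:
awk -F: '/Requests per second/ {gsub(/^ +/, "", $2); print $2}' ab-results.txt
```

Watch CPU and load average on the server (top, uptime) while the run is in flight; that is where the two plugins diverged for us.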

Both servers run Apache 2 as the front-end web server.

So, tl;dr: W3 Total Cache is better for most scenarios (and hence has higher user ratings), but if you are running a purely page-based WordPress site, WP Super Cache is your answer.

 

Why can’t you just increase my mailbox size? Storage is so cheap

I just had to deal with a passive-aggressive exec who wants more capacity for his mailbox. After I politely told him that he has to comply with the corporate policy and that we don’t have the budget this year to buy extra storage, this is the reply I got:

How expensive is storage really?  I will go buy 1TB storage from staples.

Now, as system administrators, we get it: we understand the frustration of end users stuck with .pst files who want to use their iPad and still keep all their email handy. But these are business decisions not to spend money on archiving solutions or more storage, not my decisions.

Regardless, I started thinking about what a good answer for him would be. I haven’t replied yet, but I started noting down the differences between business-class storage and home storage. These are in no particular order.

1. Backups. All enterprise storage is centrally backed up, which includes the cost of backup licenses and the backup server. Every extra GB added to a server means extra tapes in the backup rotation, a longer backup window, and higher offsite backup storage costs.

2. Speed. Enterprise storage systems (SANs) are optimized for speed (other points come later). A 1 TB external drive vs. an array of 15k RPM Fibre Channel drives front-ended by read and write caches is a night-and-day difference in speed.

3. Reliability. Hard drives fail; that is a hard fact. Enterprise drives have a lower failure rate, and they are configured in RAID arrays, which means that if one or two drives fail, your systems keep running without a glitch.

4. Replacement. Home drives fail, and if you are lucky you might get a replacement from the manufacturer after spending hours on the phone with tech support. With enterprise SAN systems, a drive failure usually notifies the vendor automatically, and they come replace the drive for you rather quickly.

5. System vs. drive. As discussed under speed and reliability above, it is a whole system, not just a single drive, that makes the storage high-performing and highly reliable. Enterprises buy these systems because business operations critically depend on them, and they pay through the nose for them.

6. Staff. Specialized systems need specialized staff.
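To put rough numbers on the backup point above, this is the kind of math I would show him. Every figure here is an invented placeholder, not our actual pricing:

```shell
# Illustrative only: what 1 TB "costs" once backup overhead is counted.
# All prices below are made-up placeholders.
disk_cost=100              # 1 TB of enterprise disk, $
tapes_added=4              # extra tapes the new TB adds to the rotation
tape_cost=30               # media + offsite handling per tape, $
backup_cost=$(( tapes_added * tape_cost ))
total=$(( disk_cost + backup_cost ))
echo "1 TB provisioned: \$${total}, not the price of a drive from Staples"
```

And that is before the speed, reliability, replacement, and staffing costs in points 2 through 6 even enter the picture.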

I am pretty sure I am missing other important points, but I just wanted to share these.

One more reason to hate EMC

We have a love/hate relationship with EMC, their support, and their products. We use EMC for a lot of things, including CLARiiON storage. We recently started a project to upgrade our Red Hat OS, which meant upgrading PowerPath on the Linux hosts.

So I did what I am supposed to do: go to Powerlink, get the support matrix, and download the supported PowerPath release.

One thing I couldn’t find was an upgrade guide for the case where you are upgrading the OS.

Doing my due diligence, I called support, and these are the upgrade steps according to them:

1. Un-install PowerPath

2. Upgrade the OS

3. Re-install the newer PowerPath

So there is no way to upgrade PowerPath in place when you upgrade the OS. I don’t like that, but I can live with it. And here comes the kicker:

According to the support guy, people don’t know these steps, and this is one of the issues they get called about most often. Duh! It is not documented on your site; how would people know? And if this is one of the most-called-about issues, why not put a warning on the download page?

This lack of attention to customer support is what makes me hate EMC every time.

But there you go: use the steps above if you are upgrading PowerPath along with your OS.
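For the record, on a RHEL host the steps look roughly like this. The package name EMCpower.LINUX is typical of PowerPath 5.x but is an assumption here; verify yours with the query first, and get the exact supported RPM from the support matrix:

```
rpm -qa | grep -i emcpower                  # note the installed version
rpm -e EMCpower.LINUX                       # 1. un-install PowerPath
# 2. upgrade the OS and reboot (distribution-specific)
rpm -ivh EMCpower.LINUX-<new-version>.rpm   # 3. install the supported build
```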

Spacewalk error: Cannot retrieve repository metadata (repomd.xml) for repository

I created a new channel on the Spacewalk server, and after using rhnpush to add the packages, the client wasn’t able to download any packages; I kept getting the following:

Error: Cannot retrieve repository metadata (repomd.xml) for repository: <channel name>

I made sure time was in sync and tried yum clean all on the client; no help.

Then on the spacewalk server in the channel details, this is what I saw:

Last Repo Build: none
Repo Cache Status: none

So it had not built the repo yet.

After searching Google and multiple other tries, I got the following hint:

/etc/init.d/taskomatic status
stopped.

taskomatic was not running on the Spacewalk server, and hence the repo was never built. Starting the service fixed this within a few minutes:

/etc/init.d/taskomatic start

 

Cisco CatOS and BIG-IP LACP setup

Going through an F5 setup with legacy CatOS switches and trying to set up LACP, I found only a couple of commands on forums and in the Cisco docs:

        CatOSSwitch (enable) set channelprotocol lacp 2
        Mod 2 is set to LACP protocol.
        CatOSSwitch (enable) set port lacp-channel 1/1,2/1
        Port(s) 1/1,2/1 are assigned to admin key 56

But this is not enough; one more step is needed for the setup to work.

It is quite obvious, but there is a catch.

All the ports in the port-channel need to have the same native VLAN. And here is the catch: since this is an 802.1Q VLAN trunk, you can’t use an 802.1Q-tagged VLAN as the native VLAN. As soon as you set a native VLAN that is not part of the 802.1Q tagging, you will see all the interfaces in the LACP trunk on the BIG-IP.
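On CatOS, the port VLAN assigned with set vlan becomes the native VLAN of a dot1q trunk, so the fix is along these lines; VLAN 999 is an assumption here, standing in for a spare VLAN that is not carried tagged on the trunk:

```
CatOSSwitch (enable) set vlan 999 1/1,2/1
```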

Setting up the BIG-IP LTM for the first time

Got the shiny new BIG-IP 1600 LTM from F5. These are just my box-setup notes; I will write more as I set up the box to do the actual work.

The IP settings can presumably be done from the LCD menu as well, but I used the console connection.

Step A. Console Connection

To configure a serial terminal console for the BIG-IP system, perform the following procedure:

  1. Connect the null modem cable to the console port on the BIG-IP system.
  2. Connect the null modem cable to a serial port on the management system with the terminal emulator.
  3. Configure the serial terminal emulator settings according to the following table:
        Setting          Value
        Bits per second  19200
        Data bits        8
        Parity           None
        Stop bits        1
        Flow control     None
The default root password is default, and the default management IP address is 192.168.1.245.
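For reference, from a Linux admin box those settings map to something like the following; the device path is an assumption, so check dmesg for your serial adapter (8 data bits, no parity, 1 stop bit are the defaults):

```
screen /dev/ttyUSB0 19200
```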

Step B. Configure IP settings.


After logging in on the console, type config.

That opens the IP settings menu, which is easy to follow.

Step C: HTTPS access and licensing:

https://<IP>

At the login prompt, type admin for the user name, and admin for the password.

The Licensing screen of the Configuration utility opens.

Do the network config first: set the hostname and change the password.

Then license activation: it takes you to the F5 site, where you enter the dossier….

That is it, you are ready to start configuring and load balancing.

I will post more as I work on setting this up.

Thanks for reading.
