The Blade Myth and 10Gb Ethernet
Over the last several years there has been a big push to adopt blade servers, the idea being that you can cram more CPU cores into less space and build a more efficient data center. Let's take a few minutes and compare IBM BladeCenter against 1U white label pizza boxes. The results may surprise you.
I picked IBM BladeCenter because it has the largest market share in the blade space and frankly is a great product! It is price competitive with other blades and I think gives a fair representation of the market.
The BladeCenter E chassis supports 14 blade bays in a high-density 9U package. At only 9U high, you can fit 4 chassis in a rack and still have 6U free for two top-of-rack switches and any other support hardware. This provides a maximum density of 56 dual-processor blades in a standard 42U rack.
Before we jump into the actual blades, let's look at the 1U chassis. A standard 42U rack supports 40 dual-processor 1U servers, leaving room for two 1U top-of-rack switches. This density advantage is one of the biggest selling points for the blade camp: blades support many more raw CPUs per rack.
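As a quick sanity check on the density numbers (using the chassis and bay counts above), two lines of shell arithmetic:

```shell
# Blade density: 4 chassis per 42U rack (4 x 9U = 36U, 6U left for switches),
# 14 dual-processor blades per chassis.
blades=$((4 * 14))
echo "blades per rack: $blades"        # 56

# 1U density: 40 servers plus two 1U top-of-rack switches fills the 42U rack.
servers=$((42 - 2))
echo "1U servers per rack: $servers"   # 40
```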
Our test systems are dual Intel Xeon E5-2680 8-core 2.7 GHz CPUs, with dual 10Gb Ethernet and 128 GB of RAM. The BladeCenter HS23 rings in at $11,713 (including 1/14th of the BladeCenter H chassis) and the white label box rings in at $5,882, based on the Supermicro X9DRH-7TF motherboard.
Both of our systems come with 10 Gb Ethernet, however the form factors are very different. On the IBM we are handed older SFP+ connectors, while on the white label box we have newer copper 10GBase-T ports that accept standard Cat-6 cable.
For aggregation, I chose Arista Networks 7050 series switches based on the Broadcom Trident+ ASIC. This switch provides 48 10 GbE ports plus 4 QSFP+ 40 GbE ports on a 1.28 Tbit/s fabric. The white label solution uses the DCS-7050T-64-F ($20,995) with 10GBase-T ports, and the BladeCenter requires the somewhat more expensive DCS-7050S-64-F ($24,995) with SFP+ ports.
Our white label box has 10GBase-T ports on the motherboard, but the BladeCenter requires two Ethernet Pass-Through Modules at $4,999 each plus $75 twinax cables. That brings the cost per 10 Gb Ethernet port, including the switch, to $328 for white label and $1,175 for IBM BladeCenter.
To compare the numbers in an easy way, I decided to look at cost per CPU core, including the dual 8-core CPUs and dual 10 Gb Ethernet ports. The BladeCenter comes in at $834 per core and the white label at $408.
But what about cost for space?
It turns out that the cost for space does not affect the per-core number that much. With BladeCenter at 896 cores per rack and a cost per rack of $750, the cost per core over two years is only $22.77, compared to $31.88 per core over two years for the 640-core white label rack. So even if we take twice the space for our 1U white label servers, we still come out way ahead using 1U pizza box servers, saving almost $7K per server!
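The white label figures above can be reproduced with a little shell arithmetic (the BladeCenter per-core price folds in chassis and pass-through overhead differently, so I only sanity-check the white label side here):

```shell
# Per-port switch cost: the $20,995 DCS-7050T-64-F spread across 64 10GbE
# ports (48 10GBase-T plus 4 QSFP+ that each split into 4 x 10GbE).
port=$(awk 'BEGIN { printf "%d", 20995 / 64 }')
echo "white label cost per 10GbE port: \$$port"   # $328

# Per-core cost: the $5,882 server plus two switch ports, over 16 cores.
core=$(awk 'BEGIN { printf "%d", (5882 + 2 * 328) / 16 }')
echo "white label cost per core: \$$core"         # $408
```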
Distributed storage in today’s clouds
How does this fit with today's cloud deployments? The interesting thing is that many cloud deployments today use servers for more than just compute resources. Modern 10Gb Ethernet controllers such as the Intel X540 can place incoming data directly into CPU cache, greatly reducing the processor and memory overhead of 10 Gb flows. This allows servers to also act as cloud storage using distributed file systems such as GlusterFS and Hadoop's HDFS.
Unlike blades, our 1U servers have plenty of room for disks. In fact, the Supermicro X9DRH-7TF motherboard already includes an LSI SAS2208 hardware RAID controller supporting 6 Gb/s SAS/SATA drives. With eight 2.5″ 1TB SATA drives per server, one could cheaply add 280TB of RAID5 storage per rack to the compute cloud.
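The 280TB figure works out as follows, assuming one RAID5 set of eight 1TB drives per server (RAID5 gives up one drive's worth of capacity to parity):

```shell
drives=8; tb_per_drive=1; servers=40
usable=$(( (drives - 1) * tb_per_drive ))   # 7TB usable per server after parity
rack=$(( usable * servers ))                # 40 servers per rack
echo "${rack}TB of RAID5 storage per rack"  # 280TB
```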
I have wanted to start playing with microcontrollers for a while now. I ended up selecting the Parallax Propeller chip because of its ease of use, and I liked its COG design with eight 32-bit cores working together.
My first test was connecting a 4×20 line LCD and a few DS18S20 1-wire temperature sensors to the Propeller chip. Everything was very easy to learn: the LCD was interfaced with no external components, and the 1-wire bus only required a 4.7K pull-up resistor.
InfiniBand is an often overlooked technology outside of the supercomputer/clustering space. I think that is a shame given some of the amazing aspects of this technology. InfiniBand is a serial connection with a raw full duplex data rate of 2.5 Gbit/s, known as 1x single data rate (SDR) mode. In addition to double data rate (DDR) and quad data rate (QDR) modes, links can be aggregated in units of 4 or 12 paths, yielding up to 120 Gbit/s in 12X QDR mode. In a day when server motherboards are just starting to see 10 Gbit/s Ethernet cards, the most common "low speed" InfiniBand option is the 10 Gbit/s 4X SDR card. InfiniBand uses remote direct memory access (RDMA) for data transfer, allowing data to be moved between hosts directly without spending CPU cycles. All of this happens at about 1/4th the port-to-port latency of 10 Gbit/s Ethernet!
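The link-rate math is simple: each lane runs at 2.5 Gbit/s raw in SDR, QDR quadruples that, and lanes aggregate in 4x or 12x groups. Working in tenths of a Gbit/s keeps shell integer arithmetic happy:

```shell
sdr_lane=25   # 2.5 Gbit/s per lane, times 10

echo "4X SDR:  $(( 4 * sdr_lane / 10 )) Gbit/s"       # 10 Gbit/s
echo "12X QDR: $(( 12 * 4 * sdr_lane / 10 )) Gbit/s"  # 120 Gbit/s
```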
The part I like best about InfiniBand is the price, especially on the used market. Let's take a look at a common setup on eBay. There are lots of switch options, but I like the TopSpin 120, also known as the Cisco 7000p. This is a 24 port 4X SDR 10 Gbit/s switch that runs $750 – $1,500 depending on the used source. There are even more options for InfiniBand cards; I tend to stick with Mellanox chipset based cards, which can be found for as little as $40 for PCI-X and around $125 for PCI Express. The only thing that is going to cost you more with InfiniBand is the cables, which will run you $20 – $50 each.
Applications that support native InfiniBand RDMA are going to get the best performance, but with IP over InfiniBand (IPoIB) you can use standard TCP/IP! With IPoIB your InfiniBand card shows up as a normal network interface, and you can run DHCP or a static IP on it.
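A minimal IPoIB bring-up on Linux looks something like this. This is a sketch: the module and interface names are the usual defaults, and the address is a made-up example.

```shell
# Load the IPoIB kernel module (the HCA driver itself, e.g. mlx4_ib or
# ib_mthca for older Mellanox cards, must already be loaded).
modprobe ib_ipoib

# The card now shows up as a normal network interface, typically ib0.
ip link set ib0 up

# Static addressing works just like Ethernet; 192.168.10.5/24 is an example.
ip addr add 192.168.10.5/24 dev ib0

# Or lease an address over DHCP instead:
# dhclient ib0
```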
I use a lot of power in my office, so much that the four 1500 VA UPS units I have only last me a few minutes. I needed something bigger, so I went on eBay and found two APC Matrix 5000 UPS units for $450 each including shipping. There was only one downside: they had no batteries, and new batteries would have cost me several thousand dollars.
The solution? I picked up eight marine batteries at the auto parts store and wired them up (yes, with fuses) into two 48 volt strings connected in parallel.
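The wiring math, assuming standard 12V batteries and the 48V bank the Matrix 5000 expects:

```shell
# Four 12V batteries in series per string gives 48V; two strings in
# parallel doubles the amp-hour capacity without changing the voltage.
echo "string voltage: $(( 4 * 12 ))V"       # 48V
echo "batteries needed: $(( 4 * 2 ))"       # 8
```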
Note: If you try this, you want to use marine or, better yet, deep-cycle batteries rather than car starting batteries. Car batteries are designed to deliver very high bursts of current and should only be discharged by about 5%. Their very thin plates would be destroyed after a few hundred discharges, rather than the thousands of cycles you would get from deep-cycle batteries.
P.S. Yes, I built a cover for it!
I love my BlinkMind video phone service, but one problem has been making calls when I am camping. I started by looking for a 3G access point that was already supported by OpenWRT, a Linux distribution for embedded devices. Since I run the Linksys WRT54G at home, the WRT54G3G was a logical choice.
Get Linux Running
Getting Linux running on the WRT54G3G can be a pain since its PCMCIA implementation does not work on the 2.6 kernel series. To make matters worse, Sierra Wireless only wrote and supports drivers for the 2.6 kernel. You can grab a copy of OpenWRT White Russian here for the BCM47xx chipset, then grab a hex editor (I used shed on Fedora) and change the 4 bytes to W3GA. Once that is done you should be able to fire up the unit and upgrade the firmware with the edited image. If you're lazy you can just download this.
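If you would rather patch the image from the command line than use a hex editor, dd can overwrite the 4 bytes in place. The offset and the stock board-ID string below are assumptions, not values from this build: locate the actual machine-ID bytes in your image first.

```shell
# Find the stock board-ID bytes in the image first, e.g. (assuming the
# stock ID is 'W54G', which you should verify against your own dump):
#   grep -abo 'W54G' openwrt-firmware.bin
OFFSET=0   # placeholder: replace with the offset grep reports

# Overwrite 4 bytes at that offset with W3GA without truncating the file.
printf 'W3GA' | dd of=openwrt-firmware.bin bs=1 seek="$OFFSET" conv=notrunc
```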
Add an LCD
I selected a 4 line × 20 character LCD display that could show the IP address, upload/download speed, and signal strength. Modern Device makes a nice little serial-to-LCD board that makes it VERY easy to connect an LCD to any serial device.
The WRT54G3G does not have an external serial port, but internally it has pads for a 3.3V serial port. If you wanted to drive a computer serial port you would need a level converter such as a MAX232; however, the Modern Device board accepts 3.3V without a problem. The only catch was finding the right pin. I broke out a logic probe and sent data out the port in pulses until I eventually found it.
I wanted the system to run off battery for at least a few hours, so I went with two 6V 6.5AH batteries in series rather than a single 12 volt battery; their size allowed them to lie flat on the bottom of my case. The LCD runs off 5 volts, and the easiest way to make that work with parts on hand was a +5 volt regulator that fit nicely on one of the four mounting screws for the router. Current draw is low enough that no heat sink is needed. I also added a 12 volt LCD voltage meter so I could tell when my batteries were running low.
3G is via a Sierra Wireless 881 PCMCIA 3G card with AT&T service. I quickly realized that the default signal strength was not going to cut it and an amplifier would be needed. After some digging I selected a Wilson Electronics 801101 3 watt cellular amp in conjunction with an ARC Wireless Solutions ARC-FR0803R30 antenna. With the antenna I was able to buy a cable to connect to the Sierra card, but it required a 6 foot FME female to FME female cable, which I replaced with a 1 inch FME female coupler.
Putting Parts Together
I wanted the display to show some useful information. The current script runs at startup and displays the IP address, average upload and download bandwidth in kb/s, and signal strength in dBm.
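A sketch of what such a script can look like: it samples /proc/net/dev one second apart to estimate throughput and writes lines to the serial LCD. The device path and interface name are assumptions for your setup, and signal strength (which would come from the Sierra card's control tty via AT+CSQ) is omitted here.

```shell
#!/bin/sh
# Assumptions: LCD on /dev/tts/1, 3G link on ppp0; adjust for your hardware.
LCD=/dev/tts/1
IF=ppp0

# Print "rx_bytes tx_bytes" for the interface. Assumes /proc/net/dev has a
# space after the interface colon; busybox may fuse them on busy counters.
counters() {
    awk -v ifc="$IF:" '$1 == ifc { print $2, $10 }' /proc/net/dev
}

while true; do
    set -- $(counters); rx1=$1; tx1=$2
    sleep 1
    set -- $(counters); rx2=$1; tx2=$2

    # Current IP, parsed from busybox ifconfig output.
    ip=$(ifconfig "$IF" 2>/dev/null | sed -n 's/.*inet addr:\([0-9.]*\).*/\1/p')

    echo "IP: ${ip:-none}"                      > "$LCD"
    echo "Down: $(( (rx2 - rx1) / 1024 )) kB/s" > "$LCD"
    echo "Up:   $(( (tx2 - tx1) / 1024 )) kB/s" > "$LCD"
done
```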
I have been looking for a way to move to removable 2.5″ boot/root disks on our servers. I am using all of the 3.5″ removable trays for storage, leaving only a slim floppy bay in the case.
After much searching I found the Thermaltake ST0002Z, a dual 2.5″ hot swap enclosure that mounts in a standard 3.5 inch floppy bay.
The two hot swap bays are actually two units bolted together, making it very easy to take the enclosure apart and get yourself a nice 2.5 inch SATA bay that fits in a slim floppy bay.