Nathan Stratton’s Homepage

Infiniband

by Nathan Stratton on Jul. 30, 2009, under Hardware

Infiniband is an often overlooked technology outside of the supercomputer / clustering space. I think that is a shame, given some of the amazing aspects of this technology. Infiniband is a serial link with a raw full-duplex data rate of 2.5 Gbit/s, known as 1X single data rate (SDR) mode. In addition to double data rate (DDR) and quad data rate (QDR) modes, links can be aggregated in widths of 4 or 12 lanes, yielding up to 120 Gbit/s in 12X QDR mode. At a time when server motherboards are just starting to see 10 Gbit/s Ethernet cards, the most common “low speed” Infiniband option is a 10 Gbit/s 4X SDR card. Infiniband uses remote direct memory access (RDMA) for data transfer, allowing data to be moved directly between hosts’ memory without burning CPU cycles on copies. All of this happens at roughly one quarter of the port-to-port latency of 10 Gbit/s Ethernet!
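
To put those numbers in one place, here is a small, purely illustrative C sketch that prints the raw signaling rate for each combination of link width and data rate, using the 2.5 Gbit/s-per-lane SDR figure quoted above (doubled for DDR, quadrupled for QDR):

    #include <stdio.h>

    /* Illustrative only: raw signaling rate is 2.5 Gbit/s per lane for SDR,
     * doubled for DDR and quadrupled for QDR, multiplied by the link width
     * (1X, 4X, or 12X). */
    int main(void)
    {
        const double sdr_lane_gbit = 2.5;
        const int widths[] = { 1, 4, 12 };
        const struct { const char *name; int mult; } rates[] = {
            { "SDR", 1 }, { "DDR", 2 }, { "QDR", 4 }
        };

        for (int r = 0; r < 3; r++)
            for (int w = 0; w < 3; w++)
                printf("%2dX %s = %5.1f Gbit/s raw\n",
                       widths[w], rates[r].name,
                       sdr_lane_gbit * rates[r].mult * widths[w]);
        return 0;
    }

The 4X SDR row works out to the 10 Gbit/s figure above, and the 12X QDR row to 120 Gbit/s.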

The part I like best about Infiniband is the price, especially on the used market. Let’s take a look at a common setup on eBay. There are lots of switch options, but I like the TopSpin 120, also known as the Cisco 7000P. This is a 24-port 4X SDR 10 Gbit/s switch that runs $750 – $1,500 depending on the used source. There are even more options for Infiniband cards; I tend to stick with Mellanox chipset based cards, and they can be found for as little as $40 for PCI-X and around $125 for PCI Express. The only thing that is going to cost you more with Infiniband is the cables, which will run you $20 – $50 each.

Applications that support native Infiniband RDMA are going to get the best performance, but with IP over Infiniband (IPoIB) you can use standard TCP/IP! With IPoIB your Infiniband card shows up as a normal network interface, and you can configure it with DHCP or a static IP.
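
Native RDMA programming on Linux typically goes through the libibverbs (OpenFabrics verbs) library. As a minimal sketch, and nothing specific to the hardware above, the following C program simply enumerates the Infiniband devices the library can see and prints their firmware version and queue-pair limit; a real RDMA application would go on to register memory regions and create queue pairs:

    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **list = ibv_get_device_list(&num);
        if (!list || num == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }
        for (int i = 0; i < num; i++) {
            /* Open each device and query its basic attributes. */
            struct ibv_context *ctx = ibv_open_device(list[i]);
            if (!ctx)
                continue;
            struct ibv_device_attr attr;
            if (ibv_query_device(ctx, &attr) == 0)
                printf("%s: firmware %s, max QPs %d\n",
                       ibv_get_device_name(list[i]), attr.fw_ver, attr.max_qp);
            ibv_close_device(ctx);
        }
        ibv_free_device_list(list);
        return 0;
    }

Build it with something like gcc ib_list.c -o ib_list -libverbs. With IPoIB, by contrast, none of this is needed: the interface (typically ib0) behaves like any other, and ordinary sockets and tools work unchanged.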

Cisco 7000P

Mellanox MHEL-CF128-T

GORE 4X Infiniband Cable


1 Comment for this entry

  • Jonathan Lambert

    Well, after building a few solutions on Mellanox (the major provider of solutions for IB, and the chip manufacturer most people, if not all, use), I can say this: it’s fast. But it’s also buggy, especially on some of the Sun projects (ironically, they’re major supporters) we’ve done.

    But it does provide a really great interconnect fabric, and cost-wise, 10GE and FCoE can’t hold a candle to it for “real world” infrastructure (you know, outside of the Fortune 1000 – those of us who actually have things like “budgets” and actual short “timeframes”).

    The core issues I’ve found are these:
    Convergence of networks requires a lot. It’s very difficult to architect a reliable open source gateway for IB systems, and when you do, proper network segmentation isn’t really possible (it’s technically “possible” I guess, but it would require some hardcore low-level work; we couldn’t get VLAN management to work reliably). So bridging your IP network into your IB network, while pretty straightforward, leaves you without a lot of VLAN extensions, and therefore with difficult management and security.

    The commercial alternatives for this include a box from Mellanox and some stuff from Cisco, but the absolute best box I’ve come across by far is from Xsigo: http://www.xsigo.com/ I absolutely can’t tell you how awesome their stuff is. It bridges IB, FC, and 10G IP in a single box, and their management is fantastic. It’s a lot of money for a startup, but it’s not a lot of money relative to the cost of a medium-size buildout (40-60 servers at $500 per FC card and $500 for each Intel e1000 (just think VMware networks) adds up).

    Anyways, I’ve had a lot of real-world experience with it, and I’m happy we’ve gone down this path. The learning has been expensive, and I’ve come across a LOT of people (VMware guys, mostly) telling me that FCoE is going to replace it. I think that’s crap. IB has a heck of a lot of advantages, and if they can solve the network management challenges for startups, this stuff could quickly become the STANDARD for federating middle-of-the-road startup infrastructures.


