In today’s servers, the primary storage system is spinning disk behind hardware RAID controllers, which provide both speed and redundancy. Performance is limited by how many spindles you can keep operating at the same time. With 10Gb Ethernet, the network output can usually keep up with the storage. Processors and backplanes are underutilized, as they remain much faster than the rotating storage.
Historically, the question has been, “what can I change to make the bottleneck be something else?” The problem is that many of these pieces are balanced to match rotating-disk speed: a RAID set of 8 drives is tuned so the RAID controller is about as fast as its 8 spinning disks.
To add speed, we add more RAIDs and more drive sets, but eventually you run out of slots without ever fully utilizing the processor or the backplane. The LSI RAID cards we use are limited to two ports, with each port handling four drives. Even maxing out the two SAS ports with faster disks would limit us to 1.5 gigabytes per second per slot, so simply replacing the drives with SSDs would gain us less than a 2X improvement.
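The back-of-the-envelope arithmetic here can be sketched as a simple bottleneck model. The per-drive figures below are illustrative assumptions (roughly 150 MB/s for a fast spinning disk, 500 MB/s for a SATA SSD), not measured numbers from the article; only the two-port, four-drives-per-port layout and the 1.5 GB/s per-slot ceiling come from the text.

```python
# Bottleneck model: aggregate drive bandwidth, capped by the per-slot ceiling.
# Per-drive speeds are assumed, round numbers for illustration only.

PORTS_PER_CARD = 2     # SAS ports on the RAID card (from the article)
DRIVES_PER_PORT = 4    # drives hanging off each port (from the article)
SLOT_CAP_GBPS = 1.5    # per-slot ceiling in GB/s (from the article)

HDD_GBPS = 0.15        # assumed ~150 MB/s sustained for a fast spinning disk
SSD_GBPS = 0.50        # assumed ~500 MB/s for a typical SATA SSD

def slot_throughput(per_drive_gbps):
    """Aggregate bandwidth of all drives on one card, capped by the slot."""
    drives = PORTS_PER_CARD * DRIVES_PER_PORT
    return min(drives * per_drive_gbps, SLOT_CAP_GBPS)

hdd = slot_throughput(HDD_GBPS)  # 8 x 0.15 = 1.2 GB/s, under the cap
ssd = slot_throughput(SSD_GBPS)  # 8 x 0.50 = 4.0 GB/s, capped at 1.5
print(f"HDD slot: {hdd:.2f} GB/s, SSD slot: {ssd:.2f} GB/s, "
      f"gain: {ssd / hdd:.2f}x")
```

With these assumed drive speeds, the SSD swap yields about a 1.25x gain: well under 2X, because the slot ceiling, not the drives, is now the limit.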
So where do we go now? If we replace the RAID with a 12Gb SAS host adapter (taking SATA or SAS SSDs) and change out the drives for SSDs, we start to see real improvement, using ZFS for redundancy and provisioning. Gone are the two-port limits, so we can connect enough SSDs to use the PCI-Express bus more efficiently, and total throughput rises to 8GB per second.
We now turn our attention to the output side. Where we had been using 10 gigabit Ethernet connections, 40Gb Ethernet connections now make more sense: a twin-port 40Gb card gets us 10GB per second per slot on the output side.
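The twin-port figure is straight unit conversion, ignoring protocol overhead for the sake of a round number:

```python
# Twin-port 40GbE arithmetic: two 40Gb/s links, 8 bits per byte.
# Protocol overhead is ignored, so this is a best-case round number.

PORTS = 2
LINK_GBITS = 40       # 40Gb Ethernet per port
BITS_PER_BYTE = 8

slot_gbytes = PORTS * LINK_GBITS / BITS_PER_BYTE
print(f"{slot_gbytes:.0f} GB per second per slot")  # 10 GB/s
```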
With the increased bandwidth on both storage and networking, we see the backplane better utilized, and we start to see some stress on processor speed. Life is great, once again, except how do we get all that data out to users? All this does nothing for us if the user is still operating on a 1Gb Ethernet port over SMB2 or AFP.
Let us look at the connection first. 40Gb Ethernet is available as add-in cards, but what about laptops or Mac Pros? Enter Thunderbolt 3, due out next year, with enough bandwidth to handle a 40Gb connection. Hold that thought for a moment as we consider the opportunities SMB3 multichannel opens up. With Thunderbolt 3 and SMB3, you could run multiple 10Gb connections to the switch or server, or run a pair of full 40Gb connections.
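An idealized way to picture what multichannel buys the client: the SMB3 session fans out over several NICs and the link bandwidths simply add up. The figures below are best-case assumptions; real-world protocol and encoding overhead will lower them.

```python
# Idealized SMB3 multichannel aggregate: the session spreads across all
# available links, so best-case throughput is the sum of link speeds.
# Overhead is ignored; these are ceiling figures, not measurements.

def multichannel_gbytes(link_gbits):
    """Best-case aggregate client throughput in GB/s across all links."""
    return sum(link_gbits) / 8  # 8 bits per byte

print(multichannel_gbytes([10, 10, 10, 10]))  # four 10Gb links -> 5.0 GB/s
print(multichannel_gbytes([40, 40]))          # paired 40Gb links -> 10.0 GB/s
```

Either configuration dwarfs the single 1Gb port most clients use today, which is the point of the paragraph above.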
In 2015, it looks like we’ll finally see server systems less defined by the number of spindles than by the number of cores or 40Gb connections.
Tom Jennings is with Small Tree, www.small-tree.com