
Data choke points and a cautionary tale

August 14, 2014 by Steve Modica

During a normal week, I help a lot of customers with performance issues. Some of the most common complaints I hear include:

“I bought a new 10Gb card so I could connect my Macs together, but when I drag files over, it doesn’t go any faster.”

“I upgraded the memory in my system because Final Cut was running slow, but it didn’t seem to help very much.”

“I bought a faster Mac so it would run my NLE more smoothly, but it actually seems worse than before.”

All of these things have something in common.  Money was spent on performance, the users didn’t have a satisfying experience, and they would be much happier had the money been spent in the right place.

Of course, the first one is easy.  Putting a 10Gb connection between two Macs and dragging files between them isn't going to go any faster than the slowest disk involved. If one of those Macs is using an old SATA spinning disk, 40-60MB/sec would be a pretty normal transfer rate.  That's a far cry from the 1000MB/sec you might expect from 10Gb Ethernet!  Who wouldn't be disappointed?
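The arithmetic behind that disappointment is easy to sketch. Here's a back-of-envelope calculation using the round numbers above (the rates and file size are illustrative, not measurements):

```python
# Back-of-envelope: a transfer is gated by the slowest stage
# in the chain, not the fastest.

def transfer_secs(file_gb, stage_rates_mb_per_sec):
    """Seconds to move file_gb gigabytes through a chain of stages,
    each with its own throughput in MB/sec."""
    bottleneck = min(stage_rates_mb_per_sec)  # slowest stage wins
    return (file_gb * 1000) / bottleneck

# 100GB of media over 10Gb Ethernet (~1000MB/sec) fed by an old
# SATA spinning disk (~50MB/sec): the disk, not the link, sets the pace.
print(transfer_secs(100, [1000, 50]))        # 2000.0 seconds
# The same file with fast storage (~500MB/sec) on both ends:
print(transfer_secs(100, [1000, 500, 500]))  # 200.0 seconds
```

The 10Gb link was never the bottleneck in the first case, which is why upgrading it alone changes nothing.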

Similarly, the second case, where a user upgrades memory on a friend's anecdotal suggestion, is all too common.  On the one hand, memory upgrades are typically a great way to go, especially when you run a lot of things simultaneously. More memory almost always means better performance.  However, that assumes you didn't have some other, more serious bottleneck that no amount of memory could fix.

In the case of Final Cut 7, which is a 32-bit application, more memory isn't going to help Final Cut directly.  In fact, it's much more likely that Final Cut would run better with a faster disk and perhaps a faster CPU.  Since FCP 7 didn't use GPU offload, even moving to a better graphics card might not have delivered a huge gain.

The last one, where buying a faster Mac actually made things worse, is a classic case of mismatched performance tuning.  For this customer, the faster Mac also had a lot more memory.  It turns out that Mac OS X will dynamically increase the amount of data it will move across the network in a burst (the TCP Receive Window).  This resulted in the network overrunning Final Cut, causing it to stutter.  The solution?  Dial back the receive window to make sure FCP 7 can keep up.  This will be corrected by some other changes in the stack that are coming soon.  One day, slower applications will be able to push back on the sender a little more directly and a little more effectively than today.
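On OS X, the receive window behavior that customer hit can be inspected, and capped if need be, through sysctl. A sketch of the knobs involved, assuming the BSD-style tunable names present on OS X of this era (the value shown is illustrative, not a recommendation):

```shell
# Inspect the current TCP receive buffer ceiling (bytes)
sysctl net.inet.tcp.recvspace

# Check whether automatic receive-buffer tuning is enabled
sysctl net.inet.tcp.doautorcvbuf

# Temporarily cap the receive window so a slow reader like FCP 7
# isn't overrun by bursts (requires root; reverts on reboot)
sudo sysctl -w net.inet.tcp.recvspace=131072
```

Changes made this way affect every TCP connection on the machine, so a cap like this is a workaround to test, not a permanent fix.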

These cases bring to mind a discussion I had with a 40Gb Ethernet vendor back at NAB in April. They wanted me to use their cards and perhaps their switches. The obvious question:  Don’t your users want the speed of 40Gb Ethernet? Wouldn’t they want to run this right to their desktops?!

Of course they would.  Everyone wants to go fast.  The problem is that those 40Gb ports are being fed by storage. If you look closely at what RAID controllers and spinning disks can do, the best you can hope for from 16 drives and a RAID card is around 1GB/sec.  A 40Gb card moves about 4GB/sec. So if I sold my customers 40Gb straight to their desktops, I would need somewhere around 64 spinning disks just to max out ONE 40Gb port.  It could be done, but not economically. It would be more like a science project.
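That 64-disk figure falls straight out of the round numbers in the paragraph above (1GB/sec per 16-drive RAID set, ~4GB/sec usable per 40Gb port):

```python
# How much spinning storage does it take to feed one 40Gb port?
PORT_GB_PER_SEC = 4.0      # approximate usable throughput of one 40Gb port
RAID_SET_GB_PER_SEC = 1.0  # best case from 16 drives behind one RAID card
DRIVES_PER_RAID_SET = 16

raid_sets_needed = PORT_GB_PER_SEC / RAID_SET_GB_PER_SEC  # 4 RAID sets
drives_needed = raid_sets_needed * DRIVES_PER_RAID_SET    # 64 spinning disks
print(int(raid_sets_needed), int(drives_needed))          # 4 64
```

Four full RAID sets per desktop port is the "science project" economics: the port is cheap relative to the storage required to saturate it.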

Even worse, on Macs today, those 40Gb ports would have to connect with Thunderbolt 2, which tops out around 2.5GB/sec and is yet another choke point that would lead to disappointed customers and wasted money.

I think 40Gb Ethernet has a place. In fact, we’re working on drivers today. However, that place will depend on much larger SSDs that can provide 1GB/sec per device.  Once we’re moving 8 and 16GB/sec either via a RAID card or ZFS logical volumes, then it will make sense to put 40Gb everywhere.  The added advantage is that waiting to deploy 40Gb will only lead to better and more stable 40Gb equipment. Anyone remember the old days of 10Gb back in 2003 when cards were expensive, super hot, and required single mode fiber?

