
  1. Another Couple of Reasons to Love SSDs

    February 26, 2014 by Steve Modica

    One day, when we’re sitting in our rocking chairs recounting our past IT glories (“Why, when I was a young man, computers had ‘wires’”), we’ll invariably start talking about our storage war stories.  There will be so many.  We’ll talk of tossing stuck disks like frisbees or putting bad drives in the freezer. We’ll recount how we saved a company’s entire financial history by recovering an alternate superblock or fixing a byte-swapping error on a tape with the “dd” command. I’m sure our children will be transfixed.

    No…no, they won’t be transfixed, any more than we would be listening to someone tell us how their grandpa’s secret pot roast recipe starts with “Get a woodchuck…skin it.”  You simply have to be in an anthropological state of mind to listen to something like that. More likely, they’ll have walked into the room to ask for your wifi password (of course, only us old folks will have wifi. Your kids will just be visiting. At home they’ll use something far more modern and futuristic. It’ll probably be called iXifi or something).

    Unfortunately for us, many of these war story issues remain serious problems today.  Disks “do” get stuck and they “do” often get better and work for a while if you freeze them. It’s a great way to get your data back when you’ve been a little lazy with backups.

    Another problem is fragmentation. This is what I wanted to focus on today.

    Disks today are still spinning platters with rings of “blocks” on them, where each block is typically 512 bytes. Ideally, as you write files to your disk, those bytes are written around the rings so you can read and write the blocks in sequence. The head doesn’t have to move.  Each new block spins underneath it.

    Fragmentation occurs because we don’t just leave files sitting on our disk forever. We delete them.  We delete emails, log files, temp files, render files, and old projects we don’t care about anymore. When we do this, those files leave “holes” in our filesystems. The OS wants to use these holes.  (Indeed, SGI used to have a real-time filesystem that never left holes. All data was written at the end.  I had to handle a few cases where people called asking why they never got their free space back when they deleted files.  The answer was “we don’t ever use old holes in the filesystem. That would slow us down!”)

    To use these holes, most operating systems use a “best fit” algorithm.  They look at the size of what you’re trying to write and pick the smallest free hole that write will fit into. In this way, they can reuse old space. When you’re writing something extremely large, the OS just sticks it into the free space at the end.

    The problem occurs when you let things start to fill up.  Now the OS can’t always find a place to put your large writes. If it can’t, it may have to break that large block of data into several smaller ones. A file that may have been written in one contiguous chunk may get broken into 11 or 12 pieces.  This not only slows down your write performance, it will also slow down your reads when you go to read the file back.
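
    The best-fit behavior described above can be sketched in a few lines of Python. This is a toy model, not how any real filesystem is implemented (real allocators use structures like bitmaps and extent trees), but it shows the shape of the decision: pick the smallest hole the write fits into, and when no single hole is big enough, split the write into fragments.

    ```python
    # Toy "best fit" allocator. A filesystem's free space is modeled as a
    # list of (offset, length) holes, in blocks. This is an illustrative
    # sketch, not any real filesystem's algorithm.

    def best_fit(holes, size):
        """Return the index of the smallest hole that can hold `size` blocks, or None."""
        best = None
        for i, (_, length) in enumerate(holes):
            if length >= size and (best is None or length < holes[best][1]):
                best = i
        return best

    def allocate(holes, size):
        """Allocate `size` blocks, fragmenting the write if no single hole fits."""
        fragments = []
        while size > 0:
            i = best_fit(holes, size)
            if i is None:
                # No hole is big enough: fill the largest hole and keep going.
                # This is exactly how one logical write becomes many fragments.
                i = max(range(len(holes)), key=lambda j: holes[j][1])
            offset, length = holes[i]
            used = min(size, length)
            fragments.append((offset, used))
            if used == length:
                holes.pop(i)          # hole fully consumed
            else:
                holes[i] = (offset + used, length - used)  # shrink the hole
            size -= used
        return fragments
    ```

    With holes of 4, 8, and 2 blocks, a 10-block write can’t fit anywhere contiguously, so it lands in two pieces; a 3-block write slips neatly into the smallest hole that holds it.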

    To make matters worse, this file will remain fragmented even if you free more space up later. The OS does not go back and clean it up.  So it’s a good idea not to let your filesystems drop below 20% free space. If this happens and performance suffers, you’re going to need to look into a defragmentation tool.

    Soon, this issue won’t matter to many of us.  SSDs (Solid State Disks) fragment just like spinning disks, but it doesn’t matter nearly as much.  SSDs are more like Random Access Memory in that data blocks can be read in any order, equally fast. So even though your OS might have to issue a few more reads to pull in a file (and there will be a slight performance hit), it won’t be nearly as bad as what a spinning disk would experience.  Hence, we’ll tell our fragmentation war stories one day and get blank looks from our grandkids  (What do you mean “spinning disk?”  The disk was “moving??”).

    Personally, I long for the days when disk drives were so large, they would vibrate the floor. I liked discovering that the night time tape drive operator was getting hand lotion on the reel to reel tape heads when she put the next backup tape on for the overnight runs. It was like CSI. I’m going to miss those days. Soon, everything will be like an iPhone and we’ll just throw it away, get a new one, and sync it with the cloud.  Man that sucks.

    Follow Steve Modica and Small Tree on Twitter @smalltreecomm.  Have a question? Contact Small Tree at 1-866-782-4622.


  2. Buying Storage

    February 13, 2014 by Steve Modica

    I’ve been in the computer industry for quite some time.

    Back in the early days, we worried a lot about running out of space on a computer or a server. If you filled up your Novell Netware system, what could you do?  Adding drives was an option, but it was expensive and “scary” and you’d still end up with another volume you had to train your users to use (we didn’t have the ability to stripe all that stuff together). Further, it was likely your disk controller only supported two drives and your motherboard only supported a couple controllers.  If you ran out of space in that scenario, it meant buying an entirely new platform (software included) that would be extremely expensive. There was also no guarantee all of your stuff would migrate cleanly.

    This led many of our early computer system design people down the path of expandability and modularity.  We wanted SCSI and later, Fibre Channel, so we could add device after device to a system and never run out of space. We wanted expandable filesystems so these new devices could be merged in without moving data around.  We wanted clusters so as we ran out of CPU power and IO slots, we could just add more. Never again would we find ourselves sitting on the floor at 10 p.m. trying to figure out why our second IDE drive wasn’t being seen by the new controller we installed last week.  (You forgot to change its address, knucklehead. It’s conflicting with the first disk you put in there.)

    So today, we have lots of options.  There are blade servers, clusters, and all manner of scalable this and that. You simply buy the first bit and start using it, and if you ever need more, you just buy some more bits and plug them in and it all gets bigger.

    The problem I have with this sort of model is the price for those first bits. You aren’t simply paying for the disks.  You’re also paying for the ability to expand. This expansion capability is extremely important if your business has the chance of wild and uncontrolled growth (and wouldn’t we all like that), but most of us are running smaller businesses. We’re like pizza places, but instead of selling pizza, we’re selling services. We’d be happy to see our businesses growing at 20% year over year.

    When I think about servers and storage, I like to focus on what I expect to need this year, and what will likely get me through next year.  Beyond that, I should expect to refresh the entire system.  Even if I “could” double the storage capability, will I really want to? Will 6Gb SATA drives be fast enough for the new 4K codecs coming along in two years?  Will I want to spend “expansion capable” dollars on storage technology that’s two years old?
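
    The two-year horizon is easy to sanity-check with compound growth. The starting capacity and the 20% year-over-year rate below are assumptions for illustration, not figures from any particular shop.

    ```python
    # Capacity planning sketch: project storage need under steady
    # year-over-year growth. Starting size and growth rate are made-up
    # example numbers.

    def projected_tb(current_tb, annual_growth, years):
        """Capacity needed after `years` of compound growth."""
        return current_tb * (1 + annual_growth) ** years

    need_now = 10.0                                       # TB in use today
    need_in_two_years = projected_tb(need_now, 0.20, 2)   # 14.4 TB
    ```

    At a pizza-shop-style 20% a year, two years of growth takes 10 TB to roughly 14.4 TB; buying for that window, then refreshing, avoids paying today for expansion headroom built on two-year-old technology.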

    My personal opinion is that things are changing far too quickly to buy for a horizon past two years, and if you really think you might need to expand that quickly, you should probably be buying that storage now rather than hoping to add on in six months or a year.


  3. What’s Your NLE of Choice?

    February 3, 2014 by Steve Modica

    Now that we’re several months removed from Apple’s introduction of OS X Mavericks and we’ve all tested the waters a little, I wanted to talk about video editing software and how the various versions play with NAS storage like we use at Small Tree.

    Avid has long since released Media Composer 7, and from what I’ve seen, their AMA support (support for non-Avid shared storage) continues to improve.  There are certainly complaints about the performance not matching native MXF workflows, but now that they’ve added read/write support, it’s clear they are moving in a more NAS-friendly direction. With some of the confusion going on in the edit system space, we’re seeing more and more interest in MC 7.

    Adobe has moved to their Creative Cloud model, and I’ve noticed that it has made it much easier to keep my system up to date.  All of my test systems are either up to date or telling me they need an update, so I can be fairly certain I’m working with the latest release. That’s really important when dealing with a product as large and integrated as the Adobe suite of products. You certainly don’t want to mix and match product revisions when trying to move data between After Effects and Premiere.

    Another thing I’ve really grown to like about Adobe is their willingness to work with third party vendors (like Small Tree) to help correct problems that impact all of our customers.  One great example is that Adobe worked around serious file size limitations present in Apple’s QuickTime libraries. Basically, any time an application would attempt to generate a large QuickTime file (larger than 2GB), there was a chance the file would stop encoding at the 2GB mark.  Adobe dived into the problem, understood it, and worked around it in their applications.  This makes them one of the first to avoid this problem and certainly the most NAS friendly of all the video editing applications out there.

    Lastly, I’ve seen some great things come out of FCP X in recent days.  One workflow I’m very excited about involves using “Add SAN Location” (the built-in support for SAN volumes) together with NFS (Network File System).  It turns out that if you mount your storage over NFS and create “Final Cut Projects” and “Final Cut Events” folders within project directories inside that volume, FCP X will let you “add” them as SAN locations. This lets you use very inexpensive NAS storage in lieu of a much more expensive Fibre Channel solution.  For shops that find FCP X fits their workflow, they’ll find that NFS NAS systems definitely fit their pocketbooks.
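
    The workflow above boils down to a particular folder layout on the mounted volume. Here’s a minimal sketch, assuming the NFS share is already mounted at `mount_point`. The “Final Cut Projects” and “Final Cut Events” folder names are the ones described above; the mount path and project name are made-up examples.

    ```python
    # Sketch: lay out the per-project folders FCP X expects before using
    # "Add SAN Location". Assumes the NFS volume is already mounted at
    # `mount_point`; paths and project name are illustrative.

    import os

    def prepare_san_location(mount_point, project):
        """Create the FCP X folder layout inside a project directory on the volume."""
        base = os.path.join(mount_point, project)
        for folder in ("Final Cut Projects", "Final Cut Events"):
            os.makedirs(os.path.join(base, folder), exist_ok=True)
        return base

    # e.g. prepare_san_location("/Volumes/Editing", "ClientShoot2014")
    ```

    Once those folders exist inside the project directory, that directory can be added as a SAN location from within FCP X.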

    So as you move forward with your Mac platforms into Mavericks and beyond, consider taking a second look at your NLE (Non-Linear Editor) of choice. You may find that other workflow options are opening up.