I was out with a friend last night, and we happened to bump into somebody that he hadn’t seen for over 20 years. A moment of delayed recognition passed across his face (I’ll call this ‘recall’) before they re-established their common ground and started to reminisce about when they were both (a lot!) younger.
With my storage hat on, as always, I was reminded of the need for long-term data archiving.
Data archiving is a necessity
All users need to keep copies of data – often for regulatory reasons – for long periods of time. On the rare occasions they do need to interrogate that data, there’s usually a short delay whilst they recall it, before they carry on as if the gap hadn’t happened.
But as with most things, there’s a cost associated with long-term archiving.
Consider a system where you need to take a full backup on a monthly basis, and keep each of those backups for 7 years. Over time, that’s going to amount to 84 full backups, and because of the relative costs of tape versus disk, these have tended to be pushed out to tape.
Although that makes the data less straightforward to access, storing that many full backups on disk has usually been cost-prohibitive.
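To put a rough number on that, here’s a quick back-of-the-envelope sketch. The 10 TB system size is a purely illustrative assumption, not a figure from any particular environment:

```python
# Back-of-the-envelope sizing for 7 years of monthly full backups.
# The 10 TB system size is an illustrative assumption, not a real figure.

MONTHS_RETAINED = 7 * 12     # 84 monthly full backups
SYSTEM_SIZE_TB = 10          # assumed size of one full backup

raw_capacity_tb = MONTHS_RETAINED * SYSTEM_SIZE_TB

print(f"Full backups retained: {MONTHS_RETAINED}")
print(f"Raw capacity needed:   {raw_capacity_tb} TB")
# 84 copies of a 10 TB system come to 840 TB before any data reduction,
# which is why these backups have traditionally been pushed out to tape.
```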
Fortunately, that’s no longer necessarily the case. In the example above, where a customer takes a full backup of a system every month, there will be a large degree of commonality between those backups.
That commonality makes long-term archives an ideal candidate for data reduction, whether at the software level (TSM, for example) or the hardware level (ProtecTIER deduplication, Storwize compression, etc.).
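As a rough illustration of why that commonality matters, the sketch below applies an assumed deduplication ratio to the same 84 backups. The 10:1 ratio is hypothetical; real results depend entirely on how much the data changes from month to month:

```python
# Continuing the sizing sketch: apply an assumed deduplication ratio to the
# same 84 monthly full backups. The 10:1 ratio is hypothetical; real ratios
# depend on how much of the data actually changes between backups.

MONTHS_RETAINED = 7 * 12
SYSTEM_SIZE_TB = 10          # assumed size of one full backup
DEDUP_RATIO = 10             # assumed 10:1 data reduction

raw_capacity_tb = MONTHS_RETAINED * SYSTEM_SIZE_TB
stored_capacity_tb = raw_capacity_tb / DEDUP_RATIO

print(f"Logical capacity backed up: {raw_capacity_tb} TB")
print(f"Stored after reduction:     {stored_capacity_tb:.0f} TB")
# 840 TB of logical backups shrinks to roughly 84 TB at 10:1 - a footprint
# that starts to look realistic for commodity disk.
```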
Costs don’t have to be restrictive
Take that data reduction in tandem with storage commoditisation, and storing long-term archives on disk should no longer be ruled out on cost grounds. As an added bonus, with the data on disk, retrieval should be that much quicker, with no need to recall tapes from offsite storage locations.
The final piece of the jigsaw is the offsite copy of that data. Now that it takes up far less capacity thanks to data reduction, why not send that second copy into the Cloud, using one of the recently launched Predatar solutions?
No one would argue that archiving is easy – it’s often a challenge that companies don’t want to face, and when they do, they often compromise because of a perceived lack of capacity.
But with effective and efficient data archiving becoming ever more important, now’s the time to think again and investigate how to meet regulatory requirements while taking advantage of advances in data protection.