

Has IBM cracked the dedupe code with TSM 7.1.3?

Posted by Alistair Mackenzie on 28-Aug-2015 15:30:00

On August 26th IBM announced v7.1.3 of Spectrum Protect (formerly TSM). The announcement included improvements to the user interface to please VMware administrators and a new device class to leverage public cloud storage.


Most importantly, it announced a way to deliver backup cost savings through a new, software-based deduplication method. Solving the deduplication problem is the key to opening up cost savings in software, disk storage and network bandwidth. Cracking this code promises even bigger savings by leveraging cloud compute for disaster recovery. It’s very important if you want to contain spiralling storage costs.

The Competition

Taking the Gartner Magic Quadrant for Data Protection as the source, IBM’s top competitors are Veritas NetBackup, Commvault Simpana and EMC. Veritas has offered client- and source-side deduplication since v7; Commvault introduced deduplication in v8 and improved it with global deduplication in v9; and EMC has a whole bag of deduplication solutions, notably the Avamar and Data Domain appliances.

On the whole, software-based deduplication has been a bit of a disappointment as soon as you try to scale the workload, so large enterprises have had to invest in dedicated appliances such as EMC Data Domain and IBM ProtecTIER. The cost of these solutions is sometimes bigger than the original problem!

If, with this new version, IBM can pull off software-based deduplication at scale, the cost benefits for existing Spectrum Protect users could be significant and may prompt large enterprises to reconsider their options.

The Bottom Line

Previous attempts by IBM to crack the dedupe code have focussed on post-process methods, which tried to take the strain off the central TSM database but resulted in scalability issues and extra administrative complexity. A new in-line method attempts to improve scalability, improve deduplication savings and eliminate the extra admin tasks. IBM are claiming a single TSM database can now handle in excess of 30 terabytes of ingest per day, with a total primary storage pool approaching 4 petabytes. This performance would allow for significant consolidation of TSM servers and is ultimately more elegant than the competition, with their numerous media servers and add-on bits of software.
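To see why in-line deduplication avoids the post-process admin burden, here is a minimal sketch of the idea in Python. This is a toy illustration, not IBM’s implementation: real products (including Spectrum Protect) use more sophisticated variable-size chunking, but the principle of fingerprinting data as it arrives and storing each unique chunk only once is the same.

```python
import hashlib

def inline_dedupe(stream, chunk_size=64 * 1024):
    """Toy in-line deduplication: fingerprint each fixed-size chunk as it
    arrives and write only chunks never seen before. Duplicates are dropped
    before they ever reach the storage pool, so no post-process pass (and
    no separate reclamation step) is needed."""
    index = {}    # chunk fingerprint -> position in the pool
    pool = []     # simulated primary storage pool (unique chunks only)
    recipe = []   # ordered fingerprints needed to rebuild the stream
    for i in range(0, len(stream), chunk_size):
        chunk = stream[i:i + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in index:          # new data: store it once
            index[fp] = len(pool)
            pool.append(chunk)
        recipe.append(fp)            # duplicates cost only a reference
    return pool, recipe, index
```

A backup stream containing the same 64 KB block four times would land in the pool once, with the other three occurrences recorded as references in the recipe, which is where the disk and network savings come from.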

For TSM users, big cost savings could be realised in the following three ways:

  1. Reduction in primary pool capacity – less disk and lower software licence costs if you are using back-end capacity licensing
  2. Reduced number of backup servers and hence reduced administration
  3. When combined with node replication, the ability to leverage pay-as-you-go cloud IT for disaster recovery*

*For more details on consolidating backup and DR through DRaaS options, you might want to check out this post: http://www.silverstring.com/blog/backup-and-disaster-recovery-as-two-separate-items
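To put rough numbers on savings like these, here is a back-of-the-envelope sketch. The ingest figure comes from IBM’s claim above; the deduplication ratio and retention window are illustrative assumptions, not figures from the announcement.

```python
# Illustrative arithmetic only: dedupe_ratio and retention_days are
# assumptions, not IBM figures. Daily ingest is IBM's claimed ceiling.
daily_ingest_tb = 30      # claimed per-server ingest, TB/day
retention_days = 30       # assumed retention window
dedupe_ratio = 4          # assumed 4:1 data reduction

raw_pool_tb = daily_ingest_tb * retention_days     # pool without dedupe
deduped_pool_tb = raw_pool_tb / dedupe_ratio       # pool with dedupe
saved_tb = raw_pool_tb - deduped_pool_tb

print(raw_pool_tb, deduped_pool_tb, saved_tb)
```

Under those assumptions, a 900 TB raw pool shrinks to 225 TB, a 675 TB saving that flows straight through to disk spend and any back-end-capacity licence charges.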

Topics: Data Deduplication, Data Storage, Storage Software, Data Management, dedupe
