In a previous article, I talked about the critical capabilities for NAS backup to cloud archive tiers such as AWS Glacier, Azure Archive Blob Storage, or GCP GCS Coldline. With list pricing of $12 to $14/TB/year, these archive tiers are now cheaper than comparable enterprise-grade hardware.
So I asked the question, “But really - why wouldn’t you want to use these tiers for backup? It’s cheaper and has better operational characteristics.”
The challenge is that traditional solutions for enterprise NAS backup are 'cloud washing': they support only online cloud tiers for backup, which cost roughly 10x as much as archive cloud tiers! The reason? These solutions were architected for tape or on-premises disk, not for cost-effective backup to the cloud.
To back up enterprise NAS data to archive tiers cost-effectively, a new approach is needed. IT departments need the ability to:
- Write data directly to archive tiers
- Minimize the transaction costs of putting data in the cloud
- Intelligently expire data
- Know when to reclaim space for expired data
- Restore data cost-effectively
Keep reading to see how Igneous manages all this for you, and how we make cost-effective data protection to cloud archive tiers a reality.
Capability #1: Writing data directly to cloud archive tiers
Igneous can write data directly to archive tiers such as Azure Archive Blob or AWS Glacier Deep Archive. We do not first land data in S3, S3-IA, Hot Blob or Cool Blob and then use tiering policies to put data into the archive tier. When companies use those transition policies, there is a minimum of 30 days in hot tiers before a transition to archive tiers can happen. On just storage cost, this is the difference between $23/TB/Month vs $4/TB/Month, plus the transition costs associated with moving data from hot to archive tiers. To prevent sticker shock, Igneous always targets the tier intended for backup and archive based on the policy applied.
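To make that difference concrete, here is a minimal sketch of the first-month economics using the per-TB list prices quoted above. The function name and the simple model are ours, for illustration - it ignores transition request fees, which only widen the gap.

```python
HOT_PER_TB_MONTH = 23.0      # hot tier (e.g. S3 Standard / Hot Blob), from the article
ARCHIVE_PER_TB_MONTH = 4.0   # archive tier, from the article

def first_month_cost(tb: float, direct: bool) -> float:
    """Illustrative first-month storage cost for `tb` terabytes.

    Tiering policies require ~30 days in a hot tier before data can
    transition to archive, so the indirect path pays hot-tier rates
    for the entire first month (transition request fees ignored).
    """
    rate = ARCHIVE_PER_TB_MONTH if direct else HOT_PER_TB_MONTH
    return tb * rate

print(first_month_cost(100, direct=True))   # 400.0
print(first_month_cost(100, direct=False))  # 2300.0
```

For 100TB, writing directly to the archive tier saves roughly $1,900 in the first month alone, before counting the transition fees the indirect path would also pay.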
Capability #2: Minimizing transaction costs of putting data in the cloud
Moving data to the cloud incurs transaction costs: each PUT or LIST operation has a price. While they don't seem high at first glance - $0.05 per 1,000 operations for AWS Glacier Deep Archive or $0.10 per 10,000 operations for Azure Archive Blob Storage - these costs add up.
For example, it's common for an enterprise with 1PB or more of data to have over 1 billion files. If those files were moved individually, the PUT costs alone with AWS Glacier Deep Archive would be $50,000!
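The arithmetic behind that figure is simple enough to check. This sketch uses the per-request list prices quoted above; the helper name is ours, for illustration.

```python
def put_cost(n_requests: int, price: float, per: int) -> float:
    """Total request cost at `price` dollars per `per` requests."""
    return n_requests / per * price

# 1 billion individual PUTs at Glacier Deep Archive's $0.05 per 1,000 requests
print(round(put_cost(1_000_000_000, 0.05, 1_000), 2))    # 50000.0
# The same files at Azure Archive Blob's $0.10 per 10,000 operations
print(round(put_cost(1_000_000_000, 0.10, 10_000), 2))   # 10000.0
```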
DataProtect offers a better approach: a full-once, incremental-forever mode. At petabyte scale, you don't have time to run another full backup each month.
Igneous' proprietary IntelliMove technology partitions files into pools based on size before moving data to the cloud. IntelliMove takes large files, compresses them inline, and uses multi-part uploads for performance and reliability. It queues small files (20MB or less) and compacts them into larger compressed files - we call them chunks - reducing transaction costs (PUTs) by orders of magnitude.
The results? On average, our customers see at least a 100x reduction in transaction (PUT) costs for a backup. In the example above, that reduces the total cost of backing up 1 billion files from $50,000 to as little as $50.
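Here is a sketch of the partition-and-compact idea. This is our illustration of the technique, not IntelliMove's actual implementation: the 20MB small-file threshold comes from the article, while the ~1 GiB chunk target is an assumed example, and the sketch ignores inline compression (which shrinks PUT counts further).

```python
def plan_chunks(file_sizes, small_limit=20 * 2**20, chunk_target=2**30):
    """Split files into individually uploaded large files and compacted
    chunks of small files; returns (large_uploads, chunk_uploads)."""
    large = sum(1 for s in file_sizes if s > small_limit)
    chunks, current = 0, 0
    for s in (s for s in file_sizes if s <= small_limit):
        current += s
        if current >= chunk_target:   # close the chunk once it reaches ~1 GiB
            chunks += 1
            current = 0
    if current:
        chunks += 1                   # final partial chunk
    return large, chunks

# 1,000,000 small 1 MiB files collapse into just 977 chunk uploads
print(plan_chunks([2**20] * 1_000_000))  # (0, 977)
```

At this file size, roughly 1,000 small files fit per chunk, so PUT counts - and PUT costs - drop by about three orders of magnitude for the small-file population.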
Capabilities #3 and #4: Intelligently Expire Data and Clean Up Expired Data
The expiration of data is different from space reclamation in cloud storage. The former is about business policy; the latter, cost optimization. Igneous gives you both.
The most cost-effective way to minimize transaction costs would be to stream all the data into a single large file. This is great for transaction cost savings, but presents a secondary problem: expiring data from the cloud.
In the cloud, AWS Glacier Deep Archive and Azure Archive Blob Storage require data to be rehydrated before it becomes accessible. Rehydration imposes a wait time and, of course, a per-GB cost before you can do anything with the data.
If all the data were in a single file, it would eventually need to be rehydrated, stripped of expired files, and rewritten so that capacity doesn't grow unbounded. For a 1PB backup file, this rehydrate-compact-rewrite operation could easily cost over $60,000.
This is why Igneous chunks data - a balance between minimizing PUT costs and managing the expiration of data.
When a policy expires data, the file data is removed from Igneous InfiniteIndex, which makes it non-recoverable, adhering to your business policies about retention and recovery of data.
Space reclamation happens when a chunk is 100% expired (large files) or when a chunk reaches the tipping point where it is cheaper to rehydrate, compact, and rewrite it than to continue storing it. This tipping point differs for every class of cloud storage: on some clouds it may arrive when a chunk is 45% expired; on others, not until it is 95% expired. Fortunately, Igneous calculates this for you, so you can be confident that your cloud expiration and compaction strategy is appropriate for whatever cloud tier you've selected.
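As an illustration of such a tipping-point calculation - a simplified model with made-up rates, not Igneous' actual formula - compacting pays off once the cost of storing a chunk's expired bytes over your retention horizon exceeds the one-time cost of rehydrating the chunk and rewriting the live bytes.

```python
def should_compact(expired_frac: float,
                   storage_per_gb_month: float,
                   retrieval_per_gb: float,
                   horizon_months: int) -> bool:
    """Per-GB-of-chunk comparison: keep paying to store expired bytes,
    or pay once to rehydrate the chunk and rewrite only the live bytes?
    (PUT costs of the rewrite are negligible and ignored.)"""
    keep_cost = expired_frac * storage_per_gb_month * horizon_months
    rewrite_cost = retrieval_per_gb  # the whole chunk must be rehydrated
    return keep_cost > rewrite_cost

# With illustrative rates, a 95%-expired chunk is worth compacting...
print(should_compact(0.95, storage_per_gb_month=0.004,
                     retrieval_per_gb=0.02, horizon_months=12))  # True
# ...while a 10%-expired chunk is cheaper to leave alone.
print(should_compact(0.10, storage_per_gb_month=0.004,
                     retrieval_per_gb=0.02, horizon_months=12))  # False
```

Cheaper retrieval lowers the break-even expiration fraction and pricier retrieval raises it, which is why the threshold varies from one cloud tier to another.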
Capability #5: Restore data cost-effectively
This is the Hotel California conversation - many people joke that you can put your data in the cloud any time you like, but it can never leave.
From our customer base, we see 0.25% to 3% restore rates of cloud backups back to on-premises NAS. This tends to be 1TB to 500TB per year. Most of these restores are directories, files, and the occasional export/share that needs to be reverted.
Igneous' chunk-aligned storage also pays off at restore time. Data tends to land in chunks with directory locality, so we only have to rehydrate the chunks needed for a recovery, whether it's a directory or a group of files. This minimizes rehydration and retrieval costs.
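The idea can be sketched as a chunk index keyed by path. The index structure and names here are hypothetical, purely for illustration.

```python
def chunks_for_restore(chunk_index: dict, prefix: str) -> set:
    """Return only the chunk IDs that contain files under `prefix`;
    only these chunks need to be rehydrated for the restore."""
    return {cid for cid, paths in chunk_index.items()
            if any(p.startswith(prefix) for p in paths)}

# Toy index: directory locality keeps most of /proj/a together
chunk_index = {
    "chunk-001": ["/proj/a/1.dat", "/proj/a/2.dat"],
    "chunk-002": ["/proj/b/1.dat"],
    "chunk-003": ["/proj/a/3.dat", "/proj/b/2.dat"],
}
# Restoring /proj/a touches only 2 of the 3 chunks
print(sorted(chunks_for_restore(chunk_index, "/proj/a/")))  # ['chunk-001', 'chunk-003']
```

The better the directory locality inside chunks, the smaller the set of chunks that must be rehydrated - and the lower the retrieval bill.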
An average restore costs $35 to $70 per TB out of the cloud, but it can be as little as half that if a customer restores to a cloud filesystem, keeping the data in the cloud.
Want to learn more about restoring NAS data to the cloud? Go “Under the Hood” of Igneous to learn how we move petabytes of data and billions of files.