For years it’s been said that you can’t take advantage of the cloud for file data because it would be too expensive, or too hard. But as file data growth continues, the mechanics and economics of the cloud have continued to evolve as well. While the cloud may have been a pipe dream for enterprises in the past, the simplicity and price points now available for cloud-based backup and archive solutions can no longer be ignored.
To help data-centric organizations accelerate their cloud strategy, Igneous has debunked 3 common myths about file data in the cloud for backup and archive:
Myth #1: Ingress fees are too expensive
Moving data to the cloud can be expensive depending on the data path to the cloud. Traditionally, each individual object written to the cloud incurs a ‘put’ charge. As an example, if you are moving one billion files to the cloud and each ‘put’ incurs a charge, the initial cost just to get that data into AWS S3 Glacier Deep Archive would be $55,000 - and that doesn’t even include costs for ongoing storage or restores.
With Igneous, data can be grouped into our efficient format of equal-sized 100 MB blobs and moved directly into any cloud tier based on your preset policies. What sets this apart from the above example? You pay only a single put charge per 100 MB blob rather than per object. In the above example, your put costs would be reduced from $55,000 to $534 - a 100x reduction!
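To make the arithmetic concrete, here is a small Python sketch of the per-put math. The per-request price ($0.055 per 1,000 puts) and the ~1 MB average file size are illustrative assumptions chosen to land near the figures above; actual provider pricing and your data profile will vary.

```python
# Illustrative put-cost math. The per-request price below is a
# hypothetical rate of $0.055 per 1,000 puts - check your cloud
# provider's current pricing before relying on any of these numbers.
PUT_PRICE_PER_REQUEST = 0.055 / 1000

def put_cost(num_objects: int, price_per_put: float = PUT_PRICE_PER_REQUEST) -> float:
    """Total cost of writing num_objects objects, one put each."""
    return num_objects * price_per_put

# Per-file puts: one billion files, one put charge each.
per_file = put_cost(1_000_000_000)   # $55,000

# Blob packing: assuming an average file size of ~1 MB, one billion
# files is ~1 PB, or roughly ten million 100 MB blobs - so one put
# per blob instead of one put per file.
blob_count = 10_000_000
per_blob = put_cost(blob_count)      # $550

print(f"per-file puts: ${per_file:,.0f}, blob puts: ${per_blob:,.0f}")
```

Under these assumptions the blob-packed cost comes out around $550 - in the same ballpark as the $534 figure above, and roughly a 100x reduction either way.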
Myth #2: It's cheaper to back up and archive on-premises
Historically, the cloud has been used for small applications that need more performance, but it was considered too expensive for large datasets like backup and archive. Compared to the storage tiers previously available from public cloud providers, it was cost-effective to leave data on-premises. In addition, software costs and ingress fees were complicated to understand, and potentially very expensive.
In this example, we’ve prepared some general cost figures based on industry averages, comparing hardware and the pure costs of storage, not including software. Total on-premises costs run between $30 and $100/TB per year, versus $49 to $320/TB per year in a public cloud, before adding software costs. At these rates, there is not a very compelling argument for moving the data to the cloud.
Now, using the same industry-average estimates - hardware and pure storage costs, excluding software - we can compare on-premises storage at $30-$100/TB per year to the latest offerings from AWS Glacier Deep Archive, Azure Archive Blob Storage, and GCP’s newest archive tier, at current market rates of $12-$15/TB per year.
You can see in this example that the cloud is now a highly compelling option, coming in at roughly half the cost of on-premises storage. Factor in the reduction in ingress fees outlined above, and the cloud is no longer cost-prohibitive for backup and archive data.
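The comparison can be sketched in a few lines of Python using the per-TB ranges quoted above; the 500 TB footprint is a hypothetical example, and all figures exclude software.

```python
# Rough annual storage-cost comparison using the $/TB/year ranges
# quoted above (hardware + raw storage only, no software costs).
on_prem = (30, 100)            # typical on-premises range
cloud_standard = (49, 320)     # earlier general-purpose cloud tiers
cloud_deep_archive = (12, 15)  # deep-archive tiers (AWS, Azure, GCP)

def annual_cost(tb: float, rate_range: tuple) -> tuple:
    """Return the (low, high) yearly cost for tb terabytes."""
    low, high = rate_range
    return tb * low, tb * high

# Hypothetical example: a 500 TB backup/archive footprint.
for name, rng in [("on-prem", on_prem),
                  ("cloud (standard)", cloud_standard),
                  ("cloud (deep archive)", cloud_deep_archive)]:
    low, high = annual_cost(500, rng)
    print(f"{name}: ${low:,.0f} - ${high:,.0f} per year")
```

At 500 TB, even the low end of the on-premises range ($15,000/year) exceeds the high end of the deep-archive range ($7,500/year).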
With Igneous DataProtect, you can back up and archive directly to AWS Glacier Deep Archive, Azure Archive Blob Storage, or GCP’s newest archive tier. Storing data in the cloud is now more cost-effective than anything on-premises - even the cheapest, deepest on-premises disk storage cannot match $12-$15/TB annually.
Myth #3: It's too hard to get my data back from the cloud
For many, the perception is that finding your data in the cloud is an insurmountable challenge. Not only do you need to find it, recreate the original file structure, and restore it at scale - but how much will it all cost?
In this example, a user has requested that an old project - Project X from 2017 - be retrieved. The challenge: your data in the cloud looks like the image below, a flat file structure.
How do you know which files in your bucket are part of Project X from 2017? How do you recreate the original file system structure so the original applications will function properly? How do you retrieve 50K files? What commands do you use? How much will it cost?
Igneous solves the problem that no one else can. Finding data no longer means searching a flat index in the cloud: with Igneous you can search or browse the original file structure. Save time combing through millions or billions of files, restore to any on-premises location with a few clicks, and maintain the original file structure and permissions. Igneous manages the entire restore process - notifying you upon completion - so you don’t have to worry about how to restore files at scale. And our efficient format minimizes restore costs.
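To illustrate what any restore tool has to manage under the hood, here is a minimal sketch: a catalog maps original file paths to the packed blobs that hold them, so a request like "Project X from 2017" can be answered by path instead of by scanning a flat object namespace. The catalog shape, blob IDs, offsets, and paths are all invented for the example and do not reflect Igneous internals.

```python
# A hypothetical catalog mapping each original file path to the packed
# blob (and offset within it) that holds that file's data. All values
# here are made up for illustration.
catalog = {
    "/projects/project-x/2017/model.dat":  {"blob": "blob-0001", "offset": 0},
    "/projects/project-x/2017/notes.txt":  {"blob": "blob-0001", "offset": 52_428_800},
    "/projects/project-y/2018/report.pdf": {"blob": "blob-0002", "offset": 0},
}

def find_files(prefix: str) -> list:
    """Browse the original tree by path prefix instead of scanning blobs."""
    return sorted(p for p in catalog if p.startswith(prefix))

def blobs_to_fetch(paths: list) -> set:
    """Only the blobs containing the requested files need retrieval."""
    return {catalog[p]["blob"] for p in paths}

hits = find_files("/projects/project-x/2017/")
print(hits)                  # both Project X files
print(blobs_to_fetch(hits))  # {'blob-0001'} - a single blob fetch
```

The point of the sketch: with a path-aware index, restoring Project X means fetching one blob, not listing and inspecting every object in the bucket.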
Let’s recap! We’ve debunked 3 common myths about file data in the cloud for backup and archive:
- “Ingress fees are too expensive” - Igneous can write directly to the tier you specify and with our efficient format you only pay once per blob.
- “It’s cheaper to back up and archive on-premises” - with the availability of deep archive tiers from public cloud providers and Igneous’ ability to write directly to these tiers, storing data in the cloud is now more cost effective than anything on-premises.
- “It’s too hard to get my data back” - Igneous manages the entire restore process so you don’t have to worry about how to restore files at scale. You’ll save restore costs with our efficient format and save time with end-to-end automation and notifications.
Are you ready to consider the cloud for backup and archive?