4 Requirements of Modern Backup and Archive

by Catherine Chiang – February 20, 2018

As enterprise datasets grow at unprecedented rates, with the majority of that data being unstructured, requirements for modern backup and archive have expanded beyond the capabilities of legacy secondary storage systems.

A modern backup and archive solution must meet the following four requirements to handle massive unstructured data:

Policy-Based Data Management

Policy-based workflows streamline data management: administrators define policies once, and backup and tiering to cloud then run automatically.
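
To make this concrete, here is a minimal sketch of what a declarative backup-and-tiering policy could look like in Python. The class, field names, and thresholds are all hypothetical illustrations, not any vendor's actual API:

```python
from dataclasses import dataclass

# Hypothetical policy record; the class, fields, and values are illustrative,
# not any vendor's actual API.
@dataclass
class DataPolicy:
    name: str
    source: str                  # share or export to protect
    backup_interval_hours: int   # how often to back up
    archive_after_days: int      # tier files untouched this long to cloud
    cloud_target: str            # where archived data lands

policies = [
    DataPolicy("engineering", "nfs://filer1/projects", 24, 90, "s3://eng-archive"),
    DataPolicy("media", "smb://filer2/assets", 12, 30, "s3://media-archive"),
]

def due_for_archive(days_since_access: int, policy: DataPolicy) -> bool:
    """A file untouched longer than the policy threshold gets tiered to cloud."""
    return days_since_access >= policy.archive_after_days
```

Once policies like these are in place, the system applies them continuously instead of relying on administrators to hand-schedule jobs.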

Another big problem with massive datasets is simply not knowing what's there, so it's incredibly useful to have a single consolidated tier with indexing and search. Autodiscovery of shares and exports shows administrators what data needs to be protected.
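
As one illustration of autodiscovery, a backup system can ask an NFS server to enumerate its exports with the standard showmount tool. The host name below is made up, and a real product would go much further (SMB shares, cataloging, indexing), but the idea is the same:

```python
import subprocess

def discover_nfs_exports(host: str) -> list[str]:
    """List the exports an NFS server advertises, via the standard showmount tool."""
    result = subprocess.run(
        ["showmount", "-e", host],
        capture_output=True, text=True, check=True,
    )
    exports = []
    # Skip the "Export list for <host>:" header; each remaining line
    # is "<path> <allowed clients>".
    for line in result.stdout.splitlines()[1:]:
        if line.strip():
            exports.append(line.split()[0])
    return exports

# Every discovered export becomes a candidate for a protection policy.
for path in discover_nfs_exports("filer1.example.com"):
    print(f"unprotected export found: {path}")
```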

Data Movement Engine

Moving data becomes extremely difficult when datasets are large. Legacy backup solutions move data over single-threaded protocols, an approach that breaks down at petabyte scale.

A modern backup and archive solution must use highly parallel streams to move data. Another requirement of modern data movement is latency awareness: throttling data movement when primary storage gets busy, so backups can run continuously instead of being squeezed into backup windows.
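
The sketch below shows both ideas under simple assumptions: a thread pool drives many copy streams at once, and each worker backs off when the source responds slowly. The 0.5-second budget and 16 workers are illustrative numbers, not tuned values from any real product:

```python
import shutil
import time
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

LATENCY_BUDGET_S = 0.5   # assumed threshold: back off when the filer slows down
WORKERS = 16             # many parallel streams instead of one serial crawl

def copy_with_backoff(src: Path, dst: Path) -> None:
    """Copy one file, pausing when the source appears overloaded."""
    start = time.monotonic()
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)
    elapsed = time.monotonic() - start
    if elapsed > LATENCY_BUDGET_S:
        # The source is responding slowly; yield so production I/O recovers.
        time.sleep(elapsed)

def backup_tree(src_root: Path, dst_root: Path) -> None:
    """Back up a directory tree using parallel, latency-aware copy streams."""
    files = [p for p in src_root.rglob("*") if p.is_file()]
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        for f in files:
            pool.submit(copy_with_backoff, f, dst_root / f.relative_to(src_root))
```

Because each worker slows itself down when the source is struggling, the job can run around the clock without starving production workloads.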

Cloud-Native Services

As data grows quickly both on-premises and in the cloud, enterprises face the challenge of managing data in a hybrid world. A cloud-native architecture for on-premises infrastructure brings cloud benefits into the enterprise datacenter.

These cloud benefits include scale-out architecture, resiliency, and agility. Scale-out architecture allows enterprises to grow the solution without creating silos, which becomes essential as data grows large. The distributed nature of cloud-native architecture provides resiliency: the loss of a single component does not take down the system. Lastly, users gain agility from nondisruptive software updates.
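
A common way scale-out systems avoid silos is consistent hashing: data is spread across all nodes through a single namespace, and adding a node relocates only a small fraction of it. The toy ring below illustrates the idea; it is a simplification, not how any particular product places data:

```python
import bisect
import hashlib

class HashRing:
    """Toy consistent-hash ring: adding a node moves only a small share of keys."""

    def __init__(self, nodes: list[str], vnodes: int = 64):
        self.ring: list[tuple[int, str]] = []
        for node in nodes:
            self.add_node(node, vnodes)

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node: str, vnodes: int = 64) -> None:
        # Each node gets many virtual points so load spreads evenly.
        for i in range(vnodes):
            self.ring.append((self._hash(f"{node}#{i}"), node))
        self.ring.sort()

    def node_for(self, key: str) -> str:
        # Place the key on the first virtual point at or after its hash.
        idx = bisect.bisect(self.ring, (self._hash(key), "")) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("/projects/genome-42.bam"))  # placed in one namespace, no silo
ring.add_node("node-d")  # scale out: only ~1/4 of keys need to relocate
```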

As-a-Service

Finally, as the demands of managing massive unstructured data grow, as-a-Service providers offer a more scalable and agile alternative to legacy systems.

“As-a-Service” means that instead of buying hardware and then dealing with the day-to-day maintenance and troubleshooting of a secondary storage solution, customers pay for an all-inclusive service that eliminates that management overhead.

As-a-Service offerings take care of monitoring, diagnostics, failure management, and software updates. This is particularly valuable as data grows, because it lowers total cost of ownership (TCO). And unlike traditional managed services, which rely on customer service teams whose costs climb as data grows, as-a-Service vendors use software automation to deliver the service efficiently and cost-effectively.
