
4 Requirements of Modern Backup and Archive

by Catherine Chiang – February 20, 2018

As enterprise datasets grow at unprecedented rates, with the majority of that data being unstructured, requirements for modern backup and archive have expanded beyond the capabilities of legacy secondary storage systems.

Modern backup and archive must meet the following four requirements to handle massive unstructured data:

Policy-Based Data Management

Policy-based workflows for backup and archive streamline data management by allowing administrators to easily set policies for automatic backup and tiering to cloud.
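As a minimal sketch of what a policy-based workflow might look like, the snippet below defines backup and tiering rules as data rather than as manual jobs. All names and fields here are illustrative assumptions, not any specific product's API:

```python
from dataclasses import dataclass

# Hypothetical policy definition -- field names are illustrative only,
# not tied to any vendor's actual configuration schema.
@dataclass
class BackupPolicy:
    name: str
    source_path: str            # share or export to protect
    backup_interval_hours: int  # how often automatic backups run
    archive_after_days: int     # tier to cloud after this many idle days
    cloud_target: str           # destination for tiered data

policies = [
    BackupPolicy("home-dirs", "/exports/home", 24, 90, "s3://archive-bucket"),
    BackupPolicy("scratch", "/exports/scratch", 168, 30, "s3://archive-bucket"),
]

def due_for_archive(days_since_access: int, policy: BackupPolicy) -> bool:
    """A file is tiered to cloud once it has been idle longer than the policy allows."""
    return days_since_access >= policy.archive_after_days
```

Once policies are expressed this way, the system can apply them automatically to every discovered share instead of requiring administrators to schedule jobs by hand.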

Another big problem with massive datasets is visibility: administrators often don't know what data they have, so a single consolidated tier with indexing and search is incredibly useful. Autodiscovery of shares and exports helps administrators easily find the data that needs to be protected.
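To make the idea of a consolidated index concrete, here is a toy sketch: each discovered share registers its files into one index, and a single glob-style search then spans all of them. This is an illustration of the concept, not how any particular product implements it:

```python
import fnmatch

# Minimal sketch: map each discovered share/export to the files it holds,
# then search across all of them from one place.
index: dict[str, list[str]] = {}

def discover(share: str, files: list[str]) -> None:
    """Register a newly discovered share and its contents in the index."""
    index[share] = files

def search(pattern: str) -> list[str]:
    """Glob-style search across every indexed share at once."""
    return [f"{share}/{name}"
            for share, files in index.items()
            for name in files
            if fnmatch.fnmatch(name, pattern)]

discover("/exports/home", ["report.docx", "model.ckpt"])
discover("/exports/scratch", ["run1.log", "model_old.ckpt"])
```

With this structure, a query like `search("*.ckpt")` finds matching files in every share, regardless of where they physically live.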

Data Movement Engine

Moving data becomes extremely difficult when datasets are large. Legacy backup solutions move data over single-threaded protocols, an approach that cannot keep up at petabyte scale.

A modern backup and archive solution must use highly parallel streams to move data. Another requirement of modern data movement is latency awareness: by backing off when the primary system is busy, backups can run continuously instead of being confined to dedicated backup windows.
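The two ideas above can be sketched together: a pool of workers copies objects in parallel, and each worker throttles itself when the source responds slowly. This is a simplified illustration under assumed names; a real data mover would also handle retries, checksums, and restarts:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Illustrative threshold: back off if the source takes longer than 50 ms,
# so backup traffic yields to primary workloads.
LATENCY_BUDGET_S = 0.050

def copy_object(name: str) -> float:
    """Copy one object and return the observed source latency."""
    start = time.monotonic()
    # ... read from source, write to target (omitted in this sketch) ...
    latency = time.monotonic() - start
    if latency > LATENCY_BUDGET_S:
        # Latency awareness: sleep proportionally so the source isn't starved.
        time.sleep(latency)
    return latency

def copy_all(objects: list[str], workers: int = 8) -> int:
    """Move many objects with parallel streams; returns the count moved."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(1 for _ in pool.map(copy_object, objects))
```

Because each stream self-throttles, the aggregate transfer adapts to load on the primary system rather than requiring the job to be paused during business hours.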

Cloud-Native Services

As data grows quickly both on-premises and in cloud, enterprises face the challenge of data management in a hybrid world. Cloud-native architecture for on-premises infrastructure helps to bring cloud benefits to enterprise datacenters.

These cloud benefits include scale-out architecture, resiliency, and agility. Scale-out architecture allows enterprises to scale their solution without creating silos, which becomes essential as data grows large. The distributed nature of cloud-native architecture results in resiliency, protecting against potential failures. Lastly, users experience agility due to nondisruptive software updates.

As-a-Service

Finally, as the demands for managing massive unstructured data grow, as-a-Service providers offer a scalable, more agile alternative to legacy systems.

“as-a-Service” means that instead of buying hardware and then dealing with the day-to-day maintenance and troubleshooting of the secondary storage solution, customers pay for an all-inclusive service that alleviates management overhead.

as-a-Service offerings take care of monitoring, diagnostics, failure management, and software updates. This is particularly valuable as data grows larger because it ends up saving enterprises money by lowering total cost of ownership (TCO). And unlike traditional managed services, which rely on customer service teams that become expensive as data grows, as-a-Service vendors use software to provide efficient and cost-effective services.

Related Content

When and How Should You Use Cloud in Your Backup Workflow?

September 11, 2018

While traditional tape workflows may make sense for some organizations, tape backup comes with a set of challenges that become especially difficult to manage at scale. At scale, the cost of tape storage no longer seems so cheap, and IT leaders start looking around for alternatives.

How Igneous Optimizes Data Movement

August 22, 2018

Our co-founder and Architect, Byron Rakitzis, recently wrote an article over at DZone called "Parallelizing MD5 Checksum Computation to Speed Up S3-Compatible Data Movement."

Why Isilon Users Need Multi-Protocol Support in Their Data Protection Solution

August 21, 2018

As you may have heard, we’ve added support for multi-protocol on Dell EMC Isilon OneFS—making Igneous the only modern scale-out data protection solution for large enterprise customers with this capability.
