
4 Requirements of Modern Backup and Archive

by Catherine Chiang – February 20, 2018

As enterprise datasets grow at unprecedented rates, with the majority consisting of unstructured data, the requirements for modern backup and archive have expanded beyond the capabilities of legacy secondary storage systems.

A modern backup and archive solution must meet the following four requirements to handle massive unstructured data:

Policy-Based Data Management

Policy-based workflows for backup and archive streamline data management by allowing administrators to easily set policies for automatic backup and tiering to cloud.

Another challenge with massive datasets is simply knowing what’s there, so a single consolidated tier with indexing and search is invaluable. Autodiscovery of shares and exports helps administrators identify what data needs to be protected.
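To make this concrete, here is a minimal sketch of what a policy-based workflow might look like. The `BackupPolicy` class and `should_archive` function are hypothetical illustrations, not any vendor's actual API; the idea is that an administrator declares intent once (backup frequency, archive age) and the system applies it automatically to discovered shares.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class BackupPolicy:
    """A policy applied automatically to a discovered share or export."""
    name: str
    backup_interval_hours: int   # how often to back up the share
    archive_after_days: int      # age threshold for tiering to cloud

def should_archive(last_access: datetime, policy: BackupPolicy,
                   now: datetime) -> bool:
    """Return True if data is cold enough to tier to cloud storage."""
    return now - last_access > timedelta(days=policy.archive_after_days)

# A policy that backs up every 24 hours and archives data
# untouched for 90 days.
policy = BackupPolicy("default", backup_interval_hours=24,
                      archive_after_days=90)
now = datetime(2018, 2, 20)
print(should_archive(datetime(2017, 10, 1), policy, now))  # True: cold data
print(should_archive(datetime(2018, 2, 1), policy, now))   # False: recently used
```

The point of the policy object is that it scales: adding a new share means assigning a policy, not scripting a new backup job.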

Data Movement Engine

Moving data becomes extremely difficult when datasets are large. Legacy backup solutions use single-threaded protocols to move data, an approach that breaks down at petabyte scale.

A modern backup and archive solution must move data over highly parallel streams. It must also be latency-aware, backing off when the primary system is busy, so that backups can run continuously rather than being confined to backup windows.
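The two ideas above, parallel streams and latency-aware back-off, can be sketched together. This is an illustrative toy, not a real data mover: `copy_chunk` is a stub, and the 50 ms latency budget is an assumed threshold. The sketch shrinks the number of in-flight copies when the observed latency rises, and ramps back up when the primary system is idle.

```python
import time
from concurrent.futures import ThreadPoolExecutor

LATENCY_BUDGET_MS = 50.0  # hypothetical back-off threshold

def copy_chunk(chunk_id: int) -> float:
    """Copy one chunk; return the observed source latency in ms (stubbed)."""
    start = time.monotonic()
    # ... read from primary NAS, write to the backup tier ...
    return (time.monotonic() - start) * 1000

def move_dataset(chunks, parallelism: int = 8) -> int:
    """Copy chunks over many parallel streams, shrinking the in-flight
    batch when latency exceeds the budget so the backup can run
    continuously without starving primary workloads."""
    copied = 0
    batch = parallelism
    it = iter(chunks)
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        while True:
            window = [c for _, c in zip(range(batch), it)]
            if not window:
                break
            latencies = list(pool.map(copy_chunk, window))
            copied += len(window)
            if max(latencies) > LATENCY_BUDGET_MS:
                batch = max(1, batch // 2)            # primary is busy: back off
            else:
                batch = min(parallelism, batch + 1)   # primary is idle: ramp up
    return copied

print(move_dataset(range(100)))  # 100 chunks copied
```

Because throughput adapts to what the primary system can tolerate, the job never needs a dedicated quiet period, which is the mechanism behind eliminating backup windows.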

Cloud-Native Services

As data grows quickly both on-premises and in the cloud, enterprises face the challenge of data management in a hybrid world. Cloud-native architecture for on-premises infrastructure brings cloud benefits to enterprise datacenters.

These cloud benefits include scale-out architecture, resiliency, and agility. Scale-out architecture allows enterprises to grow their solution without creating silos, which becomes essential as datasets get large. The distributed nature of cloud-native architecture provides resiliency against component failures. Lastly, nondisruptive software updates give users agility.

As-a-Service

Finally, as the demands for managing massive unstructured data grow, as-a-Service providers offer a scalable, more agile alternative to legacy systems.

“as-a-Service” means that instead of buying hardware and then handling the day-to-day maintenance and troubleshooting of a secondary storage solution, customers pay for an all-inclusive service that alleviates management overhead.

as-a-Service offerings take care of monitoring, diagnostics, failure management, and software updates. This becomes particularly valuable as data grows, because it lowers total cost of ownership (TCO). And unlike traditional managed services, which rely on customer service teams whose costs grow with the data, as-a-Service vendors use software to deliver efficient, cost-effective service.

