
Three Benefits of Cloud-Native Architecture

by Jeff Hughes – November 28, 2017

When we started Igneous, we set out to solve the problems of legacy software architectures, which weren’t particularly scalable, resilient, or agile. That’s why Igneous is built with a cloud-native architecture, enabling enterprises to harness the power of cloud wherever their data lives.

Unstructured data is growing quickly both in the cloud and on-premises, and enterprises today face the challenge of managing that data in a hybrid world. There are two approaches to this challenge: make your cloud look like your premises by running legacy software like NetApp in the cloud, or make your premises look more like the cloud. We chose the latter, and this post explains why.

Igneous uses a cloud-native architecture to bring the following benefits of cloud on-premises:

1. Scale-out

Igneous is a scale-out solution that makes it easy to scale your backup and archive infrastructure as your data grows. We've got a scalable index store, a scalable object store, and a scalable container runtime. All of this is transparent to the interface above, so no matter how big you grow, there's nothing more to manage and no new architecture planning required to scale.
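
To make "transparent to the interface above" concrete, here is a minimal sketch assuming an S3-compatible endpoint on an Igneous system. The endpoint, bucket, and credentials are hypothetical, not Igneous's published API; the point is that this client code doesn't change as the cluster behind it scales out.

import boto3

# Hypothetical on-premises, S3-compatible endpoint; not an actual Igneous URL.
store = boto3.client(
    "s3",
    endpoint_url="https://igneous.example.internal",
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

# Writing a backup object looks the same whether the system holds terabytes
# or petabytes; adding nodes happens below this interface.
store.put_object(
    Bucket="backups",
    Key="nas01/home/report.doc",
    Body=b"<file bytes>",
)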

2. Resiliency

Cloud also brings resiliency to the equation because of the distributed nature of its systems. We've got resiliency at a number of layers: in how our container runtime is designed, in how we write data, and in how we manage replication.

3. Agility

Just like the cloud, we have the ability to update software in-place. This provides two benefits: transparent defect resolution and the ability to put new features in place without taking systems down for maintenance. Users and IT enjoy the convenience of automatic updates and new features without experiencing interruptions to their work, increasing business agility.
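
One common way cloud-native systems deliver in-place updates, sketched below in generic terms (this is a pattern illustration, not Igneous's actual update tooling), is a rolling update: nodes are upgraded one at a time while the rest of the cluster keeps serving.

# Generic rolling-update sketch; node names and helper functions are
# hypothetical and stand in for whatever the platform's tooling does.
nodes = ["node-1", "node-2", "node-3"]

def drain(node):
    print(f"draining {node}")                # stop scheduling new work here

def upgrade(node, version):
    print(f"upgrading {node} to {version}")  # replace the software in place

def rejoin(node):
    print(f"{node} back in service")         # resume work on the new version

for node in nodes:
    drain(node)
    upgrade(node, "2.4.1")
    rejoin(node)
# At every point in the loop the other nodes kept serving, so the system
# stayed up for the entire upgrade.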

[Figure: Igneous cloud-native architecture]

Igneous is analogous to public cloud providers like AWS, but for your data both in the cloud and on-premises. Under the covers of our data management platform, we've designed a scalable object store like Amazon S3, a scalable index store like Amazon DynamoDB, a scalable container runtime like Amazon Elastic Container Service, and an event-driven computing mechanism like AWS Lambda, all in one-tenth of the footprint so it can run on your premises.
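
To illustrate the Lambda-like piece of that comparison, here is a generic sketch of an event-driven handler: a small function the platform invokes when something happens in the object store. The event fields and the handler itself are hypothetical, not Igneous's actual API.

# Generic event-driven sketch in the style of AWS Lambda; the event shape
# below is assumed for illustration only.
def on_object_written(event):
    """Invoked by the platform whenever a new backup object is written."""
    path = event["key"]
    size = event["size_bytes"]
    # Typical follow-on work: update the index store, tier a copy to cloud,
    # or record metrics. Here we just log what arrived.
    print(f"indexed {path} ({size} bytes)")

# An event-driven runtime would call the handler itself; invoking it directly
# here just shows the shape of the event it would receive.
on_object_written({"key": "nas01/home/report.doc", "size_bytes": 1048576})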

Learn more about our cloud-native services on our newly launched Technology page.
