True Cloud for Local Data—Yes, You Can Have It All

by Kiran Bhageshpur – October 10, 2016

Over the past couple of years, customer after customer has told us what they really like about public cloud infrastructure and how much they want its characteristics within their own datacenters for workloads they can’t or won’t move to a public cloud offering. From all of these conversations, two themes emerged that shaped Igneous’ approach to creating a True Cloud for Local Data.

The first was the list of public cloud characteristics that enterprise customers found attractive. The second was the nature of the workloads that were, at best, cumbersome to move to public cloud infrastructure, and the fact that these workloads were growing in both size and importance.

With public cloud infrastructures, customers liked that they did not buy hardware or “rack and stack” it in their datacenters. They did not have to license software, worry if they had deployed the latest security patch, or set up maintenance windows for change management. In short, they did not have to manage the IT Infrastructure. Instead, they consumed an elastic and scalable service that was API-driven.

Furthermore, with the public cloud, they had access to a new application development paradigm with cloud-native services such as S3 for storage, DynamoDB for NoSQL data, SQS for messaging, etc. These new services allowed them to easily compose and build horizontally scalable, distributed applications.
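
To make the composition point concrete, here is a minimal sketch of that pattern using the boto3 AWS SDK (the bucket and queue names are hypothetical, and configured AWS credentials are assumed): store an object in S3, then notify downstream workers through SQS.

```python
import json
import boto3  # AWS SDK for Python; assumes credentials are already configured

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

# Hypothetical names, for illustration only.
BUCKET = "example-ingest-bucket"
QUEUE_URL = "https://sqs.us-west-2.amazonaws.com/123456789012/example-work-queue"

def ingest(key: str, payload: bytes) -> None:
    """Store a blob in S3, then tell downstream workers about it via SQS."""
    s3.put_object(Bucket=BUCKET, Key=key, Body=payload)
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"bucket": BUCKET, "key": key}),
    )

ingest("logs/2016-10-10/host42.log", b"example log contents")
```

Any number of workers can poll the queue and fetch the referenced objects from S3 independently, which is what makes the pattern horizontally scalable: no single server owns the data or the work.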

In addition to the above benefits, customers paid for these services only as they consumed them, freed from having to make large capital purchases up front. Not only was the architecture scalable and elastic; so was the commercial model.

To summarize, the characteristics of True Cloud for Local Data became:

  • No hardware to buy, software to license, or infrastructure to manage
  • API-driven, scalable, and elastic operation
  • Cloud-native services that make it easier to build scalable, distributed applications
  • Pay-as-you-go pricing

The public cloud is clearly the single most disruptive change to IT Infrastructure in the last 15 years. Many enterprise applications are moving to the public cloud, and still more are being replaced by SaaS applications. However, there are certain applications that customers tell us they cannot or will not move to a public cloud.

For example, large data sets generated by machines, often at rates of terabytes per hour (security logs from thousands of servers, images from HD cameras on planes and satellites, output from high-resolution microscopes), simply cannot be pushed across perimeter networks to the public cloud, even over 10Gbps links. These data sets are viewed as valuable (with their value often increasing over time!), are retained for long periods, and are frequently reprocessed. As such, customers tell us that these data sets and workflows can’t be moved to the public cloud.
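
A quick back-of-the-envelope calculation shows why (a sketch assuming an ideal, fully dedicated link with no protocol overhead):

```python
# How much data can an ideal, fully dedicated 10Gbps link move in an hour?
link_gbps = 10                            # link speed, gigabits per second
bytes_per_second = link_gbps * 1e9 / 8    # 1.25 GB per second
tb_per_hour = bytes_per_second * 3600 / 1e12
print(f"Theoretical ceiling: {tb_per_hour} TB/hour")  # 4.5 TB/hour
```

Against a ceiling of roughly 4.5TB per hour, data arriving at multiple terabytes per hour saturates the link before protocol overhead, contention from other traffic, or retransfers for reprocessing are even counted.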

In other cases, concerns around data security (such as raw footage from the latest pre-release Hollywood blockbuster!) or legal and regulatory restrictions (such as Safe Harbor regulations) mean that a company won’t move certain data sets and their associated workflows to the public cloud.

In seeking a cloud experience within their datacenters, customers shared with us their attempts at building a private cloud. More often than not, those efforts involved buying hardware off a hardware compatibility list, installing expensive, licensed commercial software (or bludgeoning open source distributions into shape!), and managing the resulting stitched-together infrastructure. They gained neither the elasticity nor the scalability of the public cloud, yet were stuck with a cost model (CapEx plus ongoing OpEx) much like that of traditional enterprise infrastructure. Their frustration was palpable.

This is the gap we set out to fill. With Igneous:
  • Customers don’t buy hardware, license software, or manage any infrastructure
  • They subscribe to our cloud-native services, accessing them via API just as they would in the public cloud (see the sketch after this list)
  • We deliver these services via purpose-built appliances that live within the customers’ networks, behind their firewalls
  • Customer data stays on premises, on those appliances, fully managed by our cloud-based software
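
As a sketch of what that looks like from the application’s side (illustrative only, not Igneous’ actual API; the endpoint, credentials, bucket, and key below are hypothetical, and an S3-compatible interface is assumed), the same client code used against public cloud S3 can simply be pointed at an endpoint behind the firewall:

```python
import boto3

# Hypothetical on-premises, S3-compatible endpoint. Names and credentials
# are placeholders for illustration only.
local_s3 = boto3.client(
    "s3",
    endpoint_url="https://igneous.example.internal",  # inside the firewall
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

# The calling code is identical to public cloud S3 usage.
local_s3.put_object(
    Bucket="local-archive",
    Key="scans/plate-0001.tiff",
    Body=b"...",
)
```
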
The result: True Cloud for Local Data!
