Over the past couple of years, customer after customer told us what they really liked about public cloud infrastructure and how much they wanted its characteristics within their own datacenters for workloads they can't or won't move to a public cloud offering. From all these conversations, two themes emerged that shaped Igneous' approach to creating a True Cloud for Local Data.
The first was the set of public cloud characteristics that enterprise customers found attractive. The second was the nature of the workloads that were, at best, cumbersome to move to a public cloud infrastructure, and how those workloads were growing in size and importance.
With public cloud infrastructures, customers liked that they did not have to buy hardware or "rack and stack" it in their datacenters. They did not have to license software, worry about whether they had deployed the latest security patch, or set up maintenance windows for change management. In short, they did not have to manage the IT infrastructure. Instead, they consumed an elastic, scalable, API-driven service.
Furthermore, with the public cloud, they had access to a new application development paradigm with cloud-native services such as S3 for storage, DynamoDB for NoSQL data, SQS for messaging, etc. These new services allowed them to easily compose and build horizontally scalable, distributed applications.
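To make the composition point concrete, here is a minimal sketch of the pattern customers described, written in Python with boto3 against two of the services named above (S3 and SQS). The bucket name and queue URL are hypothetical placeholders, not anything from a real deployment: a producer drops an object into S3 and enqueues a pointer to it, and any number of horizontally scaled workers pick up messages and process the objects.

```python
import json
import boto3

# Hypothetical names -- substitute your own bucket and queue.
BUCKET = "example-ingest-bucket"
QUEUE_URL = "https://sqs.us-west-2.amazonaws.com/123456789012/example-work-queue"

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

def produce(key: str, payload: bytes) -> None:
    """Store the data in S3, then enqueue a pointer to it for workers."""
    s3.put_object(Bucket=BUCKET, Key=key, Body=payload)
    sqs.send_message(QueueUrl=QUEUE_URL,
                     MessageBody=json.dumps({"bucket": BUCKET, "key": key}))

def consume() -> None:
    """One worker iteration: fetch a message, process the object, ack the message."""
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1,
                               WaitTimeSeconds=10)
    for msg in resp.get("Messages", []):
        ref = json.loads(msg["Body"])
        obj = s3.get_object(Bucket=ref["bucket"], Key=ref["key"])
        data = obj["Body"].read()  # ...process the data here...
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```

Because the queue decouples producers from consumers, scaling out is simply a matter of running more workers; no single machine holds state, which is exactly what makes these applications horizontally scalable.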
In addition to these benefits, customers paid for the services only as they consumed them, freed from making large capital purchases up front. Not only was the architecture scalable and elastic; so was the commercial model.
To summarize, the characteristics of True Cloud for Local Data became:
- No hardware to buy, software to license, or infrastructure to manage
- API-driven, scalable, and elastic operation
- Cloud-native services that make it easier to build scalable, distributed applications
- Pay-as-you-go pricing
The second theme was the nature of the workloads themselves. For example, large data sets (often on the order of terabytes per hour) generated by machines (such as security logs from thousands of servers, images from HD cameras on planes and satellites, or outputs from high-resolution microscopes) simply cannot be moved across perimeter networks to the public cloud, even with 10 Gbps links. These data sets are viewed as valuable (with their value often increasing over time!), are retained for long periods, and are frequently reprocessed. As such, customers tell us that these data sets and workflows can't be moved to the public cloud.
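Some back-of-the-envelope arithmetic shows why. The 10 Gbps link speed comes from the paragraph above; the ingest rate and archive size below are illustrative assumptions, not customer figures:

```python
# Rough transfer-time arithmetic for a 10 Gbps perimeter link.
# Real-world throughput is lower still (protocol overhead,
# contention with other traffic, etc.).

LINK_GBPS = 10
link_tb_per_hour = LINK_GBPS / 8 * 3600 / 1000  # = 4.5 TB/hour at full line rate

ingest_tb_per_hour = 2   # assumed machine-generated data stream
archive_tb = 500         # assumed accumulated archive to migrate

print(f"Link ceiling:      {link_tb_per_hour:.1f} TB/hour")
print(f"Ingest alone uses: {ingest_tb_per_hour / link_tb_per_hour:.0%} of the link")
print(f"Moving {archive_tb} TB:     {archive_tb / link_tb_per_hour / 24:.1f} days at 100% utilization")
```

At terabytes per hour of new data, the link is substantially consumed just keeping up with ingest, before any bulk migration or reprocessing traffic enters the picture.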
In some cases, concerns around data security (such as in the case of raw footage from the latest pre-release Hollywood blockbuster!) and legal or regulatory restrictions (such as Safe Harbor regulations) mean that a company won’t move certain data sets and associated workflows to the public cloud.
In seeking a cloud experience within their datacenters, customers shared with us their efforts at building a private cloud. More often than not, those efforts involved buying hardware off a hardware compatibility list, installing expensive, licensed commercial software (or bludgeoning open source distributions into shape!), and managing a stitched-together infrastructure. Gaining neither the elasticity nor the scalability of the public cloud, yet stuck with a cost model (CapEx plus ongoing OpEx) that looks more like traditional enterprise infrastructure, these customers' frustration was palpable.
This is the gap we set out to fill. With Igneous:
- Customers don’t buy hardware, license software, or manage any infrastructure
- They subscribe to our cloud-native services, accessing them via API just as they would with the public cloud (see the sketch after this list)
- We deliver these services via our purpose-built appliances that live within the customers’ networks, behind their firewalls
- Customer data stays on the customer's own premises, on our purpose-built appliances, which are fully managed by our cloud-based software
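As one illustration of what "accessing them via API just as they would with the public cloud" can look like in practice, the sketch below points a standard S3 client at an in-datacenter endpoint. The endpoint URL, credentials, and the use of an S3-compatible interface are all assumptions made for illustration, not a description of Igneous' published API:

```python
import boto3

# Hypothetical on-premises endpoint and credentials -- illustrative only.
s3 = boto3.client(
    "s3",
    endpoint_url="https://appliance.example.corp",  # assumed in-datacenter appliance address
    aws_access_key_id="EXAMPLE_KEY_ID",
    aws_secret_access_key="EXAMPLE_SECRET",
)

# The same object-storage calls an application would make against the
# public cloud, but the data never crosses the perimeter network.
s3.put_object(Bucket="instruments", Key="run-0421/frame-0001.tif", Body=b"...")
listing = s3.list_objects_v2(Bucket="instruments", Prefix="run-0421/")
for item in listing.get("Contents", []):
    print(item["Key"], item["Size"])
```

The point of the pattern is that the application code is identical either way; only the endpoint changes, so the data and the workflows that depend on it stay behind the firewall.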