Thoughts from AWS re:Invent 2016

by Steve Pao – December 5, 2016

Perspectives on hybrid cloud for large, unstructured data that we share with Amazon, plus some differences of opinion informed by our customers.

Last week, a small team of us went to the AWS re:Invent 2016 conference, which featured the latest announcements from Amazon Web Services (AWS).

At Igneous, we are both a consumer and a partner of AWS.

As a consumer, we utilize AWS both in our development pipeline as well as in production to run the Igneous Cloud (our hyperscale remote management platform for our equipment deployed at customers’ premises).

As a partner, we recognize that many of our customers and prospective customers are pursuing a hybrid cloud strategy. In addition to workloads that involve smaller datasets, we’re also hearing from customers who want to utilize public clouds for offsite redundancy for portions of the data they store on-premises, for sharing processed results with parties outside their enterprises, and for bursting compute elastically. (In general, storage for large, unstructured data doesn’t really "burst"; it just grows monotonically!)

At Re:Invent, it was clear from the presentations that the AWS team was also seeing the same trends with large, unstructured data that motivated us to start Igneous.

A photo from the "Deep Dive" session on Amazon S3 showed the packed room, demonstrating an overwhelming interest in utilizing object storage.


Beyond the interest in object storage, there were some good discussions of the trends driving hybrid cloud for large, unstructured data.

Based on these observations, AWS made a number of announcements. As an AWS partner, we’re interested in pursuing AWS Greengrass, as we believe event-driven computing models are right for data-centric computing. And as with AWS Snowball Edge, we believe that storage should also embed compute. All that said, these AWS solutions make the "all-in" assumption that all of the data eventually lands in the AWS cloud, even if it is so large that it has to be physically transported via parcel service or via a dedicated truck with AWS Snowmobile.
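To make the "event-driven computing" idea concrete, here is a minimal sketch in the style of an AWS Lambda handler responding to S3 object-creation notifications: storage emits an event when an object arrives, and compute runs in response. The event shape mirrors the S3 notification format; the `process_object` helper and the bucket/key names are hypothetical, purely for illustration.

```python
def process_object(bucket, key):
    """Hypothetical per-object work, e.g. indexing or checksumming new data."""
    return f"processed s3://{bucket}/{key}"

def handler(event, context=None):
    """Entry point invoked once per storage event, Lambda-style.

    Iterates over the Records in an S3-style notification and runs
    the (hypothetical) per-object work for each new object.
    """
    results = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        results.append(process_object(s3["bucket"]["name"],
                                      s3["object"]["key"]))
    return results

if __name__ == "__main__":
    # A single S3-style object-created notification (names are made up).
    sample_event = {
        "Records": [
            {"s3": {"bucket": {"name": "genomics-archive"},
                    "object": {"key": "run-0042/reads.bam"}}}
        ]
    }
    print(handler(sample_event))
```

The appeal of this model for data-centric computing is that the compute is colocated with, and triggered by, the data itself rather than scheduled separately.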

These announcements are interesting, and AWS continues to evangelize an "all-in" strategy. In our experience, however, even customers who view themselves as "cloud first" are pursuing a hybrid cloud strategy, combining public cloud with their enterprise data centers.

At Igneous, our aim is to provide options for customers running hybrid clouds without requiring the physical movement of data via truck or parcel service. By managing large, unstructured data in enterprise data centers, enterprises can continuously run their data pipelines without having their data go offline while in transit. By providing True Cloud for Local Data, Igneous Data Service can serve as both an on-ramp and a control point for data that is managed across enterprise data centers and even multiple cloud providers.

For context, here is some media coverage of Igneous during the re:Invent conference.

If you’re interested in learning more, contact us!
