
Cloud-Native Services—The Good, The Bad, The Best

by Christian Smith – November 8, 2016

In the early days of the company, one of our biggest debates was whether the core of our architecture should be a POSIX-compliant data tier or a modern key-value store. Back then (and, for that matter, even today!) the vast majority of workflows depended on a legacy POSIX interface to storage.

If the 1990s was the decade of Windows as the de facto platform, and the 2000s was the decade of Linux as the de facto platform, we are now in the era of cloud as the de facto platform for today’s compute and data. 

Needless to say, we decided to build our service around this new de facto platform. The public cloud brought a completely new toolchain—a whole new set of services that is changing the way applications are developed, deployed, and maintained. As with most things new, these “cloud-native” services make certain tasks dramatically simpler, but bring challenges of their own.

So, what’s so special about having cloud-native services, like those we offer with the Igneous cloud platform? 

They are all about making it easy to build distributed applications that scale horizontally. Back in the day, applications were easier to develop: you built them to run on the Windows, Linux, or Java platform, and if one needed to get faster, you simply ran it on the faster servers that kept arriving. Moore’s Law came to your rescue, time and time again!

The combination of the stalling of Moore’s Law and the geometric explosion of data sets and related workflows means that we now have to scale by breaking applications into more components and adding more servers (as opposed to just faster servers!). In other words, horizontally scaling decoupled systems! More components and servers mean greater complexity to manage, and this is where new tools and services come in. The application primitives are shifting from servers, block/file storage, and relational databases to containers, object storage, NoSQL databases, message queuing, stream processing, and serverless computing.
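The decoupling described above can be sketched in a few lines. This is a minimal, illustrative example (not code from the Igneous platform): stateless workers pull tasks from a shared queue, so throughput scales by raising the worker count rather than by buying a faster server. The names `task_queue`, `worker`, and `NUM_WORKERS` are assumptions for illustration.

```python
import queue
import threading

NUM_WORKERS = 4  # scale horizontally by raising this, not by a faster CPU

task_queue = queue.Queue()
results = queue.Queue()

def worker():
    # Each worker is a small, stateless component: it only talks to the
    # queue, so adding workers adds throughput without extra coordination.
    while True:
        item = task_queue.get()
        if item is None:  # sentinel: shut this worker down
            task_queue.task_done()
            break
        results.put(item * item)  # stand-in for real processing
        task_queue.task_done()

threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()

for n in range(10):
    task_queue.put(n)
for _ in threads:
    task_queue.put(None)  # one sentinel per worker

task_queue.join()
for t in threads:
    t.join()

print(sorted(results.queue))  # squares of 0..9, in sorted order
```

In a real cloud-native deployment the in-process `queue.Queue` would be replaced by a managed message queue, and the workers by containers or serverless functions, but the shape of the design is the same.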

By building our on-premises solution with this new toolchain in mind, our aim was to bring this transformation to the rapidly growing data-centric workloads that can’t or won’t move to the public cloud.

Related Content

Top 10 IT Trends for 2019

February 19, 2019

In 2019 and beyond, 451 Research sees a key shift in the world of IT—the breaking apart and coalescing of old silos of technology. Today, technological advances feed off each other to drive innovation. With this new paradigm of technological innovation, 451 Research shares 10 IT trends they predict for 2019.


“Interesting Times” for Unstructured Data Management

January 10, 2019

The expression “may you live in interesting times...” is subject to much debate. To some it is a celebration of the opportunities to be found in times of transition. To others, it is a cautionary phrase that should be heeded to avoid misfortune. No matter which interpretation you align with, there is no question that 2019 will be a year of significant opportunities and challenges for those responsible for the proper care, management, and stewardship of unstructured data.


Coming Soon: A New Approach to Protecting Datasets

December 17, 2018

Unstructured data has grown at a compound annual rate of 25% for the past ten years, and shows no sign of slowing. For most organizations, “data management” for unstructured data has really just meant capacity management, i.e. increasing capacity to keep up with data growth. This model worked at moderate scale, but as datasets have grown in size, complexity, and quantity—into petabytes of data with billions of files—it has overwhelmed budgets. Enterprises are now asking for data management strategies that do more than just provide continuously increasing capacity.
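The compounding in that growth figure is easy to check: 25% per year for ten years is not 2.5x but roughly 9x, which is why capacity-only management stops scaling.

```python
# 25% compound annual growth over ten years: (1.25)**10
growth = 1.25 ** 10
print(round(growth, 2))  # 9.31, i.e. roughly a 9x increase in a decade
```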
