
ON-DEMAND WEBINAR

Reduce IT Risk and Modernize File Environments at Scale with Igneous and Azure

Overview
Unstructured data continues to grow at an unprecedented rate, data centers are filling up, IT budgets are shrinking, and rapidly changing business environments require more operational flexibility than ever before.

Legacy systems are not able to keep up. Many businesses find themselves stuck with systems that require periodic refreshes and onsite management, and that increase risk.

Are you ready to reduce risk by ending your dependence on legacy technologies, getting better utilization from current resources, and taking advantage of the cloud, all while reducing costs?

Igneous offers SaaS data protection, archive, and visibility solutions optimized for file-intensive environments. The platform can be deployed 100% virtually: no data center visits, only software installation. Igneous helps you see what you have so you can take action, such as freeing up primary capacity, protecting everything you have with simple policies that hit the most aggressive SLAs, and better managing your unstructured data workflow with Microsoft Azure. Learn how one customer is leveraging Igneous and Azure to manage their file environment from data capture to cloud.

This webinar will show how a 100% virtual Igneous data protection and archive solution with Microsoft Azure enables users to remotely optimize and reduce their on-prem footprint, saving valuable time and IT resources.

Presenters
Christian Smith, VP Product at Igneous

Karl Rautenstrauch, Principal Program Manager, Partners at Azure Storage

Originally Recorded
Tuesday, April 21st, 2020 at 9:00am PT

 

  • Related Resources

    Igneous + Azure Integration Page
    Learn how Igneous provides SaaS Backup and Archive to Azure Blob Storage at Scale.

    Quantum Spatial Case Study
    Learn how Quantum Spatial, North America’s largest geospatial services firm, used Igneous and Azure to manage their growing unstructured data footprint.

    Azure ExpressRoute Overview
    ExpressRoute lets you extend your on-premises networks into the Microsoft cloud over a private connection facilitated by a connectivity provider. Igneous can use your Azure ExpressRoute to quickly and securely move your backup and archive datasets from on-premises to Azure.

    Azure Blob Storage Tiers Overview
    Azure Storage offers different access tiers, which allow you to store blob object data in the most cost-effective manner. The available access tiers are Hot, Cool, and Archive; this document outlines the differences between them.

  • Full Webinar Transcript

    CORPORATE PARTICIPANTS

     

    Caroline Thomas

    Igneous – Senior Product Marketing Manager

     

    Christian Smith

    Igneous – VP – Product

     

    Karl Rautenstrauch 

    Azure Storage – Principal Program Manager – Partners 

     

    ................................................................................................................................................................................................................................

     

    PRESENTATION

     

    Caroline Thomas

    Hello everyone and welcome to today's webinar, Reduce IT Risk and Modernize File Environments at Scale with Igneous and Azure. My name is Caroline Thomas and I'll be moderating the session. Today we'll be covering how to reduce risk in your organization by ending your dependence on legacy technologies, getting better utilization of your current resources, and taking advantage of the cloud, all while reducing costs. 

    Please feel free to put any questions into the Ask a Question box at any time and we will answer all questions at the end of the webinar. So, with that, I'd like to introduce our featured presenters today. Karl Rautenstrauch, Principal Program Manager of Partners at Azure Storage, and our own VP of Product at Igneous, Christian Smith. Karl and Christian, thank you for joining us today. 

    Christian Smith

    Thank you. So, hello, everybody, this is Christian Smith. I'm VP of Product here at Igneous. I want to talk to you today first about some of the drivers of data overall, then about some of the challenges people are facing; then I'll hand it off to Karl to talk about Azure, and then we'll come back and talk about modernizing environments for data management.

    So, at Igneous we've been focusing exclusively on the fast-growing segment of unstructured data. You can think of this as the file data that exists in the enterprise, and a big change has happened in this segment: beyond it just growing and doubling every two years, the fidelity of the machines generating this data continues to get larger and larger, and this is no longer the data that was generated by people sitting at their desks in home directory environments. This is data generated by machines. It could be in segments such as life sciences or media and entertainment, or in places like high-tech manufacturing or oil and gas. It's where there are machines generating data, such as lattice light-sheet microscopes, or a big HPC cluster that's analyzing data. The challenge is that this data is strategic to the organization both short term, in producing results, and long term, in being able to be reused again in the future, and so retention is very important in this environment.

    Some of the big challenges in this environment are that the legacy infrastructure that has traditionally supported it is aging, and the way it has been run is pretty operationally intensive. When we talk about where this data has historically landed, it's been on devices such as NetApp, or Isilon, or Qumulo, or Pure FlashBlade. These devices tend to absorb and process this data, and then functions like data protection either replicate to secondary data centers or go off to some sort of tape library. The challenge is that as this data continues to grow, SLAs are starting to be missed. It used to be that you always had that one export or one directory that couldn't be protected because of a high file count; in today's day and age, that file count or that capacity is now eclipsing 10 to 15% of the environment, and when you look at the mechanism for deploying these types of environments, they tend to have a high operational cost. A trend that we're starting to see, as the cloud has been coming into workflows, is that secondary data centers are both full and looking to be reduced. There's nowhere to put this secondary data, but the challenge is that a lot of these solutions really don't have a way to leverage the cloud as effectively as you would like, and so you're still stuck with some secondary environments.

    Well, the time is right for data protection to Azure, and the time is now to start looking at cloud: how data has been growing in the cloud and how data has been consumed in the cloud. For Igneous and Azure partnering together, look, we spent a lot of time looking at how you can adopt cloud to modernize your backup and archive environment. There's the agility and flexibility of workflows and use cases, the economies of scale, not having to deploy or keep expanding infrastructure in your data center, and bringing simplicity of operations into your environment, which really means that organizations with data-heavy workflows can start thinking about eliminating secondary data centers. And in today's day and age, where getting into those data centers is becoming increasingly challenging, just managing that data from your couch, being able to remotely manage the backup and archive process without having to go into the data center to touch infrastructure, is a huge benefit. And then look at the secondary effects around networks: networks are now ready, there are more points of presence than ever, and direct connections into Azure are more prevalent and more cost-effective than ever.

    And then lastly, when you take the end-to-end economics together, and Igneous does a lot of these TCO analyses with customers, when you factor in the tiers, the data, and the network, the economics are better than doing anything on-premises these days. We've passed the point where standing up on-premises infrastructure for data protection and archiving is the cheaper option.

    So, first I want to hand it over to Karl, let him talk about Azure, and then we'll talk about how we leverage Azure for unstructured data for backup and archive environments. 

    Karl Rautenstrauch

    Great. Thank you, Christian, and thanks for inviting me to share this forum with you today. I really appreciate that. Thanks to all the attendees. I appreciate you taking time out of your day, and if nothing else, for all of those who are stuck at home like myself, hopefully we're a welcome distraction from the routine that we've all fallen into during these very, very unique times.

    So, I'm going to start by talking just really briefly about some of the fundamentals in our cloud storage platform, for those who may not be familiar with Azure or with public cloud storage offerings. What you see from us, and from others, is the type of storage media and the protocols that you are used to in your data centers for supporting the applications that run on your virtual machines, within a container ecosystem, or in a microservices architecture. We provide traditional block storage, that's our disk storage, with different price performance tiers. We provide object storage, which we're going to spend the bulk of our time talking about today; that's our most cost-effective and highest-scale storage infrastructure, and the storage infrastructure that Igneous takes advantage of with their innovative products and capabilities. Then there's our file storage infrastructure for those who need to move user home directories, departmental shares, or software installation repositories; we have storage capable of doing that as well. We also offer means to move that data to Azure, which our friends at Igneous support, and we're going to talk about that in another slide or two, as well as means to integrate your on-premises environment with Azure from a machine learning and IoT sense, in the form of our Azure Stack family. So, some really innovative capabilities that, along with Igneous, allow you to extend that on-premises footprint into Azure and take advantage of it.

    But again, we're going to talk about the object storage platform today. That's the platform that Igneous supports and that really brings the most benefit to you in terms of scale, management, resiliency, and cost effectiveness. What we do is provide different levels of resiliency at different price points based on the criticality of your data. The least redundant option we have is three copies of your data within a region, and a region in Azure terms is a group of data centers within synchronous replication distance. So, you have at least three copies with the bare minimum option that we provide. You can step that up to three copies that are guaranteed to be within three different physical data centers in that region; that's our zone-redundant storage. And then we move up to our geographically redundant storage options, which provide six copies of your data: three synchronous replicas in the primary region, and three more in a secondary region, asynchronously replicated. So whatever the failure scenario is, whatever the criticality of your data is and the number of copies you need to maintain, we can provide a back-end storage infrastructure to do that, and we can do it cost effectively.
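    As an illustration of how these redundancy levels are selected in practice, here is a minimal sketch using the azure-mgmt-storage Python SDK; the subscription, resource group, and account names are placeholders, and the SKU names map to the options Karl describes.

    ```python
    # Minimal sketch: choosing a redundancy level when creating a storage
    # account. <subscription-id>, <resource-group>, <account-name> are
    # placeholders, not values from the webinar.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.storage import StorageManagementClient

    client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

    poller = client.storage_accounts.begin_create(
        "<resource-group>",
        "<account-name>",
        {
            "location": "westus2",
            "kind": "StorageV2",
            # Standard_LRS: three copies within one region (bare minimum).
            # Standard_ZRS: three copies across three physical data centers.
            # Standard_GRS: six copies, three synchronous in the primary
            #               region plus three asynchronous in a paired region.
            "sku": {"name": "Standard_GRS"},
        },
    )
    account = poller.result()
    print(account.sku.name)
    ```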

    So, I love what Christian had to say about the state of the network inside of customers' environments. I can remember, oh my goodness, 12 years ago when I was a system architect and had to beg, plead, and borrow to get an OC-192 circuit between our two data centers so that I could provide replication and redundancy for critical business applications. I swear, I practically had to promise my firstborn son in order to get that OC-192. What we're seeing now is that our customers have tremendous connectivity capabilities, with ample, low-latency bandwidth to our Azure regions, that allow you to do more and more with running production applications in the cloud, without a performance hit, and to use tools like Igneous to extend your on-premises data footprint into Azure as well. One of the great capabilities of ours that Igneous supports is ExpressRoute, which provides a secure, dedicated circuit from your site, either through an exchange provider like Equinix, or through your ISP, like Verizon or AT&T, to our data centers, allowing secure access to our services over a high-bandwidth, low-latency connection. One thing that you hear frequently with cloud solutions, and cloud storage solutions in particular, is that there's a high cost to retrieve your data from cloud, but with our ExpressRoute direct local option, we actually waive those fees and you can move data in and out at will without any cost implication.

    Now Igneous does some things that are really unique, where they actually have optimized their transfers to and from Azure to reduce those costs even further. And what cost am I referring to? Well, we, like traditional storage providers, offer different price performance tiers, and when you look at what we provide, and at use cases like extending primary storage into the cloud to eliminate or reduce the growth that you have in your on-premises environment, you really want to look at the three storage tiers that I have highlighted here: Hot, Cool, and Archive.

    What's the difference between the three? Between Hot, Cool, and Archive there is a difference in recall experience. You can retrieve your data faster from Hot and Cool storage, which are online storage solutions, than you can from the offline Archive storage. Archive storage is by far the cheapest per-terabyte per-month cost that we offer, and the cheapest offered anywhere in the public cloud, but it carries a higher cost to retrieve that data. So using an intelligent solution like Igneous, which can help you map the heat of that data, how often it is accessed and what tier it should belong on, can allow you to cost-effectively leverage Archive storage and save a tremendous amount of money versus what you're paying to store your data today.

    Hot and Cool are going to be higher storage costs in terms of per-terabyte per-month, but they cost much less to access. So, if it is data that you may need to retrieve on a frequent or relatively frequent basis, Hot and Cool are the better tier choices for you. And again, Igneous can help you determine the proper tier for a particular data set, or the percentage or portion of that data set that may be more appropriate for one of the cooler, lower cost-per-terabyte tiers of storage.
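    For readers who want to see what tier placement looks like at the API level, here is a minimal sketch using the azure-storage-blob Python SDK; the connection string, container, and blob names are placeholders, and this is not Igneous's internal code.

    ```python
    # Minimal sketch: setting a blob's access tier and later rehydrating it.
    from azure.storage.blob import (
        BlobServiceClient, StandardBlobTier, RehydratePriority
    )

    service = BlobServiceClient.from_connection_string("<connection-string>")
    blob = service.get_blob_client(container="backups", blob="chunk-0001.bin")

    # Archive: cheapest per-terabyte per-month, but offline; reads require
    # rehydration, which takes hours.
    blob.set_standard_blob_tier(StandardBlobTier.Archive)

    # Later, bring it back to an online tier; High priority starts the
    # rehydration sooner than Standard.
    blob.set_standard_blob_tier(
        StandardBlobTier.Hot, rehydrate_priority=RehydratePriority.High
    )
    ```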

    And for those of you who have really large footprints, we do have some options with our reserved capacity offering that allow you to purchase that storage at a pretty significant discount for a term of one to three years and receive even more savings on top of our advertised retail price.

    But really, any environment requires appropriate security measures to make sure that your data is not exposed, that it is not available where you don't want it to be, and that it's protected where and when it should be, in every way, shape, and form that it's stored. So we have several mechanisms that allow you to make sure that your data is secure and protected against accidental or malicious activity. One is, of course, encryption at rest, with either Azure-generated and maintained keys or your own customer-managed keys. You can protect that data both in flight to our storage repositories and at rest [audio]. That's certainly our best practice. We also give you the ability to lock a resource, like one of these storage repositories, to make sure that it can't be accidentally deleted; if someone were doing some spring cleaning, trying to remove what they may perceive as an unused asset, you can prevent that account, that storage repository, from being deleted. Even more importantly, you can protect the content within it from being accidentally or maliciously deleted, and allow for recovery, with our Soft Delete capability, and we have some more sophisticated functionality coming in the form of account-level backups that will allow you to protect that content to an external repository, basically creating another replica of it, to make sure that you're protected in all eventualities in terms of a deletion or accidental overwrite. If you want to kick it up a notch even further, if you are in a regulated industry like I was, where I was under SEC 17a-4 regulations for some of the data that we had to maintain, we support an immutable storage offering, what many of you probably know as Write Once, Read Many, which allows you to set policies and prevent that data from being deleted, or overwritten, or modified for a set period of time, and that period of time is up to you. You have complete control over that. So, if you're in an industry like I was in, maintaining records of financial or commodities trading, and you need to make sure that you can attest to your ability to produce that data in an unmodified fashion, we can meet those needs, and Igneous can help you by integrating with this back-end immutable WORM storage infrastructure.
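    As one concrete example of these protections, here is a minimal sketch of enabling blob Soft Delete with the azure-storage-blob Python SDK; the connection string and the 14-day retention window are illustrative assumptions, and immutability policies and resource locks are configured separately through Azure management interfaces.

    ```python
    # Minimal sketch: turn on blob soft delete so deleted or overwritten
    # blobs remain recoverable for a retention window (14 days assumed).
    from azure.storage.blob import BlobServiceClient, RetentionPolicy

    service = BlobServiceClient.from_connection_string("<connection-string>")
    service.set_service_properties(
        delete_retention_policy=RetentionPolicy(enabled=True, days=14)
    )
    ```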

    So, a robust platform, enterprise class, incredibly cost effective, simple to use, easy scaling, no data migration. As we change out hardware on the back-end, that's our responsibility, not yours, and Igneous can help you take advantage of it. 

    Christian, thank you very, very much. I'm going to stay on and monitor the questions and respond to those, and I really appreciate the time here this morning, and our friends on the east coast and in Europe this afternoon. 

    Christian Smith

    Thank you, Karl, I appreciate it. So, Karl talked to you about all the great capabilities of Azure. What we're going to talk about now is, OK, how do we leverage those capabilities, call it NAS to Azure, and the path there with a data management platform really comes down to three capabilities.

    First, I have to see my data. In the world of unstructured data, we operate pretty blind. We know that data exists out there, we know the capacity of that data, but we actually don't know where it is, how old it is, or how it's accessed; we have no visibility into what goes on. So we'll talk, first, about how you see that data. Second, how do you archive that data? How do you pick the old stuff out and start archiving it to Azure to free up that NAS capacity, so you can bring new projects, new workflows, new environments into your NAS unstructured data environment? And then lastly, how do you protect the rest, and do it with aggressive SLAs, so that you're protecting your data time and time again and able to restore it whenever you need to?

    And so really, data discovery provides that visibility for all your data. This is a pretty simple process to get up and running: you deploy a VM in your environment, we connect to any of your file sources, and we go out and scan that data. There's nothing more to deploy than a small VM to give you these dashboards of your data. Time-to-insight is really fast, minutes to hours, even when you're talking hundreds of terabytes to petabytes of data, and it works in a way that is globally distributed, so it doesn't matter whether I have one site that's really big or multiple sites out there. The value is a really lightweight way to get this nice dashboard that tells you exactly what your footprint looks like: a heat map that says here's what's hot, here's what's cold, based on aging or access time.
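    To make the idea concrete, here is a minimal sketch of the kind of heat map a discovery scan produces, bucketing capacity by last-access age using plain filesystem metadata; this is an illustration only, not Igneous's scanner, and the age buckets are arbitrary assumptions.

    ```python
    # Minimal sketch: bucket file capacity by last-access age to build a
    # hot/cold heat map of a file tree.
    import os
    import time
    from collections import Counter

    AGE_BUCKETS_DAYS = [30, 90, 365]  # assumed, customizable aging parameters

    def scan(root: str) -> Counter:
        histogram = Counter()  # bucket label -> total bytes
        now = time.time()
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                try:
                    st = os.stat(os.path.join(dirpath, name))
                except OSError:
                    continue  # skip files that vanish or deny access mid-scan
                age_days = (now - st.st_atime) / 86400
                label = next((f"<{d}d" for d in AGE_BUCKETS_DAYS if age_days < d),
                             ">=365d")
                histogram[label] += st.st_size
        return histogram

    print(scan("/mnt/nas/projects"))  # e.g. Counter({'>=365d': ..., '<30d': ...})
    ```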

    In this view, you can customize any of the aging parameters that you want; it dynamically refreshes all the data underneath to match, and then you can take surgical action. Now you can archive data exactly where you want, by project or by directory. The world assembles itself around projects, and that seems to be the best way to extract data that is big and hasn't been touched in a long time. We like to make this really drop-dead simple, so it's a one-click action: just click on the directory you want to archive and it will kick off the whole archive process to Azure. You pick your tier in Azure, whether it is Hot, Cool, or Archive, and then set the other options associated with that, whether you want it immutable, or you want the SEC 17a-4 style protection with expiration. We don't care where the data is coming from: it can be on any NAS system, it could be Isilon, NetApp, Qumulo, Pure, it could be Windows Server, ZFS, or GPFS. As long as there is an SMB or NFS protocol to get to it, we can move that data off into the Azure archiving tiers.

    The big thing here, though, is that since this data is not accessed very often, once you archive it, we make it searchable. Now you have a really great opportunity to reuse that data in a much more predictable way, because once it's been archived, I can search for that data again and either extract or restore individual files, portions of an archive, or the whole archive, anywhere it's needed. And we keep all the history associated with it: where it came from, the date it was archived, and a full metadata tree that you can browse on the system or search in that view.

    Then once you've archived it, you want to protect the rest. Since Igneous is a SaaS-based platform, we make it really easy: that same VM that was deployed in the data center can also protect all that data off to Azure. Again, we support all of those tiers in Azure that Karl mentioned earlier, so you pick which tier you want based on the recovery point or recovery time objectives that you're looking to hit. We will import the namespace from any of these devices, which means you're managing at more of a bulk level. We API-integrate with the top four, but we also support any other platform that speaks SMB or NFS.

    Part of this is that Igneous was really built around managing files at scale. We were purpose-built for these unstructured environments, which means that if you have big directories, small directories, or file-dense directories, we were built to handle that, and do it in a way that lets you hit those daily SLAs really aggressively. We don't really fret about petabytes of data or billions of files, because that's what we spent our time optimizing around.

    A big topic lately has been ransomware, and there's no better way to protect yourself against ransomware than creating this physical separation between primary NAS devices and data in Azure, especially the archived tiers in Azure, or leveraging the immutability features in Azure. You now have a copy that cannot be altered in any way, shape, or form to protect you against any ransomware events, and it’s not only decoupled from your primary NAS environment, it is also physically distanced from everything that you have, with its own set of permission models that’s managed through Igneous, so you have a completely isolated environment for all your protection data. 

    Let's talk about cloud usage, as Karl mentioned earlier. Just deciding that you want to leverage the cloud has some differences associated with it. Those differences come in the way that cost is managed, so we like to talk about how you make the cloud efficient, how you really leverage it with a great TCO. Some of that is, OK, a lower price, but that's not really the whole story; you have to inherently think about the cloud and the way that you manage cost in the cloud.

    Let me give you an example, because that's the best way to describe it. If you have a petabyte of data and it's made up of one-megabyte files, that's about a billion files. If I just took my standard… I'm going to take all the data off my primary systems and back that up to Azure, I'm going to hit $50,000 in transaction costs just to back up that one petabyte of data. That's the way a lot of the cloud providers are structured, because there are real network costs associated with that. We help customers mitigate that cost: when we're doing this parallel ingest, we take that data, we compress it, we put it into chunks, and we pick the optimal chunk size that we put into Azure, so that we mitigate those transaction costs.

    So, that very same example of taking the one petabyte of data and moving it into the Azure Archive Blob store directly will cost you $537 as opposed to $50,000.
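    The arithmetic behind those two figures works out as follows; this is a back-of-the-envelope sketch in which the per-operation price and the 100 MB chunk size are assumptions inferred to reproduce the numbers quoted above, not published Azure rates.

    ```python
    # Back-of-the-envelope sketch of the ingest transaction-cost math.
    # PRICE_PER_10K_WRITES and the 100 MB chunk size are assumptions chosen
    # to reproduce the webinar's figures, not quoted Azure prices.
    PRICE_PER_10K_WRITES = 0.50  # USD per 10,000 write operations (assumed)
    PB = 1024 ** 5
    MB = 1024 ** 2

    def ingest_transaction_cost(total_bytes: int, object_size_bytes: int) -> float:
        """Cost of the write operations needed to land total_bytes in object
        storage as objects of object_size_bytes each."""
        num_writes = total_bytes / object_size_bytes
        return num_writes / 10_000 * PRICE_PER_10K_WRITES

    # One write per 1 MB file: ~1.07 billion operations.
    print(f"${ingest_transaction_cost(PB, MB):,.0f}")        # ~$53,687 (the "~$50,000" above)

    # Compress and pack files into ~100 MB chunks before upload.
    print(f"${ingest_transaction_cost(PB, 100 * MB):,.0f}")  # ~$537, matching the figure above
    ```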

    Then you have to think about the secondary effect here: I have two things I have to do now. The first is expire data to meet business requirements. The second is that once I expire data, I have to decide when I'm actually going to compact that data in the cloud. I don't want that data to just grow indefinitely, especially if I have a retention policy associated with it; otherwise my costs are just going to continue to go up and to the right.

    What Igneous has done is separate the process of expiration from compaction, so you can set your business policies for data to expire at a given point in time, and we will remove that data from the index so it's no longer recoverable. That doesn't mean we immediately go clean up the data. Only when one of these chunks falls below a certain percentage full, where it's cheaper to read that chunk, compact it, and rewrite it than to let it sit in Azure in perpetuity, will we take that action. So we're managing the costs associated with the expiration process as well.
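    Here is a minimal sketch of that expire-then-compact decision, under assumed prices and planning horizon (none of these constants are Igneous's actual parameters): expiration only drops index entries, while the chunk is rewritten once storing its dead bytes costs more than a one-time rewrite.

    ```python
    # Sketch: compact a chunk only when keeping its expired ("dead") bytes
    # would cost more than rewriting the chunk once. Prices and the
    # planning horizon are illustrative assumptions.
    ARCHIVE_STORAGE_PER_GB_MONTH = 0.002  # USD, assumed archive-tier rate
    REWRITE_COST_PER_GB = 0.03            # USD, assumed read+write+transactions

    def should_compact(chunk_gb: float, live_fraction: float,
                       horizon_months: int = 36) -> bool:
        """True when storing the dead bytes over the horizon costs more than
        one read-compact-rewrite pass over the whole chunk."""
        dead_gb = chunk_gb * (1.0 - live_fraction)
        cost_to_keep = dead_gb * ARCHIVE_STORAGE_PER_GB_MONTH * horizon_months
        cost_to_rewrite = chunk_gb * REWRITE_COST_PER_GB
        return cost_to_keep > cost_to_rewrite

    # A 100 GB chunk that is only 20% live: ~$5.76 to keep 80 GB dead for
    # three years vs ~$3.00 to rewrite once, so compaction wins.
    print(should_compact(chunk_gb=100, live_fraction=0.2))  # True
    ```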

    Now, the alternative is that you just keep level-zeroing every month and resetting your baseline. As you start leveraging the cloud, you can't keep resetting your baseline; you just don't have enough time in the day to run a new level zero every month the way a traditional tape rotation works.

    Hopefully, that explains how we make Azure economical for these really file-centric environments. 

    Then, about Igneous: like I mentioned, we were really designed around scale and files. We are a scale-out VM architecture, so when you have that big site, you can keep adding VMs to continue to scale out that environment and hit your most aggressive SLAs, or if you have multiple locations, you can deploy these globally. Because we are SaaS, you always have a single pane of glass that you can work from. It really makes this easy and efficient to do. And by the way, you could be sitting at home managing this data right from Igneous, whether you're talking about the visibility, the archive capabilities, or the data protection capabilities.

    The second part is really the secret sauce of why we were designed around file and how we can do what we do with file. We have some really core technology around how we scan for data, how we move data, and how we index that data. We don't worry about indexes that get to billions of files, or catalogues where you would have to stand up another catalogue server; that is all handled in the SaaS offering, and our index is capable of supporting trillions of entries, those entries being what you see in DataDiscover and what we use as the counter-snapshot when scanning for change rate in file systems.

    Then our AdaptiveSCAN and IntelliMOVE are about how we efficiently move through file systems to find change rate and move that data off into the cloud, into Azure. 

    A big portion of what we do comes from realizing that when you're moving petabytes, or even hundreds of terabytes, of data into Azure, it's going to take a little while to get that first level zero through. So we have what we call Latency Awareness and Dynamic Throttling: as we go through the day moving data off, we're constantly measuring the latency of your primary NAS systems, and we will throttle back the number of threads we're using so as not to impact your applications.
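    A minimal sketch of what latency-aware throttling can look like, in the spirit of what Christian describes; the thresholds and the halve/increment policy are illustrative assumptions, not Igneous's actual algorithm.

    ```python
    # Sketch: adjust the worker-thread budget from observed NAS latency.
    # Back off sharply when latency is high, ramp up gently when it is low
    # (an AIMD-style policy; all constants are assumed).
    LATENCY_HIGH_MS = 50.0
    LATENCY_LOW_MS = 20.0
    MIN_THREADS, MAX_THREADS = 1, 64

    def adjust_threads(current: int, observed_latency_ms: float) -> int:
        if observed_latency_ms > LATENCY_HIGH_MS:
            return max(MIN_THREADS, current // 2)  # production I/O suffering: back off
        if observed_latency_ms < LATENCY_LOW_MS:
            return min(MAX_THREADS, current + 1)   # headroom available: ramp up
        return current                             # in the comfort band: hold steady

    threads = 16
    for latency in (12.0, 14.0, 65.0, 30.0, 8.0):  # latency sampled each interval
        threads = adjust_threads(threads, latency)
        print(latency, "->", threads)              # 17, 18, 9, 9, 10
    ```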

    So, we say you can run 24/7 with Igneous, we will always adapt and adjust to what's going on to not disrupt you and be really lightweight and easy on your systems. 

    The last thing I want to talk about is a customer example: Quantum Spatial. If you don't know what Quantum Spatial does, they fly planes and take pictures, all sorts of digital imagery, everything from lidar and ortho-imagery to HD video, thermal infrared, and hyperspectral imagery, using their planes and satellites. If you think of the workflow of Quantum Spatial, they take data that comes off these planes or satellites, they process it into an end result, and then they provide it to their customers as an insight, based on what each customer's contract is. It could be surveying land, it could be about crops, it could be about damage from storms; any number of reasons that Quantum Spatial will be contracted to take their fleet of planes or their satellites to capture this data, process it, and share it, so people can make decisions. It turns out Quantum Spatial has seven sites, so it's not only processing, gathering, and archiving data in the central sites where the bulk of that work is done, it's also a distributed environment where work is performed on the edge.

    From Quantum Spatial's perspective, the workflow is capture, then process. While they're processing and working on that data, Igneous is backing it up, so we're hitting pretty aggressive SLAs on the amount of change rate, the amount of data that's coming in. We're protecting it to Azure, in this use case, and we're doing it across multiple sites, so there are multiple VMs deployed across the Quantum Spatial environment. As they protect that data, and a project gets to its end of life, they say, 'OK, we're done with this, we've delivered this to a customer.' It sits for a period of time, then they archive that data off into Azure, and it's not a passive archive: they're frequently bringing that data back on-premises again to reprocess and enhance it. And they are using DataDiscover to drive that process, to get good visibility into their workflow, their change rate, and where their data is located, to make really intelligent decisions about it.

    What they have is a global site with multiple NAS devices, it’s petabytes of data and there's petabytes of data moving around on an ongoing basis. 

    That's it. Thank you for your time, everybody, and we're here to take questions as they come into the chat. Caroline.

    Caroline Thomas

    Karl and Christian, thank you so much for that presentation, really great information on how to get better utilization of your resources by removing dependencies on legacy technologies and taking advantage of the cloud with Igneous and Azure. 

    To all attendees, as a reminder, if you still want to add in questions, please use the “Question and Answer” box and if we don’t get to everything today, we can follow-up offline. We have quite a few questions, so I'm going to get right to it. 

    Christian Smith

    Caroline, could I add just one thing here? One thing I failed to mention: for Igneous DataDiscover, we're running a promotion right now, or not even a promotion, just something to help companies that are impacted by COVID-19 to continue their operations. If you go to our website and look around, you can get six months' use of Igneous DataDiscover for free to start getting insights into your environment. I suggest everybody on the call go to www.igneous.io and try it out. It's not going to take much time, and you can get better visibility into what you're doing.

    Back to you, Caroline.

    Caroline Thomas

    The first question, “When we archive data to Azure deep archive tiers, how fast can we get those files back if we need them?”

    Karl Rautenstrauch

    I will go ahead and take that one, so [audio] and there are two options, so on our public documentation and I will be happy to provide a link [audio], we have two different retrieval SLAs, so the [inaudible] retrieval SLA measures the retrieval in hours, and it can take up to [audio] hours to retrieve content from the archive [audio] we document publicly. We just introduced a priority retrieval tier which starts the retrieval in an hour. You have [audio]… happy to [audio] documentation on that. 

    Christian Smith

    Perfect, and Igneous starts the rehydration in a parallel way, so as soon as that first byte starts becoming available, we will start transferring that back on-premises. 

    Caroline Thomas

    “We have two development centers with separate NAS instances at each site. Can we use Igneous to back up these two NAS environments to a single Azure destination?”

    Christian Smith

    Yes. We can back them all up to a single Azure destination from both sites, so deploying two different virtual machines on the sites can still target the same centralized location for data. 

    Caroline Thomas

    “How does ExpressRoute help with ingress and egress fees?”

    Karl Rautenstrauch

    We have a couple of different ExpressRoute options available, just like the retrieval options [audio] there. With ExpressRoute, there are a couple of different variants of the offering you can choose that waive the egress fees. Pulling data back to your site, for any reason, is at absolutely no cost, and just like the archive information, that is all public and available on both our pricing page and in the ExpressRoute documentation.

    Caroline Thomas

    “How do we ensure our data is being backed up as expected from NAS to Azure? Does Igneous do the verification or does Azure do that?”

    Christian Smith

    We certainly at Igneous have our own verification of all data that goes into Azure: we're fingerprinting every file, we're verifying that fingerprint all the way through, and then again when we read the data back out. Then in Azure, Karl can talk about the resiliency there and the layers of resiliency associated with it, but they have their own methods for data integrity also.

    Karl Rautenstrauch

    [Technical difficulty]

    Christian Smith

    Karl, your audio is really bouncing and jumping right now. It's kind of funny in this day and age: as we all work from home, the internet has gotten better and the phone has gotten worse. We are dialed into this call and it's very hard to hear you at this point. Maybe move or something and try it out; if not, we can take the questions and I can answer them.

    Caroline Thomas

    Why don’t I move onto the next question and we can follow-up in email with that one? 

    “What was the economic impact of the solution you deployed at Quantum Spatial?” 

    Christian Smith

    I don't have the full dollars and cents in front of me right now, but what I can tell you is that they were in a place where there were a lot of challenges with existing infrastructure, and the options that were on the table had failed multiple times. They were having a real challenge maintaining an SLA and being able to manage their data appropriately. We're working on a total economic impact study, but it started with, 'I need to be able to do this, and I need to be able to do it cost efficiently', and we matched and exceeded their expectations on that.

    Caroline Thomas

    This might have been for Karl, but maybe you can take it, Christian.

    “How do we decide which Azure tier is right for us?”

    Christian Smith

    That's a good question, and I think it has to do with how fast you need to get your data back. Do you need time-to-first-byte of zero, or can you wait the time that Karl mentioned, like 12 hours, to get data out of the Azure Archive Blob store? For the most part, the people we've seen in this category have SLAs in the days, because their data tends to be on tape, and it takes days to get back. Anything that goes to the Azure Archive Blob store is actually more cost-effective and has faster retrieval times back to on-premises. But, of course, there's Cool, which has time-to-first-byte of zero, and that's the next economical tier up. Then there's still the optionality, if you want to accelerate your restore time from the Archive tier: you just heard Karl talk about that, there's an option that says I want to get it back faster and have a much shorter SLA around it.

    For the most part, people pick the tier through a negotiation with their business teams over what the SLA is for RTO, and then measure that against the costs associated with it. We've done a bunch of TCO work around that and are happy to go through your specific case if you desire.

    Karl Rautenstrauch

    How’s my audio now, any clearer? 

    Christian Smith

    It’s great. 

    Karl Rautenstrauch

    Perfect. I adjusted three feet in my office. Just to add onto that really quickly: understanding your access patterns and your need for your data is really the best way to determine the right tier, and that's something that DataDiscover can help you with. It can help you understand what percentage and what type of data is retrieved, and how frequently, and then you can map that to the proper tier in Azure.

    Caroline Thomas

    Karl and Christian, we really thank you for your time today, and thanks to everyone who joined our webinar. We will be concluding the webinar now, and any questions we didn't get to answer, we will definitely follow up on by email. Just so you know, this webinar will be available on demand via the same URL that you used for the live session today.

    With that, thank you very much everyone for your time and have a great rest of your day.