World’s First Hybrid and Multi-Cloud Data Fabric with Initial support for VMware Cloud – Welcome Datrium CloudShift!

Ok, you got here! Before you start reading and give up in the middle of a long post, let me tell you that what Datrium is announcing today is a generational improvement over existing Disaster Recovery and Hybrid Cloud platforms – so it is crucial to understand how we got here.

If you just want to learn about the announcement, go to the section “Welcome, CloudShift” or read the following: “CloudShift delivers complete but straightforward run-book orchestration for multiple use-cases, and Cloud DR provided as a cloud service. CloudShift enables customers to utilize the data already in the cloud, via Cloud DVX, to failover, failback, instantiate and migrate applications on various clouds, with the first release supporting VMware Cloud on AWS (VMC).” Here is the Press Release.

 

The Beginning

Delivering a true Hybrid cloud experience is hard, really hard, and anyone who says that it can be achieved without an integrated data fabric that spans on-prem and public clouds with cost-effective data movement is just fooling themselves. I don’t say that lightly, because a Hybrid cloud is a journey and requires multiple parts of the stack to be perfectly aligned for successful delivery.

Datrium Hybrid Cloud has always been on the roadmap, but to deliver the experience customers desire we had to go through multiple product phases.

We started by building the most scalable and fastest converged platform in the world, one that can run any tier-1 app at any scale – DVX has been validated to be 10x faster than the fastest HCI and 5x faster than the fastest AFA, at 1/3 of the latency.

We also knew that at any serious scale data services cannot be optional, so we took a zero-knobs approach that brings simplicity to the platform but, more importantly, enables Datrium to defeat data gravity (more on that later). As part of the DVX platform, we also delivered native and robust scale-out backup that eliminates the air-gap between traditional primary storage and backup, providing data fidelity across the entire stack.

The 2nd delivery phase was the ability to leverage integrated universal deduplication combined with replication to create availability zones across on-premise and public cloud. As part of the public cloud component, we shipped Cloud DVX, a native and integrated Backup-as-a-Service solution that lives in the cloud and provides cost-effective long-term archiving.

Now comes the 3rd delivery phase of the vision: we are shipping the tools necessary to leverage Cloud DVX to orchestrate and instrument Disaster Recovery-as-a-Service on public clouds, with the first release supporting VMware Cloud on AWS. Yes, all that with a great partnership with VMware.

There’s a lot more to come as part of the platform vision, but let’s now take a step back to understand how all the pieces fit together.

To make Hybrid Cloud work, we need to defeat Data Gravity

Data Gravity is the term used to describe the hypothesis that data, like a planet, has mass, and that applications and services are naturally attracted to it – the same effect gravity has on objects around a planet.

 

Dave McCrory first coined the term Data Gravity to explain that because of Latency and Throughput constraints applications and services will or should always be executing in proximity to Data — “Latency and Throughput, which act as the accelerators in continuing a stronger and stronger reliance or pull on each other.”

This Data Gravity theory is highly applicable to the evolution of datacenters and clouds. The rise of host-attached Flash devices, and the ability to utilize them on local computing buses vs. over a network, is a clear indication that applications benefit from data proximity.

In the case of data movement between clouds, the real puzzle is how to distill data down to its most essential fundament: a sequence of bits and bytes that never repeat themselves. Also known as data deduplication, this technology has been around for many years, but it has always been used in a self-contained manner – data is deduplicated in a container, in a drive, in a host, in a cluster, or on the wire. When it comes to application and data mobility, however, we are still bound by latency and throughput, making such data movement hard, particularly when addressing vast data lakes.

If it were possible to de-duplicate application data at a global level – across various datacenters, across clouds, across data lakes, and across systems – then we would be guaranteeing a very high level of data availability in every part of the globe, because data becomes ubiquitous and universal.

With Datrium, an application running in a private datacenter has each data block de-duplicated and hashed locally, creating a unique fingerprint. These fingerprints are compared to the hashes already available on multiple Cloud DVX deployments, and only the outstanding, unique data is transferred before migrating the application and data from one location to another – in a fraction of the time and bandwidth required with traditional mechanics.
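To make the mechanics concrete, here is a minimal sketch of fingerprint-based deduplication – the block size, hash choice, and function names are illustrative assumptions, not Datrium’s implementation:

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative fixed block size


def fingerprints(data: bytes):
    """Split data into blocks and hash each one to a unique fingerprint."""
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        yield hashlib.sha256(block).hexdigest(), block


def blocks_to_transfer(data: bytes, remote_hashes: set):
    """Return only the blocks whose fingerprints the destination lacks."""
    missing = []
    for digest, block in fingerprints(data):
        if digest not in remote_hashes:
            missing.append((digest, block))
    return missing


# A destination that already holds most of the data needs very few blocks:
data = b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE
already_there = {hashlib.sha256(b"A" * BLOCK_SIZE).hexdigest()}
todo = blocks_to_transfer(data, already_there)
# Only the unique "B" block must cross the wire; the "A" blocks dedupe away.
```

The more fingerprints the destination already knows, the smaller `todo` gets – which is exactly why a large shared pool of de-duplicated data shrinks migration time and bandwidth.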

Universal de-duplication makes data ubiquitous and universal, common to every application and system, while metadata takes on a vital role: building datasets, enforcing policies, and governing distribution. The bigger the pool of de-duplicated data available in a given location (AWS, Azure, GCP, On-Prem), the less bandwidth is required, because most of the necessary data is already there, and data contention and distribution issues disappear.

Unless we defeat data gravity, it’s not possible to build a Hybrid Cloud that doesn’t incur high networking and cloud costs. You can read more about Data Gravity here.

To make Hybrid Cloud work, we need Backup-as-a-Service

Breaking data gravity is not enough. To deliver a reliable Hybrid Cloud experience that is also cost-effective, there’s a need for a well-thought-out backup engine that continuously protects and archives data in a cloud repository, and also consolidates data from multiple locations – that’s what Cloud DVX delivers.

Cloud DVX is a zero-administration SaaS component of the overall Datrium DVX platform that lives in the cloud (AWS today; Azure and GCP in the future). As part of the service offering, Datrium manages service availability and software upgrades, as well as proactive support and self-healing functions.

Cloud DVX is the brains for on-premise DVX instances, and the software is built on the same split-provisioning foundation as the on-premise DVX system, enabling massive on-demand scalability of compute and capacity. Furthermore, the same superpowers of the Log-Structured Filesystem (LFS) are behind Cloud DVX.

One of the use-cases for Cloud DVX is Backup-as-a-Service. Traditionally, IT organizations take incremental and differential snapshots and backups of running systems and store an extra copy on secondary storage for quick retrieval (low RTO); later, the same data is archived to tape for long-term retention.
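As a rough illustration of why incremental protection keeps transfer sizes small, the sketch below sends a full copy once and then only changed blocks on each subsequent snapshot – a hypothetical model of the technique, not Cloud DVX code:

```python
def changed_blocks(prev: dict, curr: dict) -> dict:
    """Return only the blocks (offset -> bytes) that differ from the previous snapshot."""
    return {off: data for off, data in curr.items() if prev.get(off) != data}


# Snapshot 1: nothing exists yet, so everything must go (the initial full).
snap1 = {0: b"alpha", 1: b"beta", 2: b"gamma"}
first_transfer = changed_blocks({}, snap1)

# Snapshot 2: only block 1 changed, so only block 1 is sent.
snap2 = {0: b"alpha", 1: b"BETA2", 2: b"gamma"}
second_transfer = changed_blocks(snap1, snap2)
```

After the first full, every subsequent backup ships only the delta – the property that makes frequent, low-RPO protection affordable.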

Cloud DVX BaaS delivers native dedupe-aware backup and archival capabilities on AWS. Cloud DVX collapses the long-term archiving tier, traditionally owned by tape vendors, and enables organizations to go to the cloud with extreme security, cost-effectiveness, and a remarkable RTO.

The service is self-managed and supports multi-site, multi-system, and multi-object global de-duplication with full data efficiency and encryption on the wire and at rest. Further, because the service supports end-to-end encryption, there is no need to add a separate VPN and pay the related AWS charges.

In the context of offering the World’s First Hybrid and Multi-Cloud Data Fabric, Cloud DVX is the data repository holding backup and replicated data. For now, this repository is on AWS (under the covers it uses EC2 and S3), but it will also have placement in Azure and GCP, enabling intercommunication between clouds.

You can read more about Cloud DVX here.

Cloud DR and Hybrid Cloud are HARD!

Cloud DR holds promise, but the reality is that technology vendors have not been able to deliver solutions that are simple and cost-effective. At the same time, there’s so much complexity in the datacenter that most DR options end up fragile and brittle. Furthermore, inefficient data protection infrastructure forces a choice between low RPO and low cloud costs.

Solutions like VMware SRM are solid and have evolved to handle very complex datacenters, but the underlying complexity of dealing with multiple infrastructure silos, many copies of data, and data transformations makes DR solutions convoluted and saddles them with high management overhead.

Below is an example of all the components involved in a traditional DR implementation using legacy technology. What could go wrong?

Welcome, CloudShift!

CloudShift is another element of Cloud DVX that delivers a complete but straightforward run-book orchestration, and Cloud DR provided as a cloud service. CloudShift enables Cloud DVX BaaS customers to use the data already in the cloud to instantiate applications on different clouds, with the first release supporting VMware Cloud on AWS (VMC).

VMC is our first pick because the vast majority of organizations are happy VMware customers and many of them are looking to leverage VMC in multiple ways. Furthermore, with VMC there are no risky VM conversions necessary, making the DR process very simple and integrated.

Before we move ahead, it’s important to highlight that CloudShift Runbook Automation for DR and Workload Mobility provides support for traditional Prem-to-Prem, Prem-to-Cloud and Cloud-to-Prem scenarios. Also, Datrium supports VMware SRM and can be orchestrated as part of a broader datacenter runbook automation.

CloudShift is part of the Datrium data fabric and provides on-demand VMware DR to VMC, not even requiring that a VMC SDDC be pre-created in order to kick off a DR plan. In that case, of course, the DR plan may take a couple of hours, because first an SDDC must be created and then data on Cloud DVX must be copied to it – we are looking at a couple of hours of RTO to re-instantiate a completely lost datacenter. Another option is to have an SDDC pre-created, in which case the RTO would be much lower.
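The RTO trade-off above boils down to a single branch in the orchestration: reuse a pre-created SDDC when one exists, otherwise build one on demand and accept the longer recovery. A hypothetical sketch – the function names are illustrative, not CloudShift APIs:

```python
def failover(sddc, create_sddc, restore_vms):
    """Run a DR plan: reuse a live SDDC for low RTO,
    or create one on demand (hours of RTO) when none exists."""
    on_demand = sddc is None
    if on_demand:
        sddc = create_sddc()   # dominant cost: SDDC build + copying data from the cloud repository
    restore_vms(sddc)          # re-instantiate VMs from the data already held in Cloud DVX
    return sddc, on_demand


# On-demand path: no SDDC yet, so one is created before VMs are restored.
restored = []
sddc, was_on_demand = failover(None, lambda: "sddc-1", restored.append)
```

Pre-creating the SDDC simply makes the `create_sddc` branch a no-op, which is where the much lower RTO comes from.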

Cloud DVX delivers a very low RPO, because it utilizes DVX snaps and forever-incremental replication, while keeping costs low enough for CloudShift to deliver just-in-time DR infrastructure. Compared to synchronous replication or stretched-cluster solutions that require the VMC SDDC to be fully operational at all times, CloudShift becomes attractive, especially when cost and availability are important to organizations.

The picture below demonstrates how legacy data protection architecture works in a DR scenario and how CloudShift is a generational improvement.

 

The orchestration is the only thing that must be configured by IT teams to make CloudShift work, and as part of this configuration, DR and Test plans must be created. Using those plans, IT is able to execute non-disruptive testing and build site-to-site mappings with guest VM re-IP when necessary – CloudShift integrates with the VMware stack, including NSX, for seamless networking migration.
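Conceptually, a DR plan couples site and network mappings with per-subnet re-IP rules. The toy structure below shows the kind of information such a runbook captures – the schema and values are entirely hypothetical, not CloudShift’s format:

```python
# Hypothetical runbook: maps on-prem resources to their VMC equivalents
# and rewrites guest IPs onto the recovery site's subnets during failover.
dr_plan = {
    "site_mapping": {"onprem-dc1": "vmc-sddc-west"},
    "network_mapping": {"VLAN-100": "segment-prod"},
    # old subnet prefix -> new subnet prefix for guest VM re-IP
    "reip_rules": {"10.0.0.": "192.168.10."},
    "boot_order": ["db-vm", "app-vm", "web-vm"],
}


def re_ip(address: str, rules: dict) -> str:
    """Rewrite a guest IP onto the recovery site's subnet; leave unmatched IPs alone."""
    for old_prefix, new_prefix in rules.items():
        if address.startswith(old_prefix):
            return new_prefix + address[len(old_prefix):]
    return address
```

A non-disruptive test run can then walk `boot_order`, apply `re_ip` to each guest, and verify the mappings without touching production – which is what continuous plan verification automates.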

Furthermore, the system runs continuous plan verification and compliance checks to ensure everything will work just fine when you need it most. Finally, CloudShift also offers the ability to completely fail back all applications and data to your primary datacenter.

Here are some Beta product screenshots.

We have a rock-solid platform that lives in your datacenter and delivers extreme performance, scalability and native data protection; this platform is natively integrated with an extremely efficient cloud backup and data cloud repository; and this data cloud repository is used to deliver seamless disaster recovery to VMC today – and soon, with VM conversion, to multiple clouds. In the near future, these data cloud repositories will communicate with each other, creating a cross-cloud data fabric and forming a global data ledger that provides native global data governance.

 

Useful Links:

 

This article was first published by Andre Leibovici (@andreleibovici) at myvirtualcloud.net

Shocking datacenter reality?

That’s enough; I am in pain!

That is what a prospect confided to me recently. I could see the desperation on his face as he told me in detail the current challenges his team faces in integrating datacenter products and technologies.

It all started with the application performance provided by their tier-1 storage, but it was soon followed by backup issues, where their big databases could not be fully backed up during the available time window, and subsequently by the lack of a Disaster Recovery plan that would seamlessly integrate with the organization’s needs.

Finally, he tells me that his team was tired of troubleshooting the intersection between all these disparate technologies.

Seeing the desperation on his face, I replied:

– What if I could provide you with a solution that seamlessly integrates into your existing datacenter, provides exceptional application performance that will make your users happy, implements fully integrated and non-intrusive backup that enables sub-minute RTO and allows long-term archiving to the Cloud, and moreover completely automates DR to your secondary site or even to the Cloud? What if this product was provided as a single integrated solution with seamless workflows?

He in turn answered:

 

– Where do I sign? If you can make all that happen for me, you will make a lot of money.

 

What I am telling you happened recently, and of course we are going through the motions for him to test Datrium’s tech. But it is clear that there is a massive datacenter integration puzzle that must be solved – a problem technology vendors have not been able to address thus far, essentially because they focus singularly on point solutions.

While I think about how Datrium solves this organization’s issues effectively, I also realize that many companies have traditionally budgeted IT to buy and renew technology in procurement cycles. This means that the primary storage buying cycle is ordinarily different from the backup and networking cycles. However, what if you got the irrefutable offer to solve all those IT challenges in one go, at a more affordable price than buying all your IT products over time, even accounting for depreciation costs?

True-stacks are becoming the go-to approach for solving datacenter integration obstacles. IT doesn’t have to be complicated, and the big reason customers are in pain is that products have not been built to work in symbiosis with each other.

 

IT is tired of doing things that aren’t providing business value, and troubleshooting the integration between components for sure isn’t business value.

 

That’s not to say that fast storage, backup or disaster recovery aren’t important to the business – they unquestionably are – but the time wasted in making them work together in a cohesive manner – if possible at all – is not.

 

VMworld 2018 Royale with Datrium. How to Win a FREE VMworld Pass, Get Your vExpert Gift, and Win Many Prizes!!

Another VMworld is upon us, and Datrium is going big. Last year Datrium won the 2017 Gold Product of the Year, and I believe Datrium is set to win a few prizes this year too. If you have been following us, you already know that our partnership with VMware is strong, and that Data Protection and seamless Hybrid and Multi-Cloud are a strong part of our roadmap and vision.

Here you will find a list of all Datrium sessions, events, and giveaways!!

 

Before the Show

 

  • Win a FREE Pass to VMworld US!
    Datrium is giving away one FREE full conference pass to VMworld US 2018!
    Don’t miss out on a chance to discover new technology and meet the people that are shaping the future of digital business and taking IT to the next level.

 

  • VMunderground Opening Acts – 2018
    Datrium is sponsoring The Opening Acts, a series of panel sessions on the Sunday before VMworld US. We feel that subject matter experts having open discussions about technology and community provide a great learning opportunity as well as an opportunity for professional networking.
    WHEN: August 26th, 2018, 1-4pm
    WHERE: Beerhaus (at the Park), next to New York New York.

 

  • VMunderground 2018 (I’ll be here)
    VMworld is a long week, and we want to give the community a chance to get together with old and new friends in a relaxed, friendly atmosphere to set the week off right. For the 12th year, the organizing team continues the tradition, making discussions and meeting up with sponsors and old friends easy & comfortable. Datrium is a sponsor, and I’ll likely show up at this awesome event.
    WHEN: August 26th, 2018, (tentative time: 8-11pm)
    WHERE: Beerhaus (at the Park), next to New York New York.

 

 

During the Show

 

  • Spousetivities at VMworld Las Vegas 2018
    Welcome to the 11th Anniversary of Spousetivities at VMworld! If you’ve joined Spousetivities before, then you know all about the fun, prizes, friendships, and everything that comes with the program. If you’ve never joined Spousetivities, or if you aren’t sure what Spousetivities is, then read on! You can also visit the Spousetivities blog for more information. Don’t miss the cabana days at Mandalay Bay sponsored by ActualTech Media and Datrium!


  • The chance to win a RadRover Mini! (I’ll be here)
    Stop by the Datrium booth for a chance to win ONE of TWO RadMinis. The RadMini is the first
    and only electric folding fat bike.
    WHERE: Solutions Exchange
    Mandalay Bay Expo Hall – Datrium Booth #1350

 

 

  • Session: Automatic Failover to AWS When a Wildfire Approaches Your Data Center (I’ll be here)
    Tuesday, August 28 at 11:30 am
    Session ID: HYP3720BUS
    When the heat rises and your data center is in the line of fire, what will you do? Proper disaster recovery (DR) planning is key to any successful business continuity strategy. In this session, you will learn firsthand how Sonoma County uses VMware Cloud (VMC) on Amazon Web Services (AWS) as their low-cost DR site, and gain the tools and insight that help their organization avoid the cost of a secondary data center and utilize automated DR services that fail over to VMC on-demand. In the end, your organization will be left with an agile business continuity strategy for when (not IF) disaster strikes.

 

 

  • Session: Faster Home Loans on VMware vSphere Mean More Financial Services Revenue
    Tuesday, August 28 at 4:00 pm
    Session ID: VAP1454BU
    Speakers:
    James Jordan, Systems Architect, Certainty Home Loans
    Greg Kleiman, Senior Director, Product Marketing, Datrium
    Learn how leading financial services company Certainty Home Loans increased the performance of their mission-critical applications running on VMware vSphere. Discover how combining your application, virtual machine processing, I/O processing, and primary data on to a single host can deliver 5–10x improvement in performance. Find out how, at the same time, management time is reduced by using VMware vCenter Server to manage storage and save time and money. The end result is more revenue for Certainty Home Loans and other financial services providers who follow their lead.

 

 

  • Session: Battle Royale: SysAdmin vs DevOps Engineer
    Monday, Aug 27, 11:00 a.m. – 11:30 a.m.
    Session ID: CODE5557U
    Speakers: Clint Wyckoff, Senior Global Solutions Engineer, Datrium
    DevOps and Automation, Python or Perl, PowerShell, Chef and Puppet with a splash of Self-Service too? This is all quite overwhelming and lends itself to barriers between the SysAdmin and the Dev’s. But really, what’s all this mean? How can these new methodologies be implemented in a real-world vSphere environment? This session will demystify the core concepts of DevOps and make things simple. The session will focus on real-world workflows and LIVE demo’s which focus on your traditional vSphere environment. We’ll teach you how you can leverage tools like vRealize Automation and at the end, you’ll be empowered to go home and make IT happen.

 

 

  • VMTN TechTalk: The DR site is dead, long live DR! On-demand DR on VMware Cloud on AWS 
    Tuesday, Aug 28, 12:00 p.m. – 12:30 p.m.
    Speaking Session: VMTN5618U
    Speaker: Ganesh Venkitachalam, Co-founder, Datrium
    VMware Cloud in AWS opens up new frontiers in disaster recovery economics. There is no longer a need to always maintain a second site just for DR! In this session, you will learn how you can enable just-in-time deployment of resources in a software-defined data center and eliminate under-utilized disaster recovery infrastructure on-premises. We will demo how you can create DR plans for virtual machines with a DR-as-a-service offering running in AWS. Orchestration SW enables the just-in-time creation of the SDDC on VMware Cloud on AWS. The combined solution provides low RTO DR. Low RTO is achieved by replicating deduped/compressed per-VM snapshots onto AWS S3 and then restarting the VMs into the VMware Cloud SDDC cluster on-demand. Post-DR management is simple – vCenter is the management interface both on-prem and in the cloud, as always!

 

 

  • VMTN TechTalk: Existing Choices to Leverage VMware Cloud on AWS (VMC) for DR [VMTN5977U]
    Thursday, Aug 30, 11:00 p.m. – 11:30 p.m.
    Speaking Session: VMTN5977U
    Speaker: Andre Leibovici, VP of Solutions and Alliances, Datrium
    VMware offers native replication and orchestration of disaster recovery to VMware Cloud on AWS (VMC), but alternative solutions that use different strategies may further complement and facilitate different Recovery Time Objectives (RTO) and Total Cost of Ownership (TCO). In this session, we will examine the available alternatives to leverage VMC as an effective workload destination during disaster recovery events.

 

 

  • VMTN TechTalk: Harnessing the vCommunity to further your career
    Speaking Session: VMTN5508U
    Speaker: Simon Long, Sr. Solutions Architect
    Over the past 10 years, I’ve been a part of this amazing community. Without this communities support, I wouldn’t have landed my dream job as a consultant within VMware, nor become a Double VCDX. In this session, I will share my story and highlight…

 

 

The vExpert Giveaway!

To show our appreciation for your dedication to being the best in the VMware ecosystem, the first 250 certified 2018 VMware vExperts* to register will receive a custom hoodie. Stop by Datrium’s booth #1350 at VMworld 2018 Las Vegas to claim your gift!

 

WAIT, there’s more!
6 randomly selected vExperts will be taking home their very own Nintendo Switch. Collect your vExpert hoodie at the Datrium booth and see if you’re one of the lucky winners!

REGISTER HERE

 

 

Resilient & Ready to Rock with Zerto

Zerto’s “Resilient & Ready to Rock” Party is sure to be one of your favorite parties of VMworld 2018 – just ask those who have joined in on the rowdy fun in previous years! So whether you are looking to add another year to your VMworld memory book or you are a first-timer — make sure to firm up your “what happens in Vegas stays in Vegas” agreement that you made with your coworkers and come party and rock on with Zerto, Expedient, Datrium and of course Ten Band, the #1 Pearl Jam tribute band in the country! There will obviously be great music, food, drinks, and hilarious stories to talk about at breakfast the next day.

Where: House of Blues in Mandalay Bay
When: Monday, August 27th | 7:00pm – 10:00pm

REGISTER HERE

 

 

Manfred Hofer put together a comprehensive list of Parties, Gatherings, and events. Check it out HERE.

 

