Innovators’ Challenge – Normalizing the Clouds

The Old Lock-in Problem

Before VMware, there were three or four large block storage companies – IBM, EMC, HP, etc. – each producing its own proprietary storage systems.


Each of these companies maintained large compatibility-testing farms to ensure that their storage systems worked with all kinds of different versions of client operating systems (like Solaris v1, v2, etc., Windows v1, v2, etc.) and their respective software drivers and hardware. This vast compatibility-testing matrix was a substantial barrier for new, innovative storage companies entering the market.

It was also nontrivial to transfer data from one vendor to another, making that the era of total data repository lock-in.

The 2000s

VMware came along with its server virtualization software, and its observation was that most servers were idle – perhaps only 2% busy on average.

Virtualization technology helped organizations consolidate servers by 10 to 20x – savings that CIOs could easily understand. Additionally, VMware provided management software that made it easy to manage a collection of servers. 10x cost savings with easy infrastructure management was a swift game-changer for the industry.

The other important thing VMware did was to normalize all the prominent storage vendors. VMware abstracted away the storage nuances of hundreds of different client operating systems and their different versions. VMware provided a straightforward way to plug storage systems into virtualized workloads, so every storage vendor only had to support a VMware-supported protocol, and testing against that was enough.

This significantly reduced the barrier for new, innovative storage companies entering the market, and it also removed data repository lock-in.

This change enabled a significant number of new storage companies to flourish and bring innovations to the market (companies like Nutanix, Datrium, etc.). It was a win-win for the industry and for customers. 

Customers could now move from one storage system to another with the push of a button, and that was the end of storage vendor lock-in.

The New Lock-in Problem – Déjà Vu

We are at a similar crossroads today with the three or four big public clouds. AWS, Azure, and GCP each offer different compute and storage services – there is no consistency across them, and there is no integration between them.

A customer using AWS cannot promptly move their business applications to another cloud vendor, and vice versa. Further, mobility across clouds requires some rewrite of applications. This is lock-in 2.0.

Enter VMware again, with the recent announcement that they can run their VMware Cloud Platform (VMC) in all public clouds. This enables uniform compute management across clouds, providing customers with a consistent experience. This is a victory for customers who want to shut down private data centers and move to public clouds without rewriting their applications, or who want to use public clouds as a natural extension of their existing data centers.

This uniform compute service across clouds is excellent, but there are still significant challenges with regard to storage services.

The Shifting Of Dollars


For every $1 spent on VMware’s vSphere in the datacenter, there are $10 spent on compute servers, $7 on networking gear, and $5 on storage systems. There are about 100 million VMware virtual machines deployed across enterprises today, and VMware has approximately $6B in annual vSphere revenue with an 82% market share. Further, about $30B is spent on storage systems.
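
As a quick sanity check, the ratios above can be expressed as a back-of-envelope calculation (a sketch only, using the article’s round numbers – the implied totals are illustrations, not official market figures):

```python
# Back-of-envelope: implied annual spend from the article's ratios
# ($1 vSphere : $10 compute : $7 networking : $5 storage) and the
# ~$6B annual vSphere revenue figure. Illustration only.

VSPHERE_REVENUE_B = 6  # ~$6B/year vSphere revenue (from the article)
RATIOS = {"compute": 10, "networking": 7, "storage": 5}

# Scale each ratio by the vSphere revenue to get implied spend ($B)
implied_b = {segment: VSPHERE_REVENUE_B * ratio for segment, ratio in RATIOS.items()}
print(implied_b)  # {'compute': 60, 'networking': 42, 'storage': 30}
```

The implied $30B storage figure matches the number quoted above, so the ratios and the revenue figure are at least mutually consistent.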

As enterprise customers start adopting VMware Cloud on AWS, GCP, and Azure, all compute, networking and storage dollars will shift directly into the coffers of public cloud vendors.

Now you can understand the motivation of public cloud vendors to attract VMware customers to run applications on their clouds by offering VMware services on their platforms (even Oracle announced VMware on its cloud).

This is the most natural path for customers to adopt the public cloud without having to retool and rewrite applications.

The question then is, what happens to all the infrastructure vendors in the private data center space? With a change of this magnitude, there is a massive opportunity for infrastructure vendors able to adapt and thrive in this new cloud model.

The Storage Problem

It is easy to assume that public cloud vendors have many different types of storage services, and hence there is no room for traditional datacenter storage vendors to play in this space.

Beyond the lock-in, the real problem is that storage services in the cloud are not as enterprise-class as one would expect, and this is recognized by the cloud vendors themselves.

Let us take AWS as an example. AWS has S3 for cheap-and-deep storage with very high durability, but it has very restrictive I/O behavior with slow performance, and you cannot really run your virtual machines directly off it. AWS also offers EBS, which is All-Flash with good performance but very expensive; additionally, EBS has a high probability of data loss (based on AWS documentation). Finally, there is AWS Glacier, which is exceptionally cheap but behaves like a tape library in terms of performance.

Enterprises are used to getting high-quality, high-performance, and highly resilient storage systems in their data centers. Further, AWS zones can go down, and customers need to promptly recover in a different zone, or even at a different cloud provider.

Finally, you cannot move to a different cloud if you are locked into a specific provider.

New Storage Paradigm – Enormous Opportunity

Enterprises demand a storage solution that provides services at the cost of AWS Glacier, with the durability of AWS S3, but with the performance of AWS EBS. In addition, this new storage must work across clouds to avoid lock-in and move data efficiently between them. This data movement lets enterprises prepare for a disaster when one cloud is down, or when the business decides to change direction.


However, this is a nontrivial problem in storage system design.

Existing storage vendors will have a remarkably hard time pivoting their software to work in the cloud, because storage system software designs are like database schemas: very hard to change once the initial architecture is in place. Hence, the cloud offers a totally new space for new challengers to explore, without the burden of competition from the legacy incumbents.

Legacy storage vendors have been fast to change their marketing materials to say “multi-cloud,” but what they mean is putting their legacy physical storage boxes in colocation facilities, with expensive network links to the clouds. Customers can see through this marketing spin.

Multicloud Compute & Data Planes

VMware now provides uniform compute services across clouds. Kubernetes also does that, and VMware realizes it – hence Project Pacific.

However, the missing piece is a uniform data plane that provides a consistent and resilient data experience across clouds. Snowflake (a new data warehousing vendor) is doing something similar for OLAP datasets, but organizations need similar services for VMware VM and container data.

Our team at Datrium is focused on solving this multi-cloud data plane problem, and we have a real opportunity to help organizations start bridging the multi-cloud chaos today.

– By Andre Leibovici & Sazzala Reddy

VDI Calculator now as a Docker Container

Over the years I’ve heard users of my VDI Calculator complain that they either have issues with Java functionality or they do not want to install Java on their computers due to security concerns.

So, I have created a pre-tested Docker container that bundles a Linux image and the calculator components; it also provides an X11 display.

To install, simply follow the steps below:

  • Create a folder named vdicalc
  • Create a file named docker-compose.yml
  • Copy contents from (below) into docker-compose.yml
  • Set the x11-bridge display using “export DISPLAY=:0”
  • Start the services using “docker-compose up -d”
  • Access calculator at http://localhost:10000/connect.html
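
The docker-compose.yml contents are not reproduced here, but a compose file for this kind of setup might look roughly like the sketch below. Both image names and the port mapping are assumptions for illustration only – use the file published with the calculator:

```yaml
# Hypothetical sketch only – substitute the values from the published
# docker-compose.yml; the image names below are NOT the actual ones.
version: "3"
services:
  x11-bridge:
    image: jare/x11-bridge        # assumption: a browser-based X11 bridge
    ports:
      - "10000:10000"             # serves connect.html, as in the steps above
    environment:
      - MODE=tcp
      - XPRA_HTML=yes
      - DISPLAY=:0
    restart: always
  vdicalc:
    image: example/vdicalc        # hypothetical calculator image name
    environment:
      - DISPLAY=:0                # render into the x11-bridge display
    depends_on:
      - x11-bridge
    restart: always
```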

There are a couple of known issues:

  • No printing support
  • No web guide support

Optionally go to Docker Hub:

This article was first published by Andre Leibovici (@andreleibovici) at

Datrium DR-as-a-Service Networking with VMware Cloud (VMC)

During VMworld, Datrium launched a unique DRaaS solution completely integrated with VMware Cloud on AWS. At a high level, DRaaS is a complete DR solution offered as a subscription that leverages VMware Cloud and native Datrium cloud-based backups to deliver fully orchestrated, low-cost, low-RPO and low-RTO disaster recovery for VMware customers.

Datrium provides fully integrated purchasing, support, and billing for all components and services, including VMware Cloud on AWS and AWS itself. It’s delivered as a SaaS solution that eliminates all the complexity of packaged software.

All Datrium services are deployed as AMIs into a Datrium-created VPC and Subnet. VPC endpoints used to access all other external services required by Datrium components (ControlShift and Cloud DVX) are also created automatically. All components are monitored and restarted for high availability and resilience. All required state is replicated to ensure resilience.

However, some of the most frequently asked questions relate to networking connectivity options for VMware Cloud on AWS when in DR mode. Please note that this blog post addresses user and site connectivity to applications running on VMware Cloud in DR mode, rather than Datrium replication between on-premises and CloudDVX. The replication between on-premises DVX and CloudDVX is done using native snapshots coupled with universal deduplication and compression, transmitted over an HTTPS tunnel over the Internet.

At this point, if you are not yet familiar with Datrium DRaaS, I suggest you stop reading and first briefly read World’s 1st Just-in-Time Cloud DR (VMware Cloud)… Everything Techies Need to Know….

There are several options when it comes to connecting on-premises data centers to other on-premises data centers or to the public cloud. This blog post outlines the various options available.

Site Connectivity Options

The diagram above demonstrates some of the more popular connectivity options. Solutions represented by a solid line typically offer secure, private, one-to-one connections between sites; solutions represented by a dashed line enable access to internal applications via the public internet.

On-Premises to On-Premises Connectivity

First things first, let’s discuss the connectivity necessary for using ControlShift between on-premises datacenters. ControlShift requires network connectivity between on-premises datacenters to fail over or migrate workloads. This connectivity is primarily at the Datrium infrastructure level. Additional connectivity between sites may be needed for applications to communicate with each other or for users to interact with the applications.

Connecting multiple on-premises data centers is a practice that has been around for many years. There are many ways to connect on-premises data centers. The connection method selected should be decided independently by the customer’s networking team.

Datrium will automatically establish connectivity (outbound) with a ControlShift SaaS instance running on AWS.

On-Premises to Cloud Connectivity

There are a few options when it comes to connecting an on-premises datacenter to VMC on AWS. Below are the four most common options, but please note that they address user connectivity to applications, rather than Datrium replication between on-premises and CloudDVX.

AWS Direct Connect (DX)

AWS Direct Connect (DX) is a service aimed at allowing enterprise customers easy access to their AWS environment. Enterprises can leverage the DX to establish secure, private connectivity to the AWS global network from their data centers, office locations or co-location environments.

  • This is the recommended approach from VMware, and some customers may already have DX in place if they heavily utilise AWS.
  • DX is a one-to-one connection between on-premises and cloud.
  • The process of purchasing a DX can take months from contract signing to installation, so some forward planning is required.
  • Direct Connect offers higher speeds and lower latency than you can achieve with a connection over the public Internet. Connections can either be 1Gbps or 10Gbps.
  • All data transferred over the dedicated connection is charged at the reduced AWS Direct Connect data transfer rate rather than Internet data transfer rates.
  • DX is a physical connection from the on-premises site to the cloud site – as such it could be affected by on-premises failure modes and needs to be accounted for in the design and operations appropriately.


L2VPN

A Layer 2 Virtual Private Network (L2VPN) can be used to extend an on-premises network, providing a secure communications tunnel between an on-premises network and a network segment in a VMC on AWS SDDC.

  • The L2VPN extended network is a single subnet with a single broadcast domain so you can migrate VMs to and from your VMC SDDC without having to change their IP addresses.
  • VLANs (up to 100) can be used to create multiple private networks within the single subnet.
  • VMware Cloud on AWS uses NSX-T to provide the L2VPN server in your VMC SDDC. L2VPN client functions are provided by a standalone NSX Edge (for free) that is downloadable and deployable into an on-premises data center.
  • A one-to-one connection between on-premises and cloud (multiple L2VPNs can be used).
  • Not typically used to access the Management workloads of VMC. See DX or IPSec VPN.


IPsec VPN

IPsec VPN is a feature of VMC on AWS that provides secure access to on-premises management and workload connectivity via an IPsec VPN tunnel.

  • VMware NSX-T Edge provides the IPsec implementation within VMC. The On-Premises gateway can be provided with any IPsec compatible appliance, either physical or virtual.
  • A one-to-one connection between on-premises and cloud.

There are two types of IPsec VPNs that can be used with VMC on AWS:

  • Route-based VPN (Dynamic Routing) – A route-based VPN provides resilient, secure access to multiple subnets. When a route-based VPN is used, new routes are added automatically when new networks are created. Route-based VPN uses BGP to dynamically share routes across the VPN tunnel.
  • Policy-based VPN (Static Routing) – A policy-based VPN creates an IPsec tunnel and a policy that specifies how traffic uses it. When you use a policy-based VPN, you must update the routing tables on both ends of the network when new routes are added.

3rd Party VPN

A 3rd party Virtual Private Network (VPN) solution can be used to extend an on-premises network to a public cloud SDDC. Many VPN solutions providers offer a virtual appliance deployment option. These vSphere compatible appliances can be deployed into VMC to offer another method of extending an on-premises network to or enabling individual users to access workloads running within VMC.

  • OpenVPN and Palo Alto Networks are example 3rd party VPN solutions.
  • Requires a vSphere compatible VPN appliance.
  • 3rd Party licensing applies.
  • Allows customers to utilise existing products and skillsets.
  • Typically, a one-to-one connection between on-premises and cloud, but can also be used as a many-to-one VPN via the Internet.
  • Public IP required for connectivity.

Accessing VMC Workloads on the Internet

The connectivity options above enable secure, controlled access to workloads running within VMC. However, if a user is neither on-premises nor using a secure VPN client, internal workloads will be inaccessible when in DR mode on VMware Cloud.

Datrium ControlShift DR plans are responsible for creating mapping rules between on-premises networks and VMware SDDC networks, if necessary, and as part of the DR plan ControlShift can also re-IP VMs accordingly. When it’s time to fail back, ControlShift automatically reverses the IP addresses to the original ones, along with transferring only unique data back on-premises.

Occasionally workloads may need to be made available via the public internet. To enable direct internet access for workloads VMC offers Public IP addresses. Public IP addresses can be requested on-demand and mapped to workloads that need to be directly accessed via the internet. Some examples of servers that may require direct internet access would be: 

  • Email servers
  • Web servers
  • VMware Horizon – Unified Access Gateways
  • 3rd Party VPN solutions

For VMs that need to be exposed to the Internet (or that need 1:1 NAT), go to the Networking and Security section in the VMware Cloud Console and create the appropriate NAT and firewall rules.

Thanks to Mike McLaughlin and Simon Long for crafting most of this information.

This article was first published by Andre Leibovici (@andreleibovici) at
