In a world where, in the blink of an eye, anyone with a credit card can access what is essentially the largest computer in the world (in the form of AWS), today’s enterprise data centers are starting to look like mainframe systems of a bygone era: clunky, complex, inflexible, risky and expensive to develop and operate.
To keep up, enterprises have had two real choices: deploy their apps and services in third-party data centers, or implement private clouds. Public clouds, while attractive in their utility and cost, have not been a good option due to a lack of control and guaranteed service levels. But a third choice has emerged: the hybrid cloud.
Hybrid clouds make public clouds a seamless extension of the enterprise data center – what I call the boundless data center. A boundless data center is essentially a virtualized data center that is no longer limited by bricks and mortar. It is a data center purpose-built to integrate seamlessly with public clouds – enabling the decision of where applications run and where data is stored to be driven by business requirements, not by the limitations or constraints of cloud, storage or virtualization platforms. Because of these advantages, I believe we are entering the era of the boundless data center.
The inevitability of the boundless data center
Amazon, which pioneered the public cloud market, is rumored to be working on a private cloud solution. Microsoft has steadily grown the number of customers and its own services running on Azure, and currently has a hybrid cloud offering. VMware, which turned server virtualization into a private cloud, is now championing hybrid clouds through its own cloud offering.
Yet I believe that the real giants of the boundless data center era will be today’s start-ups: nimble, unburdened by legacy and cloud-first in mentality. While “cloud bursting” is a popular use case for accommodating unusually high activity (Black Friday sales, breaking celebrity news), I see at least five key areas of opportunity that are less episodic and more permanent in nature.
Migration of large numbers of low-usage applications
Many businesses have developed or implemented important but low-usage applications, which are ideal candidates for migration to the cloud. Once migrated, however, these applications still need to be accessible exactly as they were before: securely, with the same rights and permissions, as if they were still behind the firewall. Migration technologies will need to make all of this possible easily, efficiently and seamlessly.
On-demand development and test environments
Hybrid cloud is an ideal environment for application development and testing. Application developers can rapidly set up the required infrastructure in a public cloud to develop and test applications at scale. And operations teams can build out the needed infrastructure at exactly the right scale – all without any capital expense – delivering new applications to the business rapidly.
Hybrid cloud storage
Eighty percent of data is written once and never touched again – this is “data at rest.” Hybrid cloud storage – proven by innovative startups like StorSimple (acquired by Microsoft) – can reduce overall storage costs by 60 to 80 percent. Enterprises are increasingly looking at integrating cloud storage with their traditional on-premises storage for secondary use cases such as archival and backup.
Disaster recovery
Disaster recovery (DR) is another very capital-intensive IT requirement that stands to benefit greatly from a hybrid cloud. For most enterprises, DR investments have meant maintaining duplicate instances, and even duplicate data centers, for the purpose of backup and rapid failover during an outage. The result is a multi-billion-dollar industry of redundancy that sees far less than 10 percent actual usage. A hybrid cloud allows an enterprise to use public clouds as a seamless failover and recovery site.
Analytics
Analytic workloads can be unpredictable in nature: some require large amounts of storage and compute episodically, others intermittently. Increasingly, third-party or public data must be brought in to complete an analysis, and moving it behind the firewall is an inefficient and costly step. Simply put, the ideal compute and storage infrastructure for analytic workloads is highly elastic and able to traverse private and public boundaries, which, for instance, opens up the opportunity to do analysis at big data scale.
I believe that the new world order of the boundless data center is just being formed, and that the next VMware, AWS or EMC will emerge from a nimble start-up with a cloud-first mentality.
This post originally appeared on GigaOM