Interoperability in computing environments is a recurring problem that affects multiple parts of an organization. When systems cannot communicate with one another, everything slows down, from technology performance (data has to be converted from one system to another) to human processes (teams spend time in yet another meeting devising workarounds for data-translation challenges). Interoperability has to emerge, because common formats for sending and storing data are a necessity. This isn’t a new topic or concept, but this time it’s being applied to modern infrastructure.
It’s time to address this issue in DevOps. We want to do for storage and containers today what Sun’s Network File System (NFS) protocol did for client/server storage in the ’80s: enable performance, simplicity, and cross-vendor compatibility.
Happily, we’re in a position to participate in the process and the conversation around ensuring the different pieces of the ecosystem fit together. The Cloud Native Computing Foundation (CNCF), part of the Linux Foundation, aims to help organizations build, deploy, and run critical applications in any cloud. The CNCF already has under its umbrella projects aligned with our goals, such as Kubernetes and Prometheus.
Today we are recognizing industry collaboration moving forward with major container and storage platforms. The initiative is called CSI, short for Container Storage Interface, and we are proud to have just introduced a first example implementation. We expect immediate progress, with CSI being introduced as a CNCF project later this year and officially bringing storage into Cloud Native environments. While there are other organizations and initiatives in this space, such as OpenSDS, we fully support CSI and are proud to be part of the community of customers, container orchestrators, and other vendors standing behind it. Along the same lines, we see REX-Ray as a prospective CNCF project, bringing storage support to cloud native workloads.
Why It’s Necessary
In nearly every historical case, interoperability standards have emerged: common formats we use to send and store data. In the storage industry specifically, we’ve seen this time and again with SCSI, NFS, SMB, CIFS, and VVols. Since I’ve always been a fan of Sun, I’ll use NFS as my example. In 1984, Sun developed the NFS protocol, a client/server system that permitted (and still permits!) users to access files across a network and to treat those files as if they resided in a local directory. NFS successfully achieved its goals of enabling performance, simplicity, and cross-vendor compatibility.
NFS was specifically designed with the goal of eliminating the distinction between a local and remote file. To a user, after the appropriate setup is performed, a file on a remote computer can be used as if it were on a hard disk on the user’s local machine.
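To make that transparency concrete, here is a minimal sketch in Go. It assumes an administrator has already mounted an NFS export at a hypothetical path, /mnt/nfs; the point is that the application’s read path is indistinguishable from reading a local file.

```go
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	// /mnt/nfs is assumed to be an NFS mount point set up ahead of time
	// (e.g. an export from a remote server mounted by an administrator).
	// The application neither knows nor cares that the bytes live on
	// another machine; this is the same call it would make for local data.
	data, err := os.ReadFile("/mnt/nfs/reports/summary.txt")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("read %d bytes from an NFS-backed path\n", len(data))
}
```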
There’s plenty of architectural history to consider in how Sun achieved this, but the bottom line is: if you speak NFS, you don’t care who or what brand of server you’re talking to. You can be confident that the data is stored correctly.
More than thirty years later, we technologists are still working to improve the options for data interoperability and storage. Today we are creating cloud-computing environments built to ensure applications take full advantage of cloud resources anywhere. The same logic as in the NFS example above applies here: if you have an application and it needs fast storage, you shouldn’t have to care about what is available or how to configure it.
Storage and networking continue to be puzzle pieces that don’t always snap readily into place. We’re still trying to figure out whether different pieces all work together in a fragmented technology ecosystem, but now the question lives in the realm of containers rather than hard disks and OS infrastructure.
Ensuring Interoperability Is a Historical Imperative
As I wrote earlier, and to quote Game of Thrones, “winter is coming.” Whatever tenuous balance we’ve had in this space is being upended. Right now it’s the Wild West, where there are 15 ways to do any one thing. As an industry, cloud computing is moving into an era of consolidation where we combine efforts. If we want customers to adopt container technology, we need to make the technology consumable. Either we package it as a soup-to-nuts offering, or we establish common ground for how the various components interoperate. Development and Ops teams would then be free to argue about more productive topics, such as whether the pitchless intentional walk makes baseball less of a time commitment or sullies the ritual of the game.
The obvious reason to drive interoperability by bringing storage to cloud native is that it helps the user community. With a known technology in hand, we make it easier for technical people to do their work, in much the same way that NFS gave servers a common way to communicate with storage. Here, though, the container platform plays the role of the server, and the storage provider is still the same storage we’ve been talking about.
The importance of ensuring interoperability among components isn’t something I advocate on behalf of {code}; it’s a historical imperative. Everyone has to play here; we need to speak a common language in both human terms and technology protocols. We need to create a common interface between container runtimes and storage providers. And we need to help make these tools more consumable to the end user.
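As an illustration of what “a common interface between container runtimes and storage providers” might look like, here is a hypothetical sketch in Go. It is not the CSI specification (which defines its contract over gRPC with a richer set of calls); it is only meant to show the shape of a pluggable interface that any vendor driver could implement and any orchestrator could call.

```go
package storage

import "context"

// VolumePlugin is a hypothetical, simplified sketch of a pluggable storage
// interface. It is illustrative only and is not the CSI API.
type VolumePlugin interface {
	// CreateVolume provisions a volume of the requested size (in bytes)
	// on the vendor's backend and returns an opaque volume ID.
	CreateVolume(ctx context.Context, name string, sizeBytes int64) (volumeID string, err error)

	// AttachAndMount makes the volume available to a container at the
	// given path on the node running the workload.
	AttachAndMount(ctx context.Context, volumeID, targetPath string) error

	// UnmountAndDetach reverses AttachAndMount once the workload is gone.
	UnmountAndDetach(ctx context.Context, volumeID, targetPath string) error

	// DeleteVolume releases the backing storage.
	DeleteVolume(ctx context.Context, volumeID string) error
}
```

An orchestrator written against an interface like this could swap one vendor’s driver for another without changing application code, which is exactly the property NFS gave client/server storage.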
The end goal for storage and applications is simplicity and cross-vendor compatibility, where storage consumption is uniform and storage functions are pluggable components in Cloud Native environments. We feel this is the first step in that direction.