Next-Generation Infrastructure for HCI and Beyond
In computing, architecture is about finding balance: taking advantage of cheap, plentiful resources while maximizing utilization of scarce, expensive ones to optimize the yield, performance, and cost per computation of applications.
Project Monterey, which VMware introduced earlier today, creates a new type of disaggregation and, therefore, new composability options to balance those resources. Modern applications and operating environments require modern infrastructure. At Dell Technologies, we are reimagining next-generation building blocks to enable closer cooperation between the future VMware Cloud Foundation (VCF) infrastructure overlays and our future infrastructure underlays.
Redefining the Balance Equation in Compute Architectures
As business and society increasingly digitize, the same architectural concepts apply to organizations. For decades, we focused on building and deploying systems of record – databases and transactional computing. The emergence of the internet and social media led us to develop systems of engagement. Increasing digitization and pervasive connectivity, enabled by 5G, are compelling us now to develop systems of insight to build intelligent businesses.
Systems of insight capitalize on the vast amount of digital data being created today. A Chief Digital Officer for a leading medical research institution recently told me: “In the past, we were asked to do more with less (consolidation); but now, we’re being asked to do more with more.” The latter “more” refers to the data deluge surrounding us all.
Interestingly, that data deluge is increasingly being generated at the edge – witness autonomous transportation, wearables, remote monitoring, and so forth. And since compute follows data – it’s much cheaper to bring compute to data than to move data to compute – we have the emergence of an enterprise trend toward edge computing.
The same edge trends are occurring inside computing systems. The emergence of cheap, ubiquitous compute cores inside network adapters (SmartNICs) and inside the storage media themselves (computational storage) is redefining the balance equation in computing system architectures and presents us with unique arbitrage opportunities.
Disaggregation and Intent-Based Computing
Workloads are increasingly evolving toward data-centric computing. As such, these workloads tend to be optimized by specialized offload engines and accelerators such as GP-GPUs, FPGAs and emerging specialized training and inference silicon engines.
This silicon diversity – x86, ARM and specialized silicon – combined as an ensemble in systems leads us into heterogeneous computing. The ratios required to optimize data-centric workloads among these varied types of engines may be such that they cannot be realized within the mechanical, power and thermal confines of a classic server chassis. This leads us into an era of disaggregation where, rather than deploying intact systems, we aim to deploy smaller, malleable building blocks that are disaggregated across a fabric and must be composed to realize the intent of the user or application. The provisioning of engines to drive workloads is completely API-driven and can be specified as part of the Kubernetes manifest if using VCF with Tanzu. We call this intent-based computing.
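To make intent-based provisioning concrete, here is a minimal sketch using the Kubernetes Python client: a pod spec declares how many accelerator devices the workload needs, and the platform is left to satisfy that intent. The pod name, image, namespace, and the `nvidia.com/gpu` device-plugin resource name are illustrative assumptions, not specifics of any VCF with Tanzu offering.

```python
# Minimal sketch: declaring accelerator intent in a Kubernetes pod spec.
# Assumes a cluster reachable via the local kubeconfig and a device plugin
# that exposes the (illustrative) "nvidia.com/gpu" resource name.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig for cluster access

pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="inference-worker"),  # hypothetical name
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="inference",
                image="registry.example.com/inference:latest",  # hypothetical image
                resources=client.V1ResourceRequirements(
                    # Intent: "give this workload one accelerator"; the platform
                    # decides which physical engine satisfies the request.
                    limits={"nvidia.com/gpu": "1"},
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

The point of the sketch is that the workload expresses what it needs declaratively; where and how the accelerator is physically attached is left to the infrastructure.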
The concept of composability is not new. We could think of prior mainframe and Unix-based systems as implementing capabilities of this genre. On x86, however, what has been extant to date is coarse-grained composability with constructs like VMware’s Software-Defined Data Center (SDDC), which carved software-defined infrastructure assets out of intact server or storage systems.
What we require moving forward is fine-grained composability. We need smaller and simpler specialized-function infrastructure building blocks that can be more flexibly combined by a composability or intent manager to optimize computational yield across core data center racks or edge sites.
VMware Project Monterey – Evolving beyond HCI toward Hyper-Composable Infrastructure
VMware’s Project Monterey provides that close cooperation between the infrastructure overlay and underlay to enable this new type of fine-grained composability. By making the most of general-purpose compute available on SmartNICs, VMware is extending disaggregation to the hypervisor. The benefits of a hypervisor and its associated infrastructure services being disaggregated across compute hosts and SmartNICs are many, including:
- Increasing customer-usable yield out of the underlying host hardware by removing contention between application and infrastructure virtual machines (VMs)
- Enabling higher, customer-visible performance for applications and application services through improved asset utilization
- Extending a common control plane across virtualized/containerized and bare metal workloads, enabling VMware to provide bare metal as-a-service
- Providing air-gap isolation between computing domains for higher intrinsic security
- Improving network response time and performance by offloading networking functions from the server host to the SmartNIC
As the leading infrastructure partner in Project Monterey, Dell Technologies’ objective is to deliver a new class of infrastructure. We are working closely with our SmartNIC partners to accelerate the transition in the market, moving beyond hyperconverged to hyper-composable infrastructure for service providers, telcos, and large enterprises.
In hyperconverged systems, like our industry-leading VxRail offering co-developed with VMware, infrastructure and application VMs or containers co-reside on relatively coarse common hardware and contend for resources. As we introduce hyper-composability, we will develop finely disaggregated infrastructure expressly enhanced for composability, tightly integrated with and optimized through both soft- and hard-offload capabilities to SmartNICs and/or computational storage.
Customers will benefit from the simplicity afforded by VMware’s infrastructure overlay and the flexibility of having tailored hardware infrastructure in Dell Technologies’ underlay, with no waste relative to workload demands.
We have already demonstrated joint working prototypes internally and have committed to deliver offers to the market. Stay tuned!
Service providers that deliver as-a-service offerings and telco customers building their 5G clouds will find this solution very attractive. Large enterprise customers, who increasingly behave as service providers to their internal customers, will also be interested. But the real benefit of this architecture is that we can deliver its value in back-end core data centers as well as at the edge to create a seamless computing continuum between core, edge and cloud. This is the foundation of our next-generation service delivery platform.
While hyperconverged systems can easily compose software-defined assets (compute, storage, networking), we envision that hyper-composability will extend that ease of dynamic (re)composition to the hardware building blocks to take advantage of silicon diversity. Project Monterey will help enable this new type of fine-grained composability. As a result, we will have new opportunities to balance the computing system architecture to increase yield, maximize performance and reduce cost per computation of workloads.
Attend VMworld to find out more. We’ve got breakout sessions on this topic, including:
- Next Generation Infrastructure Enabled by SmartNIC – Sept. 29, 2:00-2:30pm PT
- Dell, VMware and NVIDIA – Sept. 29, 12:00-12:30pm PT for the Americas and Sept. 30, 4:00-4:30pm CEST for EMEA.