With workloads becoming more dynamic, the idea of a truly composable infrastructure becomes more and more compelling. Dell recently worked with Penn Schoen Berland (PSB) to survey 1,200 IT and business decision makers (ITDMs and BDMs), and 95 percent of respondents agreed that the goal of composable infrastructure is to activate the right amount of the right data center resource at the right time. The same survey revealed that 50 percent of the ITDMs and BDMs surveyed plan to deploy a composable infrastructure in the future.
With that said, we look forward to a time when customers can reap the benefits of composable infrastructure, but we believe there are a few challenges the industry must solve before we can truly call it composable. Today, I will give composable infrastructure a reality check, highlighting what it will take for the technology to live up to the term.
Currently, I see three major challenges to composable infrastructure: 1) a lack of software-defined intelligence; 2) the lack of a new industry standard for a modern IO/Memory fabric at rack scale, which is needed for full composability; and 3) the lack of an industry standard around openness that allows customers to allocate resources across multiple vendors’ technology.
Let’s start with the lack of management or software-defined intelligence. Today, the end user has to manually refer to templates or spreadsheets of workload requirements and tell the engine what to allocate; the engine then carves out resources for that workload. It’s not automatic. Yes, workload automation tools are incredibly helpful once those templates are built, but a person still has to apply them in order to provision the correct resources.
For composable infrastructure to be a reality, the user should not have to decide where to map resources – it should happen automatically, based on the workload and decided by the technology. The management layer should assign whatever compute, storage, and networking resources are needed without any human intervention. This “autonomous brain” would analyze the workload and determine the best place to map it. Over time, the brain would continuously learn and evolve to make better decisions and self-heal without the need for a system administrator.
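To make the contrast concrete, here is a minimal, purely illustrative sketch in Python – the engine, templates, resource figures, and heuristics are all hypothetical stand-ins, not any shipping product’s API. The first function shows today’s model, where an operator picks a template and applies it by hand; the second shows the kind of automatic, workload-driven decision the “brain” would make.

```python
# Purely illustrative sketch -- all names, figures, and heuristics are hypothetical.

class FabricEngine:
    """Stand-in for a composition engine that carves resources out of shared pools."""
    def allocate(self, **resources):
        print("allocating:", resources)
        return resources

# Today: pre-built templates that a person must choose and apply.
TEMPLATES = {
    "oltp-db":  {"cpus": 16, "dram_gb": 256, "nvm_gb": 1024, "nics": 2},
    "web-tier": {"cpus": 4,  "dram_gb": 32,  "nvm_gb": 128,  "nics": 1},
}

def provision_manually(engine, template_name):
    """An operator decides which template fits and tells the engine what to carve out."""
    return engine.allocate(**TEMPLATES[template_name])

# Tomorrow: the allocation is derived from the observed workload, no human in the loop.
def provision_automatically(engine, profile):
    """Derive an allocation from workload characteristics (toy heuristic)."""
    return engine.allocate(
        cpus=max(2, round(profile["avg_parallelism"])),
        dram_gb=round(profile["working_set_gb"] * 1.25),  # add headroom
        nvm_gb=profile["dataset_gb"],
        nics=2 if profile["network_bound"] else 1,
    )

engine = FabricEngine()
provision_manually(engine, "oltp-db")
provision_automatically(engine, {"avg_parallelism": 12, "working_set_gb": 200,
                                 "dataset_gb": 900, "network_bound": True})
```

In a true composable system, the second path would also be fed by live telemetry, so the brain could keep revising its choices as the workload changes.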
Without this type of management or brain, which no one has yet, composable infrastructure is impossible to deliver effectively. That’s not to say people aren’t working on it; it’s just not currently available. This type of management should lead to significant gains in reducing management overhead, maximizing efficiency, and delivering dynamic, agile performance levels for each workload.
For fans of the Terminator movie franchise, developing this management is a little like borrowing a few principles from Skynet – specifically, reducing the potential for human error and guaranteeing a fast, efficient response. Of course, we would apply the other lessons of the Terminator films so as to avoid the extermination of the human race.
The second challenge gets down to the nuts and bolts of hardware interconnects. Trapping valuable and expensive hardware resources in a box leads to more inefficiency, so to be fully composable, we must invent new ways to share hardware resources. Storage has tackled this to some degree via Software Defined Storage – pooling internal and external storage into virtual, software-controllable pools – but storage has different latency characteristics than DRAM and next-generation non-volatile memories. Next-generation workloads aspire to place data closer to the CPU to reduce latency. Sharing DRAM/NVM resources over traditional networks via software would defeat the value proposition of reduced latency close to the CPU and lose memory-centric access mechanisms. Thus, a new industry-standard IO/Memory fabric at rack scale will be required to enable full composability of resources like NVM, GPUs, and more. With a proper IO/Memory fabric, software, automation, and orchestration can fully exploit composable hardware.
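To see why a conventional network undermines that value proposition, a quick back-of-the-envelope comparison helps. The figures below are rough order-of-magnitude assumptions, not measurements from any specific fabric:

```python
# Back-of-the-envelope comparison; both latencies are rough ballpark assumptions.
LOCAL_DRAM_NS      = 100      # ~100 ns for a local DRAM access
REMOTE_OVER_NET_NS = 50_000   # ~50 microseconds for a software-mediated round trip
                              # over a typical Ethernet/TCP path

slowdown = REMOTE_OVER_NET_NS / LOCAL_DRAM_NS
print(f"Remote memory over a traditional network: ~{slowdown:.0f}x slower than local DRAM")
# Roughly two to three orders of magnitude -- which is why a memory-semantic,
# rack-scale fabric is needed rather than a conventional network.
```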
The last challenge that I see is the current lack of industry standards related to openness. Our customers demand approaches that work across many vendors and many technologies because there is quantifiable value in infrastructure built on open standards rather than around proprietary vendor standards. Any vendor can claim to be “open,” but once you have a data center full of multiple vendors’ hardware, it becomes apparent that there are varying definitions and degrees of openness.
It’s a little like personal profiles or photos on social media – not all of them are a 100 percent true reflection of the actual person. Instead, most profiles are a reflection of who the person wants to be. It’s the same with some vendors – they want to be open, so they proclaim themselves to be. Unfortunately, that’s not how it works.
And don’t be fooled by open APIs either – these do not make the infrastructure “industry-standard”.
We see great promise in a common, modern API to discover, inventory, compose, configure, and monitor systems, components, and elements in both data center and cloud deployments. Dell and other industry leaders are rapidly building on the work already completed in the DMTF’s Scalable Platform Management Forum (SPMF) on the Redfish specification and schemas. The goal of these specifications is to enable compute management using RESTful technology on a common model.
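As a rough illustration of what that common model looks like on the wire, the sketch below walks a Redfish service over plain HTTPS, following the spec’s standard layout (a service root at /redfish/v1/ that links to a Systems collection). The host name and credentials are placeholders, and error handling is omitted for brevity.

```python
import requests

BMC  = "https://bmc.example.com"   # placeholder management endpoint
AUTH = ("admin", "changeme")       # placeholder credentials

# The Redfish service root links to the standard collections (Systems, Chassis, Managers).
root = requests.get(f"{BMC}/redfish/v1/", auth=AUTH, verify=False).json()

# Walk the Systems collection and print a few common ComputerSystem properties.
systems = requests.get(f"{BMC}{root['Systems']['@odata.id']}", auth=AUTH, verify=False).json()
for member in systems.get("Members", []):
    system = requests.get(f"{BMC}{member['@odata.id']}", auth=AUTH, verify=False).json()
    print(system.get("Id"), system.get("Model"), system.get("PowerState"))
```

Because the same schema-backed model is meant to cover a growing set of data center components, the same few lines would apply regardless of whose hardware sits behind the endpoint.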
Most recently, Dell, through work in the industry group codenamed Chinook (Dell, Hewlett Packard Enterprise, Microsoft, VMware, and Intel), jointly submitted PCIe switch, BIOS, and local storage models to the SPMF. Future Chinook submissions to the SPMF for Redfish will expand coverage, with the goal of addressing all components in the data center through a consistent API. As a result, customers and operators will be better able to deal with the velocity of deployment and management of constantly evolving technologies.
We look forward to the evolution of standards-based approaches for composable infrastructure. After all, Dell has a proven track record of driving open initiatives, and embracing open architectures is part of our heritage and in our DNA.
In conclusion, composable infrastructure is not ready for its prime-time debut yet. The challenges with management and industry standards are key issues that need to be addressed – and that will take some time. Composable infrastructure is a work in progress, and it has tremendous opportunity ahead. At Dell, we’re actively working to make true composable infrastructure a reality in the long run – Redfish and Chinook are great examples of Dell and industry leaders working together on the journey. We’ve already delivered innovative and transformative solutions like Active System Manager, the Dell PowerEdge FX architecture, Open Networking, and the G5/DSS 9000 that make incremental progress toward the desired end state, and we look forward to a time when the key issues are solved and we can debut a true “composable infrastructure.” And don’t forget – Dell is very much able to solve customer problems in modular, cost-effective, and immediately impactful ways that are relevant today and will remain relevant far into the future; but to fulfill the term “composable,” the industry as a whole just isn’t quite there yet.