Artificial intelligence is rapidly transforming a wide range of industries, with new applications emerging every day. As the technology becomes more pervasive, the right infrastructure is needed to support its growth. Dell Technologies has responded by releasing its next-generation PowerEdge rack servers, which deliver a significant boost in capacity and performance over the previous generation.
The PowerEdge line now includes servers that can be easily deployed inside or outside the data center, along with a range of GPU-optimized models that support a variety of AI use cases spanning the core data center, the edge, and telecom. Select models also offer extra cooling options designed for operational efficiency and performance, which may include:
- Eight-way NVLink peer-to-peer, air-cooled
- Four-way NVLink peer-to-peer, liquid-assisted air-cooled
- Direct liquid-cooled
These options give customers flexibility as they assess their cooling needs and plan the growth of their infrastructure to support AI-related applications.
Synchronized Performance
These new platforms’ performance is impressive. One example is the XE9680 with 8x Nvidia H100 GPUs and NVLink: in MLPerf Inference 3.0 testing, it demonstrated up to an eightfold improvement over the previous generation across demanding AI training, generative AI model training and fine-tuning, and AI inferencing metrics. The key XE9680 features supporting this performance include:
- 4th Gen Intel Xeon Scalable processors with high core counts per socket
- Eight Nvidia H100 GPUs connected with NVLink (a quick peer-to-peer check is sketched after this list)
- 2x NVMe drives per GPU to provide a large amount of scratch space
- GPUDirect Storage to move data from external network-attached storage to the GPUs via RDMA
- 10x PCIe slots to support large AI cluster configurations
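One way to confirm that the GPUs in such a system can actually talk to each other peer to peer over NVLink is to query CUDA from software. The following is a minimal sketch, assuming a CUDA-enabled PyTorch install; GPU count and topology vary by machine, and a negative result simply means traffic between that pair falls back to PCIe:

```python
# Minimal sketch: query peer-to-peer (P2P) reachability between every
# GPU pair on a multi-GPU node. On an NVLink-connected system such as
# the XE9680, each pair should report P2P access as available.
import torch

assert torch.cuda.is_available(), "a CUDA-capable GPU is required"
n = torch.cuda.device_count()
print(f"Detected {n} GPU(s)")

for i in range(n):
    for j in range(n):
        if i != j:
            ok = torch.cuda.can_device_access_peer(i, j)
            print(f"GPU {i} -> GPU {j}: {'P2P available' if ok else 'falls back to PCIe'}")
```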
The R760xa with Nvidia H100 GPUs and the XR5610 with the Nvidia L4 also showed exceptional results for data center and edge inferencing, with high performance per watt for edge applications; a simple way to estimate that metric is sketched below.
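Performance per watt can be approximated on any Nvidia GPU by sampling board power while driving an inference loop. A minimal sketch, assuming the pynvml package (NVML bindings) is installed; the inference call and batch size are hypothetical placeholders for a real edge workload:

```python
# Minimal sketch: estimate inference throughput per watt by sampling
# GPU board power (via NVML) while running an inference loop.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

def run_inference_batch():
    time.sleep(0.01)  # placeholder for a real inference call

samples = 0
power_readings = []
start = time.time()
while time.time() - start < 10.0:  # sample for ~10 seconds
    run_inference_batch()
    samples += 32  # hypothetical batch size
    power_readings.append(pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0)  # mW -> W

avg_watts = sum(power_readings) / len(power_readings)
throughput = samples / (time.time() - start)
print(f"{throughput:.1f} samples/s at {avg_watts:.1f} W "
      f"= {throughput / avg_watts:.2f} samples/s/W")
pynvml.nvmlShutdown()
```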
Inferencing Everywhere
Intel’s 4th Gen Xeon Scalable processors also offer significant improvements for AI workloads. The R760 with 4th Gen Intel Xeon Scalable processors uses Intel Advanced Matrix Extensions (AMX) to deliver up to an 8x improvement in inference throughput. Because AMX is an extension of the x86 instruction set, developers can use it to accelerate AI workloads while continuing to run non-AI workloads on the same familiar instruction set. This benefit extends across the entire PowerEdge portfolio, creating a major performance uplift for AI workloads from edge to core.
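In practice, developers rarely touch AMX directly: frameworks such as PyTorch route low-precision CPU math through oneDNN, which dispatches to AMX tile instructions when it detects a supporting Xeon. A minimal sketch, assuming a recent PyTorch build; the model and shapes here are arbitrary placeholders, and the same script runs unchanged (via AVX-512 or older paths) on CPUs without AMX:

```python
# Minimal sketch: bfloat16 CPU inference. On a 4th Gen Xeon, PyTorch's
# oneDNN backend can execute the linear layers with AMX tile
# instructions automatically; no AMX-specific code is needed.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))
model.eval()

x = torch.randn(32, 1024)
with torch.inference_mode(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = model(x)

print(y.shape, y.dtype)  # expect bfloat16 outputs under autocast
```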
These technological advancements in GPU-optimized servers and Intel’s Xeon processors are laying the foundation for new AI training and inferencing software, generative AI models, AI DevOps tools, and AI applications. Dell Technologies’ latest release of AI- and ML-enabled platforms provides the flexibility end users need to create AI applications that span from core to edge.
Building an AI-powered Future
Dell Technologies has put a significant focus on the architecture and design of these platforms to support both current and future AI applications. These advancements in infrastructure are necessary to keep up with the growing demand for AI and the increased complexity of the models and workloads being developed. With its latest PowerEdge servers, Dell Technologies is paving the way for a more efficient and powerful AI future both inside and outside the data center.