In this video, you'll get an overview of the tasks you must perform to integrate VxRail into your data center and prepare for automated deployment. Task One: Prepare the physical installation of the VxRail nodes into your data center.
That is, plan for rack space and power consumption in your data center for the planned cluster and for any future expansion. Task Two: Work with your network administrators to connect the VxRail nodes to your switch infrastructure.
That is, the VxRail cluster must integrate with your data center network infrastructure to support operations. Involving your networking team during the planning phase is essential to provide the appropriate supporting switch infrastructure.
Task Three: Work with your network administrators to capture the settings required for input into the VxRail Configuration Helper. That is, the configuration settings used for VxRail automated deployment must be compatible with the supporting data center network and data center services.
Work in tandem with the network team to capture the common settings required for the VxRail cluster and the supporting data center network. Finally, Task Four: Determine the best option for connecting to VxRail Manager to perform VxRail automated deployment.
That is, VxRail automated deployment is initiated and driven through VxRail Manager. You must have a way to connect to VxRail Manager through your data center network.
This image shows the basic architecture of a four-node cluster connected to a pair of top-of-rack Ethernet switches and a management switch.
A VxRail standard cluster consists of three or more VxRail nodes connected to one or more Ethernet switches, which are referred to as top-of-rack switches.
The Ethernet switches serve as a backplane for the VxRail cluster, enabling connectivity between the VxRail nodes and connectivity to the upstream network.
During a VxRail automated deployment, VxRail Manager captures the settings required to integrate the VxRail nodes with your data center infrastructure, and configures a hyper-converged cluster.
A separate management switch is recommended to enable access to the VxRail node hardware for IT Management. This image shows front and back views of the five VxRail models we support.
Any combination of VxRail node models can be integrated into a cluster, except for the first three nodes. The disk drives installed in the front slots of the nodes are configured into a vSAN datastore during the automated deployment process to provide storage resources for the cluster.
The Ethernet ports on the rear of the nodes are configured to connect the VxRail networks to your data center network. When planning physical capacity for future growth, consider that each E-Series and D-Series node requires 1U of rack space, while the other node models require 2U.
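The rack-space math above can be sketched as a quick planning helper. This is a minimal illustration only: the 1U figure for E-Series and D-Series nodes comes from this section, while the other model-series names and the planned node counts below are hypothetical examples.

```python
# Rack units per node model, per the guidance above: E-Series and
# D-Series nodes occupy 1U; the other models occupy 2U.
# The P/V/S series names here are illustrative placeholders.
RACK_UNITS = {"E-Series": 1, "D-Series": 1, "P-Series": 2, "V-Series": 2, "S-Series": 2}

def total_rack_units(planned_nodes):
    """Sum the rack units needed for a planned mix of node models."""
    return sum(RACK_UNITS[model] * count for model, count in planned_nodes.items())

# Hypothetical four-node cluster: two E-Series and two P-Series nodes.
print(total_rack_units({"E-Series": 2, "P-Series": 2}))  # -> 6
```

Running the same calculation against your expansion plans gives the extra rack units to reserve up front.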
Let's get a quick overview of the Dell Enterprise Infrastructure Planning Tool which is a great resource for capturing the physical and electrical requirements for your VxRail cluster.
In the left pane, select VxRail from the drop-down menu and then select VxRail nodes to build your planned cluster. The result is a report that you can use for preparing your data center for the installation of VxRail nodes.
Next, let's look at network connectivity. Each VxRail node has either NDC or OCP ports built into the back, depending on the model selected.
Based on the model you select for your cluster, you have the option of installing PCIe cards to supply additional Ethernet ports for your workload.
You can choose 10 Gigabit Ethernet or 25 Gigabit Ethernet to support the VxRail workload. Depending on the selected Ethernet speed, you have the choice of RJ45, SFP+, or QSFP+ type ports for your VxRail cluster.
When planning the top-of-rack switches to support the VxRail networks, ensure that the switches you select match the Ethernet port types on the VxRail nodes, and that you have sufficient open port capacity on the switches.
In addition, make sure that you have matching cables for the Ethernet port type selected. You can choose to deploy your VxRail cluster by connecting either two Ethernet ports per node, or four Ethernet ports per node to your network.
You must be consistent and reserve either two ports or four ports in every node in the cluster. Option One: use only the built-in integrated Ethernet ports to support VxRail networking. Option Two: use both built-in ports and PCIe-based ports.
Each VxRail network is assigned to a VLAN and passed through the connected node ports and switch ports, and between the top-of-rack switches.
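The port-reservation rule above (the same two or four ports on every node) and the switch-capacity check can be sketched as a simple validation. This is an illustrative helper only; the function name and parameters are my own, not part of any VxRail tooling.

```python
def validate_port_plan(ports_per_node, node_count, open_switch_ports):
    """Check a per-node Ethernet port reservation plan.

    Per the guidance above, every node in the cluster must reserve
    the same number of ports: either 2 or 4. The total must also fit
    within the open port capacity of the top-of-rack switches.
    Returns the number of switch ports the plan consumes.
    """
    if ports_per_node not in (2, 4):
        raise ValueError("Reserve either 2 or 4 ports on every node")
    required = ports_per_node * node_count
    if required > open_switch_ports:
        raise ValueError(f"Need {required} switch ports, only {open_switch_ports} open")
    return required

# Hypothetical plan: four nodes, four ports each, 48 open switch ports.
print(validate_port_plan(4, 4, 48))  # -> 16
```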
Work with your network administrators to capture this setting and all network settings required for your VxRail cluster. All of the components on the External Management network require a public IP address and a hostname, and must be routable upstream.
A public IP subnet should be reserved for this purpose. The Internal Management network is used for node management and node discovery, and does not require IP addresses.
The vSAN network enables storage for virtual machines. Each node requires either a public IP address for multi-rack deployments, or a private IP address for single-rack clusters. An IP subnet must be reserved for this purpose.
The vMotion network enables virtual machine mobility between nodes, and each node requires an IP address. The subnet can be either a public or private IP address range based on whether virtual machine mobility is required outside of the cluster.
You can create as many guest networks as necessary to meet your business requirements. The Out-of-band Management network enables IT to access the VxRail nodes for hardware maintenance.
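Since the External Management, vSAN, and vMotion networks each need a reserved subnet with one address per node, a quick capacity check can catch undersized reservations before deployment. This is a generic sketch using Python's standard `ipaddress` module; the CIDR ranges below are placeholders your network team would replace with real reservations.

```python
import ipaddress

def check_subnet_capacity(cidr, node_count):
    """True if the reserved subnet has enough usable host addresses
    for one IP per node (network and broadcast addresses excluded)."""
    net = ipaddress.ip_network(cidr)
    usable = net.num_addresses - 2
    return usable >= node_count

# Hypothetical reservations for a four-node cluster.
plan = {
    "external_management": "192.168.10.0/27",
    "vsan": "192.168.20.0/28",    # private range is fine for a single-rack cluster
    "vmotion": "192.168.30.0/28",
}
for name, cidr in plan.items():
    print(name, check_subnet_capacity(cidr, node_count=4))  # prints True for each
```

Sizing subnets beyond the current node count also leaves headroom for future cluster expansion.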
The network settings captured to support cluster operations are applied during the VxRail automated deployment process, during which one or more virtual distributed switches are configured to support VxRail networking.
The vmnics are configured as uplinks, linking the physical data center network to the virtual network. A port group is configured on the virtual distributed switch, or switches, for each required VxRail network.
And, the VLAN reserved for each VxRail network is configured on the port group. The External Management network and guest networks must be able to pass through the uplinks on the top-of-rack switches.
Any guest networks configured on the VxRail network must pass through the uplinks to reach end users and external applications. The External Management network must be able to reach your DNS and NTP servers.
If you plan to deploy VxRail against a vCenter in your data center, the External Management network must be able to connect to this instance. And, for Call Home, if you have an external SRS gateway, then that must also be reachable from this network.
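Because the External Management network must reach DNS and NTP servers, and possibly a customer-supplied vCenter or an SRS gateway, a few pre-deployment reachability checks can save troubleshooting later. This is a generic sketch using Python's standard `socket` module; the hostnames are hypothetical placeholders, and since NTP uses UDP, only DNS resolution is checked for the NTP server here.

```python
import socket

def can_resolve(hostname):
    """True if the hostname resolves through the configured DNS server."""
    try:
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        return False

def tcp_reachable(host, port, timeout=3):
    """True if a TCP connection to host:port succeeds within the timeout.
    Useful for checking a customer-supplied vCenter (typically port 443)
    or an external SRS gateway from the External Management network."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical infrastructure hostnames -- substitute your own.
for host in ("dns.example.local", "ntp.example.local", "vcenter.example.local"):
    print(host, can_resolve(host))
```

Run these checks from a host on the External Management subnet so the results reflect the path the VxRail nodes will actually use.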
This architecture diagram shows a VxRail supplied vCenter managing a VxRail cluster at the top, and a customer supplied vCenter managing a pair of VxRail clusters at the bottom.
You must provide a hostname and an IP address on the VxRail External Management network if you choose to deploy your VxRail cluster against a vCenter instance VxRail provides.
If you provide your own vCenter instance instead, be sure to reference the VxRail support matrix to identify the supported version for your planned cluster.
Also, ensure that you follow the guidance in the VxRail vCenter planning guide for details on configuring your vCenter instance to support VxRail clusters.
This architecture diagram shows two methods to access VxRail Manager. Once you have finished data center preparations and captured the VxRail settings, connect to VxRail Manager on the primary node to perform automated deployment.
A laptop can be connected to a switch access port configured to connect to the VxRail External management network. A jump host can also be used, so long as the connectivity to the VxRail External Management network is in place.
Now that you understand what is expected to deploy a VxRail cluster, work through these major tasks to continue the process.
For more information about VxRail, visit: Dell.com/Support.