Thank you for watching this video demonstration of the ScaleIO vSphere plugin. This demo assumes some knowledge of ScaleIO in a VMware environment, so before you get started, you may want to look at additional video content on the Dell EMC ScaleIO YouTube channel, particularly the automated deployment video shown here. That video explains how the VMware hypervisor works seamlessly with ScaleIO and also shows how this demonstration infrastructure was built. What does the plugin do for us? It allows administrators to manage ScaleIO storage from the vSphere Web Client, which VMware customers are already using; VMware ESX is by far the most popular hypervisor in use today.
This drives the need for the plugin, which makes storage much easier to manage and consume in the virtualized data center. The ScaleIO plugin automates key tasks such as installing ScaleIO software on the ESX infrastructure, creating ScaleIO clusters on automatically provisioned VMware VMs, adding storage providers, nodes, and VMs to clusters, and provisioning and deprovisioning ScaleIO volumes and VMware datastores. In summary, it enables administrators to manage nearly all aspects of storage within the VMware environment. In the remainder of this video, we'll walk through a description of the plugin. Then we'll take a look at the demo environment and the automated workflow.
From there, we'll deploy a storage cluster and then add a storage node to it. We'll take a look at advanced settings, and lastly we'll have a few closing statements. The plugin is easily installed from a Windows client using a PowerShell script. It integrates seamlessly with the vSphere Web Client and provides a single pane of glass for managing the storage system as well as the rest of the virtualized environment. We'll start with three ESX nodes interconnected by a 10-gigabit IP storage network. The plugin first creates a partition on the local SATADOM disk on each node, placing a datastore on it. It goes on to use those datastores to contain a storage virtual machine on each node. Those SVMs are cloned from a preconfigured VM template which contains the ScaleIO software.
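As an aside for the scripting-minded: once that PowerShell script has run, you can confirm from any machine with the pyVmomi library that the plugin's extension is registered with vCenter. This is only a minimal sketch; the vCenter address, credentials, and the extension key are placeholders to adjust for your environment.

```python
# Minimal sketch: check whether a ScaleIO plugin extension is registered in vCenter.
# The extension key below is a guess; use the key from your registration script.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()            # lab only; use real certificates in production
si = SmartConnect(host="vcenter.lab.local",       # placeholder vCenter address
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ctx)
try:
    ext_mgr = si.RetrieveContent().extensionManager
    ext = ext_mgr.FindExtension("com.emc.scaleio")  # hypothetical extension key
    if ext:
        print(f"ScaleIO plugin extension found, version {ext.version}")
    else:
        print("ScaleIO plugin extension not registered yet")
finally:
    Disconnect(si)
```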
Next, the plugin will install and configure the primary metadata manager (MDM) on the first storage virtual machine, on the first node. Then it will continue by installing the standby MDM on the second node and the MDM tiebreaker on the third node. Once the MDM cluster is built, the lightweight install agent gateway VM is installed on the first node. And lastly, the storage provider, the ScaleIO Data Server software, is installed and configured, and then selected disks are allocated to a storage pool. Let's go ahead and create our storage cluster. The first thing we'll do here, as a little pre-validation of the environment, is install the ScaleIO Data Client, or SDC, on a select number of ESX hosts. To do that, we're going to kick off the plugin and select the Install SDC on ESX task. We see our vCenter IP here; there is our data center and the lab that we're installing into.
We'll go ahead and install on all four of these hosts by checking here and entering the root password for ESX. The process is fully automated, and we see it's already completed. Next, we'll move on to deploying our ScaleIO cluster. After rebooting those hosts, we can go ahead and create our ScaleIO cluster. First, we tell the installer we wish to create a new ScaleIO system. We accept the license agreement, provide a name for the system, and provide a password for the admin user of that system. And now we identify the hosts that we want to install in our cluster. We're going to start by creating a three-node cluster; we'll add a fourth node later. Next, we identify the roles of each node that we selected.
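Before we go further with the wizard, a quick side note: you can also confirm from a script that the SDC package really landed on each host, assuming SSH is enabled on the ESXi hosts and the paramiko package is available. The host names, password, and the name filter in the sketch below are assumptions; match them to what `esxcli software vib list` reports in your environment.

```python
# Rough sketch: verify the ScaleIO SDC VIB is present on each ESXi host over SSH.
import paramiko

HOSTS = ["esx1.lab.local", "esx2.lab.local", "esx3.lab.local", "esx4.lab.local"]  # placeholders

for host in HOSTS:
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(host, username="root", password="changeme")
    _, stdout, _ = ssh.exec_command("esxcli software vib list")
    vibs = stdout.read().decode()
    # "scini"/"sdc" is an assumed naming pattern for the SDC VIB; adjust as needed.
    present = any("scini" in line.lower() or "sdc" in line.lower()
                  for line in vibs.splitlines())
    print(f"{host}: SDC VIB {'present' if present else 'missing'}")
    ssh.close()
```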
This is a little bit easier because we're creating a three-node cluster here, so we're selecting defaults and just entering the DNS server. We create our protection domain and our storage pool name. We don't have enough nodes to set up fault sets, so we're going to skip that step for now. Now we're going to identify the nodes that we're installing; this determines which nodes will provide storage. Based on the nodes we just selected, we will now select the individual disk devices that will be assigned to the storage pool. We've selected five solid-state devices per node. We've already installed the ScaleIO Data Client, or SDC, which consumes storage. Now we'll select the ESX host which will be used as a ScaleIO gateway for the lightweight install agent.
We'll select the ScaleIO virtual machine template and provide a password which will be used when the storage virtual machines are provisioned. Now we assign our networks and then our IP addresses to establish our ScaleIO cluster. Let's quickly review the IPs: we have the management IP for each storage virtual machine, its subnet mask, and the gateway, so that we have access to the management network, and then the data IPs for the primary and secondary interfaces. Here we have the virtual IP addresses on both VLANs. We'll proceed to the summary and then kick off the deployment process; before we can do that, we have to revalidate with vCenter. Now the deployment process has started. Notice at the bottom of the screen that the storage virtual machine for the first node is actually being cloned. We'll fast-forward through some of these sections and click Finish. The cluster build is complete.
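If you're planning a similar deployment, it can help to sanity-check your own IP plan against the expected subnets before clicking Finish. Here's a small standard-library sketch; every address and subnet in it is a made-up lab value, not one from this demo.

```python
# Quick sanity check of an SVM IP plan using only the Python standard library.
import ipaddress

MGMT_NET = ipaddress.ip_network("192.168.1.0/24")   # management network (example)
DATA_NET_A = ipaddress.ip_network("10.10.1.0/24")   # data VLAN A (example)
DATA_NET_B = ipaddress.ip_network("10.10.2.0/24")   # data VLAN B (example)

svm_plan = {
    "svm-node1": {"mgmt": "192.168.1.51", "data": ["10.10.1.51", "10.10.2.51"]},
    "svm-node2": {"mgmt": "192.168.1.52", "data": ["10.10.1.52", "10.10.2.52"]},
    "svm-node3": {"mgmt": "192.168.1.53", "data": ["10.10.1.53", "10.10.2.53"]},
}

for svm, ips in svm_plan.items():
    assert ipaddress.ip_address(ips["mgmt"]) in MGMT_NET, f"{svm}: management IP off its subnet"
    a, b = (ipaddress.ip_address(ip) for ip in ips["data"])
    assert a in DATA_NET_A and b in DATA_NET_B, f"{svm}: data IP off its VLAN"
# The virtual IPs on both VLANs can be checked the same way.
print("IP plan is consistent with the expected subnets")
```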
At this point, we should take a look at the ScaleIO storage cluster we just created. First, we see the four nodes in our demo lab and the three storage VMs that the deployment wizard created for us. Recall that we created a three-node cluster, so three of the nodes are being used to provide storage. As we glance through the SVMs, observe the node section in the summary panel and see that the node's role and caching status are reflected there. The data client, or SDC, is not reflected here because it actually gets installed on the ESX host itself; that way, any number of application virtual machines will be able to access ScaleIO volumes as they are provisioned. Note there is a fourth VM, which again is the lightweight install agent gateway. This VM is used by the plugin to manage the installation.
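Since the gateway VM is up at this point, you can also peek at the new system from outside the UI through the ScaleIO Gateway REST API. Treat this as a rough sketch: the gateway address and credentials are placeholders, and the endpoint paths and field names should be verified against the ScaleIO REST API guide for your release.

```python
# Rough sketch: log in to the ScaleIO Gateway REST API and list the systems it manages.
import requests

GATEWAY = "https://gateway.lab.local"        # placeholder gateway VM address
ADMIN = ("admin", "Password123!")            # placeholder credentials from the wizard

# The login call returns a session token, which is then used as the basic-auth password.
token = requests.get(f"{GATEWAY}/api/login", auth=ADMIN, verify=False).json()

systems = requests.get(f"{GATEWAY}/api/types/System/instances",
                       auth=("admin", token), verify=False).json()
for system in systems:
    # Field names such as "mdmClusterState" may vary by release.
    print(system.get("name"), system.get("mdmClusterState", "unknown"))
```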
Now let's look at the storage provisioned by the installer for us. We see on the first node that a datastore of 22 gigabytes was created for us, and that's where the SVM resides. Now let's return to the plugin and take a closer look at storage. If we click here on ScaleIO Systems, we'll find our cluster. We'll see that it has three storage provider nodes and 15 disk devices. We can drill further to see details related to the protection domain and, ultimately, the storage pool that was created. We've drilled to the protection domain, and we see we have one storage pool, three SDCs, and 15 storage devices. If we click on Volumes, while we have not created any volumes in our cluster as of yet, there are some volumes in the environment that are available for review. You can also use the plugin to provision ScaleIO volumes; there's a separate provisioning video available on YouTube, and we'll refer to that at the end of the presentation.
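For reference, volumes can also be created outside the plugin through the same gateway REST API. Again, this is only a sketch: the endpoint and body fields follow the general ScaleIO REST pattern and should be verified against the API guide, and the storage pool ID is a placeholder you'd look up first.

```python
# Hedged sketch: create a ScaleIO volume through the Gateway REST API.
import requests

GATEWAY = "https://gateway.lab.local"        # placeholder gateway VM address
token = requests.get(f"{GATEWAY}/api/login",
                     auth=("admin", "Password123!"), verify=False).json()

body = {
    "name": "demo-vol-01",
    "volumeSizeInKb": str(16 * 1024 * 1024),   # 16 GB expressed in KB
    "storagePoolId": "<storage-pool-id>",      # placeholder; query the storage pools first
}
resp = requests.post(f"{GATEWAY}/api/types/Volume/instances",
                     json=body, auth=("admin", token), verify=False)
print(resp.status_code, resp.json())
```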
Also, instead of drilling down through ScaleIO Systems, you can access the various storage components from the ScaleIO menu here: the protection domains, storage pools, ScaleIO Data Servers, and so on. Here's the first node in our three-node cluster, the protection domain, and the disk devices associated with that node. We didn't create any RFcache devices or fault sets, so we won't find anything there. So we can see that once the cluster is built, we can go back and manage it directly through the vSphere client. Next, we should take a look at adding a node. In the life cycle of storage, there's always that event where we need to add performance or capacity; to achieve that in a ScaleIO environment, we simply add storage provider nodes, and that's what we'll do next. It's important to point out at this stage what makes ScaleIO and software-defined storage so unique: ScaleIO is IP-based block storage.
The primary building block of ScaleIO is the industry-standard x86 server, in contrast with the classical storage array approach, which requires specialized hardware. As capacity and performance demands increase, servers are added to your storage cluster as needed. Traditional storage arrays only scale to a handful of controllers, while ScaleIO scales beyond 1,000 nodes. In a virtualized data center, a single ScaleIO cluster can support countless VMware clusters, and as nodes are added or removed, data is automatically rebalanced across the ScaleIO cluster, mitigating those large data migrations that disrupt our business operations. Let's see what's involved in adding one additional storage node to our cluster. We've discovered that we need a little more storage performance or capacity, so we're going to go ahead and add a storage node to our cluster. To do that, we're going to go back into the deployment wizard and add a server to a preregistered ScaleIO system.
If you remember, we called it finance. Now we're going to select the node that we're going to add to the system. It will not carry any additional metadata management tasks or roles, so we can skip this. We will have to provide DNS information. We don't need to create any new protection domains or storage pools, so we'll skip that as well. We're not adding any fault sets; we don't have any, and we don't have enough nodes to add them. We're going to add this new node as a ScaleIO Data Server, or SDS. We'll drill down to that node and select the disk devices that we wish to use; there are five of them. We're not going to be installing the data client, since it's already been installed on this node. We provide the password for the lightweight install agent; again, the gateway is already installed.
So we're simply providing access to that gateway from this storage VM. It's a little bit different this time: since the lightweight install agent gateway is already installed, we need to validate the password, and the green check mark means the password is good. Again, we select the ScaleIO storage virtual machine template and then the password that we want for the SVM, the networking details, and then the IP addresses. We enter the IPs just as we did before: the management IP, the gateway IP address, and the data IPs for the data networks. In the end, it summarizes all of the characteristics of the SDS. We click Finish, reconfirm, and again the install agent takes over and begins the deployment process for us. The deployment is complete, so let's go ahead and take a look at the environment.
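As a quick check, the same gateway REST API sketch from earlier can confirm that the new SDS joined the cluster; the endpoint and field names here are likewise assumptions to verify against your documentation.

```python
# Rough sketch: count the SDS nodes after adding the fourth server.
import requests

GATEWAY = "https://gateway.lab.local"        # placeholder gateway VM address
token = requests.get(f"{GATEWAY}/api/login",
                     auth=("admin", "Password123!"), verify=False).json()

sds_list = requests.get(f"{GATEWAY}/api/types/Sds/instances",
                        auth=("admin", token), verify=False).json()
print(f"SDS nodes in the cluster: {len(sds_list)}")   # expect 4 after this step
for sds in sds_list:
    print(" -", sds.get("name"), sds.get("sdsState", "unknown"))
```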
Now we see a fourth node, and we see that it has the ScaleIO Data Server installed on it, as well as the lightweight install agent and the RFcache software. That concludes our deployment of this last, fourth node. It's a good time to explore the advanced settings at this point. These are used to establish the behavior of the deployment and other automated processes. The first, Enable VMDK creation, is a way to automatically deploy basic VMDKs during node deployment; notice we did not select that when we ran the wizard. Enable RDMs on non-parallel SCSI controllers allows us to follow better performance and configuration practices related to flash devices and flash disk drives. Allow takeover of devices with existing signature covers the case where we have a node that was previously deployed in ScaleIO, and maybe we took that node out of the system and wish to reuse it in another context; if we mark this checkbox, ScaleIO during the deployment process will let you capture all of the devices that were previously used and reuse them. Then there is Allow using non-local datastores for SVMs.
This is a performance-related setting, where you may wish the deployment wizard to insist, if you will, that all of the SVMs are local to the ESX hosts that they're running on. The parallelism limit determines how many concurrent processes the deployment wizard will allow to operate at any given time, so in larger environments, especially production environments, it might be wise to lower this limit. The two bullets shown here are a bit poetic, but the most important takeaway related to this demonstration is simplicity: it just isn't that difficult. We've seen that the ScaleIO vSphere plugin simplifies the management of storage, providing a single pane of glass for administrators to manage their ESX environments and giving them more time and energy to focus on driving business value while delivering results faster.
And now a quick slide on the resources available to you. The primary source of publicly available product information and software is the ScaleIO landing page; the URL is shown here. Refer to our ScaleIO YouTube channel and review the automated deployment video. Also take a look at the provisioning video; it will give you more insight into managing volumes and datastores. And lastly, for more detailed information, see the ScaleIO VMware document, which goes into more detail on installing the plugin itself.
Thank you for watching this video and have a great day.