Welcome to Dell Technologies. In this video, I'm going to walk you through the process of configuring multi-VLAN for PowerProtect Data Manager; in later slides, we are going to call it PPDM. The reference is Dell knowledge base article number 205869. Here is the agenda.
I will talk about PPDM multi-VLAN network traffic types, virtual network planning, supported scenarios, steps to configure a multi-VLAN network, a deep dive into a full network separation network design, a live demo, and then common troubleshooting steps. Some prerequisites apply.
We expect that you are already familiar with the PPDM user interface, have general networking knowledge of both physical and virtual switches, and understand the VLAN concept. There are three types of network traffic in a PPDM multi-VLAN configuration. The first one is management control traffic, typically HTTPS REST API calls; it also includes small file transfers such as logs and update packages.
The second one is data network traffic, for example backup, restore, and replication. The third one is data for management components traffic. It is the same as data traffic, except the backup and restore data is related to management components such as PPDM data and search node data.
The data traffic type and the data for management components traffic type are technically the same; they both use the DD Boost protocol. The difference is that the data type backs up customer data, while the data for management components type backs up the PPDM configuration and search node data.
If data for management components is not defined, by default this backup traffic goes through the management network. Let's have a look at a couple of network designs. The first one is a single network that assigns all traffic types to the same network; there is no separation between management and data.
The second design keeps data for management components traffic on the management network. It separates management network traffic from data and replication network traffic; the management traffic and the data for management components traffic are not separated. The third design puts data for management components traffic on the data network. It also separates management network traffic from data and replication network traffic, but here the data traffic and the data for management components traffic are not separated.
The full network separation design separates management network traffic from data, replication, and data for management components network traffic. PowerProtect Data Manager supports virtual networks for the following use cases: virtual machine backups, guest-level backups, replication, disaster recovery, Cloud DR, and storage data management.
Here are seven steps to configure multi-VLAN networks. The first step is to register the vCenter on which the PPDM VM is deployed. The second step is to configure the network switch ports for trunk mode; this allows the ports to carry traffic for multiple VLANs. The third step is to configure the virtual switch and port groups.
Here are some very important VMware port group concepts to understand before you configure port groups. The first one is virtual guest tagging, or VGT. For port groups on standard virtual switches, configure the port group for VLAN ID 4095, which makes all VLANs accessible. For port groups on distributed virtual switches, use VLAN trunking, which supports specifying multiple VLANs by ID or range. Both behave the same:
they allow multiple VLANs' traffic to pass through the port group. The virtual machines that connect to the port group must be able to handle VLAN tagging. You can use either of them to configure your virtual network. Another port group type is virtual switch tagging, or VST.
The port group VLAN ID ranges from 1 to 4094. A VST port group allows only one VLAN's traffic to pass through the port group. The virtual machines that connect to the port group do not handle VLAN tagging; in other words, the guest has no knowledge that the VLAN exists.
The vSwitch port group handles the VLAN tagging. For more detailed information about VGT and VST port groups on standard and distributed vSwitches, please visit the VMware support website. Step four is to log in to the Data Domain web UI to configure the NICs and VLAN IPs.
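The VGT and VST concepts above can also be sketched from the ESXi command line (a hypothetical fragment; the vSwitch and port group names are placeholders, and the same settings can be applied through the vSphere UI):

```shell
# VGT port group on a standard vSwitch: VLAN ID 4095 passes all tagged
# frames through, so the guest OS must do its own VLAN tagging.
esxcli network vswitch standard portgroup add \
    --portgroup-name "VGT-Data" --vswitch-name vSwitch1
esxcli network vswitch standard portgroup set \
    --portgroup-name "VGT-Data" --vlan-id 4095

# VST port group: the vSwitch tags and untags VLAN 30, so the guest OS
# needs no VLAN configuration at all.
esxcli network vswitch standard portgroup add \
    --portgroup-name "VLAN30-PG" --vswitch-name vSwitch1
esxcli network vswitch standard portgroup set \
    --portgroup-name "VLAN30-PG" --vlan-id 30
```

On a distributed vSwitch, the equivalent of the VGT setting is the VLAN trunking option on the distributed port group.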
The Data Domain administration user guide provides more information. Step five is to log in to the PPDM web UI to add the replication source and target Data Domain systems as protection storage. Step six is to set the network names and purposes for each Data Domain physical or VLAN interface previously configured.
Step seven is to add virtual networks, which includes creating a pool of static IP addresses. The IP pool is used to auto-assign IPs to management components, such as the PPDM and search node data for management components VLAN interfaces, and to the vProxy data network VLAN interfaces.
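To illustrate the static IP pool in step seven, here is a minimal Python sketch (the subnet and range are hypothetical, not taken from the demo) of how a contiguous pool of host addresses inside a VLAN subnet can be enumerated, which is essentially what PPDM draws from when it auto-assigns addresses:

```python
import ipaddress

def build_ip_pool(subnet: str, start: int, end: int) -> list[str]:
    """Return the host addresses at 1-based offsets start..end within subnet."""
    hosts = list(ipaddress.ip_network(subnet).hosts())
    return [str(h) for h in hosts[start - 1:end]]

# Hypothetical data for management components subnet on VLAN 20:
# reserve five addresses for PPDM itself and the search nodes.
pool = build_ip_pool("10.10.20.0/24", 100, 104)
print(pool)  # five addresses, 10.10.20.100 through 10.10.20.104
```

The pool only needs to be large enough for the components on that network; backup VLAN pools are sized for the vProxies instead.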
The network names are not required to match the network names configured in step six, but it is strongly recommended that you use the same network name in both locations for each virtual network. A virtual network can be assigned to a protection policy or to an individual asset.
If the virtual network assigned to an asset is different from the one assigned to the policy that the asset belongs to, the asset assignment overrides the protection policy rule. The best practice is to always configure the backup virtual network by protection policy. You can use the by-asset method to specify a virtual network for any asset;
however, this method is especially suited to assets that are exceptions to the backup policy rule. Now I will give you a deep dive on how to configure a VMware virtual network in a full network separation scenario. In this session, we are going to configure multi-VLAN with full network separation.
Keep in mind, when we say full network separation, we mean the logical network, not the physical one. In this case, we have separate logical networks for the management VLAN, the data for management components VLAN, backup VLAN one, backup VLAN two, and the replication VLAN. Here is a virtual network port group configuration in an environment where we have two Data Domain
Virtual Edition (DDVE) VMs set up in the same vCenter network as the PPDM virtual machine. For the management network, we use the default standard vSwitch0 and use the default port group named VM Network as the management port group, with no VLAN tagging; the VLAN ID is set to zero. Then we create a second vSwitch, vSwitch1, for passing data traffic.
We create a virtual guest tagging (VGT) port group if using a standard vSwitch, or a trunk port group if using a distributed vSwitch. A VGT or trunk port group allows VLAN-tagged virtual machine traffic to pass through the vSwitch. In this case, the port group accepts VLAN 20, 30, 40, and 50 tagged data traffic.
We also create two virtual switch tagging (VST) port groups on vSwitch1, named VLAN 30 port group and VLAN 40 port group. These are for virtual machine agent-based backup connections. There may be tens, hundreds, or thousands of agent-based VM backup clients. With the VST port groups, we don't have to configure the VLAN interface within those virtual machine guest operating systems, which saves a lot of administration time.
But if you don't have any guest agent backup VM clients in the vCenter environment, then you don't need to create the two port groups. If physical Data Domain systems are part of your multi-VLAN setup, you will need to manually configure the Data Domain VLANs and the physical switch ports based on your multi-VLAN design. We will talk more about the details and give examples in later slides.
The next virtual network port group configuration is the same as the previous DDVE setup, except there is no DDVE in the picture; the port group configuration is exactly the same. You can have more physical separation for the VLAN network traffic to gain better performance.
In this example, a dedicated vSwitch and uplinks carry the management and data for management components network traffic, and a dedicated vSwitch and uplinks carry each backup VLAN's traffic. Because of the VLAN separation, you need to set up a VST port group for each VLAN's traffic instead of one shared VGT or trunk port group.
Next, we are going to have a live demo of the full network separation. We have a physical Dell switch that directly connects to a physical Data Domain model DD6800 and also connects to the ESXi host cluster on which the PPDM VM is deployed. We also have a DDVE VM deployed in the cluster that is used for replication target storage. vSwitch0 has two uplinks named vmnic0 and vmnic4.
We have three ESXi hosts, so we have a total of six uplinks between the ESXi hosts and the Dell switch. Let's go through the seven steps to configure PPDM multi-VLAN. The first step is to register the vCenter server on which the PPDM virtual machine is deployed; log in to the PPDM UI for this task. Now step two is to configure the physical switch.
In this demo, we are going to use a Dell S4048-ON switch. The show interface description command shows the following info. The highlighted area shows the three ESXi hosts' vSwitch0 uplink connections: vmnic0 and vmnic4 connect to Dell switch ports 1, 2, 5, 6, 9, and 10, and port 13 is an uplink port that connects to an upstream switch.
Port 33 connects to the Data Domain management interface, and ports 49/1 through 49/4 connect to the Data Domain data interfaces. The screen on the right shows the VLAN configuration. The untagged VLAN 121 is the management VLAN, which groups all three ESXi host ports and the switch uplink port together.
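The switch side of this setup can be sketched roughly as follows (a hypothetical fragment in Dell OS10-style syntax; the port numbers follow the demo, but verify the exact commands against the OS release on your switch):

```
! ESXi uplink port: untagged management VLAN plus tagged PPDM VLANs
interface ethernet 1/1/1
 description ESXi-1-vmnic0
 switchport mode trunk
 switchport access vlan 121
 switchport trunk allowed vlan 20,30,40,50
 no shutdown

! Data Domain data port: tagged PPDM VLANs only
interface ethernet 1/1/49:1
 description DD6800-data
 switchport mode trunk
 switchport trunk allowed vlan 20,30,40,50
 no shutdown
```

The key point is simply that every ESXi uplink and every DD data port must trunk all four tagged VLANs.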
The tagged VLANs 20, 30, 40, and 50 are our PPDM multi-VLAN configuration; the six ESXi ports and the four DD data ports are grouped together to allow the four types of VLAN traffic to pass through. Step three is to configure the virtual network port groups. Let's first look in the vCenter UI and go to the first ESXi host in the cluster. Click on the ESXi host, then click Configure and expand Networking.
Then click Virtual switches. Under vSwitch0, click Add Networking, then select the Virtual Machine Port Group for a Standard Switch option and click Next. Verify that you selected vSwitch0, then click Next again. Under networking, choose the network label for the VGT port group, then select VLAN ID 4095. Add two VST port groups for VLANs 30 and 40
if your VM clients use guest backup agents in this vCenter environment. This type of port group helps simplify guest networking when it comes to guest OS data backup, such as a file system backup or a database backup, without having to configure VLANs on the guest operating system. Next is to configure the replication target DD network interfaces with the corresponding port groups.
We assigned the first network adapter to the management port group named VM Network and assigned the next two data adapters to the VGT port group so that they can pass both VLAN 20 and VLAN 50 traffic. Step four is to configure the DD interfaces. Let's first configure the two Data Domain systems' VLAN interfaces, starting with the primary. Log in to the primary Data Domain UI, click Hardware,
then Ethernet. Highlight the first data interface, eth3a, click the Create drop-down menu, and select VLAN. Enter the VLAN ID, IP address, and subnet mask. In this demo, we choose the first VLAN 20 IP to be 10.10.21.88. Then repeat the same for eth3a to create the VLAN 30, 40, and 50 interfaces. Once the eth3a VLANs are configured, do the same for eth3b, eth3c, and eth3d. The IP assignment would look like this.
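The same VLAN interfaces can also be created from the DD OS command line instead of the web UI (a sketch only; the address is the demo value, but confirm the exact syntax in the DD OS command reference for your release):

```
# Create a VLAN 20 sub-interface on physical port eth3a
net create interface eth3a vlan 20

# Assign its IP address and netmask, then bring it up
net config eth3a.20 10.10.21.88 netmask 255.255.255.0
net enable eth3a.20
```

Repeating this for each VLAN ID and each physical port produces the same interface layout shown in the UI.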
Once the primary Data Domain VLANs are configured, log in to the replication Data Domain storage and configure the same, but without the two backup networks, VLANs 30 and 40. Step five is to add the two pre-configured Data Domain systems in the PPDM UI. Steps six and seven are to label and add networks.
In the PPDM UI, this has two parts. The first is to label the Data Domain interfaces with a network label and purpose. You can give any name to a DD interface; the best practice is to give the interface a meaningful name, such as management, VLAN 20 data for management components, and so on.
In this demo, I use Default Network to label the DD management interface and use the VLAN ID to name the rest of the data interfaces. Now we are going to label the replication Data Domain network interfaces. The second part is to add networks in the PPDM UI. PPDM has a default network named Default Network; let's change its purpose to be management only.
The second network is the data for management components network, VLAN 20. Here we are going to create a static IP pool to let PPDM automatically assign an IP to PPDM itself and the search nodes. The rest of the networks are the backup VLANs 30 and 40 and the replication network, VLAN 50. For VLANs 30 and 40, the IP pools serve the vProxy data interface IP auto-assignment.
We now have five virtual networks created successfully, which completes the multi-VLAN network creation. Let's create a protection engine (vProxy) and then run a test VM backup. On this page, we are going to configure the backup network interfaces. We are going to use the VGT network port group
we previously created in the vCenter UI, then pick an IP pool from which an IP address will be assigned to the vProxy data interface. While vProxy one is creating, let's create vProxy two for VLAN 40 VM backups. We now have our two vProxies created and ready for use in a VM backup policy. Let's go to Protection Policies and add a VLAN 30 backup policy.
Select the virtual machines you want included in the backup policy, then add the primary backup storage and select the backup network interface. Now we have the backup policy for VLAN 30 created. Then go to Jobs to check the policy creation progress. Once the policy creation is completed, go to the protection policy to start a job manually.
Now the backup shows completed successfully. Next, let's configure a replication policy with the VLAN 50 network. It now shows replication completed successfully. We are going to test an agent file system backup using VLAN 30. I already have a Windows VM guest machine registered with PPDM.
Let's first look at the port group assignment on this VM. There are two NICs added to this VM: network adapter one connects to the management port group, VM Network, and network adapter two connects to the VLAN 30 VST port group. Now let's configure a file system backup policy.
The file system agent backup policy was created successfully. Let's run a backup. Now the file system backup has completed successfully. The last step in this demo is to create a search node with an additional data for management components IP in VLAN 20. Here we select the VGT network port group and the VLAN 20 IP pool.
Now we have the search node deployed successfully. Here are some troubleshooting tips for during or after a multi-VLAN configuration. Make sure the primary and replication DD systems can ping each other through all configured VLAN networks, and make sure the vProxy and search node can reach their corresponding DD VLAN IPs.
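A quick reachability checklist for those tips might look like this (the IP addresses are hypothetical placeholders, and command names vary by platform, so treat this as a sketch):

```
# From the primary DD (DD OS), ping the replication DD on each shared VLAN
net ping 10.10.20.201
net ping 10.10.50.201

# From the vProxy or search node guest OS, ping the corresponding DD VLAN IP
ping -c 3 10.10.30.88

# From the ESXi host shell, verify the backup VLAN path to the Data Domain
vmkping 10.10.30.88
```

If any of these fail, recheck the trunk configuration on the physical switch and the port group VLAN IDs before looking at PPDM itself.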
If adding the data for management network fails in the PPDM UI, please review the virtual port group configuration and try again. If the transparent snapshot data mover (TSDM) is used, make sure the ESXi hosts can ping the Data Domain backup VLAN IPs. For further information, please check out Dell knowledge base article number 205869.
Thank you for watching.