Unsolved
Windows Failover Cluster Manager not showing CSVs on C:\ClusterStorage folder
I have added a new node to an existing ScaleIO environment, but it doesn't seem to pick up the CSVs in the ClusterStorage folder when the node is added to the cluster.
I can see, when I run the --query_all_sdc command, that the node is there as an SDC, but there are no IOPs on it, unlike the others.
Also, when running the drv_cfg --query_mdms command on the SDC, I get the following:
MDM-ID bd4cXXXXXXX0e23 SDC ID cf0XXXXX00000005 INSTALLATION ID a47fbXXXXXX1981
IPs [0]-10.10.10.14
MDM-ID bd4cXXXXXXX0e23 SDC ID cf0XXXXX00000005 INSTALLATION ID a47fbXXXXXX1981
IPs [0]-10.10.11.14 [1]-10.10.10.15 [2]-10.10.11.15
How do I get them on one line, as with the other nodes? I added the second line of IPs later: when I initially added the first IP, I put a space after the comma between the IP addresses.
Is there also a way to unmap volumes from SDCs?
Thanks,
John
dstratton
October 31st, 2016 14:00
Regarding the query "How do I get them on one line as with the other nodes?":
If the CLI and the MDM do not reside on the same server, add the --mdm_ip parameter to all CLI commands. In a non-clustered environment, use the MDM IP address. In a clustered environment, use the IP addresses of the master and slave MDMs, separated by a comma. For example:
scli --mdm_ip 10.10.10.3,10.10.10.4 --login --username supervisor1
Regarding the query "I can see that when I run --query_all_sdc command that the node is there as an SDC but there are no IOPs on it unlike the others":
Please verify the SDC <=> MDM connection:
# /opt/emc/scaleio/sdc/bin/drv_cfg --query_mdms
Then SSH to the primary MDM, run "scli --query_all_sdc", and make sure the SDC is not in the Disconnected state.
dstratton
October 31st, 2016 14:00
Regarding the query "Is there a way to unmap volumes?":
Please make sure the SDCs are not disconnected first; if they are, that is a separate process, but yes, it can be done.
https://support.emc.com/kb/484801
To unmap volumes from SDCs:
Unmap the volume from all the SDCs:
scli --mdm_ip 10.X.X.11 --unmap_volume_from_sdc --volume_name vol_1 --all_sdcs
Volume will not be accessible to the SDC. Press 'y' to confirm.
Successfully un-mapped volume vol_1 from all SDC nodes
From the GUI, go to Frontend > Volumes > Unmap Volume.
Taken from the ScaleIO 2.0 User Guide:
https://support.emc.com/docu67392_ScaleIO-2.0-User-Guide.pdf?language=en_US
To unmap volumes, perform these steps:
1. In the Frontend > Volumes view, navigate to the volumes, and select them.
2. From the Command menu or context-sensitive menu, select Unmap Volumes. The Unmap Volumes window is displayed, showing a list of the volumes that will be unmapped.
3. If you want to exclude some SDCs from the unmap operation, in the Select Nodes panel, select one or more SDCs for which you want to retain the mapping. You can use the search box to find SDCs.
4. Click Unmap Volumes. The progress of the operation is displayed at the bottom of the window. It is recommended to keep the window open until the operation is completed, and until you can see the result of the operation.
Via CLI:
Example (unmap volume from a single SDC):
scli --mdm_ip 192.168.1.200 --unmap_volume_from_sdc --volume_name vol_1 --sdc_ip 192.168.1.3
Example (unmap volume from all SDCs):
scli --mdm_ip 192.168.1.200 --unmap_volume_from_sdc --volume_name vol_1 --all_sdcs
Taken from the ScaleIO 2.0 User Guide:
https://support.emc.com/docu67392_ScaleIO-2.0-User-Guide.pdf?language=en_US
What ScaleIO version are you on?
triggs88
November 1st, 2016 02:00
Thanks dstratton. It is version 1.31.1277.3
Thanks for all your help...
John
triggs88
November 1st, 2016 03:00
I have just seen this on the node that is shown as disconnected.
C:\Program Files\emc\scaleio\sdc\bin>drv_cfg.exe --query_mdms
Failed to open \\?\root#scsiadapter#0000#{cc9ba7b0-6d22-4016-81c5-3369f0a163c4}.
Code 0x5
Failed to open kernel device
Have you ever seen this before and what do you suggest?
Thanks for all your help...
John
triggs88
November 2nd, 2016 06:00
The problem has been sorted. The issue with adding this new node to the cluster was that we have 2 MDMs, and each MDM has 2 network interfaces, so 4 IP addresses in total.
When I used the drv_cfg command to add the MDM IPs, I put a space after the comma, so only 1 IP was added to the SDC. I later added the remaining 3 IP addresses, but these appeared as another MDM, even though all the GUIDs are exactly the same. The server saw 2 MDMs, which meant that when the node was added to Failover Cluster Manager and the cluster validation ran, the report identified 2 identical storage devices and therefore failed.
Although the server was added to the cluster, no VMs could be migrated to it, as the CSVs were not showing the VM files and folders.
To resolve this, I had to go into the registry, where the MDM is specified, and modify the key data so that the MDM IPs are on 1 line rather than 2.
So:
MDM-ID bd4cXXXXXXX0e23 SDC ID cf0XXXXX00000005 INSTALLATION ID a47fbXXXXXX1981
IPs [0]-10.10.10.14,10.10.11.14 [1]-10.10.10.15 [2]-10.10.11.15
Instead of:
MDM-ID bd4cXXXXXXX0e23 SDC ID cf0XXXXX00000005 INSTALLATION ID a47fbXXXXXX1981
IPs [0]-10.10.10.14
MDM-ID bd4cXXXXXXX0e23 SDC ID cf0XXXXX00000005 INSTALLATION ID a47fbXXXXXX1981
IPs [0]-10.10.11.14 [1]-10.10.10.15 [2]-10.10.11.15
The registry key is located here:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\scini\Parameters\mdms
And reboot the server.
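As a precaution, you can inspect and back up the current value from an elevated command prompt before editing it (a sketch using the standard Windows reg tool; the exact layout of the data under this key is version-specific, and mdms-backup.reg is just an example file name):

```shell
:: Inspect the SDC's MDM definitions before editing.
reg query "HKLM\SYSTEM\CurrentControlSet\Services\scini\Parameters\mdms"

:: Export the key first so the change can be rolled back if needed.
reg export "HKLM\SYSTEM\CurrentControlSet\Services\scini\Parameters\mdms" mdms-backup.reg
```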
I did it this way because there wasn't an obvious way to remove the single IP and then re-add the IPs, but editing the registry worked, and the cluster now allows live migrations to the new node.
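For what it's worth, the underlying pitfall is ordinary command-line argument splitting, and it can be reproduced in any shell (a minimal illustration with a hypothetical printargs helper; the real command was drv_cfg on Windows, but the space-splitting behaviour is the same):

```shell
# Hypothetical helper: print each argument a command would receive.
printargs() { for a in "$@"; do echo "arg: $a"; done; }

# No space after the comma: the whole IP list is one argument.
printargs --ip 10.10.10.14,10.10.11.14
# arg: --ip
# arg: 10.10.10.14,10.10.11.14

# Space after the comma: the list is split at the space, so only the
# first IP is taken as the value and the rest becomes a stray argument.
printargs --ip 10.10.10.14, 10.10.11.14
# arg: --ip
# arg: 10.10.10.14,
# arg: 10.10.11.14
```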
Thanks for all your input, leading me to the solution.
John