Volume mappings to initiators
Hi, I'm having a problem exposing ScaleIO volumes to our ESX hosts.
I have created a volume called "ScaleIO-Dev".
I have checked our VMware host configuration and enabled the iSCSI initiator, just as we do when hooking up to our existing ZFS-based SANs.
The name of the server itself is dnzakesx-backup01.dsldev.local, so logically the IQN, according to the naming standards, should be something like:
iqn.2015-06.dsldev.local:dnzakesx-backup01
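(To confirm what initiator name the host actually presents, something like the following should work on the ESXi side; vmhba33 is just a stand-in for whatever the software iSCSI adapter is called on your host:)
esxcli iscsi adapter list
esxcli iscsi adapter get --adapter vmhba33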
When I try to map the volume to a SCSI initiator:
scli --map_volume_to_scsi_initiator --volume_name ScaleIO-Dev --initiator_name iqn.2015-06.dsldev.local:dnzakesx-backup01 --mdm_ip a.b.c.d
I get this message back:
Error: initiator_name too long: 'iqn.2015-06.dsldev.local:dnzakesx-backup01'
When I shorten the iSCSI initiator name to something like iqn.2015-06:backup01, I get:
Error: MDM failed command. Status: Could not find the SCSI Initiator
Why am I getting an "initiator_name too long" error for a perfectly valid DNS-based name?
Are there any DNS or reverse-DNS lookups taking place as part of the connection to validate the IQN?
Why am I not able to map the volume to the initiator?
If it helps, I'm running the 3x CentOS 7 VMs with three NICs each: one management NIC and the other two for iSCSI traffic (on different VLANs).
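(For reference, binding the two iSCSI vmkernel interfaces to the software adapter looks roughly like this on the ESXi side; vmhba33, vmk1 and vmk2 are assumed names for the adapter and vmkernel ports:)
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
esxcli iscsi networkportal list --adapter vmhba33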
cheers
Ashley
pavel.heidrich
June 22nd, 2015 23:00
Hello Ashley, you can still present ScaleIO storage built on CentOS hosts to VMware host(s) without iSCSI or NFS. ScaleIO has a native SDC (Storage Data Client) for ESXi, implemented as a VIB extension, which is effectively a storage initiator (client) that communicates with the storage targets (servers). It acts as the front-end storage protocol and multipathing driver in place of iSCSI/NFS and NMP.
While the standard installation is converged, where everything is installed symmetrically using scripted wizards, you can decouple the system in many ways...
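To give a concrete idea, installing the SDC VIB on an ESXi host is a single esxcli command plus a reboot; the bundle path below is only a placeholder for wherever your ScaleIO download puts the ESXi offline bundle:
esxcli software vib install -d /tmp/scaleio-sdc-offline-bundle.zip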
echolaughmk
June 19th, 2015 05:00
I'll take a stab at this, since I read it as you trying to map a ScaleIO volume from your SDS pool through native iSCSI to your ESX host. If that is the case: I tested this with the 1.30 version of ScaleIO and it worked fine, but when I queried about its support, I found it to be unsupported (even though it works). If this is what you are trying to do, and if I remember correctly, I had to use the command that adds a new iSCSI initiator (add_iscsi_initiator) and then map the volume to the newly added initiator. In the 1.32 user guide I see references to some iSCSI support being deprecated, and the section on iSCSI initiators (which you might be reading in your 1.30 guide) appears to have been removed. I inquired about multipathing support when I was testing it a while back, and that is when I was told it wasn't supported, which I thought was too bad.
Not sure if anything has changed with formal support for the provisioning described above, but I will let product management chime in in case I am incorrect. If I am reading your question wrong, feel free to let me know.
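From memory, the 1.30-era sequence was roughly the following (the exact flags may differ in your build, and the initiator name and IQN here are placeholders):
scli --add_iscsi_initiator --initiator_name esx01 --iqn iqn.1998-01.com.vmware:esx01 --mdm_ip a.b.c.d
scli --map_volume_to_scsi_initiator --volume_name ScaleIO-Dev --initiator_name esx01 --mdm_ip a.b.c.d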
ashleywatson
June 21st, 2015 17:00
Thanks, guys, for your answers.
Just to be clear, we were hoping to trial an instance of ScaleIO on one of our whitebox SuperMicro-based converged units running vSphere 6, which currently runs OmniOS as a VM presenting storage back to the host itself so that it acts as a converged storage unit (currently used as a backup target).
As ScaleIO requires a minimum of 3 nodes, I cannot make use of the ScaleIO plugin at this stage on that single host, so I deployed 3x CentOS VMs onto which I deployed the ScaleIO components. However, it appears that regardless of what I do, this functionality no longer works under 1.32 against a vSphere 6 host (even in a single-NIC configuration). One of the issues seems to be that dynamic discovery of the iSCSI targets doesn't work as expected.
I tried to manually create the initiator representing the client:
scli --add_scsi_initiator --initiator_name backup01 --iqn iqn.2015-06:backup01
and then map it with:
scli --map_volume_to_scsi_initiator --volume_name ScaleIO-Dev --initiator_name backup01 --mdm_ip a.b.c.d
Successfully mapped volume ScaleIO-Dev to SCSI Initiator backup01 with LUN number 0
but then, on the ESX host with its IQN set to iqn.2015-06:backup01, I'm unable to see the storage when rescanning, even though the iSCSI configuration is bound to that vmkernel interface.
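(For completeness, the ESXi-side discovery and rescan steps I'm using are along these lines; vmhba33 is again a stand-in for the software iSCSI adapter:)
esxcli iscsi adapter discovery sendtarget add --adapter vmhba33 --address a.b.c.d:3260
esxcli iscsi adapter discovery rediscover --adapter vmhba33
esxcli storage core adapter rescan --adapter vmhba33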
If there is no multipathing support at all for the iSCSI target, that severely limits its usefulness, to be honest, compared to all the other iSCSI targets I have worked with.
It quickly gets to the point where we might as well either stick with ZFS-based converged storage units, or run a manual configuration of the Linux IO target (LIO) on top of an open distributed file system.
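(A minimal sketch of that LIO fallback, assuming the ScaleIO volume appears on a CentOS SDC node as /dev/scinia; the device name and both IQNs are placeholders:)
targetcli /backstores/block create name=sio-dev dev=/dev/scinia
targetcli /iscsi create iqn.2015-06.local.dsldev:sio-target
targetcli /iscsi/iqn.2015-06.local.dsldev:sio-target/tpg1/luns create /backstores/block/sio-dev
targetcli /iscsi/iqn.2015-06.local.dsldev:sio-target/tpg1/acls create iqn.2015-06:backup01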
I don't understand the logic in removing iSCSI presentation to VMware hosts; surely software-defined storage should have the flexibility to be deployed in whatever way the client wishes, even if the performance is obviously not optimal.
ashleywatson
June 22nd, 2015 02:00
Thanks Eran,
What you say makes perfect sense and I do believe what you and your team have created has the potential for massive disruption in the storage space.
However, in terms of iSCSI, the reason we have chosen it up to now has always been its lack of vendor lock-in, its flexibility, and the fact that it is OS-agnostic. That flexibility is important to us in any product we choose to deploy in our environment, particularly in the cloud era. At the moment we are a VMware shop, but even that could potentially change going forward.
Our tier-1 storage up to now has always been supplied by FC-connected SANs, but I have been looking at technology to help us reduce our dependency on Fibre Channel and to give us greater flexibility and scale-out capability, as well as reduced costs.
While we'd love to try ScaleIO the way it is designed to be deployed in a VMware environment, we cannot work on a business case until we understand more about the quirks of the product, and I was hoping to become familiar with the management and operational framework without having to run the product inside nested vSphere 6 hosts.
I don't fully understand how ScaleIO can provide back-end storage to other deployment types like Xen/Hyper-V/etc. without the storage being presented as iSCSI (unless you are running similar native drivers on those platforms as well?), and if that is the case, I don't understand how iSCSI support can be removed from the product.
Another big problem is that most of the information on the net refers to versions prior to 1.32, so a lot of people are going to hit the same frustrations we have. I may have missed something, but I can't recall seeing any information about the iSCSI target functionality being removed in 1.32. It would be easier if the iSCSI functionality were still available in 1.32 with a warning that this style of deployment is not supported or recommended, particularly with reference to VMware.
I'd love to be able to run my own benchmarking tests comparing the native SDC solution against the same product running as an iSCSI target, but this is not possible under 1.32.
cheers
Ashley
ashleywatson
June 22nd, 2015 20:00
Thanks Eran. The lack of iSCSI surely means that although the storage can be presented by the CentOS hosts, it can't be presented to VMware hosts, since that would need to be via iSCSI or NFS (given the SDC can't be run under VMware, which is our use case, as we only have a single host for the POC), unless we run standard LIO over a ScaleIO-mounted volume.
Are there any technical reasons why iSCSI couldn't be reintroduced into the product architecture as an option, or is the decision driven by strategy?
We'll have a discussion on our side to see if there is a way we can proceed, but I suspect it's going to be a long process.
cheers
Ashley
ashleywatson
June 23rd, 2015 02:00
Thanks guys, this is great and gives a variety of deployment options that should help everyone.
Bearing in mind that most VMware shops will be like us and use FC-attached storage and/or iSCSI or NFS storage, it would be great if there were a post on the net outlining the process to stand up 3x CentOS 7 VMs and to configure the SDC VIB extension on VMware to connect to them. Many people, including ourselves, will not be familiar with the deployment flexibility of ScaleIO, and this is one of its strong points IMHO.
This would be particularly useful while the net is flooded with setup guides for old releases of ScaleIO referring specifically to the iSCSI drivers (which, as we know, don't exist in 1.32).