1 Rookie
31 Posts
Base 4-port IO Module
Hi,
We have a Dell PowerStore 1200 here and I am wondering about this.
This is the base IO module and its speed is 10 Gbps, which according to this article is as it should be:
https://infohub.delltechnologies.com/l/dell-powerstore-introduction-to-the-platform-1/i-o-module-44/
However, the same article says that with the 25 GbE optical 4-port module we would be at 25 Gbps.
Right now our cluster network runs at 25 Gbps, but the IO module is at 10 Gbps.
My understanding is that the cluster network is just for management purposes, so we are still working at 10 Gbps, since as I understand it the IO module carries the communication between the ESXi hosts and the storage disks.
Am I right, or am I missing something?
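To put the two link speeds side by side, here is a rough back-of-the-envelope conversion to usable MB/s per port. This is a minimal sketch; the 10% protocol-overhead figure is an assumption for Ethernet/TCP and iSCSI or NVMe/TCP framing, not a PowerStore measurement.

```python
# Rough throughput comparison of a 10 GbE vs 25 GbE front-end port.
# The protocol_overhead value is an assumed ballpark, not a measured figure.

def usable_throughput_mb_s(link_gbps: float, protocol_overhead: float = 0.10) -> float:
    """Convert a link speed in Gbit/s to approximate usable MB/s."""
    bits_per_second = link_gbps * 1_000_000_000
    usable_bits = bits_per_second * (1 - protocol_overhead)
    return usable_bits / 8 / 1_000_000  # bits -> bytes -> MB

for speed in (10, 25):
    print(f"{speed} GbE ~= {usable_throughput_mb_s(speed):,.0f} MB/s usable per port")
```

With those assumptions, a 10 GbE port tops out around 1,100 MB/s while a 25 GbE port gives roughly 2,800 MB/s, which is why the negotiated speed of the host-facing ports matters.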
DELL-Josh Cr
Moderator
8.7K Posts
December 15th, 2023 14:50
You should call phone support and open a support case so they can take a look.
DELL-Josh Cr
Moderator
8.7K Posts
December 14th, 2023 13:51
Hi,
Thanks for your question.
You are correct that the IO module speed is what matters for hosts and storage.
Let us know if you have any additional questions.
slavonskibecar
1 Rookie
31 Posts
December 14th, 2023 15:41
Yes, I was under the impression that the cluster network is just for managing the cluster and has nothing to do with the IO between hosts and storage.
Then I went to the datacenter and got this. Here it clearly says it is 25 Gbps, but for some reason the other person claims that this doesn't matter and that the cluster network speed is what matters. That was very confusing to me. The cluster network doesn't carry IO traffic, but these modules do.
So this module can run at 25 Gbps; it is just that the wrong SFP is being used, which is why PowerStore shows 10 Gbps.
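One way to confirm from the host side whether a port really negotiated 25 Gbps or fell back to 10 Gbps is to read the negotiated speed and compare it with what the installed SFP should deliver. Below is a minimal sketch for a Linux host, assuming ethtool is available; the interface names and expected speeds are placeholders for your environment (on ESXi, `esxcli network nic list` shows the same information).

```python
# Minimal sketch: flag host NICs that negotiated below the speed expected
# from the installed SFP. Assumes a Linux host with ethtool; the interface
# names and expected speeds below are placeholders.
import re
import subprocess
from typing import Optional

EXPECTED_MBPS = {"ens1f0": 25000, "ens1f1": 25000}  # hypothetical NIC names

def negotiated_speed_mbps(iface: str) -> Optional[int]:
    out = subprocess.run(["ethtool", iface], capture_output=True, text=True).stdout
    match = re.search(r"Speed:\s*(\d+)Mb/s", out)
    return int(match.group(1)) if match else None

for iface, expected in EXPECTED_MBPS.items():
    speed = negotiated_speed_mbps(iface)
    if speed is None:
        print(f"{iface}: link down or speed unknown")
    elif speed < expected:
        print(f"{iface}: negotiated {speed} Mb/s, expected {expected} Mb/s -- check SFP/cabling")
    else:
        print(f"{iface}: OK at {speed} Mb/s")
```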
slavonskibecar
1 Rookie
31 Posts
December 14th, 2023 16:31
Hi Josh,
I also found this diagram.
As far as I understand it, the ports in this diagram are dedicated to either iSCSI or NVMe traffic; they cannot be mixed and matched.
My understanding is that if we implement iSCSI we use port 2 on the module, and if we go with NVMe over TCP/IP we have to use port 0 on each controller.
I am also not clear about LACP bonding: can the NVMe ports be bonded into an LACP group, for example from IO modules 1 and 2?
For example, can NVMe TCP network port 0 on node 1 be bonded with port 0 on node 2?
DELL-Josh Cr
Moderator
8.7K Posts
December 14th, 2023 16:43
You should be able to bond NVMe/TCP with port 3.
https://dell.to/3NrSzAq
slavonskibecar
1 Rookie
31 Posts
December 14th, 2023 16:53
I can see that, according to this diagram, port 3 is reserved for the NAS network. We will only be using block storage, no NAS.
According to the diagram, port 0 is used for NVMe and it doesn't say it can be bonded; it also says it is a dedicated port for NVMe TCP.
So can I bond port 0 with port 3 on each node for NVMe TCP?
slavonskibecar
1 Rookie
31 Posts
December 14th, 2023 17:41
https://www.dell.com/support/manuals/en-us/powerstore-5200t/pwrstrt-ntwkg/cable-the-nvmetcp-network?guid=guid-41803c07-302d-43d6-9793-5bdcf5767f16&lang=en-us
This document also specifies which port to use for NVMe over TCP/IP; I didn't see anything about NVMe TCP port bonding.
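Before going further with the cabling document, it can help to confirm that the PowerStore NVMe/TCP target addresses are reachable from the host on the standard NVMe/TCP ports (8009 for discovery, 4420 for I/O). A minimal sketch; the target IPs below are placeholders for the storage-network addresses in your own design.

```python
# Quick reachability check from a host to the NVMe/TCP target addresses.
# 8009 is the standard NVMe/TCP discovery port and 4420 the I/O port;
# the addresses below are placeholders for your storage-network IPs.
import socket

TARGETS = ["192.168.10.10", "192.168.10.11"]  # hypothetical node A/B port IPs
PORTS = [8009, 4420]

for ip in TARGETS:
    for port in PORTS:
        try:
            with socket.create_connection((ip, port), timeout=2):
                print(f"{ip}:{port} reachable")
        except OSError as exc:
            print(f"{ip}:{port} NOT reachable ({exc})")
```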
DELL-Josh Cr
Moderator
8.7K Posts
December 14th, 2023 17:54
https://dell.to/48cyyFI
“Direct attached iSCSI or NVMe TCP hosts are supported on other ports, but not for ports 0 and 1 (bond0) on the 4-port card.”
Should be fine on port 3 if no NAS is being used.
You can also do it at the OS level.
https://dell.to/3uS8S2M
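If the bonding is done at the OS level on a Linux host, the bond state can be verified from /proc. A minimal sketch assuming the Linux bonding driver is in use and the bond is named bond0; adjust the name to your configuration.

```python
# Sketch: confirm an OS-level bond on a Linux host is running LACP (802.3ad)
# and see how many interfaces report link up. Assumes the bonding driver and
# a bond named bond0 (a placeholder; change it to your interface name).
from pathlib import Path

def check_bond(name: str = "bond0") -> None:
    state = Path(f"/proc/net/bonding/{name}")
    if not state.exists():
        print(f"{name}: no bonding state found (bond not configured?)")
        return
    text = state.read_text()
    mode_ok = "IEEE 802.3ad Dynamic link aggregation" in text
    links_up = text.count("MII Status: up")  # counts the bond itself plus members
    print(f"{name}: 802.3ad mode={mode_ok}, interfaces reporting link up={links_up}")

check_bond()
```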
slavonskibecar
1 Rookie
31 Posts
December 15th, 2023 14:11
Based on what you said, we don't even need the IO modules. The 4-port card's ports 0 and 1 are connected. I was under the impression that these ports were used just for cluster networking and had no role in carrying traffic from hosts.
With every other storage manufacturer, the cluster network is one thing and the IO (input/output) from hosts to the actual disks is carried over the IO modules. The cluster network is just a management network, but apparently here it also carries traffic from hosts to disks, which is the first time I have heard of that after almost 10 years of working with storage.
"Direct attached iSCSI or NVMe TCP hosts are supported on other ports, but not for ports 0 and 1 (bond0) on the 4-port card."
"from hosts to actual disk is carried over IO modules"
Then over here I have a 4-port IO module which for some reason is at 10 Gbps.
These are the ports that should carry the traffic, and ports 0 and 1 on the 4-port card should be only for the cluster network, but apparently that is not the case.
slavonskibecar
1 Rookie
31 Posts
December 15th, 2023 14:34
The reason this is confusing is that I am getting extremely bad performance from the NVMe TCP/IP hosts. Based on internet articles, NVMe is supposed to outperform iSCSI, but in my case NVMe is extremely slow, to the point that it cannot be used; the performance is far worse than iSCSI, so I would basically go with iSCSI, where I get decent performance.
Based on this, I don't even need the IO module, but what I got from this document is that the networks mapped to the 4-port card are just for management and high availability, not IO traffic, because right now the IO modules are configured for 10 Gbps.
Apparently there is a big misunderstanding on my part, but I cannot get any decent performance out of NVMe TCP.
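To separate a transport problem from a configuration problem, it helps to run the exact same load against an iSCSI-backed device and an NVMe/TCP-backed device and compare the numbers. A rough sketch assuming fio is installed on a Linux host; the device paths and job parameters are placeholders, not a Dell-recommended profile, and the JSON field names follow recent fio releases.

```python
# Sketch: run one fio profile against an iSCSI-backed and an NVMe/TCP-backed
# device so the two transports are compared under identical load.
# Device paths are placeholders; randread is non-destructive, but double-check
# the paths before running against anything in production.
import json
import subprocess

DEVICES = {
    "iscsi": "/dev/mapper/mpatha",   # hypothetical multipath iSCSI device
    "nvme_tcp": "/dev/nvme0n1",      # hypothetical NVMe/TCP namespace
}

def run_fio(device: str) -> dict:
    cmd = [
        "fio", "--name=compare", f"--filename={device}",
        "--rw=randread", "--bs=8k", "--iodepth=32", "--numjobs=4",
        "--direct=1", "--time_based", "--runtime=60",
        "--group_reporting", "--output-format=json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    read = json.loads(out)["jobs"][0]["read"]
    return {
        "iops": read["iops"],
        "bw_MiBps": read["bw"] / 1024,          # fio reports bw in KiB/s
        "mean_lat_us": read["lat_ns"]["mean"] / 1000,
    }

for label, dev in DEVICES.items():
    print(label, run_fio(dev))
```

If NVMe/TCP is still far behind iSCSI under the same profile, that points at the path (negotiated speed, MTU, port mapping) rather than the protocol itself.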
slavonskibecar
1 Rookie
31 Posts
December 15th, 2023 14:47
But this document makes perfect sense
https://www.dell.com/support/manuals/en-us/powerstore-1000t/pwrstrt-ntwkg/sample-configuration?guid=guid-de12a12b-225d-4ee7-aa57-d4e0dfe137e9&lang=en-us
It states that port 0 is the dedicated port for NVMe and should be at 25 Gbps, and that port 2, for example, is used for iSCSI traffic on the IO module.
Port 3 on the IO module is used for NAS, which we will not use, as we will only be using block storage from the PowerStore.
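As a sanity check of the planned layout, the port roles as read from that sample-configuration document can be written down as plain data and checked so that no host protocol lands on the bond0 system ports. This is a sketch only; the role strings mirror how the document is read in this thread, not an authoritative PowerStore port map, and the names are illustrative.

```python
# Sketch: port roles as read from the sample-configuration document in this
# thread (not an authoritative PowerStore mapping). Used as a quick checklist
# that host block protocols are not planned onto the bond0 system ports.
PORT_ROLES = {
    "4-port card ports 0/1": "system bond (bond0) - not for direct-attached hosts",
    "port 2": "iSCSI host traffic",
    "port 3": "NAS (unused in a block-only deployment)",
    "port 0": "dedicated NVMe/TCP host traffic (should link at 25 Gbps)",
}

PLANNED = {"iSCSI": "port 2", "NVMe/TCP": "port 0"}  # hypothetical design

for proto, port in PLANNED.items():
    role = PORT_ROLES.get(port, "unknown port")
    warn = "WARNING: bond0 ports are not for direct-attached hosts" if "bond0" in role else "OK"
    print(f"{proto} -> {port}: {role} [{warn}]")
```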