October 3rd, 2013 06:00

Dominant port? And performance problems (2 questions)

A couple of years ago, a DMX Symmetrix Performance Expert was at my site and told me that FA port 0 is dominant over port 1. His explanation was that if both ports are each using 50% of the CPU resources shared between them, and both try to increase, port 0 will be allowed to grow, reducing the CPU resources available to port 1.

Since the VMAX actually shares multi-core processors in each engine, and each engine has 8 ports, this should not hold true. However, I've been told when connecting two arrays for SRDF, it's a good policy to use port 0 and not port 1 of one director. As an example, I know one company that uses 7h0 and 8f0 for their SRDF and they do not use 7h1 or 8f1. If what I've read of the VMAX design holds true, it should not matter if they use 7h1 and 8f1 for normal operations. Am I correct?

I have one friend who has configured all of his FA ports into one port group (this friend has a 20K and is not using SRDF). From what I've read, this is not a good idea at all. I've read that each port can handle a maximum of 32 unique WWNs. He has more than 100 servers, and they are all assigned to this one port group in each masking view, which, as I stated previously, contains all of his FAs. He is experiencing many performance issues. He said EMC looked at it and told him everything is configured properly. Does this mean the EMC documentation is wrong and you can configure it this way without issue?
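To put a rough number on that concern, here is a minimal sketch (Python, using illustrative values only; the 32-WWN figure is the per-port limit cited above) of the fan-in that results when every host's masking view uses one port group containing all FA ports:

```python
# Rough fan-in estimate for a single port group containing every FA port.
# All values are illustrative assumptions, not data from the array above.
hosts = 100                    # "more than 100 servers" from the post
max_initiators_per_port = 32   # per-port WWN limit cited above

# If every host's masking view uses the same all-FA port group and zoning
# permits it, each port can end up with roughly one login per host
# (per HBA in that fabric).
logins_per_port = hosts

print(f"Approximate initiator logins per port: {logins_per_port}")
if logins_per_port > max_initiators_per_port:
    print("Exceeds the per-port WWN limit; hosts would need to be spread "
          "across smaller port groups.")
```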

Thanks in advance for any response.

859 Posts

October 3rd, 2013 08:00

In my opinion, both ports share the CPU resources equally, and no priority is set for either port (0 or 1).

In the case of SRDF, when you convert your FA to RF, port 1 gets disabled, so you can only use port 0.

Configuring all the FA ports in one port group is not a good idea from a performance perspective, and also for Windows clusters. One should configure a port group to include only one port of each processor (e.g. 7h:1 and 8h:1, or 7h:0 and 8h:0, or 7h:0 and 8h:1).
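That guideline (only one port of each processor in a given port group) can be turned into a quick sanity check. Below is a minimal sketch, assuming port names of the form director:port as in the examples above, where the director part (e.g. 7h) identifies the processor and ports 0/1 share its CPU:

```python
from collections import Counter

def processors_used_twice(port_group):
    """Return processors that contribute more than one port to a port group.

    A port name like '7h:0' is split into the processor ('7h') and the
    port number; ports 0 and 1 of the same processor share one CPU, so a
    port group should normally include only one of them.
    """
    counts = Counter(port.split(":")[0] for port in port_group)
    return [proc for proc, n in counts.items() if n > 1]

# Using both ports of 7h in one group adds no processing power -- flag it.
print(processors_used_twice(["7h:0", "7h:1", "8f:0"]))  # -> ['7h']
print(processors_used_twice(["7h:0", "8h:0"]))          # -> []
```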

regards,

Saurabh

4 Posts

October 4th, 2013 05:00

Thanks for the reply, Saurabh. Your answer, however, leads to a question or two:

If the VMAX design shows that each port shares the CPU resources, then why does changing port 0 to an RA disable port 1? It seems like a waste of a port to me.

Also, my friend is seeing a high miss rate: 56% misses and 44% hits (over a month's period). I know this usually means they need more cache, but could this also be due to the port group configuration?
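As a side note on how such a monthly figure is computed: if it is built from per-interval statistics, the overall ratio should be weighted by I/O count rather than averaging the interval percentages. A small sketch with made-up counter names and values:

```python
# Hypothetical per-interval cache counters (e.g. hourly samples from a
# performance tool); names and values are invented for illustration.
intervals = [
    {"hits": 40_000, "misses": 60_000},   # busy interval, mostly misses
    {"hits": 5_000,  "misses": 1_000},    # quiet interval, mostly hits
]

total_hits = sum(i["hits"] for i in intervals)
total_ios = sum(i["hits"] + i["misses"] for i in intervals)

# Weighting by I/O count gives the true overall hit ratio; averaging the
# per-interval percentages would overweight the quiet interval.
print(f"Overall hit ratio: {total_hits / total_ios:.1%}")
```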

859 Posts

October 4th, 2013 06:00

Well, disabling port 1 of an RF adapter does not waste resources. You would likely never want both ports of the processor carrying your SRDF operations.

I'm not sure about your other question, whether the port group configuration could cause more misses; perhaps someone else can shed more light on this.

regards,

Saurabh

1.3K Posts

October 4th, 2013 06:00

The RDF emulation runs on only one of the two ports, so one port is disabled when the RDF emulation is installed.

The cache usage won't have anything to do with the ports.  Low hit ratios may just be the nature of the application, but adding cache won't hurt.

4 Posts

October 4th, 2013 07:00

Thank you very much, to all three of you for the help.

226 Posts

October 4th, 2013 07:00

Phil,

The CPU resources on a VMAX director are not shared. On both systems we call a front-end CPU core (VMAX) or processor (DMX) a front-end slice. The relationship of front-end slices to front-end ports on VMAX and DMX is pretty much the same: there are two ports for every slice. Using both ports on a slice does not provide any IOPS/processing gain.

Thanks,

- Sean

4 Posts

October 4th, 2013 07:00

As you already know, on the DMX there is one CPU per pair of FA ports 0 and 1; that is, FA 3a:0 and 3a:1 share one CPU. Therefore, when you change an FA to an RA, port 1 is disabled, giving the full CPU resources to port 0.


However, it was my understanding the VMAX is different:

"The main architectural difference between DMX and Vmax model is that vmax has engine concept.In DMX model,we have different hardware for front end (FA director), back end (DA director) and memory modules. But in Vmax all these hardwares are integrated together and is known as Vmax Engine. A EMC Vmax storage array support from 1 to maximum of 8 Vmax engines. Each of these engines contains two symmetrix vmax directors. Each director includes:

                  - 8 multi-core CPUs (total 16 per engine)

                  – Cache memory(global memory)

                  – Front end I/O modules

                  – Back end I/O modules

                  – System Interface Module(SIB)"

From the architecture information above, it would seem to me that disabling a port when configuring it as an RA is a legacy configuration. If all CPU resources are shared across the director ports, then why would there be a need to disable a port? I understand the need on the DMX, but not on the VMAX. Can anyone help with this?

Thanks, Phil
