March 9th, 2011 09:00

No problem.

Yes, you are right that port channels and their native ability to balance via IP hash or TCP/UDP ports will only provide so much benefit. Keep in mind that the fan-out side of the connection will have different IPs (different ESX servers), so even though their destination may be a single IP, load balancing can still take place. The other option you mentioned was adding multiple IP addresses. You can create extra interfaces that ride on top of your trunked (4x1GbE) uplink to the switch. These extra interfaces can have different IPs and can be tagged to be on different VLANs or subnets. All available exports will be advertised from the interfaces that are created.
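To make the IP-hash point concrete, here is a small illustrative sketch (not EMC or switch-vendor code): a simplified source/destination IP-hash policy, similar in spirit to what a port channel uses to pick a member link. The IP addresses and the XOR-based hash are assumptions for demonstration only.

```python
# Illustrative sketch of a simplified src/dst IP-hash link-selection
# policy, loosely like what a port channel does. Not vendor code.
import ipaddress

def pick_link(src_ip: str, dst_ip: str, num_links: int = 4) -> int:
    """XOR the two addresses and take the result modulo the link count."""
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return (src ^ dst) % num_links

storage_ip = "10.0.0.100"  # single NFS server IP (hypothetical)
esx_hosts = [f"10.0.0.{i}" for i in range(11, 19)]  # hypothetical ESX hosts

# Different source IPs hash to different member links, so traffic
# spreads across the 4x1GbE trunk even though the destination IP
# is always the same.
for host in esx_hosts:
    print(host, "-> link", pick_link(host, storage_ip))
```

Any single ESX-to-storage conversation still pins to one member link, which is why one flow never exceeds 1Gbit; only the aggregate across hosts balances.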

Hope this helps.


March 10th, 2011 04:00

Thanks for your fast answer!

Keep in mind that the fan-out side of the connection will have different IPs (different ESX servers), so although their destination could be at one IP, load balancing can take place.

I already considered that, but at the beginning we'll have about 5-7 VMs per host, so I don't think a single 1Gbit link will have enough capacity.

You can create extra interfaces that ride on top of your trunked (4x1gbe) uplink to the switch. These extra interfaces can have different IPs and can be tagged to be on different VLANs or subnets. All available exports will be advertised from the interfaces that are created.

Let's say the storage has 4 different IPs, so the same NFS export will be mounted 4 times on the ESX host (through the different IPs). Wouldn't it then be a lot of work to separate the VMs according to their storage needs and to monitor the performance of the different connections?
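The manual balancing described above can be sketched in a few lines. This is a hypothetical illustration only (the IPs, VM names, and throughput figures are made up): the same export appears as four datastores, one per storage IP, and each VM is placed on whichever path currently carries the least estimated load.

```python
# Hypothetical sketch of manually spreading VMs across the same NFS
# export mounted via four different storage IPs. All values are made up.
datastore_ips = ["10.0.1.1", "10.0.1.2", "10.0.1.3", "10.0.1.4"]
load = {ip: 0 for ip in datastore_ips}  # estimated demand per path, MB/s

def place_vm(name: str, mb_per_s: int) -> str:
    """Place the VM on the path that currently carries the least load."""
    ip = min(load, key=load.get)
    load[ip] += mb_per_s
    return ip

vms = [("web1", 20), ("web2", 20), ("db1", 80), ("db2", 80), ("file1", 40)]
placement = {name: place_vm(name, demand) for name, demand in vms}
```

This also shows why it is extra work in practice: the admin has to estimate each VM's demand up front and re-check the per-path totals as the mix changes, exactly the monitoring burden the question raises.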

What happens if one connection is overloaded? Would I just move the VM with Storage vMotion to a different IP (which sounds a bit crazy, since it's the same export on the same storage)?

I think the best thing will be to set up a test lab for this topic and see how it works. Does EMC offer a simulator like the one from NetApp?

March 10th, 2011 14:00

Yes, EMC offers a virtual storage appliance to demonstrate all NAS-related functionality. Anyone, EMC and non-EMC customers alike, can download it from the link below.

http://nickapedia.com/2010/11/01/new-torrent-links-vsas-and-tools/

Also, there is documentation below for higher-level testing, covering replication and other needs.

http://nickapedia.com/2011/02/05/how-to-uber-new-celerra-uber-vsa-guide/

There is also a link below that shows some settings tweaks in the VSA OS (Red Hat) that allow for a bit better performance.

https://community.emc.com/message/528101


Keep in mind that the VSA is meant for testing and education, and its performance is determined by the Red Hat OS and the underlying infrastructure you place it on. So you can play around with the functionality, but don't expect it to be a workhorse =)  As a side note, since Red Hat is advertising the network links for the VSA services, the VSA cannot do trunk ports. So in order to get multiple true VLANs you would need to present multiple virtual adapters. If possible, try to avoid multiple VLANs; if you need separate IP subnets, just keep them in the same VLAN and on the same virtual adapter.


March 12th, 2011 05:00

Great! As soon as I've got a bit of free time I'll start playing with the VSA.

Just a last quick question: can I create multiple VLANs on one interface on the VNXe? CIFS and NFS will be in separate VLANs but go over the same interface(s).

March 18th, 2011 12:00

Sorry for the delay on this. Yes, it does support 802.1Q trunking (VLANs); see the spec sheet below for other details as well.

http://www.emc.com/collateral/hardware/specification-sheet/h8515-vnxe-ss.pdf
