5 Posts

August 1st, 2020 16:00

56 Gb Mellanox InfiniBand mezzanine options - do they have an Ethernet mode?

We are running Proxmox and Ceph on Dell blades in an M1000e modular chassis.

NICs and switches are currently all 10 GbE Broadcom.

Public LANs, guest LANs, and Corosync are handled by 4x 10 GbE cards on 40 GbE MXL switches.

Ceph front-end and back-end are each handled by a dual-port 10 GbE NIC, also on MXL switches.

OSDs are 8 TB Intel P4510 NVMe/PCIe 3.0 drives, and I need to know the best way to build the Ceph networks to get the most out of them.

As the Ceph front-end and back-end are each composed of 20 Gb LAGs, our I/O bottleneck is quite obvious, and our benchmark testing pegs right at 2 GB/s. If I combine the Ceph nets onto a single 40 Gb LAG, performance is slightly worse.
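
For reference, a quick way to measure raw cluster throughput independent of the guest VMs is rados bench; the pool name below is just a placeholder for whatever scratch pool you test against:

    # 60-second 4 MB sequential write test against a scratch pool
    ceph osd pool create bench-test 128 128
    rados bench -p bench-test 60 write -b 4M -t 16 --no-cleanup
    # sequential read test against the objects left behind
    rados bench -p bench-test 60 seq -t 16
    # remove the benchmark objects when finished
    rados -p bench-test cleanup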

I just need to know, given the constraints of the M1000e, what is the best way to build the Ceph front/back end networks for the best guest VM performance?
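
For what it's worth, the front-end/back-end separation itself is just the standard Ceph split between public_network and cluster_network; on Proxmox that lives in ceph.conf, roughly like this (the subnets are placeholders). The question is really about what hardware sits underneath that split.

    # /etc/pve/ceph.conf (symlinked to /etc/ceph/ceph.conf on Proxmox)
    [global]
        public_network  = 10.10.10.0/24   # Ceph front-end: client and MON traffic
        cluster_network = 10.10.20.0/24   # Ceph back-end: OSD replication and recovery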

There are 56 Gb and 40 Gb InfiniBand options for our hardware that we are looking at to try to push the bottleneck as far out as we can.

All our available NIC options are based on ConnectX-3 silicon. The part numbers are 0J05YT, 0CDMG5, 08PTD1, 0P90JM, 0K6V3V. Some of them are "VPI" cards that have an Ethernet mode, right? Can you tell me which ones? The manual is not too clear about it. I am not sure what I should be shooting for. I would like to get 56 Gb parts if I can, but do they have an Ethernet mode? Do the M4001F switches work with that?
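
On other ConnectX-3 gear I have seen the port protocol queried and switched either with Mellanox's firmware tools (mft) or through the mlx4_core sysfs entries; whether that works on these particular mezzanine part numbers is exactly what I am trying to confirm. The mst device name and PCI address below are placeholders:

    # with the Mellanox firmware tools (mft) package installed
    mst start
    mlxconfig -d /dev/mst/mt4099_pci_cr0 query | grep LINK_TYPE
    # set both ports to Ethernet (1 = IB, 2 = ETH, 3 = VPI auto-sense), then reboot
    mlxconfig -d /dev/mst/mt4099_pci_cr0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2

    # or at runtime through the mlx4_core driver
    echo eth > /sys/bus/pci/devices/0000:05:00.0/mlx4_port1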

All the NICs are dual-port and there will be four switches in the chassis. Can I combine/aggregate/bond the links to effectively get 112 Gb for each Ceph LAN?
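
To make the bonding half of the question concrete, what I have in mind on each Proxmox node is a plain LACP bond of the two mezzanine ports in /etc/network/interfaces, something like the sketch below (interface names and addresses are placeholders):

    auto enp65s0
    iface enp65s0 inet manual

    auto enp65s0d1
    iface enp65s0d1 inet manual

    auto bond0
    iface bond0 inet static
        address 10.10.20.11/24
        bond-slaves enp65s0 enp65s0d1
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
        bond-miimon 100
        # bond0 carries the Ceph back-end; a second bond on the other
        # mezzanine card would carry the front-end

I realize LACP only spreads separate flows across the member links, so any single stream still tops out at one link's speed; what I care about is aggregate throughput across all the OSD connections.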

Thank you

Moderator • 790 Posts

August 3rd, 2020 04:00

Hi there,

 

I need to dive deeper into this and will come back as soon as I have an answer for you.

 

Cheers
Stefan

Moderator • 790 Posts

August 4th, 2020 01:00

Hi again,

 

I got a response from some colleagues about your questions and concerns. Let me answer you as best I can.

 

First of all, please be aware that your questions go beyond the scope of our support; this is consulting territory, so you should consider contacting your sales rep to arrange a consulting engagement for performance questions.

 

Let me quickly answer your questions.

 

  1. I just need to know, given the constraints of the M1000e, what is the best way to build the Ceph front/back end networks for the best guest VM performance?
    This is a design question and not within the scope of support. We do offer these services through other channels; please contact your sales rep.
  2. The part numbers are 0J05YT, 0CDMG5, 08PTD1, 0P90JM, 0K6V3V. Some of them are "VPI" cards that have an Ethernet mode, right?
    After some research, we found that none of these cards have VPI (Virtual Protocol Interconnect). If you want to use this feature, you would need to purchase cards that support it.
  3. Do the M4001F switches work with that?
    These are Mellanox switches, so you would need to check with Mellanox as to whether the switch supports this.
  4. Can I combine/aggregate/bond the links to effectively get 112 Gb for each Ceph LAN?
    This is another design question that is out of scope.

 

There might be someone else on the forum who is able to answer your questions; from our end, this is outside the scope of support.

Your best bet is to contact your sales rep and ask for a consulting solution.

 

Best regards,
Stefan

 
