January 10th, 2012 08:00

The NAS vs. SAN thing, reprise

An interesting discussion has broken out on the oSpecialist DL and I wanted to raise it here. The situation is that a customer wants to compare VMAX to NetApp dNFS but resists considering VNX, which has led to the question of why the customer resists VNX, and of how EMC NAS compares to NetApp (very well, in my view).

The bulk of the discussion is below. My sidenotes are in italics. Comments, please?

From Brad Davie:

IHAC (I have a customer) who is currently running Oracle on VMAX. They WILL NOT talk to us about VNX. However, they are testing Oracle dNFS on NetApp.

From everything I’ve read, dNFS will perform well and is a viable option, but I need to steer the customer away from it to keep them on VMAX.

Do we have any competitive strategies to help?

Response by Jeff Browning:

Annoying that the customer will not consider VNX. Our NAS stuff has improved dramatically and is now fully competitive with NetApp's offerings.

Bashing dNFS is a bad idea. It will likely backfire. dNFS is a fantastic feature within the Oracle stack, and provides many advantages, including near-FC I/O performance, management simplicity, port scalability and failover, etc.

Can you turn the customer around on talking to you about VNX?

Response by Allan Robertson:

It should be remembered that the near-FC I/O performance that NetApp achieved was on their own kit.

Response by Jeff Browning:

The team in Shanghai (Note: The Shanghai Solutions Center has a large team of Oracle specialists performing testing on Oracle configurations, including dNFS) achieved 96% of FC performance on an NS-960 compared to a CX4-960 in August of 2010. I am not sure what has been done since then. Oracle has also done lots of testing that they have shared with me. See my OOW presentation for more information on that.

Response by Kevin Clossen:

I concur with Jeff, with a caveat. It is quite simple to find parity between dNFS via GbE and 4GFC for OLTP-like workloads where the majority of I/O is random single-block. The CPUs go critical on the transaction-processing code long before I/O can be driven up past even GbE bandwidth. In my experience dNFS can handle about 14,000 random 8K transfers per GbE. To that end, dNFS on about 4 wires will provide enough I/O bandwidth to allow the transaction code to saturate the processors on most two-socket boxes (even WSM-EP).
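(Note: To put Kevin's numbers in context, here is a quick back-of-the-envelope sketch. The usable-payload figure per GbE wire is my own assumption, not a measured value from any of the tests mentioned in this thread.)

```
# Back-of-the-envelope check of the per-wire numbers quoted above.
# The ~118 MB/s of usable NFS payload per GbE link is an assumption
# (after Ethernet/IP/TCP/RPC overhead), not a measured value.

GBE_USABLE_MB_S = 118        # assumed usable payload per GbE wire
BLOCK_KB = 8                 # random single-block OLTP transfer size
IOPS_PER_WIRE = 14_000       # figure quoted by Kevin above
WIRES = 4

mb_per_wire = IOPS_PER_WIRE * BLOCK_KB / 1024        # about 109 MB/s
print(f"14,000 x 8 KiB is about {mb_per_wire:.0f} MB/s per wire, "
      f"versus roughly {GBE_USABLE_MB_S} MB/s usable: near line rate")
print(f"{WIRES} wires: about {IOPS_PER_WIRE * WIRES:,} random 8K IOPS, "
      f"about {mb_per_wire * WIRES:.0f} MB/s total")
```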

Response by Jeff Rosser:

To add on to Kevin and Jeff…

What’s really cool about dNFS is that those 4 wires (or more) get automatically load balanced by the protocol itself. Oracle has done a great job with this, and consequently the performance limits of Oracle on dNFS are not the limits of the network pipes, but rather something else.

As for VNX gear vs. NetApp gear: two things have changed recently that allow us to compete better with NetApp. First, the VNX platform with FAST technology leapfrogged us over NetApp from a performance perspective. Second, we’ve addressed some latency issues with our NFS module that adversely affected performance. I haven’t seen a head-to-head comparison, but my belief is that we would soundly thump them in an Oracle OLTP test, if it were properly done.

Response by Allan Robertson:

I am aware of that paper, having reviewed it at the time.

dNFS has made a big improvement for file performance and, on VNX, very good numbers can be achieved.

In this case, the comparison would be file on VNX vs. block on VMAX. (A comment I disagree with, BTW. Remember the customer is comparing VMAX to NetApp. If we could get them to test VNX, the comparison would be between NetApp and VNX, not between VMAX and VNX.)

Response by Kevin Clossen:

Be aware that dNFS is also a bit of a religion. It actually turns out that Oracle on Solaris 10 and 11 with kernel NFS and bonded NICs performs *better* than dNFS. Glenn Fawcett and I did a bunch of side-by-side, same-storage, user-mode versus kernel-mode NFS comparisons, me with Nehalem-EP and he with M8000 gear. The M8000 gear performed about 8% better with kernel NFS. So if you get a VNX into a Solaris shop, do two things: 1) hint over beers that they need to re-think their long-term strategy, because SPARC is dead regardless of the fibs (a.k.a. bald-faced lies) uttered by Larry Ellison, and 2) have them test with and without the dNFS ODM library just to see how it goes.

P.S., I'm sorry for stepping on anyone's toes if you're bullish on SPARC. (I love that last comment on Sun / SPARC. Solaris for SPARC used to be my personal favorite OS. No longer. I have switched to Linux for servers and (of course) Mac OS for desktop.)
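(Note: If you do run Kevin's with/without comparison, a quick way to confirm which path the instance is actually using is to query v$dnfs_servers, which returns rows only when the dNFS ODM library is in use. Below is a minimal sketch; it assumes the cx_Oracle driver, and the credentials and connect string are placeholders to swap for your own.)

```
# Minimal sketch: confirm whether an instance is doing I/O through dNFS.
# Assumes the cx_Oracle driver; the credentials and DSN below are
# placeholders, not values from this thread.

import cx_Oracle

conn = cx_Oracle.connect("system", "change_me", "dbhost/orcl")
cur = conn.cursor()
cur.execute("select svrname, dirname from v$dnfs_servers")
rows = cur.fetchall()

if rows:
    print("dNFS is active for these mounts:")
    for svrname, dirname in rows:
        print(f"  {svrname}:{dirname}")
else:
    print("No dNFS servers registered; I/O is going through kernel NFS.")
conn.close()
```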

Another response by Kevin Clossen:

My old friend Glenn did jot some of those results down:

http://glennfawcett.wordpress.com/2009/12/14/direct-nfs-vs-kernel-nfs-bake-off-with-oracle-11g-and-solaris-and-the-winner-is/ (Very nice blog, BTW.)

Response by Randy Thompson:

Based on the original thread, the unfortunate part is that it will be NTAP file pricing vs. VMAX block pricing. From a performance perspective, it is usually not about the very lowest latency or the very highest performance in terms of IOPS or MB/s; it is about good enough. Unless you can change the game, I can say who will win with a reasonable degree of certainty, and so can NTAP. (Exactly! Given the choice between VMAX and NetApp / NAS, many customers will choose NetApp / NAS for many databases, assuming that price/performance is the primary driver. Hence the need to get VNX in the mix. If VNX is considered, then we have a very high chance of prevailing over NetApp. VNX is a killer product at this point!)

And my (Jeff Browning's) final response to Kevin's comment about OLTP and dNFS:

And, of course, this means that dNFS will bog down quickly if you get into DSS / data warehouse workloads. Although Oracle touts it for this, I have almost never seen a NAS-based DSS database of any significant size. As long as you are talking random / small block I/O, then as Kevin indicates, the CPU quickly becomes the bottleneck and thus dNFS can be very competitive. For very large data warehouses doing lots of big block / sequential I/O, the I/O layer becomes the bottleneck and dNFS has a much harder time keeping up with, say, 8 Gb FC.
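To make that last point concrete, here is a rough sketch of the sequential-bandwidth math. The usable rates per GbE wire and per 8 Gb FC port, and the 500 GB scan size, are assumptions of mine for illustration rather than measured numbers:

```
# Rough sequential-bandwidth math behind the DSS point above. Both
# usable-rate figures and the 500 GB scan size are assumptions for
# illustration, not numbers from this thread.

GBE_USABLE_MB_S = 118        # assumed usable payload per GbE wire
FC8_USABLE_MB_S = 800        # assumed usable payload on an 8 Gb FC port
WIRES = 4
SCAN_GB = 500                # hypothetical full-table-scan volume

gbe_total = GBE_USABLE_MB_S * WIRES
for label, rate in (("4 x GbE (dNFS)", gbe_total), ("8 Gb FC", FC8_USABLE_MB_S)):
    minutes = SCAN_GB * 1024 / rate / 60
    print(f"{label:>14}: ~{rate} MB/s, ~{minutes:.0f} min to scan {SCAN_GB} GB")
```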

A few final words:

I maintain (stubbornly) that EMC VNX IP storage is a highly competitive, fully viable alternative to NetApp. I think we have some perceptual issues to overcome. But we have many, many very successful VNX IP storage installations storing Oracle databases at this point. VNX works very well with dNFS. We have actually stolen the initiative away from NetApp in this area, as indicated by my joint presentation (with Oracle's Kevin Jernigan) on dNFS / clonedb at both EMC World and Oracle OpenWorld in 2011.

There is absolutely no reason why we should not be selling VNX aggressively at this point. Yes, VMAX is a best-of-breed, fantastic product as well. Each has its place, and each belongs in our arsenal. Think of VNX as a conventional weapon and VMAX as a strategic nuclear weapon. I think you get the idea.

I would love to hear others' thoughts on the use of VNX for storing Oracle in conjunction with IP / dNFS. Comments, please?


January 10th, 2012 23:00

Based on performance data I received, NFS with F_Cache enabled on an NS-480 showed a 10-15% performance improvement at an 8 KiB, 80/20 read/write mix. I believe the improvement on VNX would be better because of the larger F_Cache size and SP CPU.

But according to current VNX performance data, I/O bandwidth scales well from the file module's back end to its front end, while IOPS is limited by the file module CPU, which would limit the benefit of F_Cache. I think balanced sizing is important. This part is partly my own deduction.

As far as I know, NetApp uses a P_Card to improve read IOPS, whereas F_Cache works for both reads and writes.

Eddy
