
PowerScale, Isilon OneFS: HBase Performance Testing on Isilon

Summary: This article describes the performance benchmarking tests performed on an Isilon X410 cluster using the YCSB benchmarking suite and CDH 5.10.

This article is not associated with any specific product. Not all product versions are identified in this article.

Symptoms

Not required

Cause

Not required

Resolution

NOTE: This topic is part of the Using Hadoop with OneFS Info Hub. 


Introduction

We did a series of performance benchmarking tests on an Isilon X410 cluster using the YCSB benchmarking suite and CDH 5.10.

The CAE POC lab environment was configured with five Isilon X410 nodes running OneFS 8.0.0.4 and, later, 8.0.1.1. Based on NFS large-block streaming benchmarks, we should expect roughly 5 x ~700 MB/s writes (3.5 GB/s) and 5 x ~1 GB/s reads (5 GB/s) as the theoretical aggregate maximums in any of these tests.

The nine compute nodes are Dell PowerEdge FC630 servers running CentOS 7.3.1611, each configured with 2 x 18C/36T Intel Xeon E5-2697 v4 CPUs @ 2.30 GHz and 512 GB of RAM. Local storage is 2 x SSD in RAID 1, formatted as XFS, for both the operating system and scratch space/spill files.

Three additional edge servers were used to drive the YCSB load.

The back-end network between the compute nodes and Isilon is 10 Gbps, with jumbo frames (MTU=9162) set on the NICs and the switch ports.
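A quick way to confirm that jumbo frames are actually in effect end-to-end is to check the NIC MTU and send a non-fragmenting ping at the jumbo payload size. The interface name and target host below are illustrative, not taken from this environment:

ip link show eth0 | grep mtu            # confirm the NIC reports mtu 9162
ping -M do -s 9134 -c 3 isilon-sczone   # 9134 = 9162 minus 28 bytes of IP/ICMP headers; must succeed without fragmenting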

[Figure: Test environment architecture]
 
CDH 5.10 was configured to run in an Access Zone on Isilon. Service accounts were created in the Isilon Local provider and locally in the client /etc/passwd files. All tests were run using a basic test user with no special privileges.
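The important detail is that the service and test accounts carry matching UIDs in the Isilon Local provider and in the clients' /etc/passwd files. A rough sketch of how that pairing is typically done follows; the user name, UID, and zone name are illustrative, and the exact isi flags should be verified against your OneFS release:

useradd -u 1001 hduser                                  # on each compute/edge node; name and UID are illustrative
isi auth users create hduser --uid 1001 --zone hadoop   # on the Isilon cluster; creates the matching account in that zone's Local provider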

Isilon statistics were monitored with both InsightIQ (IIQ) and the Grafana/Data Insights package. CDH statistics were monitored with Cloudera Manager and also with Grafana.


Initial Testing

The first series of tests was to determine the relevant parameters on the HBase side that affect the overall output. We used the YCSB tool to generate the load for HBase. This initial test was run using a single client (edge server) with the 'load' phase of YCSB and 40 million rows. The table was deleted prior to each run.
 

ycsb load hbase10 -P workloads/workloada1 -p table='ycsb_40Mtable_nr' -p columnfamily=family -threads 256 -p recordcount=40000000
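The 'workloada1' file referenced above is a customized copy of the stock YCSB workloada definition. For reference, a stock workloada uses the standard CoreWorkload properties shown below (these are the YCSB defaults, not necessarily the exact file used in these tests; recordcount is overridden on the command line above):

workload=com.yahoo.ycsb.workloads.CoreWorkload   # package is site.ycsb in newer YCSB releases
readallfields=true
readproportion=0.5        # 50/50 read/update mix
updateproportion=0.5
scanproportion=0
insertproportion=0
requestdistribution=zipfian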

hbase.regionserver.maxlogs - Maximum number of Write-Ahead Log (WAL) files. This value multiplied by HDFS Block Size (dfs.blocksize) is the size of the WAL that must be replayed when a server crashes. This value is inversely proportional to the frequency of flushes to the disk.

hbase.wal.regiongrouping.numgroups - When using Multiple HDFS WAL as the WALProvider, sets how many write-ahead-logs each RegionServer should run. Results in this number of HDFS pipelines. Writes for a given Region only go to a single pipeline, spreading the total RegionServer load.
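As a point of reference, the regiongrouping setting only takes effect when the multi-WAL provider is enabled, so the combination used later in this article would be set (in hbase-site.xml or through the Cloudera Manager safety valve) roughly as follows; treat this as a sketch rather than the exact configuration applied here:

hbase.wal.provider = multiwal
hbase.regionserver.maxlogs = 256
hbase.wal.regiongrouping.numgroups = 20

With dfs.blocksize at 256 MB (as in this environment), maxlogs = 256 bounds the WAL that might need replay after a RegionServer crash at roughly 256 x 256 MB = 64 GB.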

[Figure: Throughput vs. number of WAL pipelines]

[Figure: Latency vs. number of WAL pipelines]
The philosophy here was to parallelize as many writes as we could, so increasing the number of WALs, and then the number of threads (pipelines) per WAL, accomplishes this. The previous two charts show that for a given 'maxlogs' value (128 or 256) we see no real change, indicating that we are not really pushing up against that limit from the client side. By varying the number of 'pipelines' per file, though, we see a clear trend, indicating that this is the parameter sensitive to parallelization. The next question is where Isilon would "get in the way," whether through disk I/O, network, CPU, or OneFS itself, and for that we can look at what the Isilon statistics report.

[Figure: Isilon network and CPU utilization during the load tests]
 
The network and CPU graphs tell us that the Isilon cluster is underutilized and has room for more work; if it were the bottleneck, we would expect CPU above 80% and network bandwidth of more than 3 GB/s.

[Figure: HDFS protocol statistics and OneFS file-operation heat]

These plots show the HDFS protocol statistics and how they are translated by OneFS. The HDFS ops are multiples of dfs.blocksize, which is 256 MB here. The interesting thing is that the 'Heat' graph shows the OneFS file operations, and you can see the correlation between writes and locks. In this case, HBase is appending to the WALs, so OneFS locks the WAL file for each appended write, which is what we would expect for stable writes on a clustered file system. These locks appear to be a contributing limiting factor in this set of tests.
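To confirm the block size the HDFS clients are negotiating with OneFS (and therefore the granularity of the HDFS ops in these plots), the standard Hadoop CLI can be queried from any client; 268435456 is simply 256 MB expressed in bytes:

hdfs getconf -confKey dfs.blocksize    # expect 268435456 (256 MB) in this environment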


HBase Updates

This next test was some more experimenting to find out what happens at scale, so I created a one-billion-row table, which took a good hour to generate, and then did a YCSB run that updated 10 million of the rows using the 'workloada' settings (50/50 read/write). This was run on a single client, and because I was also looking for the most throughput I could generate, I ran it as a function of the number of YCSB threads. One other note: we did some tuning of Isilon and moved to OneFS 8.0.1.1, which has performance tweaks for the DataNode service. You can see the bump in performance compared to the previous set of runs. For these runs, we set hbase.regionserver.maxlogs = 256 and hbase.wal.regiongrouping.numgroups = 20.
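For reference, the run phase that produces these update numbers is invoked the same way as the earlier load command; the sketch below uses an illustrative table name and thread count, with the thread count being the variable swept in these charts:

ycsb run hbase10 -P workloads/workloada -p table='ycsb_1Btable' -p columnfamily=family -p operationcount=10000000 -threads 128
# operationcount is the total number of run-phase operations, split ~50/50 between reads and updates by workloada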

[Figure: Update run results (1 of 3)]

[Figure: Update run results (2 of 3)]
[Figure: Update run results (3 of 3)]

Looking at these runs, the first thing that is apparent is the fall-off at high thread counts. I was curious whether this was an Isilon issue or a client-side issue; we explore that further in the upcoming tests. But I can say that driving 200K+ ops at an update latency of < 3 ms is impressive. Each of these update runs was fast, and I could run them one after another; the graph below shows the even balance across the Isilon nodes for these runs.

[Figure: OneFS file-operation heat during the update runs]

Again, you can see from the Heat graph that the file operations are writes and locks, corresponding to the append nature of the WAL processes.


Region Server Scaling

The next test was to determine how the five Isilon nodes would fare against different numbers of region servers. The same update script from the previous test was run here: a one-billion-row table, with 10 million rows updated using 'workloada', a single client, and 51 YCSB threads. We also kept the same settings for maxlogs and pipelines (256 and 20, respectively).

[Figure: Region server scaling results (1 of 2)]

[Figure: Region server scaling results (2 of 2)]
 
The results are informative, albeit not surprising: the scale-out nature of HBase meets the scale-out nature of Isilon, and more is better. This is a test I would recommend customers run in their own environments as part of their sizing exercise. It may reach a point of diminishing returns, but here we have nine hefty servers pushing five Isilon nodes, and it looks like there is room for more.


More Clients

The last series of tests comes from that deep, dark place that makes you want to break the system you are testing. After all, it is a perfectly valid scientific method to ratchet a test up until things break, thereby learning the upper limit of the parameters being tested. In this series of tests, I had two additional servers to run clients from; in addition, I ran two YCSB clients on each server, allowing me to scale up to six clients, each driving 512 threads, or 4,096 threads overall. I went back and created two different tables: one with four billion rows split into 600 regions, and one with 400 million rows split into 90 regions.
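Pre-splitting the tables into a fixed number of regions, as described above, is normally done from the HBase shell at table creation time, which is a commonly used approach with YCSB; the table and column family names below are illustrative:

create 'ycsb_4Btable', 'family', {NUMREGIONS => 600, SPLITALGO => 'HexStringSplit'}    # HBase shell; 600 regions as in the larger table here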

[Figure: Client scaling results (1 of 3)]

[Figure: Client scaling results (2 of 3)]

[Figure: Client scaling results (3 of 3)]
As you can see, the size of the table matters little in this test. Looking at the Isilon Heat charts again, you can see there is only a few percentage points' difference in the number of file operations, mostly in line with the difference between a four-billion-row table and a 400-million-row one.

[Figure: OneFS file operations, four-billion-row vs. 400-million-row tables]


Conclusion

HBase is a good candidate for running on Isilon, mainly because of the scale-out-to-scale-out architecture. HBase does a lot of its own caching, and by splitting the table across a good number of regions, you get HBase to scale out with your data. In other words, it does a good job of taking care of its own needs, and the file system is there for persistence. We were not able to push the load tests to the point of actually breaking things, but if you are looking at four billion rows in your HBase design and expect 800,000 operations with less than 3 ms of latency, this architecture supports it. You will notice that I did not say much about the myriad other client-side tweaks you could apply to HBase itself; I would expect all those tweaks to still be valid, and they are beyond the scope of this test.

 

Affected Products

Isilon, PowerScale OneFS
Article Properties
Article Number: 000128942
Article Type: Solution
Last Modified: 20 Sep 2023
Version: 6