Unsolved
tigerblue77
27 messages
What block size should I choose for my file system on a RAID 6 LUN with 64KB stripes?
Hello,
The whole question is in the title, but here is the full context:
I am using a Dell PowerEdge R720XD with a PERC H710P, 12x 6TB 512e HDDs in RAID 6, and Debian 11 as my only homelab server. This server stores various data (18TB of videos, 6TB of binary data from the STORJ network, 2TB of miscellaneous files (home NAS), etc.), so I chose 64KB RAID stripes (the default value).
Given the proportion of large files, some of you will certainly tell me that I should have used larger RAID stripes. That's probably right, but when I installed the server I had no idea I would end up with this data distribution, these proportions are likely to change over time, and now I can't change this without reformatting the server. In any case, any advice is welcome.
Here's what my storage stack looks like:
[storage stack diagram]
My objective is to get the maximum performance while keeping the maximum available storage (I don't want to switch to RAID 10). From what I understand, the RAID controller only writes to the disks in full stripes of [stripe element size] x [number of data drives] = 64KB x 10 (12 drives - 2 parity drives) = 640KB in my case.
Is this correct?
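To make my reasoning explicit, here is a small Python sketch of the arithmetic (the drive counts and strip size are the ones from my setup above; this is just my understanding of how the controller stripes data, not a vendor formula):

```python
# RAID 6 stripes data across (total drives - 2), since two strips
# per stripe hold parity. A "full stripe" is one strip per data drive.
STRIP_SIZE_KB = 64      # per-drive stripe element size configured on the PERC H710P
TOTAL_DRIVES = 12
PARITY_DRIVES = 2       # RAID 6 reserves two strips per stripe for parity

data_drives = TOTAL_DRIVES - PARITY_DRIVES        # 12 - 2 = 10
full_stripe_kb = STRIP_SIZE_KB * data_drives      # 64 x 10 = 640

print(f"data drives:       {data_drives}")
print(f"full stripe width: {full_stripe_kb} KB")
```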
If yes:
- then, ideally, I should instruct LVM and the filesystems to use 64KB or 128KB blocks (the only power-of-two divisors of 640KB that are >= 64KB) to fill those 640KB full stripes as well as possible? (See also the sketch after this list.)
- then why does the RAID controller report a default value of 512B to the OS?
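For what it's worth, my current plan is to express this geometry through the filesystems' alignment hints rather than through the block size itself, since, as far as I know, ext4 and XFS block sizes cannot exceed the 4KB page size on x86. The stride/stripe-width (ext4), su/sw (XFS) and --dataalignment (LVM) options in the sketch below are real, but the values are only my own calculation from the 640KB figure above, and the device names are placeholders, so please correct me if I got it wrong:

```python
# Derive filesystem/LVM alignment hints from the RAID geometry above,
# assuming the 640KB full-stripe figure is correct.
STRIP_SIZE_KB = 64      # per-drive stripe element size on the controller
DATA_DRIVES = 10        # 12 drives - 2 RAID 6 parity strips
FS_BLOCK_KB = 4         # ext4/XFS block size (page-size limit on x86)

# ext4: stride = strip size in filesystem blocks,
#       stripe-width = stride x number of data drives
stride = STRIP_SIZE_KB // FS_BLOCK_KB             # 64 / 4 = 16
stripe_width = stride * DATA_DRIVES               # 16 x 10 = 160

full_stripe_kb = STRIP_SIZE_KB * DATA_DRIVES      # 640

# Device paths below are placeholders for my actual PV and LV.
print(f"pvcreate --dataalignment {full_stripe_kb}k /dev/sdX")
print(f"mkfs.ext4 -E stride={stride},stripe-width={stripe_width} /dev/VG/LV")
print(f"mkfs.xfs -d su={STRIP_SIZE_KB}k,sw={DATA_DRIVES} /dev/VG/LV")
```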
Thanks in advance for your help and explanations!
DellEMCSupport
Moderator
631 messages
May 14, 2023, 20:00
Hello, would this article help?
https://dell.to/42E2WXo
tigerblue77
27 messages
June 3, 2023, 10:00
Hello, no, it doesn't...
DellEMCSupport
Moderator
631 messages
June 4, 2023, 18:00
Hello, this is more in the area of consulting, which is handled by pre-sales, whereas the folks here mostly provide after-sales tech support. I can't give you much help on this, but I will leave this link here for you to have a look:
https://dell.to/3MQWOnP