This section presents guidelines for configuring protocols for OneFS.
For assistance, contact your PowerScale account representative or PowerScale Technical Support.
Item | OneFS 9.8.0.0 and later | Description |
---|---|---|
FTP connections per node | 200 | The recommended limit for FTP connections per node. This number is the tested limit. If the number of FTP connections to a node exceeds the recommended limit, FTP performance might be negatively affected. The limit for FTP connections per node assumes anonymous access that requires no authentication. |
HDFS block size | 64 MB–512 MB | The recommended range for HDFS block sizes. For best results, the block size should not be smaller than 4 KB or larger than 1 GB. The specific value varies by workflow. Smaller block sizes require more tasks; however, you want a large enough number of tasks to take advantage of all the slots on the cluster. |
HDFS root directory | 1 per access zone | The number of HDFS root directories per access zone that OneFS supports. The limitation for access zones and authentication providers is the same for HDFS and other protocols. |
Files and directories per HDFS fsimage | 30,000,000 | HDFS supports a dataset of 30,000,000 objects (files or directories) for the generation of an fsimage in each zone. |
Encryption zone keys for HDFS | 999 | Transparent Data Encryption for the HDFS protocol stores encrypted data in a directory tree that is called the encryption zone. Each encryption zone is defined by a KMS key. Each OneFS cluster supports up to 999 keys. The same key can be used in multiple zones, so this limit does not restrict the creation or management of encryption zones themselves. |
HTTP connections per node | 500 | The limit for HTTP connections per node. OneFS runs version 2 of the Apache HTTP Server, which includes the Apache Multi-Processing Module (MPM) that implements a hybrid multiprocess, multithreaded server. The Apache MPM configuration limits the number of simultaneous connections that OneFS services. OneFS queues connections after the connection limit is reached and processes them as resources become available. Exceeding this limit might negatively affect the cluster performance and client connections. Evaluate the workflow and workloads for your cluster to determine the value that works best for your environment. |
NDMP block size | 512 KB | The size limit for tape blocks. If you back up tape blocks that are larger than the size limit, OneFS backs up 256 KB blocks. |
NDMP connections per node | 64 | The limit for the number of NDMP connections that are allowed per node. |
NFS exports per cluster | 40,000 | The recommended limit for NFS exports per cluster. Exceeding this limit might negatively affect the cluster performance and client connections. Evaluate the workflow and workloads for your cluster to determine the value that works best for your environment. |
NFS max read size | 1 MB | The limit for NFS read size, or rsize, for NFS3 and NFS4. When you mount NFS exports from a cluster, a larger read size for remote procedure calls can improve throughput. The default read size in OneFS is 128 KB. An NFS client uses the largest supported size by default. As a result, avoid setting the size on your clients. Setting the value too low on a client overrides the default value and can undermine performance. |
NFS max write size | 1 MB | The limit for NFS write size, or wsize, for NFS3 and NFS4. When you mount NFS exports from a cluster, a larger write size for remote procedure calls can improve throughput. The default write size in OneFS is 512 KB. An NFS client uses the largest supported size by default. As a result, avoid setting the size on your clients. Setting the value too low on a client overrides the default value and can undermine performance. |
NFS3 connections per node | 1,024 connections | The recommended limit for NFS3 connections per node. The maximum has not been established; however, the number of available TCP sockets can limit the number of NFS connections. The number of connections that a node can process depends on the ratio of active-to-idle connections and on the resources that are available to process the sessions. Monitoring the number of NFS connections to each node helps prevent overloading a node with connections. |
NFS4 connections per node | 1,024 connections | The recommended limit for NFS4 connections per node. The maximum has not been established; however, the number of available TCP sockets can limit the number of NFS connections. The number of connections that a node can process depends on the ratio of active-to-idle connections and on the resources that are available to process the sessions. Monitoring the number of NFS connections to each node helps prevent overloading a node with connections. |
NFS over RDMA connections per node | 32 connections | The recommended maximum limit for NFS over RDMA connections per node. |
Concurrent PAPI processes per node | For OneFS 8.2.2 and later, the number of PAPI processes per node increases by 20. | The limit for the process pool for the PAPI daemon. This limit scales automatically based on the size of the cluster and affects the number of PAPI requests that can be processed concurrently. |
RAN attribute key length | 200 B | The limit of the key length for the OneFS extended user attribute (x-isi-ifs-attr-<name>). |
RAN attribute value length | 1 KB | The limit of the value length for the OneFS extended user attribute (x-isi-ifs-attr-<name>). |
Maximum RAN concurrent connections per node | 50 (default); 300 (custom) | The limit of RAN concurrent connections per node using default parameters. You can obtain higher scalability for RAN by using nondefault configuration parameters. The maximum limit depends on many parameters and can be specific to a cluster's workflow. Contact your Dell EMC PowerScale account team or PowerScale Technical Support for help with configuring the nondefault parameters. For more information, see the PowerScale knowledge base article 304701, How to update RAN scalability parameters (restricted). |
RAN URI length | 8 KB | The limit for the URI length that is used for the RAN HTTP operation. |
RAN user attributes | 126 | The limit for extended user attributes that OneFS supports. |
S3 object key length | 1,024 bytes | The maximum length of the object key that uniquely identifies an object within a bucket. |
S3 maximum number of objects per bucket | 1,000,000 | The recommended limit for objects per bucket. The limit applies to the number of direct children of a prefix, not to the total number of objects that can be stored within a root bucket. Exceeding this limit might negatively affect the cluster performance and client connections. Evaluate the workflow and workloads for your cluster to determine the value that works best for your environment. |
S3 buckets per cluster | 40,000 total buckets | The total number of S3 buckets that can be created on the cluster. There is also a limit of 1,000 buckets per user. |
S3 metadata size | Key length: 200 bytes; value length: 1,024 bytes. | Objects can have arbitrary metadata keys that consist of at most 200 bytes of UTF-8 encoded, case-sensitive alphanumeric, period ('.'), and underscore ('_') characters. Attribute values are arbitrary binary data of at most 1 KB. Although objects on OneFS can support up to 128 extended attributes with a total size of 8 KB, S3 file upload operations support a lower limit because the maximum HTTP header size is 8 KB. |
S3 connections per node | 500 | The limit for concurrent S3 connections per node. |
S3 maximum object size | 4.398 TB (4 TiB) | The maximum size for an object (file) on all PowerScale clusters. Files larger than 1 TB can negatively affect Job Engine performance. |
S3 expanded object size | 17.6 TB (16 TiB) | The maximum size for a file that can be supported with specific PowerScale hardware configurations. |
S3 multi-part upload: part size | 5 MB to 5 GB | This limit is the same as that for Amazon S3. |
SMB share names | 80 characters | Share names of up to 80 characters are supported. Unicode characters are supported except control characters (0x00–0x1F). The following characters are not allowed in a share name: " \ / [ ] : \| < > + = ; , * ? |
SMB shares per cluster | 80,000 | This is the recommended limit for SMB shares per cluster. |
SMB 1 connections per node | 1,000 | The number of SMB 1 connections that a node can process depends on the type of node and whether the connections are active or idle. The more CPUs and RAM that a node has, the more active connections the node can process. The kernel imposes memory constraints on the OneFS protocol daemons, such as the input-output daemon (lwio), and these constraints limit the number of SMB 1 connections. |
SMB 1 request size | 64 KB | The SMB 1 protocol determines the request size limit. |
SMB 2 request size | 1 MB | OneFS supports the large 1 MB maximum transmission unit (MTU) that SMB 2.1 introduced. The MTU is the size of the largest data unit that the SMB protocol can transmit and receive. The large MTU can improve the overall throughput of SMB transmissions. |
SMB 2 and SMB 3 connections per node | 3,000 active connections; 27,000 idle connections | The number of active SMB 2 or SMB 3 connections that a node can process depends on the type of node. The more CPUs and RAM that a node has, the more active connections the node can process. The kernel imposes memory constraints on the OneFS protocol daemons, such as the input-output daemon (lwio), and these constraints limit the number of SMB 2 or SMB 3 connections. To ensure that a node does not become overloaded with connections, monitor the number of SMB connections to each node. NOTE: SMB 3 features require increased memory and CPU processing. Enabling continuous availability or encryption on a share reduces these limits. |
Partitioned Performance: number of datasets | 4 datasets per cluster | The limit for the number of datasets that can be configured per cluster. |
Partitioned Performance: number of workloads | 1,024 pinned workloads per dataset | The limit for the number of workloads that can be pinned per dataset. |
Partitioned Performance: protocol ops limits per cluster | 4,096 limits per cluster | The maximum number of protocol ops limits that can be configured on the cluster: 4 datasets × 1,024 pinned workloads per dataset = 4,096. |
Partitioned Performance: number of workloads that can be monitored | 2,048 workloads displayed | The maximum number of workloads that isi statistics workload list displays is 2,048. The command lists the top workloads (those consuming the most CPU at any given point) and the pinned workloads. |
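The HDFS block-size trade-off in the table (smaller blocks produce more map tasks for the same dataset) is simple arithmetic. The sketch below is illustrative only; the function name and the 1 TiB example dataset are assumptions, not OneFS values, while the 64 MB–512 MB block sizes come from the recommended range above:

```python
import math

def map_tasks(dataset_bytes: int, block_bytes: int) -> int:
    """Approximate map-task count: one map task per HDFS block of input."""
    return math.ceil(dataset_bytes / block_bytes)

ONE_TB = 1024 ** 4          # 1 TiB example dataset, chosen for illustration
MB = 1024 ** 2
# Smaller blocks within the recommended 64 MB-512 MB range produce
# proportionally more map tasks for the same dataset.
print(map_tasks(ONE_TB, 64 * MB))    # 16384 tasks
print(map_tasks(ONE_TB, 512 * MB))   # 2048 tasks
```

As the table notes, more tasks are not automatically bad: enough tasks are needed to occupy all the slots on the cluster, so the right block size depends on the workflow.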
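The S3 metadata limits above (200-byte keys restricted to alphanumeric, '.', and '_' characters; values of at most 1 KB) can be checked before an upload. This is a minimal sketch; check_metadata is a hypothetical helper, not part of OneFS or any S3 SDK:

```python
import re

# Limits taken from the S3 metadata size row in the table above.
MAX_KEY_BYTES = 200        # metadata key length limit
MAX_VALUE_BYTES = 1024     # metadata value length limit
KEY_PATTERN = re.compile(r'^[A-Za-z0-9._]+$')  # alphanumeric, '.', '_'

def check_metadata(key: str, value: bytes) -> bool:
    """Return True if a single metadata pair fits the documented limits."""
    if not KEY_PATTERN.match(key):
        return False
    if len(key.encode('utf-8')) > MAX_KEY_BYTES:
        return False
    return len(value) <= MAX_VALUE_BYTES

print(check_metadata("owner_id", b"alice"))   # True
print(check_metadata("bad-key", b"x"))        # False: '-' is not allowed
```

Because all metadata travels in HTTP headers, the total across all pairs must also stay under the 8 KB header limit noted in the table.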
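The SMB share-name rules above (80-character limit, no control characters 0x00–0x1F, and the listed illegal characters) can likewise be validated up front. A hedged sketch; is_valid_share_name is not an OneFS API, and the limits are copied from this section:

```python
# Character rules taken from the SMB share names row in the table above.
ILLEGAL_CHARS = set('"\\/[]:|<>+=;,*?')
MAX_SHARE_NAME_LEN = 80    # character limit from the table

def is_valid_share_name(name: str) -> bool:
    """Return True if the name satisfies the documented share-name rules."""
    if not name or len(name) > MAX_SHARE_NAME_LEN:
        return False
    for ch in name:
        if ord(ch) <= 0x1F or ch in ILLEGAL_CHARS:
            # Control characters and the listed characters are not allowed.
            return False
    return True

print(is_valid_share_name("projects"))    # True
print(is_valid_share_name("bad|name"))    # False
```

Validating names client-side avoids a round trip to the cluster only to have share creation rejected.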