At this time PowerScale OneFS supports NFS versions 3 and 4. NFS version 2 is not supported.
NFS version 3 is the most widely used version of the NFS protocol today and is generally considered to have the broadest client and filer adoption. Here are the key components of this version:
NFS version 4 is the newest major revision of the NFS protocol, and its adoption is increasing. At this time NFSv4 is generally less performant than v3 for the same workflow because of the additional identity-mapping and session-tracking work required for each reply. Here are some of the key differences between v3 and v4:
NFSv4.1 and v4.2 are available starting in OneFS version 9.3.
Here is the official release information for 9.3:
https://dl.dell.com/content/docu105998_powerscale-onefs-9-3-0-0-release-notes.pdf?language=en_us
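As an illustration only (the hostname and paths below are hypothetical, and the syntax shown is Linux-specific; other operating systems differ), the protocol version can be selected at mount time:

```shell
# Mount using NFSv3 (Linux syntax; replace host and paths with your own values).
mount -t nfs -o nfsvers=3 cluster.example.com:/ifs/data /mnt/data

# Mount using NFSv4.1, available against clusters running OneFS 9.3 or later.
mount -t nfs -o nfsvers=4.1 cluster.example.com:/ifs/data /mnt/data
```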
While we do not have hard requirements for mount options, we do make some recommendations on how clients should connect. We have not provided specific mount strings, as the syntax used to define these options varies depending on the operating system in use. Consult your distribution maintainer's documentation for the exact mount syntax.
While PowerScale generally replies to client communication very quickly, when a node loses power or network connectivity it can take a few seconds for its IP addresses to move to a functional node. It is therefore important to define timeout and retry values correctly. PowerScale generally recommends a timeout of 60 seconds to account for a worst-case failover scenario, with two retries before a failure is reported.
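On Linux, the recommendation above maps to the `timeo` and `retrans` mount options; `timeo` is expressed in tenths of a second, so a 60-second timeout is `timeo=600`. The hostname and paths here are hypothetical:

```shell
# timeo=600 -> 60-second timeout; retrans=2 -> retry twice before reporting an error.
mount -t nfs -o timeo=600,retrans=2 cluster.example.com:/ifs/data /mnt/data
```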
Hard mounts cause the client to retry its operations indefinitely on timeout or error. This ensures that the client does not disconnect the mount when the PowerScale cluster moves IP addresses from one node to another. A soft mount will instead error out and expire the mount, requiring a remount to restore access after the IP address moves.
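A hedged Linux sketch of the two behaviors, using hypothetical host and path names:

```shell
# hard: operations retry indefinitely, so the mount survives an IP failover.
mount -t nfs -o hard,timeo=600,retrans=2 cluster.example.com:/ifs/data /mnt/data

# soft: operations error out once the retry budget is exhausted; a remount
# may be needed to restore access after an address move.
mount -t nfs -o soft,timeo=600,retrans=2 cluster.example.com:/ifs/scratch /mnt/scratch
```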
By default, most clients do not allow you to interrupt an input/output (I/O) wait, meaning you cannot use Ctrl+C or similar signals to end the waiting process if the cluster stops responding. Including the interrupt mount option allows those signals to pass through normally instead.
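On Linux the option is spelled `intr`. (Note that on recent Linux kernels `intr` is accepted but ignored, since fatal signals can always interrupt an NFS wait there; the option remains meaningful on some other platforms.) A hypothetical example:

```shell
# Hard mount that still allows signals (e.g. Ctrl+C) to interrupt waiting I/O.
mount -t nfs -o hard,intr cluster.example.com:/ifs/data /mnt/data
```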
When mounting an NFS export, you can specify whether a client will manage its locks locally or use the lock coordinator on the cluster. Most clients default to remote locking, and this is generally the best option when multiple clients will access the same directory; however, there can be performance benefits to local locking when a client does not need to share access to the directory it is working with. In addition, some databases and other software require local locking because they provide their own coordinator.
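On Linux, the `local_lock` mount option controls this choice (its values are `none`, `all`, `flock`, or `posix`). A sketch with hypothetical host and path names:

```shell
# Default behavior: locks are coordinated through the cluster's lock manager.
mount -t nfs -o nfsvers=3 cluster.example.com:/ifs/shared /mnt/shared

# local_lock=all: both POSIX and flock locks are kept on the client only,
# appropriate when no other client shares access to the directory.
mount -t nfs -o nfsvers=3,local_lock=all cluster.example.com:/ifs/private /mnt/private
```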