Instant maintenance mode
Perform short-term maintenance that lasts less than 30 minutes. Instant maintenance mode is designed for quick entry to and exit from a maintenance state. The node is immediately, but temporarily, removed from active participation.
Use it for scenarios such as non-disruptive rolling upgrades, where the maintenance window is only a few minutes (for example, a reboot) and there are no known hardware issues.
Protected maintenance mode
Perform maintenance or updates that take longer than 30 minutes in a safe and protected manner. PowerFlex makes a temporary copy of the data, providing data availability without the risk of exposing a single accessible copy.
Instant maintenance mode
In instant maintenance mode, the data on the PowerFlex node undergoing maintenance is not removed from the cluster. However, this data is not available for use for the duration of the maintenance activity. Instead, the extra copies of data residing on the other PowerFlex nodes are used for application reads.
The existing data on the PowerFlex node being maintained is, in effect, frozen on the node. This is a planned operation that does not trigger a rebuild. Instead, the PowerFlex metadata manager instructs the storage data clients (SDCs) where to direct the reads and writes that were intended for the PowerFlex node in maintenance.
A disadvantage of instant maintenance mode is that it introduces the risk of having only a single accessible copy of data during the maintenance activity. During instant maintenance mode, two copies of the data still exist; however, any copy residing on the PowerFlex node in maintenance is unavailable for the maintenance duration.
When exiting instant maintenance mode, you do not need to rehydrate the PowerFlex node completely. You need only sync back the changes that occurred during maintenance and reuse all the unchanged data already on the PowerFlex node. The result is a quick exit from maintenance mode and a quick return to full capacity and performance.
Protected maintenance mode
Protected maintenance mode initiates a many-to-many rebalancing process. Data is preserved on the PowerFlex node entering maintenance, and a temporary copy of the data is created on the remaining PowerFlex nodes. Data on the PowerFlex node in maintenance is frozen and inaccessible. Protected maintenance mode maintains two available copies of data at all times, avoiding the single-copy risk of instant maintenance mode.
During protected maintenance mode, changes are tracked only for writes that affect the SDS under maintenance mode. When exiting the SDS from maintenance mode, only the changes that occurred during maintenance need to be synced to the SDS.
Because it creates a temporary third copy of the data, protected maintenance mode requires more spare capacity than instant maintenance mode. Account for this spare capacity during deployment if you plan to use protected maintenance mode. There must be enough spare capacity to handle at least one additional PowerFlex node failure, because protected maintenance mode cycles might be long and other elements could fail while one is in progress.
Protected maintenance mode makes the best use of all unused, available capacity: it uses both the allocated spare capacity and any generally free capacity, without ignoring the capacity requirements. PowerFlex nodes entering protected maintenance mode, or nodes in the same fault set, may have degraded capacity.
The following inequality summarizes the minimum requirement:

free + spare - (5% of the storage pool) >= capacity of the PowerFlex node entering protected maintenance mode
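The inequality can be sketched as a simple pre-flight check. The function and all figures below are illustrative assumptions, not values from any real cluster or any PowerFlex API.

```python
# Minimal sketch of the protected maintenance mode (PMM) capacity check:
# free + spare - 5% of the storage pool must cover the node entering PMM.
# All capacities are in TB; the numbers are illustrative only.

def can_enter_pmm(free_tb: float, spare_tb: float,
                  pool_tb: float, node_tb: float) -> bool:
    """Return True if the pool satisfies the minimum PMM requirement."""
    return free_tb + spare_tb - 0.05 * pool_tb >= node_tb

# Example: 100 TB pool, 20 TB free, 12 TB spare, 15 TB node.
print(can_enter_pmm(free_tb=20, spare_tb=12, pool_tb=100, node_tb=15))
# 20 + 12 - 5 = 27 >= 15, so the node may enter PMM
```

Running the same check with only 5 TB free (5 + 12 - 5 = 12 < 15) would fail, which is exactly the situation where more spare capacity must be allocated before using protected maintenance mode.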
Eject the node from the cluster
When a PowerFlex node is gracefully removed using the UI or CLI, a many-to-many rebalance operation between nodes begins. This ensures that there are two copies of all data on all other PowerFlex nodes before the PowerFlex node being maintained is dropped from the cluster. Data is fully protected, as there are always two available copies of the data.
You may need to adjust the spare capacity assigned to the cluster overall, because the data rebalancing consumes free spare capacity on the other PowerFlex nodes. For example, if you start with 10 nodes and 10% spare capacity, running with nine PowerFlex nodes requires 12% spare capacity to avoid an insufficient spare capacity alert. Spare capacity must be equal to or greater than the capacity of the smallest unit (a PowerFlex node).
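The 10%-to-12% jump follows from requiring spare capacity to cover one node. Assuming equally sized nodes (a simplification; clusters with unequal nodes must cover the largest node's share instead), the required spare percentage can be sketched as:

```python
import math

# Hedged sketch: minimum spare percentage so that spare capacity covers
# one node, assuming all nodes are equally sized. Rounds up to the next
# whole percent, matching the 10-node / 9-node example in the text.

def required_spare_percent(node_count: int) -> int:
    return math.ceil(100 / node_count)

print(required_spare_percent(10))  # 10 -> 10% spare with all 10 nodes
print(required_spare_percent(9))   # 12 -> 12% spare when running on 9
```

With 9 nodes, one node holds 100/9, or about 11.1%, of the capacity, so 11% spare is not quite enough and the threshold rounds up to 12%.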
During maintenance, the cluster functions normally, but with one less node and therefore less capacity and lower performance. Data writes are sent to and mirrored on the other PowerFlex nodes. It does not matter how long the maintained PowerFlex node stays offline, because it is no longer part of the cluster. There is no exposure or risk of data unavailability if a problem prevents the PowerFlex node from being re-added.
General restrictions and limitations
Do not put two PowerFlex nodes from the same protection domain into instant maintenance mode or protected maintenance mode simultaneously.
You cannot mix protected maintenance mode and instant maintenance mode on the same protection domain.
For each protection domain, all SDS concurrently in protected maintenance mode must belong to the same fault set. There are no inter-protection domain dependencies for protected maintenance mode.
You can take down a single SDS or a full fault set in protected maintenance mode.