
Dell EMC PowerFlex 3.6.0.2 Release Notes


Fixed issues

The following table lists the issues that are fixed in PowerFlex 3.6.0.2.

NOTE: If an issue was reported by customers, the customers' Service Request (SR) numbers appear in the "Issue number & SR number" column, correlating customer-reported issues with PowerFlex issue numbers.
Table 1. Fixed issues
Issue number & SR number Problem summary
SCI-62464

SR# 119879257

In very rare cases, an SDS may fail in the following scenario: the system was upgraded from a pre-3.5 version to version 3.5 or later, and the Medium Granularity checksum was enabled after a certain SDS device was removed.

Additional information on the device: In a single SDS, if the last device of a certain Storage Pool is removed, that Storage Pool mistakenly changes its checksum state from NOT_READY to READY. This change would have been correct only if the removed device were the last device in the entire Storage Pool (that is, no other devices remain in it, so all future devices are guaranteed to be READY).

However, as long as other SDSs in this Storage Pool still contain pre-3.5 devices, this change to READY is problematic and causes unplanned failure of SDS devices. The Storage Pool logic assumes that all of its devices are prepared with the reserved checksum space, when in fact some of them may not be. This leads to a failure as soon as the user enables persistent checksum: the MDM instructs the SDSs to start using the (non-existent) reserved checksum space, and the device headers are overwritten with checksum data.

SCI-62191

SR# 929897

In extremely rare cases, when attempting to log in to the MDM using LDAP, the MDM crashes.
SCI-61773 A new command was added to PowerFlex v3.6.0.2 to help address cases where the path in use will change on the next reboot, or has changed after a reboot.

For example, in a CloudLink software encryption use case, a disk using /dev/mapper/svm_sdb may change to /dev/mapper/svm_sdc after a reboot. With CloudLink 7.1.1, paths transition to the form /dev/mapper/svm_<unique ID>.

If the device is in a failed state, use the --set_sds_device_path command and then clear the device error.

If the SDS is in instant maintenance mode or protected maintenance mode, first update the paths of all failed devices, clear the device errors, and then exit instant maintenance mode or protected maintenance mode.

Command information:

Usage: scli --set_sds_device_path (--device_id <ID> | ((--sds_id <ID> | --sds_name <NAME> | --sds_ip <IP> [--sds_port <PORT>]) (--device_name <NAME> | --device_path <PATH>))) --new_device_path <PATH>

Description: Configure the path of the given storage device

Parameters:

--sds_id <ID> SDS ID

--sds_name <NAME> SDS name

--sds_ip <IP> SDS IP address

--sds_port <PORT> Port assigned to the SDS

--device_id <ID> Device ID

--device_name <NAME> Device name

--device_path <PATH> SDS storage device path or file path. Can only be used when the device is available at this path.

--new_device_path <PATH> The new path to configure. If the device has failed, use the --device_id <ID> switch instead of --device_path <PATH> to identify the original device whose path will change to the new one.

REST info:

curl -s -k -i -X POST -H "Content-Type:application/json" -u admin:$token https://127.0.0.1/api/instances/Device::123456789/action/setPath -d '{"newPath":"/dev/sda"}'
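The REST call above can also be scripted. The following Python sketch builds the setPath URL and JSON body; the gateway address, device ID, and new path are placeholder values taken from the example, and actually sending the request (for example with the requests library, basic auth, and the session token) is left as a commented-out step:

```python
import json

def build_set_path_request(gateway, device_id, new_path):
    """Build the URL and JSON body for the PowerFlex Device setPath action.

    Follows the endpoint form from the release note:
    POST <gateway>/api/instances/Device::<id>/action/setPath
    """
    url = f"{gateway}/api/instances/Device::{device_id}/action/setPath"
    body = json.dumps({"newPath": new_path})
    return url, body

# Placeholder values from the curl example above.
url, body = build_set_path_request("https://127.0.0.1", "123456789", "/dev/sda")
print(url)
print(body)

# To actually send it (requires the admin session token):
# import requests
# requests.post(url, data=body, auth=("admin", token),
#               headers={"Content-Type": "application/json"}, verify=False)
```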

SCI-61476 After an SDS is disconnected from the MDM, upon reconnecting, the MDM initiates an update action (reconfigure) to validate and align the SDS with the MDM after the disconnection.

As part of the "reconfigure" process, an "Add SDS disk device" action was prepared and sent, and in addition a "Remove SDS" command was invoked.

The MDM encountered a software issue related to the timing of those two actions, and experienced an unexpected process restart.

The MDM watchdog starts the MDM after the process crash, and if that SDS has devices in error, another software issue causes the MDM to crash again repeatedly. After multiple crashes, the MDM process stops and does not start up again.

An MDM switchover does not resolve the issue, and the secondary MDM also crashes in this case. This leads to a data unavailability scenario.

SCI-56740

SR# 120522492

In rare cases, while a node is in protected maintenance mode and a device error occurs, an I/O error may result. This issue is caused by a race condition in which the MDM and the SDS both try to fail the device at the same time, while the MDM is removing protected maintenance mode protection and enabling a forward rebuild.

(Fixed in v3.6 and v3.5.1.2)

