All events in this section apply to the block subsystem.
Code | Name | Description | Severity
---|---|---|---
20010001 | MDM_STARTED | MDM has started{{more_info}} | INFORMATION |
20010002 | CLI_COMMAND_RECEIVED | Command {{command_name}} received. [{{seq}}]{{more_info}} | INFORMATION |
20010003 | MDM_BECOMING_PRIMARY | This MDM is switching to Primary mode. MDM will start running{{more_info}} | MINOR |
20010004 | SDS_DEV_ERROR_REPORT | Device error reported on SDS: {{sds_title}}, Device: {{dev_path}}{{more_info}} | MAJOR |
20010005 | SDS_DECOUPLED | SDS: {{sds_title}} (ID: {{sds_id}}) decoupled{{more_info}} | MAJOR |
20010006 | MDM_DATA_NORMAL | The system is now in NORMAL state{{more_info}} | INFORMATION |
20010007 | MDM_DATA_DEGRADED | The system is now in DEGRADED state{{more_info}} | MAJOR |
20010008 | MDM_DATA_FAILED | The system is now in DATA FAILURE state. Some data is unavailable{{more_info}} | CRITICAL |
20010009 | CLI_COMMAND_SUCCEEDED | Command {{command_name}} succeeded. Return code: {{command_return_value}} ({{command_return_code}}) [{{seq}}]{{more_info}} | INFORMATION |
2001000a | CLI_COMMAND_FAILED | Command {{command_name}} was not successful. Error code: {{command_return_value}} ({{command_return_code}}) [{{seq}}]{{more_info}} | MINOR |
2001000b | CUSTOM_INFO | Info: {{text}}{{more_info}} | INFORMATION |
2001000c | CUSTOM_WARNING | Warning: {{text}}{{more_info}} | MINOR |
2001000d | CUSTOM_ERROR | Error: {{text}}{{more_info}} | MAJOR |
2001000e | CUSTOM_CRITICAL | Critical Error: {{text}}{{more_info}} | CRITICAL |
2001000f | CREATE_SNAPSHOT_FAILED | Could not create snapshot for volume {{vol_title}}. Error: {{rc}}{{more_info}} | MAJOR
20010010 | OPEN_SDS_DEVICE_FAILED | Could not open a device on SDS: {{sds_id}} (Path: {{dev_path}}). Error message: {{rc}}{{more_info}} | MAJOR |
20010011 | SDS_RECONNECTED | SDS: {{sds_name}} (ID {{sds_id}}) reconnected{{more_info}} | INFORMATION |
20010012 | NEW_SDC_CONNECTED | New SDC connected. ID: {{sdc_id}}; IP: {{sdc_ip}}; GUID: {{sdc_guid}}{{more_info}} | MINOR |
20010013 | SDC_CONNECTED | SDC connected. ID: {{sdc_id}}; IP: {{sdc_ip}}; GUID: {{sdc_guid}}{{more_info}} | INFORMATION |
20010014 | SDC_DISCONNECTED | SDC disconnected. ID: {{sdc_id}}; GUID: {{sdc_guid}}{{more_info}} | MINOR |
20010015 | SDS_REMOVE_DONE | SDS {{sds_title}} (ID {{sds_id}}) removed successfully{{more_info}} | INFORMATION |
20010016 | DEV_CAPACITY_USAGE_NORMAL | Capacity usage on {{sp_title}} is normal{{more_info}} | INFORMATION |
20010017 | DEV_CAPACITY_USAGE_HIGH | Capacity usage on {{sp_title}} is HIGH{{more_info}} | MINOR |
20010018 | DEV_CAPACITY_USAGE_CRITICAL | Capacity usage on {{sp_title}} is CRITICAL{{more_info}} | MAJOR |
20010019 | NO_REBUILD_PROGRESS_WARNING | No rebuild progress for {{num}} minutes{{more_info}} | MINOR |
2001001a | NO_REBUILD_PROGRESS_ERROR | No rebuild progress for {{num}} minutes{{more_info}} | MAJOR |
2001001b | NO_REBUILD_PROGRESS_CRITICAL | No rebuild progress for {{num}} minutes{{more_info}} | CRITICAL |
2001001c | REBUILD_PROGRESS_RESUMED | Rebuild progress has resumed{{more_info}} | INFORMATION |
2001001d | LICENSE_EXPIRATION_WARNING | License will expire in {{num}} days{{more_info}} | MINOR |
2001001e | LICENSE_EXPIRATION_ERROR | License will expire in {{num}} days{{more_info}} | MAJOR |
2001001f | LICENSE_EXPIRED | License has expired{{more_info}} | CRITICAL |
20010020 | UPGRADE_STARTED | Upgrade to version {{ver}} has started{{more_info}} | INFORMATION |
20010021 | UPGRADE_FINISHED | Upgrade completed successfully{{more_info}} | INFORMATION |
20010022 | UPGRADE_FAILED | Upgrade was not successful. Reason: {{reason}}{{more_info}} | MAJOR |
20010023 | SNAPSHOT_VOLUMES_FAILED_BY_ID | Could not snapshot volumes because a volume was not found. ID: {{vol_id}}{{more_info}} | MAJOR
20010024 | SNAPSHOT_VOLUMES_FAILED_BY_NAME | Could not snapshot volumes because a volume was not found. Name: {{vol_name}}{{more_info}} | MAJOR
20010025 | REMOTE_SYSLOG_MODULE_INITIALIZED | Initialized the remote syslog module{{more_info}} | INFORMATION |
20010026 | MDM_CLI_COMMAND_RECEIVED | Command {{command_name}} received, User: '{{user}}'. [{{seq}}]{{more_info}} | INFORMATION |
20010027 | MDM_CLI_CONF_COMMAND_RECEIVED | Command {{command_name}} received, User: '{{user}}'. [{{seq}}]{{more_info}} | INFORMATION |
20010028 | DEVICE_TEST_FAILED | Device test failed on SDS {{sds_name}} (ID: {{sds_id}}), device {{dev_path}} ({{dev_id}}){{more_info}} | MINOR |
20010029 | CLI_CONF_COMMAND_RECEIVED | Command {{command_name}} received. [{{seq}}]{{more_info}} | INFORMATION |
2001002a | MDM_CLUSTER_NORMAL | MDM cluster is now in NORMAL mode{{more_info}} | INFORMATION |
2001002b | MDM_CLUSTER_BECOMING_PRIMARY | This MDM, {{mdm_title}}, took control of the cluster and is now the Primary MDM{{more_info}} | MINOR
2001002c | MDM_CLUSTER_NODE_FAILURE | This MDM cannot communicate with MDM cluster node {{mdm_title}}, invalid response ({{rc}}){{more_info}} | MAJOR |
2001002d | MDM_CLUSTER_VOTER_FAILURE | This node, ID {{mdm_id}}, received unexpected messages from MDM node {{other_mdm_id}} - Please check the MDM cluster configuration{{more_info}} | MAJOR |
2001002e | MDM_CLUSTER_VOTER_NOT_CONFIG | This MDM {{mdm_id}} configuration is inconsistent. Please remove it from the MDM cluster and add it again{{more_info}} | MAJOR |
2001002f | SDS_CONFIG_INVALID | SDS {{sds_name}} (ID {{sds_id}}) configuration is invalid{{more_info}} | CRITICAL |
20010030 | SCANNER_NEW_UNFIXED_ERRORS__ERROR | SDS {{sds_title}} encountered one or more unfixed {{type}} errors on device {{dev_path}} ({{counters}}){{more_info}} | MAJOR |
20010031 | SCANNER_NEW_UNFIXED_ERRORS__WARN | SDS {{sds_title}} encountered one or more unfixed {{type}} errors on device {{dev_path}} ({{counters}}){{more_info}} | MINOR |
20010032 | SCANNER_NEW_FIXED_ERRORS__WARN | SDS {{sds_title}} encountered one or more {{type}} errors on device {{dev_path}}, and they were all fixed ({{counters}}){{more_info}} | MINOR |
20010033 | SCANNER_NEW_FIXED_ERRORS__INFO | SDS {{sds_title}} encountered one or more {{type}} errors on device {{dev_path}}, and they were all fixed ({{counters}}){{more_info}} | INFORMATION |
20010034 | SCANNER_FIXED_SOME_OLD_ERRORS__WARN | SDS {{sds_title}} fixed some of the encountered {{type}} errors on device {{dev_path}} ({{counters}}){{more_info}} | MINOR |
20010035 | SCANNER_FIXED_SOME_OLD_ERRORS__INFO | SDS {{sds_title}} fixed some of the encountered {{type}} errors on device {{dev_path}} ({{counters}}){{more_info}} | INFORMATION |
20010036 | SCANNER_FIXED_ALL_OLD_ERRORS__WARN | SDS {{sds_title}} fixed all encountered {{type}} errors on device {{dev_path}} ({{counters}}){{more_info}} | MINOR |
20010037 | SCANNER_FIXED_ALL_OLD_ERRORS__INFO | SDS {{sds_title}} fixed all encountered {{type}} errors on device {{dev_path}} ({{counters}}){{more_info}} | INFORMATION |
20010038 | PERFORMANCE_PARAMETER_INVALID | Trying to set an invalid value ({{val}}) for performance parameter ({{name}}){{more_info}} | MINOR |
20010039 | UPGRADE_PROCESS_STARTED | Upgrade process started{{more_info}} | INFORMATION |
2001003a | UPGRADE_PROCESS_ABORTED | Upgrade process aborted{{more_info}} | INFORMATION |
2001003b | UPGRADE_PROCESS_COMPLETED | Upgrade process completed. The system moved to version {{ver}}{{more_info}} | INFORMATION |
2001003c | SDS_MAINTENANCE_MODE_ENDED | SDS {{sds_name}} (ID: {{sds_id}}) has exited maintenance mode{{more_info}} | INFORMATION |
2001003d | MDM_CLUSTER_UPGRADED | The MDM cluster was successfully upgraded to version {{ver}}{{more_info}} | INFORMATION |
2001003e | SDS_UPGRADED | SDS: {{sds_title}} (ID {{sds_id}}) upgraded to version {{ver}}{{more_info}} | INFORMATION |
2001003f | SDC_UPGRADED | SDC upgraded to version {{ver}}. ID: {{sdc_id}}; IP: {{sdc_ip}}; GUID: {{sdc_guid}}{{more_info}} | INFORMATION |
20010040 | MDM_UPGRADED | MDM upgraded to version {{ver}}. ID: {{mdm_title}}{{more_info}} | INFORMATION |
20010041 | SDR_UPGRADED | SDR: {{sdr_title}} (ID {{sdr_id}}) upgraded to version {{ver}}{{more_info}} | INFORMATION |
20010042 | MDM_TB_START | MDM started with the role of Tie-Breaker{{more_info}} | INFORMATION |
20010043 | MDM_MANAGER_START | MDM started with the role of Manager{{more_info}} | INFORMATION |
20010044 | OSCILLATION_COUNTER_PASSED_THRESHOLD | {{obj}} (Name: {{name}}, ID: {{id}}) reports frequently exceeded {{counter_name}}. {{type}} window threshold ({{threshold}} {{counter_desc}} in {{window}} seconds){{more_info}} | MINOR |
20010045 | DEV_OSCILLATION_COUNTER_PASSED_THRESHOLD | SDS (Name: {{sds_name}}, ID: {{sds_id}}) device (Name: {{dev_name}}, ID: {{dev_id}}) reports frequently exceeded {{counter_name}}. {{type}} window threshold ({{threshold}} {{counter_desc}} in {{window}} seconds){{more_info}} | MINOR |
20010046 | NET_OSCILLATION_COUNTER_PASSED_THRESHOLD | {{obj}} (Name: {{name}}, ID: {{id}}) reports frequently exceeded {{counter_name}} (Name: {{other_obj_name}}, ID: {{other_obj_id}} IP: {{other_obj_ip}}). {{type}} window threshold ({{threshold}} {{counter_desc}} in {{window}} seconds){{more_info}} | MINOR |
20010047 | NOT_ENOUGH_FREE_UMTS_AVAILABLE | You are trying to allocate too many UMTs ({{num}}). The maximum allowed is ({{max_num}}){{more_info}} | MINOR |
20010048 | CLUSTER_GAINED_VOTER_LEASE | Gained Tie-Breaker {{voter_title}} lease{{more_info}} | INFORMATION |
20010049 | CLUSTER_LOST_VOTER_LEASE | Lost Tie-Breaker {{voter_title}} lease{{more_info}} | MINOR |
2001004a | SDS_AUTHENTICATION_FAILED | SDS: {{sds_title}} (ID {{sds_id}}) failed authentication ({{rc}}){{more_info}} | MAJOR |
2001004b | SDS_MAINTENANCE_MODE_ENTRY_FAILED | SDS {{sds_name}} (ID {{sds_id}}) could not enter maintenance mode ({{rc}}){{more_info}} | MINOR |
2001004c | SDS_MAINTENANCE_MODE_EXIT_FAILED | SDS {{sds_name}} (ID {{sds_id}}) could not exit maintenance mode ({{rc}}){{more_info}} | MINOR |
2001004d | MDM_FAILED_LOADING_AUTHENTICATION | This MDM could not load authentication ({{rc}}){{more_info}} | CRITICAL |
2001004e | MDM_FAILED_LOADING_CLIENTS_SECURITY | This MDM could not load management clients security ({{rc}}){{more_info}} | CRITICAL |
2001004f | MDM_CLUSTER_SECONDARY_ERROR | The Secondary MDM, {{secondary_mdm_title}}, cannot process requests from the MDM cluster node, ID {{mdm_id}}, ({{rc}}){{more_info}} | MAJOR |
20010050 | MDM_CLUSTER_NOT_RESPOND | The MDM, {{mdm_title}}, is not responding{{more_info}} | MINOR |
20010051 | MDM_CLUSTER_LOST_CONNECTION | The MDM, {{mdm_title}}, has lost connection to the cluster{{more_info}} | MINOR |
20010052 | MDM_CLUSTER_CONNECTED | The MDM, {{mdm_title}}, connected after {{num}}ms{{more_info}} | INFORMATION |
20010053 | MDM_CLUSTER_NODE_DEGRADED | MDM cluster node {{mdm_title}}; {{portal}} is now in DEGRADED state (node offline){{more_info}} | MAJOR
20010054 | MDM_CLUSTER_NODE_NORMAL | MDM cluster node {{mdm_title}}; {{portal}} is now in NORMAL state{{more_info}} | INFORMATION |
20010056 | SDS_DEV_WARNING | A device warning threshold has been reached on SDS: {{sds_title}}, Device: {{dev_path}}{{more_info}} | MINOR |
20010057 | SDS_DEV_NOTICE | A device notice threshold has been reached on SDS: {{sds_title}}, Device: {{dev_path}}{{more_info}} | INFORMATION |
20010058 | SDS_IN_COOL_DOWN | SDS {{sds_title}} (ID {{sds_id}}) will disconnect from MDM for {{num}} seconds{{more_info}} | MINOR |
20010059 | MDM_CLUSTER_FAILED_EXPOSE_VIRT_IP | The Primary MDM, {{mdm_title}}, could not expose virtual IP addresses{{more_info}} | MAJOR |
2001005a | SDS_DEV_FAILURE_STATE_CROSSED_THRESHOLD | The device failure state threshold was crossed on SDS: {{sds_title}}, Device: {{dev_path}}{{more_info}} | MAJOR |
2001005b | SDS_DEV_TEMPERATURE_PASSED_THRESHOLD | The device temperature threshold has been passed on SDS: {{sds_title}}, Device: {{dev_path}}, Threshold: {{threshold}}, Current: {{curr_val}}, Worst: {{worst_val}}{{more_info}} | MAJOR |
2001005c | SDS_DEV_SSD_EOL_PASSED_THRESHOLD | The SSD Device end of life attribute has passed its threshold on SDS: {{sds_title}}, Device: {{dev_path}}{{more_info}} | MAJOR |
20010060 | MDM_CLUSTER_FAILED_CREATE_VIRT_IP | The MDM cluster node, {{mdm_title}}, could not create virtual IP addresses{{more_info}} | MAJOR |
20010061 | SDC_DISCONNECTED_FROM_SDS_IP | SDC Name: {{sdc_name}}; ID: {{sdc_id}} disconnected from the IP address {{sds_ip}} of SDS {{sds_title}}; ID: {{sds_id}}{{more_info}} | MINOR |
20010062 | SDC_CONNECTED_TO_SDS_IP | SDC Name: {{sdc_name}}; ID: {{sdc_id}} is now connected to the IP address {{sds_ip}} of SDS {{sds_title}}; ID: {{sds_id}}{{more_info}} | INFORMATION |
20010063 | MULTIPLE_SDC_CONNECTIVITY_CHANGES | Multiple SDC connectivity changes occurred{{more_info}} | INFORMATION |
20010064 | VOLUME_BLOCK_MIGRATION_FAILED | Could not migrate part of Volume: {{vol_title}}, VTree: {{vtree_title}}{{more_info}} | MINOR |
20010065 | VTREE_MIGRATION_DONE | Finished migration of Volume: {{vol_title}}, VTree: {{vtree_title}} from Storage Pool: {{src_sp_title}} to Storage Pool: {{dst_sp_title}}{{more_info}} | INFORMATION |
20010066 | SDS_DEV_MEDIA_TYPE_MISMATCH | Performance metrics on SDS {{sds_title}} ({{sds_id}}) device {{dev_path}} indicate a media type mismatch. Expected {{expected_media_type}}, detected {{detected_media_type}}. Check the health of the mismatched device, or replace the device{{more_info}} | MINOR
20010067 | SUSPECT_DUPLICATE_SDC_GUID | SDC ID: {{sdc_id}} GUID: {{sdc_guid}} old address: {{old_ip}} new address: {{new_ip}}{{more_info}} | MINOR |
20010068 | SUSPECT_DUPLICATE_SDC_IP | The IP address {{ip}} is being used by two different SDCs: ID: {{sdc_id1}} and ID: {{sdc_id2}}{{more_info}} | MINOR |
20010069 | MDM_CLUSTER_SECONDARY_RESYNC_END | The Secondary MDM, {{secondary_mdm_title}}, synchronization from MDM cluster node, {{mdm_title}}, is complete{{more_info}} | INFORMATION |
2001006a | SPARK_MESSAGE | Test Message{{more_info}} | INFORMATION |
2001006b | USER_DATA_MAYBE_OVERRIDDEN | Device data might have been overridden on SDS: {{sds_id}} (Path: {{dev_path}}) - could not verify{{more_info}} | MINOR |
2001006c | DATA_CORRUPTION_DISCOVERED | Data corruption was discovered on SDS: {{sds_id}} Device: {{dev_id}} (Context: {{comb_id}}){{more_info}} | MAJOR |
2001006d | MDM_CLI_CONF_COMMAND_RECEIVED_CONT | The command {{command_name}} is in progress. User: '{{user}}'. [{{seq}}]{{more_info}} | INFORMATION |
2001006e | SNAP_POLICY_FAILED | The snapshot policy {{snap_policy_id}} with {{num}} source volumes could not take snapshots at {{time}}. Reason: {{reason}}{{additional_info}}{{more_info}} | MAJOR |
2001006f | SNAP_POLICY_WAS_AUTO_PAUSED | The snapshot policy {{snap_policy_id}} was automatically paused due to failover or switchover flow{{more_info}} | MINOR |
20010070 | SNAP_POLICY_WAS_AUTO_RESUMED | The snapshot policy {{snap_policy_id}} was automatically resumed following exit from the failover or switchover flow{{more_info}} | INFORMATION
20010071 | SNAP_POLICY_HAS_RPL_VOLS | The snapshot policy {{snap_policy_id}} has replication volumes. We recommend using a replication snapshot policy for replicated volumes instead{{more_info}} | MINOR
20010072 | DEV_CAPACITY_USAGE_FULL | Capacity usage on {{sp_title}} is FULL{{more_info}} | MAJOR |
20010073 | MDM_CLUSTER_VERSION_MISMATCH | This MDM (version: {{ver}}) cannot communicate with MDM, {{other_mdm_title}} (version: {{other_mdm_ver}}) due to a version mismatch{{more_info}} | MAJOR |
20010074 | DEV_METADATA_USAGE_HIGH | Metadata usage on SDS: {{sds_title}}, Device: {{dev_path}} is too high{{more_info}} | MINOR |
20010075 | UNAPPROVED_SDC_IP | SDC with GUID {{sdc_guid}} is trying to connect from the unapproved IP address {{ip}}. Approved IPs: {{ips}}{{more_info}} | MAJOR |
20010076 | UNAPPROVED_SDC_IP_FULL_HASH | The Hash of SDC connections from the unapproved IP address is full{{more_info}} | MINOR |
20010077 | CMATRIX_POLICY_BECAME_BETTER | The inter-SDS connectivity health for Protection Domain ID: {{pd_id}} has changed from {{old_policy}} to {{new_policy}}{{more_info}} | INFORMATION |
20010078 | CMATRIX_POLICY_BECAME_WORSE | The inter-SDS connectivity health for Protection Domain ID: {{pd_id}} has changed from {{old_policy}} to {{new_policy}}{{more_info}} | MINOR |
20010079 | RECOVERABLE_CHECKSUM_MISMATCH_FOUND | A recoverable checksum mismatch has been found on device ID: {{dev_id}}{{more_info}} | MAJOR |
2001007a | ACC_DEVICE_OVERBOOKING | Your system has insufficient NVDIMM capacity on SDS ID: {{sds_id}}, Device ID: {{dev_id}}. The required capacity for this device is {{capacity}} MB{{more_info}} | MAJOR |
2001007b | ACC_DEVICE_METADATA_CORRUPTED | Metadata of acceleration device on SDS: {{sds_id}} may be corrupted. Device ID: {{dev_id}}, Path: {{dev_path}}{{more_info}} | CRITICAL |
20010082 | SDR_DECOUPLED | SDR: {{sdr_title}} (ID: {{sdr_id}}) was decoupled{{more_info}} | MAJOR |
20010083 | PEER_MDM_UPGRADED | Replication Peer System: {{peer_mdm_title}} (ID {{peer_mdm_id}}) was upgraded to version {{ver}}{{more_info}} | INFORMATION |
20010084 | PEER_MDM_CONNECTED | Replication Peer System: {{peer_mdm_title}} (ID {{peer_mdm_id}}) is connected{{more_info}} | INFORMATION |
20010085 | PEER_MDM_DISCONNECTED | Replication Peer System: {{peer_mdm_title}} (ID {{peer_mdm_id}}) lost connection{{more_info}} | MINOR |
20010086 | PEER_MDM_ADDPEER_SUCCEEDED | The logical client peer connection to the Replication Peer System {{peer_mdm_title}} (ID {{peer_mdm_id}}) is established successfully{{more_info}} | INFORMATION |
20010087 | PEER_MDM_ADDPEER_FAILED | The logical client peer connection to the Replication Peer System {{peer_mdm_title}} (ID {{peer_mdm_id}}) has failed: {{rc}}{{more_info}} | MINOR |
20010088 | SDR_CONFIG_INVALID | SDR: {{sdr_title}} (ID: {{sdr_id}}) configuration is invalid{{more_info}} | CRITICAL |
20010089 | SDR_IN_COOL_DOWN | SDR: {{sdr_title}} (ID: {{sdr_id}}) will disconnect from MDM for {{num}} seconds{{more_info}} | MINOR |
2001008a | SDR_AUTHENTICATION_FAILED | SDR: {{sdr_title}} (ID: {{sdr_id}}) failed authentication ({{rc}}){{more_info}} | MAJOR |
2001008b | RPL_CG_SRC_REJECTED | Replication Consistency Group creation was refused at Source: {{rc}}{{more_info}} | MINOR |
2001008c | RPL_CG_SRC_REQUESTED | Replication Consistency Group ID {{rcg_id}} was created successfully in REQUESTED state in Source{{more_info}} | INFORMATION |
2001008d | RPL_CG_SRC_RECEIVED_REJECTED | Replication Consistency Group ID {{rcg_id}} was rejected by Destination: {{rc}}{{more_info}} | MINOR |
2001008e | RPL_CG_SRC_RECEIVED_NORMAL | Replication Consistency Group ID {{rcg_id}} was confirmed by Destination. Remote ID: {{remote_id}}{{more_info}} | INFORMATION |
2001008f | RPL_CG_DST_REJECTED | Replication Consistency Group Remote ID {{rcg_id}} creation on remote system {{remote_sys_id}} was rejected: {{rc}}{{more_info}} | MINOR |
20010090 | RPL_CG_DST_CREATED_NORMAL | Replication Consistency Group Remote ID {{rcg_id}} creation as NORMAL was completed, with local ID {{local_id}}{{more_info}} | INFORMATION |
20010091 | RPL_CG_DELETION_ON_END_OF_PD_CAPACITY | Replication Consistency Group ID {{rcg_id}} deletion sequence started due to end of Protection Domain capacity{{more_info}} | CRITICAL |
20010092 | RPL_CG_TERMINATION_ON_END_OF_PD_CAPACITY | Replication Consistency Group ID {{rcg_id}} termination sequence started due to end of Protection Domain capacity{{more_info}} | CRITICAL |
20010093 | RPL_CG_DELETION_STARTED | Replication Consistency Group ID {{rcg_id}} deletion sequence started{{more_info}} | INFORMATION |
20010094 | RPL_CG_DELETED | Replication Consistency Group ID {{rcg_id}} deletion sequence was completed{{more_info}} | INFORMATION |
20010095 | RPL_CG_MOVED_TO_SLIM_MODE | Replication Consistency Group ID {{rcg_id}} entered slim mode{{more_info}} | INFORMATION |
20010096 | RPL_CG_EXITED_SLIM_MODE | Replication Consistency Group ID {{rcg_id}} exited slim mode{{more_info}} | INFORMATION |
20010097 | RPL_CG_APPLY_FROZEN | Replication Consistency Group ID {{rcg_id}} apply is now frozen{{more_info}} | INFORMATION |
20010098 | RPL_CG_APPLY_UNFROZEN | Replication Consistency Group ID {{rcg_id}} apply is now unfrozen{{more_info}} | INFORMATION |
20010099 | RPL_CG_REVEAL_SUCCEEDED | Replication Consistency Group ID {{rcg_id}} Reveal SnapGroupId {{snap_group_id}} snapshot was completed successfully{{more_info}} | INFORMATION |
2001009a | RPL_CG_REVEAL_FAILED | Replication Consistency Group ID {{rcg_id}} Reveal SnapGroupId {{snap_group_id}} snapshot failed: {{rc}}{{more_info}} | MINOR |
2001009b | RPL_CG_REACHED_NEUTRAL | Replication Consistency Group ID {{rcg_id}} reached Neutral mode state{{more_info}} | INFORMATION |
2001009c | RPL_CG_CONSISTENCY_REACHED | Replication Consistency Group ID {{rcg_id}} Consistency was reached{{more_info}} | INFORMATION |
2001009d | RPL_CG_FAILOVER_TEST_STARTED | Replication Consistency Group ID {{rcg_id}} failover test started{{more_info}} | INFORMATION |
2001009e | RPL_CG_FAILOVER_TEST_ABORTED | Replication Consistency Group ID {{rcg_id}} failover test was aborted{{more_info}} | INFORMATION |
2001009f | RPL_CG_PROXY_COMMAND_RECEIVED | Replication Consistency Group Name: {{rcg_name}}, ID: {{rcg_id}} is starting proxy command: {{command_name}}{{more_info}} | INFORMATION |
200100a0 | RPL_CG_PROXY_COMMAND_COMPLETED | Replication Consistency Group ID {{rcg_id}} has completed the proxy command with the result: {{rc}}{{more_info}} | INFORMATION |
200100a1 | RPL_CG_FREEZE_ON_DEL_SKIPPED | Replication Consistency Group ID {{rcg_id}} Freeze of apply on removal was skipped because Consistency Engine is not initialized or not in CONSISTENT state{{more_info}} | INFORMATION |
200100a2 | RPL_CG_FREEZE_ON_DEL_ABORTED | Replication Consistency Group ID {{rcg_id}} Freeze apply on removal was aborted because the freeze took too long{{more_info}} | INFORMATION |
200100a3 | RPL_CG_SRC_START_ACTIVATION_HANDSHAKE | Replication Consistency Group ID {{rcg_id}} (SRC) (Remote ID {{remote_id}}) starting Activation handshake{{more_info}} | INFORMATION |
200100a4 | RPL_CG_SRC_ACTIVATED_ON_HANDSHAKE | Replication Consistency Group ID {{rcg_id}} (SRC) (Remote ID {{remote_id}}) Activation handshake completed successfully{{more_info}} | INFORMATION |
200100a5 | RPL_CG_DST_ACTIVATED_ON_HANDSHAKE | Replication Consistency Group ID {{rcg_id}} (DST) (Remote ID {{remote_id}}) Activation handshake request responded positively{{more_info}} | INFORMATION |
200100a6 | RPL_CG_COMPLETED_ACTIVATION_TASK | Replication Consistency Group ID {{rcg_id}} ({{dir}}) (Remote ID {{remote_id}}) completed activation task{{more_info}} | INFORMATION |
200100a7 | RPL_CG_MARKED_FOR_TERMINATION | Replication Consistency Group ID {{rcg_id}} ({{dir}}) (Remote ID {{remote_id}}) is marked for termination{{more_info}} | INFORMATION |
200100a8 | RPL_CG_TERMINATION_PROCESS_STARTED | Replication Consistency Group ID {{rcg_id}} ({{dir}}) (Remote ID {{remote_id}}) termination procedure started{{more_info}} | INFORMATION |
200100a9 | RPL_CG_TERMINATED_ON_LOCAL | Replication Consistency Group ID {{rcg_id}} ({{dir}}) (Remote ID {{remote_id}}) is now terminated on local peer{{more_info}} | INFORMATION |
200100aa | RPL_CG_TERMINATED_ON_REMOTE | Replication Consistency Group ID {{rcg_id}} ({{dir}}) (Remote ID {{remote_id}}) is now known to be terminated on remote peer{{more_info}} | INFORMATION |
200100ab | RPL_CG_TERMINATED_ON_BOTH | Replication Consistency Group ID {{rcg_id}} ({{dir}}) (Remote ID {{remote_id}}) is now known to be terminated on both peers{{more_info}} | INFORMATION |
200100ac | RPL_CG_SWITCHOVER_STARTED | Replication Consistency Group ID {{rcg_id}} DST: Switchover started{{more_info}} | INFORMATION |
200100ad | RPL_CG_SWITCHOVER_DONE | Replication Consistency Group ID {{rcg_id}} DST: Switchover completed{{more_info}} | INFORMATION |
200100ae | RPL_CG_FAILOVER_STARTED | Replication Consistency Group ID {{rcg_id}} DST: Failover started{{more_info}} | INFORMATION |
200100af | RPL_CG_FAILOVER_DONE | Replication Consistency Group ID {{rcg_id}} DST: Failover completed{{more_info}} | INFORMATION |
200100b0 | RPL_CG_RESTORE_STARTED | Replication Consistency Group ID {{rcg_id}} DST: Restore started{{more_info}} | INFORMATION |
200100b1 | RPL_CG_RESTORE_RPL_COMPLETED | Replication Consistency Group ID {{rcg_id}} DST: Restore completed{{more_info}} | INFORMATION |
200100b2 | RPL_CG_REVERSE_STARTED | Replication Consistency Group ID {{rcg_id}} DST: Reverse started{{more_info}} | INFORMATION |
200100b3 | RPL_CG_REVERSE_RPL_COMPLETED | Replication Consistency Group ID {{rcg_id}} DST: Reverse completed{{more_info}} | INFORMATION |
200100b4 | RPL_CG_SRC_AWARE_SWITCHOVER_STARTED | Replication Consistency Group ID {{rcg_id}} SRC is notified Switchover started{{more_info}} | INFORMATION |
200100b5 | RPL_CG_SRC_AWARE_SWITCHOVER_DONE | Replication Consistency Group ID {{rcg_id}} SRC is notified Switchover completed{{more_info}} | INFORMATION |
200100b6 | RPL_CG_SRC_AWARE_FAILOVER_STARTED | Replication Consistency Group ID {{rcg_id}} SRC is notified Failover started{{more_info}} | INFORMATION |
200100b7 | RPL_CG_SRC_AWARE_FAILOVER_DONE | Replication Consistency Group ID {{rcg_id}} SRC is notified Failover completed{{more_info}} | INFORMATION |
200100b8 | RPL_CG_SRC_AWARE_RESTORE_STARTED | Replication Consistency Group ID {{rcg_id}} SRC is notified Restore started{{more_info}} | INFORMATION |
200100b9 | RPL_CG_SRC_AWARE_RESTORE_DONE | Replication Consistency Group ID {{rcg_id}} SRC is notified Restore completed{{more_info}} | INFORMATION |
200100ba | RPL_CG_SRC_AWARE_REVERSE_STARTED | Replication Consistency Group ID {{rcg_id}} SRC is notified Reverse started{{more_info}} | INFORMATION |
200100bb | RPL_CG_SRC_AWARE_REVERSE_DONE | Replication Consistency Group ID {{rcg_id}} SRC is notified Reverse completed{{more_info}} | INFORMATION |
200100bc | RPL_PAIR_SRC_REJECTED | Replication Pair creation was refused at Source: {{rc}}. (SYS RCG (if available) {{sys_rcg_id}}, USR RCG {{usr_rcg_id}}){{more_info}} | MINOR |
200100bd | RPL_PAIR_SRC_RECEIVED_REJECTED | Replication Pair ID {{rpl_pair_id}} was rejected by Destination: {{rc}}. (SYS RCG (if available) {{sys_rcg_id}}, USR RCG {{usr_rcg_id}}){{more_info}} | MINOR |
200100be | RPL_PAIR_SRC_RECEIVED_NORMAL | Replication Pair ID {{rpl_pair_id}} was confirmed by Destination. Volume ID: {{vol_id}}. Remote Volume ID: {{remote_vol_id}} Remote Pair ID: {{remote_rpl_pair_id}} (SYS RCG (if available) {{sys_rcg_id}}, USR RCG {{usr_rcg_id}}){{more_info}} | INFORMATION |
200100bf | RPL_PAIR_DST_REJECTED | Replication Pair Remote ID {{remote_rpl_pair_id}} creation was rejected: {{rc}}. (SYS RCG (if available) {{sys_rcg_id}}, USR RCG {{usr_rcg_id}}){{more_info}} | MINOR |
200100c0 | RPL_PAIR_DST_CREATED_NORMAL | Replication Pair ID {{rpl_pair_id}} created as NORMAL (Remote ID {{remote_rpl_pair_id}}, SYS RCG (if available) {{sys_rcg_id}}, USR RCG {{usr_rcg_id}}){{more_info}} | INFORMATION |
200100c1 | RPL_PAIR_DONE_INITIAL_COPY | Replication Pair ID {{rpl_pair_id}} completed initial copy (Remote ID {{remote_rpl_pair_id}}, SYS RCG {{sys_rcg_id}}, USR RCG {{usr_rcg_id}}){{more_info}} | INFORMATION |
200100c2 | RPL_PAIR_DELETION_STARTED | Replication Pair ID {{rpl_pair_id}} deletion sequence started (SYS RCG (if relevant) {{sys_rcg_id}}, USR RCG {{usr_rcg_id}}){{more_info}} | INFORMATION |
200100c3 | RPL_PAIR_DELETED | Replication Pair ID {{rpl_pair_id}} deletion sequence was completed (SYS RCG (if relevant) {{sys_rcg_id}}, USR RCG {{usr_rcg_id}}){{more_info}} | INFORMATION
200100c4 | RPL_PAIR_PROXY_COMMAND_STARTED | Replication Pair ID {{rpl_pair_id}} is starting the proxy command: {{command_name}} (SYS RCG (if relevant and found) {{sys_rcg_id}}, USR RCG (if found) {{usr_rcg_id}}){{more_info}} | INFORMATION |
200100c5 | RPL_PAIR_PROXY_COMMAND_COMPLETED | Replication Pair ID {{rpl_pair_id}} completed the proxy command with the result: {{rc}} (SYS RCG (if relevant and found) {{sys_rcg_id}}, USR RCG (if found) {{usr_rcg_id}}){{more_info}} | INFORMATION |
200100c6 | RPL_PAIR_UICPON_DELETED | Replication Pair ID {{rpl_pair_id}} was deleted because it was in an unsupported Initial Copy phase while entering the Neutral state (SYS RCG (if relevant) {{sys_rcg_id}}, USR RCG {{usr_rcg_id}}){{more_info}} | INFORMATION |
200100c7 | RPL_PAIR_SRC_ENABLING_RPL_ABORTED | Replication Pair ID {{rpl_pair_id}} (SRC) aborted ENABLING RPL back to INACTIVE due to error: {{rc}}. (SYS RCG {{sys_rcg_id}}, USR RCG {{usr_rcg_id}}){{more_info}} | INFORMATION |
200100c8 | RPL_PAIR_DST_ENABLING_RPL_REJECTED | Replication Pair ID {{rpl_pair_id}} (DST) rejected ENABLING RPL with error: {{rc}}. (SYS RCG {{sys_rcg_id}}, USR RCG {{usr_rcg_id}}){{more_info}} | INFORMATION
200100c9 | RPL_PAIR_SRC_ENABLING_RPL_GOT_REJECT | Replication Pair ID {{rpl_pair_id}} (SRC) received DST rejection for ENABLING RPL back to INACTIVE, the rejection error: {{rc}}. (SYS RCG {{sys_rcg_id}}, USR RCG {{usr_rcg_id}}){{more_info}} | INFORMATION |
200100ca | RPL_PAIR_SRC_RPL_ENABLED | Replication Pair ID {{rpl_pair_id}} (SRC) has reached RPL_ENABLED state and is starting Initial Copy. (SYS RCG {{sys_rcg_id}}, USR RCG {{usr_rcg_id}}){{more_info}} | INFORMATION
200100cb | RPL_PAIR_DST_RPL_ENABLED | Replication Pair ID {{rpl_pair_id}} (DST) has reached RPL_ENABLED state and is ready for Initial Copy. (SYS RCG {{sys_rcg_id}}, USR RCG {{usr_rcg_id}}){{more_info}} | INFORMATION
200100cc | RPL_PAIR_TERMINATION_STARTED | Replication Pair ID {{rpl_pair_id}} started termination sequence (SYS RCG (if relevant) {{sys_rcg_id}}, USR RCG {{usr_rcg_id}}){{more_info}} | INFORMATION |
200100cd | RPL_PAIR_TERMINATED_ON_LOCAL | Replication Pair ID {{rpl_pair_id}} is now terminated on local peer (SYS RCG (if relevant) {{sys_rcg_id}}, USR RCG {{usr_rcg_id}}){{more_info}} | INFORMATION |
200100ce | RPL_PAIR_TERMINATED_ON_REMOTE | Replication Pair ID {{rpl_pair_id}} is now known to be terminated on remote peer (SYS RCG (if relevant) {{sys_rcg_id}}, USR RCG {{usr_rcg_id}}){{more_info}} | INFORMATION |
200100cf | RPL_PAIR_TERMINATED_ON_BOTH | Replication Pair ID {{rpl_pair_id}} is now known to be terminated on both peers (SYS RCG (if relevant) {{sys_rcg_id}}, USR RCG {{usr_rcg_id}}){{more_info}} | INFORMATION |
200100d0 | SDR_RECONNECTED | SDR: {{sdr_title}} (ID {{sdr_id}}) was reconnected{{more_info}} | INFORMATION |
200100d1 | SDR_REMOVED | SDR: {{sdr_title}} (ID {{sdr_id}}) removal was completed{{more_info}} | INFORMATION |
200100d2 | SDR_IN_MAINTENANCE | SDR: {{sdr_title}} (ID {{sdr_id}}) is now in maintenance mode{{more_info}} | INFORMATION |
200100d3 | SDR_EXIT_MAINTENANCE | SDR: {{sdr_title}} (ID {{sdr_id}}) has exited maintenance mode{{more_info}} | INFORMATION |
200100d4 | SET_RPL_JOURNAL_CAPACITY | The replication journal capacity was set. Protection Domain ID: {{pd_id}}, Storage Pool ID: {{sp_id}}, Name: {{sp_name}}, Capacity ratio: {{ratio}}, Result: {{rc}}{{more_info}} | INFORMATION
200100d5 | RPL_JOURNAL_CAPACITY_ENDED_AT_DST | Pausing transmission for Replication Consistency Group (ID {{rcg_id}}) as journal capacity of the remote Protection Domain (ID {{pd_id}}) became insufficient{{more_info}} | MINOR |
200100d6 | RPL_JOURNAL_CAPACITY_RESTORED_AT_DST | Resuming transmission for Replication Consistency Group (ID {{rcg_id}}) as journal capacity of the remote Protection Domain (ID {{pd_id}}) was restored{{more_info}} | INFORMATION |
200100d7 | SDR_CRITICAL_CAP_CHANGE | SDR (ID {{sdr_id}}) handling of user data changed{{more_info}} | MAJOR
200100d8 | RPL_PD_CAP_UTILIZATION_NORMAL | Protection Domain ID {{pd_id}} Replication journal capacity utilization level is back to {{level}}{{more_info}} | INFORMATION |
200100d9 | RPL_PD_CAP_UTILIZATION_MINOR | Protection Domain ID {{pd_id}} Replication journal capacity is at {{level}} utilization level{{more_info}} | MINOR |
200100da | RPL_PD_CAP_UTILIZATION_MAJOR | Protection Domain ID {{pd_id}} Replication journal capacity is at {{level}} utilization level{{more_info}} | MAJOR |
200100db | RPL_PD_CAP_UTILIZATION_CRITICAL | Protection Domain ID {{pd_id}} Replication journal capacity is at {{level}} utilization level{{more_info}} | CRITICAL |
200100dc | RPL_CAP_MGR_FAILED_TO_ALLOCATE_MORE_CAP | Failed to allocate more replication capacity for journal volumes. No replicated Storage Pools are available{{more_info}} | MINOR |
200100dd | SDS_MAINTENANCE_MODE_STARTED | SDS {{sds_name}} (ID {{sds_id}}) has entered maintenance mode{{more_info}} | INFORMATION |
200100de | RPL_CG_ENTER_NEUTRAL_DURING_RENAME | Replication Consistency Group ID {{rcg_id}} entered Neutral state during the renaming process{{more_info}} | MINOR |
200100df | SDS_DEV_UNRECOVERABLE_ERROR_REPORT | Unrecoverable device error was reported on SDS: {{sds_title}}, Device: {{dev_path}}{{more_info}} | MAJOR |
200100e0 | SP_PERSISTENT_CHECKSUM_STATE_CHANGE | Storage Pool ID {{sp_id}} persistent checksum state changed to {{state}}{{more_info}} | INFORMATION |
200100e1 | SET_SDC_AUTHENTICATION | The system is now running with SDC authentication and authorization {{state}}{{more_info}} | INFORMATION |
200100e2 | SDC_AUTHENTICATION_PSWD_GEN | Generated a password for SDC ID {{sdc_id}}{{more_info}} | INFORMATION |
200100e3 | SDC_AUTHENTICATION_PSWD_RESET | Reset a password for SDC ID {{sdc_id}}. The reason for the reset was: {{reason}}{{more_info}} | INFORMATION |
200100e4 | SDC_AUTHENTICATION_FAILED | SDC ID {{sdc_id}} attempted to connect to MDM with an incorrect password{{more_info}} | INFORMATION |
200100e5 | SDR_DISCONNECTED_FROM_SDS_IP | SDR Name: {{sdr_name}}; ID: {{sdr_id}} disconnected from the IP address {{sds_ip}} of SDS {{sds_title}}; ID: {{sds_id}}{{more_info}} | MINOR |
200100e6 | SDR_CONNECTED_TO_SDS_IP | SDR Name: {{sdr_name}}; ID: {{sdr_id}} is now connected to the IP address {{sds_ip}} of SDS {{sds_title}}; ID: {{sds_id}}{{more_info}} | INFORMATION |
200100e7 | SDC_DISCONNECTED_FROM_SDR_IP | SDC Name: {{sdc_name}}; ID: {{sdc_id}} disconnected from the IP address {{sdr_ip}} of SDR {{sdr_title}}; ID: {{sdr_id}}{{more_info}} | MINOR |
200100e8 | SDC_CONNECTED_TO_SDR_IP | SDC Name: {{sdc_name}}; ID: {{sdc_id}} is now connected to the IP address {{sdr_ip}} of SDR {{sdr_title}}; ID: {{sdr_id}}{{more_info}} | INFORMATION |
200100e9 | SDR_DISCONNECTED_FROM_SDR_IP | SDR Name: {{sdr_name1}}; ID: {{sdr_id1}} disconnected from the IP address {{sdr_ip}} of SDR {{sdr_title2}}; ID: {{sdr_id2}}{{more_info}} | MINOR |
200100ea | SDR_CONNECTED_TO_SDR_IP | SDR Name: {{sdr_name1}}; ID: {{sdr_id1}} is now connected to the IP address {{sdr_ip}} of SDR {{sdr_title2}}; ID: {{sdr_id2}}{{more_info}} | INFORMATION |
200100eb | MULTIPLE_SDR_SDS_CONNECTIVITY_CHANGES | Multiple SDR to SDS connectivity changes occurred{{more_info}} | INFORMATION |
200100ec | MULTIPLE_SDR_SDR_CONNECTIVITY_CHANGES | Multiple SDR to SDR connectivity changes occurred{{more_info}} | INFORMATION |
200100ed | THRESHOLD_RELATION_INVALID | An attempt was made to set a threshold value that violates its relation to other threshold values{{more_info}} | MINOR |
200100ee | SDR_UPGRADED_BEFORE_SDS | SDR: {{sdr_title}} (ID {{sdr_id}}) was upgraded to version {{ver}} before the SDS upgrade was finished{{more_info}} | MAJOR |
200100ef | SDR_UPGRADED_SDS_DONE | SDR: {{sdr_title}} (ID {{sdr_id}}) version {{ver}} is now compatible with the SDS{{more_info}} | INFORMATION |
20010104 | PMM_AUTO_ABORTED | Protected maintenance mode on SDS {{sds_name}} (ID {{sds_id}}) was automatically aborted {{reason}}{{more_info}} | MAJOR |
20010105 | SDR_RATIO_VIOLATION | Illegal SDR ratio between Protection Domain ID {{pd_id1}} and remote Protection Domain ID {{pd_id2}}{{more_info}} | MINOR |
20010106 | SDC_PROXY_ACCESS_MODE_CHANGED | The system is now running with SDC proxy access {{state}}{{more_info}} | INFORMATION |
20010107 | PORT_FLAP_COMP_BECAME_STABLE | The port flapping state of {{obj_type}} (ID {{obj_id}}) became STABLE{{more_info}} | INFORMATION |
20010108 | PORT_FLAP_COMP_BECAME_UNSTABLE | The port flapping state of {{obj_type}} (ID {{obj_id}}) became UNSTABLE{{more_info}} | MINOR |
20010109 | PORT_FLAP_COMP_OUT_OF_TME | {{obj_type}} (ID {{obj_id}}) is no longer in TME state{{more_info}} | INFORMATION |
2001010a | PORT_FLAP_COMP_IN_TO_TME | {{obj_type}} (ID {{obj_id}}) is now in TME state{{more_info}} | MINOR |
2001010b | PORT_FLAP_ENABLE_DISABLE | Port Flapping feature state changed to: {{state}}{{more_info}} | INFORMATION |
2001010c | PORT_FLAP_CAP_CRITICAL | The port flapping capacity reached the {{capacity}} MB limit. New reports will be rejected{{more_info}} | MINOR |
2001010d | PORT_FLAP_CAP_NORMAL | The port flapping capacity became normal{{more_info}} | INFORMATION |
2001010e | ICPMGR_NUM_ITER_OF_APPLYING_CHANGES_TOO_HIGH | The number of iterations applying changes accumulated since the pair was added is too high. CgId: {{rcg_id}}, pairId: {{rpl_pair_id}}, number of iterations: {{num}}{{more_info}} | MINOR |
2001010f | ICPMGR_MOVE_PAIR_SUCCESS_AFTER_HIGH_NUM_OF_ITERATIONS | Move Pair succeeded after the number of iterations applying changes accumulated since the pair was added was too high. CgId: {{rcg_id}}, pairId: {{rpl_pair_id}}, number of iterations: {{num}}{{more_info}} | INFORMATION |
20010110 | SDC_DISCONNECTED_FROM_SOME_SDSS | The SDC Name: {{sdc_name}}; ID: {{sdc_id}} cannot connect to some SDSs{{more_info}} | MINOR |
20010111 | SDC_EXCLUDED_FROM_CONNECT_ANALYSIS | The SDC Name: {{sdc_name}}; ID: {{sdc_id}} is {{state}}{{more_info}} | INFORMATION |
20010112 | MDM_ALL_SDCS_AUTO_REFRESHED | All SDCs that support automatic updates report an MDM IP list consistent with the current list{{more_info}} | INFORMATION |
20010113 | MDM_DISCONNECTED_SDCS_MAYBE_STALE | There may be disconnected SDCs with a stale MDM IP list{{more_info}} | INFORMATION |
20010114 | MDM_NOT_ALL_SDCS_AUTO_REFRESHED | At least one connected SDC that supports automatic updates has a stale MDM IP list{{more_info}} | INFORMATION |
2001012c | SDT_RECONNECTED | SDT: {{sdt_title}} (ID {{sdt_id}}) was reconnected to the MDM{{more_info}} | INFORMATION |
2001012d | SDT_DECOUPLED | SDT: {{sdt_title}} (ID {{sdt_id}}) disconnected from the MDM{{more_info}} | MINOR |
2001012e | SDT_REMOVED | SDT: {{sdt_title}} (ID {{sdt_id}}) removal was completed{{more_info}} | INFORMATION |
2001012f | SDT_IN_MAINTENANCE | SDT: {{sdt_title}} (ID {{sdt_id}}) is in Maintenance Mode{{more_info}} | INFORMATION |
20010130 | SDT_EXIT_MAINTENANCE | SDT: {{sdt_title}} (ID {{sdt_id}}) has exited Maintenance Mode{{more_info}} | INFORMATION |
20010131 | SDT_IN_COOL_DOWN | SDT: {{sdt_title}} (ID {{sdt_id}}) will disconnect from MDM for {{num}} seconds{{more_info}} | MINOR |
20010132 | SDT_CONFIG_INVALID | SDT: {{sdt_title}} (ID {{sdt_id}}) configuration is invalid{{more_info}} | CRITICAL |
20010133 | SDT_AUTHENTICATION_FAILED | SDT: {{sdt_title}} (ID {{sdt_id}}) failed authentication ({{rc}}){{more_info}} | MAJOR |
20010134 | SDT_UPGRADED | SDT: {{sdt_title}} (ID {{sdt_id}}) upgraded to version {{ver}}{{more_info}} | INFORMATION |
20010137 | NVME_HOST_REMOVED | NVMe Host Name {{nvme_host_name}} ID {{nvme_host_id}} removal was completed{{more_info}} | INFORMATION |
20010138 | SDT_DISCONNECTED_FROM_SDS_IP | SDT Name: {{sdt_title}} ID: {{sdt_id}} disconnected from the IP address {{ip_addr}} of SDS {{sds_title}} ID: {{sds_id}}{{more_info}} | MINOR |
20010139 | SDT_CONNECTED_TO_SDS_IP | SDT Name: {{sdt_title}} ID: {{sdt_id}} is now connected to the IP address {{ip_addr}} of SDS {{sds_title}} ID: {{sds_id}}{{more_info}} | INFORMATION |
2001013a | MULTIPLE_SDT_SDS_CONNECTIVITY_CHANGES | Multiple SDT to SDS connectivity changes occurred{{more_info}} | INFORMATION |
2001013b | RPL_CG_DELETION_CAUSE_REMOVE_SNAPSHOT_POLICY | Replication Consistency Group ID {{rcg_id}} deletion causes removal of snapshot policy {{snap_policy_id}}, and its auto snapshots will be detached{{more_info}} | CRITICAL |
2001013c | AP_VOL_CREATED_ON_DEST | Volume created successfully on destination. Volume ID: {{vol_id}}, Protection Domain ID: {{pd_id}}, Storage Pool ID: {{sp_id}}{{more_info}} | INFORMATION |
2001013d | AP_VOL_CREATION_FAILED_ON_DEST | Volume creation on destination failed with rc: {{rc}}{{more_info}} | MAJOR |
2001013e | AP_VOL_MAPPED_ON_DEST | Volume was successfully mapped to SDC on destination. Volume ID: {{vol_id}}, SDC ID: {{sdc_id}}{{more_info}} | INFORMATION |
2001013f | AP_VOL_MAPPING_FAILED_ON_DEST | Volume mapping on destination failed with rc: {{rc}}{{more_info}} | MAJOR |
20010140 | RPL_CG_SLIM_MODE_FAILOVER_STARTED | Replication Consistency Group ID {{rcg_id}} DST: Failover for Slim Mode DST Data Drop started{{more_info}} | INFORMATION |
20010141 | RPL_CG_SLIM_MODE_FAILOVER_FAILED | Replication Consistency Group ID {{rcg_id}} DST: Failover for Slim Mode DST Data Drop failed: {{rc}}{{more_info}} | INFORMATION |
20010142 | RPL_CG_SLIM_MODE_FAILOVER_DONE | Replication Consistency Group ID {{rcg_id}} DST: Failover for Slim Mode DST Data Drop completed{{more_info}} | INFORMATION |
20010143 | RPL_CG_SRC_AWARE_SM_FAILOVER_STARTED | Replication Consistency Group ID {{rcg_id}} SRC is notified SM Failover started{{more_info}} | INFORMATION |
20010144 | RPL_CG_SRC_AWARE_SM_FAILOVER_DONE | Replication Consistency Group ID {{rcg_id}} SRC is notified SM Failover completed{{more_info}} | INFORMATION |
20010145 | RPL_PAIR_TERMINATED_MOVE_ON_NEUTRAL | Replication Pair ID {{rpl_pair_id}} was terminated because it was in the moving phase of initial copy when entering the Neutral state (SYS RCG (if relevant) {{sys_rcg_id}}, USR RCG {{usr_rcg_id}}){{more_info}} | INFORMATION |
20010146 | HOST_CONNECTIVITY_DOES_NOT_MATCH_GOAL | NVMe Host Name {{nvme_host_name}} ID {{nvme_host_id}} connectivity does not match the connectivity goal{{more_info}} | MINOR |
20010147 | HOST_CONNECTIVITY_MATCH_GOAL | NVMe Host Name {{nvme_host_name}} ID {{nvme_host_id}} connectivity matches the connectivity goal{{more_info}} | INFORMATION |
20010148 | HOST_CONNECTIVITY_RESILIENT_IN_PD | NVMe Host Name {{nvme_host_name}} ID {{nvme_host_id}} has resilient connectivity in Protection Domain {{pd_id}}{{more_info}} | INFORMATION |
20010149 | HOST_CONNECTIVITY_LOW_RESILIENT_IN_PD | NVMe Host Name {{nvme_host_name}} ID {{nvme_host_id}} path resilience requirements cannot be satisfied in Protection Domain {{pd_id}}, from which the host has a mapped volume{{more_info}} | MINOR |
2001014a | HOST_CONNECTIVITY_NOT_RESILIENT_IN_PD | NVMe Host Name {{nvme_host_name}} ID {{nvme_host_id}} has non-resilient connectivity in Protection Domain {{pd_id}}. A subsequent failure can result in the host experiencing data unavailability{{more_info}} | MAJOR |
2001014b | HOST_DISCONNECTED_IN_PD | NVMe Host Name {{nvme_host_name}} ID {{nvme_host_id}} is disconnected from the system in Protection Domain {{pd_id}}{{more_info}} | MAJOR |
2001014c | SDT_LOAD_UNBALANCED | SDT load is unbalanced on Protection Domain {{pd_id}}{{more_info}} | MINOR |
2001014d | SDT_LOAD_BALANCED | SDT load is balanced on Protection Domain {{pd_id}}{{more_info}} | INFORMATION |
2001014e | SDT_PORT_HEALTH_ERROR_OF_ONE_SYS_PORT | The SDT Name: {{sdt_title}} ID: {{sdt_id}} identified a port health issue (ID {{sys_port_id}}){{more_info}} | MINOR |
2001014f | SDT_PORT_HEALTH_ERROR_OF_MULTIPLE_SYS_PORTS | The SDT Name: {{sdt_title}} ID: {{sdt_id}} identified a port health issue on multiple system ports{{more_info}} | MINOR |
20010150 | SDT_PORT_HEALTH_OK | The SDT Name: {{sdt_title}} ID: {{sdt_id}} has no port health issue{{more_info}} | INFORMATION |
20010151 | SDT_NOT_CONNECTED_TO_HOST | The SDT Name: {{sdt_title}} ID: {{sdt_id}} has no host connections even though it is included in a connectivity goal{{more_info}} | MINOR |
20010152 | SDT_CONNECTED_TO_HOST | The SDT Name: {{sdt_title}} ID: {{sdt_id}} has host connections{{more_info}} | INFORMATION |
20010153 | SDC_SDS_ALL_CONNECTED | All SDCs are connected to all SDSs{{more_info}} | INFORMATION |
20010154 | SDC_SDS_ONE_CLIENT_FROM_ONE_SERVER | One SDC (ID {{sdc_id}}) is disconnected from one SDS (ID {{sds_id}}){{more_info}} | MINOR |
20010155 | SDC_SDS_ONE_CLIENT_FROM_ONE_SERVER_IP | One SDC (ID {{sdc_id}}) is disconnected from one SDS (ID {{sds_id}}) on IP {{sds_ip}}{{more_info}} | MINOR |
20010156 | SDC_SDS_ONE_CLIENT_FROM_ALL_SERVER | One SDC (ID {{sdc_id}}) is disconnected from all SDSs{{more_info}} | MINOR |
20010157 | SDC_SDS_ALL_CLIENT_FROM_ONE_SERVER | All SDCs are disconnected from one SDS (ID {{sds_id}}){{more_info}} | MINOR |
20010158 | SDC_SDS_ALL_CLIENT_FROM_ONE_SERVER_IP | All SDCs are disconnected from one SDS (ID {{sds_id}}) on IP {{sds_ip}}{{more_info}} | MINOR |
20010159 | SDC_SDS_ALL_CLIENT_FROM_ALL_SERVER | All SDCs are disconnected from all SDSs{{more_info}} | MINOR |
2001015a | SDC_SDS_MULTIPLE_DISCONNECTIONS | Multiple SDCs are disconnected from multiple SDSs{{more_info}} | MINOR |
2001015b | SDC_SDR_ALL_CONNECTED | All SDCs are connected to all SDRs{{more_info}} | INFORMATION |
2001015c | SDC_SDR_ONE_CLIENT_FROM_ONE_SERVER | One SDC (ID {{sdc_id}}) is disconnected from one SDR (ID {{sdr_id}}){{more_info}} | MINOR |
2001015d | SDC_SDR_ONE_CLIENT_FROM_ONE_SERVER_IP | One SDC (ID {{sdc_id}}) is disconnected from one SDR (ID {{sdr_id}}) on IP {{sdr_ip}}{{more_info}} | MINOR |
2001015e | SDC_SDR_ONE_CLIENT_FROM_ALL_SERVER | One SDC (ID {{sdc_id}}) is disconnected from all SDRs{{more_info}} | MINOR |
2001015f | SDC_SDR_ALL_CLIENT_FROM_ONE_SERVER | All SDCs are disconnected from one SDR (ID {{sdr_id}}){{more_info}} | MINOR |
20010160 | SDC_SDR_ALL_CLIENT_FROM_ONE_SERVER_IP | All SDCs are disconnected from one SDR (ID {{sdr_id}}) on IP {{sdr_ip}}{{more_info}} | MINOR |
20010161 | SDC_SDR_ALL_CLIENT_FROM_ALL_SERVER | All SDCs are disconnected from all SDRs{{more_info}} | MINOR |
20010162 | SDC_SDR_MULTIPLE_DISCONNECTIONS | Multiple SDCs are disconnected from multiple SDRs{{more_info}} | MINOR |
20010163 | SDR_SDS_ALL_CONNECTED | All SDRs are connected to all SDSs{{more_info}} | INFORMATION |
20010164 | SDR_SDS_ONE_CLIENT_FROM_ONE_SERVER | One SDR (ID {{sdr_id}}) is disconnected from one SDS (ID {{sds_id}}){{more_info}} | MINOR |
20010165 | SDR_SDS_ONE_CLIENT_FROM_ONE_SERVER_IP | One SDR (ID {{sdr_id}}) is disconnected from one SDS (ID {{sds_id}}) on IP {{sds_ip}}{{more_info}} | MINOR |
20010166 | SDR_SDS_ONE_CLIENT_FROM_ALL_SERVER | One SDR (ID {{sdr_id}}) is disconnected from all SDSs{{more_info}} | MINOR |
20010167 | SDR_SDS_ALL_CLIENT_FROM_ONE_SERVER | All SDRs are disconnected from one SDS (ID {{sds_id}}){{more_info}} | MINOR |
20010168 | SDR_SDS_ALL_CLIENT_FROM_ONE_SERVER_IP | All SDRs are disconnected from one SDS (ID {{sds_id}}) on IP {{sds_ip}}{{more_info}} | MINOR |
20010169 | SDR_SDS_ALL_CLIENT_FROM_ALL_SERVER | All SDRs are disconnected from all SDSs{{more_info}} | MINOR |
2001016a | SDR_SDS_MULTIPLE_DISCONNECTIONS | Multiple SDRs are disconnected from multiple SDSs{{more_info}} | MINOR |
2001016b | SDT_SDS_ALL_CONNECTED | All SDTs are connected to all SDSs{{more_info}} | INFORMATION |
2001016c | SDT_SDS_ONE_CLIENT_FROM_ONE_SERVER | One SDT (ID {{sdt_id}}) is disconnected from one SDS (ID {{sds_id}}){{more_info}} | MINOR |
2001016d | SDT_SDS_ONE_CLIENT_FROM_ONE_SERVER_IP | One SDT (ID {{sdt_id}}) is disconnected from one SDS (ID {{sds_id}}) on IP {{sds_ip}}{{more_info}} | MINOR |
2001016e | SDT_SDS_ONE_CLIENT_FROM_ALL_SERVER | One SDT (ID {{sdt_id}}) is disconnected from all SDSs{{more_info}} | MINOR |
2001016f | SDT_SDS_ALL_CLIENT_FROM_ONE_SERVER | All SDTs are disconnected from one SDS (ID {{sds_id}}){{more_info}} | MINOR |
20010170 | SDT_SDS_ALL_CLIENT_FROM_ONE_SERVER_IP | All SDTs are disconnected from one SDS (ID {{sds_id}}) on IP {{sds_ip}}{{more_info}} | MINOR |
20010171 | SDT_SDS_ALL_CLIENT_FROM_ALL_SERVER | All SDTs are disconnected from all SDSs{{more_info}} | MINOR |
20010172 | SDT_SDS_MULTIPLE_DISCONNECTIONS | Multiple SDTs are disconnected from multiple SDSs{{more_info}} | MINOR |
20010173 | SDR_SDR_ALL_CONNECTED | All SDRs are connected to all SDRs{{more_info}} | INFORMATION |
20010174 | SDR_SDR_ONE_CLIENT_FROM_ONE_SERVER | One SDR (ID {{sdr_client_id}}) is disconnected from one SDR (ID {{sdr_server_id}}){{more_info}} | MINOR |
20010175 | SDR_SDR_ONE_CLIENT_FROM_ONE_SERVER_IP | One SDR (ID {{sdr_client_id}}) is disconnected from one SDR (ID {{sdr_server_id}}) on IP {{sdr_ip}}{{more_info}} | MINOR |
20010176 | SDR_SDR_ONE_CLIENT_FROM_ALL_SERVER | One SDR (ID {{sdr_id}}) is disconnected from all SDRs{{more_info}} | MINOR |
20010177 | SDR_SDR_ALL_CLIENT_FROM_ONE_SERVER | All SDRs are disconnected from one SDR (ID {{sdr_id}}){{more_info}} | MINOR |
20010178 | SDR_SDR_ALL_CLIENT_FROM_ONE_SERVER_IP | All SDRs are disconnected from one SDR (ID {{sdr_id}}) on IP {{sdr_ip}}{{more_info}} | MINOR |
20010183 | SDR_SDR_ALL_CLIENT_FROM_ALL_SERVER | All SDRs are disconnected from all SDRs{{more_info}} | MINOR |
20010184 | SDR_SDR_MULTIPLE_DISCONNECTIONS | Multiple SDRs are disconnected from multiple SDRs{{more_info}} | MINOR |
20010185 | SDC_DISCONNECTED_PROXY_USED | SDC is disconnected and uses MDM proxy access functionality. ID: {{sdc_id}}; GUID: {{sdc_guid}}{{more_info}} | MINOR |
20010186 | SDC_DISCONNECTED_PROXY_NOT_USED | SDC is disconnected and does not use MDM proxy access functionality. ID: {{sdc_id}}; GUID: {{sdc_guid}}{{more_info}} | MINOR |
20010187 | CLUSTER_MEMBERSHIP_CHANGE | Cluster membership change{{more_info}} | INFORMATION |
20010188 | CLUSTER_ADD_STANDBY_MDM | The Standby MDM, {{mdm_title}}, was added to the cluster{{more_info}} | INFORMATION |
20010189 | CLUSTER_REMOVE_STANDBY_MDM | The Standby MDM, {{mdm_title}}, was removed from the cluster{{more_info}} | INFORMATION |
2001018a | MDM_CLUSTER_MNO_CERT_BAD_VALIDITY_PERIOD | The validity period of certificate {{cert_file_name}} for MDM {{mdm_title}} is invalid{{more_info}} | MAJOR |
2001018b | MDM_CLUSTER_TECHNICIAN_CA_CERT_MISSING | The technician certificate {{cert_file_name}} is missing{{more_info}} | MAJOR |
2001018c | UNKNOWN_UNAPPROVED_SDC_IP | Unknown SDC is trying to connect from the unapproved IP address {{ip}}{{more_info}} | MAJOR |
2001018d | SDS_EXCEEDED_IMM_TIMEOUT | SDS {{sds_title}} (ID {{sds_id}}) has been running in Instant Maintenance Mode (IMM) for more than {{threshold}} minutes. This places the system at risk of a second failure on another node and might lead to data unavailability. Refer to the PowerFlex guidelines for remediation recommendations{{more_info}} | CRITICAL |
200101ae | SYSTEM_REACHED_MAX_PERSISTENT_DISCOVERY_CONTROLLERS | The system has reached the maximum number of persistent discovery controllers{{more_info}} | MINOR |
200101af | SYSTEM_NUMBER_PERSISTENT_DISCOVERY_CONTROLLERS_NORMAL | The system's number of persistent discovery controllers has returned to normal{{more_info}} | MINOR |
200101b0 | SDT_REACHED_MAX_PERSISTENT_DISCOVERY_CONTROLLERS | SDT: {{sdt_title}} (ID {{sdt_id}}) reached the maximum number of persistent discovery controllers{{more_info}} | MINOR |
200101b1 | SDT_NUMBER_PERSISTENT_DISCOVERY_CONTROLLERS_NORMAL | SDT: {{sdt_title}} (ID {{sdt_id}}) number of persistent discovery controllers returned to normal{{more_info}} | MINOR |
200101c2 | DM_SNAPSHOT_COPY_TASK_FAILED | Snapshot copy task, {{task_id}}, failed{{more_info}} | MAJOR |
200101c3 | DM_SNAPSHOT_COPY_TASK_REMOVED | Snapshot copy task, {{task_id}}, removed{{more_info}} | INFORMATION |
200101c4 | DM_SNAPSHOT_COPY_TASK_RESUMED | Snapshot copy task, {{task_id}}, resumed{{more_info}} | INFORMATION |
200101c5 | DM_SNAPSHOT_COPY_TASK_STUCK | Snapshot copy task, {{task_id}}, appears to be stuck{{more_info}} | MAJOR |
200101c6 | DM_SNAPSHOT_COPY_TASK_PAUSED | Snapshot copy task, {{task_id}}, paused{{more_info}} | INFORMATION |
200101c7 | DM_SNAPSHOT_COPY_TASK_CREATE | MDM: CopyTask Create TaskId: {{task_id}} SrcVID: {{src_vol_id}} BaseVID: {{base_vol_id}} remote system {{remote_sys_id}} NVMeNSId: {{nvme_ns_id}}{{more_info}} | INFORMATION |
200101c8 | DM_SNAPSHOT_COPY_TASK_COMPLETED | Snapshot copy task, {{task_id}}, completed{{more_info}} | INFORMATION |
200101c9 | DM_SNAPSHOT_COPY_TASK_UNSTUCK | Snapshot copy task, {{task_id}}, is no longer stuck{{more_info}} | INFORMATION |
200101ca | MDM_FAILED_TO_PERFORM_NETWORK_REALIGNMENT | MDM ID {{mdm_id}} failed to perform network realignment{{more_info}} | MINOR |
200101cb | MDM_STARTED_NETWORK_REALIGNMENT | MDM ID {{mdm_id}} has started network realignment{{more_info}} | INFORMATION |
200101cc | MDM_FINISHED_NETWORK_REALIGNMENT | MDM ID {{mdm_id}} has finished network realignment{{more_info}} | INFORMATION |
200101cd | MDM_MNO_CERTIFICATE_EXPIRES_THIS_YEAR | The certificate {{cert_file_name}} of actor {{actor_id}} will expire in {{time_to_expire}}{{more_info}} | MINOR |
200101ce | MDM_MNO_CERTIFICATE_DUE_TO_EXPIRE | The certificate {{cert_file_name}} of actor {{actor_id}} will expire in {{time_to_expire}}{{more_info}} | MINOR |
200101cf | MDM_MNO_CERTIFICATE_EXPIRES_NOW | The certificate {{cert_file_name}} of actor {{actor_id}} will expire in {{time_to_expire}}{{more_info}} | MAJOR |
200101d0 | MDM_MNO_CERTIFICATE_WAS_RENEWED | The certificate of actor {{actor_id}} has been renewed{{more_info}} | INFORMATION |
200101d1 | DM_COPY_JOB_WORKER_DEGRADED | Copy task {{copy_task_id}} found SDT/Worker {{sdt_id}} to be degraded{{more_info}} | INFORMATION |
200101d2 | DM_SDT_INITIATOR_UNRESPONSIVE_FATAL_ERROR | An SDT {{sdt_id}} NVMe initiator is unresponsive or had a fatal error{{more_info}} | CRITICAL |
200101d3 | DM_SDT_INITIATOR_OPERATIONAL | An SDT {{sdt_id}} NVMe initiator is operational{{more_info}} | INFORMATION |
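Each description above is a template whose `{{placeholder}}` tokens are filled in at event time, and the severity column follows a fixed ordering (INFORMATION < MINOR < MAJOR < CRITICAL). The sketch below shows one way a consumer might render such templates and filter by severity; the function names and the `SEVERITY_ORDER` list are illustrative assumptions, not part of any product API.

```python
import re

# Severity ordering as implied by the table (assumption: used only for filtering).
SEVERITY_ORDER = ["INFORMATION", "MINOR", "MAJOR", "CRITICAL"]

def render_event(template: str, **fields) -> str:
    """Substitute {{placeholder}} tokens in an event description template.

    Placeholders without a supplied value (e.g. the trailing {{more_info}},
    which is often empty) are replaced with an empty string.
    """
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: str(fields.get(m.group(1), "")),
                  template)

def at_least(severity: str, threshold: str) -> bool:
    """True if `severity` is at or above `threshold` in the table's ordering."""
    return SEVERITY_ORDER.index(severity) >= SEVERITY_ORDER.index(threshold)

# Example using the DM_SNAPSHOT_COPY_TASK_FAILED template (code 200101c2).
msg = render_event("Snapshot copy task, {{task_id}}, failed{{more_info}}",
                   task_id="42")
```

For instance, `msg` evaluates to `"Snapshot copy task, 42, failed"`, and `at_least("MAJOR", "MINOR")` is true, so a monitor filtering at MINOR and above would surface that event.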