
RecoverPoint Classic - Registered Storage Pane Appears Empty

Summary: In Unisphere for RecoverPoint, under the RPA Clusters registered storage, no storage details are displayed.

This article is not tied to any specific product. Not all product versions are identified in this article.

Symptoms

In Unisphere for RecoverPoint, under the RPA Clusters registered storage, no storage details are displayed.
 
Figure 1: Unisphere information showing no details about registered storage for the RPA clusters

Normally, the storage details pane lists information about the registered storage (the type of storage devices may vary depending on your environment).

Cause

While reviewing the RecoverPoint logs, you may see entries such as the following:
Log locations /home/kos/control/result.log.gz and /home/kos/mirror/result.log.gz:


2023/01/18 09:07:59.452 - #1 - 6819/6148 - ScsiInitiator: handle_input: TIMEOUT path = (0x500124800ac04a03,0x5001248000205a03,0xbbbbbbbbbb01bbbb,e_fiberChannel)
2023/01/18 09:07:59.453 - #2 - 7058/6148 - MultiPath: MPVolumeConnectedState::handleBadPath removing path (0x500124800ac04a03,0x5001248000205a03,0xbbbbbbbbbb01bbbb,e_fiberChannel)0xaaaabee482eb0101 (StorageType=KBOX, revision= a_volume = 0xaaaabee482eb0101 (StorageType=KBOX, revision=
2023/01/18 09:07:59.453 - #2 - 7058/6148 - MultiPath: PathSwitcher::removePath (0x500124800ac04a03,0x5001248000205a03,0xbbbbbbbbbb01bbbb,e_fiberChannel)0xaaaabee482eb0101 (StorageType=KBOX, revision= i = 0


Log location /home/kos/klr/result.log.gz:


2023/01/18 04:39:29.122 - sll - 0/0 - #2 - hba_0_0: sll_status_entry: CS_PORT_LOGGED_OUT status detected - removing target_index 0 wwn 5001248000205a03 from pdb
2023/01/18 04:39:29.122 - sll - 4253/4004 - #2 - hba_0_0: sll_handle_els_24xx: PRLI adding port_id: 0x0e3e00 wwn: 0x5001248000205a03 nphl: 0x0000
2023/01/18 04:39:29.122 - sll - 4253/4004 - #2 - hba_0_0: add_nphl_to_logins: added port_id: 0x0e3e00 wwn: 0x5001248000205a03 nphl: 0x0000 at idx: 3646 hash 3646
2023/01/18 04:39:29.123 - sll - 0/0 - #2 - hba_2_0: sll_status_entry: CS_PORT_LOGGED_OUT status detected - removing target_index 0 wwn 5001248000205a03 from pdb
2023/01/18 04:39:29.124 - sll - 0/0 - #2 - hba_2_0: sll_handle_els_24xx: PRLI adding port_id: 0x0e3e00 wwn: 0x5001248000205a03 nphl: 0x0000
2023/01/18 04:39:29.124 - sll - 0/0 - #2 - hba_2_0: add_nphl_to_logins: added port_id: 0x0e3e00 wwn: 0x5001248000205a03 nphl: 0x0000 at idx: 3646 hash 3646
2023/01/18 04:39:43.108 - sll - 4414/4004 - #2 - hba_0_0: sll_fabric_login: trying fabric login loop_id=133 for port=0xe3e00 wwn=0x5001248000205a03 timeout=4 timeout_reset=0
2023/01/18 04:39:43.108 - sll - 4414/4004 - #2 - pdb: hba_0_0: sll_fabric_attempt_to_add_to_pdb: device returned loop_id 0 for port_id 0xe3e00 wwn=0x5001248000205a03 - retry login
2023/01/18 04:39:43.108 - sll - 4414/4004 - #2 - hba_0_0: sll_fabric_login: trying fabric login loop_id=0 for port=0xe3e00 wwn=0x5001248000205a03 timeout=4 timeout_reset=0
2023/01/18 04:39:43.108 - sll - 4414/4004 - #2 - hba_0_0: add_nphl_to_logins: added port_id: 0x0e3e00 wwn: 0x5001248000205a03 nphl: 0x0000 at idx: 3646 hash 3646
2023/01/18 04:39:43.108 - sll - 4414/4004 - #2 - pdb: hba_0_0: sll_fabric_attempt_to_add_to_pdb: new loop_id=0 for port_id=0xe3e00 wwn=0x5001248000205a03 bInUse=1
2023/01/18 04:39:43.110 - sll - 4414/4004 - #2 - hba_2_0: sll_fabric_login: trying fabric login loop_id=133 for port=0xe3e00 wwn=0x5001248000205a03 timeout=4 timeout_reset=0
2023/01/18 04:39:43.110 - sll - 4414/4004 - #2 - pdb: hba_2_0: sll_fabric_attempt_to_add_to_pdb: device returned loop_id 0 for port_id 0xe3e00 wwn=0x5001248000205a03 - retry login
2023/01/18 04:39:43.110 - sll - 4414/4004 - #2 - hba_2_0: sll_fabric_login: trying fabric login loop_id=0 for port=0xe3e00 wwn=0x5001248000205a03 timeout=4 timeout_reset=0
2023/01/18 04:39:43.110 - sll - 4414/4004 - #2 - hba_2_0: add_nphl_to_logins: added port_id: 0x0e3e00 wwn: 0x5001248000205a03 nphl: 0x0000 at idx: 3646 hash 3646
2023/01/18 04:39:43.110 - sll - 4414/4004 - #2 - pdb: hba_2_0: sll_fabric_attempt_to_add_to_pdb: new loop_id=0 for port_id=0xe3e00 wwn=0x5001248000205a03 bInUse=0
2023/01/18 04:39:49.163 - sll - 3794/3794 - #2 - hba_2_0: PDB_RemoveDevicesNotInNS: wwn 0x5001248000205a03 port_id 0x0e3e00 loop_id 0 not in ns - remove it from PDB
2023/01/18 04:39:49.164 - sll - 3792/3792 - #2 - hba_0_0: PDB_RemoveDevicesNotInNS: wwn 0x5001248000205a03 port_id 0x0e3e00 loop_id 0 not in ns - remove it from PDB
2023/01/18 04:39:49.201 - sll - 4593/4004 - #2 - sll_ioctl_set_kbox_paths: path: 0 can't be resolved for (hba: 0:ffff8807fa700000, vp: 0, wwn: 0x5001248000205a03)

2023/01/18 04:25:24.143 - sll - 25719/25719 - #2 - hba_2_0: sll_status_entry: CS_PORT_LOGGED_OUT status detected - removing target_index 0 wwn 5001248000205a03 from pdb
2023/01/18 04:15:40.607 - sll - 0/0 - #2 - hba_2: sll_handle_ctio_ret: WARNING: ctio failed(status=0x2) cdb=(2a 0 0 0 0 0 0 0 1) timestamp=(time=0 trans=0 data=0) lun=0xbbbbbbbbbbbbbb52 iid=0 initiator=0x5001248000205a03


Log location /spa/EMC/CEM/log/cemtracer_health_services.log:

Line 28: 18 Jan 2023 04:29:02  - [Health] INFO - {0:45928:894976145}[26373|17535|d37feb40][findRuleInRuleVector @ ../../../components/providers/osls/Health/src/Rule.cpp:119] Rule found Rule[ state=0x8000 log=0x140006000a severity=6 health=10 descr=ALRT_INITIATOR_NO_LOGGED_IN_PATH res=initiator_fix_connection]  for 50:01:24:80:00:20:5A:02:50:01:24:80:00:20:5A:03 OperationalStatus: 0x8000
The logs indicate that there is a port communication issue.
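As an illustration, the signature entries shown above can be scanned for programmatically. The following is a minimal sketch, not part of any RecoverPoint tooling; the signature strings are taken from the log excerpts above, and the helper name is hypothetical:

```python
import re

# Signature strings from the RecoverPoint log excerpts above that point
# at a port communication problem.
SIGNATURES = (
    "ScsiInitiator: handle_input: TIMEOUT path",
    "CS_PORT_LOGGED_OUT status detected",
    "ALRT_INITIATOR_NO_LOGGED_IN_PATH",
)

# Matches "wwn 5001248000205a03", "wwn=0x5001248000205a03", etc.
WWN_RE = re.compile(r"wwn[ =:]+(?:0x)?([0-9a-fA-F]{16})")

def suspect_wwns(log_lines):
    """Return the set of WWNs found on lines matching a signature."""
    wwns = set()
    for line in log_lines:
        if any(sig in line for sig in SIGNATURES):
            m = WWN_RE.search(line)
            if m:
                wwns.add(m.group(1).lower())
    return wwns

# Example with a line taken from the klr log excerpt above:
sample = [
    "2023/01/18 04:39:29.122 - sll - 0/0 - #2 - hba_0_0: sll_status_entry: "
    "CS_PORT_LOGGED_OUT status detected - removing target_index 0 "
    "wwn 5001248000205a03 from pdb",
]
print(suspect_wwns(sample))  # prints {'5001248000205a03'}
```

Recurring WWNs in the output identify the target port whose fabric connectivity should be checked.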

Reviewing the fabric shows that one port is experiencing connectivity issues:

Interrupts:        13003      Link_failure: 10         Frjt:         0
Unknown:           0          Loss_of_sync: 0          Fbsy:         0
Lli:               13003      Loss_of_sig:  60
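The counters above resemble Fibre Channel switch port error statistics; nonzero Link_failure and Loss_of_sig values indicate a physical-layer fault. As a hedged sketch (field names follow the output above; the parsing and the choice of error counters are illustrative assumptions, not a switch vendor API), such a port could be flagged like this:

```python
import re

# Port-statistics block as shown in the article.
STATS_BLOCK = """
Interrupts:        13003      Link_failure: 10         Frjt:         0
Unknown:           0          Loss_of_sync: 0          Fbsy:         0
Lli:               13003      Loss_of_sig:  60
"""

# Counters whose nonzero value typically points at a cable/SFP-level fault.
ERROR_COUNTERS = ("Link_failure", "Loss_of_sync", "Loss_of_sig")

def parse_stats(block):
    """Return {counter_name: value} for every 'Name: value' pair."""
    return {name: int(value)
            for name, value in re.findall(r"(\w+):\s+(\d+)", block)}

def port_has_link_errors(stats):
    """True when any physical-layer error counter is nonzero."""
    return any(stats.get(c, 0) > 0 for c in ERROR_COUNTERS)

stats = parse_stats(STATS_BLOCK)
print(port_has_link_errors(stats))  # prints True (Link_failure=10, Loss_of_sig=60)
```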

Resolution

Workaround:
To resolve the port connectivity issue, inspect the cables and SFPs on the affected port and replace any faulty components.

Resolution:
No change is needed at the RecoverPoint level; the empty registered storage pane is caused by the fabric connectivity issue.

Affected Products

RecoverPoint Gen6 Server
Article Properties
Article Number: 000208648
Article Type: Solution
Last Modified: 20 Sep 2023
Version:  4