Set Volume Online Fails due to "Failback Operation In Progress"

February 21st, 2023 09:00

We experienced multiple disk failures in our group (six PS6000 members and one PS6010) that caused erratic behavior.
Four hard drives failed across two different members (two in each, RAID 6 across the board) but were replaced and rebuilt.
All data was preserved, but only some members were appearing in Group Manager, so we set the volumes offline and disconnected the iSCSI connections.
After both members finished rebuilding, the missing members were still not showing, so we figured we had to reboot the array.
We shut down each member directly via serial to the controller and brought each back online.
Another disk failed in a different member, but everything was back to normal in Group Manager. At this point, all rebuilds are finished, but four of the twelve volumes are still offline.
When trying to bring them online we get the following:

% Error - Failback operation is in progress on this volume. The volume cannot be set online.

There are no failback operations in progress according to Group Manager, and we do not use replication. The CLI shows no operations in progress on the volume either.

Thank you in advance. I appreciate any assistance or input.

Please let me know if I can provide any additional information.

EQL-Array> volume select Veeam-01 show
____________________________ Volume Information ______________________________
Name: Veeam-01
VolReserve: 12.85TB
ReplReserveInUse: 0MB
iSCSI Name:
Description:
Snap-Reserve-Avail: 0% (0MB)
DesiredStatus: offline
Connections: 0
Bind:
ReplicationReserveSpace: 0MB
ReplicationPartner:
Transmitted-Data: 331.76TB
Pref-Raid-Policy: none
Thin-Provision: enabled
Thin-Growth-Warn: 90% (13.49TB)
ReplicationTxData: 0MB
iSNS-Discovery: disabled
Thin-Clone: N
NAS Container: N
SyncReplStatus:
Thin-Warn-Mode: offline
Space Borrowed: 0MB
SectorSize: 512
CompressedSnapDataSize: N/A
Size: 15TB
VolReserveInUse: 12.84TB
iSCSI Alias: Veeam-01
ActualMembers: 5
Snap-Warn: 100%
Snap-Depletion: delete-oldest
Snap-Reserve: 0%
Permission: read-write
Status: offline
Snapshots: 0
Type: not-replicated
Replicas: 0
Pool: VEEAM
Received-Data: 415.54TB
Pref-Raid-Policy-Status: none
Thin-Min-Reserve: 15% (2.25TB)
Thin-Growth-Max: 98% (14.69TB)
MultiHostAccess: enabled
Replica-Volume-Reserve: 0MB
Template: N
Administrator:
SyncReplTxData: 0MB
Snap-space Borrowing: disabled
Folder: VEEAM
ExpandedSnapDataSize: N/A
CompressionSavings: N/A
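
The fields relevant to the error seem to be scattered through that dump. A quick filter like the one below (a rough Python sketch; the filename is just an assumption about where the output above was saved) pulls out the status and replication fields, which show the volume is not replicated even though the error mentions a failback:

# Rough sketch: pull the status/replication fields out of a saved
# "volume select <name> show" dump. The filename is hypothetical.
FIELDS = {"Status", "DesiredStatus", "Type", "Replicas",
          "ReplicationPartner", "SyncReplStatus"}

with open("veeam-01-show.txt") as fh:
    for line in fh:
        key, sep, value = line.partition(":")
        if sep and key.strip() in FIELDS:
            print(f"{key.strip()}: {value.strip() or '<empty>'}")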


February 21st, 2023 10:00

Hello, 

 I don't have a known answer to this issue. 

 Please run the following commands and copy/paste the output 

GrpName> show member -poolinfo

GrpName> show recentevents

GrpName> show pool

GrpName> show volume

I noticed the "DesiredStatus" is offline, which usually means someone set the volume offline.

Just to see if there's more info, try setting the volume online from the CLI and send me the result:

GrpName> volume select VOLUMENAME online

e.g. GrpName> volume select Veeam-01 online
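
If it's easier to capture everything in one pass, a small script along these lines can drive the CLI over SSH. This is only a rough sketch: it assumes Python with the paramiko package, a placeholder group management IP and grpadmin credentials, and that output paging is disabled in the CLI settings so long listings don't stall.

# Rough sketch: run the requested Group Manager CLI commands over SSH and
# capture the output. The IP address and credentials are placeholders.
import time
import paramiko

GROUP_IP = "192.0.2.10"                  # placeholder group management IP
USERNAME, PASSWORD = "grpadmin", "changeme"

COMMANDS = [
    "show member -poolinfo",
    "show recentevents",
    "show pool",
    "show volume",
    "volume select Veeam-01 show",
]

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(GROUP_IP, username=USERNAME, password=PASSWORD)

# The array CLI is interactive, so drive a shell rather than exec_command.
shell = client.invoke_shell()
time.sleep(2)
if shell.recv_ready():
    shell.recv(65535)                    # discard the login banner and prompt

for cmd in COMMANDS:
    shell.send(cmd + "\n")
    time.sleep(5)                        # crude wait; adjust for long outputs
    while shell.recv_ready():
        print(shell.recv(65535).decode("utf-8", errors="replace"), end="")

client.close()

Copying and pasting from a regular SSH session works just as well, of course; a script just saves retyping.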

Do you have a current support contract? If not, depending on the region you are in, you may be able to open a one-time support case for a fee. They could gather diagnostic logs to get a better understanding of what is causing the issue.

 What version of firmware are you using? 

 Regards, 

Don 

#iworkfordell

 

February 21st, 2023 12:00

Thank you Don!

EQL-Array> show member -poolinfo
Name    Status  Version            Disks  Capacity  FreeSpace  Connections  Pool
------  ------  -----------------  -----  --------  ---------  -----------  -----
EQL-04  online  V10.0.2 (R465844)  16     11.34TB   3.69TB     4            VEEAM
EQL-01  online  V10.0.2 (R465844)  16     34.88TB   8.72TB     4            VEEAM
EQL-05  online  V10.0.2 (R465844)  16     34.88TB   9.37TB     6            VEEAM
EQL-03  online  V10.0.2 (R465844)  16     11.34TB   7.88TB     1            VEEAM
EQL-02  online  V10.0.2 (R465844)  16     34.88TB   4.07TB     5            VEEAM
EQL-07  online  V10.0.2 (R465844)  16     34.88TB   15.01TB    5            VEEAM
EQL-06  online  V10.0.2 (R465844)  16     23.08TB   1.92TB     6            VEEAM
EQL-Array>

I reconnected the volumes that are online, so show recentevents is mostly filled with iSCSI connection logs:

38415:31688:EQL-07:MgmtExec:21-Feb-2023 16:11:45.863016:targetAttr.cc:846:INFO:7.2.14:
iSCSI login to target '172.16.16.16:3260, iqn.2001-05.com.equallogic:0-8a0906-730783706-18f00cd1bcb5b2c8-veeam-07'
from initiator '172.16.16.2:61912, iqn.1991-05.com.microsoft:veeam-data' successful, using Jumbo Frame length.

38414:31687:EQL-07:MgmtExec:21-Feb-2023 16:11:45.843015:targetAttr.cc:974:INFO:7.2.15:
iSCSI session to target '172.16.16.15:3260, iqn.2001-05.com.equallogic:0-8a0906-730783706-18f00cd1bcb5b2c8-veeam-07'
from initiator '172.16.16.2:61868, iqn.1991-05.com.microsoft:veeam-data' was closed.
Logout request was received from the initiator.

38412:31686:EQL-07:MgmtExec:21-Feb-2023 16:11:25.563014:targetAttr.cc:846:INFO:7.2.14:
iSCSI login to target '172.16.16.75:3260, iqn.2001-05.com.equallogic:0-8a0906-d75cb8e04-3f74204dc8663458-veeam-11'
from initiator '172.16.16.2:61909, iqn.1991-05.com.microsoft:veeam-data' successful, using Jumbo Frame length.

Looking through the events, there is nothing obvious that explains this issue.
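
In case it helps anyone doing the same digging, a quick filter along these lines (a rough sketch, assuming the show recentevents output was saved to recentevents.txt with blank lines between events) skips the routine iSCSI login/logout entries so anything else stands out:

# Rough sketch: skip the routine iSCSI login/logout INFO entries in a saved
# "show recentevents" dump so other events stand out. The filename and the
# blank-line-separated layout are assumptions.
import re

NOISE = re.compile(r"iSCSI (login to|session to) target")

with open("recentevents.txt") as fh:
    events = fh.read().split("\n\n")

for event in events:
    event = event.strip()
    if event and not NOISE.search(event):
        print(event + "\n")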

I did try setting the volume online and that is when I get the "% Error - Failback operation is in progress on this volume. The volume cannot be set online."

EQL-Array> show pool
Name         Default  Members  Volumes  Capacity  FreeSpace  Maintenance
-----------  -------  -------  -------  --------  ---------  -----------
default      true     0        0        0MB       0MB        false
maintenance  false    1        0        0MB       0MB        true
VEEAM        false    7        12       185.28TB  50.57TB    false

All members are on firmware V10.0.2 (R465844).

We might submit a ticket, but we wanted to post here first.

 

 


February 21st, 2023 12:00

Hello, 

 Thank you for the reply. 

At this point, opening a case is your best option. There's no quick fix that I am aware of.

If you do open a case, gathering diagnostics now will shorten the process. You will need to get diagnostics from ALL members.

  Regards, 

Don 

 

#iworkfordell

February 21st, 2023 13:00

Thank you, Don. Appreciate the quick reply!


February 21st, 2023 13:00

Hello, 

 You are quite welcome!   Wish I could have helped more.  If you do get it resolved, would you mind posting the answer here?    

  Regards, 

Don 

#iworkfordell
