Need help removing a LUN
Hello,
I recently lost a DAE in our old CLARiiON and would like to salvage the two remaining DAEs for use in testing. Right now the array still sees the old LUNs (they now sit in the Private folder) and the storage pool, but when I try to delete them it reports a successful delete and they are still there. The same thing happens when I attempt a lun -destroy from naviseccli, which doesn't error out but just returns to the prompt. I need a point in the right direction on what to look for, whether it's preceding commands or data I need, to perform this kill with extreme prejudice.
Thanks for your thoughts!
brettesinclair
April 3rd, 2015 02:00
Was it used by MirrorView/SAN Copy etc. at some point when it was healthy?
See if any drivers are listed from:
naviseccli -h arrayspa getlun -messner lun# -stack
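A quick sketch of running that check across both LUNs from this thread (the SP address 172.16.1.82 and LUN IDs 22/23 are taken from the output posted further down; substitute your own). The commands are only echoed as a dry run here, so they can be reviewed before pasting into a session against a live array:

```shell
#!/bin/sh
# Dry run: print the getlun -stack invocation for each pool LUN so the
# exact commands can be reviewed before running them for real.
# SP address and LUN IDs are from this thread; adjust for your array.
SP=172.16.1.82
for lun in 22 23; do
    echo "naviseccli -h $SP getlun -messner $lun -stack"
done
```

If any layered driver (MirrorView, SnapView, SAN Copy) shows up in the real -stack output, that driver has to be dealt with before the LUN can be destroyed.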
Sheron1
April 3rd, 2015 03:00
After executing Brett's command, see if it shows any drivers for MirrorView or any other replication.
If yes, then try the command below:
naviseccli -h <SP_IP> -user <user> -password <password> -scope 0 mirror -async|-sync -setfeature -off -lun <LUN#>
HTH,
Sheron
dgapinski
April 3rd, 2015 06:00
I thought this info might be helpful:
C:\Program Files (x86)\EMC\Navisphere CLI>naviseccli -h 172.16.1.82 storagepool
-list -id 0
Pool Name: Pool 0
Pool ID: 0
Raid Type: r_5
Percent Full Threshold: 70
Description:
Disk Type: Fibre Channel
State: Offline
Status: An internal error occurred resulting in a Pool lun going offline. (0x712d8514)
Current Operation: None
Current Operation State: N/A
Current Operation Status: N/A
Current Operation Percent Completed: 0
Raw Capacity (Blocks): 17684230997
Raw Capacity (GBs): 8432.498
User Capacity (Blocks): 33455810560
User Capacity (GBs): 15952.974
Consumed Capacity (Blocks): 33327876480
Consumed Capacity (GBs): 15891.970
Available Capacity (Blocks): 127934080
Available Capacity (GBs): 61.004
Percent Full: 99.618
Total Subscribed Capacity (Blocks): 0
Total Subscribed Capacity (GBs): 0.000
Percent Subscribed: 0.000
Oversubscribed by (Blocks): 0
Oversubscribed by (GBs): 0.000
Disks:
Bus 0 Enclosure 1 Disk 8
Bus 0 Enclosure 1 Disk 6
Bus 0 Enclosure 1 Disk 4
Bus 0 Enclosure 1 Disk 2
Bus 0 Enclosure 1 Disk 10
Bus 0 Enclosure 1 Disk 13
Bus 0 Enclosure 1 Disk 11
Bus 0 Enclosure 1 Disk 12
Bus 0 Enclosure 1 Disk 9
Bus 0 Enclosure 1 Disk 7
Bus 0 Enclosure 1 Disk 5
Bus 0 Enclosure 1 Disk 3
Bus 0 Enclosure 1 Disk 1
Bus 0 Enclosure 1 Disk 0
LUNs: 23, 22
C:\Program Files (x86)\EMC\Navisphere CLI>naviseccli -h 172.16.1.82 lun -list -l 22
LOGICAL UNIT NUMBER 22
Name: MKE_TEST_SPA_001_8TB
UID: 60:06:01:60:24:00:23:00:3A:16:4C:8B:B3:0C:E4:11
Current Owner: SP B
Default Owner: SP A
Allocation Owner: SP A
User Capacity (Blocks): 17179869184
User Capacity (GBs): 8192.000
Consumed Capacity (Blocks): N/A
Consumed Capacity (GBs): N/A
Pool Name: Pool 0
Raid Type: r_5
Offset: 0
Auto-Assign Enabled: DISABLED
Auto-Trespass Enabled: DISABLED
Current State: Offline
Status: An internal error occurred resulting in a Pool lun going offline.(0x712d8514)
Is Faulted: false
Is Transitioning: false
Current Operation: None
Current Operation State: N/A
Current Operation Status: N/A
Current Operation Percent Completed: 0
Is Pool LUN: Yes
Is Thin LUN: No
Is Private: Yes
Is Compressed: No
Initial Tier: Optimize Pool
Tier Distribution:
FC: 100.00%
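For anyone wading through similar walls of output, the handful of health-related fields can be pulled out with a simple filter. This is just a sketch that greps saved CLI output; the heredoc below stands in for a report captured with something like `naviseccli -h 172.16.1.82 lun -list -l 22 > lun22.txt`:

```shell
#!/bin/sh
# Filter a saved naviseccli report down to the fields that matter when a
# pool LUN is stuck: state, status, and the fault/transition flags.
# The inline heredoc is a sample of the output posted in this thread.
grep -E '^(Current State|State|Status|Is Faulted|Is Transitioning):' <<'EOF'
Name:  MKE_TEST_SPA_001_8TB
Current State:  Offline
Status:  An internal error occurred resulting in a Pool lun going offline.(0x712d8514)
Is Faulted:  false
Is Transitioning:  false
EOF
```

Against a real capture you would replace the heredoc with the saved file, e.g. `grep -E '...' lun22.txt`.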
dgapinski
April 3rd, 2015 06:00
Thanks, Sheron. Even though these LUNs didn't have any drivers attached, I tried running your command and got responses that the system doesn't have the MirrorView Synchronous or Asynchronous software installed (I tried it both ways). Any other thoughts?
dgapinski
April 3rd, 2015 06:00
Thanks, Brett - no drivers listed for either LUN, though.
Sheron1
April 3rd, 2015 07:00
The logs tell me that they are offline. Is that correct?
dgapinski
April 3rd, 2015 07:00
That is correct.
dgapinski
April 3rd, 2015 08:00
OK, if that's the case, it would be best to investigate a nuclear option then. We can't afford out-of-warranty support.
Sheron1
April 3rd, 2015 08:00
Oh, this would need an EMC ticket; it will be elevated to engineering, and you may be asked to capture KTCONs. Please contact CLARiiON support.
kelleg
April 8th, 2015 14:00
The only nuclear option is to re-image the array. That message indicates the Pool is offline due to a metadata fault. Engineering can try to fix it, but even then it's not clear they could in this case.
glen
asegrera
May 19th, 2015 07:00
Hello,
I have the same problem. Can you tell me what the procedure is to re-image the array?
kelleg
May 22nd, 2015 13:00
The procedure is a support-only process - you'll need to contact EMC support to get this done.
glen
Mtexter
May 27th, 2015 11:00
Gee, wouldn't it be great if the "delete" function actually worked?
We've seen this happen a number of times on Clariion systems that used to be Celerra, and VNX block systems that used to be unified. I gather that it happens because the array wasn't properly decommissioned, and/or cases like yours where a DAE goes missing. The only recourse has been to re-image the entire array, as others have said. It's that, or live with a useless DAE and constant error messages about missing drives and broken LUNs.