Issues connecting legacy HP server to CELERRA NS-120
I have an old HP rp7400, better known as an N4000, which runs the PA-RISC chipset. Inside are two A6795A 2Gb HP FC cards connected to a few Cisco switches. Things have been zoned correctly, as I can see multiple LUNZ devices on the system itself. However, the array does not see it as a valid host, and trying to add it manually also fails. Basically, the array cannot see it.
A little more information:
- HP N4000 running HP-UX 11i v1
- 2Gb FC cards: "HP Fibre Channel Tachyon TL/TS/XL2 Driver B.11.11.19", "Driver state = ONLINE", "Link Speed = 2Gb"
- Running host agent software:
NAVICLI 7.33.1.0.33 Navisphere Disk Array Management Tool (CLI)
UNIAGENT 1.3.1.1.0033 Unisphere Disk Array Management Tool (AGENT)
- Zoning looks good.
I can provide more information from the Array/Switch side if necessary.
I know it is a shot in the dark, but has anyone successfully configured a legacy HP system against their array? Or does anyone have any helpful troubleshooting hints that I may have missed? EMC re-routed me to an inside sales guy, so I have little hope they will get me anywhere.
dshetler
August 15th, 2014 13:00
Small miracles. It was indeed a network restriction on that VLAN that we weren't aware of. I need to do some research to see which ports need to be open between the host agent and the array; this hasn't been an issue in the past, to my knowledge. Once networking opened things up, the host registered and all the FC cards checked in.
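For anyone hitting the same wall: a quick way to check for this kind of VLAN/firewall restriction is to probe the array's management ports from the host-agent side. This is only a sketch; TCP 6389 (the classic Navisphere/Unisphere host agent port) and 80/443 (the web UI) are assumptions here, so verify the exact list against EMC's published port matrix, and substitute your SP's management IP for the placeholder below.

```shell
# Hedged sketch: probe management ports between the host-agent box and the array.
# Ports 6389/443/80 are the usual candidates, not an authoritative list.
check_port() {
  # bash's /dev/tcp pseudo-device opens a TCP connection; nc or telnet work too
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
    exec 3>&-
    echo "port $2 on $1: open"
  else
    echo "port $2 on $1: blocked"
  fi
}

SPA_IP=127.0.0.1   # placeholder: substitute your SP A management address
for p in 6389 443 80; do
  check_port "$SPA_IP" "$p"
done
```

If a port shows blocked from the host but open from a machine on the array's own VLAN, the restriction is in the network path, as it turned out to be here.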
dynamox
August 14th, 2014 14:00
So you validated that zoning looks good; do you actually see the HP FC cards logged in to the switch? If zoning is correct, you should see the WWNs log in to the array and become visible under Connectivity Status. If you don't see them under Connectivity Status, run ioscan and look at Connectivity Status again (you might have to refresh). Let us know once you get that far.
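One way to confirm the fabric login dynamox is asking about: on a Cisco MDS switch, `show flogi database` lists every WWN currently logged in (and `show zoneset active` shows the effective zoning). A minimal sketch, assuming the switch output has been captured to a file called flogi.txt and using an example WWN; substitute your HBA's port WWN from fcmsutil.

```shell
# Hedged sketch: check whether the HBA's port WWN actually appears in the
# switch's FLOGI table. flogi.txt is assumed to hold the saved output of
# `show flogi database`; the WWN below is an example value.
HBA_WWN="50:01:43:80:04:22:de:56"
if grep -qi "$HBA_WWN" flogi.txt 2>/dev/null; then
  echo "HBA logged in to fabric"
else
  echo "no fabric login for $HBA_WWN"
fi
```

A WWN present in FLOGI but absent from the array's Connectivity Status points at the array-facing side (registration, failover mode) rather than the host link.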
dshetler
August 14th, 2014 14:00
fcmsutil does show the cards online on the server, so they at least have a link to the switch.
Vendor ID is = 0x00103c
Device ID is = 0x001029
XL2 Chip Revision No is = 2.3
PCI Sub-system Vendor ID is = 0x00103c
PCI Sub-system ID is = 0x00128c
Topology = PTTOPT_FABRIC
Link Speed = 2Gb
Local N_Port_id is = 0x361d00
Driver state = ONLINE
Hardware Path is = 1/10/0/0
Number of Assisted IOs = 588
Number of Active Login Sessions = 0
Dino Present on Card = NO
Maximum Frame Size = 2048
Driver Version = @(#) libtd.a HP Fibre Channel Tachyon TL/TS/XL2 Driver B.11.11.19 (AR0909) /ux/kern/kisu/TL/src/common/wsio/td_glue.c: May 28 2009, 12:15:13

Vendor ID is = 0x00103c
Device ID is = 0x001029
XL2 Chip Revision No is = 2.3
PCI Sub-system Vendor ID is = 0x00103c
PCI Sub-system ID is = 0x00128c
Topology = PTTOPT_FABRIC
Link Speed = 2Gb
Local N_Port_id is = 0xa41b00
Driver state = ONLINE
Hardware Path is = 1/0/0/0
Number of Assisted IOs = 588
Number of Active Login Sessions = 0
Dino Present on Card = NO
Maximum Frame Size = 2048
Driver Version = @(#) libtd.a HP Fibre Channel Tachyon TL/TS/XL2 Driver B.11.11.19 (AR0909) /ux/kern/kisu/TL/src/common/wsio/td_glue.c: May 28 2009, 12:15:13
The zoning does look solid. I do see LUNZ devices presented to the system through ioscan, which tells me there is some level of connectivity between the array and the host. However, under Connectivity Status on the array I do not see the WWNs of the cards listed above. I re-ran ioscan, but the cards are still not showing up in Connectivity Status on the array. The server has also been rebooted.
disk 6 1/0/0/0.164.4.255.0.0.0 sdisk CLAIMED DEVICE DGC LUNZ
disk 8 1/0/0/0.164.5.255.0.0.0 sdisk CLAIMED DEVICE DGC LUNZ
disk 7 1/0/0/0.164.6.255.0.0.0 sdisk CLAIMED DEVICE DGC LUNZ
disk 9 1/0/0/0.164.8.255.0.0.0 sdisk CLAIMED DEVICE DGC LUNZ
disk 5 1/10/0/0.54.7.255.0.0.0 sdisk CLAIMED DEVICE DGC LUNZ
disk 2 1/10/0/0.54.8.255.0.0.0 sdisk CLAIMED DEVICE DGC LUNZ
disk 3 1/10/0/0.54.9.255.0.0.0 sdisk CLAIMED DEVICE DGC LUNZ
disk 4 1/10/0/0.54.10.255.0.0.0 sdisk CLAIMED DEVICE DGC LUNZ
dynamox
August 14th, 2014 15:00
Let's restart the management server on both SPs (it's non-disruptive). Go to http://SPA/setup, scroll down, and select Restart Management Server; do the same thing on SPB. Then check Connectivity Status again. The fact that you are seeing LUNZ proves that connectivity is there, but we need to get the HBAs to show up so you can register them and add the host to a storage group.
Anonymous User
August 15th, 2014 07:00
Hi dshetler,
You can manually register the HBA. See if this helps (it was written for VNX, but it may apply to your array as well):
Briefly, there are 10 major steps for manually registering an HBA with the storage.
On the HP-UX host, run the following commands to gather the information:
# ioscan -funC fc => Get the device name of FC HBA cards, for example: /dev/fcd0, /dev/fcd1
# fcmsutil /dev/fcd0 => Get the WWN of HBA, for example:
N_Port Node World Wide Name = 0x500143800422de57
N_Port Port World Wide Name = 0x500143800422de56
Then we convert the output to Node WWN:Port WWN format, so 50:01:43:80:04:22:de:57:50:01:43:80:04:22:de:56 is what we want.
On the VNX storage, go to Unisphere > System > Hardware > Storage Hardware, find the SP and click on the "+" sign. In this case, we can get the WWN of the A0 FE port.
# ioscan => Scan LUNs
# insf -e => Create device files
# ioscan -funC disk => Check the device file, for example: /dev/dsk/c15t0d0
# dd if=/dev/dsk/c15t0d0 of=/dev/null bs=1024k count=1024 => Initiate I/O to the storage
…
# powermt check
# powermt display dev=all
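The hex-to-colon WWN conversion in the steps above can be scripted. A minimal sketch, using the example node/port WWNs from the fcmsutil output quoted earlier; `to_colons` is a hypothetical helper name, not an HP-UX tool.

```shell
# Hedged sketch: turn fcmsutil's 0x... node/port WWNs into the colon-separated
# Node-WWN:Port-WWN string that manual registration expects.
to_colons() {
  # strip the 0x prefix, then insert a colon after every two hex digits
  echo "${1#0x}" | sed 's/../&:/g; s/:$//'
}

node=0x500143800422de57
port=0x500143800422de56
echo "$(to_colons "$node"):$(to_colons "$port")"
# → 50:01:43:80:04:22:de:57:50:01:43:80:04:22:de:56
```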
Thanks
Rakesh
dshetler
August 15th, 2014 07:00
Sorry for the late reply. I restarted the management servers on both SPs and waited a bit, but the host is still not showing up. When I try to add it manually, it says it can't connect. I'm going to go through the zoning with a fine-tooth comb and probably delete and re-add the server.
dshetler
August 15th, 2014 10:00
Thank you for the response. I tried earlier to add the card WWNs to the array manually, but the OK button is greyed out even after filling in all the necessary information.
dynamox
August 15th, 2014 12:00
If you are going to manually create initiators (I still don't see why you would need to do that; you have to get the host to log in), you need to make sure to input the WWN in the Node WWN:Port WWN format. Also make sure to select the correct Failover Mode for HP-UX; I think the modes differ depending on whether you use PV-Links or PowerPath.
dynamox
August 15th, 2014 13:00
The host agent is nice to have, but even without it the WWNs should show up in Connectivity Status. HP-UX has always been a PITA because if it's idle it will log out of the fabric. Typically an ioscan will "tickle" it so it logs back in.
dshetler
August 15th, 2014 13:00
Thanks for the help, everyone, and for taking it easy on me on my first help post. Hopefully I can contribute back as well.
dshetler
August 15th, 2014 13:00
I agree, dynamox. As a side note, we're having our network team check whether traffic from the host agent to the array is being blocked.
Anonymous User
August 15th, 2014 23:00
Just keep in mind that for HP-UX, EMC recommends choosing failover mode 1 when configuring PowerPath.
Thanks
Rakesh