August 14th, 2014 14:00

Issues connecting legacy HP server to CELERRA NS-120

I have an old HP rp7400, better known as an N4000, that runs the PA-RISC chipset. Inside are two A6795A (2Gb) HP FC cards connected to a few Cisco switches. Things have been zoned correctly, as I can see multiple LUNZs on the system itself. However, the array does not see it as a valid host, and trying to add it manually also fails. Basically, the array cannot see it.

Little more information

- HP n4000 running HPUX 11iv1

- 2Gb FC cards: "HP Fibre Channel Tachyon TL/TS/XL2 Driver B.11.11.19", "Driver state = ONLINE", "Link Speed = 2Gb"

- Running Host Agent software:

         NAVICLI                               7.33.1.0.33    Navisphere Disk Array Management Tool (CLI)

         UNIAGENT                              1.3.1.1.0033   Unisphere Disk Array Management Tool (AGENT)

- Zoning looks good. 

I can provide more information from the Array/Switch side if necessary. 

I know it is a shot in the dark, but has anyone successfully configured a legacy HP system against their array? Or does anyone have any helpful troubleshooting hints that I may have missed? EMC re-routed me to an inside sales guy, so I have little hope they will get me anywhere.

7 Posts

August 15th, 2014 13:00

Small miracles. It was indeed a network restriction that we weren't aware of on that VLAN. I need to do some research on my end to see which ports need to be open between the host agent and the array; this hasn't been an issue in the past, to my knowledge. But once networking opened things up, the host registered and all the FC cards checked in.
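For reference, the usual management ports are easy to sanity-check with telnet while waiting on a definitive list. The numbers below are from memory rather than official documentation (TCP 6389 for the host agent, 443 and 2163 for the management server), so treat them as a starting point and verify against the release notes:

# telnet <SPA_IP> 443          => can the host reach the SP's management server?

# telnet <SPA_IP> 2163         => same check on the legacy management port

Then, from a management station on the array side of the restriction:

# telnet <host_IP> 6389        => can the array/management station poll the host agent?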

2 Intern • 20.4K Posts

August 14th, 2014 14:00

So you validated that zoning looks good; do you actually see the HP FC cards logged in to the switch? If zoning is correct, you should see the WWNs log in to the array and become visible under Connectivity Status. If you don't see them under Connectivity Status, run ioscan and look again (you might have to refresh). Let us know once you get that far.
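If it's easier from a terminal, a rough CLI equivalent of the Connectivity Status check (assuming classic Navisphere CLI is installed; exact output fields vary by FLARE release):

# navicli -h <SPA_IP> port -list          => lists front-end ports and which initiators are logged in / registered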

7 Posts

August 14th, 2014 14:00

fcmsutil does show the cards online on the server, so they at least have a link to the switch.

                           Vendor ID is = 0x00103c

                           Device ID is = 0x001029

                XL2 Chip Revision No is = 2.3

            PCI Sub-system Vendor ID is = 0x00103c

                   PCI Sub-system ID is = 0x00128c

                               Topology = PTTOPT_FABRIC

                             Link Speed = 2Gb

                     Local N_Port_id is = 0x361d00

                           Driver state = ONLINE

                       Hardware Path is = 1/10/0/0

                 Number of Assisted IOs = 588

        Number of Active Login Sessions = 0

                   Dino Present on Card = NO

                     Maximum Frame Size = 2048

                         Driver Version = @(#) libtd.a HP Fibre Channel Tachyon TL/TS/XL2 Driver B.11.11.19 (AR0909) /ux/kern/kisu/TL/src/common/wsio/td_glue.c: May 28 2009, 12:15:13

                          Vendor ID is = 0x00103c

                           Device ID is = 0x001029

                XL2 Chip Revision No is = 2.3

            PCI Sub-system Vendor ID is = 0x00103c

                   PCI Sub-system ID is = 0x00128c

                               Topology = PTTOPT_FABRIC

                             Link Speed = 2Gb

                     Local N_Port_id is = 0xa41b00

                           Driver state = ONLINE

                       Hardware Path is = 1/0/0/0

                 Number of Assisted IOs = 588

        Number of Active Login Sessions = 0

                   Dino Present on Card = NO

                     Maximum Frame Size = 2048

                         Driver Version = @(#) libtd.a HP Fibre Channel Tachyon TL/TS/XL2 Driver B.11.11.19 (AR0909) /ux/kern/kisu/TL/src/common/wsio/td_glue.c: May 28 2009, 12:15:13

The zoning does look solid. I do see LUNZs presented to the system through ioscan, which tells me there is some level of connectivity between the array and the host. However, under Connectivity Status on the array I do not see the WWNs of the cards listed above. I re-ran ioscan, but the cards are still not showing up in Connectivity Status on the array. The server has been rebooted as well.

disk      6  1/0/0/0.164.4.255.0.0.0   sdisk  CLAIMED     DEVICE       DGC     LUNZ

disk      8  1/0/0/0.164.5.255.0.0.0   sdisk  CLAIMED     DEVICE       DGC     LUNZ

disk      7  1/0/0/0.164.6.255.0.0.0   sdisk  CLAIMED     DEVICE       DGC     LUNZ

disk      9  1/0/0/0.164.8.255.0.0.0   sdisk  CLAIMED     DEVICE       DGC     LUNZ

disk      5  1/10/0/0.54.7.255.0.0.0   sdisk  CLAIMED     DEVICE       DGC     LUNZ

disk      2  1/10/0/0.54.8.255.0.0.0   sdisk  CLAIMED     DEVICE       DGC     LUNZ

disk      3  1/10/0/0.54.9.255.0.0.0   sdisk  CLAIMED     DEVICE       DGC     LUNZ

disk      4  1/10/0/0.54.10.255.0.0.0  sdisk  CLAIMED     DEVICE       DGC     LUNZ
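For what it's worth, the WWNs the array should be listing come from the same fcmsutil output (trimmed above). Assuming the two td driver instances match the hardware paths 1/0/0/0 and 1/10/0/0:

# fcmsutil /dev/td0 | grep "World Wide Name"

# fcmsutil /dev/td1 | grep "World Wide Name"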

2 Intern • 20.4K Posts

August 14th, 2014 15:00

Let's restart the management server on both SPs (it's non-disruptive). Go to http://SPA/setup, scroll down and select restart management server, then do the same thing on SPB. Then let's check Connectivity Status. The fact that you are seeing LUNZ proves that connectivity is there, but we need to get the HBAs to show up so you can register them and add them to a storage group.

August 15th, 2014 07:00

Hi dshetler,

You can manually register the HBA. See if this can help you (it was written up for VNX, but it might help you too):


Briefly, there are 10 major steps for manually registering an HBA with the storage:

1. First, verify the physical connections between the hosts and switches, and between the switches and the storage. The zoning configuration on the switches should also be correct.

2. Get the WWN information for the HBA cards and the storage's front-end ports. On the HP-UX host, run the following commands:

# ioscan -funC fc            => Get the device names of the FC HBA cards, for example: /dev/fcd0, /dev/fcd1

# fcmsutil /dev/fcd0         => Get the WWN of the HBA, for example:

N_Port Node World Wide Name = 0x500143800422de57

N_Port Port World Wide Name = 0x500143800422de56

Then convert the output to Node WWN:Port WWN format, so 50:01:43:80:04:22:de:57:50:01:43:80:04:22:de:56 is what we want.

On VNX storage, go to Unisphere > System > Hardware > Storage Hardware, find the SP and click the "+" sign. In this case, we can get the WWN of the A0 FE port.

3. From the WWN information above and the zoning configuration on the switches, we can map the HBA cards to the storage front-end ports. For example, fcd0 to A1/B1 and fcd1 to A0/B0.

4. Using the mapping from step 3, manually register the host HBAs on the storage. Go to VNX Unisphere > Hosts > Initiators, click the "Create" button, and enter the HBA WWN from step 2. Choose the corresponding "SP - port". Follow EMC Knowledge Base article ID 31521 to specify the values of "Initiator Type" and "Failover Mode"; for HP-UX hosts, the values should be "HP No Auto Trespass" and "1". Then enter the host name and IP address under "New Host". If the host had been registered before, select "Existing Host" and choose the correct one.

5. Please NOTE that if there are multiple paths between the HP-UX host and the storage, for example four paths, then all four paths must be manually registered.

6. Now the HP-UX host should be registered successfully. Next, create the LUNs and the Storage Group, select the LUNs, then add the host to the group. Don't forget to specify the HLU (Host LUN ID) for these LUNs. The first LUN should be HLU 0; this avoids conflicts when adding new LUNs or deleting old LUNs in the future.

7. Then scan the newly added LUNs on the HP-UX host. Don't forget to use the "dd" command to initiate I/O on each LUN, or the initiator status in Unisphere will not show as "Logged In".

# ioscan                     => Scan LUNs

# insf -e                    => Create the device files

# ioscan -funC disk          => Check the device files, for example: /dev/dsk/c15t0d0

# dd if=/dev/dsk/c15t0d0 of=/dev/null bs=1024k count=1024   => Initiate I/O to the storage

8. Check the initiator status in Unisphere; if the status is "Logged In", then everything is OK.

9. If the host has PowerPath (PP) installed, check that PP recognizes the new LUNs. Then we can begin to configure the Logical Volume Manager (LVM) and applications.

# powermt check

# powermt display dev=all

10. [Diagram: all the ports that could be used during the setup of VNX storage and SAN switches.]

Thanks

Rakesh
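In case the Unisphere dialog keeps rejecting the manual registration, the same steps can be driven from Navisphere Secure CLI. This is only a sketch with placeholder values (HPUX_SG, <SPA_IP>, <host_IP>, <LUN#>, and the example WWN from the post above) and flags recalled from memory, so verify them against the storagegroup section of the CLI reference for your FLARE release:

# naviseccli -h <SPA_IP> storagegroup -setpath -gname HPUX_SG \
      -hbauid 50:01:43:80:04:22:de:57:50:01:43:80:04:22:de:56 \
      -sp a -spport 0 -failovermode 1 -arraycommpath 1 \
      -host n4000 -ip <host_IP> -o              => register one path; repeat per HBA/SP-port pair

# naviseccli -h <SPA_IP> storagegroup -addhlu -gname HPUX_SG -hlu 0 -alu <LUN#>    => present a LUN at HLU 0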

7 Posts

August 15th, 2014 07:00

Sorry for the late reply. I restarted both management servers on the SPs. Waited a bit, and the host is still not showing up. When I try to add it manually, it says it can connect. I'm going to go through the zoning with a fine-tooth comb and probably delete and re-add the server.

7 Posts

August 15th, 2014 10:00

Thank you for the response. I tried earlier to add the card WWN to the array, but the option to click "OK" is greyed out after filling in all the necessary information.

2 Intern • 20.4K Posts

August 15th, 2014 12:00

If you are going to manually create initiators (I still don't see why you would need to do that; you have to get the host to log in), you need to make sure to input the WWN in this format. Make sure to select the correct Failover Mode for HP-UX. I think the modes differ if you use PV-Links versus PowerPath.

[Screenshot attachment: 8-15-2014 3-07-40 PM.bmp]

2 Intern • 20.4K Posts

August 15th, 2014 13:00

The host agent is nice to have, but even without it the WWNs should show up in Connectivity Status. HP-UX has always been a PITA because if it's idle it will log out. Typically ioscan will "tickle" it so it logs back in.
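If the cards have gone idle and logged out, a minimal nudge from the host side (same command family the thread has been using):

# ioscan -fnC disk             => rescan the disk class, which forces the HBAs to log back in to the fabric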

7 Posts

August 15th, 2014 13:00

Thanks for the help, all, and for taking it easy on me on my first help post. Hopefully I can contribute back as well.

7 Posts

August 15th, 2014 13:00

I agree, dynamox. As a side note, we are having our network team check whether traffic is moving from the host agent to the array without being blocked.

August 15th, 2014 23:00

Just keep in mind that for HP-UX, EMC recommends choosing failover mode 1 when configuring PowerPath.

Thanks

Rakesh
