Configuring DNFS on VNX
Hi,
I am standing up a three-node Oracle RAC cluster on Sun/Oracle M4000 servers running Solaris 10. I am using NFS (VNX 7500) for shared storage and need to configure DNFS. Each RAC server has two 10GbE NIC ports that will be used for DNFS. Each NIC is on its own subnet as shown below from one server:
ixgbe0 10.9.100.20
ixgbe1 10.9.101.20
The NAS IP is 10.9.100.16.
The problem I am running into is that traceroute from ixgbe0 to the NAS works, whereas from ixgbe1 to the NAS it does not. For example, the following runs fine:
traceroute -F -s 10.9.100.20 10.9.100.16 9000
Whereas the following does not:
traceroute -F -s 10.9.101.20 10.9.100.16 9000
Is there anything that we need to configure on the NAS side so that traceroute can work?
Any help will be appreciated.
Thanks
Amir
dynamox
April 6th, 2012 12:00
what is your subnet mask ?
dynamox
April 6th, 2012 12:00
on client and datamover ?
AHameed1
April 6th, 2012 12:00
Hi,
The subnet mask is 255.255.0.0.
AHameed1
April 6th, 2012 14:00
On client
jeff_browning
April 7th, 2012 06:00
dNFS creates multi-pathing automatically across the available interfaces and DMs. This enables HA. This is a core feature of dNFS. You do not have to do anything special to make this happen.
You do need to configure the oranfstab file to account for all available paths, of course.
jeff_browning
April 7th, 2012 06:00
AHameed:
You have described the IP on the RAC side, but what is the network setup on the VNX like?
Basically, you need multiple (in your case 2) single-NIC paths between the RAC nodes and the VNX. The following setup would be typical:
RAC Node 1:
ixgbe0 10.9.100.20 / 255.255.255.0
ixgbe1 10.9.101.20 / 255.255.255.0
RAC Node 2:
ixgbe0 10.9.100.21 / 255.255.255.0
ixgbe1 10.9.101.21 / 255.255.255.0
VNX server_2:
el30 10.9.100.22 / 255.255.255.0
el31 10.9.101.22 / 255.255.255.0
VNX server_3:
el30 10.9.100.23 / 255.255.255.0
el31 10.9.101.23 / 255.255.255.0
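(Side note: the /24 masks are what make these two distinct networks. With the 255.255.0.0 mask mentioned earlier in the thread, both NICs land in the same subnet, and the OS is then free to route traffic for either destination out of whichever port it likes, which is exactly the kind of setup where one source interface works and the other does not. A quick sketch using Python's standard ipaddress module, purely to illustrate the subnet arithmetic:)

```python
import ipaddress

# With /24 masks, the two storage networks are distinct subnets.
net_a = ipaddress.ip_network("10.9.100.0/24")
print(ipaddress.ip_address("10.9.100.20") in net_a)  # True
print(ipaddress.ip_address("10.9.101.20") in net_a)  # False

# With the /16 mask from earlier in the thread, everything collapses
# into one big subnet, so the OS sees no routing distinction between
# the two NICs.
net_wide = ipaddress.ip_network("10.9.0.0/16")
print(ipaddress.ip_address("10.9.100.20") in net_wide)  # True
print(ipaddress.ip_address("10.9.101.20") in net_wide)  # True
```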
You should then be able to ping each interface on the VNX from each RAC node. Say:
RAC node 1:
$ ping 10.9.100.22
PING 10.9.100.22 (10.9.100.22): 56 data bytes
64 bytes from 10.9.100.22: icmp_seq=0 ttl=51 time=8.902 ms
64 bytes from 10.9.100.22: icmp_seq=1 ttl=51 time=7.973 ms
64 bytes from 10.9.100.22: icmp_seq=2 ttl=51 time=7.241 ms
64 bytes from 10.9.100.22: icmp_seq=3 ttl=51 time=8.615 ms
64 bytes from 10.9.100.22: icmp_seq=4 ttl=51 time=7.331 ms
^C
$ ping 10.9.101.23
PING 10.9.101.23 (10.9.101.23): 56 data bytes
64 bytes from 10.9.101.23: icmp_seq=0 ttl=51 time=8.321 ms
64 bytes from 10.9.101.23: icmp_seq=1 ttl=51 time=5.567 ms
64 bytes from 10.9.101.23: icmp_seq=2 ttl=51 time=7.554 ms
64 bytes from 10.9.101.23: icmp_seq=3 ttl=51 time=8.624 ms
64 bytes from 10.9.101.23: icmp_seq=4 ttl=51 time=9.331 ms
^C
That would confirm connectivity over both networks to both DMs. If you want to be completely thorough, ping all four combinations.
At that point, you should be able to mount the file systems from the VNX onto the RAC nodes. I would do this with normal NFS semantics, in order to confirm the operation of kernel NFS. For example, if the VNX has the following file systems:
/data_fs
/log1_fs
/log2_fs
Then create entries in /etc/vfstab on the RAC nodes for each of these file systems (using the semantics for Solaris since that is your context). Typically you would create mount points on the RAC nodes for these exports as:
/data_fs -> /u02
/log1_fs -> /u03
/log2_fs -> /u04
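The /etc/vfstab entries might then look something like the following. (This is a sketch only: the server name OraVNX1 is a hypothetical /etc/hosts name for the data mover, and the mount options shown are one commonly cited set for Oracle on NFS; check Oracle's mount-option recommendations for your exact version before using them.)

```
#device           fsck  mount  FS   pass  boot  options
OraVNX1:/data_fs  -     /u02   nfs  -     yes   rw,bg,hard,nointr,proto=tcp,vers=3,rsize=32768,wsize=32768
OraVNX1:/log1_fs  -     /u03   nfs  -     yes   rw,bg,hard,nointr,proto=tcp,vers=3,rsize=32768,wsize=32768
OraVNX1:/log2_fs  -     /u04   nfs  -     yes   rw,bg,hard,nointr,proto=tcp,vers=3,rsize=32768,wsize=32768
```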
Then mount the file systems to these mount points and test read / write access using, say, mkfile. Do this while logged in as oracle. If mkfile succeeds, then you have read / write access to the file systems, and you are golden. If not, you still have some work to do.
Typically, you will have to adjust the settings on the VNX to get this to work correctly. These settings can be found in Unisphere in the Export area. You will have to set each of the exports to allow read / write access from each RAC node in order for this to work. Also, you will need to set the privileges on the exports to allow the oracle user read / write access to the file systems. Typically, this consists of running the following as root (after the exports have been mounted to the mount points):
# chown oracle:dba /u02 /u03 /u04
# chmod 775 /u02 /u03 /u04
If that works, then you are ready to enable dNFS. This happens at two levels: the oranfstab file and the dNFS ODM library. The following would be an example oranfstab, assuming your VNX DM called server_2 is named within /etc/hosts (or DNS) as OraVNX1. (This name needs to resolve to an IP address for the data mover, in this case likely server_2 on the VNX. If you prefer, you can simply use the IP address.)
server: OraVNX1
path: 10.9.100.22
path: 10.9.101.22
export: /data_fs mount: /u02
export: /log1_fs mount: /u03
export: /log2_fs mount: /u04
I say /etc/hosts, because typically, the ethernet network used for storage networking is completely isolated and dedicated. Thus, DNS is not typically available. Certainly, it is a best practice to isolate and dedicate this network to do nothing but dNFS I/O. Create a VLAN if nothing else.
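For completeness, the corresponding /etc/hosts entry on each RAC node would be something like this (OraVNX1 being the hypothetical name used above):

```
10.9.100.22   OraVNX1
```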
Finally, you need to enable dNFS by swapping out the dNFS ODM library. This is located in $ORACLE_HOME/lib. On 11g R1, you have to do this manually, as follows:
$ cd $ORACLE_HOME/lib
$ mv libodm11.so libodm11.so_stub
$ ln -s libnfsodm11.so libodm11.so
On 11g R2, you can instead use the supplied make target: cd $ORACLE_HOME/rdbms/lib, then make -f ins_rdbms.mk dnfs_on.
Let me know if this helps. I have done this many times. It works very well if you do it correctly.
Also, make sure you have sufficient network bandwidth. I am a bit worried because you are describing some beefy servers, and have only 2 NICs. Hopefully these are 10 GbE? If so, then you are golden. If not, and they are 1 GbE, you may find yourself port bound, especially if you have a port failure.
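Once dNFS is enabled and the instance is restarted, a quick way to confirm it is actually in use is to query the dNFS views from SQL*Plus (assuming 11g; the exact columns returned will depend on your version and setup):

```
SQL> select svrname, dirname from v$dnfs_servers;
SQL> select * from v$dnfs_channels;
```

If v$dnfs_servers comes back empty, the database is still going through kernel NFS rather than dNFS.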
dynamox
April 7th, 2012 06:00
server_2 and server_3 are not configured for HA ?
dynamox
April 7th, 2012 07:00
But a file system on Celerra can only be mounted on one datamover. Were there multiple file systems, some on server_2 and some on server_3?
AHameed1
April 7th, 2012 07:00
The data movers are configured in active/passive mode.
AHameed1
April 7th, 2012 07:00
One more thing: for now, we have not mounted any file systems. We are struggling with making traceroute work from each NIC to the DM IP.
jeff_browning
April 7th, 2012 07:00
Do not trunk the 10 GbE NICs on the VNX DMs. Set them up as individual IP addresses as I indicate above. Let dNFS do the load balancing / multipathing. Do not do that within the VNX. dNFS is much better at this than either the VNX or Solaris OS-level trunking. dNFS will transparently load balance across the available NICs as well as manage failover if a port fails.
AHameed1
April 7th, 2012 07:00
Hi Jeff,
I will check with our storage folks on the VNX setup. Basically, initially we had the following setup:
RAC Node 1:
ixgbe0 10.9.100.20 / 255.255.0.0
ixgbe1 10.9.101.20 / 255.255.0.0
RAC Node 2:
ixgbe0 10.9.100.21 / 255.255.0.0
ixgbe1 10.9.101.21 / 255.255.0.0
RAC Node 3:
ixgbe0 10.9.100.22 / 255.255.0.0
ixgbe1 10.9.101.22 / 255.255.0.0
VNX
10.9.100.16 (I am assuming that the subnet mask is 255.255.0.0, I will have to confirm with our storage folks though)
Ping was working fine from each RAC NIC to the VNX IP. However, traceroute from one NIC on each RAC node to the VNX was not working. It seems that we need two IPs on the DM side?
Thanks
Amir
AHameed1
April 7th, 2012 07:00
Thanks Jeff. I will pass this information to the storage folks. The DM was set up by EMC, and I believe that trunking was done as part of the setup. Could either of the following be used as a workaround:
- If another IP (10.9.101.16) is plumbed on the trunked DM interface, would that resolve the issue?
- If an alias (10.9.101.16) is created on the DM side for the current NAS IP (10.9.100.16), would that resolve the issue?
Also, can you point me to a document with information on how to configure the DM to work with dNFS? All the documents I have seen so far cover configuring the oranfstab file, but none covers how to configure the NIC interfaces on the DM side.
Thanks
AHameed1
April 7th, 2012 07:00
I believe that on the DM side, the two 10GbE NICs are trunked.
dynamox
April 7th, 2012 08:00
Right, there is no way you can mount a file system on two different datamovers at the same time. Are these connections going through different physical switches, hence the different subnets?