Hostname               | IP address    | Function
lnx-node1.amer.lan     | 192.168.9.108 | Physical node 1
lnx-node2.amer.lan     | 192.168.9.109 | Physical node 2
lnx-nwcluster.amer.lan | 192.168.9.110 | Logical name used by NetWorker
Active node:
root@lnx-node1:~# ls -l / | grep nsr
lrwxrwxrwx. 1 root root 14 Oct 5 10:49 nsr -> /nsr_share/nsr
drwxr-xr-x. 11 root root 116 Aug 31 17:20 nsr.NetWorker.local
drwxr-xr-x. 3 root root 17 Aug 31 17:23 nsr_share
Passive node:
root@lnx-node2:~# ls -l / | grep nsr
lrwxrwxrwx. 1 root root 20 Oct 3 17:08 nsr -> /nsr.NetWorker.local
drwxr-xr-x. 11 root root 116 Aug 31 17:19 nsr.NetWorker.local
drwxr-xr-x. 2 root root 6 Aug 31 17:18 nsr_share
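NetWorker expects this symlink layout: on the node that currently owns the shared disk, /nsr points into the shared file system, while on the passive node it points at the local nsr.NetWorker.local directory. If a link is missing or points at the wrong target, it can be recreated to match the listings above; a minimal sketch, assuming the paths shown (run the first command only on the node where /nsr_share is mounted):

root@lnx-node1:~# ln -sfn /nsr_share/nsr /nsr
root@lnx-node2:~# ln -sfn /nsr.NetWorker.local /nsr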
pcs status
root@lnx-node1:~# pcs status
Cluster name: rhelclus
Status of pacemakerd: 'Pacemaker is running' (last updated 2023-10-05 10:59:19 -04:00)
Cluster Summary:
  * Stack: corosync
  * Current DC: lnx-node1.amer.lan (version 2.1.5-9.3.el8_8-a3f44794f94) - partition with quorum
  * Last updated: Thu Oct 5 10:59:20 2023
  * Last change: Thu Oct 5 10:59:13 2023 by root via cibadmin on lnx-node1.amer.lan
  * 2 nodes configured
  * 3 resource instances configured
Node List:
  * Online: [ lnx-node1.amer.lan lnx-node2.amer.lan ]
Full List of Resources:
  * Resource Group: NW_group:
    * fs  (ocf::heartbeat:Filesystem):  Started lnx-node1.amer.lan
    * ip  (ocf::heartbeat:IPaddr):      Started lnx-node1.amer.lan
    * nws (ocf::EMC_NetWorker:Server):  Started lnx-node1.amer.lan
Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
pcs resource config
Example:
root@lnx-node1:~# pcs resource config
Group: NW_group
  Resource: fs (class=ocf provider=heartbeat type=Filesystem)
    Attributes: fs-instance_attributes
      device=/dev/sdb1
      directory=/nsr_share
      fstype=xfs
    Operations:
      monitor: fs-monitor-interval-20 interval=20 timeout=300
      start: fs-start-interval-0s interval=0s timeout=60s
      stop: fs-stop-interval-0s interval=0s timeout=60s
  Resource: ip (class=ocf provider=heartbeat type=IPaddr)
    Attributes: ip-instance_attributes
      cidr_netmask=24
      ip=192.1xx.9.1x0
      nic=ens192
    Operations:
      monitor: ip-monitor-interval-15 interval=15 timeout=120
      start: ip-start-interval-0s interval=0s timeout=20s
      stop: ip-stop-interval-0s interval=0s timeout=20s
  Resource: nws (class=ocf provider=EMC_NetWorker type=Server)
    Meta Attributes: nws-meta_attributes
      is-managed=true
    Operations:
      meta-data: nws-meta-data-interval-0 interval=0 timeout=10
      migrate_from: nws-migrate_from-interval-0 interval=0 timeout=120
      migrate_to: nws-migrate_to-interval-0 interval=0 timeout=60
      monitor: nws-monitor-interval-100 interval=100 timeout=1200
      start: nws-start-interval-0 interval=0 timeout=600
      stop: nws-stop-interval-0 interval=0 timeout=600
      validate-all: nws-validate-all-interval-0 interval=0 timeout=10
lcmap
Example:
root@lnx-node1:~# lcmap
type: NSR_CLU_TYPE;
  clu_type: NSR_LC_TYPE;
  interface version: 1.0;

type: NSR_CLU_VIRTHOST;
  hostname: 192.168.9.110;
  local: TRUE;
  owned paths: /nsr_share;
  clu_nodes: lnx-node1.amer.lan lnx-node2.amer.lan;
If the NetWorker services do not start, check the pcs resource status to see which resource is failing:
pcs status
root@lnx-node1:~# pcs status
...
...
Node List:
  * Online: [ lnx-node1.amer.lan lnx-node2.amer.lan ]
Full List of Resources:
  * Resource Group: NW_group:
    * fs  (ocf::heartbeat:Filesystem):  Started lnx-node1.amer.lan
    * ip  (ocf::heartbeat:IPaddr):      Started lnx-node1.amer.lan
    * nws (ocf::EMC_NetWorker:Server):  Started lnx-node1.amer.lan
Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
pcs resource cleanup nws
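Running pcs resource cleanup clears the resource's failure history and lets Pacemaker retry the start. A minimal sketch, using the nws resource name from this configuration:

root@lnx-node1:~# pcs resource cleanup nws
root@lnx-node1:~# pcs status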
dbgcommand -n nsrd Debug=#
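dbgcommand changes the debug level of a running NetWorker daemon without restarting it; replace # with the desired level. A hedged example: raise nsrd debugging to level 5 while reproducing the failure, then turn it back off:

root@lnx-node1:~# dbgcommand -n nsrd Debug=5
(reproduce the startup problem and review /nsr/logs/daemon.raw)
root@lnx-node1:~# dbgcommand -n nsrd Debug=0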
pcs resource
pcs resource config fs
root@lnx-node1:~# pcs resource
  * Resource Group: NW_group:
    * fs  (ocf::heartbeat:Filesystem):  Started lnx-node1.amer.lan
    * ip  (ocf::heartbeat:IPaddr):      Started lnx-node1.amer.lan
    * nws (ocf::EMC_NetWorker:Server):  Started lnx-node1.amer.lan
root@lnx-node1:~# pcs resource config fs
Resource: fs (class=ocf provider=heartbeat type=Filesystem)
  Attributes: fs-instance_attributes
    device=/dev/sdb1
    directory=/nsr_share
    fstype=xfs
  Operations:
    monitor: fs-monitor-interval-20 interval=20 timeout=300
    start: fs-start-interval-0s interval=0s timeout=60s
    stop: fs-stop-interval-0s interval=0s timeout=60s
df -h
Example:
root@lnx-node1:~# df -h | grep /nsr_share
/dev/sdb1        94G  1.5G   92G   2% /nsr_share
lsblk
Example:
root@lnx-node1:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 40G 0 disk
├─sda1 8:1 0 600M 0 part /boot/efi
├─sda2 8:2 0 1G 0 part /boot
└─sda3 8:3 0 38.4G 0 part
├─rhel-root 253:0 0 34.4G 0 lvm /
└─rhel-swap 253:1 0 4G 0 lvm [SWAP]
sdb 8:16 0 100G 0 disk
└─sdb1 8:17 0 93.1G 0 part /nsr_share
sr0 11:0 1 1024M 0 rom
blkid
root@lnx-node1:~# blkid
/dev/mapper/rhel-root: UUID="7cf2f957-18d8-45b8-bf8f-6361aadc3517" BLOCK_SIZE="512" TYPE="xfs"
/dev/sda3: UUID="QpZ2hK-OuE2-igN0-Ryba-EwMN-uxq1-LE48hD" TYPE="LVM2_member" PARTUUID="1193db91-4b63-4b33-a4d4-03a22317e064"
/dev/sda1: UUID="F243-AD41" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="6c81bd63-0249-4bdf-afdb-cdde72034162"
/dev/sda2: UUID="7677ad6b-8191-4a45-8a8a-16cf7d00d72c" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="57481b7a-83ec-4cd8-bf2d-bca09ac27040"
/dev/sdb1: UUID="600bca60-dd5d-4162-bf77-0537daa3b1e5" BLOCK_SIZE="512" TYPE="xfs" PARTLABEL="networker" PARTUUID="769aaac2-764b-431d-be21-3b5753d6a5d3"
/dev/mapper/rhel-swap: UUID="537962b6-07d4-4a40-9687-deab2e488936" TYPE="swap"
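If the fs resource will not start, the mount itself can be tested by hand with the same attributes the resource carries (device=/dev/sdb1, directory=/nsr_share, fstype=xfs). A hedged sketch; do this only while the resource is disabled, since Pacemaker expects to own the mount:

root@lnx-node1:~# pcs resource disable fs
root@lnx-node1:~# mount -t xfs /dev/sdb1 /nsr_share
root@lnx-node1:~# umount /nsr_share
root@lnx-node1:~# pcs resource enable fs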
pcs resource
pcs resource config ip
root@lnx-node1:~# pcs resource
* Resource Group: NW_group:
* fs (ocf::heartbeat:Filesystem): Started lnx-node1.amer.lan
* ip (ocf::heartbeat:IPaddr): Started lnx-node1.amer.lan
* nws (ocf::EMC_NetWorker:Server): Started lnx-node1.amer.lan
root@lnx-node1:~# pcs resource config ip
Resource: ip (class=ocf provider=heartbeat type=IPaddr)
Attributes: ip-instance_attributes
cidr_netmask=24
ip=192.1xx.9.1x0
nic=ens192
Operations:
  monitor: ip-monitor-interval-15 interval=15 timeout=120
  start: ip-start-interval-0s interval=0s timeout=20s
  stop: ip-stop-interval-0s interval=0s timeout=20s
ifconfig -a
root@lnx-node1:~# ifconfig -a
ens192: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.1xx.9.1x8  netmask 255.255.255.0  broadcast 192.1xx.9.255
        inet6 fe80::250:56ff:fea5:48e1  prefixlen 64  scopeid 0x20<link>
        ether 00:50:56:a5:48:e1  txqueuelen 1000  (Ethernet)
        RX packets 953865  bytes 349705527 (333.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1190983  bytes 179749786 (171.4 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 129798  bytes 13274289 (12.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 129798  bytes 13274289 (12.6 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
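On minimal installs the net-tools ifconfig may be absent; the iproute2 equivalent gives the same information. A sketch, assuming the NIC name ens192 from the ip resource configuration above:

root@lnx-node1:~# ip -4 addr show ens192

When the ip resource is running on this node, the cluster address (192.1xx.9.1x0) should be listed on ens192 in addition to the node's own address.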
nslookup ip
nslookup logical_name_FQDN
nslookup logical_name_short
root@lnx-node1:~# nslookup 192.1xx.9.1x0
110.9.1xx.1x2.in-addr.arpa  name = lnx-nwcluster.amer.lan.

root@lnx-node1:~# nslookup lnx-nwcluster.amer.lan.
Server:   192.1xx.9.1x0
Address:  192.1xx.9.100#53

Name:     lnx-nwcluster.amer.lan
Address:  192.1xx.9.1x0

root@lnx-node1:~# nslookup lnx-nwcluster
Server:   192.1xx.9.1x0
Address:  192.1xx.9.100#53

Name:     lnx-nwcluster.amer.lan
Address:  192.1xx.9.1x0
It is also recommended to run the same steps against the physical node's IP address, FQDN, and short name; a sketch follows below. See the Dell article Troubleshooting DNS and Name Resolution Issues.
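A sketch of the same checks against the first physical node, using the values from the host table at the top of this article (output omitted):

root@lnx-node1:~# nslookup 192.168.9.108
root@lnx-node1:~# nslookup lnx-node1.amer.lan
root@lnx-node1:~# nslookup lnx-node1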
ping -c 4 ip
root@lnx-node1:~# ping -c 4 192.1xx.9.1x0
PING 192.1xx.9.1x0 (192.1xx.9.1x0) 56(84) bytes of data.
64 bytes from 192.1xx.9.1x0: icmp_seq=1 ttl=64 time=0.051 ms
64 bytes from 192.1xx.9.1x0: icmp_seq=2 ttl=64 time=0.043 ms
64 bytes from 192.1xx.9.1x0: icmp_seq=3 ttl=64 time=0.033 ms
64 bytes from 192.1xx.9.1x0: icmp_seq=4 ttl=64 time=0.034 ms

--- 192.1xx.9.1x0 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3108ms
rtt min/avg/max/mdev = 0.033/0.040/0.051/0.008 ms
Show the pcs version:          pcs --version
Enable a resource:             pcs resource enable resource_name
Disable a resource:            pcs resource disable resource_name
Clean up (restart) a resource: pcs resource cleanup resource_name
Stop the cluster:              pcs cluster stop --force
Start the cluster:             pcs cluster start --all
Put a node in standby:         pcs node standby node_name
Take a node out of standby:    pcs node unstandby node_name
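The standby commands above give a simple way to test failover without rebooting a node. A minimal sketch, using the node names from this cluster: put the active node in standby, confirm with pcs status that NW_group starts on lnx-node2.amer.lan, then bring the node back:

root@lnx-node1:~# pcs node standby lnx-node1.amer.lan
root@lnx-node1:~# pcs status
root@lnx-node1:~# pcs node unstandby lnx-node1.amer.lan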