Changing HP ServiceGuard clustering parameters

orgdwprd@root:/etc/cmcluster> cmapplyconf -v -C dw_clx.conf
Begin cluster verification...
Checking cluster file: dw_clx.conf
Checking nodes ... Done
Checking existing configuration ... Done
Gathering storage information
Found 7 devices on node orgdwprd
Found 29 devices on node orgdwap1
Analysis of 36 devices should take approximately 6 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 3 volume groups on node orgdwprd
Found 4 volume groups on node orgdwap1
Analysis of 7 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Volume group /dev/lckdwvg is configured differently on node orgdwprd than on node orgdwap1
Volume group /dev/lckdwvg is configured differently on node orgdwap1 than on node orgdwprd
Volume group /dev/vgsap is configured differently on node orgdwprd than on node orgdwap1
Volume group /dev/vgsap is configured differently on node orgdwap1 than on node orgdwprd
Gathering network information
Beginning network probing (this may take a while)
Completed network probing
Cluster dw_CLX1 is an existing cluster
Modifying NODE_TIMEOUT value from 2000000 to 20000000 while cluster dw_CLX1 is running.
Cluster dw_CLX1 is an existing cluster
Checking for inconsistencies
Maximum configured packages parameter is 150.
Configuring 1 package(s).
149 package(s) can be added to this cluster.
200 access policies can be added to this cluster.
Modifying configuration on node orgdwprd
Modifying configuration on node orgdwap1
Modifying the cluster configuration for cluster dw_CLX1

Modify the cluster configuration ([y]/n)?
Marking/unmarking volume groups for use in the cluster
Unable to apply the configuration change: No such file or directory. Check the syslog file(s) for additional information.
cmapplyconf: Unable to apply the configuration
orgdwprd@root:/etc/cmcluster>
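The apply fails at the volume-group marking step; the earlier warnings that /dev/lckdwvg and /dev/vgsap are configured differently on the two nodes are the obvious place to start, along with syslog on both nodes. For reference, a minimal sketch of the usual ServiceGuard parameter-change workflow (cluster and file names taken from this post; use any editor for the edit step):

# cd /etc/cmcluster
# cmgetconf -v -c dw_CLX1 dw_clx.conf    # dump the running cluster configuration to a file
# vi dw_clx.conf                         # e.g. raise NODE_TIMEOUT from 2000000 to 20000000
# cmcheckconf -v -C dw_clx.conf          # verify the edited file before applying
# cmapplyconf -v -C dw_clx.conf          # distribute and apply the new configuration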

Solaris: df: cannot statvfs - ERROR

OS: Sun Solaris 8/10
Hardware: Sun Fire V880

Error Message:

df: cannot statvfs /oracle/test/vol158: I/O error
df: cannot statvfs /oracle/test/vol159: I/O error
df: cannot statvfs /oracle/test/vol160: I/O error
df: cannot statvfs /oracle/test/vol161: I/O error
/dev/vx/dsk/testdg/vol16
142369792 140858696 1499360 99% /oracle/test/vol16
df: cannot statvfs /oracle/test/vol162: I/O error
df: cannot statvfs /oracle/test/vol163: I/O error
df: cannot statvfs /oracle/test/vol164: I/O error
df: cannot statvfs /oracle/test/vol165: I/O error
# vxprint -vt | grep -i disabled
v vol125 - DISABLED ACTIVE 243609600 SELECT - fsgen
v vol158 - DISABLED ACTIVE 284774400 SELECT - fsgen
v vol159 - DISABLED ACTIVE 284774400 SELECT - fsgen
v vol160 - DISABLED ACTIVE 284774400 SELECT - fsgen
v vol161 - DISABLED ACTIVE 284774400 SELECT - fsgen
v vol162 - DISABLED ACTIVE 284774400 SELECT - fsgen
You may see I/O errors on the file systems, and the volumes may get disabled automatically. This is often due to intermittent loss of connectivity to the storage.
1. Try to enable the volume with the command: vxvol enable volxxx
2. Then check the state of the plexes. If a plex is in the RECOVER state, mark it stale and then clean using the following script, which is handy when many volumes are affected:

for i in `vxprint | grep "Disk group:" | awk '{ print $3 }'` ; do
for j in `vxprint -g $i | grep "RECOVER" | awk '{ print $2 }'` ; do
vxmend -g $i fix stale $j
vxmend -g $i fix clean $j
done
done
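Once the plexes are clean, the volumes still have to be started and the file systems checked before remounting. A minimal follow-up sketch under the same assumptions (vxrecover -s starts the volumes in each disk group; the fsck example uses a volume name from the output above):

for i in `vxprint | grep "Disk group:" | awk '{ print $3 }'` ; do
vxrecover -g $i -s
done
# fsck each affected file system before mounting, e.g.:
# fsck -F vxfs /dev/vx/rdsk/testdg/vol158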
Reference:

http://support.veritas.com/docs/231913

Solaris 10 Zone Control Commands



The following control commands can be used to manage and monitor zone state transitions (a typical lifecycle using them is sketched after the list):


• zlogin options zone-name
• zoneadm -z zone-name boot
• zoneadm -z zone-name halt
• zoneadm -z zone-name install
• zoneadm -z zone-name ready
• zoneadm -z zone-name reboot
• zoneadm -z zone-name uninstall
• zoneadm -z zone-name verify
• zonecfg -z zone-name: interactive mode; can be used to remove properties of the following types: fs, device, rctl, net, attr
• zonecfg -z zone-name commit
• zonecfg -z zone-name create
• zonecfg -z zone-name delete
• zonecfg -z zone-name verify
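
As a worked example, a minimal sketch of a typical zone lifecycle using these commands. The zone name testzone and the zonepath /zones/testzone are made up for illustration:

# zonecfg -z testzone
zonecfg:testzone> create
zonecfg:testzone> set zonepath=/zones/testzone
zonecfg:testzone> set autoboot=true
zonecfg:testzone> verify
zonecfg:testzone> commit
zonecfg:testzone> exit
# zoneadm -z testzone install
# zoneadm -z testzone boot
# zlogin -C testzone

zlogin -C attaches to the zone console so you can answer the initial system identification prompts after the first boot.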


Solaris 11 Reference:
http://solaristipsandtricks.blogspot.in/#!/2011/12/solaris-11-smf-and-nscd.html


HP-UX Ignite: make_net_recovery procedure to take a network backup and restore it on the client machine

STEP1:


Run make_net_recovery from the client machine, i.e. SAND-BOX:

# make_net_recovery -s IGNITE-SRV -A -x inc_entire=vg00

IGNITE-SRV is the hostname of the Ignite server; -x inc_entire=vg00 includes the whole of vg00 in the archive, and -A archives the entire contents of the included disks and volume groups.



======= 09/23/10 10:16:05 UAE Started make_net_recovery. (Thu Sep 23 10:16:05 UAE 2010)

@(#)Ignite-UX Revision C.7.5.142
@(#)ignite/net_recovery (opt) Revision: /branches/IUX_RA0803/ignite/src@72866 Last Modified: 2008-02-06 15:49:50 -0700 (Wed, 06 Feb 2008)

* Testing for necessary pax patch.
* Checking Versions of Recovery Tools
* Scanning system for IO devices...
* Boot device is: 0/2/1/0.0x5382d3ae4c7121c.0x0
* Creating System Configuration.
* /opt/ignite/bin/save_config -f /var/opt/ignite/recovery/client_mnt/0x001F29BC2AFD/recovery/2010-09-23,10:16/system_cfg vg00
* Backing Up Volume Group /dev/vg00
* /usr/sbin/vgcfgbackup /dev/vg00
Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf
* Creating Map Files for Volume Group /dev/vg00
* /usr/sbin/vgexport -s -p -m /etc/lvmconf/vg00.mapfile /dev/vg00
vgexport: Volume group "/dev/vg00" is still active.
vgexport: Preview of vgexport on volume group "/dev/vg00" succeeded.
* Backing Up Volume Group /dev/vgDB2
* /usr/sbin/vgcfgbackup /dev/vgDB2
Volume Group configuration for /dev/vgDB2 has been saved in /etc/lvmconf/vgDB2.conf
* Creating Map Files for Volume Group /dev/vgDB2
* /usr/sbin/vgexport -s -p -m /etc/lvmconf/vgDB2.mapfile /dev/vgDB2
vgexport: Volume group "/dev/vgDB2" is still active.
vgexport: Preview of vgexport on volume group "/dev/vgDB2" succeeded.
* Creating Control Configuration.
WARNING: Failed to find "/dev/vgDB2/lvDB2snd" in IOTree, will not be added to the _hp_hide_other_disks list
WARNING: Failed to find "/dev/vgDB2/lvDB2mnt" in IOTree, will not be added to the _hp_hide_other_disks list
WARNING: Failed to find "/dev/vgDB2/lvDB2trans" in IOTree, will not be added to the _hp_hide_other_disks list
WARNING: Failed to find "/dev/vgDB2/lvoracle" in IOTree, will not be added to the _hp_hide_other_disks list
WARNING: Failed to find "/dev/vgDB2/lvoraclient" in IOTree, will not be added to the _hp_hide_other_disks list
WARNING: Failed to find "/dev/vgDB2/lvorastage" in IOTree, will not be added to the _hp_hide_other_disks list
WARNING: Failed to find "/dev/vgDB2/lvorasnd" in IOTree, will not be added to the _hp_hide_other_disks list
WARNING: Failed to find "/dev/vgDB2/lv102_64" in IOTree, will not be added to the _hp_hide_other_disks list
WARNING: Failed to find "/dev/vgDB2/lvoraarch" in IOTree, will not be added to the _hp_hide_other_disks list
WARNING: Failed to find "/dev/vgDB2/lvDB2reorg" in IOTree, will not be added to the _hp_hide_other_disks list
WARNING: Failed to find "/dev/vgDB2/lvDB2data1" in IOTree, will not be added to the _hp_hide_other_disks list
WARNING: Failed to find "/dev/vgDB2/lvDB2data2" in IOTree, will not be added to the _hp_hide_other_disks list
WARNING: Failed to find "/dev/vgDB2/lvDB2data3" in IOTree, will not be added to the _hp_hide_other_disks list
WARNING: Failed to find "/dev/vgDB2/lvDB2data4" in IOTree, will not be added to the _hp_hide_other_disks list
WARNING: Failed to find "/dev/vgDB2/lvoriglogA" in IOTree, will not be added to the _hp_hide_other_disks list
WARNING: Failed to find "/dev/vgDB2/lvoriglogB" in IOTree, will not be added to the _hp_hide_other_disks list
WARNING: Failed to find "/dev/vgDB2/lvmirrlogA" in IOTree, will not be added to the _hp_hide_other_disks list
WARNING: Failed to find "/dev/vgDB2/lvmirrlogB" in IOTree, will not be added to the _hp_hide_other_disks list
* Creating Archive File List
* Creating Archive Configuration
* /opt/ignite/lbin/make_arch_config -c /var/opt/ignite/recovery/client_mnt/0x001F29BC2AFD/recovery/2010-09-23,10:16/archive_cfg -g /var/opt/ignite/recovery/client_mnt/0x001F29BC2AFD/recovery/2010-09-23,10:16/flist -n 2010-09-23,10:16 -r ipf -b 64 -d Recovery\ Archive -L /var/opt/ignite/recovery/arch_mnt -l IGNITE-SRV:/var/opt/ignite/recovery/archives/SAND-BOX -i 1 -m t
* Saving the information about archive to /var/opt/ignite/recovery/previews
* Creating The Networking Archive
* /opt/ignite/data/scripts/make_sys_image -d /var/opt/ignite/recovery/arch_mnt -t n -s local -n 2010-09-23,10:16 -m t -w /var/opt/ignite/recovery/client_mnt/0x001F29BC2AFD/recovery/2010-09-23,10:16/recovery.log -u -R -g /var/opt/ignite/recovery/client_mnt/0x001F29BC2AFD/recovery/2010-09-23,10:16/flist -a 10491250
* Preparing to create a system archive.
* The archive is estimated to reach 5245625 kbytes.
* Free space on /var/opt/ignite/recovery/arch_mnt after archive should be about 1293351 kbytes.
* Archiving contents of SAND-BOX via tar to /var/opt/ignite/recovery/arch_mnt/2010-09-23,10:16.
* Creation of system archive complete.
* Creating CINDEX Configuration File
* /opt/ignite/bin/manage_index -q -c 2010-09-23,10:16\ Recovery\ Archive -i /var/opt/ignite/recovery/client_mnt/0x001F29BC2AFD/CINDEX -u Recovery\ Archive

======= 09/23/10 10:38:47 UAE make_net_recovery completed with warnings
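
Once make_net_recovery completes, the archive can be sanity-checked from the Ignite server side, using the destination reported in the log above (directory layout can vary between Ignite-UX versions):

# ls -l /var/opt/ignite/recovery/archives/SAND-BOX

You should see the archive image for 2010-09-23,10:16 there.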



*********************************************************************************************************



STEP2:

Boot the client machine from the Ignite server.

Error description:
When you try to boot from LAN0 or LAN1 to reach the IGNITE server, you get the following error:

PXE-E18: Timeout. Server did not respond

On new Itanium boxes a dbprofile needs to be defined:
Interrupt the boot
Select "Change Configuration"
Select "New configuration"
Enter the shell

The example below is from my environment, a BL-860c server running HP-UX 11.31; I hit this issue while trying to restore a make_net_recovery archive from an IGNITE server.


Shell> dbprofile -dn new1

Profile Name: new1
Network Type: IPv4
Client IP address: 0.0.0.0
Gateway IP address: 0.0.0.0
Subnet Mask: 0.0.0.0
Server IP address: 16.26.86.25
Boot File: /opt/ignite/boot/nbp.efi
Optional Data:

Shell>
Shell> lanboot select -dn new1
01 Acpi(HWP0002,PNP0A03,200)/Pci(2|0)/Mac(0017A4993DCB)
02 Acpi(HWP0002,PNP0A03,200)/Pci(2|1)/Mac(0017A4993DCA)
Select Desired LAN: 1

BOOTP Server IP Address: 10.100.100.49
DHCP Server IP Address: 0.0.0.0
Boot file name: /opt/ignite/boot/nbp.efi

Retrieving File Size............|
PXE-E18: Timeout. Server did not respond.
Exit status code: Invalid Parameter

To set all the IPs and the boot file, enter the following command:

Shell> dbprofile -dn new1 -sip 10.100.100.49 -cip 10.100.100.11 -gip 10.100.100.1 -b /opt/ignite/boot/nbp.efi

where -cip is the client IP, -sip the Ignite server IP, -gip the gateway IP, and -b the boot file.
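
As the first transcript above suggests, running dbprofile -dn new1 again with no other options prints the profile's current values, so you can confirm the addresses took effect before booting:

Shell> dbprofile -dn new1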
Shell> lanboot -dn new1

Client MAC Address: 00 1F 29 BC 2A FC
Client IP Address: 10.100.100.11
Subnet Mask: 255.255.255.0
BOOTP Server IP Address: 10.100.100.49
DHCP Server IP Address: 0.0.0.0
Boot file name: /opt/ignite/boot/nbp.efi

Retrieving File Size.
Retrieving File (TFTP).
@(#) HP-UX IA64 Network Bootstrap Program Revision 1.0
Downloading HPUX bootloader
Starting HPUX bootloader
Obtaining size of fpswa.efi (328192 bytes)
Downloading file fpswa.efi (328192 bytes)

(C) Copyright 1999-2006 Hewlett-Packard Development Company, L.P.
All rights reserved

HP-UX Boot Loader for IPF -- Revision 2.035

Booting from Lan
Obtaining size of AUTO (226 bytes)
Downloading file AUTO (226 bytes)
Obtaining size of AUTO (226 bytes)
Downloading file AUTO (226 bytes)



Obtaining size of AUTO (226 bytes)
Downloading file AUTO (226 bytes)
1. target OS is B.11.23 IA
2. target OS is B.11.31 IA
3. Exit Boot Loader

Choose an operating system to install that your hardware supports:3

Obtaining size of AUTO (226 bytes)
Downloading file AUTO (226 bytes)
Exiting bootloader.

Option 3 exits the boot loader back to the EFI shell. Run lanboot again and this time choose the entry that matches the client's target OS (option 2, B.11.31 IA, in this case):
Shell> lanboot -dn new1

Client MAC Address: 00 1F 29 BC 2A FC
Client IP Address: 10.100.100.11
Subnet Mask: 255.255.255.0
BOOTP Server IP Address: 10.100.100.49
DHCP Server IP Address: 0.0.0.0
Boot file name: /opt/ignite/boot/nbp.efi

Retrieving File Size.
Retrieving File (TFTP).
@(#) HP-UX IA64 Network Bootstrap Program Revision 1.0
Downloading HPUX bootloader
Starting HPUX bootloader
Obtaining size of fpswa.efi (328192 bytes)
Downloading file fpswa.efi (328192 bytes)

(C) Copyright 1999-2006 Hewlett-Packard Development Company, L.P.
All rights reserved

HP-UX Boot Loader for IPF -- Revision 2.035

Booting from Lan
Obtaining size of AUTO (226 bytes)
Downloading file AUTO (226 bytes)
Obtaining size of AUTO (226 bytes)
Downloading file AUTO (226 bytes)
