Saturday, January 29, 2011

bosboot fails with malloc error 0301-106

bosboot fails with malloc error 0301-106
Problem (Abstract)

During or after an OS upgrade, bosboot fails with the following error:

0301-106 /usr/lib/boot/bin/mkboot_chrp the malloc call failed for size

0301-158 bosboot: mkboot failed to create bootimage.

0301-165 bosboot: WARNING! bosboot failed - do not attempt to boot device.



Symptom

During or after an OS upgrade, bosboot fails with the following error:

0301-106 /usr/lib/boot/bin/mkboot_chrp the malloc call failed for size

0301-158 bosboot: mkboot failed to create bootimage.

0301-165 bosboot: WARNING! bosboot failed - do not attempt to boot device.




Diagnosing the problem

Check the size of the PdDv.vc ODM class file, for example:

# ls -al /usr/lib/objrepos/PdDv*
-rw-r--r-- 1 root system 110592 Apr 14 11:42 PdDv
-rw-r--r-- 1 root system 200937472 Apr 14 11:42 PdDv.vc


Resolving the problem

bosboot uses the PdDv ODM class files to build device information into the boot image and pre-allocate memory for these devices. If the file is too large, malloc cannot satisfy the request, causing bosboot to fail.

The following instructions can be used to reduce the size of the PdDv.vc file:

# mkdir /tmp/objrepos
# cd /tmp/objrepos
# export ODMDIR=/usr/lib/objrepos
# odmget PdDv > PdDv.out
# cp /usr/lib/objrepos/PdDv /usr/lib/objrepos/PdDv.bak
# cp /usr/lib/objrepos/PdDv.vc /usr/lib/objrepos/PdDv.vc.bak
# export ODMDIR=/tmp/objrepos
# echo $ODMDIR
# odmcreate -c /usr/lib/cfgodm.ipl
# ls -l PdDv*
# odmadd /tmp/objrepos/PdDv.out
# ls -l PdDv*
# cp /tmp/objrepos/PdDv /usr/lib/objrepos/PdDv
# cp /tmp/objrepos/PdDv.vc /usr/lib/objrepos/PdDv.vc
# export ODMDIR=/etc/objrepos
# rm -rf /tmp/objrepos
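Once the rebuilt PdDv/PdDv.vc files are in place, a quick sanity check (a sketch; assuming hd5 lives on hdisk0, as in the next section) is to confirm the new file size and rerun bosboot:

# ls -al /usr/lib/objrepos/PdDv*
# bosboot -ad /dev/hdisk0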





Bosboot too Small

Bosboot too Small
Action Taken: We need to remove hd5 (the boot image) from all the drives and
recreate it on a disk in rootvg with the correct size.

 Here are the steps to take once the system has been restored from the
mksysb tape.

  1. Log in as root

  2.Remove the logical volume hd5
      rmlv hd5

  3. Clear the boot record from each drive.
      chpv -c hdisk#
   run this command for each drive that had hd5 on it.

  4. Run the mklv command to create the logical volume space on hdisk0.
    mklv -t boot -y hd5 -ae rootvg 1 hdisk0

  NOTE: the '1' in this command stands for 1 partition.
  Make sure the default size of your partitions is 16 MB or larger.
  To find out, run:

   lsvg rootvg -> look for the parameter PP SIZE. If it's 16 MB or larger,
run the above command as is. If it's smaller than 16 MB, run the mklv
command with a 2 instead of a 1.
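  For example (a sketch): if PP SIZE turns out to be 8 megabytes, two
  partitions are needed to reach 16 MB:

   lsvg rootvg | grep "PP SIZE"
   mklv -t boot -y hd5 -ae rootvg 2 hdisk0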

 5. Create the boot image
    bosboot -ad /dev/hdisk0

 6. Make hdisk0 the first device in the boot list.
    bootlist -m normal hdisk0

Now reboot and see if the system comes up cleanly.

'bosboot' hangs

######################################################################
PROBLEM: 'bosboot' hangs

CAUSE: The major/minor number of the rootvg hdiskX in the ODM differs from the one in the /dev directory.

SOLUTION: Correct the ODM entry using the 'odmdelete' and 'odmadd' commands, as explained below.
Please note that ODM commands have to be handled very carefully; use them at your own risk.
########

Here's a strange problem.

bosboot hangs. If you can do a "ps -ef | grep bootinfo", you'll see "/usr/sbin/bootinfo -g /dev/hd5". Or, if you run "ksh -x /usr/sbin/bosboot -ad /dev/hdisk0", you'll see it hang at "valid_dev /dev/hdisk0".

Here's the solution (assuming hdisk0):
cd /dev
ls -l hdisk0 -> 26, 1

odmget -q value3=hdisk0 CuDvDr  -> 24, 1

It looks like the ODM thinks that hdisk0 has a different major/minor number than /dev thinks it has      [root cause for the bosboot hang]
see if any other devices are in ODM at 26,1

odmget -q 'value1=26 value2=1' CuDvDr

If there is, and it is something that you can live without, remove it from the ODM:

odmdelete -q 'value1=26 value2=1' -o CuDvDr

Now, let's fix hdisk0.

odmget -q value3=hdisk0 CuDvDr >hdisk0.out

vi the file and change the major/minor number to reflect /dev

odmdelete -q value3=hdisk0 -o CuDvDr ->to delete old entry

odmadd hdisk0.out

synclvodm -Pv rootvg -> OK

odmget -q value3=hdisk0 CuDvDr  ->looks good

bosboot -ad /dev/hdisk0 -> OK this time

bootlist -m normal hdisk0 ->OK

Example odmget output (on one machine):
# odmget -q value3=hdisk0 CuDvDr|more

CuDvDr:
        resource = "devno"
        value1 = "22"
        value2 = "0"
        value3 = "hdisk0"
#
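A quick way to spot this kind of mismatch for every disk in rootvg is to compare both sources side by side (a sketch only; it assumes the standard lsvg and odmget output formats shown above):

for d in $(lsvg -p rootvg | awk 'NR>2 {print $1}')
do
  echo "$d  dev: $(ls -l /dev/$d | awk '{print $5 $6}')  odm: $(odmget -q value3=$d CuDvDr | awk -F\" '/value[12]/ {printf "%s,", $2}')"
done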

fixing broken fileset issue in HACMP

We updated HACMP 5.3 to 5.5 and are seeing lppchk flag three filesets:

# lppchk -v ==> The 5.3 versions of 3 HACMP filesets show up as "BROKEN"
cluster.es.cspoc.cmds
cluster.es.cspoc.dsh
cluster.es.cspoc.rte

# lslpp -l | grep cluster.es.cspoc ==> Only the 5.5 versions show up

We first tarred up the ODM as a backup:
# cd /
# tar -cvf /tmp/odm.tar ./etc/objrepos ./usr/lib/objrepos


The cluster filesets were upgraded to HACMP 5.5, but the install gave
messages that the following filesets are broken...

cluster.es.cspoc.* 5.3


# export ODMDIR=/usr/lib/objrepos
# odmget -q "name=cluster.es.cspoc.cmds and rel=3" lpp

# lppchk -v
lppchk:  The following filesets need to be installed or corrected to bring
         the system to a consistent state:

  bos.txt.bib.data 4.1.0.0                (not installed; requisite fileset)
  cluster.es.cspoc.cmds 5.3.0.3           (BROKEN)
  cluster.es.cspoc.dsh 5.3.0.0            (BROKEN)
  cluster.es.cspoc.rte 5.3.0.3            (BROKEN)
# export ODMDIR=/usr/lib/objrepos
# odmget -q "lpp_name=cluster.es.cspoc.cmds and rel=3" product

product:
        lpp_name = "cluster.es.cspoc.cmds"
        comp_id = "5765-F6200"
        update = 0
        cp_flag = 273
        fesn = ""
        name = "cluster.es.cspoc"
        state = 10
        ver = 5
        rel = 3
        mod = 0
        fix = 0
        ptf = ""
        media = 3
        sceded_by = ""
        fixinfo = ""
        prereq = "*coreq cluster.es.cspoc.rte 5.3.0.0\n\
"
        description = "ES CSPOC Commands"
        supersedes = ""

product:
        lpp_name = "cluster.es.cspoc.cmds"
        comp_id = "5765-F6200"
        update = 1
        cp_flag = 289
        fesn = ""
        name = "cluster.es.cspoc"
        state = 7
        ver = 5
        rel = 3
        mod = 0
        fix = 3
        ptf = ""
        media = 3
        sceded_by = ""
        fixinfo = ""
        prereq = "*ifreq cluster.es.cspoc.rte (5.3.0.0) 5.3.0.1\n\
*ifreq cluster.es.server.diag (5.3.0.0) 5.3.0.1\n\
*ifreq cluster.es.server.rte (5.3.0.0) 5.3.0.1\n\
"
        description = "ES CSPOC Commands"
        supersedes = ""
# odmget -q "name=cluster.es.cspoc.cmds and rel=3" lpp

lpp:
        name = "cluster.es.cspoc.cmds"
        size = 0
        state = 7
        cp_flag = 273
        group = ""
        magic_letter = "I"
        ver = 5
        rel = 3
        mod = 0
        fix = 0
        description = "ES CSPOC Commands"
        lpp_id = 611

# odmdelete -q lpp_id=611 -o lpp
# odmdelete -q "lpp_name=cluster.es.cspoc.cmds and rel=3" -o product
2 objects deleted
# odmdelete -q lpp_id=611 -o lpp
1 objects deleted
# odmdelete -q lpp_id=611 -o inventory
199 objects deleted
# odmdelete -q lpp_id=611 -o history
4 objects deleted

We can clean up the lppchk -v "BROKEN" entries by doing the following:

Getting the lpp_id's:
# export ODMDIR=/usr/lib/objrepos
# odmget -q "name=cluster.es.cspoc.cmds and rel=3" lpp | grep lpp_id
        lpp_id = 611
# odmget -q "name=cluster.es.cspoc.dsh and rel=3" lpp | grep lpp_id
        lpp_id = 604
# odmget -q "name=cluster.es.cspoc.rte and rel=3" lpp | grep lpp_id
        lpp_id = 610

Deleting the 5.3 entries:
# export ODMDIR=/usr/lib/objrepos
# odmdelete -q "lpp_name=cluster.es.cspoc.cmds and rel=3" -o product
# odmdelete -q lpp_id=611 -o lpp
# odmdelete -q lpp_id=611 -o inventory
# odmdelete -q lpp_id=611 -o history
# odmdelete -q "lpp_name=cluster.es.cspoc.dsh and rel=3" -o product
# odmdelete -q lpp_id=604 -o lpp
# odmdelete -q lpp_id=604 -o inventory
# odmdelete -q lpp_id=604 -o history
# odmdelete -q "lpp_name=cluster.es.cspoc.rte and rel=3" -o product
# odmdelete -q lpp_id=610 -o lpp
# odmdelete -q lpp_id=610 -o inventory
# odmdelete -q lpp_id=610 -o history
# export ODMDIR=/etc/objrepos

That will leave you with this:
# lppchk -v
lppchk:  The following filesets need to be installed or corrected to bring
         the system to a consistent state:

  bos.txt.bib.data 4.1.0.0                (not installed; requisite fileset)

For that to go away, you'll need to install that from Volume 1 of your
AIX installation media.
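The same sequence can be scripted for all three filesets (a sketch only; double-check each lpp_id before deleting, since odmdelete is unforgiving):

export ODMDIR=/usr/lib/objrepos
for f in cluster.es.cspoc.cmds cluster.es.cspoc.dsh cluster.es.cspoc.rte
do
  id=$(odmget -q "name=$f and rel=3" lpp | awk '/lpp_id/ {print $3}')
  [ -z "$id" ] && continue        # nothing to clean up for this fileset
  odmdelete -q "lpp_name=$f and rel=3" -o product
  for class in lpp inventory history
  do
    odmdelete -q lpp_id=$id -o $class
  done
done
export ODMDIR=/etc/objrepos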

--------------------------

I followed your procedure and got the following results.  It appears I
don't end up with "bos.txt.bib.data 4.1.0.0" needing to be installed.  I
did notice two of the inventory commands deleting large numbers of
objects and would like to know if that is a potential issue.  Everything
else looks great.

oxxxxxxx:/te/root> export ODMDIR=/usr/lib/objrepos
oxxxxxxx:/te/root> lppchk -v
lppchk:  The following filesets need to be installed or corrected to bring
         the system to a consistent state:

  cluster.es.cspoc.cmds 5.3.0.3           (BROKEN)
  cluster.es.cspoc.dsh 5.3.0.0            (BROKEN)
  cluster.es.cspoc.rte 5.3.0.3            (BROKEN)

oxxxxxxx:/te/root> odmget -q "name=cluster.es.cspoc.cmds and rel=3" lpp | grep lpp_id
        lpp_id = 611
oxxxxxxx:/te/root> odmget -q "name=cluster.es.cspoc.dsh and rel=3" lpp | grep lpp_id
        lpp_id = 604
oxxxxxxx:/te/root> odmget -q "name=cluster.es.cspoc.rte and rel=3" lpp | grep lpp_id
        lpp_id = 610

oxxxxxxx:/te/root> odmdelete -q "lpp_name=cluster.es.cspoc.cmds and rel=3" -o product
0518-307 odmdelete: 0 objects deleted.
oxxxxxxx:/te/root> odmdelete -q lpp_id=611 -o lpp
0518-307 odmdelete: 1 objects deleted.
oxxxxxxx:/te/root> odmdelete -q lpp_id=611 -o inventory
0518-307 odmdelete: 199 objects deleted.
oxxxxxxx:/te/root> odmdelete -q lpp_id=611 -o history
0518-307 odmdelete: 4 objects deleted.
oxxxxxxx:/te/root> odmdelete -q "lpp_name=cluster.es.cspoc.dsh and rel=3" -o product
0518-307 odmdelete: 0 objects deleted.
oxxxxxxx:/te/root> odmdelete -q lpp_id=604 -o lpp
0518-307 odmdelete: 1 objects deleted.
oxxxxxxx:/te/root> odmdelete -q lpp_id=604 -o inventory
0518-307 odmdelete: 3 objects deleted.
oxxxxxxx:/te/root> odmdelete -q lpp_id=604 -o history
0518-307 odmdelete: 2 objects deleted.
oxxxxxxx:/te/root> odmdelete -q "lpp_name=cluster.es.cspoc.rte and rel=3" -o product
0518-307 odmdelete: 0 objects deleted.
oxxxxxxx:/te/root> odmdelete -q lpp_id=610 -o lpp
0518-307 odmdelete: 1 objects deleted.
oxxxxxxx:/te/root> odmdelete -q lpp_id=610 -o inventory
0518-307 odmdelete: 53 objects deleted.
oxxxxxxx:/te/root> odmdelete -q lpp_id=610 -o history
0518-307 odmdelete: 4 objects deleted.
oxxxxxxx:/te/root> export ODMDIR=/etc/objrepos
oxxxxxxx:/te/root> lppchk -v
oxxxxxxx:/te/root>

Fixing broken fileset in AIX

axxxx@xxxxxxx)lppchk -v
lppchk:  The following filesets need to be installed or corrected to bring the system to a consistent state:
openssh.license 4.1.0.5300              (BROKEN)
In order to fix this you'll need to get the base level of the
'openssh.license' fileset and run a force overwrite.
# installp -acFNXYd <device or directory location> openssh.license
This will reinstall the fileset in the committed state and remove the
broken status.


The X11.base.lib fileset was left in the BROKEN state
after a 6100-02 to 6100-04 update.
Action taken: we had the base 6100-04 media.
# installp -acFNXYd . X11.base.rte  -> success
# lppchk -v  -> clean
# oslevel -s  -> 6100-04-01


To correct your phantom fileset problem, please run the following :
$ export ODMDIR=/usr/lib/objrepos
$ odmdelete -q lpp_name="http_server.base.source" -o product
 ==> It should answer "1 object deleted".
Then set the ODM directory back to the default:
$ export ODMDIR=/etc/objrepos

Burn Image to DVD in AIX

Burn Image to DVD
there are two ways to restore a mksysb file.  One is to use NIM, the other is to burn the mksysb image onto
DVD.

  This covers copying a mksysb image to a DVD, or creating an ISO
image with the entire DVD image in it.  So I'll just give you some
sample commands:

To create the mksysb image:
# mksysb -i /some/file
Note: Make sure that the filesystem you are using is either large-file
enabled JFS or JFS2.
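To verify this on an existing JFS filesystem, lsfs -q shows the large-file flag (a sketch; /backup is a hypothetical mount point, and JFS2 filesystems have no such restriction):

# lsfs -q /backup
In the JFS attribute list of the output, look for "bf: true" (big-file enabled).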

To burn that mksysb image onto a DVD (using UDF format):
# mkdvd -U -m /some/file -d /dev/cd0
This will skip the step of creating the mksysb image and use the one you
specify.  Again, this must be done on a system at the same ML or higher
than the original system.

To create an ISO image of the DVD:
# mkdvd -S -m /some/file -d /dev/cd0
If you want the mkdvd command to create the mksysb image for you, just
leave out the -m flag:
# mkdvd -S -d /dev/cd0

To burn an ISO image using an AIX system:
# burn_cd -d /dev/cd0 /some/ISO_file
Note: The -d flag indicates that this is a DVD.  For CDs, leave the -d
out.




Cannot create a file on a NFS mount point


NFS master 'tsm2' exporting /mksysb. Trying to mount /mksysb

mount tsm2:/mksysb /mnt // mounted and available.

/usr/bin/mksysb -i /mnt/filename >> cannot open /mnt/filename
permission denied

cd /mnt
touch jack.out >> cannot create

logged in as root.

#showmount -e tsm2 >> /mksysb everyone

# ls -ld /mnt >> 777 root,system

# hostname -> pxxxxxx2

On the NFS server:

host pxxxxx2 >> pxxxxxt.fxxxxxxr.com is 10.20.100.5 aliases:
pxxxxxxt.fxxxxxd.com

host 10.20.100.5 >> pxxxxxxxxt.fxxxxxxxr.com is 10.20.100.5

more /etc/netsvc.conf >> nothing uncommented.

more /etc/hosts >> 10.20.100.5 is not found.

umount /mksysb from the client

on the NFS server:

exportfs -u /mksysb
more /etc/exports >>  /mksysb (no permissions)
cp /etc/exports /etc/exports.old
vi /etc/exports >> comment out /mksysb
smitty mknfsexp >>  Add a Directory to Exports List
  Hosts allowed root access  pxxxxxxxt

# showmount -e >> /mksysb everyone
# more /etc/exports >> /mksysb -root=tsm2  << tsm2 should be pxxxxxxt
# smitty rmnfsexp >> /mksysb
# showmount -e >> mksysb is not listed.
# smitty mknfsexp >> Hosts allowed root access  pxxxxxxxt
# showmount -e >> /mksysb everyone

On the NFS client:

# mount tsm2:/mksysb /mnt
# touch file >> created the file.

/usr/sbin/mksysb -i /mnt/filename // the mksysb is running.
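For reference, the working export on the NFS server ends up looking roughly like this (a sketch; pxxxxxxt stands for the client hostname exactly as the server resolves it):

# grep mksysb /etc/exports
/mksysb -root=pxxxxxxt
# exportfs -va          (re-export after any manual edit of /etc/exports)

Without a -root= entry matching the client's resolved hostname, root on the client is mapped to nobody, which is why the mksysb could not write to the mount.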


Invalid Login for all users after 5.3 migration

Invalid Login for all users after 5.3 migration
.
System Model: 9117
System Serial Number: 00-xxxxxxx
Operating System:  AIX 5L
Product Group:  AIX Base Operating System 5L V530 R530 (5765G0300)
.
Environment:
Migrated from 5.2 to 5.3-05
Root and all users get 3004-007 Invalid Login ... after migration.
.
Problem:
Able to ssh to server.  The oslevel is 5300-05 now.  I have not found key differences in /etc/security files.

I reset root password but cannot login via console window.  I reset my userid password but get the same error message.

telnet fails with error though ssh works.

 usrck -l ALL
The system is inaccessible to daemon, due to the following:
        User account is expired.
        User has no password.
        User password is expired and only system administrator can change it.
The system is inaccessible to bin, due to the following:
        User account is expired.
        User has no password.
        User password is expired and only system administrator can change it.
The system is inaccessible to sys, due to the following:
        User account is expired.
        User has no password.
        User password is expired and only system administrator can change it.
The system is inaccessible to adm, due to the following:
        User has no password.
        User password is expired and only system administrator can change it.
The system is inaccessible to guest, due to the following:
        User has no password.
        User password is expired and only system administrator can change it.
        User denied access by login, rlogin applications.
The system is inaccessible to nobody, due to the following:
        User account is expired.
        User has no password.
        User password is expired and only system administrator can change it.
The system is inaccessible to lpd, due to the following:
        User account is expired.
        User has no password.
        User password is expired and only system administrator can change it.


You're getting a lot of different errors for each user: "User denied access
by login, rlogin applications", "User password is expired", "User has too
many consecutive failed login attempts". Some of these errors might be
expected. For example, you might not want an ID to be able to log in
remotely. Some IDs are getting more than one error.
There are multiple things to try. If you can, ssh in as root and do a
test on one of the IDs, mtaylor for example. Change the password for
mtaylor and see if that user can log in using the new
password.
To do that, as root run the command: passwd mxxxxr

For users getting the error "User has too many consecutive failed login attempts", edit /etc/security/user and change loginretries to 0 in the stanza for sxxxxxxxxi, then see if sxxxxxxxi can log in.
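Instead of editing /etc/security/user by hand, the same change can be made with chsec (a sketch; the stanza name is the obfuscated user ID from above):

# chsec -f /etc/security/user -s sxxxxxxxxi -a loginretries=0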

If there's no way you can log in as root, what you're going to have to do is boot into maintenance mode to make changes.

Start from here first and let me know what happens.


Here is what it turned out to be.  Back in the 5.2 days, I implemented an "su only" script per instructions by IBM for a slick post-authentication method to force su's for service accounts.  I ran into problems getting this to work on 5.3 and it turns out there are workarounds in smit with 5.3 (so I no longer used this approach).

default:
        admin = false
        login = true
        su = true
        daemon = true
        rlogin = true
        sugroups = ALL
        admgroups =
        ttys = ALL
        auth1 = SYSTEM,auth_method
        auth2 = NONE

The key variable is auth1.  It specifies SYSTEM by default but I added an "auth_method" which runs another script at login.  I had forgotten that this method does not work in 5.3 and in fact results in the behavior mentioned: "3004-007 Invalid login or password ...".

It would be good for IBM to document this error code with this setting in auth1 in 5.3.
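If you hit this after a migration, resetting auth1 to its plain default and verifying it is a two-liner (a sketch; standard chsec/lssec syntax):

# chsec -f /etc/security/user -s default -a auth1=SYSTEM
# lssec -f /etc/security/user -s default -a auth1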

0821-067 Ping: the socket creation call failed

When trying to ping as a user that is not root, the following error message was  displayed:

0821-067 Ping: the socket creation call failed.
the file access permissions do not allow the specified
actions.



 Change the setuid bit permissions for /usr/sbin/ping. Enter:
chmod 4555 /usr/sbin/ping
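After the chmod, the permissions should look roughly like this (a sketch; size and date are placeholders):

# ls -l /usr/sbin/ping
-r-sr-xr-x   1 root     system   <size> <date> /usr/sbin/ping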

Cannot reduce filesystem size

Cannot reduce fs size

I have a little issue reducing a filesystem on AIX 5.3. Here is what I get:

root@dccccc-svc/etc>df -g /data/edw/init_stg
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/init_stg_lv01    793.00    425.25   47%     1081     1% /data/edw/init_stg
root@dccccc-svc/etc>oslevel -s
5300-08-03-0831
root@dccccc-svc/etc>

root@dccccc-svc/etc>chfs -a size=-10G /data/edw/init_stg
chfs: There is not enough free space to shrink the file system.

System Model: IBM,9131-52A
Machine Serial Number: 0xxxxxxxxxx
Processor Type: PowerPC_POWER5
Processor Implementation Mode: POWER 5
Processor Version: PV_5_3
Number Of Processors: 4
Processor Clock Speed: 1648 MHz
CPU Type: 64-bit
Kernel Type: 64-bit
LPAR Info: 3 dccccc-normal
Memory Size: 32000 MB
Good Memory Size: 32000 MB
Platform Firmware level: SF240_332
Firmware Version: IBM,SF240_332

That happens when you try to reduce a big chunk of data (in this case
10G) that may not be contiguous in the filesystem because you have files
scattered everywhere.

1. Try to defragment the FS (defragfs -r can be run first to report the
current and expected state; run without a report flag, defragfs actually
defragments):

#defragfs /data/edw/init_stg

2. If you still can't reduce it after this, try reducing the FS in
smaller chunks.
Instead of 10G at a time, try reducing 1 or 2 GB. Then repeat the
operation.

3. Try looking for large files using the find command and move them out
temporarily, just to see if the filesystem can be shrunk without them
(+2048 means larger than 2048 512-byte blocks, i.e. 1 MB):

#find /<filesystem> -xdev -size +2048 -ls | sort -r +6 | pg

4. Sometimes processes hold big files open and use lots of temporary space in
those filesystems.
You could check the processes/applications running against the filesystem
and stop them temporarily, if you can.
#fuser -cu[x] <filesystem>

Please, let me know if this works.
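If single large reductions keep failing, step 2 can be scripted to shave the filesystem down in 1 GB slices until chfs refuses (a sketch, using the filesystem from this example and an arbitrary cap of 10 iterations):

i=0
while [ $i -lt 10 ]
do
    chfs -a size=-1G /data/edw/init_stg || break
    i=$(( i + 1 ))
done
df -g /data/edw/init_stg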

------------------------------------------
Explanations to the behavior of shrinkfs:

In the beginning of the JFS2 filesystem, there is the superblock, the
superblock backup, and then the data and metadata of the filesystem.  At
the end is the inline log (if there is one), and the fsck working area.

The way the filesystem shrink works is this:  When chfs is run and a
size is given (either -NUM or an absolute NUM size) AIX calculates where
that exists within the filesystem.  This marker is known as "the fence".
The system then calculates how much data is left outside the fence, that
must be moved inside it (since we don't want to lose data).  It
calculates the free space available, and subtracts a minimal amount for
the fsck working area and inline log (if any) that must go at the tail
end of the filesystem.

What chfs has to do is some complex calculating: in the area outside the
fence, is there any data to be saved and moved inside? In the area
inside the fence, how much data is there?  Is it contiguous?  How much
free space is there we have to play with?  Is there enough space to move
the data from outside the fence inside it to save it?  And lastly, is
there enough space to move the fsck working area and inline logs inside
also along with these?

It does not try to reorganize the data in any way.  If a large file
outside the fence is made up of contiguous extents, then AIX looks for
an equivalent contiguous free space area inside the fence to move the
file to.  If it can't find one, either due to a lack of space or free
space fragmentation, it fails this operation and won't shrink the
filesystem.  The chfs shrink will also not purposely fragment a file to
force it to fit within fragmented free space.

In some cases running defragfs on the filesystem to defragment the files
will help, but many times it doesn't.  The reason is because the purpose
of defragfs is to coalesce files into more contiguous extents, but not
to coalesce the free space in between them.

If non-contiguous free space is the issue, the only way to get them to
coalesce into large enough regions is to back up the data, remove it,
and restore it.  Then the filesystem shrink may find enough contiguous
free space when chfs is run to move the data outside the fence into.

There's a limit to how much chfs can shrink a filesystem. This is
because chfs has to take into account not only the data you are
moving around, but it tries to keep the contiguous blocks of data in
files still contiguous. So if you have a filesystem with a lot of
space that is broken up into small areas, but you are moving around
large files it may fail even though it looks like you have a lot of
space left to shrink.

The free space reported by the df command is not necessarily the space
that can be truncated by a shrinkFS request, due to filesystem
fragmentation. A fragmented filesystem may not be shrunk if it does
not have enough free space for an object to be moved out of the region
to be truncated, and shrinkFS does not perform filesystem
defragmentation. In this case, the chfs command fails with the
message: chfs: There is not enough free space to shrink the
file system - return code 28 (ENOSPC).

One common limitation we see is the
inclusion of large, unfragmented files in a filesystem, such as binary
database files.  If a filesystem consists of a few, but extremely large,
files, then depending on how these are laid out, chfs may fail to find
enough space inside the fence to move the data from outside the fence
when it attempts to shrink the filesystem.

Cannot telnet to Server After Changing Soft Filesize Limit

After changing soft filesize limit to "0", customer is unable to telnet into
server.

Users see error message:

/dev/pts/0: 3004-004 You must "exec" login from the lowest login shell.

Environment: AIX Version 5.1

To unlimit the soft filesize (fsize) value, change it to "-1" in the /etc/security/limits file.
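The same change with chsec (a sketch; the default stanza is assumed, adjust if a per-user stanza was modified):

# chsec -f /etc/security/limits -s default -a fsize=-1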

how to check out the network adapters for errors and network performance

 how to check out the network adapters for errors and network performance.

entstat -d ent1 --> 10/100 PCI 23100020
0 CRC
0 alignment
media speed 100 Full selected and running
0 No Receive Pool Buffer errors
The adapter looks good.
ftp to another machine
login
bin
put "|dd if=/dev/zero bs=32k count=100" /dev/null
between 7 and 8 MB/sec which is good.



lscfg -vl ent0   --> dv210 microcode  --> latest microcode

lslpp -l |grep 14108902  --> 5.3.0.50  --> latest device driver
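The same checks can be condensed into a couple of commands (a sketch; ent1 is the adapter from the example above):

# entstat -d ent1 | grep -iE 'crc|alignment|media speed|no receive pool'
# errpt | grep ent1        (hardware errors logged against the adapter, if any)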

key privileged password, that means someone lock the machine by SMS

When performing an AIX installation from a mksysb backup, remember not to perform the following steps:

Trigger the open firmware prompt
setenv real_base 1000000
reset-all

Doing so can leave the machine dead-locked at the firmware prompt on boot.

Solution: Call an IBM engineer to remove the card battery in order to drain the capacitor.
Tips 2:

When you get the message "key privileged password", it means someone has locked the machine via SMS.

Solution: Call an IBM engineer to remove the card battery in order to drain the capacitor. Then enter the SMS menu and disable the password feature.

How to change the status of a disk from 'removed' to 'active'

After an I/O failure to a disk due to a path problem or system crash, a volume group may show one or more of its disks in the removed state. This will cause file systems to not mount and other failures related to the disk.

The status of a Volume Group disk may be seen by:

$ lsvg -p <VolumeGroupName>

where the VolumeGroupName is the Volume Group in question.

Example of viewing a disks in uservg volume group:
# lsvg -p uservg
uservg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk6 removed 639 238 128..00..00..00..110
hdisk7 active 639 639 128..128..127..128..128

The status of a disk is shown in the 'PV STATE' column.

If a disk has a status of 'removed', you may not be able to mount file systems that exist on the disk in question:

# mount /home/user1/userfs
mount: 0506-324 Cannot mount /dev/userlv1 on /home/user1/userfs: There is an input or output error.


Changing the status of a disk to active:

chpv -va <hdisk#>

where hdisk# is the disk in question.

Example of changing hdisk6 to active state in the uservg volume group:
# chpv -va hdisk6
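Afterwards, verify the PV STATE and, if any partitions went stale while the disk was unavailable, resynchronize them (a sketch for the same uservg example):

# lsvg -p uservg            --> hdisk6 should now show 'active'
# lsvg uservg | grep STALE  --> check for stale PVs/PPs
# syncvg -v uservg          --> only needed if stale partitions are reported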


Changing upper bound value in AIX

Actually, the hdisk number doesn't matter to either AIX or HACMP.  The
important bit is the PVID.  So, given that fact, we can skip straight
to what is the next step in getting the disk into the vg, and extending:
the filesystems.

So the next thing to do is run:
chdev -l hdiskX -a pv=yes
on both nodes for the hdisks (hdisk13 on node a, hdisk14 on node b)
this will put the pvid in place.

next, run
smitty hacmp --> extended configuration --> discover hacmp related
information

This will poll both systems and find the new disk

Finally, the changes you'll want to make can be done here:
smitty cl_admin --> shared logical volume management
                    shared volume groups
                    set characteristics of a shared volume group
                    add a disk to a shared volume group

smitty cl_admin --> shared logical volume management
                    shared filesystems
                    set characteristics of a shared filesystem

That should be it.  Using the cl_admin menus propagates all changes
to all nodes in the cluster, so you don't need to worry about that;
we'll take care of it for you.


Everything works until the "set characteristics of a
shared filesystem" step.  When I try to increase the size, I get this
error:
node_ua1: 0516-404 allocp: This system cannot fulfill the allocation request.
node_ua1:       There are not enough free partitions or not enough physical volumes
node_ua1:       to keep strictness and satisfy allocation requests.  The command
node_ua1:       should be retried with different allocation characteristics.
node_ua1: cl_rsh had exit code = 1, see cspoc.log and/or clcomd.log for more information
cl_chfs: Error executing chfs  -a size="6G" /MQHA/GENUV1/log on node node_ua1
When I list the characteristics of the shared volume group, I get this:
node_ua1: LOGICAL VOLUME:     mquva1log              VOLUME GROUP:   mqma1vg
node_ua1: LV IDENTIFIER:      00cf36dc00004c000000010f074041fc.3     PERMISSION:     read/write
node_ua1: VG STATE:           active/complete        LV STATE:       opened/syncd
node_ua1: TYPE:               jfs2                   WRITE VERIFY:   off
node_ua1: MAX LPs:            512                    PP SIZE:        64 megabyte(s)
node_ua1: COPIES:             1                      SCHED POLICY:   parallel
node_ua1: LPs:                40                     PPs:            40
node_ua1: STALE PPs:          0                      BB POLICY:      relocatable
node_ua1: INTER-POLICY:       minimum                RELOCATABLE:    yes
node_ua1: INTRA-POLICY:       middle                 UPPER BOUND:    1
node_ua1: MOUNT POINT:        /MQHA/GENUV1/log       LABEL:          /MQHA/GENUV1/log
node_ua1: MIRROR WRITE CONSISTENCY: on/ACTIVE
node_ua1: EACH LP COPY ON A SEPARATE PV ?: yes
node_ua1: Serialize IO ?:     NO


The upper bound needs to be increased.  Currently it is set so the
logical volume can only reside on a single disk.
 smitty cl_admin --> shared logical volume management
                     shared logical volumes
                     set characteristics of a logical volume
                     change upper bound setting to 16 or something, just in case

That will set the max number of drives you can spread across to whatever
value you use.  I'd recommend something large, like 16 or 32, again,
just in case you ever need to add more disks

Under the "change a logical volume" option, the "Maximum number of physical
volumes" field corresponds to the upper bound value on the logical volume.
The default is 32, so that's been changed at some point in the past.
I'd recommend either 16, or back to 32, just to avoid this in the future.
If the "range of physical volumes" option is set to minimum, it will
avoid spreading across more than the necessary number of disks anyway.
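The command-line equivalent of that SMIT change is chlv -u (a sketch, using the logical volume and filesystem from the error above; in a cluster, making the change through the C-SPOC menus keeps both nodes in sync, while these are just the underlying commands run on the node owning the VG):

# chlv -u 32 mquva1log
# chfs -a size="6G" /MQHA/GENUV1/log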

Checking root and /usr file systems in AIX

Checking root and /usr file systems
To run the fsck command on / or /usr file system, you must shut down the system and reboot it from removable media because the / (root) and /usr file systems cannot be unmounted from a running system.

The following procedure describes how to run fsck on the / and /usr file systems from the maintenance shell.

With root authority, shut down your system.
Boot from your installation media.

From the Welcome menu, choose the Maintenance option.
From the Maintenance menu, choose the option to access a volume group.
Choose the rootvg volume group. A list of logical volumes that belong to the volume group you selected is displayed.
Choose 2 to access the volume group and to start a shell before mounting file systems. In the following steps, you will run the fsck command using the appropriate options and file system device names. The fsck command checks the file system consistency and interactively repairs the file system. The / (root) file system device is /dev/hd4 and the /usr file system device is /dev/hd2.
To check / file system, type the following:
$ fsck -y /dev/hd4
The -y flag is recommended for less experienced users (see the fsck command).

To check the /usr file system, type the following:
$ fsck -y /dev/hd2
To check other file systems in the rootvg, type the fsck command with the appropriate device names. The device for /tmp is /dev/hd3, and the device for /var is /dev/hd9var.
When you have completed checking the file systems, reboot the system.

Unable to increase the size of the filesystem

chfs -a size=+10G /dev/coc01dblv
0516-404 allocp: This system cannot fulfill the allocation request.
        There are not enough free partitions or not enough physical volumes
        to keep strictness and satisfy allocation requests.  The command
        should be retried with different allocation characteristics.



 root@dcccccccc::/> chfs -a size=+10G /dev/coc01dblv
0516-404 allocp: This system cannot fulfill the allocation request.
        There are not enough free partitions or not enough physical volumes
        to keep strictness and satisfy allocation requests.  The command
        should be retried with different allocation characteristics.

Since your upper bound is 1, the logical volume cannot span more than 1 disk:
 chlv -u 4 <lvname>

        -u upperbound
            Sets the maximum number of physical volumes for new allocation. The value of the upperbound variable
            should be between one and the total number of physical volumes. When using super strictness, the upper
            bound indicates the maximum number of physical volumes allowed for each mirror copy. When using striped
            logical volumes, the upper bound must be multiple of stripewidth. If upperbound is not specified it is
            assumed to be stripewidth for striped logical volumes.
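Putting it together for the example above (a sketch; coc01dblv is the logical volume behind /dev/coc01dblv):

# lslv coc01dblv | grep "UPPER BOUND"
# chlv -u 4 coc01dblv
# chfs -a size=+10G /dev/coc01dblv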

now able to increase the size of the FS

migration attempt to go from 5.2 to 5.3 of HACMP and now when running the verification/synchronization it fails

there was a migration attempt to go from 5.2 to 5.3 of HACMP and now when running the verification/synchronization it fails
with the following:
cldare: Migration has been detected.

ACTION TAKEN:
aix 5.3 hacmp 5.3


* odmget HACMPcluster ==> The cluster_version field should be equal to
the following:
 HACMP 5.2    ==> cluster_version = 7
 HACMP 5.3    ==> cluster_version = 8

 Fix:
 #odmget HACMPcluster > cluster.file
 #vi cluster.file ==> correct the field
 #odmdelete -o HACMPcluster ==> removes the contents of the object class
 #odmadd cluster.file
 #odmget HACMPcluster ==> should now show the correct cluster_version

* odmget HACMPnode | more==> The version for all the nodes in the
cluster should also be:
 HACMP 5.2    ==> version = 7
 HACMP 5.3    ==> version = 8

 Fix:
 #odmget HACMPnode > nodes.file
 #vi nodes.file ==> correct the fields
 #odmdelete -o HACMPnode ==> removes the contents of the object class
 #odmadd nodes.file
 #odmget HACMPnode ==> should now show the correct version

* odmget HACMPrules | more ==> You could run into a problem with the
following 3 rules:
 TE_JOIN_NODE
 TE_FAIL_NODE
 TE_RG_MOVE

 The recovery_prog_path should be set the following:
 "/usr/es/sbin/cluster/events/"

 We have seen issues where for those 3 rules it gets changed to:
 "/usr/lpp/save.config/usr/es/sbin/cluster/events/"

 Fix:
 #odmget HACMPrules > rules.file
 #vi rules.file ==> correct the fields
 #odmdelete -o HACMPrules ==> removes the contents of the object class
 #odmadd rules.file
 #odmget HACMPrules ==> should now show the correct path

 We made the changes listed above and tried the synchronization
again. It failed, saying that it could not connect to the secondary node.

Check /usr/es/sbin/cluster/etc/rhosts --> ALL IP addresses for both
nodes should be in this file.
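For reference, a populated rhosts file is just one address or IP label per line, covering every interface on both nodes (a sketch with placeholder addresses only):

# cat /usr/es/sbin/cluster/etc/rhosts
10.1.1.1
10.1.1.2
9.23.219.10
9.23.219.11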

 We modified the rhosts file and the migration error is gone;
however, there are other configuration errors now.




An "Event six error: ODM internal node failed" appeared after a restart of the
   cluster attempted from his script caused a TE_JOIN_NODE event.



  After running the following (the smitty clstart command), the node's cluster
  started correctly, instead of using his script, which had failed to synchronize
  things.

      # smitty clstart -> ok

   - verification and synchronization was successful this time.

   - started the cluster on the secondary node.

* odmget HACMPrules | more ==> You could run into a problem with the
  following 3 rules:
  TE_JOIN_NODE; TE_FAIL_NODE; TE_RG_MOVE

 The recovery_prog_path should be set the following:
 "/usr/es/sbin/cluster/events/<event.rp>"

 We have seen issues where for those 3 rules it gets changed to:
 "/usr/lpp/save.config/usr/es/sbin/cluster/events/<event.rp>"

 Fix:
 #odmget HACMPrules > rules.file
 #vi rules.file ==> correct the fields
 #odmdelete -o HACMPrules ==> removes the contents of the object class
 #odmadd rules.file
 #odmget HACMPrules ==> should now show the correct path

-----------------------


We got an error message saying that HACMPrules is not found.
--> on the main node: HACMPrules not found in /etc/es/objrepos,
    but the symlink exists.
--> we did an rcp from the backup node.
.
Then we tried to sync --> this time it is HACMPsrvc that is not found
--> we did an rcp of /etc/es/objrepos/* from the backup to the main node.
--> the sync from the backup to the main node is OK
.
smit clstart -> OK on both nodes

 A normal (not a concurrent) shared volume group had a failed disk, which was unmirrored and reduced from the VG outside the C-SPOC utility.
Later the disk was replaced, re-added to the VG, and the LVs re-mirrored.  All of this was done on the primary node where the VG was active. [This was performed outside the C-SPOC utility because using C-SPOC we were unable to unmirror.]
Since the VG changes were done outside C-SPOC (Cluster Single Point of Control), the changes were not synced across to the secondary node. There was therefore a risk that, if a failure occurred and the resource group failed over to the secondary node, it might fail to activate there because the VGDA information on the secondary node was not in sync.

Solution [manual update of the VGDA information on the secondary node; no downtime of any resource group or node is required]:
a)  Record your system (both nodes) and cluster information.
b)  Ensure the new replacement disk is also seen on the secondary node (run cfgmgr). Use its PVID to grep in the lspv output.
c)  Unlock the VG (release the SCSI reserve on the VG/disks) on the primary node:
"varyonvg -bu datavg"
d) Run importvg -L to detect the changes on the secondary node:
importvg -L datavg hdiskX     [hdiskX is any disk of datavg]
(if this command displays an error, you can perform the steps below instead)
or
exportvg datavg     [on the secondary node]
importvg -V 41 -y datavg -n -F hdisk20    [on the secondary node]
41 is the major number of datavg; it must be the same major number as on the primary node.
-n : tells importvg not to vary on the VG (very important).  -F : fast check of the VGDA areas.
hdisk20 is one of the disks of datavg.
e) Run "varyonvg datavg" on the primary node to reimpose the locking/reserves on the VG.

Important: The above steps apply only to a normal shared volume group, not to a concurrent volume group.

Ravi was working on this issue; using this approach he successfully resolved it.







root@dccccccc:/> lsvg -l dcccccccvg
dcccccccvg:
LV NAME             TYPE       LPs   PPs   PVs  LV STATE      MOUNT POINT
hadbaslv            jfs        8     16    2    open/syncd    /hadbas
loglv00             jfslog     1     2     2    open/syncd    N/A
hagfen1lv           jfs        4     8     2    open/syncd    /hagfen1
habfen1lv           jfs        4     8     2    open/syncd    /habfen1
hafenc1lv           jfs        4     8     2    open/syncd    /hafenc1
hagws01lv           jfs        128   256   2    open/syncd    /hagws01
habws01lv           jfs        128   256   2    open/syncd    /habws01
db2_repllv          jfs        128   256   2    open/syncd    /db2/db2_repl
haigb01lv           jfs        64    128   2    open/syncd    /haigb01
db2_tables01lv      jfs        160   320   2    open/syncd    /db2/db2_tables01
db2_tables02lv      jfs        96    192   2    open/syncd    /db2/db2_tables02
db2_indexes01lv     jfs        128   256   2    open/syncd    /db2/db2_indexes01
db2_indexes02lv     jfs        64    128   2    open/syncd    /db2/db2_indexes02
db2_logslv          jfs        128   256   2    open/syncd    /db2/db2_logs
db2_archivelv       jfs        288   576   2    open/syncd    /db2/db2_archive
db2_tmpsp01lv       jfs        192   384   2    open/syncd    /db2/db2_tempspace01
db2_backuplv        jfs        2020  4040  24   open/syncd    /db2/db2_backup
tsmshrlv            jfs        1     2     2    open/syncd    /ha_mnt1/tsmshr
db2_auditlv         jfs        64    64    1    open/syncd    /db2/db2_audit
root@dccccccc:/> lsvg -o
dcccccccvg
rootvg
root@dccccccc:/>

root@dccccccc:/usr/sbin/cluster/utilities> ./clfindres
-----------------------------------------------------------------------------
Group Name     Type       State      Location
-----------------------------------------------------------------------------
udb_rg         cascading  ONLINE     dccccccc
                          OFFLINE    deeeeeee

root@dccccccc:/usr/sbin/cluster/utilities>
root@dccccccc:/usr/sbin/cluster/utilities> ./clshowres

Resource Group Name                          udb_rg
Node Relationship                            cascading
Site Relationship                            ignore
Participating Node Name(s)                   dccccccc deeeeeee
Node Priority
Service IP Label                             dccccccc
Filesystems                                  ALL
Filesystems Consistency Check                fsck
Filesystems Recovery Method                  sequential
Filesystems/Directories to be exported
Filesystems to be NFS mounted
Network For NFS Mount
Volume Groups                                dcccccccvg
Concurrent Volume Groups
Use forced varyon for volume groups, if necessary    false
Disks
GMD Replicated Resources
PPRC Replicated Resources
AIX Connections Services
AIX Fast Connect Services
Shared Tape Resources
Application Servers                          udb_app
Highly Available Communication Links
Primary Workload Manager Class
Secondary Workload Manager Class
Delayed Fallback Timer
Miscellaneous Data
Automatically Import Volume Groups           false
Inactive Takeover                            false
Cascading Without Fallback                   false
SSA Disk Fencing                             false
Filesystems mounted before IP configured     true


Run Time Parameters:

Node Name                                    dccccccc
Debug Level                                  high
Format for hacmp.out                         Standard

Node Name                                    deeeeeee
Debug Level                                  high
Format for hacmp.out                         Standard

root@dccccccc:/usr/sbin/cluster/utilities>

root@dccccccc:/usr/sbin/cluster/utilities> ./cllsserv
libodm: The specified search criteria is incorrectly formed.
        Make sure the criteria contains only valid descriptor names and
        the search values are correct.

Application server [] does not exist.
root@dccccccc:/usr/sbin/cluster/utilities>


root@dccccccc:/usr/sbin/cluster/utilities> lsvg -o
dcccccccvg
rootvg
root@dccccccc:/usr/sbin/cluster/utilities> lsvg dcccccccvg
VOLUME GROUP:       dcccccccvg              VG IDENTIFIER:  0004047a00004c00000000fb5e7fe19e
VG STATE:           active                   PP SIZE:        16 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      13008 (208128 megabytes)
MAX LVs:            256                      FREE PPs:       5852 (93632 megabytes)
LVs:                19                       USED PPs:       7156 (114496 megabytes)
OPEN LVs:           19                       QUORUM:         13
TOTAL PVs:          24                       VG DESCRIPTORS: 24
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         24                       AUTO ON:        yes
MAX PPs per VG:     32512
MAX PPs per PV:     1016                     MAX PVs:        32
LTG size:           128 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
root@dccccccc:/usr/sbin/cluster/utilities>
root@deeeeeee:/> lsvg
rootvg
dcccccccvg
root@deeeeeee:/> lsvg dcccccccvg
0516-010 : Volume group must be varied on; use varyonvg command.
root@deeeeeee:/>


root@deeeeeee:/usr/sbin/cluster> cd uti*
root@deeeeeee:/usr/sbin/cluster/utilities> ./clfindres
-----------------------------------------------------------------------------
Group Name     Type       State      Location
-----------------------------------------------------------------------------
udb_rg         cascading  ONLINE     dccccccc
                          OFFLINE    deeeeeee

root@deeeeeee:/usr/sbin/cluster/utilities> ./clRGinfo
-----------------------------------------------------------------------------
Group Name     Type       State      Location
-----------------------------------------------------------------------------
udb_rg         cascading  ONLINE     dccccccc
                          OFFLINE    deeeeeee

root@deeeeeee:/usr/sbin/cluster/utilities>
root@dccccccc:/usr/sbin/cluster/utilities> lsvg -o
dcccccccvg
rootvg
root@dccccccc:/usr/sbin/cluster/utilities> varyonvg -bu dcccccccvg
root@dccccccc:/usr/sbin/cluster/utilities>


 importvg -L vg0001 <any disk name on this VG>


importvg -L dcccccccvg hdisk20



root@deeeeeee:/usr/sbin/cluster/utilities> lspv
hdisk0          0007986ae34ab7c2                    rootvg          active
hdisk1          0007986ae326c405                    rootvg          active
hdisk2          0004047a5bbbb462                    dcccccccvg
hdisk3          0004047a5bbc52c1                    dcccccccvg
hdisk4          0004047a5bbc65da                    dcccccccvg
hdisk5          0004047a5bbc79ad                    dcccccccvg
hdisk6          0004047a5bbc8bed                    dcccccccvg
hdisk7          0004047a5bbc9ee6                    dcccccccvg
hdisk8          0004047a5bbcb0f7                    dcccccccvg
hdisk9          0004047a5bbcc38d                    dcccccccvg
hdisk10         0004047a5bbcd6ed                    dcccccccvg
hdisk11         0004047a5bbce7d9                    dcccccccvg
hdisk12         0004047a5bbcf9df                    dcccccccvg
hdisk13         0004047a5bbd0c49                    dcccccccvg
hdisk14         0004047a5bbd1cac                    dcccccccvg
hdisk15         0004047a5bbd2fde                    dcccccccvg
hdisk16         0004047a5bbd4259                    dcccccccvg
hdisk17         0004047a5bbd5742                    dcccccccvg
hdisk18         0004047a5bbd6bcd                    dcccccccvg
hdisk19         0004047a5bbd7932                    dcccccccvg
hdisk20         0004047a5bbd8068                    dcccccccvg
hdisk21         0004047a5bbd879d                    dcccccccvg
hdisk23         0004047a5bbd9622                    dcccccccvg
hdisk24         0004047a5bbd9d73                    dcccccccvg
hdisk25         0004047a5bbda4bf                    dcccccccvg
hdisk26         0004047a8abfee03                    dcccccccvg
hdisk27         none                                None
root@deeeeeee:/usr/sbin/cluster/utilities>


root@deeeeeee:/usr/sbin/cluster/utilities> importvg -L dcccccccvg hdisk20
0516-304 getlvodm: Unable to find device id 0004047ad46dd30f in the Device
        Configuration Database.
0516-304 : Unable to find device id 0004047ad46dd30f0000000000000000 in the Device
        Configuration Database.
0516-780 importvg: Unable to import volume group from hdisk20.
root@deeeeeee:/usr/sbin/cluster/utilities>


root@deeeeeee:/usr/sbin/cluster/utilities> importvg -L dcccccccvg hdisk20
0516-304 getlvodm: Unable to find device id 0004047ad46dd30f in the Device
        Configuration Database.
0516-304 : Unable to find device id 0004047ad46dd30f0000000000000000 in the Device
        Configuration Database.
0516-780 importvg: Unable to import volume group from hdisk20.
root@deeeeeee:/usr/sbin/cluster/utilities> lspv
hdisk0          0007986ae34ab7c2                    rootvg          active
hdisk1          0007986ae326c405                    rootvg          active
hdisk2          0004047a5bbbb462                    dcccccccvg
hdisk3          0004047a5bbc52c1                    dcccccccvg
hdisk4          0004047a5bbc65da                    dcccccccvg
hdisk5          0004047a5bbc79ad                    dcccccccvg
hdisk6          0004047a5bbc8bed                    dcccccccvg
hdisk7          0004047a5bbc9ee6                    dcccccccvg
hdisk8          0004047a5bbcb0f7                    dcccccccvg
hdisk9          0004047a5bbcc38d                    dcccccccvg
hdisk10         0004047a5bbcd6ed                    dcccccccvg
hdisk11         0004047a5bbce7d9                    dcccccccvg
hdisk12         0004047a5bbcf9df                    dcccccccvg
hdisk13         0004047a5bbd0c49                    dcccccccvg
hdisk14         0004047a5bbd1cac                    dcccccccvg
hdisk15         0004047a5bbd2fde                    dcccccccvg
hdisk16         0004047a5bbd4259                    dcccccccvg
hdisk17         0004047a5bbd5742                    dcccccccvg
hdisk18         0004047a5bbd6bcd                    dcccccccvg
hdisk19         0004047a5bbd7932                    dcccccccvg
hdisk20         0004047a5bbd8068                    dcccccccvg
hdisk21         0004047a5bbd879d                    dcccccccvg
hdisk23         0004047a5bbd9622                    dcccccccvg
hdisk24         0004047a5bbd9d73                    dcccccccvg
hdisk25         0004047a5bbda4bf                    dcccccccvg
hdisk26         0004047a8abfee03                    dcccccccvg
hdisk27         none                                None
root@deeeeeee:/usr/sbin/cluster/utilities>


root@dccccccc:/usr/sbin/cluster/utilities> ls -l /dev/dcccccccvg
crw-r-----   1 root     system       41,  0 Jul 20 02:43 /dev/dcccccccvg
root@dccccccc:/usr/sbin/cluster/utilities>



root@deeeeeee:/usr/sbin/cluster/utilities> ls -l /dev/dcccccccvg
crw-r-----   1 root     system       41,  0 Oct 28 14:37 /dev/dcccccccvg
root@deeeeeee:/usr/sbin/cluster/utilities>


importvg -V 41 -y dcccccccvg -n -F hdisk20


root@deeeeeee:/usr/sbin/cluster/utilities> lsvg -o
rootvg
root@deeeeeee:/usr/sbin/cluster/utilities> lsvg
rootvg
dcccccccvg
root@deeeeeee:/usr/sbin/cluster/utilities> lspv
hdisk0          0007986ae34ab7c2                    rootvg          active
hdisk1          0007986ae326c405                    rootvg          active
hdisk2          0004047a5bbbb462                    dcccccccvg
hdisk3          0004047a5bbc52c1                    dcccccccvg
hdisk4          0004047a5bbc65da                    dcccccccvg
hdisk5          0004047a5bbc79ad                    dcccccccvg
hdisk6          0004047a5bbc8bed                    dcccccccvg
hdisk7          0004047a5bbc9ee6                    dcccccccvg
hdisk8          0004047a5bbcb0f7                    dcccccccvg
hdisk9          0004047a5bbcc38d                    dcccccccvg
hdisk10         0004047a5bbcd6ed                    dcccccccvg
hdisk11         0004047a5bbce7d9                    dcccccccvg
hdisk12         0004047a5bbcf9df                    dcccccccvg
hdisk13         0004047a5bbd0c49                    dcccccccvg
hdisk14         0004047a5bbd1cac                    dcccccccvg
hdisk15         0004047a5bbd2fde                    dcccccccvg
hdisk16         0004047a5bbd4259                    dcccccccvg
hdisk17         0004047a5bbd5742                    dcccccccvg
hdisk18         0004047a5bbd6bcd                    dcccccccvg
hdisk19         0004047a5bbd7932                    dcccccccvg
hdisk20         0004047a5bbd8068                    dcccccccvg
hdisk21         0004047a5bbd879d                    dcccccccvg
hdisk23         0004047a5bbd9622                    dcccccccvg
hdisk24         0004047a5bbd9d73                    dcccccccvg
hdisk25         0004047a5bbda4bf                    dcccccccvg
hdisk26         0004047a8abfee03                    dcccccccvg
hdisk27         none                                None
root@deeeeeee:/usr/sbin/cluster/utilities>


root@deeeeeee:/usr/sbin/cluster/utilities> lsvg
rootvg
dcccccccvg
root@deeeeeee:/usr/sbin/cluster/utilities> lsvg -o
rootvg
root@deeeeeee:/usr/sbin/cluster/utilities> exportvg dcccccccvg
root@deeeeeee:/usr/sbin/cluster/utilities> lsvg
rootvg
root@deeeeeee:/usr/sbin/cluster/utilities> lspv
hdisk0          0007986ae34ab7c2                    rootvg          active
hdisk1          0007986ae326c405                    rootvg          active
hdisk2          0004047a5bbbb462                    None
hdisk3          0004047a5bbc52c1                    None
hdisk4          0004047a5bbc65da                    None
hdisk5          0004047a5bbc79ad                    None
hdisk6          0004047a5bbc8bed                    None
hdisk7          0004047a5bbc9ee6                    None
hdisk8          0004047a5bbcb0f7                    None
hdisk9          0004047a5bbcc38d                    None
hdisk10         0004047a5bbcd6ed                    None
hdisk11         0004047a5bbce7d9                    None
hdisk12         0004047a5bbcf9df                    None
hdisk13         0004047a5bbd0c49                    None
hdisk14         0004047a5bbd1cac                    None
hdisk15         0004047a5bbd2fde                    None
hdisk16         0004047a5bbd4259                    None
hdisk17         0004047a5bbd5742                    None
hdisk18         0004047a5bbd6bcd                    None
hdisk19         0004047a5bbd7932                    None
hdisk20         0004047a5bbd8068                    None
hdisk21         0004047a5bbd879d                    None
hdisk23         0004047a5bbd9622                    None
hdisk24         0004047a5bbd9d73                    None
hdisk25         0004047a5bbda4bf                    None
hdisk26         0004047a8abfee03                    None
hdisk27         none                                None
root@deeeeeee:/usr/sbin/cluster/utilities>


root@deeeeeee:/usr/sbin/cluster/utilities> cfgmgr -l ssar
root@deeeeeee:/usr/sbin/cluster/utilities> lspv
hdisk0          0007986ae34ab7c2                    rootvg          active
hdisk1          0007986ae326c405                    rootvg          active
hdisk2          0004047a5bbbb462                    None
hdisk3          0004047a5bbc52c1                    None
hdisk4          0004047a5bbc65da                    None
hdisk5          0004047a5bbc79ad                    None
hdisk6          0004047a5bbc8bed                    None
hdisk7          0004047a5bbc9ee6                    None
hdisk8          0004047a5bbcb0f7                    None
hdisk9          0004047a5bbcc38d                    None
hdisk10         0004047a5bbcd6ed                    None
hdisk11         0004047a5bbce7d9                    None
hdisk12         0004047a5bbcf9df                    None
hdisk13         0004047a5bbd0c49                    None
hdisk14         0004047a5bbd1cac                    None
hdisk15         0004047a5bbd2fde                    None
hdisk16         0004047a5bbd4259                    None
hdisk17         0004047a5bbd5742                    None
hdisk18         0004047a5bbd6bcd                    None
hdisk19         0004047a5bbd7932                    None
hdisk20         0004047a5bbd8068                    None
hdisk21         0004047a5bbd879d                    None
hdisk23         0004047a5bbd9622                    None
hdisk24         0004047a5bbd9d73                    None
hdisk25         0004047a5bbda4bf                    None
hdisk26         0004047a8abfee03                    None
hdisk27         none                                None
root@deeeeeee:/usr/sbin/cluster/utilities>


root@deeeeeee:/usr/sbin/cluster/utilities> ifconfig -a
en0: flags=5e080863,c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),PSEG,LARGESEND,CHAIN>
        inet 10.1.50.83 netmask 0xffff0000 broadcast 10.1.255.255
en1: flags=5e080863,c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(INACTIVE),PSEG,LARGESEND,CHAIN>
        inet 9.23.219.215 netmask 0xffffff00 broadcast 9.23.219.255
en3: flags=5e080863,c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),PSEG,LARGESEND,CHAIN>
        inet 192.168.121.14 netmask 0xffffff00 broadcast 192.168.121.255
lo0: flags=e08084b<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT>
        inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
        inet6 ::1/0
         tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
root@deeeeeee:/usr/sbin/cluster/utilities> lsattr -El en0
alias4                    IPv4 Alias including Subnet Mask           True
alias6                    IPv6 Alias including Prefix Length         True
arp           on          Address Resolution Protocol (ARP)          True
authority                 Authorized Users                           True
broadcast                 Broadcast Address                          True
mtu           1500        Maximum IP Packet Size for This Device     True
netaddr       10.1.50.83  Internet Address                           True
netaddr6                  IPv6 Internet Address                      True
netmask       255.255.0.0 Subnet Mask                                True
prefixlen                 Prefix Length for IPv6 Internet Address    True
remmtu        576         Maximum IP Packet Size for REMOTE Networks True
rfc1323                   Enable/Disable TCP RFC 1323 Window Scaling True
security      none        Security Level                             True
state         up          Current Interface Status                   True
tcp_mssdflt               Set TCP Maximum Segment Size               True
tcp_nodelay               Enable/Disable TCP_NODELAY Option          True
tcp_recvspace             Set Socket Buffer Space for Receiving      True
tcp_sendspace             Set Socket Buffer Space for Sending        True
root@deeeeeee:/usr/sbin/cluster/utilities> lsattr -El en3
alias4                       IPv4 Alias including Subnet Mask           True
alias6                       IPv6 Alias including Prefix Length         True
arp           on             Address Resolution Protocol (ARP)          True
authority                    Authorized Users                           True
broadcast                    Broadcast Address                          True
mtu           1500           Maximum IP Packet Size for This Device     True
netaddr       192.168.121.14 Internet Address                           True
netaddr6                     IPv6 Internet Address                      True
netmask       255.255.255.0  Subnet Mask                                True
prefixlen                    Prefix Length for IPv6 Internet Address    True
remmtu        576            Maximum IP Packet Size for REMOTE Networks True
rfc1323                      Enable/Disable TCP RFC 1323 Window Scaling True
security      none           Security Level                             True
state         up             Current Interface Status                   True
tcp_mssdflt                  Set TCP Maximum Segment Size               True
tcp_nodelay                  Enable/Disable TCP_NODELAY Option          True
tcp_recvspace                Set Socket Buffer Space for Receiving      True
tcp_sendspace                Set Socket Buffer Space for Sending        True
root@deeeeeee:/usr/sbin/cluster/utilities> lsattr -El en1
alias4                      IPv4 Alias including Subnet Mask           True
alias6                      IPv6 Alias including Prefix Length         True
arp           on            Address Resolution Protocol (ARP)          True
authority                   Authorized Users                           True
broadcast                   Broadcast Address                          True
mtu           1500          Maximum IP Packet Size for This Device     True
netaddr       9.23.219.215  Internet Address                           True
netaddr6                    IPv6 Internet Address                      True
netmask       255.255.255.0 Subnet Mask                                True
prefixlen                   Prefix Length for IPv6 Internet Address    True
remmtu        576           Maximum IP Packet Size for REMOTE Networks True
rfc1323                     Enable/Disable TCP RFC 1323 Window Scaling True
security      none          Security Level                             True
state         up            Current Interface Status                   True
tcp_mssdflt                 Set TCP Maximum Segment Size               True
tcp_nodelay                 Enable/Disable TCP_NODELAY Option          True
tcp_recvspace               Set Socket Buffer Space for Receiving      True
tcp_sendspace               Set Socket Buffer Space for Sending        True
root@deeeeeee:/usr/sbin/cluster/utilities>
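
Side note on the interface listings: ifconfig -a shows the addresses currently active in the kernel, while lsattr -El enX shows what is stored in the ODM and applied at configuration time. In this log the two agree for en0, en1 and en3. A quick way to compare just the address attributes (a sketch, with en0 as the example):

# ifconfig en0                            -> running address and netmask
# lsattr -El en0 -a netaddr -a netmask    -> ODM-configured address and netmask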

root@deeeeeee:/usr/sbin/cluster/utilities> lspv
hdisk0          0007986ae34ab7c2                    rootvg          active
hdisk1          0007986ae326c405                    rootvg          active
hdisk2          0004047a5bbbb462                    None
hdisk3          0004047a5bbc52c1                    None
hdisk4          0004047a5bbc65da                    None
hdisk5          0004047a5bbc79ad                    None
hdisk6          0004047a5bbc8bed                    None
hdisk7          0004047a5bbc9ee6                    None
hdisk8          0004047a5bbcb0f7                    None
hdisk9          0004047a5bbcc38d                    None
hdisk10         0004047a5bbcd6ed                    None
hdisk11         0004047a5bbce7d9                    None
hdisk12         0004047a5bbcf9df                    None
hdisk13         0004047a5bbd0c49                    None
hdisk14         0004047a5bbd1cac                    None
hdisk15         0004047a5bbd2fde                    None
hdisk16         0004047a5bbd4259                    None
hdisk17         0004047a5bbd5742                    None
hdisk18         0004047a5bbd6bcd                    None
hdisk19         0004047a5bbd7932                    None
hdisk20         0004047a5bbd8068                    None
hdisk21         0004047a5bbd879d                    None
hdisk23         0004047a5bbd9622                    None
hdisk24         0004047a5bbd9d73                    None
hdisk25         0004047a5bbda4bf                    None
hdisk27         0004047ad46dd30f                    None
hdisk26         0004047a8abfee03                    None
root@deeeeeee:/usr/sbin/cluster/utilities> lspv|grep 0004047ad46dd30f
hdisk27         0004047ad46dd30f                    None
root@deeeeeee:/usr/sbin/cluster/utilities>
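
Between the earlier and later lspv listings, hdisk27 on deeeeeee went from "none" to PVID 0004047ad46dd30f, and the grep above just confirms which hdisk now carries that PVID. The log does not show how the PVID appeared (it may simply have been read off the disk once the other node wrote it); if a new disk still shows "none", the usual way to assign one is chdev - a sketch only, with hdisk27 as the example:

# chdev -l hdisk27 -a pv=yes    -> write a PVID to the disk
# lspv | grep hdisk27           -> the PVID should now match what the other node reports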

root@deeeeeee:/usr/sbin/cluster/utilities> importvg -V 41 -y dcccccccvg -n -F hdisk20
dcccccccvg
root@deeeeeee:/usr/sbin/cluster/utilities> lspv
hdisk0          0007986ae34ab7c2                    rootvg          active
hdisk1          0007986ae326c405                    rootvg          active
hdisk2          0004047a5bbbb462                    dcccccccvg
hdisk3          0004047a5bbc52c1                    dcccccccvg
hdisk4          0004047a5bbc65da                    dcccccccvg
hdisk5          0004047a5bbc79ad                    dcccccccvg
hdisk6          0004047a5bbc8bed                    dcccccccvg
hdisk7          0004047a5bbc9ee6                    dcccccccvg
hdisk8          0004047a5bbcb0f7                    dcccccccvg
hdisk9          0004047a5bbcc38d                    dcccccccvg
hdisk10         0004047a5bbcd6ed                    dcccccccvg
hdisk11         0004047a5bbce7d9                    dcccccccvg
hdisk12         0004047a5bbcf9df                    dcccccccvg
hdisk13         0004047a5bbd0c49                    dcccccccvg
hdisk14         0004047a5bbd1cac                    dcccccccvg
hdisk15         0004047a5bbd2fde                    dcccccccvg
hdisk16         0004047a5bbd4259                    dcccccccvg
hdisk17         0004047a5bbd5742                    dcccccccvg
hdisk18         0004047a5bbd6bcd                    dcccccccvg
hdisk19         0004047a5bbd7932                    dcccccccvg
hdisk20         0004047a5bbd8068                    dcccccccvg
hdisk21         0004047a5bbd879d                    None
hdisk23         0004047a5bbd9622                    dcccccccvg
hdisk24         0004047a5bbd9d73                    dcccccccvg
hdisk25         0004047a5bbda4bf                    dcccccccvg
hdisk27         0004047ad46dd30f                    dcccccccvg
hdisk26         0004047a8abfee03                    dcccccccvg
root@deeeeeee:/usr/sbin/cluster/utilities>
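
The importvg above is the key step. Roughly: -V 41 forces the volume group's major number (it must be free, and the same, on every cluster node), -y names the imported VG, -n leaves it varied off after the import so HACMP can vary it on when it acquires the resource group, and -F is the fast import that only reads the VGDAs of the member disks. A sketch of the check-then-import sequence, with the values used in this log:

# lvlstmajor                                    -> confirm major number 41 is free on this node
# importvg -V 41 -y dcccccccvg -n -F hdisk20    -> import without varying the VG on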


root@dccccccc:/usr/sbin/cluster/utilities> lsvg dcccccccvg
VOLUME GROUP:       dcccccccvg              VG IDENTIFIER:  0004047a00004c00000000fb5e7fe19e
VG STATE:           active                   PP SIZE:        16 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      13008 (208128 megabytes)
MAX LVs:            256                      FREE PPs:       5852 (93632 megabytes)
LVs:                19                       USED PPs:       7156 (114496 megabytes)
OPEN LVs:           19                       QUORUM:         13
TOTAL PVs:          24                       VG DESCRIPTORS: 24
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         24                       AUTO ON:        yes
MAX PPs per VG:     32512
MAX PPs per PV:     1016                     MAX PVs:        32
LTG size:           128 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
root@dccccccc:/usr/sbin/cluster/utilities>


root@dccccccc:/usr/sbin/cluster/utilities> lspv
hdisk0          0004047ae34c2b1e                    rootvg          active
hdisk1          0004047ae325c810                    rootvg          active
hdisk2          0004047a5bbbb462                    dcccccccvg     active
hdisk3          0004047a5bbc52c1                    dcccccccvg     active
hdisk4          0004047a5bbc65da                    dcccccccvg     active
hdisk5          0004047a5bbc79ad                    dcccccccvg     active
hdisk6          0004047a5bbc8bed                    dcccccccvg     active
hdisk7          0004047a5bbc9ee6                    dcccccccvg     active
hdisk8          0004047a5bbcb0f7                    dcccccccvg     active
hdisk9          0004047a5bbcc38d                    dcccccccvg     active
hdisk10         0004047a5bbcd6ed                    dcccccccvg     active
hdisk11         0004047a5bbce7d9                    dcccccccvg     active
hdisk12         0004047a5bbcf9df                    dcccccccvg     active
hdisk13         0004047a5bbd0c49                    dcccccccvg     active
hdisk14         0004047a5bbd1cac                    dcccccccvg     active
hdisk15         0004047a5bbd2fde                    dcccccccvg     active
hdisk16         0004047a5bbd4259                    dcccccccvg     active
hdisk17         0004047a5bbd5742                    dcccccccvg     active
hdisk18         0004047a5bbd6bcd                    dcccccccvg     active
hdisk19         0004047a5bbd7932                    dcccccccvg     active
hdisk20         0004047a5bbd8068                    dcccccccvg     active
hdisk23         0004047a5bbd9622                    dcccccccvg     active
hdisk24         0004047a5bbd9d73                    dcccccccvg     active
hdisk25         0004047a5bbda4bf                    dcccccccvg     active
hdisk26         0004047a8abfee03                    dcccccccvg     active
hdisk21         0004047ad46dd30f                    dcccccccvg     active
root@dccccccc:/usr/sbin/cluster/utilities> lspv|wc -l
      26
root@dccccccc:/usr/sbin/cluster/utilities>


root@deeeeeee:/usr/sbin/cluster/utilities> lspv|wc -l
      27
root@deeeeeee:/usr/sbin/cluster/utilities> rmdev -dl hdisk
root@deeeeeee:/usr/sbin/cluster/utilities> lspv hdisk21
0516-320 : Physical volume hdisk21 is not assigned to
        a volume group.
root@deeeeeee:/usr/sbin/cluster/utilities> rmdev -dl hdisk21
hdisk21 deleted
root@deeeeeee:/usr/sbin/cluster/utilities> lspv
hdisk0          0007986ae34ab7c2                    rootvg          active
hdisk1          0007986ae326c405                    rootvg          active
hdisk2          0004047a5bbbb462                    dcccccccvg
hdisk3          0004047a5bbc52c1                    dcccccccvg
hdisk4          0004047a5bbc65da                    dcccccccvg
hdisk5          0004047a5bbc79ad                    dcccccccvg
hdisk6          0004047a5bbc8bed                    dcccccccvg
hdisk7          0004047a5bbc9ee6                    dcccccccvg
hdisk8          0004047a5bbcb0f7                    dcccccccvg
hdisk9          0004047a5bbcc38d                    dcccccccvg
hdisk10         0004047a5bbcd6ed                    dcccccccvg
hdisk11         0004047a5bbce7d9                    dcccccccvg
hdisk12         0004047a5bbcf9df                    dcccccccvg
hdisk13         0004047a5bbd0c49                    dcccccccvg
hdisk14         0004047a5bbd1cac                    dcccccccvg
hdisk15         0004047a5bbd2fde                    dcccccccvg
hdisk16         0004047a5bbd4259                    dcccccccvg
hdisk17         0004047a5bbd5742                    dcccccccvg
hdisk18         0004047a5bbd6bcd                    dcccccccvg
hdisk19         0004047a5bbd7932                    dcccccccvg
hdisk20         0004047a5bbd8068                    dcccccccvg
hdisk23         0004047a5bbd9622                    dcccccccvg
hdisk24         0004047a5bbd9d73                    dcccccccvg
hdisk25         0004047a5bbda4bf                    dcccccccvg
hdisk27         0004047ad46dd30f                    dcccccccvg
hdisk26         0004047a8abfee03                    dcccccccvg
root@deeeeeee:/usr/sbin/cluster/utilities>
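
The lspv | wc -l comparison showed 27 disk definitions on deeeeeee against 26 on dccccccc. The extra one, hdisk21 on deeeeeee (PVID 0004047a5bbd879d), belongs to no volume group and its PVID does not appear on the other node at all, so it looks like a stale definition left over from a removed or replaced disk. The cleanup, repeated here as a sketch:

# lspv | wc -l        -> run on both nodes; the counts should match
# lspv hdisk21        -> 0516-320: not assigned to a volume group
# rmdev -dl hdisk21   -> delete the stale device definition from the ODM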


root@dccccccc:/usr/sbin/cluster/utilities> ./clfindres
-----------------------------------------------------------------------------
Group Name     Type       State      Location
-----------------------------------------------------------------------------
udb_rg         cascading  ONLINE     dccccccc
                          OFFLINE    deeeeeee

root@dccccccc:/usr/sbin/cluster/utilities>


root@deeeeeee:/usr/sbin/cluster/utilities> ./clfindres
-----------------------------------------------------------------------------
Group Name     Type       State      Location
-----------------------------------------------------------------------------
udb_rg         cascading  ONLINE     dccccccc
                          OFFLINE    deeeeeee

root@deeeeeee:/usr/sbin/cluster/utilities>
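
clfindres was run on both nodes and they agree: resource group udb_rg is ONLINE on dccccccc and OFFLINE on deeeeeee, so the shared volume group is currently owned by dccccccc. The same check can be run from either node with the full path:

# /usr/sbin/cluster/utilities/clfindres    -> resource group state as this node sees it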


root@dccccccc:/usr/sbin/cluster/utilities> lsvg
rootvg
dcccccccvg
root@dccccccc:/usr/sbin/cluster/utilities> lsvg dcccccccvg
VOLUME GROUP:       dcccccccvg              VG IDENTIFIER:  0004047a00004c00000000fb5e7fe19e
VG STATE:           active                   PP SIZE:        16 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      13008 (208128 megabytes)
MAX LVs:            256                      FREE PPs:       5852 (93632 megabytes)
LVs:                19                       USED PPs:       7156 (114496 megabytes)
OPEN LVs:           19                       QUORUM:         13
TOTAL PVs:          24                       VG DESCRIPTORS: 24
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         24                       AUTO ON:        yes
MAX PPs per VG:     32512
MAX PPs per PV:     1016                     MAX PVs:        32
LTG size:           128 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
root@dccccccc:/usr/sbin/cluster/utilities> chvg -a n dcccccccvg
root@dccccccc:/usr/sbin/cluster/utilities> lsvg dcccccccvg
VOLUME GROUP:       dcccccccvg              VG IDENTIFIER:  0004047a00004c00000000fb5e7fe19e
VG STATE:           active                   PP SIZE:        16 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      13008 (208128 megabytes)
MAX LVs:            256                      FREE PPs:       5852 (93632 megabytes)
LVs:                19                       USED PPs:       7156 (114496 megabytes)
OPEN LVs:           19                       QUORUM:         13
TOTAL PVs:          24                       VG DESCRIPTORS: 24
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         24                       AUTO ON:        no
MAX PPs per VG:     32512
MAX PPs per PV:     1016                     MAX PVs:        32
LTG size:           128 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
root@dccccccc:/usr/sbin/cluster/utilities>
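
Finally, chvg -a n turns off automatic varyon for the shared VG - AUTO ON goes from yes to no in the lsvg output above. For an HACMP-managed volume group that is what you want: the cluster manager, not the boot process, decides which node varies it on. The same change would normally be made on the standby node as well once the VG is imported there; the log only shows it on dccccccc. A sketch of the change and the check:

# chvg -a n dcccccccvg               -> disable automatic varyon at boot
# lsvg dcccccccvg | grep "AUTO ON"   -> should now report: AUTO ON: no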

