Actually, the hdisk number doesn't matter to either AIX or HACMP. The
important bit is the PVID. So, given that fact, we can skip straight
to the next step: getting the disk into the VG and extending the
filesystems.
So the next thing to do is run:
chdev -l hdiskX -a pv=yes
on both nodes for the new hdisks (hdisk13 on node a, hdisk14 on node b);
this will put the PVID in place.
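As a quick sanity check (a minimal sketch using this example's disk names), you can
confirm the PVID was assigned and that it matches on both nodes:
    lspv | grep hdisk13        # on node a
    lspv | grep hdisk14        # on node b
Both should show the same PVID in the second column; if either still shows "none",
the chdev hasn't taken effect on that node.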
Next, run:
smitty hacmp --> extended configuration --> discover hacmp related information
This will poll both systems and find the new disk.
Finally, the changes you'll want to make can be done here:
smitty cl_admin --> shared logical volume management --> shared volume groups
    --> set characteristics of a shared volume group --> add a disk to a shared volume group
and then:
smitty cl_admin --> shared logical volume management --> shared filesystems
    --> set characteristics of a shared filesystem
That should be it. Using the cl_admin menus propagates all changes to all
nodes in the cluster, so you don't need to worry about keeping the nodes in
sync yourself; C-SPOC takes care of that for you.
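Once the disk has been added, a quick way to confirm it (a sketch reusing the names
from this example; run it on the node that currently has the volume group varied on)
is:
    lsvg -p mqma1vg        # the new hdisk should now be listed as a member PV
    lsvg mqma1vg           # FREE PPs should have grown by roughly the size of the new disk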
Everything works until the "set characteristics of a
shared filesystem" step. When I try to increase the size, I get this
error:
node_ua1: 0516-404 allocp: This system cannot fulfill the allocation request.
node_ua1: There are not enough free partitions or not enough physical volumes
node_ua1: to keep strictness and satisfy allocation requests. The command
node_ua1: should be retried with different allocation characteristics.
node_ua1: cl_rsh had exit code = 1, see cspoc.log and/or clcomd.log for more information
cl_chfs: Error executing chfs -a size="6G" /MQHA/GENUV1/log on node node_ua1
When I list the characteristics of the shared logical volume, I get this:
node_ua1: LOGICAL VOLUME:     mquva1log              VOLUME GROUP:   mqma1vg
node_ua1: LV IDENTIFIER:      00cf36dc00004c000000010f074041fc.3 PERMISSION: read/write
node_ua1: VG STATE:           active/complete        LV STATE:       opened/syncd
node_ua1: TYPE:               jfs2                   WRITE VERIFY:   off
node_ua1: MAX LPs:            512                    PP SIZE:        64 megabyte(s)
node_ua1: COPIES:             1                      SCHED POLICY:   parallel
node_ua1: LPs:                40                     PPs:            40
node_ua1: STALE PPs:          0                      BB POLICY:      relocatable
node_ua1: INTER-POLICY:       minimum                RELOCATABLE:    yes
node_ua1: INTRA-POLICY:       middle                 UPPER BOUND:    1
node_ua1: MOUNT POINT:        /MQHA/GENUV1/log       LABEL:          /MQHA/GENUV1/log
node_ua1: MIRROR WRITE CONSISTENCY: on/ACTIVE
node_ua1: EACH LP COPY ON A SEPARATE PV ?: yes
node_ua1: Serialize IO ?:     NO
The upper bound needs to be increased. Currently it's set so the logical
volume is only allowed to live on a single disk.
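To put numbers on it, using the listing above and the 6 GB target from the chfs
error: the PP size is 64 MB and the LV currently uses 40 PPs (about 2.5 GB).
Growing /MQHA/GENUV1/log to 6 GB needs 96 PPs, and with UPPER BOUND at 1 the extra
partitions all have to come from the one disk the LV already sits on, which doesn't
have enough free partitions left, hence the 0516-404.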
To fix it:
smitty cl_admin --> shared logical volume management --> shared logical volumes
    --> set characteristics of a logical volume
Change the upper bound setting to 16 or something, just in case.
That sets the maximum number of disks the logical volume is allowed to spread
across. I'd recommend something large, like 16 or 32, again just in case you
ever need to add more disks.
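If you want to see what that menu does under the covers, the same change maps to
chlv run on the node that has the volume group varied on (a sketch using this
example's LV name; in a running cluster you'd normally let the C-SPOC menu make the
change so it gets propagated for you):
    chlv -u 16 mquva1log    # raise the upper bound (max physical volumes for allocation) to 16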
Under the "Change a Logical Volume" option, the "Maximum number of physical
volumes" field corresponds to the UPPER BOUND value shown for the logical
volume above. The default is 32, so that's been changed at some point in the
past. I'd recommend either 16, or going back to 32, just to avoid this in the
future. If the "range of physical volumes" option (the INTER-POLICY, currently
minimum) is set to minimum, the LV will avoid spreading across more disks than
it actually needs anyway.
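With the upper bound raised, rerun the "set characteristics of a shared filesystem"
step and the 6 GB resize should go through. A quick check afterwards (a sketch,
reusing this example's names) is:
    df -g /MQHA/GENUV1/log                  # filesystem size should now show about 6 GB
    lslv mquva1log | grep "UPPER BOUND"     # confirm the raised upper bound stuck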