Monday 8 August 2016

How to extend Root Partition in LVM


I had an ad hoc plan to install the latest version of OEM 13c on my existing RAC VMs instead of creating a dedicated machine for OEM. However, the root filesystem was about to run out of space, and I felt there was no room for a repository database. I added a normal 20 GB disk to each of the VMs and followed the steps below to extend the root partition.

[root@vm2 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_vm1-lv_root
                       36G   25G  8.6G  75% /
tmpfs                  12G  1.6G   11G  14% /dev/shm
/dev/sda1             485M  110M  351M  24% /boot

The root partition has only 8.6 GB of free space; I am going to extend it now.
The newly added disk is sdl:

Disk /dev/sdl: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
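If a hot-added disk does not show up in the VM straight away, the SCSI bus can usually be rescanned without a reboot. A hedged sketch (host numbers vary from system to system):

```shell
# Rescan every SCSI host so the newly attached disk is detected
# without rebooting. "- - -" means: all channels, targets and LUNs.
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done
fdisk -l /dev/sdl   # confirm the new disk is now visible
```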


Logical Volume information

[root@vm2 ~]# lvs
  LV      VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_root vg_vm1 -wi-ao---- 35.57g
  lv_swap vg_vm1 -wi-ao----  3.94g

Follow the steps below to create a partition on the new disk and change the partition type from Linux to Linux LVM (8e).

[root@vm2 ~]# fdisk /dev/sdl
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x6d8ac917.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4):
Value out of range.
Partition number (1-4):
Value out of range.
Partition number (1-4): 1
First cylinder (1-2610, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-2610, default 2610):
Using default value 2610

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
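When the same disk layout has to be repeated on several VMs, the interactive fdisk dialogue above can be fed from stdin instead. This is only a sketch and assumes /dev/sdl is a brand-new, empty disk; scripting fdisk against the wrong device is destructive:

```shell
# n = new partition, p = primary, 1 = partition number,
# the two blank lines accept the default first/last cylinders,
# t / 8e = change the type to Linux LVM, w = write and exit.
printf 'n\np\n1\n\n\nt\n8e\nw\n' | fdisk /dev/sdl
partprobe /dev/sdl   # ask the kernel to re-read the partition table
```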

My root filesystem is ext4, so I formatted the newly created partition with the command below. (Strictly speaking, this mkfs step is not required: pvcreate in the next step initializes the raw partition and overwrites the ext4 metadata written here.)

[root@vm2 ~]# mkfs.ext4 /dev/sdl1
mke2fs 1.43-WIP (20-Jun-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1310720 inodes, 5241198 blocks
262059 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

Initialize the newly created partition as an LVM physical volume.
[root@vm2 ~]# pvcreate /dev/sdl1
  Physical volume "/dev/sdl1" successfully created

Check the volume groups using the command below.

[root@vm2 ~]# vgs
  VG     #PV #LV #SN Attr   VSize  VFree
  vg_vm1   1   2   0 wz--n- 39.51g    0

Extend the volume group with the new physical volume.

[root@vm2 ~]# vgextend vg_vm1 /dev/sdl1
  Volume group "vg_vm1" successfully extended

Extend the logical volume (lv_root) with all the free space in the VG.

[root@vm2 ~]# lvextend -l +100%FREE /dev/vg_vm1/lv_root
  Size of logical volume vg_vm1/lv_root changed from 35.57 GiB (9106 extents) to 55.56 GiB (14224 extents).
  Logical volume lv_root successfully resized.

Finally, resize the filesystem.

[root@vm2 ~]# resize2fs /dev/vg_vm1/lv_root
resize2fs 1.43-WIP (20-Jun-2013)
Filesystem at /dev/vg_vm1/lv_root is mounted on /; on-line resizing required
old_desc_blocks = 3, new_desc_blocks = 4
Performing an on-line resize of /dev/vg_vm1/lv_root to 14565376 (4k) blocks.
The filesystem on /dev/vg_vm1/lv_root is now 14565376 blocks long.
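On reasonably recent LVM versions the last two steps can be combined: lvextend's -r (--resizefs) option calls the matching filesystem resize tool (resize2fs for ext4) after growing the LV. A sketch of the shorter path:

```shell
# Grow lv_root into all remaining free space in the VG and
# resize the ext4 filesystem in one step (-r = --resizefs).
lvextend -r -l +100%FREE /dev/vg_vm1/lv_root
df -h /   # verify the new size
```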

Verify the new filesystem size.

[root@vm2 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_vm1-lv_root
                       55G   25G   28G  48% /
tmpfs                  12G  1.6G   11G  14% /dev/shm
/dev/sda1             485M  110M  351M  24% /boot
[root@vm2 ~]# lvs
  LV      VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_root vg_vm1 -wi-ao---- 55.56g
  lv_swap vg_vm1 -wi-ao----  3.94g
[root@vm2 ~]# vgs
  VG     #PV #LV #SN Attr   VSize  VFree
  vg_vm1   2   2   0 wz--n- 59.50g    0

Friday 29 July 2016

Migrating the OCR and Voting disk among  Disk groups


The agenda is to migrate the OCR and voting disks from one disk group to another. I had a chance to create a two-node 12c RAC GI and database setup in Oracle VirtualBox (OVB). As I opted for only one disk group, i.e. DATA, while installing the GI software, both the data files and the clusterware files were created in the DATA disk group. Though it is a virtual environment, I felt it good practice to keep the OCR and voting disks in a separate disk group.
I added three disks of size 2 GB and set them to shareable mode on both machines.

Steps to format the newly added disks and make them visible

Format the disks with the fdisk utility on the first node, then re-read the partition tables on the second node so it sees the new partitions.
fdisk /dev/sdh
fdisk /dev/sdi
fdisk /dev/sdj

Generate the UUID with scsi_id.
/sbin/scsi_id -g -u -d /dev/sdh
/sbin/scsi_id -g -u -d /dev/sdi
/sbin/scsi_id -g -u -d /dev/sdj

Add the UUIDs to the udev rules file as below
vi /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBa4895aff-2eaa7462", NAME="asm-disk7", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB1a8c4c0e-2c74713d", NAME="asm-disk8", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBf8f2ea7a-ae4d459f", NAME="asm-disk9", OWNER="oracle", GROUP="oinstall", MODE="0660"

To apply the changes and make the disks visible, execute the commands below
/sbin/partprobe /dev/sdh
/sbin/partprobe /dev/sdi
/sbin/partprobe /dev/sdj
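On some systems partprobe alone is not enough for the asm-disk* device names to appear; the udev rules may also need to be reloaded and retriggered. A hedged sketch (on older RHEL/OL 5 systems the rough equivalent is start_udev):

```shell
# Reload the rules file edited above and replay "add" events
# so the asm-disk7..9 devices are created with the right ownership.
udevadm control --reload-rules
udevadm trigger --action=add
ls -l /dev/asm-disk*   # confirm owner oracle:oinstall, mode 0660
```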

Once the above steps are completed, the disks should be visible at the ASM level for disk group creation. Using ASMCA, I created the new disk group +OCR_VOTING.

Steps to move the clusterware files to the newly created disk group.

Prerequisite checks

Check the locations of the existing OCR, voting disks, ASM SPfile and password file

[oracle@vm1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1484
         Available space (kbytes) :     408084
         ID                       :  762963091
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user

[oracle@vm1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   a788105fb6524f4ebf146609c6e2d4d0 (/dev/asm-disk1) [DATA]
 2. ONLINE   f0778099cf4a4f22bf84ff94281178fe (/dev/asm-disk2) [DATA]
 3. ONLINE   67c70817b9404fc1bfbcb85d2e62fc7a (/dev/asm-disk3) [DATA]
Located 3 voting disk(s).

[oracle@vm1 ~]$ asmcmd spget
+DATA/scan-vm/ASMPARAMETERFILE/registry.253.918403997

[oracle@vm1 ~]$ asmcmd pwget --asm
+DATA/orapwASM

Get the name of the cluster

[oracle@vm1 ~]$ olsnodes -c
scan-vm
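Before touching the OCR locations, it is prudent to take a manual backup as root so there is a known-good copy to fall back on:

```shell
# Take an on-demand OCR backup, then list the available backups.
/u01/app/12.1.0/grid/bin/ocrconfig -manualbackup
/u01/app/12.1.0/grid/bin/ocrconfig -showbackup
```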

Steps for Migrating the OCR

[root@vm1 ~]# /u01/app/12.1.0/grid/bin/ocrconfig -add +OCR_VOTING
[root@vm1 ~]# /u01/app/12.1.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1484
         Available space (kbytes) :     408084
         ID                       :  762963091
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
         Device/File Name         : +OCR_VOTING
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded

[root@vm1 ~]# /u01/app/12.1.0/grid/bin/ocrconfig -delete +DATA
[root@vm1 ~]# /u01/app/12.1.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1484
         Available space (kbytes) :     408084
         ID                       :  762963091
         Device/File Name         : +OCR_VOTING
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded

Steps for Migrating the Voting disks

[root@vm2 ~]# /u01/app/12.1.0/grid/bin/crsctl replace votedisk +OCR_VOTING
Successful addition of voting disk 473ca034fcfe4ff2bfc4221442a9d715.
Successful addition of voting disk e4eaf4c7c76d4feabf2ceec6ffb3ebe8.
Successful addition of voting disk d223dd90dbf24fe9bf30764452d16583.
Successful deletion of voting disk a788105fb6524f4ebf146609c6e2d4d0.
Successful deletion of voting disk f0778099cf4a4f22bf84ff94281178fe.
Successful deletion of voting disk 67c70817b9404fc1bfbcb85d2e62fc7a.
Successfully replaced voting disk group with +OCR_VOTING.
CRS-4266: Voting file(s) successfully replaced
[root@vm2 ~]# /u01/app/12.1.0/grid/bin/crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   473ca034fcfe4ff2bfc4221442a9d715 (/dev/asm-disk7) [OCR_VOTING]
 2. ONLINE   e4eaf4c7c76d4feabf2ceec6ffb3ebe8 (/dev/asm-disk8) [OCR_VOTING]
 3. ONLINE   d223dd90dbf24fe9bf30764452d16583 (/dev/asm-disk9) [OCR_VOTING]
Located 3 voting disk(s).
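After replacing the voting files, it is worth confirming that the clusterware stack is still healthy on both nodes:

```shell
# CSS/CRS/EVM daemon status on every node, then the resource summary.
/u01/app/12.1.0/grid/bin/crsctl check cluster -all
/u01/app/12.1.0/grid/bin/crsctl stat res -t
```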



Steps for moving the ASM SPfile and ASM Password file

[oracle@vm1 ~]$ asmcmd spmove '+DATA/scan-vm/ASMPARAMETERFILE/registry.253.918403997' '+OCR_VOTING/scan-vm/spfileASM.ora'
ORA-15032: not all alterations performed
ORA-15028: ASM file '+DATA/scan-vm/ASMPARAMETERFILE/registry.253.918403997' not dropped; currently being accessed (DBD ERROR: OCIStmtExecute)
[oracle@vm1 ~]$ asmcmd spget
+OCR_VOTING/scan-vm/spfileASM.ora
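The ORA-15032/ORA-15028 pair above is expected: spmove copies the spfile to the new location and updates the profile, but the old copy in +DATA cannot be dropped while the running ASM instance still has it open. After the next restart of the stack, the stale file can be removed by hand, for example:

```shell
# Run only after clusterware/ASM has been restarted on all nodes,
# so the old spfile is no longer in use.
asmcmd rm '+DATA/scan-vm/ASMPARAMETERFILE/registry.253.918403997'
```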
[oracle@vm1 ~]$ asmcmd pwmove --asm +DATA/orapwASM +OCR_VOTING/scan-vm/orapwASM
moving +DATA/orapwASM -> +OCR_VOTING/scan-vm/orapwASM
[oracle@vm1 ~]$ asmcmd pwget --asm
+OCR_VOTING/scan-vm/orapwASM