
Monday 11 September 2017

Installing RAC Database in Silent Mode


As part of a migration activity, a standalone database had to be moved from a Solaris server to an Exadata X5-2 machine. I chose to do it using transportable tablespaces (TTS). For migrating the business data with TTS, we first have to create a skeleton database on the Exadata machine. Anyhow, the migration itself is not the topic here.

I created a two-node database on the 3rd and 4th nodes of the Exadata machine using a simple silent-mode script.
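
The original script is not reproduced here. As a rough sketch only, assuming dbca -silent was used (the node names, passwords, character set and memory sizing below are placeholders, not the values from this migration), the creation command looks something like this:

dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName MYDB -sid MYDB \
  -sysPassword <sys_pwd> -systemPassword <system_pwd> \
  -storageType ASM -diskGroupName DATA -recoveryGroupName RECO \
  -nodelist exadb03,exadb04 \
  -characterSet AL32UTF8 \
  -totalMemory 8192

Exact flags vary slightly between dbca releases, so cross-check with dbca -help on the target home.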


dbca prints its progress while it runs and writes the same to a log file (typically under $ORACLE_BASE/cfgtoollogs/dbca); the creation is finished when the log reports 100% complete.

Once dbca completes, our RAC database is ready. If required, create a dedicated listener using netca and configure its address in the local_listener parameter of the database.
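
If a dedicated listener on a non-default port is used, setting local_listener per instance looks roughly like the following; the VIP host names, port and instance names here are placeholders, not values from this environment:

SQL> alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=node3-vip)(PORT=1522))' scope=both sid='MYDB3';
SQL> alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=node4-vip)(PORT=1522))' scope=both sid='MYDB4';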


Cheers!


Monday 8 August 2016

How to extend Root Partition in LVM


I had an ad-hoc plan to install the latest version of OEM 13c on my existing RAC VMs instead of creating a dedicated machine for OEM. However, the root filesystem was about to run out of space and I felt there was no room for a repository DB. So I added a 20 GB normal disk to each of the VMs and followed the steps below to extend the root partition.

[root@vm2 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_vm1-lv_root
                       36G   25G  8.6G  75% /
tmpfs                  12G  1.6G   11G  14% /dev/shm
/dev/sda1             485M  110M  351M  24% /boot

The root partition has only 8.6 GB of free space; I am going to extend it now.
The newly added disk is /dev/sdl (fdisk -l output below):

Disk /dev/sdl: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Logical Volume information

[root@vm2 ~]# lvs
  LV      VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_root vg_vm1 -wi-ao---- 35.57g
  lv_swap vg_vm1 -wi-ao----  3.94g

Follow the steps below to create a partition on the new disk and change its partition type from Linux to Linux LVM (8e).

[root@vm2 ~]# fdisk /dev/sdl
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x6d8ac917.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4):
Value out of range.
Partition number (1-4):
Value out of range.
Partition number (1-4): 1
First cylinder (1-2610, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-2610, default 2610):
Using default value 2610

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

My already mounted root filesystem is ext4, so I formatted the newly created partition with mkfs.ext4. (Strictly speaking this step is optional: pvcreate initialises the partition for LVM anyway, and it is the existing ext4 filesystem on lv_root that gets resized at the end.)

[root@vm2 ~]# mkfs.ext4 /dev/sdl1
mke2fs 1.43-WIP (20-Jun-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1310720 inodes, 5241198 blocks
262059 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

Initialize the newly created partition as an LVM physical volume
[root@vm2 ~]# pvcreate /dev/sdl1
  Physical volume "/dev/sdl1" successfully created

Check the Volume Groups using the below command

[root@vm2 ~]# vgs
  VG     #PV #LV #SN Attr   VSize  VFree
  vg_vm1   1   2   0 wz--n- 39.51g    0

Extend the volume group with the new physical volume

[root@vm2 ~]# vgextend vg_vm1 /dev/sdl1
  Volume group "vg_vm1" successfully extended

Extend the logical volume (lv_root) with all the free space of the VG.

[root@vm2 ~]# lvextend -l +100%FREE /dev/vg_vm1/lv_root
  Size of logical volume vg_vm1/lv_root changed from 35.57 GiB (9106 extents) to 55.56 GiB (14224 extents).
  Logical volume lv_root successfully resized.

Finally, resize the filesystem (online)

[root@vm2 ~]# resize2fs /dev/vg_vm1/lv_root
resize2fs 1.43-WIP (20-Jun-2013)
Filesystem at /dev/vg_vm1/lv_root is mounted on /; on-line resizing required
old_desc_blocks = 3, new_desc_blocks = 4
Performing an on-line resize of /dev/vg_vm1/lv_root to 14565376 (4k) blocks.
The filesystem on /dev/vg_vm1/lv_root is now 14565376 blocks long.

Verify the filesystem size

[root@vm2 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_vm1-lv_root
                       55G   25G   28G  48% /
tmpfs                  12G  1.6G   11G  14% /dev/shm
/dev/sda1             485M  110M  351M  24% /boot
[root@vm2 ~]# lvs
  LV      VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_root vg_vm1 -wi-ao---- 55.56g
  lv_swap vg_vm1 -wi-ao----  3.94g
[root@vm2 ~]# vgs
  VG     #PV #LV #SN Attr   VSize  VFree
  vg_vm1   2   2   0 wz--n- 59.50g    0
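
A side note: resize2fs applies to ext3/ext4 roots like this one. On newer Oracle Linux releases where the root filesystem is XFS, the equivalent final step (an assumption, not part of this setup) would be:

[root@vm2 ~]# xfs_growfs /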



Friday 29 July 2016

Migrating the OCR and Voting disks between Disk groups


The agenda is to migrate (move) the OCR and voting disks from one disk group to another. I had a chance to create a 2-node 12c RAC GI and database setup in Oracle VirtualBox (OVB). As I opted for only one disk group, i.e. DATA, while installing the GI software, both the data files and the clusterware files were created in the DATA disk group. Though it is a virtual environment, I felt it is good practice to keep the OCR and voting disks in a separate disk group.
I added three 2 GB disks and set them to shareable mode so that both machines can see them.

Steps to format the newly added disks and make them visible

Partition the disks with the fdisk utility and sync the partition tables on the second node.
fdisk /dev/sdh
fdisk /dev/sdi
fdisk /dev/sdj

Generate the UUID with scsi_id.
/sbin/scsi_id -g -u -d /dev/sdh
/sbin/scsi_id -g -u -d /dev/sdi
/sbin/scsi_id -g -u -d /dev/sdj

Add the UUID to the udev rules file as below
vi /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBa4895aff-2eaa7462", NAME="asm-disk7", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB1a8c4c0e-2c74713d", NAME="asm-disk8", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBf8f2ea7a-ae4d459f", NAME="asm-disk9", OWNER="oracle", GROUP="oinstall", MODE="0660"

To apply the changes and make the disks visible, execute the commands below
/sbin/partprobe /dev/sdh
/sbin/partprobe /dev/sdi
/sbin/partprobe /dev/sdj
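
Depending on the OS release, partprobe alone may not be enough for the asm-disk* device names to appear; reloading the udev rules (an extra step, not in the original flow) usually helps:

/sbin/udevadm control --reload-rules
/sbin/udevadm trigger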

Once the above steps are completed, the disks should be visible to ASM and can be used to create the disk group. Using ASMCA I created the new disk group +OCR_VOTING.
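
As an alternative to ASMCA, the same disk group can be created from the ASM instance with SQL. A sketch: normal redundancy across the three new disks allows three voting files, and compatible.asm has to be high enough to hold clusterware files:

sqlplus / as sysasm

create diskgroup OCR_VOTING normal redundancy
  disk '/dev/asm-disk7', '/dev/asm-disk8', '/dev/asm-disk9'
  attribute 'compatible.asm' = '12.1';

If the disk group is created manually like this, remember to mount it on the second node as well (alter diskgroup OCR_VOTING mount;).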

Steps to move the clusterware files to the newly created diskgroup.

Prerequisite: check the locations of the existing OCR, voting disks, ASM spfile and ASM password file.

 [oracle@vm1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1484
         Available space (kbytes) :     408084
         ID                       :  762963091
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user

[oracle@vm1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   a788105fb6524f4ebf146609c6e2d4d0 (/dev/asm-disk1) [DATA]
 2. ONLINE   f0778099cf4a4f22bf84ff94281178fe (/dev/asm-disk2) [DATA]
 3. ONLINE   67c70817b9404fc1bfbcb85d2e62fc7a (/dev/asm-disk3) [DATA]
Located 3 voting disk(s).

[oracle@vm1 ~]$ asmcmd spget
+DATA/scan-vm/ASMPARAMETERFILE/registry.253.918403997

[oracle@vm1 ~]$ asmcmd pwget --asm
+DATA/orapwASM

Get the name of the cluster (it appears in the ASM spfile and password file paths)

[oracle@vm1 ~]$ olsnodes -c
scan-vm

Steps for Migrating the OCR

[root@vm1 ~]# /u01/app/12.1.0/grid/bin/ocrconfig -add +OCR_VOTING
[root@vm1 ~]# /u01/app/12.1.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1484
         Available space (kbytes) :     408084
         ID                       :  762963091
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
         Device/File Name         : +OCR_VOTING
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded

[root@vm1 ~]# /u01/app/12.1.0/grid/bin/ocrconfig -delete +DATA
[root@vm1 ~]# /u01/app/12.1.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1484
         Available space (kbytes) :     408084
         ID                       :  762963091
         Device/File Name         : +OCR_VOTING
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded

Steps for Migrating the Voting disks

[root@vm2 ~]# /u01/app/12.1.0/grid/bin/crsctl replace votedisk +OCR_VOTING
Successful addition of voting disk 473ca034fcfe4ff2bfc4221442a9d715.
Successful addition of voting disk e4eaf4c7c76d4feabf2ceec6ffb3ebe8.
Successful addition of voting disk d223dd90dbf24fe9bf30764452d16583.
Successful deletion of voting disk a788105fb6524f4ebf146609c6e2d4d0.
Successful deletion of voting disk f0778099cf4a4f22bf84ff94281178fe.
Successful deletion of voting disk 67c70817b9404fc1bfbcb85d2e62fc7a.
Successfully replaced voting disk group with +OCR_VOTING.
CRS-4266: Voting file(s) successfully replaced
[root@vm2 ~]# /u01/app/12.1.0/grid/bin/crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   473ca034fcfe4ff2bfc4221442a9d715 (/dev/asm-disk7) [OCR_VOTING]
 2. ONLINE   e4eaf4c7c76d4feabf2ceec6ffb3ebe8 (/dev/asm-disk8) [OCR_VOTING]
 3. ONLINE   d223dd90dbf24fe9bf30764452d16583 (/dev/asm-disk9) [OCR_VOTING]
Located 3 voting disk(s).



Steps for moving the ASM SPfile and ASM Password file

[oracle@vm1 ~]$ asmcmd spmove '+DATA/scan-vm/ASMPARAMETERFILE/registry.253.918403997' '+OCR_VOTING/scan-vm/spfileASM.ora'
ORA-15032: not all alterations performed
ORA-15028: ASM file '+DATA/scan-vm/ASMPARAMETERFILE/registry.253.918403997' not dropped; currently being accessed (DBD ERROR: OCIStmtExecute)
[oracle@vm1 ~]$ asmcmd spget
+OCR_VOTING/scan-vm/spfileASM.ora
[oracle@vm1 ~]$ asmcmd pwmove --asm +DATA/orapwASM +OCR_VOTING/scan-vm/orapwASM
moving +DATA/orapwASM -> +OCR_VOTING/scan-vm/orapwASM
[oracle@vm1 ~]$ asmcmd pwget --asm

+OCR_VOTING/scan-vm/orapwASM
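
The ORA-15032/ORA-15028 from spmove above is expected: the ASM instance still has the old spfile open, so the new location is registered but the old copy stays in +DATA until the stack is restarted, after which it can be removed. As an optional final check (run as root, one node at a time), restart the clusterware and confirm it comes up with the new OCR, voting file and spfile locations:

[root@vm1 ~]# /u01/app/12.1.0/grid/bin/crsctl stop crs
[root@vm1 ~]# /u01/app/12.1.0/grid/bin/crsctl start crs
[root@vm1 ~]# /u01/app/12.1.0/grid/bin/crsctl check cluster -all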

Monday 4 May 2015

Oracle Dataguard Setup


Oracle Data Guard is the high availability and disaster recovery solution provided by Oracle.

For configuring Data Guard we need two servers, ideally in different locations, say Singapore and Hong Kong. A Data Guard configuration maintains a transactionally consistent copy of the production database in the standby database, so if the production database becomes unavailable due to a disaster or a planned outage, the standby database can be switched over to the production role.

Around five years ago I configured my first Data Guard setup, and the steps below are drafted for my own reference.

Steps for Configuring the Oracle Dataguard


1)    Primary database should be in Archive log mode.
2)    Take RMAN Backup on the primary Database.
a)    $ rman target /
RMAN> backup as compressed backupset device type disk format '/optware/oracle/tmp/chtigu_%U' database;
b)    RMAN> backup device type disk format '/optware/oracle/tmp/chtigu_standby.ctl' current controlfile for standby;
         (or) at DB level
 ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/optware/oracle/tmp/chtigu_standby.ctl';
3)    Create a pfile from the primary server's spfile and amend it for use on the standby server
create pfile='/optware/oracle/tmp/initchtisu.ora' from spfile;
vi initchtisu.ora
chtisu.__db_cache_size=314572800
chtisu.__java_pool_size=4194304
chtisu.__large_pool_size=12582912
chtisu.__oracle_base='/optware/oracle'#ORACLE_BASE set from environment
chtisu.__pga_aggregate_target=364904448
chtisu.__sga_target=683671552
chtisu.__shared_io_pool_size=0
chtisu.__shared_pool_size=339738624
chtisu.__streams_pool_size=0
*.audit_file_dest='/optware/oracle/admin/chtisu/audit'
*.audit_sys_operations=true
*.audit_trail='db'
*.compatible='11.2.0.3'
*.control_file_record_keep_time=100
*.control_files='/uchtigu/uoradata/chtigu/chtisu_control01.ctl'
*.db_block_size=8192
*.db_file_multiblock_read_count=16
*.db_name='chtigu'
*.db_unique_name='chtisu'
*.fal_client='chtisu'
*.fal_server='chtigu'
*.job_queue_processes=10
*.local_listener=listener_chtisu
*.log_archive_config='dg_config=(chtigu,chtisu)'
*.log_archive_dest_1='location=/utiguarch/uoradata/chtigu_arch valid_for=(all_logfiles,all_roles) db_unique_name=chtisu'
*.log_archive_dest_2='service=chtigu LGWR ASYNC NOAFFIRM max_failure=10 max_connections=5 reopen=180 valid_for=(online_logfiles,primary_role) db_unique_name=chtigu'
*.log_archive_dest_state_2='DEFER'
*.log_archive_format='chtigu_%s_%t_%r.arc'
*.log_archive_max_processes=10
*.memory_target=1000M
*.nls_date_format='DD/MM/RRRR'
*.open_cursors=5000
*.os_authent_prefix='EXT$'
*.processes=500
*.recyclebin='off'
*.remote_login_passwordfile='EXCLUSIVE'
*.resource_limit=true
*.sql92_security=TRUE
*.standby_file_management='AUTO'
*.tape_asynch_io=false
*.undo_retention=7200
*.undo_tablespace='undotbs'
4)    If the datafile/logfile filesystem locations differ between primary and standby, make sure to add the parameters below
db_file_name_convert='<primary DB file location>','<standby DB file location>'
log_file_name_convert='<primary redo log location>','<standby redo log location>'

5)    Create password file, set remote_login_passwordfile to Exclusive and enable force_logging
$ cd $ORACLE_HOME/dbs
$ orapwd file=orapwchtigu password=sys123 force=y entries=5 ignorecase=Y
$ sqlplus / as sysdba
SQL> alter system set remote_login_passwordfile='EXCLUSIVE' SCOPE=SPFILE;
SQL> alter database force logging;
restart DB
SQL> select * from v$pwfile_users;
USERNAME                       SYSDB SYSOP SYSAS
------------------------------ ----- ----- -----
SYS                            TRUE  TRUE  FALSE
6)    Set parameters in primary server for DG configuration

a)    alter system set log_archive_config='dg_config=(chtigu,chtisu)' scope=both ;
b)    alter system set log_archive_dest_1='location=/upacregarch/uoradata/cnpacu_arch valid_for=(all_logfiles,all_roles) db_unique_name=chtigu' ;
c)    alter system set log_archive_dest_2='service=chtisu LGWR ASYNC NOAFFIRM max_failure=10 max_connections=5 reopen=180 valid_for=(online_logfiles,primary_role) db_unique_name=chtisu' scope=both;
d)    alter system set fal_server='chtisu' ;
e)    alter system set fal_client='chtigu';
f)     alter system set standby_file_management='AUTO' scope=both;
g)    alter system set log_archive_max_processes=10 scope=both;
h)   alter system set log_archive_dest_state_2='defer';
7)    Create standby logfiles at primary side
alter database add standby logfile thread 1 group 4 '/upacreg1/uoradata/chtigu/chtigu_standby_log01.dbf' size 104857600;
alter database add standby logfile thread 1 group 5 '/upacreg1/uoradata/chtigu/chtigu_standby_log02.dbf' size 104857600;
alter database add standby logfile thread 1 group 6 '/upacreg1/uoradata/chtigu/chtigu_standby_log03.dbf' size 104857600;
alter database add standby logfile thread 1 group 7 '/upacreg1/uoradata/chtigu/chtigu_standby_log04.dbf' size 104857600;
alter database add standby logfile thread 1 group 8 '/upacreg1/uoradata/chtigu/chtigu_standby_log05.dbf' size 104857600;
alter database add standby logfile thread 1 group 9 '/upacreg1/uoradata/chtigu/chtigu_standby_log06.dbf' size 104857600;

8)    Copy the password file, amended pfile, RMAN backup pieces, standby controlfile and tnsnames.ora file to the standby server
cp -ip orapwchtigu /optware/oracle/tmp/orapwchtisu
scp the files under /optware/oracle/tmp to the same location on the standby server
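
Both servers also need tnsnames.ora entries for chtigu and chtisu, since they are referenced by fal_server, fal_client and log_archive_dest_2. A minimal sketch; the host names are placeholders:

chtigu =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <primary_host>)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = chtigu))
  )

chtisu =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <standby_host>)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = chtisu))
  )

If the standby stays in mount state, a static SID_LIST entry in its listener.ora may also be needed for sys connections from the primary to work.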
9)    Login with sys user on primary and standby to cross-check connectivity.
Primary -- $ sqlplus sys@chtigu as sysdba
Standby -- $ sqlplus sys@chtisu as sysdba  (the standby connection can only be verified once the standby database is mounted)
10) On standby server
a)    Set the standby SID in the environment
b)    Create the spfile from the amended pfile (see the sketch after this step)
c)    $ rman target /
RMAN> startup nomount
RMAN> restore standby controlfile from '/optware/oracle/tmp/chtisu_standby.ctl';
RMAN> alter database mount;
RMAN> configure default device type to disk;
RMAN> restore database;
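
A sketch of steps a) and b) above on the standby server, using the pfile and password file copied earlier:

$ export ORACLE_SID=chtisu
$ cp -ip /optware/oracle/tmp/orapwchtisu $ORACLE_HOME/dbs/orapwchtisu
$ sqlplus / as sysdba
SQL> create spfile from pfile='/optware/oracle/tmp/initchtisu.ora';
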
11) If the restore was done with RMAN, connect to the standby database, drop the standby logfiles and re-create them. If the restore is from a filesystem cold backup this is not needed.
sqlplus / as sysdba
alter database drop standby logfile group 4;
alter database drop standby logfile group 5;
alter database drop standby logfile group 6;
alter database drop standby logfile group 7;
alter database drop standby logfile group 8;
alter database drop standby logfile group 9;
alter database add standby logfile thread 1 group 4 '/upacreg1/uoradata/chtigu/chtigu_standby_log01.dbf' size 104857600;
alter database add standby logfile thread 1 group 5 '/upacreg1/uoradata/chtigu/chtigu_standby_log02.dbf' size 104857600;
alter database add standby logfile thread 1 group 6 '/upacreg1/uoradata/chtigu/chtigu_standby_log03.dbf' size 104857600;
alter database add standby logfile thread 1 group 7 '/upacreg1/uoradata/chtigu/chtigu_standby_log04.dbf' size 104857600;
alter database add standby logfile thread 1 group 8 '/upacreg1/uoradata/chtigu/chtigu_standby_log05.dbf' size 104857600;
alter database add standby logfile thread 1 group 9 '/upacreg1/uoradata/chtigu/chtigu_standby_log06.dbf' size 104857600;
12) Put the standby database in managed recovery mode
SQL> recover managed standby database using current logfile disconnect;
13) On the primary database, enable archive destination 2
SQL> alter system set log_archive_dest_state_2=enable scope=both sid='*';
SQL> alter system switch logfile;

14) On the standby side, check whether log shipping is happening (see also the MRP check sketched below)
SQL> select sequence#, first_time, next_time from v$archived_log order by sequence#;
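
To confirm that managed recovery is actually applying the shipped redo, a quick additional check (not in the original notes) on the standby:

SQL> select process, status, thread#, sequence# from v$managed_standby where process like 'MRP%';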

15) In case of any error, cross-check the destination status from the primary side
SQL> select DEST_ID,ERROR,STATUS from v$archive_dest_status where dest_id in (1,2);

16) If the standby DB has to run on RAC, register the DB through srvctl
$ srvctl add database -d chtisu -o /optware/oracle/11.2/db -s mount -r physical_standby

Partitioning in Oracle Database

What is partitioning in theory:

Oracle Partitioning is a separately licensed option of the Oracle database, available only with Oracle EE.
Partitioning allows a table, index or index-organized table (IOT) to be subdivided into smaller pieces. Each piece of these objects is called a partition, and each partition has its own name. From a DBA perspective, a partitioned object has several partitions that can be managed individually or collectively. From an application perspective, however, partitioning of a DB object is transparent: a partitioned object behaves like a non-partitioned one, and it is not necessary to modify application queries when accessing it.

From a DBA point of view, we can achieve several benefits with partitioning. Here are a few:

  •  Say a company has been running for over 10 years and its application stores 10 years of employee data (joining date, salary and other details) in a table partitioned by year, i.e. 10 partitions. When HR queries the employee data for a particular year, the query accesses one partition instead of 10, so data retrieval can be roughly 10 times faster (see the sketch after this list).
  •  I/O contention is reduced. Partitions are physically stored in the datafiles of their tablespaces, which can sit on different drives or filesystems. Hash partitioning can be used to distribute the rows evenly among partitions, giving partitions approximately the same size.
  •  If one partition of the table is unavailable while all remaining partitions are online, the application can still access the table and get results as long as the query does not need the unavailable partition. This is how higher availability can be achieved.
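
As a small illustration of the pruning example above (the table and column names are made up for this sketch), a table partitioned by year and a query that touches only one partition:

create table emp_history (
  emp_id    number,
  join_date date,
  salary    number
)
partition by range (join_date) (
  partition p2014 values less than (to_date('01-01-2015','DD-MM-YYYY')),
  partition p2015 values less than (to_date('01-01-2016','DD-MM-YYYY')),
  partition p_max values less than (maxvalue)
);

-- only partition p2015 is scanned, thanks to partition pruning
select * from emp_history
where  join_date >= to_date('01-01-2015','DD-MM-YYYY')
and    join_date <  to_date('01-01-2016','DD-MM-YYYY');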


Enabling the partitioning option as a DBA

Partition enabling in 10g

Follow the steps below to enable the partitioning option in the Oracle database. In 10g this is done by relinking the binaries; the chopt tool shown further down is only available from 11g Release 2 onwards.
•  Check whether partitioning is enabled or not with the below query
select * from v$option;
•  If disabled, shut down the database and its services using sqlplus or srvctl

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk part_on

$ORACLE_HOME/bin/relink all

•  Bring up the database.
•  Verify again: select * from v$option;

Partition enabling in 11g

cd $ORACLE_HOME/bin
chopt enable partitioning

Disable partitioning

cd $ORACLE_HOME/bin
chopt disable partitioning