
Overview of key DBA features in Toad for DB2 zOS Part I


This video on DDL generation features is part one of an overview of key DBA features in Toad for DB2 z/OS from Dell Software.


Exadata – Clone RAC RDBMS Home


Introduction

Exadata Database Machine is an ideal platform for database consolidation. With multiple databases running on the same Exadata DBM, you may want to isolate the Oracle software binaries. There can be only one Grid Infrastructure home on a database server, but you can install additional Oracle RDBMS homes based on your requirements. All the Oracle RDBMS homes share the same common Grid home.

To separate database administration for different databases, create a separate operating system user account, operating system group and database ORACLE_HOME for each database.

Why Additional Oracle Home?

  • To minimize patching downtime, we will use private Oracle homes.
  • Separate homes provide more security control over database instances in a consolidated environment.

 Limitations:

  • Using separate Oracle homes requires additional disk space.
  • Management complexity increases.

See how to extend /u01 file system size on Exadata at:

http://www.toadworld.com/platforms/oracle/w/wiki/11281.extend-u01-file-system-on-exadata-compute-node

In this article I will demonstrate how to clone an 11.2.0.4 RDBMS home on Exadata X5. The procedure for cloning an RDBMS home is the same on Exadata and non-Exadata machines; the advantage on Exadata is that you can use the DCLI utility to run commands across all the nodes.

Assumptions

  • The oracle user is the owner of the RDBMS home
  • You have the oracle user password for the compute nodes
  • You have the root user password for the compute nodes
  • oracle and root user equivalence is set up between the compute nodes
  • There is sufficient space in the /u01 file system
  • No database outage is required
  • The Oracle RAC software to be used as the source for cloning is already installed and configured

Environment

Exadata Model: X-2 Half Rack HC 4TB

Exadata Components: Storage cells (7), Compute nodes (4) & InfiniBand switches (2)

Exadata Storage cells: oracloudceladm01 – oracloudceladm07

Exadata Compute nodes: oraclouddbadm01 – oraclouddbadm04

Exadata Software Version: 12.1.2.1.3

Exadata DB Version: 11.2.0.4 BP16

Steps

  • Create a tar ball of the existing Oracle RDBMS home

[root@oraclouddbadm01 ~]# cd /u01/app/oracle/product/11.2.0.4/dbhome_1

[root@oraclouddbadm01 dbhome_1]# tar -zcvf /u01/app/oracle/product/11.2.0.4/db11204.tgz .

[root@oraclouddbadm01 dbhome_1]# cd ..

[root@oraclouddbadm01 11.2.0.4]# ls -ltr

drwxrwxr-x 78 oracle oinstall       4096 Mar 23 10:00 dbhome_1

-rw-r--r-- 1 root   root     2666669855 Mar 24 23:55 db11204.tgz
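
Optionally, verify that the archive is readable before distributing it; a quick sanity check using tar's list mode (shown here only for the first few entries):

[root@oraclouddbadm01 11.2.0.4]# tar -tzf db11204.tgz | head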

  • Create the new Oracle home directory and copy the tar ball to the other compute nodes in the cluster. Here we name the new Oracle home dbhome_2.

[root@oraclouddbadm01 11.2.0.4]# dcli -g ~/dbs_group -l root 'cd /u01/app/oracle/product/11.2.0.4/; mkdir dbhome_2'

[root@oraclouddbadm01 11.2.0.4]# dcli -g ~/dbs_group -l root 'ls -l /u01/app/oracle/product/11.2.0.4'

oraclouddbadm01: total 8

oraclouddbadm01: drwxrwxr-x 78 oracle oinstall 4096 Aug 6 08:59 dbhome_2

oraclouddbadm01: drwxrwxr-x 78 oracle oinstall 4096 Aug 9 10:15 dbhome_1

oraclouddbadm01: -rw-r--r-- 1 root   root     2666669855 Mar 24 23:55 db11204.tgz

oraclouddbadm02: total 8

oraclouddbadm02: drwxrwxr-x 78 oracle oinstall 4096 Aug 5 21:49 dbhome_2

oraclouddbadm02: drwxrwxr-x 78 oracle oinstall 4096 Aug 9 10:14 dbhome_1

oraclouddbadm03: total 8

oraclouddbadm03: drwxrwxr-x 78 oracle oinstall 4096 Aug 5 21:47 dbhome_2

oraclouddbadm03: drwxrwxr-x 78 oracle oinstall 4096 Aug 9 10:15 dbhome_1

oraclouddbadm04: total 8

oraclouddbadm04: drwxrwxr-x 78 oracle oinstall 4096 Aug 8 20:45 dbhome_2

oraclouddbadm04: drwxrwxr-x 78 oracle oinstall 4096 Aug 9 10:14 dbhome_1

[root@oraclouddbadm01 11.2.0.4]# scp db11204.tgz oraclouddbadm02:/u01/app/oracle/product/11.2.0.4/

[root@oraclouddbadm01 11.2.0.4]# scp db11204.tgz oraclouddbadm03:/u01/app/oracle/product/11.2.0.4/

[root@oraclouddbadm01 11.2.0.4]# scp db11204.tgz oraclouddbadm04:/u01/app/oracle/product/11.2.0.4/
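
As an alternative to the individual scp commands, dcli can push the file to all nodes in one step using its -f (file to copy) and -d (destination directory) options; a sketch under the same dbs_group setup:

[root@oraclouddbadm01 11.2.0.4]# dcli -g ~/dbs_group -l root -f /u01/app/oracle/product/11.2.0.4/db11204.tgz -d /u01/app/oracle/product/11.2.0.4/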

  • Extract the tar ball on all the compute nodes (a single dcli alternative is shown after the per-node commands)

[root@oraclouddbadm01 11.2.0.4]# cd dbhome_2

[root@oraclouddbadm01 dbhome_2]# ls -ltr

[root@oraclouddbadm01 dbhome_2]# pwd

/u01/app/oracle/product/11.2.0.4/dbhome_2

Node 1:

[root@oraclouddbadm01 dbhome_2]# tar -zxvf /u01/app/oracle/product/11.2.0.4/db11204.tgz

Node 2:

[root@oraclouddbadm02 ~]# cd /u01/app/oracle/product/11.2.0.4/dbhome_2

[root@oraclouddbadm02 dbhome_2]# tar -zxvf /u01/app/oracle/product/11.2.0.4/db11204.tgz

Node 3:

[root@oraclouddbadm03 ~]# cd /u01/app/oracle/product/11.2.0.4/dbhome_2

[root@oraclouddbadm03 dbhome_2]# tar -zxvf /u01/app/oracle/product/11.2.0.4/db11204.tgz

Node 4:

[root@oraclouddbadm04 ~]# cd /u01/app/oracle/product/11.2.0.4/dbhome_2

[root@oraclouddbadm04 dbhome_2]# tar -zxvf /u01/app/oracle/product/11.2.0.4/db11204.tgz
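
Equivalently, the extraction can be run across all nodes with a single dcli call from the first node (a sketch; the quoted command runs remotely on each node):

[root@oraclouddbadm01 ~]# dcli -g ~/dbs_group -l root 'cd /u01/app/oracle/product/11.2.0.4/dbhome_2; tar -zxf /u01/app/oracle/product/11.2.0.4/db11204.tgz'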

  • Change the ownership of the new Oracle home on all the compute nodes

Node 1:

[root@oraclouddbadm01 11.2.0.4]# dcli -g ~/dbs_group -l root 'chown -R oracle:oinstall /u01/app/oracle/product/11.2.0.4/dbhome_2'

  • Verify the ownership of the new Oracle home

[root@oraclouddbadm01 11.2.0.4]# dcli -g ~/dbs_group -l root 'ls -l /u01/app/oracle/product/11.2.0.4'

oraclouddbadm01: total 8

oraclouddbadm01: drwxrwxr-x 78 oracle oinstall 4096 Aug 6 08:59 dbhome_2

oraclouddbadm01: drwxrwxr-x 78 oracle oinstall 4096 Aug 9 10:15 dbhome_1

oraclouddbadm02: total 8

oraclouddbadm02: drwxrwxr-x 78 oracle oinstall 4096 Aug 5 21:49 dbhome_2

oraclouddbadm02: drwxrwxr-x 78 oracle oinstall 4096 Aug 9 10:14 dbhome_1

oraclouddbadm03: total 8

oraclouddbadm03: drwxrwxr-x 78 oracle oinstall 4096 Aug 5 21:47 dbhome_2

oraclouddbadm03: drwxrwxr-x 78 oracle oinstall 4096 Aug 9 10:15 dbhome_1

oraclouddbadm04: total 8

oraclouddbadm04: drwxrwxr-x 78 oracle oinstall 4096 Aug 8 20:45 dbhome_2

oraclouddbadm04: drwxrwxr-x 78 oracle oinstall 4096 Aug 9 10:14 dbhome_1

  • Clone the Oracle Home.

Make sure Oracle Base, Oracle Home and PATH are set correctly.

[oracle@oraclouddbadm01 11.2.0.4]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_2

[oracle@oraclouddbadm01 11.2.0.4]$ export ORACLE_BASE=/u01/app/oracle

[oracle@oraclouddbadm01 11.2.0.4]$ export PATH=$PATH:$ORACLE_HOME/bin

Run the clone.pl script, which performs the main Oracle RAC cloning tasks. Run it as oracle, or as the user that owns the Oracle RAC software.

Node 1:

[oracle@oraclouddbadm01 11.2.0.4]$ perl $ORACLE_HOME/clone/bin/clone.pl ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_2 ORACLE_HOME_NAME=OraDb11g_home2 ORACLE_BASE=/u01/app/oracle '-O"CLUSTER_NODES={oraclouddbadm01, oraclouddbadm02, oraclouddbadm03, oraclouddbadm04}"' '-O"LOCAL_NODE=oraclouddbadm01"'

./runInstaller -clone -waitForCompletion "ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_2" "ORACLE_HOME_NAME=OraDb11g_home2" "ORACLE_BASE=/u01/app/oracle" "CLUSTER_NODES={oraclouddbadm01, oraclouddbadm02, oraclouddbadm03, oraclouddbadm04}" "LOCAL_NODE=oraclouddbadm01" -silent -noConfig -nowait

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 24575 MB   Passed

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2015-03-25_02-33-10AM. Please wait ...Oracle Universal Installer, Version 11.2.0.4.0 Production

Copyright (C) 1999, 2013, Oracle. All rights reserved.

You can find the log of this install session at:

/u01/app/oraInventory/logs/cloneActions2015-03-25_02-33-10AM.log

.................................................................................................... 100% Done.

Installation in progress (Wednesday, March 25, 2015 2:33:16 AM CDT)

..............................................................................                                                 78% Done.

Install successful

Linking in progress (Wednesday, March 25, 2015 2:33:20 AM CDT)

Link successful

Setup in progress (Wednesday, March 25, 2015 2:33:37 AM CDT)

Setup successful

End of install phases.(Wednesday, March 25, 2015 2:33:59 AM CDT)

WARNING:

The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.

/u01/app/oracle/product/11.2.0.4/dbhome_2/root.sh #On nodes oraclouddbadm01

To execute the configuration scripts:

   1. Open a terminal window

   2. Log in as "root"

   3. Run the scripts in each cluster node

The cloning of OraDb11g_home2 was successful.

Please check '/u01/app/oraInventory/logs/cloneActions2015-03-25_02-33-10AM.log' for more details.

Repeat the Clone process on the remaining nodes in the Cluster

Node 2:

[oracle@oraclouddbadm02 11.2.0.4]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_2

[oracle@oraclouddbadm02 11.2.0.4]$ export ORACLE_BASE=/u01/app/oracle

[oracle@oraclouddbadm02 11.2.0.4]$ export PATH=$PATH:$ORACLE_HOME/bin

[oracle@oraclouddbadm02 11.2.0.4]$ perl $ORACLE_HOME/clone/bin/clone.pl ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_2 ORACLE_HOME_NAME=OraDb11g_home2 ORACLE_BASE=/u01/app/oracle '-O"CLUSTER_NODES={oraclouddbadm01, oraclouddbadm02, oraclouddbadm03, oraclouddbadm04}"' '-O"LOCAL_NODE=oraclouddbadm02"'

Node 3:

[oracle@oraclouddbadm03 11.2.0.4]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_2

[oracle@oraclouddbadm03 11.2.0.4]$ export ORACLE_BASE=/u01/app/oracle

[oracle@oraclouddbadm03 11.2.0.4]$ export PATH=$PATH:$ORACLE_HOME/bin

[oracle@oraclouddbadm03 11.2.0.4]$ perl $ORACLE_HOME/clone/bin/clone.pl ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_2 ORACLE_HOME_NAME=OraDb11g_home2 ORACLE_BASE=/u01/app/oracle '-O"CLUSTER_NODES={oraclouddbadm01, oraclouddbadm02, oraclouddbadm03, oraclouddbadm04}"' '-O"LOCAL_NODE=oraclouddbadm03"'

Node 4:

[oracle@oraclouddbadm04 11.2.0.4]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_2

[oracle@oraclouddbadm04 11.2.0.4]$ export ORACLE_BASE=/u01/app/oracle

[oracle@oraclouddbadm04 11.2.0.4]$ export PATH=$PATH:$ORACLE_HOME/bin

[oracle@oraclouddbadm04 11.2.0.4]$ perl $ORACLE_HOME/clone/bin/clone.pl ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_2 ORACLE_HOME_NAME=OraDb11g_home2 ORACLE_BASE=/u01/app/oracle '-O"CLUSTER_NODES={oraclouddbadm01, oraclouddbadm02, oraclouddbadm03, oraclouddbadm04}"' '-O"LOCAL_NODE=oraclouddbadm04"'

  • Run root.sh with the -silent option on all the nodes in the cluster to finish the cloning

[root@oraclouddbadm01 11.2.0.4]# /u01/app/oracle/product/11.2.0.4/dbhome_2/root.sh -silent

[root@oraclouddbadm02 11.2.0.4]# /u01/app/oracle/product/11.2.0.4/dbhome_2/root.sh -silent

[root@oraclouddbadm03 11.2.0.4]# /u01/app/oracle/product/11.2.0.4/dbhome_2/root.sh -silent

[root@oraclouddbadm04 11.2.0.4]# /u01/app/oracle/product/11.2.0.4/dbhome_2/root.sh -silent
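
Equivalently, a single dcli call from the first node can run root.sh on every node (same user-equivalence assumptions as earlier):

[root@oraclouddbadm01 ~]# dcli -g ~/dbs_group -l root '/u01/app/oracle/product/11.2.0.4/dbhome_2/root.sh -silent'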

Verify Installation

  • View the log file for errors

[oracle@oraclouddbadm01 ~]$ view /u01/app/oraInventory/logs/cloneActions2015-03-25_02-33-10AM.log

[oracle@oraclouddbadm01 ~]$ ls -l oraInstall2015-03-25_02-33-10AM.err

-rw-r----- 1 oracle oinstall       0 Mar 25 02:33 oraInstall2015-03-25_02-33-10AM.err

There are no errors in the .err log file, as the file size is 0 bytes.

  • Verify the inventory is updated with the new Oracle Home

[oracle@oraclouddbadm01 ~]$ cd /u01/app/oraInventory/ContentsXML

[oracle@oraclouddbadm01 ContentsXML]$ vi comps.xml

You should see a stanza similar to the following for the new Oracle home:

<HOME NAME="OraDb11g_home2" LOC="/u01/app/oracle/product/11.2.0.4/dbhome_2" TYPE="O" IDX="3">

   <NODE_LIST>

     <NODE NAME="oraclouddbadm01"/>

     <NODE NAME="oraclouddbadm02"/>

     <NODE NAME="oraclouddbadm03"/>

     <NODE NAME="oraclouddbadm04"/>

   </NODE_LIST>
</HOME>
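
Optionally, you can also confirm that the new home is registered in the inventory with OPatch:

[oracle@oraclouddbadm01 ~]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_2

[oracle@oraclouddbadm01 ~]$ $ORACLE_HOME/OPatch/opatch lsinventory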

  • Verify that the new home is using the RDS protocol

[root@oraclouddbadm01 ~]# su - oracle

[oracle@oraclouddbadm01 ~]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_2

[oracle@oraclouddbadm01 ~]$ echo $ORACLE_HOME

/u01/app/oracle/product/11.2.0.4/dbhome_2

[oracle@oraclouddbadm01 ~]$ which skgxpinfo

/u01/app/oracle/product/11.2.0.4/dbhome_2/bin/skgxpinfo

[oracle@oraclouddbadm01 ~]$ /u01/app/oracle/product/11.2.0.4/dbhome_2/bin/skgxpinfo -v

Oracle RDS/IP (generic)

If the output is anything other than RDS, the new home is not using the RDS Exadata protocol for communication.

After the software is installed, you should run skgxpinfo from your new $ORACLE_HOME/bin directory to ensure that the binaries are compiled with the Reliable Datagram Sockets (RDS) protocol. If not, relink the Oracle binary with the following commands:

[oracle@oraclouddbadm01 ~]$ cd $ORACLE_HOME/bin

[oracle@oraclouddbadm01 ~]$ make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk ipc_rds ioracle

RDS is recommended on Exadata because it runs over the InfiniBand network, which provides greater bandwidth and lower latency.
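
To check all nodes at once, the same verification can be wrapped in dcli (assuming a dbs_group file listing the compute nodes is also available to the oracle user, as used earlier for root):

[oracle@oraclouddbadm01 ~]$ dcli -g ~/dbs_group -l oracle /u01/app/oracle/product/11.2.0.4/dbhome_2/bin/skgxpinfo -v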

Conclusion

In this article we have learned how to clone a RAC RDBMS home on Exadata. Cloning is the easiest and fastest way to create a new Oracle home.

 

Oracle - Wiki


If you were looking for Knowledge Xpert or OraDBpedia, you're in the right place!  We've moved to make your community experience even better by leveraging the scale of Toad World's 3 million users. 

The Oracle Database (commonly referred to as Oracle RDBMS or simply as Oracle) is an object-relational database management system (ORDBMS) produced and marketed by Oracle Corporation. 

This wiki is the collective experience of many Oracle developers and DBAs.  Like Wikipedia, this wiki is an open resource for you to share your knowledge of the Oracle database.  So, join in and create/update the articles!

Learn more about how to write and edit Wiki articles, review Wiki articles, and get your blog syndicated on Toad World.

Peoplesoft


This section will contain articles related to Oracle and Peoplesoft.

Cross Platform Database Migration to 12c using RMAN Incremental Backups


This article demonstrates how to perform a cross-platform database migration using the RMAN incremental backup method. Oracle 12c incorporates a new feature that allows us to migrate data across platforms, including across different endian formats, with minimal downtime. You can also migrate data from 10g and 11g backups to a 12c database using this feature.


In this example, I demonstrate how to transport data from an 11.2.0.3 database on a Solaris machine to a pluggable database (PDB) in a 12.1.0.2 database on a Linux machine. Both platforms have the same endian format, but the steps remain the same if you are migrating across different endian formats. This article assumes that the source datafiles are on a file system, while on the target side (the Linux machine) the files will be placed in an ASM disk group.


Environment:

Source Hostname: solaris10
Source Database Name: sourcedb
Source Database Version: 11.2.0.3

Target Hostname: ora1-1
Target Database Name: targetdb
Target Pluggable Database Name: targetpdb
Target Database Version: 12.1.0.2

Source database details are as below:


$ . oraenv
ORACLE_SID = [sourcedb] ?
The Oracle base has been set to /export/home/oracle
$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Thu Mar 3 16:45:20 2016

Copyright (c) 1982, 2011, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

 

SQL> select status,instance_name,database_role,open_mode,platform_name,platform_id from v$database,v$instance;

STATUS   INSTANCE_NAME   DATABASE_ROLE  OPEN_MODE   PLATFORM_NAME                       PLATFORM_ID
-------- --------------- -------------- ----------- ----------------------------------- -----------
OPEN     sourcedb        PRIMARY        READ WRITE  Solaris Operating System (x86-64)            20

SQL> select file_name from dba_data_files;

FILE_NAME
--------------------------------------------------------------------------------
/export/home/oradata/sourcedb/sourcedb/users01.dbf
/export/home/oradata/sourcedb/sourcedb/undotbs01.dbf
/export/home/oradata/sourcedb/sourcedb/sysaux01.dbf
/export/home/oradata/sourcedb/sourcedb/system01.dbf

Let me create a tablespace called "MYOBJECTS" on the source database.


SQL> create tablespace myobjects datafile '/export/home/oradata/sourcedb/sourcedb/myobjects01.dbf' size 10M;

Tablespace created.

I then create an application user called "MIGAPP" and assign the above tablespace as its default tablespace.


SQL> create user migapp identified by oracle default tablespace myobjects;

User created.

SQL> grant connect,resource to migapp;

Grant succeeded.

 

Let me perform some DDL/DML operations on the source database as the newly created user.

 


SQL> conn migapp/oracle
Connected.
SQL>
SQL> create table test (code number);

Table created.

SQL> insert into test values (100);

1 row created.

SQL> insert into test values (101);

1 row created.

SQL> insert into test values (102);

1 row created.

SQL> commit;

Commit complete.

 

Create another application tablespace, MIGTBS1, and assign it to another application user, TSTAPP, on the source database.

 


SQL> conn / as sysdba
Connected.
SQL> create tablespace migtbs1 datafile '/export/home/oradata/sourcedb/sourcedb/mygtbs101.dbf' size 10M;

Tablespace created.

SQL> create user tstapp identified by oracle default tablespace migtbs1;

User created.

SQL> grant connect,resource to tstapp;

Grant succeeded.

Perform DDL/DML operations as the newly created user TSTAPP:


SQL> conn tstapp/oracle
Connected.
SQL> create table customer(cust_id number);

Table created.

SQL> insert into customer values (1234);

1 row created.

SQL> insert into customer values (83920);

1 row created.

SQL> commit;

Commit complete.

SQL> create index cust_ind on customer (cust_id);

Index created.

Take an L0 (level 0) backup of the newly created tablespaces. Please note that we will migrate only the application tablespaces, not the SYSTEM, SYSAUX, UNDO and USERS tablespaces. If any application objects are allocated in the USERS tablespace on the source side, they should be moved to application tablespaces before migrating (a quick check is shown below).
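
An illustrative check for application-owned segments still sitting in the USERS tablespace:

SQL> select owner, segment_name, segment_type
from dba_segments
where tablespace_name = 'USERS'
and owner not in ('SYS','SYSTEM');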

$ rman target /

Recovery Manager: Release 11.2.0.3.0 - Production on Thu Mar 3 16:58:42 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

connected to target database: SOURCEDB (DBID=312004970)

RMAN> backup incremental level 0 format '/export/home/bkp/%d_inc0_%U.bak' tablespace myobjects,migtbs1;

Starting backup at 03-MAR-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=44 device type=DISK
channel ORA_DISK_1: starting incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/export/home/oradata/sourcedb/sourcedb/myobjects01.dbf
input datafile file number=00006 name=/export/home/oradata/sourcedb/sourcedb/mygtbs101.dbf
channel ORA_DISK_1: starting piece 1 at 03-MAR-16
channel ORA_DISK_1: finished piece 1 at 03-MAR-16
piece handle=/export/home/bkp/SOURCEDB_inc0_0aqvilns_1_1.bak tag=TAG20160303T165939 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 03-MAR-16

Let's perform a few more DML changes on the 2 tablespaces.


SQL> conn migapp/oracle
Connected.
SQL> insert into test values (200);

1 row created.

SQL> insert into test values (201);

1 row created.

SQL> insert into test values (202);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from test;

CODE
----------
200
201
202
100
101
102

6 rows selected.

Perform a few more DML operations as the other user as well.


SQL> conn tstapp/oracle
Connected.
SQL> insert into customer values (73801);

1 row created.

SQL> insert into customer values (3940);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from customer;

CUST_ID
----------
1234
83920
73801
3940

Let me get the details of the tablespaces and their datafiles on the source database.


SQL> conn / as sysdba
Connected.

SQL> select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS
MYOBJECTS
MIGTBS1

7 rows selected.


SQL> select tablespace_name,file_id,file_name from dba_data_files order by tablespace_name;

TABLESPACE_NAME    FILE_ID     FILE_NAME
------------------ ----------- -----------------------------------------------------------------
MIGTBS1            6           /export/home/oradata/sourcedb/sourcedb/mygtbs101.dbf
MYOBJECTS          5           /export/home/oradata/sourcedb/sourcedb/myobjects01.dbf
SYSAUX             2           /export/home/oradata/sourcedb/sourcedb/sysaux01.dbf
SYSTEM             1           /export/home/oradata/sourcedb/sourcedb/system01.dbf
UNDOTBS1           3           /export/home/oradata/sourcedb/sourcedb/undotbs01.dbf
USERS              4           /export/home/oradata/sourcedb/sourcedb/users01.dbf

6 rows selected.

Now take an L1 (level 1) backup of the newly created tablespaces to capture the changes made after the L0 backup.

 


$ rman target /

Recovery Manager: Release 11.2.0.3.0 - Production on Thu Mar 3 17:07:04 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

connected to target database: SOURCEDB (DBID=312004970)

RMAN> backup incremental level 1 format '/export/home/bkp/%d_inc1_%U.bak' tablespace myobjects,migtbs1;

Starting backup at 03-MAR-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=19 device type=DISK
channel ORA_DISK_1: starting incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/export/home/oradata/sourcedb/sourcedb/myobjects01.dbf
input datafile file number=00006 name=/export/home/oradata/sourcedb/sourcedb/mygtbs101.dbf
channel ORA_DISK_1: starting piece 1 at 03-MAR-16
channel ORA_DISK_1: finished piece 1 at 03-MAR-16
piece handle=/export/home/bkp/SOURCEDB_inc1_0bqvim65_1_1.bak tag=TAG20160303T170714 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 03-MAR-16


$ cd /export/home/bkp
$ ls -lrt
total 4704
-rw-r----- 1 oracle oinstall 2326528 Mar 3 16:59 SOURCEDB_inc0_0aqvilns_1_1.bak
-rw-r----- 1 oracle oinstall 73728 Mar 3 17:07 SOURCEDB_inc1_0bqvim65_1_1.bak

Copy these L0 and L1 backup pieces to the target server.


$ scp * oracle@192.168.56.101:/u02/bkp/
Could not create directory '/home/oracle/.ssh'.
The authenticity of host '192.168.56.101 (192.168.56.101)' can't be established.
RSA key fingerprint is a4:47:6c:f2:b0:35:18:78:20:37:2e:c1:6f:b8:de:9a.
Are you sure you want to continue connecting (yes/no)? yes
Failed to add the host to the list of known hosts (/home/oracle/.ssh/known_hosts).
oracle@192.168.56.101's password:
SOURCEDB_inc0_0aqvil 100% |********************************************************************************************************************| 2272 KB 00:00
SOURCEDB_inc1_0bqvim 100% |********************************************************************************************************************| 73728 00:00
$

On the target server, since we will be restoring into the pluggable database, I add a TNS alias for the PDB service to the TNSNAMES.ora file:

TARGETPDB =
(DESCRIPTION =
   (ADDRESS = (PROTOCOL = TCP)(HOST = ora1-1.mydomain)(PORT = 1521))
   (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = targetpdb)
   )
)
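
Optionally, you can verify that the alias resolves before using it, for example with the standard tnsping utility:

[oracle@ora1-1 ~]$ tnsping targetpdb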

Using this TNS alias, I connect directly to the pluggable database rather than connecting to the CDB and then switching to the PDB.

Here I connect to the target PDB using the alias created above.


[oracle@ora1-1 ~]$ sqlplus sys/oracle@targetpdb as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Thu Mar 3 17:09:08 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SYS@targetpdb> show pdbs

CON_ID     CON_NAME      OPEN MODE        RESTRICTED
---------- ------------- ---------------- -------------
3          TARGETPDB     READ WRITE       NO

The targetpdb has SYSTEM, SYSAUX and USERS tablespaces. Note that the datafiles of the target database reside on ASM storage.

Now we need to move the data of the 2 newly created tablespaces from sourcedb to this pluggable database.


SYS@targetpdb> select tablespace_name,file_id,file_name from dba_data_files;

TABLESPACE_NAME  FILE_ID      FILE_NAME
---------------  -----------  -----------------------------------------------------------------------------
SYSTEM           8            +DATA/TARGETDB/2C6DF09F05C51EE6E0536538A8C0B4A3/DATAFILE/system.283.904577095
SYSAUX           9            +DATA/TARGETDB/2C6DF09F05C51EE6E0536538A8C0B4A3/DATAFILE/sysaux.284.904577095
USERS           10            +DATA/TARGETDB/2C6DF09F05C51EE6E0536538A8C0B4A3/DATAFILE/users.286.904577183

Restore the 2 tablespaces on the target PDB using the L0 backup. Use the "from platform" clause to let RMAN know that the backup set taken from the source database comes from a different platform. Also note that, since we are restoring from a different platform, the datafiles to be restored are referred to as "foreign datafiles".

In the snippet below, I restore the L0 backup set of the MYOBJECTS tablespace to the pluggable database (targetpdb) on ASM.

 


[oracle@ora1-1 bkp]$ rman target sys/oracle@targetpdb

Recovery Manager: Release 12.1.0.2.0 - Production on Thu Mar 3 17:11:09 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

connected to target database: TARGETDB (DBID=1296113207)

RMAN> restore from platform 'Solaris Operating System (x86-64)' foreign datafile 5 format '+DATA' from backupset '/u02/bkp/SOURCEDB_inc0_0aqvilns_1_1.bak';

Starting restore at 03-MAR-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=54 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring foreign file 00005
channel ORA_DISK_1: reading from backup piece /u02/bkp/SOURCEDB_inc0_0aqvilns_1_1.bak
channel ORA_DISK_1: restoring foreign file 5 to +DATA
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 03/03/2016 17:12:34
ORA-19870: error while restoring backup piece /u02/bkp/SOURCEDB_inc0_0aqvilns_1_1.bak
ORA-19504: failed to create file "+DATA"
ORA-17502: ksfdcre:4 Failed to create file +DATA
ORA-15122: ASM file name '+DATA/TARGETDB/2C6DF09F05C51EE6E0536538A8C0B4A3/DATAFILE/.288.905533953' contains an invalid file number

RMAN throws the error "ORA-15122: ASM file name '+DATA/TARGETDB/2C6DF09F05C51EE6E0536538A8C0B4A3/DATAFILE/.288.905533953' contains an invalid file number". As per MOS note 2028842.1, the restore of a foreign datafile to ASM in 12c fails with this error; it is a known bug.


What next? We'll restore the foreign datafiles to a local disk on the target server and then copy the files from the file system to the ASM disk group.


Let me first restore the foreign datafile (MYOBJECTS tablespace) from the L0 backup to the file system.

 


RMAN> restore from platform 'Solaris Operating System (x86-64)' foreign datafile 5 format '/u02/datafiles/myobjects01.dbf' from backupset '/u02/bkp/SOURCEDB_inc0_0aqvilns_1_1.bak';

Starting restore at 03-MAR-16
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring foreign file 00005
channel ORA_DISK_1: reading from backup piece /u02/bkp/SOURCEDB_inc0_0aqvilns_1_1.bak
channel ORA_DISK_1: restoring foreign file 5 to /u02/datafiles/myobjects01.dbf
channel ORA_DISK_1: foreign piece handle=/u02/bkp/SOURCEDB_inc0_0aqvilns_1_1.bak
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:05
Finished restore at 03-MAR-16

Now, let's move this foreign datafile from the file system to the ASM disk group using the ASMCMD utility.


[oracle@ora1-1 bkp]$ . oraenv
ORACLE_SID = [targetdb] ? +ASM
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@ora1-1 bkp]$ asmcmd
ASMCMD> cp '/u02/datafiles/myobjects01.dbf' '+DATA/TARGETDB/2C6DF09F05C51EE6E0536538A8C0B4A3/DATAFILE/myobjects01.dbf'
copying /u02/datafiles/myobjects01.dbf -> +DATA/TARGETDB/2C6DF09F05C51EE6E0536538A8C0B4A3/DATAFILE/myobjects01.dbf
ASMCMD>

Once done, recover the copied foreign datafile copy from the RMAN level 1 backup. Here again we use the "from platform" clause to tell RMAN that this is a foreign datafile from a different platform, and the "from backupset" clause to indicate the backup set to recover from.

RMAN> recover from platform 'Solaris Operating System (x86-64)' foreign datafilecopy '+DATA/TARGETDB/2C6DF09F05C51EE6E0536538A8C0B4A3/DATAFILE/myobjects01.dbf' from backupset '/u02/bkp/SOURCEDB_inc1_0bqvim65_1_1.bak';

Starting restore at 03-MAR-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=76 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring foreign file +DATA/TARGETDB/2C6DF09F05C51EE6E0536538A8C0B4A3/DATAFILE/myobjects01.dbf
channel ORA_DISK_1: reading from backup piece /u02/bkp/SOURCEDB_inc1_0bqvim65_1_1.bak
channel ORA_DISK_1: foreign piece handle=/u02/bkp/SOURCEDB_inc1_0bqvim65_1_1.bak
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 03-MAR-16

Perform the same steps for the other tablespace (MIGTBS1). Restore the foreign datafile of tablespace MIGTBS1 to the file system on the target server.

RMAN> restore from platform 'Solaris Operating System (x86-64)' foreign datafile 6 format '/u02/datafiles/migtbs101.dbf' from backupset '/u02/bkp/SOURCEDB_inc0_0aqvilns_1_1.bak';

Starting restore at 03-MAR-16
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring foreign file 00006
channel ORA_DISK_1: reading from backup piece /u02/bkp/SOURCEDB_inc0_0aqvilns_1_1.bak
channel ORA_DISK_1: restoring foreign file 6 to /u02/datafiles/migtbs101.dbf
channel ORA_DISK_1: foreign piece handle=/u02/bkp/SOURCEDB_inc0_0aqvilns_1_1.bak
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:05
Finished restore at 03-MAR-16

Using the ASMCMD utility, copy the restored foreign datafile to the ASM disk group.


[oracle@ora1-1 bkp]$ asmcmd
ASMCMD>
ASMCMD> cp '/u02/datafiles/migtbs101.dbf' '+DATA/TARGETDB/2C6DF09F05C51EE6E0536538A8C0B4A3/DATAFILE/migtbs101.dbf'
copying /u02/datafiles/migtbs101.dbf -> +DATA/TARGETDB/2C6DF09F05C51EE6E0536538A8C0B4A3/DATAFILE/migtbs101.dbf
ASMCMD>

Recover this copied foreign datafile using the L1 backup set.


RMAN> recover from platform 'Solaris Operating System (x86-64)' foreign datafilecopy '+DATA/TARGETDB/2C6DF09F05C51EE6E0536538A8C0B4A3/DATAFILE/migtbs101.dbf' from backupset '/u02/bkp/SOURCEDB_inc1_0bqvim65_1_1.bak';

Starting restore at 03-MAR-16
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring foreign file +DATA/TARGETDB/2C6DF09F05C51EE6E0536538A8C0B4A3/DATAFILE/migtbs101.dbf
channel ORA_DISK_1: reading from backup piece /u02/bkp/SOURCEDB_inc1_0bqvim65_1_1.bak
channel ORA_DISK_1: foreign piece handle=/u02/bkp/SOURCEDB_inc1_0bqvim65_1_1.bak
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 03-MAR-16

On the source database, let's make a few DML changes.

SQL> select status,instance_name from v$instance;

STATUS       INSTANCE_NAME
------------ ----------------
OPEN         sourcedb

SQL> conn migapp/oracle
Connected.
SQL>
SQL> insert into test values (300);

1 row created.

SQL> insert into test values (301);

1 row created.

SQL> insert into test values (302);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from test;

CODE
----------
200
201
202
100
101
102
300
301
302

9 rows selected.

Please note that 9 rows exist in this table.

Similarly, let's perform a few more DML changes as the other user on the source database.


SQL> conn tstapp/oracle
Connected.
SQL> insert into customer values (538);

1 row created.

SQL> insert into customer values (6093);

1 row created.

SQL> insert into customer values (7495);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from customer;

CUST_ID
----------
1234
83920
73801
3940
538
6093
7495

7 rows selected.


7 rows exist in the table.

Moving ahead, we now need to place the 2 tablespaces in READ ONLY mode on the source database. This is where the downtime begins.

On the source database:

SQL> alter tablespace MYOBJECTS read only;

Tablespace altered.

SQL> alter tablespace MIGTBS1 read only;

Tablespace altered.

 

Now take a final L1 backup of these tablespaces on the source database.

RMAN> backup incremental level 1 format '/export/home/bkp/%d_inc1_%U.bak' tablespace myobjects,migtbs1;

Starting backup at 03-MAR-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=19 device type=DISK
channel ORA_DISK_1: starting incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/export/home/oradata/sourcedb/sourcedb/myobjects01.dbf
input datafile file number=00006 name=/export/home/oradata/sourcedb/sourcedb/mygtbs101.dbf
channel ORA_DISK_1: starting piece 1 at 03-MAR-16
channel ORA_DISK_1: finished piece 1 at 03-MAR-16
piece handle=/export/home/bkp/SOURCEDB_inc1_0cqvin8r_1_1.bak tag=TAG20160303T172547 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 03-MAR-16

 

Export the metadata of these 2 tablespaces from the source database using the "expdp" utility.

$ expdp \"/ as sysdba\" directory=data_pump_dir dumpfile=all_tbs.dmp logfile=data_pump_dir transport_tablespaces=myobjects,migtbs1

Export: Release 11.2.0.3.0 - Production on Thu Mar 3 17:26:52 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TRANSPORTABLE_01": "/******** AS SYSDBA" directory=data_pump_dir dumpfile=all_tbs.dmp logfile=data_pump_dir transport_tablespaces=myobjects,migtbs1
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TRANSPORTABLE_01 is:
/export/home/oracle/admin/sourcedb/dpdump/all_tbs.dmp
******************************************************************************
Datafiles required for transportable tablespace MIGTBS1:
/export/home/oradata/sourcedb/sourcedb/mygtbs101.dbf
Datafiles required for transportable tablespace MYOBJECTS:
/export/home/oradata/sourcedb/sourcedb/myobjects01.dbf
Job "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 17:29:52

Once the metadata export is complete, place both tablespaces back in READ WRITE mode.

SQL> alter tablespace MYOBJECTS read write;

Tablespace altered.

SQL> alter tablespace MIGTBS1 read write;

Tablespace altered.

Now copy the final L1 backup pieces and the export dump to the target Server:

Copying the final L1 backup piece:

$ ls -lrt /export/home/bkp
total 4848
-rw-r----- 1 oracle oinstall 2326528 Mar 3 16:59 SOURCEDB_inc0_0aqvilns_1_1.bak
-rw-r----- 1 oracle oinstall 73728 Mar 3 17:07 SOURCEDB_inc1_0bqvim65_1_1.bak
-rw-r----- 1 oracle oinstall 73728 Mar 3 17:25 SOURCEDB_inc1_0cqvin8r_1_1.bak

 

$ scp /export/home/bkp/SOURCEDB_inc1_0cqvin8r_1_1.bak oracle@192.168.56.101:/u02/bkp/
Could not create directory '/home/oracle/.ssh'.
The authenticity of host '192.168.56.101 (192.168.56.101)' can't be established.
RSA key fingerprint is a4:47:6c:f2:b0:35:18:78:20:37:2e:c1:6f:b8:de:9a.
Are you sure you want to continue connecting (yes/no)? yes
Failed to add the host to the list of known hosts (/home/oracle/.ssh/known_hosts).
oracle@192.168.56.101's password:
SOURCEDB_inc1_0cqvin 100% |********************************************************************************************************************| 73728 00:00

Copying the export dump:

$ ls -lrt /export/home/oracle/admin/sourcedb/dpdump/
total 228
-rw-r--r-- 1 oracle oinstall 1442 Mar 3 17:29 data_pump_dir.log
-rw-r----- 1 oracle oinstall 102400 Mar 3 17:29 all_tbs.dmp
$
$
$ scp /export/home/oracle/admin/sourcedb/dpdump/all_tbs.dmp oracle@192.168.56.101:/u02/impdir
Could not create directory '/home/oracle/.ssh'.
The authenticity of host '192.168.56.101 (192.168.56.101)' can't be established.
RSA key fingerprint is a4:47:6c:f2:b0:35:18:78:20:37:2e:c1:6f:b8:de:9a.
Are you sure you want to continue connecting (yes/no)? yes
Failed to add the host to the list of known hosts (/home/oracle/.ssh/known_hosts).
oracle@192.168.56.101's password:
all_tbs.dmp 100% |********************************************************************************************************************| 100 KB 00:00
$


On the targetpdb, recover the datafiles of the 2 tablespaces (MYOBJECTS and MIGTBS1) using the final L1 backup pieces copied above.

Let me first recover the datafile of tablespace MYOBJECTS.


[oracle@ora1-1 bkp]$ rman target sys/oracle@targetpdb

Recovery Manager: Release 12.1.0.2.0 - Production on Thu Mar 3 17:40:49 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

connected to target database: TARGETDB (DBID=1296113207)

RMAN> recover from platform 'Solaris Operating System (x86-64)' foreign datafilecopy '+DATA/TARGETDB/2C6DF09F05C51EE6E0536538A8C0B4A3/DATAFILE/myobjects01.dbf' from backupset '/u02/bkp/SOURCEDB_inc1_0cqvin8r_1_1.bak';

Starting restore at 03-MAR-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=72 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring foreign file +DATA/TARGETDB/2C6DF09F05C51EE6E0536538A8C0B4A3/DATAFILE/myobjects01.dbf
channel ORA_DISK_1: reading from backup piece /u02/bkp/SOURCEDB_inc1_0cqvin8r_1_1.bak
channel ORA_DISK_1: foreign piece handle=/u02/bkp/SOURCEDB_inc1_0cqvin8r_1_1.bak
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:04
Finished restore at 03-MAR-16


Recovering the datafile of tablespace MIGTBS1:


RMAN> recover from platform 'Solaris Operating System (x86-64)' foreign datafilecopy '+DATA/TARGETDB/2C6DF09F05C51EE6E0536538A8C0B4A3/DATAFILE/migtbs101.dbf' from backupset '/u02/bkp/SOURCEDB_inc1_0cqvin8r_1_1.bak';

Starting restore at 03-MAR-16
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring foreign file +DATA/TARGETDB/2C6DF09F05C51EE6E0536538A8C0B4A3/DATAFILE/migtbs101.dbf
channel ORA_DISK_1: reading from backup piece /u02/bkp/SOURCEDB_inc1_0cqvin8r_1_1.bak
channel ORA_DISK_1: foreign piece handle=/u02/bkp/SOURCEDB_inc1_0cqvin8r_1_1.bak
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 03-MAR-16

RMAN>

 

Create the application users on the targetpdb as well before importing the tablespaces' metadata.

SYS@targetpdb> create user migapp identified by oracle;

User created.

SYS@targetpdb> create user tstapp identified by oracle;

User created.

SYS@targetpdb> grant connect,resource to migapp,tstapp;

Grant succeeded.

 

Create the necessary directory on the targetpdb (if not already available) to import the metadata.


SYS@targetpdb> create directory migdir as '/u02/impdir';

Directory created.

SYS@targetpdb> grant read,write on directory migdir to migapp,tstapp,sys;

Grant succeeded.

Import the metadata of the 2 tablespaces from the dump file copied above into the targetpdb. Please note that we use the "transport_datafiles" clause while importing the metadata; its value is the list of datafiles that were restored earlier.


[oracle@ora1-1 bkp]$ impdp '"sys@targetpdb as sysdba"' directory=migdir dumpfile=all_tbs.dmp transport_datafiles='+DATA/TARGETDB/2C6DF09F05C51EE6E0536538A8C0B4A3/DATAFILE/myobjects01.dbf','+DATA/TARGETDB/2C6DF09F05C51EE6E0536538A8C0B4A3/DATAFILE/migtbs101.dbf'

Import: Release 12.1.0.2.0 - Production on Thu Mar 3 17:45:44 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Password:

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
Master table "SYS"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Source time zone version is 14 and target time zone version is 18.
Source time zone is +00:00 and target time zone is -07:00.
Starting "SYS"."SYS_IMPORT_TRANSPORTABLE_01": "sys/********@targetpdb AS SYSDBA" directory=migdir dumpfile=all_tbs.dmp transport_datafiles=+DATA/TARGETDB/2C6DF09F05C51EE6E0536538A8C0B4A3/DATAFILE/myobjects01.dbf,+DATA/TARGETDB/2C6DF09F05C51EE6E0536538A8C0B4A3/DATAFILE/migtbs101.dbf
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYS"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at Thu Mar 3 17:46:18 2016 elapsed 0 00:00:29

[oracle@ora1-1 bkp]$

Almost done; just cross-check that all the objects are now available on the target PDB.


[oracle@ora1-1 bkp]$ sqlplus sys/oracle@targetpdb as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Thu Mar 3 17:46:46 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SYS@targetpdb> select status,tablespace_name from dba_tablespaces;

STATUS    TABLESPACE_NAME
--------- ------------------------------
ONLINE    SYSTEM
ONLINE    SYSAUX
ONLINE    TEMP
ONLINE    USERS
READ ONLY MIGTBS1
READ ONLY MYOBJECTS

6 rows selected.


We see that the 2 imported tablespaces are in READ ONLY mode on the target PDB. Change them to READ WRITE.


SYS@targetpdb> alter tablespace migtbs1 read write;

Tablespace altered.

SYS@targetpdb> alter tablespace myobjects read write;

Tablespace altered.


Let's cross check the status of the tablespaces now.

 

SYS@targetpdb> select status,tablespace_name from dba_tablespaces;

STATUS    TABLESPACE_NAME
--------- ------------------------------
ONLINE    SYSTEM
ONLINE    SYSAUX
ONLINE    TEMP
ONLINE    USERS
ONLINE    MIGTBS1
ONLINE    MYOBJECTS

6 rows selected.


All good. Let me list the datafiles of these tablespaces.

SYS@targetpdb> select tablespace_name,file_name from dba_data_files;

TABLESPACE_NAME     FILE_NAME
------------------- --------------------------------------------------------------------------------------
SYSTEM              +DATA/TARGETDB/2C6DF09F05C51EE6E0536538A8C0B4A3/DATAFILE/system.283.904577095
SYSAUX              +DATA/TARGETDB/2C6DF09F05C51EE6E0536538A8C0B4A3/DATAFILE/sysaux.284.904577095
USERS               +DATA/TARGETDB/2C6DF09F05C51EE6E0536538A8C0B4A3/DATAFILE/users.286.904577183
MIGTBS1             +DATA/TARGETDB/2C6DF09F05C51EE6E0536538A8C0B4A3/DATAFILE/migtbs101.dbf
MYOBJECTS           +DATA/TARGETDB/2C6DF09F05C51EE6E0536538A8C0B4A3/DATAFILE/myobjects01.dbf


Let's query the tables on the targetpdb to see if we have all the rows.


SYS@targetpdb> select * from migapp.test;

CODE
----------
100
101
102
200
201
202
300
301
302

9 rows selected.

SYS@targetpdb> select * from tstapp.customer;

CUST_ID
----------
1234
83920
73801
3940
538
6093
7495

7 rows selected.


SYS@targetpdb> select index_name,status from dba_indexes where index_name='CUST_IND';

INDEX_NAME     STATUS
-------------- ----------
CUST_IND       VALID

We can see that all 9 rows of MIGAPP.TEST, all 7 rows of TSTAPP.CUSTOMER, and the index created on the source database are now available on the target PDB.

Oracle Exadata Database Machine - DCLI Introduction and Setup


Introduction

The distributed command-line utility (dcli) is provided with Oracle Exadata Database Machine. It allows you to simultaneously execute monitoring and administration commands across multiple cell servers.

 

With DCLI, we can execute and manage the following command types:

1. Operating system commands

2. CellCLI commands

3. CellCLI scripts

 

The DCLI utility is located in /opt/oracle/cell/cellsrv/bin on the Exadata storage servers and in /usr/local/bin on the database servers.

 

Configuring oracle user equivalence for dcli to the cells from the database server

 

Step-1: Log in to the cell server as the 'celladmin' user

login as: celladmin

celladmin@192.168.56.200's password:

 

Step-2: Copy the dcli utility from the cell server to the database node under /usr/local/bin.

Execute the following command from the cell server.

[celladmin@cell1 ~]$ scp /opt/oracle/cell/cellsrv/bin/dcli root@192.168.56.201:/usr/local/bin

root@192.168.56.201's password:

dcli 100% 41KB 41.0KB/s 00:01

 

Step-3: Log in to the database server as the 'oracle' user and configure dcli for that user.

Go to the '$HOME' directory, create a 'celllist.txt' file, and add the IP addresses of all cell servers.

 

[oracle@dbnode ~]$ cat celllist.txt

192.168.56.200

192.168.56.202

Step-4: Generate the ssh key files for user 'oracle'

[oracle@db1 ~]$ ssh-keygen -t dsa

 

Output:

Generating public/private dsa key pair.

Enter file in which to save the key (/home/oracle/.ssh/id_dsa):

Enter passphrase (empty for no passphrase): "DO NOT ENTER ANY TEXT"

Enter same passphrase again: "DO NOT ENTER ANY TEXT"

Your identification has been saved in /home/oracle/.ssh/id_dsa.

Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.

The key fingerprint is:

aa:b5:3b:34:9c:2f:27:78:11:d9:58:50:3b:7e:89:49 oracle@db1

 

Step-5: On the database server, push the oracle user's ssh key to the cell servers (-k) so dcli can run without password prompts

[oracle@dbnode ~]$ dcli -k -g $HOME/celllist.txt

 

Output:

The authenticity of host '192.168.56.200 (192.168.56.200)' can't be established.

RSA key fingerprint is 1b:b6:91:11:58:89:b1:6a:c6:eb:72:df:68:d4:dd:5b.

Are you sure you want to continue connecting (yes/no)? yes

celladmin@192.168.56.200's password: "celladmin"

The authenticity of host '192.168.56.202 (192.168.56.202)' can't be established.

RSA key fingerprint is 1b:b6:91:11:58:89:b1:6a:c6:eb:72:df:68:d4:dd:5b.

Are you sure you want to continue connecting (yes/no)? yes

celladmin@192.168.56.202's password: "celladmin"

192.168.56.200: Warning: Permanently added '192.168.56.200' (RSA) to the list of known hosts.

192.168.56.200: ssh key added

192.168.56.202: Warning: Permanently added '192.168.56.202' (RSA) to the list of known hosts.

192.168.56.202: ssh key added

 

Step-6: Test "DCLI" command from database server

[oracle@dbnode ~]$ dcli -g $HOME/celllist.txt cellcli -e list cell

 

 

[oracle@dbnode ~]$ dcli -g celllist.txt "cat /proc/meminfo | grep Mem"

 

 

[oracle@dbnode ~]$ dcli -g celllist.txt cellcli -e list iormplan attributes objective

 

 

[oracle@dbnode ~]$ dcli -c cell1,cell2,cell3 -l celladmin vmstat 2 2

 

 

Changing FlashCacheMode using DCLI utility:

 

Step-1: Check the cell flash cache attributes as the 'oracle' user

Step-2: Drop the existing flash cache on the third cell server (cell3)

Step-3: Stop the 'cellsrv' services on cell3

Step-4: Change FlashCacheMode from 'WriteThrough' to 'WriteBack' on cell3

Step-5: Check the FlashCacheMode on cell3

Step-6: Start the 'cellsrv' services on cell3

Step-7: Create the flash cache on cell3

Step-8: Check the flash cache on cell3

Configuring root user equivalence for dcli to the cells from the database server

 

Step-1: Copy 'celllist.txt' from the oracle user's home directory to the root user's home directory

[root@dbnode ~]# cp /home/oracle/celllist.txt .

 

Step-2: Generate the ssh key files for user 'root'

[root@db1 ]# ssh-keygen -t dsa

Generating public/private dsa key pair.

Enter file in which to save the key (/root/.ssh/id_dsa):

Enter passphrase (empty for no passphrase): "DO NOT ENTER ANY TEXT"

Enter same passphrase again: "DO NOT ENTER ANY TEXT"

Your identification has been saved in /root/.ssh/id_dsa.

Your public key has been saved in /root/.ssh/id_dsa.pub.

The key fingerprint is:

aa:b5:3b:34:9c:2f:27:78:11:d9:58:50:3b:7e:89:49 root@db1

 

Step-3: On the database server, push the root user's ssh key to the cell servers so dcli can run as root

[root@dbnode ~]# dcli -k -g celllist.txt -l root

root@192.168.56.202's password:

root@192.168.56.200's password:

192.168.56.200: ssh key added

192.168.56.202: ssh key added

 

Step-4: Test "DCLI" command from database server

[root@dbnode ~]# dcli -g celllist.txt -l root cellcli -e list cell

 

 

Test the commands:

[root@dbnode ~]# dcli -g celllist.txt -l root "su - celladmin -c \"cellcli -e list celldisk\""

[root@dbnode ~]# dcli -g celllist.txt -l root "su - celladmin -c \"cellcli -e list griddisk\""

 

Conclusion: The distributed command-line utility (dcli) facilitates centralized management across Oracle Exadata Database Machine by automating the execution of a command on a set of cell servers and returning the output to a central location.

Rolling RECO data disk group resize activity for Oracle Exadata Database Machine


Introduction

This article shows how to shrink the RECO disk group and assign the freed space to the DATA disk group without performance impact or data outage on Oracle Exadata Database Machine.

Before starting this activity on Oracle Exadata Database Machine, take the following steps:

  1. Run an Exachk report and review it
  2. Take a backup of all databases on the machine
  3. If a disaster recovery site is in place for the primary/production site, check the archive log shipping status

Step-1: Find the amount of free space in the disk groups

SQL> select group_number, name, type, total_mb, free_mb,
required_mirror_free_mb, usable_file_mb
from v$asm_diskgroup
order by group_number;

Step-2: Capture the information for ASM disks

SQL> select dg.name, count(1) "Number Of Disks"
from v$asm_disk d, v$asm_diskgroup dg
where d.group_number = dg.group_number
group by dg.name;

 

 

Step-3: Capture the failure group information

SQL> select dg.name, count(distinct failgroup) Num_Failed_Groups
from v$asm_disk d, v$asm_diskgroup dg
where d.group_number = dg.group_number and dg.name like 'RECO%'
group by dg.name;

 

 

Step-4: Verify the existing grid disk definition for the DATA group using the DCLI utility

[root@exadatadb01 ~]# dcli -g cell_group -l root "cellcli -e list griddisk attributes name, size where name like \'DATA.*\'"

Step-5: Verify the existing grid disk definition for the RECO group using the DCLI utility

[root@exadatadb01 ~]# dcli -g cell_group -l root "cellcli -e list griddisk attributes name, size where name like \'RECO.*\'"

Step-6: Calculate the following:

  1. Current size of the DATA grid disks
  2. New RECO grid disk size = total space - new DATA grid disk size
  3. New RECO disk group size = new RECO grid disk size * number of grid disks
  4. Net RECO free space = total RECO disk group size - RECO used space
  5. Required mirror free space = new RECO grid disk size * number of disks in a cell * free space factor (10%)
  6. Minimum free space for ADD/DROP = 2 x required mirror free space

Using these figures, we can now calculate the net free space.

 

Step-7: Log in to the ASM instance as 'sysasm' and resize the ASM disks in the RECO disk group

SQL> show parameter power
SQL> alter diskgroup RECO resize all size <value>M rebalance power 32;

 

Wait for the rebalance operation to finish by monitoring 'gv$asm_operation'. Do not proceed to further steps until this query returns no rows:

SQL> select * from gv$asm_operation;

Step-8: Verify that all RECO ASM disks are now at the target size

SQL> select name, total_mb, os_mb
from v$asm_disk
where group_number = (select group_number
       from v$asm_diskgroup
       where name = 'RECO');

Step-9: Drop the ASM disks of one Exadata storage server (cell) from the RECO disk group by dropping its failgroup

SQL> alter diskgroup RECO drop disks in failgroup <Cell Server> rebalance power 32 nowait;

Wait for the rebalance operation to finish by monitoring 'gv$asm_operation'. Do not proceed to further steps until this query returns no rows:

SQL> select * from gv$asm_operation;

Step-10: Drop the RECO grid disks on the Cell Server

Determine the status of the ASM disks with the following query; HEADER_STATUS will change from MEMBER to FORMER and MOUNT_STATUS will change from CACHED to CLOSED.

SQL> select group_number, path, failgroup, header_status, mount_status
from v$asm_disk;

Verify the grid disks are in the proper status to proceed:

[root@exadatadb01 ~]# dcli -c <CellServer> -l root "cellcli -e list griddisk attributes name, asmmodestatus, asmdeactivationoutcome"

If the ASMMODESTATUS is "UNUSED" and the ASMDEACTIVATIONOUTCOME is "Yes", then it is safe to proceed. Otherwise, investigate further and correct before continuing.

Step-11: Drop the grid disks from the cell

From the first DB node, run the following DCLI command as the root user or oracle user.

[root@exadatadb01 ~]# dcli -c <CellServer> -l root "cellcli -e drop griddisk all harddisk prefix=RECO force"

Step-12: Verify that the space formerly allocated to the RECO grid disks is now free:

[root@exadatadb01 ~]# dcli -c <CellServer> -l root "cellcli -e list celldisk attributes name,size,freespace"

Step-13: Resize the DATA grid disk on the cell to the new larger size

a. Resize the DATA grid disks on the cell to the new larger size:

[root@exadatadb01 ~]# dcli -c <Cell Server> -l root "cellcli -e alter griddisk <Specify list of grid disks> size=<Specify Size>M"

b. Verify the size is as expected

[root@exadatadb01 ~]# dcli -c <Cell Server> -l root "cellcli -e list griddisk attributes name,offset,size"

c. Verify that the remaining free space on the cell disk is equal to the expected size of <Size> MB:

[root@exadatadb01 ~]# dcli -c <Cell Server> -l root "cellcli -e list celldisk attributes name,size,freespace"

Step-14: Recreate the RECO grid disks on the cell at the new smaller size

a. From the first DB node, run the following DCLI command. The command below allocates all remaining space to the RECO grid disks:

[root@exadatadb01 ~]# dcli -c <Cell Server> -l root "cellcli -e CREATE GRIDDISK ALL HARDDISK PREFIX='RECO'"

b. Verify the grid disks were created properly and are contiguous with DBFS_DG:

[root@exadatadb01 ~]# dcli -c <Cell Server> -l root "cellcli -e list griddisk attributes name, offset, size"

c. Verify there is no more free space on the cell disks on this cell:

[root@exadatadb01 ~]# dcli -c <Cell Server> -l root "cellcli -e list celldisk attributes name,size,freespace"

Step-15: Add RECO ASM disks from the cell to the RECO Disk group and drop RECO ASM disks from the next cell

SQL> alter diskgroup RECO add disk 'o/*/RECO*<CellServer>' rebalance power 32 nowait;

Wait for the rebalance operation to finish by monitoring 'gv$asm_operation'. Do not proceed to the next step until this query returns no rows:

SQL> select * from gv$asm_operation;

Verify the disks are added back successfully with HEADER_STATUS='MEMBER' and MOUNT_STATUS='CACHED', and that the EXDBCEL02 RECO ASM disks are now unused

SQL> select group_number, path, failgroup, header_status, mount_status
from v$asm_disk;


Note: Repeat Step-9 through Step-12 for the remaining cell servers

Step-16: Repeat the procedure for each remaining cell, changing the cell and failure group names accordingly.

Note: On the final cell, we only need to add the ASM disks back into the disk group with the command:

SQL> alter diskgroup RECO add disk 'o/*/RECO*<CellServer>' rebalance power 32 nowait;

Wait for the rebalance operation to finish by monitoring 'gv$asm_operation'. Do not proceed to the next step until this query returns no rows:

SQL> select * from gv$asm_operation;

Step-17: Resize all DATA grid disks up to the desired size

Executing this command updates ASM with the new, larger grid disk sizes:

SQL> alter diskgroup DATA resize all size <value>M rebalance power 32;

Wait for the rebalance operation to finish by monitoring 'gv$asm_operation'. Do not proceed to the next step until this query returns no rows:

SQL> select * from gv$asm_operation;

Step-18: Verify all sizes

Once the rebalance has completed, verify the disk group sizes:

SQL> select group_number, name, type, total_mb, free_mb,
required_mirror_free_mb, usable_file_mb
from v$asm_diskgroup
order by group_number;

 

Rules Engine


If you ever wish to have a custom rules engine in PL/SQL, here is one simple version of it.

- Rules are configured as simple SQL queries. These queries may use bind variables

- Bundle the rules into a Rule Set and give it an ID

- The caller executes the rules-engine proc, passing the rule-set ID as an IN parameter

- The rules-engine proc will execute the rule queries one by one and record the results in the rule_output table

- The caller can then query the rule_output table to get the results and interpret their meaning

I have used it in multiple projects:

- to fire a sequence of queries to introspect a domain object and signal other processes
- to generate reporting data
- to enrich a domain object
- to implement purge requirements (RuleSet triggered via the Quartz scheduler)

Thought it might be of some help to others.

Features

- User configures a list of SQL select queries - called rule queries

- Rule queries are bundled under the name RuleSet

- Caller provides the RuleSet name as input and executes the rules engine - to fire all the rules configured under the RuleSet

- The rules engine prepares / parses the rule queries

- Rule queries may use bind variables

- User configures bind-queries against each bind-variable, in the bind-query tables

- Engine will execute the bind SQL queries, get the values and substitute them into the rule query to prepare the final SQL

- Parsed SQL queries are executed by the engine and results are collected in the rule-output table

Controlling the execution of the rules under a RuleSet

- While the engine always executes the list of rules for a given RuleSet, the user may wish to abort after a certain condition is met.

- In other words, the engine executes the list of rule queries for a given RuleSet one by one, say in a while loop.

- The caller is provided with an option to control the while loop's condition.

- The while loop's condition can be configured as another rule query - the breakConditionQuery rule

Passing values to the rules-engine

- An Oracle global temporary session table is used to pass values to the rules-engine procedure

- The caller inserts the input parameters as key-value pairs into the input_param session table and then invokes the rules-engine proc in the same session

- Bind-variable queries & rule queries will use these values.

Dependency:

- Uses a database logger component similar to Apache log4j; it is included as well.

CODE

/****
DDLs
****/

CREATE TABLE MY_log_properties
(
   logger      VARCHAR2 (200) PRIMARY KEY,
   loglevel    VARCHAR2 (100),
   createdby   VARCHAR2 (100),
   createddt   DATE,
   updateddt   DATE,
   updatedby   VARCHAR2 (100)
);


CREATE TABLE MY_log
(
   logid           NUMBER,
   code            VARCHAR2 (100),
   msg             CLOB,
   logger          VARCHAR2 (200),
   loglevel        VARCHAR2 (10),
   iden1           VARCHAR2 (100),
   iden2           VARCHAR2 (100),
   iden3           VARCHAR2 (100),
   iden4           VARCHAR2 (100),
   iden5           VARCHAR2 (100),
   createdby       VARCHAR2 (100),
   sys_timestamp   TIMESTAMP
);


CREATE INDEX MY_log_logid_idx
   ON MY_log (logid);

CREATE INDEX MY_log_time_idx
   ON MY_log (sys_timestamp);

CREATE INDEX MY_log_iden_idx
   ON MY_log (iden1,
                iden2,
                iden3,
                iden4,
                iden5);

CREATE SEQUENCE MY_log_seq
   MINVALUE 1
   MAXVALUE 999999999999999
   CYCLE;

--------------------------

CREATE TABLE MY_RULE
(
   ID           NUMBER NOT NULL,
   APPID        VARCHAR2 (300 BYTE) NOT NULL,
   RULE_ID      VARCHAR2 (100 BYTE),
   RULESET_ID   VARCHAR2 (100 BYTE) NOT NULL,
   RULE_QUERY   CLOB NOT NULL,
   UPDDT        TIMESTAMP (6) DEFAULT SYSTIMESTAMP
);


ALTER TABLE MY_RULE ADD (
  PRIMARY KEY
  (RULE_ID));


CREATE TABLE MY_RULE_BINDVARIABLES
(
   APPID               VARCHAR2 (300 BYTE) NOT NULL,
   VARIABLENAME        VARCHAR2 (64 BYTE),
   BIND_QUERY          CLOB,
   INC_TYPE            VARCHAR2 (2 BYTE) DEFAULT 'EQ' NOT NULL,
   CACHE_WITHIN_EXEC   VARCHAR2 (1 BYTE) DEFAULT 'Y' NOT NULL,
   UPDDT               TIMESTAMP (6) DEFAULT SYSTIMESTAMP
);


ALTER TABLE MY_RULE_BINDVARIABLES ADD (
  PRIMARY KEY
  (VARIABLENAME));

CREATE TABLE MY_RULE_EXE_CONFIG
(
   ID                    NUMBER,
   APPID                 VARCHAR2 (300 BYTE) NOT NULL,
   RULESET_ID            VARCHAR2 (100 BYTE) NOT NULL,
   BREAK_CONDN_BEF_AFT   VARCHAR2 (3 BYTE) DEFAULT 'BEF',
   BREAKING_RULEID       VARCHAR2 (100 BYTE) NOT NULL,
   UPDDT                 TIMESTAMP (6) DEFAULT SYSTIMESTAMP
);

ALTER TABLE MY_RULE_EXE_CONFIG ADD (
  PRIMARY KEY
  (ID));

CREATE GLOBAL TEMPORARY TABLE MY_RE_INPUT_PARAM
(
   EXEID   VARCHAR2 (500),
   KEY     VARCHAR2 (500),
   VALUE   VARCHAR2 (500)
)
ON COMMIT DELETE ROWS;

CREATE TABLE MY_RULE_OUTPUT
(
   APPID        VARCHAR2 (300 BYTE) NOT NULL,
   OUTPUTID     VARCHAR2 (300 BYTE),
   RULE_ID      VARCHAR2 (100 BYTE),
   RULESET_ID   VARCHAR2 (100 BYTE),
   EXEID        VARCHAR2 (200 BYTE),
   RECID        NUMBER (19),
   COLIDX       NUMBER (10),
   COLNAME      VARCHAR2 (100 BYTE),
   COLTYPE      VARCHAR2 (4000 BYTE),
   COLVAL       VARCHAR2 (4000 BYTE),
   COLVAL_DT    DATE,
   COLVAL_TS    TIMESTAMP (6),
   COLVAL_CL    CLOB,
   UPDDT        TIMESTAMP (6) DEFAULT SYSTIMESTAMP
);


CREATE INDEX MY_RULE_OUTPUT_IDX1
   ON MY_RULE_OUTPUT (APPID, RULESET_ID, EXEID);

CREATE INDEX MY_RULE_OUTPUT_IDX2
   ON MY_RULE_OUTPUT (APPID, RULE_ID, EXEID);

CREATE UNIQUE INDEX MY_RULE_OUTPUT_PK
   ON MY_RULE_OUTPUT (OUTPUTID, COLIDX);


ALTER TABLE MY_RULE_OUTPUT ADD (
  CONSTRAINT MY_RULE_OUTPUT_PK
  PRIMARY KEY
  (OUTPUTID, COLIDX)
  USING INDEX MY_RULE_OUTPUT_PK
  ENABLE VALIDATE);


CREATE SEQUENCE MY_RULE_OUTPUT_SEQ
   START WITH 81
   MAXVALUE 99999999999999
   MINVALUE 1
   CYCLE
   CACHE 20
   NOORDER;
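
-- The sample DMLs at the end reference two more sequences that are
-- missing from this listing; minimal assumed definitions:

CREATE SEQUENCE MY_rule_seq
   START WITH 1;

CREATE SEQUENCE MY_exe_rule_config_seq
   START WITH 1;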


CREATE OR REPLACE TYPE RowData IS TABLE OF VARCHAR2 (4000);

CREATE OR REPLACE TYPE ResultSet IS TABLE OF RowData;

-----------------------------------------------------


CREATE OR REPLACE PACKAGE MY_logger
AS
   PROCEDURE LOG (pLogger      MY_log.logger%TYPE,
                  pLogLevel    MY_log.loglevel%TYPE,
                  pCode        MY_log.code%TYPE,
                  pMsg         MY_log.msg%TYPE,
                  pIden1       MY_log.iden1%TYPE DEFAULT NULL,
                  pIden2       MY_log.iden2%TYPE DEFAULT NULL,
                  pIden3       MY_log.iden3%TYPE DEFAULT NULL,
                  pIden4       MY_log.iden4%TYPE DEFAULT NULL,
                  pIden5       MY_log.iden5%TYPE DEFAULT NULL);

   gv_logging_status   VARCHAR2 (100);
END MY_logger;
/

/********************************************************************************************
* INPUT         : pLogger     --type of logger
*                 pLogLevel   -- log level,
*                 pCode       -- error code Passed,
*                 pMsg        -- Error Message Passed
*                 pIden1      --Identifier 1
*                 pIden2      --Identifier 2
*                 pIden3      --Identifier 3
*                 pIden4      --Identifier 4
*                 pIden5      --Identifier 5
----------------------------------------------------
* Description   :IF logging status ("*" in MY_LOG_PROPERTIES table is OFF then return )
*                  
*                 Based On Logging level set in MY_log_properties , logging will be saved
*                 If the Passed Level is less than the level of DB , then logs will not be
*                 Stored. (If logging status not matched or passed null then by default
*                          logs will be saved)                           
**********************************************************************************************/

CREATE OR REPLACE PACKAGE BODY MY_logger
AS
   PROCEDURE LOG (pLogger      MY_log.logger%TYPE,
                  pLogLevel    MY_log.loglevel%TYPE,
                  pCode        MY_log.code%TYPE,
                  pMsg         MY_log.msg%TYPE,
                  pIden1       MY_log.iden1%TYPE DEFAULT NULL,
                  pIden2       MY_log.iden2%TYPE DEFAULT NULL,
                  pIden3       MY_log.iden3%TYPE DEFAULT NULL,
                  pIden4       MY_log.iden4%TYPE DEFAULT NULL,
                  pIden5       MY_log.iden5%TYPE DEFAULT NULL)
   AS
      PRAGMA AUTONOMOUS_TRANSACTION;
      lnpLogLevel       NUMBER := 0;
      lnDBLogLevel      NUMBER := 0;
      generated_logid   MY_log.logid%TYPE;
      lDBLogLevel       MY_log.loglevel%TYPE := 'ERR';
      npLogLevel        NUMBER := 0;
      nDBLogLevel       NUMBER := 0;
   BEGIN
      BEGIN
         SELECT UPPER (LOGLEVEL)
           INTO gv_logging_status
           FROM MY_log_properties
          WHERE logger = '*' AND ROWNUM < 2;

         -- Returning the call from procedure , logging status is OFF
         IF gv_logging_status = 'OFF'
         THEN
            RETURN;
         END IF;
      EXCEPTION
         WHEN OTHERS
         THEN
            gv_logging_status := 'OFF';
      END;

      -- Checking the DB Log Level for the input logger
      BEGIN
         SELECT LOGLEVEL
           INTO lDBLogLevel
           FROM MY_log_properties
          WHERE UPPER (pLogger) = UPPER (logger) AND ROWNUM < 2;
      EXCEPTION
         WHEN NO_DATA_FOUND
         THEN
            BEGIN
               -- IF exact match is not found then checking for the wild card search based on the maximum defined log level
               SELECT loglevel
                 INTO lDBLogLevel
                 FROM (  SELECT DISTINCT LOGLEVEL, LENGTH (logger), ROWNUM rn
                           FROM MY_log_properties
                          WHERE UPPER (pLogger) LIKE (UPPER (logger) || '%')
                       ORDER BY LENGTH (logger) DESC)
                WHERE rn = 1;
            EXCEPTION
               WHEN NO_DATA_FOUND
               THEN
                  lDBLogLevel := 'ERR';
               WHEN OTHERS
               THEN
                  -- when any error in Query , raise the error back to environment
                  RAISE;
            END;
      END;


      -- Making the Level for passed logger
      SELECT DECODE (pLogLevel,
                     'ON', 2,
                     'ERR', 2,
                     'WAR', 1,
                     'DEB', 0,
                     -1)
        INTO lnpLogLevel
        FROM DUAL;

      -- Fetching the DB Level for passed logger
      SELECT DECODE (lDBLogLevel,  'ERR', 2,  'WAR', 1,  'DEB', 0,  2, -1)
        INTO lnDBLogLevel
        FROM DUAL;

      IF gv_logging_status = 'ON' AND (lnpLogLevel = -1 OR lnDBLogLevel = -1)
      THEN
         -- Overriding the LOG logic , if logging status is ON and logging indicators are not passed then
         -- based on this flag logging will be done
         lnpLogLevel := 2;
         lnDBLogLevel := 2;
      END IF;

      IF lnDBLogLevel <= lnpLogLevel
      THEN
         -- creating the ID for LOGS
         SELECT LPAD (MY_log_seq.NEXTVAL, 5, 0)
           INTO generated_logid
           FROM DUAL;

         -- If all validations passes then we have to insert into the log table
         INSERT INTO MY_LOG (logger,
                                              loglevel,
                                              logid,
                                              code,
                                              msg,
                                              iden1,
                                              iden2,
                                              iden3,
                                              iden4,
                                              iden5,
                                              sys_timestamp)
              VALUES (pLogger,
                      ploglevel,
                      generated_logid,
                      pCode,
                      pMsg,
                      pIden1,
                      pIden2,
                      pIden3,
                      pIden4,
                      pIden5,
                      CURRENT_TIMESTAMP);
      END IF;



      COMMIT;
   END LOG;
END MY_logger;
/
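
As a quick sanity check of the logger (an illustrative snippet of mine, not part of the original listing; it assumes the MY_LOG_PROPERTIES seed rows shown in the sample DMLs further down):

begin
   -- 'DEB' passes the level check because the sample DMLs set the
   -- MY_RULESENGINE logger to DEB and the global '*' switch to ON
   MY_logger.log ('MY_RULESENGINE.pub_fireRules',
                  'DEB',
                  '0',
                  'logger smoke test');
end;
/

select logid, logger, loglevel, code, msg
  from MY_log
 order by sys_timestamp desc;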

------------------------------------------------------------------------------

create or replace
PACKAGE "MY_RULESENGINE"
AS
   TYPE BindQueryCacheType IS TABLE OF CLOB
      INDEX BY VARCHAR2 (100);

   aLOGID   MY_LOG.logid%TYPE;

   PROCEDURE pub_fireRules (pAppId        MY_RULE.APPID%TYPE,
                            pRuleSetId    MY_RULE.RULESET_ID%TYPE,
                            pExecId       MY_RULE_OUTPUT.EXEID%TYPE);

   FUNCTION parseQuery (pAppId     MY_RULE.APPID%TYPE,
                        pQuery     CLOB,
                        pExecId    MY_RULE_OUTPUT.EXEID%TYPE)
      RETURN CLOB;

   FUNCTION exeRuleBindQuery (pAppId           MY_RULE.APPID%TYPE,
                              pVariableName    VARCHAR2,
                              pExecId          MY_RULE_OUTPUT.EXEID%TYPE)
      RETURN CLOB;

   FUNCTION exeAnyQuery (pQuery CLOB)
      RETURN ResultSet
      PIPELINED;

   FUNCTION getRuleBindQueryResult (pQuery CLOB, incType VARCHAR2)
      RETURN CLOB;

   PROCEDURE getColumnDesc (pQuery                   CLOB,
                            oColCnt    IN OUT        NUMBER,
                            oDescQry   IN OUT NOCOPY DBMS_SQL.DESC_TAB);

   PROCEDURE exeRuleQryAndGenOutput (
      pAppId           MY_RULE.APPID%TYPE,
      exeId            MY_RULE_OUTPUT.EXEID%TYPE,
      ruleId           MY_RULE_OUTPUT.RULE_ID%TYPE,
      ruleSetId        MY_RULE_OUTPUT.RULESET_ID%TYPE,
      parsedRuleQry    CLOB);

   FUNCTION shouldBreak (pAppId     MY_RULE.APPID%TYPE,
                         pQuery     CLOB,
                         pExecId    MY_RULE_OUTPUT.EXEID%TYPE)
      RETURN BOOLEAN;

   FUNCTION isSelectQuery (pParsedQuery CLOB)
      RETURN BOOLEAN;

   PROCEDURE exeAnyNonSelectQuery (
      pAppId           MY_RULE.APPID%TYPE,
      exeId            MY_RULE_OUTPUT.EXEID%TYPE,
      ruleId           MY_RULE_OUTPUT.RULE_ID%TYPE,
      ruleSetId        MY_RULE_OUTPUT.RULESET_ID%TYPE,
      parsedRuleQry    CLOB);
END MY_RULESENGINE;
/

/**
BODY
****/


create or replace
PACKAGE BODY                MY_RULESENGINE
AS
   aBindQryCache   BindQueryCacheType;

   /***************************************************************************
   * TYPE      : PROCEDURE
   * PURPOSE   : Gateway Proc to Fire Rules
   * INPUT     : pAppId      <ApplicationID>
   *           : pRuleSetId  <RuleID>
   *           : pExecId
   * PROCESS   : This Proc Will be called directly from the mapper to fire the
   *             Rules , process is as below
   *              1: Check Breaking Condition for APPLICATION and RULE SETID
   *              2: Fetch The Breaking Query from rule table
   *              3: Check the Breaking Condition AFTER or BEFORE
   *****************************************************************************/
   PROCEDURE pub_fireRules (pAppId        MY_RULE.APPID%TYPE,
                            pRuleSetId    MY_RULE.RULESET_ID%TYPE,
                            pExecId       MY_RULE_OUTPUT.EXEID%TYPE)
   AS
      CURSOR rules_sql
      IS
           SELECT rule_query, rule_id, ruleset_id
             FROM MY_RULE
            WHERE     appid = pAppId
                  AND ruleset_id = pRulesetId
                  AND rule_id NOT IN
                         (SELECT BREAKING_RULEID
                            FROM MY_RULE_EXE_CONFIG
                           WHERE appid = pAppId AND ruleset_id = pRulesetId)
         ORDER BY id;

      currParsedRuleQry   CLOB := NULL;
      breakConditionQry   CLOB := NULL;
      whenToBreak         MY_RULE_EXE_CONFIG.BREAK_CONDN_BEF_AFT%TYPE;
      chkBreakCondition   VARCHAR (1) := 'N';
   BEGIN
      SELECT MY_LOG_SEQ.NEXTVAL INTO aLOGID FROM DUAL;

      MY_LOGGER.LOG ('MY_RULESENGINE.pub_fireRules',
                       'DEB',
                       '0',
                       'ENTER pub_fireRules',
                       pExecId,
                       pRuleSetId,
                       NULL);

      FOR i IN (SELECT BREAK_CONDN_BEF_AFT, BREAKING_RULEID
                  FROM MY_RULE_EXE_CONFIG
                 WHERE appid = pAppId AND ruleset_id = pRulesetId)
      LOOP
         -- Bypassing the BreakingRule Condition  if conditionid =0
         IF (i.BREAK_CONDN_BEF_AFT IS NOT NULL AND i.BREAKING_RULEID <> '0')
         THEN
            SELECT rule_query
              INTO breakConditionQry
              FROM MY_RULE
             WHERE     appid = pAppId
                   AND ruleset_id = pRulesetId
                   AND rule_id = i.BREAKING_RULEID;

            chkBreakCondition := 'Y';
         END IF;

         whenToBreak := i.BREAK_CONDN_BEF_AFT;
         MY_LOGGER.LOG (
            'MY_RULESENGINE.pub_fireRules',
            'DEB',
            '0',
               'BreakingCondition:'
            || chkBreakCondition
            || '-'
            || whenToBreak
            || '-',
            pExecId,
            pRuleSetId,
            NULL,
            aLOGID,
            'Normal');
      END LOOP;

      aBindQryCache.delete;

      FOR ruleQueries IN rules_sql
      LOOP
         IF ( (chkBreakCondition = 'Y') AND (whenToBreak = 'BEF'))
         THEN
            IF shouldBreak (pAppId,
                            parseQuery (pAppId, breakConditionQry, pExecId),
                            pExecId)
            THEN
               aBindQryCache.delete;
               MY_LOGGER.LOG ('MY_RULESENGINE.pub_fireRules',
                                'DEB',
                                '0',
                                'EXIT pub_fireRules (breaking-BEF)',
                                pExecId,
                                pRuleSetId,
                                ruleQueries.rule_id,
                                aLOGID,
                                'Normal');
               EXIT;
            END IF;
         END IF;

         currParsedRuleQry :=
            parseQuery (pAppId, ruleQueries.rule_query, pExecId);
         MY_LOGGER.LOG ('MY_RULESENGINE.pub_fireRules',
                          'DEB',
                          '0',
                          'CurrentParsedRuleQuery:' || currParsedRuleQry,
                          pExecId,
                          pRuleSetId,
                          ruleQueries.rule_id,
                          aLOGID,
                          'Normal');

         IF (isSelectQuery (currParsedRuleQry))
         THEN
            exeRuleQryAndGenOutput (pAppId,
                                    pExecId,
                                    ruleQueries.rule_id,
                                    pRuleSetId,
                                    currParsedRuleQry);
         ELSE
            exeAnyNonSelectQuery (pAppId,
                                  pExecId,
                                  ruleQueries.rule_id,
                                  pRuleSetId,
                                  currParsedRuleQry);
         END IF;

         MY_LOGGER.LOG (
            'MY_RULESENGINE.pub_fireRules',
            'DEB',
            '0',
            'exeRuleQryAndGenOutput - Completed for ' || ruleQueries.rule_id,
            pExecId,
            pRuleSetId,
            ruleQueries.rule_id,
            aLOGID,
            'Normal');

         IF ( (chkBreakCondition = 'Y') AND (whenToBreak = 'AFT'))
         THEN
            IF shouldBreak (pAppId,
                            parseQuery (pAppId, breakConditionQry, pExecId),
                            pExecId)
            THEN
               MY_LOGGER.LOG ('MY_RULESENGINE.pub_fireRules',
                                'DEB',
                                '0',
                                'EXIT pub_fireRules (breaking-AFT)',
                                pExecId,
                                pRuleSetId,
                                ruleQueries.rule_id,
                                aLOGID,
                                'Normal');
               EXIT;
            END IF;
         END IF;
      END LOOP;

      aBindQryCache.delete;
      MY_LOGGER.LOG ('MY_RULESENGINE.pub_fireRules',
                       'DEB',
                       '0',
                       'EXIT pub_fireRules',
                       pExecId,
                       pRuleSetId,
                       NULL,
                       aLOGID,
                       'Normal');
   END pub_fireRules;

   /***************************************************************************
   * TYPE      : FUNCTION
   * PURPOSE   : Create a Parsed Query
   * INPUT     : pAppId      <ApplicationID>
   *           : pQuery  <Query String>
   *           : pExecId
   * PROCESS   : This Proc Will be called to create the Query
   *              Setting of parameter will be done and query will be properly
   *              Formed <Query will be picked with a bind variable in it>
   *****************************************************************************/
   FUNCTION parseQuery (pAppId     MY_RULE.APPID%TYPE,
                        pQuery     CLOB,
                        pExecId    MY_RULE_OUTPUT.EXEID%TYPE)
      RETURN CLOB
   AS
      parsedQuery     CLOB := pQuery;
      vPattern        VARCHAR2 (10) := '\$[^\$]+\$';
      vVariableName   VARCHAR2 (65) := NULL;
      i               NUMBER := 0;
   BEGIN
      --DBMS_OUTPUT.PUT_LINE ('********* PROC :parseQuery');
      --DBMS_OUTPUT.PUT_LINE ('Parameter 1:pAppId=' || pAppId);
      --DBMS_OUTPUT.PUT_LINE ('Parameter 2:pQuery=' || pQuery);
      --DBMS_OUTPUT.PUT_LINE ('Parameter 3:pExecId=' || pExecId);
      -- Parse all the variables in the rule query
      vVariableName :=
         REGEXP_SUBSTR (parsedQuery,
                        vPattern,
                        1,
                        1,
                        'm');

      WHILE ( (LENGTH (vVariableName) > 0) AND (vVariableName IS NOT NULL))
      LOOP
         IF (vVariableName IS NOT NULL)
         THEN
            IF (vVariableName = '$p.pExecId$')
            THEN
               parsedQuery :=
                  REGEXP_REPLACE (parsedQuery,
                                  vPattern,
                                  pExecId,
                                  1,
                                  1,
                                  'm');
            ELSIF (vVariableName LIKE '$v.%')
            THEN
               parsedQuery :=
                  REGEXP_REPLACE (
                     parsedQuery,
                     vPattern,
                     exeRuleBindQuery (pAppId, vVariableName, pExecId),
                     1,
                     1,
                     'm');
            ELSIF (vVariableName LIKE '$p.%')
            THEN
               MY_LOGGER.LOG (
                  'MY_RULESENGINE.parseQuery',
                  'ERR',
                  '-20501',
                     'RE-Variable Name '
                  || vVariableName
                  || ' is not supported in the query('
                  || pQuery
                  || ') ExecId('
                  || pExecId
                  || ')',
                  pExecId,
                  aLOGID,
                  NULL,
                  NULL,
                  'Error');
               raise_application_error (
                  -20501,
                     'RE-Variable Name '
                  || vVariableName
                  || ' is not supported in the query('
                  || pQuery
                  || ') ExecId('
                  || pExecId
                  || ')',
                  TRUE);
            ELSE
               MY_LOGGER.LOG (
                  'MY_RULESENGINE.parseQuery',
                  'ERR',
                  '-20500',
                     'RE-Variable Name '
                  || vVariableName
                  || ' is not supported in the query('
                  || pQuery
                  || ') ExecId('
                  || pExecId
                  || ')',
                  pExecId,
                  aLOGID,
                  NULL,
                  NULL,
                  'Error');
               raise_application_error (
                  -20500,
                     'RE-Variable Name '
                  || vVariableName
                  || ' is not supported in the query('
                  || pQuery
                  || ') ExecId('
                  || pExecId
                  || ')',
                  TRUE);
            END IF;
         END IF;

         vVariableName :=
            REGEXP_SUBSTR (parsedQuery,
                           vPattern,
                           1,
                           1,
                           'm');
      END LOOP;

      RETURN parsedQuery;
   END parseQuery;

   /**********************************************************************************************
   * TYPE      : FUNCTION
   *
   * PURPOSE   : Create a Query with Bind Variables
   *
   * INPUT     : pAppId      <ApplicationID>
   *           : pVariableName  <Variable Names>
   *           : pExecId
   *
   * PROCESS   : This Proc Will be called to create Bind Variable Query
   *             and return the output for that BIND variable to the calling
   *             Process <Bind Variable needs to be configured in MY_rule_bindvariables table>
   *
   **********************************************************************************************/
   FUNCTION exeRuleBindQuery (pAppId           MY_RULE.APPID%TYPE,
                              pVariableName    VARCHAR2,
                              pExecId          MY_RULE_OUTPUT.EXEID%TYPE)
      RETURN CLOB
   AS
      vRuleBindQuery      CLOB := NULL;
      vPattern            VARCHAR2 (10) := '\$[^\$]+\$';
      localVariableName   VARCHAR2 (65) := NULL;
      incType             VARCHAR2 (2) := 'IN';
      cache               MY_RULE_BINDVARIABLES.CACHE_WITHIN_EXEC%TYPE;
      cResult             CLOB := NULL;
   BEGIN

      BEGIN
         -- Return the result if already cached
         IF aBindQryCache.EXISTS (pVariableName)
         THEN
            cResult := aBindQryCache (pVariableName);
            MY_LOGGER.LOG ('MY_RULESENGINE.exeRuleBindQuery',
                             'DEB',
                             '0',
                             'Result from CACHE:' || cResult,
                             pExecId,
                             NULL,
                             NULL,
                             aLOGID,
                             'Normal');
            RETURN cResult;
         END IF;

         SELECT BIND_QUERY, INC_TYPE, CACHE_WITHIN_EXEC
           INTO vRuleBindQuery, incType, cache
           FROM MY_RULE_BINDVARIABLES
          WHERE variablename = pVariableName AND appid = pAppId;

         localVariableName :=
            REGEXP_SUBSTR (vRuleBindQuery,
                           vPattern,
                           1,
                           1,
                           'm');

         MY_LOGGER.LOG (
            'MY_RULESENGINE.exeRuleBindQuery',
            'DEB',
            '0',
            'Unparsed SQL/Inc_type:' || vRuleBindQuery || '/' || incType,
            pExecId,
            NULL,
            NULL,
            aLOGID,
            'Normal');

         IF (    (LENGTH (localVariableName) > 0)
             AND (localVariableName IS NOT NULL))
         THEN
            vRuleBindQuery := parseQuery (pAppId, vRuleBindQuery, pExecId);

            MY_LOGGER.LOG (
               'MY_RULESENGINE.exeRuleBindQuery',
               'DEB',
               '0',
               'Parsed SQL/Inc_type:' || vRuleBindQuery || '/' || incType,
               pExecId,
               NULL,
               NULL,
               aLOGID,
               'Normal');
         END IF;

         cResult := getRuleBindQueryResult (vRuleBindQuery, incType);

         -- Add to cache
         IF (UPPER (cache) = 'Y')
         THEN
            aBindQryCache (pVariableName) := cResult;
         END IF;

         MY_LOGGER.LOG ('MY_RULESENGINE.exeRuleBindQuery',
                          'DEB',
                          '0',
                          'Result:' || cResult,
                          pExecId,
                          NULL,
                          NULL,
                          aLOGID,
                          'Normal');
      EXCEPTION
         WHEN NO_DATA_FOUND
         THEN
            MY_LOGGER.LOG (
               'MY_RULESENGINE.exeRuleBindQuery',
               'ERR',
               SQLCODE,
                  'RE- Rule Bind query for the Variable Name '
               || pVariableName
               || ' is not found for ExecId('
               || pExecId
               || ')',
               pExecId,
               '-20502',
               aLOGID,
               'Error');
            raise_application_error (
               -20502,
                  'RE- Rule Bind query for the Variable Name '
               || pVariableName
               || ' is not found for ExecId('
               || pExecId
               || ')',
               TRUE);
      END;

      RETURN cResult;
   END exeRuleBindQuery;


   /***************************************************************************
   * TYPE      : FUNCTION
   * PURPOSE   : Create a Query with Bind Variables
   * INPUT     : pAppId      <ApplicationID>
   *           : pVariableName  <Variable Names>
   *           : pExecId
   * PROCESS   : This Proc Will be called to create the Query
   *              Setting of parameter will be done and query will be properly
   *              Formed
   *****************************************************************************/
   FUNCTION getRuleBindQueryResult (pQuery CLOB, incType VARCHAR2)
      RETURN CLOB
   AS
      colCount       NUMBER := 0;
      ctxQryResult   CLOB := NULL;

      CURSOR c1
      IS
         SELECT * FROM TABLE (exeAnyQuery (pQuery));

      currRow        RowData;
      currRowStr     CLOB;
      recordFound    BOOLEAN := FALSE;
      desctab        DBMS_SQL.DESC_TAB;
   BEGIN
      --DBMS_OUTPUT.PUT_LINE ('********* PROC :getRuleBindQueryResult');
      --DBMS_OUTPUT.PUT_LINE ('Parameter 1:pQuery=' || pQuery);
      --DBMS_OUTPUT.PUT_LINE ('Parameter 2:incType=' || incType);

      IF incType = 'IN'
      THEN
         OPEN c1;

         LOOP
            FETCH c1 INTO currRow;

            EXIT WHEN c1%NOTFOUND;
            colCount := currRow.COUNT;
            currRowStr := NULL;
            recordFound := TRUE;

            FOR i IN 1 .. currRow.COUNT
            LOOP
               currRowStr :=
                     currRowStr
                  || ''''
                  || REPLACE (currRow (i), '''', '''''')
                  || '''';

               IF (i < currRow.COUNT)
               THEN
                  currRowStr := currRowStr || ',';
               END IF;
            END LOOP;

            IF (colCount > 1)
            THEN
               currRowStr := '(' || currRowStr || ')';
            END IF;

            ctxQryResult := ctxQryResult || currRowStr || ',';
         END LOOP;

         CLOSE c1;

         -- Remove the extra ,
         ctxQryResult := SUBSTR (ctxQryResult, 0, LENGTH (ctxQryResult) - 1);

         IF (NOT recordFound)
         THEN
            getColumnDesc (pQuery, colCount, desctab);
            currRowStr := NULL;

            FOR i IN 1 .. colCount
            LOOP
               currRowStr := currRowStr || '''' || '''';

               IF (i < currRow.COUNT)
               THEN
                  currRowStr := currRowStr || ',';
               END IF;
            END LOOP;

            IF (colCount > 1)
            THEN
               currRowStr := '(' || currRowStr || ')';
            END IF;

            ctxQryResult := ctxQryResult || currRowStr || ',';
         END IF;
      ELSIF incType = 'EQ'
      THEN
         BEGIN
            SELECT * INTO currRow FROM TABLE (exeAnyQuery (pQuery));

            ctxQryResult :=
               '''' || REPLACE (currRow (1), '''', '''''') || '''';
         EXCEPTION
            WHEN NO_DATA_FOUND
            THEN
               ctxQryResult := '''''';
            WHEN OTHERS
            THEN
               MY_LOGGER.LOG ('MY_RULESENGINE.getRuleBindQueryResult',
                                'ERR',
                                SQLCODE,
                                SQLERRM,
                                '-',
                                aLOGID,
                                'Error');
               RAISE;
         END;
      END IF;

      RETURN ctxQryResult;
   EXCEPTION
      WHEN OTHERS
      THEN
         IF c1%ISOPEN
         THEN
            CLOSE c1;
         END IF;

         MY_LOGGER.LOG ('MY_RULESENGINE.getRuleBindQueryResult',
                          'ERR',
                          SQLCODE,
                          SQLERRM,
                          '-',
                          aLOGID,
                          'Error');
         RAISE;
   END getRuleBindQueryResult;

   /***************************************************************************
   * TYPE      : FUNCTION
   * PURPOSE   : upper level to execute the Query
   * INPUT     : pAppId      <ApplicationID>
   *           : pVariableName  <Variable Names>
   *           : pExecId
   * PROCESS   : This Proc Will be called to create the Query
   *              Setting of parameter will be done and query will be properly
   *              Formed
   *****************************************************************************/
   FUNCTION exeAnyQuery (pQuery CLOB)
      RETURN ResultSet
      PIPELINED
   AS
      currRow      RowData := NULL;
      v_cur_hdl    INT;
      ret          NUMBER;
      desctab      DBMS_SQL.DESC_TAB;
      colcnt       NUMBER;
      refDate      DATE;
      refNum       NUMBER;
      refVarchar   VARCHAR2 (4000);
   BEGIN
      --DBMS_OUTPUT.PUT_LINE ('********* PROC :exeAnyQuery');
      --DBMS_OUTPUT.PUT_LINE ('Parameter 1:pQuery=' || pQuery);
      v_cur_hdl := DBMS_SQL.OPEN_CURSOR;
      DBMS_SQL.PARSE (v_cur_hdl, pQuery, DBMS_SQL.NATIVE);
      DBMS_SQL.DESCRIBE_COLUMNS (v_cur_hdl, colcnt, desctab);

      FOR i IN 1 .. colcnt
      LOOP
         IF desctab (i).col_type = DBMS_TYPES.NO_DATA
         THEN
            DBMS_SQL.DEFINE_COLUMN (v_cur_hdl,
                                    i,
                                    refVarchar,
                                    4000);
         ELSIF desctab (i).col_type IN
                  (181,
                   DBMS_TYPES.TYPECODE_DATE,
                   DBMS_TYPES.TYPECODE_TIMESTAMP,
                   DBMS_TYPES.TYPECODE_TIMESTAMP_LTZ,
                   DBMS_TYPES.TYPECODE_TIMESTAMP_TZ)
         THEN
            DBMS_SQL.DEFINE_COLUMN (v_cur_hdl, i, refDate);
         ELSIF desctab (i).col_type = DBMS_TYPES.TYPECODE_NUMBER
         THEN
            DBMS_SQL.DEFINE_COLUMN (v_cur_hdl, i, refNum);
         ELSE
            DBMS_SQL.DEFINE_COLUMN (v_cur_hdl,
                                    i,
                                    refVarchar,
                                    4000);
         END IF;
      END LOOP;

      ret := DBMS_SQL.EXECUTE (v_cur_hdl);

      LOOP
         IF DBMS_SQL.FETCH_ROWS (v_cur_hdl) > 0
         THEN
            currRow := NEW RowData ();

            FOR i IN 1 .. colcnt
            LOOP
               IF desctab (i).col_type = DBMS_TYPES.NO_DATA
               THEN
                  DBMS_SQL.COLUMN_VALUE (v_cur_hdl, i, refVarchar);
                  currRow.EXTEND;
                  currRow (i) := refVarchar;
               ELSIF desctab (i).col_type IN
                        (181,
                         DBMS_TYPES.TYPECODE_DATE,
                         DBMS_TYPES.TYPECODE_TIMESTAMP,
                         DBMS_TYPES.TYPECODE_TIMESTAMP_LTZ,
                         DBMS_TYPES.TYPECODE_TIMESTAMP_TZ)
               THEN
                  DBMS_SQL.COLUMN_VALUE (v_cur_hdl, i, refDate);
                  currRow.EXTEND;
                  currRow (i) :=
                     TO_CHAR (refDate, 'DD-MON-YYYY HH12.MI.SS AM');
               ELSIF desctab (i).col_type = DBMS_TYPES.TYPECODE_NUMBER
               THEN
                  DBMS_SQL.COLUMN_VALUE (v_cur_hdl, i, refNum);
                  currRow.EXTEND;
                  currRow (i) := TO_CHAR (refNum);
               ELSE
                  DBMS_SQL.COLUMN_VALUE (v_cur_hdl, i, refVarchar);
                  currRow.EXTEND;
                  currRow (i) := refVarchar;
               END IF;
            END LOOP;

            PIPE ROW (currRow);
         ELSE
            EXIT;
         END IF;
      END LOOP;

      DBMS_SQL.CLOSE_CURSOR (v_cur_hdl);
   EXCEPTION
      WHEN OTHERS
      THEN
         IF DBMS_SQL.IS_OPEN (v_cur_hdl)
         THEN
            DBMS_SQL.CLOSE_CURSOR (v_cur_hdl);
         END IF;

         MY_LOGGER.LOG ('MY_RULESENGINE.exeAnyQuery',
                          'ERR',
                          SQLCODE,
                          SQLERRM,
                          '-',
                          aLOGID,
                          'Error');
         RAISE;
   END exeAnyQuery;
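
   -- Usage note: because exeAnyQuery is PIPELINED, it can be queried
   -- directly from SQL, for example:
   --   SELECT * FROM TABLE (MY_RULESENGINE.exeAnyQuery ('select sysdate from dual'));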

   PROCEDURE getColumnDesc (pQuery                   CLOB,
                            oColCnt    IN OUT        NUMBER,
                            oDescQry   IN OUT NOCOPY DBMS_SQL.DESC_TAB)
   AS
      v_cur_hdl   INT;
   BEGIN
      --DBMS_OUTPUT.PUT_LINE ('********* PROC :getColumnDesc ');
      --DBMS_OUTPUT.PUT_LINE ('Parameter 1:pQuery=' || pQuery);
      --DBMS_OUTPUT.PUT_LINE ('Parameter 2:oColCnt=' || oColCnt);
      ----DBMS_OUTPUT.PUT_LINE ('Parameter 3:oDescQry=' || oDescQry);
      v_cur_hdl := DBMS_SQL.OPEN_CURSOR;
      DBMS_SQL.PARSE (v_cur_hdl, pQuery, DBMS_SQL.NATIVE);
      DBMS_SQL.DESCRIBE_COLUMNS (v_cur_hdl, oColCnt, oDescQry);
      DBMS_SQL.CLOSE_CURSOR (v_cur_hdl);
   EXCEPTION
      WHEN OTHERS
      THEN
         IF DBMS_SQL.IS_OPEN (v_cur_hdl)
         THEN
            DBMS_SQL.CLOSE_CURSOR (v_cur_hdl);
         END IF;

         MY_LOGGER.LOG ('MY_RULESENGINE.getColumnDesc',
                          'ERR',
                          SQLCODE,
                          'Exception in getColumnDesc Query : ' || pQuery,
                          '',
                          aLOGID,
                          'Exception');
         MY_LOGGER.LOG ('MY_RULESENGINE.getColumnDesc',
                          'ERR',
                          SQLCODE,
                          'Exception in getColumnDesc Error : ' || SQLERRM,
                          '',
                          aLOGID,
                          'Exception');
         RAISE;
   END getColumnDesc;
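
   /***************************************************************************
   * TYPE      : PROCEDURE
   * PURPOSE   : Execute a parsed rule query and persist its result set
   * PROCESS   : Uses DBMS_SQL to describe the columns of the rule query,
   *             fetches row by row, and writes one MY_RULE_OUTPUT row per
   *             column per fetched record (typed values are also stored in
   *             COLVAL_DT / COLVAL_TS / COLVAL_CL as appropriate)
   *****************************************************************************/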

   PROCEDURE exeRuleQryAndGenOutput (
      pAppId           MY_RULE.APPID%TYPE,
      exeId            MY_RULE_OUTPUT.EXEID%TYPE,
      ruleId           MY_RULE_OUTPUT.RULE_ID%TYPE,
      ruleSetId        MY_RULE_OUTPUT.RULESET_ID%TYPE,
      parsedRuleQry    CLOB)
   AS
      v_cur_hdl      INT;
      ret            NUMBER;
      desctab        DBMS_SQL.DESC_TAB;
      colcnt         NUMBER;
      refDate        DATE;
      refTimeStamp   TIMESTAMP;
      refVarchar     VARCHAR2 (4000);
      refClob        CLOB;
      refNum         NUMBER;
      outputRow      MY_RULE_OUTPUT%ROWTYPE;
      rowCount       NUMBER := 1;
   BEGIN
      --DBMS_OUTPUT.PUT_LINE ('********* PROC :exeRuleQryAndGenOutput');
      --DBMS_OUTPUT.PUT_LINE ('Parameter 1:pAppId=' || pAppId);
      --DBMS_OUTPUT.PUT_LINE ('Parameter 1:exeId=' || exeId);
      --DBMS_OUTPUT.PUT_LINE ('Parameter 1:ruleId=' || ruleId);
      --DBMS_OUTPUT.PUT_LINE ('Parameter 1:ruleSetId=' || ruleSetId);
      --DBMS_OUTPUT.PUT_LINE ('Parameter 1:parsedRuleQry=' || parsedRuleQry);
      v_cur_hdl := DBMS_SQL.OPEN_CURSOR;
      DBMS_SQL.PARSE (v_cur_hdl, parsedRuleQry, DBMS_SQL.NATIVE);
      DBMS_SQL.DESCRIBE_COLUMNS (v_cur_hdl, colcnt, desctab);

      FOR i IN 1 .. colcnt
      LOOP
         IF desctab (i).col_type = DBMS_TYPES.NO_DATA
         THEN
            DBMS_SQL.DEFINE_COLUMN (v_cur_hdl,
                                    i,
                                    refVarchar,
                                    4000);
         ELSIF desctab (i).col_type IN (DBMS_TYPES.TYPECODE_DATE)
         THEN
            DBMS_SQL.DEFINE_COLUMN (v_cur_hdl, i, refDate);
         ELSIF desctab (i).col_type IN
                  (181,
                   DBMS_TYPES.TYPECODE_TIMESTAMP,
                   DBMS_TYPES.TYPECODE_TIMESTAMP_LTZ,
                   DBMS_TYPES.TYPECODE_TIMESTAMP_TZ)
         THEN
            DBMS_SQL.DEFINE_COLUMN (v_cur_hdl, i, refTimeStamp);
         ELSIF desctab (i).col_type = DBMS_TYPES.TYPECODE_NUMBER
         THEN
            DBMS_SQL.DEFINE_COLUMN (v_cur_hdl, i, refNum);
         ELSIF desctab (i).col_type = DBMS_TYPES.TYPECODE_CLOB
         THEN
            DBMS_SQL.DEFINE_COLUMN (v_cur_hdl, i, refClob);
         ELSE
            DBMS_SQL.DEFINE_COLUMN (v_cur_hdl,
                                    i,
                                    refVarchar,
                                    4000);
         END IF;
      END LOOP;

      ret := DBMS_SQL.EXECUTE (v_cur_hdl);

      LOOP
         IF DBMS_SQL.FETCH_ROWS (v_cur_hdl) > 0
         THEN
            SELECT MY_RULE_OUTPUT_SEQ.NEXTVAL
              INTO outputRow.outputid
              FROM DUAL;

            outputRow.appid := pAppId;
            outputRow.rule_id := ruleId;
            outputRow.RULESET_ID := ruleSetId;
            outputRow.exeid := exeId;
            outputRow.recid := rowCount;

            FOR i IN 1 .. colcnt
            LOOP
               outputRow.colIdx := i;
               outputRow.colName := desctab (i).col_name;
               outputRow.colType := TO_CHAR (desctab (i).col_type);
               outputRow.colval := NULL;
               outputRow.colval_dt := NULL;
               outputRow.colval_ts := NULL;
               outputRow.colval_cl := NULL;

               IF desctab (i).col_type = DBMS_TYPES.NO_DATA
               THEN
                  DBMS_SQL.COLUMN_VALUE (v_cur_hdl, i, refVarchar);
                  outputRow.COLVAL := NULL;
               ELSIF desctab (i).col_type IN (DBMS_TYPES.TYPECODE_DATE)
               THEN
                  DBMS_SQL.COLUMN_VALUE (v_cur_hdl, i, refDate);
                  outputRow.COLVAL :=
                     TO_CHAR (refDate, 'DD-MON-YYYY HH12.MI.SS AM');
                  outputRow.COLVAL_DT := refDate;
               ELSIF desctab (i).col_type IN
                        (181,
                         DBMS_TYPES.TYPECODE_TIMESTAMP,
                         DBMS_TYPES.TYPECODE_TIMESTAMP_LTZ,
                         DBMS_TYPES.TYPECODE_TIMESTAMP_TZ)
               THEN
                  DBMS_SQL.COLUMN_VALUE (v_cur_hdl, i, refTimeStamp);
                  outputRow.COLVAL :=
                     TO_CHAR (refTimeStamp, 'DD-MON-YYYY HH12.MI.SS AM');
                  outputRow.COLVAL_TS := refTimeStamp;
               ELSIF desctab (i).col_type = DBMS_TYPES.TYPECODE_NUMBER
               THEN
                  DBMS_SQL.COLUMN_VALUE (v_cur_hdl, i, refNum);
                  outputRow.COLVAL := TO_CHAR (refNum);
               ELSIF desctab (i).col_type = DBMS_TYPES.TYPECODE_CLOB
               THEN
                  DBMS_SQL.COLUMN_VALUE (v_cur_hdl, i, refClob);
                  outputRow.COLVAL := TO_CHAR (refClob);
                  outputRow.COLVAL_CL := refClob;
               ELSE
                  DBMS_SQL.COLUMN_VALUE (v_cur_hdl, i, refVarchar);
                  outputRow.COLVAL := refVarchar;
               END IF;

               INSERT INTO MY_RULE_OUTPUT
                    VALUES outputRow;
            END LOOP;
         ELSE
            EXIT;
         END IF;

         rowCount := rowCount + 1;
      END LOOP;

      DBMS_SQL.CLOSE_CURSOR (v_cur_hdl);
   EXCEPTION
      WHEN OTHERS
      THEN
         IF DBMS_SQL.IS_OPEN (v_cur_hdl)
         THEN
            DBMS_SQL.CLOSE_CURSOR (v_cur_hdl);
         END IF;

         MY_LOGGER.LOG ('MY_RULESENGINE.exeRuleQryAndGenOutput',
                          'ERR',
                          SQLCODE,
                          SQLERRM,
                          '-',
                          aLOGID,
                          'Error');
         RAISE;
   END exeRuleQryAndGenOutput;
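
   /***************************************************************************
   * TYPE      : FUNCTION
   * PURPOSE   : Evaluate the break-condition rule query for a RuleSet
   * PROCESS   : Parses and executes the query; returns TRUE when the first
   *             column of the first row is one of Y/YES/T/TRUE/1, and FALSE
   *             when it is anything else or the query returns no rows
   *****************************************************************************/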

   FUNCTION shouldBreak (pAppId     MY_RULE.APPID%TYPE,
                         pQuery     CLOB,
                         pExecId    MY_RULE_OUTPUT.EXEID%TYPE)
      RETURN BOOLEAN
   AS
      currParsedRuleQry   CLOB := NULL;
      currRow             RowData;
   BEGIN
      --DBMS_OUTPUT.PUT_LINE ('********* PROC :shouldBreak');

      MY_LOGGER.LOG ('MY_RULESENGINE.shouldBreak',
                       'DEB',
                       '0',
                       'Executing breakingConditionQuery:' || pQuery,
                       'ExecId',
                       pExecId,
                       aLOGID,
                       'Normal');
      currParsedRuleQry := parseQuery (pAppId, pQuery, pExecId);

      BEGIN
         SELECT * INTO currRow FROM TABLE (exeAnyQuery (currParsedRuleQry));

         IF (UPPER (TRIM (currRow (1))) IN ('Y', 'YES', 'T', 'TRUE', '1'))
         THEN
            RETURN TRUE;
         ELSE
            RETURN FALSE;
         END IF;
      EXCEPTION
         WHEN NO_DATA_FOUND
         THEN
            RETURN FALSE;
         WHEN OTHERS
         THEN
            MY_LOGGER.LOG ('MY_RULESENGINE.shouldBreak',
                             'ERR',
                             SQLCODE,
                             SQLERRM,
                             pExecId,
                             aLOGID,
                             'Error');
            RAISE;
      END;
   END shouldBreak;
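
   /***************************************************************************
   * TYPE      : FUNCTION
   * PURPOSE   : Return TRUE when the parsed query text begins with SELECT
   *             (case-insensitive); non-select rules are routed to
   *             exeAnyNonSelectQuery instead
   *****************************************************************************/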

   FUNCTION isSelectQuery (pParsedQuery CLOB)
      RETURN BOOLEAN
   AS
   BEGIN
      --DBMS_OUTPUT.PUT_LINE ('********* PROC :isSelectQuery');

      IF (REGEXP_INSTR (TRIM (pParsedQuery),
                        'select',
                        1,
                        1,
                        0,
                        'im')) = 1
      THEN
         RETURN TRUE;
      ELSE
         RETURN FALSE;
      END IF;
   END isSelectQuery;

   PROCEDURE exeAnyNonSelectQuery (
      pAppId           MY_RULE.APPID%TYPE,
      exeId            MY_RULE_OUTPUT.EXEID%TYPE,
      ruleId           MY_RULE_OUTPUT.RULE_ID%TYPE,
      ruleSetId        MY_RULE_OUTPUT.RULESET_ID%TYPE,
      parsedRuleQry    CLOB)
   AS
   BEGIN
      --DBMS_OUTPUT.PUT_LINE ('********* PROC :exeAnyNonSelectQuery');
      MY_LOGGER.LOG ('MY_RULESENGINE.exeAnyNonSelectQuery',
                       'ERR',
                       '0',
                       'Executing exeAnyNonSelectQuery:' || parsedRuleQry,
                       exeId,
                       ruleSetId,
                       ruleId,
                       aLOGID,
                       'Normal');

      EXECUTE IMMEDIATE TO_CHAR (parsedRuleQry);
   EXCEPTION
      WHEN OTHERS
      THEN
         MY_LOGGER.LOG ('MY_RULESENGINE.exeAnyNonSelectQuery',
                          'ERR',
                          SQLCODE,
                          SQLERRM,
                          exeId,
                          ruleSetId,
                          ruleId,
                          aLOGID,
                          'Error');
         RAISE;
   END exeAnyNonSelectQuery;
END MY_RULESENGINE;
/

--------------------------------------------------------------------

Test Run / Sample Program


/**
DMLs
***/
INSERT INTO MY_LOG_PROPERTIES (LOGGER,
                                 LOGLEVEL,
                                 CREATEDBY,
                                 CREATEDDT,
                                 UPDATEDDT,
                                 UPDATEDBY)
    VALUES ('*',
             'ON',
             'Agilan',
             SYSDATE,
             SYSDATE,
             'Agilan');

INSERT INTO MY_LOG_PROPERTIES (LOGGER,
                                 LOGLEVEL,
                                 CREATEDBY,
                                 CREATEDDT,
                                 UPDATEDDT,
                                 UPDATEDBY)
     VALUES ('MY_RULESENGINE',
             'DEB',
             'Agilan',
             SYSDATE,
             SYSDATE,
             'Agilan');

---------------

INSERT INTO MY_RULE_EXE_CONFIG (ID,
                                  APPID,
                                  RULESET_ID,
                                  BREAK_CONDN_BEF_AFT,
                                  BREAKING_RULEID,
                                  UPDDT)
     VALUES (MY_exe_rule_config_seq.NEXTVAL,
             'YourApplicationID',
             'YourRuleSetId',
             'AFT',
             'breakOnFailure',
             SYSDATE);

INSERT INTO MY_RULE_BINDVARIABLES (APPID,
                                     VARIABLENAME,
                                     BIND_QUERY,
                                     INC_TYPE,
                                     CACHE_WITHIN_EXEC,
                                     UPDDT)
     VALUES (
               'YourApplicationID',
               '$v.input_param1_from_caller$',
               'SELECT value FROM MY_RE_INPUT_PARAM WHERE EXEID=$p.pExecId$  AND upper(KEY)=''PARAMETERNAME_1''',
               'EQ',
               'N',
               SYSDATE);
              
INSERT INTO MY_RULE_BINDVARIABLES (APPID,
                                     VARIABLENAME,
                                     BIND_QUERY,
                                     INC_TYPE,
                                     CACHE_WITHIN_EXEC,
                                     UPDDT)
     VALUES (
               'YourApplicationID',
               '$v.input_param2_list_from_caller$',
               'SELECT value FROM MY_RE_INPUT_PARAM WHERE EXEID=$p.pExecId$  AND upper(KEY)=''PARAMETERNAME_2_LIST''',
               'IN',
               'N',
               SYSDATE);
              
INSERT INTO MY_RULE_BINDVARIABLES (APPID,
                                     VARIABLENAME,
                                     BIND_QUERY,
                                     INC_TYPE,
                                     CACHE_WITHIN_EXEC,
                                     UPDDT)
     VALUES (
               'YourApplicationID',
               '$v.current_date$',
               'SELECT sysdate from dual',
               'EQ',
               'Y',
               SYSDATE);

INSERT INTO MY_RULE_BINDVARIABLES (APPID,
                                     VARIABLENAME,
                                     BIND_QUERY,
                                     INC_TYPE,
                                     CACHE_WITHIN_EXEC,
                                     UPDDT)
     VALUES (
               'YourApplicationID',
               '$v.somevariable1$',
               'SELECT ''data'' from dual',
               'EQ',
               'N',
               SYSDATE);
              
INSERT INTO MY_RULE (ID,
                       APPID,
                       RULE_ID,
                       RULESET_ID,
                       RULE_QUERY,
                        UPDDT)
     VALUES (MY_rule_seq.NEXTVAL,
             'YourApplicationID',
             'breakOnFailure',
             'YourRuleSetId',
             'select ''1'' neverbreak from dual where 1=2 ',
             SYSDATE);

INSERT INTO MY_RULE (ID,
                       APPID,
                       RULE_ID,
                       RULESET_ID,
                       RULE_QUERY,
                        UPDDT)
     VALUES (
               MY_rule_seq.NEXTVAL,
               'YourApplicationID',
               'MyRuleQuery1',
               'YourRuleSetId',
               ' select col1,col2,col3 from some_table where some_col = $v.somevariable1$',
               SYSDATE);

INSERT INTO MY_RULE (ID,
                       APPID,
                       RULE_ID,
                       RULESET_ID,
                       RULE_QUERY,
                        UPDDT)
     VALUES (
               MY_rule_seq.NEXTVAL,
               'YourApplicationID',
               'MyRuleQuery2',
               'YourRuleSetId',
               '  select col1,col2,col3 from some_other_table where some_date_col = $v.current_date$ ',
               SYSDATE);

INSERT INTO MY_RULE (ID,
                       APPID,
                       RULE_ID,
                       RULESET_ID,
                       RULE_QUERY,
                        UPDDT)
     VALUES (
               MY_rule_seq.NEXTVAL,
               'YourApplicationID',
               'MyRuleQuery3',
               'YourRuleSetId',
               '  select col1,col2,col3 from some_other_table where some_col = $v.input_param1_from_caller$ ',
               SYSDATE);

INSERT INTO MY_RULE (ID,
                       APPID,
                       RULE_ID,
                       RULESET_ID,
                       RULE_QUERY,
                        UPDDT)
     VALUES (
               MY_rule_seq.NEXTVAL,
               'YourApplicationID',
               'MyRuleQuery4',
               'YourRuleSetId',
               '  select col1,col2,col3 from some_other_table where some_col in $v.input_param2_list_from_caller$ ',
               SYSDATE);
              
-----------------------
/** Testing the proc **/

begin
    /** Insert input_param and execute the proc in the same session **/
    insert into MY_RE_INPUT_PARAM (exeid,key,value) values ( 1, 'PARAMETERNAME_1','apple');
    insert into MY_RE_INPUT_PARAM (exeid,key,value) values ( 1, 'PARAMETERNAME_2_LIST','apple');
    insert into MY_RE_INPUT_PARAM (exeid,key,value) values ( 1, 'PARAMETERNAME_2_LIST','mango');
    insert into MY_RE_INPUT_PARAM (exeid,key,value) values ( 1, 'PARAMETERNAME_2_LIST','banana');
    insert into MY_RE_INPUT_PARAM (exeid,key,value) values ( 1, 'PARAMETERNAME_2_LIST','avacado');

    MY_RULESENGINE.pub_fireRules('YourApplicationID','YourRuleSetId',1);
   
end;
/
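
After the run, the caller pulls the results from the output table (a sketch; exeid '1' matches the test block above):

select rule_id, recid, colidx, colname, colval
  from MY_RULE_OUTPUT
 where appid = 'YourApplicationID'
   and ruleset_id = 'YourRuleSetId'
   and exeid = '1'
 order by rule_id, recid, colidx;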


HOW TO USE ORDER BY CLAUSE INSIDE UNION ALL QUERY


HOW TO USE ORDER BY CLAUSE INSIDE UNION ALL QUERY
(USING ORDER BY CLAUSE IN EACH INDIVIDUAL QUERY & JOINING THEM USING UNION ALL)
Author JP Vijaykumar
Date March 8th 2016

create table temp_jp(id number,name varchar2(20));
insert into temp_jp values(1,'Veeksha');
insert into temp_jp values(2,'Saketharama');
insert into temp_jp values(3,'Vineela');
commit;

SQL> select * from temp_jp;

ID NAME
---------- --------------------
1 Veeksha
2 Saketharama
3 Vineela
--Here I am using two select queries with an ORDER BY clause and joining them with a UNION ALL.

SQL> select * from temp_jp where name like 'S%' order by 1
  2  union all
  3  select * from temp_jp where name like 'V%' order by 1;
union all
*
ERROR at line 2:
ORA-00933: SQL command not properly ended

--I modified the query as shown below and it executed successfully.

SQL> with t1 as (select * from temp_jp where name like 'S%' order by name),
  2  t2 as (select * from temp_jp where name like 'V%' order by name)
  3  select * from t1
  4  union all
  5  select * from t2;

ID NAME
---------- --------------------
2 Saketharama
1 Veeksha
3 Vineela

--For readability, I want to insert a blank line between the two queries.

SQL> with t1 as (select * from temp_jp where name like 'S%' order by name),
2 t2 as (select * from temp_jp where name like 'V%' order by name)
3 select * from t1
4 union all
5 select null from dual --NEED TO INSERT A BLANK LINE INBETWEEN
6 union all
7 select * from t2;
select null from dual --NEED TO INSERT A BLANK LINE INBETWEEN
*
ERROR at line 5:
ORA-01789: query block has incorrect number of result columns

--I need to select the same number of null values from DUAL as the number of columns
--selected in the other queries.

SQL> with t1 as (select * from temp_jp where name like 'S%' order by name),
  2  t2 as (select * from temp_jp where name like 'V%' order by name)
  3  select * from t1
  4  union all
  5  select null,null from dual --NEED TO SELECT EQUAL NUMBER OF NULL COLUMNS, AS WERE SELECTED IN OTHER QUERIES
  6  union all
  7  select * from t2;

ID NAME
---------- --------------------
2 Saketharama

1 Veeksha
3 Vineela


4 rows selected.

--Here I generated two sql queries with ORDER BY clauses, joined them with UNION ALL,
--and separated the two queries with a blank line.
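
As a side note (a sketch of an alternative, not part of the original demo): when the goal is simply "each subset sorted, subsets kept apart", a single outer ORDER BY over a discriminator column gives the same ordering without the WITH clauses:

--sketch: the discriminator column 'grp' keeps the subsets apart; one outer ORDER BY sorts within them
select id, name
  from (select 1 grp, id, name from temp_jp where name like 'S%'
        union all
        select 2 grp, id, name from temp_jp where name like 'V%')
 order by grp, name;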

Happy scripting.

Exadata – Clone RAC RDBMS Home


Introduction

Exadata Database Machine is an ideal platform for database consolidation. With multiple databases running on the same Exadata DBM, you may want to isolate the Oracle software binaries. There can be only one Grid Infrastructure home on a database server, but you can install additional Oracle RDBMS homes based on your requirements. All the Oracle RDBMS homes share the same common Grid home.

To separate database administration for different databases, create a separate operating system user account, operating system group and database ORACLE_HOME for each database.
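
For illustration, a hedged sketch of what creating such a dedicated owner for a second home could look like (the group/user names and IDs below are hypothetical examples, not values from this environment):

# hypothetical names/IDs - adjust to your own standards
[root@oraclouddbadm01 ~]# dcli -g ~/dbs_group -l root 'groupadd -g 54330 dba2'
[root@oraclouddbadm01 ~]# dcli -g ~/dbs_group -l root 'useradd -u 54331 -g oinstall -G dba2 oracle2'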

Why Additional Oracle Home?

  • To minimize patching downtime, we can use private Oracle homes and patch them independently.
  • Provides more security control over database instances in a consolidated environment.

Limitations:

  • Using separate Oracle homes requires additional disk space.
  • Increased management complexity

See how to extend /u01 file system size on Exadata at:

http://www.toadworld.com/platforms/oracle/w/wiki/11281.extend-u01-file-system-on-exadata-compute-node

In this article I will demonstrate how to clone an 11.2.0.4 RDBMS home on Exadata. The procedure for cloning an RDBMS home is the same on Exadata and non-Exadata machines; the advantage on Exadata is that you can use the DCLI utility to run commands across all the nodes.

Assumptions

  • The oracle user is the owner of the RDBMS home
  • The oracle user password for the compute nodes is available
  • The root user password for the compute nodes is available
  • Oracle and root user equivalence is set up between the compute nodes
  • Sufficient space is available in the /u01 file system
  • No database outage is required
  • Oracle RAC software is already installed and configured, to be used as the source for cloning

Environment

Exadata Model

X-2 Half Rack HC 4TB

Exadata Components

Storage Cell (7), Compute node (4) & Infiniband Switch (2)

Exadata Storage cells

oracloudceladm01 – oracloudceladm07

Exadata Compute nodes

oraclouddbadm01 – oraclouddbadm04

Exadata Software Version

12.1.2.1.3

Exadata DB Version

11.2.0.4 BP16

Steps

  • Create a tar ball of existing Oracle RDBMS home

[oracle@oraclouddbadm01 ~]# cd /u01/app/oracle/product/11.2.0.4/dbhome_1

[oracle@oraclouddbadm01 ~]# tar -zcvf /u01/app/oracle/product/11.2.0.4/db11204.tgz .

[root@oraclouddbadm01 dbhome_1]# cd ..

[root@oraclouddbadm01 11.2.0.4]# ls -ltr

drwxrwxr-x 78 oracle oinstall       4096 Mar 23 10:00 dbhome_1

-rw-r--r-- 1 root   root     2666669855 Mar 24 23:55 db11204.tgz

  • Create new Oracle Home directory and copy the tar ball to other Compute nodes in the cluster. Here we are calling the NEW Oracle Home as dbhome_2

[root@oraclouddbadm01 11.2.0.4]# dcli -g ~/dbs_group -l root 'cd /u01/app/oracle/product/11.2.0.4/; mkdir dbhome_2'

[root@oraclouddbadm01 11.2.0.4]# dcli -g ~/dbs_group -l root 'ls -l /u01/app/oracle/product/11.2.0.4'

oraclouddbadm01: total 8

oraclouddbadm01: drwxrwxr-x 78 oracle oinstall 4096 Aug 6 08:59 dbhome_2

oraclouddbadm01: drwxrwxr-x 78 oracle oinstall 4096 Aug 9 10:15 dbhome_1

oraclouddbadm01: -rw-r--r-- 1 root   root     2666669855 Mar 24 23:55 db11204.tgz

oraclouddbadm02: total 8

oraclouddbadm02: drwxrwxr-x 78 oracle oinstall 4096 Aug 5 21:49 dbhome_2

oraclouddbadm02: drwxrwxr-x 78 oracle oinstall 4096 Aug 9 10:14 dbhome_1

oraclouddbadm03: total 8

oraclouddbadm03: drwxrwxr-x 78 oracle oinstall 4096 Aug 5 21:47 dbhome_2

oraclouddbadm03: drwxrwxr-x 78 oracle oinstall 4096 Aug 9 10:15 dbhome_1

oraclouddbadm04: total 8

oraclouddbadm04: drwxrwxr-x 78 oracle oinstall 4096 Aug 8 20:45 dbhome_2

oraclouddbadm04: drwxrwxr-x 78 oracle oinstall 4096 Aug 9 10:14 dbhome_1

[root@oraclouddbadm01 11.2.0.4]# scp db11204.tgz oraclouddbadm02:/u01/app/oracle/product/11.2.0.4/

[root@oraclouddbadm01 11.2.0.4]# scp db11204.tgz oraclouddbadm03:/u01/app/oracle/product/11.2.0.4/

[root@oraclouddbadm01 11.2.0.4]# scp db11204.tgz oraclouddbadm04:/u01/app/oracle/product/11.2.0.4/
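
As an aside, instead of running one scp per node, dcli's file-distribution options can push the tar ball to all remote nodes in a single command (a sketch; dcli -f copies to every node listed in the group file, so use a group file that excludes the local node or ignore the local copy):

[root@oraclouddbadm01 11.2.0.4]# dcli -g ~/dbs_group -l root -f /u01/app/oracle/product/11.2.0.4/db11204.tgz -d /u01/app/oracle/product/11.2.0.4/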

  • Extract the tar ball on all the Compute nodes

[root@oraclouddbadm01 11.2.0.4]# cd dbhome_2

[root@oraclouddbadm01 dbhome_2]# ls -ltr

[root@oraclouddbadm01 dbhome_2]# pwd

/u01/app/oracle/product/11.2.0.4/dbhome_2

Node 1:

[root@oraclouddbadm01 dbhome_2]# tar -zxvf /u01/app/oracle/product/11.2.0.4/db11204.tgz .

Node 2:

[root@oraclouddbadm02 ~]# cd /u01/app/oracle/product/11.2.0.4/dbhome_2

[root@oraclouddbadm02 dbhome_2]# tar -zxvf /u01/app/oracle/product/11.2.0.4/db11204.tgz .

Node 3:

[root@oraclouddbadm03 ~]# cd /u01/app/oracle/product/11.2.0.4/dbhome_2

[root@oraclouddbadm03 dbhome_2]# tar -zxvf /u01/app/oracle/product/11.2.0.4/db11204.tgz .

Node 4:

[root@oraclouddbadm04 ~]# cd /u01/app/oracle/product/11.2.0.4/dbhome_2

[root@oraclouddbadm04 dbhome_2]# tar -zxvf /u01/app/oracle/product/11.2.0.4/db11204.tgz .

  • Change the NEW Oracle home ownership on all the Compute Nodes

Node 1:

[root@oraclouddbadm01 11.2.0.4]# dcli -g ~/dbs_group -l root 'chown -R oracle:oinstall /u01/app/oracle/product/11.2.0.4/dbhome_2'

  • Verify the ownership of New Oracle Home

[root@oraclouddbadm01 11.2.0.4]# dcli -g ~/dbs_group -l root 'ls -l /u01/app/oracle/product/11.2.0.4'

oraclouddbadm01: total 8

oraclouddbadm01: drwxrwxr-x 78 oracle oinstall 4096 Aug 6 08:59 dbhome_2

oraclouddbadm01: drwxrwxr-x 78 oracle oinstall 4096 Aug 9 10:15 dbhome_1

oraclouddbadm02: total 8

oraclouddbadm02: drwxrwxr-x 78 oracle oinstall 4096 Aug 5 21:49 dbhome_2

oraclouddbadm02: drwxrwxr-x 78 oracle oinstall 4096 Aug 9 10:14 dbhome_1

oraclouddbadm03: total 8

oraclouddbadm03: drwxrwxr-x 78 oracle oinstall 4096 Aug 5 21:47 dbhome_2

oraclouddbadm03: drwxrwxr-x 78 oracle oinstall 4096 Aug 9 10:15 dbhome_1

oraclouddbadm04: total 8

oraclouddbadm04: drwxrwxr-x 78 oracle oinstall 4096 Aug 8 20:45 dbhome_2

oraclouddbadm04: drwxrwxr-x 78 oracle oinstall 4096 Aug 9 10:14 dbhome_1

  • Clone the Oracle Home.

Make sure Oracle Base, Oracle Home and PATH are set correctly.

[oracle@oraclouddbadm01 11.2.0.4]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_2

[oracle@oraclouddbadm01 11.2.0.4]$ export ORACLE_BASE=/u01/app/oracle

[oracle@oraclouddbadm01 11.2.0.4]$ export PATH=$PATH:$ORACLE_HOME/bin

Use the following command to Clone Oracle Home.

Run the clone.pl script, which performs the main Oracle RAC cloning tasks.

Run the script as oracle or the user that owns the Oracle RAC software.

Node 1:

[oracle@oraclouddbadm01 11.2.0.4]$ perl $ORACLE_HOME/clone/bin/clone.pl ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_2 ORACLE_HOME_NAME=OraDb11g_home2 ORACLE_BASE=/u01/app/oracle '-O"CLUSTER_NODES={oraclouddbadm01, oraclouddbadm02, oraclouddbadm03, oraclouddbadm04}"' '-O"LOCAL_NODE=oraclouddbadm01"'

./runInstaller -clone -waitForCompletion "ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_2" "ORACLE_HOME_NAME=OraDb11g_home2" "ORACLE_BASE=/u01/app/oracle" "CLUSTER_NODES={oraclouddbadm01, oraclouddbadm02, oraclouddbadm03, oraclouddbadm04}" "LOCAL_NODE=oraclouddbadm01" -silent -noConfig -nowait

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 24575 MB   Passed

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2015-03-25_02-33-10AM. Please wait ...Oracle Universal Installer, Version 11.2.0.4.0 Production

Copyright (C) 1999, 2013, Oracle. All rights reserved.

You can find the log of this install session at:

/u01/app/oraInventory/logs/cloneActions2015-03-25_02-33-10AM.log

.................................................................................................... 100% Done.

Installation in progress (Wednesday, March 25, 2015 2:33:16 AM CDT)

..............................................................................                                                 78% Done.

Install successful

Linking in progress (Wednesday, March 25, 2015 2:33:20 AM CDT)

Link successful

Setup in progress (Wednesday, March 25, 2015 2:33:37 AM CDT)

Setup successful

End of install phases.(Wednesday, March 25, 2015 2:33:59 AM CDT)

WARNING:

The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.

/u01/app/oracle/product/11.2.0.4/dbhome_2/root.sh #On nodes oraclouddbadm01

To execute the configuration scripts:

   1. Open a terminal window

   2. Log in as "root"

   3. Run the scripts in each cluster node

The cloning of OraDb11g_home2 was successful.

Please check '/u01/app/oraInventory/logs/cloneActions2015-03-25_02-33-10AM.log' for more details.

Repeat the Clone process on the remaining nodes in the Cluster

Node 2:

[oracle@oraclouddbadm02 11.2.0.4]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_2

[oracle@oraclouddbadm02 11.2.0.4]$ export ORACLE_BASE=/u01/app/oracle

[oracle@oraclouddbadm02 11.2.0.4]$ export PATH=$PATH:$ORACLE_HOME/bin

[oracle@oraclouddbadm02 11.2.0.4]$ perl $ORACLE_HOME/clone/bin/clone.pl ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_2 ORACLE_HOME_NAME=OraDb11g_home2 ORACLE_BASE=/u01/app/oracle '-O"CLUSTER_NODES={oraclouddbadm01, oraclouddbadm02, oraclouddbadm03, oraclouddbadm04}"' '-O"LOCAL_NODE=oraclouddbadm02"'

Node 3:

[oracle@oraclouddbadm03 11.2.0.4]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_2

[oracle@oraclouddbadm03 11.2.0.4]$ export ORACLE_BASE=/u01/app/oracle

[oracle@oraclouddbadm03 11.2.0.4]$ export PATH=$PATH:$ORACLE_HOME/bin

[oracle@oraclouddbadm03 11.2.0.4]$ perl $ORACLE_HOME/clone/bin/clone.pl ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_2 ORACLE_HOME_NAME=OraDb11g_home2 ORACLE_BASE=/u01/app/oracle '-O"CLUSTER_NODES={oraclouddbadm01, oraclouddbadm02, oraclouddbadm03, oraclouddbadm04}"' '-O"LOCAL_NODE=oraclouddbadm03"'

Node 4:

[oracle@oraclouddbadm04 11.2.0.4]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_2

[oracle@oraclouddbadm04 11.2.0.4]$ export ORACLE_BASE=/u01/app/oracle

[oracle@oraclouddbadm04 11.2.0.4]$ export PATH=$PATH:$ORACLE_HOME/bin

[oracle@oraclouddbadm04 11.2.0.4]$ perl $ORACLE_HOME/clone/bin/clone.pl ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_2 ORACLE_HOME_NAME=OraDb11g_home2 ORACLE_BASE=/u01/app/oracle '-O"CLUSTER_NODES={oraclouddbadm01, oraclouddbadm02, oraclouddbadm03, oraclouddbadm04}"' '-O"LOCAL_NODE=oraclouddbadm04"'

  • Run the root.sh with -silent option on all the nodes in the cluster to finish cloning

[root@oraclouddbadm01 11.2.0.4]# /u01/app/oracle/product/11.2.0.4/dbhome_2/root.sh -silent

[root@oraclouddbadm02 11.2.0.4]# /u01/app/oracle/product/11.2.0.4/dbhome_2/root.sh -silent

[root@oraclouddbadm03 11.2.0.4]# /u01/app/oracle/product/11.2.0.4/dbhome_2/root.sh -silent

[root@oraclouddbadm04 11.2.0.4]# /u01/app/oracle/product/11.2.0.4/dbhome_2/root.sh -silent

Verify Installation

  • View the log file for errors

[oracle@oraclouddbadm01 ~]$ view /u01/app/oraInventory/logs/cloneActions2015-03-25_02-33-10AM.log

[oracle@oraclouddbadm01 ~]$ ls -l oraInstall2015-03-25_02-33-10AM.err

-rw-r----- 1 oracle oinstall       0 Mar 25 02:33 oraInstall2015-03-25_02-33-10AM.err

No errors in the .err log file as the file size is 0 bytes.

  • Verify the inventory is updated with the new Oracle Home

[oracle@oraclouddbadm01 ~]$ cd /u01/app/oraInventory/ContentsXML

[oracle@oraclouddbadm01 ContentsXML]$ vi comps.xml

You should see a stanza similar to the following for the new Oracle home:

<HOME NAME="OraDb11g_home2" LOC="/u01/app/oracle/product/11.2.0.4/dbhome_2" TYPE="O" IDX="3">

   <NODE_LIST>

     <NODE NAME="oraclouddbadm01"/>

     <NODE NAME="oraclouddbadm02"/>

     <NODE NAME="oraclouddbadm03"/>

     <NODE NAME="oraclouddbadm04"/>

   </NODE_LIST>
</HOME>

  • Verify that the new home is using the RDS protocol

[root@oraclouddbadm01 ~]# su - oracle

[oracle@oraclouddbadm01 ~]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_2

[oracle@oraclouddbadm01 ~]$ echo $ORACLE_HOME

/u01/app/oracle/product/11.2.0.4/dbhome_2

[oracle@oraclouddbadm01 ~]$ which skgxpinfo

/u01/app/oracle/product/11.2.0.4/dbhome_2/bin/skgxpinfo

[oracle@oraclouddbadm01 ~]$ /u01/app/oracle/product/11.2.0.4/dbhome_2/bin/skgxpinfo -v

Oracle RDS/IP (generic)

If the output shows anything other than RDS, the new home is not using the RDS Exadata protocol for communication.

After the software is installed, you should run skgxpinfo from your new $ORACLE_HOME/bin directory to ensure that the binaries are compiled using the Reliable Datagram Sockets protocol. If not, relink your Oracle binary by issuing the following command:

[oracle@oraclouddbadm01 ~]$ cd $ORACLE_HOME/bin

[oracle@oraclouddbadm01 ~]$ make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk ipc_rds ioracle

It is recommended to use RDS on Exadata, as it runs over the InfiniBand network, which provides greater bandwidth and lower latency.

Conclusion

In this article we have learned how to clone a RAC RDBMS home on Exadata. Cloning is the easiest and fastest way of creating a new home.

 

NoSQL & Hadoop


Wiki articles on how to use Oracle with NoSQL and Hadoop will be found here.


Oracle 12c RAC: Introduction to Grid Infrastructure Management Repository (GIMR)


What is Grid Infrastructure Management Repository (GIMR)

The Oracle Grid Infrastructure Management Repository is a container (store) used to preserve diagnostic information collected by the Cluster Health Monitor (i.e. CHM/OS or ora.crf), as well as other information related to Oracle Database QoS Management, Rapid Home Provisioning, and so on.

However, it is primarily used to maintain diagnostic data collected by the Cluster Health Monitor (CHM), which detects and analyzes operating system (OS) and Clusterware (GI) resource failures and degradation.

Brief about Cluster Health Monitor (CHM)

Cluster Health Monitor is an Oracle Clusterware (GI) component that monitors and analyzes Clusterware and operating system resources, and collects information related to any failure or degradation of those resources. CHM runs as a Clusterware resource and is identified by the name ora.crf. The status of the CHM resource can be queried using the following command:

---// syntax to check status of cluster health monitor //---
$GRID_HOME/bin/crsctl status res ora.crf -init

Example:

---// checking status of CHM //---
myracserver2 {/home/oracle}: crsctl status resource ora.crf -init
NAME=ora.crf
TYPE=ora.crf.type
TARGET=ONLINE
STATE=ONLINE on myracserver2

CHM makes use of two services to collect the diagnostic data as mentioned below

  1. System Monitor Service (osysmond): The system monitor service (osysmond) is a real-time monitoring and operating system metric collection service that runs on each cluster node. The collected metrics are then forwarded to the cluster logger service (ologgerd), which stores the data in the Grid Infrastructure Management Repository (GIMR) database.
  2. Cluster Logger Service (ologgerd): In a cluster, there is one cluster logger service (ologgerd) per 32 nodes. Additional logger services are spawned for every additional 32 nodes. As mentioned earlier, the cluster logger service (ologgerd) is responsible for persisting the data collected by the system monitor service (osysmond) in the repository database. If the logger service fails and is not able to come up after a fixed number of retries, Oracle Clusterware will relocate and start the service on a different node.

Example:

In the following two node cluster (myracserver1 and myracserver2), we have the system monitor service (osysmond) running on both myracserver1 and myracserver2 where as the cluster logger service (ologgerd) is running just on myracserver2 (since we can have only one logger service per 32 cluster nodes).

---// we have a two node cluster //---
myracserver2 {/home/oracle}: olsnodes
myracserver1
myracserver2

---// system monitor service running on first node //---
myracserver1 {/home/oracle}: ps -ef | grep osysmond
oracle 24321 31609 0 03:23 pts/0 00:00:00 grep osysmond
root    2529     1 0 Aug27 ?     00:07:48 /app/grid/12.1.0.2/bin/osysmond.bin

---// system monitor service running on second node //---
myracserver2 {/home/oracle}: ps -ef | grep osysmond
oracle 24321 31609 0 03:25 pts/0 00:00:00 grep osysmond
root    2526     1 0 Aug27 ?     00:07:20 /app/grid/12.1.0.2/bin/osysmond.bin

---// cluster logger service running on second node //---
myracserver2 {/home/oracle}: ps -ef | grep ologgerd
oracle 25874 31609 0 03:27 pts/0 00:00:00 grep ologgerd
root   30748     1 1 Aug27 ?     00:12:31 /app/grid/12.1.0.2/bin/ologgerd -M -d /app/grid/12.1.0.2/crf/db/myracserver2

---// cluster logger service not running on first node //---
myracserver1 {/home/oracle}: ps -ef | grep ologgerd
oracle 3519 1948 0 03:27 pts/1 00:00:00 grep ologgerd

Evolution of diagnostic repository with 12c

Prior to Oracle Database 12c, the Clusterware diagnostic data was managed in a Berkeley DB (BDB), and the related Berkeley database files were stored by default under the $GRID_HOME/crf/db location.

Example:

---// Clusterware diagnostic repository in 11g //---
racserver1_11g {/home/oracle}: oclumon manage -get reppath
CHM Repository Path = /app/grid/11.2.0.4/crf/db/racserver1_11g
Done

Oracle has taken a step further with Oracle Database 12c and replaced the Berkeley DB with a single-instance Oracle 12c container database (having a single pluggable database) called the management database (MGMTDB), with its own dedicated listener (MGMTLSNR). This database is completely managed by the Clusterware (GI) and runs as a single-instance database regardless of the number of cluster nodes. Additionally, since MGMTDB is a single-instance database managed by the Clusterware (GI), if the hosting node goes down the database is automatically failed over to another node by the Clusterware (GI).

GIMR in Oracle Database 12.1.0.1

While installing the Clusterware (GI) software in Oracle Database 12.1.0.1, it was optional to install the Grid Infrastructure Management Repository database (MGMTDB). If not installed, Oracle Clusterware (GI) features such as Cluster Health Monitor (CHM) which depend on it will be disabled.

GIMR in Oracle Database 12.1.0.2

Oracle has now made it mandatory to install the Grid Infrastructure Management Repository database (MGMTDB) as part of the Clusterware (GI) installation starting with Oracle Clusterware version 12.1.0.2. We no longer have the option to opt out of installing MGMTDB during the Clusterware (GI) installation.

Overall framework of GIMR

The following diagram depicts a brief architecture/framework of the Grid Infrastructure Management Repository (GIMR) along with its related components. Considering an N-node cluster, we have the GIMR (MGMTDB/MGMTLSNR) running only on a single node, with one cluster logger service (ologgerd) running per 32 nodes and one system monitor service (osysmond) running on every node.

Apart from these integrated components, we have the optional RHP (Rapid Home Provisioning) clients which may communicate with the GIMR (MGMTDB/MGMTLSNR) for persisting/querying metadata related to Oracle Rapid Home Provisioning. We also have the Trace File Analyzer (tfactl) which can communicate with the GIMR (MGMTDB/MGMTLSNR) to query the diagnostic data stored (persisted by cluster logger service) in the repository.

[Figure: GIMR_12c - GIMR framework and related components]

When the node hosting the GIMR repository fails, all the repository resources (MGMTDB/MGMTLSNR) automatically fail over to another available cluster node, as depicted in the following diagram.

[Figure: GIMR_12c_relocate - repository resources relocating on node failure]

Note: Although the diagram shows the repository (MGMTDB/MGMTLSNR) and the cluster logger service (ologgerd) relocating to the same node upon failure (for representation purposes), the cluster logger service (ologgerd) relocation is independent of the repository (MGMTDB/MGMTLSNR) and can relocate to any available cluster node.

GIMR space requirement

The average growth of the repository is approximately 650-750 MB; the total space requirement depends on the retention desired for the repository. For example, at the default retention of 3 days, a 4-node cluster would need an approximate size of 5.9-6.8 GB.

Where the cluster has more than 4 nodes, an additional 500 MB is required for each additional cluster node.

Here are a few test cases that were performed against a two-node cluster to find the size requirement:

RETENTION (IN DAYS)    SPACE REQUIRED (IN MB)
3 days                 3896 MB
7 days                 9091 MB
10 days                12986 MB
30 days                38958 MB
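
As a rough cross-check (an inference from the table, not an official sizing formula): these two-node figures work out to roughly 650 MB per node per day. For example, 2 nodes x 7 days x ~650 MB ≈ 9100 MB, which matches the 7-day row above.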

GIMR database (MGMTDB) location

Starting with Oracle Database 12c, the GIMR database (MGMTDB) is by default created in the same file system/ASM disk group as the OCR or voting disks. During the installation of the Clusterware (GI) binaries, the OUI fetches the locations (ASM disk group/file system) of the OCR and voting disks and uses the first location to create the datafiles for the MGMTDB database.

For example, if we have the following locations for OCR or VOTING

---// voting disk location //---
myracserver2 {/home/oracle}: crsctl query css votedisk
##  STATE    File Universal Id                File Name                              Disk group
--  -----    -----------------                ---------                              ----------
 1. ONLINE   38aaf08ea3c74ffabfd258876dd6f97c (/data/clusterfiles/copy1/VOTE-disk01) []
 2. ONLINE   97d73bdbe42c4fa4bfa3c3cb7d741583 (/data/clusterfiles/copy2/VOTE-disk02) []
 3. ONLINE   00b6b258e0724f6cbf1dc6a03d15fd87 (/data/clusterfiles/copy3/VOTE-disk03) []
Located 3 voting disk(s).

Oracle Universal Installer (OUI) will choose the first location, i.e. (/data/clusterfiles/copy1), to create the datafiles for the repository database MGMTDB. This can be a problem if we have limited space available on the underlying file system and we want a higher retention for the diagnostic data in the repository. It also has the potential to impact OCR/voting disk availability.

We can, however, relocate the MGMTDB database to a different storage location manually as per MOS Note 1589394.1, or using the MDBUtil tool as per MOS Note 2065175.1.

GIMR Clusterware (GI) components

With the introduction of the repository database MGMTDB, we have now two additional components included in the Clusterware stack and these are ora.mgmtdb (repository database resource) and ora.MGMTLSNR (repository database listener) as shown below:

---// GIMR Clusterware resources //---
myracserver2 {/home/oracle}: crsstat | grep -i mgmt
ora.MGMTLSNR   mgmtlsnr   ONLINE   ONLINE on myracserver2   192.168.230.15 10.205.87.231
ora.mgmtdb     mgmtdb     ONLINE   ONLINE on myracserver2   Open

Unlike a generic Clusterware database or listener resource, these two resources have their own set of Clusterware commands as listed below.

---// list of srvctl commands available to operate on GIMR resources //---
myracserver2 {/home/oracle}: srvctl -h | grep -i mgmt | sort | awk -F ":" '{print $2}'
srvctl add mgmtdb [-domain ]
srvctl add mgmtlsnr [-endpoints "[TCP
srvctl config mgmtdb [-verbose] [-all]
srvctl config mgmtlsnr [-all]
srvctl disable mgmtdb [-node ]
srvctl disable mgmtlsnr [-node ]
srvctl enable mgmtdb [-node ]
srvctl enable mgmtlsnr [-node ]
srvctl getenv mgmtdb [-envs "[,...]"]
srvctl getenv mgmtlsnr [ -envs "[,...]"]
srvctl modify mgmtdb [-pwfile ] [-spfile ]
srvctl modify mgmtlsnr -endpoints "[TCP
srvctl relocate mgmtdb [-node ]
srvctl remove mgmtdb [-force] [-noprompt] [-verbose]
srvctl remove mgmtlsnr [-force]
srvctl setenv mgmtdb {-envs "=[,...]" | -env ""}
srvctl setenv mgmtlsnr { -envs "=[,...]" | -env "="}
srvctl start mgmtdb [-startoption ] [-node ]
srvctl start mgmtlsnr [-node ]
srvctl status mgmtdb [-verbose]
srvctl status mgmtlsnr [-verbose]
srvctl stop mgmtdb [-stopoption ] [-force]
srvctl stop mgmtlsnr [-node ] [-force]
srvctl unsetenv mgmtdb -envs "[,..]"
srvctl unsetenv mgmtlsnr -envs "[,...]"
srvctl update mgmtdb -startoption
myracserver2 {/home/oracle}:

Locating the GIMR database (MGMTDB)

The repository database (MGMTDB) always runs as a single-node instance. We can locate the node hosting MGMTDB in any of the following ways.

Using SRVCTL commands

To locate MGMTDB database: srvctl status mgmtdb

To locate MGMTLSNR listener: srvctl status mgmtlsnr

Example:

---// use srvctl to find MGMTDB //---
myracserver2 {/home/oracle}: srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node myracserver2

---// use srvctl to find MGMTLSNR //---
myracserver2 {/home/oracle}: srvctl status mgmtlsnr
Listener MGMTLSNR is enabled
Listener MGMTLSNR is running on node(s): myracserver2

Using CRSCTL commands

To locate MGMTDB database: $GRID_HOME/bin/crsctl status resource ora.mgmtdb

To locate MGMTLSNR listener: $GRID_HOME/bin/crsctl status resource ora.MGMTLSNR

Example:

---// use crsctl to find MGMTDB //---
myracserver2 {/home/oracle}: crsctl status resource ora.mgmtdb
NAME=ora.mgmtdb
TYPE=ora.mgmtdb.type
TARGET=ONLINE
STATE=ONLINE on myracserver2

---// use crsctl to find MGMTLSNR //---
myracserver2 {/home/oracle}: crsctl status resource ora.MGMTLSNR
NAME=ora.MGMTLSNR
TYPE=ora.mgmtlsnr.type
TARGET=ONLINE
STATE=ONLINE on myracserver2

Using OCLUMON utility

To locate the node hosting repository: $GRID_HOME/bin/oclumon manage -get master

Example:

---// use oclumon utility to locate node hosting GIMR //---
myracserver2 {/home/oracle}: oclumon manage -get master
Master = myracserver2

On the hosting node, we can identify the processes associated with these repository database resources as follows

---// locating MGMTDB on the master node //---
myracserver2 {/home/oracle}: ps -ef | grep pmon | grep MGMT
oracle 2891 1 0 06:35 ? 00:00:01 mdb_pmon_-MGMTDB

---// locating MGMTLSNR on the master node //---
myracserver2 {/home/oracle}: ps -ef | grep tns | grep MGMT
oracle 17666 1 0 05:23 ? 00:00:00 /app/grid/12.1.0.2/bin/tnslsnr MGMTLSNR -no_crs_notify -inherit

The repository database (MGMTDB) is by default associated with the SID "-MGMTDB", and an equivalent entry can be located in /etc/oratab, as shown below.

---// oratab entry for GIMR database MGMTDB //---
myracserver2 {/home/oracle}: grep -i mgmt /etc/oratab
-MGMTDB:/app/grid/12.1.0.2:N

Explore the GIMR database (MGMTDB)

As mentioned earlier in the introductory section, the repository database MGMTDB is created during the Clusterware installation process. It is a single-instance container database (CDB) and has only one pluggable database (PDB) associated with it, apart from the seed. The pluggable database is the actual repository holding all the diagnostic information. The container database (CDB) is named _MGMTDB, whereas the pluggable database (PDB) is named after the cluster (with any hyphen "-" in the cluster name replaced by an underscore "_").

Example:

---// GIMR container database MGMTDB information //---
SQL> select name,db_unique_name,host_name,cdb from v$database,v$instance;

NAME      DB_UNIQUE_NAME                 HOST_NAME            CDB
--------- ------------------------------ -------------------- ---
_MGMTDB   _mgmtdb                        myracserver2         YES

---// pluggable database holding the actual data //---
SQL> select CON_ID,DBID,NAME,OPEN_MODE from v$containers;

    CON_ID       DBID NAME                           OPEN_MODE
---------- ---------- ------------------------------ ----------
         1 1091149818 CDB$ROOT                       READ WRITE --> root container (CDB)
         2 1260861561 PDB$SEED                       READ ONLY  --> seed database
         3  521100791 MY_RAC_CLUSTER                 READ WRITE --> actual repository

---// actual repository is named after the cluster name //---
myracserver2 {/home/oracle}: olsnodes -c
my-rac-cluster

Note: Where the cluster name contains a hyphen (-), it is replaced by an underscore (_) when naming the pluggable database (PDB).

The management repository database (MGMTDB.[pdb_name]) is comprised of the following tablespaces.

TABLESPACE_NAME                FILE_NAME                                                                 SIZE_MB     MAX_MB AUT
------------------------------ ---------------------------------------------------------------------- ---------- ---------- ---
SYSAUX                         /data/clusterfiles/_MGMTDB/datafile/o1_mf_sysaux__2318922894015_.dbf          150 32767.9844 YES
SYSGRIDHOMEDATA                /data/clusterfiles/_MGMTDB/datafile/o1_mf_sysgridh__2318922910141_.dbf        100 32767.9844 YES
SYSMGMTDATA                    /data/clusterfiles/_MGMTDB/datafile/o1_mf_sysmgmtd__2318922860778_.dbf       2048          0 NO
SYSMGMTDATADB                  /data/clusterfiles/_MGMTDB/datafile/o1_mf_sysmgmtd__2318922925741_.dbf        100          0 NO
SYSTEM                         /data/clusterfiles/_MGMTDB/datafile/o1_mf_system__2318922876379_.dbf          160 32767.9844 YES
USERS                          /data/clusterfiles/_MGMTDB/datafile/o1_mf_users__2318922940656_.dbf             5 32767.9844 YES

Where:

TABLESPACE         DESCRIPTION
SYSMGMTDATA        The primary tablespace and the repository used to store the diagnostic data collected by the Cluster Health Monitor (CHM) tool.
SYSMGMTDATADB      Not much detail is available about this tablespace, and by default it doesn't contain any objects. However, I assume it has something to do with the Change Assistant.
SYSGRIDHOMEDATA    Used to store data related to Rapid Home Provisioning (in a cloud database context). By default, it doesn't contain any objects.

 

Note: In this example, the repository datafiles are not in the default location (the OCR/voting disk file system); I had relocated the repository to a different storage location.

 

These set of tablespaces are mapped to the following list of users

---// database users owning repository objects/data //---
SQL> select username,account_status,default_tablespace from dba_users
  2  where default_tablespace in ('SYSGRIDHOMEDATA','SYSMGMTDATA','SYSMGMTDATADB');

USERNAME       ACCOUNT_STATUS                   DEFAULT_TABLESPACE
-------------- -------------------------------- ------------------------------
GHSUSER        EXPIRED & LOCKED                 SYSGRIDHOMEDATA
CHM            OPEN                             SYSMGMTDATA     --> user mapped to the Cluster Health Monitor (CHM)
CHA            EXPIRED & LOCKED                 SYSMGMTDATADB

By default, only the CHM database account is unlocked; it is used by the Cluster Health Monitor (CHM) to store the Clusterware diagnostic data in the database. The GHSUSER account is used for Rapid Home Provisioning (in a cloud database context) and comes into the picture only when Rapid Home Provisioning is used. The CHA account is related to the Cluster Health Advisor (CHA), an enhanced version of the Cluster Health Monitor (CHM) that will be available in an upcoming release of Oracle Clusterware.

Conclusion

The Clusterware diagnostic repository has evolved a lot with Oracle Database 12c. Having a dedicated Oracle database as the repository brings more clarity in terms of how the diagnostic data is stored, and it opens up multiple ways to query that data. Oracle is likely to leverage the GIMR to store a variety of diagnostic and management data in upcoming releases.

With organizations rapidly moving to Oracle cloud infrastructure, this repository database is also going to be extensively used for storing the provisioning metadata used by cloud deployments.

Oracle 12c RAC: Quick guide to GIMR administration


Introduction

In my last article, we explored the architecture of the GIMR in 12c. This article describes the various options available to manage and maintain the Grid Infrastructure Management Repository (GIMR). Oracle provides a command-line utility called OCLUMON (Oracle Cluster Monitor), which is part of the CHM (Cluster Health Monitor) component and can be used to perform miscellaneous administrative tasks like changing the debug levels of logs, changing the repository size/retention, querying the repository path, etc.

Apart from the OCLUMON utility, we have a set of SRVCTL commands that can be used to perform various administrative tasks on the management repository resources. In the upcoming sections, we are going to explore both the OCLUMON and SRVCTL utilities for administering the GIMR repository and its resources.

How to find repository version

Cluster Health Monitor (CHM) is the primary component that collects Clusterware diagnostic data and persists it in the repository database (MGMTDB). Oracle provides a utility called OCLUMON, which can be used to manage the CHM components as well as the associated diagnostic repository. We can use the following command to find the version of the OCLUMON utility, which in turn tells us the version of CHM and its repository.

---// command to find OCLUMON version //---
$GRID_HOME/bin/oclumon version

Example:

---// checking CHM version //---
myracserver1 {/home/oracle}: oclumon version
Cluster Health Monitor (OS), Version 12.1.0.2.0 - Production
Copyright 2007, 2014 Oracle. All rights reserved.

How to find repository location

CHM persists the diagnostic data in the management repository database (MGMTDB), which consists of a set of datafiles. We can use the following OCLUMON command to locate the datafile in the MGMTDB database that is associated with the GIMR repository.

---// command to find repository path //---
$GRID_HOME/bin/oclumon manage -get reppath

Example:

---// locating GIMR repository path //---
myracserver2 {/home/oracle}: oclumon manage -get reppath
CHM Repository Path = /data/clusterfiles/_MGMTDB/datafile/o1_mf_sysmgmtd__374325064041_.dbf

From this output we can also verify that the file actually belongs to the pluggable database (PDB) created during the MGMTDB database creation.

---//
---// validating repository path against MGMTDB //---
---//
SQL> select con_id,name,open_mode
  2  from v$pdbs
  3  where con_id=
  4  (
  5  select con_id
  6  from v$datafile
  7  where name='/data/clusterfiles/_MGMTDB/datafile/o1_mf_sysmgmtd__374325064041_.dbf'
  8  );

    CON_ID NAME                           OPEN_MODE
---------- ------------------------------ ----------
         3 MY_RAC_CLUSTER                 READ WRITE

How to find repository size/retention

The diagnostic data in the GIMR repository database is retained based on the size/retention defined for the repository. Once the size/retention threshold is reached, the diagnostic data is overwritten. We can use the following OCLUMON command to find the current size of GIMR repository.

---// command to find repository size/retention //---
$GRID_HOME/bin/oclumon manage -get repsize

Example:

---// finding repository retention/size //---
myracserver2 {/home/oracle}: oclumon manage -get repsize
CHM Repository Size = 136320 seconds

Here is the catch. OCLUMON never shows the size of the repository in terms of storage units (KB/MB/GB); rather, it displays the size in terms of duration (in seconds). This duration indicates the retention time of the repository data. OCLUMON basically queries the size of the repository, determines how long data can be retained at that size, and displays that information to the user.

To know the actual size of the repository, we can query the database directly as shown below

---// query MGMTDB database to find repository size //---
SQL> alter session set container=MY_RAC_CLUSTER;

Session altered.

SQL> show con_name

CON_NAME
------------------------------
MY_RAC_CLUSTER

SQL> select TABLESPACE_NAME,FILE_NAME,BYTES/1024/1024 Size_MB,MAXBYTES/1024/1024 Max_MB,AUTOEXTENSIBLE
  2  from dba_data_files
  3  where file_name='/data/clusterfiles/_MGMTDB/datafile/o1_mf_sysmgmtd__374325064041_.dbf';

TABLESPACE_NAME  FILE_NAME                                                              SIZE_MB MAX_MB AUT
---------------- ---------------------------------------------------------------------- ------- ------ ---
SYSMGMTDATA      /data/clusterfiles/_MGMTDB/datafile/o1_mf_sysmgmtd__374325064041_.dbf     2048      0 NO

Note: Replace the container name with your cluster name and the file_name with the output of reppath.

We can see our repository is 2 GB in size and the datafile associated with the repository is not AUTOEXTENSIBLE.

Observation: Oracle by default creates the repository with a 2 GB size (136320-second retention) for a 2-node cluster, regardless of space availability on the underlying file system.

How to change repository size

We may want to retain the diagnostic data for a specific number of days. In that case, we can increase (change) the repository size to accommodate more diagnostic data using the following OCLUMON command.

---// command to change repository size //---
$GRID_HOME/bin/oclumon manage -repos changerepossize <size in MB>

Example:

---// changing repository size //---
myracserver2 {/home/oracle}: oclumon manage -repos changerepossize 2200
The Cluster Health Monitor repository was successfully resized. The new retention is 146400 seconds.

This command acts in dual mode: it first resizes the repository to the specified size (MB) and then recalculates the retention of the repository based on the new size. As we can see here, since we increased the size of the repository from 2048 MB (default) to 2200 MB, Oracle recalculated the retention against the new size and increased it from 136320 seconds (default) to 146400 seconds.

We can also validate the retention following a resize operation.

---// validating new repository size/retention //---
myracserver2 {/home/oracle}: oclumon manage -get repsize
CHM Repository Size = 146400 seconds

Internals on repository resize operation

What did Oracle do to the MGMTDB database during the resize operation? Well, here is what it did.

---//
---// impact of size change in the repository database //---
---//
SQL> select TABLESPACE_NAME,FILE_NAME,BYTES/1024/1024 Size_MB,MAXBYTES/1024/1024 Max_MB,AUTOEXTENSIBLE
  2  from dba_data_files
  3  where file_name='/data/clusterfiles/_MGMTDB/datafile/o1_mf_sysmgmtd__374325064041_.dbf';

TABLESPACE_NAME  FILE_NAME                                                              SIZE_MB MAX_MB AUT
---------------- ---------------------------------------------------------------------- ------- ------ ---
SYSMGMTDATA      /data/clusterfiles/_MGMTDB/datafile/o1_mf_sysmgmtd__374325064041_.dbf     2200      0 NO

It has resized the datafile in the database internally. We can also verify this by viewing the MGMTDB database alert log file.

How to change repository retention

Technically, there is no command available to change the retention of the data stored in the repository directly. However, there is an alternative way to do it. We can use the OCLUMON utility to check whether a desired retention can be set for the repository, using the following command.

---// command to check if a specific retention can be set //---
$GRID_HOME/bin/oclumon manage -repos checkretentiontime <new retention in seconds>

Example:

---// checking if retention 260000 secs can be set //---
myracserver2 {/home/oracle}: oclumon manage -repos checkretentiontime 260000
The Cluster Health Monitor repository is too small for the desired retention. Please first resize the repository to 3908 MB

I know, you have figured it out!

I wanted to change the retention of the repository to 260000 seconds. I used the command "oclumon manage -repos checkretentiontime 260000" to see whether that retention could be set. Oracle came back and asked me to first increase the size of the repository to 3908 MB in order to be able to set that retention.

Here is the simple interpretation. Changing the repository retention period is a two-phase process, as sketched after this list:

  1. Use checkretentiontime to find how much more space needs to be added to the repository to satisfy the desired retention.
  2. Use changerepossize to change the size of the repository in order to meet the desired retention.
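
Putting the two phases together, here is a sketch that reuses the 260000-second example from above (only the output of the first command is shown; the remaining output is omitted):

---// phase 1: check the desired retention and note the required size //---
myracserver2 {/home/oracle}: oclumon manage -repos checkretentiontime 260000
The Cluster Health Monitor repository is too small for the desired retention. Please first resize the repository to 3908 MB

---// phase 2: resize the repository to the reported size, then verify //---
myracserver2 {/home/oracle}: oclumon manage -repos changerepossize 3908
myracserver2 {/home/oracle}: oclumon manage -get repsize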

If the desired retention is less than the current retention, checkretentiontime will show output like the below:

---// desired retention is already supported //---
myracserver2 {/home/oracle}: oclumon manage -repos checkretentiontime 136320
The Cluster Health Monitor repository can support the desired retention for 2 hosts

How to purge repository data

There is no need to manually purge the repository, as this is automatically taken care of by the cluster logger service (ologgerd) based on the repository size and retention setup. However, if desired, we can simulate a purge of the repository by decreasing the repository size with the OCLUMON changerepossize command, as shown below.

---// trick to manually purge repository data //---
myracserver2 {/home/oracle}: oclumon manage -repos changerepossize 100
Warning: Entire data in Cluster Health Monitor repository will be deleted. Do you want to continue(Yes/No)? No
Operation aborted on user request

What we did here is attempt to decrease the size of the GIMR repository, which would in turn delete all the data stored in it. Once the data is purged, we can revert the repository size to the required value, as sketched below.
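
A sketch of the full purge-and-restore sequence built from the commands above (this time answering Yes at the warning; 2048 MB is the default size noted earlier):

---// shrink to purge, then restore the original size //---
myracserver2 {/home/oracle}: oclumon manage -repos changerepossize 100
myracserver2 {/home/oracle}: oclumon manage -repos changerepossize 2048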

How to locate cluster logger service

We know that the cluster logger service (ologgerd) of the Cluster Health Monitor (CHM) component is responsible for persisting the diagnostic data collected by the system monitor service (osysmond) in the repository (MGMTDB). There is one cluster logger service (ologgerd) running per 32 nodes in a cluster. We can use the following OCLUMON commands to query where the cluster logger services (ologgerd) are running.

---// commands to locate cluster logger services //---
$GRID_HOME/bin/oclumon manage -get alllogger -details   (lists all logger services available in the cluster)
$GRID_HOME/bin/oclumon manage -get mylogger             (lists the logger service for the current cluster node)

Example:

---// listing all logger services in the cluster //---
myracserver2 {/home/oracle}: oclumon manage -get alllogger -details
Logger = myracserver2
Nodes = myracserver1,myracserver2

In this particular example, there is only one cluster logger service (ologgerd) in my cluster, running on node myracserver2, and it is logging diagnostic data for nodes myracserver1 and myracserver2.

How to change logging level

We know that Cluster Health Monitor (CHM) monitors real-time operating system and Clusterware metrics and logs them in the GIMR repository database. By default, the CHM logging level is set to 1, which collects basic diagnostic data. At times we may need to change the CHM logging level to collect extended diagnostic data. That can be done using the following OCLUMON command.

---// command to change CHM logging levels //---
$GRID_HOME/bin/oclumon debug [log daemon module:log_level]

The supported daemons and their respective modules with log levels are listed below.

DAEMON     MODULE                            LOG LEVEL
osysmond   CRFMOND, CRFM, allcomp            0, 1, 2, 3
ologgerd   CRFLOGD, CRFLDREP, CRFM, allcomp  0, 1, 2, 3
client     OCLUMON, CRFM, allcomp            0, 1, 2, 3
all        allcomp                           0, 1, 2, 3

Example:

The following command sets the logging level of cluster logger service (ologgerd) to level 3

---// changing CHM loggerd logging to level 3 //---
myracserver2 {/home/oracle}: oclumon debug log ologgerd CRFLOGD:3
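
Once the extended data has been collected, the same syntax can be used to drop the logging back to the default level 1 noted earlier:

---// reverting CRFLOGD logging to the default level 1 //---
myracserver2 {/home/oracle}: oclumon debug log ologgerd CRFLOGD:1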

Manage repository resources with SRVCTL commands

With the introduction of the GIMR, two additional resources, ora.mgmtdb and ora.MGMTLSNR, are added to the Clusterware stack. Oracle provides a dedicated set of SRVCTL commands to monitor and manage these two new Clusterware resources. Following is the new set of SRVCTL commands specific to the GIMR resources (MGMTDB and MGMTLSNR).

 

---// list of srvctl commands available to operate on GIMR resources //---
myracserver2 {/home/oracle}: srvctl -h | grep -i mgmt | sort | awk -F ":" '{print $2}'
srvctl add mgmtdb [-domain ]
srvctl add mgmtlsnr [-endpoints "[TCP
srvctl config mgmtdb [-verbose] [-all]
srvctl config mgmtlsnr [-all]
srvctl disable mgmtdb [-node ]
srvctl disable mgmtlsnr [-node ]
srvctl enable mgmtdb [-node ]
srvctl enable mgmtlsnr [-node ]
srvctl getenv mgmtdb [-envs "[,...]"]
srvctl getenv mgmtlsnr [ -envs "[,...]"]
srvctl modify mgmtdb [-pwfile ] [-spfile ]
srvctl modify mgmtlsnr -endpoints "[TCP
srvctl relocate mgmtdb [-node ]
srvctl remove mgmtdb [-force] [-noprompt] [-verbose]
srvctl remove mgmtlsnr [-force]
srvctl setenv mgmtdb {-envs "=[,...]" | -env ""}
srvctl setenv mgmtlsnr { -envs "=[,...]" | -env "="}
srvctl start mgmtdb [-startoption ] [-node ]
srvctl start mgmtlsnr [-node ]
srvctl status mgmtdb [-verbose]
srvctl status mgmtlsnr [-verbose]
srvctl stop mgmtdb [-stopoption ] [-force]
srvctl stop mgmtlsnr [-node ] [-force]
srvctl unsetenv mgmtdb -envs "[,..]"
srvctl unsetenv mgmtlsnr -envs "[,...]"
srvctl update mgmtdb -startoption
myracserver2 {/home/oracle}:

Let's go through a few examples to get familiar with this new set of commands.

We can use the SRVCTL STATUS command to find the current status of repository database and listener as shown below.

 

---// checking MGMTDB status //---
myracserver2 {/home/oracle}: srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node myracserver2
---// checking MGMTLSNR status //---
myracserver2 {/home/oracle}: srvctl status mgmtlsnr
Listener MGMTLSNR is enabled
Listener MGMTLSNR is running on node(s): myracserver2

We can use the SRVCTL CONFIG commands to find out the current configuration of repository database and listener as shown below.

---// finding configuration of MGMTDB //---
myracserver2 {/home/oracle}: srvctl config mgmtdb
Database unique name: _mgmtdb
Database name:
Oracle home:
Oracle user: oracle
Spfile: /data/clusterfiles/_mgmtdb/spfile-MGMTDB.ora
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Type: Management
PDB name: my_rac_cluster
PDB service: my_rac_cluster
Cluster name: my-rac-cluster
Database instance: -MGMTDB
---// finding configuration of MGMTLSNR //---
myracserver2 {/home/oracle}: srvctl config MGMTLSNR
Name: MGMTLSNR
Type: Management Listener
Owner: oracle
Home: <CRS home>
End points: TCP:1521
Management listener is enabled.
Management listener is individually enabled on nodes:
Management listener is individually disabled on nodes:

Note: It is not recommended to modify the default configuration of MGMTDB. However, we may choose to modify the default configuration of  MGMTLSNR to change the listener port (by default listens on port 1521) if desired as shown below.

---// change listener port for MGMTLSNR //---
myracserver2 {/home/oracle}: srvctl modify MGMTLSNR -endpoints "TCP:1540"
---// validate new MGMTLSNR configuration //---
myracserver2 {/home/oracle}: srvctl config MGMTLSNR
Name: MGMTLSNR
Type: Management Listener
Owner: oracle
Home: <CRS home>
End points: TCP:1540
Management listener is enabled.
Management listener is individually enabled on nodes:
Management listener is individually disabled on nodes:

Similarly, we can use the other commands: SRVCTL MODIFY to change MGMTDB and MGMTLSNR properties, SRVCTL SETENV to set a specific environment for MGMTDB and MGMTLSNR, SRVCTL DISABLE to disable the MGMTDB and MGMTLSNR resources, SRVCTL REMOVE to remove MGMTDB and MGMTLSNR from the Clusterware stack, and so on; a small sketch follows.
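
For instance, a sketch using the disable/enable pair from the help listing above (illustrative only; disabling stops Clusterware from managing the repository resource until it is re-enabled):

---// sketch: disable and re-enable the repository database resource //---
myracserver2 {/home/oracle}: srvctl disable mgmtdb
myracserver2 {/home/oracle}: srvctl enable mgmtdb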

How to perform manual failover (relocation) of repository resources

The management repository resources (ora.mgmtdb and ora.MGMTLSNR) are entirely managed by the Clusterware stack, which takes care of failing the repository resources over to another available node when the hosting node fails. However, we can also manually fail these resources over to other cluster nodes when desired. We can make use of the SRVCTL RELOCATE MGMTDB command to relocate the repository database resources from one cluster node to another, as shown below.

---// command to relocate repository resources //---
srvctl relocate mgmtdb -node <target cluster node>


Example:

---// we have two node cluster with nodes myracserver1 and myracserver2 //---
myracserver2 {/home/oracle}: olsnodes
myracserver1
myracserver2

---// repository database resources are running on myracserver2 //---
myracserver2 {/home/oracle}: srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node myracserver2

myracserver2 {/home/oracle}: srvctl status mgmtlsnr
Listener MGMTLSNR is enabled
Listener MGMTLSNR is running on node(s): myracserver2

---// relocating repository database resources to myracserver1 //---
myracserver2 {/home/oracle}: srvctl relocate mgmtdb -node myracserver1

---// validate the repository resources are relocated //---
myracserver2 {/home/oracle}: srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node myracserver1

myracserver2 {/home/oracle}: srvctl status mgmtlsnr
Listener MGMTLSNR is enabled
Listener MGMTLSNR is running on node(s): myracserver1

Relocating the repository database MGMTDB also results in automatic relocation of the repository database listener, as seen in the previous example. This type of manual relocation is very useful during planned maintenance of the hosting cluster node.

Conclusion

In this article, we explored the various options available to administer and manage the Grid Infrastructure Management Repository, and saw a few tricks that can be used to alter the repository attributes/characteristics based on specific requirements. Oracle provides a rich set of commands to monitor and manage the repository and its associated Clusterware components.


Oracle 12c: Correct column positioning with invisible columns


Introduction

In one of my recent articles, I discussed invisible columns in Oracle Database 12c. I also mentioned that invisible columns can be used as a method to change the ordering of columns in a table.

In today's article, I will discuss the concept of changing the order (position) of table columns with the help of invisible columns. In my earlier post, we saw that when we add an invisible column or make a column invisible, it is not allocated a column ID (column position) unless it is made visible. Further, when we change an invisible column to visible, it is allocated a column ID (column position) and is placed (positioned) as the last column of the respective table.

We can use this fact to come up with a trick that is helpful for changing the column ordering in a given table. Let's go through a simple example to understand the trick and its effectiveness.

Change column order with invisible column

As part of our demonstration, I have created the following table with four columns COL1, COL3, COL4 and COL2 respectively in the noted order.

---//
---// Create table for demonstration //---
---//
SQL> create table TEST_TAB_INV_ORDR
  2  (
  3  COL1 number,
  4  COL3 number,
  5  COL4 number,
  6  COL2 number
  7  );

Table created.

---//
---// desc table to verify column positioning //---
---//
SQL> desc TEST_TAB_INV_ORDR
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 COL1                                               NUMBER
 COL3                                               NUMBER
 COL4                                               NUMBER
 COL2                                               NUMBER

Now, consider that we had actually created the table with incorrect column ordering and the columns should have been positioned in the order COL1, COL2, COL3 and COL4. We will use this example to understand how the invisible column feature can be utilized to correct column positions within a table.

So far, we know that an invisible column doesn't have a column position within a given table and is tracked internally by an internal ID. This means that when we change a visible column to invisible, the position allocated to that column is lost, and once we make the column visible again, the column is positioned as the last visible column. Let's use this fact as the foundation of our trick. Here is the trick:

  1. In the first step, we make all the table columns invisible except the intended first column. This causes all the other columns to lose their column position within the table. At this point, we have the first column already positioned in first place and all the other columns in the invisible state with no assigned column position.
  2. In the next step, we start changing the invisible columns back to visible. However, we make them visible in the order in which we want them to be positioned within the table. This works because, when we change an invisible column to visible, it is positioned as the last visible column.

Let's work on our example to get a better understanding of the trick outlined above.

In our example, the table TEST_TAB_INV_ORDR has columns positioned as COL1, COL3, COL4 and COL2. We want the columns to be positioned as COL1, COL2, COL3 and COL4. Let's make all the columns invisible except COL1, which we want positioned as the first column in the table.

 
---//
---// making all columns invisible except COL1 //---
---//
SQL>  alter table TEST_TAB_INV_ORDR modify COL3 invisible;

Table altered.

SQL> alter table TEST_TAB_INV_ORDR modify COL4 invisible;

Table altered.

SQL> alter table TEST_TAB_INV_ORDR modify COL2 invisible;

Table altered.

---//
---// verify column position post invisible operation //---
---// COL1 is left visible and is placed as first column //---
---//
SQL> set COLINVISIBLE ON
SQL> desc TEST_TAB_INV_ORDR
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 COL1                                               NUMBER
 COL3 (INVISIBLE)                                   NUMBER
 COL4 (INVISIBLE)                                   NUMBER
 COL2 (INVISIBLE)                                   NUMBER

As we can observe from the above output, we have the column COL1 already positioned as the first column in the table and all the other columns in the invisible state. As the next step in correcting the column ordering, let's start changing the invisible columns to visible. Remember, we want the columns ordered as COL1, COL2, COL3 and COL4. As we know, the moment we change an invisible column to visible it is positioned as the last visible column within the table, so we make the columns visible in the order COL2, COL3 and COL4.

Let's walk step by step through this process for better insight. COL1 is already positioned as the first column; we want COL2 to be positioned as the second column in the table. Let's change COL2 from invisible to visible, as shown below.

---//
---// making COL2 visible to position it as second column //---
---//
SQL> alter table TEST_TAB_INV_ORDR modify COL2 visible;

Table altered.

---//
---// verify column order post visible operation //---
---//
SQL> desc TEST_TAB_INV_ORDR
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 COL1                                               NUMBER
 COL2                                               NUMBER
 COL3 (INVISIBLE)                                   NUMBER
 COL4 (INVISIBLE)                                   NUMBER

The moment we changed COL2 to visible, it was positioned within the table as the last visible column. At this point, we have COL1 and COL2 correctly positioned as the first and second columns respectively. Let's change COL3 from invisible to visible to position it as the third column within the table, as shown below.

 
---//
---// making COL3 visible to position it as third column //---
---//
SQL> alter table TEST_TAB_INV_ORDR modify COL3 visible;

Table altered.

---//
---// verify column order post visible operation //---
---//
SQL> desc TEST_TAB_INV_ORDR
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 COL1                                               NUMBER
 COL2                                               NUMBER
 COL3                                               NUMBER
 COL4 (INVISIBLE)                                   NUMBER

Now, we have COL1, COL2 and COL3 correctly positioned as the first, second and third columns respectively. Let's change COL4 from invisible to visible to position it as the fourth (last) column within the table, as shown below.

---//
---// making COL4 visible to position it as fourth column //---
---//
SQL> alter table TEST_TAB_INV_ORDR modify COL4  visible;

Table altered.

---//
---// verify column order post visible operation //---
---//
SQL>  desc TEST_TAB_INV_ORDR
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 COL1                                               NUMBER
 COL2                                               NUMBER
 COL3                                               NUMBER
 COL4                                               NUMBER

Now, we have all the columns positioned correctly within the table. Simple, isn't it?

Here is a recap of the trick that we used to correct column positioning within the table:

  1. Leave the intended first column as visible and change all the other columns to invisible
  2. Start changing the invisible columns back to visible, in the order in which we want them positioned within the table (a condensed sketch follows this list).
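For quick reference, here is the whole recipe condensed into one pass. This is only a minimal sketch against a hypothetical table T, created with its columns in the order A, C, B, where the desired order is A, B, C.

---//
---// condensed sketch of the recipe (hypothetical table T) //---
---//
SQL> alter table T modify C invisible;  -- hide everything except A
SQL> alter table T modify B invisible;
SQL> alter table T modify B visible;    -- B re-appears as the second column
SQL> alter table T modify C visible;    -- C re-appears as the third column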

Why do it manually?

In the previous section, we have seen how we can utilize invisible columns as a trick to correct column positioning within a given table. I have come up with a PL/SQL script (procedure) which converts this trick into a simple algorithm and can be used for correcting column positioning within a given table.

Here is the PL/SQL procedure that I have written based on the trick stated in the previous section. You can refer to the in-line comments for a brief idea about its logic.

---//
---// PL/SQL procedure to correct column positioning using invisible columns //---
---//
create or replace procedure change_col_order
  (o_column_list varchar2, e_tab_name varchar2, t_owner varchar2)
is
  --- Custom column separator ---
  TB constant varchar2(1) := CHR(9);
  --- exception to handle non-existent columns ---
  col_not_found EXCEPTION;
  --- exception to handle column count mismatch ---
  col_count_mismatch EXCEPTION;
  --- flag to check column existence ---
  col_e number;
  --- variable to hold column count from the user given list ---
  col_count_p number;
  --- variable to hold column count from dba_tab_cols ---
  col_count_o number;
  --- variable to hold first column name ---
  col_start varchar2(200);
  --- cursor of column names parsed from the given column list ---
  cursor col_l is
    select regexp_substr(o_column_list,'[^,]+', 1, level) column_name
    from dual
    connect by regexp_substr(o_column_list,'[^,]+', 1, level) is not null;
  col_rec col_l%ROWTYPE;
begin
  --- fetching the intended first column name ---
  select substr(o_column_list,1,instr(o_column_list,',',1) - 1)
  into col_start from dual;
  --- fetching column count from user given column list ---
  select count(*) into col_count_p from dual
  connect by regexp_substr(o_column_list,'[^,]+', 1, level) is not null;
  --- fetching column count from dba_tab_cols ---
  select count(*) into col_count_o from dba_tab_cols
  where owner=t_owner and table_name=e_tab_name and hidden_column='NO';
  --- validating column counts ---
  if col_count_p != col_count_o then
    raise col_count_mismatch;
  end if;
  --- checking column existence ---
  for col_rec in col_l LOOP
    select count(*) into col_e from dba_tab_cols
    where owner=t_owner and table_name=e_tab_name
      and column_name=col_rec.column_name;
    if col_e = 0 then
      raise col_not_found;
    end if;
  END LOOP;
  --- printing current column order ---
  dbms_output.put_line(TB);
  dbms_output.put_line('Current column order for table '||t_owner||'.'||e_tab_name||' is:');
  for c_rec in (select column_name,data_type from dba_tab_cols
                where owner=t_owner and table_name=e_tab_name
                order by column_id) LOOP
    dbms_output.put_line(c_rec.column_name||'('||c_rec.data_type||')');
  END LOOP;
  --- making all columns invisible except the starting column ---
  for col_rec in col_l LOOP
    if col_rec.column_name != col_start then
      execute immediate 'alter table '||t_owner||'.'||e_tab_name||
                        ' modify '||col_rec.column_name||' invisible';
    end if;
  END LOOP;
  --- making columns visible to match the required ordering ---
  for col_rec in col_l LOOP
    if col_rec.column_name != col_start then
      execute immediate 'alter table '||t_owner||'.'||e_tab_name||
                        ' modify '||col_rec.column_name||' visible';
    end if;
  END LOOP;
  --- printing new column order ---
  dbms_output.put_line(TB);
  dbms_output.put_line('New column order for table '||t_owner||'.'||e_tab_name||' is:');
  for c_rec in (select column_name,data_type from dba_tab_cols
                where owner=t_owner and table_name=e_tab_name
                order by column_id) LOOP
    dbms_output.put_line(c_rec.column_name||'('||c_rec.data_type||')');
  END LOOP;
EXCEPTION
  WHEN col_not_found THEN
    dbms_output.put_line('ORA-100002: column does not exist');
  WHEN col_count_mismatch THEN
    dbms_output.put_line('ORA-100001: mismatch in column counts');
end;
/
---//
---// End of procedure change_col_order //---
---//

Let's go through a demonstration to understand how the custom procedure works. The procedure takes three arguments (all strings within single quotes). The first argument is a comma-separated list of column names (in the order in which we want the columns to be positioned), the second argument is the name of the table for which the columns need to be re-ordered and the third argument is the schema to which the table belongs.

---//
---// changing column positioning using change_col_order procedure //---
---//
SQL> set serveroutput on
SQL> exec change_col_order('COL1,COL2,COL3,COL4','TEST_TAB_INV_ORDR','MYAPP');

Current column order for table MYAPP.TEST_TAB_INV_ORDR is:
COL4(NUMBER)
COL3(NUMBER)
COL2(NUMBER)
COL1(NUMBER)

New column order for table MYAPP.TEST_TAB_INV_ORDR is:
COL1(NUMBER)
COL2(NUMBER)
COL3(NUMBER)
COL4(NUMBER)

PL/SQL procedure successfully completed.

SQL>

As we can observe from the above output, the procedure reads the arguments, displays the current column order and then applies the algorithm (based on the invisible columns feature) before listing the final, corrected column order.

Conclusion

In this article, we have explored how we can utilize the 12c invisible columns feature to correct the positioning of columns within a given table. We have also explored a customized PL/SQL script which implements this trick and can be used as an alternative to the manual approach.

Oracle 12c: Data invisibility (archiving) with In Database Archival


Introduction

In today's article, I am going to discuss yet another cool feature introduced with Oracle Database 12c. This feature gives data the ability to make itself invisible when desired. You might have already figured out the context of this article. Yes, I am talking about the In Database Archiving (row archival) feature introduced with Oracle Database 12c, which lets us archive data within the same database table without the need to move it to a different archive store.

What is In Database Archiving (row archival)?

Archiving is generally defined as the process of moving INACTIVE data to a different storage device for long-term retention. In Database Archiving (Row Archival) lets us mark this INACTIVE data as ARCHIVED within the same table without actually moving it to a separate storage device, which means the data can still be present in the same database tables without being visible to application queries.

This feature is typically useful for applications where there is a requirement to mark application data as deleted/inactive (archived) without physically deleting it (moving it to separate storage). Prior to Oracle Database 12c, this type of requirement was met by defining an additional column in the database table with a specific flag indicating that a particular table record is archived (deleted), and then making the necessary adjustments in application queries to check this flag while querying data.

Row Archival also provides additional benefits, such as compression and keeping archived data in lower-tier storage units, apart from archiving the data in the same table. In today's article we will explore the basic row archival feature. We will discuss the additional benefits in a separate article.

How to enable row archival

In Database Archiving is defined at the table level by means of a new clause called ROW ARCHIVAL. Including this clause in a table definition indicates that the table records are enabled for archiving. A table can either be created with row archival enabled by means of the CREATE TABLE statement or be enabled for row archival later by means of the ALTER TABLE command.

When we enable a table for row archival, Oracle creates an additional (HIDDEN) column named ORA_ARCHIVE_STATE for that table. This column (ORA_ARCHIVE_STATE) controls whether a table record is ACTIVE or ARCHIVED. By default, the column ORA_ARCHIVE_STATE is set to a value of 0 for each table record, which indicates the data is ACTIVE.


Example:

Let's quickly go through an example of enabling row archival for a database table.

----//
----// Creating table with row archival enabled //----
----//
SQL> create table test_data_archival
  2  (
  3  id number not null,
  4  name varchar(20) not null,
  5  join_date date not null
  6  )
  7  row archival;

Table created.

----//
----// Populate the table with some data //----
----//
SQL> insert into test_data_archival
  2  select rownum, rpad('X',15,'X'), sysdate
  3  from dual connect by rownum <= 500;

500 rows created.

SQL> commit;

Commit complete.

SQL> select count(*) from test_data_archival;

  COUNT(*)
----------
       500

In this example we have created a table (test_data_archival) with in database archiving (row archival) enabled and populated it with some dummy data (500 records with IDs ranging from 1 to 500).

We can also enable row archival for existing tables by specifying the ROW ARCHIVAL clause along with the ALTER TABLE statement as shown below.

----//
----// Enabling row archival for existing tables //----
----//
SQL> alter table test_data_arch row archival;

Table altered.

Note: Trying to enable row archival for a table which is already enabled for row archival will result in errors similar to the following:

----//
SQL> alter table test_data_archival row archival;
alter table test_data_archival row archival
*
ERROR at line 1:
ORA-38396: table is already enabled for the ILM feature

Validating row archival

We can't identify whether a table is enabled for row archival by just describing (DESC) the table, as the output is the same for a general table and for a table with row archival enabled. Since the column ORA_ARCHIVE_STATE (which controls in database archiving) is hidden, it is not displayed by the DESC command.

----//
----// DESC command doesn't indicate if row archival is enabled or not //----
----//
SQL> desc test_data_archival
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                        NOT NULL NUMBER
 NAME                                      NOT NULL VARCHAR2(20)
 JOIN_DATE                                 NOT NULL DATE

However, we can query the DBA/USER/ALL_TAB_COL views to validate if a table is enabled for row archival. If a table has the ORA_ARCHIVE_STATE hidden column listed in these views, then the table is enabled for row archival.

----//
----// query DBA_TAB_COLS to check if we have the HIDDEN column //----
----// ORA_ARCHIVE_STATE available for the database table       //----
----//
SQL> select owner,table_name,column_id,column_name,hidden_column
  2  from dba_tab_cols where table_name='TEST_DATA_ARCHIVAL' order by column_id;

OWNER      TABLE_NAME            COLUMN_ID COLUMN_NAME          HID
---------- -------------------- ---------- -------------------- ---
MYAPP      TEST_DATA_ARCHIVAL            1 ID                   NO
                                         2 NAME                 NO
                                         3 JOIN_DATE            NO
                                           ORA_ARCHIVE_STATE    YES ---> This column indicates the table is enabled for row archival

Another way to check whether a table is defined for row archival is to check the table metadata. If the table metadata has the clause "ILM ENABLE LIFECYCLE MANAGEMENT", it indicates that the table is enabled for row archival. However, this is only applicable to Oracle Database 12c release 12.1.0.2.

----//
----// query table metadata to validate row archival enabled or not //----
----//
SQL> select dbms_metadata.get_ddl('TABLE','TEST_DATA_ARCHIVAL') ddl from dual;

DDL
--------------------------------------------------------------------------------
CREATE TABLE "MYAPP"."TEST_DATA_ARCHIVAL"
(       "ID" NUMBER NOT NULL ENABLE,
"NAME" VARCHAR2(20) NOT NULL ENABLE,
"JOIN_DATE" DATE NOT NULL ENABLE
) SEGMENT CREATION IMMEDIATE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "APPDATA"
ILM ENABLE LIFECYCLE MANAGEMENT ---> This clause indicates that the table is enabled for row archival

ILM (Information Lifecycle Management) is an Oracle Database feature that helps manage data by storing it in different storage and compression tiers based on an organization's business and performance needs. Row Archival is an ILM feature and hence the table definition has a clause indicating the same. For more details about ILM, please refer to the Oracle documentation.

Archiving table data

As mentioned earlier, by default Oracle populates the ORA_ARCHIVE_STATE column with a value of 0 (zero), which indicates the table data is in the ACTIVE state. This can be verified as follows.

----//
----// validate default value for ORA_ARCHIVE_STATE column //----
----//
SQL> select ora_archive_state,count(*) from test_data_archival group by ora_archive_state;

ORA_ARCHIVE_STATE      COUNT(*)
-------------------- ----------
0                           500

We had populated 500 records in our table and we can see all these records have a value 0 (zero) for the row archival column ORA_ARCHIVE_STATE. This means all of these records are ACTIVE and application queries can access them.

To mark a table record as ARCHIVED, we need to update the row archival column ORA_ARCHIVE_STATE for that record to the value 1. This is done by calling the DBMS_ILM.ARCHIVESTATENAME function within an update statement. The syntax for marking a table record as ARCHIVED is as follows:

----//
----// Syntax for archiving data using row archival feature //----
----//
UPDATE table_name
SET ORA_ARCHIVE_STATE=DBMS_ILM.ARCHIVESTATENAME(1)
where column_predicates


Example:

Let's say we want to archive the records with IDs 100 and 200 in the TEST_DATA_ARCHIVAL table. This can be done as follows.

----//
----// Querying records before archiving //----
----//
SQL> select * from test_data_archival where id in (100,200);

        ID NAME                 JOIN_DATE
---------- -------------------- ---------
       100 XXXXXXXXXXXXXXX      19-SEP-15
       200 XXXXXXXXXXXXXXX      19-SEP-15

----//
----// Archive records with ID 100 and 200 using row archival //----
----//
SQL> update test_data_archival
  2  set ora_archive_state=dbms_ilm.archivestatename(1)
  3  where id in (100,200);

2 rows updated.

SQL> commit;

Commit complete.

----//
----// Querying records after archiving //----
----//
SQL> select * from test_data_archival where id in (100,200);

no rows selected

----//
----// Row count also excludes the archived records //----
----//
SQL> select count(*) from test_data_archival;

  COUNT(*)
----------
       498

As we can see, we were able to query the table records before archiving them. However, the records became invisible once we archived them using the row archival feature. These archived records are still present in the database, and we can view them if desired, as explained in the next section.

Note: We can even use a generic update command like "UPDATE table_name SET ORA_ARCHIVE_STATE='1' WHERE column_predicates" to mark the data as ARCHIVED. We can even set the value of ORA_ARCHIVE_STATE to anything other than 0 (zero) to indicate the table record is ARCHIVED. However, the ILM package only recognizes the value 1 (one) as INACTIVE and 0 (zero) as ACTIVE; setting ORA_ARCHIVE_STATE to other values may impact the ILM functionalities.
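For illustration, here is a minimal sketch of that generic update against the TEST_DATA_ARCHIVAL table used above (the record with ID 300 is an arbitrary choice). ORA_ARCHIVE_STATE is a VARCHAR2 column, so the value is written as a string.

----//
----// hedged sketch: archiving a record with a plain update instead of //----
----// the DBMS_ILM.ARCHIVESTATENAME function call //----
----//
SQL> update test_data_archival set ora_archive_state = '1' where id = 300;
SQL> commit;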

Viewing archived records

Once we archive table records using the row archival feature (by means of the DBMS_ILM.ARCHIVESTATENAME function), the records are no longer visible to application queries. However, there is a way we can view those archived records. We can enable a database session to view the archived rows by setting the parameter ROW ARCHIVAL VISIBILITY to the value ALL, as shown below.

----//
----// Enable database session to view archived records //----
----//
SQL> alter session set ROW ARCHIVAL VISIBILITY=ALL;

Session altered.

----//
----// Query the archived records //----
----//
SQL> select * from test_data_archival where id in (100,200);

        ID NAME                 JOIN_DATE
---------- -------------------- ---------
       100 XXXXXXXXXXXXXXX      19-SEP-15
       200 XXXXXXXXXXXXXXX      19-SEP-15

----//
----// Row count includes archived records too //----
----//
SQL> select count(*) from test_data_archival;

  COUNT(*)
----------
       500

We can set the same session parameter ROW ARCHIVAL VISIBILITY to the value ACTIVE to prevent a database session from viewing archived records, as shown below.

----//
----// with session's visibility to archived records set to ALL //----
----//
SQL> select count(*) from test_data_archival;

  COUNT(*)
----------
       500

----//
----// change session's visibility for archived records to ACTIVE //----
----//
SQL> alter session set ROW ARCHIVAL VISIBILITY=ACTIVE;

Session altered.

SQL> select count(*) from test_data_archival;

  COUNT(*)
----------
       498

Restoring archived data

In Database Archiving (row archival) makes it very easy to restore archived data back to its original state. Since the data is archived within the same database table (it is just marked as ARCHIVED), we only need to change the state of the archived record to ACTIVE by setting the row archival column ORA_ARCHIVE_STATE back to the value 0 (zero). This can be done by calling the DBMS_ILM.ARCHIVESTATENAME function.

However, before the archived data can be marked as ACTIVE (restored), we need to have visibility of the archived data. This is why the restoration of archived data is a two-phase process, as listed below:

  1. Change ROW ARCHIVAL VISIBILITY to ALL
  2. Restore (mark the data as ACTIVE) by updating it through the DBMS_ILM.ARCHIVESTATENAME function using the following syntax
    ----//
    ----// syntax for restoring archived data from row archival //---
    ----//
    UPDATE table_name
    SET ORA_ARCHIVE_STATE=DBMS_ILM.ARCHIVESTATENAME(0)
    WHERE column_predicates
    


Example:

In the following example, I am restoring the archived record having ID 100 for table TEST_DATA_ARCHIVAL

----//
----// restoring archived record without ROW ARCHIVAL VISIBILITY is not permitted //----
----//
SQL> update test_data_archival
  2  set ora_archive_state=dbms_ilm.archivestatename(0)
  3  where id=100;

0 rows updated.

----//
----// change ROW ARCHIVAL VISIBILITY to ALL //----
----//
SQL> alter session set ROW ARCHIVAL VISIBILITY=ALL;

Session altered.

----//
----// restore (mark record as ACTIVE) archived record with ID=100 //----
----//
SQL> update test_data_archival
  2  set ora_archive_state=dbms_ilm.archivestatename(0)
  3  where id=100;

1 row updated.

SQL> commit;

Commit complete.

----//
----// validate if we can query the record with ROW ARCHIVAL VISIBILITY being set to ACTIVE //----
----//
SQL> alter session set ROW ARCHIVAL VISIBILITY=ACTIVE;

Session altered.

SQL> select * from test_data_archival where id=100;

        ID NAME                 JOIN_DATE
---------- -------------------- ---------
       100 XXXXXXXXXXXXXXX      19-SEP-15

Disabling Row Archival

We can disable Row Archival for a table using the NO ROW ARCHIVAL clause with the ALTER TABLE statement; the syntax is:

----//
----// syntax for disabling Row Archival for a table //----
----//
ALTER TABLE table_name NO ROW ARCHIVAL;


Example:

In the following example, I am disabling Row Archival for the table TEST_DATA_ARCHIVAL.

----//
----// Record count with row archival being enabled //----
----//
SQL> select count(*) from test_data_archival;

  COUNT(*)
----------
       499
----//
----// disable row archival for table //----
----//
SQL> alter table test_data_archival no row archival;

Table altered.

----//
----// Check if the hidden column ORA_ARCHIVE_STATE exists //----
----//
SQL> select owner,table_name,column_id,column_name,hidden_column,default_on_null
  2  from dba_tab_cols where table_name='TEST_DATA_ARCHIVAL' order by column_id;

OWNER      TABLE_NAME            COLUMN_ID COLUMN_NAME          HID DEF
---------- -------------------- ---------- -------------------- --- ---
MYAPP      TEST_DATA_ARCHIVAL            1 ID                   NO  NO
                                         2 NAME                 NO  NO
                                         3 JOIN_DATE            NO  NO

----//
----// Record count after disabling Row Archival //----
----//
SQL> select count(*) from test_data_archival;

  COUNT(*)
----------
       500

When we disable Row Archival for a table, the hidden column ORA_ARCHIVE_STATE gets dropped automatically, which in turn restores all the table records to the ACTIVE state and makes them visible to application queries.
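A related observation worth noting (easy to verify on a test copy): the archive markings are not remembered across a disable/enable cycle. If row archival is enabled again, a fresh ORA_ARCHIVE_STATE column is created and every record starts out as ACTIVE.

----//
----// hedged sketch: re-enable row archival and check the state column //----
----//
SQL> alter table test_data_archival row archival;
SQL> select distinct ora_archive_state from test_data_archival;  -- expect a single value: 0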

Copying a table (CTAS) with Row Archival enabled

When we create a copy of a row archival enabled table with a CTAS statement, the resulting table doesn't get created with row archival enabled. Therefore, all the table records become ACTIVE in the resulting table, as shown below.

----//
----// Check count of records in table TEST_DATA_ARCH //----
----//
SQL> select count(*) from TEST_DATA_ARCH;

  COUNT(*)
----------
       500

----//
----// Archive a few records in the table TEST_DATA_ARCH //----
----//
SQL> update TEST_DATA_ARCH set ORA_ARCHIVE_STATE=1 where id<101;

100 rows updated.

SQL> commit;

Commit complete.

----//
----// Check the count of records after row archival //----
----//
SQL> select count(*) from TEST_DATA_ARCH;

  COUNT(*)
----------
       400

----//
----// Create a new table from TEST_DATA_ARCH using CTAS //----
----//
SQL> CREATE TABLE TEST_DATA_ARCH_COPY1
  2  AS SELECT * FROM TEST_DATA_ARCH;

Table created.

----//
----// Check the count of records on the resulting table //----
----//
SQL> select count(*) from TEST_DATA_ARCH_COPY1;

  COUNT(*)
----------
       500

As we can see, even though we had row archival enabled on our source table (TEST_DATA_ARCH), it did not propagate to the resulting table when we created it using the CREATE TABLE AS SELECT statement.
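If the archive markings need to survive the copy, one possible workaround (a sketch only; TEST_DATA_ARCH_COPY2 is a hypothetical name, and ID is assumed to be a unique key) is to create the copy with the ROW ARCHIVAL clause, copy the rows with session visibility set to ALL, and then carry the ORA_ARCHIVE_STATE values over explicitly, since the hidden column is not included in SELECT *.

----//
----// hedged sketch: preserving archive state across a CTAS copy //----
----//
SQL> alter session set ROW ARCHIVAL VISIBILITY=ALL;

SQL> create table TEST_DATA_ARCH_COPY2 row archival
  2  as select * from TEST_DATA_ARCH;

SQL> update TEST_DATA_ARCH_COPY2 c
  2  set c.ora_archive_state = (select s.ora_archive_state
  3                             from TEST_DATA_ARCH s where s.id = c.id);

SQL> commit;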

Conclusion

We have explored the row archival (In Database Archiving) feature of Oracle Database 12c and how it can be used as a local archive store for INACTIVE data rather than moving the data to a remote archive store. This feature would be very useful for a specific set of applications where we have a requirement to mark data as ARCHIVED within the database itself so that it is not visible to application queries, yet is ready to be restored when desired. Row archival also speeds up the archiving process, as we do not have to run expensive select/insert/delete queries to archive table records.

We will explore a few other aspects (benefits and considerations) of this new feature in an upcoming article. Till then, stay tuned...

Oracle 12c: Invisibility is now extended to table columns


Introduction

Oracle Database 12c has brought a never-ending list of new features, and today I would like to talk about another new feature from this list. Oracle introduced invisible indexes in Oracle 11g (11.1), which gave us the power to create an index in INVISIBLE mode and then evaluate its functioning before exposing it to database queries.

Oracle has extended that feature of invisibility one step further with the introduction of Oracle Database 12c. We can now even create table columns in INVISIBLE mode, preventing them from being exposed to database queries unless explicitly referenced.

Let's walk through this feature and explore what it has to offer.

Making columns invisible

We can define a table column in invisible mode either while creating the table using the CREATE TABLE statement or later using the ALTER TABLE statement. The syntax for both of these cases is as follows:

----//
----// syntax to define invisible column with CREATE TABLE //----
----//
CREATE TABLE table_name
(
column_name data_type INVISIBLE column_properties
)

----//
----// syntax to make an existing column invisible //----
----//
ALTER TABLE table_name MODIFY column_name INVISIBLE

In the following example, I am creating a table called TEST_TAB_INV with two invisible columns named CONTACT and ADDRESS.

----//
----// creating table TEST_TAB_INV with two invisible columns //----
----//
SQL> create table TEST_TAB_INV
  2  (
  3  id number not null,
  4  name varchar2(15) not null,
  5  join_date date not null,
  6  contact number invisible not null, ----// invisible column, but defined as mandatory //----
  7  address varchar(200) invisible ----// invisible column, defined as optional //----
  8  );

Table created.

SQL> alter table TEST_TAB_INV add constraint PK_TEST_TAB_INV primary key (id);

Table altered.

SQL>

As you can observe, I have defined one of the invisible columns (CONTACT) as MANDATORY using the NOT NULL option, while defining the other one (ADDRESS) as optional. The intention behind creating two different types of invisible columns is to test the behaviour of this new feature for MANDATORY and OPTIONAL column values.

Listing invisible columns

In general we use the DESCRIBE command to list the columns defined for a table. Let's see what the DESC command shows when we create a table with invisible columns.

----//
----// DESC command doesn't show invisible columns by default //----
----//
SQL> desc TEST_TAB_INV
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                        NOT NULL NUMBER
 NAME                                      NOT NULL VARCHAR2(15)
 JOIN_DATE                                 NOT NULL DATE

The DESC[RIBE] command is not showing the invisible columns that we defined during table creation. This is the default behaviour of invisible columns, and we need to set COLINVISIBLE to ON to be able to view the invisible columns using the DESC command, as shown below.

----//
----// set COLINVISIBLE to ON to be able to list invisible columns with DESC command //----
----//
SQL> SET COLINVISIBLE ON

----//
----// DESC now lists the invisible columns as well //----
----//
SQL> desc TEST_TAB_INV
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                        NOT NULL NUMBER
 NAME                                      NOT NULL VARCHAR2(15)
 JOIN_DATE                                 NOT NULL DATE
 CONTACT (INVISIBLE)                       NOT NULL NUMBER
 ADDRESS (INVISIBLE)                                VARCHAR2(200)

We can alternatively query the DBA/ALL/USER_TAB_COLS views to find the invisible columns defined for a table, as shown below. If a column is marked as YES for the hidden_column property, it is treated as an invisible column.

----//
----// querying invisible columns from dictionary views //----
----//
SQL> select table_name,column_name,column_id,hidden_column from dba_tab_cols where table_name='TEST_TAB_INV';

TABLE_NAME                COLUMN_NAME           COLUMN_ID HID
------------------------- -------------------- ---------- ---
TEST_TAB_INV              ID                            1 NO
                          NAME                          2 NO
                          JOIN_DATE                     3 NO
                          CONTACT                         YES
                          ADDRESS                         YES

As we can observe, Oracle has not allocated any COLUMN_ID for the invisible columns, and that is why invisible columns don't qualify for column ordering. However, Oracle keeps track of the invisible columns using an internal ID, as shown below.

----//
----// Oracle maintains only internal column IDs for invisible columns //----
----//
SQL> select table_name,column_name,column_id,internal_column_id,hidden_column from dba_tab_cols where table_name='TEST_TAB_INV';

TABLE_NAME        COLUMN_NAME           COLUMN_ID INTERNAL_COLUMN_ID HID
----------------- -------------------- ---------- ------------------ ---
TEST_TAB_INV      ID                            1                  1 NO
TEST_TAB_INV      NAME                          2                  2 NO
TEST_TAB_INV      JOIN_DATE                     3                  3 NO
TEST_TAB_INV      CONTACT                                          4 YES
TEST_TAB_INV      ADDRESS                                          5 YES

Inserting records without column reference

Let's try to insert a record into the table TEST_TAB_INV that we created earlier, without referencing the column names. In the following example, I am not passing values for the invisible columns CONTACT and ADDRESS.

----//
----// insert record without column_list when one of the invisible column is defined as mandatory //----
----// However, value is not passed for mandatory invisible column //---
----//
SQL> insert into TEST_TAB_INV values (1,'abbas',sysdate);
insert into TEST_TAB_INV values (1,'abbas',sysdate)
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("MYAPP"."TEST_TAB_INV"."CONTACT")

Oracle did not allow me to insert a record, as the column CONTACT was defined as a mandatory column (NOT NULL) even though it was defined as invisible. OK, let's pass a value for the CONTACT column too.

----//
----// insert record without column_list when one of the invisible column is defined as mandatory //----
----// and value passed for the mandatory invisible column //----
----// 
SQL>  insert into TEST_TAB_INV values (1,'abbas',sysdate,999999999);
 insert into TEST_TAB_INV values (1,'abbas',sysdate,999999999)
             *
ERROR at line 1:
ORA-00913: too many values

We are still not allowed to insert a record. Let's pass values for all the table columns:

----//
----// insert record without column_list but values passed for all columns (visible and invisible) //----
----//
SQL> insert into TEST_TAB_INV values (2,'fazal',sysdate,888888888,'bangalore');
insert into TEST_TAB_INV values (2,'fazal',sysdate,888888888,'bangalore')
            *
ERROR at line 1:
ORA-00913: too many values

We are still not allowed to insert a record. The reason is that, when we try to insert a record without explicitly referencing the table columns, Oracle only considers the columns that are visible by default.

In the first insert statement, we passed values for all the visible columns. However, since the invisible column CONTACT was defined as mandatory, Oracle did not allow us to insert that record and threw the error ORA-01400: cannot insert NULL into ("MYAPP"."TEST_TAB_INV"."CONTACT").

In the second and third insert statements, although we passed additional values for the CONTACT and ADDRESS columns, Oracle did not recognize those columns (as they are invisible) and threw the error ORA-00913: too many values. This error indicates that Oracle was expecting fewer column values than were supplied in the insert statement.

Let's change the invisible column CONTACT from mandatory (NOT NULL) to optional (NULL) and check whether we are allowed to insert a record without column reference.

----//
----// making all the invisible columns as optional //----
----//
SQL> alter table TEST_TAB_INV modify CONTACT NULL;

Table altered.

SQL> set COLINVISIBLE ON

SQL> desc TEST_TAB_INV
 Name                                Null?    Type
 ----------------------------------- -------- ------------------------
 ID                                  NOT NULL NUMBER
 NAME                                NOT NULL VARCHAR2(15)
 JOIN_DATE                           NOT NULL DATE
 CONTACT (INVISIBLE)                          NUMBER		---> Invisible and optional (NULL)
 ADDRESS (INVISIBLE)                          VARCHAR2(200)	---> Invisible and optional (NULL)

Now, let's insert a record without column reference and without passing any values for the invisible columns.

----//
----// insert record without column_list when all invisible columns are optional //---- 
----//
SQL> insert into TEST_TAB_INV values (1,'john',sysdate);

1 row created.

SQL> commit;

Commit complete.

Yes, we are now allowed to insert a record without column reference. This was possible because all of the invisible columns (CONTACT and ADDRESS) are now allowed to be NULL.

Inserting records with column reference

When we insert records into a table by referencing the table columns, we are allowed to insert data into the invisible columns as well, as shown below.

----//
----// insert record in to invisible columns with explicit column reference //----
----//
SQL> insert into TEST_TAB_INV (id,name,join_date,contact) values (2,'mike',sysdate,999999999);

1 row created.

----//
----// insert record in to invisible columns with explicit column reference //----
----//
SQL>  insert into TEST_TAB_INV (id,name,join_date,contact,address) values (3,'peter',sysdate,888888888,'bangalore');

1 row created.

SQL> commit;

Commit complete.

As we can see, even if a column is defined as invisible, we are still allowed to populate it with data, provided the column is explicitly referenced in the insert statement.

Query table having invisible columns

When we select without column reference (SELECT * FROM) from a table having invisible columns, Oracle only returns the results from the visible columns, as shown below.

----//
----// select from table having invisible columns, without column reference //----
----//

SQL>  select * from TEST_TAB_INV;

        ID NAME            JOIN_DATE
---------- --------------- ---------
         1 john            24-SEP-15
         2 mike            24-SEP-15
         3 peter           24-SEP-15

Oracle internally transforms this query to include only the visible columns

#/----
#/---- Oracle transformed the select query to exclude invisible columns -----/
#/----
Final query after transformations:******* UNPARSED QUERY IS *******
SELECT "TEST_TAB_INV"."ID" "ID","TEST_TAB_INV"."NAME" "NAME","TEST_TAB_INV"."JOIN_DATE" "JOIN_DATE" FROM "MYAPP"."TEST_TAB_INV" "TEST_TAB_INV"
kkoqbc: optimizing query block SEL$1 (#0)

        :
    call(in-use=1136, alloc=16344), compile(in-use=67704, alloc=70816), execution(in-use=2784, alloc=4032)

kkoqbc-subheap (create addr=0x2b89a1d1fb78)
****************
QUERY BLOCK TEXT
****************
select * from TEST_TAB_INV
---------------------

However, we can still query the data from invisible columns by explicitly referencing the column names in the SELECT clause, as shown below.

----//
----// selecting data from invisible with explicit column reference //----
----//

SQL> select id,name,join_date,contact,address from TEST_TAB_INV;

        ID NAME            JOIN_DATE    CONTACT ADDRESS
---------- --------------- --------- ---------- --------------------
         1 john            24-SEP-15
         2 mike            24-SEP-15  999999999
         3 peter           24-SEP-15  888888888 bangalore

Statistics on Invisible columns

Oracle maintains statistics for all the table columns even if a column is defined as invisible, as shown below. Invisible columns also qualify for all types of statistics (histograms, extended statistics, etc.).

----//
----// collecting statistics for table with invisible columns //----
----//
SQL> exec dbms_stats.gather_table_stats('MYAPP','TEST_TAB_INV');

PL/SQL procedure successfully completed.


----//
----// Oracle maintains statistics for invisible columns as well //----
----//
SQL> select owner,table_name,column_name,num_distinct,density,last_analyzed
  2  from dba_tab_col_statistics where table_name='TEST_TAB_INV';

OWNER      TABLE_NAME           COLUMN_NAME          NUM_DISTINCT    DENSITY LAST_ANAL
---------- -------------------- -------------------- ------------ ---------- ---------
MYAPP      TEST_TAB_INV         ADDRESS                         1          1 24-SEP-15
MYAPP      TEST_TAB_INV         CONTACT                         2         .5 24-SEP-15
MYAPP      TEST_TAB_INV         JOIN_DATE                       3 .333333333 24-SEP-15
MYAPP      TEST_TAB_INV         NAME                            3 .333333333 24-SEP-15
MYAPP      TEST_TAB_INV         ID                              3 .333333333 24-SEP-15

Making columns visible

We can convert an invisible column to visible by modifying the column property using the ALTER TABLE statement. The syntax for making a column visible is:

----//
----// Syntax for changing a column from INVISIBLE to VISIBLE //----
----//
ALTER TABLE table_name MODIFY column_name VISIBLE;

Let's make the column CONTACT visible in our table TEST_TAB_INV and observe what changes the operation brings along.

----//
----// changing column CONTACT in table TEST_TAB_INV to VISIBLE //----
----//
SQL> alter table TEST_TAB_INV modify CONTACT visible;

Table altered.
 
----//
----// DESC command now lists the changed column //----
----//
SQL> desc TEST_TAB_INV
Name                                Null?    Type
----------------------------------- -------- ------------------------
ID                                  NOT NULL NUMBER
NAME                                NOT NULL VARCHAR2(15)
JOIN_DATE                           NOT NULL DATE
CONTACT                             NOT NULL NUMBER


SQL> SET COLINVISIBLE ON

SQL> desc TEST_TAB_INV
 Name                                Null?    Type
 ----------------------------------- -------- ------------------------
 ID                                  NOT NULL NUMBER
 NAME                                NOT NULL VARCHAR2(15)
 JOIN_DATE                           NOT NULL DATE
 CONTACT                             NOT NULL NUMBER
 ADDRESS (INVISIBLE)                          VARCHAR2(200)

When we make a column visible, it gets listed by the DESCRIBE command. Further, the column is assigned a column ID and is marked as NOT HIDDEN, which can be verified from the DBA/ALL/USER_TAB_COLS views as shown below.

 
----//
----// column changed to visible, is allocated a column ID  //----
----// and marked as NO for hidden_column flag //----
----//
SQL>  select table_name,column_name,column_id,hidden_column from dba_tab_cols where table_name='TEST_TAB_INV';

TABLE_NAME                COLUMN_NAME           COLUMN_ID HID
------------------------- -------------------- ---------- ---
TEST_TAB_INV              ADDRESS                         YES
                          CONTACT                       4 NO
                          JOIN_DATE                     3 NO
                          NAME                          2 NO
                          ID                            1 NO

As we can observe, when we change an invisible column to visible, it is placed as the last column in the visible column list. Since the column CONTACT is now visible, it is exposed to SELECT queries (without column reference), as shown below.

 
----//
----// new visible column is now exposed to SELECT queries (without column reference) //----
----//
SQL> select * from TEST_TAB_INV;

        ID NAME                 JOIN_DATE    CONTACT
---------- -------------------- --------- ----------
         1 abbas                21-SEP-15  999999999
         2 fazal                21-SEP-15  888888888

Indexing Invisible columns

We are also allowed to create an index on invisible columns, the same way we create an index on a regular column.

 		 
----//
----// creating index on invisible columns //----
----//
SQL> create index idx_TEST_TAB_INV on TEST_TAB_INV (name,contact,address);

Index created.

Let's check whether Oracle is able to use that index.

 		 
----//
----// checking if index would be used by optimizer //----
----//
SQL> explain plan for select * from TEST_TAB_INV where address='bangalore';

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------
Plan hash value: 3483268732

--------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                  |     1 |    21 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| TEST_TAB_INV     |     1 |    21 |     2   (0)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN                   | IDX_TEST_TAB_INV |     1 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("ADDRESS"='bangalore')
       filter("ADDRESS"='bangalore')

15 rows selected.

SQL> 

As we can observe, Oracle can utilize an index defined on invisible columns. From the above example, we can also conclude that invisible columns can be used as query predicates.

Conclusion

We have explored Oracle 12c's new feature of defining a column in Invisible mode. Following are the conclusions derived from the observations.

  • Invisible columns are not returned when using a SELECT * FROM table statement
  • Data can still be queried from invisible columns, provided the column names are explicitly referenced in the SELECT clause
  • Records can be inserted into a table having invisible columns with an INSERT INTO table_name VALUES statement, provided none of the invisible columns are defined as mandatory (NOT NULL)
  • Data can be populated into invisible columns, provided the invisible columns are explicitly referenced in the insert statement, as in INSERT INTO table_name (column_list) VALUES
  • Oracle maintains statistics on invisible columns
  • Invisible columns can be indexed as well as used as query predicates
  • Invisible columns are not allocated a column ID and are tracked by an internal ID
  • When an invisible column is made visible, it is placed as the last visible column and gets a column ID in that order

or in other words....

  • An invisible column inherits all the properties of a visible column, with just one exception: it is not visible unless referenced explicitly.

Invisible columns can be useful for testing the impact of a column addition on the application before actually exposing the column to application queries. Invisible columns can also be used as a trick to change column ordering for tables; we shall explore that area in an upcoming article.
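As a minimal sketch of that testing workflow (APP_ORDERS and DISCOUNT_PCT are hypothetical names), a new column can be added invisibly, exercised by test code that references it explicitly, and only then exposed:

----//
----// sketch: add a column invisibly, test, then expose it //----
----//
SQL> alter table APP_ORDERS add (discount_pct number invisible);

----// application regression tests run here, unaffected by the new column //----

SQL> alter table APP_ORDERS modify discount_pct visible;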

Reference

http://docs.oracle.com/database/121/ADMIN/tables.htm#ADMIN14217

Oracle 12c: No more resource busy wait (ORA-0054) error while dropping index


Introduction

Prior to Oracle Database 12c, dropping an index was always an EXCLUSIVE (offline) operation, which required locking the base table in exclusive mode. This sometimes causes the resource busy (ORA-00054) error when the base table is already locked by DML operations whose transactions are not yet committed.

Further, when the table is locked in exclusive mode for the index drop operation, no DML is allowed on the base table until the index drop operation is completed. This may not be a problem for small indexes. However, when we are dropping a huge index, it will block all DML on the base table for a longer duration, which is sometimes not desired.

Oracle 12c has overcome this limitation. With Oracle Database 12c, we have the option of dropping an index ONLINE, which no longer requires an exclusive lock on the base table and allows DML on the base table while the drop index operation is running.
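The syntax simply appends the ONLINE keyword to the familiar command:

----//
----// syntax for the online index drop //----
----//
DROP INDEX index_name ONLINE;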

Let's go through a quick demonstration to validate this new feature.

Drop Index in Oracle 11g (Offline)

Let's create a table in an Oracle 11g database for the demonstration.

----//
----// query database version //----
----//
SQL> select version from v$instance;

VERSION
-----------------
11.2.0.1.0

----//
----// create table T_DROP_IDX_11G for demonstration //----
----//
SQL> create table T_DROP_IDX_11G
  2  (
  3  id number,
  4  name varchar(15),
  5  join_date date
  6  );

Table created.

----//
----// populate table T_DROP_IDX_11G with dummy data //----
----//
SQL>  insert /*+ APPEND */ into T_DROP_IDX_11G
  2  select rownum, rpad('X',15,'X'), sysdate
  3  from dual connect by rownum <=1e6;

1000000 rows created.

SQL> commit;

Commit complete.

Let's create an index on this table.

----//
----// create index IDX_T_DROP_IDX_11G on table T_DROP_IDX_11G //----
----//
SQL> create index IDX_T_DROP_IDX_11G on T_DROP_IDX_11G (id, name);

Index created.

Now, let's perform DML (update a record) on this table without committing the transaction.

----//
----// update a record in table T_DROP_IDX_11G //----
----//
SQL> SELECT sys_context('USERENV', 'SID') SID  FROM DUAL;

SID
----------
20

SQL> update T_DROP_IDX_11G set name='ABBAS' where id=100;

1 row updated.

----//
----// leave the transaction uncommitted in this session //----
----//

If we query the v$locked_object view, we can see the base table is locked in row exclusive (mode=3) mode by the previous update operation which we haven't yet committed.

----//
----// query v$locked_object to check the locked object //----
----//
SQL> select object_id,session_id,locked_mode from v$locked_object;

 OBJECT_ID SESSION_ID LOCKED_MODE
---------- ---------- -----------
     73451         20           3

SQL> select object_name,object_type from dba_objects where object_id=73451;

OBJECT_NAME               OBJECT_TYPE
------------------------- -------------------
T_DROP_IDX_11G            TABLE

Now, from another session, let's try to drop the index (IDX_T_DROP_IDX_11G) that we created on this table (T_DROP_IDX_11G).

----//
----// try to drop the index IDX_T_DROP_IDX_11G from another session //----
----//
01:17:07 SQL> drop index IDX_T_DROP_IDX_11G;
drop index IDX_T_DROP_IDX_11G
           *
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired


01:17:10 SQL> drop index IDX_T_DROP_IDX_11G;
drop index IDX_T_DROP_IDX_11G
           *
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired


01:17:12 SQL> drop index IDX_T_DROP_IDX_11G;
drop index IDX_T_DROP_IDX_11G
           *
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

The index drop operation fails with the resource busy error. This is because, when we try to drop an index, Oracle tries to acquire an exclusive lock on the base table, and if it fails to acquire that exclusive lock, it throws the resource busy error.
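As an aside (a hedged workaround, not part of this demonstration): before 12c, the usual mitigation is the DDL_LOCK_TIMEOUT parameter, available since 11g, which makes the DDL statement wait for the exclusive lock instead of failing immediately. It still blocks DML once the lock is acquired, so it is not a substitute for an online drop.

----//
----// sketch: let the drop wait up to 60 seconds for the lock //----
----//
SQL> alter session set ddl_lock_timeout = 60;
SQL> drop index IDX_T_DROP_IDX_11G;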

If we check the 10704 trace for this drop index operation, we can see Oracle tried to acquire an exclusive lock (mode=6) on table T_DROP_IDX_11G and failed (ksqgtl: RETURNS 51) with the resource busy error (err=54).

#----//
#----// lock trace for the drop index operation //----
#----//
PARSING IN CURSOR #3 len=69 dep=1 uid=85 oct=26 lid=85 tim=1443857735568850 hv=114407125 ad='8ab5b250' sqlid='04zx1kw3d3dqp'
LOCK TABLE  FOR INDEX "IDX_T_DROP_IDX_11G" IN EXCLUSIVE MODE  NOWAIT
END OF STMT
PARSE #3:c=5999,e=5962,p=0,cr=59,cu=0,mis=1,r=0,dep=1,og=1,plh=0,tim=1443857735568850
ksqgtl *** TM-00011eeb-00000000 mode=6 flags=0x401 timeout=0 ***
ksqgtl: xcb=0x8f92aa68, ktcdix=2147483647, topxcb=0x8f92aa68
        ktcipt(topxcb)=0x0
ksucti: init session DID from txn DID:
ksqgtl:
        ksqlkdid: 0001-0017-000000FD
*** ksudidTrace: ksqgtl
        ktcmydid(): 0001-0017-000000FD
        ksusesdi:   0000-0000-00000000
        ksusetxn:   0001-0017-000000FD
ksqcmi: TM,11eeb,0 mode=6 timeout=0
ksqcmi: returns 51
ksqgtl: RETURNS 51
ksqrcl: returns 0
EXEC #3:c=0,e=67,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=0,tim=1443857735568937
ERROR #3:err=54 tim=1443857735568943
CLOSE #3:c=0,e=3,dep=1,type=0,tim=1443857735568967

In this trace file, the object ID is represented in hexadecimal (00011eeb), which can be mapped to the object ID as follows.

----//
----// finding object details based on hexadecimal object ID //----
----// 
SQL> select object_id,to_char(object_id,'0XXXXXXX') object_hex,object_name,object_type
  2  from dba_objects where object_name='T_DROP_IDX_11G';

 OBJECT_ID OBJECT_HE OBJECT_NAME               OBJECT_TYPE
---------- --------- ------------------------- -------------------
     73451 00011EEB  T_DROP_IDX_11G            TABLE

We will not be allowed to drop the index unless Oracle acquires an exclusive lock (mode=6) on the base table. We can commit/rollback the transaction in the first session (sid=20), which will release the row exclusive lock (mode=3) on the table, allow Oracle to acquire an exclusive lock on the base table and, in turn, process the DROP INDEX operation.

Drop Index in Oracle 12c (Online)

Now let's see how the drop index operation behaves in Oracle 12c. Let's quickly create a table for our demonstration.

----//
----// query database version //----
----//
SQL> select version from v$instance;

VERSION
-----------------
12.1.0.2.0

----//
----// create table T_DROP_IDX_12C for demonstration //----
----//
SQL> create table T_DROP_IDX_12C
  2  (
  3  id number,
  4  name varchar(15),
  5  join_date date
  6  );

Table created.

----//
----// populate table T_DROP_IDX_12C with dummy data //----
----//
SQL> insert /*+ APPEND */ into T_DROP_IDX_12C
  2  select rownum, rpad('X',15,'X'), sysdate
  3  from dual connect by rownum <=1e6;

1000000 rows created.

SQL> commit;

Commit complete.

Let's create an index on this table.

----//
----// create index IDX_T_DROP_IDX_12C on table T_DROP_IDX_12C //----
----//
SQL>  create index IDX_T_DROP_IDX_12C on T_DROP_IDX_12C (id, name);

Index created.

Now let's perform some DML on this table and leave the transactions uncommitted.

----//
----// perform few DML on the table T_DROP_IDX_12C //---
----//
SQL> SELECT sys_context('USERENV', 'SID') SID  FROM DUAL;

SID
----------
20

SQL> insert into T_DROP_IDX_12C values (1000001,'Abbas',sysdate);

1 row created.

SQL> update T_DROP_IDX_12C set name='Fazal' where id=100;

1 row updated.

----//
----// leave the transactions uncommitted in this session //----
----//

If we query the v$locked_object view, we can see the base table is locked in row exclusive (mode=3) mode by the previous DML operations, which we haven't yet committed.

----//
----// query v$locked_object to check the locked object //----
----//
SQL> select object_id,session_id,locked_mode from v$locked_object;

 OBJECT_ID SESSION_ID LOCKED_MODE
---------- ---------- -----------
     20254         20           3

SQL> select object_name,object_type from dba_objects where object_id=20254;

OBJECT_NAME               OBJECT_TYPE
------------------------- -----------------------
T_DROP_IDX_12C            TABLE

Now, from another session, let's try to drop the index (IDX_T_DROP_IDX_12C) that we created on this table (T_DROP_IDX_12C).

----//
----// try to drop the index IDX_T_DROP_IDX_12C from another session //----
----//
SQL> SELECT sys_context('USERENV', 'SID') SID  FROM DUAL;

SID
----------
127

SQL> drop index IDX_T_DROP_IDX_12C;
drop index IDX_T_DROP_IDX_12C
           *
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

We are still getting the same resource busy error that we received in Oracle Database 11g. This is because the normal DROP INDEX operation still tries to acquire an exclusive (mode=6) lock on the base table while dropping an index.

Here comes the new feature: the ONLINE option of DROP INDEX. With Oracle 12c, we can add the ONLINE clause to the DROP INDEX command. Let's try to drop the index ONLINE (we haven't yet committed the DMLs in the other session).

----//
----// try to drop (ONLINE) the index IDX_T_DROP_IDX_12C from another session //----
----//
SQL> SELECT sys_context('USERENV', 'SID') SID  FROM DUAL;

SID
----------
127

SQL> drop index IDX_T_DROP_IDX_12C online;

----//
----// drop index hangs here //----
----//

We no longer get the resource busy error (ORA-00054) here. However, the drop index operation just hangs, as it is waiting for the DML operations to commit and release the locks (enqueues) acquired at row level.

If we review the 10704 lock trace, we can see Oracle has acquired a row share lock (mode=2) on the base table and is waiting to acquire a shared transaction lock (TX enqueue), which is currently blocked by the exclusive transaction lock held by the first session (sid=20):

#----//
#----// lock trace for the drop index online operation //----
#----//
PARSING IN CURSOR #47298660689800 len=69 dep=1 uid=63 oct=26 lid=63 tim=1443861257910293 hv=412402270 ad='7f878298' sqlid='6kxaujwc99hky'
LOCK TABLE  FOR INDEX "IDX_T_DROP_IDX_12C" IN ROW SHARE MODE  NOWAIT
END OF STMT
PARSE #47298660689800:c=4999,e=6590,p=1,cr=8,cu=0,mis=1,r=0,dep=1,og=1,plh=0,tim=1443861257910293
ksqgtl *** TM-00004F1E-00000000-00000003-00000000 mode=2 flags=0x400 timeout=0 ***
ksqgtl: xcb=0x88034da8, ktcdix=2147483647, topxcb=0x88034da8
        ktcipt(topxcb)=0x0
ksucti: init session DID from txn DID: 0001-0029-00000094
ksqgtl:
        ksqlkdid: 0001-0029-00000094
*** ksudidTrace: ksqgtl
        ktcmydid(): 0001-0029-00000094
        ksusesdi:   0000-0000-00000000
        ksusetxn:   0001-0029-00000094
ksqgtl: RETURNS 0
EXEC #47298660689800:c=0,e=46,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=0,tim=1443861257910358
CLOSE #47298660689800:c=0,e=1,dep=1,type=0,tim=1443861257910374
..
.. ( output trimmed)
..
#----//
#----// waiting to acquire a shared transactional lock //----
#----//
ksqgtl *** TX-0001001E-00000439-00000000-00000000 mode=4 flags=0x10001 timeout=21474836 ***
ksqgtl: xcb=0x88034da8, ktcdix=2147483647, topxcb=0x88034da8
        ktcipt(topxcb)=0x0
ksucti: init session DID from txn DID: 0001-0029-00000094
ksqgtl:
        ksqlkdid: 0001-0029-00000094
*** ksudidTrace: ksqgtl
        ktcmydid(): 0001-0029-00000094
        ksusesdi:   0000-0000-00000000
        ksusetxn:   0001-0029-00000094
ksqcmi: TX-0001001E-00000439-00000000-00000000 mode=4 timeout=21474836

We can also verify from the dba_waiters and v$lock views that the DROP INDEX ONLINE operation is waiting to acquire a shared transaction lock (TX enqueue), which is blocked by the row exclusive (mode=3) lock and, in turn, by the exclusive transaction lock (TX, mode=6) from the first session (sid=20):

----//
----// query dba_waiters to check who is holding the transactional lock on base table //----
----//
SQL> select waiting_session,holding_session,lock_type,mode_held,mode_requested from dba_waiters;

WAITING_SESSION HOLDING_SESSION LOCK_TYPE    MODE_HELD    MODE_REQUE
--------------- --------------- ------------ ------------ ----------
            127              20 Transaction  Exclusive    Share

----//
----// query v$lock to find out the lock mode held by the holding session //---- 
----//
SQL> select * from v$lock where sid=20;

ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK     CON_ID
---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ---------- ----------
000000008A67B980 000000008A67B9F8         20 AE        133          0          4          0       8500          0          3
00000000885703C8 0000000088570448         20 TX     327706        994          6          0        123          1          0
00002B78ACA00EA8 00002B78ACA00F10         20 TM      20254          0          3          0        123          0          3
----//
----// query v$lock to find out the lock mode requested by waiting session //----
----//
SQL> select * from v$lock where sid=127;

ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK     CON_ID
---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ---------- ----------
000000008A67F840 000000008A67F8B8        127 AE        133          0          4          0        845          0          3
000000008857EE28 000000008857EEA8        127 TX     524294        982          6          0          4          0          0
000000008A67F628 000000008A67F6A0        127 TX     327706        994          0          4          4          0          0
00002B78AD08AA08 00002B78AD08AA70        127 TM      20254          0          2          0          4          0          3
000000008A67F228 000000008A67F2A0        127 OD      20275          0          6          0          4          0          3
000000008A67E5F8 000000008A67E670        127 OD      20254          0          4          0          4          0          3

Although the drop index (online) operation hangs (waiting for DMLs to release row exclusive locks and the exclusive TX lock), it does not block any new DMLs executed against the base table. We can confirm this by running a new DML from a new session as shown below.

----//
----// perform new DML while drop index (ONLINE) is running //----
----//
SQL> SELECT sys_context('USERENV', 'SID') SID  FROM DUAL;

SID
----------
23

SQL> delete from T_DROP_IDX_12C where id=300;

1 row deleted.


----//
----// new DML are able to acquire RX (mode=3) lock on the table //----
----//
SQL>  select object_id,session_id,locked_mode from v$locked_object;

 OBJECT_ID SESSION_ID LOCKED_MODE
---------- ---------- -----------
     20254         20           3 --> lock held by first session where we performed DML and left uncommitted
     20254         23           3 --> lock held by this session to perform delete operation
     20254        127           2 --> session from which we executed drop index (hung, waiting for DMLs to commit)


SQL> commit;

Commit complete.

----//
----// lock released by current session upon commit //----
----//
SQL> select object_id,session_id,locked_mode from v$locked_object;

 OBJECT_ID SESSION_ID LOCKED_MODE
---------- ---------- -----------
     20254         20           3
     20254        127           2


As we can see, even though the DROP INDEX ONLINE operation hangs waiting for DMLs to commit, it doesn't block any new DML on the base table. The DROP INDEX ONLINE operation will eventually complete once the pending transactions are committed.

Let's commit the uncommitted transactions from our first session (sid=20).

----//
----// commit pending transactions from first session //----
----//
SQL> SELECT sys_context('USERENV', 'SID') SID  FROM DUAL;

SID
----------
20

SQL> commit;

Commit complete.

Let's check the status of the DROP INDEX ONLINE operation (which was hanging in the other session).

----//
----// check the status of the hung drop index operation //----
----//
SQL> drop index IDX_T_DROP_IDX_12C online;

Index dropped.

SQL> SELECT sys_context('USERENV', 'SID') SID  FROM DUAL;

SID
----------
127

The moment the pending transactions were committed, the DROP INDEX ONLINE operation resumed and completed automatically, as the row exclusive (RX:mode=3) and transactional (TX:mode=6) locks were released from the table (rows) and the DROP INDEX ONLINE was able to acquire a shared transactional lock (mode=4) on the table rows.

We can also verify from the lock trace that the DROP INDEX ONLINE operation was able to acquire (ksqgtl: RETURNS 0) a shared transactional lock (TX:mode=4) once the DMLs were committed in the first session (sid=20).

#----//
#----// drop index acquired shared transactional lock upon commit of pending DML on base table //----
#----//
ksqgtl *** TX-0001001E-00000439-00000000-00000000 mode=4 flags=0x10001 timeout=21474836 ***
ksqgtl: xcb=0x88034da8, ktcdix=2147483647, topxcb=0x88034da8
        ktcipt(topxcb)=0x0
ksucti: init session DID from txn DID: 0001-0029-00000094
ksqgtl:
        ksqlkdid: 0001-0029-00000094
*** ksudidTrace: ksqgtl
        ktcmydid(): 0001-0029-00000094
        ksusesdi:   0000-0000-00000000
        ksusetxn:   0001-0029-00000094
ksqcmi: TX-0001001E-00000439-00000000-00000000 mode=4 timeout=21474836
ksqcmi: returns 0

*** 2015-10-03 14:30:01.164
ksqgtl: RETURNS 0

Conclusion

Oracle has made significant improvements in the locking mechanism involved in the DROP INDEX operation by introducing the ONLINE feature, which now needs only a shared lock on the base table to start the drop operation, allowing DMLs to be executed against the base table while the index is being dropped.

An online index drop operation can start without acquiring any exclusive lock. However, the drop will not complete until all uncommitted transactions against the base table are committed and the drop operation is able to acquire a shared transactional (TX:mode=4) lock.
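In practice, it can therefore be handy to check for open transactions on the base table before issuing the drop. The following is a minimal sketch (not part of the original demonstration) that joins v$transaction to v$session and v$locked_object; the table name is taken from this article's examples.

----//
----// sketch: find sessions with open transactions locking the base table //----
----//
SQL> select distinct s.sid, s.serial#, s.username, t.start_time
  2    from v$transaction t
  3    join v$session s on s.taddr = t.addr
  4    join v$locked_object lo on lo.session_id = s.sid
  5    join dba_objects o on o.object_id = lo.object_id
  6   where o.object_name = 'T_DROP_IDX_12C';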

Reference

I have used the term lock mode and used tracing to identify the locks at different places throughout this article. You can refer to the following article by Franck Pachot to get a fair idea about the lock modes, what those values mean, and how to trace the locks.

http://blog.dbi-services.com/investigating-oracle-lock-issues-with-event-10704/
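For reference, a minimal sketch of enabling such lock tracing in a session (assuming the standard event syntax; the output lands in the session's trace file) looks like this:

----//
----// sketch: enable event 10704 lock tracing around the DDL //----
----//
SQL> alter session set events '10704 trace name context forever, level 3';
SQL> drop index IDX_T_DROP_IDX_12C online;
SQL> alter session set events '10704 trace name context off';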

Oracle 12c: Optimizing In Database Archival (Row Archival)


Introduction

In my last article, I discussed the Oracle 12c new feature In Database Archival. We explored this new Oracle Database feature and how it can be used to archive data within the same database table. We also familiarized ourselves with the methods available to query and restore the archived data.

In the previous article, we saw how we can archive data within the same table by means of a new table clause, ROW ARCHIVAL. We also saw that once we enable a table for row archival, a new table column named ORA_ARCHIVE_STATE is introduced by Oracle and is used to control whether a particular record (row) within the table is archived or not. A value of 0 for the column ORA_ARCHIVE_STATE indicates the record is ACTIVE and a non-zero value indicates the record is ARCHIVED; by default, all records are in the ACTIVE state.
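As a quick sanity check (a sketch, assuming the TEST_DATA_ARCH_PART table created later in this article), the control column can be seen in USER_TAB_COLS; it is created as a hidden column, so it does not show up in USER_TAB_COLUMNS:

----//
----// sketch: confirm the hidden ORA_ARCHIVE_STATE control column //----
----//
SQL> select column_name, hidden_column
  2    from user_tab_cols
  3   where table_name = 'TEST_DATA_ARCH_PART'
  4     and column_name = 'ORA_ARCHIVE_STATE';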

Today's article is primarily focused on optimizing the utilization of In Database Archival. I will discuss two important aspects of this new Oracle Database 12c archival feature: the first emphasizes optimizing space utilization for the archived data, and the second relates to optimizing query performance when querying a table enabled with row archival.

Space optimization for In Database Archiving

With "In Database Archival", the data is archived within the same table. It is physically present in the same database, and only the logical representation is altered at the query (optimizer) level, by means of the control column ORA_ARCHIVE_STATE, when we query the data. This means the archived data still occupies the same amount of space (unless compressed) within the database.

Now, consider if the table is on a Tier-1 storage device; we are incurring a substantial cost just to maintain the archived data. Wouldn't it be great if we could store those archived records on a lower-tier storage device and compress them to further cut down the cost of the space allocation?

Guess what! This is possible with "In Database Archival", as it provides an option to optimize space utilization by allowing us to partition the table records based on their state. This means we can partition a table on the control column ORA_ARCHIVE_STATE to direct the archived data to a different storage unit (tablespace), which also enables us to apply compression just to the archived data to further trim down its space utilization.

Let's quickly go through a simple demonstration to understand these capabilities.

Demonstration

Assumptions:

  • Tablespace APPDATA is located on a Tier-1 storage
  • Tablespace ARCHDATA is located on a Tier-2 storage

Goal:

  • I would like to create a table TEST_DATA_ARCH_PART with ROW ARCHIVAL enabled. I want the ACTIVE data stored on Tier-1 storage in NOCOMPRESS format and the ARCHIVED data stored on Tier-2 storage in COMPRESSED format, ensuring we utilize the database space at its optimal level.

Let's create our table TEST_DATA_ARCH_PART with ROW ARCHIVAL enabled.

----//
----// Creating table with ROW ARCHIVAL //----
----//
SQL> create table TEST_DATA_ARCH_PART
  2  (
  3  id number,
  4  name varchar(15),
  5  join_date date
  6  )
  7  ROW ARCHIVAL
  8  partition by list (ORA_ARCHIVE_STATE) ---// partitioned on record state //---
  9  (
 10  partition P_ACTIVE values(0) tablespace APPDATA, ---// ACTIVE records //---
 11  partition P_ARCHIVED values(default) tablespace ARCHDATA ROW STORE COMPRESS ADVANCED ---// ARCHIVED records //---
 12  );

Table created.

----//
----// Defining primary key for the table //----
----//
SQL> alter table TEST_DATA_ARCH_PART add constraint PK_TEST_DATA_ARCH_PART primary key (ID);

Table altered.

In the above example, we have created the table TEST_DATA_ARCH_PART with ROW ARCHIVAL enabled. We have partitioned the table on the record state (ORA_ARCHIVE_STATE) to store the ACTIVE data (P_ACTIVE) on Tier-1 storage (APPDATA) and the ARCHIVED data (P_ARCHIVED) on Tier-2 storage (ARCHDATA). We have further enabled COMPRESSION to be applied on all the ARCHIVED records.
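Before loading any data, we can optionally double-check the partition placement and compression attributes from the data dictionary; the following is a small verification sketch using standard dictionary views:

----//
----// sketch: verify partition tablespaces and compression settings //----
----//
SQL> select partition_name, tablespace_name, compression, compress_for
  2    from user_tab_partitions
  3   where table_name = 'TEST_DATA_ARCH_PART';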

Let's populate our table with some data.

----//
----// populating table with data //----
----//
SQL> insert /*+ APPEND */ into TEST_DATA_ARCH_PART
  2  select rownum, rpad('X',15,'X'), sysdate
  3  from dual connect by rownum <=1e6;

1000000 rows created.

SQL> commit;

Commit complete.

We have populated our table with 1000000 records and all the records are in ACTIVE state by default. We can validate that by querying the table as follows.

----//
----// validating table records //----
----//
SQL>  select count(*) from TEST_DATA_ARCH_PART;

  COUNT(*)
----------
   1000000

SQL> select count(*) from TEST_DATA_ARCH_PART partition (p_active);

  COUNT(*)
----------
   1000000

SQL> select count(*) from TEST_DATA_ARCH_PART partition (p_archived);

  COUNT(*)
----------
         0

----//
----// validating active records are on Tier-1 storage device (APPDATA) //----
----//		 
SQL> select owner,segment_name as "Table Name",tablespace_name,sum(bytes)/1024/1024 Size_MB
  2  from dba_segments where segment_name='TEST_DATA_ARCH_PART' group by owner, segment_name,tablespace_name;

OWNER         Table Name                TABLESPACE_NAME         SIZE_MB
------------- ------------------------- -------------------- ----------
MYAPP         TEST_DATA_ARCH_PART       APPDATA                      40
 

As we can see, all of our table records are in the ACTIVE state and are thus stored on the Tier-1 storage device (APPDATA). Let's archive some records from our table as shown below.

----//
----// archive records by setting ORA_ARCHIVE_STATE to 1 //----
----//
SQL> update TEST_DATA_ARCH_PART
  2  set ORA_ARCHIVE_STATE=1 where id<10001;
update TEST_DATA_ARCH_PART
       *
ERROR at line 1:
ORA-14402: updating partition key column would cause a partition change

We are not allowed to archive the records. This is because the table records are in the ACTIVE partition P_ACTIVE, and archiving would require moving the records to the ARCHIVED partition P_ARCHIVED. To allow this data movement between the table partitions, we need to enable ROW MOVEMENT for the table, which is disabled by default. Let's enable ROW MOVEMENT for our table TEST_DATA_ARCH_PART.

----//
----// ROW MOVEMENT is disabled by default //----
----//
SQL> select table_name,row_movement from dba_tables where table_name='TEST_DATA_ARCH_PART';

TABLE_NAME                ROW_MOVE
------------------------- --------
TEST_DATA_ARCH_PART       DISABLED

----//
----// Enabling row movement for table //----
----//
SQL> alter table TEST_DATA_ARCH_PART enable row movement;

Table altered.

SQL> select table_name,row_movement from dba_tables where table_name='TEST_DATA_ARCH_PART';

TABLE_NAME                ROW_MOVE
------------------------- --------
TEST_DATA_ARCH_PART       ENABLED

Let's try again to archive the table records by setting the control column ORA_ARCHIVE_STATE to value 1 as shown below.

----//
----// archiving all table records by setting ORA_ARCHIVE_STATE to 1 //----
----//
SQL> update TEST_DATA_ARCH_PART
  2  set ORA_ARCHIVE_STATE=1;

1000000 rows updated.

SQL> commit;

Commit complete.

As expected, we are now allowed to ARCHIVE the table records. The archived records are stored on a lower storage tier by means of the tablespace ARCHDATA and are compressed to further trim down the space utilization for archived data. We can validate this fact as shown below.

----//
----// No active records present in the table //----
----//
SQL> select count(*) from TEST_DATA_ARCH_PART;

  COUNT(*)
----------
         0

----//
----// Enable archive record visibility //----
----//		 
SQL> alter session set row archival visibility=all;

Session altered.


SQL>  select count(*) from TEST_DATA_ARCH_PART;

  COUNT(*)
----------
   1000000

----//
----// No records present in the ACTIVE partition //----
----//   
SQL> select count(*) from TEST_DATA_ARCH_PART partition (p_active);

  COUNT(*)
----------
         0

----//
----// records are now moved to ARCHIVED partition //----
----//		 
SQL>  select count(*) from TEST_DATA_ARCH_PART partition (p_archived);

  COUNT(*)
----------
   1000000

Let's check how much space is consumed by the ARCHIVED records by querying the database segments as shown below.

----//
----// checking space consumed by the archived records //----
----//
SQL> select owner,segment_name as "Table Name",tablespace_name,sum(bytes)/1024/1024 Size_MB
  2  from dba_segments where segment_name='TEST_DATA_ARCH_PART' group by  owner,segment_name,tablespace_name;

OWNER         Table Name                TABLESPACE_NAME         SIZE_MB
------------- ------------------------- -------------------- ----------
MYAPP         TEST_DATA_ARCH_PART       ARCHDATA                     16
MYAPP         TEST_DATA_ARCH_PART       APPDATA                      40

We can see the archived records are moved in a compressed format to the Tier-2 tablespace ARCHDATA. However, the space in the Tier-1 tablespace APPDATA is not yet released. We need to manually reclaim this space as shown below.

----//
----// reclaiming unused space from the table //----
----//
SQL> alter table TEST_DATA_ARCH_PART shrink space;

Table altered.

SQL> select owner,segment_name as "Table Name",tablespace_name,sum(bytes)/1024/1024 Size_MB
  2  from dba_segments where segment_name='TEST_DATA_ARCH_PART' group by  owner,segment_name,tablespace_name;

OWNER         Table Name                TABLESPACE_NAME         SIZE_MB
------------- ------------------------- -------------------- ----------
MYAPP         TEST_DATA_ARCH_PART       ARCHDATA                12.9375
MYAPP         TEST_DATA_ARCH_PART       APPDATA                   .1875

As expected, all the records are ARCHIVED and stored in a COMPRESSED format (size: ~13 MB) on the lower-tier storage device (ARCHDATA), and the unused space is reclaimed from the Tier-1 storage APPDATA. This type of setup and utilization of "In Database Archival" (Row Archival) helps us optimize the space required to store archived data within the same table (database).

Note: We may consider using DBMS_REDEFINITION as an alternative to the SHRINK command for reorganizing and reclaiming space ONLINE.
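For illustration, an outline of that DBMS_REDEFINITION route might look like the following. This is a sketch only: it assumes an interim table TEST_DATA_ARCH_INT has been pre-created in the MYAPP schema with the desired partitioned, row-archival structure, and it omits error handling.

----//
----// sketch: online reorganization via DBMS_REDEFINITION //----
----//
SQL> declare
  2    l_errors pls_integer;
  3  begin
  4    dbms_redefinition.can_redef_table('MYAPP', 'TEST_DATA_ARCH_PART');
  5    dbms_redefinition.start_redef_table('MYAPP', 'TEST_DATA_ARCH_PART', 'TEST_DATA_ARCH_INT');
  6    dbms_redefinition.copy_table_dependents('MYAPP', 'TEST_DATA_ARCH_PART', 'TEST_DATA_ARCH_INT',
  7                                            num_errors => l_errors);
  8    dbms_redefinition.finish_redef_table('MYAPP', 'TEST_DATA_ARCH_PART', 'TEST_DATA_ARCH_INT');
  9  end;
 10  /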

Query Optimization for In Database Archiving

In Database Archiving may lead to a potential plan change or performance degradation when data is queried from the tables. This is because the query is transformed to add a filter condition that excludes ARCHIVED records from the query result. Let's quickly go through a simple demonstration to illustrate this.

Demonstration

I am using the same table from the previous example. As part of the last demonstration, we had archived all the table records. Let's populate the table with some ACTIVE records.

----//
----// populating table with ACTIVE records //----
----//
SQL> insert /*+ APPEND */ into TEST_DATA_ARCH_PART
  2  select rownum+1e6, rpad('X',15,'X'), sysdate
  3  from dual connect by rownum <=1e6;

1000000 rows created.

SQL> commit;

Commit complete.

We have populated the table with 1000000 ACTIVE records. Let's validate the records from the table.

----//
----// ACTIVE records from the table //----
----//
SQL> select count(*) from TEST_DATA_ARCH_PART;

  COUNT(*)
----------
   1000000

----//
----// enabling row archival visibility //----
----//   
SQL>  alter session set ROW ARCHIVAL VISIBILITY=ALL;

Session altered.

----//
----// Total records from the table //----
----//
SQL> select count(*) from TEST_DATA_ARCH_PART;

  COUNT(*)
----------
   2000000

SQL> select count(*) from TEST_DATA_ARCH_PART partition (p_active);

  COUNT(*)
----------
   1000000

SQL> select count(*) from TEST_DATA_ARCH_PART partition (p_archived);

  COUNT(*)
----------
   1000000

At this point, we have 2000000 records in the table, out of which 1000000 are ACTIVE and 1000000 are in the ARCHIVED state. Let's query a few records from the table and see how the SQL optimizer handles it.

In the following example, I am querying records with ID ranging between 999000 and 1000005. The query should return only 4 records as the first 1000000 records are in ARCHIVED state.

----//
----// disabling row archival visibility //----
----//  
SQL> alter session set ROW ARCHIVAL VISIBILITY=ACTIVE;

Session altered.

----//
----// selecting records from table //----
----//
SQL> select /*+ gather_plan_statistics */ * from TEST_DATA_ARCH_PART
  2   where id > 999000 and id < 1000005;

        ID NAME            JOIN_DATE
---------- --------------- ---------
   1000001 XXXXXXXXXXXXXXX 23-NOV-15
   1000002 XXXXXXXXXXXXXXX 23-NOV-15
   1000003 XXXXXXXXXXXXXXX 23-NOV-15
   1000004 XXXXXXXXXXXXXXX 23-NOV-15

If we look at the optimizer trace, we can see the query is transformed to include an additional predicate, ORA_ARCHIVE_STATE='0'. This ensures that only ACTIVE records are returned by the query.

-----//
-----// Query transformed by optimizer (formatted for readability) //----
-----//
Final query after transformations:******* UNPARSED QUERY IS *******
SELECT "TEST_DATA_ARCH_PART"."ID" "ID","TEST_DATA_ARCH_PART"."NAME" "NAME",
"TEST_DATA_ARCH_PART"."JOIN_DATE" "JOIN_DATE" 
FROM "MYAPP"."TEST_DATA_ARCH_PART" "TEST_DATA_ARCH_PART" 
WHERE 
"TEST_DATA_ARCH_PART"."ID">999000 
AND "TEST_DATA_ARCH_PART"."ID"<1000005 
AND "TEST_DATA_ARCH_PART"."ORA_ARCHIVE_STATE"='0'
AND 1000005>999000

Now, let's take a look at the execution plan of this query.

----//
----// Query plan from the optimizer //----
----//
SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  3vqgzvvmj3wb9, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ * from TEST_DATA_ARCH_PART  where
id > 999000 and id < 1000005

Plan hash value: 1454597550

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name                   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |                        |      1 |        |      4 |00:00:00.01 |       9 |      3 |
|*  1 |  TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED| TEST_DATA_ARCH_PART    |      1 |     78 |      4 |00:00:00.01 |       9 |      3 |
|*  2 |   INDEX RANGE SCAN                         | PK_TEST_DATA_ARCH_PART |      1 |    134 |   1004 |00:00:00.01 |       7 |      3 |
----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("TEST_DATA_ARCH_PART"."ORA_ARCHIVE_STATE"='0')
   2 - access("ID">999000 AND "ID"<1000005)

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)


25 rows selected.

As we can see, the optimizer used the primary key index to return the 4 ACTIVE records. However, it fetched 1004 entries during the index scan and then applied a filter ("TEST_DATA_ARCH_PART"."ORA_ARCHIVE_STATE"='0') to discard the ARCHIVED records after fetching them from the table. The optimizer thus performed 1000 additional record fetches in this case. We can eliminate these additional fetches by modifying the indexes to append the ORA_ARCHIVE_STATE column to the index definition.

Let's modify our primary key index to include the ORA_ARCHIVE_STATE column.

----//
----// Creating Index by appending ORA_ARCHIVE_STATE column into it //----
----//
SQL> create index TEST_DATA_ARCH_PART_PK on TEST_DATA_ARCH_PART (ID, ORA_ARCHIVE_STATE);

Index created.

----//
----// Disabling and dropping the existing primary key //----
----//
SQL> alter table TEST_DATA_ARCH_PART disable constraint PK_TEST_DATA_ARCH_PART;

Table altered.

SQL> alter table TEST_DATA_ARCH_PART drop constraint PK_TEST_DATA_ARCH_PART;

Table altered.

----//
----// Creating primary key using the new Index //----
----//
SQL>  alter table TEST_DATA_ARCH_PART add constraint PK_TEST_DATA_ARCH_PART primary key (ID) using index TEST_DATA_ARCH_PART_PK;

Table altered.

We have modified the primary key index to include ORA_ARCHIVE_STATE in the index definition.
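As a quick verification sketch (using standard dictionary views; the index name comes from the step above), we can confirm that the new index now carries the control column:

----//
----// sketch: confirm the index now includes ORA_ARCHIVE_STATE //----
----//
SQL> select index_name, column_name, column_position
  2    from user_ind_columns
  3   where index_name = 'TEST_DATA_ARCH_PART_PK'
  4   order by column_position;

Let's now check how the optimizer handles the SQL query.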

----//
----// Query records from table //----
----//
SQL> select /*+ gather_plan_statistics */ * from TEST_DATA_ARCH_PART
  2  where id > 999000 and id < 1000005;

        ID NAME            JOIN_DATE
---------- --------------- ---------
   1000001 XXXXXXXXXXXXXXX 23-NOV-15
   1000002 XXXXXXXXXXXXXXX 23-NOV-15
   1000003 XXXXXXXXXXXXXXX 23-NOV-15
   1000004 XXXXXXXXXXXXXXX 23-NOV-15

----//
----// Query plan from optimizer //----
----//
SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  2zgf279wu9291, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ * from TEST_DATA_ARCH_PART where
id > 999000 and id < 1000005

Plan hash value: 4096429886

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name                   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |                        |      1 |        |      4 |00:00:00.01 |      10 |      6 |
|   1 |  TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED| TEST_DATA_ARCH_PART    |      1 |     78 |      4 |00:00:00.01 |      10 |      6 |
|*  2 |   INDEX RANGE SCAN                         | TEST_DATA_ARCH_PART_PK |      1 |    134 |      4 |00:00:00.01 |       8 |      6 |
----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("ID">999000 AND "TEST_DATA_ARCH_PART"."ORA_ARCHIVE_STATE"='0' AND "ID"<1000005)
       filter("TEST_DATA_ARCH_PART"."ORA_ARCHIVE_STATE"='0')

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)


25 rows selected.

As we can see, the SQL optimizer is now filtering at the access level: it fetches only 4 records rather than the 1004 of the earlier execution plan. The modified index has helped the optimizer eliminate unnecessary I/O while fetching the records.

Conclusion

When configuring In Database Archival, we should consider partitioning the table on the ORA_ARCHIVE_STATE column to optimize space utilization for ARCHIVED records. Don't forget to enable ROW MOVEMENT on the table for archiving to work. Optionally, we may also need to reclaim, on a periodic basis, the unused space left over by the data movement between the ACTIVE and ARCHIVED partitions.

We should also consider appending the ORA_ARCHIVE_STATE column to all of the table's indexes to address any performance degradation that results from In Database Archival while querying records from the tables.
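To audit an existing schema for this, a small helper sketch (table name taken from this article; adjust as needed) can list the indexes on a row-archival table that do not yet include the control column:

----//
----// sketch: find indexes missing the ORA_ARCHIVE_STATE column //----
----//
SQL> select i.index_name
  2    from user_indexes i
  3   where i.table_name = 'TEST_DATA_ARCH_PART'
  4     and not exists (select 1
  5                       from user_ind_columns ic
  6                      where ic.index_name = i.index_name
  7                        and ic.column_name = 'ORA_ARCHIVE_STATE');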

Reference

Potential SQL Performance Degradation When Using "In Database Row Archiving" (Doc ID 1579790.1)
