SQL Optimizer for Oracle
Product Documentation
Learn More About SQL Optimizer for Oracle
The following documents provide information about how to get started with SQL Optimizer, a list of what's new in the latest release, and instructions for installing the product.
SQL Optimizer for Oracle 9.1
- Release Notes (html) - Find a list of resolved issues, what's new in this release, and the system requirements.
- New in This Release (html) - Descriptions and screen shots of the new features in this release.
- Installation Guide (html) - Instructions for installing SQL Optimizer.
- User Guide (html) - Learn how to get started with SQL Optimizer using these quick and easy tutorials.
SQL Optimizer for Oracle 9.0
SQL Optimizer for Oracle 8.9.1
SQL Optimizer for Oracle 8.9
SQL Optimizer for Oracle 8.8.1
SQL Optimizer for Oracle 8.8
Please visit SupportLink for current and earlier-version product documentation: https://support.software.dell.com/sql-optimizer-for-oracle/
Code Tester for Oracle Freeware v2.7.0.1026
This is the FREEWARE edition of Code Tester for Oracle. The Freeware edition has certain limitations, and is not intended to be used as a TRIAL for the Commercial edition of Code Tester for Oracle.
Code Tester for Oracle is the first and only automated PL/SQL code testing tool available. Created by one of the world's most prominent Oracle PL/SQL experts, Steven Feuerstein, Code Tester for Oracle delivers practical and thorough code testing. Without an efficient and reliable way to perform thorough PL/SQL code tests, there is no way to be sure that your code is bug-free. You may think, "But there's no time to perform these tests." Code Tester for Oracle makes it possible.
Histograms Pre-12c and now
Anju Garg is an Oracle ACE Associate with over 12 years of experience in the IT industry in various roles. Since 2010 she has been involved in teaching and has trained more than a hundred DBAs from across the world in core DBA technologies such as RAC, Data Guard, performance tuning, SQL statement tuning, and database administration. She is a regular speaker at Sangam and OTNYathra.
In this video, Anju discusses new features added to Oracle histograms in 12c.
Toad Extension for Eclipse Installation
Start Using Toad Extension for Eclipse
To install Toad Extension for Eclipse:
Visit the Toad Extension for Eclipse page on the Eclipse Marketplace, or just drag the icon below onto your Eclipse client.
To upgrade from version 1.9.3 or earlier, you must first perform an intermediate manual installation using the version 2.0.4 ZIP and then upgrade to the latest release.
To display Toad Extension for Eclipse once it's installed:
- Select Window | Open Perspective | Other.
- From the Open Perspective dialog select Toad Extension.
Important
If you don't have the Oracle Client installed on your machine, or you use an operating system other than Windows, you may need to download the JDBC driver from the Oracle site at http://www.oracle.com.
If you don't have a PostgreSQL client installed on your machine, or you use an operating system other than Windows, you may need to download the JDBC driver from the PostgreSQL site at http://www.postgresql.org. Then set the location of the JDBC driver in Preferences | Toad Extension | Database Specific.
You may need to download a JDBC driver for MySQL in order to connect to your database. To download it, go to the MySQL Downloads Archive at http://www.mysql.com.
Having Trouble?
If you're having trouble installing with the drag and drop to install link above, please visit the Toad Extension for Eclipse page on the Eclipse Marketplace directly.
If you need to install Toad Extension for Eclipse Community Edition manually, download the ZIP package. The archive contains two builds of Toad Extension for Eclipse to ensure a smooth transition from previous Toad Extension versions.
To install or update from the downloaded ZIP archive:
- Download the ZIP archive
- Extract the ZIP archive
- From the Help menu, choose the Install New Software... option
- Click the Add... button and choose Local...
- Navigate to the directory where you extracted the downloaded ZIP archive and confirm with OK
- Confirm the Add Repository dialog with OK
- Select all plugins you want to install/update and click Next
- Review Install Details dialog and click Next
- Review and accept the Software Transaction Agreement and click Finish
- Toad Extension for Eclipse is now installed; confirm the Software Updates dialog to restart Eclipse and start working!
Interface layout - Toad Extension for Eclipse connected to Oracle database
The following screenshot shows the Object Describe feature.
Toad Extension for Eclipse connected to PostgreSQL database
The following screenshot shows the SQL Worksheet with the Code Completion feature.
Toad Extension for Eclipse connected to MySQL database
The following screenshot shows the syntax check and export to XML, CSV, or HTML.
Product Videos
Watch our short Flash movies to find out how to create a new Toad Extension project and how to use standard Eclipse features, e.g. how to edit XML, how to use Local History, and more. Visit our Product Videos section.
Product Documentation
Toad Extension for Eclipse 2.3.2 Community Edition:
Toad Extension for Eclipse 2.3.0 Community Edition:
Toad Extension for Eclipse 2.2.4 Community Edition:
Toad Extension for Eclipse 2.2.3 Community Edition:
Toad Extension for Eclipse 2.2.1 Community Edition:
Toad Extension for Eclipse 2.2.0 Community Edition:
Toad Extension for Eclipse 2.1.3 Community Edition:
Toad Extension for Eclipse 2.1.2 Community Edition:
Toad Extension for Eclipse 2.1.1 Community Edition:
Toad Extension for Eclipse 2.1.0 Community Edition:
Toad Extension for Eclipse 2.0.4 Community Edition:
Toad Extension for Eclipse 2.0.3 Community Edition:
Toad Extension for Eclipse 2.0.2 Community Edition:
Toad Extension for Eclipse 2.0.1 Community Edition:
Toad Extension for Eclipse 2.0.0 Community Edition:
Toad Extension for Eclipse 1.9.3 Community Edition:
Toad crash on connect
Toad 6.0 on Win 7
Toad consistently crashes when opening a connection to a database. Below is some of the detail from Event Viewer; it seems to be having trouble with DB2APP.dll.
Can anyone give me a hint as to what's wrong?
Application: toad.exe
Framework Version: v4.0.30319
Description: The process was terminated due to an unhandled exception.
Exception Info: System.AccessViolationException
Stack:
at IBM.Data.DB2.UnsafeNativeMethods+DB232.CSCStartTxnTimerADONET(IntPtr, Int32 ByRef)
at IBM.Data.DB2.DB2CscConnection.StartTxnTimer()
at IBM.Data.DB2.DB2Transaction.BeginTransaction()
at IBM.Data.DB2.DB2Connection.BeginTransactionObject(System.Data.IsolationLevel)
at IBM.Data.DB2.DB2Connection.BeginTransaction(System.Data.IsolationLevel)
at IBM.Data.DB2.DB2Connection.BeginTransaction()
at IBM.Data.DB2.DB2Connection.System.Data.IDbConnection.BeginTransaction()
at Quest.Toad.Db.Connection.BeginTransaction(System.Data.IDbConnection)
at Quest.Toad.DB2.DB2ToadConnection.BeginTransaction(System.Data.IDbConnection)
at Quest.Toad.Db.Connection.OpenConnection(System.Data.IDbConnection)
at Quest.Toad.DB2.DB2ToadConnection.OpenConnection(System.Data.IDbConnection)
at Quest.Toad.Db.Connection.AllocConnection()
at Quest.Toad.Db.Connection.Connect(Boolean)
at Quest.Toad.Db.Provider+BackgroundConnector.CreateBackgroundConnection()
at System.Threading.ThreadHelper.ThreadStart_Context(System.Object)
at System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
at System.Threading.ThreadHelper.ThreadStart()
=====================================================================
Faulting application name: toad.exe, version: 6.0.0.373, time stamp: 0x54639ac8
Faulting module name: DB2APP.dll, version: 10.5.600.232, time stamp: 0x55bc43c6
Exception code: 0xc0000005
Fault offset: 0x004167c5
Faulting process id: 0xde4
Faulting application start time: 0x01d10c5eb941bf67
Faulting application path: C:\Program Files\Dell\Toad for DB2 6.0\toad.exe
Faulting module path: C:\IBM\SQLLIB\BIN\DB2APP.dll
Report Id: 118309e7-7852-11e5-8582-0023240b2629
Toad - Mac Edition 2.3.0
Version: 2.3.0
Released: 27/10/2015
Toad - Mac Edition is a native Mac application for database development. Designed to help database developers be more productive, Toad - Mac Edition provides essential database tools for Oracle, MySQL, and PostgreSQL.
Boost your database development productivity on Mac and develop highly-functional database applications fast.
NOTE: You will be redirected to the iTunes App Store for download.
Using Oracle Database with CDH 5.2 Sqoop 1.4.5
Written by Deepak Vhora
An earlier tutorial on using Oracle Database with Sqoop used earlier versions of the software: Apache Sqoop 1.4.1 (incubating), Oracle Database 10g Express Edition, Apache Hadoop 1.0.0, Apache Hive 0.9.0, and Apache HBase 0.94.1. In this tutorial Oracle Database 11g is used with later versions: CDH 5.2 Sqoop 1.4.5 (sqoop-1.4.5-cdh5.2.0), Hadoop 2.5.0 (hadoop-2.5.0-cdh5.2.0), Hive 0.13.1 (hive-0.13.1-cdh5.2.0), and HBase 0.98.6 (hbase-0.98.6-cdh5.2.0). This tutorial has the following sections.
- Setting the Environment
- Creating an Oracle Database Table
- Importing into HDFS
- Exporting from HDFS
- Importing into HBase
- Importing into Hive
Setting the Environment
The following software is required for this tutorial.
- Oracle Database 11g
- Sqoop 1.4.5 (sqoop-1.4.5-cdh5.2.0)
- Hadoop 2.5.0 (hadoop-2.5.0-cdh5.2.0)
- Hive 0.13.1 (hive-0.13.1-cdh5.2.0)
- HBase 0.98.6 (hbase-0.98.6-cdh5.2.0)
- Java 7
Create a directory /sqoop to install the software and set its permissions.
mkdir /sqoop
chmod -R 777 /sqoop
cd /sqoop
Add the hadoop group and create the hbase user in the hadoop group.
groupadd hadoop
useradd -g hadoop hbase
Download and extract the Java 7 gz file.
tar zxvf jdk-7u55-linux-i586.gz
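The archive extracts to /sqoop/jdk1.7.0_55, the path used for JAVA_HOME later in this tutorial. You can optionally verify the extracted JDK:
/sqoop/jdk1.7.0_55/bin/java -version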
Download and extract the Hadoop 2.5.0 tar.gz file.
wget http://archive-primary.cloudera.com/cdh5/cdh/5/hadoop-2.5.0-cdh5.2.0.tar.gz
tar -xvf hadoop-2.5.0-cdh5.2.0.tar.gz
Create symlinks for the Hadoop conf and bin directories.
ln -s /sqoop/hadoop-2.5.0-cdh5.2.0/bin-mapreduce1 /sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1/bin
ln -s /sqoop/hadoop-2.5.0-cdh5.2.0/etc/hadoop /sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1/conf
Download and extract the Sqoop 1.4.5 tar.gz file.
wget http://archive-primary.cloudera.com/cdh5/cdh/5/sqoop-1.4.5-cdh5.2.0.tar.gz
tar -xvf sqoop-1.4.5-cdh5.2.0.tar.gz
Copy the Oracle JDBC Jar file to the Sqoop lib directory.
cp ojdbc6.jar /sqoop/sqoop-1.4.5-cdh5.2.0/lib
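The ojdbc6.jar driver ships with Oracle Database 11g; if it is not already in the current directory, it can usually be copied from the database installation (assumed path, with ORACLE_HOME as set later in this tutorial):
cp $ORACLE_HOME/jdbc/lib/ojdbc6.jar /sqoop/sqoop-1.4.5-cdh5.2.0/lib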
Download and extract the Hive 0.13.1 tar.gz file.
wget http://archive-primary.cloudera.com/cdh5/cdh/5/hive-0.13.1-cdh5.2.0.tar.gz
tar -xvf hive-0.13.1-cdh5.2.0.tar.gz
Create a hive-site.xml configuration file from the template file.
cp /sqoop/hive-0.13.1-cdh5.2.0/conf/hive-default.xml.template /sqoop/hive-0.13.1-cdh5.2.0/conf/hive-site.xml
Set the following configuration properties in the /sqoop/hive-0.13.1-cdh5.2.0/conf/hive-site.xml file. The host IP address specified in hive.metastore.warehouse.dir may differ in your environment.
<property>
<name>hive.metastore.warehouse.dir</name>
<value>hdfs://10.0.2.15:8020/user/hive/warehouse</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://localhost:10000</value>
</property>
Download and extract the HBase 0.98.6 tar.gz file.
wget http://archive-primary.cloudera.com/cdh5/cdh/5/hbase-0.98.6-cdh5.2.0.tar.gz
tar -xvf hbase-0.98.6-cdh5.2.0.tar.gz
Set the following configuration properties in the /sqoop/hbase-0.98.6-cdh5.2.0/conf/hbase-site.xml file. The IP address specified in hbase.rootdir may differ in your environment.
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://10.0.2.15:8020/hbase</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/zookeeper</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>localhost</value>
</property>
<property>
<name>hbase.regionserver.port</name>
<value>60020</value>
</property>
<property>
<name>hbase.master.port</name>
<value>60000</value>
</property>
</configuration>
Create the directory specified in the hbase.zookeeper.property.dataDir property and set its permissions.
mkdir -p /zookeeper
chmod -R 700 /zookeeper
As the root user, increase the maximum number of open file handles in the /etc/security/limits.conf file.
hdfs - nofile 32768
hbase - nofile 32768
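The new limits take effect at the next login; after logging in again as the affected user, you can verify the open-file limit with:
ulimit -n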
Set the environment variables for Oracle Database, Sqoop, Hadoop, Hive, HBase and Java.
vi ~/.bashrc
export HADOOP_PREFIX=/sqoop/hadoop-2.5.0-cdh5.2.0
export HADOOP_CONF=$HADOOP_PREFIX/etc/hadoop
export HIVE_HOME=/sqoop/hive-0.13.1-cdh5.2.0
export HBASE_HOME=/sqoop/hbase-0.98.6-cdh5.2.0
export SQOOP_HOME=/sqoop/sqoop-1.4.5-cdh5.2.0
export ORACLE_HOME=/home/oracle/app/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=ORCL
export JAVA_HOME=/sqoop/jdk1.7.0_55
export HADOOP_MAPRED_HOME=/sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1
export HADOOP_HOME=/sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1
export HADOOP_CLASSPATH=$HADOOP_HOME/*:$HADOOP_HOME/lib/*:$SQOOP_HOME/lib/*:$HBASE_HOME/lib/*:$HIVE_HOME/lib/*
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_MAPRED_HOME/bin:$SQOOP_HOME/bin:$HBASE_HOME/bin:$HIVE_HOME/bin:$ORACLE_HOME/bin
export CLASSPATH=$HADOOP_CLASSPATH
export HADOOP_NAMENODE_USER=sqoop
export HADOOP_DATANODE_USER=sqoop
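After saving the file, reload it so the variables take effect in the current shell:
source ~/.bashrc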
Set the following Hadoop core properties in the /sqoop/hadoop-2.5.0-cdh5.2.0/etc/hadoop/core-site.xml configuration file. The IP address specified in the fs.defaultFS property may differ in your environment.
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://10.0.2.15:8020</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/var/lib/hadoop-0.20/cache</value>
</property>
</configuration>
Create the directory specified in the hadoop.tmp.dir property and set its permissions.
mkdir -p /var/lib/hadoop-0.20/cache
chmod -R 777 /var/lib/hadoop-0.20/cache
Set the following HDFS configuration properties in the /sqoop/hadoop-2.5.0-cdh5.2.0/etc/hadoop/hdfs-site.xml file.
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.permissions.superusergroup</name>
<value>hadoop</value>
</property><property>
<name>dfs.namenode.name.dir</name>
<value>/data/1/dfs/nn</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<property>
<name>dfs.datanode.max.xcievers</name>
<value>4096</value>
</property>
</configuration>
Create the NameNode storage directory and set its permissions.
mkdir -p /data/1/dfs/nn
chmod -R 777 /data/1/dfs/nn
Format the NameNode and start the NameNode and the DataNode.
hadoop namenode -format
hadoop namenode
hadoop datanode
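Note that the hadoop namenode and hadoop datanode commands run in the foreground, so start each in its own terminal. From another terminal you can optionally confirm that HDFS is up with a standard report:
hdfs dfsadmin -report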
Create the HDFS directory specified in the hive.metastore.warehouse.dir property in the hive-site.xml file and set its permissions.
hadoop dfs -mkdir -p hdfs://10.0.2.15:8020/user/hive/warehouse
hadoop dfs -chmod -R 777 hdfs://10.0.2.15:8020/user/hive/warehouse
Create the HDFS directory specified in the hbase.rootdir property in the hbase-site.xml file and set its permissions.
hadoop dfs -mkdir /hbase
hadoop dfs -chmod -R 777 /hbase
We need to copy the Sqoop lib JARs to HDFS so that they are available on the runtime classpath. Create an HDFS directory for the Sqoop lib JARs, set its permissions, and put the JARs into HDFS.
hadoop dfs -mkdir hdfs://10.0.2.15:8020/sqoop/sqoop-1.4.5-cdh5.2.0/lib
hadoop dfs -chmod -R 777 hdfs://10.0.2.15:8020/sqoop/sqoop-1.4.5-cdh5.2.0/lib
hdfs dfs -put /sqoop/sqoop-1.4.5-cdh5.2.0/lib/* hdfs://10.0.2.15:8020/sqoop/sqoop-1.4.5-cdh5.2.0/lib
Similarly, create an HDFS directory for the Hive lib JARs, set its permissions, and put the Hive lib JARs into HDFS.
hadoop dfs -mkdir hdfs://10.0.2.15:8020/sqoop/hive-0.13.1-cdh5.2.0/lib
hadoop dfs -chmod -R 777 hdfs://10.0.2.15:8020/sqoop/hive-0.13.1-cdh5.2.0/lib
hdfs dfs -put /sqoop/hive-0.13.1-cdh5.2.0/lib/* hdfs://10.0.2.15:8020/sqoop/hive-0.13.1-cdh5.2.0/lib
Similarly, create an HDFS directory for the HBase lib JARs, set its permissions, and put the HBase lib JARs into HDFS.
hadoop dfs -mkdir hdfs://10.0.2.15:8020/sqoop/hbase-0.98.6-cdh5.2.0/lib
hadoop dfs -chmod -R 777 hdfs://10.0.2.15:8020/sqoop/hbase-0.98.6-cdh5.2.0/lib
hdfs dfs -put /sqoop/hbase-0.98.6-cdh5.2.0/lib/* hdfs://10.0.2.15:8020/sqoop/hbase-0.98.6-cdh5.2.0/lib
Start the HBase Master, RegionServer, and ZooKeeper daemons.
hbase-daemon.sh start master
hbase-daemon.sh start regionserver
hbase-daemon.sh start zookeeper
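To confirm the daemons started, you can list the running Java processes with the JDK's jps tool; HMaster, HRegionServer, and HQuorumPeer should appear among them:
jps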
Creating an Oracle Database Table
In this section we create the Oracle Database table whose data is to be imported and exported with Sqoop. In SQL*Plus, connect as the OE schema and create a database table wlslog.
CONNECT OE/OE;
CREATE TABLE OE.wlslog (time_stamp VARCHAR2(4000), category VARCHAR2(4000), type VARCHAR2(4000), servername VARCHAR2(4000), code VARCHAR2(4000), msg VARCHAR2(4000));
Run the following SQL script to add data to the wlslog table.
INSERT INTO wlslog(time_stamp,category,type,servername,code,msg) VALUES('Apr-8-2014-7:06:16-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to STANDBY');
INSERT INTO wlslog(time_stamp,category,type,servername,code,msg) VALUES('Apr-8-2014-7:06:17-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to STARTING');
INSERT INTO wlslog(time_stamp,category,type,servername,code,msg) VALUES('Apr-8-2014-7:06:18-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to ADMIN');
INSERT INTO wlslog(time_stamp,category,type,servername,code,msg) VALUES('Apr-8-2014-7:06:19-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to RESUMING');
INSERT INTO wlslog(time_stamp,category,type,servername,code,msg) VALUES('Apr-8-2014-7:06:20-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000361','Started WebLogic AdminServer');
INSERT INTO wlslog(time_stamp,category,type,servername,code,msg) VALUES('Apr-8-2014-7:06:21-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to RUNNING');
INSERT INTO wlslog(time_stamp,category,type,servername,code,msg) VALUES('Apr-8-2014-7:06:22-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000360','Server started in RUNNING mode');
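Commit the transaction so that other sessions, such as the Sqoop import below, can see the rows; Oracle exposes only committed data to other sessions:
COMMIT;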
As the output in SQL*Plus indicates, the database table wlslog is created.
Create another table, WLSLOG_COPY, with the same structure as the wlslog table, to serve as the target of the export from HDFS.
CREATE TABLE WLSLOG_COPY(time_stamp VARCHAR2(4000), category VARCHAR2(4000), type VARCHAR2(4000), servername VARCHAR2(4000), code VARCHAR2(4000), msg VARCHAR2(4000));
The WLSLOG_COPY table is created. Do not add data to this table; data is to be exported to it from HDFS.
Importing into HDFS
In this section Sqoop is used to import Oracle Database table data into HDFS. Run the following sqoop import command to import into HDFS.
sqoop import --connect "jdbc:oracle:thin:@localhost:1521:ORCL" --password "OE" --username "OE" --table "wlslog" --columns "time_stamp,category,type,servername,code,msg" --split-by "time_stamp" --target-dir "/oradb/import" --verbose
The sqoop import command arguments are as follows.
Argument | Description | Value |
--connect | Sets the JDBC connection URL for Oracle Database | "jdbc:oracle:thin:@localhost:1521:ORCL" |
--username | Sets the username to connect to Oracle Database | "OE" |
--password | Sets the password for Oracle Database | "OE" |
--table | Sets the Oracle Database table name | "wlslog" |
--columns | Sets the Oracle Database table columns | "time_stamp,category,type,servername,code,msg" |
--split-by | Sets the column used to split the import | "time_stamp" |
--target-dir | Sets the HDFS directory to import into | "/oradb/import" |
--verbose | Enables verbose output | |
A MapReduce job runs to import the Oracle Database table data into HDFS.
A more detailed output from the sqoop import command is as follows.
15/04/03 11:09:17 INFO mapred.LocalJobRunner:
15/04/03 11:09:18 INFO mapred.JobClient: map 100% reduce 0%
15/04/03 11:09:22 INFO mapred.Task: Task:attempt_local1162911152_0001_m_000000_0 is done. And is in the process of commiting
15/04/03 11:09:22 INFO mapred.LocalJobRunner:
15/04/03 11:09:22 INFO mapred.Task: Task attempt_local1162911152_0001_m_000000_0 is allowed to commit now
15/04/03 11:09:24 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1162911152_0001_m_000000_0' to /oradb/import
15/04/03 11:09:24 INFO mapred.LocalJobRunner:
15/04/03 11:09:24 INFO mapred.Task: Task 'attempt_local1162911152_0001_m_000000_0' done.
15/04/03 11:09:24 INFO mapred.LocalJobRunner: Finishing task: attempt_local1162911152_0001_m_000000_0
15/04/03 11:09:24 INFO mapred.LocalJobRunner: Map task executor complete.
15/04/03 11:09:25 INFO mapred.JobClient: Job complete: job_local1162911152_0001
15/04/03 11:09:26 INFO mapred.JobClient: Counters: 18
15/04/03 11:09:26 INFO mapred.JobClient: File System Counters
15/04/03 11:09:26 INFO mapred.JobClient: FILE: Number of bytes read=21673941
15/04/03 11:09:26 INFO mapred.JobClient: FILE: Number of bytes written=21996421
15/04/03 11:09:26 INFO mapred.JobClient: FILE: Number of read operations=0
15/04/03 11:09:26 INFO mapred.JobClient: FILE: Number of large read operations=0
15/04/03 11:09:26 INFO mapred.JobClient: FILE: Number of write operations=0
15/04/03 11:09:26 INFO mapred.JobClient: HDFS: Number of bytes read=0
15/04/03 11:09:26 INFO mapred.JobClient: HDFS: Number of bytes written=717
15/04/03 11:09:26 INFO mapred.JobClient: HDFS: Number of read operations=1
15/04/03 11:09:26 INFO mapred.JobClient: HDFS: Number of large read operations=0
15/04/03 11:09:26 INFO mapred.JobClient: HDFS: Number of write operations=2
15/04/03 11:09:26 INFO mapred.JobClient: Map-Reduce Framework
15/04/03 11:09:26 INFO mapred.JobClient: Map input records=7
15/04/03 11:09:26 INFO mapred.JobClient: Map output records=7
15/04/03 11:09:26 INFO mapred.JobClient: Input split bytes=87
15/04/03 11:09:26 INFO mapred.JobClient: Spilled Records=0
15/04/03 11:09:26 INFO mapred.JobClient: CPU time spent (ms)=0
15/04/03 11:09:26 INFO mapred.JobClient: Physical memory (bytes) snapshot=0
15/04/03 11:09:26 INFO mapred.JobClient: Virtual memory (bytes) snapshot=0
15/04/03 11:09:26 INFO mapred.JobClient: Total committed heap usage (bytes)=180756480
15/04/03 11:09:26 INFO mapreduce.ImportJobBase: Transferred 717 bytes in 182.2559 seconds (3.934 bytes/sec)
15/04/03 11:09:26 INFO mapreduce.ImportJobBase: Retrieved 7 records.
15/04/03 11:09:26 DEBUG util.ClassLoaderStack: Restoring classloader: sun.misc.Launcher$AppClassLoader@3d4817
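After the import completes, the data can be inspected in HDFS; the job produced a single part file, /oradb/import/part-m-00000, the same file the export splits in the next section read:
hadoop dfs -ls /oradb/import
hadoop dfs -cat /oradb/import/part-m-00000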
Exporting from HDFS
Having imported the data into HDFS, in this section we export it back into Oracle Database using the sqoop export tool. Run the following sqoop export command to export to Oracle Database.
sqoop export --connect "jdbc:oracle:thin:@localhost:1521:ORCL" --hadoop-home "/sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1" --password "OE" --username "OE" --export-dir "/oradb/import" --table "WLSLOG_COPY" --verbose
The sqoop export command arguments are as follows.
Argument | Description | Value |
--connect | Sets the JDBC connection URL for Oracle Database | "jdbc:oracle:thin:@localhost:1521:ORCL" |
--username | Sets the username to connect to Oracle Database | "OE" |
--password | Sets the password for Oracle Database | "OE" |
--table | Sets the Oracle Database table name to export to | "WLSLOG_COPY" |
--hadoop-home | Sets the Hadoop home directory | "/sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1" |
--export-dir | Sets the HDFS directory to export from; must be the same directory the data was imported into | "/oradb/import" |
--verbose | Enables verbose output | |
A MapReduce job runs to export HDFS data into Oracle Database.
A more detailed output from the sqoop export command is as follows.
[root@localhost sqoop]# sqoop export --connect "jdbc:oracle:thin:@localhost:1521:ORCL" --hadoop-home "/sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1" --password "OE" --username "OE" --export-dir "/oradb/import" --table "WLSLOG_COPY" --verbose
15/04/03 11:13:03 DEBUG manager.DefaultManagerFactory: Trying with scheme: jdbc:oracle:thin:@localhost:1521
15/04/03 11:13:03 DEBUG manager.OracleManager$ConnCache: Instantiated new connection cache.
15/04/03 11:13:03 INFO manager.SqlManager: Using default fetchSize of 1000
15/04/03 11:13:03 DEBUG sqoop.ConnFactory: Instantiated ConnManager org.apache.sqoop.manager.OracleManager@1101fa5
15/04/03 11:13:03 INFO tool.CodeGenTool: Beginning code generation
15/04/03 11:13:04 DEBUG manager.OracleManager: Using column names query: SELECT t.* FROM WLSLOG_COPY t WHERE 1=0
15/04/03 11:13:04 DEBUG manager.SqlManager: Execute getColumnInfoRawQuery : SELECT t.* FROM WLSLOG_COPY t WHERE 1=0
15/04/03 11:13:05 DEBUG manager.OracleManager: Creating a new connection for jdbc:oracle:thin:@localhost:1521:ORCL, using username: OE
15/04/03 11:13:05 DEBUG manager.OracleManager: No connection paramenters specified. Using regular API for making connection.
15/04/03 11:13:10 INFO manager.OracleManager: Time zone has been set to GMT
15/04/03 11:13:11 DEBUG manager.SqlManager: Using fetchSize for next query: 1000
15/04/03 11:13:11 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM WLSLOG_COPY t WHERE 1=0
15/04/03 11:13:15 DEBUG manager.OracleManager$ConnCache: Caching released connection for jdbc:oracle:thin:@localhost:1521:ORCL/OE
15/04/03 11:13:15 DEBUG orm.ClassWriter: selected columns:
15/04/03 11:13:15 DEBUG orm.ClassWriter: TIME_STAMP
15/04/03 11:13:15 DEBUG orm.ClassWriter: CATEGORY
15/04/03 11:13:15 DEBUG orm.ClassWriter: TYPE
15/04/03 11:13:15 DEBUG orm.ClassWriter: SERVERNAME
15/04/03 11:13:15 DEBUG orm.ClassWriter: CODE
15/04/03 11:13:15 DEBUG orm.ClassWriter: MSG
15/04/03 11:14:00 INFO mapreduce.ExportJobBase: Beginning export of WLSLOG_COPY
15/04/03 11:14:00 DEBUG util.ClassLoaderStack: Checking for existing class: WLSLOG_COPY
15/04/03 11:14:52 DEBUG mapreduce.JobBase: Using InputFormat: class org.apache.sqoop.mapreduce.ExportInputFormat
15/04/03 11:14:54 DEBUG db.DBConfiguration: Securing password into job credentials store
15/04/03 11:14:54 DEBUG manager.OracleManager$ConnCache: Got cached connection for jdbc:oracle:thin:@localhost:1521:ORCL/OE
15/04/03 11:15:23 INFO input.FileInputFormat: Total input paths to process : 1
15/04/03 11:15:23 DEBUG mapreduce.ExportInputFormat: Target numMapTasks=4
15/04/03 11:15:23 DEBUG mapreduce.ExportInputFormat: Total input bytes=717
15/04/03 11:15:23 DEBUG mapreduce.ExportInputFormat: maxSplitSize=179
15/04/03 11:15:23 INFO input.FileInputFormat: Total input paths to process : 1
15/04/03 11:15:25 DEBUG mapreduce.ExportInputFormat: Generated splits:
15/04/03 11:15:25 DEBUG mapreduce.ExportInputFormat: Paths:/oradb/import/part-m-00000:0+179 Locations:localhost:;
15/04/03 11:15:25 DEBUG mapreduce.ExportInputFormat: Paths:/oradb/import/part-m-00000:179+179 Locations:localhost:;
15/04/03 11:15:25 DEBUG mapreduce.ExportInputFormat: Paths:/oradb/import/part-m-00000:358+179 Locations:localhost:;
15/04/03 11:15:25 DEBUG mapreduce.ExportInputFormat: Paths:/oradb/import/part-m-00000:537+90,/oradb/import/part-m-00000:627+90 Locations:localhost:;
15/04/03 11:16:35 INFO mapred.LocalJobRunner: OutputCommitter set in config null
15/04/03 11:16:35 INFO mapred.JobClient: Running job: job_local596048800_0001
15/04/03 11:16:35 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.sqoop.mapreduce.NullOutputCommitter
15/04/03 11:16:36 INFO mapred.LocalJobRunner: Waiting for map tasks
15/04/03 11:16:36 INFO mapred.LocalJobRunner: Starting task: attempt_local596048800_0001_m_000000_0
15/04/03 11:16:37 INFO mapred.JobClient: map 0% reduce 0%
15/04/03 11:16:38 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
15/04/03 11:16:40 INFO util.ProcessTree: setsid exited with exit code 0
15/04/03 11:16:41 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@9487e9
15/04/03 11:16:41 INFO mapred.MapTask: Processing split: Paths:/oradb/import/part-m-00000:537+90,/oradb/import/part-m-00000:627+90
15/04/03 11:16:41 DEBUG mapreduce.CombineShimRecordReader: ChildSplit operates on: hdfs://10.0.2.15:8020/oradb/import/part-m-00000
15/04/03 11:16:41 DEBUG db.DBConfiguration: Fetching password from job credentials store
15/04/03 11:16:46 DEBUG mapreduce.CombineShimRecordReader: ChildSplit operates on: hdfs://10.0.2.15:8020/oradb/import/part-m-00000
15/04/03 11:16:46 INFO mapred.LocalJobRunner:
15/04/03 11:16:48 DEBUG mapreduce.AsyncSqlOutputFormat: Committing transaction of 1 statements
15/04/03 11:16:48 INFO mapred.Task: Task:attempt_local596048800_0001_m_000000_0 is done. And is in the process of commiting
15/04/03 11:16:49 INFO mapred.LocalJobRunner:
15/04/03 11:16:49 INFO mapred.Task: Task 'attempt_local596048800_0001_m_000000_0' done.
15/04/03 11:16:49 INFO mapred.LocalJobRunner: Finishing task: attempt_local596048800_0001_m_000000_0
15/04/03 11:16:49 INFO mapred.LocalJobRunner: Starting task: attempt_local596048800_0001_m_000001_0
15/04/03 11:16:49 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
15/04/03 11:16:49 INFO mapred.JobClient: map 25% reduce 0%
15/04/03 11:16:49 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@318c80
15/04/03 11:16:49 INFO mapred.MapTask: Processing split: Paths:/oradb/import/part-m-00000:0+179
15/04/03 11:16:49 DEBUG mapreduce.CombineShimRecordReader: ChildSplit operates on: hdfs://10.0.2.15:8020/oradb/import/part-m-00000
15/04/03 11:16:49 DEBUG db.DBConfiguration: Fetching password from job credentials store
15/04/03 11:16:53 INFO mapred.LocalJobRunner:
15/04/03 11:16:53 DEBUG mapreduce.AsyncSqlOutputFormat: Committing transaction of 1 statements
15/04/03 11:16:53 INFO mapred.Task: Task:attempt_local596048800_0001_m_000001_0 is done. And is in the process of commiting
15/04/03 11:16:53 INFO mapred.LocalJobRunner:
15/04/03 11:16:53 INFO mapred.Task: Task 'attempt_local596048800_0001_m_000001_0' done.
15/04/03 11:16:53 INFO mapred.LocalJobRunner: Finishing task: attempt_local596048800_0001_m_000001_0
15/04/03 11:16:53 INFO mapred.LocalJobRunner: Starting task: attempt_local596048800_0001_m_000002_0
15/04/03 11:16:53 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
15/04/03 11:16:53 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@19d4b58
15/04/03 11:16:53 INFO mapred.MapTask: Processing split: Paths:/oradb/import/part-m-00000:179+179
15/04/03 11:16:53 DEBUG mapreduce.CombineShimRecordReader: ChildSplit operates on: hdfs://10.0.2.15:8020/oradb/import/part-m-00000
15/04/03 11:16:53 DEBUG db.DBConfiguration: Fetching password from job credentials store
15/04/03 11:16:54 INFO mapred.JobClient: map 50% reduce 0%
15/04/03 11:16:58 DEBUG mapreduce.AutoProgressMapper: Progress thread shutdown detected.
15/04/03 11:16:58 INFO mapred.LocalJobRunner:
15/04/03 11:16:58 DEBUG mapreduce.AsyncSqlOutputFormat: Committing transaction of 1 statements
15/04/03 11:16:58 INFO mapred.Task: Task:attempt_local596048800_0001_m_000002_0 is done. And is in the process of commiting
15/04/03 11:16:58 INFO mapred.LocalJobRunner:
15/04/03 11:16:58 INFO mapred.Task: Task 'attempt_local596048800_0001_m_000002_0' done.
15/04/03 11:16:58 INFO mapred.LocalJobRunner: Finishing task: attempt_local596048800_0001_m_000002_0
15/04/03 11:16:58 INFO mapred.LocalJobRunner: Starting task: attempt_local596048800_0001_m_000003_0
15/04/03 11:16:58 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
15/04/03 11:16:58 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@1059ca1
15/04/03 11:16:58 INFO mapred.MapTask: Processing split: Paths:/oradb/import/part-m-00000:358+179
15/04/03 11:16:58 DEBUG mapreduce.CombineShimRecordReader: ChildSplit operates on: hdfs://10.0.2.15:8020/oradb/import/part-m-00000
15/04/03 11:16:58 DEBUG db.DBConfiguration: Fetching password from job credentials store
15/04/03 11:16:59 INFO mapred.JobClient: map 75% reduce 0%
15/04/03 11:17:02 INFO mapred.LocalJobRunner:
15/04/03 11:17:02 DEBUG mapreduce.AsyncSqlOutputFormat: Committing transaction of 1 statements
15/04/03 11:17:03 INFO mapred.Task: Task:attempt_local596048800_0001_m_000003_0 is done. And is in the process of commiting
15/04/03 11:17:03 INFO mapred.LocalJobRunner:
15/04/03 11:17:03 INFO mapred.Task: Task 'attempt_local596048800_0001_m_000003_0' done.
15/04/03 11:17:03 INFO mapred.LocalJobRunner: Finishing task: attempt_local596048800_0001_m_000003_0
15/04/03 11:17:03 INFO mapred.LocalJobRunner: Map task executor complete.
15/04/03 11:17:03 INFO mapred.JobClient: map 100% reduce 0%
15/04/03 11:17:03 INFO mapred.JobClient: Job complete: job_local596048800_0001
15/04/03 11:17:04 INFO mapred.JobClient: Counters: 18
15/04/03 11:17:04 INFO mapred.JobClient: File System Counters
15/04/03 11:17:04 INFO mapred.JobClient: FILE: Number of bytes read=86701670
15/04/03 11:17:04 INFO mapred.JobClient: FILE: Number of bytes written=87982780
15/04/03 11:17:04 INFO mapred.JobClient: FILE: Number of read operations=0
15/04/03 11:17:04 INFO mapred.JobClient: FILE: Number of large read operations=0
15/04/03 11:17:04 INFO mapred.JobClient: FILE: Number of write operations=0
15/04/03 11:17:04 INFO mapred.JobClient: HDFS: Number of bytes read=4720
15/04/03 11:17:04 INFO mapred.JobClient: HDFS: Number of bytes written=0
15/04/03 11:17:04 INFO mapred.JobClient: HDFS: Number of read operations=78
15/04/03 11:17:04 INFO mapred.JobClient: HDFS: Number of large read operations=0
15/04/03 11:17:04 INFO mapred.JobClient: HDFS: Number of write operations=0
15/04/03 11:17:04 INFO mapred.JobClient: Map-Reduce Framework
15/04/03 11:17:04 INFO mapred.JobClient: Map input records=7
15/04/03 11:17:04 INFO mapred.JobClient: Map output records=7
15/04/03 11:17:04 INFO mapred.JobClient: Input split bytes=576
15/04/03 11:17:04 INFO mapred.JobClient: Spilled Records=0
15/04/03 11:17:04 INFO mapred.JobClient: CPU time spent (ms)=0
15/04/03 11:17:04 INFO mapred.JobClient: Physical memory (bytes) snapshot=0
15/04/03 11:17:04 INFO mapred.JobClient: Virtual memory (bytes) snapshot=0
15/04/03 11:17:04 INFO mapred.JobClient: Total committed heap usage (bytes)=454574080
15/04/03 11:17:04 INFO mapreduce.ExportJobBase: Transferred 4.6094 KB in 128.7649 seconds (36.656 bytes/sec)
15/04/03 11:17:04 INFO mapreduce.ExportJobBase: Exported 7 records.
15/04/03 11:17:04 DEBUG util.ClassLoaderStack: Restoring classloader: sun.misc.Launcher$AppClassLoader@3d4817
Run a SELECT statement in SQL*Plus to list the exported data.
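For example, the following query lists all rows (any equivalent query works):
SELECT * FROM WLSLOG_COPY;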
The 7 rows of data exported to the WLSLOG_COPY table get listed.
Importing into HBase
In this section Sqoop is used to import Oracle Database table data into HBase. Run the following sqoop import command to import into HBase.
sqoop import --connect "jdbc:oracle:thin:@localhost:1521:ORCL" --hadoop-home "/sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1" --password "OE" --username "OE" --hbase-create-table --hbase-table "WLS_LOG" --column-family "wls" --table "wlslog" --verbose
The sqoop import command arguments are as follows.
Argument | Description | Value |
--connect | Sets the JDBC connection URL for Oracle Database | "jdbc:oracle:thin:@localhost:1521:ORCL" |
--username | Sets the username to connect to Oracle Database | "OE" |
--password | Sets the password for Oracle Database | "OE" |
--table | Sets the Oracle Database table name | "wlslog" |
--hadoop-home | Sets the Hadoop home directory | "/sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1" |
--hbase-create-table | Creates the HBase table | |
--hbase-table | Sets the HBase table name | "WLS_LOG" |
--column-family | Sets the HBase column family | "wls" |
--verbose | Enables verbose output | |
A MapReduce job runs to import Oracle Database data into HBase.
A more detailed output from the sqoop import command is as follows.
[root@localhost sqoop]# sqoop import --connect "jdbc:oracle:thin:@localhost:1521:ORCL" --hadoop-home "/sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1" --password "OE" --username "OE" --hbase-create-table --hbase-table "WLS_LOG" --column-family "wls" --table "WLSLOG" --verbose
15/04/03 13:56:26 DEBUG sqoop.ConnFactory: Trying ManagerFactory: com.cloudera.sqoop.manager.DefaultManagerFactory
15/04/03 13:56:26 DEBUG manager.DefaultManagerFactory: Trying with scheme: jdbc:oracle:thin:@localhost:1521
15/04/03 13:56:26 DEBUG manager.OracleManager$ConnCache: Instantiated new connection cache.
15/04/03 13:56:26 INFO manager.SqlManager: Using default fetchSize of 1000
15/04/03 13:56:26 DEBUG sqoop.ConnFactory: Instantiated ConnManager org.apache.sqoop.manager.OracleManager@704f33
15/04/03 13:56:26 INFO tool.CodeGenTool: Beginning code generation
15/04/03 13:56:26 DEBUG manager.OracleManager: Using column names query: SELECT t.* FROM WLSLOG t WHERE 1=0
15/04/03 13:56:26 DEBUG manager.SqlManager: Execute getColumnInfoRawQuery : SELECT t.* FROM WLSLOG t WHERE 1=0
15/04/03 13:56:28 DEBUG manager.OracleManager: Creating a new connection for jdbc:oracle:thin:@localhost:1521:ORCL, using username: OE
15/04/03 13:56:28 DEBUG manager.OracleManager: No connection paramenters specified. Using regular API for making connection.
15/04/03 13:57:09 DEBUG manager.SqlManager: Using fetchSize for next query: 1000
15/04/03 13:57:09 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM WLSLOG t WHERE 1=0
15/04/03 13:57:22 DEBUG manager.OracleManager$ConnCache: Caching released connection for jdbc:oracle:thin:@localhost:1521:ORCL/OE
15/04/03 13:57:22 DEBUG orm.ClassWriter: selected columns:
15/04/03 13:57:22 DEBUG orm.ClassWriter: TIME_STAMP
15/04/03 13:57:22 DEBUG orm.ClassWriter: CATEGORY
15/04/03 13:57:22 DEBUG orm.ClassWriter: TYPE
15/04/03 13:57:22 DEBUG orm.ClassWriter: SERVERNAME
15/04/03 13:57:22 DEBUG orm.ClassWriter: CODE
15/04/03 13:57:22 DEBUG orm.ClassWriter: MSG
15/04/03 13:58:46 DEBUG db.DBConfiguration: Securing password into job credentials store
15/04/03 13:58:46 DEBUG manager.OracleManager$ConnCache: Got cached connection for jdbc:oracle:thin:@localhost:1521:ORCL/OE
15/04/03 13:58:46 DEBUG manager.OracleManager$ConnCache: Caching released connection for jdbc:oracle:thin:@localhost:1521:ORCL/OE
15/04/03 13:58:46 DEBUG mapreduce.DataDrivenImportJob: Using table class: WLSLOG
15/04/03 13:58:46 DEBUG mapreduce.DataDrivenImportJob: Using InputFormat: class com.cloudera.sqoop.mapreduce.db.OracleDataDrivenDBInputFormat
15/04/03 13:58:47 DEBUG manager.OracleManager$ConnCache: Got cached connection for jdbc:oracle:thin:@localhost:1521:ORCL/OE
15/04/03 13:59:07 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/java/packages/lib/i386:/lib:/usr/lib
15/04/03 13:59:07 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
15/04/03 13:59:07 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
15/04/03 13:59:07 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
15/04/03 13:59:07 INFO zookeeper.ZooKeeper: Client environment:os.arch=i386
15/04/03 13:59:07 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.39-400.247.1.el6uek.i686
15/04/03 13:59:07 INFO zookeeper.ZooKeeper: Client environment:user.name=root
15/04/03 13:59:07 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
15/04/03 13:59:07 INFO zookeeper.ZooKeeper: Client environment:user.dir=/sqoop
15/04/03 13:59:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x8fffea, quorum=localhost:2181, baseZNode=/hbase
15/04/03 13:59:09 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/10.0.2.15:2181. Will not attempt to authenticate using SASL (unknown error)
15/04/03 13:59:10 INFO zookeeper.ClientCnxn: Socket connection established to localhost/10.0.2.15:2181, initiating session
15/04/03 13:59:11 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/10.0.2.15:2181, sessionid = 0x14c806c5f420006, negotiated timeout = 40000
15/04/03 13:59:47 INFO Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
15/04/03 13:59:54 INFO zookeeper.RecoverableZooKeeper: Process identifier=catalogtracker-on-hconnection-0x8fffea connecting to ZooKeeper ensemble=localhost:2181
15/04/03 13:59:54 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0x8fffea, quorum=localhost:2181, baseZNode=/hbase
15/04/03 13:59:54 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/10.0.2.15:2181. Will not attempt to authenticate using SASL (unknown error)
15/04/03 13:59:54 INFO zookeeper.ClientCnxn: Socket connection established to localhost/10.0.2.15:2181, initiating session
15/04/03 13:59:55 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/10.0.2.15:2181, sessionid = 0x14c806c5f420007, negotiated timeout = 40000
15/04/03 14:00:07 INFO zookeeper.ZooKeeper: Session: 0x14c806c5f420007 closed
15/04/03 14:00:07 INFO mapreduce.HBaseImportJob: Creating missing HBase table WLS_LOG
15/04/03 14:00:07 INFO zookeeper.ClientCnxn: EventThread shut down
15/04/03 14:00:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=catalogtracker-on-hconnection-0x8fffea connecting to ZooKeeper ensemble=localhost:2181
15/04/03 14:00:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0x8fffea, quorum=localhost:2181, baseZNode=/hbase
15/04/03 14:00:14 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
15/04/03 14:00:15 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
15/04/03 14:00:15 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x14c806c5f420008, negotiated timeout = 40000
15/04/03 14:00:15 INFO zookeeper.ClientCnxn: EventThread shut down
15/04/03 14:00:15 INFO zookeeper.ZooKeeper: Session: 0x14c806c5f420008 closed
15/04/03 14:00:18 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
15/04/03 14:00:18 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
15/04/03 14:00:20 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
15/04/03 14:00:44 DEBUG db.DBConfiguration: Fetching password from job credentials store
15/04/03 14:00:49 INFO db.DBInputFormat: Using read commited transaction isolation
15/04/03 14:00:49 DEBUG db.DataDrivenDBInputFormat: Creating input split with lower bound '1=1' and upper bound '1=1'
15/04/03 14:02:39 INFO mapred.JobClient: Running job: job_local1040061811_0001
15/04/03 14:02:39 INFO mapred.LocalJobRunner: OutputCommitter set in config null
15/04/03 14:02:39 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.sqoop.mapreduce.NullOutputCommitter
15/04/03 14:02:40 INFO mapred.LocalJobRunner: Waiting for map tasks
15/04/03 14:02:40 INFO mapred.LocalJobRunner: Starting task: attempt_local1040061811_0001_m_000000_0
15/04/03 14:02:40 INFO mapred.JobClient: map 0% reduce 0%
15/04/03 14:02:46 DEBUG db.DBConfiguration: Fetching password from job credentials store
15/04/03 14:02:50 INFO db.DBInputFormat: Using read commited transaction isolation
15/04/03 14:02:50 INFO mapred.MapTask: Processing split: 1=1 AND 1=1
15/04/03 14:02:51 INFO db.OracleDBRecordReader: Time zone has been set to GMT
15/04/03 14:02:53 INFO db.DBRecordReader: Working on split: 1=1 AND 1=1
15/04/03 14:02:53 DEBUG db.DataDrivenDBRecordReader: Using query: SELECT TIME_STAMP, CATEGORY, TYPE, SERVERNAME, CODE, MSG FROM WLSLOG WHERE ( 1=1 ) AND ( 1=1 )
15/04/03 14:02:53 DEBUG db.DBRecordReader: Using fetchSize for next query: 1000
15/04/03 14:02:53 INFO db.DBRecordReader: Executing query: SELECT TIME_STAMP, CATEGORY, TYPE, SERVERNAME, CODE, MSG FROM WLSLOG WHERE ( 1=1 ) AND ( 1=1 )
15/04/03 14:03:01 DEBUG mapreduce.AutoProgressMapper: Instructing auto-progress thread to quit.
15/04/03 14:03:01 INFO mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false
15/04/03 14:03:01 DEBUG mapreduce.AutoProgressMapper: Waiting for progress thread shutdown...
15/04/03 14:03:01 DEBUG mapreduce.AutoProgressMapper: Progress thread shutdown detected.
15/04/03 14:03:01 INFO mapred.LocalJobRunner:
15/04/03 14:03:06 INFO mapred.LocalJobRunner:
15/04/03 14:03:07 INFO mapred.Task: Task:attempt_local1040061811_0001_m_000000_0 is done. And is in the process of commiting
15/04/03 14:03:07 INFO mapred.JobClient: map 100% reduce 0%
15/04/03 14:03:07 INFO mapred.LocalJobRunner:
15/04/03 14:03:07 INFO mapred.Task: Task 'attempt_local1040061811_0001_m_000000_0' done.
15/04/03 14:03:07 INFO mapred.LocalJobRunner: Finishing task: attempt_local1040061811_0001_m_000000_0
15/04/03 14:03:07 INFO mapred.LocalJobRunner: Map task executor complete.
15/04/03 14:03:08 INFO mapred.JobClient: Job complete: job_local1040061811_0001
15/04/03 14:03:08 INFO mapred.JobClient: Counters: 18
15/04/03 14:03:08 INFO mapred.JobClient: File System Counters
15/04/03 14:03:08 INFO mapred.JobClient: FILE: Number of bytes read=39829434
15/04/03 14:03:08 INFO mapred.JobClient: FILE: Number of bytes written=40338352
15/04/03 14:03:08 INFO mapred.JobClient: FILE: Number of read operations=0
15/04/03 14:03:08 INFO mapred.JobClient: FILE: Number of large read operations=0
15/04/03 14:03:08 INFO mapred.JobClient: FILE: Number of write operations=0
15/04/03 14:03:08 INFO mapred.JobClient: HDFS: Number of bytes read=0
15/04/03 14:03:08 INFO mapred.JobClient: HDFS: Number of bytes written=0
15/04/03 14:03:08 INFO mapred.JobClient: HDFS: Number of read operations=0
15/04/03 14:03:08 INFO mapred.JobClient: HDFS: Number of large read operations=0
15/04/03 14:03:08 INFO mapred.JobClient: HDFS: Number of write operations=0
15/04/03 14:03:08 INFO mapred.JobClient: Map-Reduce Framework
15/04/03 14:03:08 INFO mapred.JobClient: Map input records=7
15/04/03 14:03:08 INFO mapred.JobClient: Map output records=7
15/04/03 14:03:08 INFO mapred.JobClient: Input split bytes=87
15/04/03 14:03:08 INFO mapred.JobClient: Spilled Records=0
15/04/03 14:03:08 INFO mapred.JobClient: CPU time spent (ms)=0
15/04/03 14:03:08 INFO mapred.JobClient: Physical memory (bytes) snapshot=0
15/04/03 14:03:08 INFO mapred.JobClient: Virtual memory (bytes) snapshot=0
15/04/03 14:03:08 INFO mapred.JobClient: Total committed heap usage (bytes)=180756480
15/04/03 14:03:08 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 171.3972 seconds (0 bytes/sec)
15/04/03 14:03:09 INFO mapreduce.ImportJobBase: Retrieved 7 records.
15/04/03 14:03:09 DEBUG util.ClassLoaderStack: Restoring classloader: sun.misc.Launcher$AppClassLoader@3d4817
Start the HBase shell.
hbase shell
Run the scan command to list the data imported into the WLS_LOG table.
scan "WLS_LOG"
The scan command lists the HBase table data; the 7 rows imported into HBase are listed.
Importing into Hive
In this section Sqoop is used to import Oracle Database table data into Hive. Run the following sqoop import command to import into Hive.
sqoop import --connect "jdbc:oracle:thin:@localhost:1521:ORCL" --hadoop-home "/sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1" --password "OE" --username "OE" --hive-import --create-hive-table --hive-table "WLSLOG" --table "WLSLOG_COPY" --split-by "time_stamp" --verbose
The sqoop import command arguments are as follows.
Argument | Description | Value |
--connect | Sets the JDBC connection URL for Oracle Database | "jdbc:oracle:thin:@localhost:1521:ORCL" |
--username | Sets the username to connect to Oracle Database | "OE" |
--password | Sets the password for Oracle Database | "OE" |
--table | Sets the Oracle Database table name to import from | "WLSLOG_COPY" |
--hadoop-home | Sets the Hadoop home directory | "/sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1" |
--hive-import | Imports into Hive | |
--create-hive-table | Creates the Hive table | |
--hive-table | Sets the Hive table name | "WLSLOG" |
--split-by | Sets the column used to split the import | "time_stamp" |
--verbose | Enables verbose output | |
A MapReduce job runs to import Oracle Database table data into Hive.
A more detailed output from the sqoop import command is as follows.
[root@localhost sqoop]# sqoop import --connect "jdbc:oracle:thin:@localhost:1521:ORCL" --hadoop-home "/sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1" --password "OE" --username "OE" --hive-import --create-hive-table --hive-table "WLSLOG" --table "WLSLOG_COPY" --split-by "time_stamp" --verbose
15/04/03 13:20:42 DEBUG sqoop.ConnFactory: Trying ManagerFactory: com.cloudera.sqoop.manager.DefaultManagerFactory
15/04/03 13:20:42 DEBUG manager.DefaultManagerFactory: Trying with scheme: jdbc:oracle:thin:@localhost:1521
15/04/03 13:20:43 DEBUG manager.OracleManager$ConnCache: Instantiated new connection cache.
15/04/03 13:20:43 INFO manager.SqlManager: Using default fetchSize of 1000
15/04/03 13:20:43 DEBUG sqoop.ConnFactory: Instantiated ConnManager org.apache.sqoop.manager.OracleManager@9ed26e
15/04/03 13:20:44 INFO tool.CodeGenTool: Beginning code generation
15/04/03 13:20:44 DEBUG manager.OracleManager: Using column names query: SELECT t.* FROM WLSLOG_COPY t WHERE 1=0
15/04/03 13:20:44 DEBUG manager.SqlManager: Execute getColumnInfoRawQuery : SELECT t.* FROM WLSLOG_COPY t WHERE 1=0
15/04/03 13:20:51 DEBUG manager.OracleManager: Creating a new connection for jdbc:oracle:thin:@localhost:1521:ORCL, using username: OE
15/04/03 13:20:51 DEBUG manager.OracleManager: No connection paramenters specified. Using regular API for making connection.
15/04/03 13:21:18 DEBUG manager.SqlManager: Using fetchSize for next query: 1000
15/04/03 13:21:18 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM WLSLOG_COPY t WHERE 1=0
15/04/03 13:21:30 DEBUG manager.OracleManager$ConnCache: Caching released connection for jdbc:oracle:thin:@localhost:1521:ORCL/OE
15/04/03 13:21:30 DEBUG orm.ClassWriter: selected columns:
15/04/03 13:21:30 DEBUG orm.ClassWriter: TIME_STAMP
15/04/03 13:21:30 DEBUG orm.ClassWriter: CATEGORY
15/04/03 13:21:30 DEBUG orm.ClassWriter: TYPE
15/04/03 13:21:30 DEBUG orm.ClassWriter: SERVERNAME
15/04/03 13:21:30 DEBUG orm.ClassWriter: CODE
15/04/03 13:21:30 DEBUG orm.ClassWriter: MSG
15/04/03 13:21:31 DEBUG orm.ClassWriter: Writing source file: /tmp/sqoop-root/compile/6235c3beba4d629be2f91c2c832c8033/WLSLOG_COPY.java
15/04/03 13:21:31 DEBUG orm.ClassWriter: Table name: WLSLOG_COPY
15/04/03 13:21:31 DEBUG orm.ClassWriter: Columns: TIME_STAMP:12, CATEGORY:12, TYPE:12, SERVERNAME:12, CODE:12, MSG:12,
15/04/03 13:21:52 INFO mapreduce.ImportJobBase: Beginning import of WLSLOG_COPY
15/04/03 13:21:53 DEBUG util.ClassLoaderStack: Checking for existing class: WLSLOG_COPY
15/04/03 13:22:04 DEBUG db.DBConfiguration: Securing password into job credentials store
15/04/03 13:22:04 DEBUG manager.OracleManager$ConnCache: Got cached connection for jdbc:oracle:thin:@localhost:1521:ORCL/OE
15/04/03 13:22:04 INFO manager.OracleManager: Time zone has been set to GMT
15/04/03 13:22:04 DEBUG manager.OracleManager$ConnCache: Caching released connection for jdbc:oracle:thin:@localhost:1521:ORCL/OE
15/04/03 13:22:05 DEBUG mapreduce.DataDrivenImportJob: Using table class: WLSLOG_COPY
15/04/03 13:22:05 DEBUG mapreduce.DataDrivenImportJob: Using InputFormat: class com.cloudera.sqoop.mapreduce.db.OracleDataDrivenDBInputFormat
15/04/03 13:24:39 INFO mapred.LocalJobRunner: OutputCommitter set in config null
15/04/03 13:24:39 INFO mapred.JobClient: Running job: job_local846992281_0001
15/04/03 13:24:40 INFO mapred.JobClient: map 0% reduce 0%
15/04/03 13:24:40 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
15/04/03 13:24:42 INFO mapred.LocalJobRunner: Waiting for map tasks
15/04/03 13:24:42 INFO mapred.LocalJobRunner: Starting task: attempt_local846992281_0001_m_000000_0
15/04/03 13:24:43 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
15/04/03 13:24:45 INFO util.ProcessTree: setsid exited with exit code 0
15/04/03 13:24:46 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@1e1a108
15/04/03 13:24:46 DEBUG db.DBConfiguration: Fetching password from job credentials store
15/04/03 13:24:50 INFO db.DBInputFormat: Using read commited transaction isolation
15/04/03 13:24:50 INFO mapred.MapTask: Processing split: 1=1 AND 1=1
15/04/03 13:24:50 INFO db.OracleDBRecordReader: Time zone has been set to GMT
15/04/03 13:24:53 INFO db.DBRecordReader: Working on split: 1=1 AND 1=1
15/04/03 13:24:53 DEBUG db.DataDrivenDBRecordReader: Using query: SELECT TIME_STAMP, CATEGORY, TYPE, SERVERNAME, CODE, MSG FROM WLSLOG_COPY WHERE ( 1=1 ) AND ( 1=1 )
15/04/03 13:24:53 DEBUG db.DBRecordReader: Using fetchSize for next query: 1000
15/04/03 13:24:53 INFO db.DBRecordReader: Executing query: SELECT TIME_STAMP, CATEGORY, TYPE, SERVERNAME, CODE, MSG FROM WLSLOG_COPY WHERE ( 1=1 ) AND ( 1=1 )
15/04/03 13:25:01 INFO mapred.LocalJobRunner:
15/04/03 13:25:06 INFO mapred.LocalJobRunner:
15/04/03 13:25:07 INFO mapred.JobClient: map 100% reduce 0%
15/04/03 13:25:12 INFO mapred.Task: Task:attempt_local846992281_0001_m_000000_0 is done. And is in the process of commiting
15/04/03 13:25:12 INFO mapred.LocalJobRunner:
15/04/03 13:25:12 INFO mapred.Task: Task attempt_local846992281_0001_m_000000_0 is allowed to commit now
15/04/03 13:25:14 INFO output.FileOutputCommitter: Saved output of task 'attempt_local846992281_0001_m_000000_0' to WLSLOG_COPY
15/04/03 13:25:14 INFO mapred.LocalJobRunner:
15/04/03 13:25:14 INFO mapred.Task: Task 'attempt_local846992281_0001_m_000000_0' done.
15/04/03 13:25:14 INFO mapred.LocalJobRunner: Finishing task: attempt_local846992281_0001_m_000000_0
15/04/03 13:25:14 INFO mapred.LocalJobRunner: Map task executor complete.
15/04/03 13:25:15 INFO mapred.JobClient: Job complete: job_local846992281_0001
15/04/03 13:25:15 INFO mapred.JobClient: Counters: 18
15/04/03 13:25:15 INFO mapred.JobClient: File System Counters
15/04/03 13:25:15 INFO mapred.JobClient: FILE: Number of bytes read=21673967
15/04/03 13:25:15 INFO mapred.JobClient: FILE: Number of bytes written=21996158
15/04/03 13:25:15 INFO mapred.JobClient: FILE: Number of read operations=0
15/04/03 13:25:15 INFO mapred.JobClient: FILE: Number of large read operations=0
15/04/03 13:25:15 INFO mapred.JobClient: FILE: Number of write operations=0
15/04/03 13:25:15 INFO mapred.JobClient: HDFS: Number of bytes read=0
15/04/03 13:25:15 INFO mapred.JobClient: HDFS: Number of bytes written=717
15/04/03 13:25:15 INFO mapred.JobClient: HDFS: Number of read operations=1
15/04/03 13:25:15 INFO mapred.JobClient: HDFS: Number of large read operations=0
15/04/03 13:25:15 INFO mapred.JobClient: HDFS: Number of write operations=2
15/04/03 13:25:15 INFO mapred.JobClient: Map-Reduce Framework
15/04/03 13:25:15 INFO mapred.JobClient: Map input records=7
15/04/03 13:25:15 INFO mapred.JobClient: Map output records=7
15/04/03 13:25:16 INFO mapred.JobClient: Input split bytes=87
15/04/03 13:25:16 INFO mapred.JobClient: Spilled Records=0
15/04/03 13:25:16 INFO mapred.JobClient: CPU time spent (ms)=0
15/04/03 13:25:16 INFO mapred.JobClient: Physical memory (bytes) snapshot=0
15/04/03 13:25:16 INFO mapred.JobClient: Virtual memory (bytes) snapshot=0
15/04/03 13:25:16 INFO mapred.JobClient: Total committed heap usage (bytes)=180756480
15/04/03 13:25:16 INFO mapreduce.ImportJobBase: Transferred 717 bytes in 182.8413 seconds (3.9214 bytes/sec)
15/04/03 13:25:16 INFO mapreduce.ImportJobBase: Retrieved 7 records.
15/04/03 13:25:16 DEBUG util.ClassLoaderStack: Restoring classloader: sun.misc.Launcher$AppClassLoader@3d4817
15/04/03 13:25:16 DEBUG hive.HiveImport: Hive.inputTable: WLSLOG_COPY
15/04/03 13:25:16 DEBUG hive.HiveImport: Hive.outputTable: WLS_LOG
15/04/03 13:25:16 DEBUG manager.OracleManager: Using column names query: SELECT t.* FROM WLSLOG_COPY t WHERE 1=0
15/04/03 13:25:16 DEBUG manager.SqlManager: Execute getColumnInfoRawQuery : SELECT t.* FROM WLSLOG_COPY t WHERE 1=0
15/04/03 13:25:16 DEBUG manager.OracleManager$ConnCache: Got cached connection for jdbc:oracle:thin:@localhost:1521:ORCL/OE
15/04/03 13:25:18 INFO manager.OracleManager: Time zone has been set to GMT
15/04/03 13:25:18 DEBUG manager.SqlManager: Using fetchSize for next query: 1000
15/04/03 13:25:18 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM WLSLOG_COPY t WHERE 1=0
15/04/03 13:25:21 DEBUG manager.OracleManager$ConnCache: Caching released connection for jdbc:oracle:thin:@localhost:1521:ORCL/OE
15/04/03 13:25:21 DEBUG hive.TableDefWriter: Create statement: CREATE TABLE `WLS_LOG` ( `TIME_STAMP` STRING, `CATEGORY` STRING, `TYPE` STRING, `SERVERNAME` STRING, `CODE` STRING, `MSG` STRING) COMMENT 'Imported by sqoop on 2015/04/03 13:25:21' ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001' LINES TERMINATED BY '\012' STORED AS TEXTFILE
15/04/03 13:25:21 DEBUG hive.TableDefWriter: Load statement: LOAD DATA INPATH 'hdfs://10.0.2.15:8020/user/root/WLSLOG_COPY' INTO TABLE `WLS_LOG`
15/04/03 13:25:21 INFO hive.HiveImport: Loading uploaded data into Hive
15/04/03 13:25:23 DEBUG hive.HiveImport: Using in-process Hive instance.
15/04/03 13:25:23 DEBUG util.SubprocessSecurityManager: Installing subprocess security manager
Logging initialized using configuration in jar:file:/sqoop/hive-0.13.1-cdh5.2.0/lib/hive-common-0.13.1-cdh5.2.0.jar!/hive-log4j.properties
OK
Time taken: 75.724 seconds
Loading data to table default.wls_log
Table default.wls_log stats: [numFiles=1, numRows=0, totalSize=717, rawDataSize=0]
OK
Time taken: 36.523 seconds
Start the Hive Thrift Server.
hive --service hiveserver
Start the Hive shell.
>hive
Run the following SELECT statement in the Hive shell to list the imported data.
SELECT * FROM default.wls_log;
The 7 rows of data imported from Oracle Database are listed.
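As a quick sanity check, you can also count the rows non-interactively. This is a minimal sketch, assuming the hive CLI is on your PATH:
hive -e "SELECT COUNT(*) FROM default.wls_log;"
The count should come back as 7, matching the record count Sqoop reported at the end of the import.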
In this tutorial we used Sqoop 1.4.5 with Oracle Database 11g.
2015-10: October Issue
Top stories for October 2015
Wait Types
See Also: [[wiki:Main Page|Main_Page]] - [[wiki:Monitoring & Tuning|Monitoring & Tuning]] - [[wiki:Wait Events|Wait Events]]
What Are SQL Server Waits?
Instead of measuring activity of CPU, storage, or memory, why not ask what SQL Server has been waiting on when executing queries? Starting with SQL Server 2005, some of SQL Server's [[wiki:DMVs|Dynamic Management Views (DMVs)]] return wait data - measurements of what the database engine has been waiting on.
In general there are three categories of waits that could affect any given request:
- Resource waits are caused by a particular resource, perhaps a specific lock that is unavailable when the request is submitted. Resource waits are the ones you should focus on when troubleshooting the large majority of performance issues.
- External waits occur when a SQL Server worker thread is waiting on an external process, such as an extended stored procedure, to complete. An external wait does not necessarily mean that the connection is idle; rather, it may mean that SQL Server is executing external code which it cannot control.
- Queue waits occur when a worker thread is idle and waiting for work to be assigned to it. They normally apply to internal background tasks, such as ghost cleanup, which physically removes records that have been previously deleted. Normally you don't have to worry about any performance degradation due to queue waits.
You should expect some waits on a busy system. This is completely normal and doesn't necessarily translate into a performance issue. Wait events become a problem when they are consistently long over a significant period of time. For example, waits of a few milliseconds over a 2 hour monitoring window are not concerning, but waits totaling over 15 minutes over the same window should be investigated more closely.
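To see which waits dominate an instance, you can read the cumulative statistics directly from the DMV. Here is a minimal sketch run through sqlcmd, assuming a local default instance and Windows authentication; the same SELECT works in any query window:
sqlcmd -S localhost -E -Q "SELECT TOP 10 wait_type, waiting_tasks_count, wait_time_ms FROM sys.dm_os_wait_stats ORDER BY wait_time_ms DESC"
Keep in mind that sys.dm_os_wait_stats accumulates since the last service restart, so for a monitoring window take a snapshot at the start and the end and compare the deltas rather than reading the raw totals.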
Queries to Check SQL Server Waits
- [[wiki:Misc DMV queries|Current SQL Server Activity]] - a replacement for SP_Who2 that checks active queries, waits one second, then checks again. For all active queries, it shows their command and what wait type is holding them up.
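For an ad-hoc look at what active requests are waiting on right now, here is a minimal sketch of the same two-sample idea, again assuming sqlcmd, a local default instance, and Windows authentication (use a one-second pause appropriate to your shell):
sqlcmd -S localhost -E -Q "SELECT session_id, command, wait_type, wait_time FROM sys.dm_exec_requests WHERE session_id > 50"
sleep 1
sqlcmd -S localhost -E -Q "SELECT session_id, command, wait_type, wait_time FROM sys.dm_exec_requests WHERE session_id > 50"
Sessions that report the same wait_type in both samples with a growing wait_time are the ones holding up work.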
Want to add more queries here? Go to the [[wiki:Transact SQL Code Library|Transact SQL Code Library]], click Edit, and add a new link on that page to describe your query. Just copy/paste one of the other links and edit it. After you save the page, your newly created link will appear red. You can click on it to edit a new page. Then come back here and add a link to it.
Explanations of SQL Server Wait Types
Some of these waits occur for internal operations, and no tuning is necessary to avoid them - we identify those as well. Some of the following entries cover more than one wait type. If you're looking for QUERY_NOTIFICATION_SUBSCRIPTION_MUTEX, for example, click on the QUERY_NOTIFICATION_* group and each of the underlying waits will be listed there.
- [[wiki:ABR|ABR]] -
- [[wiki:ASSEMBLY LOAD|ASSEMBLY_LOAD]] -
- [[wiki:ASYNC DISKPOOL LOCK|ASYNC_DISKPOOL_LOCK]] - I/O
- [[wiki:ASYNC IO COMPLETION|ASYNC_IO_COMPLETION]] - I/O Used to indicate a worker is waiting for an asynchronous I/O operation, not associated with database pages, to complete
- [[wiki:ASYNC NETWORK IO|ASYNC_NETWORK_IO]] - Network
- [[wiki:AUDIT GROUPCACHE LOCK|AUDIT_GROUPCACHE_LOCK]] -
- [[wiki:AUDIT LOGINCACHE LOCK|AUDIT_LOGINCACHE_LOCK]] -
- [[wiki:AUDIT ON DEMAND TARGET LOCK|AUDIT_ON_DEMAND_TARGET_LOCK]] -
- [[wiki:AUDIT XE SESSION MGR|AUDIT_XE_SESSION_MGR]] -
- [[wiki:BACKUP|BACKUP]] - Backup
- [[wiki:BACKUP CLIENTLOCK|BACKUP_CLIENTLOCK]] - Backup
- [[wiki:BACKUP OPERATOR|BACKUP_OPERATOR]] - Backup
- [[wiki:BACKUPBUFFER|BACKUPBUFFER]] - Backup
- [[wiki:BACKUPIO|BACKUPIO]] - Backup
- [[wiki:BACKUPTHREAD|BACKUPTHREAD]] - Backup
- [[wiki:BAD PAGE PROCESS|BAD_PAGE_PROCESS]] - Memory
- [[wiki:BROKER *|BROKER_*]] - Service Broker
- [[wiki:BUILTIN HASHKEY MUTEX|BUILTIN_HASHKEY_MUTEX]] - Internal
- [[wiki:CHECK PRINT RECORD|CHECK_PRINT_RECORD]] -
- [[wiki:CHECKPOINT QUEUE|CHECKPOINT_QUEUE]] - Buffer Used by the background worker that waits on a queue of checkpoint requests to process. This is an "optional" wait type; see the Important Notes section in the blog
- [[wiki:CHKPT|CHKPT]] - Buffer Used to coordinate the checkpoint background worker thread with recovery of master, so checkpoint won't start accepting queue requests until the master database is online
- [[wiki:CLEAR DB|CLEAR_DB]] -
- [[wiki:CLR *|CLR_*]] - Common Language Runtime (CLR)
- [[wiki:CLRHOST STATE ACCESS|CLRHOST_STATE_ACCESS]] -
- [[wiki:CMEMTHREAD|CMEMTHREAD]] - Memory
- [[wiki:COMMIT TABLE|COMMIT_TABLE]] -
- [[wiki:CURSOR|CURSOR]] - Internal
- [[wiki:CURSOR ASYNC|CURSOR_ASYNC]] - Internal
- [[wiki:CXPACKET|CXPACKET]] - Query Used to synchronize threads involved in a parallel query. This wait type only means a parallel query is executing.
- [[wiki:CXROWSET SYNC|CXROWSET_SYNC]] -
- [[wiki:DAC INIT|DAC_INIT]] -
- [[wiki:DBMIRROR *|DBMIRROR_*]] - Database Mirroring
- [[wiki:DBMIRRORING CMD|DBMIRRORING_CMD]] - Database Mirroring
- [[wiki:DBTABLE|DBTABLE]] - Internal
- [[wiki:DEADLOCK ENUM MUTEX|DEADLOCK_ENUM_MUTEX]] - Lock
- [[wiki:DEADLOCK TASK SEARCH|DEADLOCK_TASK_SEARCH]] - Lock
- [[wiki:DEBUG|DEBUG]] - Internal
- [[wiki:DISABLE VERSIONING|DISABLE_VERSIONING]] - Row versioning
- [[wiki:DISKIO SUSPEND|DISKIO_SUSPEND]] - BACKUP Used to indicate a worker is waiting to process I/O for a database or log file associated with a SNAPSHOT BACKUP
- [[wiki:DISPATCHER QUEUE SEMAPHORE|DISPATCHER_QUEUE_SEMAPHORE]] -
- [[wiki:DLL LOADING MUTEX|DLL_LOADING_MUTEX]] - XML
- [[wiki:DROPTEMP|DROPTEMP]] - Temporary Objects
- [[wiki:DTC|DTC]] - Distributed Transaction Coordinator (DTC)
- [[wiki:DTC ABORT REQUEST|DTC_ABORT_REQUEST]] - DTC
- [[wiki:DTC RESOLVE|DTC_RESOLVE]] - DTC
- [[wiki:DTC STATE|DTC_STATE]] - DTC
- [[wiki:DTC TMDOWN REQUEST|DTC_TMDOWN_REQUEST]] - DTC
- [[wiki:DTC WAITFOR OUTCOME|DTC_WAITFOR_OUTCOME]] - DTC
- [[wiki:DUMP LOG *|DUMP_LOG_*]] -
- [[wiki:DUMPTRIGGER|DUMPTRIGGER]] -
- [[wiki:EC|EC]] -
- [[wiki:EE PMOLOCK|EE_PMOLOCK]] - Memory
- [[wiki:EE SPECPROC MAP INIT|EE_SPECPROC_MAP_INIT]] - Internal
- [[wiki:ENABLE VERSIONING|ENABLE_VERSIONING]] - Row versioning
- [[wiki:ERROR REPORTING MANAGER|ERROR_REPORTING_MANAGER]] - Internal
- [[wiki:EXCHANGE|EXCHANGE]] - Parallelism (processor)
- [[wiki:EXECSYNC|EXECSYNC]] - Parallelism (processor)
- [[wiki:EXECUTION PIPE EVENT INTERNAL|EXECUTION_PIPE_EVENT_INTERNAL]] -
- [[wiki:Failpoint|Failpoint]] -
- [[wiki:FCB REPLICA *|FCB_REPLICA_*]] - Database snapshot
- [[wiki:FS FC RWLOCK|FS_FC_RWLOCK]] -
- [[wiki:FS GARBAGE COLLECTOR SHUTDOWN|FS_GARBAGE_COLLECTOR_SHUTDOWN]] -
- [[wiki:FS HEADER RWLOCK|FS_HEADER_RWLOCK]] -
- [[wiki:FS LOGTRUNC RWLOCK|FS_LOGTRUNC_RWLOCK]] -
- [[wiki:FSA FORCE OWN XACT|FSA_FORCE_OWN_XACT]] -
- [[wiki:FSAGENT|FSAGENT]] -
- [[wiki:FSTR CONFIG *|FSTR_CONFIG_*]] -
- [[wiki:FT *|FT_*]] - Full Text Search
- [[wiki:GUARDIAN|GUARDIAN]] -
- [[wiki:HTTP ENDPOINT COLLCREATE|HTTP_ENDPOINT_COLLCREATE]] -
- [[wiki:HTTP ENUMERATION|HTTP_ENUMERATION]] - Service Broker
- [[wiki:HTTP START|HTTP_START]] - Service Broker
- [[wiki:IMP IMPORT MUTEX|IMP_IMPORT_MUTEX]] -
- [[wiki:IMPPROV IOWAIT|IMPPROV_IOWAIT]] - I/O
- [[wiki:INDEX USAGE STATS MUTEX|INDEX_USAGE_STATS_MUTEX]] -
- [[wiki:INTERNAL TESTING|INTERNAL_TESTING]] -
- [[wiki:IO AUDIT MUTEX|IO_AUDIT_MUTEX]] - Profiler Trace
- [[wiki:IO COMPLETION|IO_COMPLETION]] - I/O Used to indicate a wait for a (typically synchronous) I/O operation, such as sorts and various other situations where the engine needs to perform synchronous I/O
- [[wiki:IO RETRY|IO_RETRY]] -
- [[wiki:IOAFF RANGE QUEUE|IOAFF_RANGE_QUEUE]] -
- [[wiki:KSOURCE WAKEUP|KSOURCE_WAKEUP]] - Shutdown Used by the background worker "signal handler", which waits for a signal to shut down SQL Server
- [[wiki:KTM *|KTM_*]] -
- [[wiki:LATCH *|LATCH_*]] - Latch
- [[wiki:LAZYWRITER SLEEP|LAZYWRITER_SLEEP]] - Buffer Used by the Lazywriter background worker to indicate it is sleeping, waiting to wake up and check for work to do
- [[wiki:LCK M *|LCK_M_*]] - Lock
- [[wiki:LOGBUFFER|LOGBUFFER]] - Transaction Log Used to indicate a worker thread is waiting for a log buffer to write log blocks for a transaction
- [[wiki:LOGGENERATION|LOGGENERATION]] -
- [[wiki:LOGMGR *|LOGMGR_*]] - Internal
- [[wiki:LOWFAIL MEMMGR QUEUE|LOWFAIL_MEMMGR_QUEUE]] - Memory
- [[wiki:METADATA LAZYCACHE RWLOCK|METADATA_LAZYCACHE_RWLOCK]] -
- [[wiki:MIRROR SEND MESSAGE|MIRROR_SEND_MESSAGE]] -
- [[wiki:MISCELLANEOUS|MISCELLANEOUS]] - Ignore This really should be called "Not Waiting".
- [[wiki:MSQL DQ|MSQL_DQ]] - Distributed Query
- [[wiki:MSQL SYNC PIPE|MSQL_SYNC_PIPE]] -
- [[wiki:MSQL XACT MGR MUTEX|MSQL_XACT_MGR_MUTEX]] - Transaction
- [[wiki:MSQL XACT MUTEX|MSQL_XACT_MUTEX]] - Transaction
- [[wiki:MSQL XP|MSQL_XP]] - Extended Procedure
- [[wiki:MSSEARCH|MSSEARCH]] - Full-Text Search
- [[wiki:NET WAITFOR PACKET|NET_WAITFOR_PACKET]] - Network
- [[wiki:NODE CACHE MUTEX|NODE_CACHE_MUTEX]] -
- [[wiki:OLEDB|OLEDB]] - OLEDB
- [[wiki:ONDEMAND TASK QUEUE|ONDEMAND_TASK_QUEUE]] - Internal
- [[wiki:PAGEIOLATCH *|PAGEIOLATCH_*]] - Latch
- [[wiki:PAGELATCH *|PAGELATCH_*]] - Latch
- [[wiki:PARALLEL BACKUP QUEUE|PARALLEL_BACKUP_QUEUE]] - Backup or Restore
- [[wiki:PERFORMANCE COUNTERS RWLOCK|PERFORMANCE_COUNTERS_RWLOCK]] -
- [[wiki:PREEMPTIVE ABR|PREEMPTIVE_ABR]] -
- [[wiki:PREEMPTIVE AUDIT *|PREEMPTIVE_AUDIT_*]] -
- [[wiki:PREEMPTIVE CLOSEBACKUPMEDIA|PREEMPTIVE_CLOSEBACKUPMEDIA]] -
- [[wiki:PREEMPTIVE CLOSEBACKUPTAPE|PREEMPTIVE_CLOSEBACKUPTAPE]] -
- [[wiki:PREEMPTIVE CLOSEBACKUPVDIDEVICE|PREEMPTIVE_CLOSEBACKUPVDIDEVICE]] -
- [[wiki:PREEMPTIVE CLUSAPI CLUSTERRESOURCECONTROL|PREEMPTIVE_CLUSAPI_CLUSTERRESOURCECONTROL]] -
- [[wiki:PREEMPTIVE COM *|PREEMPTIVE_COM_*]] -
- [[wiki:PREEMPTIVE CONSOLEWRITE|PREEMPTIVE_CONSOLEWRITE]] -
- [[wiki:PREEMPTIVE CREATEPARAM|PREEMPTIVE_CREATEPARAM]] -
- [[wiki:PREEMPTIVE DEBUG|PREEMPTIVE_DEBUG]] -
- [[wiki:PREEMPTIVE DFSADDLINK|PREEMPTIVE_DFSADDLINK]] -
- [[wiki:PREEMPTIVE DFS*|PREEMPTIVE_DFS*]] -
- [[wiki:PREEMPTIVE DTC *|PREEMPTIVE_DTC_*]] -
- [[wiki:PREEMPTIVE FILESIZEGET|PREEMPTIVE_FILESIZEGET]] -
- [[wiki:PREEMPTIVE FSAOLEDB *|PREEMPTIVE_FSAOLEDB_*]] -
- [[wiki:PREEMPTIVE FSRECOVER UNCONDITIONALUNDO|PREEMPTIVE_FSRECOVER_UNCONDITIONALUNDO]] -
- [[wiki:PREEMPTIVE GETRMINFO|PREEMPTIVE_GETRMINFO]] -
- [[wiki:PREEMPTIVE LOCKMONITOR|PREEMPTIVE_LOCKMONITOR]] -
- [[wiki:PREEMPTIVE MSS RELEASE|PREEMPTIVE_MSS_RELEASE]] -
- [[wiki:PREEMPTIVE ODBCOPS|PREEMPTIVE_ODBCOPS]] -
- [[wiki:PREEMPTIVE OLE UNINIT|PREEMPTIVE_OLE_UNINIT]] -
- [[wiki:PREEMPTIVE OLEDB *|PREEMPTIVE_OLEDB_*]] -
- [[wiki:PREEMPTIVE OLEDBOPS|PREEMPTIVE_OLEDBOPS]] -
- [[wiki:PREEMPTIVE OS *|PREEMPTIVE_OS_*]] -
- [[wiki:PREEMPTIVE REENLIST|PREEMPTIVE_REENLIST]] -
- [[wiki:PREEMPTIVE RESIZELOG|PREEMPTIVE_RESIZELOG]] -
- [[wiki:PREEMPTIVE ROLLFORWARDREDO|PREEMPTIVE_ROLLFORWARDREDO]] -
- PREEMPTIVE_ROLLFORWARDUNDO -
- PREEMPTIVE_SB_STOPENDPOINT -
- PREEMPTIVE_SERVER_STARTUP -
- PREEMPTIVE_SETRMINFO -
- PREEMPTIVE_SHAREDMEM_GETDATA -
- PREEMPTIVE_SNIOPEN -
- PREEMPTIVE_SOSHOST -
- PREEMPTIVE_SOSTESTING -
- PREEMPTIVE_STARTRM -
- PREEMPTIVE_STREAMFCB_CHECKPOINT -
- PREEMPTIVE_STREAMFCB_RECOVER -
- PREEMPTIVE_STRESSDRIVER -
- PREEMPTIVE_TESTING -
- PREEMPTIVE_TRANSIMPORT -
- PREEMPTIVE_UNMARSHALPROPAGATIONTOKEN -
- PREEMPTIVE_VSS_CREATESNAPSHOT -
- PREEMPTIVE_VSS_CREATEVOLUMESNAPSHOT -
- [[wiki:PREEMPTIVE XE *|PREEMPTIVE_XE_*]] -
- PREEMPTIVE_XETESTING -
- PREEMPTIVE_XXX - Varies Used to indicate a worker is running code that is not under the SQLOS scheduling system
- PRINT_ROLLBACK_PROGRESS - Alter Database state
- QNMANAGER_ACQUIRE -
- QPJOB_KILL - Update of statistics
- QPJOB_WAITFOR_ABORT - Update of statistics
- QRY_MEM_GRANT_INFO_MUTEX -
- QUERY_ERRHDL_SERVICE_DONE -
- QUERY_EXECUTION_INDEX_SORT_EVENT_OPEN - Building indexes
- [[wiki:QUERY NOTIFICATION *|QUERY_NOTIFICATION_*]] - Query Notification Manager
- QUERY_OPTIMIZER_PRINT_MUTEX - Query Notification Manager
- QUERY_TRACEOUT - Query Notification Manager
- QUERY_WAIT_ERRHDL_SERVICE -
- RECOVER_CHANGEDB - Internal
- REPL_CACHE_ACCESS - Replication
- REPL_HISTORYCACHE_ACCESS -
- REPL_SCHEMA_ACCESS - Replication
- REPL_TRANHASHTABLE_ACCESS -
- REPLICA_WRITES - Database Snapshots
- REQUEST_DISPENSER_PAUSE - Backup or Restore
- REQUEST_FOR_DEADLOCK_SEARCH - Lock Used by background worker "Lock Monitor" to search for deadlocks. This is an "optional" wait type see Important Notes section in blog
- RESMGR_THROTTLED -
- RESOURCE_QUERY_SEMAPHORE_COMPILE - Query Used to indicate a worker is waiting to compile a query due to too many other concurrent query compilations that require "not small" amounts of memory.
- RESOURCE_QUEUE - Internal
- [[wiki:RESOURCE SEMAPHORE *|RESOURCE_SEMAPHORE_*]] - Query Used to indicate a worker is waiting to be allowed to perform an operation requiring "query memory" such as hashes and sorts
- RG_RECONFIG -
- SEC_DROP_TEMP_KEY - Security
- SECURITY_MUTEX -
- SEQUENTIAL_GUID -
- SERVER_IDLE_CHECK - Internal
- SHUTDOWN - Internal
- [[wiki:SLEEP *|SLEEP_*]] - Internal
- [[wiki:SNI *|SNI_*]] - Internal
- [[wiki:SOAP *|SOAP_*]] - SOAP
- [[wiki:SOS *|SOS_*]] - Internal
- [[wiki:SOSHOST *|SOSHOST_*]] - CLR
- [[wiki:SQLCLR *|SQLCLR_*]] - CLR
- SQLSORT_NORMMUTEX -
- SQLSORT_SORTMUTEX -
- [[wiki:SQLTRACE *|SQLTRACE_*]] - Trace
- SRVPROC_SHUTDOWN -
- TEMPOBJ -
- THREADPOOL - SQLOS Indicates a wait for a task to be assigned to a worker thread
- TIMEPRIV_TIMEPERIOD -
- TRACE_EVTNOTIF -
- [[wiki:TRACEWRITE|TRACEWRITE]] -
- [[wiki:TRAN *|TRAN_*]] - TRAN_MARKLATCH
- TRANSACTION_MUTEX -
- UTIL_PAGE_ALLOC -
- VIA_ACCEPT -
- VIEW_DEFINITION_MUTEX -
- WAIT_FOR_RESULTS -
- WAITFOR - Background
- WAITFOR_TASKSHUTDOWN -
- WAITSTAT_MUTEX -
- WCC -
- WORKTBL_DROP -
- WRITE_COMPLETION -
- WRITELOG - I/O Indicates a worker thread is waiting for LogWriter to flush log blocks.
- XACT_OWN_TRANSACTION -
- XACT_RECLAIM_SESSION -
- XACTLOCKINFO -
- XACTWORKSPACE_MUTEX -
- [[wiki:XE *|XE_*]] - XEvent
Related Reading
- Microsoft CSS SQL Server Engineers Wait Type List - the blog post that started it all. The Microsoft team wanted to build a list of wait types.
- Jason Strate's Wait Stat Categories - scripts to create a set of tables with wait types.
- The Ozar Family Tradition of Performance Monitoring - Brent Ozar talks about why you should use waits for tuning.
Toad Editions and Features Matrix
This matrix lists Toad for IBM DB2's features and the editions in which they are available.
DOWNLOAD: Toad for IBM DB2 6.1 Functional Matrix.pdf
(Sample image: Click the download link above for the full document)
Toad for Oracle Freeware v12.8 (32-bit)
This is the FREEWARE edition of Toad™ for Oracle. The Freeware edition has certain limitations, and is not intended to be used as a TRIAL for the Commercial edition of Toad for Oracle.
Notes:
- The Toad for Oracle Freeware version may be used for a maximum of five (5) Seats within Customer's organization and expires each year after the date of its initial download ("Freeware Term"). Upon expiration of the Freeware Term, the same 5 Seats may be downloaded again by the same users for the Freeware Term. For more than five (5) users within an organization, Customer will need to purchase licenses of Commercial Toad for Oracle. This license does not entitle Customer to receive hard-copy documentation, technical support, telephone assistance, or enhancements or updates to the Freeware from Dell Software. The terms "Seat" and "Freeware" shall have the same meaning as those set forth in the Product Guide.
- It is recommended that your client version be of the same release (or higher) as your database server. In addition, to take advantage of Toad's new Unicode support, you must be working with Oracle client/server 9i or above.
- Not all versions of the Oracle client are compatible with all versions of the Oracle Server, which may cause errors within Toad. See Oracle’s Metalink article 207303.1 "Client / Server / Interoperability Support Between Different Oracle Versions" for more information about possible compatibility issues.
Resources
POST QUESTION / COMMENT
Do you have a question or comment about this freeware? Post it to the product forum:
Toad for Oracle Freeware v12.8 (64-bit)
This is the FREEWARE edition of Toad™ for Oracle. The Freeware edition has certain limitations, and is not intended to be used as a TRIAL for the Commercial edition of Toad for Oracle.
Notes:
- The Toad for Oracle Freeware version may be used for a maximum of five (5) Seats within Customer's organization and expires each year after the date of its initial download ("Freeware Term"). Upon expiration of the Freeware Term, the same 5 Seats may be downloaded again by the same users for the Freeware Term. For more than five (5) users within an organization, Customer will need to purchase licenses of Commercial Toad for Oracle. This license does not entitle Customer to receive hard-copy documentation, technical support, telephone assistance, or enhancements or updates to the Freeware from Dell Software. The terms "Seat" and "Freeware" shall have the same meaning as those set forth in the Product Guide.
- It is recommended that your client version be of the same release (or higher) as your database server. In addition, to take advantage of Toad's new Unicode support, you must be working with Oracle client/server 9i or above.
- Not all versions of the Oracle client are compatible with all versions of the Oracle Server, which may cause errors within Toad. See Oracle’s Metalink article 207303.1 "Client / Server / Interoperability Support Between Different Oracle Versions" for more information about possible compatibility issues.
Resources
POST QUESTION / COMMENT
Do you have a question or comment about this freeware? Post it to the product forum:
How can I connect to MariaDB?
I tried creating a user and granting privileges in the database, but I still can't connect. Please help!
Exadata Storage Cell Reports - RS-7445 [Serv MS Leaking Memory] Error
Introduction
Recently we patched our Exadata Database Machines to Exadata Storage Software version 12.1.1.1.2. Everything went fine and the patching was successful. After running the new version for some time, we started seeing lots of alerts coming from the storage cells stating "RS-7445 [Serv MS leaking memory] [It will be restarted] [] [] [] [] [] [] [] [] [] []". Upon investigation we found that the MS process was restarting because of a memory leak.
After we opened an SR with Oracle Support, they confirmed that it is a bug caused by the JDK version on the Exadata storage cells. Ideally the MS process uses around 1 GB of physical memory, but due to this bug its memory allocation can grow up to 2 GB.
Checking the processes with large memory
# ps -feal|sort -n -k 10,10
0 S root 4741 11529 0 80 0 - 265986 futex_ Oct08 ? 00:13:31 /usr/java/default/bin/java -Xms256m -Xmx512m -XX:-UseLargePages -Djava.library.path=/opt/oracle/cell/cellsrv/lib -Ddisable.checkForUpdate=true -jar /opt/oracle/cell/oc4j/ms/j2ee/home/oc4j.jar -out /opt/oracle/cell/cellsrv/deploy/log/ms.lst -err /opt/oracle/cell/cellsrv/deploy/log/ms.err
The SZ field (the 10th column, which the sort key above targets) is reported in 4 KB pages: 265986 pages * 4096 bytes ≈ 1 GB.
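If you want the conversion done for you, a quick sketch using GNU ps and awk (sorting by the same SZ column) is:
# ps -eo sz,pid,comm --sort=-sz | awk 'NR>1 {printf "%7.2f GB pid=%-6s %s\n", $1*4096/1073741824, $2, $3}' | head -5
This prints the five largest processes with SZ converted from 4 KB pages to gigabytes.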
Initially this bug affected only ESS versions 11.2.3.3.1 and 12.1.1.1.1, but it was later also seen in versions 12.1.2.1.1, 12.1.2.1.0 and 12.1.1.1.2. The patch developed for the bug is 20328167, and it can be applied online without an outage.
The [Serv MS leaking memory] bug affects the following Exadata Storage Software versions:
11.2.3.3.1
12.1.1.1.1
12.1.1.1.2
12.1.2.1.1
12.1.2.1.0
Make sure that you download the patch that matches the ESS version you are currently running. In ESS versions 11.2.3.3.1, 12.1.1.1.1 and 12.1.1.1.2, the patch is applied only to the storage cells, whereas in ESS versions 12.1.2.1.1 and 12.1.2.1.0 it is applied to both the storage cells and the compute nodes. Starting with ESS version 12.1.2.1.0, the MS process also runs on the compute nodes to execute DBMCLI commands.
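For example, on a compute node running ESS 12.1.2.1.0 or later, a quick sketch to confirm that the MS-backed DBMCLI interface responds is:
# dbmcli -e list dbserver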
You can choose to ignore the errors, since the MS process is restarted automatically, which resets the process and its memory without any impact. However, ignoring the errors creates a lot of noise, as the alerts keep coming from the storage cells.
This bug is fixed in ESS version 12.1.2.1.2.
Let's briefly look at what the MS process is.
Management Server (MS) process
- It executes CellCLI commands.
- It provides a Java interface to the Enterprise Manager 12c plug-ins.
- You can't execute CellCLI commands if the MS process is down.
Managing MS process:
To stop MS process
# cellcli -e alter cell shutdown services ms
To start MS process
# cellcli -e alter cell startup services ms
To restart MS process
# cellcli -e alter cell restart services ms
In this article I am going to show you how to apply the Exadata Storage one-off patch (20328167) on a live Exadata X4-2 running ESS version 12.1.1.1.2.
Assumptions
- You have the root user password for the compute nodes and storage cells.
- Root user equivalence is set up between the compute nodes and storage cells (a quick check is shown after this list).
- No outage is required for database or storage cell services.
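To confirm the root user equivalence, run a trivial command across the cell group; if any cell prompts for a password, the keys can first be pushed with dcli's -k option. A minimal sketch, assuming the same ~/cell_group file used in the steps below:
[root@oraclouddbadm01 ~]# dcli -l root -g ~/cell_group hostname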
Environment
Exadata Model | X4-2 Half Rack HC 4TB |
Exadata Components | Storage Cell (7), Compute node (4) & Infiniband Switch (2) |
Exadata Storage cells | oracloudceladm01 – oracloudceladm07 |
Exadata Compute nodes | oraclouddbadm01 – oraclouddbadm04 |
Exadata Software Version | 12.1.1.1.2 |
Exadata DB Version | 11.2.0.4 BP16 |
Steps
Unless otherwise stated, all steps are executed as the 'root' user.
- Identify the Exadata Storage software version.
From one of the compute nodes or storage cells, execute the following command as the 'root' user.
[root@oraclouddbadm01 ~]# ssh oracloudceladm01 imageinfo
Kernel version: 2.6.39-400.128.21.el5uek #1 SMP Thu Apr 2 15:13:06 PDT 2015 x86_64
Cell version: OSS_12.1.1.1.2_LINUX.X64_150411
Cell rpm version: cell-12.1.1.1.2_LINUX.X64_150411-1
Active image version: 12.1.1.1.2.150411
Active image activated: 2015-05-28 21:40:16 -0500
Active image status: success
Active system partition on device: /dev/md6
Active software partition on device: /dev/md8
In partition rollback: Impossible
Cell boot usb partition: /dev/sdm1
Cell boot usb version: 12.1.1.1.2.150411
Inactive image version: 12.1.1.1.1.140712
Inactive image activated: 2014-11-23 00:34:06 -0800
Inactive image status: success
Inactive system partition on device: /dev/md5
Inactive software partition on device: /dev/md7
Boot area has rollback archive for the version: 12.1.1.1.1.140712
Rollback to the inactive partitions: Possible
- Download the patch (20328167) from https://support.oracle.com
Here is the direct link to the patch. Choose the patch based on your Exadata Storage software version.
- Copy the patch to compute node 1 under the staging directory.
Use WinSCP to copy the patch from your desktop/laptop to the database server.
Example: oraclouddbadm01:/u01/app/oracle/software
[root@oraclouddbadm01 ~]# cd /u01/app/oracle/software/
[root@oraclouddbadm01 software]# ls -ltr p20328167_121112_Linux-x86-64.zip
-rw-r--r-- 1 root root 248037734 Oct 30 05:25 p20328167_121112_Linux-x86-64.zip
- Unzip the patch p20328167_121112_Linux-x86-64.zip
[root@oraclouddbadm01 software]# unzip p20328167_121112_Linux-x86-64.zip
Archive: p20328167_121112_Linux-x86-64.zip
inflating: jdk-1.7.0_55-fcs.x86_64.rpm
inflating: jdk-7u72-linux-x64.rpm
inflating: README.txt
- Copy the package jdk-7u72-linux-x64.rpm to all the Storage cells under /tmp
[root@oraclouddbadm01 software]# cd /u01/app/oracle/software
[root@oraclouddbadm01 software]# dcli -l root -g ~/cell_group -f /u01/app/oracle/software/jdk-7u72-linux-x64.rpm -d /tmp
[root@oraclouddbadm01 software]# dcli -l root -g ~/cell_group ls -l /tmp/jdk*
oracloudceladm01: -rwxr-xr-x 1 root root 126702776 Oct 30 05:55 /tmp/jdk-7u72-linux-x64.rpm
oracloudceladm02: -rwxr-xr-x 1 root root 126702776 Oct 30 05:55 /tmp/jdk-7u72-linux-x64.rpm
oracloudceladm03: -rwxr-xr-x 1 root root 126702776 Oct 30 05:55 /tmp/jdk-7u72-linux-x64.rpm
oracloudceladm04: -rwxr-xr-x 1 root root 126702776 Oct 30 05:55 /tmp/jdk-7u72-linux-x64.rpm
oracloudceladm05: -rwxr-xr-x 1 root root 126702776 Oct 30 05:55 /tmp/jdk-7u72-linux-x64.rpm
oracloudceladm06: -rwxr-xr-x 1 root root 126702776 Oct 30 05:55 /tmp/jdk-7u72-linux-x64.rpm
oracloudceladm07: -rwxr-xr-x 1 root root 126702776 Oct 30 05:55 /tmp/jdk-7u72-linux-x64.rpm
- Get the current Java Version
[root@oraclouddbadm01 software]# dcli -l root -g ~/cell_group java -version | grep "java version"
oracloudceladm01: java version "1.7.0_55"
oracloudceladm02: java version "1.7.0_55"
oracloudceladm03: java version "1.7.0_55"
oracloudceladm04: java version "1.7.0_55"
oracloudceladm05: java version "1.7.0_55"
oracloudceladm06: java version "1.7.0_55"
oracloudceladm07: java version "1.7.0_55"
- Shut down the MS process on all the Storage Cells.
Note: Shutting down the MS process will NOT cause any outage.
[root@oraclouddbadm01 software]# dcli -l root -g ~/cell_group cellcli -e alter cell shutdown services ms
oracloudceladm01:
oracloudceladm01: Stopping MS services...
oracloudceladm01: The SHUTDOWN of MS services was successful.
oracloudceladm02:
oracloudceladm02: Stopping MS services...
oracloudceladm02: The SHUTDOWN of MS services was successful.
oracloudceladm03:
oracloudceladm03: Stopping MS services...
oracloudceladm03: The SHUTDOWN of MS services was successful.
oracloudceladm04:
oracloudceladm04: Stopping MS services...
oracloudceladm04: The SHUTDOWN of MS services was successful.
oracloudceladm05:
oracloudceladm05: Stopping MS services...
oracloudceladm05: The SHUTDOWN of MS services was successful.
oracloudceladm06:
oracloudceladm06: Stopping MS services...
oracloudceladm06: The SHUTDOWN of MS services was successful.
oracloudceladm07:
oracloudceladm07: Stopping MS services...
oracloudceladm07: The SHUTDOWN of MS services was successful.
- Remove the current jdk package.
[root@oraclouddbadm01 software]# dcli -l root -g ~/cell_group rpm -e --nodeps jdk-1.7.0_55-fcs.x86_64
- Apply the jdk patch to all the Storage Cells.
Note: After the install, the Java version should become "1.7.0_72".
[root@oraclouddbadm01 software]# dcli -l root -g ~/cell_group rpm -i /tmp/jdk-7u72-linux-x64.rpm
oracloudceladm01: Unpacking JAR files...
oracloudceladm01: rt.jar...
oracloudceladm01: jsse.jar...
oracloudceladm01: charsets.jar...
oracloudceladm01: tools.jar...
oracloudceladm01: localedata.jar...
oracloudceladm01: jfxrt.jar...
oracloudceladm02: Unpacking JAR files...
oracloudceladm02: rt.jar...
oracloudceladm02: jsse.jar...
oracloudceladm02: charsets.jar...
oracloudceladm02: tools.jar...
oracloudceladm02: localedata.jar...
oracloudceladm02: jfxrt.jar...
oracloudceladm03: Unpacking JAR files...
oracloudceladm03: rt.jar...
oracloudceladm03: jsse.jar...
oracloudceladm03: charsets.jar...
oracloudceladm03: tools.jar...
oracloudceladm03: localedata.jar...
oracloudceladm03: jfxrt.jar...
oracloudceladm04: Unpacking JAR files...
oracloudceladm04: rt.jar...
oracloudceladm04: jsse.jar...
oracloudceladm04: charsets.jar...
oracloudceladm04: tools.jar...
oracloudceladm04: localedata.jar...
oracloudceladm04: jfxrt.jar...
oracloudceladm05: Unpacking JAR files...
oracloudceladm05: rt.jar...
oracloudceladm05: jsse.jar...
oracloudceladm05: charsets.jar...
oracloudceladm05: tools.jar...
oracloudceladm05: localedata.jar...
oracloudceladm05: jfxrt.jar...
oracloudceladm06: Unpacking JAR files...
oracloudceladm06: rt.jar...
oracloudceladm06: jsse.jar...
oracloudceladm06: charsets.jar...
oracloudceladm06: tools.jar...
oracloudceladm06: localedata.jar...
oracloudceladm06: jfxrt.jar...
oracloudceladm07: Unpacking JAR files...
oracloudceladm07: rt.jar...
oracloudceladm07: jsse.jar...
oracloudceladm07: charsets.jar...
oracloudceladm07: tools.jar...
oracloudceladm07: localedata.jar...
oracloudceladm07: jfxrt.jar...
- Verify the updated Java Version
[root@oraclouddbadm01 software]# dcli -l root -g ~/cell_group java -version | grep "java version"
oracloudceladm01: java version "1.7.0_72"
oracloudceladm02: java version "1.7.0_72"
oracloudceladm03: java version "1.7.0_72"
oracloudceladm04: java version "1.7.0_72"
oracloudceladm05: java version "1.7.0_72"
oracloudceladm06: java version "1.7.0_72"
oracloudceladm07: java version "1.7.0_72"
- Start the MS process on all the Storage Cells.
[root@oraclouddbadm01 software]# dcli -l root -g ~/cell_group cellcli -e alter cell startup services ms
oracloudceladm01:
oracloudceladm01: Starting MS services...
oracloudceladm01: The STARTUP of MS services was successful.
oracloudceladm02:
oracloudceladm02: Starting MS services...
oracloudceladm02: The STARTUP of MS services was successful.
oracloudceladm03:
oracloudceladm03: Starting MS services...
oracloudceladm03: The STARTUP of MS services was successful.
oracloudceladm04:
oracloudceladm04: Starting MS services...
oracloudceladm04: The STARTUP of MS services was successful.
oracloudceladm05:
oracloudceladm05: Starting MS services...
oracloudceladm05: The STARTUP of MS services was successful.
oracloudceladm06:
oracloudceladm06: Starting MS services...
oracloudceladm06: The STARTUP of MS services was successful.
oracloudceladm07:
oracloudceladm07: Starting MS services...
oracloudceladm07: The STARTUP of MS services was successful.
- Clean up the patch file.
[root@oraclouddbadm01 software]# dcli -l root -g ~/cell_group rm /tmp/jdk-7u72-linux-x64.rpm
[root@oraclouddbadm01 software]# dcli -l root -g ~/cell_group ls -l /tmp/jdk-7u72-linux-x64.rpm
oracloudceladm01: ls: /tmp/jdk-7u72-linux-x64.rpm: No such file or directory
oracloudceladm02: ls: /tmp/jdk-7u72-linux-x64.rpm: No such file or directory
oracloudceladm03: ls: /tmp/jdk-7u72-linux-x64.rpm: No such file or directory
oracloudceladm04: ls: /tmp/jdk-7u72-linux-x64.rpm: No such file or directory
oracloudceladm05: ls: /tmp/jdk-7u72-linux-x64.rpm: No such file or directory
oracloudceladm06: ls: /tmp/jdk-7u72-linux-x64.rpm: No such file or directory
oracloudceladm07: ls: /tmp/jdk-7u72-linux-x64.rpm: No such file or directory
- Verify the uptime to confirm that the cells were not rebooted.
[root@oraclouddbadm01 software]# dcli -l root -g ~/cell_group uptime
oracloudceladm01: 05:59:52 up 153 days, 17:20, 0 users, load average: 0.77, 0.78, 0.86
oracloudceladm02: 05:59:52 up 154 days, 8:26, 0 users, load average: 0.81, 0.79, 0.81
oracloudceladm03: 05:59:52 up 154 days, 8:26, 0 users, load average: 0.84, 0.72, 0.78
oracloudceladm04: 05:59:52 up 154 days, 8:26, 0 users, load average: 0.96, 0.80, 0.74
oracloudceladm05: 05:59:52 up 154 days, 8:26, 0 users, load average: 0.87, 0.95, 0.92
oracloudceladm06: 05:59:52 up 154 days, 8:26, 0 users, load average: 0.55, 0.60, 0.69
oracloudceladm07: 05:59:52 up 154 days, 8:26, 0 users, load average: 1.12, 0.86, 0.80
Conclusion
In this article we have learned how to update Java to the latest version on the storage cells to resolve the MS process memory leak. We have also learned about the MS process, which is responsible for executing CellCLI commands on the storage cells and DBMCLI commands on the compute nodes, and which provides an interface to OEM 12c.
Reference
Exadata Storage Cell reports error RS-7445 [Serv MS Leaking Memory] (Doc ID 1954357.1)