
Getting Started with Toad for DB2 Chapter 7 The Basics - Blob Data


This is Chapter 7 of Getting Started with Toad for DB2 from Dell Software.


Getting Started with Toad for DB2 Chapter 8 The Basics - Using Multiple Databases


This is Chapter 8 of Getting Started with Toad for DB2 from Dell Software.

Getting Started with Toad for DB2 Chapter 9 The Basics - Customize Features and Tools


This is Chapter 9 of Getting Started with Toad for DB2 from Dell Software.

Delimiter Option issue?


Hello,

I am using the Delimiter option in Toad but have noticed it does not always show the line strip for me.

I have enabled the option via Tools --> Options --> Editor --> General and checked the box 'show delimiter strip after'.

When I start a new script it shows the line, but if I open a file it will not show the delimiter. I have also noticed that if I drag and drop a file into Toad from Windows Explorer, it will only show the strip after my text in the file.

Is this a known issue? Or is there a way I can fix it?

The reason I need the delimiter is that I can't upload a file with lines more than 70 characters long to the mainframe.
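For reference, a quick way to spot the offending lines outside Toad (plain awk, nothing Toad-specific; script.sql is just a placeholder file name):

awk 'length($0) > 70 { print FILENAME ": line " NR " (" length($0) " chars)" }' script.sql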

Toad version: 6.1.0.134, Toad for IBM DB2 for z/OS

I have attached a few screen shots to better explain the issue.

Thanks again for all your help!!!

-Robert

Academic program Verification link not working

Toad crash on connect


Toad 6.0 on Win 7

Toad consistently crashes when opening a connection to a database. Below is some of the detail from Event Viewer; it seems to be having trouble with DB2APP.dll.

Can anyone give me a hint as to what's wrong?

Application: toad.exe

Framework Version: v4.0.30319

Description: The process was terminated due to an unhandled exception.

Exception Info: System.AccessViolationException

Stack:

at IBM.Data.DB2.UnsafeNativeMethods+DB232.CSCStartTxnTimerADONET(IntPtr, Int32 ByRef)

at IBM.Data.DB2.DB2CscConnection.StartTxnTimer()

at IBM.Data.DB2.DB2Transaction.BeginTransaction()

at IBM.Data.DB2.DB2Connection.BeginTransactionObject(System.Data.IsolationLevel)

at IBM.Data.DB2.DB2Connection.BeginTransaction(System.Data.IsolationLevel)

at IBM.Data.DB2.DB2Connection.BeginTransaction()

at IBM.Data.DB2.DB2Connection.System.Data.IDbConnection.BeginTransaction()

at Quest.Toad.Db.Connection.BeginTransaction(System.Data.IDbConnection)

at Quest.Toad.DB2.DB2ToadConnection.BeginTransaction(System.Data.IDbConnection)

at Quest.Toad.Db.Connection.OpenConnection(System.Data.IDbConnection)

at Quest.Toad.DB2.DB2ToadConnection.OpenConnection(System.Data.IDbConnection)

at Quest.Toad.Db.Connection.AllocConnection()

at Quest.Toad.Db.Connection.Connect(Boolean)

at Quest.Toad.Db.Provider+BackgroundConnector.CreateBackgroundConnection()

at System.Threading.ThreadHelper.ThreadStart_Context(System.Object)

at System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)

at System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)

at System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)

at System.Threading.ThreadHelper.ThreadStart()

=====================================================================

Faulting application name: toad.exe, version: 6.0.0.373, time stamp: 0x54639ac8

Faulting module name: DB2APP.dll, version: 10.5.600.232, time stamp: 0x55bc43c6

Exception code: 0xc0000005

Fault offset: 0x004167c5

Faulting process id: 0xde4

Faulting application start time: 0x01d10c5eb941bf67

Faulting application path: C:\Program Files\Dell\Toad for DB2 6.0\toad.exe

Faulting module path: C:\IBM\SQLLIB\BIN\DB2APP.dll

Report Id: 118309e7-7852-11e5-8582-0023240b2629

Toad - Mac Edition 2.3.0


Version: 2.3.0
Released: 27/10/2015

Toad - Mac Edition is a native Mac application for database development. Designed to help database developers be more productive, Toad - Mac Edition provides essential database tools for Oracle, MySQL, and PostgreSQL.

Boost your database development productivity on Mac and develop highly functional database applications fast.

NOTE: You will be redirected to the iTunes App Store for download.


Using Oracle Database with CDH 5.2 Sqoop 1.4.5


Written by Deepak Vhora

 

An earlier tutorial on using Oracle Database with Sqoop was based on earlier versions: Apache Sqoop 1.4.1 (incubating), Oracle Database 10g Express Edition, Apache Hadoop 1.0.0, Apache Hive 0.9.0, and Apache HBase 0.94.1. In this tutorial Oracle Database 11g is used with later versions: CDH 5.2 Sqoop 1.4.5 (sqoop-1.4.5-cdh5.2.0), Hadoop 2.5.0 (hadoop-2.5.0-cdh5.2.0), Hive 0.13.1 (hive-0.13.1-cdh5.2.0), and HBase 0.98.6 (hbase-0.98.6-cdh5.2.0). This tutorial has the following sections.

 

Setting the Environment

Creating an Oracle Database Table

Importing into HDFS

Exporting from HDFS

Importing into HBase

Importing into Hive

 

Setting the Environment

 

The following software is required for this tutorial.

 

-Oracle Database 11g

-Sqoop 1.4.5 (sqoop-1.4.5-cdh5.2.0)

-Hadoop 2.5.0 (hadoop-2.5.0-cdh5.2.0)

-Hive 0.13.1 (hive-0.13.1-cdh5.2.0)

-HBase 0.98.6 (hbase-0.98.6-cdh5.2.0)

-Java 7

 

Create a directory /sqoop to install the software and set its permissions.

 

mkdir /sqoop

chmod -R 777 /sqoop

cd /sqoop

Add the hadoop group and add the hbase user to the hadoop group.

 

groupadd hadoop

useradd -g hadoop hbase

 

 

Download and extract the Java 7 gz file.

 

tar zxvf jdk-7u55-linux-i586.gz

 

 

Download and extract the Hadoop 2.5.0 tar.gz file.

 

wget http://archive-primary.cloudera.com/cdh5/cdh/5/hadoop-2.5.0-cdh5.2.0.tar.gz

tar -xvf hadoop-2.5.0-cdh5.2.0.tar.gz

 

Create symlinks for the Hadoop conf and bin directories.

 

ln -s /sqoop/hadoop-2.5.0-cdh5.2.0/bin-mapreduce1 /sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1/bin

ln -s /sqoop/hadoop-2.5.0-cdh5.2.0/etc/hadoop /sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1/conf

 

Download and extract the Sqoop 1.4.5 tar.gz file.

 

wget http://archive-primary.cloudera.com/cdh5/cdh/5/sqoop-1.4.5-cdh5.2.0.tar.gz

tar -xvf sqoop-1.4.5-cdh5.2.0.tar.gz

 

Copy the Oracle JDBC driver JAR file (ojdbc6.jar) to the Sqoop lib directory.

 

cp ojdbc6.jar /sqoop/sqoop-1.4.5-cdh5.2.0/lib

 

Download and extract the Hive 0.13.1 tar.gz file.

 

wget http://archive-primary.cloudera.com/cdh5/cdh/5/hive-0.13.1-cdh5.2.0.tar.gz

tar -xvf hive-0.13.1-cdh5.2.0.tar.gz

 

Create a hive-site.xml configuration file from the template file.

 

cp /sqoop/hive-0.13.1-cdh5.2.0/conf/hive-default.xml.template /sqoop/hive-0.13.1-cdh5.2.0/conf/hive-site.xml

 

Set the following configuration properties in the /sqoop/hive-0.13.1-cdh5.2.0/conf/hive-site.xml file. The host IP address specified in hive.metastore.warehouse.dir could be different in your environment.

 

<property>

<name>hive.metastore.warehouse.dir</name>

<value>hdfs://10.0.2.15:8020/user/hive/warehouse</value>

</property>

 

<property>

<name>hive.metastore.uris</name>

<value>thrift://localhost:10000</value>

</property>

 

Download and extract the HBase 0.98.6 tar.gz file.

 

wget http://archive-primary.cloudera.com/cdh5/cdh/5/hbase-0.98.6-cdh5.2.0.tar.gz

tar -xvf hbase-0.98.6-cdh5.2.0.tar.gz

 

 

Set the following configuration properties in the /sqoop/hbase-0.98.6-cdh5.2.0/conf/hbase-site.xml file. The IP address specified in hbase.rootdir could be different in your environment.

 

<?xml version="1.0"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

<property>

<name>hbase.rootdir</name>

<value>hdfs://10.0.2.15:8020/hbase</value>

</property>

<property>

<name>hbase.zookeeper.property.dataDir</name>

<value>/zookeeper</value>

</property>

<property>

<name>hbase.zookeeper.property.clientPort</name>

<value>2181</value>

</property>

<property>

<name>hbase.zookeeper.quorum</name>

<value>localhost</value>

</property>

<property>

<name>hbase.regionserver.port</name>

<value>60020</value>

</property>

<property>

<name>hbase.master.port</name>

<value>60000</value>

</property>

</configuration>

 

Create the directory specified in the hbase.zookeeper.property.dataDir property and set its permissions.

 

mkdir -p /zookeeper

chmod -R 700 /zookeeper

 

As the root user, increase the maximum file handle limits in the /etc/security/limits.conf file.

 

hdfs - nofile 32768

hbase - nofile 32768
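To verify that the new limits take effect for an account (assuming a fresh login shell), the open-file limit can be checked, for example:

su - hbase -c 'ulimit -n'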

 

Set the environment variables for Oracle Database, Sqoop, Hadoop, Hive, HBase and Java.

 

vi ~/.bashrc

export HADOOP_PREFIX=/sqoop/hadoop-2.5.0-cdh5.2.0

export HADOOP_CONF=$HADOOP_PREFIX/etc/hadoop

export HIVE_HOME=/sqoop/hive-0.13.1-cdh5.2.0

export HBASE_HOME=/sqoop/hbase-0.98.6-cdh5.2.0

export SQOOP_HOME=/sqoop/sqoop-1.4.5-cdh5.2.0

export ORACLE_HOME=/home/oracle/app/oracle/product/11.2.0/dbhome_1

export ORACLE_SID=ORCL

export JAVA_HOME=/sqoop/jdk1.7.0_55

export HADOOP_MAPRED_HOME=/sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1

export HADOOP_HOME=/sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1

export HADOOP_CLASSPATH=$HADOOP_HOME/*:$HADOOP_HOME/lib/*:$SQOOP_HOME/lib/*:$HBASE_HOME/lib/*:$HIVE_HOME/lib/*

export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_MAPRED_HOME/bin:$SQOOP_HOME/bin:$HBASE_HOME/bin:$HIVE_HOME/bin:$ORACLE_HOME/bin

export CLASSPATH=$HADOOP_CLASSPATH

export HADOOP_NAMENODE_USER=sqoop

export HADOOP_DATANODE_USER=sqoop
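Reload the profile so the variables take effect in the current shell:

source ~/.bashrc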

 

Set the following Hadoop core properties in the /sqoop/hadoop-2.5.0-cdh5.2.0/etc/hadoop/core-site.xml configuration file. The IP address specified in the fs.defaultFS property could be different in your environment.

 

<?xml version="1.0" encoding="UTF-8"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

 

<!-- Put site-specific property overrides in this file. -->

 

<configuration>

<property>

<name>fs.defaultFS</name>

<value>hdfs://10.0.2.15:8020</value>

</property>

<property>

<name>hadoop.tmp.dir</name>

<value>/var/lib/hadoop-0.20/cache</value>

</property>

</configuration>

 

Create the directory specified in the hadoop.tmp.dir property and set its permissions.

 

mkdir -p /var/lib/hadoop-0.20/cache

chmod -R 777 /var/lib/hadoop-0.20/cache

 

Set the following HDFS configuration properties in the /sqoop/hadoop-2.5.0-cdh5.2.0/etc/hadoop/hdfs-site.xml file.

 

<?xml version="1.0" encoding="UTF-8"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

 

<!-- Put site-specific property overrides in this file. -->

 

<configuration>

<property>

<name>dfs.permissions.superusergroup</name>

<value>hadoop</value>

</property>

<property>

<name>dfs.namenode.name.dir</name>

<value>/data/1/dfs/nn</value>

</property>

<property>

<name>dfs.replication</name>

<value>1</value>

</property>

<property>

<name>dfs.permissions</name>

<value>false</value>

</property>

 

<property>

<name>dfs.datanode.max.xcievers</name>

<value>4096</value>

</property>

 

</configuration>

 

Create the NameNode storage directory and set its permissions.

 

mkdir -p /data/1/dfs/nn

chmod -R 777 /data/1/dfs/nn

 

Format the NameNode and start the NameNode and the DataNode.

 

hadoop namenode -format

hadoop namenode

hadoop datanode
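The hadoop namenode and hadoop datanode commands run in the foreground, so start each in its own terminal. As a quick sanity check, the JDK's jps tool should list the NameNode and DataNode processes:

jps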

 

Create the HDFS directory specified in the hive.metastore.warehouse.dir property in the hive-site.xml file and set its permissions.

 

hadoop dfs -mkdir -p hdfs://10.0.2.15:8020/user/hive/warehouse

hadoop dfs -chmod -R 777 hdfs://10.0.2.15:8020/user/hive/warehouse

 

Create the HDFS directory specified in the hbase.rootdir property in the hbase-site.xml file and set its permissions.

 

hadoop dfs -mkdir /hbase

hadoop dfs -chmod -R 777 /hbase

 

We need to copy the Sqoop lib JARs to HDFS so that they are available on the runtime classpath. Create an HDFS directory for the Sqoop lib JARs, set its permissions, and put the JARs into HDFS.

 

hadoop dfs -mkdir -p hdfs://10.0.2.15:8020/sqoop/sqoop-1.4.5-cdh5.2.0/lib

hadoop dfs -chmod -R 777 hdfs://10.0.2.15:8020/sqoop/sqoop-1.4.5-cdh5.2.0/lib

hdfs dfs -put /sqoop/sqoop-1.4.5-cdh5.2.0/lib/* hdfs://10.0.2.15:8020/sqoop/sqoop-1.4.5-cdh5.2.0/lib

 

Similarly, create an HDFS directory for the Hive lib JARs, set its permissions, and put the JARs into HDFS.

 

hadoop dfs -mkdir -p hdfs://10.0.2.15:8020/sqoop/hive-0.13.1-cdh5.2.0/lib

hadoop dfs -chmod -R 777 hdfs://10.0.2.15:8020/sqoop/hive-0.13.1-cdh5.2.0/lib

hdfs dfs -put /sqoop/hive-0.13.1-cdh5.2.0/lib/* hdfs://10.0.2.15:8020/sqoop/hive-0.13.1-cdh5.2.0/lib

 

Similarly, create an HDFS directory for the HBase lib JARs, set its permissions, and put the JARs into HDFS.

 

hadoop dfs -mkdir -p hdfs://10.0.2.15:8020/sqoop/hbase-0.98.6-cdh5.2.0/lib

hadoop dfs -chmod -R 777 hdfs://10.0.2.15:8020/sqoop/hbase-0.98.6-cdh5.2.0/lib

hdfs dfs -put /sqoop/hbase-0.98.6-cdh5.2.0/lib/* hdfs://10.0.2.15:8020/sqoop/hbase-0.98.6-cdh5.2.0/lib
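To confirm the JARs were uploaded, list one of the HDFS lib directories (the IP address could be different, as above):

hadoop dfs -ls hdfs://10.0.2.15:8020/sqoop/sqoop-1.4.5-cdh5.2.0/lib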

 

 

Start the HBase Master, RegionServer, and ZooKeeper nodes.

 

hbase-daemon.sh start master

hbase-daemon.sh start regionserver

hbase-daemon.sh start zookeeper

 

Creating an Oracle Database Table

 

In this section we shall create the Oracle Database table that is to be used for the Sqoop import and export. In SQL*Plus, connect as the OE schema and create the wlslog table.

 

CONNECT OE/OE;

 

CREATE TABLE OE.wlslog (time_stamp VARCHAR2(4000), category VARCHAR2(4000), type VARCHAR2(4000), servername VARCHAR2(4000), code VARCHAR2(4000), msg VARCHAR2(4000));

 

Run the following SQL script to add data to the wlslog table.

 

INSERT INTO wlslog(time_stamp,category,type,servername,code,msg) VALUES('Apr-8-2014-7:06:16-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to STANDBY');

INSERT INTO wlslog(time_stamp,category,type,servername,code,msg) VALUES('Apr-8-2014-7:06:17-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to STARTING');

INSERT INTO wlslog(time_stamp,category,type,servername,code,msg) VALUES('Apr-8-2014-7:06:18-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to ADMIN');

INSERT INTO wlslog(time_stamp,category,type,servername,code,msg) VALUES('Apr-8-2014-7:06:19-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to RESUMING');

INSERT INTO wlslog(time_stamp,category,type,servername,code,msg) VALUES('Apr-8-2014-7:06:20-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000361','Started WebLogic AdminServer');

INSERT INTO wlslog(time_stamp,category,type,servername,code,msg) VALUES('Apr-8-2014-7:06:21-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to RUNNING');

INSERT INTO wlslog(time_stamp,category,type,servername,code,msg) VALUES('Apr-8-2014-7:06:22-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000360','Server started in RUNNING mode');
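Sqoop connects in its own database session, so commit the inserted rows to make them visible to the import:

COMMIT;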

 

As the output in SQL*Plus indicates, the database table wlslog gets created.

 


 

Create another table WLSLOG_COPY, with the same structure as the wlslog table, to be used to export from HDFS.

 

CREATE TABLE WLSLOG_COPY(time_stamp VARCHAR2(4000), category VARCHAR2(4000), type VARCHAR2(4000), servername VARCHAR2(4000), code VARCHAR2(4000), msg VARCHAR2(4000));

 

The WLSLOG_COPY table gets created. Do not add data to the table as data is to be exported to it from HDFS.

 


 

 

Importing into HDFS

 

In this section Sqoop is used to import Oracle Database table data into HDFS. Run the following sqoop import command to import into HDFS.

 

sqoop import --connect "jdbc:oracle:thin:@localhost:1521:ORCL" --password "OE" --username "OE" --table "wlslog" --columns "time_stamp,category,type,servername,code,msg" --split-by "time_stamp" --target-dir "/oradb/import" --verbose

 

The sqoop import command arguments are as follows.

--connect: sets the connection URL for Oracle Database ("jdbc:oracle:thin:@localhost:1521:ORCL")
--username: sets the username to connect to Oracle Database ("OE")
--password: sets the password for Oracle Database ("OE")
--table: sets the Oracle Database table name ("wlslog")
--columns: sets the Oracle Database table columns ("time_stamp,category,type,servername,code,msg")
--split-by: sets the column used to split the import workload, typically the primary key ("time_stamp")
--target-dir: sets the HDFS directory to import into ("/oradb/import")
--verbose: enables verbose output

 

 

A MapReduce job runs to import the Oracle Database table data into HDFS.
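Optionally, spot-check the imported data in HDFS; the part file name part-m-00000 is the one referenced in the export section below:

hadoop dfs -cat /oradb/import/part-m-00000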

 


 

 

A more detailed output from the sqoop import command is as follows.

 

15/04/03 11:09:17 INFO mapred.LocalJobRunner:

15/04/03 11:09:18 INFO mapred.JobClient: map 100% reduce 0%

15/04/03 11:09:22 INFO mapred.Task: Task:attempt_local1162911152_0001_m_000000_0 is done. And is in the process of commiting

15/04/03 11:09:22 INFO mapred.LocalJobRunner:

15/04/03 11:09:22 INFO mapred.Task: Task attempt_local1162911152_0001_m_000000_0 is allowed to commit now

15/04/03 11:09:24 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1162911152_0001_m_000000_0' to /oradb/import

15/04/03 11:09:24 INFO mapred.LocalJobRunner:

15/04/03 11:09:24 INFO mapred.Task: Task 'attempt_local1162911152_0001_m_000000_0' done.

15/04/03 11:09:24 INFO mapred.LocalJobRunner: Finishing task: attempt_local1162911152_0001_m_000000_0

15/04/03 11:09:24 INFO mapred.LocalJobRunner: Map task executor complete.

15/04/03 11:09:25 INFO mapred.JobClient: Job complete: job_local1162911152_0001

15/04/03 11:09:26 INFO mapred.JobClient: Counters: 18

15/04/03 11:09:26 INFO mapred.JobClient: File System Counters

15/04/03 11:09:26 INFO mapred.JobClient: FILE: Number of bytes read=21673941

15/04/03 11:09:26 INFO mapred.JobClient: FILE: Number of bytes written=21996421

15/04/03 11:09:26 INFO mapred.JobClient: FILE: Number of read operations=0

15/04/03 11:09:26 INFO mapred.JobClient: FILE: Number of large read operations=0

15/04/03 11:09:26 INFO mapred.JobClient: FILE: Number of write operations=0

15/04/03 11:09:26 INFO mapred.JobClient: HDFS: Number of bytes read=0

15/04/03 11:09:26 INFO mapred.JobClient: HDFS: Number of bytes written=717

15/04/03 11:09:26 INFO mapred.JobClient: HDFS: Number of read operations=1

15/04/03 11:09:26 INFO mapred.JobClient: HDFS: Number of large read operations=0

15/04/03 11:09:26 INFO mapred.JobClient: HDFS: Number of write operations=2

15/04/03 11:09:26 INFO mapred.JobClient: Map-Reduce Framework

15/04/03 11:09:26 INFO mapred.JobClient: Map input records=7

15/04/03 11:09:26 INFO mapred.JobClient: Map output records=7

15/04/03 11:09:26 INFO mapred.JobClient: Input split bytes=87

15/04/03 11:09:26 INFO mapred.JobClient: Spilled Records=0

15/04/03 11:09:26 INFO mapred.JobClient: CPU time spent (ms)=0

15/04/03 11:09:26 INFO mapred.JobClient: Physical memory (bytes) snapshot=0

15/04/03 11:09:26 INFO mapred.JobClient: Virtual memory (bytes) snapshot=0

15/04/03 11:09:26 INFO mapred.JobClient: Total committed heap usage (bytes)=180756480

15/04/03 11:09:26 INFO mapreduce.ImportJobBase: Transferred 717 bytes in 182.2559 seconds (3.934 bytes/sec)

15/04/03 11:09:26 INFO mapreduce.ImportJobBase: Retrieved 7 records.

15/04/03 11:09:26 DEBUG util.ClassLoaderStack: Restoring classloader: sun.misc.Launcher$AppClassLoader@3d4817

 

 

Exporting from HDFS

 

Having imported into HDFS, in this section we shall export the imported data back into Oracle Database using the sqoop export tool. Run the following sqoop export command to export to Oracle Database.

 

sqoop export --connect "jdbc:oracle:thin:@127.0.0.1:1521:ORCL" --hadoop-home "/sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1" --password "OE" --username "OE" --export-dir "/oradb/import" --table "WLSLOG_COPY" --verbose

 

The sqoop export command arguments are as follows.

--connect: sets the connection URL for Oracle Database ("jdbc:oracle:thin:@localhost:1521:ORCL")
--username: sets the username to connect to Oracle Database ("OE")
--password: sets the password for Oracle Database ("OE")
--table: sets the Oracle Database table name to export to ("WLSLOG_COPY")
--hadoop-home: sets the Hadoop home directory ("/sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1")
--export-dir: sets the HDFS directory to export from; it should be the same as the directory imported into ("/oradb/import")
--verbose: enables verbose output

 

 

A MapReduce job runs to export HDFS data into Oracle Database.

 


 

A more detailed output from the sqoop export command is as follows.

 

[root@localhost sqoop]# sqoop export --connect "jdbc:oracle:thin:@localhost:1521:ORCL" --hadoop-home "/sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1" --password "OE" --username "OE" --export-dir "/oradb/import" --table "WLSLOG_COPY" --verbose

15/04/03 11:13:03 DEBUG manager.DefaultManagerFactory: Trying with scheme: jdbc:oracle:thin:@localhost:1521

15/04/03 11:13:03 DEBUG manager.OracleManager$ConnCache: Instantiated new connection cache.

15/04/03 11:13:03 INFO manager.SqlManager: Using default fetchSize of 1000

15/04/03 11:13:03 DEBUG sqoop.ConnFactory: Instantiated ConnManager org.apache.sqoop.manager.OracleManager@1101fa5

15/04/03 11:13:03 INFO tool.CodeGenTool: Beginning code generation

15/04/03 11:13:04 DEBUG manager.OracleManager: Using column names query: SELECT t.* FROM WLSLOG_COPY t WHERE 1=0

15/04/03 11:13:04 DEBUG manager.SqlManager: Execute getColumnInfoRawQuery : SELECT t.* FROM WLSLOG_COPY t WHERE 1=0

15/04/03 11:13:05 DEBUG manager.OracleManager: Creating a new connection for jdbc:oracle:thin:@localhost:1521:ORCL, using username: OE

15/04/03 11:13:05 DEBUG manager.OracleManager: No connection paramenters specified. Using regular API for making connection.

15/04/03 11:13:10 INFO manager.OracleManager: Time zone has been set to GMT

15/04/03 11:13:11 DEBUG manager.SqlManager: Using fetchSize for next query: 1000

15/04/03 11:13:11 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM WLSLOG_COPY t WHERE 1=0

15/04/03 11:13:15 DEBUG manager.OracleManager$ConnCache: Caching released connection for jdbc:oracle:thin:@localhost:1521:ORCL/OE

15/04/03 11:13:15 DEBUG orm.ClassWriter: selected columns:

15/04/03 11:13:15 DEBUG orm.ClassWriter: TIME_STAMP

15/04/03 11:13:15 DEBUG orm.ClassWriter: CATEGORY

15/04/03 11:13:15 DEBUG orm.ClassWriter: TYPE

15/04/03 11:13:15 DEBUG orm.ClassWriter: SERVERNAME

15/04/03 11:13:15 DEBUG orm.ClassWriter: CODE

15/04/03 11:13:15 DEBUG orm.ClassWriter: MSG

15/04/03 11:14:00 INFO mapreduce.ExportJobBase: Beginning export of WLSLOG_COPY

15/04/03 11:14:00 DEBUG util.ClassLoaderStack: Checking for existing class: WLSLOG_COPY

15/04/03 11:14:52 DEBUG mapreduce.JobBase: Using InputFormat: class org.apache.sqoop.mapreduce.ExportInputFormat

15/04/03 11:14:54 DEBUG db.DBConfiguration: Securing password into job credentials store

15/04/03 11:14:54 DEBUG manager.OracleManager$ConnCache: Got cached connection for jdbc:oracle:thin:@localhost:1521:ORCL/OE

15/04/03 11:15:23 INFO input.FileInputFormat: Total input paths to process : 1

15/04/03 11:15:23 DEBUG mapreduce.ExportInputFormat: Target numMapTasks=4

15/04/03 11:15:23 DEBUG mapreduce.ExportInputFormat: Total input bytes=717

15/04/03 11:15:23 DEBUG mapreduce.ExportInputFormat: maxSplitSize=179

15/04/03 11:15:23 INFO input.FileInputFormat: Total input paths to process : 1

15/04/03 11:15:25 DEBUG mapreduce.ExportInputFormat: Generated splits:

15/04/03 11:15:25 DEBUG mapreduce.ExportInputFormat: Paths:/oradb/import/part-m-00000:0+179 Locations:localhost:;

15/04/03 11:15:25 DEBUG mapreduce.ExportInputFormat: Paths:/oradb/import/part-m-00000:179+179 Locations:localhost:;

15/04/03 11:15:25 DEBUG mapreduce.ExportInputFormat: Paths:/oradb/import/part-m-00000:358+179 Locations:localhost:;

15/04/03 11:15:25 DEBUG mapreduce.ExportInputFormat: Paths:/oradb/import/part-m-00000:537+90,/oradb/import/part-m-00000:627+90 Locations:localhost:;

15/04/03 11:16:35 INFO mapred.LocalJobRunner: OutputCommitter set in config null

15/04/03 11:16:35 INFO mapred.JobClient: Running job: job_local596048800_0001

15/04/03 11:16:35 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.sqoop.mapreduce.NullOutputCommitter

15/04/03 11:16:36 INFO mapred.LocalJobRunner: Waiting for map tasks

15/04/03 11:16:36 INFO mapred.LocalJobRunner: Starting task: attempt_local596048800_0001_m_000000_0

15/04/03 11:16:37 INFO mapred.JobClient: map 0% reduce 0%

15/04/03 11:16:38 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead

15/04/03 11:16:40 INFO util.ProcessTree: setsid exited with exit code 0

15/04/03 11:16:41 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@9487e9

15/04/03 11:16:41 INFO mapred.MapTask: Processing split: Paths:/oradb/import/part-m-00000:537+90,/oradb/import/part-m-00000:627+90

15/04/03 11:16:41 DEBUG mapreduce.CombineShimRecordReader: ChildSplit operates on: hdfs://10.0.2.15:8020/oradb/import/part-m-00000

15/04/03 11:16:41 DEBUG db.DBConfiguration: Fetching password from job credentials store

15/04/03 11:16:46 DEBUG mapreduce.CombineShimRecordReader: ChildSplit operates on: hdfs://10.0.2.15:8020/oradb/import/part-m-00000

15/04/03 11:16:46 INFO mapred.LocalJobRunner:

15/04/03 11:16:48 DEBUG mapreduce.AsyncSqlOutputFormat: Committing transaction of 1 statements

15/04/03 11:16:48 INFO mapred.Task: Task:attempt_local596048800_0001_m_000000_0 is done. And is in the process of commiting

15/04/03 11:16:49 INFO mapred.LocalJobRunner:

15/04/03 11:16:49 INFO mapred.Task: Task 'attempt_local596048800_0001_m_000000_0' done.

15/04/03 11:16:49 INFO mapred.LocalJobRunner: Finishing task: attempt_local596048800_0001_m_000000_0

15/04/03 11:16:49 INFO mapred.LocalJobRunner: Starting task: attempt_local596048800_0001_m_000001_0

15/04/03 11:16:49 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead

15/04/03 11:16:49 INFO mapred.JobClient: map 25% reduce 0%

15/04/03 11:16:49 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@318c80

15/04/03 11:16:49 INFO mapred.MapTask: Processing split: Paths:/oradb/import/part-m-00000:0+179

15/04/03 11:16:49 DEBUG mapreduce.CombineShimRecordReader: ChildSplit operates on: hdfs://10.0.2.15:8020/oradb/import/part-m-00000

15/04/03 11:16:49 DEBUG db.DBConfiguration: Fetching password from job credentials store

15/04/03 11:16:53 INFO mapred.LocalJobRunner:

15/04/03 11:16:53 DEBUG mapreduce.AsyncSqlOutputFormat: Committing transaction of 1 statements

15/04/03 11:16:53 INFO mapred.Task: Task:attempt_local596048800_0001_m_000001_0 is done. And is in the process of commiting

15/04/03 11:16:53 INFO mapred.LocalJobRunner:

15/04/03 11:16:53 INFO mapred.Task: Task 'attempt_local596048800_0001_m_000001_0' done.

15/04/03 11:16:53 INFO mapred.LocalJobRunner: Finishing task: attempt_local596048800_0001_m_000001_0

15/04/03 11:16:53 INFO mapred.LocalJobRunner: Starting task: attempt_local596048800_0001_m_000002_0

15/04/03 11:16:53 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead

15/04/03 11:16:53 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@19d4b58

15/04/03 11:16:53 INFO mapred.MapTask: Processing split: Paths:/oradb/import/part-m-00000:179+179

15/04/03 11:16:53 DEBUG mapreduce.CombineShimRecordReader: ChildSplit operates on: hdfs://10.0.2.15:8020/oradb/import/part-m-00000

15/04/03 11:16:53 DEBUG db.DBConfiguration: Fetching password from job credentials store

15/04/03 11:16:54 INFO mapred.JobClient: map 50% reduce 0%

15/04/03 11:16:58 DEBUG mapreduce.AutoProgressMapper: Progress thread shutdown detected.

15/04/03 11:16:58 INFO mapred.LocalJobRunner:

15/04/03 11:16:58 DEBUG mapreduce.AsyncSqlOutputFormat: Committing transaction of 1 statements

15/04/03 11:16:58 INFO mapred.Task: Task:attempt_local596048800_0001_m_000002_0 is done. And is in the process of commiting

15/04/03 11:16:58 INFO mapred.LocalJobRunner:

15/04/03 11:16:58 INFO mapred.Task: Task 'attempt_local596048800_0001_m_000002_0' done.

15/04/03 11:16:58 INFO mapred.LocalJobRunner: Finishing task: attempt_local596048800_0001_m_000002_0

15/04/03 11:16:58 INFO mapred.LocalJobRunner: Starting task: attempt_local596048800_0001_m_000003_0

15/04/03 11:16:58 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead

15/04/03 11:16:58 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@1059ca1

15/04/03 11:16:58 INFO mapred.MapTask: Processing split: Paths:/oradb/import/part-m-00000:358+179

15/04/03 11:16:58 DEBUG mapreduce.CombineShimRecordReader: ChildSplit operates on: hdfs://10.0.2.15:8020/oradb/import/part-m-00000

15/04/03 11:16:58 DEBUG db.DBConfiguration: Fetching password from job credentials store

15/04/03 11:16:59 INFO mapred.JobClient: map 75% reduce 0%

15/04/03 11:17:02 INFO mapred.LocalJobRunner:

15/04/03 11:17:02 DEBUG mapreduce.AsyncSqlOutputFormat: Committing transaction of 1 statements

15/04/03 11:17:03 INFO mapred.Task: Task:attempt_local596048800_0001_m_000003_0 is done. And is in the process of commiting

15/04/03 11:17:03 INFO mapred.LocalJobRunner:

15/04/03 11:17:03 INFO mapred.Task: Task 'attempt_local596048800_0001_m_000003_0' done.

15/04/03 11:17:03 INFO mapred.LocalJobRunner: Finishing task: attempt_local596048800_0001_m_000003_0

15/04/03 11:17:03 INFO mapred.LocalJobRunner: Map task executor complete.

15/04/03 11:17:03 INFO mapred.JobClient: map 100% reduce 0%

15/04/03 11:17:03 INFO mapred.JobClient: Job complete: job_local596048800_0001

15/04/03 11:17:04 INFO mapred.JobClient: Counters: 18

15/04/03 11:17:04 INFO mapred.JobClient: File System Counters

15/04/03 11:17:04 INFO mapred.JobClient: FILE: Number of bytes read=86701670

15/04/03 11:17:04 INFO mapred.JobClient: FILE: Number of bytes written=87982780

15/04/03 11:17:04 INFO mapred.JobClient: FILE: Number of read operations=0

15/04/03 11:17:04 INFO mapred.JobClient: FILE: Number of large read operations=0

15/04/03 11:17:04 INFO mapred.JobClient: FILE: Number of write operations=0

15/04/03 11:17:04 INFO mapred.JobClient: HDFS: Number of bytes read=4720

15/04/03 11:17:04 INFO mapred.JobClient: HDFS: Number of bytes written=0

15/04/03 11:17:04 INFO mapred.JobClient: HDFS: Number of read operations=78

15/04/03 11:17:04 INFO mapred.JobClient: HDFS: Number of large read operations=0

15/04/03 11:17:04 INFO mapred.JobClient: HDFS: Number of write operations=0

15/04/03 11:17:04 INFO mapred.JobClient: Map-Reduce Framework

15/04/03 11:17:04 INFO mapred.JobClient: Map input records=7

15/04/03 11:17:04 INFO mapred.JobClient: Map output records=7

15/04/03 11:17:04 INFO mapred.JobClient: Input split bytes=576

15/04/03 11:17:04 INFO mapred.JobClient: Spilled Records=0

15/04/03 11:17:04 INFO mapred.JobClient: CPU time spent (ms)=0

15/04/03 11:17:04 INFO mapred.JobClient: Physical memory (bytes) snapshot=0

15/04/03 11:17:04 INFO mapred.JobClient: Virtual memory (bytes) snapshot=0

15/04/03 11:17:04 INFO mapred.JobClient: Total committed heap usage (bytes)=454574080

15/04/03 11:17:04 INFO mapreduce.ExportJobBase: Transferred 4.6094 KB in 128.7649 seconds (36.656 bytes/sec)

15/04/03 11:17:04 INFO mapreduce.ExportJobBase: Exported 7 records.

15/04/03 11:17:04 DEBUG util.ClassLoaderStack: Restoring classloader: sun.misc.Launcher$AppClassLoader@3d4817

 

Run a SELECT statement in SQL*Plus to list the exported data.

 


 

 

The 7 rows of data exported to the WLSLOG_COPY table get listed.

 


 

 

Importing into HBase

 

In this section Sqoop is used to import Oracle Database table data into HBase. Run the following sqoop import command to import into HBase.

 

sqoop import --connect "jdbc:oracle:thin:@localhost:1521:ORCL" --hadoop-home "/sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1" --password "OE" --username "OE" --hbase-create-table --hbase-table "WLS_LOG" --column-family "wls" --table "wlslog" --verbose

 

The sqoop import command arguments are as follows.

--connect: sets the connection URL for Oracle Database ("jdbc:oracle:thin:@localhost:1521:ORCL")
--username: sets the username to connect to Oracle Database ("OE")
--password: sets the password for Oracle Database ("OE")
--table: sets the Oracle Database table name ("wlslog")
--hadoop-home: sets the Hadoop home directory ("/sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1")
--hbase-create-table: creates the HBase table if it does not already exist
--hbase-table: sets the HBase table name ("WLS_LOG")
--column-family: sets the HBase column family ("wls")
--verbose: enables verbose output

 

 

A MapReduce job runs to import Oracle Database data into HBase.

 


 

 

A more detailed output from the sqoop import command is as follows.

 

[root@localhost sqoop]# sqoop import --connect "jdbc:oracle:thin:@localhost:1521:ORCL" --hadoop-home "/sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1" --password "OE" --username "OE" --hbase-create-table --hbase-table "WLS_LOG" --column-family "wls" --table "WLSLOG" --verbose

15/04/03 13:56:26 DEBUG sqoop.ConnFactory: Trying ManagerFactory: com.cloudera.sqoop.manager.DefaultManagerFactory

15/04/03 13:56:26 DEBUG manager.DefaultManagerFactory: Trying with scheme: jdbc:oracle:thin:@localhost:1521

15/04/03 13:56:26 DEBUG manager.OracleManager$ConnCache: Instantiated new connection cache.

15/04/03 13:56:26 INFO manager.SqlManager: Using default fetchSize of 1000

15/04/03 13:56:26 DEBUG sqoop.ConnFactory: Instantiated ConnManager org.apache.sqoop.manager.OracleManager@704f33

15/04/03 13:56:26 INFO tool.CodeGenTool: Beginning code generation

15/04/03 13:56:26 DEBUG manager.OracleManager: Using column names query: SELECT t.* FROM WLSLOG t WHERE 1=0

15/04/03 13:56:26 DEBUG manager.SqlManager: Execute getColumnInfoRawQuery : SELECT t.* FROM WLSLOG t WHERE 1=0

15/04/03 13:56:28 DEBUG manager.OracleManager: Creating a new connection for jdbc:oracle:thin:@localhost:1521:ORCL, using username: OE

15/04/03 13:56:28 DEBUG manager.OracleManager: No connection paramenters specified. Using regular API for making connection.

15/04/03 13:57:09 DEBUG manager.SqlManager: Using fetchSize for next query: 1000

15/04/03 13:57:09 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM WLSLOG t WHERE 1=0

15/04/03 13:57:22 DEBUG manager.OracleManager$ConnCache: Caching released connection for jdbc:oracle:thin:@localhost:1521:ORCL/OE

15/04/03 13:57:22 DEBUG orm.ClassWriter: selected columns:

15/04/03 13:57:22 DEBUG orm.ClassWriter: TIME_STAMP

15/04/03 13:57:22 DEBUG orm.ClassWriter: CATEGORY

15/04/03 13:57:22 DEBUG orm.ClassWriter: TYPE

15/04/03 13:57:22 DEBUG orm.ClassWriter: SERVERNAME

15/04/03 13:57:22 DEBUG orm.ClassWriter: CODE

15/04/03 13:57:22 DEBUG orm.ClassWriter: MSG

15/04/03 13:58:46 DEBUG db.DBConfiguration: Securing password into job credentials store

15/04/03 13:58:46 DEBUG manager.OracleManager$ConnCache: Got cached connection for jdbc:oracle:thin:@localhost:1521:ORCL/OE

15/04/03 13:58:46 DEBUG manager.OracleManager$ConnCache: Caching released connection for jdbc:oracle:thin:@localhost:1521:ORCL/OE

15/04/03 13:58:46 DEBUG mapreduce.DataDrivenImportJob: Using table class: WLSLOG

15/04/03 13:58:46 DEBUG mapreduce.DataDrivenImportJob: Using InputFormat: class com.cloudera.sqoop.mapreduce.db.OracleDataDrivenDBInputFormat

15/04/03 13:58:47 DEBUG manager.OracleManager$ConnCache: Got cached connection for jdbc:oracle:thin:@localhost:1521:ORCL/OE

15/04/03 13:59:07 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/java/packages/lib/i386:/lib:/usr/lib

15/04/03 13:59:07 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp

15/04/03 13:59:07 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>

15/04/03 13:59:07 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux

15/04/03 13:59:07 INFO zookeeper.ZooKeeper: Client environment:os.arch=i386

15/04/03 13:59:07 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.39-400.247.1.el6uek.i686

15/04/03 13:59:07 INFO zookeeper.ZooKeeper: Client environment:user.name=root

15/04/03 13:59:07 INFO zookeeper.ZooKeeper: Client environment:user.home=/root

15/04/03 13:59:07 INFO zookeeper.ZooKeeper: Client environment:user.dir=/sqoop

15/04/03 13:59:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x8fffea, quorum=localhost:2181, baseZNode=/hbase

15/04/03 13:59:09 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/10.0.2.15:2181. Will not attempt to authenticate using SASL (unknown error)

15/04/03 13:59:10 INFO zookeeper.ClientCnxn: Socket connection established to localhost/10.0.2.15:2181, initiating session

15/04/03 13:59:11 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/10.0.2.15:2181, sessionid = 0x14c806c5f420006, negotiated timeout = 40000

15/04/03 13:59:47 INFO Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available

15/04/03 13:59:54 INFO zookeeper.RecoverableZooKeeper: Process identifier=catalogtracker-on-hconnection-0x8fffea connecting to ZooKeeper ensemble=localhost:2181

15/04/03 13:59:54 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0x8fffea, quorum=localhost:2181, baseZNode=/hbase

15/04/03 13:59:54 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/10.0.2.15:2181. Will not attempt to authenticate using SASL (unknown error)

15/04/03 13:59:54 INFO zookeeper.ClientCnxn: Socket connection established to localhost/10.0.2.15:2181, initiating session

15/04/03 13:59:55 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/10.0.2.15:2181, sessionid = 0x14c806c5f420007, negotiated timeout = 40000

15/04/03 14:00:07 INFO zookeeper.ZooKeeper: Session: 0x14c806c5f420007 closed

15/04/03 14:00:07 INFO mapreduce.HBaseImportJob: Creating missing HBase table WLS_LOG

15/04/03 14:00:07 INFO zookeeper.ClientCnxn: EventThread shut down

15/04/03 14:00:14 INFO zookeeper.RecoverableZooKeeper: Process identifier=catalogtracker-on-hconnection-0x8fffea connecting to ZooKeeper ensemble=localhost:2181

15/04/03 14:00:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0x8fffea, quorum=localhost:2181, baseZNode=/hbase

15/04/03 14:00:14 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)

15/04/03 14:00:15 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session

15/04/03 14:00:15 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x14c806c5f420008, negotiated timeout = 40000

15/04/03 14:00:15 INFO zookeeper.ClientCnxn: EventThread shut down

15/04/03 14:00:15 INFO zookeeper.ZooKeeper: Session: 0x14c806c5f420008 closed

15/04/03 14:00:18 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id

15/04/03 14:00:18 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=

15/04/03 14:00:20 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.

15/04/03 14:00:44 DEBUG db.DBConfiguration: Fetching password from job credentials store

15/04/03 14:00:49 INFO db.DBInputFormat: Using read commited transaction isolation

15/04/03 14:00:49 DEBUG db.DataDrivenDBInputFormat: Creating input split with lower bound '1=1' and upper bound '1=1'

15/04/03 14:02:39 INFO mapred.JobClient: Running job: job_local1040061811_0001

15/04/03 14:02:39 INFO mapred.LocalJobRunner: OutputCommitter set in config null

15/04/03 14:02:39 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.sqoop.mapreduce.NullOutputCommitter

15/04/03 14:02:40 INFO mapred.LocalJobRunner: Waiting for map tasks

15/04/03 14:02:40 INFO mapred.LocalJobRunner: Starting task: attempt_local1040061811_0001_m_000000_0

15/04/03 14:02:40 INFO mapred.JobClient: map 0% reduce 0%

15/04/03 14:02:46 DEBUG db.DBConfiguration: Fetching password from job credentials store

15/04/03 14:02:50 INFO db.DBInputFormat: Using read commited transaction isolation

15/04/03 14:02:50 INFO mapred.MapTask: Processing split: 1=1 AND 1=1

15/04/03 14:02:51 INFO db.OracleDBRecordReader: Time zone has been set to GMT

15/04/03 14:02:53 INFO db.DBRecordReader: Working on split: 1=1 AND 1=1

15/04/03 14:02:53 DEBUG db.DataDrivenDBRecordReader: Using query: SELECT TIME_STAMP, CATEGORY, TYPE, SERVERNAME, CODE, MSG FROM WLSLOG WHERE ( 1=1 ) AND ( 1=1 )

15/04/03 14:02:53 DEBUG db.DBRecordReader: Using fetchSize for next query: 1000

15/04/03 14:02:53 INFO db.DBRecordReader: Executing query: SELECT TIME_STAMP, CATEGORY, TYPE, SERVERNAME, CODE, MSG FROM WLSLOG WHERE ( 1=1 ) AND ( 1=1 )

15/04/03 14:03:01 DEBUG mapreduce.AutoProgressMapper: Instructing auto-progress thread to quit.

15/04/03 14:03:01 INFO mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false

15/04/03 14:03:01 DEBUG mapreduce.AutoProgressMapper: Waiting for progress thread shutdown...

15/04/03 14:03:01 DEBUG mapreduce.AutoProgressMapper: Progress thread shutdown detected.

15/04/03 14:03:01 INFO mapred.LocalJobRunner:

15/04/03 14:03:06 INFO mapred.LocalJobRunner:

15/04/03 14:03:07 INFO mapred.Task: Task:attempt_local1040061811_0001_m_000000_0 is done. And is in the process of commiting

15/04/03 14:03:07 INFO mapred.JobClient: map 100% reduce 0%

15/04/03 14:03:07 INFO mapred.LocalJobRunner:

15/04/03 14:03:07 INFO mapred.Task: Task 'attempt_local1040061811_0001_m_000000_0' done.

15/04/03 14:03:07 INFO mapred.LocalJobRunner: Finishing task: attempt_local1040061811_0001_m_000000_0

15/04/03 14:03:07 INFO mapred.LocalJobRunner: Map task executor complete.

15/04/03 14:03:08 INFO mapred.JobClient: Job complete: job_local1040061811_0001

15/04/03 14:03:08 INFO mapred.JobClient: Counters: 18

15/04/03 14:03:08 INFO mapred.JobClient: File System Counters

15/04/03 14:03:08 INFO mapred.JobClient: FILE: Number of bytes read=39829434

15/04/03 14:03:08 INFO mapred.JobClient: FILE: Number of bytes written=40338352

15/04/03 14:03:08 INFO mapred.JobClient: FILE: Number of read operations=0

15/04/03 14:03:08 INFO mapred.JobClient: FILE: Number of large read operations=0

15/04/03 14:03:08 INFO mapred.JobClient: FILE: Number of write operations=0

15/04/03 14:03:08 INFO mapred.JobClient: HDFS: Number of bytes read=0

15/04/03 14:03:08 INFO mapred.JobClient: HDFS: Number of bytes written=0

15/04/03 14:03:08 INFO mapred.JobClient: HDFS: Number of read operations=0

15/04/03 14:03:08 INFO mapred.JobClient: HDFS: Number of large read operations=0

15/04/03 14:03:08 INFO mapred.JobClient: HDFS: Number of write operations=0

15/04/03 14:03:08 INFO mapred.JobClient: Map-Reduce Framework

15/04/03 14:03:08 INFO mapred.JobClient: Map input records=7

15/04/03 14:03:08 INFO mapred.JobClient: Map output records=7

15/04/03 14:03:08 INFO mapred.JobClient: Input split bytes=87

15/04/03 14:03:08 INFO mapred.JobClient: Spilled Records=0

15/04/03 14:03:08 INFO mapred.JobClient: CPU time spent (ms)=0

15/04/03 14:03:08 INFO mapred.JobClient: Physical memory (bytes) snapshot=0

15/04/03 14:03:08 INFO mapred.JobClient: Virtual memory (bytes) snapshot=0

15/04/03 14:03:08 INFO mapred.JobClient: Total committed heap usage (bytes)=180756480

15/04/03 14:03:08 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 171.3972 seconds (0 bytes/sec)

15/04/03 14:03:09 INFO mapreduce.ImportJobBase: Retrieved 7 records.

15/04/03 14:03:09 DEBUG util.ClassLoaderStack: Restoring classloader: sun.misc.Launcher$AppClassLoader@3d4817

 

Start the HBase shell.

 

hbase shell

 

Run the scan command to list the data imported into the WLS_LOG table.

 

scan "WLS_LOG"

 

The scan command lists the HBase table data.

 


 

 

The 7 rows of data imported into HBase get listed.
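As a cross-check, the HBase shell count command reports the number of rows without printing them:

count "WLS_LOG"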

 


 

 

 

Importing into Hive

 

In this section Sqoop is used to import Oracle Database table data into Hive. Run the following sqoop import command to import into Hive.

 

sqoop import --connect "jdbc:oracle:thin:@localhost:1521:ORCL" --hadoop-home "/sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1" --password "OE" --username "OE" --hive-import --create-hive-table --hive-table "WLSLOG" --table "WLSLOG_COPY" --split-by "time_stamp" --verbose

 

The sqoop import command arguments are as follows.

--connect: sets the connection URL for Oracle Database ("jdbc:oracle:thin:@localhost:1521:ORCL")
--username: sets the username to connect to Oracle Database ("OE")
--password: sets the password for Oracle Database ("OE")
--table: sets the Oracle Database table name to import from ("WLSLOG_COPY")
--hadoop-home: sets the Hadoop home directory ("/sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1")
--hive-import: imports into Hive
--create-hive-table: creates the Hive table
--hive-table: sets the Hive table name ("WLSLOG")
--split-by: sets the column used to split the import workload, typically the primary key ("time_stamp")
--verbose: enables verbose output

 

 

A MapReduce job runs to import Oracle Database table data into Hive.

 


 

 

A more detailed output from the sqoop import command is as follows.

 

[root@localhost sqoop]# sqoop import --connect "jdbc:oracle:thin:@localhost:1521:ORCL" --hadoop-home "/sqoop/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1" --password "OE" --username "OE" --hive-import --create-hive-table --hive-table "WLSLOG" --table "WLSLOG_COPY" --split-by "time_stamp" --verbose

15/04/03 13:20:42 DEBUG sqoop.ConnFactory: Trying ManagerFactory: com.cloudera.sqoop.manager.DefaultManagerFactory

15/04/03 13:20:42 DEBUG manager.DefaultManagerFactory: Trying with scheme: jdbc:oracle:thin:@localhost:1521

15/04/03 13:20:43 DEBUG manager.OracleManager$ConnCache: Instantiated new connection cache.

15/04/03 13:20:43 INFO manager.SqlManager: Using default fetchSize of 1000

15/04/03 13:20:43 DEBUG sqoop.ConnFactory: Instantiated ConnManager org.apache.sqoop.manager.OracleManager@9ed26e

15/04/03 13:20:44 INFO tool.CodeGenTool: Beginning code generation

15/04/03 13:20:44 DEBUG manager.OracleManager: Using column names query: SELECT t.* FROM WLSLOG_COPY t WHERE 1=0

15/04/03 13:20:44 DEBUG manager.SqlManager: Execute getColumnInfoRawQuery : SELECT t.* FROM WLSLOG_COPY t WHERE 1=0

15/04/03 13:20:51 DEBUG manager.OracleManager: Creating a new connection for jdbc:oracle:thin:@localhost:1521:ORCL, using username: OE

15/04/03 13:20:51 DEBUG manager.OracleManager: No connection paramenters specified. Using regular API for making connection.

15/04/03 13:21:18 DEBUG manager.SqlManager: Using fetchSize for next query: 1000

15/04/03 13:21:18 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM WLSLOG_COPY t WHERE 1=0

15/04/03 13:21:30 DEBUG manager.OracleManager$ConnCache: Caching released connection for jdbc:oracle:thin:@localhost:1521:ORCL/OE

15/04/03 13:21:30 DEBUG orm.ClassWriter: selected columns:

15/04/03 13:21:30 DEBUG orm.ClassWriter: TIME_STAMP

15/04/03 13:21:30 DEBUG orm.ClassWriter: CATEGORY

15/04/03 13:21:30 DEBUG orm.ClassWriter: TYPE

15/04/03 13:21:30 DEBUG orm.ClassWriter: SERVERNAME

15/04/03 13:21:30 DEBUG orm.ClassWriter: CODE

15/04/03 13:21:30 DEBUG orm.ClassWriter: MSG

15/04/03 13:21:31 DEBUG orm.ClassWriter: Writing source file: /tmp/sqoop-root/compile/6235c3beba4d629be2f91c2c832c8033/WLSLOG_COPY.java

15/04/03 13:21:31 DEBUG orm.ClassWriter: Table name: WLSLOG_COPY

15/04/03 13:21:31 DEBUG orm.ClassWriter: Columns: TIME_STAMP:12, CATEGORY:12, TYPE:12, SERVERNAME:12, CODE:12, MSG:12,

15/04/03 13:21:52 INFO mapreduce.ImportJobBase: Beginning import of WLSLOG_COPY

15/04/03 13:21:53 DEBUG util.ClassLoaderStack: Checking for existing class: WLSLOG_COPY

15/04/03 13:22:04 DEBUG db.DBConfiguration: Securing password into job credentials store

15/04/03 13:22:04 DEBUG manager.OracleManager$ConnCache: Got cached connection for jdbc:oracle:thin:@localhost:1521:ORCL/OE

15/04/03 13:22:04 INFO manager.OracleManager: Time zone has been set to GMT

15/04/03 13:22:04 DEBUG manager.OracleManager$ConnCache: Caching released connection for jdbc:oracle:thin:@localhost:1521:ORCL/OE

15/04/03 13:22:05 DEBUG mapreduce.DataDrivenImportJob: Using table class: WLSLOG_COPY

15/04/03 13:22:05 DEBUG mapreduce.DataDrivenImportJob: Using InputFormat: class com.cloudera.sqoop.mapreduce.db.OracleDataDrivenDBInputFormat

15/04/03 13:24:39 INFO mapred.LocalJobRunner: OutputCommitter set in config null

15/04/03 13:24:39 INFO mapred.JobClient: Running job: job_local846992281_0001

15/04/03 13:24:40 INFO mapred.JobClient: map 0% reduce 0%

15/04/03 13:24:40 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter

15/04/03 13:24:42 INFO mapred.LocalJobRunner: Waiting for map tasks

15/04/03 13:24:42 INFO mapred.LocalJobRunner: Starting task: attempt_local846992281_0001_m_000000_0

15/04/03 13:24:43 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead

15/04/03 13:24:45 INFO util.ProcessTree: setsid exited with exit code 0

15/04/03 13:24:46 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@1e1a108

15/04/03 13:24:46 DEBUG db.DBConfiguration: Fetching password from job credentials store

15/04/03 13:24:50 INFO db.DBInputFormat: Using read commited transaction isolation

15/04/03 13:24:50 INFO mapred.MapTask: Processing split: 1=1 AND 1=1

15/04/03 13:24:50 INFO db.OracleDBRecordReader: Time zone has been set to GMT

15/04/03 13:24:53 INFO db.DBRecordReader: Working on split: 1=1 AND 1=1

15/04/03 13:24:53 DEBUG db.DataDrivenDBRecordReader: Using query: SELECT TIME_STAMP, CATEGORY, TYPE, SERVERNAME, CODE, MSG FROM WLSLOG_COPY WHERE ( 1=1 ) AND ( 1=1 )

15/04/03 13:24:53 DEBUG db.DBRecordReader: Using fetchSize for next query: 1000

15/04/03 13:24:53 INFO db.DBRecordReader: Executing query: SELECT TIME_STAMP, CATEGORY, TYPE, SERVERNAME, CODE, MSG FROM WLSLOG_COPY WHERE ( 1=1 ) AND ( 1=1 )

15/04/03 13:25:01 INFO mapred.LocalJobRunner:

15/04/03 13:25:06 INFO mapred.LocalJobRunner:

15/04/03 13:25:07 INFO mapred.JobClient: map 100% reduce 0%

15/04/03 13:25:12 INFO mapred.Task: Task:attempt_local846992281_0001_m_000000_0 is done. And is in the process of commiting

15/04/03 13:25:12 INFO mapred.LocalJobRunner:

15/04/03 13:25:12 INFO mapred.Task: Task attempt_local846992281_0001_m_000000_0 is allowed to commit now

15/04/03 13:25:14 INFO output.FileOutputCommitter: Saved output of task 'attempt_local846992281_0001_m_000000_0' to WLSLOG_COPY

15/04/03 13:25:14 INFO mapred.LocalJobRunner:

15/04/03 13:25:14 INFO mapred.Task: Task 'attempt_local846992281_0001_m_000000_0' done.

15/04/03 13:25:14 INFO mapred.LocalJobRunner: Finishing task: attempt_local846992281_0001_m_000000_0

15/04/03 13:25:14 INFO mapred.LocalJobRunner: Map task executor complete.

15/04/03 13:25:15 INFO mapred.JobClient: Job complete: job_local846992281_0001

15/04/03 13:25:15 INFO mapred.JobClient: Counters: 18

15/04/03 13:25:15 INFO mapred.JobClient: File System Counters

15/04/03 13:25:15 INFO mapred.JobClient: FILE: Number of bytes read=21673967

15/04/03 13:25:15 INFO mapred.JobClient: FILE: Number of bytes written=21996158

15/04/03 13:25:15 INFO mapred.JobClient: FILE: Number of read operations=0

15/04/03 13:25:15 INFO mapred.JobClient: FILE: Number of large read operations=0

15/04/03 13:25:15 INFO mapred.JobClient: FILE: Number of write operations=0

15/04/03 13:25:15 INFO mapred.JobClient: HDFS: Number of bytes read=0

15/04/03 13:25:15 INFO mapred.JobClient: HDFS: Number of bytes written=717

15/04/03 13:25:15 INFO mapred.JobClient: HDFS: Number of read operations=1

15/04/03 13:25:15 INFO mapred.JobClient: HDFS: Number of large read operations=0

15/04/03 13:25:15 INFO mapred.JobClient: HDFS: Number of write operations=2

15/04/03 13:25:15 INFO mapred.JobClient: Map-Reduce Framework

15/04/03 13:25:15 INFO mapred.JobClient: Map input records=7

15/04/03 13:25:15 INFO mapred.JobClient: Map output records=7

15/04/03 13:25:16 INFO mapred.JobClient: Input split bytes=87

15/04/03 13:25:16 INFO mapred.JobClient: Spilled Records=0

15/04/03 13:25:16 INFO mapred.JobClient: CPU time spent (ms)=0

15/04/03 13:25:16 INFO mapred.JobClient: Physical memory (bytes) snapshot=0

15/04/03 13:25:16 INFO mapred.JobClient: Virtual memory (bytes) snapshot=0

15/04/03 13:25:16 INFO mapred.JobClient: Total committed heap usage (bytes)=180756480

15/04/03 13:25:16 INFO mapreduce.ImportJobBase: Transferred 717 bytes in 182.8413 seconds (3.9214 bytes/sec)

15/04/03 13:25:16 INFO mapreduce.ImportJobBase: Retrieved 7 records.

15/04/03 13:25:16 DEBUG util.ClassLoaderStack: Restoring classloader: sun.misc.Launcher$AppClassLoader@3d4817

15/04/03 13:25:16 DEBUG hive.HiveImport: Hive.inputTable: WLSLOG_COPY

15/04/03 13:25:16 DEBUG hive.HiveImport: Hive.outputTable: WLS_LOG

15/04/03 13:25:16 DEBUG manager.OracleManager: Using column names query: SELECT t.* FROM WLSLOG_COPY t WHERE 1=0

15/04/03 13:25:16 DEBUG manager.SqlManager: Execute getColumnInfoRawQuery : SELECT t.* FROM WLSLOG_COPY t WHERE 1=0

15/04/03 13:25:16 DEBUG manager.OracleManager$ConnCache: Got cached connection for jdbc:oracle:thin:@localhost:1521:ORCL/OE

15/04/03 13:25:18 INFO manager.OracleManager: Time zone has been set to GMT

15/04/03 13:25:18 DEBUG manager.SqlManager: Using fetchSize for next query: 1000

15/04/03 13:25:18 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM WLSLOG_COPY t WHERE 1=0

15/04/03 13:25:21 DEBUG manager.OracleManager$ConnCache: Caching released connection for jdbc:oracle:thin:@localhost:1521:ORCL/OE

15/04/03 13:25:21 DEBUG hive.TableDefWriter: Create statement: CREATE TABLE `WLS_LOG` ( `TIME_STAMP` STRING, `CATEGORY` STRING, `TYPE` STRING, `SERVERNAME` STRING, `CODE` STRING, `MSG` STRING) COMMENT 'Imported by sqoop on 2015/04/03 13:25:21' ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001' LINES TERMINATED BY '\012' STORED AS TEXTFILE

15/04/03 13:25:21 DEBUG hive.TableDefWriter: Load statement: LOAD DATA INPATH 'hdfs://10.0.2.15:8020/user/root/WLSLOG_COPY' INTO TABLE `WLS_LOG`

15/04/03 13:25:21 INFO hive.HiveImport: Loading uploaded data into Hive

15/04/03 13:25:23 DEBUG hive.HiveImport: Using in-process Hive instance.

15/04/03 13:25:23 DEBUG util.SubprocessSecurityManager: Installing subprocess security manager

Logging initialized using configuration in jar:file:/sqoop/hive-0.13.1-cdh5.2.0/lib/hive-common-0.13.1-cdh5.2.0.jar!/hive-log4j.properties

OK

Time taken: 75.724 seconds

Loading data to table default.wls_log

Table default.wls_log stats: [numFiles=1, numRows=0, totalSize=717, rawDataSize=0]

OK

Time taken: 36.523 seconds

 

Start the Hive Thrift Server.

 

hive --service hiveserver
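The hiveserver service runs in the foreground; to keep the terminal free for the Hive shell, it can be started in the background instead:

hive --service hiveserver &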

 

Start the Hive shell.

 

hive

 

Run the following SELECT statement in the Hive shell to list the imported data.

 

SELECT * FROM default.wls_log;

 

The 7 rows of data imported from Oracle Database get listed.

 


 

 

 

In this tutorial we used Sqoop 1.4.5 with Oracle Database 11g.



2015-10: October Issue


Top stories for October 2015

Feature article

Statistica Analytics Forum Now Open
for Toad Users

As you join the growing number of Toad users who are feeding data into the Dell Statistica analytics platform, and discovering the benefits of end-to-end solutions, be sure to tap into the expertise of your new peers in the Statistica User Discussion Forum. Come post content and questions about all things analytics ― and especially about the integration of Statistica with Toad Data Point and Toad Intelligence Central.

This new forum is monitored by Dell experts and peers alike, so your posts will be addressed as the forum grows. Describe your best practices, seek feedback on vexing challenges, make product suggestions…and steer your fellow Toad users here, too. Additional Statistica forums cover education and Statistica Visual Basic topics. We’re looking forward to “meeting” you online!

Check Out the Forum

 

 

From the pipelines

Oracle®: Calling REST Web Services from ADF

by Sten Vesterli

Web services used to be offered only as SOAP web services. These have many advantages, such as a robust description in a Web Service Definition Language (WSDL) file and good error handling.

However, many developers find working with SOAP cumbersome and prefer the simpler interface of REST web services. In a REST web service, the call is simply an HTTP request to a specific URL, with all necessary parameters passed as part of the URL. The response is a document, often in JSON or XML format, that can then be parsed.

In this article, learn how to call a simple REST API by constructing a URL, receiving a JSON response and parsing it to extract the relevant parts.
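As a generic illustration (the endpoint, parameter, and response shown here are hypothetical, not from the article), a REST call is just an HTTP request to a parameterized URL:

curl "https://api.example.com/customers/42/orders?status=OPEN"

The server might answer with a JSON document such as {"orderId": 1001, "status": "OPEN"}, which the caller then parses to extract the relevant fields.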

Read Full Article

 

SQL Server®: Where Is My Data Stored?

by Andrew Pruski

Tables within SQL Server can contain a maximum of 8060 bytes per row. However, even though individual columns are limited to a maximum of 8000 bytes in size, the combined size of all the columns in a table can exceed the 8060-byte row limit. But what happens when this limit is exceeded?
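As a quick illustration (a hypothetical table, not taken from the article), SQL Server accepts a definition whose variable-length columns can jointly exceed the row limit, and rows that actually outgrow 8060 bytes have their overflowing data moved off-row:

CREATE TABLE dbo.WideRow (
    id INT IDENTITY PRIMARY KEY,
    a  VARCHAR(8000),
    b  VARCHAR(8000)   -- a + b can total 16000 bytes, well over 8060
);
INSERT INTO dbo.WideRow (a, b)
VALUES (REPLICATE('x', 8000), REPLICATE('y', 8000));  -- succeeds; the excess is stored in row-overflow pages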

In this insightful article, you’ll learn where and why SQL Server is really storing your data.

Read Full Article

 

IBM® DB2®: Understanding DB2 for LUW Messages

by Craig Mullins

There is a lot of information and technology for users of DB2 for Linux, Unix and Windows to learn and master in order to be an effective DBA or SQL developer. At a minimum, you will need to understand relational database design, SQL and the basics of DB2, like how to issue a command, how to use Control Center and the like. But you will also need to know what to do when you are working with DB2 and you get an error message. In this article, you’ll learn how to interpret the string of characters and numbers included in a DB2 LUW error message.
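As a generic example of the format (the object name below is hypothetical), a DB2 LUW message such as

SQL0204N "OE.WLSLOG" is an undefined name. SQLSTATE=42704

breaks down into the "SQL" prefix identifying the message source, "0204" as the message number, a trailing severity letter (W = warning, N = error, C = critical), and an accompanying SQLSTATE code giving the standard error class.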

Read Full Article

 

Data Analytics: How to Use Business Analytics on Agile Projects

by John Weathington

What happened to the role of the business analyst on agile software projects? Extreme programming, a pioneering agile methodology, talks a lot about the interactions between developers and customers, but there's no mention of business analysts. Scrum, one of the more popular agile methodologies these days, talks about a scrum team, which may include an analyst, but there's not much prescription about what a business analyst should be doing on a scrum team.


In this article, learn how the business analyst can provide an invaluable service on an agile project. 

Read Full Article

 

Priceline.com Achieves Close to 100 Percent Uptime Over a Decade of SharePlex Use

Priceline.com has relied on SharePlex for a decade to ensure high availability and performance for its active website, which can exceed 10 million unique visitors per month. The company not only enjoys nearly 100 percent availability, but also seamless migrations and minimized hardware and storage costs.

Check out the new Priceline.com case study to see how they used SharePlex to offload reporting, reduce the risks of hardware and software changes, perform impact-free Oracle platform migrations and maintain high availability and disaster recovery in their complex environment.

See how the right toolset can help your organization ensure continuous uptime of Oracle databases to improve overall system availability and keep customers happy.

Read Case Study

Upcoming Events

Dell World Software User Forum
October 20 – 22, 2015

Join us in Austin, Texas, to solve your biggest IT challenges head on. You’ll propel your career into the future with direct access to the engineers and experts behind the software products you depend on every day. Deep dive into advanced analytics and Anypoint Systems Management. Learn the ins and outs of secure network access. Get hands-on with data protection and more.

Oracle OpenWorld, San Francisco, CA
October 26-28, 2015

Join Dell Software at the Moscone Center in San Francisco for the world’s most insightful sessions on Oracle. 

PASS Summit, Seattle, WA
October 28-30, 2015

Join Dell Software at the largest, most intensive conference for Microsoft SQL Server and BI professionals. 

 

Support tips and tricks

Did you know about the Information Management blog community on Dell’s TechCenter?

Check out the top three blogs:

 

Dell
1 Dell Way, Round Rock, TX 78664 U.S.A
Refer to our web site for international office information

Toad World
www.toadworld.com
E-mail: admin@toadworld.com

Wait Types


See Also: [[wiki:Main Page|Main_Page]] - [[wiki:Monitoring & Tuning|Monitoring & Tuning]] - [[wiki:Wait Events|Wait Events]]

This article is the Collaboration of the Month for February 2010. Find out how it can be improved, read [[wiki:How To Help|how to edit articles]], then jump in to make this an article we can be proud of!


What Are SQL Server Waits?

Instead of measuring activity of CPU, storage, or memory, why not ask what SQL Server has been waiting on when executing queries? Starting with SQL Server 2005, some of SQL Server's [[wiki:DMVs|Dynamic Management Views (DMVs)]] return wait data - measurements of what the database engine has been waiting on.

In general there are three categories of waits that could affect any given request:

  • Resource waits are caused by a particular resource, perhaps a specific lock that is unavailable when the request is submitted. Resource waits are the ones you should focus on for troubleshooting the large majority of performance issues.
  • External waits occur when a SQL Server worker thread is waiting on an external process, such as an extended stored procedure, to complete. An external wait does not necessarily mean that the connection is idle; rather, it might mean that SQL Server is executing external code that it cannot control.
  • Queue waits occur when a worker thread is idle and is waiting for work to be assigned to it. They normally apply to internal background tasks, such as ghost cleanup, which physically removes records that have been previously deleted. Normally you don't have to worry about any performance degradation due to queue waits.

You should expect some waits on a busy system. This is completely normal and doesn't necessarily translate into a performance issue. Wait events become a problem only when they are consistently long over a significant period of time. For example, waits totaling a few milliseconds over a 2 hour monitoring window are not concerning, while waits totaling over 15 minutes in the same window should be investigated more closely.

Queries to Check SQL Server Waits

  • [[wiki:Misc DMV queries|Current SQL Server Activity]] - a replacement for SP_Who2 that checks active queries, waits one second, then checks again. For all active queries, it shows their command and what wait type is holding them up.

Want to add more queries here? Go to the [[wiki:Transact SQL Code Library|Transact SQL Code Library]], click Edit, and add a new link on that page to describe your query. Just copy/paste one of the other links and edit it. After you save the page, your newly created link will appear red. You can click on it to edit a new page. Then come back here and add a link to it.
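As a starting point, a cumulative picture of waits since the last service restart can be pulled from sys.dm_os_wait_stats (a minimal sketch; the list of benign wait types excluded here is deliberately short and should be extended for real analysis):

SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms   -- portion spent waiting for CPU after the resource became available
FROM   sys.dm_os_wait_stats
WHERE  wait_type NOT IN ('LAZYWRITER_SLEEP', 'SLEEP_TASK', 'SQLTRACE_BUFFER_FLUSH',
                         'WAITFOR', 'REQUEST_FOR_DEADLOCK_SEARCH', 'CHECKPOINT_QUEUE')
ORDER BY wait_time_ms DESC;

Because these counters are cumulative, comparing two snapshots taken at the start and end of a monitoring window isolates the waits for just that window.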

Explanations of SQL Server Wait Types

Some of these waits occur for internal operations and no tuning is necessary to avoid such waits - we identify those as well. Some of the following have more than one wait type. If you're looking for QUERY_NOTIFICATION_SUBSCRIPTION_MUTEX, for example, click on the QUERY_NOTIFICATION_* group and each of the underlying waits will be listed there.

  • [[wiki:ABR|ABR]] -
  • [[wiki:ASSEMBLY LOAD|ASSEMBLY_LOAD]] -
  • [[wiki:ASYNC DISKPOOL LOCK|ASYNC_DISKPOOL_LOCK]] - I/O
  • [[wiki:ASYNC IO COMPLETION|ASYNC_IO_COMPLETION]] - I/O Used to indicate a worker is waiting on an asynchronous I/O operation (not associated with database pages) to complete
  • [[wiki:ASYNC NETWORK IO|ASYNC_NETWORK_IO]] - Network
  • [[wiki:AUDIT GROUPCACHE LOCK|AUDIT_GROUPCACHE_LOCK]] -
  • [[wiki:AUDIT LOGINCACHE LOCK|AUDIT_LOGINCACHE_LOCK]] -
  • [[wiki:AUDIT ON DEMAND TARGET LOCK|AUDIT_ON_DEMAND_TARGET_LOCK]] -
  • [[wiki:AUDIT XE SESSION MGR|AUDIT_XE_SESSION_MGR]] -
  • [[wiki:BACKUP|BACKUP]] - Backup
  • [[wiki:BACKUP CLIENTLOCK|BACKUP_CLIENTLOCK]] - Backup
  • [[wiki:BACKUP OPERATOR|BACKUP_OPERATOR]] - Backup
  • [[wiki:BACKUPBUFFER|BACKUPBUFFER]] - Backup
  • [[wiki:BACKUPIO|BACKUPIO]] - Backup
  • [[wiki:BACKUPTHREAD|BACKUPTHREAD]] - Backup
  • [[wiki:BAD PAGE PROCESS|BAD_PAGE_PROCESS]] - Memory
  • [[wiki:BROKER *|BROKER_*]] - Service Broker
  • [[wiki:BUILTIN HASHKEY MUTEX|BUILTIN_HASHKEY_MUTEX]] - Internal
  • [[wiki:CHECK PRINT RECORD|CHECK_PRINT_RECORD]] -
  • [[wiki:CHECKPOINT QUEUE|CHECKPOINT_QUEUE]] - Buffer Used by the background worker that waits on events on the queue to process checkpoint requests. This is an "optional" wait type; see the Important Notes section in the blog
  • [[wiki:CHKPT|CHKPT]] - Buffer Used to coordinate the checkpoint background worker thread with recovery of master, so checkpoint won't start accepting queue requests until master is online
  • [[wiki:CLEAR DB|CLEAR_DB]] -
  • [[wiki:CLR *|CLR_*]] - Common Language Runtime (CLR)
  • [[wiki:CLRHOST STATE ACCESS|CLRHOST_STATE_ACCESS]] -
  • [[wiki:CMEMTHREAD|CMEMTHREAD]] - Memory
  • [[wiki:COMMIT TABLE|COMMIT_TABLE]] -
  • [[wiki:CURSOR|CURSOR]] - Internal
  • [[wiki:CURSOR ASYNC|CURSOR_ASYNC]] - Internal
  • [[wiki:CXPACKET|CXPACKET]] - Query Used to synchronize threads involved in a parallel query. This wait type only means a parallel query is executing.
  • [[wiki:CXROWSET SYNC|CXROWSET_SYNC]] -
  • [[wiki:DAC INIT|DAC_INIT]] -
  • [[wiki:DBMIRROR *|DBMIRROR_*]] - Database Mirroring
  • [[wiki:DBMIRRORING CMD|DBMIRRORING_CMD]] - Database Mirroring
  • [[wiki:DBTABLE|DBTABLE]] - Internal
  • [[wiki:DEADLOCK ENUM MUTEX|DEADLOCK_ENUM_MUTEX]] - Lock
  • [[wiki:DEADLOCK TASK SEARCH|DEADLOCK_TASK_SEARCH]] - Lock
  • [[wiki:DEBUG|DEBUG]] - Internal
  • [[wiki:DISABLE VERSIONING|DISABLE_VERSIONING]] - Row versioning
  • [[wiki:DISKIO SUSPEND|DISKIO_SUSPEND]] - BACKUP Used to indicate a worker is waiting to process I/O for a database or log file associated with a SNAPSHOT BACKUP
  • [[wiki:DISPATCHER QUEUE SEMAPHORE|DISPATCHER_QUEUE_SEMAPHORE]] -
  • [[wiki:DLL LOADING MUTEX|DLL_LOADING_MUTEX]] - XML
  • [[wiki:DROPTEMP|DROPTEMP]] - Temporary Objects
  • [[wiki:DTC|DTC]] - Distributed Transaction Coordinator (DTC)
  • [[wiki:DTC ABORT REQUEST|DTC_ABORT_REQUEST]] - DTC
  • [[wiki:DTC RESOLVE|DTC_RESOLVE]] - DTC
  • [[wiki:DTC STATE|DTC_STATE]] - DTC
  • [[wiki:DTC TMDOWN REQUEST|DTC_TMDOWN_REQUEST]] - DTC
  • [[wiki:DTC WAITFOR OUTCOME|DTC_WAITFOR_OUTCOME]] - DTC
  • [[wiki:DUMP LOG *|DUMP_LOG_*]] -
  • [[wiki:DUMPTRIGGER|DUMPTRIGGER]] -
  • [[wiki:EC|EC]] -
  • [[wiki:EE PMOLOCK|EE_PMOLOCK]] - Memory
  • [[wiki:EE SPECPROC MAP INIT|EE_SPECPROC_MAP_INIT]] - Internal
  • [[wiki:ENABLE VERSIONING|ENABLE_VERSIONING]] - Row versioning
  • [[wiki:ERROR REPORTING MANAGER|ERROR_REPORTING_MANAGER]] - Internal
  • [[wiki:EXCHANGE|EXCHANGE]] - Parallelism (processor)
  • [[wiki:EXECSYNC|EXECSYNC]] - Parallelism (processor)
  • [[wiki:EXECUTION PIPE EVENT INTERNAL|EXECUTION_PIPE_EVENT_INTERNAL]] -
  • [[wiki:Failpoint|Failpoint]] -
  • [[wiki:FCB REPLICA *|FCB_REPLICA_*]] - Database snapshot
  • [[wiki:FS FC RWLOCK|FS_FC_RWLOCK]] -
  • [[wiki:FS GARBAGE COLLECTOR SHUTDOWN|FS_GARBAGE_COLLECTOR_SHUTDOWN]] -
  • [[wiki:FS HEADER RWLOCK|FS_HEADER_RWLOCK]] -
  • [[wiki:FS LOGTRUNC RWLOCK|FS_LOGTRUNC_RWLOCK]] -
  • [[wiki:FSA FORCE OWN XACT|FSA_FORCE_OWN_XACT]] -
  • [[wiki:FSAGENT|FSAGENT]] -
  • [[wiki:FSTR CONFIG *|FSTR_CONFIG_*]] -
  • [[wiki:FT *|FT_*]] - Full Text Search
  • [[wiki:GUARDIAN|GUARDIAN]] -
  • [[wiki:HTTP ENDPOINT COLLCREATE|HTTP_ENDPOINT_COLLCREATE]] -
  • [[wiki:HTTP ENUMERATION|HTTP_ENUMERATION]] - Service Broker
  • [[wiki:HTTP START|HTTP_START]] - Service Broker
  • [[wiki:IMP IMPORT MUTEX|IMP_IMPORT_MUTEX]] -
  • [[wiki:IMPPROV IOWAIT|IMPPROV_IOWAIT]] - I/O
  • [[wiki:INDEX USAGE STATS MUTEX|INDEX_USAGE_STATS_MUTEX]] -
  • [[wiki:INTERNAL TESTING|INTERNAL_TESTING]] -
  • [[wiki:IO AUDIT MUTEX|IO_AUDIT_MUTEX]] - Profiler Trace
  • [[wiki:IO COMPLETION|IO_COMPLETION]] - I/O Used to indicate a wait for I/O operations (typically synchronous), such as sorts and various situations where the engine needs to do synchronous I/O
  • [[wiki:IO RETRY|IO_RETRY]] -
  • [[wiki:IOAFF RANGE QUEUE|IOAFF_RANGE_QUEUE]] -
  • [[wiki:KSOURCE WAKEUP|KSOURCE_WAKEUP]] - Shutdown Used by the background worker "signal handler", which waits for a signal to shut down SQL Server
  • [[wiki:KTM *|KTM_*]] -
  • [[wiki:LATCH *|LATCH_*]] - Latch
  • [[wiki:LAZYWRITER SLEEP|LAZYWRITER_SLEEP]] - Buffer Used by the Lazywriter background worker to indicate it is sleeping waiting to wake up and check for work to do
  • [[wiki:LCK M *|LCK_M_*]] - Lock
  • [[wiki:LOGBUFFER|LOGBUFFER]] - Transaction Log Used to indicate a worker thread is waiting for a log buffer to write log blocks for a transaction
  • [[wiki:LOGGENERATION|LOGGENERATION]] -
  • [[wiki:LOGMGR *|LOGMGR_*]] - Internal
  • [[wiki:LOWFAIL MEMMGR QUEUE|LOWFAIL_MEMMGR_QUEUE]] - Memory
  • [[wiki:METADATA LAZYCACHE RWLOCK|METADATA_LAZYCACHE_RWLOCK]] -
  • [[wiki:MIRROR SEND MESSAGE|MIRROR_SEND_MESSAGE]] -
  • [[wiki:MISCELLANEOUS|MISCELLANEOUS]] - Ignore This really should be called "Not Waiting".
  • [[wiki:MSQL DQ|MSQL_DQ]] - Distributed Query
  • [[wiki:MSQL SYNC PIPE|MSQL_SYNC_PIPE]] -
  • [[wiki:MSQL XACT MGR MUTEX|MSQL_XACT_MGR_MUTEX]] - Transaction
  • [[wiki:MSQL XACT MUTEX|MSQL_XACT_MUTEX]] - Transaction
  • [[wiki:MSQL XP|MSQL_XP]] - Extended Procedure
  • [[wiki:MSSEARCH|MSSEARCH]] - Full-Text Search
  • [[wiki:NET WAITFOR PACKET|NET_WAITFOR_PACKET]] - Network
  • [[wiki:NODE CACHE MUTEX|NODE_CACHE_MUTEX]] -
  • [[wiki:OLEDB|OLEDB]] - OLEDB
  • [[wiki:ONDEMAND TASK QUEUE|ONDEMAND_TASK_QUEUE]] - Internal
  • [[wiki:PAGEIOLATCH *|PAGEIOLATCH_*]] - Latch
  • [[wiki:PAGELATCH *|PAGELATCH_*]] - Latch
  • [[wiki:PARALLEL BACKUP QUEUE|PARALLEL_BACKUP_QUEUE]] - Backup or Restore
  • [[wiki:PERFORMANCE COUNTERS RWLOCK|PERFORMANCE_COUNTERS_RWLOCK]] -
  • [[wiki:PREEMPTIVE ABR|PREEMPTIVE_ABR]] -
  • [[wiki:PREEMPTIVE AUDIT *|PREEMPTIVE_AUDIT_*]] -
  • [[wiki:PREEMPTIVE CLOSEBACKUPMEDIA|PREEMPTIVE_CLOSEBACKUPMEDIA]] -
  • [[wiki:PREEMPTIVE CLOSEBACKUPTAPE|PREEMPTIVE_CLOSEBACKUPTAPE]] -
  • [[wiki:PREEMPTIVE CLOSEBACKUPVDIDEVICE|PREEMPTIVE_CLOSEBACKUPVDIDEVICE]] -
  • [[wiki:PREEMPTIVE CLUSAPI CLUSTERRESOURCECONTROL|PREEMPTIVE_CLUSAPI_CLUSTERRESOURCECONTROL]] -
  • [[wiki:PREEMPTIVE COM *|PREEMPTIVE_COM_*]] -
  • [[wiki:PREEMPTIVE CONSOLEWRITE|PREEMPTIVE_CONSOLEWRITE]] -
  • [[wiki:PREEMPTIVE CREATEPARAM|PREEMPTIVE_CREATEPARAM]] -
  • [[wiki:PREEMPTIVE DEBUG|PREEMPTIVE_DEBUG]] -
  • [[wiki:PREEMPTIVE DFSADDLINK|PREEMPTIVE_DFSADDLINK]] -
  • [[wiki:PREEMPTIVE DFS*|PREEMPTIVE_DFS*]] -
  • [[wiki:PREEMPTIVE DTC *|PREEMPTIVE_DTC_*]] -
  • [[wiki:PREEMPTIVE FILESIZEGET|PREEMPTIVE_FILESIZEGET]] -
  • [[wiki:PREEMPTIVE FSAOLEDB *|PREEMPTIVE_FSAOLEDB_*]] -
  • [[wiki:PREEMPTIVE FSRECOVER UNCONDITIONALUNDO|PREEMPTIVE_FSRECOVER_UNCONDITIONALUNDO]] -
  • [[wiki:PREEMPTIVE GETRMINFO|PREEMPTIVE_GETRMINFO]] -
  • [[wiki:PREEMPTIVE LOCKMONITOR|PREEMPTIVE_LOCKMONITOR]] -
  • [[wiki:PREEMPTIVE MSS RELEASE|PREEMPTIVE_MSS_RELEASE]] -
  • [[wiki:PREEMPTIVE ODBCOPS|PREEMPTIVE_ODBCOPS]] -
  • [[wiki:PREEMPTIVE OLE UNINIT|PREEMPTIVE_OLE_UNINIT]] -
  • [[wiki:PREEMPTIVE OLEDB *|PREEMPTIVE_OLEDB_*]] -
  • [[wiki:PREEMPTIVE OLEDBOPS|PREEMPTIVE_OLEDBOPS]] -
  • [[wiki:PREEMPTIVE OS *|PREEMPTIVE_OS_*]] -
  • [[wiki:PREEMPTIVE REENLIST|PREEMPTIVE_REENLIST]] -
  • [[wiki:PREEMPTIVE RESIZELOG|PREEMPTIVE_RESIZELOG]] -
  • [[wiki:PREEMPTIVE ROLLFORWARDREDO|PREEMPTIVE_ROLLFORWARDREDO]] -
  • PREEMPTIVE_ROLLFORWARDUNDO -
  • PREEMPTIVE_SB_STOPENDPOINT -
  • PREEMPTIVE_SERVER_STARTUP -
  • PREEMPTIVE_SETRMINFO -
  • PREEMPTIVE_SHAREDMEM_GETDATA -
  • PREEMPTIVE_SNIOPEN -
  • PREEMPTIVE_SOSHOST -
  • PREEMPTIVE_SOSTESTING -
  • PREEMPTIVE_STARTRM -
  • PREEMPTIVE_STREAMFCB_CHECKPOINT -
  • PREEMPTIVE_STREAMFCB_RECOVER -
  • PREEMPTIVE_STRESSDRIVER -
  • PREEMPTIVE_TESTING -
  • PREEMPTIVE_TRANSIMPORT -
  • PREEMPTIVE_UNMARSHALPROPAGATIONTOKEN -
  • PREEMPTIVE_VSS_CREATESNAPSHOT -
  • PREEMPTIVE_VSS_CREATEVOLUMESNAPSHOT -
  • [[wiki:PREEMPTIVE XE *|PREEMPTIVE_XE_*]] -
  • PREEMPTIVE_XETESTING -
  • PREEMPTIVE_XXX - Varies Used to indicate a worker is running code that is not under the SQLOS scheduling system
  • PRINT_ROLLBACK_PROGRESS - Alter Database state
  • QNMANAGER_ACQUIRE -
  • QPJOB_KILL - Update of statistics
  • QPJOB_WAITFOR_ABORT - Update of statistics
  • QRY_MEM_GRANT_INFO_MUTEX -
  • QUERY_ERRHDL_SERVICE_DONE -
  • QUERY_EXECUTION_INDEX_SORT_EVENT_OPEN - Building indexes
  • [[wiki:QUERY NOTIFICATION *|QUERY_NOTIFICATION_*]] - Query Notification Manager
  • QUERY_OPTIMIZER_PRINT_MUTEX - Query Notification Manager
  • QUERY_TRACEOUT - Query Notification Manager
  • QUERY_WAIT_ERRHDL_SERVICE -
  • RECOVER_CHANGEDB - Internal
  • REPL_CACHE_ACCESS - Replication
  • REPL_HISTORYCACHE_ACCESS -
  • REPL_SCHEMA_ACCESS - Replication
  • REPL_TRANHASHTABLE_ACCESS -
  • REPLICA_WRITES - Database Snapshots
  • REQUEST_DISPENSER_PAUSE - Backup or Restore
  • REQUEST_FOR_DEADLOCK_SEARCH - Lock Used by the background worker "Lock Monitor" to search for deadlocks. This is an "optional" wait type; see the Important Notes section in the blog
  • RESMGR_THROTTLED -
  • RESOURCE_QUERY_SEMAPHORE_COMPILE - Query Used to indicate a worker is waiting to compile a query because too many other concurrent query compilations require "not small" amounts of memory.
  • RESOURCE_QUEUE - Internal
  • [[wiki:RESOURCE SEMAPHORE *|RESOURCE_SEMAPHORE_*]] - Query Used to indicate a worker is waiting to be allowed to perform an operation requiring "query memory" such as hashes and sorts
  • RG_RECONFIG -
  • SEC_DROP_TEMP_KEY - Security
  • SECURITY_MUTEX -
  • SEQUENTIAL_GUID -
  • SERVER_IDLE_CHECK - Internal
  • SHUTDOWN - Internal
  • [[wiki:SLEEP *|SLEEP_*]] - Internal
  • [[wiki:SNI *|SNI_*]] - Internal
  • [[wiki:SOAP *|SOAP_*]] - SOAP
  • [[wiki:SOS *|SOS_*]] - Internal
  • [[wiki:SOSHOST *|SOSHOST_*]] - CLR
  • [[wiki:SQLCLR *|SQLCLR_*]] - CLR
  • SQLSORT_NORMMUTEX -
  • SQLSORT_SORTMUTEX -
  • [[wiki:SQLTRACE *|SQLTRACE_*]] - Trace
  • SRVPROC_SHUTDOWN -
  • TEMPOBJ -
  • THREADPOOL - SQLOS Indicates a wait for a task to be assigned to a worker thread
  • TIMEPRIV_TIMEPERIOD -
  • TRACE_EVTNOTIF -
  • [[wiki:TRACEWRITE|TRACEWRITE]] -
  • [[wiki:TRAN *|TRAN_*]] - TRAN_MARKLATCH
  • TRANSACTION_MUTEX -
  • UTIL_PAGE_ALLOC -
  • VIA_ACCEPT -
  • VIEW_DEFINITION_MUTEX -
  • WAIT_FOR_RESULTS -
  • WAITFOR - Background
  • WAITFOR_TASKSHUTDOWN -
  • WAITSTAT_MUTEX -
  • WCC -
  • WORKTBL_DROP -
  • WRITE_COMPLETION -
  • WRITELOG - I/O Indicates a worker thread is waiting for LogWriter to flush log blocks.
  • XACT_OWN_TRANSACTION -
  • XACT_RECLAIM_SESSION -
  • XACTLOCKINFO -
  • XACTWORKSPACE_MUTEX -
  • [[wiki:XE *|XE_*]] - XEvent

Related Reading

Toad Editions and Features Matrix

Toad for Oracle Freeware v12.8 (32-bit)


This is the FREEWARE edition of Toad™ for Oracle. The Freeware edition has certain limitations, and is not intended to be used as a TRIAL for the Commercial edition of Toad for Oracle.

Notes: 

  • The Toad for Oracle Freeware version may be used for a maximum of five (5) Seats within Customer's organization and expires each year after the date of its initial download ("Freeware Term"). Upon expiration of the Freeware Term, the same 5 Seats may be downloaded again by the same users for the Freeware Term. For more than five (5) users within an organization, Customer will need to purchase licenses of Commercial Toad for Oracle. This license does not entitle Customer to receive hard-copy documentation, technical support, telephone assistance, or enhancements or updates to the Freeware from Dell Software. The terms "Seat" and "Freeware" shall have the same meaning as those set forth in the Product Guide.
     
  • It is recommended that your client version be of the same release (or higher) as your database server. In addition, to take advantage of Toad's new Unicode support, you must be working with Oracle client/server 9i or above.
     
  • Not all versions of the Oracle client are compatible with all versions of the Oracle Server, which may cause errors within Toad. See Oracle’s Metalink article 207303.1 "Client / Server / Interoperability Support Between Different Oracle Versions" for more information about possible compatibility issues.
     

Resources

 

 

 

POST QUESTION / COMMENT

Do you have a question or comment about this freeware?  Post it to the product forum:

Go to Forum

Toad for Oracle Freeware v12.8 (64-bit)


This is the FREEWARE edition of Toad™ for Oracle. The Freeware edition has certain limitations, and is not intended to be used as a TRIAL for the Commercial edition of Toad for Oracle.

Notes:

  • The Toad for Oracle Freeware version may be used for a maximum of five (5) Seats within Customer's organization and expires each year after the date of its initial download ("Freeware Term"). Upon expiration of the Freeware Term, the same 5 Seats may be downloaded again by the same users for the Freeware Term. For more than five (5) users within an organization, Customer will need to purchase licenses of Commercial Toad for Oracle. This license does not entitle Customer to receive hard-copy documentation, technical support, telephone assistance, or enhancements or updates to the Freeware from Dell Software. The terms "Seat" and "Freeware" shall have the same meaning as those set forth in the Product Guide.
     
  • It is recommended that your client version be of the same release (or higher) as your database server. In addition, to take advantage of Toad's new Unicode support, you must be working with Oracle client/server 9i or above. 
     
  • Not all versions of the Oracle client are compatible with all versions of the Oracle Server, which may cause errors within Toad. See Oracle’s Metalink article 207303.1 "Client / Server / Interoperability Support Between Different Oracle Versions" for more information about possible compatibility issues.

Resources

 

 

POST QUESTION / COMMENT

Do you have a question or comment about this freeware?  Post it to the product forum:

Go to Forum

SQL Optimizer for Oracle

SQL Optimizer proactively identifies potential performance issues and automates SQL optimization by scanning and analyzing running SQL statements. It explores every possible way of improving Oracle SQL performance.

Product Documentation


Learn More About SQL Optimizer for Oracle

The following documents provide information about how to get started with SQL Optimizer, a list of what's new in the latest release, and instructions for installing the product.

SQL Optimizer for Oracle 9.1

SQL Optimizer for Oracle 9.0

SQL Optimizer for Oracle 8.9.1

SQL Optimizer for Oracle 8.9

SQL Optimizer for Oracle 8.8.1

SQL Optimizer for Oracle 8.8

Please visit SupportLink for current and earlier-version product documentation: https://support.software.dell.com/sql-optimizer-for-oracle/

How can I connect to MariaDB?


I tried inserting a user and privileges into the database, but I still can't connect. Please help!
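If the user and privilege rows were inserted directly into the grant tables (e.g., mysql.user), MariaDB will not pick them up until privileges are reloaded. The usual approach is to let the server manage the grant tables instead (a minimal sketch with hypothetical user, password, and database names):

CREATE USER 'toad_user'@'%' IDENTIFIED BY 'secret';   -- '%' allows connections from any host
GRANT ALL PRIVILEGES ON mydb.* TO 'toad_user'@'%';
FLUSH PRIVILEGES;   -- only needed after direct edits to the grant tables

Then connect with that user name and password, making sure the host, port (3306 by default), and database name match the server's configuration.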

