31 December 2015

Migrating your Software Library in Oracle Enterprise Manager 12.1.0.5

In a multi-OMS setup, the Software Library must be kept in a location that all OMS instances can reach under the same path name, so some form of shared storage is needed. This, in turn, may mean you have to migrate your Software Library.

This is a very simple task once you've seen it. Below I will move the Software Library from /oracle/software/softlib to a new mount point, /swlib.

First, go to the Software Library under Setup > Provisioning and Patching.



Second, observe the software library you have and click the Add button to create a new one.




Third, select the old software library and click the Migrate and Remove button.



Fourth, in the popup window, select your target software library from the pulldown.


Observe the confirmation box and click View Job.


The job runs for a few minutes, just wait or get some coffee.


Now the old software library is gone from Cloud Control:

But it's still on your host:



So you can free up that space by removing the old directory.
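For scripted environments, the same storage locations can in principle be managed from the command line with EMCLI. The verb names and flags below are from memory and the location names are made up for this sketch, so verify them with `emcli help <verb>` on your own OMS first; the block only prints the commands (dry run):

```shell
# Dry-run sketch: print the EMCLI commands rather than executing them.
# Verb names and flags are assumptions -- check `emcli help` before use.
EMCLI=${EMCLI:-emcli}
NEW_NAME="swlib"      # hypothetical name for the new storage location
NEW_PATH="/swlib"     # the new mount point
OLD_NAME="softlib"    # hypothetical name of the old storage location

echo "$EMCLI add_swlib_storage_location -name=$NEW_NAME -path=$NEW_PATH"
echo "$EMCLI remove_swlib_storage_location -name=$OLD_NAME"
```

Remove the `echo`s only after you have confirmed the verbs against your EMCLI version.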

Further reading:
https://docs.oracle.com/cd/E24628_01/doc.121/e24473/softwarelib.htm#EMADM11710

23 December 2015

First installation of Cloud Control 13c

Downloaded just yesterday, and already running in my VirtualBox.
Some things to look out for: make sure you have enough memory on your OMS host, at least 10 GB.
Other things I had to correct:

yum install glibc-devel.i686

/etc/sysctl.conf:
   net.ipv4.ip_local_port_range = 11000 65000
Then reload with: sysctl -p

Physical Memory: 10240 MB
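The memory and port-range prerequisites above can be checked with a small pre-flight sketch (Linux only; the 10240 MB threshold is simply the one used for this install):

```shell
# Pre-flight check for the OMS host (Linux): memory and ephemeral port range.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
mem_mb=$((mem_kb / 1024))
echo "Physical memory: ${mem_mb} MB (need >= 10240)"

read low high < /proc/sys/net/ipv4/ip_local_port_range
echo "ip_local_port_range: ${low} ${high} (want 11000 65000)"

[ "$mem_mb" -ge 10240 ] || echo "WARN: OMS host needs at least 10 GB"
[ "$low" -ge 11000 ]    || echo "WARN: raise net.ipv4.ip_local_port_range in /etc/sysctl.conf"
```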

Make sure your EMREP database is precreated and running on 12.1.0.2, with parameter settings:
compatible: 12.1.0.2
optimizer_adaptive_features=FALSE
session_cached_cursors=200 (200 to 500 will do)


Disk space in my test environment:
  1.3 GB  agent
  6.2 GB  app
  2.1 GB  gc_inst
   14 GB  Middleware
  5.2 GB  oradata --> local repository
  0.7 GB  swlib

This is just a small install without any targets; the installation media itself is not counted in this.
The installation is different from what we saw before: you get 5 files, 1 .bin and 4 zips. Don't extract the zips. Just chmod +x the .bin and run ./em13100_linux.bin
This will show:
0%...............................................100%
and then launch the installer.

Ok, installation took me about two hours. The login screen is finally re-designed (what a relief).


And the Enterprise Summary screen has a new template, but feels familiar.


Can't wait to explore it all... 

11 November 2015

dgmgrl switchover or convert gives ORA-01017

I have this setup of dataguard with broker:
Host:     paris.localdomain
Instance: TESTDB  (primary)
Host:     london.localdomain
Instance: TESTDBS (physical standby)

When I do:
dgmgrl /
DGMGRL> show configuration;
It all seems fine.

Errors on switchover
If I do a switchover, I get the following errors:

DGMGRL> switchover to 'TESTDBS';
Performing switchover NOW, please wait...
Operation requires a connection to instance "TESTDBS" on database "TESTDBS"
Connecting to instance "TESTDBS"...
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.

    connect to instance "TESTDBS" of database "TESTDBS"

DGMGRL>       

Errors on convert command

Also, converting to snapshot standby works fine, but converting back to physical standby fails:

DGMGRL> convert database 'TESTDBS' to snapshot standby;
Converting database "TESTDBS" to a Snapshot Standby database, please wait...
Database "TESTDBS" converted successfully
DGMGRL> show configuration;

Configuration - DRSolution

  Protection Mode: MaxPerformance
  Databases:
    TESTDB  - Primary database
    TESTDBS - Snapshot standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> convert database 'TESTDBS' to physical standby;
Converting database "TESTDBS" to a Physical Standby database, please wait...
Operation requires shutdown of instance "TESTDBS" on database "TESTDBS"
Shutting down instance "TESTDBS"...
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.

Please complete the following steps and reissue the CONVERT command:
    shut down instance "TESTDBS" of database "TESTDBS"
    start up and mount instance "TESTDBS" of database "TESTDBS"

DGMGRL>


This leads to a lot of manual work, which is not nice, certainly not in times of emergency.

Solution
Note that for such operations you should not log in with the slash (OS authentication), but with an explicit username and password:

[oracle@paris ~]$ dgmgrl
DGMGRL for Linux: Version 11.2.0.4.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys
Password: ******
Connected.
DGMGRL>

 

The password is explicitly needed!! 
So do NOT use dgmgrl / for these operations.
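For scripted (emergency) use, dgmgrl also accepts the credentials and a broker command on the command line. The connect string below is a placeholder, and in real scripts the password should of course not be hard-coded; this block only prints the command (dry run):

```shell
# Dry run: print a dgmgrl call that connects with an explicit password,
# so remote instance restarts work. Credentials/alias are placeholders.
DG_CONNECT='sys/oracle@TESTDB'        # placeholder, do not hard-code for real
DG_COMMAND="switchover to 'TESTDBS'"
echo "dgmgrl $DG_CONNECT \"$DG_COMMAND\""
```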

This took me a few hours to find out; I hope never to forget it.

Documentation
The documentation that I should have read is:
http://docs.oracle.com/cd/B28359_01/server.111/b28295/dgmgrl.htm#BABHECFB and then, under 8.1.1, the phrase:
       (remote database restarts will not work)
How could I have missed that....



11 September 2015

Upgrade from 11.2 to 12.1 with just 24 seconds downtime

Rolling upgrade with a Transient Logical Standby is a well-known MAA (Maximum Availability Architecture) technique to minimize downtime during an Oracle database upgrade.

The white paper Database Rolling Upgrade Using Transient Logical Standby: Oracle Data Guard 11g has been available for quite some time. But the steps involved in testing this technique require a lot of skill, patience, hardware and experience, aka blood, sweat and tears.

Limitations
  • An important limitation is that you need to be able to install both the old and the new Oracle software on both nodes. For instance, Oracle 10.2 can be made to run on Oracle Linux 6.4, but that combination is not supported (follow OraToolkit if you need to do this). So pick a platform that supports both versions.
  • You might have unsupported data types, read the white paper (above) to check this.
  • You cannot use Data Guard Broker during this setup.

Last week I finally got this working in VirtualBox on my laptop. Here are several hints on how.

Create two nodes in Virtualbox
Create two Oracle Linux 6.4 hosts ‘london’ and ‘paris’. Make sure they can ssh and talk to each other. Provide ssh keys so that copying is easy.
Setup Oracle 11.2.0.4 and Oracle 12.1.0.2 on both nodes.
Setup a listener and demo database on ‘london’.

Setup standby

Test the physical standby and make sure that logs are applied (which, in my experience, may take a few minutes to start). You might want to use this blog: http://sys-admin.wikidot.com/check-dataguard

Make sure you have a large db_recovery_file_dest_size on both instances. The restore point will require this space during database upgrade.

Test the switchover (and back); you might want to use this blog for that: http://www.oracledistilled.com/oracle-database/data-guard-switchover-to-a-physical-standby

Run preupgrd.sql from $ORACLE12/rdbms/admin and resolve any problems.

Use the physru.sh script
Via the note Oracle11g Data Guard: Database Rolling Upgrade Shell Script (Doc ID 949322.1) you can download the physru.sh script. How it is used in practice is explained in the blog Minimal downtime rolling database upgrade to 12c Release 1 by Gavin Soorma. Follow that post and you will execute physru.sh three times (from the 'london' primary host). Gavin explains in detail how this works (the following text is copied from his blog):

First execution
  • Creates control file backups for both the primary and the target physical standby database
  • Creates Guaranteed Restore Points (GRP) on both the primary database and the physical standby database that can be used to flash back to the beginning of the process or to any intermediate step along the way.
  • Converts the physical standby into a transient logical standby database.


Second execution
  • Uses SQL Apply to synchronize the transient logical standby database and make it current with the primary
  • Performs a switchover to the upgraded 12c transient logical standby, which then becomes the primary
  • Performs a flashback on the original primary database to the initial Guaranteed Restore Point and converts the original primary into a physical standby


Third execution
  • Starts Redo Apply on the new physical standby database (the original primary database) to apply all redo that has been generated during the rolling upgrade process, including any SQL statements that have been executed on the transient logical standby as part of the upgrade.
  • When synchronized, the script offers the option of performing a final switchover to return the databases to their original roles of primary and standby, but now on the new 12c database software version.
  • Removes all Guaranteed Restore Points
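As I recall from MOS note 949322.1, the same command line is used for all three runs; the script tracks its own progress between executions. The hostnames, TNS aliases and database names below are placeholders, and the argument order should be verified against the script's own usage text. The block only prints the commands:

```shell
# Dry run: print the physru.sh invocation for each of the three executions.
# Assumed argument order (verify with the script's usage output):
#   physru.sh <sysdba user> <primary TNS> <standby TNS> \
#             <primary unique name> <standby unique name> <target version>
CMD="./physru.sh sys london_tns paris_tns LONDON PARIS 12.1.0.2"
for run in 1 2 3; do
  echo "Execution $run (on the primary host 'london'): $CMD"
done
```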


Results
The results are displayed after the third execution of physru.sh. As you can see, the whole process took a lot of time (about 6 hours), mainly because my laptop was running out of space. In the end, the steps were successfully completed, with a service downtime of just 24 seconds.


Second attempt
A few days later, I retried the technique. The upgrade went much more smoothly, and I also switched back at the end. This gives you additional downtime (for the switchover), seen as 19 seconds in the screenshot below. The total upgrade procedure took just over 1 hour, which might be useful in situations that require maximum availability.



17 July 2015

ORA-39083 ORA-02304 on impdp datapump import: TRANSFORM parameter

During impdp we get:

ORA-39083: Object type TYPE failed to create with error:
ORA-02304: invalid object identifier literal
Failing sql is:
CREATE TYPE … 

Fix it by adding the TRANSFORM parameter to impdp:

impdp system/welcome@orcl directory=DUMPDIR dumpfile=mydump.dmp logfile=import01.log schemas=ABC TRANSFORM=OID:N:TYPE



09 July 2015

SP2-1503 on AIX calling a sqlplus script

Which library path?

I ran a job from Cloud Control calling a shell script on an AIX 5.3 host. This script in turn calls sqlplus and runs some SQL. I got this error in Cloud Control:

SP2-1503: Unable to initialize Oracle call interface
SP2-0152: ORACLE may not be functioning properly

It appeared that some environment settings were missing. In a note by IBM, I read that you might need LIBPATH defined. Check the note here: http://www-01.ibm.com/support/docview.wss?uid=isg3T1015835

Hence, my full script is:


export ORACLE_HOME=/app/oracle/11.2.0
export ORACLE_SID=mydb
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export LIBPATH=$LD_LIBRARY_PATH
export PATH=$ORACLE_HOME/bin:$PATH

sqlplus '/as sysdba' <<EOF
@myscript.sql
exit
EOF

This worked fine. Hope this helps.

10 June 2015

Setup a simple Data Redaction demo

The following will give you a simple demo of Data Redaction.

Setup

First we start with a cleanup, so that the script may be run several times.

conn /as sysdba
-- cleanup
drop user APP_OWN cascade;
drop user MIKE cascade;
drop user BOSS cascade;
drop role ROLE_SALES;
drop role ROLE_MANAGERS;

Now create the users: one for the application itself, one for the manager Boss, and one for the employee Mike:
-- create users
create user APP_OWN identified by APP_OWN;
create user MIKE identified by MIKE;
create user BOSS identified by BOSS;

grant create session, create sequence, create table, unlimited tablespace to APP_OWN;
grant execute on dbms_redact to APP_OWN;
grant create session to MIKE;
grant create session to BOSS;


We will be working with roles to get the security setup:
-- create roles
create role ROLE_SALES;
create role ROLE_MANAGERS;
grant select any table to ROLE_SALES;
grant select any table to ROLE_MANAGERS;

grant ROLE_SALES to MIKE;
grant ROLE_MANAGERS to BOSS;

Now create a table in the application schema.
-- setup demo table
conn APP_OWN/APP_OWN
create table emp (id number generated always as identity, name varchar2(30), salary number(15,2));
insert into emp (name,salary) values ('TOM',9999.99);
insert into emp (name,salary) values ('NANCY',8888.88);
commit;

-- setup policy
BEGIN
 DBMS_REDACT.ADD_POLICY  (
    OBJECT_SCHEMA => 'APP_OWN',
    object_name => 'EMP',
    policy_name => 'DEMO_EMP_POLICY',
    expression  => 'SYS_CONTEXT(''SYS_SESSION_ROLES'',''ROLE_MANAGERS'')    = ''FALSE''',
    column_name => '"SALARY"',
    function_type => DBMS_REDACT.RANDOM );
END;
/

Results

-- checking results:
col NAME format a7

conn APP_OWN/APP_OWN
select USER  from dual;
select * from APP_OWN.EMP;

conn BOSS/BOSS
select USER  from dual;
select * from APP_OWN.EMP;

conn MIKE/MIKE
select USER  from dual;
select * from APP_OWN.EMP;

This will give you the results:

Connected.

USER
------------------------------
APP_OWN


        ID NAME        SALARY
---------- ------- ----------
         1 TOM        9074.99
         2 NANCY      8315.64

Note that the application owner does not have the manager role and thus does not see the real data!!!

Connected.

USER
------------------------------
BOSS


        ID NAME        SALARY
---------- ------- ----------
         1 TOM        9999.99
         2 NANCY      8888.88


As you can see, the BOSS user sees the real data.

Connected.

USER
------------------------------
MIKE


        ID NAME        SALARY
---------- ------- ----------
         1 TOM        4128.09
         2 NANCY      2698.32


And the MIKE user does not have access to the real data.

What if.... Mike tries to break into the data?

conn MIKE/MIKE
Connected.

create table MYCOPY as select * from APP_OWN.EMP;
create table MYCOPY as select * from APP_OWN.EMP
                              *
ERROR at line 1:
ORA-28081: Insufficient privileges - the command references a redacted object.

Risks

!!!NOTE that SYSDBA has all rights; you will need to implement Database Vault to prevent the following:

conn / as sysdba
Connected.
select * from APP_OWN.EMP;

        ID NAME                               SALARY
---------- ------------------------------ ----------
         1 TOM                               9999.99
         2 NANCY                             8888.88

Also, a second weakness is that values can still be inferred indirectly, by filtering on the redacted column:

conn MIKE/MIKE
Connected.

select * from APP_OWN.EMP where SALARY between 9999 and 10000;

        ID NAME                               SALARY
---------- ------------------------------ ----------
         1 TOM                                3660.5



02 June 2015

dbms_scheduler log history - purging manually

Today, I came across a SYSAUX tablespace that was asking for more space. It was about 12 GB, which seemed a bit large to me. If you run $ORACLE_HOME/rdbms/admin/awrinfo.sql, you get a report of the occupants. Occupant JOB_SCHEDULER took over 11 GB of space, which makes you think.

The default out-of-the-box maintenance job uses the global attribute LOG_HISTORY to remove old dbms_scheduler job logging:

select * from DBA_SCHEDULER_GLOBAL_ATTRIBUTE;

This defaults to 30 days, and can be set with:

exec DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE('log_history','15');

But what if your maintenance jobs are failing for some reason (timeouts? bugs?)? Then you may need to clean up manually. The easy way to do this is to use:

exec DBMS_SCHEDULER.PURGE_LOG(<days>, which_log=>'JOB_LOG');

so for example:

exec DBMS_SCHEDULER.PURGE_LOG(15,which_log=>'JOB_LOG');

But if the number of logs is very large, you have to do this carefully, step by step:

First determine the distribution of logging you have.

select count(1) from dba_scheduler_job_log where log_date < sysdate - 100;

Make sure that you get an idea like:


older than 300 days:     12 rows
older than 250 days:  12831 rows
older than 200 days: 438121 rows


Now clean it up step by step, 5 or 10 days at a time:

exec DBMS_SCHEDULER.PURGE_LOG(150,which_log=>'JOB_LOG');
exec DBMS_SCHEDULER.PURGE_LOG(140,which_log=>'JOB_LOG');
etc.
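Generating those stepped calls by hand is tedious; here is a small sketch that writes them into a script (the 150-down-to-20 range is just an example, match it to your own log distribution):

```shell
# Generate a stepped purge script: from 150 days down to 20, in steps of 10.
OUT=purge_steps.sql
: > "$OUT"
for days in $(seq 150 -10 20); do
  echo "exec DBMS_SCHEDULER.PURGE_LOG($days, which_log=>'JOB_LOG');" >> "$OUT"
done
echo "exit" >> "$OUT"
cat "$OUT"
# Then run it as: sqlplus / as sysdba @purge_steps.sql
```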

Until you can do:

exec DBMS_SCHEDULER.PURGE_LOG(15,which_log=>'JOB_LOG');

Now you can set:

exec DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE('log_history','15');

and watch your maintenance job (PURGE_LOG) over the next few days.

What if your PURGE_LOG job doesn't run? I had this, and a simple disable/enable was needed to get the right NEXT_RUN_DATE in dba_scheduler_jobs:

exec dbms_scheduler.disable('PURGE_LOG');
exec dbms_scheduler.enable('PURGE_LOG');

Now suppose you have cleaned up the job logging and, after a few hours' work, it's almost done. Just run:

exec DBMS_SCHEDULER.PURGE_LOG();

Which should be fairly quick (maybe 2 minutes) now.

The tables then need a shrink to release the space in the SYSAUX tablespace.

alter table sys.scheduler$_event_log enable row movement;
alter table sys.scheduler$_event_log shrink space cascade;

alter table sys.scheduler$_job_run_details enable row movement;
alter table sys.scheduler$_job_run_details shrink space cascade;

25 March 2015

Generate SQL*Loader control files for a schema

Someday, you may need to load all tables with SQL*Loader, for instance in a migration or similar situation. Creating the control files for SQL*Loader by hand can be a tedious task. For this, I created the following setup, which gives you the basic control files.

It's based on Linux. You can make adjustments as you wish. Always check your requirements.


This method builds a table with the contents of the SQL*Loader control files.
It has three columns (s1, s2, s3) that contain the owner, the table name and a sort key.

These columns are used to get the output in the correct order.
Once the table is built, you generate a second SQL script from it, which runs and spools all the separate control files.

Things you might want to change are:
  • the owner of the schema  (now APPOWNER)
  • the location where the control files are spooled to (now /tmp/work)
  • the csv delimiter (now a caret symbol ^)


There are five selects in the create table:
  • spool /tmp/work/table_name.ctl   (lines with s3= -999 )
  • -- controlfile: table_name.ctl   (s3= -99 )
  • load data infile ... into table ...   etc.  (s3= 0 )
  • the columns part, separated by commas and a ) for the last column ( s3 = column_id)
  • spool off   ( s3 = 999 )
The s3 column makes it very easy to get the output sorted.
I'd like to hear if you can use this script :-)

drop table system.tool_cr_ctl;

create table system.tool_cr_ctl as
select owner s1,table_name s2,-999 s3,'spool /tmp/work/'
       ||lower(table_name)||'.ctl'  SOURCE
  from dba_tables
  where owner like 'APPOWNER'
union
select owner s1,table_name s2,-99 s3,'prompt -- Controlfile:   '
       ||lower(table_name)||'.ctl'  SOURCE
  from dba_tables
  where owner like 'APPOWNER'
union
select owner s1,table_name s2,0 s3,'prompt load data infile '
      ||chr(39)||'/tmp/work/'||lower(owner)
      ||'.'||lower(table_name)||'.csv'||chr(39)
      ||' replace into table '||owner||'.'||table_name
      ||' fields terminated by '||chr(34)||'^'||chr(34)
      || ' optionally enclosed by '
      || chr(39)||chr(34)||chr(39)||' trailing nullcols ('
  from dba_tables
  where owner like 'APPOWNER'
union
select owner s1,table_name s2,c.column_id s3,'prompt     '
      ||chr(34)||c.column_name||chr(34)||
      decode(column_id,
         (select max(column_id)
         from dba_tab_columns d
         where d.owner=c.owner
         and d.table_name=c.table_name),
         ')'  ,  ','   )
      from DBA_TAB_COLUMNS c
  where c.owner like 'APPOWNER'
union
select owner s1,table_name s2,999 s3,'spool off '  SOURCE
       from dba_tables
  where owner like 'APPOWNER'
       order by 1,2,3
;

-- Now you need to run the following, to create the files:

Set pages 0
Set lines 9999
Set trimspool on
Set heading off
spool run_me.sql
select source from system.TOOL_CR_CTL order by s1,s2,s3;
spool off
@run_me.sql

Example output:


-- Controlfile:   emp.ctl
load data infile '/tmp/work/scott.emp.csv' replace into table SCOTT.EMP fields terminated by "^" optionally enclosed by '"' trailing nullcols (
"EMPNO",
"ENAME",
"JOB",
"MGR",
"HIREDATE",
"SAL",
"COMM",
"DEPTNO")

30 January 2015

deinstall obsolete Oracle Home tries to remove LISTENER

This post describes how a listener can be removed by accident when you deinstall an obsolete Oracle home.


After upgrading a single database from 11.2.0.3 (dbhome_1) to 11.2.0.4 (dbhome_2), the old 11.2.0.3 home becomes obsolete. The listener has already been moved to the new home as well.

oracle@demohost01:/home/oracle> which lsnrctl
/opt/oracle/product/11.2.0/dbhome_2/bin/lsnrctl

oracle@demohost01:/home/oracle> ps -ef | grep LISTENER
oracle   10305     1  0 Jan29 ?        00:00:13 /opt/oracle/product/11.2.0/dbhome_2/bin/tnslsnr LISTENER -inherit
oracle   12642  9408  0 08:30 pts/0    00:00:00 grep LISTENER

From your home directory, run the deinstall tool (on Linux) as follows:

oracle@demohost01:/home/oracle> /opt/oracle/product/11.2.0/dbhome_1/deinstall/deinstall

Now some checks are performed.

oracle@demohost01:/home/oracle> /opt/oracle/product/11.2.0/dbhome_1/deinstall/deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /opt/oracle/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################### CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /opt/oracle/product/11.2.0/dbhome_1
Oracle Home type selected for deinstall is: Oracle Single Instance Database
Oracle Base selected for deinstall is: /opt/oracle
Checking for existence of central inventory location /opt/oracle/oraInventory
Checking for sufficient temp space availability on node(s) : 'demohost01'

## [END] Install check configuration ##


Network Configuration check config START

Network de-configuration trace file location: /opt/oracle/oraInventory/logs/netdc_check2015-01-30_08-28-54-AM.log

Specify all Single Instance listeners that are to be de-configured [LISTENER]:

WAIT!! This is strange... the listener is running from the new home, so why is the tool trying to deconfigure it? Press Ctrl-C to abort this deinstallation!!

Check if there is still a copy of the old listener.ora in the network/admin folder:

oracle@demohost01:/home/oracle> ls -l /opt/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora
-rw-------. 1 oracle oinstall 568 Sep 20  2013 /opt/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora

If we move or rename this file to listener.ora_old, the deinstallation runs fine.

oracle@demohost01:/opt/oracle/product/11.2.0/dbhome_1/network/admin> mv listener.ora listener.ora_old
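The check-and-move above can be wrapped in a small guard to run before any deinstall (the function name and example path are just for this sketch):

```shell
# Guard: move a leftover listener.ora in an old home aside before deinstall,
# so the deinstall tool does not try to deconfigure the running listener.
check_old_listener() {
  lsnr="$1/network/admin/listener.ora"
  if [ -f "$lsnr" ]; then
    mv "$lsnr" "${lsnr}_old"
    echo "moved $lsnr aside; safe to deinstall"
  else
    echo "no listener.ora in $1; safe to deinstall"
  fi
}

# Example (hypothetical old home path):
check_old_listener /opt/oracle/product/11.2.0/dbhome_1
```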

So try again:

oracle@demohost01:/home/oracle> /opt/oracle/product/11.2.0/dbhome_1/deinstall/deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /opt/oracle/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################### CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /opt/oracle/product/11.2.0/dbhome_1
Oracle Home type selected for deinstall is: Oracle Single Instance Database
Oracle Base selected for deinstall is: /opt/oracle
Checking for existence of central inventory location /opt/oracle/oraInventory
Checking for sufficient temp space availability on node(s) : 'demohost01'

## [END] Install check configuration ##


Network Configuration check config START

Network de-configuration trace file location: /opt/oracle/oraInventory/logs/netdc_check2015-01-30_08-40-09-AM.log

Network Configuration check config END

Database Check Configuration START

Database de-configuration trace file location: /opt/oracle/oraInventory/logs/databasedc_check2015-01-30_08-40-09-AM.log

Use comma as separator when specifying list of values as input

Specify the list of database names that are configured in this Oracle home []:

Hit Enter here; no active databases were found on this Oracle home.

Database Check Configuration END

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /opt/oracle/oraInventory/logs/emcadc_check2015-01-30_08-40-28-AM.log

Enterprise Manager Configuration Assistant END
Oracle Configuration Manager check START
OCM check log file location : /opt/oracle/oraInventory/logs//ocm_check3812.log
Oracle Configuration Manager check END

######################### CHECK OPERATION END #########################


####################### CHECK OPERATION SUMMARY #######################
Oracle Home selected for deinstall is: /opt/oracle/product/11.2.0/dbhome_1
Inventory Location where the Oracle home registered is: /opt/oracle/oraInventory
No Enterprise Manager configuration to be updated for any database(s)
No Enterprise Manager ASM targets to update
No Enterprise Manager listener targets to migrate
Checking the config status for CCR
Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y

The y-response is entered manually, and then the process starts running.

A log of this session will be written to: '/opt/oracle/oraInventory/logs/deinstall_deconfig2015-01-30_08-40-06-AM.out'
Any error messages from this session will be written to: '/opt/oracle/oraInventory/logs/deinstall_deconfig2015-01-30_08-40-06-AM.err'

######################## CLEAN OPERATION START ########################

[ *** output removed, no errors ***]

######################### CLEAN OPERATION END #########################


####################### CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
Successfully detached Oracle home '/opt/oracle/product/11.2.0/dbhome_1' from the central inventory on the local node.
Successfully deleted directory '/opt/oracle/product/11.2.0/dbhome_1' on the local node.
Oracle Universal Installer cleanup was successful.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL & DECONFIG TOOL END #############