05 December 2014

Change BYTE to CHAR for all columns, all tables

When you set up your VARCHAR2 columns without specifying CHAR or BYTE, the default from nls_length_semantics is used, which is BYTE out of the box.

https://docs.oracle.com/cd/E24693_01/server.11203/e24448/initparams149.htm
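
To see what your database is currently using, a quick check could look like this (show parameter needs the relevant privilege; nls_session_parameters shows the session-level value):

show parameter nls_length_semantics

select value from nls_session_parameters
 where parameter = 'NLS_LENGTH_SEMANTICS';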

Now you may encounter situations where a whole data model has to be changed, because many columns are defined with BYTE semantics where CHAR would be more useful. The following script generates the statements to change all columns of all tables from BYTE to CHAR. Use at your own risk...


set pages 0
set lines 999
set trimspool on
set heading off

spool modify_BC.tmp

SELECT 'alter table '||owner||'.'|| table_name ||
       ' modify '||column_name||' varchar2('|| 
       data_length ||' CHAR); '
  FROM all_tab_columns
 WHERE data_type = 'VARCHAR2'
   AND char_used = 'B'
   and owner in ( 'SCOTT','HR')
;

spool off
set heading on

--now run @modify_BC.tmp
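
The spooled file will then contain one ALTER statement per BYTE-defined column; with hypothetical table and column names, the output looks like this:

alter table SCOTT.EMP modify ENAME varchar2(10 CHAR);
alter table HR.EMPLOYEES modify FIRST_NAME varchar2(20 CHAR);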



08 October 2014

Drop tablespace - check for non-segment objects

Suppose we create a tablespace with an empty table in it, and then try to drop it.

SQL> create tablespace PAULTS datafile '/data/DEMO/PAULTS.dbf' size 10M;

Tablespace created.

SQL> create table TLEEG (a char(10)) tablespace PAULTS;

Table created.

SQL> select * from dba_segments where tablespace_name='PAULTS';

no rows selected

SQL> drop tablespace PAULTS;
drop tablespace PAULTS
*
ERROR at line 1:
ORA-01549: tablespace not empty, use INCLUDING CONTENTS option

OK, so there are no segments, but the table is in the tablespace… tricky. Don't drop the tablespace now, because the table will be gone with it:

SQL> drop tablespace PAULTS including contents;

Tablespace dropped.

SQL> desc TLEEG
ERROR:
ORA-04043: object TLEEG does not exist
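
By the way, the reason the empty table has no segment is deferred segment creation, the default behaviour since 11.2: a segment is only allocated when the first row is inserted. Assuming 11.2 or later, segment-less tables in the tablespace can be spotted like this:

select owner, table_name, segment_created
  from dba_tables
 where tablespace_name = 'PAULTS';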

So I recreate the tablespace and table as in the example above. But how can I make sure the tablespace is really empty?

In addition to that table, let me add a second one, this time with a LOB column:
create table TWITHLOB (a int,b clob) tablespace PAULTS;

Now I’ve written the following script:

define MYTS=&Possible_empty_tablespace
set verify off
col CFnd heading "Contents Found"

select 'Table: '||owner||'.'||table_name as CFnd
from dba_tables where tablespace_name='&&MYTS'
  union
select 'Index: '||owner||'.'||index_name as CFnd
from dba_indexes where tablespace_name='&&MYTS'
  union
select 'Lob segment: '||owner||'.'||table_name||'('||column_name||')' as CFnd
from dba_lobs where tablespace_name='&&MYTS'
;

undefine MYTS

This will give:
Enter value for possible_empty_tablespace: PAULTS

Contents Found
--------------------------------------------
Index: PAUL.SYS_IL0000222132C00002$$
Lob segment: PAUL.TWITHLOB(B)
Table: PAUL.TLEEG
Table: PAUL.TWITHLOB

So this gives me a sort of checklist of the objects that are in the tablespace.

There are things I forgot, though, like table partitions:

CREATE TABLE invoices
(invoice_no    NUMBER NOT NULL,
 invoice_date  DATE   NOT NULL,
 comments      VARCHAR2(500))
PARTITION BY HASH (invoice_no)
PARTITIONS 2
STORE IN (PAULTS,PAULTS);

If I change my script above, adding the following clause

union
select 'Table Partitions: '||table_owner||'.'||table_name||'.'||partition_name as CFnd
from dba_tab_partitions where tablespace_name='&&MYTS'

this will give me:

Contents Found
--------------------------------------------
Index: PAUL.SYS_IL0000222132C00002$$
Lob segment: PAUL.TWITHLOB(B)
Table Partitions: PAUL.INVOICES.SYS_P61
Table Partitions: PAUL.INVOICES.SYS_P62
Table: PAUL.TLEEG
Table: PAUL.TWITHLOB

6 rows selected.

My discussion point is the following: which objects should I consider to make my check script complete? One approach might be:

select table_name from dba_tab_columns
where column_name='TABLESPACE_NAME' and table_name like 'DBA%';

This gives us 34 views that might need to be investigated. Some are obviously not relevant for our script, like DBA_EXTENTS and DBA_DATA_FILES, but there are more that we should include.

TABLE_NAME
------------------------------
DBA_ALL_TABLES
DBA_AUTO_SEGADV_CTL
DBA_CLUSTERS
DBA_DATA_FILES
DBA_EXTENTS
DBA_FILE_GROUP_TABLES
DBA_FILE_GROUP_TABLESPACES
DBA_FLASHBACK_ARCHIVE_TS
DBA_FREE_SPACE
DBA_FREE_SPACE_COALESCED
DBA_HIST_SEG_STAT_OBJ
DBA_INDEXES
DBA_IND_PARTITIONS
DBA_IND_SUBPARTITIONS
DBA_LOBS
DBA_LOB_PARTITIONS
DBA_LOB_SUBPARTITIONS
DBA_LOB_TEMPLATES
DBA_OBJECT_TABLES
DBA_ROLLBACK_SEGS
DBA_SEGMENTS
DBA_SEGMENTS_OLD
DBA_SUBPARTITION_TEMPLATES
DBA_TABLES
DBA_TABLESPACES
DBA_TABLESPACE_GROUPS
DBA_TABLESPACE_THRESHOLDS
DBA_TABLESPACE_USAGE_METRICS
DBA_TAB_PARTITIONS
DBA_TAB_SUBPARTITIONS
DBA_TEMP_FILES
DBA_TEMP_FREE_SPACE
DBA_TS_QUOTAS
DBA_UNDO_EXTENTS

So our script should check for: 
  • Tables, tab-partitions, tab-subpartitions
  • Indexes, ind-partitions, ind-subpartitions
  • Lob, lob-partitions, lob-subpartitions


Am I forgetting something? If not, my script should check for the objects above in my tablespace. If none are found, the tablespace is really empty and I can drop it.

Hence, my full script will be (for now):

define MYTS=&Possible_empty_tablespace
set verify off
col CFnd heading "Contents Found"
select 'Table: '||owner||'.'||table_name as CFnd
from dba_tables
where tablespace_name='&&MYTS'
union
select 'Table partitions: '||table_owner||'.'||table_name||'.'||partition_name as CFnd
from dba_tab_partitions
where tablespace_name='&&MYTS'
union
select 'Table sub-parts: '||table_owner||'.'||table_name||'.'||partition_name||'.'||subpartition_name as CFnd
from dba_tab_subpartitions
where tablespace_name='&&MYTS'
union
select 'Index: '||owner||'.'||index_name as CFnd
from dba_indexes where tablespace_name='&&MYTS'
union
select 'Index partitions: '||index_owner||'.'||index_name||'.'||partition_name as CFnd
from dba_ind_partitions
where tablespace_name='&&MYTS'
union
select 'Index sub-parts: '||index_owner||'.'||index_name||'.'||partition_name||'.'||subpartition_name as CFnd
from dba_ind_subpartitions
where tablespace_name='&&MYTS'
union
select 'Lob segment: '||owner||'.'||table_name||'('||column_name||')' as CFnd
from dba_lobs
where tablespace_name='&&MYTS'
union
select 'Lob partitions: '||table_owner||'.'||table_name||'.'||partition_name as CFnd
from dba_lob_partitions
where tablespace_name='&&MYTS'
union
select 'Lob sub-parts: '||table_owner||'.'||table_name||'.'||lob_partition_name||'.'||subpartition_name as CFnd
from dba_lob_subpartitions
where tablespace_name='&&MYTS'
;
undefine MYTS

If this returns no rows, it should be safe to

drop tablespace PAULTS including contents and datafiles;



19 June 2014

Import and Export Reports from Cloud Control

If you create a nice Cloud Control report, you might want to re-use it in another Cloud Control environment. The emcli command gives you the option to export the report in XML format and import it again, either into the same Cloud Control (after editing it) or into another one.

It's very easy to do so, but you have to be familiar with the emcli (Enterprise Manager Command Line Interface).

Let's give an example, starting on the OEM server (Linux) as user oracle.

First login to the emcli. This creates a session.

oracle@oemsv01:/opt/oracle/Middleware/oms/bin> ./emcli login -username=SYSMAN -password=xxxxxxxxx
Login successful

Now you can export the report. You need to have the name of the report as you see it in Cloud Control.

oracle@oemsv01:/opt/oracle/Middleware/oms/bin> ./emcli export_report -title="Oracle Database Backups" -owner="SYSMAN" -output_file="$HOME/CC_db_backups_report.xml"
Report "Oracle Database Backups", owned by "SYSMAN", has been exported from the repository.
Exported reports may not contain all information from original report.  Repository specific information such as targets, administrator access, and schedules are not exported.  Imported reports should be edited after import to supply all necessary report parameters.


You can now import it. But if you don't change the name, you will get a duplicate error.

oracle@oemsv01:/opt/oracle/Middleware/oms/bin> ./emcli import_report  -files="$HOME/CC_db_backups_report.xml"
Error: The report named "Oracle Database Backups", owned by "SYSMAN", already exists in the repository. Use -force option to overwrite this report.


Edit the title in the XML file to avoid the duplicate, or use the -force option. After changing the title in the file, the import succeeds:

oracle@oemsv01:/opt/oracle/Middleware/oms/bin> ./emcli import_report  -files="$HOME/CC_db_backups_report.xml"
File "/home/oracle/CC_db_backups_report.xml" has been imported into the repository.
Imported reports should be edited after import to supply all necessary report parameters.
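
Alternatively, as the error message suggests, you can overwrite the existing report with the -force option; the call would then look something like this (not part of the session above):

./emcli import_report -files="$HOME/CC_db_backups_report.xml" -force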

There are many advantages to this approach. You can develop reports and deploy them to a production Cloud Control, and you can create versions and easily store them in your version control tooling. You might think of other uses as well.


Enjoy!

06 May 2014

Migrate PostgreSQL to Oracle - first attempts

For a customer case, I have been investigating whether it's possible to migrate from PostgreSQL to Oracle. Recently, I did an MS Access to Oracle migration quite easily using SQL*Developer. Hoping that it would be just as simple with PostgreSQL, I started to Google around.

It seems that you can connect to PostgreSQL using SQL*Developer 4.0. Go to http://jdbc.postgresql.org/download.html and get the correct PostgreSQL JDBC driver (depending on your Java version). Then add the jar file to SQL*Developer as a third-party JDBC driver, following the description in http://www.oracle.com/technetwork/developer-tools/sql-developer/thirdparty-095608.html .

Now when you create a new connection, you can select the PostgreSQL tab:



For this demo to work, I installed PostgreSQL locally, with the dvdrental demo database.

So what do you see once you are connected in SQL*Developer? You can open up the database tree:


Also, you can view the data, so for example in table 'actor':




Now let's try to migrate. I first tried Tools > Migration > Migrate and followed the migration wizard. For this database, this takes some 5-10 minutes. It seems to work, but the results are very disappointing. When you select a target Oracle database, a schema named PUBLIC_DVDRENTAL is created, in which the tables should appear. The scripts that are created on your workstation contain the Oracle-formatted DDL, but these appear to have some errors that need to be filtered out.

With some trial and error in the DDL scripts, I was able to get the table definitions right. The data, however, was not imported. You can opt for a separate export from PostgreSQL (using SQL*Developer, for example) and move the data table by table after creating the tables. The easiest way, it seems to me, is to select all PostgreSQL tables, right-click Export, export them to CSV in a directory as separate files, and then load them with SQL*Loader.
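
As a sketch of that last step, a SQL*Loader control file for the exported actor table might look roughly like this. The file names, the column list and the timestamp mask are assumptions based on the dvdrental sample schema and a comma-separated export with quoted strings:

-- actor.ctl (hypothetical example)
LOAD DATA
INFILE 'actor.csv'
INTO TABLE actor
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(actor_id,
 first_name,
 last_name,
 -- adjust the mask to whatever format the CSV export produced
 last_update TIMESTAMP "YYYY-MM-DD HH24:MI:SS.FF")

You would then run something like: sqlldr userid=dvdrental/password control=actor.ctl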



Summarizing, I think this method gives you some options to get started with your migration. It's far from perfect, however. We should wait for Oracle to get this tooling right in SQL*Developer.