Sunday, 23 October 2022

Copying Archivelog from ASM to local file system using script

  • Set the archive log destination to an ASM location (optional)

SQL> ALTER SYSTEM SET log_archive_dest_1='LOCATION=+FRA/archivelog' scope=memory;


System altered.


SQL> archive log list;

Database log mode        Archive Mode

Automatic archival        Enabled

Archive destination        +FRA/archivelog

Oldest online log sequence     119

Next log sequence to archive   120

Current log sequence        120

SQL> alter system switch logfile;


System altered.


SQL> /

System altered.

SQL> 

  • Set the local file system and archive log locations in the script. Script details below:


[grid@srv1 archivelog]$ cat asm_archivelog.sh 
#!/bin/bash
#
# This script copies files from the FRA on ASM to local disk
#
export ORACLE_SID=+ASM1
export ORACLE_HOME=/u01/app/12.2.0/grid   ##{Grid OH}
export PATH=$ORACLE_HOME/perl/bin:$PATH

ASMLS=/u01/archivelog/asm_log.txt          ##{ASM files list}
FRA=+FRA/archivelog                        ##{source location of files}
LOCALBACKUPDIR=/u01/archivelog             ##{destination filesystem location}
LOG=/u01/archivelog/asm_grid_smrpod.txt    ##{log file location}

#
# Get the list of files
#
$ORACLE_HOME/bin/asmcmd > $ASMLS <<EOF
ls $FRA
exit
EOF

#
# Clean the list by removing "ASMCMD>"
#
sed -i 's/ASMCMD> //g' $ASMLS
##cat $ASMLS

echo `date` > $LOG

#
# Copy files one by one
#
for FILENAME in `cat $ASMLS`
do
  if [[ ! -f $LOCALBACKUPDIR/${FILENAME} ]]
  then
    $ORACLE_HOME/bin/asmcmd >> $LOG <<EOF
cp $FRA/$FILENAME $LOCALBACKUPDIR
EOF
  fi
done

echo `date` >> $LOG
[grid@srv1 archivelog]$ 


[grid@srv1 archivelog]$ sh asm_archivelog.sh 
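The script drives asmcmd non-interactively by feeding commands through a here-document, and each captured output line carries the `ASMCMD>` prompt, which is then stripped with sed. That cleanup step can be checked without an ASM instance; the sketch below (file names are made up) reproduces it exactly as the script performs it:

```shell
# Simulate a captured asmcmd listing, where each line carries the
# "ASMCMD> " prompt, then clean it with the same sed the script uses.
printf 'ASMCMD> 1_120_1088276576.dbf\nASMCMD> 1_121_1088276576.dbf\n' > /tmp/asm_log.txt
sed -i 's/ASMCMD> //g' /tmp/asm_log.txt
cat /tmp/asm_log.txt   # bare file names, one per line
```

This assumes GNU sed (`-i` for in-place editing), which is what the script itself relies on.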

  • OUTPUT 

[grid@srv1 archivelog]$ ll
total 4888
-rw-r----- 1 grid oinstall 1960448 Feb  5 20:36 1_120_1088276576.dbf
-rw-r----- 1 grid oinstall    6656 Feb  5 20:36 1_121_1088276576.dbf
-rw-r----- 1 grid oinstall    3072 Feb  5 20:36 1_122_1088276576.dbf
-rw-r----- 1 grid oinstall    4096 Feb  5 20:36 1_123_1088276576.dbf
-rw-r----- 1 grid oinstall  174592 Feb  5 20:36 1_124_1088276576.dbf
-rw-r----- 1 grid oinstall 2664960 Feb  5 20:36 2_88_1088276576.dbf
-rw-r----- 1 grid oinstall  162816 Feb  5 20:36 2_89_1088276576.dbf
-rw-r----- 1 grid oinstall    3072 Feb  5 20:36 2_90_1088276576.dbf
-rw-r----- 1 grid oinstall    2048 Feb  5 20:36 2_91_1088276576.dbf
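Re-running the script copies only new archive logs: the `[[ ! -f ... ]]` guard skips any file already present in the destination directory. A minimal stand-in (paths and file names are made up, plain touched files instead of archive logs) shows the guard's behaviour:

```shell
# One file already "copied", one new: only the new one triggers a copy.
mkdir -p /tmp/dest_demo
touch /tmp/dest_demo/1_120.dbf          # pretend this was copied on a previous run
for f in 1_120.dbf 1_121.dbf; do
  if [ ! -f /tmp/dest_demo/$f ]; then
    echo "would copy $f"                # fires only for the file not yet present
  fi
done
```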


Example - Create SQL Tuning Set



Step 1) Log in to the source database and create a SQL Tuning Set


BEGIN

  DBMS_SQLTUNE.CREATE_SQLSET ( sqlset_name => 'SQLSET_01',description => 'SQLSET created for migration');

END;

/

Step 2) Note the minimum and maximum snap IDs available:


select min(snap_id), max(SNAP_ID) from dba_hist_snapshot;


here min(snap_id) =  13472

and max(SNAP_ID) = 14220


Step 3) Load the STS from AWR, replacing the two snapshot IDs from Step 2 in the block below.


-- Load STS from AWR

DECLARE

stscur dbms_sqltune.sqlset_cursor;

  sql_rec sqlset_row;

BEGIN

   OPEN stscur FOR

       SELECT VALUE(P) FROM TABLE(DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY

        ( 13472,14220,

        'parsing_schema_name <> ''SYS''',

                   null, null, null, null, 1, null, 'ALL')) P ;

    fetch stscur into sql_rec ;

    sys.dbms_sqltune.load_sqlset (sqlset_name => 'SQLSET_01',  populate_cursor => stscur);

EXCEPTION WHEN OTHERS THEN

  raise_application_error(-20001,'An error was encountered - '||SQLCODE|| ' -ERROR- '||SQLERRM||' - '||sql_rec.sql_id);

END;

/

Step 4) Check object count details


select count(*) from dba_sqlset_statements where sqlset_name = 'SQLSET_01';


Step 5) Create the staging table on the source system (in this case 10.2):


execute DBMS_SQLTUNE.CREATE_STGTAB_SQLSET(table_name =>'STS_TABLE');


SQL> select count(*) from STS_TABLE;


Step 6) Populate the staging table STS_TABLE using DBMS_SQLTUNE.PACK_STGTAB_SQLSET on the source system (in this case 10.2):


execute DBMS_SQLTUNE.PACK_STGTAB_SQLSET(sqlset_name => 'SQLSET_01',staging_table_name => 'STS_TABLE');


SQL> select count(*) from STS_TABLE;


COUNT(*)

----------

         4

Step 7) Take an export backup and copy the dump to the target system:


exp file=STS.dmp tables=<username>.STS_TABLE log=exp_STS.log


Step 8) Log in to the target system and import using "imp", adjusting the schema names as needed:


imp file=STS.dmp tables=STS_TABLE log=imp_STS.log FROMUSER=<username> TOUSER=<username>


Step 9) Unpack the table using DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET on the destination system (in this case 11.2)


  • Verify the contents of DBA_SQLSET or USER_SQLSET on the Destination system


Select NAME,CREATED,STATEMENT_COUNT FROM DBA_SQLSET;


  • Based on the results, a tuning set might need to be created before unpacking the table:


execute DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'SQLSET_01');


execute DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET( sqlset_name => 'SQLSET_01',replace => TRUE,staging_table_name => 'STS_TABLE');


Select NAME,OWNER,CREATED,STATEMENT_COUNT FROM DBA_SQLSET; 


select count(*) from STS_TABLE;

#######################       Example    ##################################


SQL> BEGIN
  DBMS_SQLTUNE.CREATE_SQLSET ( sqlset_name => 'MIGRATION_SQLSET_01',description => 'SQLSET created for migration');
END;
/


PL/SQL procedure successfully completed.


SQL> select min(snap_id), max(SNAP_ID) from dba_hist_snapshot;


MIN(SNAP_ID) MAX(SNAP_ID)

------------ ------------

       77981        79065


SQL> DECLARE
  stscur dbms_sqltune.sqlset_cursor;
  sql_rec sqlset_row;
BEGIN
   OPEN stscur FOR
       SELECT VALUE(P) FROM TABLE(DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY
        ( 77981,79065,
        'parsing_schema_name <> ''SYS''',
                   null, null, null, null, 1, null, 'ALL')) P ;
    fetch stscur into sql_rec ;
    sys.dbms_sqltune.load_sqlset (sqlset_name => 'MIGRATION_SQLSET_01',  populate_cursor => stscur);
EXCEPTION WHEN OTHERS THEN
  raise_application_error(-20001,'An error was encountered - '||SQLCODE|| ' -ERROR- '||SQLERRM||' - '||sql_rec.sql_id);
END;
/


PL/SQL procedure successfully completed.


SQL> select count(*) from dba_sqlset_statements where sqlset_name = 'MIGRATION_SQLSET_01' order by sql_id;


  COUNT(*)

----------

      1065


SQL> execute DBMS_SQLTUNE.CREATE_STGTAB_SQLSET(table_name =>'MIGRATION1');


PL/SQL procedure successfully completed.


SQL> select count(*) from MIGRATION1;


  COUNT(*)

----------

         0



SQL> execute DBMS_SQLTUNE.PACK_STGTAB_SQLSET(sqlset_name => 'MIGRATION_SQLSET_01',staging_table_name => 'MIGRATION1');


PL/SQL procedure successfully completed.


SQL> select count(*) from MIGRATION1;


  COUNT(*)

----------

     47233


SQL>


Source database

exp file=MIGRATION.dmp tables=<username>.MIGRATION1 log=exp_MIGRATION.log


Target database 

imp file=MIGRATION.dmp tables=MIGRATION1 log=imp_MIGRATION.log FROMUSER=<username> TOUSER=<username>

SQL> select count(*) from MIGRATION1;


  COUNT(*)

----------

     47233


SQL> Select NAME,CREATED,STATEMENT_COUNT FROM DBA_SQLSET;


NAME

--------------------------------------------------------------------------------

CREATED   STATEMENT_COUNT

--------- ---------------

SYS_AUTO_STS

17-NOV-21               0


MIGRATION_SQLSET_01

15-OCT-22               0



SQL> execute DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'SQLSET_01');


PL/SQL procedure successfully completed.


SQL> execute DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET( sqlset_name => 'MIGRATION_SQLSET_01',replace => TRUE,staging_table_name => 'MIGRATION1');


PL/SQL procedure successfully completed.


SQL> Select NAME,OWNER,CREATED,STATEMENT_COUNT FROM DBA_SQLSET;


NAME

--------------------------------------------------------------------------------

OWNER

--------------------------------------------------------------------------------

CREATED   STATEMENT_COUNT

--------- ---------------

SYS_AUTO_STS

SYS

17-NOV-20               0


MIGRATION_SQLSET_01

<username>

15-OCT-22             974


NAME

--------------------------------------------------------------------------------

OWNER

--------------------------------------------------------------------------------

CREATED   STATEMENT_COUNT

--------- ---------------


MIGRATION_SQLSET_01

<username>

15-OCT-22               0




SQL> select count(*) from MIGRATION1;


  COUNT(*)

----------

     47233

Thursday, 20 October 2022

RAC commands --

 To get cluster stack health status

$crsctl check crs



Check visibility of CRS across nodes

$crsctl check cluster -n node_name



Stop/Start Cluster

$crsctl start cluster -n node_name

$crsctl stop cluster -n node_name

$crsctl start cluster -all

$crsctl stop cluster -all



Start/Stop CRS

$crsctl start crs

$crsctl stop crs (or)

$/etc/init.d/init.ohasd run

$/etc/init.d/init.crs stop



Enable/Disable CRS

$crsctl enable crs

$crsctl disable crs (or)

$/etc/init.d/init.crs enable

$/etc/init.d/init.crs disable


Check status of running deamons

$ps -ef | grep d.bin

$crsctl check cssd

$crsctl check crsd

$crsctl check evmd

$crsctl check oprocd

$crsctl check ctss
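A small detail worth knowing about `ps -ef | grep d.bin`: the pattern also matches the grep process itself. Bracketing one character (a common shell idiom, demonstrated here on sample text rather than a live cluster) avoids that, because grep's own command line then contains `[d].bin` literally, which the regex does not match:

```shell
# The pattern d.bin matches the Clusterware daemon binaries
# (ocssd.bin, crsd.bin, evmd.bin, ...) but not unrelated processes.
printf 'ocssd.bin\ncrsd.bin\nevmd.bin\nbash\n' | grep 'd\.bin'
# On a cluster node, prefer:  ps -ef | grep '[d]\.bin'
# so the grep process itself is excluded from the output.
```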



Query voting disk files

$crsctl query css votedisk



Add/Delete voting disks

For non-ASM

$crsctl add css votedisk <<PATH>>

$crsctl add css votedisk <<PATH>> -force — if Clusterware is not running

$crsctl delete css votedisk <<PATH>>

$crsctl delete css votedisk <<PATH>> -force — if Clusterware is not running


For ASM

$crsctl replace votedisk <<DISK_GROUP_NAME>>


Set/get CSS parameters

$crsctl set css misscount 100

$crsctl unset css misscount — sets CSS parameter to its default

$crsctl get css disktimeout

$crsctl get css misscount

$crsctl get css reboottime


Working with resources

$crsctl start resource ora.DATA.dg

$crsctl status resource -t

$crsctl stat resource -t



Listing modules

$crsctl lsmodules crs

$crsctl lsmodules css

$crsctl lsmodules evm



Working with HAS

$crsctl start has (HAS – High Availability Services)

$crsctl stop has

$crsctl check has



OCR Modules — cannot be listed with crsctl lsmodules command

OCRAPI

OCRCLI

OCRSRV

OCRMAS

OCRMSG

OCRCAC

OCRRAW

OCRUTL

OCROSD


Debugging

$crsctl debug statedump crs — dumps state info for crs objects

$crsctl debug statedump css — dumps state info for css objects

$crsctl debug statedump evm — dumps state info for evm objects

$crsctl debug log crs [module:level]{,module:level} …



–Turns on debugging for CRS

$crsctl debug log crs CRSEVT:5,CRSAPP:5,CRSTIMER:5,CRSRES:5,CRSRTI:1,CRSCOMM:2

$crsctl debug log css [module:level]{,module:level} …



– Turns on debugging for CSS

$crsctl debug log css CSSD:1

$crsctl debug log evm [module:level]{,module:level} …



– Turns on debugging for EVM

$crsctl debug log evm EVMCOMM:1

$crsctl debug trace crs — dumps CRS in-memory tracing cache

$crsctl debug trace css — dumps CSS in-memory tracing cache

$crsctl debug trace evm — dumps EVM in-memory tracing cache

$crsctl debug log res resource_name:level — turns on debugging for resources

$crsctl debug log res "ora.lnx04.vip:1"

$crsctl trace all_the_above_commands — tracing by adding a "trace" argument.

$crsctl trace check css


====================================================


Dual Environment CRSCTL Commands:

crsctl add resource

crsctl add type

crsctl check css

crsctl delete resource

crsctl delete type

crsctl get hostname

crsctl getperm resource

crsctl getperm type

crsctl modify resource

crsctl modify type

crsctl setperm resource

crsctl setperm type

crsctl start resource

crsctl status resource

crsctl status type

crsctl stop resource


Oracle RAC Environment CRSCTL Commands:

crsctl add crs administrator

crsctl add css votedisk

crsctl add serverpool

crsctl check cluster

crsctl check crs

crsctl check resource

crsctl check ctss

crsctl config crs

crsctl delete crs administrator

crsctl delete css votedisk

crsctl delete node

crsctl delete serverpool

crsctl disable crs

crsctl enable crs

crsctl get css

crsctl get css ipmiaddr

crsctl get nodename

crsctl getperm serverpool

crsctl lsmodules

crsctl modify serverpool

crsctl pin css

crsctl query crs administrator

crsctl query crs activeversion

crsctl query crs releaseversion

crsctl query crs softwareversion

crsctl query css ipmidevice

crsctl query css votedisk

crsctl relocate resource

crsctl relocate server

crsctl replace discoverystring

crsctl replace votedisk

crsctl set css

crsctl set css ipmiaddr

crsctl set css ipmiadmin

crsctl setperm serverpool

crsctl start cluster

crsctl start crs

crsctl status server

crsctl status serverpool

crsctl stop cluster

crsctl stop crs

crsctl unpin css

crsctl unset css


Oracle Restart Environment CRSCTL Commands:

crsctl check has

crsctl config has

crsctl disable has

crsctl enable has

crsctl query has releaseversion

crsctl query has softwareversion

crsctl start has

crsctl stop has


Deprecated in Oracle Clusterware 11g release 2 (11.2):

crs_stat

crs_register

crs_unregister

crs_start

crs_stop

crs_getperm

crs_profile

crs_relocate

crs_setperm

crsctl check crsd

crsctl check cssd

crsctl check evmd

crsctl debug log

crsctl set css votedisk

crsctl start resources

crsctl stop resources

====================================================


crsctl – Cluster Ready Service Control


To Get CRSCTL  help

$crsctl -h


To Get Active Cluster Version


$crsctl query crs activeversion


To Get cluster version of given node


$crsctl query crs softwareversion [node_name]


Ex:$crsctl query crs softwareversion rac1


#crsctl start crs


#crsctl stop crs


(or)


#/etc/init.d/init.crs start


#/etc/init.d/init.crs stop


#crsctl enable crs


#crsctl disable crs


(or)


#/etc/init.d/init.crs enable


#/etc/init.d/init.crs disable


To get cluster stack health status


$crsctl check crs


To get the viability of CSS across nodes


$crsctl check cluster [-node node_name]   — 11gR2 RAC command


#crsctl start cluster -n HostName — 11g R2


#crsctl stop cluster -n HostName — 11g R2


#crsctl stop cluster -all  — 11g R2


$ps -ef | grep d.bin


$crsctl check cssd


$crsctl check crsd


$crsctl check evmd


$crsctl check oprocd


$crsctl check ctss


#/etc/init.d/init.cssd stop


#/etc/init.d/init.cssd start


#/etc/rc.d/init.d/init.evmd

#/etc/rc.d/init.d/init.cssd

#/etc/rc.d/init.d/init.crsd

#mv /etc/rc3.d/S96init.cssd /etc/rc3.d/_S96init.cssd — to stop cssd from autostarting after reboot


#crsctl check css votedisk


#crsctl query css votedisk — lists the voting disks used by CSS command in 11gR2


#crsctl add css votedisk PATH


#crsctl add css votedisk PATH -force — if Clusterware is not running


#crsctl delete css votedisk PATH


#crsctl delete css votedisk PATH -force — if Clusterware is not running


#crsctl set css parameter_name value — set parameters on OCR


#crsctl set css misscount 100


#crsctl unset css parameter_name — sets CSS parameter to its default


#crsctl unset css misscount


#crsctl get css parameter_name — gets the value of a CSS parameter


#crsctl get css disktimeout


#crsctl get css misscount


#crsctl get css reboottime


#crsctl start resources — starts Clusterware resources


#crsctl start resource ora.DATA.dg

#crsctl stop resources — stops Clusterware resources


$crsctl status resource


$crsctl status resource -t


$crsctl stat resource -t


#crsctl lsmodules crs — lists CRS modules that can be used for debugging (needs root privilege)


List CRSD Debug Module: AGENT
List CRSD Debug Module: AGFW
List CRSD Debug Module: CLSFRAME
List CRSD Debug Module: CLSVER
List CRSD Debug Module: CLUCLS
List CRSD Debug Module: COMMCRS
List CRSD Debug Module: COMMNS
List CRSD Debug Module: CRSAPP
List CRSD Debug Module: CRSCCL
List CRSD Debug Module: CRSCEVT
List CRSD Debug Module: CRSCOMM
List CRSD Debug Module: CRSD
List CRSD Debug Module: CRSEVT
List CRSD Debug Module: CRSMAIN
List CRSD Debug Module: CRSOCR
List CRSD Debug Module: CRSPE
List CRSD Debug Module: CRSPLACE
List CRSD Debug Module: CRSRES
List CRSD Debug Module: CRSRPT
List CRSD Debug Module: CRSRTI
List CRSD Debug Module: CRSSE
List CRSD Debug Module: CRSSEC
List CRSD Debug Module: CRSTIMER
List CRSD Debug Module: CRSUI
List CRSD Debug Module: CSSCLNT
List CRSD Debug Module: OCRAPI
List CRSD Debug Module: OCRASM
List CRSD Debug Module: OCRCAC
List CRSD Debug Module: OCRCLI
List CRSD Debug Module: OCRMAS
List CRSD Debug Module: OCRMSG
List CRSD Debug Module: OCROSD
List CRSD Debug Module: OCRRAW
List CRSD Debug Module: OCRSRV
List CRSD Debug Module: OCRUTL
List CRSD Debug Module: SuiteTes
List CRSD Debug Module: UiServer


$crsctl lsmodules css — lists CSS modules that can be used for debugging

The following are the Cluster Synchronization Services modules:

CSSD
COMMCRS
COMMNS
CLSF
SKGFD

$crsctl lsmodules evm — lists EVM modules that can be used for debugging

The following are the Event Manager modules:

CSSD
COMMCRS
COMMNS
CLSF
SKGFD

$crsctl start has   (HAS – High Availability Services)

$crsctl stop has


$crsctl check has


CRS-4638 Oracle High Availability Service is online


OCR Modules — cannot be listed with crsctl lsmodules command


OCRAPI


OCRCLI


OCRSRV


OCRMAS


OCRMSG


OCRCAC


OCRRAW


OCRUTL


OCROSD


#crsctl debug statedump crs — dumps state info for crs objects


#crsctl debug statedump css — dumps state info for css objects


#crsctl debug statedump evm — dumps state info for evm objects


#crsctl debug log crs [module:level]{,module:level} …


— Turns on debugging for CRS


#crsctl debug log crs CRSEVT:5,CRSAPP:5,CRSTIMER:5,CRSRES:5,CRSRTI:1,CRSCOMM:2


#crsctl debug log css [module:level]{,module:level} …


— Turns on debugging for CSS


#crsctl debug log css CSSD:1


#crsctl debug log evm [module:level]{,module:level} …


— Turns on debugging for EVM


#crsctl debug log evm EVMCOMM:1


#crsctl debug trace crs — dumps CRS in-memory tracing cache


#crsctl debug trace css — dumps CSS in-memory tracing cache


#crsctl debug trace evm — dumps EVM in-memory tracing cache


#crsctl debug log res resource_name:level — turns on debugging for resources


#crsctl debug log res "ora.lnx04.vip:1"


#crsctl trace all_the_above_commands — tracing by adding a "trace" argument.


#crsctl trace check css


#crsctl backup -h


#crsctl backup css votedisk


Here is the list of the options for CRSCTL in 11gR2:


crsctl add       – add a resource, type or other entity
crsctl backup    – back up voting disk for CSS
crsctl check     – check a service, resource or other entity
crsctl config    – output autostart configuration
crsctl debug     – obtain or modify debug state
crsctl delete    – delete a resource, type or other entity
crsctl disable   – disable autostart
crsctl discover  – discover DHCP server
crsctl enable    – enable autostart
crsctl get       – get an entity value
crsctl getperm   – get entity permissions
crsctl lsmodules – list debug modules
crsctl modify    – modify a resource, type or other entity
crsctl query     – query service state
crsctl pin       – pin the nodes in the nodelist
crsctl relocate  – relocate a resource, server or other entity
crsctl replace   – replace the location of voting files
crsctl release   – release a DHCP lease
crsctl request   – request a DHCP lease
crsctl setperm   – set entity permissions
crsctl set       – set an entity value
crsctl start     – start a resource, server or other entity
crsctl status    – get status of a resource or other entity
crsctl stop      – stop a resource, server or other entity
crsctl unpin     – unpin the nodes in the nodelist
crsctl unset     – unset an entity value, restoring its default


How do I identify the voting disk/file location?

#crsctl query css votedisk


How to take backup of voting file/disk?

#crsctl backup css votedisk


#################################################

Sunday, 9 October 2022

Expdp - AWR dump example awrextr.sql


SQL> start $ORACLE_HOME/rdbms/admin/awrextr.sql

~~~~~~~~~~~~~

AWR EXTRACT

~~~~~~~~~~~~~

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

~  This script will extract the AWR data for a range of snapshots  ~

~  into a dump file.  The script will prompt users for the         ~

~  following information:                                          ~

~     (1) database id                                              ~

~     (2) snapshot range to extract                                ~

~     (3) name of directory object                                 ~

~     (4) name of dump file                                        ~

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~



Databases in this Workload Repository schema

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


   DB Id     DB Name      Host

------------ ------------ ------------

* 123456 Test_db      Test_hostname


The default database id is the local one: '123456'.  To use this

database id, press <return> to continue, otherwise enter an alternative.


Enter value for dbid: 123456


Using 123456 for Database ID



Specify the number of days of snapshots to choose from

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Entering the number of days (n) will result in the most recent

(n) days of snapshots being listed.  Pressing <return> without

specifying a number lists all completed snapshots.



Enter value for num_days: 2


Listing the last 2 days of Completed Snapshots


DB Name        Snap Id    Snap Started

------------ --------- ------------------

Test_db          44424 07 Sep 2022 00:00

                 44425 07 Sep 2022 01:00

                 44426 07 Sep 2022 02:00

                 44427 07 Sep 2022 03:00

                 44428 07 Sep 2022 04:00

                 44429 07 Sep 2022 05:01

                 44430 07 Sep 2022 06:00

                 44431 07 Sep 2022 07:00

                 44432 07 Sep 2022 08:00

                 44433 07 Sep 2022 09:00

                 44434 07 Sep 2022 10:00


DB Name        Snap Id    Snap Started

------------ --------- ------------------

Test_db          44435 07 Sep 2022 11:00

                 44436 07 Sep 2022 12:00

                 44437 07 Sep 2022 13:00

                 44438 07 Sep 2022 14:00

                 44439 07 Sep 2022 15:00

                 44440 07 Sep 2022 16:00

                 44441 07 Sep 2022 17:00

                 44442 07 Sep 2022 18:00

                 44443 07 Sep 2022 19:00

                 44444 07 Sep 2022 20:00

                 44445 07 Sep 2022 21:00


DB Name        Snap Id    Snap Started

------------ --------- ------------------

Test_db          44446 07 Sep 2022 22:00

                 44447 07 Sep 2022 23:00

                 44448 08 Sep 2022 00:00

                 44449 08 Sep 2022 01:00

                 44450 08 Sep 2022 02:00

                 44451 08 Sep 2022 03:00

                 44452 08 Sep 2022 04:00

                 44453 08 Sep 2022 05:00

                 44454 08 Sep 2022 06:00

                 44455 08 Sep 2022 07:00

                 44456 08 Sep 2022 08:00


DB Name        Snap Id    Snap Started

------------ --------- ------------------

Test_db          44457 08 Sep 2022 09:00

                 44458 08 Sep 2022 10:00

                 44459 08 Sep 2022 11:00

                 44460 08 Sep 2022 12:00

                 44461 08 Sep 2022 13:00

                 44462 08 Sep 2022 14:00

                 44463 08 Sep 2022 15:00

                 44464 08 Sep 2022 16:00

                 44465 08 Sep 2022 17:00

                 44466 08 Sep 2022 18:00



Specify the Begin and End Snapshot Ids

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Enter value for begin_snap: 44457

Begin Snapshot Id specified: 44457


Enter value for end_snap: 44466

End   Snapshot Id specified: 44466



Specify the Directory Name

~~~~~~~~~~~~~~~~~~~~~~~~~~


Directory Name                 Directory Path
------------------------------ -------------------------------------------------
DATA_PUMP_DIR                  <location>/Test_dbdump/exp


Choose a Directory Name from the above list (case-sensitive).


Enter value for directory_name: DATA_PUMP_DIR


Using the dump directory: DATA_PUMP_DIR


Specify the Name of the Extract Dump File

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The prefix for the default dump file name is awrdat_44457_44466.

To use this name, press <return> to continue, otherwise enter

an alternative.


Enter value for file_name: awrdat_44457_44466


Using the dump file prefix: awrdat_44457_44466

|

| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

|  The AWR extract dump file will be located

|  in the following directory/file:

|   <location>/Test_dbdump/exp

|   awrdat_44457_44466.dmp

| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

|

|  *** AWR Extract Started ...

|

|  This operation will take a few moments. The

|  progress of the AWR extract operation can be

|  monitored in the following directory/file:

|   <location>/Test_dbdump/exp

|   awrdat_44457_44466.log

|


End of AWR Extract

SQL> exit

Impdp example - using ATTACH to increase parallelism from 10 to 17

[oracle@TEST_hostname <Dir>]$ impdp attach=SYS_IMPORT_SCHEMA_05




Import: Release 19.0.0.0.0 - Production on Tue Aug 23 10:08:49 2022

Version 19.16.0.0.0


Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.


Username: / as sysdba


Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production


Job: SYS_IMPORT_SCHEMA_05

  Owner: SYS

  Operation: IMPORT

  Creator Privs: TRUE

  

  Start Time: Monday, 22 August, 2022 8:18:03

  Mode: SCHEMA

  Instance: TEST_DB

  Max Parallelism: 10

  Timezone: -05:00

  Timezone version: 32

  Endianness: LITTLE

  NLS character set: AL32UTF8

  NLS NCHAR character set: AL16UTF16

  EXPORT Job Parameters:

  Parameter Name      Parameter Value:

     CLIENT_COMMAND        "/******** AS SYSDBA" parfile=expdp_.par

     LOGTIME               ALL

     METRICS               1

     TRACE                 0

  IMPORT Job Parameters:

  Parameter Name      Parameter Value:

     CLIENT_COMMAND        "/******** AS SYSDBA" parfile=impdp_.par

     LOGTIME               ALL

     METRICS               1

     TRACE                 0

  State: EXECUTING

  Bytes Processed: 107,367,434,768

  Percent Done: 99

  Current Parallelism: 10

  Job Error Count: 0

  Job heartbeat: 1511

  Dump File: <Directory>%u.dmp

  Dump File: <Directory>01.dmp

  Dump File: <Directory>02.dmp

  Dump File: <Directory>03.dmp

  Dump File: <Directory>04.dmp

  Dump File: <Directory>05.dmp

  Dump File: <Directory>06.dmp

  Dump File: <Directory>07.dmp

  Dump File: <Directory>08.dmp

  Dump File: <Directory>09.dmp

  Dump File: <Directory>10.dmp

  Dump File: <Directory>11.dmp

  Dump File: <Directory>12.dmp

  Dump File: <Directory>13.dmp

  Dump File: <Directory>14.dmp

  Dump File: <Directory>15.dmp

  Dump File: <Directory>16.dmp

  Dump File: <Directory>17.dmp


Worker 1 Status:

  Instance ID: 1

  Instance name: TEST_DB

  Host name: TEST_hostname

  Access method: direct_path

  Object start time: Monday, 22 August, 2022 8:51:57

  Object status at: Monday, 22 August, 2022 8:51:57

  Process Name: DW00

  State: WORK WAITING


Worker 2 Status:

  Instance ID: 1

  Instance name: TEST_DB

  Host name: TEST_hostname

  Access method: direct_path

  Object start time: Monday, 22 August, 2022 8:51:59

  Object status at: Monday, 22 August, 2022 8:51:59

  Process Name: DW01

  State: WORK WAITING


Worker 3 Status:

  Instance ID: 1

  Instance name: TEST_DB

  Host name: TEST_hostname

  Access method: direct_path

  Object start time: Monday, 22 August, 2022 8:50:09

  Object status at: Monday, 22 August, 2022 8:50:09

  Process Name: DW02

  State: WORK WAITING


Worker 4 Status:

  Instance ID: 1

  Instance name: TEST_DB

  Host name: TEST_hostname

  Object start time: Monday, 22 August, 2022 8:18:22

  Object status at: Monday, 22 August, 2022 8:18:22

  Process Name: DW03

  State: EXECUTING

  Object Schema: VRAADMIN

  Object Name: PROPERTY_VALUES

  Object Type: SCHEMA_EXPORT/TABLE/TABLE_DATA

  Completed Objects: 7

  Completed Rows: 3,424,119

  Completed Bytes: 305,046,291,952

  Percent Done: 63

  Worker Parallelism: 8


Worker 5 Status:

  Instance ID: 1

  Instance name: TEST_DB

  Host name: TEST_hostname

  Access method: direct_path

  Object start time: Monday, 22 August, 2022 8:50:14

  Object status at: Monday, 22 August, 2022 8:50:14

  Process Name: DW04

  State: WORK WAITING


Worker 6 Status:

  Instance ID: 1

  Instance name: TEST_DB

  Host name: TEST_hostname

  Access method: direct_path

  Object start time: Monday, 22 August, 2022 8:58:33

  Object status at: Monday, 22 August, 2022 8:58:33

  Process Name: DW05

  State: WORK WAITING


Worker 7 Status:

  Instance ID: 1

  Instance name: TEST_DB

  Host name: TEST_hostname

  Access method: direct_path

  Object start time: Monday, 22 August, 2022 8:50:14

  Object status at: Monday, 22 August, 2022 8:50:14

  Process Name: DW06

  State: WORK WAITING


Worker 8 Status:

  Instance ID: 1

  Instance name: TEST_DB

  Host name: TEST_hostname

  Access method: direct_path

  Object start time: Monday, 22 August, 2022 8:51:57

  Object status at: Monday, 22 August, 2022 8:51:57

  Process Name: DW07

  State: WORK WAITING


Worker 9 Status:

  Instance ID: 1

  Instance name: TEST_DB

  Host name: TEST_hostname

  Access method: direct_path

  Object start time: Monday, 22 August, 2022 8:51:59

  Object status at: Monday, 22 August, 2022 8:51:59

  Process Name: DW08

  State: WORK WAITING


Worker 10 Status:

  Instance ID: 1

  Instance name: TEST_DB

  Host name: TEST_hostname

  Access method: direct_path

  Object start time: Monday, 22 August, 2022 8:50:09

  Object status at: Monday, 22 August, 2022 8:50:09

  Process Name: DW09

  State: WORK WAITING


Import> parallel=17


Import> status


Job: SYS_IMPORT_SCHEMA_05

  Operation: IMPORT

  Mode: SCHEMA

  State: EXECUTING

  Bytes Processed: 107,367,434,768

  Percent Done: 99

  Current Parallelism: 17

  Job Error Count: 0

  Job heartbeat: 2

  Dump File: <Directory>%u.dmp

  Dump File: <Directory>01.dmp

  Dump File: <Directory>02.dmp

  Dump File: <Directory>03.dmp

  Dump File: <Directory>04.dmp

  Dump File: <Directory>05.dmp

  Dump File: <Directory>06.dmp

  Dump File: <Directory>07.dmp

  Dump File: <Directory>08.dmp

  Dump File: <Directory>09.dmp

  Dump File: <Directory>10.dmp

  Dump File: <Directory>11.dmp

  Dump File: <Directory>12.dmp

  Dump File: <Directory>13.dmp

  Dump File: <Directory>14.dmp

  Dump File: <Directory>15.dmp

  Dump File: <Directory>16.dmp

  Dump File: <Directory>17.dmp


Worker 1 Status:

  Instance ID: 1

  Instance name: TEST_DB

  Host name: TEST_hostname

  Access method: direct_path

  Object start time: Monday, 22 August, 2022 8:51:57

  Object status at: Monday, 22 August, 2022 8:51:57

  Process Name: DW00

  State: WORK WAITING


Worker 2 Status:

  Instance ID: 1

  Instance name: TEST_DB

  Host name: TEST_hostname

  Access method: direct_path

  Object start time: Monday, 22 August, 2022 8:51:59

  Object status at: Monday, 22 August, 2022 8:51:59

  Process Name: DW01

  State: WORK WAITING


Worker 3 Status:

  Instance ID: 1

  Instance name: TEST_DB

  Host name: TEST_hostname

  Access method: direct_path

  Object start time: Monday, 22 August, 2022 8:50:09

  Object status at: Monday, 22 August, 2022 8:50:09

  Process Name: DW02

  State: WORK WAITING


Worker 4 Status:

  Instance ID: 1

  Instance name: TEST_DB

  Host name: TEST_hostname

  Object start time: Monday, 22 August, 2022 8:18:22

  Object status at: Monday, 22 August, 2022 8:18:22

  Process Name: DW03

  State: EXECUTING

  Object Schema: VRAADMIN

  Object Name: PROPERTY_VALUES

  Object Type: SCHEMA_EXPORT/TABLE/TABLE_DATA

  Completed Objects: 7

  Completed Rows: 3,424,119

  Completed Bytes: 305,046,291,952

  Percent Done: 63

  Worker Parallelism: 8


Worker 5 Status:

  Instance ID: 1

  Instance name: TEST_DB

  Host name: TEST_hostname

  Access method: direct_path

  Object start time: Monday, 22 August, 2022 8:50:14

  Object status at: Monday, 22 August, 2022 8:50:14

  Process Name: DW04

  State: WORK WAITING


Worker 6 Status:

  Instance ID: 1

  Instance name: TEST_DB

  Host name: TEST_hostname

  Access method: direct_path

  Object start time: Monday, 22 August, 2022 8:58:33

  Object status at: Monday, 22 August, 2022 8:58:33

  Process Name: DW05

  State: WORK WAITING


Worker 7 Status:

  Instance ID: 1

  Instance name: TEST_DB

  Host name: TEST_hostname

  Access method: direct_path

  Object start time: Monday, 22 August, 2022 8:50:14

  Object status at: Monday, 22 August, 2022 8:50:14

  Process Name: DW06

  State: WORK WAITING


Worker 8 Status:

  Instance ID: 1

  Instance name: TEST_DB

  Host name: TEST_hostname

  Access method: direct_path

  Object start time: Monday, 22 August, 2022 8:51:57

  Object status at: Monday, 22 August, 2022 8:51:57

  Process Name: DW07

  State: WORK WAITING


Worker 9 Status:

  Instance ID: 1

  Instance name: TEST_DB

  Host name: TEST_hostname

  Access method: direct_path

  Object start time: Monday, 22 August, 2022 8:51:59

  Object status at: Monday, 22 August, 2022 8:51:59

  Process Name: DW08

  State: WORK WAITING


Worker 10 Status:

  Instance ID: 1

  Instance name: TEST_DB

  Host name: TEST_hostname

  Access method: direct_path

  Object start time: Monday, 22 August, 2022 8:50:09

  Object status at: Monday, 22 August, 2022 8:50:09

  Process Name: DW09

  State: WORK WAITING


Import>


Cursor_Sharing and Histograms



PROFESSIONAL DBA

Oracle12c Database Administration

Performance Tuning

SQL TUNING

SQL Statement Processing

Processing Phases

The four most important phases in SQL statement processing are

• Parsing

• Binding

• Executing and

• Fetching

Note

The Fetch phase applies only to queries.

DataSphere

Parse Phase

• Checks the shared pool for the statement.

• Checks the syntax and specification of the statement.

• Checks the semantics of the statement and ensures that the required objects are valid.

• Checks the privileges needed to process the statement.

• If required, transforms the statement into an equivalent statement, for example by rewriting a statement on a view or a subquery.

• Builds the execution plan if required.

• Stores the execution plan or uses an existing execution plan.

Bind Phase

• Checks for references of bind variables in the statement.

• A value is assigned or reassigned to each variable.

Binding Considerations

The bind variable values are not known to the optimizer while it builds the execution plan.

Advantage

Rebinding and executing without reparsing is faster, which saves time and memory.

Disadvantage

The optimizer cannot estimate predicate selectivity.


Execute Phase

• The parse tree is used to access the data buffers.

• The same parse tree can be used by multiple users.

• Physical reads, logical reads, and (for DML statements) writes are performed.

Fetch Phase

If the statement is a SELECT, the data is fetched.

Typically multiple rows are retrieved using an array fetch.

V$LIBRARYCACHE view

The V$LIBRARYCACHE view shows the amount of caching of SQL statements.

SQL>select gethitratio, pinhitratio

from v$librarycache

where namespace = 'SQL AREA';
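GETHITRATIO is derived as gethits divided by gets for the namespace. A minimal shell sketch of that arithmetic, with made-up sample numbers (not from a live database):

```shell
# Sample library cache counters (illustrative values only)
gets=10000
gethits=9950

# GETHITRATIO = gethits / gets
gethitratio=$(awk -v g="$gets" -v h="$gethits" 'BEGIN { printf "%.3f", h/g }')
echo "$gethitratio"
```

A ratio close to 1 indicates most parse lookups are being satisfied from the shared pool.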

V$SQLAREA view

The V$SQLAREA view provides the SQL statements and their usage.

SQL>select sql_text, version_count, loads, invalidations, parse_calls, sorts

from v$sqlarea

where parsing_user_id > 0

and command_type = 3  -- 3 = SELECT statements

order by sql_text;


Cursor Sharing

Whenever a statement is issued, the optimizer checks the shared SQL area for the

statement.

If the statement is present in the SQL area then parsing is avoided by using the

existing cursor.

Cursor Sharing Benefits

• Avoids parsing and thus saves time.

• Based on the statement executed, the memory is dynamically adjusted.

• Memory usage and performance improve dramatically.

Which statements can share a cursor

SQL statements can share a cursor when they have identical elements such as:

• Text

▪ Uppercase and lowercase

▪ White space like spaces, tabs, carriage returns

▪ Comments

• Referenced objects

◦ The SQL statements must resolve to the same referenced objects.

• Data types of the bind variables

◦ SQL statements must use the same type of bind variables.
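Why byte-identical text matters can be sketched in shell: two statements differing only in a literal hash to different texts and cannot share a cursor under EXACT, while literal replacement (what CURSOR_SHARING=FORCE does conceptually) makes them shareable. The `:SYS_B_0` name mirrors Oracle's system bind naming; the `sed` below is purely an illustration, not the real mechanism.

```shell
# Two statements that differ only in the literal value
s1="select * from customers where state_id = 'KA'"
s2="select * from customers where state_id = 'TN'"

# Conceptual literal replacement: swap quoted literals for a system bind
normalize() { printf '%s' "$1" | sed -E "s/'[^']*'/:SYS_B_0/g"; }

n1=$(normalize "$s1")
n2=$(normalize "$s2")
[ "$s1" = "$s2" ] || echo "different texts: no sharing under EXACT"
[ "$n1" = "$n2" ] && echo "same text after literal replacement: sharable"
```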


Cursor_Sharing parameter

Cursor_Sharing parameter values are

• Exact

• Similar

• Force

EXACT

EXACT is the default value.

It forces the parser to reuse an existing cursor only if the statement is identical in all aspects.

SIMILAR

The parser is allowed to reuse a cursor whose statement is identical except for literal values.

After the statement is identified, the parser checks that the execution plan is applicable to the statement.

FORCE

FORCE is the same as SIMILAR, except that the execution plan is always reused, regardless of its applicability.

Note

CURSOR_SHARING = SIMILAR or FORCE is not recommended for complex queries, decision support systems, or data warehousing environments. (SIMILAR is also deprecated in later Oracle releases.)


Histograms

The optimizer must estimate the number of rows processed by a given query.

By default the optimizer assumes that the data is evenly distributed.

The accuracy of its estimates depends on its knowledge of the data distribution.

Histograms provide accurate data distribution information to the optimizer.

When to Use Histograms

• Use histograms when the data distribution is uneven (skewed).

• If the column is used in a WHERE clause.

• If the column is non-unique and is used with predicates other than equality.

Avoid using histograms whenever

• All predicates on the column use bind variables

• The column data is uniformly distributed

Popular and Nonpopular Values

Popular Value

A popular value appears multiple times as a bucket endpoint value.

Nonpopular Value

If a value appears only once as an endpoint, or does not appear at all, it is considered a nonpopular value.

Density

Density is calculated as 1 divided by NDV, the Number of Distinct Values.

This avoids the optimizer bias that repeated popular values would otherwise cause.
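A quick sketch of the density formula with a sample value (NDV chosen for illustration):

```shell
# density = 1 / NDV; a column with 50 distinct values gets density 0.02,
# i.e. an equality predicate is expected to select about 2% of the rows.
NDV=50
density=$(awk -v n="$NDV" 'BEGIN { printf "%.2f", 1/n }')
echo "$density"
```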


Types of Histograms

Frequency Histogram

In a Frequency Histogram, each bucket stores a distinct column value.

Because the row count varies per distinct value, the buckets contain different numbers of rows.

The Frequency Histogram is used when the NDV (Number of Distinct Values) is less than or equal to n, the number of buckets.

Default and maximum number of buckets is 254.

Note

From Oracle 12c, a Frequency Histogram is captured by default if AUTO_SAMPLE_SIZE is used while gathering statistics.

Top Frequency Histogram

It is the same as a Frequency Histogram except that it ignores nonpopular values that are statistically insignificant.

The Top Frequency Histogram is used when

1. NDV (Number of Distinct Values) is greater than n, the number of buckets.

2. The percentage of rows occupied by the top n frequent values is equal to or greater than p, the threshold, where p = (1 - 1/n) * 100.

3. If AUTO_SAMPLE_SIZE is used for Estimate_Percent while gathering

statistics.
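The threshold formula p = (1 - 1/n) * 100 can be evaluated directly; with the default/maximum bucket count n = 254:

```shell
# Top Frequency threshold for n buckets: p = (1 - 1/n) * 100
n=254
p=$(awk -v n="$n" 'BEGIN { printf "%.2f", (1 - 1/n) * 100 }')
echo "$p"
```

So with 254 buckets the top 254 values must account for roughly 99.61% of the rows before a Top Frequency Histogram is chosen.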


Height Balanced Histogram

This is a legacy histogram type.

In height balanced histograms, each bucket contains roughly the same number of rows.

The rows are distributed across the available number of buckets.

If many rows contain the same value, that value may be spread across multiple buckets to balance bucket height.

The Height Balanced Histogram is used when

• NDV - Number of Distinct values are greater than n – number of buckets.

• If AUTO_SAMPLE_SIZE is NOT used for Estimate_Percent while gathering

statistics.

Gathering Histogram

Using Analyze

• Generate statistics for the CUSTOMERS table and for the STATE_ID column

by specifying maximum 50 buckets.

SQL>analyze table customers compute statistics

for table for columns STATE_ID

size 50 ;

• Compute the statistics in Column Level for a column without specifying the

number of buckets.

SQL> analyze table customers compute statistics

for columns STATE_ID ;


Using DBMS_STATS

• Generate statistics for the CUSTOMERS table and for the STATE_ID column

by specifying maximum 50 buckets.

SQL> EXECUTE DBMS_STATS.GATHER_TABLE_STATS

('RAM','CUSTOMERS', METHOD_OPT =>

'FOR COLUMNS SIZE 50 STATE_ID') ;

• The SIZE specifies the maximum number of buckets for the histogram.

or

SQL> EXECUTE DBMS_STATS.GATHER_TABLE_STATS

(ownname => 'RAM',

tabname => 'CUSTOMERS',

METHOD_OPT => 'FOR COLUMNS STATE_ID SIZE 50') ;

or

SQL> EXECUTE DBMS_STATS.GATHER_TABLE_STATS

(ownname => 'RAM',

tabname => 'CUSTOMERS',

METHOD_OPT => 'FOR COLUMNS STATE_ID SIZE 10',

estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE ) ;


Histogram Consideration

• Use the FOR ALL INDEXED COLUMNS option.

• If data distribution is not static then gather histograms frequently.

• Do not gather histograms for columns using bind variables in WHERE clauses.

• Avoid using histograms unless they improve performance.

• Histograms are stored in the data dictionary and need additional storage.

VIEWS

SQL> Select table_name, column_name, num_distinct, histogram

From User_Tab_Col_Statistics

Where table_name = 'CUSTOMERS'

and column_name = 'STATE_ID';

SQL> Select endpoint_number, endpoint_value

From user_histograms

Where table_name = 'CUSTOMERS'

and column_name = 'STATE_ID';



Eg.

exec dbms_stats.gather_dictionary_stats;

 

exec dbms_stats.gather_fixed_objects_stats;


exec dbms_stats.gather_database_stats(cascade => TRUE, method_opt => 'FOR ALL COLUMNS SIZE SKEWONLY', degree => 18 );


Tablespace - Datafile

 

Tablespace - Datafile  details query

set lines 200

col FILE_NAME for a60

select FILE_NAME,TABLESPACE_NAME,bytes/1024/1024/1024 from DBA_DATA_FILES where TABLESPACE_NAME='<Tablespace_name>';



col FILE_NAME for a60


select FILE_NAME,TABLESPACE_NAME,BYTES/1024/1024/1024 "IN GB" from dba_temp_files where TABLESPACE_NAME='<Tablespace_name>';
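The BYTES/1024/1024/1024 expression in the queries above is a plain bytes-to-GB conversion; a sketch with a sample 10 GB datafile size:

```shell
# Convert a byte count to GB, the same way the SQL does
bytes=10737418240
gb=$(awk -v b="$bytes" 'BEGIN { printf "%d", b/1024/1024/1024 }')
echo "$gb GB"
```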

ROLES Query Examples


set lines 200 pages 1000 ver off

col grantee for a30

col role for a16

col type for a10

col pv for a80 hea 'PRIVILEGE OR ROLE'

bre on role on type skip 1

define usercheck ='<user_name>'

select grantee, 'ROL' type, granted_role pv

from dba_role_privs where grantee = '&usercheck' union

select grantee, 'PRV' type, privilege pv

from dba_sys_privs where grantee = '&usercheck' union

select grantee, 'OBJ' type,

max(decode(privilege,'WRITE','WRITE,'))||max(decode(privilege,'READ','READ'))||

max(decode(privilege,'EXECUTE','EXECUTE'))||max(decode(privilege,'SELECT','SELECT'))||

max(decode(privilege,'DELETE',',DELETE'))||max(decode(privilege,'UPDATE',',UPDATE'))||

max(decode(privilege,'INSERT',',INSERT'))||' ON '||object_type||'  "'||a.owner||'.'||table_name||'"' pv

from dba_tab_privs a, dba_objects b

where a.owner=b.owner and a.table_name = b.object_name and a.grantee='&usercheck'

group by a.owner,table_name,object_type,grantee union

select username grantee, '---' type, 'empty user ---' pv from dba_users

where not username in (select distinct grantee from dba_role_privs) and

not username in (select distinct grantee from dba_sys_privs) and

not username in (select distinct grantee from dba_tab_privs) and username like  '%&usercheck%'

group by username

order by grantee, type, pv;

SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT','SPDPAMF') from dual;

set heading off;
set echo off;
set pagesize 0;      
set long 99999;      
set linesize 32767;  
set trimspool on;   
SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT','SUDAWA2') from dual;

SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT','SPDPAME') from dual;


col GRANTEE for a40
col OWNER for a30
col TABLE_NAME for a30
col GRANTOR for a30
col PRIVILEGE for a30
set lines 200
SELECT * FROM DBA_TAB_PRIVS  WHERE GRANTEE IN (SELECT granted_role FROM DBA_ROLE_PRIVS WHERE GRANTEE = 'E593469');

SQL> SET lines 100
COL privilege FOR a50
SELECT grantee, granted_role, default_role
FROM dba_role_privs
WHERE granted_role IN ('DBA', 'EXP_FULL_DATABASE', 'IMP_FULL_DATABASE')
ORDER BY 1,2; 

GRANTEE                                  GRANTED_ROLE                                                                                                                     DEF
---------------------------------------- -------------------------------------------------------------------------------------------------------------------------------- ---
DATAPUMP_EXP_FULL_DATABASE               EXP_FULL_DATABASE                                                                                                                YES
DATAPUMP_IMP_FULL_DATABASE               EXP_FULL_DATABASE                                                                                                                YES
DATAPUMP_IMP_FULL_DATABASE               IMP_FULL_DATABASE                                                                                                                YES
DBA                                      EXP_FULL_DATABASE                                                                                                                YES
DBA                                      IMP_FULL_DATABASE                                                                                                                YES
DBA_ROLE                                 DBA                                                                                                                              YES
DBMAINT                                  DBA                                                                                                                              YES
EDP_SUPPORT_UPDATE_ROLE                  EXP_FULL_DATABASE                                                                                                                YES
EDP_SUPPORT_UPDATE_ROLE                  IMP_FULL_DATABASE                                                                                                                YES
IOC_ROLE                                 EXP_FULL_DATABASE                                                                                                                YES
IOC_ROLE                                 IMP_FULL_DATABASE                                                                                                                YES

GRANTEE                                  GRANTED_ROLE                                                                                                                     DEF
---------------------------------------- -------------------------------------------------------------------------------------------------------------------------------- ---
SYS                                      DBA                                                                                                                              YES
SYS                                      EXP_FULL_DATABASE                                                                                                                YES
SYS                                      IMP_FULL_DATABASE                                                                                                                YES
SYSTEM                                   DBA          

SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT','LINCPAM_DMM_UPDATE_ROLE') from dual;

SELECT dbms_metadata.get_ddl('USER','SCOTT') FROM dual;

SELECT DBMS_METADATA.GET_GRANTED_DDL(
  2  'SYSTEM_GRANT','RESOURCE') from dual;
  
  select granted_role,admin_option,default_role from dba_role_privs where grantee='20';
  
  select * from dba_role_privs where grantee='20';
  
  select * from dba_sys_privs where grantee='20';
  
select * from USER_ROLE_PRIVS where USERNAME='<user_name>';
select * from USER_TAB_PRIVS where GRANTEE='<user_name>';
select * from USER_SYS_PRIVS where USERNAME='<user_name>';

1. Generate the role creation script on the source:
select 'create role '||role||';' from dba_roles;
2. Then generate a DDL script from the dump for grants only:
include=grant
sqlfile=grant.sql

Profile example

 SQL> set lines 200

SQL> select * from dba_profiles where PROFILE='<profile_name>';


PROFILE                        RESOURCE_NAME                    RESOURCE LIMIT

------------------------------ -------------------------------- -------- ----------------------------------------

<profile_name>         COMPOSITE_LIMIT                  KERNEL   DEFAULT

<profile_name>         SESSIONS_PER_USER                KERNEL   DEFAULT

<profile_name>         CPU_PER_SESSION                  KERNEL   DEFAULT

<profile_name>         CPU_PER_CALL                     KERNEL   DEFAULT

<profile_name>         LOGICAL_READS_PER_SESSION        KERNEL   DEFAULT

<profile_name>         LOGICAL_READS_PER_CALL           KERNEL   DEFAULT

<profile_name>         IDLE_TIME                        KERNEL   DEFAULT

<profile_name>         CONNECT_TIME                     KERNEL   DEFAULT

<profile_name>         PRIVATE_SGA                      KERNEL   DEFAULT

<profile_name>         FAILED_LOGIN_ATTEMPTS            PASSWORD UNLIMITED

<profile_name>         PASSWORD_LIFE_TIME               PASSWORD UNLIMITED


PROFILE                        RESOURCE_NAME                    RESOURCE LIMIT

------------------------------ -------------------------------- -------- ----------------------------------------

<profile_name>         PASSWORD_REUSE_TIME              PASSWORD UNLIMITED

<profile_name>         PASSWORD_REUSE_MAX               PASSWORD UNLIMITED

<profile_name>         PASSWORD_VERIFY_FUNCTION         PASSWORD FN_PASSWORDVERIFY

<profile_name>         PASSWORD_LOCK_TIME               PASSWORD UNLIMITED

<profile_name>         PASSWORD_GRACE_TIME              PASSWORD UNLIMITED


16 rows selected.


SQL>


OS level CPU check

 CPU Check - GV$SESSION;


col sid format 9999999

col spid format a10

col machine format a25

col username format a15

col "OS User" format a15

col Program format a20

col LOGON_TIME format a30

SELECT s.sid, s.serial#,p.spid "OS Pid",machine, s.username "Username",

s.osuser "OS User", s.program "Program",to_char(s.logon_time, 'dd-mon-yy hh24:mi') LOGON_TIME, a.sql_id,s.PREV_SQL_ID

FROM gv$session s, gv$sqlarea a, gv$process p

WHERE s.sql_hash_value = a.hash_value (+)

AND s.sql_address = a.address (+)

AND s.paddr = p.addr

and s.sid in (select s.sid from gv$session s, gv$process p where s.paddr = p.addr and p.spid in ('16146','422','8201','17056'));


CPU Check - V$SESSION;

col sid format 9999999

col spid format a10

col machine format a25

col username format a15

col "OS User" format a15

col Program format a20

col LOGON_TIME format a30

SELECT s.sid, s.serial#,p.spid "OS Pid",machine, s.username "Username",

s.osuser "OS User", s.program "Program",to_char(s.logon_time, 'dd-mon-yy hh24:mi') LOGON_TIME, a.sql_id,s.PREV_SQL_ID

FROM v$session s, v$sqlarea a, v$process p

WHERE s.sql_hash_value = a.hash_value (+)

AND s.sql_address = a.address (+)

AND s.paddr = p.addr

and s.sid in (select s.sid from v$session s, v$process p where s.paddr = p.addr and p.spid in ('16146','422','8201','17056'));

UNDO Percentage Query

 


select a.tablespace_name, SIZEMB, USAGEMB, (SIZEMB - USAGEMB) FREEMB

from (select sum(bytes) / 1024 / 1024 SIZEMB, b.tablespace_name

from dba_data_files a, dba_tablespaces b

where a.tablespace_name = b.tablespace_name

and b.contents = 'UNDO'

group by b.tablespace_name) a,

(select c.tablespace_name, sum(bytes) / 1024 / 1024 USAGEMB

from DBA_UNDO_EXTENTS c

where status <> 'EXPIRED'

group by c.tablespace_name) b

where a.tablespace_name = b.tablespace_name;
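The arithmetic behind the query is simply FREEMB = SIZEMB - USAGEMB (and the usage percentage follows from the same two numbers). A sketch with made-up sample values:

```shell
# Sample undo tablespace figures (illustrative only)
SIZEMB=4096
USAGEMB=1024

# Free space and used percentage, matching the query's arithmetic
FREEMB=$((SIZEMB - USAGEMB))
PCT=$(awk -v u="$USAGEMB" -v s="$SIZEMB" 'BEGIN { printf "%.0f", u/s*100 }')
echo "FREEMB=$FREEMB USED%=$PCT"
```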

check patch applied or not using oracle command



-- For upto 11g

COLUMN action_time FORMAT A20

COLUMN action FORMAT A20


COLUMN version FORMAT A10

COLUMN comments FORMAT A30

COLUMN bundle_series FORMAT A10


SELECT TO_CHAR(action_time, 'DD-MON-YYYY HH24:MI:SS') AS action_time,

action,

 version, id, comments, bundle_series

FROM   sys.registry$history

ORDER by action_time;


--From 12c onwards

col action_time for a28

col action for a8

col version for a8

col comments for a30

col status for a10

set line 999 pages 999

select patch_id,  version, status, Action,Action_time from dba_registry_sqlpatch order by action_time;


Multiple kill session example


SCHEMAS

 select 'alter system kill session '''||SID||','||SERIAL#||''' immediate;' from v$session where username='<schema_name>';


Session with username not sys

set pagesize 200

select  'alter system kill session '''|| sid||','||serial#||''' immediate ;' from v$session  where type <> 'BACKGROUND' and username <> 'SYS';


SQL ID 

SQL>select 'alter system kill session '''||SID||','||SERIAL#||''' immediate;' from v$session where sql_id='1amhfbhjmfn6y';
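The SELECTs above build one ALTER SYSTEM KILL SESSION statement per matching row. The same string construction in shell, over sample SID,SERIAL# pairs (values are made up for illustration):

```shell
# Build kill statements from sid,serial# pairs, one per line
stmts=$(printf '%s\n' "123,4567" "89,1011" |
  while IFS=, read -r sid serial; do
    echo "alter system kill session '${sid},${serial}' immediate;"
  done)
printf '%s\n' "$stmts"
```

The generated lines can be spooled to a script and executed in sqlplus.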

Expdp - Par file examples




Table par file examples

cat expdp_TABLE.par


USERID = '/ as sysdba'

DIRECTORY   = <Directory_name>

DUMPFILE    = Table_Target_%U.dmp

LOGFILE     = expdp_Table.log

tables     = <schema_name>.<Table_name>

PARALLEL    = 4

FILESIZE = 10G

EXCLUDE=STATISTICS

METRICS=Y

LOGTIME=ALL



Schema par file examples 

cat expdp_Schemas.par

USERID = '/ as sysdba'

DIRECTORY   = <dump_directory>

DUMPFILE    = <dumpfile_name>%U.dmp

LOGFILE     = <logfile_name>.log

schemas    = <schema_name>,<schema_name>,<schema_name>

PARALLEL    = 4

FILESIZE = 10G

EXCLUDE=STATISTICS

METRICS=Y

LOGTIME=ALL
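A par file like the one above can be written from a heredoc before calling expdp. In this sketch the directory, dump file, and schema names are placeholders, not values from the original post:

```shell
# Write a schema-mode export par file (placeholder names throughout)
cat > expdp_schemas.par <<'EOF'
USERID='/ as sysdba'
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=schemas_%U.dmp
LOGFILE=expdp_schemas.log
SCHEMAS=SCOTT,HR
PARALLEL=4
FILESIZE=10G
EXCLUDE=STATISTICS
METRICS=Y
LOGTIME=ALL
EOF
wc -l < expdp_schemas.par
```

With the file in place, the job would be started as in the nohup examples later in this post, e.g. `nohup expdp parfile=expdp_schemas.par &`.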


Exclude Tables Examples : 


directory=<directory_name>

SCHEMAS=<schema_name>,<schema_name>

exclude=TABLE:"IN(select table_name from dba_tables where table_name like '<table_name_pattern>')"

dumpfile=EXPDP_source%U.DMP

logfile=EXPDP_source.log

parallel=30


directory=<directory_name>

content =metadata_only

schemas    = <schema_name>,<schema_name>,<schema_name>

dumpfile=metadata_.DMP.%U

logfile=metadata__.log

full=y 


col OWNER for a20

col DIRECTORY_NAME for a30

col DIRECTORY_PATH for a70

set lines 200

select * from dba_directories;


Export example 

exp \'/ as sysdba\' file=<directory_location>/six_tabs.dmp log=<directory_location>/six_tabs.log tables=<schema_name>.<Table_name>,<schema_name>.<Table_name>,<schema_name>.<Table_name>  statistics=none feedback=10000

  

  BUFFER=100000

set lines 200

col JOB_NAME for a20

col OWNER_NAME for a30

col OPERATION for a20

col JOB_MODE for a30

col STATE for a30


SELECT owner_name, job_name, operation, job_mode,state, attached_sessions FROM dba_datapump_jobs ORDER BY 1,2;

select distinct tablespace_name from dba_segments where owner='<schema_name>';

select sum(bytes/1024/1024/1024) from dba_segments where owner in('<schema_name>','<schema_name>','<schema_name>','<schema_name>','<schema_name>','<schema_name>');

select count(*),object_type,status from dba_objects where owner='<schema_name>' group by status,object_type;

SELECT sum(bytes)/1024/1024/1024 as "Size in GB" from dba_segments WHERE owner = UPPER('&schema_name');


Set heading off 

SQL> select name from v$database;

SQL> spool /<directory_path>/drop_objects.sql

SQL> select 'drop '||object_type||' '||owner||'."'|| object_name||'"' ||  DECODE(OBJECT_TYPE,'TABLE',' CASCADE CONSTRAINTS purge;',';') from dba_objects where owner='<schema_name>';

SQL> spool off

set heading on

set echo on

set pagesize 0

set feedback on

set verify on

SQL> spool /<directory_path>/drop_objects.log

SQL> @/dvra01/<directory_path>/drop_objects.sql

SQL> spool off


SQL>


Import the same schemas into different tablespaces:

 cat impdp.par

directory=<directory_name>

dumpfile=export_schemas.dmp

logfile=import_schemas.log

schemas=<Schema_name>,<schema_name>

REMAP_TABLESPACE=<source_tablespace>:<Target_tablespace>

REMAP_TABLESPACE=<source_tablespace>:<Target_tablespace>

REMAP_TABLESPACE=<source_tablespace>:<Target_tablespace>

parallel=2

Data only impdp on different tables 

impdp directory= dumpfile=.DMP logfile=impdp1.log  remap_table=<schema_name_source>.<tab_name_source>:<schema_name_trgt>.<trgt>  CONTENT=DATA_ONLY DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS


Impdp examples

cat impdp_target.par

directory=<dir_name>

SCHEMAS=<schema_name>

dumpfile=1.%U

logfile=IMPDP.log

parallel=11

transform=disable_archive_logging:Y

nohup expdp \"/ as sysdba\" parfile=fulldb_export.par &


cat impdp_table.par

USERID = '/ as sysdba'

DIRECTORY   = <>

DUMPFILE    = <>.dmp.%U

LOGFILE     = <>_impdp.log

CONTENT     = ALL

PARALLEL    = 40

tables     = <schema_name>.<table_name>,<schema_name>.<table_name>

TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y:table

exclude=STATISTICS

METRICS=Y

LOGTIME=ALL


Object count

SQL> SET SERVEROUTPUT ON SIZE 1000000
SQL> --
SQL> DECLARE
  2  VAL NUMBER;
  3  BEGIN
  4      DBMS_OUTPUT.ENABLE(1000000);
  5      FOR I IN (SELECT OBJECT_NAME FROM DBA_OBJECTS WHERE OWNER IN ('<schema_name>') AND OBJECT_TYPE = 'TABLE'  ORDER BY OBJECT_NAME) LOOP
  6          EXECUTE IMMEDIATE 'SELECT count(*) FROM <schema_name>.' || i.OBJECT_NAME INTO val;
  7          DBMS_OUTPUT.PUT_LINE(I.OBJECT_NAME || ': ' || VAL );
  8      END LOOP;
  9  END;
 10  /
select SEGMENT_NAME,sum(bytes/1024/1024/1024) GB from dba_segments where OWNER='<>' and SEGMENT_NAME='<>' group by segment_name,segment_type;

TRIGGERS

SQL> select OWNER,TRIGGER_NAME,TABLE_OWNER,STATUS from DBA_TRIGGERS where OWNER='<>' and TRIGGER_NAME IN ('<>','<>','<>');

Table size check 

select segment_name,segment_type, sum(bytes/1024/1024/1024) GB
 from dba_segments
 where segment_name='&Your_Table_Name' 
group by segment_name,segment_type; 

Nohup expdp and impdp example 

nohup expdp \"/ as sysdba\" parfile=fulldb_export.par &

nohup impdp \"/ as sysdba\" parfile=fulldb_export.par &
pbrun -u oracle runshell
$nohup expdp parfile=expdp_DBNAME_SCHEMA.par &
$nohup impdp parfile=impdp_DBNAME_SCHEMA.par &



Oracle Zabbix by ODBC

 


 


Oracle Zabbix by ODBC


Oracle Database, versions 12c R2, 18c, 19c


·       Create an Oracle DB user for monitoring on the target database


·       Install the Oracle Client on the Zabbix (MySQL repository) database server


·       Set up environment variables in .bash_profile


·       Create tnsnames.ora file with sample connect descriptor


·       Add an entry in odbc.ini


·       Test the connection using isql


·       Add host entry & select template options  


·       Add Macro Entry on Zabbix Console


 


https://www.zabbix.com/integrations/oracle


 


STEP 1:


 


CREATE USER zabbix_mon IDENTIFIED BY <PASSWORD>;


-- Grant access to the zabbix_mon user.


GRANT CONNECT, CREATE SESSION TO zabbix_mon;


GRANT SELECT ON v$instance TO zabbix_mon;


GRANT SELECT ON v$database TO zabbix_mon;


GRANT SELECT ON v$sysmetric TO zabbix_mon;


GRANT SELECT ON v$system_parameter TO zabbix_mon;


GRANT SELECT ON v$session TO zabbix_mon;


GRANT SELECT ON v$recovery_file_dest TO zabbix_mon;


GRANT SELECT ON v$active_session_history TO zabbix_mon;


GRANT SELECT ON v$osstat TO zabbix_mon;


GRANT SELECT ON v$restore_point TO zabbix_mon;


GRANT SELECT ON v$process TO zabbix_mon;


GRANT SELECT ON v$datafile TO zabbix_mon;


GRANT SELECT ON v$pgastat TO zabbix_mon;


GRANT SELECT ON v$sgastat TO zabbix_mon;


GRANT SELECT ON v$log TO zabbix_mon;


GRANT SELECT ON v$archive_dest TO zabbix_mon;


GRANT SELECT ON v$asm_diskgroup TO zabbix_mon;


GRANT SELECT ON sys.dba_data_files TO zabbix_mon;


GRANT SELECT ON DBA_TABLESPACES TO zabbix_mon;


GRANT SELECT ON DBA_TABLESPACE_USAGE_METRICS TO zabbix_mon;


GRANT SELECT ON DBA_USERS TO zabbix_mon;




STEP 2:


 


Install Oracle Client on Zabbix mysql Repository database server


 


 


STEP 3 :


 


Set Environment variable


 


export ORACLE_HOME=/usr/lib/oracle/12.2/client64


export PATH=$PATH:$ORACLE_HOME/bin


export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib64:/usr/lib:$ORACLE_HOME/bin


export TNS_ADMIN=$ORACLE_HOME/network/admin


 


 


STEP 4 :


 


Add tnsnames.ora entry on oracle client


 


cat /usr/lib/oracle/12.2/client64/network/admin/tnsnames.ora


 


 


Add entry on tnsnames.ora file


 


 


ORCL =


  (DESCRIPTION =


    (ADDRESS = (PROTOCOL = TCP)(HOST = ORCL.EXAMPLE.COM)(PORT = 1523))


    (CONNECT_DATA =


      (SERVER = DEDICATED)


      (SERVICE_NAME = ORCL)


    )


  )


 


 


STEP 5 :


 


 


Add an entry in the /etc/odbc.ini file


 


 


[root@0209RUPAV006TVS etc]# cat odbc.ini


[db12c]


Driver = Oracle12c


Driver =/usr/lib/oracle/12.2/client64/lib/libsqora.so.12.1


Servername = 10.33.104.254:1523/ORCL


 


Database = ORCL


Port = 1523


UserID = zabbix_mon


Password = zabbix_mon


DatabaseCharacterSet=AL16UTF16


 


 


STEP 6:


 


Test the connection with isql


 


[root@test etc]# isql -v ORCL


+---------------------------------------+


| Connected!                            |


|                                       |


| sql-statement                         |


| help [tablename]                      |


| quit                                  |


|                                       |


+---------------------------------------+


 


SQL> select name from v$database;


+----------+


| NAME     |


+----------+


| ORCL   |


+----------+


SQLRowCount returns -1


1 rows fetched


 


 


 


STEP 7 :


 



 


Log in to the Zabbix console: username/password


 


(Entry is added on mysql database repository server)


 







 




 




 


STEP 8 :


 


 


Go to the Configuration settings, select the Hosts option, then create a host.


 




·                Mention host group for monitoring  


 




 


 


·                Select Template


 





·                Select oracle ODBC Template





·                Provide IP address, DNS, and proxy server details




 




 


 





·                Go to the macro entry and enter the same details as in the odbc.ini file




 


·                Advanced Macro Entry (optional)


 





·                Template Details




 


 


·                Zabbix Graph Monitoring


 

