Sunday 16 May 2021

PT - OS performance Monitoring using VMSTAT

 

PT - OS performance Monitoring using  VMSTAT 

vmstat

procs ---- memory ---- swap ---- io ---- system ---- cpu

Procs : r | b

Memory : swpd | free | buff | cache

Swap : si | so

I/O : bi | bo

System : in | cs

CPU : us | sy | id | wa | st

Field   Meaning

r       Number of processes waiting for run time
b       Number of processes in uninterruptible sleep
swpd    Amount of virtual memory used (swapped out to disk)
free    Amount of free memory
buff    Amount of memory used as buffers
cache   Amount of memory used as cache
si      Amount of memory swapped in from disk (/s)
so      Amount of memory swapped out to disk (/s)
bi      Blocks received from a block device (blocks/s)
bo      Blocks sent to a block device (blocks/s)
in      Number of interrupts per second
cs      Number of context switches per second
us      User CPU time
sy      System (kernel) CPU time
id      CPU idle time
wa      CPU time waiting for I/O
st      CPU time stolen from a virtual machine


  • vmstat command output example 
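For reference, a single vmstat snapshot has the layout below (the exact header can vary slightly by procps version, and the sample values are illustrative only, not taken from the test server):

vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 812344 103220 453320    0    0     5    12  110  210  2  1 96  1  0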




  • vmstat 2 (output every 2 seconds)
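The first argument is the sampling interval in seconds; an optional second argument limits the number of samples, for example:

vmstat 2 5    # print 5 samples at 2-second intervals, then exit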





  • Example: apply I/O load on the OS by running dd 30 times via the apply_io_stress.sh script below.

cat > apply_io_stress.sh <<EOL
#!/bin/bash
# Write a 500 MB file with synchronous I/O (oflag=dsync) 30 times to generate disk load
counter=1
while [ \$counter -le 30 ]
do
dd if=/dev/zero of=/tmp/test1.img bs=500M count=1 oflag=dsync
((counter++))
done
rm -f /tmp/test1.img
EOL

  • Provide execute permission as below and run the script.

chmod +x apply_io_stress.sh

srv1:~ # sh apply_io_stress.sh



^Z

[1]+  Stopped                 sh apply_io_stress.sh


srv1:~ #

  • We can see that after executing the script, the number of processes in the r and b columns increases, and the I/O columns bi and bo (blocks read and written) increase as well.





  • Once apply_io_stress.sh is stopped, the process and I/O values come back to normal.
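As a variant, the same test can be run from a single terminal by putting the load in the background and watching vmstat alongside it (a minimal sketch using the script created above):

sh apply_io_stress.sh &    # run the I/O load in the background
vmstat 2                   # watch r/b, bi/bo and wa climb; Ctrl+C to stop watching
kill %1                    # stop the background load when finished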



vmstat -d 2

  • vmstat -d 2 (disk statistics every 2 seconds); example below.
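The per-disk report uses its own column groups; the header looks like this (counters are cumulative since boot):

disk- ------------reads------------ ------------writes----------- -----IO------
       total merged sectors      ms  total merged sectors      ms    cur    sec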











Tuesday 11 May 2021

OCI : Create DIRECTORIES for expdp impdp on autonomous database

OCI : Create DIRECTORIES for expdp/impdp on an autonomous database and upload files.

  • A directory cannot be dropped from the SQL prompt while it still has files stored in it.
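A hedged sketch of the cleanup order this implies, using the STAGING directory created later in this post (the file name Wallet_ORCLADB.zip is a placeholder):

BEGIN
  -- remove the staged file first, then the directory can be dropped
  DBMS_CLOUD.DELETE_FILE(directory_name => 'STAGING', file_name => 'Wallet_ORCLADB.zip');
END;
/
DROP DIRECTORY STAGING;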

dbms_cloud package : 

  • dbms_cloud.get_object : Copy an object from the object storage bucket to a directory in the autonomous DB
  • dbms_cloud.put_object : Copy a file from an autonomous DB directory to the object storage bucket (see the sketch below)
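Only GET_OBJECT is demonstrated later in this post; here is a hedged sketch of the reverse direction with PUT_OBJECT (the object name export01.dmp is a placeholder; the bucket URI and credential follow the examples used below):

BEGIN
  DBMS_CLOUD.PUT_OBJECT(
    credential_name => 'ADMIN_CRED_NAME',
    object_uri      => 'https://objectstorage.ap.xxxxxx.oraclecloud.com/n/bmdsoh48o0kf/b/bucket-20210415-1806/o/export01.dmp',
    directory_name  => 'STAGING',
    file_name       => 'export01.dmp');
END;
/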


  • Check the current status of database directories

SELECT * FROM DBA_DIRECTORIES;



  • Grant the TEST user the privileges needed to create and drop directories

GRANT CREATE ANY DIRECTORY TO TEST;

GRANT DROP ANY DIRECTORY TO TEST;




  • Create a directory that points to the root directory of the user. 

CREATE OR REPLACE DIRECTORY ROOT_DIR AS '';




  • Check the location of the ROOT_DIR directory
SELECT * FROM DBA_DIRECTORIES WHERE DIRECTORY_NAME='ROOT_DIR';









  • Create a directory object that points to a sub-directory named stage
CREATE DIRECTORY staging AS 'stage';


  • Check the location of the STAGING directory

SELECT * FROM dba_directories where directory_name='STAGING';




  • The LIST_FILES function returns no rows because there are no files under the directory yet.

SELECT * FROM DBMS_CLOUD.LIST_FILES('ROOT_DIR');



  • Copying files from the object storage bucket to the database directory:
  • Upload the object into the bucket storage and check its details.





  • Copy object profiles 







  • Generate an authentication token


  • Copy the authentication token to the clipboard


  • Create a credential using the authentication token obtained in the previous step

SET DEFINE OFF

BEGIN

DBMS_CLOUD.CREATE_CREDENTIAL (

 CREDENTIAL_NAME => 'ADMIN_CRED_NAME',

 USERNAME => 'varunyadav@gmail.com',

 PASSWORD => '.ypEHus4L01bAYeZd+H0'

);

END;





  • Verify the credential
SELECT CREDENTIAL_NAME FROM USER_CREDENTIALS;



  • Copy the uploaded file from object storage to the directory object with DBMS_CLOUD.GET_OBJECT.

BEGIN

DBMS_CLOUD.GET_OBJECT(

CREDENTIAL_NAME => 'ADMIN_CRED_NAME',

OBJECT_URI => 'https://objectstorage.ap.xxxxxx.oraclecloud.com/n/bmdsoh48o0kf/b/bucket-20210415-1806/o/wallet%20zip%20fileWallet_ORCLADB.zip',

DIRECTORY_NAME => 'STAGING');

END;


  • Check the files in the STAGING directory

 SELECT OBJECT_NAME FROM DBMS_CLOUD.LIST_FILES('STAGING');

  • Check the files in the staging directory as seen from the root directory

SELECT OBJECT_NAME FROM DBMS_CLOUD.LIST_FILES('ROOT_DIR');
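Since the post covers directories for expdp/impdp, here is a hedged sketch of how the credential and bucket created above might be used for an import; the connect string ORCLADB_high and the dump file name export01.dmp are hypothetical placeholders:

impdp admin@ORCLADB_high \
  credential=ADMIN_CRED_NAME \
  directory=STAGING \
  dumpfile=https://objectstorage.ap.xxxxxx.oraclecloud.com/n/bmdsoh48o0kf/b/bucket-20210415-1806/o/export01.dmp \
  logfile=import.log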




Monday 10 May 2021

AHF - 21c Autonomous Health Framework Report on 12.1.0.2.0 Grid


Install 21c Autonomous Health Framework and extract a report on a 2-node 12.1.0.2.0 Grid Infrastructure

  • Download : SRDC - How to Collect Diagnostics Information for Database ASM (DBASM) Issues (Doc ID 2520692.1)

  • Unzip the downloaded AHF-LINUX_v21.1.1.zip file

-rw-r--r-- 1 oracle oinstall  399082945 May  4 13:13 AHF-LINUX_v21.1.1.zip

-rw-r--r-- 1 oracle oinstall       1525 Apr 26 23:06 README.txt

-r-xr-xr-x 1 oracle oinstall  405452046 Apr 26 23:05 ahf_setup

-rw-r--r-- 1 oracle oinstall        256 Apr 26 23:06 ahf_setup.dat

-rw-r--r-- 1 oracle oinstall        451 Apr 26 23:06 oracle-tfa.pub


  • Execute ./ahf_setup. Here we install as the root user; AHF can also be installed as another user per the instructions in the Doc ID.

sgdc02: # ./ahf_setup


AHF Installer for Platform Linux Architecture x86_64


AHF Installation Log : /tmp/ahf_install_211100_25087_2021_05_05-15_39_22.log


Starting Autonomous Health Framework (AHF) Installation


AHF Version: 21.1.1 Build Date: 202104230128


TFA is already installed at : /u01/app/12.1.0/grid/tfa/sgdcplm02/tfa_home


Installed TFA Version : 121270 Build ID : 20160304140533


Default AHF Location : /opt/oracle.ahf


Do you want to install AHF at [/opt/oracle.ahf] ? [Y]|N :


AHF Location : /opt/oracle.ahf


AHF Data Directory stores diagnostic collections and metadata.

AHF Data Directory requires at least 5GB (Recommended 10GB) of free space.


Choose Data Directory from below options :


1. /u01/app/grid [Free Space : 34257 MB]

2. Enter a different Location


Choose Option [1 - 2] : 1


AHF Data Directory : /u01/app/grid/oracle.ahf/data

Illegal option:  -sslserver

Try keytool -help


Shutting down TFA : /u01/app/12.1.0/grid/tfa/sgdcplm02/tfa_home


Copying TFA Data Files from /u01/app/12.1.0/grid/tfa/sgdcplm02/tfa_home


Uninstalling TFA : /u01/app/12.1.0/grid/tfa/sgdcplm02/tfa_home


Do you want to add AHF Notification Email IDs ? [Y]|N :

Enter Email IDs separated by space :

AHF will also be installed/upgraded on these Cluster Nodes :


1. sgdc01


The AHF Location and AHF Data Directory must exist on the above nodes

AHF Location : /opt/oracle.ahf

AHF Data Directory : /u01/app/grid/oracle.ahf/data


Do you want to install/upgrade AHF on Cluster Nodes ? [Y]|N :


Extracting AHF to /opt/oracle.ahf


Configuring TFA Services


Copying TFA Data Files to AHF


Discovering Nodes and Oracle Resources


Starting TFA Services

Created symlink from /etc/systemd/system/multi-user.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.

Created symlink from /etc/systemd/system/graphical.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.


.------------------------------------------------------------------------------.

| Host      | Status of TFA | PID   | Port | Version    | Build ID             |

+-----------+---------------+-------+------+------------+----------------------+

| sgdc02 | RUNNING       | 27571 | 5000 | 21.1.1.0.0 | 21110020210423012809 |

| sgdc01 | RUNNING       | 16070 | 5000 | 21.1.1.0.0 | 21110020210423012809 |

'-----------+---------------+-------+------+------------+----------------------'


Running TFA Inventory...



Adding default users to TFA Access list...


.----------------------------------------------------------------.

|                  Summary of AHF Configuration                  |

+-----------------+----------------------------------------------+

| Parameter       | Value                                        |

+-----------------+----------------------------------------------+

| AHF Location    | /opt/oracle.ahf                              |

| TFA Location    | /opt/oracle.ahf/tfa                          |

| Orachk Location | /opt/oracle.ahf/orachk                       |

| Data Directory  | /u01/app/grid/oracle.ahf/data                |

| Repository      | /u01/app/grid/oracle.ahf/data/repository     |

| Diag Directory  | /u01/app/grid/oracle.ahf/data/sgdcplm02/diag |

'-----------------+----------------------------------------------'



Starting orachk scheduler from AHF ...


AHF install completed on sgdc02


Installing AHF on Remote Nodes :


AHF will be installed on sgdc01, Please wait.


Please Enter the password for sgdcplm01 :


Is password same for all the nodes? [Y]|N : N


Installing AHF on sgdcplm01 :


[sgdcplm01] Copying AHF Installer


[sgdcplm01] Running AHF Installer


AHF binaries are available in /opt/oracle.ahf/bin


AHF is successfully installed


Moving /tmp/ahf_install_211100_25087_2021_05_05-15_39_22.log to /u01/app/grid/oracle.ahf/data/sgdc02/diag/ahf/

  • Go to the AHF bin directory /opt/oracle.ahf/bin

sgdc02:/u04/MasterDB/oracle_grid_software # cd /opt/oracle.ahf/bin

You have new mail in /var/mail/root

sgdc02:/opt/oracle.ahf/bin # su oracle

oracle@sgdc02:/opt/oracle.ahf/bin> ll

total 16

-rwxr-xr-x 1 root root 4142 May  5 15:42 ahfctl

lrwxrwxrwx 1 root root   34 May  5 15:42 oerr -> /opt/oracle.ahf/orachk/lib/oerr.sh

lrwxrwxrwx 1 root root   29 May  5 15:42 orachk -> /opt/oracle.ahf/orachk/orachk

-rwxr-xr-x 1 root root 4142 May  5 15:42 tfactl
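Before collecting diagnostics, the cluster-wide TFA status can be verified from the same directory (a quick sanity check; output omitted):

oracle@sgdc02:/opt/oracle.ahf/bin> ./tfactl print status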


  • Run the tfactl utility to collect the ASM diagnostic report.
oracle@sgdc02:/opt/oracle.ahf/bin> tfactl diagcollect -srdc dbasm

Scripts to be run by this srdc: srdc_asm_acfs.sql

Components included in this srdc: CRS OS ASM CHMOS

Collecting data for all nodes


Collection Id : 20210506082239sgdc02


Detailed Logging at : /u01/app/grid/oracle.ahf/data/repository/srdc_dbasm_collection_Thu_May_06_08_22_40_CEST_2021_node_all/diagcollect_20210506082239_sgdc02.log

2021/05/06 08:22:45 CEST : NOTE : Any file or directory name containing the string .com will be renamed to replace .com with dotcom

2021/05/06 08:22:45 CEST : Collection Name : tfa_srdc_dbasm_Thu_May_06_08_22_40_CEST_2021.zip

2021/05/06 08:22:46 CEST : Collecting diagnostics from hosts : [sgdcplm02, sgdcplm01]

2021/05/06 08:22:46 CEST : Scanning of files for Collection in progress...

2021/05/06 08:22:46 CEST : Collecting additional diagnostic information...

2021/05/06 08:22:51 CEST : Getting list of files satisfying time range [05/06/2021 07:22:45 CEST, 05/06/2021 08:22:51 CEST]


2021/05/06 08:23:08 CEST : Collecting ADR incident files...

2021/05/06 08:23:41 CEST : Completed collection of additional diagnostic information...

2021/05/06 08:23:44 CEST : Completed Local Collection

2021/05/06 08:23:44 CEST : Remote Collection in Progress...




.--------------------------------------.

|          Collection Summary          |

+-----------+-----------+-------+------+

| Host      | Status    | Size  | Time |

+-----------+-----------+-------+------+

| sgdc01 | Completed | 7.5MB |  59s |

| sgdc02 | Completed | 10MB  |  58s |

'-----------+-----------+-------+------'


Logs are being collected to: /u01/app/grid/oracle.ahf/data/repository/srdc_dbasm_collection_Thu_May_06_08_22_40_CEST_2021_node_all

/u01/app/grid/oracle.ahf/data/repository/srdc_dbasm_collection_Thu_May_06_08_22_40_CEST_2021_node_all/sgdc02.tfa_srdc_dbasm_Thu_May_06_08_22_40_CEST_2021.zip

/u01/app/grid/oracle.ahf/data/repository/srdc_dbasm_collection_Thu_May_06_08_22_40_CEST_2021_node_all/sgdc01.tfa_srdc_dbasm_Thu_May_06_08_22_40_CEST_2021.zip

oracle@sgdc02:/opt/oracle.ahf/bin>
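The collected archives can then be reviewed or uploaded to Oracle Support; for example, the local node's zip contents can be listed using the path reported above:

oracle@sgdc02:/opt/oracle.ahf/bin> unzip -l /u01/app/grid/oracle.ahf/data/repository/srdc_dbasm_collection_Thu_May_06_08_22_40_CEST_2021_node_all/sgdc02.tfa_srdc_dbasm_Thu_May_06_08_22_40_CEST_2021.zip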