Wednesday, June 21, 2017

EBS -- there will be a major EBS release beyond 12.2!

EBS Product Development (Cliff Godwin, Oracle SVP) has announced the roadmap for EBS.
According to this updated roadmap, there will be a major EBS release in the future. Maybe this release will be EBS 12.3 :)

Anyways, the up-to-date roadmap also includes the support information.
According to this support information;
EBS 12.1 will be supported through Dec 2021.
EBS 12.2 will be supported through Sep 2023.
The new future release (12.?) will be supported through 2030.

I have a feeling that this new major release will be announced in 2019-2020, as it will be supported through 2030. (10 years of support, minimum..)

So, we will see EBS in the field for at least 10 more years :)

Monday, June 19, 2017

Solaris 5.11/SSH -- Password authentication - Permission denied (gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive)

Here is a quick tip for you.
If you encounter permission denied errors in ssh (on Solaris or anywhere else), use the -vvv argument for the analysis.

Ex: ssh -vvv root@<ip_address>..

The output produced by this "ssh -vvv" is very readable, and most of the time it will let you see the cause and take the corrective actions.

I said most of the time, because sometimes it cannot..

I have faced a case where a junior admin executed a recursive chmod on /etc. (chmod -R 777 /etc)

He said he did it to let the oracle OS user create the oraInst.loc file during a new EBS installation :)

Anyways, this command "chmod -R 777 /etc", which seems harmless, may make your system function improperly.

For instance, in Solaris, you won't be able to log in again using SSH after executing this command..
ssh -vvv cannot catch this. I mean, its output will seem clean, but it just gets cut off.

Actually, some files stored in /etc, like shadow, must not be world-readable, and files like passwd must certainly not be world-writable.

So, in such a case, where you have 777 on all the files in /etc, you end up with ssh permission denied errors. (and this is only one of the things that you will face, by the way :)

A quick and working solution is executing chmod -R 755 /etc, but the proper fix is to make the permissions of the files in /etc equal to those in the /etc directory of an identical and untouched system.
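Here is a minimal sketch of that proper fix, assuming you have root SSH access to an untouched reference system (refhost below is a made-up name) and that GNU stat is available. (on Solaris 11, it lives under /usr/gnu/bin)

# capture "mode path" pairs from the reference system
ssh root@refhost 'cd /etc && find . -exec /usr/gnu/bin/stat -c "%a %n" {} \;' > /tmp/etc_perms.txt

# re-apply them on the broken system
cd /etc
while read mode path
do
  chmod "$mode" "$path"
done < /tmp/etc_perms.txt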

EBS 12.2 -- enabling virtual host for the single host database tier, s_virtual_hostname, s_dbhost, ORA-12541

Recently, I needed to run an EBS database tier in a single node database environment. (the EBS version was 12.2.4 and the EBS database version was 11gR2)

Actually, running on a single node DB was only a temporary stage for this EBS instance.

That is, the database tier of this EBS instance was already running on a single node system, and the customer wanted us to migrate it to Exadata as a 2 Node RAC database.

This migration project was planned in 3 phases: migrate the EBS database tier to Exadata, upgrade it to 12c, and convert it to 2 Node RAC.

So, during the first phase, our database needed to run on only 1 of the Exadata nodes.

Even though the database had to run on a single node in the 1st phase, we thought that it was better to enable the virtual host in advance.

As this database was planned to be converted to RAC in the next phases, it was a good idea.

So, we decided to enable the virtual host configuration for the database tier after migrating it to Exadata.

Of course, we planned to use the virtual hostname (vip hostname) of the corresponding Exadata node as the virtual hostname of this newly migrated single node EBS database.

The thing that I want to share with you actually appeared during autoconfig.

That is; everything was going well: we migrated the database using Data Guard, cleaned fnd_nodes etc. and updated the context file with the relevant entries. (including s_virtual_hostname)

However, the db tier autoconfig run was encountering TNS no listener errors. (ORA-12541)

We weren't actually expecting that, because we had done all the things that were said in the documentation.

The problem was in tnsnames.ora.

That is, the tnsnames.ora generated by autoconfig was relying on the actual db hostname; however, the listener generated and started by autoconfig was listening on the virtual hostname.

Actually, both the listener and the tnsnames should have been relying on the virtual hostname.

In order to see why, I jumped into the autoconfig templates and the scripts which were executed by autoconfig.

As for the listener, the template was ad8ilsnr.ora.

ad8ilsnr.ora was okay, as it was designed to rely on the virtual hostname..

Example ad8ilsnr.ora:

%s_db_listener% =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = %s_virtual_hostname%.%s_domainname%)(PORT = %s_dbport%))
    )
  )

SID_LIST_%s_db_listener% =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME= %s_db_oh%)
      (SID_NAME = %s_dbSid%)
    )
  )

However, the template that was used for generating the TNS entries was not okay: ad8itns.ora.
As seen below, it was relying on the value of s_dbhost, which was set in the db tier context file.

%s_dbSid% = (DESCRIPTION=
                (ADDRESS=(PROTOCOL=tcp)(HOST=%s_dbhost%)(PORT=%s_dbport%))
                (CONNECT_DATA=(SID=%s_dbSid%))
            )

%s_instLocalListener% =
            (DESCRIPTION=
                (ADDRESS=(PROTOCOL=tcp)(HOST=%s_dbhost%)(PORT=%s_dbport%))
            )

%s_instRemoteListener% =
            (DESCRIPTION=
                (ADDRESS=(PROTOCOL=tcp)(HOST=%s_dbhost%)(PORT=%s_dbport%))
            )
#
# Intermedia
#
extproc_connection_data =
          (DESCRIPTION=
              (ADDRESS_LIST =
                  (ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC%s_dbSid%))
              )
              (CONNECT_DATA=
                  (SID=PLSExtProc)
                  (PRESENTATION = RO)
              )
          )

This was weird: listening on the virtual hostname, but generating a tns entry for the dbhost?
So, at this point, I concluded that this was a documentation gap/bug. (I said at first that ad8itns.ora was not okay, but it was not really a problem of ad8itns.ora; it was simply designed that way)

I actually found a document that justifies this design.

-> Document
Using Oracle 12c Release 1 Real Application Clusters with Oracle E-Business Suite Release 11i (Doc ID 1498170.1): Appendix E -> Change s_dbport and s_dbhost to SCAN Port and SCAN host.

So, the document was saying: set s_dbhost and s_dbport to the host and port on which your database listener is actually listening. The document was mentioning the SCAN name and port, but it was obvious that s_dbhost should be set to the virtual hostname in our case.

As you can imagine, we set s_dbhost to be the same as s_virtual_hostname and reran autoconfig. The ORA-12541 errors disappeared and autoconfig completed successfully.

At the end of the day, what I'm saying is: if you want to enable the virtual host config on EBS, you need to set the s_dbhost context variable to the virtual hostname as well. (setting only s_virtual_hostname to the virtual hostname is not enough and creates problems)
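A minimal sketch of the fix follows. (the context file name and the hostnames below are just examples; on a real system, use your own context file and virtual hostname)

# on the db tier, as the oracle OS user
CONTEXT_FILE=$ORACLE_HOME/appsutil/TEST_exadb01.xml    # hypothetical context file name

# check the current values first
grep -E 's_dbhost|s_virtual_hostname' $CONTEXT_FILE

# edit the file so that s_dbhost matches s_virtual_hostname, e.g.:
# <host oa_var="s_dbhost">exadb01-vip</host>

# then rerun autoconfig on the database tier
cd $ORACLE_HOME/appsutil/scripts/TEST_exadb01
sh adautocfg.sh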

Sunday, June 18, 2017

Exadata -- my review of Oracle EXADATA published on IT Central Station

This is the second review that I have written for ITCentralStation.com.

This time for Exadata!

I have covered the valuable features of Exadata, info about the initial setup, the things that Oracle can do for improvement, my advice to Exadata customers and so on.

So, I think you will like it.

With this review, I reached the top 10% of contributors by points and earned the badge :)

My review of Oracle Exadata is now published and available for you to read at:



Tuesday, June 13, 2017

Exadata -- adding an additional hostname (with different domain name)

I'm writing this blog post because I think the information that I will share with you below cannot be found on Oracle Support as a whole. I mean, you can find your own way by filtering the information that you may gather from several documents and installation guides, but here is the already-filtered information for achieving a specific purpose.

The purpose that I'm mentioning here is "adding additional hostnames (with an additional domain name, different than the current ones) to Exadata".

Note that, the Exadata that I'm mentioning here is a Quarter Rack, generation 4 (X4-2). (the same logic applies to almost every Exadata version)

In this case, the reason to have an additional hostname with a different domain name on Exadata was actually a business requirement.

In this specific case, a critical database was going to be migrated to the Exadata, and the customer wanted this database to have specific local and remote listeners, which had to listen on hostnames with a different domain suffix than the one the Exadata compute nodes already had.

In order to have such an environment, we needed to go for a multiple SCAN configuration, and in order to have a multiple SCAN configuration, we first needed to enable the free network interfaces of the Exadata compute nodes.

There were 2 Exadata compute nodes in a 1/4 Rack.

Each node had 6 network interfaces.

Here is the interface list of an Exadata X4-2:


3 of these interfaces were already used.

eth0 -> management network
eth1 & eth2 ->  slaves of the bondeth0 device
The private interconnect was on infiniband.

So, the other 3 of these interfaces were free to use. (eth3, eth4 and eth5)

eth4 and eth5 were not ready to use. That is, the SFP transceiver modules required for using these interfaces were not installed on the Exadata compute nodes. (This was actually normal, because these modules are purchased separately)

Anyways, we decided to go with eth3, which is referred to as the out-of-the-box additional network in almost all the Exadata related documentation.

Well.. We first needed to enable the eth3 interfaces on each of the Exadata compute nodes. Then, we needed to create VIPs on them and configure an additional SCAN accordingly.

So, we put the following plan in action ->

  • Connect eth3 to the customer network switch (not to the InfiniBand switch), so that the client connections can reach the database. 
  • Configure the eth3 interface with IP, Subnet Mask and Gateway and make it up & running. (see the sketch just after this list)
  • Set VIP hostnames for this new interface
  • Add these VIP hostnames into DNS 
  • At this point, add an additional SCAN by following the MOS document - "12c: How to create additional SCAN/SCAN VIP ( Doc ID 2138078.1 )" 
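For the second item, here is a minimal sketch of an ifcfg file for eth3 on an Exadata compute node running Oracle Linux. All the addresses below are made-up examples, and note that on Exadata, network changes like this are normally reflected through ipconf / /opt/oracle.cellos/cell.conf as well, so check the documentation for your image version before applying anything by hand.

# /etc/sysconfig/network-scripts/ifcfg-eth3  (illustrative values)
DEVICE=eth3
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.50.11
NETMASK=255.255.255.0
GATEWAY=192.168.50.1
HOTPLUG=no

# bring the interface up and verify
ifup eth3
ip addr show eth3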

Wednesday, June 7, 2017

RDBMS -- Changing hostname of a Standalone Oracle Restart environment

Today, I implemented the alternative disaster recovery solution that I explained in one of my earlier blog posts (http://ermanarslan.blogspot.com.tr/2017/05/rac-asm-disaster-recovery-solution.html), for building a real disaster recovery environment.

The solution worked perfectly, as documented in that blog post, but this time I had to do one more thing..

That is, changing the IP address and hostname of the DR host.

I had to do this because the customer wanted its DR host to have the same hostname and IP address as the primary.

Note that, there were no risks here, because the networks of the Primary and DR sites were totally separated. (except the network between the primary and DR storage tiers.. -- by the way, the storage tier network is not a concern for us.. It won't break the sqlnet communication)

The primary environment was built on a 2 node RAC env, and the Disaster site was built on a single node env. (both environments were using ASM)

The clients, the custom code and the application servers were using the scan hostname to connect to the primary database server.

So, the customer wanted the scan hostname of the primary environment to be set as the hostname of the DR database server.

Although having the scan hostname directly set as a server's hostname looks a little weird, this was okay for this DR environment, as it was built on a single node database server.

However, changing the hostname of a database server which is hosting an ASM instance was a little costly.

Also, this DR environment wasn't RAC, but it had to be Oracle Restart enabled.

The DR database server was replicated using VM techniques. (it was a virtual machine) It was a one time VM replication, done to get the virtual machine up & running on the DR site. (actually, to get away from the need to install a new Operating System)

So, during the replication, the Oracle Restart home was replicated as is, and the DR server was provisioned with a preconfigured Oracle Restart home.

Well, in order to change the hostname and IP address of this replicated machine, I started at the OS prompt.

I implemented the standard Linux IP and hostname change routines, but when it came to the Oracle Restart home, I did the following (in sequence);

Note that, the following actions were required because we changed the hostname..
In other words, changing the IP address alone does not require any of these actions. (or any actions at all :)

  • As the root user, deconfigured the system using the roothas.pl script.

<GRID_HOME>/crs/install/roothas.pl -deconfig -force
  • As the grid user, reconfigured the system using the roothas.pl script
<GRID_HOME>/crs/install/roothas.pl
  • Made cssd auto start enabled
<GRID_HOME>/bin/crsctl modify resource "ora.cssd" -init -attr "AUTO_START=1"
  • Restarted the high availability services.
<GRID_HOME>/bin/crsctl stop has

<GRID_HOME>/bin/crsctl start has
  • recreated the default listener.
srvctl add listener

srvctl start listener
  • Created init.ora for ASM with the following contents;
asm_diskgroups= 'FRA'
asm_diskstring= ''
instance_type='asm'
large_pool_size=12M

  • Added the asm instance to the repo and made it auto_start enabled.
<GRID_HOME>/bin/srvctl add asm

<GRID_HOME>/bin/crsctl modify resource "ora.asm" -attr "AUTO_START=1"
  • Checked the disks using kfod;
<GRID_HOME>/bin/kfod asm_diskstring='ORCL:*' disks=all
  • After this, I started the ASM instance; using sqlplus "/as sysasm" 
Note that, I also created spfile in this step and modified the repository accordingly;

<GRID_HOME>/bin/srvctl modify asm -p <spfile full path>

<GRID_HOME>/bin/srvctl modify asm -l LISTENER
  • Lastly, I restarted the services and checked them to ensure that all is well.
<GRID_HOME>/bin/crsctl stop has

<GRID_HOME>/bin/crsctl start has

<GRID_HOME>/bin/crsctl stat res

<GRID_HOME>/bin/crsctl stat res -t

No problems encountered, no reboots required :)

These steps and actions are actually already documented in the Oracle Support document named "How to change Hostname / IP for a Grid Infrastructure Oracle Restart Standalone Configuration (SIHA) (Doc ID 1552810.1)", so I just ensured that they were working as documented.
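A few extra sanity checks can also be done at this point. (illustrative; these are not part of the MOS note's mandatory steps)

hostname                                  # should print the new hostname
<GRID_HOME>/bin/crsctl check has          # should report that HAS is online
<GRID_HOME>/bin/srvctl status asm
<GRID_HOME>/bin/srvctl status listener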

Tuesday, June 6, 2017

EBS on Exadata -- Installing rpms for Database Nodes -- a missing info in the documentation

Unfortunately, there is no specific document for installing EBS on Exadata at the moment.
This is true even for the new release of EBS, EBS 12.2.
That's why, in order to properly install the EBS database tier on Exadata, we follow a general EBS installation document, which is available on Oracle Support.

The document in question is "Oracle E-Business Suite Installation and Upgrade Notes Release 12 (12.2) for Linux x86-64 (Doc ID 1330701.1)"

Actually, it makes sense to follow this document for EBS-on-Exadata projects, because the Exadata compute/database nodes are Linux x86-64.

However, as Exadata is delivered with the latest software versions, and as it is an engineered system equipped with an optimized OS and OS packages, the rpm installation instructions present in the above document raise a question mark.

According to Doc 1330701.1, there are OS packages which must be installed on both the application and database tiers.

The document does not care whether the database nodes are Exadata nodes or not.

The document also says that, alternatively, you can install the preinstall rpm if you like :) 

At this point, I want to give some additional and important info to you.

Firstly, the preinstall rpm must not be installed on Exadata, because it updates OS configuration files such as limits.conf, resolv.conf, sysctl.conf and so on.

Secondly, "according to Oracle" (created couple of SRs for this), the rpms listed in Document 1330701.1, "can" be installed to Exadata nodes. (including the 32 bit ones), but it is not a must to install them when the database hosts are on Exadata. I gathered this info after lots of discussions on Oracle Support and after creating several SRs there.

This also means you can ignore the rpm warnings that will be displayed during the installation of Database Examples into the Oracle Home of EBS. (the Database Examples installation is mandatory, but the rpm warnings can be ignored)

"However, we are still not trusting this information". 
This information is based on the SRs and the comments that are done by SMEs..  
In my opinion, there is no guarantee for this configuration yet.
I mean, the documentation is still saying "you need to install those rpms"

So, until this info becomes official, my recommendation is as follows; 

What I recommend at the moment is to install the rpms on the Exadata nodes, skipping the ones that have lots of dependencies. 
An example is redhat-lsb.. It is not required at all, and it will complain about lots of dependencies.
This is what we are doing in our recent Exadata migration project as well.

3 more things;

-Never downgrade any rpm versions on Exadata. 
-Be careful when you are playing with the glibc package.
-Your Exadata compute nodes should be equal in terms of rpm levels. (check this one before starting the rpm installations, as well -- see the quick check below)
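A quick and illustrative way to do that last check (the hostnames here are examples):

# compare the installed package lists of the two compute nodes
ssh exanode1 'rpm -qa | sort' > /tmp/exanode1_rpms.txt
ssh exanode2 'rpm -qa | sort' > /tmp/exanode2_rpms.txt
diff /tmp/exanode1_rpms.txt /tmp/exanode2_rpms.txt

# also record the Exadata image version of each node before touching anything
imageinfo -ver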

Monday, June 5, 2017

Linux/RDBMS/Scripting -- Oracle Hot backup aware Netapp Snapmirror Update script

Here is a script that I have recently written and deployed on a production site.
Of course, it could be written more professionally, but it does the job even as it is.

What this script does is actually execute storage commands from a database server.
The script first checks if the database is in hot backup mode.
If it finds the database in hot backup mode, it quits. (it is designed to do so; it could do anything else as well)

Well.. If the database is not in hot backup mode, the script puts it into hot backup mode, connects to the storage and executes the snapmirror command, which produces a storage snapshot and does the snapmirror transfer towards the disaster site. (replication)

I tried to catch all the errors all the way down the stack. (from OS to SQL, from SQL to shell)
So, the script is a little smart, but it can be made smarter as well.
During its execution, the script produces readable log files..
I wrote this script for specific needs of one of my customers, but you can go further and modify it according to your needs.

The thing that made me share this script with you is actually to show what else can be done using shell scripts.

Note that, I actually wrote a similar script (probably better than this one) back in 2008, using Perl. In that script, I was creating the snapshots from the code, managing the expired snapshots and so on.

This one, however, is written as a bash script, and it uses "sshpass" for password authentication while connecting to the storage using ssh. (note that, alternatively, we can build key based authentication between Linux and Netapp -- see the sketch below)
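Here is a hedged sketch of that key based alternative. (the authorized_keys location on the filer depends on the Data ONTAP version; the path below is the 7-Mode style one, and the user/IP are the example values used in the script)

# on the Linux db server, generate a dedicated key pair
ssh-keygen -t rsa -b 2048 -f ~/.ssh/id_rsa_netapp -N ""

# append ~/.ssh/id_rsa_netapp.pub to the storage user's authorized_keys on the
# filer (in 7-Mode, typically /etc/sshd/<user>/.ssh/authorized_keys), then:
ssh -i ~/.ssh/id_rsa_netapp snapuser@10.10.10.1 "snapmirror status"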

I wrote this script for Linux, and the storage that the script connects to is a Netapp.

I used WHENEVER OSERROR and WHENEVER SQLERROR for catching sqlplus errors. I used $? for catching shell program errors/returns.

I used bash functions to make the script look nicer. (I must admit that I should have written a function specifically for logging as well :)

I also designed this script to ignore some of the errors, which actually indicate good things. (like the ORA-01403 raised while checking for datafiles in backup mode -- no data found there means no file is stuck in backup mode, which is exactly what we want)

You will find an example of the execution (with output) , an example logfile and the script code below.

Example execution and the output:


[oracle@rac01 netapp_snap_script]$ sh snapmirror.sh

Snapmirror script is started , logging to : /home/oracle/netapp_snap_script/log/snapmirror.log.2017_06_05_1496660095
Checking if the database is already in hot backup
Database is not in backup mode, Good.. Now executing alter database begin backup command
Enabling Hot Backup Mode
Successfully enabled hot backup mode
Connecting to the storage and executing snapmirror command
Disabling Hot Backup Mode
Successfully disabled hot backup mode
Execution successful : date : Mon Jun 5 13:55:08 +03 2017


Example Logfile:


###################################################################################
##Checking if the database is already in hot backup, we will exit if it is so..#####
###################################################################################

SQL*Plus: Release 11.2.0.4.0 Production on Mon Jun 5 13:54:55 2017

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> SQL> SQL>   2    3    4    5  declare check_variable number(4);
*
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at line 3


Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
###################################################################################
INFO: If you see ORA-01403 No data found errors above, this is NORMAL!! IGNORE IT .. it means -> there are no files in hot backup mode, which is good
###################################################################################
###################################################################################
##Database is not in backup mode, Good.. Now executing alter database begin backup command #####
###################################################################################

###################################################################################
####Putting the database into Hot Backup Mode####
###################################################################################

SQL*Plus: Release 11.2.0.4.0 Production on Mon Jun 5 13:54:55 2017

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> SQL> SQL> SQL> 
Database altered.

SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

STORAGE WORK !!

###################################################################################
Connecting to the storage and executing snapmirror command
###################################################################################
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.

###################################################################################
Disabling Hot Backup Mode
###################################################################################

SQL*Plus: Release 11.2.0.4.0 Production on Mon Jun 5 13:55:07 2017

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> SQL> SQL> 
Database altered.

SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
###################################################################################
Execution Successful !! date : Mon Jun  5 13:55:08 +03 2017
###################################################################################


Script:


storage_ip="10.10.10.1" ## I m not giving the real storage ip here :)
storage_user="snapuser"
storage_password="blablablabla"
oracle_connection_type="/ as sysdba"
oracle_home="/u01/app/oracle/product/11.2.0.4/dbhome_1"
oracle_sid="TEST1"
log_file_dir="/home/oracle/netapp_snap_script/log"
execution_date=`date +%Y_%m_%d_%H_%M`
#####Setting the ORACLE ENVIRONMENT####

export ORACLE_HOME=$oracle_home
export ORACLE_SID=$oracle_sid
export PATH=$ORACLE_HOME/bin:$PATH
echo Snapmirror script is started , logging to : /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date

#################FUNCTION DECLARATIONS ####

function check_hot_backup_mode()

{

echo "Checking if the database is already in hot backup"
echo "###################################################################################" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date
echo "##Checking if the database is already in hot backup, we will exit if it is so..#####" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date
echo "###################################################################################" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date
sqlplus $oracle_connection_type >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date 2>&1 <<EOF
WHENEVER OSERROR EXIT 9;
WHENEVER SQLERROR EXIT 30;
declare check_variable number(4);
begin
select file# into check_variable from v\$backup where status='ACTIVE' and rownum=1;
end;
/
EOF

sql_return_code=$?
if [ $sql_return_code == 0 ]
then
echo "ERROR!!! error while checking the hot backup status, there is a good chance that: the Database is already in the hot backup mode(at least one file is in backup mode)"
echo "Disabling the hot backup mode, by executing alter database end backup, but you should check the logfile /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date"
end_backup
exit;
fi


echo " " >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date

echo "###################################################################################" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date

echo "INFO: If you see ORA-01403 No data found errors above, this is NORMAL!! IGNORE IT .. it means -> there is no files in hot backup mode, which is good" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date

echo "###################################################################################" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date

}

function begin_backup()
{
check_hot_backup_mode
echo "Database is not in backup mode, Good.. Now executing alter database begin backup command"
echo "###################################################################################" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date
echo "##Database is not in backup mode, Good.. Now executing alter database begin backup command #####" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date
echo "###################################################################################" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date
echo"">> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date
echo "Enabling Hot Backup Mode"
echo "###################################################################################" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date
echo "####Putting the database into Hot Backup Mode####" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date
echo "###################################################################################" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date
sqlplus $oracle_connection_type >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date 2>&1 <<EOF
WHENEVER OSERROR EXIT 9;
WHENEVER SQLERROR EXIT 29;
alter database begin backup;
EOF

sql_return_code=$?
if [ $sql_return_code == 29 ]
then
echo "ERROR!!! while enabling hot backup mode (SQL/SQLPLUS level error)"
echo "check the logfile /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date"
exit 29;
fi


if [ $sql_return_code == 9 ]
then
echo "ERROR!!! while enabling hot backup mode (OS level error)"
echo "check the logfile /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date"
exit 9;
fi

echo "Succesfully enabled hot backup mode"

}


function storage_work()
{
### DO THE STORAGE WORK ##
echo "" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date
echo "STORAGE WORK !!" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date
echo "Connecting to the storage and executing snapmirror command"
echo"">> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date
echo "###################################################################################" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date
echo "Connecting to the storage and executing snapmirror command" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date
echo "###################################################################################" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date

sshpass -p $storage_password ssh $storage_user@$storage_ip "snapmirror update IZMOSRVCLS" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date 2>&1
sshpass_return=$?

if [ $sshpass_return != 0 ]
then
echo "ERROR!!! while executing SSH!!"
echo "ERROR!!! while execution ssh to storage, the return is $sshpass_return" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date
echo "check the logfile /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date"
end_backup
exit $sshpass_return;
fi

}


function end_backup()
{
## END HOT BACKUP ##
echo "Disabling Hot Backup Mode"
echo"">> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date
echo "###################################################################################" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date

echo "Disabling Hot Backup Mode" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date

echo "###################################################################################" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date

sqlplus $oracle_connection_type >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date 2>&1 <<EOF

WHENEVER OSERROR EXIT 9;
WHENEVER SQLERROR EXIT 29;
alter database end backup;
EOF

sql_return_code=$?
if [ $sql_return_code == 29 ]
then
echo "ERROR!!! while disabling hot backup mode (SQL/SQLPLUS level error)"
echo "check the logfile /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date"
echo " " >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date
echo "###################################################################################" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date
echo "!!!!!!!!!!ORA-01260 errors are ignorable if select * from v$backup where status='ACTIVE' returns no rows!!!!!!!!!!!!!!!!!" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date
echo "just rerun this script !!!! if you see ORA-01260 here""" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date
echo "###################################################################################" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date
exit 29;
fi


if [ $sql_return_code == 9 ]
then
echo "ERROR!!! while disabling hot backup mode (OS level error)"
echo "check the logfile /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date"
exit 9;
fi

echo "Successfully disabled hot backup mode"
echo "Execution successful : date : `date`"
echo "###################################################################################" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date
echo "Execution Successful !! date : `date`" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date
echo "###################################################################################" >> /home/oracle/netapp_snap_script/log/snapmirror.log.$execution_date
}


begin_backup
sleep 2;
storage_work
sleep 2;
end_backup
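An illustrative way to schedule it (the paths below are my examples):

# example crontab entry: run the replication every night at 01:00
0 1 * * * /bin/bash /home/oracle/netapp_snap_script/snapmirror.sh >> /home/oracle/netapp_snap_script/log/cron.out 2>&1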

Tuesday, May 23, 2017

RDBMS/Exadata -- Estimating the HCC compression ratio and the gain //using DBMS_COMPRESSION.GET_COMPRESSION_RATIO

If you are considering buying an Exadata, and if you are wondering how much storage space you can save with the Exadata specific compression methods, you should take a look at this one.

You can see the compression ratio and calculate the % gain for a table by executing dbms_compression.get_compression_ratio. (specifying COMP_FOR_QUERY_HIGH, or another compression type such as COMP_FOR_QUERY_LOW, as the compression type)

You can use this method to compute the compression ratio and the gain even if you do not currently have the hardware required to run HCC. (in other words, your database does not need to be running on Exadata, and your database files do not need to reside on Pillar or ZFS storage hardware)

Note that, the names of the compression type constants change according to the release. For example, it is COMP_QUERY_HIGH if the database is 12c.

Anyways, by using this method, you can get an idea about what you can gain in terms of storage savings.

Of course, there are some requirements for using this method;
  • Your database must be 11gR2. ( I recommend >=11.2.0.2, because there are bug records for the 11.2.0.1 dbms_compression )
  • The RDBMS compatible parameter must be set to at least 11.2.0.
  • The table to be estimated needs to have a minimum of 1 million rows.
The code reports the stats for both the uncompressed and the compressed version of the table. (the code itself creates both of those in the scratch tablespace that we specify as the "scratchtbsname")

The code creates 2 internal tables named DBMS_TABCOMP_TEMP_UNCMP (uncompressed) and DBMS_TABCOMP_TEMP_CMP (compressed). It generates the compression ratio by comparing these two tables, and it drops them immediately after the estimation.

Let's see an example for QUERY HIGH compression->
  • We pick a big table, such as XLA_DIAG_SOURCES;

SQL> select owner,table_name,num_rows from dba_tables where num_rows>100000 order by num_rows desc;

OWNER                          TABLE_NAME                       NUM_ROWS
------------------------------ ------------------------------ ----------
XLA                            XLA_DIAG_SOURCES                226354744

  • The user that owns the table to be analyzed must have quota on the scratch tablespace, so we give quota to the user. This is to avoid "ORA-01950: no privileges on tablespace 'XXERM'" errors.
SQL>

 ALTER USER XLA QUOTA UNLIMITED ON XXERM;

User altered.    
  • We execute the DBMS_COMPRESSION.GET_COMPRESSION_RATIO.
SQL>

set serveroutput on
declare
 v_scratchtbsname varchar2(32) := 'XXERM';
 v_ownname varchar2(32) := 'XLA';
 v_tabname varchar2(32) := 'XLA_DIAG_SOURCES';
 v_partname varchar2(32) := null;
 v_comptype number(10,0) := DBMS_COMPRESSION.COMP_FOR_QUERY_HIGH;
 v_blkcnt_cmp pls_integer;
 v_blkcnt_uncmp pls_integer;
 v_row_cmp pls_integer;
 v_row_uncmp pls_integer;
 v_cmp_ratio number;
 v_comptype_str varchar2(4000);
begin
 DBMS_COMPRESSION.GET_COMPRESSION_RATIO (
 Scratchtbsname => v_scratchtbsname,
 ownname => v_ownname,
 tabname => v_tabname,
 partname => v_partname,
 comptype => v_comptype,
 blkcnt_cmp => v_blkcnt_cmp,
 blkcnt_uncmp => v_blkcnt_uncmp,
 row_cmp => v_row_cmp,
 row_uncmp => v_row_uncmp,
 cmp_ratio => v_cmp_ratio,
 comptype_str => v_comptype_str
 );
 dbms_output.put_line('Block count if compressed  :' || v_blkcnt_cmp);
 dbms_output.put_line('Block count if uncompressed: ' || v_blkcnt_uncmp);
 dbms_output.put_line('Row count per block if compressed: ' || v_row_cmp);
 dbms_output.put_line('Row count per block if uncompressed : ' || v_row_uncmp);
 dbms_output.put_line('Compression ratio :' || v_cmp_ratio);
 dbms_output.put_line('Gain (%) :' || round((v_blkcnt_uncmp-v_blkcnt_cmp)/v_blkcnt_uncmp*100,2));
 dbms_output.put_line('Method : ' || v_comptype_str);
end;
/

OUTPUT
--------------
Compression Advisor self-check validation successful. select count(*) on both
Uncompressed and EHCC Compressed format = 1000001 rows

Block count if compressed: 353
Block count if uncompressed: 19745
Row count per block if compressed: 2833
Row count per block if uncompressed: 50
Compression ratio: 55.9
Gain (%): 98.21
Method: "Compress For Query High"

Let's see an example for QUERY LOW compression->

SQL> set serveroutput on
declare
v_scratchtbsname varchar2(32) := 'XXERM';
v_ownname varchar2(32) := 'XLA';
v_tabname varchar2(32) := 'XLA_DIAG_SOURCES';
v_partname varchar2(32) := null;
v_comptype number(10,0) := DBMS_COMPRESSION.COMP_FOR_QUERY_LOW;
v_blkcnt_cmp pls_integer;
v_blkcnt_uncmp pls_integer;
v_row_cmp pls_integer;
v_row_uncmp pls_integer;
v_cmp_ratio number;
v_comptype_str varchar2(4000);
begin
DBMS_COMPRESSION.GET_COMPRESSION_RATIO (
Scratchtbsname => v_scratchtbsname,
ownname => v_ownname,
tabname => v_tabname,
partname => v_partname,
comptype => v_comptype,
blkcnt_cmp => v_blkcnt_cmp,
blkcnt_uncmp => v_blkcnt_uncmp,
row_cmp => v_row_cmp,
row_uncmp => v_row_uncmp,
cmp_ratio => v_cmp_ratio,
comptype_str => v_comptype_str
);
 dbms_output.put_line('Block count if compressed  :' || v_blkcnt_cmp);
 dbms_output.put_line('Block count if uncompressed: ' || v_blkcnt_uncmp);
 dbms_output.put_line('Row count per block if compressed: ' || v_row_cmp);
 dbms_output.put_line('Row count per block if uncompressed : ' || v_row_uncmp);
 dbms_output.put_line('Compression ratio :' || v_cmp_ratio);
 dbms_output.put_line('Gain (%) :' || round((v_blkcnt_uncmp-v_blkcnt_cmp)/v_blkcnt_uncmp*100,2));
 dbms_output.put_line('Method : ' || v_comptype_str);
end;
/

OUTPUT
--------------------
Compression Advisor self-check validation successful. select count(*) on both
Uncompressed and EHCC Compressed format = 1000001 rows
Block count if compressed: 836
Block count if uncompressed: 19745
Row count per block if compressed: 1196
Row count per block if uncompressed: 50
Compression ratio: 23.6
Gain (%): 95.77
Method: "Compress For Query Low"

Wednesday, May 17, 2017

RAC ASM -- Disaster Recovery solution (using storage techniques) snapshot/snapmirror, Netapp

This blog post is about an alternative Disaster Recovery solution that I  recently built through a POC (Proof of Concept).
The source system was a 2 Node Oracle RAC (Grid + RDBMS, both 11.2.0.4) and the target system residing at the DR site was a 1 Node environment (Grid + RDBMS, both 11.2.0.4).
Both source and target were using ASM for storing all the db files.
This made me think a little bit, because this alternative disaster recovery solution was relying on the Netapp snapshot and snapmirror mechanisms.
We were planning to put the database into backup mode (hot backup mode / alter database begin backup), take a snapshot on the local storage, transfer the snapshot to the DR storage using snapmirror, map the luns created from this snapshot/snapmirror to the target server, end the backup on the target and open the database successfully on the target.

ASM was my concern, because I had never done this with ASM before.
Actually, I implemented (and even automated) this kind of solution 6 years ago, but the filesystems were ext3 (a cooked filesystem) in that case.

So, I was afraid that the ASM header info might get corrupted during the snapshot/snapmirror operations.
However, everything went well.

It is of course not supported by Oracle. However, I can say that ASM diskgroups can be replicated using these snapshot/snapmirror operations. (of course, if you know what you are doing..)

I did 3 tests and didn't encounter any problems.

In this blog post, I will give you the things that I did to replicate a 2 Node RAC system (Grid/ASM + RDBMS) to a single Node system (Grid/ASM + RDBMS) using Netapp's snapshot and snapmirror functionalities.

The following is a representative picture, explaining the work done.
As shown in the picture, we take a snapshot of the ASM disks, replicate it to the DR site using snapmirror, then map the luns to the target server running at the DR site and make the ASM running on the target server see those disks...



The operation that was done for this requires both Linux admin and storage admin skills. ( I don't see a need for mentioning the required DBA skills :) )
It requires Linux admin skills, because we do stuff like formatting the drives (we are working on new luns, since this is a POC), rescanning the scsi bus, etc.
It requires storage admin skills, because we do stuff like taking snapshots, initiating the snapmirror, mapping luns, etc.
Lastly, it requires DBA skills, because we put the database into backup mode, change some init.ora parameters, make the replicated disks known to ASM, open our database successfully on the target node (recovery may be needed), check the log files in case of an error, and so on.

Let's see what I did in this POC ->
Note that, this is a technical overview. (it doesn't include every single step that I performed and every command that I used)

RAC to SINGLE STORAGE REPLICATION (ASM diskgroup which is built on top of 3 disks)

1. The storage team creates 3 ASM disks (each sized approx. 13gb) and maps these luns to our Linux RAC nodes. (the source nodes)
We do a scsi rescan to make Linux see these luns/disks, and then we create a partition on each of them. (oracleasm createdisk requires a partition)

fdisk output just after the scsi rescan:

Disk /dev/sdbk: 11.8 GB, 11811160064 bytes
64 heads, 32 sectors/track, 11264 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk identifier: 0x00000000


Disk /dev/sdbl: 12.9 GB, 12884901888 bytes
64 heads, 32 sectors/track, 12288 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk identifier: 0x00000000


Disk /dev/sdbm: 14.0 GB, 13960740864 bytes
64 heads, 32 sectors/track, 13314 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk identifier: 0x00000000

Note that, before creating the ASM disks, we use fdisk to create partitions on them.
fdisk output just after the partition creations:

Disk /dev/sdbk: 11.8 GB, 11811160064 bytes
64 heads, 32 sectors/track, 11264 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk identifier: 0x42926436

    Device Boot      Start         End      Blocks   Id  System
/dev/sdbk1               1       11264    11534320   83  Linux

Disk /dev/sdbl: 12.9 GB, 12884901888 bytes
64 heads, 32 sectors/track, 12288 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk identifier: 0xb900c4fd

    Device Boot      Start         End      Blocks   Id  System
/dev/sdbl1               1       12288    12582896   83  Linux

Disk /dev/sdbm: 14.0 GB, 13960740864 bytes
64 heads, 32 sectors/track, 13314 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk identifier: 0x65f91faf

    Device Boot      Start         End      Blocks   Id  System
/dev/sdbm1               1       13314    13633520   83  Linux


2. We use oracleasm to create the ASM disks, and again using oracleasm, we make ASM see them.

[root@source01 ~]# oracleasm createdisk DEV1 /dev/sdbk1
Writing disk header: done
Instantiating disk: done
[root@source01 ~]# oracleasm createdisk DEV2 /dev/sdbl1
Writing disk header: done
Instantiating disk: done
[root@source01 ~]# oracleasm createdisk DEV3 /dev/sdbm1
Writing disk header: done
Instantiating disk: done

[root@source01 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@source01 ~]# oracleasm listdisks

DATA1
DATA2
DATA3
DATA4
DATA5
DATA6
DATA7
DATA8
DATA9
DEV1 ** new disk
DEV2 ** new disk
DEV3 ** new disk
FRA1
FRA2
TEST1

3. In order to make these new ASM disks available on source node 2 (remember, this is RAC), we rescan the scsi bus and run oracleasm scandisks on source node 2 as well..

root@source02 ~]#  echo "- - -" > /sys/class/scsi_host/host0/scan
[root@source02 ~]#  echo "- - -" > /sys/class/scsi_host/host1/scan
[root@source02 ~]#  echo "- - -" > /sys/class/scsi_host/host2/scan
[root@source02 ~]#  echo "- - -" > /sys/class/scsi_host/host3/scan
[root@source02 ~]#  echo "- - -" > /sys/class/scsi_host/host4/scan
[root@source02 ~]# oracleasm  scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "DEV2"
Instantiating disk "DEV1"
Instantiating disk "DEV3"
[root@source02 ~]# oracleasm listdisks
DATA1
DATA2
DATA3
DATA4
DATA5
DATA6
DATA7
DATA8
DATA9
DEV1  ** new disk
DEV2  ** new disk
DEV3  ** new disk
FRA1
FRA2
TEST1

4. After these scsi devices and ASM disks become available on both of the RAC nodes, we create a diskgroup called +DEV on these 3 new disks using ASMCA. After that, we create a 2 node RAC database called DEVRAC, and we configure this database so that all its files (controlfiles, redologs, datafiles etc..) are placed into the +DEV diskgroup.

SQL*Plus: Release 11.2.0.4.0 Production on Wed May 10 13:39:43 2017

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select instance_name from gv$instance;

INSTANCE_NAME
----------------
DEVRAC1
DEVRAC2

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+DEV/devrac/system01.dbf
+DEV/devrac/sysaux01.dbf
+DEV/devrac/undotbs01.dbf
+DEV/devrac/users01.dbf
+DEV/devrac/undotbs02.dbf

SQL> select name from v$controlfile;

NAME
--------------------------------------------------------------------------------
+DEV/devrac/control01.ctl
+DEV/devrac/control02.ctl

SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
+DEV/devrac/redo02.log
+DEV/devrac/redo01.log
+DEV/devrac/redo03.log
+DEV/devrac/redo04.log


5. At this point, in order to be able to make the necessary checks in the target environment, we create a table called ERMAN in the source. This table resides in a tablespace called ERMAN. (all the datafiles of this tablespace reside in the +DEV diskgroup)
So, we first create the tablespace ERMAN, creating all its datafiles on diskgroup +DEV. Then, we create the table ERMAN in the tablespace ERMAN. (lots of ERMANs.., I know :)

SQL> CREATE TABLESPACE ERMAN DATAFILE '+DEV';

Tablespace created.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+DEV/devrac/system01.dbf
+DEV/devrac/sysaux01.dbf
+DEV/devrac/undotbs01.dbf
+DEV/devrac/users01.dbf
+DEV/devrac/undotbs02.dbf
+DEV/devrac/datafile/erman.270.943623731


SQL> create table ERMAN tablespace ERMAN as select  * from dba_objectS;

Table created.

SQL> insert into ERMAN select * from ERMAN;

86293 rows created.

SQL> commit;

Commit complete.

SQL> insert into ERMAN select * from ERMAN;

172586 rows created.

6. We put our database (DEVRAC) into backup mode / hot backup mode, and while it is in backup mode, we create a storage snapshot and then do a snapmirror update. After the snapmirror update, we create the luns on the target storage system (using the snapmirrored data) and map these luns to the target database node. After the lun mapping, we rescan the scsi bus on the target node (Linux) and make the ASM running on the target node see these new luns. (using oracleasm) The storage side of this step is sketched just below.
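The storage side commands were executed by the storage team; roughly, they look like the following. (this is 7-Mode style syntax, and the volume/filer names are made-up examples, so treat it as an illustration only)

# on the source filer, while the database is in hot backup mode
snap create dev_vol devrac_hotbackup_snap

# on the destination filer, pull an incremental snapmirror update
snapmirror update -S src_filer:dev_vol dst_filer:dev_vol_mirror

# the luns are then created/mapped from the mirrored volume to the target host
# (the lun clone/map syntax depends on the ONTAP version, so I omit it here)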

root@targetsinglenode ~]# echo "- - -" > /sys/class/scsi_host/host0/scan
[root@targetsinglenode ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@targetsinglenode ~]# echo "- - -" > /sys/class/scsi_host/host2/scan
[root@targetsinglenode ~]# echo "- - -" > /sys/class/scsi_host/host3/scan
[root@targetsinglenode ~]# echo "- - -" > /sys/class/scsi_host/host4/scan

Disk /dev/sdn: 11.8 GB, 11811160064 bytes
64 heads, 32 sectors/track, 11264 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk identifier: 0x42926436

   Device Boot      Start         End      Blocks   Id  System
/dev/sdn1               1       11264    11534320   83  Linux
Disk /dev/sdo: 12.9 GB, 12884901888 bytes
64 heads, 32 sectors/track, 12288 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk identifier: 0xb900c4fd

   Device Boot      Start         End      Blocks   Id  System
/dev/sdo1               1       12288    12582896   83  Linux

Disk /dev/sdp: 14.0 GB, 13960740864 bytes
64 heads, 32 sectors/track, 13314 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk identifier: 0x65f91faf

   Device Boot      Start         End      Blocks   Id  System
/dev/sdp1               1       13314    13633520   83  Linux

[root@targetsinglenode ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "DEV1"
Instantiating disk "DEV2"
Instantiating disk "DEV3" 

7. After making the snapmirrored disks available on the target host, we modify the init.ora, which we copied from the source. We modify it to exclude the RAC related parameters, because the target node is a single node.
After modifying the init.ora, we mount the diskgroup named +DEV on the target node's ASM, and we start up the DEVRAC database as a single node database on our target node.
Note that, after opening the database, we check the table named ERMAN in the target instance to ensure that it has been replicated successfully.

[oracle@targetsinglenode ~]$ sqlplus "/as sysasm"

SQL*Plus: Release 11.2.0.4.0 Production on Wed May 10 14:26:35 2017

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Automatic Storage Management option

SQL> alter diskgroup DEV mount;


SQL> startup mount;
ORACLE instance started.

Total System Global Area 2355015680 bytes
Fixed Size                  2255472 bytes
Variable Size             587203984 bytes
Database Buffers         1761607680 bytes
Redo Buffers                3948544 bytes
Database mounted.
SQL> alter database end backup;

Database altered.

SQL> alter database open;

Database altered.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+DEV/devrac/system01.dbf
+DEV/devrac/sysaux01.dbf
+DEV/devrac/undotbs01.dbf
+DEV/devrac/users01.dbf
+DEV/devrac/undotbs02.dbf
+DEV/devrac/datafile/erman.270.943623731

6 rows selected.

SQL> select count(*) from ERMAN;

  COUNT(*)
----------
    172586

SQL> alter system checkpoint;

Note: After opening the database, we monitor the alert logs and check the related log files for both RDBMS and GRID, looking for any errors recorded in them.. In conclusion, we don't see any errors.

8. Just in case, we use the replicated database a little bit. We do some read/write tests by inserting some data using a loop, switching the log files and so on.. During these tests, we do not encounter any abnormalities.

SQL*Plus: Release 11.2.0.4.0 Production on Wed May 10 14:57:11 2017

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options

SQL> create table ERMAN2 as select * from ERMAN;

Table created.

SQL> BEGIN
  2      FOR x IN 1 .. 10 LOOP
  3           INSERT INTO ERMAN2
  4           SELECT * from ERMAN;
  5      END LOOP;
  6  END;
  7  /

PL/SQL procedure successfully completed.

SQL>
SQL>
SQL> commit;

Commit complete.

SQL> archive log all;
ORA-00271: there are no logs that need archiving
SQL> alter system checkpoint;

System altered.

SQL> alter system switch logfile;

System altered.

9. Actually, at this point the POC is successfully finished, but I want to give you some bonus info about the deletion of the database and the other things that we created during this POC.
Basically, we drop the database using rman, we drop the diskgroup using ASMCA, and we clear the freed-up (candidate) disks using oracleasm. (deletedisk)
You can see the "rman drop database method" in the following example.. The method that I give in the following example is for a single node database, but it can be used in RAC environments as well.. (note: in order to drop a RAC database using rman, the parameter named cluster_database must be set to "false" before executing the drop database command -- an illustration follows)
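For the RAC case, that parameter change is a one-liner; an illustrative sequence before the drop:

SQL> alter system set cluster_database=false scope=spfile sid='*';
-- then stop all the instances, restart one instance in exclusive mount mode
-- and run "drop database including backups" from rman, as shown below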

[oracle@targetsinglenode trace]$ sqlplus "/as sysdba"
SQL*Plus: Release 11.2.0.4.0 Production on Wed May 10 15:01:06 2017
Copyright (c) 1982, 2013, Oracle.  All rights reserved. 

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options

SQL> shu immediaTE;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>
SQL> startup mount exclusive;
ORACLE instance started.

Total System Global Area 2355015680 bytes
Fixed Size                  2255472 bytes
Variable Size             587203984 bytes
Database Buffers         1761607680 bytes
Redo Buffers                3948544 bytes
Database mounted.
SQL> alter system enable restricted session;

System altered.

[oracle@targetsinglenode trace]$ echo $ORACLE_SID
DEVRAC
[oracle@targetsinglenode trace]$ rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Wed May 10 15:04:12 2017

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DEVRAC (DBID=680719421, not open)

RMAN> drop database including backups;

database name is "DEVRAC" and DBID is 680719421

Do you really want to drop all backups and the database (enter YES or NO)? YES

using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=25 device type=DISK
specification does not match any backup in the repository

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=25 device type=DISK
specification does not match any datafile copy in the repository
specification does not match any control file copy in the repository
specification does not match any control file copy in the repository
List of Archived Log Copies for database with db_unique_name DEVRAC
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - ---------
2       1    7       A 10-MAY-17
        Name: +DEV/devrac/archivelog/2017_05_10/thread_1_seq_7.271.943623819

3       1    8       A 10-MAY-17
        Name: +DEV/devrac/archivelog/2017_05_10/thread_1_seq_8.272.943623843

4       1    9       A 10-MAY-17
        Name: +DEV/devrac/archivelog/2017_05_10/thread_1_seq_9.273.943626473

7       1    10      A 10-MAY-17
        Name: +DEV/devrac/archivelog/2017_05_10/thread_1_seq_10.276.943628345

8       1    11      A 10-MAY-17
        Name: +DEV/devrac/archivelog/2017_05_10/thread_1_seq_11.277.943628349

9       1    12      A 10-MAY-17
        Name: +DEV/devrac/archivelog/2017_05_10/thread_1_seq_12.278.943628353

10      1    13      A 10-MAY-17
        Name: +DEV/devrac/archivelog/2017_05_10/thread_1_seq_13.279.943628355

11      1    14      A 10-MAY-17
        Name: +DEV/devrac/archivelog/2017_05_10/thread_1_seq_14.280.943628377

1       2    1       A 10-MAY-17
        Name: +DEV/devrac/archivelog/2017_05_10/thread_2_seq_1.269.943619773

5       2    2       A 10-MAY-17
        Name: +DEV/devrac/archivelog/2017_05_10/thread_2_seq_2.274.943626473

6       2    3       A 10-MAY-17
        Name: +DEV/devrac/archivelog/2017_05_10/thread_2_seq_3.275.943626475

deleted archived log
archived log file name=+DEV/devrac/archivelog/2017_05_10/thread_1_seq_7.271.943623819 RECID=2 STAMP=943623818
deleted archived log
archived log file name=+DEV/devrac/archivelog/2017_05_10/thread_1_seq_8.272.943623843 RECID=3 STAMP=943623842
deleted archived log
archived log file name=+DEV/devrac/archivelog/2017_05_10/thread_1_seq_9.273.943626473 RECID=4 STAMP=943626472
deleted archived log
archived log file name=+DEV/devrac/archivelog/2017_05_10/thread_1_seq_10.276.943628345 RECID=7 STAMP=943628345
deleted archived log
archived log file name=+DEV/devrac/archivelog/2017_05_10/thread_1_seq_11.277.943628349 RECID=8 STAMP=943628349
deleted archived log
archived log file name=+DEV/devrac/archivelog/2017_05_10/thread_1_seq_12.278.943628353 RECID=9 STAMP=943628352
deleted archived log
archived log file name=+DEV/devrac/archivelog/2017_05_10/thread_1_seq_13.279.943628355 RECID=10 STAMP=943628356
deleted archived log
archived log file name=+DEV/devrac/archivelog/2017_05_10/thread_1_seq_14.280.943628377 RECID=11 STAMP=943628377
deleted archived log
archived log file name=+DEV/devrac/archivelog/2017_05_10/thread_2_seq_1.269.943619773 RECID=1 STAMP=943619772
deleted archived log
archived log file name=+DEV/devrac/archivelog/2017_05_10/thread_2_seq_2.274.943626473 RECID=5 STAMP=943626473
deleted archived log
archived log file name=+DEV/devrac/archivelog/2017_05_10/thread_2_seq_3.275.943626475 RECID=6 STAMP=943626474
Deleted 11 objects

database name is "DEVRAC" and DBID is 680719421
database dropped

Well. That is it :) It works.. Please feel free to comment.