Wednesday, August 16, 2017

RDBMS -- FLASHBACK FEATURE -- a DEMO, Guaranteed Restore Points with or without Flashback Logging + Prereqs and Restrictions

Today, I will give you some key information about Oracle's Flashback technology. So, this post will be completely related to the database tier.
Flashback is not a new technology, I know.. It was introduced in Oracle Database 10.1; however, it still saves the day sometimes.

It also has some important restrictions, and basically it has two modes of use: 1) the things we can do when flashback logging is enabled, and 2) the things we can do when flashback logging is disabled.
The disk consumption and the performance overhead also change according to the way we choose to use the Flashback Database feature.

The thing that made me write this post is an upcoming upgrade operation (an applications upgrade) on a mission-critical Oracle RAC database.

The database is very large, so it is not efficient to back it up just before the upgrade (both from an rman perspective and a storage perspective).
In addition, even if we create a backup, the restore time is estimated to be very long, and that cannot be accepted by the business.

So in this situation, what do we recommend? The Flashback Database feature.
The business and application guys just want to have the ability to restore the database to a specific point before the upgrade.. (in case of a failure)
So, more specifically, what do we recommend? A guaranteed restore point "without" enabling flashback logging. (note that flashback logging is enabled by the command -> alter database flashback on)

These recommendations are based on the following facts;
  • The Flashback Database feature is not only used for traditional recovery; it can also be used when we need quick recovery. (scenarios like database upgrades, application deployments and testing, when test databases must be quickly created and re-created)
  • A guaranteed restore point ensures that you can use Flashback Database to rewind a database to its state at the restore point SCN, even if the generation of flashback logs is disabled.
  • Flashback Database is much faster than point-in-time recovery because it does not require restoring datafiles from backup and requires applying fewer changes from the archived redo logs.
I will demonstrate the things I just mentioned above, but first, let's see the prereqs and restrictions of the Flashback feature, as well as the 2 different configurations that can be used to benefit from it -- "GUARANTEED RESTORE POINT without Flashback Logging" vs "FLASHBACK LOGGING (or GUARANTEED RESTORE POINT with Flashback Logging)".
(I categorized them as 2, but they can be divided into more categories if desired/needed)

PREREQS, RESTRICTIONS AND THE 2 TYPES OF USAGE OF THE FLASHBACK FEATURE

Let's start with the prerequisites.. I will just give a quick list of prereqs here.. (a quick verification sketch follows the list)
  • The database should be in archivelog mode.
  • We must use the FRA (db_recovery_file_dest and db_recovery_file_dest_size), because flashback logs (.flb files) are created there.
  • If needed, flashback logging must be enabled using the "alter database flashback on" command, while the database is in mount mode.
  • Our database must be Enterprise Edition.
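Here is a minimal sketch for verifying these prereqs (the views and parameters are standard; the exact queries are mine):

SQL> archive log list
SQL> show parameter db_recovery_file_dest
SQL> select log_mode, flashback_on from v$database;
SQL> select banner from v$version where banner like '%Edition%';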
Okay.. Let's continue with the Restrictions part;
  • The Flashback feature cannot be used with Standard Edition Oracle Databases.
I mean, flashback logs can be created (tested with a guaranteed restore point), but we just can't restore our database to the restore points..

SQL> flashback database to restore point ERMAN_TEST;
flashback database to restore point ERMAN_TEST
*
ERROR at line 1:
ORA-00439: feature not enabled: Flashback Database

Similarly, if we try to enable flashback logging on a Standard Edition Oracle Database, we end up with this ->

In mount mode;
SQL> alter database flashback on;
alter database flashback on
*
ERROR at line 1:
ORA-00439: feature not enabled: Flashback Database
  • If we don't enable flashback logging, then the first guaranteed restore point must be created in the mount state. Otherwise, we get the following error;
SQL> CREATE RESTORE POINT ERMAN_TEST GUARANTEE FLASHBACK DATABASE
*
ERROR at line 1:
ORA-38784: Cannot create restore point 'ERMAN_TEST'.
ORA-38787: Creating the first guaranteed restore point requires mount mode when
flashback database is off.

Lastly, there is a list of restrictions that I want to give; (a related quick check follows this list)
  • We cannot flash back to an SCN ahead of the current SCN.
  • When we restore using flashback, our database must be opened with RESETLOGS.
  • We just cannot use Flashback Database to undo a shrink datafile operation. (shrinking a datafile or dropping a tablespace can prevent flashing back the affected datafiles)
  • If we use Flashback Database to flash back our database to a target time at which a NOLOGGING operation was in progress, block corruption is likely in the database objects and datafiles affected by the NOLOGGING operation.
  • In order to flash back the database, it must be in mount state; otherwise we get the following error -> ORA-38757: Database must be mounted and not open to FLASHBACK.
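Related to the first restriction: the earliest point we can flash back to can be checked as follows (standard view; the query itself is my sketch):

SQL> select oldest_flashback_scn, oldest_flashback_time from v$flashback_database_log;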

GUARANTEED RESTORE POINT without Flashback Logging vs FLASHBACK LOGGING (or GUARANTEED RESTORE POINT with Flashback Logging):

In this subsection, I will explain the difference between a "flashback logging enabled Oracle Database" and a "flashback logging disabled Oracle Database".

First of all, we can use a guaranteed restore point in either mode: both when flashback logging is enabled and when it is disabled.

The main difference is that, without flashback logging, the before-images of the modified blocks are saved only once (the first time a block is modified after the guaranteed restore point is created).

With flashback logging, modified blocks are saved continuously, on every modification. (note that flashback logging is enabled using "alter database flashback on")

This is because we enable flashback logging in order to be able to flash back to any SCN (within our flashback retention).

But if we don't enable flashback logging and just create a guaranteed restore point, we can only flash back to the SCN of the moment we created that guaranteed restore point.

Consider a scenario where the business and application guys want to do an upgrade and want us to have the ability to restore the database to a specific point before the upgrade (in case their upgrade fails). Then we use a guaranteed restore point without flashback logging. (of course we could also enable flashback logging, but it is unnecessary in this case)

But if the business and application guys want us to have the ability to restore our database to any point in time between the start and the end of the upgrade, we enable flashback logging, create one or more guaranteed restore points, and configure our flashback retention policy. (a minimal sketch for this scenario follows)
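Here is a minimal sketch for that second scenario (the retention value and the restore point name are examples; I enable flashback logging in mount mode to stay consistent with the prereqs above, although later releases also allow enabling it while the database is open):

SQL> startup mount;
SQL> alter system set db_flashback_retention_target=1440 scope=both;  -- in minutes (24 hours here, as an example)
SQL> alter database flashback on;
SQL> create restore point BEFORE_UPGRADE guarantee flashback database;
SQL> alter database open;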

One last important piece of info before continuing;

Each update will generate REDO and UNDO, and each block in the UNDO tablespace that is used for the first time will require flashback data to be written. So, with this in mind, we can say that if our database is 100 GB, then potentially the flashback data, where only a guaranteed restore point is used, should be at most 100 GB. (the actual and estimated consumption can be monitored as sketched below)
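A quick monitoring sketch (standard v$ view; the query itself is mine):

SQL> select flashback_size/1024/1024 used_mb, estimated_flashback_size/1024/1024 estimated_mb
     from v$flashback_database_log;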

DEMO:

Let's do a demo and see if it works :)

In this DEMO, I'm doing the things below;
  • Create a table named T.
  • Load 100000 rows into table T using a loop.
  • Create a guaranteed restore point in mount mode.
  • Update those 100000 rows using a loop.
  • Restore the database to the guaranteed restore point.
  • Drop the restore point.
I'm creating the restore point just before the table update, because I want to get back to the point where the update is not executed yet. 
  • First, I start my database and create the table T.
SQL> startup
ORACLE instance started.
Database mounted.
Database opened.

SQL> create table t ( x int, y char(50) );

Table created.
  • Then I load 100000 rows into the table T.
SQL> begin
for i in 1 .. 100000
loop
insert into t values ( i, 'x' );
end loop;
commit;
end;
/

PL/SQL procedure successfully completed.
SQL> exit
  • I check my FRA to see whether any flashback logs have been created at this point. As I didn't enable flashback logging and haven't created any restore point yet, I don't expect to see any flashback logs there..
[oracle@prelive flashback]$ ls -lrth
total 0
  • Then I put my database into mount mode and create my guaranteed restore point as follows; (remember: creating the first guaranteed restore point requires mount mode when flashback database is off)
SQL> shu immediaTE;
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.
Database mounted.

SQL> CREATE RESTORE POINT ERMAN_TEST GUARANTEE FLASHBACK DATABASE;
Restore point created.
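As a side check, the restore point and its storage usage can also be confirmed from v$restore_point (a quick sketch of mine):

SQL> select name, scn, guarantee_flashback_database, storage_size/1024/1024 mb from v$restore_point;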
  • Now I check whether any flashback logs are created, and I see a file has been initialized..
SQL> !ls -lrt
total 15268
-rw-rw---- 1 oracle dba 15613952 Aug 15 15:11 o1_mf_ds5s89wn_.flb
  • Next, I update all the rows in table T as follows.. I'm doing this to show you the flashback logs generated as a result of this update.
SQL> alter database open; (remember, my db was in mount mode)
Database altered.

SQL> begin
for i in 1 .. 100000
loop
update t set y='y' where x=i;
commit;
end loop;
end;
/
PL/SQL procedure successfully completed.
  • Now I check the generated flashback logs, both from the DB and from the filesystem.
SQL> select flashback_size/1024/1024 MB from v$flashback_database_log;

MB
----------
44.6484375

SQL> !ls -lrth
total 45M
-rw-rw---- 1 oracle dba 15M Aug 15 15:15 o1_mf_ds5s89wn_.flb
-rw-rw---- 1 oracle dba 15M Aug 15 15:19 o1_mf_ds5shl52_.flb
-rw-rw---- 1 oracle dba 15M Aug 15 16:12 o1_mf_ds5spgkt_.flb

As you see, 45 MB of flashback logs were created.. (the table size was approximately 10-15 MB).. This is because of the modified blocks + undo data..
  • Now, I put my database into mount mode and flash back my database to my guaranteed restore point.
SQL> shu immediatE;
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.
Database mounted.

SQL> flashback database to restore point ERMAN_TEST;

Flashback complete.
  • I successfully issued my flashback command and now it is time to open the database, but the database must be opened using RESETLOGS.. (otherwise, I get "ORA-01589: must use RESETLOGS or NORESETLOGS option for database open")
SQL> alter database open resetlogs;
Database altered.
  • Now I check table T to see if my update has been reverted.
SQL> select * from t where y='y';
no rows selected

Yes. As if it never happened :)

  • Lastly, I drop my restore point and see that the relevant flashback logs are cleared from the Flash Recovery Area.
SQL> drop restore point ERMAN_TEST;
Restore point dropped.

SQL> exit

[oracle@prelive flashback]$ ls -lrt
total 0 ---CLEAN!

That's all :) A nice feature, and it works like a charm :)

Saturday, July 22, 2017

Book: Practical Oracle E-Business Suite, last package delivered :)

Today's blog post is not technical :)
This post is book-related and it will be short.

As you know, our book, Practical Oracle E-Business Suite, was published almost a year ago.

I really appreciate the feedback and the increasing interest in this book.

I want my followers and my readers to know that, in addition to my desire to write, this feedback and recognition are my best motivations for writing.

Actually, they are my new motivations for making continuous contributions to the Oracle Community.

After the publication, I had the 10 copies that Apress sent to me. As of yesterday, I delivered the last copy I had left.

Once again, many thanks to all the people who, directly or indirectly, consciously or unconsciously, have helped me get to where I am today.

Friday, July 21, 2017

EBS 12.2 -- after a fresh install, AppsLogin is not working; adgendbc.sh fails with java.sql.SQLException: Invalid number format for port number

We encountered this strange problem just after a fresh EBS 12.2 installation.
The HTTP Server check done on the last screen of rapidwiz failed.
The underlying database was a 12.1 RAC, and that's why we first tried to solve it by analyzing the dbc files and JDBC thin URLs.
We even went into the database and checked the fnd_* tables (fnd_databases, fnd_listener_ports etc..) to find a clue. We did a full db tier check and ensured that both the local and SCAN listeners were configured perfectly.
We recreated the topology by running autoconfig, after truncating the fnd_oam_context_files table and the other related tables using fnd_conc_clone.setup_clean.
Nothing we did fixed the error we were seeing in the apps tier autoconfig executions.
adgendbc.sh kept failing with java.sql.SQLException: Invalid number format for port number.

After long research and lots of effort, we concluded that we were facing, in an EBS 12.2 instance, the problem that was documented for EBS 12.1!

The solution was disabling the Java just-in-time compiler for the EBS database, as sketched below.
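Here is a minimal sketch of applying it (the verification query is mine; adautocfg.sh under $ADMIN_SCRIPTS_HOME reflects a typical 12.2 layout -- adapt it to your environment):

SQL> alter system set JAVA_JIT_ENABLED = FALSE scope = both;
SQL> show parameter java_jit_enabled

After that, re-run autoconfig on the apps tier and retest the login:

$ sh $ADMIN_SCRIPTS_HOME/adautocfg.sh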

Here is the MOS document that was written for EBS 12.1 -> Adgendbc Fails With Database Connection Failure (Doc ID 1302708.1)

We saw this issue in an EBS 12.2 instance that was freshly installed on Solaris 11.3.

EBS 12.2 -- Roadmap for a "Highly Available EBS 12.2 installation using a Shared Application Filesystem, Oracle RAC infra and a Load Balancer"

Here is a roadmap, including the required documentation references, which can be used to build the configuration that I call the "highly available EBS 12.2 configuration provided by a Shared Application Filesystem, Oracle RAC infra and a Load Balancer".



Actually, I'm currently installing a 4 node EBS 12.2 environment, and in a couple of days I will document it as a whole.
Anyway, I still wanted to share the action plan with you.

Actions:

1) Install Grid 12.1 and build a RAC environment.
2) Install EBS database using rapidwiz. Install it as a RAC database.
3) Install the EBS Apps Tier on a single apps server (the primary apps server), then upgrade it to the supported release (currently 12.2.6).
4) Export the necessary directories from the primary apps server using NFS. (see the sketch after this list)
5) Mount these exported directories on the secondary apps server.
6) Follow the standard Oracle Support documents (mainly Sharing The Application Tier File System in Oracle E-Business Suite Release 12.2) and add the secondary apps server to the topology.
7) Follow EBS Load balancer document and enable the load balancer. (Note 1375686.1 - Using Load-Balancers with Oracle E-Business Suite Release 12.2)
8) Do post installation work and tune the configuration (Enable SSL, configure PCP etc..)
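A minimal NFS sketch for steps 4 and 5 (the host names and the /u01/install/APPS path are examples of mine, not values from the documents):

# on the primary apps server -- export the shared directory (Linux /etc/exports):
/u01/install/APPS   secondary-apps-node(rw,no_root_squash)
# activate the export:
exportfs -a

# on the secondary apps server -- mount it at the same path:
mount -t nfs primary-apps-node:/u01/install/APPS /u01/install/APPS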

Actions from a different point of view:

1) Install Grid infra 12.1
2) Install EBS database tier as a RAC database using startCD 51
3) Perform a full rman backup.
4) Do the database post installation work -> Using Oracle 12c Release 1 (12.1) Real Application Clusters with Oracle E-Business Suite Release R12.2 (Doc ID 1626606.1),
"Section 5.2.2 Post Install Steps"
5) Execute rapidwiz on the 1st (primary) apps node and load the configuration from the db, to install the EBS 12.2.0 apps tier.
6) Upgrade EBS to 12.2.6 and apply the translation+localization patches (if required).
7) Follow -> Sharing The Application Tier File System in Oracle E-Business Suite Release 12.2 (Doc ID 1375769.1) to add a secondary application server to the configuration;
do the things documented in "Section 3.3 Execute adpreclone Utility on the Run and Patch File System" and the steps that follow it.
8) Enable the load balancer by following -> Using Load-Balancers with Oracle E-Business Suite Release 12.2.
9) Enable Parallel Concurrent Processing -> Using Oracle 11g Release 2 Real Application Clusters and Automatic Storage Management with Oracle E-Business Suite Release 12.2 (Doc ID 1453213.1), "Appendix I: Configure Parallel Concurrent Processing". (I'm giving the 11gR2 document for this, because enabling PCP is not documented in the document named Using Oracle 12c Release 1 (12.1) Real Application Clusters with Oracle E-Business Suite Release R12.2 (Doc ID 1626606.1).)

Some of the key requirements:

*All database nodes must be at the same OS level (same OS patch level).
*All application nodes must be at the same OS level (same OS patch level).
*NFS must be installed on the apps nodes.
*ssh equivalency must be configured between the apps nodes and between the db nodes.
*Grid 12.2 is not certified with EBS. (The EBS database version delivered with the latest startCD (startCD 51) is 12.1, and RDBMS 12.1 has critical issues with Grid 12.2, so Grid 12.1 should be used.)

Things to know for Multi node installation:
  • Rapidwiz no longer supports multi node apps tier installation.
  • In order to have a multi node apps tier, we install the apps tier as a single node, upgrade EBS 12.2 to the latest RUP level (12.2.6), and then add the secondary application node using the standard cloning procedure.
  • We use NFS for mounting the APPL_TOP, COMMON_TOP, OracleAS 10.1.2 Oracle Home, Oracle WebLogic Server, and WebTier Oracle Home file systems from the primary application tier to the secondary application node.
  • Unlike in the previous releases, the shared Application Tier File System cannot be a read-only file system.
  • For a Solaris 11 installation, a modification in the installation stage files is required -> http://ermanarslan.blogspot.com.tr/2017/07/ebs-1220-installation-on-solaris-511.html
Related documents:
  • Using Oracle 12c Release 1 (12.1) Real Application Clusters with Oracle E-Business Suite Release R12.2 (Doc ID 1626606.1)
  • Note 1375769.1 - Sharing The Application Tier File System in Oracle E-Business Suite Release 12.2
  • Note 1375686.1 - Using Load-Balancers with Oracle E-Business Suite Release 12.2

Wednesday, July 19, 2017

EBS -- 12.2.0 Installation on SOLARIS 5.11 / make: Failed linking targets / "modifying the stage files" (continued)

My followers should remember this.
I already wrote an article about the failed make commands, which you may see during the installation of EBS 12.2 on Solaris 5.11 Operating System.

Here is the url of the relevant blog post: http://ermanarslan.blogspot.com.tr/2017/06/ebs-1220-installation-on-solaris-511.html

As you will see when you read that blog post, I recommended a manual modification that should be done in the installation stage files.

I  tested and verified that solution and recommended it to you.

However, there was a question in my mind.. That is, we normally shouldn't modify anything, right? (this is a certified environment)

Anyway, I'm writing this blog post to tell you that this kind of modification can also be recommended by Oracle Support.

I mean, I didn't stop chasing this problem; I created an SR, followed it up, and finally got the same recommendation from Oracle.

Oracle Support recommended almost the same thing that I recommended earlier.

Note that, currently there is no fix for this.

The workaround is ->

Locate the ins_reports.mk file in your stage directory:
EBSInstallMedia/AS10.1.2/Disk1/appsts/stage/tools34_reports.zip

Unzip it and you will find the file reports/lib32/ins_reports.mk.

Modify that file to include LD_OPTIONS for every relink/compile of every executable.
Rename your old tools34_reports.zip (rename it as tools34_old.zip).
After that, zip the contents that you unzipped earlier, under the name tools34_reports.zip.
At this point, your new tools34_reports.zip file will include the modified ins_reports.mk.
Lastly, re-execute rapidwiz.

Note: append to the relevant line if it already includes LD_OPTIONS. (a shell sketch of the whole repackaging flow follows)
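Here is a shell sketch of that repackaging flow (the /stage directory is an example of mine; the mk modifications themselves are shown in Example 1 and Example 2 below):

$ mkdir /tmp/tools34 && cd /tmp/tools34
$ unzip /stage/EBSInstallMedia/AS10.1.2/Disk1/appsts/stage/tools34_reports.zip
$ vi reports/lib32/ins_reports.mk    # add/append LD_OPTIONS as in the examples below
$ mv /stage/EBSInstallMedia/AS10.1.2/Disk1/appsts/stage/tools34_reports.zip \
     /stage/EBSInstallMedia/AS10.1.2/Disk1/appsts/stage/tools34_old.zip
$ zip -r /stage/EBSInstallMedia/AS10.1.2/Disk1/appsts/stage/tools34_reports.zip reports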

Example 1: 

Before the change: 

$(LIBSRWUSO): 
rm -f rwsutil.o rwspid.o ; \ 
$(AR) x $(LIBSRWU) rwsutil.o rwspid.o ; \ 
(LD_OPTIONS="-z muldefs"; \ 
$(SOSD_REPORTS_LDSHARED) rwsutil.o rwspid.o \ 
-lm $(LIBCLNTSH) $(LLIBTHREAD) $(MOTIFLIBS) $(SYSLIBS) -lc ) 

After the change: 

$(LIBSRWUSO): 
rm -f rwsutil.o rwspid.o ; \ 
$(AR) x $(LIBSRWU) rwsutil.o rwspid.o ; \ 
(LD_OPTIONS="-L/lib -L/lib/sparcv9 -z muldefs"; \ 
$(SOSD_REPORTS_LDSHARED) rwsutil.o rwspid.o \ 
-lm $(LIBCLNTSH) $(LLIBTHREAD) $(MOTIFLIBS) $(SYSLIBS) -lc ) 

Example 2: 

Before: 

$(RRUNM) rwrun${RW_VERSION}x: 
$(LINK) $(JVMLIB) $(RXMARB) $(RUNSTUB) $(LIBSBM) 

After: 
$(RRUNM) rwrun${RW_VERSION}x: 
LD_OPTIONS="-L/lib -L/lib/sparcv9" \ 
$(LINK) $(JVMLIB) $(RXMARB) $(RUNSTUB) $(LIBSBM) 

Pretty similar to my solution, right? :)
Anyway, I'm waiting for the next startCD, because it seems the fix for this will be included in it.

Tuesday, July 18, 2017

EBS 12.2 -- Watch out for the Grid version! Don't use 12.2 Grid with EBS! (at least for now...)

Here is a small but important piece of info for you.
If you plan to use 12.2 Grid with EBS, then you should read this.

You may already know that the EBS installer (rapidwiz) delivers a 12.1.0.2 Oracle Database when used with the latest startCD (startCD 51).

Normally, the Grid Infra version can be higher than the RDBMS version (as long as the RDBMS compatibility is set accordingly). We know that..
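As a side note, those compatibility attributes can be checked with a quick query (standard v$asm_diskgroup columns; the query itself is mine):

SQL> select name, compatibility, database_compatibility from v$asm_diskgroup;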

However, while trying to use an EBS database (RDBMS version 12.1.0.2) with 12.2 Grid Infra, we discovered a project-stopper bug.

Because of this bug, rman, dbca and the other tools cannot write to the ASM diskgroups.

They all fail with ORA-15040, although all the ASM diskgroups are mounted and all the OS disk permissions are correctly set.

The problem is caused by "Bug 21626377 - 12.2_150812: DBCA FAILS TO CREATE 12102 DB OVER 12.2 GI/ASM".

The solution seems to be applying the latest Database Bundle Patch. (12.1.0.2.170117 DB BP or above)

The size of this bundle is almost 1.3 gigabytes, and putting it into the EBS install stage is a big customization for us and, of course, for the project. (we would need to repackage our stage and make a custom stage, because we would need to make rapidwiz install this BP during the EBS installation)

That's why we decided to reinstall the Grid Infra. Today, we will delete the current 12.2 Grid Infra installation and install a fresh 12.1 Grid Infra.
In short, if you are going to place your EBS database on ASM, or let's say, if you want your EBS database to be RAC, then go for a 12.1 Grid installation. (do not try to use 12.2 Grid.. at least for now..)

The following table shows the latest EBS/RDBMS/GRID component versions for a trouble-free EBS 12.2 installation.

Component                             Applicable Versions
Oracle E-Business Suite Release 12    12.2.4, 12.2.5, 12.2.6
Oracle Database                       12.1.0.2
Oracle Cluster Ready Services         12.1.0.2

Monday, July 17, 2017

RDBMS -- missing/zeroed redologs, ORA-00312 ORA-00338, _allow_resetlogs_corruption, RMAN Restore & Recovery

Recently, I dealt with a database startup problem.
It was critical, because the database was a production database.
All the redologs were erased; actually, they were zeroed.
At first, I thought that the issue might be caused by a wrong duplicate command, such as a duplicate command specified with NOFILENAMECHECK.

INFO: NOFILENAMECHECK prevents RMAN from checking whether the source database datafiles and online redo log files share the same names as the duplicated files. This option is necessary when you are creating a duplicate database on a different host that has the same disk configuration, directory structure, and filenames as the host of the source database. If duplicating a database on the same host as the source database, then make sure that NOFILENAMECHECK is not set.

However, later on, I learned the truth. The issue was caused by a wrong controlfile recreation operation done by a junior dba.

He was trying to clone a database, which was planned to run on the same database server as the source database. Unfortunately, he recreated the controlfile of this cloned environment pointing to the redologs of the production environment. So, he went too far with this..

When I connected to the production database, I saw the redologs were zeroed.

I tried to validate them using "alter system dump logfile '+REDO/redo0x.log' validate;" and saw that there were no redo records left in them.

At that point, I realized that we were in a critical situation.

There were no redo records in the redologs, and the database was complaining with ORA-00312: online log x thread x: '+REDO/logx.dbf' and ORA-00338: log X of thread X is more recent than control file.
As a result, the instance was terminated with opiodr aborting process unknown ospid (82519) as a result of ORA-1092.

ORA-00338 normally means -> the control file change sequence number in the log file is greater than the number in the control file. But another potential cause for this error is that the listed redo log is not valid (i.e. contains zeros) -- "and that was actually the case"..

Well... The production database could not be opened, as the recovery was requesting one of the zeroed redologs. (the cloned database had used these redologs and zeroed them; at this point, it was impossible to reuse them with the production database)

I also saw that the last redo was lost, but the previous one had been archived.

INFO: On a cooked filesystem like ext3/ext4, if you remove the redologs while the database is open, there are still some ways to get the redolog contents. (considering linux/unix doesn't delete the file contents if the file is open by some processes; using lsof and the /proc filesystem, you can get the data of those deleted files) -- it seems this is not possible with ASM at all.

Likewise, if your database is closed (closed with shutdown normal, not abort/not crashed) and you delete your redologs (or they are zeroed), then this is not a problem.
However, if the database is open and you shut it down using "shutdown abort", or if the database crashes somehow, then it means you have just lost all your redo.

Well.. The production database, including all its redolog files, was on ASM. So there was no way to get the before-images of the redolog files, and I decided to force a startup using _allow_resetlogs_corruption=true and startup force.

Well, after this forced startup, the database opened. EBS services started without errors and no problems were encountered, but as recommended by Oracle Support, we needed to rebuild the database after opening it with this kind of method. Rebuild means doing the following: (1) perform a full-database export, (2) create a brand new and separate database, and finally (3) import the recent export dump. When the database is opened this way, the data will be at the same point in time as the datafiles used.

Then I thought: "even if we do a full export and import and become stable, we will still have lost some data. We forced the startup, so we didn't apply the redo records.. (the redologs were already zeroed anyway)"

So, at that time, I also realized that even if we rebuilt the database at this stage, we would never be sure about its stability. The full export itself might encounter errors as well..

At the end of the day, the best option that came to my mind was restoring and recovering the database.

We had the backups (both full and incremental) + we had the backup of the archivelogs + we knew the log sequence number at the moment the instance terminated.
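(Even if we hadn't noted that sequence, the archived log history would have told us. A quick sketch, using the standard v$archived_log view; the query itself is mine:)

SQL> select thread#, max(sequence#) from v$archived_log group by thread#;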

So I told myself, "why don't we restore and recover it? The database is now open, but it is not stable.."

Anyways, "rman" is intelligent enough to use incremental backups during the recover operations (if they are available and relevant). Ofcourse, rman applies archivelogs automatically after restoring the database and rolling it forward with the level1 incremental backups.

We just issued a simple run {} block like the one below and waited.

RUN
{
SET UNTIL SEQUENCE 12538;
RESTORE DATABASE;
RECOVER DATABASE;
}

It was a Friday night, and we restored and recovered an EBS database. We opened it with minimum data loss, and luckily that data could be recreated by the business & application guys.

At the end of the day, the lesson learned here was -> "do not place production and clone environments on the same host".

However, the biggest lesson was "work on the production server only if you know what you are doing" and/or "do not work on production when you lose your focus".

Friday, July 14, 2017

About Erman Arslan's Oracle Blog Facebook Page

Today, I created a Facebook page for this blog.
Thanks to the followers who have liked it and started following it.
Until today, I was sharing my blog posts in various Facebook group pages manually.
From now on, everything I write here will be reflected to the Facebook page of this blog automatically. (including this blog post :)


Here is the Facebook page url: https://www.facebook.com/ermanarslansoracleblog/
I would appreciate it if you followed this Facebook page as well :)
As it's sometimes easier to use Facebook for checking the news, it may also be easier to use Facebook for glancing at the blog posts.

Friday, July 7, 2017

ODA- KVM Virtualization for ODA X6-2S/X6-2M/X6-2L !!

Good news! This is my 600th blog post :)

What is better than that? Let me tell you;

With the support of KVM, Oracle added virtualization functionality to the ODA X6-2S/X6-2M/X6-2L models. (before this, we needed to have an ODA X6-2HA to make use of the virtualization capabilities of ODA)

Oracle recently announced that, from now on, we can use virtual machines on ODA X6-2S, ODA X6-2M and ODA X6-2L. This means ODA X6-2 S/M/L environments can now be considered solution-in-a-box environments! This means "applications and databases, all in one box".




The virtualization technology that we will use with these machines is Linux KVM (Kernel-based Virtual Machine)

This new virtualization option comes with the new ODA release, 12.1.2.11.

The ODA 12.1.2.11 release is now available, and it promises the following new things for ODA X6-2S/M/L:
  • Support for Unbreakable Enterprise Kernel Release 4 (UEK R4) for Oracle Linux.
  • Oracle Database Bundle Patch 12.1.0.2.170418 and 11.2.0.4.170418
  • Support for Oracle KVM virtualization for Linux applications, enabling you to create isolation between your database and applications. Note that running an Oracle database in a guest VM is not supported in a KVM environment.
So it seems that, from now on, we will reimage our ODA X6-2 machines with the 12.1.2.11 ISO image and install the ODA 12.1.2.11 patch bundle on top of it, to have an ODA X6 ready for KVM-based virtualization.

Patch 23530609: ORACLE DATABASE APPLIANCE X6-2 S and X6-2 M 12.1.2.11.0 OS ISO IMAGE

Patch 26080577: ORACLE DATABASE APPLIANCE X6-2 S AND X6- M 12.1.2.11.0 PATCH BUNDLE DOWNLOAD

Currently, there are no instructions for creating KVM-based virtual machines in Oracle Support; however, you can find some blog posts on the Oracle Database Appliance blog.

https://blogs.oracle.com/oda/

Things like importing an OVA template into KVM, deploying guest VMs with an ISO, KVM networking on ODA (Oracle Database Appliance), and enabling KVM on ODA are already explained on the Oracle Database Appliance blog.

The following restrictions, however, should be noted as well:
  • Only Linux OS is supported on the guest VMs.
  • It is not supported to install an Oracle database on the guest VMs.
  • There is no capacity-on-demand for databases or applications running in the guest VMs.

Tuesday, July 4, 2017

Exadata-- Initial Deployment , OEDA and checkip script

These days, we are migrating several EBS instances to Exadata.
We (as Apps and Core DBAs) are involved in these projects, from the deployment to the end.


We are not cabling the Exadata, but we are usually there to check and to give the inputs.
In Exadata implementation and migration projects, everything starts with the initial deployment.
I mean, the deployment of Exadata itself.
The deployment of Exadata is usually straightforward, and the process we follow during the deployment makes us feel pretty professional.
There are two tools that we use for the initial deployment of Exadata.

The first one is "OEDA"( Oracle Exadata Deployment Assistant) and the second one is the "checkip script" that is generated by OEDA.


Using OEDA, we send Oracle almost all the inputs that are needed for the deployment.
Things like our scan name, our IP addresses, our DNS IPs, ASM diskgroup names and everything...
OEDA replaces the manual configuration forms that we used in the past for deploying the older versions of Exadata.

OEDA is a tool that can be run even on our Windows clients.
It is an easy-to-use tool, which is fully documented in the Oracle Exadata Database Machine Installation and Configuration Guide (https://docs.oracle.com/cd/E50790_01/doc/doc.121/e51950/configurator.htm).


After we give all the necessary inputs, OEDA creates the configuration files, which will be used by the Oracle field engineer during the deployment..

All the configuration files are created under the folder named "ExadataConfigurations".

After we run OEDA, we continue by executing the checkip script.
The checkip script can be found in the ExadataConfigurations folder that is created by OEDA during its run.

Checkip is the tool for ensuring that all the IPs given while running OEDA are available, and that all the DNS entries and relevant stuff like that are already configured in the client/customer environment. (the checkip script can be run on Windows as well..)

The following DNS entries must be configured before running the tool;

DNS entry for Management/Admin network
DNS entry for ILOM network
DNS entry for Public/Client network
DNS entry for VIP network
DNS entry for SCAN IPs

So, it is like a tool to crosscheck the inputs that were given in OEDA.
The checkip script produces an output file when we execute it.
In this output file, we need to see the prefix GOOD for every check, and we need to see the success message at the end of that output file (a quick grep sketch follows the sample below);

SUCCESS: 

 Successfully completed execution of step Validate Configuration File [elapsed Time [Elapsed = 95573 mS [1.0 minutes] Tue Jul 04 09:51:26 EEST 2017]]
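A quick way to scan that output for problems (the output file name below is just an example of mine; it varies per run):

[oracle@client ~]$ grep -v "GOOD" customer_name-checkip.out

Anything other than header lines in that output deserves a second look before sending the files to Oracle.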

At the end of the day, we send the output of the checkip script and the template files created under the ExadataConfigurations folder to Oracle, and wait for the deployment date.

So, in summary, there are 3+1 steps:

1. The customer fills in the OEDA configuration.

2. The customer runs the checkip script generated by the OEDA utility.

3. The customer sends Oracle the OEDA configuration files and the checkip script output for validation.

4. Once the configuration files have been validated and the checkip script output is found to be clean, Oracle schedules the HW and SW engineer visits. (this is done by Oracle)

One more thing;

In addition to the outputs of these 2 tools, there is one more file that is sent to Oracle for the Exadata deployment. It is named the Exadata Logistics template deployment form, and it is usually filled in easily.

In the Exadata Logistics template deployment form, we send information like the company name, work location, dress code, closest hotel, VPN access (if available) and the necessary contacts to Oracle.

Well.. This is all we need to do, as customer-site dbas and consultants, for the initial deployment of Exadata.

The real excitement, however, begins once the machine is deployed.

Once this instrument (Exadata) is deployed, we need to play it, and we need to play it well.
(The important thing is not the words, but the actions :) )