Saturday, August 26, 2017

EBS 12.2 -- An important point to be considered when building a Shared Application Filesystem using NFS

Nowadays, I often come across multi-apps-node EBS installations that are based on the shared Application Filesystem technology. I see that mostly NFS is utilized, rather than OCFS or any other shared filesystem, and that's why I wanted to write a few words about it.

First of all, a shared APPL_TOP using NFS is fully supported and documented. Oracle EBS has supported the shared Application Filesystem configuration since release 11i, and it is still supported in 12.2. It is easy to build and use.
Of course, there are some points to be considered:

  • The sharing of APPL_TOP in a Windows environment is not supported.
  • Unlike earlier releases, the INST_TOP is also placed on the shared location.
  • All application tiers sharing the file system should be running on the same operating system.
  • Shared application tier file systems should be in read-write mode on all application tiers.

In addition to the points above, the most important point of all, I think, is high availability.

I mean, when we build a shared Application Filesystem using NFS, we become dependent on the NFS shares; the availability of the whole apps tier is tied to the primary node's availability (its NFS service + its OS + its hardware).
That is, we export the NFS shares from the primary apps node and mount them on the secondary apps node.
So, if the primary apps node goes down (as a result of a system failure), our EBS application services go down on both nodes.
We can't start them on the surviving node unless we do a quick failover operation at the storage level.
That is, in case of a failure on the primary apps node, the storage admin should map the disks to the secondary node, and the filesystems should be mounted there.
By doing so, the secondary node can read and write the apps filesystem directly (without needing the NFS shares exported by the primary node -- the primary is down..).
Of course, a revert-back operation at the storage and OS levels should also be planned, to undo these changes once the primary node is back up.
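Just to make the normal (pre-failover) setup concrete, here is a minimal sketch of the NFS side of such a configuration. The hostnames (appsnode1, appsnode2) and the path (/u01/EBSapps) are hypothetical, so adapt them to your environment;

# /etc/exports on the primary apps node (appsnode1)
/u01/EBSapps appsnode2(rw,sync,no_root_squash)

# on the secondary apps node (appsnode2), mount the exported share
mount -t nfs appsnode1:/u01/EBSapps /u01/EBSapps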

So, for load balancing it is okay; but for high availability, this configuration named "Shared Application Filesystem using NFS" is a little inadequate. I mean, a node failure requires manual intervention, and there is downtime involved.

Well... If you need both high availability and load balancing on the EBS apps tier, my recommendation is to use a non-shared Application Filesystem, where each apps node has its own dedicated application filesystem. (You can also consider OCFS and a shared Application Filesystem.)

You can implement a multi-node configuration based on non-shared Application Filesystems by doing a single-node apps tier installation and then adding a node.

Here is the document to follow ->
"Cloning Oracle Applications Release 12 with Rapid Clone (Doc ID 406982.1)", section "Option 3: Adding a New Node to an Existing System"

Don't be misled by the name of the document above.. It says Cloning, yes.. But what we do for building a non-shared filesystem based multi-node EBS apps tier is exactly what is documented in that note, specifically in the section named "Option 3: Adding a New Node to an Existing System". A minimal sketch of the flow follows.
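To give an idea of the flow in that note, here is a minimal sketch. (The paths are illustrative; follow Doc ID 406982.1 for the authoritative steps.)

# on the existing (source) apps node: prepare the apps tier for cloning
cd $INST_TOP/admin/scripts
perl adpreclone.pl appsTier

# copy the application file system over to the new node (tar/scp/rsync, etc.)

# on the new (target) node: configure the copied file system
cd <COMMON_TOP>/clone/bin
perl adcfgclone.pl appsTier
# adcfgclone prompts for the target node's configuration; this is where
# the new node gets registered as an additional node of the existing system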

Wednesday, August 23, 2017

ODA X6-2 -- an important note about Migrating 11.2.0.4 databases to ODA X6

A few days ago, a follower of mine reported an issue on my forum (http://ermanarslan.blogspot.com.tr/p/forum.html).

He was complaining about an error encountered while he was trying to restore an 11.2.0.4 database to ODA X6-2.
As ODA X6-2 supports both 11.2.0.4 and 12C databases, this issue seemed unexpected at first glance.

However, when I saw the error reported by my follower, I understood that there was a missing piece of information about this kind of restore.

Here, the follower said they were trying to restore their 11.2.0.4 database to ODA X6, but they were getting "ORA-15204: database version 11.2.0.4.0 is incompatible with diskgroup DATA".

The whole error stack was as follows;

ORA-17502: ksfdcre:3 Failed to create file +DATA/PROD/datafile/a_txn_data04.dbf
ORA-15001: diskgroup "DATA" does not exist or is not mounted
ORA-15204: database version 11.2.0.4.0 is incompatible with diskgroup DATA

Well.. When I saw the stack, I understood that the restore operation was being attempted directly on the ASM diskgroups.
As the Grid Infrastructure deployed with ODA X6 is 12C, and as the database compatibility attributes (compatible.rdbms) of the ASM diskgroups that come with the ODA deployment are set to 12C, the error that the follower was getting was actually normal and expected!

"That is, if you want to restore a 11.2.0.4 database to ODA, you don't use ASM diskgroups directly..
So you use ACFS filesystem rather than ASM diskgroups.."
ODA's Appliance Manager does it with the same way.
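By the way, you can verify this yourself by querying the diskgroup compatibility attributes on the ASM instance; a quick sketch;

SQL> select name, compatibility, database_compatibility from v$asm_diskgroup;

If DATABASE_COMPATIBILITY (the compatible.rdbms attribute) of a diskgroup is 12.x, an 11.2.0.4 database cannot create its files directly in that diskgroup.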

Here is what I suggested to my follower:
  • Use ACFS in ODA X6-2. 
  • Create a dummy 11.2.0.4 database from Appliance Manager .. This action will create an ACFS filesystem for you. 
  • Delete that dummy database using standard approaches and use the created ACFS filesystem for your restore. 
Why did I want him to create a dummy database using Appliance Manager?
Because I wanted his restored database to be visible in Appliance Manager.. I wanted him to have a clean environment :)
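For reference, on an oakcli-managed ODA, the creation step looks roughly like this. (A sketch only; the database name DUMMYDB is hypothetical, and the odacli-based ODA models have equivalent odacli commands.)

# create a dummy 11.2.0.4 database through the Appliance Manager CLI;
# this also provisions the ACFS filesystem that will host the datafiles
oakcli create database -db DUMMYDB

The dummy database itself is then deleted with standard tools (as described above), so the provisioned ACFS filesystem can be reused for the restore.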

Well, this is the tip of the day, but it is important, right?
Anyways, use ACFS and know the following (a couple of handy commands follow the list);

  • ACFS uses ASM in the backend; it is a filesystem built on top of ASM.
  • ACFS is much more versatile than plain ASM diskgroups for storing files.
  • For instance; ASM has a limit of 1 million files per diskgroup. ACFS, on the other hand, supports 2^40 (about 1 trillion) files in a file system.
  • ACFS also has snapshot capability.
  • Oracle uses ACFS in its new Engineered Systems, but ASM is still required, since ACFS is built on top of it. So nothing is useless in engineered systems :)
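By the way, here are a couple of handy commands while working with ACFS. (A quick sketch; the mount point below is hypothetical.)

# list the ACFS filesystems and their properties
/sbin/acfsutil info fs

# take a read-only snapshot of an ACFS filesystem
/sbin/acfsutil snap create before_restore /u02/app/oracle/oradata/datastore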

Monday, August 21, 2017

RDBMS -- a useful Datapump tip, Attaching to a running Job

Sometimes during database migrations, especially big-sized ones, we may need to change some of the configuration of our running Datapump jobs.
While stopping/cancelling the job and executing the Datapump command again can be a solution for this, it can be costly in environments where a full database export/import takes several hours.

In these kinds of situations, the Datapump argument named ATTACH comes to our help.
I felt the need to write this because we recently used it to increase the parallelism of a running Datapump export job, and it performed well.

This attach feature is also useful when we start a job at one location and need to check on it at a later time from a different location.

Anyways, let's take a look at what we did for increasing the parallelism of our export job during an Exadata migration.

At first, we predicted the optimal setting for the Datapump parallelism to be 8. However, when we saw that the machine (in this case, an Exadata X6-2 1/8) still had plenty of I/O and CPU resources even though 8 parallel expdp workers were running, we increased the parallelism to 24. We did it using the ATTACH argument, without terminating and re-executing our Datapump command.

It was as simple as follows;
  • We found our job using "select * from dba_datapump_jobs;"
  • We attached to our running job using "expdp user/pass attach=JOB_NAME"
  • We increased the parallelism to 24 using "Export> PARALLEL=24"
It is pretty handy, isn't it? Actually, we can do a lot more using this ATTACH argument.. Here is the list of commands available once we attach to a running Datapump job (along with their definitions); a sample session follows the list.
  • ADD_FILE: Add additional dump files.
  • CONTINUE_CLIENT: Exit interactive mode and enter logging mode.
  • EXIT_CLIENT: Stop the export client session, but leave the job running.
  • FILESIZE: Redefine the default size to be used for any subsequent dump files.
  • HELP: Display a summary of available commands.
  • KILL_JOB: Detach all currently attached client sessions and terminate the current job.
  • PARALLEL: Increase or decrease the number of active worker processes for the current job. This command is valid only in the Enterprise Edition of Oracle Database 11g.
  • START_JOB: Restart a stopped job to which you are attached.
  • STATUS: Display detailed status for the current job and/or set status interval.
  • STOP_JOB: Stop the current job for later restart.
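Here is what a typical attach session looks like. (A sketch only; the credentials and the job name SYS_EXPORT_FULL_01 are placeholders.)

$ expdp system/password attach=SYS_EXPORT_FULL_01

Export> STATUS
Export> PARALLEL=24
Export> CONTINUE_CLIENT

STATUS displays the job state and worker activity, PARALLEL=24 changes the degree of parallelism on the fly, and CONTINUE_CLIENT switches the session back to logging mode.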

Wednesday, August 16, 2017

RDBMS -- FLASHBACK FEATURE -- a DEMO, Guaranteed Restore Points with or without Flashback Logging + Prereqs and Restrictions

Today, I will give you some key information about Oracle's Flashback technology. So, this post will be completely related to the database tier.
Flashback is not a new technology, I know.. It was introduced in Oracle Database 10.1; however, it still saves the day sometimes.

It also has some important restrictions, and basically it can be used in two modes: 1) the things we can do when flashback logging is enabled, 2) the things we can do when flashback logging is disabled.
Also, the disk consumption and the performance overhead change according to the way we choose to use the Flashback Database feature.

The thing that made me write this post is an upcoming upgrade operation (an applications upgrade) on a mission-critical Oracle RAC database.

The database is huge, so it is not efficient to back it up just before the upgrade (both from an RMAN perspective and a storage perspective).
In addition, even if we could create a backup, the restore time is estimated to be very long, and that cannot be accepted by the business.

So in this situation, what do we recommend? The Flashback Database feature.
The business and application guys just want the ability to restore the database to a specific point before the upgrade.. (in case of a failure)
More specifically: a guaranteed restore point "without" enabling flashback logging. (Note that flashback logging is enabled by the command -> alter database flashback on)

These recommendations are based on the following facts;
  • The Flashback Database feature is not only used for traditional recovery; it can also be used when we need a quick rewind. (scenarios like database upgrades, application deployments, and testing, where test databases must be quickly created and re-created)
  • A guaranteed restore point ensures that you can use Flashback Database to rewind a database to its state at the restore point SCN, even if the generation of flashback logs is disabled.
  • Flashback Database is much faster than point-in-time recovery because it does not require restoring datafiles from backup and requires applying fewer changes from the archived redo logs.
I will demonstrate the things I just mentioned above; but first, let's see the prereqs and restrictions of the Flashback feature, as well as those 2 different configurations for benefiting from it -- "GUARANTEED RESTORE POINT without Flashback Logging" vs "FLASHBACK LOGGING (or GUARANTEED RESTORE POINT with Flashback Logging)".
(I categorized them into 2, but they could be split into more categories if desired/needed.)

PREREQS, RESTRICTIONS, AND THE 2 TYPES OF USAGE OF THE FLASHBACK FEATURE

Let's start with the prerequisites.. I will just give a quick list of prereqs here.. (a quick verification snippet follows the list)
  • The database should be in archivelog mode.
  • We must use the FRA (db_recovery_file_dest and db_recovery_file_dest_size), because the flashback logs (.flb files) are created there.
  • If needed, flashback logging must be enabled using the "alter database flashback on" command, while the database is in mount mode.
  • Our database must be Enterprise Edition.
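Here is a minimal sketch for verifying these prereqs (run as a DBA user);

SQL> select log_mode, flashback_on from v$database;
SQL> show parameter db_recovery_file_dest
SQL> select banner from v$version;

The last query is just to confirm the edition (the banner should say Enterprise Edition).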
Okay.. Let's continue with the Restrictions part;
  • The Flashback feature cannot be used with Standard Edition Oracle Databases.
I mean, flashback logs can be created (I tested this with a guaranteed restore point), but we just can't flash the database back to the restore points..

SQL> flashback database to restore point ERMAN_TEST;
flashback database to restore point ERMAN_TEST
*
ERROR at line 1:
ORA-00439: feature not enabled: Flashback Database

Similarly, if we try to enable flashback logging on a Standard Edition Oracle Database, we end up with this ->

In mount mode;
SQL> alter database flashback on;
alter database flashback on
*
ERROR at line 1:
ORA-00439: feature not enabled: Flashback Database
  • If we don't enable flashback logging, then the first guaranteed restore point must be created in the mount state. Otherwise, we get the following error;
SQL>CREATE RESTORE POINT ERMAN_TEST GUARANTEE FLASHBACK DATABASE
*
ERROR at line 1:
ORA-38784: Cannot create restore point 'ERMAN_TEST'.
ORA-38787: Creating the first guaranteed restore point requires mount mode when
flashback database is off.

Lastly, here is a list of restrictions that I want to give;
  • We cannot flash back to an SCN ahead of the current SCN.
  • When we rewind the database using flashback, it must be opened with RESETLOGS.
  • We just cannot use Flashback Database to undo a shrink datafile operation. (shrinking a datafile or dropping a tablespace can prevent flashing back the affected datafiles)
  • If we use Flashback Database to flash our database back to a target time at which a NOLOGGING operation was in progress, block corruption is likely in the database objects and datafiles affected by the NOLOGGING operation.
  • In order to flash back the database, it must be in mount state; otherwise we get the following error -> ORA-38757: Database must be mounted and not open to FLASHBACK.

GUARANTEED RESTORE POINT without Enabling Flashback Logging vs FLASHBACK LOGGING (or GUARANTEED RESTORE POINT with Enabling Flashback Logging):

In this subsection, I will give you the difference between a flashback-logging-enabled Oracle Database and a flashback-logging-disabled one.

First of all, we can use a guaranteed restore point in either mode: both when flashback logging is enabled and when it is disabled.

The main difference is that without flashback logging, each modified block is saved only once: its before-image as of the restore point.

With flashback logging, on the other hand, modified blocks are saved continuously, on every modification. (Note again that flashback logging is enabled using "alter database flashback on".)

This is because we enable flashback logging in order to be able to flash back to any SCN within our flashback retention.

But if we don't enable flashback logging and just create a guaranteed restore point, we can only flash back to the SCN of the moment when we created that guaranteed restore point.

Consider a scenario where the business and application guys want to do an upgrade, and they want us to have the ability to restore the database to a specific point before the upgrade (in case their upgrade fails); then we use a guaranteed restore point without flashback logging. (Of course we could also enable flashback logging, but it is unnecessary in this case.)

But if the business and application guys want us to have the ability to restore our database to any point in time between the start and the end of the upgrade, then we enable flashback logging, create one or more guaranteed restore points, and configure our flashback retention policy, as shown in the sketch below.
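For this second scenario, here is a minimal sketch of enabling flashback logging together with a retention target. (1440 minutes, i.e. 1 day, is just an example value, and the restore point name BEFORE_UPGRADE is hypothetical.)

SQL> alter system set db_flashback_retention_target=1440;
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database flashback on;
SQL> alter database open;
SQL> create restore point BEFORE_UPGRADE guarantee flashback database;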

One last important piece of info before continuing;

Each update generates redo and undo, and each block in the undo tablespace that is used for the first time also requires flashback data to be written. With this in mind, we can say that if our database is 100 GB, then with only a guaranteed restore point in place, the flashback data should potentially be at most 100 GB.
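To monitor how much flashback data a guaranteed restore point is actually holding, we can query v$restore_point; here is a quick sketch;

SQL> select name, guarantee_flashback_database, storage_size/1024/1024 MB from v$restore_point;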

DEMO:

Let's do a demo and see it working :)

In this DEMO, I'm doing the things below;
  • Create a table named T.
  • Load 100000 rows into table T using a loop.
  • Create a guaranteed restore point in mount mode.
  • Update those 100000 rows using a loop.
  • Flash the database back to the guaranteed restore point.
  • Drop the restore point.
I'm creating the restore point just before the table update, because I want to get back to the point where the update has not been executed yet.
  • First, I start my database and create the table T.
SQL> startup
ORACLE instance started.
Database mounted.
Database opened.

SQL> create table t ( x int, y char(50) );

Table created.
  • Then I load 100000 rows into the table T.
SQL> begin
for i in 1 .. 100000
loop
insert into t values ( i, 'x' );
end loop;
commit;
end;
/

PL/SQL procedure successfully completed.
SQL> exit
  • I check my FRA to see if there are any flashback logs created at this point. As I didn't enable flashback logging and haven't created any restore point yet, I don't expect to see any flashback logs there..
[oracle@prelive flashback]$ ls -lrth
total 0
  • Then I put my database into mount mode and create my guaranteed restore point as follows; (remember: creating the first guaranteed restore point requires mount mode when flashback database is off)
SQL> shu immediate;
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.
Database mounted.

SQL> CREATE RESTORE POINT ERMAN_TEST GUARANTEE FLASHBACK DATABASE;
Restore point created.
  • Now I check to see whether any flashback logs are created, and I see a file has been initialized..
SQL> !ls -lrt
total 15268
-rw-rw---- 1 oracle dba 15613952 Aug 15 15:11 o1_mf_ds5s89wn_.flb
  • Next, I update all the rows in table T as follows.. I'm doing this to show you the flashback logs generated as a result of this update.
SQL> alter database open; (remember, my db was in mount mode)
Database altered.

SQL> begin
for i in 1 .. 100000
loop
update t set y='y' where x=i;
commit;
end loop;
end;
/
PL/SQL procedure successfully completed.
  • Now I check the generated flashback logs both from DB and from the filesystem.
SQL> select flashback_size/1024/1024 MB from v$flashback_database_log;

MB
----------
44.6484375

SQL> !ls -lrth
total 45M
-rw-rw---- 1 oracle dba 15M Aug 15 15:15 o1_mf_ds5s89wn_.flb
-rw-rw---- 1 oracle dba 15M Aug 15 15:19 o1_mf_ds5shl52_.flb
-rw-rw---- 1 oracle dba 15M Aug 15 16:12 o1_mf_ds5spgkt_.flb

As you see, about 45 MB of flashback logs were created.. (the table size was approximately 10-15 MB).. This is because of the modified blocks + the undo data..
  • Now, I put my database to the mount mode and flashback my database to my guaranteed restore point.
SQL> shu immediate;
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.
Database mounted.

SQL> flashback database to restore point ERMAN_TEST;

Flashback complete.
  • I successfully issued my flashback command, and now it is time to open the database; but the database must be opened using RESETLOGS.. (Otherwise, I get "ORA-01589: must use RESETLOGS or NORESETLOGS option for database open")
SQL> alter database open resetlogs;
Database altered.
  • Now I check table T to see if my update has been reverted.
SQL> select * from t where y='y';
no rows selected

Yes. As if it never happened :)

  • Lastly, I drop my restore point and see that the relevant flashback logs are cleared from the Flash Recovery Area.
SQL> drop restore point ERMAN_TEST;
Restore point dropped.

SQL> exit

[oracle@prelive flashback]$ ls -lrt
total 0 ---CLEAN!

That's all :) A nice feature, and it works like a charm :)