Monday, March 9, 2015

EXADATA -- EBS 11i -- Migrating EBS 11i to EXADATA (12C RDBMS environment)

This was my 6th EBS on Exadata implementation.
This time, the source system's application tier was 11i (11.5.10.2) running on 32-bit Red Hat Linux, and the target database tier was 12c (12.1.0.2) running on 64-bit Linux.
This combination is certified.

The source database was Oracle RDBMS 10.2.0.4 64-bit. Its size was approximately 800 GB and it was a heavily loaded one (hugepages were enabled, the SGA was 90 GB, and so on).

The target database tier was an Exadata X3-2 Quarter Rack (High Performance), equipped with 12c RDBMS software and Grid Infrastructure homes.
The target application tier was again 32-bit Red Hat Linux, as the 11i apps tier code is 32-bit.

The target environment was certified.
Here is the current certification matrix for EBS 11i and Oracle Databases.


The migration was done for an EBS on Exadata POC; before making the purchase, the customer needed to see the performance improvement in standard EBS screens, processes, Discoverer reports and batch processes.

The documents used for this migration were as follows:

Export/Import Process for EBS 11i on RDBMS 12c (Doc ID 1585257.1)
Migrating Oracle E-Business Suite to Oracle Exadata Database Machine Using Oracle Data Pump Oracle Maximum Availability Architecture White Paper February 2013

The main document to follow was "Export/Import Process for EBS 11i on RDBMS 12c (Doc ID 1585257.1)", and the supplementary document was "Migrating Oracle E-Business Suite to Oracle Exadata Database Machine Using Oracle Data Pump (Oracle Maximum Availability Architecture)", which is actually intended for EBS-RAC configurations.

Note that there were other documents (such as EBS 11i-12c interoperability, the EBS 11i-12c upgrade guide and so on) referenced by these two main documents.

Okay, before we proceed, I need to say that this POC was a hard one, because some preparation steps were missing in the source system.
I mean, the autoconfig patches and 12c interop patches were not applied.
We realized this situation in the middle of the migration work, and we didn't want to go back, apply those patches and export the source database again.
Rather than doing the same things again, I chose to continue the migration although the preparations were missing.

So it was a real challenge, but I can say that it was worth it.

There were a lot of troubles on the way, but at the end of the day, if you survive all of those troubles, you really feel like you have accomplished something big :)

Note that I don't recommend such an operation, even for POCs.

I will not give step-by-step instructions this time.
Instead, I will give the problems and solutions which may be encountered on the way;

MIGRATING EBS 11i to EXADATA 12C RAC database -- without applying 12c interop and autoconfig  patches in the source system preparation phase.. ->

The plan in this migration project was to perform the database migration using expdp/impdp, and then run autoconfig on both the db nodes and the application node. (Note that, in this scenario, the apps tier autoconfig requires manual changes in the context file.)
After running autoconfig, once we were able to connect our target apps tier to its db tier, we planned to apply the autoconfig and 12c interop patches to make the system supported again.
If we could achieve this, the next thing in our plan was to activate the second db node and run postclone on the application tier to correct the things that we did not handle while running autoconfig with manual context file changes (there might be some things that postclone does and autoconfig does not).

Let's take a look at the problems that we encountered.

1)
Patch 20204035 could not be applied to the Exadata RDBMS Home 12.1.0.2.
It had a conflict with a patch belonging to a bundle patch (20075921).
There is probably a gap in the Oracle document here, because we could not roll back the bundle, as it brought a lot of bug fixes. So we just did not apply patch 20204035.

2)
While using opatch lsinventory to check the patches in the RDBMS home, we encountered an inventory lock problem.
We fixed it by removing the patch_locked file located under $ORACLE_HOME/.patch_storage.
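A minimal sketch of the workaround (assuming no other opatch/OUI session is really running):

ps -ef | grep -i opatch                      # make sure nothing is actually patching
rm $ORACLE_HOME/.patch_storage/patch_locked  # remove the stale lock file
$ORACLE_HOME/OPatch/opatch lsinventory       # retry the inventory listing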

3)
As I mentioned above, the interop patch 16289505 and the autoconfig patches were not applied, which made this operation more interesting :)
The reason was urgency. This time, we did not prepare the source system ourselves; we actually gave that work to the customer, and later we realized that the customer had left out the source system patching steps.
When I came to the customer environment and realized that the patches in the source system preparation phase had not been applied, we were already in the middle of importing the EBS database to Exadata, so I decided to give it a try :)

I decided to continue without the autoconfig and interop patches, and to apply them at the end.
I mean, after the postclone operation -- if I could manage to reach the postclone steps :)

4)
We installed Oracle products from the 12c Database Examples CD into the Oracle RDBMS homes of the database nodes. Although I did not encounter any problems in this step, I had to mention it, as this Examples CD requirement can be confusing sometimes.

5)
We copied the init.ora from the source system to target node 1 and applied the 12cR1 section of Doc ID 216205.1 to it -- added some parameters, removed some parameters, and so on.

The new parameter PGA_AGGREGATE_LIMIT attracted my attention, as it comes with 12c and can be used to limit PGA usage.

Default value: By default, PGA_AGGREGATE_LIMIT is set to the greater of 2 GB, 200% of PGA_AGGREGATE_TARGET, and 3 MB times the PROCESSES parameter. It will be set below 200% of PGA_AGGREGATE_TARGET if that is larger than 90% of the physical memory size minus the total SGA size, but not below 100% of PGA_AGGREGATE_TARGET.
Modifiable: ALTER SYSTEM
Range of values: Oracle recommends that you do not set PGA_AGGREGATE_LIMIT below its default value. PGA_AGGREGATE_LIMIT cannot be set below its default value except in a text initialization parameter file (pfile) or binary server parameter file (spfile). If a value of 0 is specified, it means there is no limit to the aggregate PGA memory consumed by the instance.
Basic: No
Description: PGA_AGGREGATE_LIMIT specifies a limit on the aggregate PGA memory consumed by the instance. There is no difference in behavior between PGA_AGGREGATE_LIMIT being explicitly set or being set to the default.
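A small sketch for checking and adjusting it (the 20G value below is just a hypothetical example, not a recommendation):

sqlplus "/as sysdba"
SQL> show parameter pga_aggregate
SQL> alter system set pga_aggregate_limit=20G scope=both sid='*';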

6)
While modifying aucrdb.sql -- actually while modifying the datafile and redo log file paths -- we can use vi find and replace. A command like the following may be used:
:%s/ \/dbf/+DATAC1\/TEST70\/datafile/g
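For example (the datafile name is hypothetical), a line in aucrdb.sql referencing a path like:

/dbf/a_txn_data01.dbf

would become:

+DATAC1/TEST70/datafile/a_txn_data01.dbf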

7)
While reviewing the aucrdb log file, we may see some errors related to the default temporary tablespace. We can ignore them.
Anyway, after restarting our newly created db, we ran the database preparation scripts.
Note that the default user/password information is as follows:
sys/change_on_install, system/manager

8)
After creating the db and running the preparation scripts, it was good to check dba_registry.
At this point, we prepared a temporary listener to enable our DBAs to connect to the database directly from their PCs and make our checks with our favorite tool (like Toad).
After configuring listener.ora, starting the listener and setting local_listener to point to this temporary listener, the client could connect to the database:
alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=ermanhost)(PORT=1591))' scope=memory;
We saw CATALOG and CATPROC as INVALID in dba_registry.
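The registry check itself is a simple query:

SQL> select comp_id, comp_name, status from dba_registry order by comp_id;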

We could directly validate CATALOG using:
sqlplus "/as sysdba"
SQL> exec dbms_registry_sys.validate_catalog();
PL/SQL procedure successfully completed.

For validating CATPROC, we just compiled the invalid objects and used the validate_catproc procedure:
cd $ORACLE_HOME/rdbms/admin
sqlplus "/as sysdba"
SQL>@utlrp.sql
SQL> exec dbms_registry_sys.validate_catproc();

9)

We could not disable the automatic gathering of statistics the old way, as GATHER_STATS_JOB does not exist in 12c:

declare
*
ERROR at line 1:
ORA-27476: "SYS"."GATHER_STATS_JOB" does not exist
ORA-06512: at "SYS.DBMS_ISCHED", line 4567
ORA-06512: at "SYS.DBMS_SCHEDULER", line 2920
ORA-06512: at line 2
ORA-06512: at line 23

743507.1 "Why Has the GATHER_STATS_JOB been removed in 11g?"

10)
At this point, it was good to take a backup.
As our target db was in noarchivelog mode, we mounted the database and took our backup:
rman target /
backup database format "/directory/TEST70_%U";

11)
We used a parallel import, via the impdp PARALLEL parameter.
Note that index creation in impdp is actually serial at the Data Pump worker level, but parallel inside the database.
That is, only one index is created at a time, but that one index is created using parallel slaves (like CREATE INDEX ... PARALLEL 8).
Also, even when the PARALLEL argument is specified, Oracle takes the database's parallelism settings into account; it considers values such as parallel_max_servers and parallel_servers_target and creates our indexes accordingly.
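For reference, a hedged sketch of such an import invocation (the directory object, dump file names and parallel degree are hypothetical):

impdp system/****** directory=ebs_dump_dir dumpfile=ebs_full_%U.dmp full=y parallel=8 logfile=ebs_imp.log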


12)
While impdp is working, it may hit resumable errors sometimes. These resumable errors are generally encountered because of insufficient space in the corresponding tablespaces.
When impdp gets these kinds of errors, it waits, and when we enlarge the problematic tablespaces, impdp continues automatically. But there are also some errors which are not resumable, for example the insufficient-privileges errors encountered during the creation of indexes.

If such an error is encountered, impdp skips the problematic actions. Thus, we need to take those actions manually after the import is finished; for example, we need to create the indexes using a database user which has the needed grants.
In order to get rid of this risk, we can grant the DBA privilege to the custom schemas just after starting our impdp process. Of course, after the impdp is finished, we need to revoke those privileges.
I say custom schemas because the standard schemas are export/import compatible; these kinds of privilege errors are caused by the custom schemas most of the time.
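A sketch of the temporary grant/revoke (XXCUST is a hypothetical custom schema name):

SQL> grant dba to XXCUST;
-- run/continue the import, then:
SQL> revoke dba from XXCUST;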

13)
We may see a suffix problem in the DB links.
If that is the case, we just alter our global name and set it to the db name with the domain suffix.
For example:
alter database rename global_name to "TEST70.ermanhost.com.tr";

14)
While importing the statistics, impdp had a hard time; it could take a long time.
That's why we cancelled the import using the impdp ATTACH and KILL_JOB commands.
Actually, we first tried to skip the stats import using START_JOB=SKIP_CURRENT, but it did not work, because SKIP_CURRENT in impdp is for the sub-jobs, not the main jobs.
I mean, you can skip the import of one table, but you can't skip the whole TABLE class of actions.
As importing statistics is a main job which does not have sub-jobs, it cannot be skipped using the SKIP_CURRENT option.
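For reference, killing a running Data Pump job looks roughly like this (the job name is hypothetical; the real one can be found in dba_datapump_jobs):

SQL> select job_name, state from dba_datapump_jobs;

impdp system/****** attach=SYS_IMPORT_FULL_01
Import> KILL_JOB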

Anyway, we killed the import job and restored the db.
We restored the db to its initial form, because there are some other tasks which impdp normally does after importing the statistics; as we killed the impdp during the stats import, those other tasks were left missing.

Before dropping the database for the restore, we backed up the current control file (the default backup location is $ORACLE_HOME/dbs).
Then we dropped the db using rman.
To restore the db, we first restored the control file by specifying its path to rman, and then restored the database.
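A rough sketch of that restore flow (paths are hypothetical; the backup was a consistent one taken in mount mode, so only a no-redo recovery step was expected):

rman target /
RMAN> startup nomount;
RMAN> restore controlfile from '/directory/controlfile_backup.ctl';
RMAN> alter database mount;
RMAN> restore database;
RMAN> recover database noredo;
RMAN> alter database open resetlogs;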

After restoring the db, we restarted the import. This time we granted the DBA privilege to the custom schemas and enlarged the tablespaces beforehand.

15)
Because the interop and autoconfig patches were missing, we encountered a timezone problem while running autoconfig on the target db nodes.
In order to fix these errors, we needed to modify the template files and the 'if' conditions for db121, along the lines of:

if db121 -> use this timezone file
16)
During the postclone of the appsTier, we encountered problems again. They were caused by the missing interop and autoconfig patches, as you may guess :)
That's why we ran autoconfig rather than postclone.

However, autoconfig could not connect to the db. The cause was that autoconfig was trying to connect to the database using the SID; its connection format was not right for the 12c RAC listeners. To solve this, I created a temporary listener on db node 1 that matched the format autoconfig uses, and made autoconfig connect to the db node through this listener.

Something like this:

listener_test70 =
  (description =
    (address = (protocol = tcp)(host = exadb01-vip)(port = 1555)))

sid_list_listener_test70 =
  (sid_list =
    (sid_desc =
      (oracle_home = /u02/app/oracle/product/12.1.0/dbhome_2)
      (sid_name = test701)))

This time, autoconfig encountered "no matching authentication protocol" errors.
I resolved this by putting SQLNET.ALLOWED_LOGON_VERSION=8 into the sqlnet.ora file of the Oracle Home that the listener runs from, and the problem went away.

We also granted inherit privileges to APPS to get past the privilege errors.

About SQLNET.ALLOWED_LOGON_VERSION:


There's a common misconception about the sqlnet.ora parameter SQLNET.ALLOWED_LOGON_VERSION and how it affects the version of clients that are allowed to connect to a database for specific values of this parameter.

The parameter actually specifies the AUTHENTICATION PROTOCOL (for example SHA-1) that a client is allowed to use, NOT the actual VERSION of that client.

Example: SQLNET.ALLOWED_LOGON_VERSION=10

Even though the parameter value implies Oracle version 10 the internal check is really against the authentication protocol 'SHA-1'.

In earlier Oracle versions this was a one-to-one relation, i.e. a certain Oracle client version used a specific authentication protocol.
Starting with Oracle 10g, this is no longer a one-to-one relation: Oracle 10g and 11g both use the SHA-1 protocol.
Oracle 12c uses the SHA-2 protocol.


This means that there's no way to prevent 10g client connections and allowing only 11g client connections through the use of this parameter, because both use the same authentication protocol SHA1. This also means that both client versions are 'equally secure' - there's no gain in security by trying to prevent 10g clients alone.
Important note:

The SQLNET.ALLOWED_LOGON_VERSION parameter is deprecated in Oracle Database 12c.
This parameter has been replaced with two new Oracle Net Services parameters:

SQLNET.ALLOWED_LOGON_VERSION_SERVER
SQLNET.ALLOWED_LOGON_VERSION_CLIENT



Okay, it is actually not a big thing; it is written in the interop document, too :)
Start the new database listener (conditional)
If the Oracle Net listener for the database instance in the new Oracle home has not been started, you must start it now. Since AutoConfig has not yet been implemented, start the listener with the lsnrctl executable (UNIX/Linux) or Services (Windows). See the Oracle Database Net Services Administrator's Guide, 12c Release 1 (12.1) for more information.
Attention
Set the TNS_ADMIN environment variable to the directory where you created your listener.ora and tnsnames.ora files.
Attention
Ensure that your sqlnet_ifile.ora has the line:
SQLNET.ALLOWED_LOGON_VERSION_SERVER = 8

17)

The flexfields named JG_AP_INVOICES and JG_FA_BOOKS could not be compiled.

Here is the explanation:

The JG_FA_BOOKS descriptive flexfield is needed only for the Latin American Localizations (JL). It is not needed when these localizations are not used. Therefore, it can remain invalid as long as the Latin American Localizations (JL) are not used.

JG_AP_INVOICES has problems in its declaration; if it is needed, those problems should be corrected.


Actually, these flexfields are not used and cannot be compiled in the source environment either.
That's why these problems were ignored.


18) 
After running autoconfig and connecting the application tier to the db node properly, we applied the autoconfig and 12c interop patches.

The patches 9535311 and 16289505 ...


Patch 16289505 hit some errors:
sqlplus -s APPS/***** @/appydk/TEST65/test65appl/admin/TEST65/out/p001invok.sql checkfile(115.8=120.4):fnd:patch/115/sql:AFINSTLS.pls
/bin/sh: -c: line 0: syntax error near unexpected token `('
/bin/sh: -c: line 0: `sqlplus -s APPS/APPSPWEXA @/appydk/TEST65/test65appl/admin/TEST65/out/p001invok.sql
checkfile(115.8=120.4):fnd:patch/115/sql:AFINSTLS.pls'


This error was documented, and patch 12539637 solved it:
16289505 Failed With AFINSTLS.pls Error (Doc ID 1966820.1)
Please check if you have this patch applied:
12539637 : REQUEST NOT TO ERROR 11I FILES THAT HAVE CHECKFILE EQUIVALENCE SYNTAX

If NOT applied on your instance, please apply it first and then reapply patch 16289505.

---> At this point we became SUPPORTED again :)


19)
After the upgrade, invalid OLAPSYS objects may be dropped; see:
Invalid OLAPSYS Objects After Upgrading TO 12C (Doc ID 1593666.1)
If needed, OLAP can be removed using:
How To Find Out If OLAP Is Being Used (Doc ID 739032.1)

20)
After getting everything right, we ran a postclone just to be sure that it was working properly.
Then we changed the twotask context file parameters and ran autoconfig to activate the changes.
We configured the concurrent manager to run on db node 1, and everything else to be load balanced with failover.

21)
We gathered statistics using FND_STATS.
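A hedged sketch of a manual run (the 'ALL' argument is meant to cover all EBS schemas; normally this is submitted through the Gather Schema Statistics concurrent program instead):

sqlplus apps/*****
SQL> exec fnd_stats.gather_schema_stats('ALL');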

22) 
We did not create any ACLs, because we saw that our patched appsTier had already created one --
an ACL with HOST=*.
There were ORA-12504 errors on the non-Oracle DB links (DB links to MS SQL Server and DB2); see:
ORA-12504 When Using (HOSTNAME) Method For 11G Client/Database (Doc ID 556996.1)
On the other hand, the HOSTNAME.DEFAULT_SERVICE_IS_HOST=1 parameter mentioned in the document was not the solution.

23)
At the end, we opened the system and tried to log in.

Unfortunately, we could not...

When using the submit button, the login page was throwing:
Oracle error &ERRNO: &REASON has been detected in &ROUTINE.

We analyzed it and found that there was an invalid custom trigger on the FND_NODES table. Disabling it fixed the error.
Event Alerts on FND_USER, or FND Login/User Type Tables Cause Problems (Doc ID 353602.1)
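A hedged sketch of how such a trigger can be located and disabled (the trigger name is hypothetical):

SQL> select owner, trigger_name, status from dba_triggers where table_name = 'FND_NODES';
SQL> alter trigger APPS.XX_CUSTOM_NODES_TRG disable;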

After this step, the migration was finished. We took a deep breath and delivered the system for user acceptance and performance testing.

At the end of the day, I can say that those interop patches really need to be in place before the migration.
Following the Oracle documents for accomplishing such big tasks is not the only way, but it is clearly the best way. If you don't follow them, you become unsupported and you need to support the system yourself :)
Even if your system becomes unsupported along the way, you may find a way to become supported again, but note that you will be on your own while you are unsupported.

Anyway, this POC was a successful one. The migration I described in this post was hard but successful.
On the Exadata side, we configured hugepages for the SGA, configured the database to use hugepages only, optimized the parallelization levels, kept some tables in the flash cache, and that's all (a small sketch of these settings follows below).
The tests were successful, and it is needless to say that the performance of Exadata was very good.
Without tuning almost anything, we saw a 10x improvement in most of the batch jobs and reports.
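A hedged sketch of the hugepages-related settings mentioned above (the page count is hypothetical and must be calculated from the SGA size and the 2 MB hugepage size):

# /etc/sysctl.conf on the database nodes
vm.nr_hugepages = 46500

# init parameter so the instance refuses to start unless the SGA fits into hugepages
SQL> alter system set use_large_pages=ONLY scope=spfile sid='*';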

In conclusion;
I can say that EBS 11i works well with the 12c database.
The interop and autoconfig patches are what make EBS 11i work with 12c; the effects of these patches can be seen in the behaviour/capabilities of the AD tools, autoconfig and the general application connections.
Exadata X3-2 with 12c database and grid software increases the performance of EBS in general.
Of course, some jobs which are designed to process rows one by one may stay at the same performance level, but in general EBS benefits from the processing power of Exadata.
In 12c, we have the PGA_AGGREGATE_LIMIT parameter, which lets us limit PGA usage. This is good for consolidation environments (think of several databases running on the same machine, same OS).
Using DBRM, we can limit the maximum degree of parallelism for any operation within consumer groups; we use the PARALLEL_DEGREE_LIMIT_P1 directive attribute to specify the degree of parallelism for a consumer group. This is good for mixed environments such as EBS (a small sketch follows at the end of this list).
Lastly, migrating EBS to Exadata may look like just following some documents and executing some scripts, but in fact it is not :) It is serious work, and you encounter new problem scenarios in every new Exadata migration project.
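A hedged sketch of such a directive (the plan and consumer group names are hypothetical, and the plan and group are assumed to exist already):

begin
  dbms_resource_manager.create_pending_area;
  dbms_resource_manager.create_plan_directive(
    plan                     => 'EBS_PLAN',
    group_or_subplan         => 'BATCH_GROUP',
    comment                  => 'limit the parallel degree for batch users',
    parallel_degree_limit_p1 => 8);
  dbms_resource_manager.submit_pending_area;
end;
/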

I hope you find this post useful.
I will post the performance reports when I have them.

4 comments :

  1. Hey Erman,

    Need to know whether your source was on 11i(RUP4) or 11i(RUP7) ? We do have similar migration coming up but ours will be from HP-UX to Exadata(DB) and HP-UX to Linux(Apps). Have you ever had any such migrations/upgrades before?

    Rgrds,
    Ata

  2. If you are asking about the ATG RUP level, it was RUP7.

  3. Thanks for your reply... since we are on RUP4, is there a way we could move to Exadata without going to RUP7?

    Rgrds,
    Ata

  4. You will need at least RUP6.
    Your application tier needs to be interoperable with 11gR2 at least.
    In order to achieve this, you will follow this kind of document: Interoperability Notes Oracle EBS 11i with Oracle Database 11gR2 (11.2.0) (Doc ID 881505.1)

    You will use datapump for the migration, but even if you use transportable tablespace, the situation is the same.


If you want to ask a question, please don't comment here.

For your questions, please create an issue in my forum.

Forum Link: http://ermanarslan.blogspot.com.tr/p/forum.html

Register and create an issue in the related category.
I will support you from there.