Monday, February 9, 2015

EXADATA -- EBS 11i EXADATA migration -- From HPUX to Linux Exadata 2 node RAC

This time I migrated an EBS 11i database to Exadata. Actually, this was my third EBS 11i-to-Exadata migration and my fifth EBS-on-Exadata implementation.

Last year I migrated an EBS 11i to Exadata, but at that time the source system was Solaris.
3 months ago, there was a POC. The source system used in the POC was Linux.
This time, I had to deal with an HPUX RAC source system.

Anyways;
The source database was a 10.2.0.4 64-bit Oracle Database running on HP-UX.
The target was an Exadata X4-2, so the OS was Oracle Linux 5.9 64-bit, hosting a 2-node Oracle 11gR2 (11.2.0.4) RAC environment.
The source application tier was running on 3 Linux 32-bit servers (shared APPL_TOP); the target application tier was a single server, also Linux 32-bit.

The method was Data Pump, so the operation was a logical migration.

The following describes the whole operation in general;


For this project, my document inventory consisted of:

Oracle Docs:
  • Interoperability Notes Oracle EBS 11i with Oracle Database 11gR2 (11.2.0) (Doc ID 881505.1)
  • Migrating an Oracle E-Business Suite Database to Oracle Exadata Database Machine (Doc ID 1133355.1)
  • Export/import notes on Applications 11i Database 11g (Doc ID 557738.1)
  • Oracle Applications Installation Update Notes, Release 11i (11.5.10.2) (Doc ID 316806.1)
  • Migrating_EBS_to_Exadata_MAA_whitepaper
  • Configuring DBFS on Oracle Exadata Database Machine (Doc ID 1054431.1)
  • Recreating the Spfile for RAC Instances Where the Spfile is Stored in ASM [ID 554120.1]
  • 11i: adcfgclone fails with signal 11 while executing a Java Command (Doc ID 395689.1)
  • EBS 11.5.10.2 Install Fails on OEL5 with AC-50207 and java.lang.Exception: Fatal Exception (Doc ID 948628.1)
  • 11i: 'adcfgclone.pl appsTier' Fails with Error: 'txkGenExtSecConf.pl' - ORACLE_HOME env. variable must be set to validate sqlplus connection (Doc ID 1186394.1)
  • ORA-20000 APPS_DDL/APPS_ARRAY_DDL Package(s) Missing Or Invalid In Schema CTXSYS (Doc ID 944150.1)
  • Error When Trying To Start Discoverer (Addisctl.Sh) (Doc ID 854976.1)
Migrating the application tier was easy; it involved cloning actions, something like a standard clone. So I will not go into details about the application migration here. It was just: apply the patches for 11gR2 database interoperability, and clone the application onto the new application server.

So let's start;

Let's look at what we did to migrate an EBS 11i database from HPUX 10.2.0.5 to Exadata X2-4 Linux 11.2.0.4.

1)
First of all, we checked the patches required for the 11.2.0.4 Exadata database home.
There were two patches (17468141 and 17501296). However, these patches were already subsets of other patches that come with Exadata, which is why we did not need to apply them.
[oracle@exaerpdbadm01 17468141]$ opatch prereq CheckConflictAgainstOHWithDetail -ph ./
Oracle Interim Patch Installer version 11.2.0.3.6
Copyright (c) 2013, Oracle Corporation. All rights reserved.

PREREQ session
Oracle Home : /u02/app/oracle/product/11.2.0.4/dbhome_2
Central Inventory : /u01/app/oraInventory
from : /u02/app/oracle/product/11.2.0.4/dbhome_2/oraInst.loc
OPatch version : 11.2.0.3.6
OUI version : 11.2.0.4.0
Log file location : /u02/app/oracle/product/11.2.0.4/dbhome_2/cfgtoollogs/opatch/opatch2015-01-27_14-01-23PM_1.log
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" failed.
Summary of Conflict Analysis:

There are no patches that can be applied now.
Following patches are not required, as they are subset of the patches in Oracle Home or subset of the patches in the given list :
17468141
Conflicts/Supersets for each patch are:
Patch : 17468141
Bug SubSet of 17943261
Subset bugs are:
17468141
OPatch succeeded.
[oracle@exaerpdbadm01 17468141]$ opatch lsinventory|grep 17943261
Sub-patch 17943261; "DATABASE PATCH FOR EXADATA (JAN2014 - 11.2.0.4.3) : (17943261)"
[oracle@exaerpdbadm01 17501296]$ opatch prereq CheckConflictAgainstOHWithDetail -ph ./
Oracle Interim Patch Installer version 11.2.0.3.6
Copyright (c) 2013, Oracle Corporation. All rights reserved.
PREREQ session
Oracle Home : /u02/app/oracle/product/11.2.0.4/dbhome_2
Central Inventory : /u01/app/oraInventory
from : /u02/app/oracle/product/11.2.0.4/dbhome_2/oraInst.loc
OPatch version : 11.2.0.3.6
OUI version : 11.2.0.4.0
Log file location : /u02/app/oracle/product/11.2.0.4/dbhome_2/cfgtoollogs/opatch/opatch2015-01-27_14-02-18PM_1.log
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" failed.
Summary of Conflict Analysis:
There are no patches that can be applied now.
Following patches are not required, as they are subset of the patches in Oracle Home or subset of the patches in the given list :
17501296
Conflicts/Supersets for each patch are:
Patch : 17501296
Bug SubSet of 18825509
Subset bugs are:
17501296
OPatch succeeded.
[oracle@exaerpdbadm01 17501296]$ opatch lsinventory|grep 18825509
Sub-patch 18825509; "DATABASE PATCH FOR EXADATA (JUL2014 - 11.2.0.4.9) : (18825509)"

2)
We created 2 directories for expdp.
We created 2 directories because our source database was big, and HPUX could not handle mount points larger than 1 TB, while our export would be more than 1 TB.
We also configured expdp through its parameter file to make it use multiple directories for storing its dump files:

DUMPFILE=dir1:aexp01%U.dmp, dir2:aexp02%U.dmp ...
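As an illustration, the directory objects referenced above could be created like this on the source database (the paths here are hypothetical, not the actual HPUX mount points we used):

sqlplus "/ as sysdba"
SQL> CREATE DIRECTORY dir1 AS '/expdump1';
SQL> CREATE DIRECTORY dir2 AS '/expdump2';
SQL> GRANT READ, WRITE ON DIRECTORY dir1 TO system;
SQL> GRANT READ, WRITE ON DIRECTORY dir2 TO system;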

3)
We used 4 parallel processes for expdp. The important thing here was that we used the FILE keyword in the parfile, not DIRECTORY. We also used the FILE keyword to declare the logfile location. (If we had not stated the log file location with the FILE keyword, expdp would have taken the defaults and tried to write the logs into the DATA_PUMP_DIR directory, and would have gotten errors, as we might not have had a DATA_PUMP_DIR directory defined.)
We did not export the statistics.
We did not use the compress option, as it can only compress the metadata (not the data itself) in 10g.

4)
The export was taking a long time, especially the estimate phase. We had hit a bug. We could not get its patch, because our CSI did not include the Extended Support option (the patch was an extended support patch). Anyways, we requested the patch from Oracle, but the HPUX version of this patch was not available.

The patch was for 10.2.0.5: Patch 12780993: DATA PUMP PERFORMANCE FOR ESTIMATE=STATISTICS IN EXPORT IS BAD
Fortunately, we found another patch, 14645826, which contained the fix for bug 12780993. 14645826 was a generic patch, so we applied it.
Note that we used ESTIMATE=STATISTICS rather than ESTIMATE=BLOCKS. This made expdp derive the size estimates from the optimizer statistics, which also increased the speed of the estimate phase.
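Putting steps 3 and 4 together, a minimal expdp parameter file could look like the sketch below. This is not the exact parfile we used (ours was derived from Doc ID 557738.1 and used the FILE-style notation mentioned above); schema coverage, file sizes and directory names here are illustrative only.

# auexp.par -- illustrative sketch
full=y
parallel=4
dumpfile=dir1:aexp01%U.dmp, dir2:aexp02%U.dmp
logfile=dir1:expdpapps.log
filesize=20GB
exclude=statistics
estimate=statistics

expdp system/<password> parfile=auexp.par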


5)
We created the database using dbca on the target. Thus, our target database became RAC directly; by using dbca, we did not have to struggle with init.ora and srvctl commands to configure the target database.
Note that we created the redo logs, undo and temp files afterwards, after creating the database with dbca.
We took the information about these files directly from aucrdb.sql.
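As a rough sketch of that post-dbca step (group numbers, sizes, tablespace names and the +DATA diskgroup are illustrative; the real values came from aucrdb.sql):

sqlplus "/ as sysdba"
SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 11 ('+DATA') SIZE 1G;
SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 21 ('+DATA') SIZE 1G;
SQL> CREATE UNDO TABLESPACE APPS_UNDOTS1 DATAFILE '+DATA' SIZE 10G;
SQL> CREATE UNDO TABLESPACE APPS_UNDOTS2 DATAFILE '+DATA' SIZE 10G;
SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '+DATA' SIZE 8G;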


6)
We took our target database into noarchivelog mode before the impdp operation (see the sketch below).
We also put our spfile, which was configured with the interoperability parameters (for using EBS 11i with an 11gR2 database), into use in the target database.

Recreating the Spfile for RAC Instances Where the Spfile is Stored in ASM [ID 554120.1]
Interoperability Notes Oracle EBS 11i with Oracle Database 11gR2 (11.2.0) (Doc ID 881505.1)
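A minimal sketch of switching the RAC database to noarchivelog mode before the import (the database name VISMIG matches the examples later in this post and is illustrative here):

srvctl stop database -d VISMIG
sqlplus "/ as sysdba"
SQL> startup mount;
SQL> alter database noarchivelog;
SQL> shutdown immediate;
srvctl start database -d VISMIG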

7)
We created a DBFS to store the dumps for the import.
Configuring DBFS on Oracle Exadata Database Machine (Doc ID 1054431.1)
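In condensed form, the DBFS setup followed that note; the sketch below is illustrative (tablespace, user, filesystem name, sizes and the mount point are assumptions, and the password handling is simplified):

sqlplus "/ as sysdba"
SQL> create tablespace dbfs_ts datafile '+DATA' size 200G autoextend on;
SQL> create user dbfs_user identified by dbfs_passwd default tablespace dbfs_ts quota unlimited on dbfs_ts;
SQL> grant create session, create table, create view, create procedure, dbfs_role to dbfs_user;
SQL> connect dbfs_user/dbfs_passwd
SQL> @?/rdbms/admin/dbfs_create_filesystem.sql dbfs_ts dumps

# mount it on the database node (the password is read from stdin)
nohup dbfs_client dbfs_user@VISMIG -o allow_other,direct_io /dbfs/dumps < passwd.txt &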

8) 
Created bigfile tablespaces for the tablespaces which consisted of a large number of datafiles.
A good comparison: Bigfile vs Smallfile
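For example, a tablespace that consisted of dozens of smallfile datafiles on the source could be precreated on Exadata as a single bigfile tablespace (the name and sizes below are illustrative):

SQL> CREATE BIGFILE TABLESPACE APPS_TS_TX_DATA
     DATAFILE '+DATA' SIZE 500G AUTOEXTEND ON NEXT 10G MAXSIZE 2T
     EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO;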

9)
Because the database was created using DBCA, the instances were registered properly with the cluster registry, and no extra effort was needed to configure them as RAC instances either. But there was a disadvantage of using dbca in this operation: some components came installed by default. I'm talking about XML DB, OLAP etc. They all came as installed, and because of this, aumsc1120.sql encountered errors and left OLAP invalid in dba_registry.
As the solution, we uninstalled the components that should not have been installed by default, such as XML DB.
So, dbca created problems in this operation, but these problems could be fixed.
That is, we dropped these components and made dba_registry clean again. After cleaning dba_registry, we executed aumsc1120.sql and the errors disappeared. aumsc1120.sql installed XML DB, OLAP etc. and left them installed and valid, as they should be.

Note that the OLAP Catalog and Oracle Multimedia were still invalid. We dropped and reinstalled them; the reinstallation made them valid.

10)
We imported the dump file into the target database on Exadata (a minimal impdp sketch follows the error list below).
We analyzed the import log file and found the following errors:

Some compilation errors caused by dblinks.
Some compilation errors caused by a looping chain of synonyms.
A materialized view (MV) creation error caused by missing privileges.
A problem in table data because of insufficient undo (actually ORA-01555).
Problems in the creation of some indexes (again, privileges).
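The impdp side, in sketch form (the directory object, dump file names and parallelism are illustrative; our actual parfile was derived from Doc ID 557738.1, and dmpdir is assumed to point to the DBFS mount from step 7):

# auimp.par -- illustrative sketch
full=y
directory=dmpdir
dumpfile=aexp01%U.dmp, aexp02%U.dmp
parallel=8
logfile=impdpapps.log

impdp system/<password> parfile=auimp.par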

11) 
The problems with the dblinks were fixed by copying tnsnames.ora from the source system, as the problematic dblinks had not been created as direct (fully qualified) dblinks. Also, we set GLOBAL_NAMES to FALSE (we were encountering ORA-02085 while it was set to TRUE).
In addition to that, we increased the open_links parameter and bounced the database (just in case).
Note that there was no need to recreate the dblinks.
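A minimal sketch of the parameter changes mentioned above (the open_links value and database name are illustrative):

sqlplus "/ as sysdba"
SQL> alter system set global_names=FALSE scope=both sid='*';
SQL> alter system set open_links=20 scope=spfile sid='*';

open_links is not dynamic, so the database was bounced afterwards:
srvctl stop database -d VISMIG
srvctl start database -d VISMIG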

12) 
Recreated the problematic materialized view. It had encountered privilege errors during the import, because the owner of the MV did not have the necessary privileges on the related table.

13)
Recreated the indexes using the APPS schema. The situation was the same for the indexes: their owners did not have the necessary privileges on the related tables.

14)
Manually recompiled the function which encountered the looping chain of synonyms error. This action fixed the problem.

15)
sqlplus "/as sysdba" @auque2.sql


16)
sqlplus "/ as sysdba" @adgrants.sql APPS


17) 
sqlplus apps/apps @adctxprv.sql systemsun CTXSYS

18) 
sqlplus "/ as sysdba"
SQL> exec ctxsys.ctx_adm.set_parameter('file_access_role', 'public');

19)
sqlplus apps/apps
exec fnd_conc_clone.setup_clean;
cd $ORACLE_HOME
unzip -o appsutil.zip
Install the $ORACLE_HOME/appsutil/jre directory:
cd $ORACLE_HOME/appsutil
cp -r $ORACLE_HOME/jdk/jre jre
mkdir clone
cp -r $ORACLE_HOME/jdk/jre clone/jre
perl $ORACLE_HOME/appsutil/bin/adbldxml.pl tier=db
After creating the xml file, correct the virtual host definitions and then run AutoConfig.
Set the virtual hostname in each node's context file.
Set the value of s_virtual_hostname to point to the virtual hostname for the database host, by editing the database context file:
On dscgigdb03: $ORACLE_HOME/appsutil/VISMIG1_dscgigdb03.xml
On dscgigdb04: $ORACLE_HOME/appsutil/VISMIG2_dscgigdb04.xml
On dscgigdb03: <host oa_var="s_virtual_hostname">dscgig03-vip</host>
On dscgigdb04: <host oa_var="s_virtual_hostname">dscgig04-vip</host>
Set the named listener name in each node's context file:
<DB_LISTENER oa_var="s_db_listener">LISTENER_VISMIG</DB_LISTENER>

Run AutoConfig on each database node:
$ORACLE_HOME/appsutil/bin/adconfig.sh contextfile=/u01/app/oracle/product/11.2.0/ebsmig/appsutil/VISMIG1_dscgigdb03.xml

20)
Gathered AD stats in restricted mode.
sqlplus "/ as sysdba"
SQL> shutdown normal;
SQL> startup restrict;
SQL> @adstats.sql
SQL> shutdown normal;
SQL> startup;
SQL> exit;

21)
Revise the listener on each database node.
The reason the listener requires revision is:
In 11.2, listeners are configured at the cluster level, and all nodes inherit the port
and environment settings. This means that the TNS_ADMIN directory path must
be the same on all nodes.
The current version of AutoConfig creates a listener.ora file under the
$ORACLE_HOME/network/admin/<context-name> directory.
AutoConfig does support SCAN listeners as of Oracle E-Business Suite 12.1.3.
The SCAN was not used in this example. See MOS ID 823587.1, “Using Oracle
11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 12”
for further details.
Note: Although the SCAN listener is supported, if the SCAN name resolves in DNS to IP1 and IP2, the client-side 10.1.2 network code does not round-robin over the underlying IPs if the first IP fails. An AutoConfig solution for this is being tracked through Bug 10427234; continue to check MOS ID 823587.1. The only existing workaround is to create custom TNS aliases configured with multiple SCAN IP addresses.
a. On each database node, add an ifile entry to the Grid Infrastructure listener.ora and tnsnames.ora network files:
/u01/app/11.2.0/grid/network/admin/listener.ora
/u01/app/11.2.0/grid/network/admin/tnsnames.ora
Point these to the AutoConfig-generated files under $ORACLE_HOME/network/admin/<context-name>/.
i. Update the Grid Infrastructure listener.ora on each database node to point to the AutoConfig-generated file $ORACLE_HOME/network/admin/<context-name>/listener.ora
For example, on database node 1, dscgigdb03, add the following line to the end of the /u01/app/11.2.0/grid/network/admin/listener.ora file:
IFILE=/u01/app/oracle/product/11.2.0/ebsmig/network/admin/VISMIG1_dscgigdb03/listener.ora

ii. Update the Grid Infrastructure tnsnames.ora on each database node to point to the AutoConfig-generated file $ORACLE_HOME/network/admin/<context-name>/tnsnames.ora
For example, on database node 1, dscgigdb03, add the following line to the end of the /u01/app/11.2.0/grid/network/admin/tnsnames.ora file:
IFILE=/u01/app/oracle/product/11.2.0/ebsmig/network/admin/VISMIG1_dscgigdb03/tnsnames.ora
b. Set the listener and database environment in OCR and verify it works with srvctl (add the listener, then set its environment as follows):
srvctl add listener -l listener_vismig
srvctl setenv listener -l listener_vismig -T TNS_ADMIN=/u01/app/11.2.0/grid/network/admin
srvctl setenv database -d VISMIG -T TNS_ADMIN=/u01/app/11.2.0/grid/network/admin
srvctl stop listener -l listener_vismig
srvctl start listener -l listener_vismig
Also stop/start the database.

22)
Executed AutoConfig on the database nodes once again.

23)
Followed Doc ID 395689.1 so as not to encounter the classes111.zip error before the post-clone.

24)
We set PATH and CLASSPATH before running adcfgclone.pl on the target apps tier.

25)
Followed Doc ID 948628.1 so as not to get libdb.so.3 errors while running adcfgclone.

26)
Executed adcfgclone on the target apps tier (a shorthand sketch of the flow is below).
Note that it is better to run adpreclone on the source and copy the up-to-date clone directory to the target apps tier before running adcfgclone.pl. It is also good to check Doc ID 316806.1 and make sure everything (rpms, patches etc.) is in place before running the post-clone on the target apps tier.
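In shorthand, the apps tier clone flow looked like the sketch below (context name, paths and the copy step are simplified placeholders):

# on the source apps tier
cd $COMMON_TOP/admin/scripts/<CONTEXT_NAME>
perl adpreclone.pl appsTier
# copy the application tier file system (APPL_TOP, COMMON_TOP, 8.0.6 and iAS ORACLE_HOMEs) to the target
# on the target apps tier
cd <COMMON_TOP>/clone/bin
perl adcfgclone.pl appsTier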

27)
Encountered errors in the AutoConfig run that was executed by the post-clone itself:

****************************************************
*******FATAL ERROR*******
PROGRAM : (/u02/eRMAN/ERMANcomn/admin/install/ERMAN_testapps1/txkGenExtSecConf.pl)
TIME : Tue Feb 3 09:59:00 2015
FUNCTION: TXK::SQLPLUS::validateConnectInfo [ Level 3 ]
MESSAGES:
ORACLE_HOME env. variable must be set to validate sqlplus connection

STACK TRACE
TXK::Error::abort('TXK::Error', 'HASH(0x86a39b0)') called at /u02/ERMAN/ERMANappl/au/11.5.0/perl/TXK/Common.pm line 299
TXK::Common::doError('TXK::SQLPLUS=HASH(0x8609200)', 'ORACLE_HOME env. variable must be set to validate sqlplus connec...', undef) called at /u02/ERMAN/ERMANappl/au/11.5.0/perl/TXK/Common.pm line 314
TXK::Common::setError('TXK::SQLPLUS=HASH(0x8609200)', 'ORACLE_HOME env. variable must be set to validate sqlplus connec...') called at /u02/ERMAN/ERMANappl/au/11.5.0/perl/TXK/SQLPLUS.pm line 331
TXK::SQLPLUS::validateConnectInfo('TXK::SQLPLUS=HASH(0x8609200)') called at /u02/ERMAN/ERMANappl/au/11.5.0/perl/TXK/SQLPLUS.pm line 267
TXK::SQLPLUS::setConnectInfo('TXK::SQLPLUS=HASH(0x8609200)', 'HASH(0x8609260)') called at /u02/ERMAN/ERMANcomn/admin/install/ERMAN_testapps1/txkGenExtSecConf.pl line 192
eval {...} called at /u02/ERMAN/ERMANcomn/admin/install/ERMAN_testapps1/txkGenExtSecConf.pl line 68
ERRORCODE = 1 ERRORCODE_END
.end std out.

The solution was sourcing APPSORA.env and running AutoConfig once again.
Solution: 11i: 'adcfgclone.pl appsTier' Fails with Error: 'txkGenExtSecConf.pl' - ORACLE_HOME env. variable must be set to validate sqlplus connection (Doc ID 1186394.1)

28)

Executed the CTXSYS script mentioned in note 944150.1. (Do this, or errors will be generated while recreating grants and synonyms using adadmin.)
The fix:

cd $AD_TOP/patch/115/sql
sqlplus APPS/<Password> @adctxpkg.sql <SYSTEM Password> CTXSYS APPS


29)
Recreated the grants and synonyms using adadmin.

30)
Created a driver file (for example, erman.drv) using perl $AU_TOP/patch/115/driver/dpost_imp.pl [driver file], and applied this driver file using adpatch.
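For example (the driver file name is a placeholder; the trailing argument is the target database release family as described in Doc ID 557738.1, so check the note for the exact syntax):

perl $AU_TOP/patch/115/driver/dpost_imp.pl $AU_TOP/patch/115/driver/erman.drv 11
adpatch     (then apply erman.drv interactively with adpatch)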

31)
Configured the twotask context file parameters and made them use load-balanced TNS entries.
In particular, we set s_cp_twotask to ERMAN_806_BALANCE (for EBS 11i it should be ERMAN_806_BALANCE; for R12 it would be ERMAN_BALANCE, according to the note: Concurrent Processing - How To Ensure Load Balancing Of Concurrent Manager Processes In PCP-RAC Configurations (Doc ID 762024.1)) and executed AutoConfig.
jdbc_url was already configured for load balancing and failover.


Section 3.11 Establish Applications Environment for RAC :
Configuring Oracle Applications Release 11i with Oracle10g Release 2 Real Application Clusters and Automatic Storage Management (Doc ID 362135.1)

Also check:
After a 11.5.10.2 Linux Platform Migration AutoConfig Fails With Error "java.lang.Exception: Error while generating listener.ora" at NetServiceHandler.generate806Listener(NetServiceHandler.java) (Doc ID 2158857.1)

( Set <TWO_TASK oa_var="s_tools_twotask" osd="unix"><database_name>_806_BALANCE</TWO_TASK>
  Set <TWO_TASK oa_var="s_weboh_twotask" osd="unix"><database_name>_BALANCE</TWO_TASK>
)

Concurrent Processing - How To Ensure Load Balancing Of Concurrent Manager Processes In PCP-RAC Configurations (Doc ID 762024.1)
By making the concurrent processing TNS load-balanced, we gained the ability to spread the database work of the concurrent managers across multiple RAC nodes.

Explanation: The Apps listener starts the Concurrent Managers (FNDSM).
The Apps listener starts the concurrent managers using a script. This script uses s_cp_twotask to derive the TNS alias to be used by the concurrent managers.
The name of this script is gsmstart.sh.
So when we change s_cp_twotask in the context file and run AutoConfig, the gsmstart script is recreated using the new information (the load-balanced TNS alias) stored in s_cp_twotask.
The script then looks up the corresponding entry in tnsnames.ora, finds the load-balanced TNS string and starts FNDSM using it.
In other words, the concurrent manager connections are made through load-balanced database connections.
Note that using load-balanced concurrent processing can be a disadvantage, because with this method there is no load balancing based on modules. The load balancing is at the database level, so it is random in a manner.
That's why data needed by a concurrent program in the XXERMAN module may end up in the second node's memory, because the session of another concurrent program in the same XXERMAN module may be running on the second node. This results in increased interconnect usage.

PCP is naturally a better configuration. In PCP, one app node can be associated with one db node, so module-based load balancing can be achieved. On the other hand, PCP requires multiple app nodes.

32)
Migrated the application server's Linux printer configurations.
To copy the printer configurations, we transferred /etc/cups from the source application server to the target and then restarted cups (service cups restart).
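As a sketch (the source hostname is a placeholder):

# on the target application server
mv /etc/cups /etc/cups.orig
scp -r root@sourceapps:/etc/cups /etc/
service cups restart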

33)
Compiled Flexfields using adadmin.

34)
addisctl.sh, which was run by adstrtal.sh, encountered errors:

8.0.6 $ORACLE_HOME/jre1183o/lib/i686/green_threads/libzip.so: symbol errno, version GLIBC_2.0 not defined in file libc.so.6 with link time reference (libzip.so)
Unable to initialize threads: cannot find class java/lang/Thread
Could not create Java VM
This error was actually expected; see Error When Trying To Start Discoverer (Addisctl.Sh) (Doc ID 854976.1). The solution for this was a Discoverer upgrade, but we ignored it, because another Discoverer server was already running on a different machine in the customer's environment.


35)
Applied patch 7721754 for Oracle Human Resources (HRMS). Note that this patch could have been applied to the source system just before the migration as well.

36)
Disabled maintenance mode and started up the application services.
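A sketch of the equivalent command-line steps (maintenance mode can also be toggled from adadmin; the apps password is a placeholder):

sqlplus apps/<apps_password> @$AD_TOP/patch/115/sql/adsetmmd.sql DISABLE
cd $COMMON_TOP/admin/scripts/<CONTEXT_NAME>
./adstrtal.sh apps/<apps_password>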

37)
Updated fnd_concurrent_queues because their node names were wrong (this was actually a cloning problem). Anyways, updated the node_names and target_nodes accordingly.
In particular, the Standard Manager's and AP Manager's node names were wrong.

update fnd_concurrent_queues
set NODE_NAME='TESTAPPS1',
TARGET_NODE='TESTAPPS1'
where CONCURRENT_QUEUE_NAME='APMNGR';

update fnd_concurrent_queues
set NODE_NAME='TESTAPPS1',
TARGET_NODE='TESTAPPS1'
where CONCURRENT_QUEUE_NAME='STANDARD';

38)
For the concurrent programs which produce PDF outputs, we started a VNC server and configured the DISPLAY environment variable in the manager start scripts accordingly.
So,
We connected to VNC display 1, set DISPLAY=testapps1:1.0 and executed xhost +.

Afterwards, we modified the scripts in $COMMON_TOP/admin/scripts, scripts like adcmctl.sh and gsmstart.sh: we set the DISPLAY env variable in these scripts accordingly and restarted the concurrent managers.
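A sketch of the display setup (the hostname testapps1 comes from the example above; display number 1 is illustrative):

vncserver :1
export DISPLAY=testapps1:1.0
xhost +
# then set DISPLAY=testapps1:1.0 inside adcmctl.sh, gsmstart.sh etc. and restart the concurrent managers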

39)
Created directories for APPLTMP and APPLPTMP on the database side, included them in the utl_file_dir parameter, then restarted the database and application services.
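A sketch of this step (directory paths are illustrative):

mkdir -p /u02/appltmp /u02/applptmp      # on both database nodes
sqlplus "/ as sysdba"
SQL> alter system set utl_file_dir='/usr/tmp','/u02/appltmp','/u02/applptmp' scope=spfile sid='*';

utl_file_dir is not dynamic, so the database was restarted afterwards (srvctl stop/start database -d VISMIG).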

40)
Created DQM indexes by following these steps:
Log on to Oracle Applications with the "Trading Community Manager" responsibility
Click Control > Request > Run
Select "Single Request" option
Enter "DQM Staging Program" name
Enter the following parameters:
Number of Parallel Staging Workers: 4
Staging Command: CREATE_INDEXES
Continue Previous Execution: NO
Index Creation: SERIAL
Click "Submit"

41)
Configured notification mailer and delivered the system connection information to the Application Test Team.

That's it. The rest depends on your needs. You may drop some indexes, archive/compress some data using Exadata-specific functionality, configure IORM and so on.

