Tuesday, January 16, 2018

EBS 12.2.7 - Oracle VM Virtual Appliance for Oracle E-Business Suite 12.2.7 is now available!

Oracle VM Virtual Appliance for E-Business Suite Release 12.2.7 is now available from the Oracle Software Delivery Cloud!

You can use this appliance to create an Oracle E-Business Suite 12.2.7 Vision instance on a single virtual machine containing both the database tier and the application tier.

Monday, January 15, 2018

RDBMS -- diagnosing & solving "ORA-28750: unknown error" in UTL_HTTP - TLS communication

As you may remember, I wrote a blog post about this ORA-28750 before.. (in 2015).
http://ermanarslan.blogspot.com.tr/2015/09/rdbms-ora-28750-unkown-error-in-web.html

In that blog post, I attributed the issue to the lack of SHA-2 certificate support, and as for the solution, I recommended upgrading the database for the fix (this was tested and worked).
I also recommended using a GeoTrustSSLCA-G3 type server-side certificate as a workaround. (this was tested and worked)

Later on, last week, we encountered this error in an 11.2.0.4 database where the server-side certificate was already a GeoTrustSSLCA-G3 certificate.. The code was calling "UTL_HTTP.begin_request" and failing with ORA-28750.
So, the fix and the workaround that I documented earlier were not applicable in this case.. (the DB was up-to-date and the certificate was already GeoTrust..G3)..

As you may guess, this time a more detailed diagnostic was needed.

So we followed the note:

"How To Investigate And Troubleshoot SSL/TLS Issues on the DB Client SQLNet Layer (Doc ID 2238096.1)"

We took a tcpdump.. (filtering on the related IP addresses to have a consolidated tcp output..)

Example: tcpdump -i em1 dst 10.10.10.10 -s0 -w /tmp/erman_tcpdump.log

In order to see the character strings properly, we opened the tcpdump output using Wireshark.

When we opened the output with Wireshark, we concentrated on the TLS v1.2 protocol communication and saw an ALERT just after the first HELLO message;


The problem was obvious.. The TLS v1.2 communication was throwing an "unsupported extension" alert.
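By the way, you don't have to click through the GUI for this; tshark (Wireshark's command-line tool) can filter the capture directly. A minimal sketch, assuming the capture file from the tcpdump command above (on newer Wireshark releases the display-filter prefix is "tls" instead of "ssl"):

# show only the TLS/SSL records in the capture
tshark -r /tmp/erman_tcpdump.log -Y "ssl"
# or narrow it down to the handshake and alert records
tshark -r /tmp/erman_tcpdump.log -Y "ssl.handshake || ssl.alert_message"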

This error redirected us to the Document named:  UTL_HTTP : ORA-28750: unknown error | unsupported extension (Doc ID 2174046.1)

This document was basically saying "apply patch 23115139"; however, this patch was not released for Oracle Database 11.2.0.4 running on Linux x86-64.. In addition to that, our PSU version was 11.2.0.4.171017 and the patch was not built for it.

So we needed to find another patch which includes the same fix and is appropriate for our DB & PSU version..

Now look what we found :) ;

Patch 27194186: MERGE REQUEST ON TOP OF DATABASE PSU 11.2.0.4.171017 FOR BUGS 23115139 26963526

Well.. We applied patch 27194186 and our problem was solved.
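For completeness, the apply itself was standard OPatch work. A rough sketch under the usual assumptions (the staging path and the zip file name are illustrative; always follow the patch README and stop the services running from the Oracle Home first):

# check the inventory and the installed PSU level first
$ORACLE_HOME/OPatch/opatch lsinventory
# stage the merge patch (illustrative path and file name)
unzip p27194186_112040_Linux-x86-64.zip -d /u01/stage
cd /u01/stage/27194186
# check for conflicts against the Oracle Home, then apply
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./
$ORACLE_HOME/OPatch/opatch apply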

Now, with the help of this issue and its resolution, I can give 2 important messages;

1) Use Wireshark or a similar tool to analyze the tcpdump outputs. (analyze the dumps by concentrating on the TLS protocol messages)

2) Don't give up even when the patch recommended by the Oracle documents isn't compatible with your RDBMS and PSU versions..
Most of the time, you can find another patch (maybe a merge patch) which is compatible with your RDBMS & PSU versions, and that patch may include the same fix + more :)

Monday, December 25, 2017

Erman Arslan is now an Oracle ACE!

I'm an Oracle ACE now! Today is my birthday, and this is the best birthday gift ever! :)
I have been writing this blog since 2013, and thanks to my passion for writing, I wrote the book (Practical Oracle E-Business Suite) with my friend Zaheer Syed last year.
I aimed to share my knowledge with all my followers around the world and to keep up with the new developments in Oracle technologies.
I have spent significant time giving voluntary support on my forum and did several Oracle projects at customer sites in parallel to that.
My primary focus was on EBS, but I was also researching and doing projects on Exadata, Oracle Linux, OVM, ODA, Weblogic and many other Oracle technologies.
I'm still working with the same self-sacrifice as when I started to work as an Oracle DBA in 2006, and I'm still learning, implementing and explaining Oracle solutions with the same motivation that I had in the first years of my career.

I want to send my special thanks to Mr. Hasan Tonguç Yılmaz, who nominated me to become an Oracle ACE. I offer my respect to Mr. Alp Çakar, Mr. Murat Gökçe and Mr. Burak Görsev, who have directly or indirectly supported me along this way.


Friday, December 22, 2017

Goldengate -- UROWID column performance

As you may guess, these blog posts will be the last blog posts of the year :)

This one is for Goldengate.

Recently, I started working for a new company, and nowadays I deal with Exadata machines and GoldenGate more often.

Yesterday, I analyzed a customer environment where GoldenGate was not performing well.

That is, there was a lag/gap reported in a target database in which 55-60 tables were populated by GoldenGate 12.2.

When we analyzed the environment, we saw that it was not the extract process or the network that was causing the issue.

The REPLICAT process was also looking good at first glance, as it was performing well on its trail files.

However, when we checked the db side, we saw that there was a lag of around 80 hours.. So the target db was behind the source db by 80 hours.

We analyzed the target database, because we thought that it might be the cause.. I mean, there could be some PK or FK missing on the target environment.. (if the keys are missing, this can be a trouble spot in GoldenGate replications). However, we concluded that no keys were missing.

In addition to that, we analyzed the AWR reports, we analyzed the db structure using various client tools (like TOAD) and we checked the db parameters, but -> all were fine..

Both source and target databases were on Exadata. AWR reports were clean. Load average was very low, the machine was almost sleeping, and there were almost no active sessions in the database (when we analyzed it in real time).

Then we checked the GoldenGate process reports and saw that the REPLICAT was performing very slowly.

It was doing 80 tps, but it should have been around 10000 tps in this environment..

At that moment, we followed the note below and checked the replicat accordingly.
(Excessive REPLICAT LAG Times (Doc ID 962592.1))
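For reference, the replicat-side lag and throughput can be watched from GGSCI before digging deeper; a minimal sketch (REP1 is a hypothetical replicat group name):

./ggsci
GGSCI> INFO REPLICAT REP1, DETAIL
GGSCI> LAG REPLICAT REP1
GGSCI> STATS REPLICAT REP1, TOTALSONLY *, REPORTRATE SEC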

We considered the things in the following list, as well:
  • Preventing full table scans in the absence of keys (KEYCOLS)
  • Splitting large transactions
  • MAXTRANSOPS
  • MAXSQLSTATEMENTS
  • Improve update speed - redefine tables - stop and start replicat
  • Ensure effective execution plans by keeping fresh statistics
  • Set Replicat transaction timeout
Unfortunately, no matter what we did, the lag kept increasing..

Then, fortunately :), we saw that all these 55-60 tables in the target db had columns of type UROWID..

These columns were recently added to the tables by the customer.

We also discovered that this performance issue had started after these columns were added.

We wanted to change the column type, because UROWID columns have only recently become supported in GoldenGate..

ROWID/UROWID Support for GoldenGate (Doc ID 2172173.1)

So we thought that these columns might be causing the REPLICAT to perform this poorly.

The customer was using these columns to identify PK changes and agreed to change the type of these columns to VARCHAR2.

As for the solution, we changed the type of those columns to varchar2 by creating empty tables and transferring the data using INSERT INTO APPEND statements.

Thanks to Exadata, it didn't take much of our time, and thanks to Oracle Database 12c, we didn't need to gather statistics on these new tables, since in 12c this is done automatically during CTAS and INSERT APPEND operations..
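To illustrate the conversion, here is a minimal sketch with hypothetical table/column names (ERMAN.ORDERS with a UROWID column ROW_REF); the VARCHAR2 length and an explicit conversion may need adjusting for your data:

sqlplus -s / as sysdba <<'EOF'
-- 1) build an empty copy of the table with the UROWID column redefined as VARCHAR2
CREATE TABLE erman.orders_new AS SELECT * FROM erman.orders WHERE 1 = 2;
ALTER TABLE erman.orders_new DROP COLUMN row_ref;
ALTER TABLE erman.orders_new ADD (row_ref VARCHAR2(100));
-- 2) direct-path load; the UROWID value is implicitly converted to VARCHAR2 on insert,
--    and 12c online statistics gathering covers this kind of CTAS / INSERT APPEND load
INSERT /*+ APPEND */ INTO erman.orders_new (order_id, order_date, row_ref)
  SELECT order_id, order_date, row_ref FROM erman.orders;
COMMIT;
EOF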

After changing the column type of those tables, we restarted the REPLICAT and the lag disappeared within 2 hours.

So, be careful when using UROWID columns in a Goldengate environment..

Monday, November 20, 2017

EBS 12.2 -- Solaris Sparc relinking reports, executing make -f from the lib or lib32 directory? + important info about ins_reports.mk and the env settings

I'm writing this post because I found a mismatch in the info delivered by Oracle Support documents.
According to some of those documents, reports should be relinked from the lib32 directory (if that directory is present) ->

cd $ORACLE_HOME/reports/lib32
--Note: if this directory does not exist: --
cd $ORACLE_HOME/reports/lib
$ make -f ins_reports.mk install

However, even if you relink the reports binaries from the lib32 directory, you may end up with the following error in your concurrent manager log files (when running reports-based concurrent programs) ->

Program exited with status 1
Concurrent Manager encountered an error while running Oracle*Report for your concurrent request XXXXXXX


So, this error brings you to the following document:

E-Business Suite R12 : Oracle Reports (RDF) Completing In Error "Program Exited With Status 1" Specifically on Oracle Solaris on SPARC OS Installation (Doc ID 2312459.1)

Actually, what is underlined in this document is the famous environment setting (LD_OPTIONS="-L/lib -L/lib/sparcv9").

However, look at what document 2312459.1 also says, 

Navigate to this directory $ORACLE_HOME/reports/lib and compile the reports executable make command "$ make -f ins_reports.mk install"

So, the document says relink your reports from the lib directory (not from the lib32 directory).

At the end of the day, it is not important whether you are in the lib32 or the lib directory when relinking the reports binaries on Solaris Sparc.

The important things are having a good/clean ins_reports.mk and the necessary LD_OPTIONS environment setting.

What I mean by a clean ins_reports.mk is -> a default ins_reports.mk which is deployed by the EBS installer. No LD_OPTIONS inside that file!

You may recall that, I wrote a blog post for the error that we usually get in EBS 12.2 Solaris installation - > http://ermanarslan.blogspot.com.tr/2017/06/ebs-1220-installation-on-solaris-511.html

In that blog post, I was modifying the ins_reports.mk inside the stage. I wrote LD_OPTIONS inside the ins_reports.mk file and, with some difficulty, fixed the errors in the runInstaller screens. This was the one and only solution, as runInstaller could not get the LD_OPTIONS env setting from the terminal where I executed it.

However, this was just for the installation.

So after the installation, we need to remove these modifications and directly use the LD_OPTIONS env setting for our future reports relink actions.

export LD_OPTIONS="-L/lib -L/lib/sparcv9"
cd $ORACLE_HOME/reports/lib
make -f ins_reports.mk install

or

export LD_OPTIONS="-L/lib -L/lib/sparcv9"
cd $ORACLE_HOME/reports/lib32
make -f ins_reports.mk install

Both work.. (as long as LD_OPTIONS is set in the environment, as shown above)

This is based on a true story and this is the tip of the day.

Wednesday, November 15, 2017

ODA X7-2 -- a closer look //Oracle Database Appliance X7-2 Family

The adventure of ODA started with the machine named Oracle Database Appliance in 2011, and almost every year a new model has been released since then.

ODA X3-2 in 2013, ODA X4-2 in 2014, ODA X5-2 in 2015 and ODA X6-2 in 2016.

Today, I'm writing about the newest model, which was released recently.. the "ODA X7-2".



In ODA X6-2, we were introduced to the S (Small), M (Medium) and L (Large) models of the ODA family. That is, the standard ODA became the ODA HA, and these S, M and L models were added to the family.

In ODA X7-2, Oracle decided to remove the Large model from the product family and released the new machine with 3 models: S, M and HA.


Actually, there are several enhancements delivered with the new ODA X7-2, but the most interesting enhancement of all is that "we can build Standard Edition RAC databases on ODA X7-2 machines" (the most interesting new feature, in my opinion).

General enhancements:
  • 3 new models ODA X7-2S, ODA X7-2M, ODA X7-2-HA
  • Increased storage capacity
  • Virtualization option for all the models.
  • Standard Edition RAC support
  • New Intel Xeon processors
  • Increased core count
  • Oracle database 12.2.0.1
General Specs:

Oracle Database Appliance X7-2S
Single-instance
SE/SE1/SE2 or EE
Virtualization
10 Cores
12.8 TB Data Storage (Raw)


Oracle Database Appliance X7-2M

Single-instance
SE/SE1/SE2 or EE
Virtualization
36 Cores
Up to 51.2 TB Data Storage (Raw)

Oracle Database Appliance X7-2HA

RAC, RAC One, SI
SE/SE1/SE2 or EE
Virtualization
72 Cores
Up to 128TB SSD or 300 TB HDD Data Storage (Raw)

NVME:

As in ODA X6-2, we have NVMe storage for the X7-2S and X7-2M models.

Appliance Manager and ACFS:

We also have the appliance manager for proactive management, and we have ACFS as the filesystem.
Again, we have odacli as the command-line management interface.


ODA X7-2 HA provides us the ability to build a highly available environment. Again we are using ACFS for the filesystem, but this time we have SSD disks in the backend (or SSD+HDD disks).


Supported Versions:

ODA X7-2 supports almost all the up-to-date Oracle Versions.

Enterprise Edition – 11.2.0.4, 12.1.0.2, 12.2.0.1
Standard Edition (SE, SE1, SE2)  – 11.2.0.4, 12.1.0.2
If you need the Database options, then you need to go with EE.
http://docs.oracle.com/database/121/DBLIC/options.htm Advanced Security Option, In-Memory, Multitenant, …..

About Licensing:

Standard Edition 2 RAC support is limited to 2-socket db servers.
So, we have SE2 RAC support on ODA X7-2 HA (with Oracle VM Server / OVM).
In order to run an SE2 RAC database on ODA X7-2 HA, we use a maximum of 18 cores in ODA_BASE. The remaining cores can be used by the virtual application servers that can be built on top of ODA X7-2 HA (using OVM).

In SE RAC licensing, on the other hand, we have no socket limit.
So, if we have an SE RAC license, then we can have an SE RAC database in the virtualized environment or directly on Bare Metal.
If we want to have both SE RAC and SE2 RAC databases on ODA X7-2 HA, then we need to build a virtualized environment and we need to use a maximum of 18 cores in ODA_BASE.

Capacity-on-demand:

ODA X7-2 has the capacity-on-demand feature as well.
We use the appliance manager for configuring our core counts.
The enabled core count must be an even number between 2 and 36 (2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36).
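Purely as an illustration (the exact odacli verbs and flags differ between ODA models and software releases, so treat the commands below as an assumption and verify them with odacli -h on your appliance):

# show the currently enabled core count (assumed verb; check odacli -h on your release)
odacli describe-cpucore
# enable 8 cores; the count must be an even number between 2 and 36 (assumed flag name)
odacli update-cpucore --cores 8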

ACFS snapshots:

In ODA X7-2, we have ACFS.. So, this new release of ODA offers ACFS snapshot-based, quick and space-efficient database copies and ACFS snapshot-based rapid cloning of virtual machines.
This is also one of the exciting pieces of news..
It was actually there in ODA X6-2, but it wasn't documented clearly.
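For reference, taking such a snapshot is basically a one-liner with acfsutil; a minimal sketch (the mount point and snapshot name are hypothetical):

# create a writable snapshot of an ACFS filesystem (hypothetical name / mount point)
acfsutil snap create -w erman_snap /u02/app/oracle/oradata/datastore
# list the snapshots of that filesystem
acfsutil snap info /u02/app/oracle/oradata/datastore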

Here in the following blog post, you can see a demo that I did for ODA X6-2;

http://ermanarslan.blogspot.com.tr/2017/02/oda-x6-2-using-acfs-snapshots-for.html

Oracle VM Server and KVM (Kernel Based Virtual Machine):

The virtualization technology used in ODA X7-2 HA Virtualized environment is still Oracle VM Server.
However, we have KVM (Kernel Based Virtual Machine) for virtualizing ODA X7-2S and ODA X7-2M environments..

We can even use KVM to have a virtualized environment on top of ODA X7-2 HA Bare Metal..

There are some limitations for KVM though;

  • You can run only Linux OS on top of them.
  • It is not supported to run Oracle database on top of Guest KVM machines.
  • There is no capacity-on-demand option for databases and applications that are running on Guest KVM machines. 
You can read my blog posts for getting the details about KVM and enabling the KVM..



Note that the above links are for ODA X6-2M, but the process is almost the same..

Auto starting VMs , Failover for VMs:

In ODA X7-2 HA virtualized deployments, we still get the ODA_BASE-based configuration for the Oracle VM Server.
This Oracle VM Server deployed with ODA gives us the opportunity to auto-restart the VMs and to use the standard failover capabilities for those VMs.
We may configure our VMs to be restarted on the same node or on the surviving node in case of a failure (without manual intervention).

ASR, oakcli orachk and odaadmcli manage diagcollect:

In ODA X7-2, we have the opportunity to use ASR (Automatic Service Request) as well.. ASR monitors the appliance and creates an automatic service request (Oracle SR) on any hardware component failure.
In addition to that, we have the "oakcli orachk" command for checking all the hardware and software units of ODA. So, using oakcli, we can validate hardware and software components and quickly identify any anomaly or violation of best-practice compliance.
Moreover, we can collect the diagnostics by running a single command -> odaadmcli manage diagcollect.

So, the bottom line is that this new ODA X7-2 is not breaking the tradition... ODA is still easy to deploy, patch and manage. It is still affordable and optimized.

I think, I will work on this new ODA at least in one of the upcoming projects this year.. So we will see :)

Monday, November 13, 2017

EBS 12.2 -- Solaris Apps Tier, "setting the Swap Space" , "/tmp on swap"

According to some administrators, swap should not be used in a production system.. They may say "if we are on swap (using swap at any time), we're screwed"..

With this in mind, they may not configure swap at all, or they may give you a very small swap area for use with your applications.

In the case of EBS 12.2, this is especially true for the apps tier node.
On the apps tier, the swap area is generally kept at the minimum size.

On the other hand, I don't think this approach is right.

Let's take Solaris for example..

In the EBS 12.2 Solaris installation document, there is no swap space info for the apps tier.
That is, in the EBS 12.2 Solaris installation document, there is only one swap-space-related requirement and it applies to the database tier..

Interestingly, in the EBS 12.2 Linux installation document, we have a line saying "it is recommended that the swap space on the system be 16 GB or more".

Well.. for Solaris actually, this info is more crucial..

In Solaris 10 and 11, there is an important enhancement made for temporary file performance. That is, in Solaris 10 and 11, /tmp is mounted on swap.

Here is a "mount | grep tmp" output, taken from a Solaris 11.3 system;

/tmp on swap read/write/setuid/devices/rstchown/xattr/dev=90c0002 on Wed Oct 11 13:42:55 2017

You see /tmp is on swap!
So, when you use /tmp for storing some files, your swap is used in the background.

This means, if you configure a small swap area, you may end up with application errors.

Programs like FNDWRR.exe will definitely crash randomly, and even OS commands like top and ls will hang during these peaks.
When FNDWRR.exe crashes, you won't be able to see any concurrent request output.

So, what I recommend is to set the swap size to a value equal to or greater than your /tmp directory size.

Of course, if you have an unnecessarily big /tmp directory and your EBS application code uses only a small portion of it, then you can size your swap accordingly.
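To put this into practice, you can check the current swap and /tmp usage and, on Solaris 11, grow the swap ZFS volume. A rough sketch (the rpool/swap volume name and the 32g target are assumptions, adjust them to your system):

# current swap devices and overall swap usage
swap -l
swap -s
# how much of /tmp (and therefore of swap) is currently used
du -sk /tmp
# Solaris 11 keeps swap on a ZFS volume by default; growing it (assumed volume name: rpool/swap)
zfs get volsize rpool/swap
swap -d /dev/zvol/dsk/rpool/swap
zfs set volsize=32g rpool/swap
swap -a /dev/zvol/dsk/rpool/swap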

By the way, you may change this behaviour at the OS level.. On the other hand, as this /tmp-on-swap behaviour comes by default, I don't want to make a recommendation at the Solaris OS layer..

Anyway.. In conclusion, we use swap, and we use it very often in Solaris..

Lastly, the important point is that using swap is not a bad thing in all cases.. It doesn't always mean that we are screwed :)

Here we see an enhancement that stores temporary files on swap rather than on disk, and this increases the speed of accessing those files.

Friday, November 3, 2017

ODA X6 -- my review of Oracle Database Appliance(ODA) published on IT Central Station

This is the third review that I did for ITCentralstation.com. This time for ODA X6!

I have covered the valuable features of ODA, info about the initial setup, the things that Oracle can do for improvement, my advice to ODA customers and so on.

So, I think you will like it..

My review of Oracle Database Appliance X6 is now published and available for you to read at:

https://www.itcentralstation.com/product_reviews/oracle-database-appliance-review-46946-by-erman-arslan

By the way, ODA X7 has been released! (my followers probably know this, as it was already announced here earlier)
Today, I decided to write a more detailed blog post about it..
Stay tuned, an article about the new ODA Machine (ODA X7) is on its way :)


Monday, October 30, 2017

EBS 12.2 -- libjava.so problem in Solaris environment / libjava.so: open failed: No such file or directory

I have written before about the problems that I encountered on EBS 12.2 - Solaris Sparc environments.
I have also given you the solutions, which were found after a great deal of diagnostic work.
I have dealt with forms makefiles, reports makefiles and so on to make EBS 12.2 stable on Solaris 11.3.
Here is a blog post that I wrote about an interesting problem that I encountered during the initial installation of EBS 12.2 onto a Solaris 11.3 Operating System.
http://ermanarslan.blogspot.com.tr/2017/06/ebs-1220-installation-on-solaris-511.html

However, this one is more interesting :)
This time, I encountered a problem after the installation.
Note that this problem appears after running adop's cutover phase (even if you are on the latest AD and TXK levels).
It may also appear in a freshly cloned environment.

The problem was related to the Oracle Reports that comes built in with EBS.
When this problem is encountered, reports cannot be run. Reports-related concurrent requests cannot run successfully (they complete with error) and reports-related tools, such as rwconverter, cannot be executed.

They all failed with "libjava.so: open failed: No such file or directory".

Example of the error stack:

Error occurred during initialization of VM

Unable to load native library: ld.so.1: rwrun: fatal: /u01/app/fs1/EBSapps/10.1.2/jdk/jre/lib/libjava.so: open failed: No such file or directory


As for the diagnostics, I reviewed the ins_reports.mk and related env file.. Everything was okay and seemed fine. (sparcv9 related modifications were there already)

I also used the ldd command to check the related binaries and libraries. No traces of "/u01/app/fs1/EBSapps/10.1.2/jdk/jre/lib/libjava.so"...
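If you want to repeat that check, a minimal sketch (run it with the 10.1.2 environment sourced; the paths are the ones from this environment, adjust them to yours):

# which shared libraries do the reports executables resolve?
ldd $ORACLE_HOME/bin/rwrun | grep -i java
ldd $ORACLE_HOME/bin/rwconverter | grep -i java
# where does libjava.so actually live under the 10.1.2 JDK?
find $ORACLE_HOME/jdk -name libjava.so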

Actually, in Solaris, we have this libjava.so file in "<10.1.2_Oracle_HOME>/jdk/jre/lib/sparc/" and this location is correct, but somehow reports executables like rwconverter wanted to load it from "/u01/app/fs1/EBSapps/10.1.2/jdk/jre/lib".

I checked it from various places and concluded that this is not configurable..
But, at the end of the day, it was not normal and needed to be fixed...

Then I did some research on the libjava.so file and gathered the following info about it;

It is a shared library used when you need to invoke the Java Virtual Machine from your own code. For example: a C program that invokes the Java Virtual Machine and calls the Erman.main method defined in Erm.java..

So, in order to be able to do this, you need to compile your C program with the Java libraries that come with the JDK.
libjava.so is closely related to libjvm.so. You can think of it like this: one of them is for creating the virtual machine and the other one is for loading classes. Probably libjava.so is loaded during JVM startup..

So, once I gathered this info, I started to think that there might be a JDK-related problem, a wrong library link or something like that in this environment.

After trying lots of things (rebuilding reports, relinking binaries and so on), I decided to recreate the JDK that comes with the EBS 12.2 installation.
I targeted the JDK located in the 10.1.2 Oracle Home, because the problem was there.

As a solution, I did a fake JDK upgrade..

That is, I installed the same JDK version once again into the EBS 12.2's 10.1.2 Oracle Home using the document: "Using the Latest JDK 7.0 Update with Oracle E-Business Suite Release 12.2 (Doc ID 1530033.1)" -> "Section 4: Upgrading to Latest Java 7.0 in OracleAS 10.1.2 Oracle_Home"

Remember: startCD 12.2.0.47 or higher delivers JDK 7

For our case, it was 1.7.0.85

-bash-4.4$ ./java -version
java version "1.7.0_85"
Java(TM) SE Runtime Environment (build 1.7.0_85-b15)
Java HotSpot(TM) Server VM (build 24.85-b06, mixed mode)
-bash-4.4$ pwd
/u01/app/fs2/EBSapps/10.1.2/jdk/jre/bin

So, I downloaded JDK 1.7.0.85 for Solaris and took the following actions to install it.

source the run edition environment file
cd $ORACLE_HOME
$ mv jdk jdk_old 
$ mv jdk1.7.0_85 jdk 
$ rm -rf jdk_old

That was it.. This move solved the libjava.so problems.. (no other modifications were needed, no autoconfig, nothing)
So it was caused by a misconfiguration in the JDK, a wrong library link maybe..

What a hard issue it was... It was resolved easily, but the diagnostic work and the effort that I put into it were huge..

I hope you will find this undocumented solution useful. See you in my next blog post :)

One last thing: the issue is documented in "Unable To Load native library: ld.so.1: rwrun: fatal: (Doc ID 1529558.1)", but the solution provided there was odd and irrelevant, at least for this case..

Tuesday, October 24, 2017

FMW -- 12C Forms and Reports Cluster installation (2 node, High Available)

Recently, I installed a Forms & Reports 12c Cluster on Oracle Solaris servers.
This installation was actually a pure FMW installation, rather than the FMW deployment that comes built in with EBS.
It was a little tricky, but at the end of the day it completed successfully.

Here is the list of the components that were used along with their versions;
  • FMW 12.2.1.3 (latest version) for Solaris (FMW infra 12.2.1.3.0 )
  • Certified jdk: 1.8.0_131 or higher (64 bit)
  • Following Solaris packages (installed on Solaris servers):
SRU 11.3.3.6.0+
SUNWlibC

developer/assembler
libxp
motif
  • An Oracle database to be used for placing the RCU schemas. (11.2.0.4+)
  • Client Side - Browser: Microsoft Edge 40.*, Microsoft Internet Explorer 11.*, Google Chrome 60+, Mozilla Firefox 52+, Apple Safari 9.* or Apple Safari 10.*
  • RCU -> Required for the Forms and Reports RCU schemas. Its version should be the same as the FMW infra version. In 12c, RCU comes built in with FMW.. (no need to download it separately)
  • Forms_report_binary/installer : "Oracle Forms and Reports 12c (12.2.1.3.0)", for Solaris Sparc 64 bit 
I did the installation in the following steps..

Installation steps:
  1. Install FMW INFRA on both of the nodes.
  2. Install Forms and Reports on both of the nodes.
  3. Create Forms and Reports related Database schemas using RCU. (only on 1 node, first node)
  4. Configure Weblogic Domain using config.sh (only on one node, first node)
  5. PACK and UNPACK the domain (Pack on node1, unpack on node2)
  6. Start the services
  7. Do the tests and fix the problems (if they exist)
Okay.. Let's take a closer look at the installation process ->

FMW infra installation:

We first install FMW Infra (on both of the nodes)

-bash-4.4$ export JAVA_HOME=/u01/java/jdk1.8.0
-bash-4.4$ export PATH=$JAVA_HOME/bin:$PATH

Next, we install Form and Reports 12c (on both of the nodes)

Forms & Reports 12C installation:

We unzip the package in both of the nodes and run ./fmw_12.2.1.3.0_fr_solaris_sparc64.bin.. (this was a Solaris env)

Note: We first install the motif and libXp packages on both of our Solaris nodes so that we don't get forms makefile errors during the installation.

Next, we use RCU to place our Forms and Reports related schemas in to our database.. (RCU is executed on one of the nodes)

Using RCU to create the database schemas:

Note: We don't need to download RCU for FMW 12C.. RCU comes built in with FMW 12C.
Note: There is no need to create tablespaces for RCU schemas, beforehand.. RCU creates them during its run automatically.

cd /u01/FMWHOME/oracle_home/oracle_common/bin
./rcu

Next, we config our Weblogic Domain..
This step is done only on node 1.

config.sh run:

Note that, we have a bug to bypass..

FMW 12.2.1.3.0 ships with JDBC driver 12.2.0.1.0, where an ONS error can occur after driver FAN is auto-enabled.
The issue is reported in unpublished Bug 26045997 : ENABLING DRIVER FAN WITHOUT RUNNING ONS DAEMONS CAUSES CONNECT REQUEST ERROR.

In order not to get these ONS errors, we modify config_internal.sh (set fanEnabled to false). If we don't do this, we get ONS-related errors during the domain configuration.
Inside /u01/FMWHOME/oracle_home/oracle_common/common/bin/config_internal.sh ->

JVM_ARGS="-Dpython.cachedir=/tmp/cachedir ${JVM_D64} ${UTILS_MEM_ARGS} ${SECURITY_JVM_ARGS} ${CONFIG_JVM_ARGS}"
if [ -d "${JAVA_HOME}" ]; then
eval '"${JAVA_HOME}/bin/java"' -Doracle.jdbc.fanEnabled=false ${JVM_ARGS} com.oracle.cie.wizard.WizardController '"$@"' ${CAM_ARGUMENTS}
fi

Next we pack the domain from Node 1 and unpack it in Node 2 ..

PACK & UNPACK:

In NODE1 ->
cd /u01/FMWHOME/oracle_home/oracle_common/common/bin
./pack.sh -managed=true -domain=/u01/FMWHOME/oracle_home/user_projects/domains/base_domain -template=/u01/FMWHOME/frsdomain.jar -template_name=frsdomainTemplate

<< read domain from "/u01/FMWHOME/oracle_home/user_projects/domains/base_domain"
>> succeed: read domain from "/u01/FMWHOME/oracle_home/user_projects/domains/base_domain"
<< set config option Managed to "true"
>> succeed: set config option Managed to "true"
<< write template to "/u01/FMWHOME/frsdomain.jar"
..............................
>> succeed: write template to "/u01/FMWHOME/frsdomain.jar"
<< close template
>> succeed: close template

In NODE2  ->
scp oracle@192.168.1.69:/u01/FMWHOME/frsdomain.jar /u01/FMWHOME
The authenticity of host '192.168.1.69 (192.168.1.69)' can't be established.
RSA key fingerprint is 9f:3d:b4:10:60:a7:f0:1f:ba:bb:da:42:6f:6e:2e:c2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.69' (RSA) to the list of known hosts.
Password:
frsdomain.jar 100% |******************************************************************************************************************************| 1232 KB 00:00

cd /u01/FMWHOME/oracle_home/oracle_common/common/bin
./unpack.sh -domain=/u01/FMWHOME/oracle_home/user_projects/domains/base_domain -template=/u01/FMWHOME/frsdomain.jar -log_priority=DEBUG -log=/tmp/unpack.log -app_dir=/u01/FMWHOME/oracle_home/user_projects/applications/base_domain

<< read template from "/u01/FMWHOME/frsdomain.jar"
>> succeed: read template from "/u01/FMWHOME/frsdomain.jar"
<< set config option AppDir to "/u01/FMWHOME/oracle_home/user_projects/applications/base_domain"
>> succeed: set config option AppDir to "/u01/FMWHOME/oracle_home/user_projects/applications/base_domain"
<< set config option DomainName to "base_domain"
>> succeed: set config option DomainName to "base_domain"
>> validateConfig "KeyStorePasswords"
>> succeed: validateConfig "KeyStorePasswords"
<< write Domain to "/u01/FMWHOME/oracle_home/user_projects/domains/base_domain"
...........................................................................
>> succeed: write Domain to "/u01/FMWHOME/oracle_home/user_projects/domains/base_domain"
<< close template
>> succeed: close template

* At this point, we start our services.. (optionally, we may configure a load balancer in front of our Http Servers..)

After starting the services, we may get some errors actually. So, we may need to do some extra work to have a stable environment.

Here are my notes about these known issues:

Known Issues & Solutions:

Note 1:

Again: Bug 26045997 -- We disable the FAN/ONS in all the managed server nodes.



Note 2:

After pack and unpack operations, the config.xml is not built on node2. This can be a bug, but it is not documented.
In order to solve this, we start the managed servers using startManagedWeblogic.sh by specifying an ADMIN_URL.
After this move, the config.xml in node 2 gets created and subsequent start/stop operations can be done using the weblogic console.
In this step, we also do the FAN disabling thing in startManagedWeblogic.sh to disable ons..
Again: Bug 26045997

We write the following in startManagedWebLogic.sh ->

export JAVA_OPTIONS=" -Doracle.jdbc.fanEnabled=false":$JAVA_OPTIONS

Example of the start commands: ("script <managed_server_name> <admin_url>")

Examples:
sh startManagedWebLogic.sh WLS_FORMS1 http://forms01:7001
sh startManagedWebLogic.sh WLS_REPORTS1 http://forms01:7001

Note 3:

In some cases, the health state of the managed servers on node 2 can be "Not Reachable".
However, if they are started using startManagedWebLogic.sh, their status seems okay.
In other words, their health status becomes Not Reachable if they are started from the weblogic console.

In order to fix this, I first tried setting Invocation timeout to 10..
Weblogic Console > base domain > Configuration > General  > Advanced > Invocation Timeout Seconds = 10 
However, this move didn't solve it.

As for the solution, we set the listen address for the Admin server and restarted everything.

Note 4:

Again: Bug 26045997
We disable FAN/ONS in startWebLogic.sh as well.. If we don't do this, the admin server cannot be started because of ONS errors.

export JAVA_OPTIONS=" -Doracle.jdbc.fanEnabled=false":$JAVA_OPTIONS

Note 5:

In order to be able to start the OHS components without supplying the password every time, the following script is executed once..

/u01/FMWHOME/oracle_home/user_projects/domains/base_domain/bin/startComponent.sh ohs1 storeUserConfig
--we give the password when prompted and it is saved for subsequent executions.

Note 6:

We just can't start OHS on node2 (OHS2) using the startComponent.sh script. This is an Oracle restriction. That's why we need to use WLST or the FMW console.

Here is an example of doing it via WLST ->

cd $FMW_HOME/oracle_common/common/bin
./wlst.sh

WLST> nmConnect('nodemanager','xxxxx','forms02.oracle.com','5556','base_domain','/u01/FMWHOME/oracle_home/user_projects/domains/base_domain','ssl');  ## at this point, we connect to the node manager of node 2
nmStart(serverName='ohs2', serverType='OHS');

Note 7:

We also enable ssh equivalency between our nodes to ease the scripting work done for starting and stopping the whole stack from one node in one go (see the sketch below).
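As a rough sketch of such a start script (the WLS_FORMS2/WLS_REPORTS2 server names, log locations and host names are assumptions based on this setup; OHS2 is intentionally left out here, since it is started through the node manager via WLST as shown in Note 6):

#!/bin/bash
DOMAIN_HOME=/u01/FMWHOME/oracle_home/user_projects/domains/base_domain
ADMIN_URL=http://forms01:7001

# node 1: admin server, forms/reports managed servers and OHS1
nohup $DOMAIN_HOME/bin/startWebLogic.sh > /tmp/admin.log 2>&1 &
nohup $DOMAIN_HOME/bin/startManagedWebLogic.sh WLS_FORMS1 $ADMIN_URL > /tmp/forms1.log 2>&1 &
nohup $DOMAIN_HOME/bin/startManagedWebLogic.sh WLS_REPORTS1 $ADMIN_URL > /tmp/reports1.log 2>&1 &
$DOMAIN_HOME/bin/startComponent.sh ohs1

# node 2: managed servers, started over ssh (ssh equivalency is configured)
ssh oracle@forms02 "nohup $DOMAIN_HOME/bin/startManagedWebLogic.sh WLS_FORMS2 $ADMIN_URL > /tmp/forms2.log 2>&1 &"
ssh oracle@forms02 "nohup $DOMAIN_HOME/bin/startManagedWebLogic.sh WLS_REPORTS2 $ADMIN_URL > /tmp/reports2.log 2>&1 &"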

Well.. After the installation, we can check the forms and reports services to ensure that they are running successfully.

In order to do these tests, we use the following urls:

Forms: http://ip_address:port/forms/frmservlet
Reports: http://ip_address:port/reports/rwservlet
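A quick scripted check can stand in for the browser test; a small sketch (host names and ports are placeholders, adjust them to your managed server listen addresses):

# expect HTTP 200 from both servlets on each node (placeholder host:port values)
for url in http://forms01:9001/forms/frmservlet http://forms01:9002/reports/rwservlet; do
  echo "$url -> $(curl -s -o /dev/null -w '%{http_code}' "$url")"
done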

We do these tests for both of the nodes and expect the test pages to load successfully in our browsers.

That's it :)

Ohh, almost forgot.
Here is a handy script that I wrote for controlling this cluster and all the services across nodes easily ->
FMW -- Starting/ Stopping a 2 Node Forms&Reports 12C Cluster with a single command. SCRIPTS.. Automated start/stop for High Available FMW environments