Thursday, February 1, 2018

Exadata -- Elastic config, adding one compute and one cell node to Exadata X4-2

This post will be about Exadata, but it is different from my other Exadata blog posts, because this time I will fully concentrate on the hardware-related part of the work.

We recently added 1 compute node and 1 cell node to an Exadata X4-2 Quarter Rack. After these actions, the machine became elastically configured, as it was not an X4-2 Quarter Rack anymore.

If you are asking whether this is supported, my answer is yes! It is supported for X4-2 as well.
If you are asking about the whole process, here is the link: Extending Oracle Exadata Database Machine

The compute node and the cell node that we added were X7-2 models.

Note that, the newly added nodes had image version 18.1.0.0.0 installed on them. So we planned to upgrade the image versions of the existing nodes to 18.1.0.0.0, as it is recommended to use the same image version on all the nodes of an Exadata machine.

We planned this upgrade, but the first thing that we needed to do was to install these nodes physically into the Exadata machine, and today's blog post is specifically about that.

Okay.. Let's take a look at what we did to physically install the nodes and build an elastic configuration.

We first took the new servers/nodes out of their boxes.
In order to attach the new nodes to the Exadata rack, we installed the server rack rails and server slide rails that come with these new nodes.
After installing the rails, we installed the cable arms/cable organizers onto the new nodes.
With the rails and cable organizers in place, the new nodes were ready, and we slid them into the Exadata rack easily.

After physically installing these new servers, we first connected the power cables.
Note that we connected 2 power cables to each node (for high availability), and we connected each of these cables to a different PDU (Power Distribution Unit), PDU-1 and PDU-2.

We didn't install the InfiniBand cables; we left this work to be done in the image upgrade part of the work. On the other hand, we installed 2 SFP cards into the compute nodes (to be used for backups).

After these hardware-related installation actions were taken, we connected to the new nodes using the serial port. Over the serial port connection, we configured the ILOM network interfaces of these new nodes by executing the following commands;

set /SP/network pendingipdiscovery=static
set /SP/network pendingipaddress=<some_ip_address>
set /SP/network pendingipgateway=<gateway_ip_address>
set /SP/network pendingipnetmask=<netmask>
set /SP hostname=<somename-ilom>
set /SP/clients/ntp/server/1 address=<ntp_server_ip_address>
set /SP/clients/dns nameserver=<dns_server_1_ip>,<dns_server2_ip>

set /SP/network commitpending=true

Next, we connected the related network cables to the ILOM ports of these newly installed nodes and, lastly, we powered the new nodes on and checked their ILOM connectivity.
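
After that, a quick sanity check can be done from an admin host. Here is a minimal sketch (the IP placeholder matches the commands above; show /SP/network prints the committed network settings):

# ping the new ILOM interface
ping <some_ip_address>

# log in to the ILOM over ssh and verify the committed network settings
ssh root@<some_ip_address>
-> show /SP/network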

Tuesday, January 23, 2018

Exadata -- How to Connect Oracle Exadata to 10G Networks Using SFP Modules

In this post, I will share with you the way of activating SFP modules (SFP modules on Ethernet cards) in an Exadata X3-2 Quarter Rack environment.

As you may know, by default in Exadata X3-2, we configure our public network using the bondeth0 bond interface. This bondeth0 is actually a virtual bonded device built on top of 2 physical interfaces (eth1 and eth2). The bonding mode of bondeth0 is active-backup by default, and as for the speed, it relies on the speed of the underlying eth1 and eth2.

In Exadata X3-2, eth1 and eth2 are actually the OS interfaces for the underlying 1Gbit cards.
So this means that, by default, we are limited to 1Gbit interfaces.

Luckily, Exadata X3-2 also supports 10Gbit SFP interfaces (fiber). So if the Exadata that we are working with has the necessary SFP modules, then we can configure our public/client network to run on these 10Gbit SFP modules as well.

In order to activate these SFP modules, what we need to do is;

1) The first step is to purchase the proper SFP+ and fiber cables to make the uplink connection.

2) Then we plan a time to reconfigure bondeth0 to use eth4 / eth5 and reboot.

So, it seems simple, but it requires attention, since it requires OS admin skills.
Well. Here are the detailed steps;

First, we need to check the SFP modules and see the red light coming from them (the red light means the fiber link is up). Then, we connect the fiber cables to our SFP cards.

After that, we shut down our databases running on this Exadata and we shut down the Cluster services as well. We do all these operations using the admin network (we connect to the Exa nodes over the admin network interfaces, using the relevant hostnames).

After shutting down the Oracle Stack, we shut down the bondeth0, eth1 and eth2 interfaces.

Then we delete the ifcfg-eth1 and ifcfg-eth2 files (after taking a backup, of course).

After deleting the eth1 and eth2 conf files, we configure the eth4 and eth5 devices. We make them slaves of bondeth0 (eth4 and eth5 are the OS interfaces for these 10Gbit SFP cards in Exa X3-2 1/4). Note that our public/client network IP configuration is stored in bondeth0, so we just modify its slaves; we do not touch bondeth0 and the IP settings.
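
As a reference, here is a minimal sketch of what such a slave interface file might look like. The exact content should be derived from the backed-up eth1/eth2 files, so treat this as an assumption rather than the definitive layout:

# /etc/sysconfig/network-scripts/ifcfg-eth4 -- hypothetical sketch
DEVICE=eth4
ONBOOT=yes
BOOTPROTO=none
MASTER=bondeth0     # enslave this NIC to the existing bond
SLAVE=yes
HOTPLUG=no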

After the modifications, we start eth4, eth5 and bondeth0 (using ifup) and check their link status and their speeds using ethtool.

Once we confirm all the links are up and the bonding is okay (cat /proc/net/bonding/bondeth0), we reboot the Exadata nodes and wait for our cluster services to come up automatically again..
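
A verification sequence along these lines can be used (the interface names assume the X3-2 layout described above):

ifup eth4 && ifup eth5 && ifup bondeth0

# check the link state and the negotiated speed of the SFP interfaces
ethtool eth4 | egrep "Speed|Link detected"
ethtool eth5 | egrep "Speed|Link detected"

# confirm both slaves are attached and the active slave is up
cat /proc/net/bonding/bondeth0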


 So that's it :) 

Exadata -- Reimaging Oracle Exadata machines

Nowadays, I'm mostly working on Exadata deployments.. The deployments I'm mentioning are machine deployments, including reimaging, upgrading the image versions, the first deployments of new Exadata machines and so on.


I find it very enjoyable though, as these kinds of deployments make me recall my System Admin days, when I was mounting servers into rack cabinets, installing Operating Systems, doing cabling, administrating the SANs and so on :)

Of course, these new deployments, I mean these Exadata deployments, are much more complicated & complex than the server deployments that I was doing between the years 2006-2010.


Actually, the challenges we face during these deployments make these subjects more interesting and enjoyable, so that I can write blog posts about them :)

Today, I'm writing about an Exadata X6-2 reimaging that we did a few days ago.

The machine was an Exadata X6-2 1/8 HP and we needed to reimage it, as it was a secondhand machine that had been used in lots of POCs..

We start the work by running OEDA.
You can read about it at the following link: http://ermanarslan.blogspot.com.tr/2017/07/exadata-initial-deployment-oeda-and.html

So once we get the OEDA outputs that are required for imaging the Exadata machine, we continue by downloading the required files and, lastly, we follow the sequence below to reimage the machine..

  • First, we connect to the Cisco switch using the serial port (this is the switch used for the Admin network) and configure it according to the OEDA output.
  • Then we connect to the InfiniBand switches using their serial ports and configure them according to the OEDA output.
  • Next, we start up the virtual machine that we configured earlier for these deployments. This virtual machine gives us the ability to boot the Exa nodes using PXE; it has DHCP, PXE, TFTP and NFS services running on it.
  • So, once it is started up, we connect our virtual machine to one of the available ports on the Cisco switch and we configure it with the IP of one of our PDUs.. (the PDU IP addresses are available during the first deploy, so this move is safe) -- it is important to give the IP address according to the configuration that we made inside our virtual machine.. I mean, our virtual machine has services running on top of it.. So if these services are configured on a static IP, then we use that IP when configuring the virtual machine itself.
  • Next, we transfer the preconf.csv file, which is created by OEDA, to our virtual machine and edit the MAC addresses written in this file according to the MAC addresses of our compute and cell nodes. (The MAC addresses of the nodes are written on the front panels of the nodes.)
  • At this point, we connect to our compute and cell nodes using their ILOMs and set their first boot devices to PXE (see the sketch after this list). After this setting, we restart the nodes using ILOM reset commands.
  • When the machines are rebooted, they boot from the PXE devices and display the imaging menu that our virtual machine serves to them over PXE. This menu displays all the images which can be used for imaging both the cell and compute nodes.
  • Using this approach, we image the compute and cell nodes in parallel. I mean, we connect to the console of each node using their ILOMs, select the relevant image from the menu and start the installation.
  • Once the installation of the nodes is completed, we connect the client network to our Exadata machine. (admin network - to the CISCO switch, client network - directly to the Compute Nodes)
  • Lastly, just after imaging is finished, we continue by installing the GRID and RDBMS software. In order to do this, we transfer the GRID and RDBMS installation files + the onecommand utility + the OEDA outputs to the first compute node and then run the install.sh script, which is included in onecommand, to install our GRID and RDBMS. (As you may guess, one of the arguments for the install.sh script is the OEDA xml..)
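
For the PXE boot step above, the ILOM session could look roughly like this (a sketch; the exact property and target names can vary by ILOM version, so verify them against your ILOM documentation);

set /HOST boot_device=pxe      (make PXE the next boot device)
reset /SYS                     (power-cycle the node so it boots from the network)
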
That's it.. Our Exadata is ready to use :)

Tuesday, January 16, 2018

EBS 12.2.7 - Oracle VM Virtual Appliance for Oracle E-Business Suite 12.2.7 is now available!

Oracle VM Virtual Appliance for E-Business Suite Release 12.2.7 is now available from the Oracle Software Delivery Cloud!

You can use this appliance to create an Oracle E-Business Suite 12.2.7 Vision instance on a single virtual machine containing both the database tier and the application tier.

Monday, January 15, 2018

RDBMS -- diagnosing & solving "ORA-28750: unknown error" in UTL_HTTP - TLS communication

As you may remember, I wrote a blog post about this ORA-28750 before (in 2015):
http://ermanarslan.blogspot.com.tr/2015/09/rdbms-ora-28750-unkown-error-in-web.html

In that blog post, I attributed this issue to the lack of SHA-2 certificate support, and as for the solution, I recommended upgrading the database for the fix (this was tested and worked).
I also recommended using a GeoTrustSSLCA-G3 type server-side certificate as the workaround. (this was tested and worked)

Later on, last week, we encountered this error in an 11.2.0.4 database and the server-side certificate was a GeoTrustSSLCA-G3 certificate.. The code was doing "UTL_HTTP.begin_request" and failing with ORA-28750.
So, the fix and the workaround that I documented earlier were not applicable in this case.. (the DB was up-to-date and the certificate was already GeoTrust..G3)

As you may guess, this time a more detailed diagnostic was needed.

So we followed the note:

"How To Investigate And Troubleshoot SSL/TLS Issues on the DB Client SQLNet Layer (Doc ID 2238096.1)"

We took a tcpdump.. (filtering on the related IP addresses to have a consolidated tcp output..)

Example: tcpdump -i em1 dst 10.10.10.10 -s0 -w /tmp/erman_tcpdump.log

In order to see the character strings properly, we opened tcpdump's output using Wireshark.

When we opened the output with Wireshark, we concentrated on the TLS V1.2 protocol communication and we saw an ALERT just after the first HELLO message;


The problem was obvious.. The TLS V1.2 communication was failing with an "unsupported extension" alert.
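
By the way, the same check can be done from the command line with tshark, Wireshark's CLI companion. This is just a sketch; the capture file name follows the tcpdump example above, and note that on newer Wireshark releases the display filter is named "tls" rather than "ssl":

# list only the TLS/SSL records; the alert shows up right after the Client Hello
tshark -r /tmp/erman_tcpdump.log -Y "ssl"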

This error redirected us to the Document named:  UTL_HTTP : ORA-28750: unknown error | unsupported extension (Doc ID 2174046.1)

This document was basically saying "apply patch 23115139"; however, this patch was not released for Oracle Database 11.2.0.4 running on Linux x86-64. In addition to that, our PSU version was 11.2.0.4.171017 and the patch was not built on top of it.

So we needed to find another patch which includes the same fix and which is appropriate for our DB & PSU version..

Now look what we found :) ;

Patch 27194186: MERGE REQUEST ON TOP OF DATABASE PSU 11.2.0.4.171017 FOR BUGS 23115139 26963526

Well.. We applied patch 27194186 and our problem was solved.

Now, with the help of this issue and its resolution, I can give 2 important messages;

1) Use Wireshark or a similar tool to analyze the tcpdump outputs. (analyze the dumps by concentrating on the TLS protocol messages)

2) Don't surrender even when the patch that is recommended by the Oracle documents isn't compatible with your RDBMS and PSU versions..
Most of the time, you can find another patch (maybe a merge patch) which is compatible with your RDBMS & PSU versions, and that patch may include the same fix + more :)

Monday, December 25, 2017

Erman Arslan is now an Oracle ACE!

I'm an Oracle ACE now! Today is my birthday, and this is the best birthday gift ever! :)
I have been writing this blog since 2013 and, thanks to my passion for writing, I wrote the book (Practical Oracle E-Business Suite) with my friend Zaheer Syed last year.
I aimed to share my knowledge with all my followers around the world and to keep up with the new developments in Oracle technologies.
I spent a significant amount of time giving voluntary support on my forum and did several Oracle projects at customer sites in parallel to that.
My primary focus was on EBS, but I was also researching and doing projects on Exadata, Oracle Linux, OVM, ODA, Weblogic and many other Oracle technologies.
I'm still working with the same self-sacrifice as when I started as an Oracle DBA in the year 2006, and I'm still learning, implementing and explaining Oracle solutions with the same motivation that I had in the first years of my career.

I want to send my special thanks to Mr. Hasan Tonguç Yılmaz, who nominated me to become an Oracle ACE. I offer my respect to Mr. Alp Çakar, Mr. Murat Gökçe and Mr. Burak Görsev, who have directly or indirectly supported me along the way.


Friday, December 22, 2017

Goldengate -- UROWID column performance

As you may guess, these blog posts will be the last blog posts of the year :)

This one is for Goldengate.

I recently started working for a new company, and nowadays I deal with Exadata machines and Goldengate more often.

Yesterday, I analyzed a customer environment where Goldengate was not performing well.

That is, there was a lag/gap reported in a target database, in which 55-60 tables were populated by Goldengate 12.2.

When we analyzed the environment, we saw that it was not the extract process or the network that was causing the issue.

The REPLICAT process was also looking good at first glance, as it was performing well on its trail files.

However, when we checked the db side, we saw that there was a lag of around 80 hours.. So the target db was behind the source db by 80 hours.

We analyzed the target database, because we thought that it might be the cause.. I mean, there could be some PK or FK missing in the target environment (if the keys are missing, this can be a trouble in Goldengate replications). However, we concluded that no keys were missing.

In addition to that, we analyzed the AWR reports, we analyzed the db structure using various client tools (like TOAD) and we checked the db parameters, but -> all were fine..

Both the source and target databases were on Exadata. The AWR reports were clean. The load average was so low that the machine was practically sleeping, and there were almost no active sessions in the database (when we analyzed it in real time).

Then we checked the Goldengate process reports and saw that the REPLICAT was performing very slowly.

It was doing 80 tps, but it should have been around 10000 tps in this environment..

At that moment, we followed the note below and checked the replicat accordingly:
(Excessive REPLICAT LAG Times (Doc ID 962592.1))
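
For reference, the replicat lag and throughput can be checked from GGSCI along these lines (a sketch; REP1 and the installation directory are hypothetical placeholders):

cd /u01/app/goldengate      # hypothetical GoldenGate home
./ggsci
GGSCI> INFO REPLICAT REP1, DETAIL
GGSCI> LAG REPLICAT REP1
GGSCI> STATS REPLICAT REP1, TOTALSONLY *, REPORTRATE SEC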

We considered the things in the following list, as well:
  • Preventing full table scans in the absence of keys, by using KEYCOLS
  • Splitting large transactions
  • MAXTRANSOPS
  • MAXSQLSTATEMENTS
  • Improve update speed - redefine tables - stop and start replicat
  • Ensure effective execution plans by keeping fresh statistics
  • Set Replicat transaction timeout
Unfortunately, no matter what we did, the lag kept increasing..

Then, fortunately :), we saw that all of these 55-60 tables in the target db had columns of the type UROWID..

These columns were recently added to the tables by the customer.

We also discovered that, this performance issue have started, after these columns were added.

We wanted to change the column type, because these UROWID columns had only recently become supported by Goldengate..

ROWID/UROWID Support for GoldenGate (Doc ID 2172173.1)

So we thought that these columns might be causing the REPLICAT to perform this poorly.

The customer was using these columns to identify PK changes, and they agreed to change the type of these columns to VARCHAR2.

As for the solution, we changed the type of those columns to VARCHAR2 by creating empty tables and transferring the data using INSERT /*+ APPEND */ statements.
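
A minimal sketch of this conversion from the shell, via sqlplus (the table and column names here are hypothetical; the real change was done table by table on the target side):

sqlplus -s / as sysdba <<EOF
-- empty copy of the table, with the UROWID column redefined as VARCHAR2
CREATE TABLE erman_tab_new (
  id      NUMBER,
  row_ref VARCHAR2(4000)   -- was UROWID on the original table
);

-- direct path load; UROWID values convert implicitly to VARCHAR2
INSERT /*+ APPEND */ INTO erman_tab_new
  SELECT id, row_ref FROM erman_tab;
COMMIT;
EXIT;
EOF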

Thanks to EXADATA, it didn't take a lot of our time, and thanks to Oracle Database 12c, we didn't need to gather statistics on these new tables, since in 12c this is done automatically during CTAS and INSERT /*+ APPEND */ loads..

After changing the column type of those tables, we restarted the REPLICAT and the lag disappeared within 2 hours.

So, be careful when using UROWID columns in a Goldengate environment..

Monday, November 20, 2017

EBS 12.2 -- Solaris Sparc relinking reports, executing make -f from lib or lib32 directory? + an important info about the ins_reports.mk and the env settings

I'm writing this post because I found a mismatch in the info delivered by the Oracle Support documents.
According to some of those documents, reports should be relinked from the lib32 directory (if that directory is present), like this ->

cd $ORACLE_HOME/reports/lib32
--Note: if this directory does not exist: --
cd $ORACLE_HOME/reports/lib
$ make -f ins_reports.mk install

However, even if you relink the reports binaries from the lib32 directory, you may end up with the following error in your concurrent manager log files (when running reports-based concurrent programs) ->

Program exited with status 1
Concurrent Manager encountered an error while running Oracle*Report for your concurrent request XXXXXXX


So, this error brings you to the following document:

E-Business Suite R12 : Oracle Reports (RDF) Completing In Error "Program Exited With Status 1" Specifically on Oracle Solaris on SPARC OS Installation (Doc ID 2312459.1)

Actually, what is underlined in this document is the famous environment setting (LD_OPTIONS="-L/lib -L/lib/sparcv9").

However, look at what document 2312459.1 also says, 

Navigate to this directory $ORACLE_HOME/reports/lib and compile the reports executable make command "$ make -f ins_reports.mk install"

So, the document says relink your reports from the lib directory (not from the lib32 directory).

At the end of the day, it does not matter whether you relink the reports binaries from the lib32 or the lib directory on Solaris SPARC.

The important things are having a good/clean ins_reports.mk and the necessary LD_OPTIONS environment setting.

What I mean by a clean ins_reports.mk is -> a default ins_reports.mk as deployed by the EBS installer. No LD_OPTIONS inside that file!
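
A quick grep can verify that (assuming the makefile location used throughout this post):

# should return nothing on a clean, default ins_reports.mk
grep LD_OPTIONS $ORACLE_HOME/reports/lib/ins_reports.mk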

You may recall that, I wrote a blog post for the error that we usually get in EBS 12.2 Solaris installation - > http://ermanarslan.blogspot.com.tr/2017/06/ebs-1220-installation-on-solaris-511.html

In that blog post, I was modifying the ins_reports.mk inside the stage. I wrote LD_OPTIONS inside the ins_reports.mk file and that way I managed to fix the errors in the runInstaller screens. This was the one and only solution, as the runInstaller could not get the LD_OPTIONS env setting from the terminal where I executed it.

However, this was just for the installation.

So after the installation, we need to delete these modifications and directly use the LD_OPTIONS env setting for our future reports relink actions, as shown below.

export LD_OPTIONS="-L/lib -L/lib/sparcv9"
cd $ORACLE_HOME/reports/lib
make -f ins_reports.mk install

or

export LD_OPTIONS="-L/lib -L/lib/sparcv9"
cd $ORACLE_HOME/reports/lib32
make -f ins_reports.mk install

Both work..

This is based on a true story and this is the tip of the day.

Wednesday, November 15, 2017

ODA X7-2 -- a closer look //Oracle Database Appliance X7-2 Family

The adventure of ODA started with the machine named Oracle Database Appliance in 2011, and a new model has been released almost every year since then.

ODA X3-2 in 2013, ODA X4-2 in 2014, ODA X5-2 in 2015 and ODA X6-2 in 2016.

Today, I'm writing about the newest model, which was released recently.. the "ODA X7-2".



In ODA X6-2, we were introduced to the S (Small), M (Medium) and L (Large) types of the ODA family. That is, the standard ODA became the ODA HA, and these S, M and L types were added to the family.

In ODA X7-2, Oracle decided to remove the Large model from the product family and released the new machine with 3 models: S, M and HA.


Actually, there are several enhancements delivered with the new ODA X7-2, but the most interesting enhancement of all is that we can build Standard Edition RAC databases on ODA X7-2 machines (the most interesting new feature, in my opinion).

General enhancements:
  • 3 new models: ODA X7-2S, ODA X7-2M, ODA X7-2HA
  • Increased storage capacity
  • Virtualization option for all the models.
  • Standard Edition RAC support
  • New Intel Xeon processors
  • Increased core count
  • Oracle database 12.2.0.1
General Specs:

Oracle Database Appliance X7-2S
Single-instance
SE/SE1/SE2 or EE
Virtualization
10 Cores
12.8 TB Data Storage (Raw)

Oracle Database Appliance X7-2M
Single-instance
SE/SE1/SE2 or EE
Virtualization
36 Cores
Up to 51.2 TB Data Storage (Raw)

Oracle Database Appliance X7-2HA
RAC, RAC One, SI
SE/SE1/SE2 or EE
Virtualization
72 Cores
Up to 128 TB SSD or 300 TB HDD Data Storage (Raw)

NVME:

As we had in ODA X6-2, we have NVMe storage for the X7-2S and M models.

Appliance Manager and ACFS:

We also have the Appliance Manager for proactive management and we have ACFS as the filesystem.
Again, we have odacli as the command line management interface.


ODA X7-2 HA provides us the ability to build a highly available environment. Again, we are using ACFS for the filesystem, but this time we have SSD disks in the backend (or SSD+HDD disks).


Supported Versions:

ODA X7-2 supports almost all the up-to-date Oracle Versions.

Enterprise Edition – 11.2.0.4, 12.1.0.2, 12.2.0.1
Standard Edition (SE, SE1, SE2)  – 11.2.0.4, 12.1.0.2
If you need the Database options, then you need to go with EE.
http://docs.oracle.com/database/121/DBLIC/options.htm Advanced Security Option, In-Memory, Multitenant, ...

About Licensing:

Standard Edition 2 RAC support is limited to 2-socket db servers.
So, we have SE2 RAC support in ODA X7-2 HA (with Oracle VM / OVM).
In order to run an SE2 RAC database on ODA X7-2 HA, we use a maximum of 18 cores in ODA_BASE. The remaining cores can be used by the virtual application servers that can be built on top of ODA X7-2 HA (using OVM).

In SE RAC licensing, on the other hand, we have no socket limit.
So, if we have an SE RAC license, then we can have an SE RAC database in the virtualized environment or directly on Bare Metal.
If we want to have both SE RAC and SE2 RAC databases on ODA X7-2 HA, then we need to build a virtualized environment and we need to use a maximum of 18 cores in ODA_BASE.

Capacity-on-demand:

ODA X7-2 has the capacity-on-demand feature as well.
We use the Appliance Manager for configuring our core counts.
The enabled core count must be an even number between 2 and 36 (2, 4, 6, ..., 36).
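
For example, the core count can be adjusted with odacli like this (a sketch; check the odacli command reference of your ODA software version before running it):

# enable 8 CPU cores (capacity-on-demand)
odacli update-cpucore -c 8

# verify the resulting core configuration
odacli describe-cpucore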

ACFS snapshots:

In ODA X7-2, we have ACFS.. So, this new release of ODA offers ACFS-snapshot-based quick and space-efficient database copies, and ACFS-snapshot-based rapid cloning of virtual machines.
This is also one of the exciting pieces of news..
It was actually there in ODA X6-2, but it wasn't documented clearly.

Here in the following blog post, you can see a demo that I did for ODA X6-2;

http://ermanarslan.blogspot.com.tr/2017/02/oda-x6-2-using-acfs-snapshots-for.html

Oracle VM Server and KVM (Kernel Based Virtual Machine):

The virtualization technology used in ODA X7-2 HA Virtualized environment is still Oracle VM Server.
However, we have KVM (Kernel Based Virtual Machine) for virtualizing ODA X7-2S and ODA X7-2M environments..

We can even use KVM to have a virtualized environment on top of ODA X7-2 HA Bare Metal..

There are some limitations for KVM though;

  • You can run only a Linux OS inside the KVM guest machines.
  • It is not supported to run Oracle database on top of Guest KVM machines.
  • There is no capacity-on-demand option for databases and applications that are running on Guest KVM machines. 
You can read my blog posts to get the details about KVM and about enabling KVM..



Note that the above links are for the ODA X6-2M, but the process is almost the same..

Auto starting VMs , Failover for VMs:

In ODA X7-2 HA virtualized deployments, we still get the ODA_BASE-based configuration for the Oracle VM Server.
The Oracle VM Server deployed with ODA gives us the opportunity to auto-restart the VMs and to use the standard failover capabilities for those VMs.
We may configure our VMs to be started on the same node or on the surviving node in case of a failure (without any manual intervention).

ASR, oakcli orachk and odaadmcli manage diagcollect:

In ODA X7-2, we have the opportunity to use ASR (Automatic Service Request) as well. ASR monitors the machine and creates an automatic service request (an Oracle SR) on any hardware component failure.
In addition to that, we have the "oakcli orachk" command for checking all the hardware and software units of ODA. So, using oakcli, we can validate the hardware and software components and quickly identify any anomaly or violation of best-practice compliance.
Moreover, we can collect the diagnostics by running a single command -> odaadmcli manage diagcollect.

So, at the bottom line, this new ODA X7-2 is not breaking the tradition... ODA is still easy to deploy, patch and manage. It is still affordable and optimized.

I think I will work on this new ODA in at least one of the upcoming projects this year.. So we will see :)

Monday, November 13, 2017

EBS 12.2 -- Solaris Apps Tier, "setting the Swap Space" , "/tmp on swap"

According to some administrators, swap should not be used in a production system at all. They may say "if we are on swap (using swap at any time), we're screwed".

With this in mind, they may not configure any swap at all, or they may give you a very small swap area for use with your applications.

In the case of EBS 12.2, this is especially true for the apps tier node.
On the apps tier, the swap area is generally kept at the minimum size.

On the other hand, I don't think this is right.

Let's take Solaris for example..

In the EBS 12.2 Solaris installation document, there is no info about swap space for the apps tier.

That is, in the EBS 12.2 Solaris installation document, there is only one swap-space-related requirement, and it applies to the database tier.

Interestingly, in the EBS 12.2 Linux installation document, there is a line saying "it is recommended that the swap space on the system be 16 GB or more".

Well.. for Solaris, this info is actually even more crucial..

In Solaris 10 and 11, there is an important enhancement made for temporary file performance. That is, in Solaris 10 and 11, /tmp is mounted on swap.

Here is a "mount | grep tmp" output, taken from a Solaris 11.3;

/tmp on swap read/write/setuid/devices/rstchown/xattr/dev=90c0002 on Wed Oct 11 13:42:55 2017

You see, /tmp is on swap!
So, when you use /tmp for storing some files, your swap is used in the background.
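
So it is worth watching /tmp and swap together. On Solaris, a quick check can be done with the standard utilities below:

# summary of virtual swap usage (allocated, reserved, available)
swap -s

# the physical swap devices and their free space
swap -l

# how much of /tmp (and therefore of swap) is in use
df -h /tmp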

This means that if you configure a small swap area, you may end up with application errors.

Programs like FNDWRR.exe will definitely crash randomly, and even OS commands such as top and ls will hang during these peaks.
When FNDWRR.exe crashes, you won't be able to see any concurrent output.

So, what I recommend is to set the swap size to a value which is equal to or greater than your /tmp directory size.

Of course, if you have an unnecessarily big /tmp directory and your EBS application code uses only a very small portion of it, then you can arrange your swap size accordingly.

By the way, you may change this behaviour at the OS level. On the other hand, as this /tmp-on-swap thing comes by default, I don't want to make a recommendation on the Solaris OS layer..

Anyways.. In conclusion, we use swap and we use it very often in Solaris..

Lastly, the important point is: using swap is not a bad thing in all cases.. It doesn't always mean that we are screwed :)

Here we see an enhancement that stores temporary files on swap rather than on disk, and this increases the speed of accessing those files.