Thursday, November 28, 2013

EBS - Oracle Application Concepts Presentation

This presentation was prepared by my colleague, Hülya Yılmaz, to create a general understanding of Oracle EBS technologies, as well as to give information about the tools that a developer may need to use within the EBS system.

It was presented to EBS developers and functional users. It covers general information about topics such as general architecture, file locations, cloning, gathering diagnostics, tracing, and monitoring.


Wednesday, November 27, 2013

RDBMS -- Oracle Database Presentation

Actually, I prepared and presented this to developers who have been using Oracle Database for development and data analysis. I tried to give them a DBA perspective.
In this presentation, you can find the following topics;



Click here to reach the slides of the presentation..

Presentation Name :
Oracle Database for Developers
File Name: ORACLE_DATABASE_snip_by_Earslan.ppsx

Contents:

Primary Components
Oracle Instance
Oracle Database
Memory Architecture (PGA, SGA, shared pool, buffer cache, ...)
Processes (user, server, background)
Diagnostic Files
Trace Levels (Tracefiles, tkprof)
Redo & Undo
Commit
Async Commit
Rollback
Row Migration & Chaining
HWM
Indexes (stats, constraints, clustering factor, indexes and the CBO, notes)
Locks (blocking locks, identifying, v$lock)
CBO
Autotrace (SQL*Plus)
Execution Plan / Explain Plan
Hints
Dictionary

References:
Tom Kyte // asktom.oracle.com
Richard Foote // Oracle Index Internals
Donald Burleson // dba-oracle.com
Pavan Kumar // pavandba.com
Natalka Roshak // What is blocking my lock?
Michael Messina // Indiana University
Oracle // docs.oracle.com

Sunday, November 24, 2013

Database-- Oracle, allocating/deallocating/resizing shared memory, Sga_target, Investigating Oracle shm allocation.

We know that Oracle introduced the sga_target parameter to manage the buffer cache (DB_CACHE_SIZE), shared pool (SHARED_POOL_SIZE), large pool (LARGE_POOL_SIZE) and Java pool (JAVA_POOL_SIZE). It is a key concept in Automatic Shared Memory Management (ASMM).
As this parameter is well known by most of the community, I will not go into details about it, except for one thing: reducing memory usage using the sga_target parameter.

Actually, the reason that made me write this post is shared memory. I was testing some memory boundaries and behaviours of the operating system in case of a shared memory deallocation. I was working on it because the information was needed to analyze the performance of an EBS production system. So, I was working on a 64-bit Linux, writing a program to allocate some shared memory...
I used the SYSV-compatible shmget and allocated some shared memory. I could use shmctl to deallocate this shared memory without any problems.


I could use ipcs on Linux to display the shared memory segments and observe the allocation and deallocation activity.
On the other hand, I realized that I could not resize the allocated shared memory on-the-fly. It seems that with shmget you can't do this on Linux; there is no option to resize a shared memory segment allocated through shmget. On AIX, shmctl accepts a SHM_SIZE command for resizing, but it seems that is not available on Linux.

So, I used the following C code to allocate and deallocate an approx. 1.5 GB shared memory segment.

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>   /* for sleep() */

#define SHMSZ 1589934592   /* approx. 1.5 GB */

int main()
{
int shmid;
int shamerman;
key_t key = 5690;
/* allocate the shared memory segment */
shmid = shmget(key, SHMSZ, 0666 | IPC_CREAT);
fprintf(stdout, "%d\n", shmid);
/* note: there is no SHM_SIZE command for shmctl on Linux,
   so the segment cannot be resized in place here */
sleep(3);
/* deallocate the shared memory segment */
if ((shamerman = shmctl(shmid, IPC_RMID, 0)) != 0)
{
fprintf(stderr, "ERROR with shmctl (%d: %s)\n", (int)(errno), strerror(errno));
return 1;
}
return 0;
}

I could see the shared memory segment that I allocated in the ipcs output.
This server was an Oracle Database server, and I saw the shared memory segments allocated by the running Oracle Database, too. Oracle had allocated approximately 7 x 2 GB shared memory segments (14 GB in total). I checked its SGA to be sure: both the sga_target and sga_max_size parameters were set to 14 GB. On the other hand, the kernel parameter shmmax was set to 2 GB. Since shmmax describes the maximum size of a single shared memory segment, that was why Oracle allocated the 14 GB SGA in 7 pieces (14 GB / 2 GB = 7 segments).

Let's come back to our actual topic: I was allocating and deallocating shared memory segments, and I could do it with shmget. But the one thing I could not do with it is resize a shared segment on-the-fly. Of course, I could deallocate and reallocate a segment with a new size, but this is not satisfying.

To be able to resize a shared memory segment, I decided to use the POSIX memory operation functions shm_open (create or gain access to a shared memory object) and ftruncate (truncate a file to a specified length).
shm_open uses a file-based approach for shared memory operations; it basically relies on tmpfs (/dev/shm) to function. Because it uses a file-based architecture, there is an opportunity to use ftruncate to resize the file, which is actually tied to a shared memory segment. By doing this, a shared memory segment can be resized on-the-fly.

I used the following C program, which uses the shm_open and ftruncate functions to allocate and resize the shared memory segment;
Note that this program should be compiled using gcc -lrt; the -lrt argument links the realtime library (librt), which provides shm_open on Linux.

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>        /* For mode constants */
#include <fcntl.h>
#include <unistd.h>          /* for ftruncate() and sleep() */

int main()
{
int *result = NULL;
int integerSize = 300000;   /* initial size of the segment, in bytes */
/* create (or open) a shared memory object backed by /dev/shm */
int descriptor = shm_open("/shmerman", O_CREAT | O_RDWR, S_IRUSR | S_IWUSR);
fprintf(stderr, "shm_open returned %d (%d: %s)\n", descriptor, (int)(errno), strerror(errno));
/* set the initial size of the shared memory object */
ftruncate(descriptor, integerSize);
fprintf(stderr, "ftruncate status (%d: %s)\n", (int)(errno), strerror(errno));
/* map the object into the process address space */
result = mmap(NULL, integerSize, PROT_WRITE | PROT_READ, MAP_SHARED, descriptor, 0);
sleep(10);
fprintf(stderr, "NOW RESIZING\n");
sleep(10);
/* resize the shared memory object on-the-fly */
ftruncate(descriptor, 600000);
return 0;
}


This time, I checked the file named shmerman, located under /dev/shm, and saw that its size was 300000 bytes after creation and that it jumped to 600000 bytes after the sleep calls.
Note that you can't see a shared memory segment allocated by shm_open in the ipcs output; you check these segments through the /dev/shm filesystem.

In 11g we have memory_target, and I believe Oracle uses a similar method to the one I just described to manage the shared memory. As you know, to be able to use the method above you need to have /dev/shm; indeed, /dev/shm is required for using memory_target in Oracle.
Beyond that, I think (and may have read somewhere) that Oracle uses /dev/shm to pass memory between the SGA (shared memory) and the PGA (process memory). I think it just gives the unused shared memory back to the OS, and the OS gives it to the processes when they need it. Giving back the unused memory should be enough for managing the PGA, as the PGA is not a hard limit and is not a fixed area residing in memory.

So far so good. On the other hand, Oracle 10g does not use memory_target, so naturally it does not use /dev/shm-based memory access.
The ipcs output shows that it uses shmget to get the needed shared segments. Also, I sent a mail to Tanel Poder regarding this topic, and he said that by running strace on the Oracle startup, the shmget call can be seen.

Following was taken in startup process of an Oracle 10gR2 database.

open("/proc/sys/kernel/shmmax", O_RDONLY) = 2   --> It reads the shmmax to obtain the max segment size.
[pid 12584] shmget(2505607756, 6444548096, IPC_CREAT|IPC_EXCL|SHM_HUGETLB|0600) = -1 ENOMEM (Cannot allocate memory)  --> It tries the Hugetlbs and can not get them as it s not configured.
[pid 12584] shmget(2505607756, 6444548096, IPC_CREAT|IPC_EXCL|0660) = 557059, then it uses small pages to allocate the 6gb sga , which shmid= 557059

shmat(557059, 0x7fbfffdf01, 0) = ?  --> it attaches to the allocated shared memory

So it gets the shared memory using the shmget call.

Here is the ipcs command output; the shmid in the output is the same as the one returned from shmget (557059):

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status      
0xb92f91b4 524290     proddb    660        526385152  27                      
0x95588a4c 557059     proddb    660        6444548096 12                      
0x00000000 360452     proddb    644        151552     2          dest         
0x00000000 393221     proddb    644        122880     2          dest         
0x00000000 425990     proddb    644        790528     2          dest 

So it's clear that Oracle 10g uses shmget to allocate the shared memory segments.
The question is: how can it resize them?
As we know, we can decrease the size of the SGA on-the-fly using the sga_target parameter.

I tested and analyzed it to satisfy my curiosity;

1) sga_target = sga_max_size
First I tested this functionality on an Oracle Database which has a 6 GB SGA capacity and is configured to use all 6 GB of it (sga_max_size = 6gb, sga_target = 6gb).
The parameter lock_sga was FALSE. Note that LOCK_SGA locks the entire SGA into physical memory; it is usually advisable to lock the SGA into real (physical) memory, as it prevents paging. I checked this parameter because it can mislead us here.

After the startup of the instance, I saw that the shared memory segment size was 6 GB. So it's clear that Oracle allocated 6 GB of shared memory (note that sga_target is 6 GB, too).

Here is the ipcs command output , just after startup;

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status          
0x95588a4c 557059     proddb    660        6444548096 16                    
0x00000000 360452     proddb    644        151552     2          dest      
0x00000000 393221     proddb    644        122880     2          dest      
0x00000000 425990     proddb    644        790528     2          dest

Look at the following Oracle process: it has 6325m of virtual memory, i.e. 6 GB plus some MBs. So the Oracle process is prepared to be able to access 6 GB of memory at the present moment.

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                                                        
12606 proddb    16   0 6325m 1.3g 1.2g S  0.0  7.1   0:15.36 oracle

The output of the free command agrees: of the 18007 MB shown as used, 17064 MB is page cache and 227 MB is buffers, so the real usage is approximately 18007 - (17064 + 227) = 716 MB (see the -/+ buffers/cache line).
free -m
             total       used       free     shared    buffers     cached
Mem:         18049      18007         41          0        227      17064
-/+ buffers/cache:        716      17333
Swap:         8001          0       8000


Let's look at the shared memory components of Oracle. It has an SGA configured to 6 GB:
5301600256 bytes of buffer cache, 1056964608 bytes of shared pool, and so on.

SQL> show sga

Total System Global Area 6442450944 bytes
Fixed Size                  2093544 bytes
Variable Size            1124077080 bytes
Database Buffers         5301600256 bytes
Redo Buffers               14680064 bytes

SQL> select * from v$sgainfo;

NAME                                  BYTES RES
-------------------------------- ---------- ---
Fixed SGA Size                      2093544 No
Redo Buffers                       14680064 No
Buffer Cache Size                5301600256 Yes
Shared Pool Size                 1056964608 Yes
Large Pool Size                    16777216 Yes
Java Pool Size                     16777216 Yes
Streams Pool Size                  33554432 Yes
Granule Size                       16777216 No
Maximum SGA Size                 6442450944 No
Startup overhead in Shared Pool   167772160 No
Free SGA Memory Available                 0

So everything is just as expected here: we use all of the SGA, and Free SGA Memory Available = 0.

2) sga_target < sga_max_size
Next I tested this functionality on an Oracle Database which has a 6 GB SGA capacity but is configured to use 4 GB of it (sga_max_size = 6gb, sga_target = 4gb).

Here is the ipcs output just after restart. Oracle still allocates a 6 GB SGA segment, but this does not mean that all of this memory is initialized and taken from physical memory.

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status                    
0x95588a4c 589827     proddb    660        6444548096 14                    
0x00000000 360452     proddb    644        151552     2          dest      
0x00000000 393221     proddb    644        122880     2          dest      
0x00000000 425990     proddb    644        790528     2          dest      

Oracle processes still map more than 6 GB of memory; the VIRT column supplies this information. Virtual memory is a memory map that can point to physical memory, swap space, or memory "to be allocated".

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                                                        
14231 proddb    16   0 6284m  38m  34m S  0.0  0.2   0:00.06 oracleDEV (LOCAL=NO)                                                                                                            
$ free -m
             total       used       free     shared    buffers     cached
Mem:         18049      18016         32          0        229      17048
-/+ buffers/cache:        737      17311
Swap:         8001          0       8000

       
SQL> show sga

Total System Global Area 6442450944 bytes
Fixed Size                  2093584 bytes
Variable Size            2986348016 bytes
Database Buffers         3439329280 bytes
Redo Buffers               14680064 bytes

SQL> select * from v$sgainfo;

NAME                                  BYTES RES
-------------------------------- ---------- ---
Fixed SGA Size                      2093584 No
Redo Buffers                       14680064 No
Buffer Cache Size                3439329280 Yes
Shared Pool Size                  771751936 Yes
Large Pool Size                    16777216 Yes
Java Pool Size                     16777216 Yes
Streams Pool Size                  33554432 Yes
Granule Size                       16777216 No
Maximum SGA Size                 6442450944 No
Startup overhead in Shared Pool   134217728 No
Free SGA Memory Available        2147483648


As you see above, Oracle allocated a 6 GB SGA. The buffer cache size is about 3.2 GB and the variable size is about 2.78 GB.
It looks the same whether we use sga_target = 4gb or sga_target = 6gb, but it is not the same:
look at Free SGA Memory Available; it is 2147483648 bytes, which is 2 GB.

3) Let's reduce the sga_target value from 4 GB to 2 GB on this running instance
(sga_max_size = 6gb, sga_target = 2gb, reduced from 4gb without restarting the database).

SQL> alter system set sga_target=2g;            
System altered.

The ipcs output shows the same; the 6 GB shared memory segment is still allocated.

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status          
0x95588a4c 589827     proddb    660        6444548096 15                    
0x00000000 360452     proddb    644        151552     2          dest      
0x00000000 393221     proddb    644        122880     2          dest      
0x00000000 425990     proddb    644        790528     2          dest

The top command shows the same; the virtual memory of a given Oracle process is approx. 6 GB.

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                                                        
13611 proddb    16   0 6284m  20m  16m S  0.0  0.1   0:00.03 oracle      

The output of the free command shows a similar amount of free memory.

             total       used       free     shared    buffers     cached
Mem:         18049      18009         40          0        238      17032
-/+ buffers/cache:        738      17310
Swap:         8001          0       8000


SQL> show sga

Total System Global Area 6442450944 bytes
Fixed Size                  2093584 bytes
Variable Size            5133831664 bytes
Database Buffers         1291845632 bytes
Redo Buffers               14680064 bytes

SQL> select * from v$sgainfo;

NAME                                  BYTES RES
-------------------------------- ---------- ---
Fixed SGA Size                      2093584 No
Redo Buffers                       14680064 No
Buffer Cache Size                1291845632 Yes
Shared Pool Size                  771751936 Yes
Large Pool Size                    16777216 Yes
Java Pool Size                     16777216 Yes
Streams Pool Size                  33554432 Yes
Granule Size                       16777216 No
Maximum SGA Size                 6442450944 No
Startup overhead in Shared Pool   134217728 No
Free SGA Memory Available        4294967296

As we see, the ipcs, top and free -m outputs and Oracle's SGA views are similar (they all point to a 6 GB shared memory segment), but Free SGA Memory Available in v$sgainfo displays 4 GB free, as expected.

Not seeing the unused portion of the shared memory in ipcs and top is not an issue in itself; these are tools, so they can be coded that way. No need to investigate that.

But the following questions remain:
How can Oracle make use of the advantages brought by sga_target? Actually, how does the OS treat these allocated-but-unused shared memory blocks?

I will try to answer these questions in a few days.

Thursday, November 21, 2013

Linux & EBS 11i -- uek kernel, PAE support(32->36 bits), HighMem-LowMem,address large memory

Oracle Enterprise Linux 5 32-bit comes with built-in PAE support.
You can see it by looking at the kernel configuration file;
the configuration file is under the /boot directory and is named something like config-2.6.32-300.10.1.el5uek.
In this configuration file, you can see the PAE support declaration -> CONFIG_X86_PAE=y

PAE stands for Physical Address Extension. It is basically a feature for 32-bit systems that provides addressing of memory larger than 4 GB. Technically, a 32-bit system can address 2^32 bytes (4 GB) of memory. PAE increases the physical address size from 32 bits to 36 bits; by increasing the address size, PAE increases the addressable memory from 4 GB to 2^36 bytes = 64 GB.

Let's look at how an operating system like Linux locates data (without PAE);

The memory manager locates the Page Directory for the current process. The memory manager is informed of the location of the Page Directory for the process by a special control register.
The 10-bit Page Directory Index in the virtual address is used to locate a Page Directory Entry (PDE) that defines the location of the Page Table needed to translate the virtual memory address.
The 10-bit Page Table Index in the virtual address is used to locate the correct Page Table Entry (PTE).
The PTE is used to locate the correct 4 KB page of memory.
After accessing this page, the 12-bit Byte Index in the virtual address is used to locate the physical address of the desired data.
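As a small illustration of this 10-10-12 split, the following C snippet (a sketch of the address arithmetic only, not kernel code) extracts the three indexes from a 32-bit virtual address;

#include <stdio.h>
#include <stdint.h>

int main()
{
uint32_t vaddr = 0xC0FFEE42;                 /* an example 32-bit virtual address */
uint32_t pde_index = (vaddr >> 22) & 0x3FF;  /* top 10 bits: Page Directory index */
uint32_t pte_index = (vaddr >> 12) & 0x3FF;  /* next 10 bits: Page Table index */
uint32_t byte_off  = vaddr & 0xFFF;          /* low 12 bits: byte index within the 4 KB page */
printf("PDE=%u PTE=%u offset=%u\n", pde_index, pte_index, byte_off);
return 0;
}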

With PAE -> a Page Directory Pointer Table is added on top of the process above. It is actually the page directory pointer table that provides access to more than 4 GB of memory.

The following figure explains reaching a page in memory using a PAE kernel memory architecture.
(Reference: Paging Extensions for the Pentium Pro Processors, by Robert R. Collins)


As you see, there is a multi-level hierarchy. This hierarchy of page tables is used to decrease the size of the individual page tables.

Note that adding levels to the page table hierarchy can reduce performance.

The virtual address is the same; it is still 32 bits. The first table, the Page Directory Pointer Table, uses 2 bits to point to one of the 4 page directories. The entries in the page directories are 64 bits, but only 36 bits of them are used to describe the physical location. So we have 36-bit physical addresses, and by using 36 bits, 64 GB of memory becomes addressable.
In my opinion, since the virtual address is still 32 bits, a single process cannot access more than 4 GB even with PAE. Maybe using swapping techniques a single process can touch more than 4 GB of physical memory over time, but at a single point in time it will have an address space of at most 4 GB.


Okay, technically PAE supports 64 GB, but that is not the case in practice, especially for critical systems.

A reference: Mel Gorman, Understanding The Linux Virtual Memory Manager:

PAE allows a processor to address up to 64GiB in theory but, in practice, processes in Linux still cannot access that much RAM as the virtual address space is still only 4GiB. This has led to some disappointment from users who have tried to malloc() all their RAM with one process.
Secondly, PAE does not allow the kernel itself to have this much RAM available. The struct page used to describe each page frame still requires 44 bytes and this uses kernel virtual address space in ZONE_NORMAL. That means that to describe 1GiB of memory, approximately 11MiB of kernel memory is required. Thus, with 16GiB, 176MiB of memory is consumed, putting significant pressure on ZONE_NORMAL. This does not sound too bad until other structures are taken into account which use ZONE_NORMAL. Even very small structures such as Page Table Entries (PTEs) require about 16MiB in the worst case. This makes 16GiB about the practical limit for available physical memory of Linux on an x86. If more memory needs to be accessed, the advice given is simple and straightforward: buy a 64 bit machine.

To test this, we booted our 64-bit HP server (which has 64 GB of memory installed) with a PAE-enabled 32-bit Oracle Linux. It could see the 64 GB of RAM. So we used hugepages to address the large memory and took the necessary actions to make Oracle use a big SGA. Everything seemed perfect in the beginning. Then we started up our Oracle Database with a 4 GB SGA; the database started, so still no problem. Then we started to create some tablespaces, and lastly we ended up with the following;

Nov 12 10:25:22 productlinux kernel: lowmem_reserve[]: 0 0 0 0
Nov 12 10:25:22 productlinux kernel: DMA: 1*4kB 1*8kB 0*16kB 0*32kB
1*64kB 0*128kB 1*256kB 1*512kB 1*1024kB 0*2048kB 0*4096kB = 1868kB
Nov 12 10:25:22 productlinux kernel: DMA32: empty
Nov 12 10:25:22 productlinux kernel: Normal: 1*4kB 1*8kB 8*16kB 1*32kB
0*64kB 0*128kB 0*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 3756kB
Nov 12 10:25:22 productlinux kernel: HighMem: 0*4kB 0*8kB 1*16kB
0*32kB 1*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 0*2048kB 6114*4096kB =
25044944kB
Nov 12 10:25:22 productlinux kernel: 5360827 pagecache pages
Nov 12 10:25:22 productlinux kernel: Swap cache: add 0, delete 0, find
0/0, race 0+0
Nov 12 10:25:22 productlinux kernel: Free swap = 34799608kB
Nov 12 10:25:22 productlinux kernel: Total swap = 34799608kB
Nov 12 10:25:22 productlinux kernel: Free swap: 34799608kB
Nov 12 10:25:22 productlinux kernel: 16777216 pages of RAM
Nov 12 10:25:22 productlinux kernel: 16547840 pages of HIGHMEM
Nov 12 10:25:22 productlinux kernel: 403562 reserved pages
Nov 12 10:25:22 productlinux kernel: 7467319 pages shared
Nov 12 10:25:22 productlinux kernel: 0 pages swap cached
Nov 12 10:25:22 productlinux kernel: 158886 pages dirty
Nov 12 10:25:22 productlinux kernel: 0 pages writeback
Nov 12 10:25:22 productlinux kernel: 146965 pages mapped
Nov 12 10:25:22 productlinux kernel: 90980 pages slab
Nov 12 10:25:22 productlinux kernel: 5958 pages pagetables
Nov 12 10:25:22 productlinux kernel: Out of memory: Killed process

So, from the kernel messages above, we can say that the OOM killer killed one of our background processes. In this case, the process that got killed was LGWR.

The OOM killer can be disabled, but it is a built-in protection mechanism; it is actually there to help us. The OOM killer killed the process because it saw the amount of free memory decreasing towards zero.

But we have 64 GB, right? There should be a lot of memory left even while the database was running.

We see the HighMem there: 25044944 kB, approx. 23 GB. It seems okay...

HighMem is the memory that an application, or let's say a user process, can access. In this case, there was plenty of room in HighMem.

Let's look at the line starting with Normal; it is the line for Low Memory. In this case, Low Memory had only 3756 kB (approx. 3 MB) available. This points to the problem: Low Memory is the memory reserved for the kernel, and it seems the source of the problem is Low Memory.

32-bit Linux systems implement a 3:1 split memory model by default. As 32-bit Linux can address 4 GB of memory, the first 3 GB of the address space is High Memory, which is the user address space, and the remaining 1 GB is Low Memory, which is the kernel address space. So when the kernel needs to create its structures and load data into them, it uses Low Memory. For a 32-bit Linux system with 64 GB of memory installed, the kernel has to store a lot of information just to manage this big memory, and it stores this information in Low Memory. So even if you use PAE, it is not stable when you address memory as big as 64 GB, because the PAE kernel still uses the 3:1 high/low split. In brief, a PAE kernel can see 64 GB of memory, but when the server load increases, it can fill up Low Memory very easily and break our processes down.


Let's talk about HugeMem support. Hugemem is another option that allows a 32-bit Linux kernel to address more than 4 GB of memory. Hugemem memory boundaries are not like PAE's (3:1); a kernel that supports hugemem has a 4:4 split memory model, that is, 4 GB for HighMem and 4 GB for LowMem. This lets a 32-bit Linux kernel address more than 16 GB of memory. Because it has a 4:4 split, it does not suffer from the 1 GB LowMem limit like the other 32-bit kernels. Of course, if you need to address such big memory on a 32-bit system, you have to run your Oracle Database with some specific parameters (like indirect data buffers).

The bad news is that Oracle Enterprise Linux 5 and above do not support HugeMem. It seems it is no longer supported because of the cost of maintaining the patches and the widespread use of 64-bit systems at the enterprise level.

In addition, there is no equivalent kernel for Oracle Enterprise Linux 5 and above. It seems the same applies to Red Hat, too.

In brief, if you want to use 64 GB of memory on a 32-bit Linux system, you need to use Oracle Enterprise Linux 4 with the hugemem kernel. If using 64 GB of memory is a very important requirement and you want to use Oracle Enterprise Linux 5 or 6, you need to go for 64-bit.


So let's suppose we have an application (Oracle EBS) running on Oracle Enterprise Linux 5 32-bit, installed on a 32-bit server with 16 GB of memory;

As I mentioned above, we can use PAE for this configuration.

Oracle Enterprise Linux 5 supports the PAE option, and a PAE kernel can address 16 GB of memory stably.

To test this, let's use the following C program;
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define PAGE_SZ (1<<12) /* it makes 4096*/
int main() {
int i;
int gb = 2;
for (i = 0; i < ((unsigned long)gb<<30)/PAGE_SZ ; ++i) { /*converting GB to Byte */
void *m = malloc(PAGE_SZ);
if (!m)
break; /*Break if you can not allocate*/
memset(m, 0, 1); /*Write*/
}
printf("allocated %lu MB\n", ((unsigned long)i*PAGE_SZ)>>20); /*Converting Byte to MB*/
getchar();
return 0;
}

We put this C code into a file and compile it with gcc on our Oracle Enterprise Linux 5 system.

The name of the produced executable is "a.out"

So let's execute this a.out.

It will allocate 2 GB of memory, page by page (4096 bytes at a time, 524288 pages in total). It will stop if it cannot allocate any more memory, and it will print out the total memory allocated.

The server has 16gb memory as below;

top - 16:48:13 up 3:45, 15 users, load average: 0.12, 0.12, 0.04

Tasks: 454 total, 1 running, 453 sleeping, 0 stopped, 0 zombie

Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st

Mem: 16502020k total, 15973388k used, 528632k free, 8072k buffers
Swap: 33559776k total, 1195800k used, 32363976k free, 35676k cached

Let's start 8 a.out processes (as I know that a process can allocate about 2 GB of memory at most on 32-bit Linux);
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                 
 2196 root      20   0 2056m 2.0g  344 S  0.0 12.7   0:01.56 a.out                                                   
 2037 root      20   0 2056m 2.0g  344 S  0.0 12.7   0:01.41 a.out                                                   
 2001 root      20   0 2055m 2.0g  344 S  0.0 12.7   0:01.39 a.out                                                   
 2101 root      20   0 2056m 2.0g  344 S  0.0 12.7   0:01.41 a.out                                                   
 2004 root      20   0 2055m 2.0g  344 S  0.0 12.7   0:01.39 a.out                                                   
 2068 root      20   0 2056m 2.0g  344 S  0.0 12.7   0:01.36 a.out                                                   
 2218 root      20   0 2055m 2.0g  344 S  0.0 12.7   0:03.10 a.out                                                   
 1967 root      20   0 2056m 975m  344 S  0.0  6.1   0:01.40 a.out    

Note that the RES column in the top output above shows the real allocated physical memory for each process. As you see, seven a.out processes allocated 2 GB each, and one a.out process allocated 975 MB, because only 975 MB of memory was left.

This shows that a PAE-enabled 32-bit Linux can address 16 GB of memory without problems.

Lastly, I will write something from the Oracle EBS / Apps DBA perspective.
Using the information in this post, we arrive at a conclusion -> if you need to deploy EBS 11i on a Linux environment, I suggest you choose a split configuration.
First, install the database tier on a 64-bit Linux and use a big SGA, and then install the application tier on a 32-bit Linux with PAE support. Note that the EBS 11i application tier should be installed on a 32-bit Linux, because the EBS 11i application code is not supported on 64-bit Linux operating systems.
By using this split configuration method, you can use a big SGA for the database, as well as 16 GB of memory for the application services.

I don't want to say that the EBS application code cannot be compiled and run on a 64-bit Linux. Maybe it can; maybe by modifying the makefiles, environments and so on, you can compile or relink the executables. On the other hand, it is not supported by Oracle.

Wednesday, November 13, 2013

HP DL,BL series and Oracle Enterprise Linux support Matrix

I share this support information because we tried to install Oracle Enterprise Linux 4 on an HP DL Gen8 server and, unfortunately, ended up with an "unable to find any devices" error. We chose Oracle Linux 4 because we wanted to use the HugeMem option in order to address 64 GB of memory on a 32-bit OS.
However, the Linux installer could recognize neither the DVD/CD-ROM nor the local disks.
This seems to be because the drivers for this kind of HP hardware are not shipped with Oracle Linux 4. Anyway, HP does not certify installing Oracle Linux 4 on its new-generation hardware.

Here is the support matrix for Oracle Enterprise Linux & HP BL/DL server series.

Please visit HP support for more information.

Monday, November 11, 2013

Database -- Oracle Database 12C Briefing -- notes, comments.

I attended a partner briefing about Oracle Database 12c recently. The briefing was delivered by François Pons, a Senior Principal Technologist at Oracle. The briefing was based on several presentations about the new features of Oracle 12c, as well as comparisons between Oracle 12c and the older releases. I must say that this was one of the most useful partner events I have ever attended. Mr. Pons clearly described and covered all the topics in 2 days, in a fast but effective way. He even demonstrated and proved the concepts with real examples by doing little workshops at the end of each lecture.
During the lectures, I wrote down the hints and tricks that Mr. Pons mentioned, as well as the comments (which are very important in my opinion) that he made.
In this post, you will find my notes from the Oracle Database 12c partner briefing in Istanbul.

Note that this post contains my notes, and sometimes my comments. If there are any mistakes, you can get in contact with me and I will correct them.
  • Total Recall no longer exists. There is Flashback Data Archive, and it is even available in Standard Edition.
  • In 12c, there is an option called In-Database Archiving. This option can be used to reach old data, i.e. data that is not frequently queried.
  • The In-Memory option introduced in 12c and TimesTen are two different things.
The following is a comparison between the In-Memory option and TimesTen.


  • Oracle 6 introduced row-level locking.
  • Oracle 7 introduced PL/SQL.
  • Oracle 9i introduced many changes and features: undo segments instead of rollback segments, dictionary-managed to locally managed tablespaces. These changes and features were produced because they actually suited RAC.
  • With Oracle 10g, manageability was improved (for example, automatic stats gathering).
  • Oracle 11g brought us an improved environment serving large databases as well as small databases; 11g handles databases of different sizes very well. Everything was better.
  • Oracle 12c improved the capabilities of Oracle 11g to support Big Data, large DBs and Engineered Systems (especially Exadata). 12c can handle larger clusters and has larger RAC support. It introduced the Container Databases and Pluggable Databases to support consolidation; it has brought a new architecture (PDB and CDB). The optimizer has also improved; there are major changes in the Oracle optimizer. The optimizer has become a self-learning beast: it learns the right way of doing something by looking at its mistakes.
The following picture describes Oracle Database 12c in a different way. As you see, CDB/PDB is at the chest of the man in the picture; this is because the Container Database and Pluggable Database concepts are the major innovations in the 12c architecture.


12c or 11gR2 software can be installed on the Exadata compute nodes. In fact, 12c and 11gR2 can be installed on Exadata at the same time, but it seems better to install and use 11gR2 software on Exadata for now, because the storage cells can't do Smart Scan if the software on the compute nodes is 12c. This is because of the incompatibility (compute nodes on 12c, storage cells on 11gR2). Especially if the databases on Exadata are data warehouses, this will create a performance impact, because DW-type workloads take advantage of the Smart Scan feature of Exadata. On the other hand, if the databases on Exadata are OLTP, 12c can be used, because OLTP-type workloads don't use Smart Scan so often.

Container/Pluggable Databases can be used even in Standard Edition, but there only one pluggable database can be created.
Normally, 252 Pluggable Databases (PDBs) can be created in one Container Database (CDB). Every PDB has a unique PDB number, and this PDB number is stored in one byte. There are also a root container and a seed PDB in every CDB. The seed PDB is like a template, used when a new PDB is created. These have unique PDB numbers, too. In addition, there may be one more reserved PDB number. So if we calculate: one byte's worth of numbers minus the root, seed and reserved numbers leaves 252 PDB numbers available.

12c supports both the old architecture (non-CDB) and the new architecture (CDB/PDB).
The new architecture is not supported with some of the options. For example, ADO is not compatible with Pluggable Databases; if you want to use ADO, you need to use the old architecture.
On the other hand, 12cR2 seems promising about this support; all the options are expected to be compatible with 12cR2.

In 12c, the v$ views have extra columns. For example, you can understand whether you are connected to a PDB or a non-CDB database; you can gather this kind of information by querying the v$ views, as in the sketch below.
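For instance, a quick way to see which container you are connected to (a small sketch, assuming a 12c CDB environment):

SQL> SELECT sys_context('USERENV', 'CON_NAME') AS container_name,
            sys_context('USERENV', 'CON_ID')   AS container_id
     FROM dual;

The new CON_ID column also appears in many v$ views; for example, the buffer cache blocks can be grouped per container with a query like SELECT con_id, COUNT(*) FROM v$bh GROUP BY con_id;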

PDBs in 12c use a common SGA and a common set of background processes. Every PDB has a service name; that is, OS authentication is not supported for connecting to PDBs. Within the SGA, the buffer cache is shared among the PDBs, so there are blocks from multiple PDBs in the buffer cache. Every block holds a PDB ID, and Oracle uses this information to decide which PDB a particular block belongs to. LRU applies as always.
PDB and non-CDB databases are fully compatible. For example, you can create a database link in a PDB just like you create a database link in a non-CDB database.

In the new architecture, every PDB has its own metadata to store its user information.
The old architecture does not scale as well as the new architecture.

Data Guard in the new architecture is provided at the CDB level. Data Guard is hard to configure, so configuring and managing it at the CDB level at least minimizes the work. The same applies to the redo logs and undo; they are at the CDB level, too. Data Guard and tools like GoldenGate use the redo logs to operate. Everything else can be at the PDB level. Every PDB has its own SYSTEM, SYSAUX and TEMP tablespaces (TEMP is optional; it can be at the CDB level, too. Undo and redo logs must be at the CDB level).

By using Enterprise Manager 12c, DBAs can do a lot of the administration work for managing CDBs and PDBs (for example, creating a PDB).
PDBs can be cloned; a new PDB can be created by taking a clone of an existing PDB.

If the CDB is on a filesystem which has a copy-on-write feature, cloning can be an instant operation.
Of course, cloning can always be done using the old copy method, too (cp from directory to directory).

Copy-on-write supplies the snapshots (like in a NetApp SAN). If a filesystem has the ability to do copy-on-write, then it supports snapshots, too.
Copy-on-write: writes to disk do not update the existing block. The new data is written to a new block, and a pointer to the existing block is recorded, keeping the old block available for the snapshots. Thus, you can create a snapshot from the old blocks using these pointers; you can quickly mount these snapshots to the server and use the old data if needed. An example of the snapshot concept is ZFS (Solaris).

In 12c, database files can be placed on ACFS, but there can be a performance impact. Note that placing datafiles on ACFS is not supported in 11g.

Every PDB has a DNA; that is, every PDB knows its history (for example, a PDB knows its own creation time).
PDBs cannot be migrated from version to version.
PDBs can be migrated from CDB to CDB (whether to a remote CDB or a local CDB does not matter).
If you are using a shared filesystem (like ASM), a PDB can be migrated from a local CDB to a remote CDB without even copying the files.

A PDB is put into quiesce mode while cloning.
To create a local clone -> create pluggable database ERMAN from ERMAN2;
To create a remote clone -> create pluggable database ERMAN from ERMAN2@ERMAN.WORLD;

Every CDB has a seed PDB inside it. New PDBs are created from the seed PDB; actually, the seed PDB is copied when you create a new PDB. That is why the PDB creation process is very fast. Seed PDBs are like templates; you can create your own seed PDBs (templates) if needed.

PDBs can be unplugged and plugged. The unplug/plug operation differs from cloning; unplug & plug PDB operations are something like migrating a database. For example, if you need to upgrade your database version from 12cR1 to 12cR2, you can unplug your PDBs from 12cR1 and plug them into 12cR2. During this upgrade operation, your data will not be touched. If your data is on a shared disk, the operation can even be purely logical (without a need to copy anything).
The command for this operation is something like below;

Alter pluggable database HCM unplug into 'full path of a file/erman.xml';
When you unplug a pluggable database, the information needed to plug the HCM database back in is written to an XML file. When you plug the database into a new CDB, that XML file is read by Oracle. A sketch of the plug side follows below.
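The plug side might look like this (a sketch; the NOCOPY clause assumes the datafiles are already in place on shared storage):

SQL> CREATE PLUGGABLE DATABASE HCM
     USING 'full path of a file/erman.xml'
     NOCOPY TEMPFILE REUSE;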

There are users at the PDB level (local users) as well as at the CDB level (common users).

Common users are supplied by Oracle (SYS, SYSTEM, ...). These users are created when the CDB is created. Common users can connect to any PDB and access the tables in the PDBs; to prevent this, you should use Database Vault.
On the other hand, local users in a PDB cannot access the tables in other PDBs, even if they are granted DBA.
You can create your own users at the CDB level if needed. The usernames of these kinds of users should start with C##. For example: create user c##DBA identified by pwd container=all;

In the new architecture, sessions are created in the same way as they are created in the old architecture. The only thing you need to supply is the service name when you are connecting to a PDB. The connection requests are still handled by the listener.

You can assign a PDB to work on a specific RAC node; you can dedicate one specific node to a particular PDB. In addition, you can say "don't share my SGA with others".

Patching gets easier with the new architecture. You can upgrade in place by patching the CDB, or you can create a patched CDB and use the unplug & plug method for the upgrade.

If you need extra resources for your PDBs, you can add more nodes on the fly (this is already a RAC feature).

You can use Resource Manager to control the CPU, I/O, sessions and parallel execution servers of the PDBs, as in the sketch below.
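For example, a CDB resource plan giving one PDB a larger share might look like this (a sketch; the plan and PDB names are made up):

SQL> BEGIN
       DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
       DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
         plan    => 'erman_cdb_plan',
         comment => 'resource shares for the PDBs');
       DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
         plan                  => 'erman_cdb_plan',
         pluggable_database    => 'HCM',
         shares                => 3,
         utilization_limit     => 100,
         parallel_server_limit => 50);
       DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
       DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
     END;
     /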

To upgrade your database from 11g to the new 12c architecture (see the sketch after this list);
First, upgrade your 11g database to 12c (non-CDB).
Next, put your database into read-only mode and create the PDB manifest file (this file will be used in the plugging operation).
Finally, purge your old metadata, as it will be obsolete after upgrading to the 12c new architecture.
Note that, as an alternative, you can always use the export/import method for these kinds of operations.
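The manifest/plug steps might look like this (a sketch; the paths and names are made up, and the noncdb_to_pdb.sql step is the one that purges the old metadata):

-- in the 12c non-CDB, opened read-only:
SQL> EXEC DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/noncdb.xml');
-- in the target CDB:
SQL> CREATE PLUGGABLE DATABASE ERMANDB USING '/tmp/noncdb.xml';
SQL> ALTER SESSION SET CONTAINER = ERMANDB;
SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql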

The full transportable database option is introduced in 12c (note that expdp/impdp and exp/imp can be used as alternatives).
Full transportable export via expdp can be used to export the whole database. It is introduced in 12c and backported to 11.2.0.3 and 11.2.0.4.

ASM was introduced in 10g for performance and manageability.
A filesystem can be created on ASM; this is called ACFS. ACFS is a copy-on-write filesystem that supports snapshots.

Why does ASM have good performance?

1) ASM uses block devices; it does not use a filesystem. So when you use ASM, you go through a very thin layer of code in the OS, as ASM sees the disks as block devices. Thus, less code is executed in the OS layer.
2) Allocation units (multiples of 1 MB) are used for read/write operations. ASM writes in multiples of MB: larger I/Os, but a smaller number of I/Os.
3) ASM uses direct I/O; it does not use a filesystem cache. Without ASM, when you write, the OS writes into its cache, but Oracle has a cache, too. These caches ignore each other, and this situation ends up in double buffering. We don't have double buffering with ASM.
4) A shorter code path.

Why is ASM more manageable?

You can add more storage, take off disks and rebalance on the fly.
Automatic rebalancing means you no longer have to optimize I/O placement manually.
On Linux, we have the ASM libraries. Before the installation, you can prepare the disks for the ASM configuration; by doing so, you will not encounter any problems when you install ASM.

In 10g, ASM uses the same binaries that the database software uses.
In 11g, this has changed: ASM uses its own binaries, supplied with the Clusterware - Grid Infrastructure.
This is because;

1) Separation of duties: you can give the control of ASM to the system engineers (not the DBAs).
2) RAC: you can build a RAC with or without ASM; you install the Clusterware to be able to do that.

On the other hand, the Clusterware puts its own files into ASM (voting disks, OCR, etc.). So if you want to install the Clusterware, there should already be an ASM to put these files in. To address this chicken-and-egg problem, Oracle put ASM, the Clusterware and even the listeners together and built a single infrastructure called Grid Infrastructure.

Before 12c, there was a communication channel between ASM and the database, used, for example, when you create a database object.
Actually, the database writes to the disks directly, so ASM is not a bottleneck; it works like a listener.
When ASM crashes, the database does not crash immediately; it crashes after a while, since ASM works like a listener (it is needed when a new file operation is required).
This seems like a problem.
As a solution, Oracle introduced a new feature called Flex ASM in 12c.
Now, when you install ASM, you can select the Flex ASM option.
In 11g, the authentication method used for the ASM connection was OS authentication. In 12c, the DB and ASM communicate through Oracle Net authentication; by this means, a database can connect to remote ASMs, too. Now the password file does not have to be in the database Oracle Home for a remote SYS connection; that requirement existed because of the OS authentication. Even normal (non-Flex) ASM uses Oracle Net communication in 12c.

You can consider a Flex ASM a RAC-like environment: when an ASM node crashes, the related ASM instance can start on another ASM node. As you'd expect, databases can connect to any ASM instance in a Flex ASM environment.

With Flex ASM, you can create 511 disk groups (the limit was 63 in the past).
Note: creating a disk group per database is not best practice in ASM. Disk groups should be created per environment, for example: one disk group for PROD databases, one disk group for TEST databases.

In Flex ASM, there are hub nodes and leaf nodes. Databases and ASM cannot run on the leaf nodes; leaf nodes are used for applications and have VIPs associated with them.
The Clusterware controls the applications using these VIPs. It can stop/start applications when a database is opened or closed (this is the main reason). So, the Clusterware can control the applications in a way.

In 11g, read operations are done using the primary copy. In 12c, read operations can be done using all the available copies in the environment. This is an optimization that can improve the performance of ASM I/O.

Note: when you use external redundancy in ASM, ASM does not do mirroring; it just spreads the data across the disks, as the data is mirrored at a different level (on the storage).

In 11g, database files cannot be placed on ACFS. ACFS is license-free.
In 12c, there is a filesystem called CloudFS. It is technically like ACFS, but on CloudFS you can place datafiles on the filesystem, too. ACFS is free, but if you need to place your datafiles on it, you have to use the advanced options, and these options require licenses (options like snapshots, reset database from snapshot, etc.).
ACFS can be mounted on several nodes, but only on the nodes on which the Grid Infrastructure is running.
In 12c, ACFS/CloudFS can be mounted on servers that do not have a Grid Infrastructure; this is supplied by the HANFS (High Availability NFS) file system.
HANFS exports can be presented to the clients using VIP addresses; by the use of the VIPs, high availability is supplied.

In brief, we can't say it's a revolution, but ASM and RAC in 12c bring a lot of enhancements.

ASM is a very important recommendation if your database is a critical database. In addition, if your database is big, partitioning is another important recommendation; partitioning increases both manageability and performance. Compression is also another recommendation made by Oracle for these kinds of databases.

So the recommendations are as follows;

1)ASM
2)Partitioning
3)Compression

When you combine these 3 recommendations, you obtain ILM (Information Lifecycle Management).

12c brings Automatic Data Optimization (ADO). ADO is used for automating the compression and movement activities in the database. For example, ADO automatically moves a partition from disk group to disk group, or compresses a partition automatically. In 11g, these kinds of operations had to be done manually.

Oracle 9i introduced basic compression. Basic compression was useful for queries but had constraints; for example, you could only query a compressed table, you could not do updates on it.
Advanced Compression was introduced in 11g (11.1) as an option on Enterprise Edition. DML operations can be done on a table that is compressed with Advanced Compression.
11.1 -> alter table ... compress for all operations (old syntax, the first version of Advanced Compression).

Advanced Compression is transparent, efficient, and works asynchronously. That's why it does not decrease the performance of update/insert/delete operations: when you insert data into a block, you don't need to wait for the block to be compressed, you just continue inserting into the next block. The compression is done asynchronously.

Hybrid Columnar Compression on Exadata can compress the data for OLTP-like usage (around 15x) and for archival (up to 50x).
In HCC, the data is reorganized into columnar form: the data is separated into chunks and reorganized chunk by chunk. That is why it is called hybrid. By doing so, the repeating values can be compressed much better.
In 12c, there is a new syntax for compression: alter table ... row store compress, or alter table ... column store compress.
There are new names for the compression methods: Advanced Row Compression, LOB Compression, etc.
Little by little, Oracle has started to do some favours for specific platforms like Exadata; the platform-specific options are the proof of that.
Using LogMiner and GoldenGate against compressed data became supported with Oracle 12c; before 12c, for example, GoldenGate could not read the compressed blocks.
In 12c, limitations like the 255-column limit for compression no longer exist.
Heat Map (hot -> cold) is introduced in 12c. Heat Map is used for tracking access to data blocks, to decide whether a block is hot or cold. Oracle decides what to do with the blocks according to Heat Map's findings. Note that Heat Map is not supported in pluggable databases; if you want to use Heat Map, you need to use the old architecture.
Heat Map is disabled by default. You can enable it with the following;
Alter system set heat_map='ON' scope=both;

Automatic Data Optimization works through policies. For example, a policy can be the following;
"Compress partitions which have not been modified for 30 days. If a particular partition has not been modified for 120 days, compress it further. If it has not been modified for 180 days, make the related tablespace read-only." Note that this requires an Advanced Compression license. A sketch of such a policy follows below.

Oracle can compress the data online. In 12c, moving a partition or a database file is an online operation, too.
Note that queue tables should not be compressed, as it is pointless.

Interval-reference partitioning (master and child relationship) is introduced in 12c.
Partition drop and truncate operations complete immediately. You don't need the "update indexes" clause to be processed synchronously anymore; Oracle does the index maintenance automatically and asynchronously. In 11.2, suppose an update-indexes operation completes in 2 minutes; in 12c, the same operation completes in 2 seconds. Oracle does this index maintenance operation through a list; in other words, Oracle knows the index blocks that need to be touched, and operates on those blocks only.

Every select operation you execute sees data slightly in the past (read consistency): you actually see the data as of the point in time at which you executed the select (with the help of undo).
Note: undo supplies the isolation mechanism of the Oracle Database; uncommitted transactions should not be seen by others. Flashback uses undo, too (AS OF TIMESTAMP / AS OF SCN queries).
Flashback Data Archive holds the undo for a long time; thus, you can query very old data, as in the sketch below.
What happens here is that when the undo data becomes obsolete/aged out, Oracle puts this undo data into the flashback archive (note that a retention window mechanism applies to flashback archives, too).
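A minimal sketch of a flashback archive and a flashback query (the names and retention are made up):

SQL> CREATE FLASHBACK ARCHIVE fda_erman TABLESPACE users RETENTION 1 YEAR;
SQL> ALTER TABLE hr.employees FLASHBACK ARCHIVE fda_erman;
SQL> SELECT * FROM hr.employees AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' DAY);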

In 12c, when you put a table into a flashback archive, you can still add columns (no schema limitations); that is, you are not blocked from modifying the table structure just because the table is in an archive.
In addition, user context tracking is supplied: information like the username that modified the table can be put into the archive.
Temporal support > valid-time temporal > querying with a time-validity clause is used for historical querying. This feature is not like a flashback query, though: the mechanism works based on start and end columns and returns the data accordingly. These columns can be hidden, but when you query the table, you get the valid rows based on these column values. This feature makes the life of a custom application developer easier; it minimizes the custom code, especially if the application does its job based on time intervals. A sketch follows below.
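A small sketch of valid-time temporal (the table and column names are made up):

SQL> CREATE TABLE emp_contracts (
       empno       NUMBER,
       valid_start DATE,
       valid_end   DATE,
       PERIOD FOR valid_time (valid_start, valid_end));

SQL> SELECT * FROM emp_contracts
     AS OF PERIOD FOR valid_time TO_DATE('01-JAN-2013','DD-MON-YYYY');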

12c note: when you add a new column with a default value, Oracle does not modify the table rows themselves; the default is handled as metadata.

In-Database Archiving, introduced in 12c, is one of the best new features for some of Oracle's big customers.
When you execute a command like alter table ... row archival, Oracle automatically creates a hidden column called ORA_ARCHIVE_STATE (rows with a non-zero value are treated as archived). When you select from that table, just like with VPD, Oracle adds an ora_archive_state = '0' predicate to the end of your select statement. A sketch follows below.
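A minimal sketch of In-Database Archiving (the table name is made up):

SQL> ALTER TABLE orders ROW ARCHIVAL;
SQL> UPDATE orders SET ORA_ARCHIVE_STATE = '1'
     WHERE order_date < DATE '2010-01-01';   -- archive the old rows
SQL> SELECT COUNT(*) FROM orders;            -- archived rows are filtered out
SQL> ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL;  -- see the archived rows again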

Identity columns in 12c are like a combination of sequences and triggers. The columns get their values automatically; under the hood Oracle actually creates a sequence, but the identity column feature is more efficient than triggers. A sketch follows below.
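A small sketch of an identity column (the table name is made up):

SQL> CREATE TABLE customers (
       id   NUMBER GENERATED ALWAYS AS IDENTITY,
       name VARCHAR2(100));

SQL> INSERT INTO customers (name) VALUES ('Erman');  -- id is assigned automatically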

12c introduced an extended size limit (32K) for some column types.
To create tables with 32K varchars -> alter system set max_string_size = EXTENDED;
Note that the COMPATIBLE initialization parameter must be set to 12.0.0.0 or higher to set MAX_STRING_SIZE = EXTENDED.
max_string_size controls the maximum size of the VARCHAR2, NVARCHAR2, and RAW data types in SQL.
STANDARD means that the length limits of the Oracle Database releases prior to Oracle Database 12c apply (for example, 4000 bytes for VARCHAR2 and NVARCHAR2, and 2000 bytes for RAW).
EXTENDED means that the 32767-byte limit introduced in Oracle Database 12c applies.
Note that once you set max_string_size to EXTENDED, you cannot take it back. This approach is more efficient than CLOBs.

A Swiss Army knife would be a good synonym for Oracle SQL Developer.

We have 2 options for managing the RAC nodes: 1) admin-managed RAC database, 2) policy-managed RAC database.

In 11g, when you create a RAC database, you can select admin-managed. With admin-managed, you adjust the system yourself: you declare your services to run on specific nodes or on several nodes. As you see, these kinds of declarations are made by the admin, so this is called admin-managed. This approach is good if you have a cluster with a few nodes (4, for example).
On the other hand, if you have many nodes in the cluster, you need to choose policy-managed. Server pools are used in this method. You can declare policies like the one below;
I have a production database. This database has a production server pool. This server pool has 4 servers. The minimum number of servers in the server pool is 3; the maximum number of servers in the server pool is 6.
So, by the use of these kinds of policies, you gain flexibility. In the above example, if the environment has enough available servers, 6 servers can be dedicated to the production server pool; if the environment has difficulties, 3 servers can be dedicated to the production server pool.
You can even declare the following;
I have an application in the production environment. If this application needs more resources, detach the servers from the test server pool and attach them to the production server pool.

Note: in RAC, if you want to break the communication between an application and a database, stop the related service, not the listener. That way, admin operations can still be done by the administrators using the listener.

Rac supplies database continuity not application continuity..
Transaction Guard can track transactions. It captures the transaction at the lowest level and stores its state on the application side. It then supplies a mechanism for the application to ask Oracle the question:
Was my transaction committed or not?
With the help of this mechanism, you can prevent the same update from being executed twice.
Transaction Guard is an API. Application Continuity, on the other hand, is a higher-level thing. Application Continuity uses Transaction Guard to track the transactions (using JDBC-thin, OCI, OCCI, ODP.net).
In the absence of Transaction Guard, in the worst-case scenario, a network communication problem can make the client execute the same transaction again, although the first transaction had already been committed. The reason is that the commit message could not be transmitted to the client; it was lost on the network layer, and this situation could not be handled.
In 12C, we have replay. The JDBC thin driver is intelligent. Suppose you made an insert and the database crashed just after that. The JDBC driver handles this by creating a connection to one of the remaining nodes. It checks whether the first statement was committed, replays it if necessary, and then continues with the second transaction. Only after this point does it deliver the result to the client. The end user only experiences a delay. This was not possible in 11g. This is how Application Continuity works in 12C.
Note that this is not supported in .NET yet; it is only supported for Java.
Note also that this method cannot be implemented for every transaction, even in 12C.
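On the database side, Transaction Guard is exposed through the DBMS_APP_CONT package. A hedged sketch of asking for the outcome of an in-doubt transaction (the logical transaction id is obtained on the client from the driver; the bind name here is hypothetical):

DECLARE
  l_committed           BOOLEAN;
  l_user_call_completed BOOLEAN;
BEGIN
  DBMS_APP_CONT.GET_LTXID_OUTCOME(
    client_ltxid        => :ltxid,                -- logical transaction id captured by the client driver
    committed           => l_committed,           -- TRUE if the transaction committed
    user_call_completed => l_user_call_completed  -- TRUE if all of its work completed
  );
END;
/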

UCP stands for Universal Connection Pool. It supports connections to multiple databases (SQL Server, Oracle, etc.). It is delivered as a jar file. It is an extraordinary feature. UCP is a requirement if you want to use Application Continuity and replay.

Data Guard Far Sync in 12c reduces the risk of data loss. You use the sync method between the primary and the far sync instance, and the async method between the far sync instance and the standby. Thus, you supply near-zero data loss without affecting the performance of the primary.
The new Data Guard can supply a second-level (cascaded) standby mechanism better than the old releases.
In 12C Data Guard, when the redo is transmitted, an acknowledgement message is delivered immediately. In other words, the primary does not have to wait for the redo to be written into the standby redo logs (Oracle guarantees this write and completes it asynchronously). That's why 12C's Data Guard is faster than 11g and older releases.
In 12C, a rolling upgrade can be done more easily using the DBMS_ROLLING package.
As you know, Active Data Guard can open the standby database read-only. On the other hand, sophisticated reporting tools may not work with a read-only database, because these kinds of tools want to write into the database. To address this problem, you can use global temporary tables, which accept DML on the standby in 12c.
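A hedged sketch of the far sync redo routing described above, with hypothetical DB_UNIQUE_NAMEs (in practice this is usually configured through the Data Guard broker):

-- on the primary: ship redo synchronously to the nearby far sync instance
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=farsync1 SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=farsync1';
-- on the far sync instance: forward redo asynchronously to the remote standby
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=stby1 ASYNC VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=stby1';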

12C RMAN can recover a tablespace to a point in time.
12C RMAN can recover a table to a point in time, with the RECOVER TABLE command.
12C RMAN can migrate a database across platforms using backup and recovery operations, even between platforms with different endian formats.
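A hedged sketch of a table point-in-time recovery in RMAN, with hypothetical names and paths:

RECOVER TABLE hr.employees
  UNTIL TIME 'SYSDATE - 1'
  AUXILIARY DESTINATION '/u01/aux'
  REMAP TABLE hr.employees:employees_recovered;  -- restore under a new name instead of overwriting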

In the good old times, we used SYSDBA. 11g introduced SYSASM. 12C introduced SYSBACKUP for RMAN and SYSDG for the Data Guard broker.

Enterprise Manager Database Express 12C does not use any middleware product. It works completely inside the database. It uses the XML DB infrastructure for web services, and it uses the listener for connections. Note that listeners can listen on HTTP and HTTPS.
There is no more need for the Enterprise Manager Configuration Assistant. RAC and single-instance databases, on both SE and EE, support EM Express.
Enterprise Manager Database Express 12C is not fully compatible with the PDB and CDB architecture. Maybe it will be in the next release.
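A hedged one-liner for enabling EM Express on a hypothetical port (the port is then served by the listener through XML DB):

EXEC DBMS_XDB_CONFIG.SETHTTPSPORT(5500);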

In 12C, ADDM can run in almost real time. ADDM is triggered by certain conditions; for example, if the run queue length is 3x the CPU count, ADDM runs automatically. This mechanism supplies an opportunity to diagnose problems instantaneously.

Data Redaction is used for masking sensitive data. It can be implemented on the fly, without touching the applications. Redaction means changing the displayed data automatically.
Redaction can be Full, Partial, Regexp, or Random.
For example: with partial redaction, you can display only the last 4 digits of the data -> XX-XXX-1111
You use DBMS_REDACT.ADD_POLICY to add these kinds of policies.
When you declare a redaction policy for a column, it is immediately activated by Oracle.
Note that: you cannot use CREATE TABLE AS SELECT from a table if your user is affected by a redaction policy associated with that table. If you try, you end up with an insufficient privileges error.
Redaction has been backported to 11g (11.2.0.4).
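A hedged sketch of a partial redaction policy, with hypothetical schema, table, and column names (the parameter string follows the documented SSN-style format):

BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema       => 'HR',
    object_name         => 'EMPLOYEES',
    policy_name         => 'redact_ssn',
    column_name         => 'SSN',
    function_type       => DBMS_REDACT.PARTIAL,
    function_parameters => 'VVVFVVFVVVV,VVV-VV-VVVV,X,1,5',  -- mask the first 5 digits with X
    expression          => '1=1');                           -- apply to every query
END;
/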

You can use privilege analysis to determine the privileges actually needed by a database user. You use a PL/SQL package for tracking the user activity and the usage of the privileges. In this method, the user's privilege usage is monitored and captured for some time. Then, by analyzing this data, you can revoke the unnecessary but granted privileges from the user in question.
Privilege analysis cannot be used without Database Vault. To be able to use privilege analysis, you need to have a Database Vault license.
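A hedged sketch using the DBMS_PRIVILEGE_CAPTURE package, with a hypothetical capture name:

BEGIN
  DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
    name => 'all_privs_capture',
    type => DBMS_PRIVILEGE_CAPTURE.G_DATABASE);  -- capture privilege usage database-wide
  DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE('all_privs_capture');
END;
/
-- ... after the monitoring period ...
EXEC DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE('all_privs_capture');
EXEC DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT('all_privs_capture');
-- used and unused privileges are then reported in DBA_USED_PRIVS and DBA_UNUSED_PRIVS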

12C improves the Optimizer, too.
In the old releases, the optimizer executed the first plan that it generated.
When the CBO was introduced, the optimizer became more intelligent. The CBO uses statistics and calculates a cost for every candidate plan. By looking at these costs, the CBO can choose the best plan for executing an operation.
Being fast is an important requirement for the Oracle optimizer; it has to decide very quickly. This requirement has always created some problems, like choosing wrong plans or not being stable. For example, when you upgrade your database, you can end up with changed execution plans, which can lead to performance issues.
In 12C, the optimizer becomes a proactive and self-learning thing. (Oracle's approach has always been to use memory rather than CPU.)
12C introduces adaptive query optimization. This is a big change in the Optimizer. With this feature, the optimizer can make adjustments at runtime. For example, the optimizer decides that a query should use a nested loops join; after a while, as the query is running, the optimizer detects that a hash join would be better for this query. In this case, Oracle can switch to the hash join to achieve better performance for this particular query execution. This is called using an adaptive plan. When an adaptive plan is used, you can see it in the execution plan. So, in brief, the final decision is made with the statistics gathered during the runtime.
You can put adaptive query optimization into reporting-only mode by setting the OPTIMIZER_ADAPTIVE_REPORTING_ONLY parameter to TRUE.
OPTIMIZER_ADAPTIVE_REPORTING_ONLY controls the reporting-only mode for adaptive optimizations.
When OPTIMIZER_ADAPTIVE_REPORTING_ONLY is set to FALSE, reporting-only mode is off, and the adaptive optimizations are enabled as usual.
When OPTIMIZER_ADAPTIVE_REPORTING_ONLY is set to TRUE, adaptive optimizations run in reporting-only mode. With this setting, the information required for an adaptive optimization is gathered, but no action is taken to change the plan. For instance, an adaptive plan will always choose the default (optimizer-chosen) plan, but information is collected on which plan it would have adapted to in non-reporting mode. This information can be viewed in the adaptive plan report.
This parameter affects only adaptive optimizations that are enabled.
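A hedged way to inspect the adaptive plan of the last executed statement, including its inactive branches:

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(format => 'ADAPTIVE'));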


There are improvements in deciding whether to use parallel execution servers, too. Before 12c, we decided which objects would be scanned in parallel (with a hint, or with ALTER TABLE ... PARALLEL).
In fact, if the work cannot be distributed equally among the parallel servers, the parallel server mechanism becomes inefficient.
For example: parallel server 1 scans partition 1, which has 10 rows, while parallel server 2 scans partition 2, which has 1 million rows.
In 12C, there are improvements in this distribution process, too. There is also reoptimization using cardinality feedback (which exists in 11g as well).
The optimizer counts the rows at runtime. Suppose we expect 3 rows, but the number of rows we get is greater than 3. In this situation, the related cursor is marked for reoptimization, and the next execution of the same statement triggers a new execution plan to be generated. In 11g, this is not done for join operations; it is done only for tables. In 12c, it can be done for joins too, and it even works with bind variables.
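A hedged query to see which cursors have been flagged by this feedback mechanism (the IS_REOPTIMIZABLE column exists in 12c's V$SQL):

SELECT sql_id, child_number, is_reoptimizable
FROM   v$sql
WHERE  is_reoptimizable = 'Y';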

Important Note about CDB: You can license at the container level. For example, you can license Partitioning at the CDB level; this license becomes valid for all PDBs in that CDB. There is a trick here: based on this example, we should put the databases that use partitioning together into the same CDB. By doing so, we will only have to buy a Partitioning license for that particular CDB. Otherwise, we would need to buy more licenses, as the PDBs that require partitioning would be spread over different CDBs.