UNDO tablespace keeps growing in Oracle

The size of the UNDO tablespace is governed by the undo_retention parameter. If it is non-default and set to a high value (for example, to support a long flashback window), the UNDO tablespace will tend to keep old undo data for that entire retention period. Also check whether you have any transactions that are not being committed or rolled back, because if transactions never end, the tablespace will keep growing.

Undo data is managed within the UNDO tablespace using undo segments that are automatically created and maintained by Oracle. Every Oracle database must have a method of maintaining "before images" of rows that are used to roll back changes to the database. Such information consists of records of the actions of transactions, primarily before they are committed. Undo data is used for several purposes:
  • Transaction rollback:   UNDO is the mechanism by which you can undo your changes before a commit with the ROLLBACK command. Also, if a process is abnormally terminated, UNDO is used to clean up any uncommitted transactions.

  • Flashback: Oracle 10g and beyond has Flashback Database. Flashback Query and Flashback Table (but not flashback of a dropped table, which uses the recycle bin) utilize UNDO.

  • Read consistency:  If you start a query at 9AM and it takes 3 hours, are the results you see from 9AM or noon? The answer is 9AM, the moment you pressed the Enter key. Oracle accomplishes this through read consistency, which pulls the data you require from current data blocks or from consistent-read blocks built using UNDO. If you do not have enough UNDO to satisfy your long-running queries, you might get an ORA-01555 error.

  • Transaction recovery: If your instance is abnormally terminated (kill -9 pmon, shutdown abort), SMON will handle recovery on instance startup. This involves four phases: mount the database, roll forward all redo since the last checkpoint to the datafiles, open the database, and roll back all uncommitted transactions.
Hence, if the UNDO tablespace has a high undo_retention, it can consume a large amount of disk space. I suggest that you take a look at the Undo Advisor within OEM and see what it recommends as an UNDO tablespace size for your database.
There are some views that show information related to UNDO activity:
  • V$UNDOSTAT: histogram-like view that shows statistics for 10-minute intervals.
  • V$TRANSACTION: real-time view providing information on current transactions.
  • V$SESSTAT: individual session statistics, which includes one for undo usage.
You can query the V$UNDOSTAT and V$TRANSACTION views to see the actual usage of your UNDO tablespace.
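
For example, the following query (a minimal sketch using documented V$UNDOSTAT columns) shows the undo block consumption, the longest query, and any ORA-01555 occurrences for each 10-minute interval:

select
   begin_time,
   end_time,
   undoblks,      -- undo blocks consumed in the interval
   maxquerylen,   -- longest running query (in seconds) during the interval
   ssolderrcnt    -- ORA-01555 (snapshot too old) errors during the interval
from
   v$undostat
order by
   begin_time;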

You can also run this query to see UNDO usage:

select
   a.sid,
   b.name,
   a.value
from
   v$sesstat  a,
   v$statname b
where
   a.statistic# = b.statistic#
and
   b.name = 'undo change vector size'  -- statistic# 176 in some releases; match by name, since the number varies by version
order by
   a.value desc;


We also have this query to measure growing UNDO tablespace usage:

COLUMN username FORMAT A15

SELECT s.username,
       s.sid,
       s.serial#,
       t.used_ublk,
       t.used_urec,
       rs.segment_name,
       r.rssize,
       r.status
FROM   v$transaction t,
       v$session s,
       v$rollstat r,
       dba_rollback_segs rs
WHERE  s.saddr = t.ses_addr
AND    t.xidusn = r.usn
AND    rs.segment_id = t.xidusn
ORDER BY t.used_ublk DESC;


In a nutshell, the undo_retention parameter controls how long undo is retained, and is therefore the main lever for throttling a growing UNDO tablespace.
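
For reference, UNDO_RETENTION is specified in seconds and is a dynamic parameter, so it can be checked and changed without a restart. A minimal sketch (the 900-second value is purely illustrative):

SQL> show parameter undo_retention

SQL> alter system set undo_retention = 900;  -- 15 minutes; illustrative value only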

Managing the UNDO TABLESPACE

Every Oracle Database must have a method of maintaining information that is used to roll back, or undo, changes to the database. Such information consists of records of the actions of transactions, primarily before they are committed. These records are collectively referred to as undo.

Undo records are used to:

Roll back transactions when a ROLLBACK statement is issued
Recover the database
Provide read consistency
Analyze data as of an earlier point in time by using Flashback Query
Recover from logical corruptions using Flashback features

Earlier releases of Oracle Database used rollback segments to store undo. Oracle 9i introduced automatic undo management, which simplifies undo space management by eliminating the complexities associated with rollback segment management. Oracle strongly recommends that you use an undo tablespace to manage undo rather than rollback segments.

Switching to Automatic Management of Undo Space

To switch to automatic management of undo space, set the following parameters.

Steps:

1.    If you did not create an undo tablespace when creating the database, create one with the following command:

SQL> create undo tablespace myundo datafile
     '/u01/oracle/ica/undo_tbs.dbf' size 500M
     autoextend on next 5M;

When the system is first running in the production environment, you may be unsure of the space requirements of the undo tablespace. In this case, you can enable automatic extension for the datafiles of the undo tablespace so that they automatically increase in size when more space is needed.

2.    Shut down the database and set the following parameters in the parameter file.
UNDO_MANAGEMENT=AUTO
UNDO_TABLESPACE=myundo

3.    Start the database.
Oracle Database will now use automatic undo space management.

Calculating the Space Requirements For Undo Retention

You can calculate space requirements manually using the following formula:

UndoSpace = UR * UPS + overhead

where:

UndoSpace is the number of undo blocks
UR is UNDO_RETENTION in seconds. This value should take into consideration long-running queries and any flashback requirements.
UPS is undo blocks for each second
overhead is the small overhead for metadata (transaction tables, bitmaps, and so forth)
As an example, if UNDO_RETENTION is set to 3 hours and the transaction rate (UPS) is 100 undo blocks per second with an 8K block size, the required undo space is computed as follows:

(3 * 3600 * 100 * 8K) = approximately 8.24GB

To get the values for UPS and overhead, query the V$UNDOSTAT view with the following statement:

SQL> Select * from V$UNDOSTAT;
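
As a sketch, the whole calculation can also be done in one statement that combines UNDO_RETENTION, the peak undo blocks per second from V$UNDOSTAT, and the undo tablespace block size. This is adapted from the formula above and should be treated as an estimate, not an exact sizing:

SELECT (ur * (ups * dbs)) AS undo_bytes
FROM   (SELECT value AS ur
        FROM   v$parameter
        WHERE  name = 'undo_retention'),
       (SELECT undoblks/((end_time - begin_time)*86400) AS ups
        FROM   v$undostat
        WHERE  undoblks = (SELECT MAX(undoblks) FROM v$undostat)),
       (SELECT block_size AS dbs
        FROM   dba_tablespaces
        WHERE  tablespace_name =
               (SELECT UPPER(value) FROM v$parameter WHERE name = 'undo_tablespace'));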



Altering UNDO Tablespace

If the undo tablespace is full, you can resize existing datafiles or add new datafiles to it.

The following example resizes an existing datafile:

SQL> alter database datafile '/u01/oracle/ica/undo_tbs.dbf' resize 700M;

The following example adds a new datafile to the undo tablespace:

SQL> ALTER TABLESPACE myundo ADD DATAFILE '/u01/oracle/ica/undo02.dbf' SIZE 200M AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED;
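
Before resizing or adding files, it helps to check the current datafiles of the undo tablespace. A quick sketch (the tablespace name MYUNDO follows the examples above):

SELECT file_name,
       bytes/1024/1024 AS size_mb,
       autoextensible
FROM   dba_data_files
WHERE  tablespace_name = 'MYUNDO';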

Dropping an Undo Tablespace

Use the DROP TABLESPACE statement to drop an undo tablespace. The following example drops the undo tablespace myundo:

SQL> DROP TABLESPACE myundo;

An undo tablespace can only be dropped if it is not currently used by any instance. If the undo tablespace contains any outstanding transactions (for example, a transaction died but has not yet been recovered), the DROP TABLESPACE statement fails.

Switching Undo Tablespaces

You can switch from using one undo tablespace to another. Because the UNDO_TABLESPACE initialization parameter is a dynamic parameter, the ALTER SYSTEM SET statement can be used to assign a new undo tablespace.

The following statement switches to a new undo tablespace:

ALTER SYSTEM SET UNDO_TABLESPACE = myundo2;

Assuming myundo is the current undo tablespace, after this command successfully executes, the instance uses myundo2 in place of myundo as its undo tablespace.
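
A typical end-to-end switch, sketched below, creates the new undo tablespace first and drops the old one once no active transactions still use it (the names and paths are illustrative):

SQL> CREATE UNDO TABLESPACE myundo2
     DATAFILE '/u01/oracle/ica/undo02_tbs.dbf' SIZE 500M;

SQL> ALTER SYSTEM SET UNDO_TABLESPACE = myundo2;

SQL> DROP TABLESPACE myundo INCLUDING CONTENTS AND DATAFILES;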

Viewing Information about Undo Tablespace

To view statistics for tuning the undo tablespace, query the following dictionary view:

SQL>select * from v$undostat;

To see how many active transactions there are, and to see undo segment information, run the following command:

SQL>select * from v$transaction;

To see the sizes of extents in the undo tablespace, run the following query:

SQL>select * from DBA_UNDO_EXTENTS;
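
Rather than listing every extent, a sketch like the following summarizes the undo extents by status (ACTIVE, UNEXPIRED or EXPIRED):

SELECT status,
       COUNT(*)             AS extents,
       SUM(bytes)/1024/1024 AS size_mb
FROM   dba_undo_extents
GROUP BY status;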

ORA-28

I am getting the following ORA-28 error in my alert log:
**********************************************************************
Following errors written to the Alert log file. Please verify
ORA-28 : opiodr aborting process unknown ospid (18941_47665083519440)

**********************************************************************

ORA-28: Your session has been killed.

Cause:

 A privileged user killed the session and it is no longer logged in to the database.

Action: 

Contact the DBA.   The administrator may be attempting to perform an operation that requires users to be logged out.  When the database administrator announces that the database is available, log in and resume work.
Here is a breakdown of the sample ORA-28 message:

      "opiodr aborting process unknown ospid (24380) as a result of ORA-28"
  • "unknown" => means it is not a background or shadow process that is killed.
  • "ospid (24380)" => this is the OS pid of the process which opiodr is aborting.
  • "as a result of" => this precedes the error message which is the reason for opiodr to kill the process.
  • "ORA-28" => this is the reason that opiodr killed the process. In this case, it is ora-28, that the session has been killed by Oracle.
On MOSC see "opiodr aborting process" Messages in Alert.Log [ID 1230858.1].  Also note this bug on the ORA-28 error, bug 6377339 titled "Misleading alert log message for "opiodr aborting" [ID 6377339.8]"

The solution for the ORA-28 error message may be to apply patch ID 14737386.
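
For context, ORA-28 is the normal result of a privileged user issuing ALTER SYSTEM KILL SESSION against the session. A short sketch of how that happens (the SID and SERIAL# values are made up):

SQL> SELECT sid, serial#, username FROM v$session WHERE username = 'SCOTT';

SQL> ALTER SYSTEM KILL SESSION '130,21519';  -- sid,serial# from the query above (illustrative)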

How to create read-only user for oracle schema

User hrqa_ro will be a globally read-only user.

SQL> conn / as sysdba
SQL> create user hrqa_ro identified by hrqa_ro123;
User created.

SQL> grant create session,select any dictionary,select any table to hrqa_ro;
Grant succeeded.

*************************************************************************
Question:  I need to create a read-only user within my schema.  
How can you grant read-only access for a single user without granting read to every table?

Answer:  You can make any user read-only with the grant select any table privilege:

SQL> connect / as sysdba

SQL>create user scott_read_only_user identified by readonly;

SQL>grant create session to scott_read_only_user;

SQL>grant select any table to scott_read_only_user;

Note that SELECT ANY TABLE actually grants read access to tables in every schema, not just SCOTT's; if you want to restrict
the user to a single schema, grant SELECT on each of that schema's tables individually instead.  Optionally, you can add read-only dictionary access:

SQL>grant select any dictionary to scott_read_only_user;

**************************************************************************
[oracle@testsrv]$ sqlplus / as sysdba

SQL> create user user_read_only identified by test;
SQL> grant create session to user_read_only;
SQL> grant select any table to user_read_only;

Once you have granted these privileges you can connect as the new user.

SQL> conn user_read_only/test
SQL> select * from scott.emp;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30

If you wish to grant select on dictionary views then:

SQL> conn / as sysdba
SQL> grant select any dictionary to user_read_only;
If you want the read-only user to be able to extract the DDL of any object belonging to any schema, then:

SQL> grant SELECT_CATALOG_ROLE to user_read_only;
SQL> conn user_read_only
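
As an illustration of that last grant, a user with SELECT_CATALOG_ROLE can extract DDL through the DBMS_METADATA package. A minimal sketch (the table and schema names are just examples):

SQL> SELECT dbms_metadata.get_ddl('TABLE', 'EMP', 'SCOTT') FROM dual;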

Startup and Shutdown Container Databases (CDB) and Pluggable Databases (PDB) in Oracle Database 12c

Multitenant: Startup and Shutdown Container Databases (CDB) and Pluggable Databases (PDB) in Oracle Database 12c Release 1 (12.1)
The multitenant option introduced in Oracle Database 12c allows a single container database (CDB)
to host multiple separate pluggable databases (PDB).

Container Database (CDB)

Startup and shutdown of the container database is the same as it has always been for regular instances.
The SQL*Plus STARTUP and SHUTDOWN commands are available when connected to the CDB as a privileged user.

Some typical values are shown below.

STARTUP [NOMOUNT | MOUNT | RESTRICT | UPGRADE | FORCE | READ ONLY]
SHUTDOWN [IMMEDIATE | ABORT]

Pluggable Database (PDB)

Pluggable databases can be started and stopped using SQL*Plus commands or the ALTER PLUGGABLE DATABASE command.

SQL*Plus Commands

The following SQL*Plus commands are available to start and stop a pluggable database, when connected to
that pluggable database as a privileged user.
STARTUP FORCE;
STARTUP OPEN READ WRITE [RESTRICT];
STARTUP OPEN READ ONLY [RESTRICT];
STARTUP UPGRADE;
SHUTDOWN [IMMEDIATE];
Some examples are shown below.

STARTUP FORCE;
SHUTDOWN IMMEDIATE;

STARTUP OPEN READ WRITE RESTRICT;
SHUTDOWN;

STARTUP;
SHUTDOWN IMMEDIATE;

ALTER PLUGGABLE DATABASE

The ALTER PLUGGABLE DATABASE command can be used from the CDB or the PDB.

The following commands are available to open and close the current PDB when
connected to the PDB as a privileged user.

ALTER PLUGGABLE DATABASE OPEN READ WRITE [RESTRICTED] [FORCE];
ALTER PLUGGABLE DATABASE OPEN READ ONLY [RESTRICTED] [FORCE];
ALTER PLUGGABLE DATABASE OPEN UPGRADE [RESTRICTED];
ALTER PLUGGABLE DATABASE CLOSE [IMMEDIATE];
Some examples are shown below.

ALTER PLUGGABLE DATABASE OPEN READ ONLY FORCE;
ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;

ALTER PLUGGABLE DATABASE OPEN READ WRITE;
ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;
The following commands are available to open and close one or more PDBs when connected to the CDB as a privileged user.

ALTER PLUGGABLE DATABASE <pdb-name-clause> OPEN READ WRITE [RESTRICTED] [FORCE];
ALTER PLUGGABLE DATABASE <pdb-name-clause> OPEN READ ONLY [RESTRICTED] [FORCE];
ALTER PLUGGABLE DATABASE <pdb-name-clause> OPEN UPGRADE [RESTRICTED];
ALTER PLUGGABLE DATABASE <pdb-name-clause> CLOSE [IMMEDIATE];
The <pdb-name-clause> clause can be any of the following:

One or more PDB names, specified as a comma-separated list.
The ALL keyword to indicate all PDBs.
The ALL EXCEPT keywords, followed by one or more PDB names in a comma-separated list, to indicate a subset of PDBs.
Some examples are shown below.

ALTER PLUGGABLE DATABASE pdb1, pdb2 OPEN READ ONLY FORCE;
ALTER PLUGGABLE DATABASE pdb1, pdb2 CLOSE IMMEDIATE;

ALTER PLUGGABLE DATABASE ALL OPEN;
ALTER PLUGGABLE DATABASE ALL CLOSE IMMEDIATE;

ALTER PLUGGABLE DATABASE ALL EXCEPT pdb1 OPEN;
ALTER PLUGGABLE DATABASE ALL EXCEPT pdb1 CLOSE IMMEDIATE;

Pluggable Database (PDB) Automatic Startup

The 12.1.0.2 patchset introduced the ability to preserve the startup state of PDBs,
so you probably shouldn't implement a trigger in the manner discussed in this section.

Prior to 12.1.0.2, when the CDB is started, all PDBs remain in mounted mode.
There is no default mechanism to automatically start them when the CDB is started.
The way to achieve this is to use a system trigger on the CDB to start some or all of the PDBs.

CREATE OR REPLACE TRIGGER open_pdbs
  AFTER STARTUP ON DATABASE
BEGIN
   EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE ALL OPEN';
END open_pdbs;
/

You can customise the trigger if you don't want all of your PDBs to start.
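
For example, a variant of the trigger that opens everything except one PDB might look like this (pdb3 is a hypothetical name):

CREATE OR REPLACE TRIGGER open_pdbs
  AFTER STARTUP ON DATABASE
BEGIN
   EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE ALL EXCEPT pdb3 OPEN';
END open_pdbs;
/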

Preserve PDB Startup State (12.1.0.2 onward)

The 12.1.0.2 patchset introduced the ability to preserve the startup state of PDBs through a CDB restart. 

This is done using the ALTER PLUGGABLE DATABASE command.

We will start off by looking at the normal result of a CDB restart.
Notice the PDBs are in READ WRITE mode before the restart, but in MOUNTED mode after it.

SQL>  SELECT name, open_mode FROM v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB1                           READ WRITE
PDB2                           READ WRITE

SQL>


SHUTDOWN IMMEDIATE;
STARTUP;


SQL> SELECT name, open_mode FROM v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB1                           MOUNTED
PDB2                           MOUNTED

SQL>
Next, we open both pluggable databases, but only save the state of PDB1.

SQL> ALTER PLUGGABLE DATABASE pdb1 OPEN;
SQL> ALTER PLUGGABLE DATABASE pdb2 OPEN;
SQL> ALTER PLUGGABLE DATABASE pdb1 SAVE STATE;

The DBA_PDB_SAVED_STATES view displays information about the saved state of containers.

SQL> COLUMN con_name FORMAT A20
SQL> COLUMN instance_name FORMAT A20

SQL> SELECT con_name, instance_name, state FROM dba_pdb_saved_states;

CON_NAME             INSTANCE_NAME        STATE
-------------------- -------------------- --------------
PDB1                 cdb1                 OPEN

SQL>
Restarting the CDB now gives us a different result.

SQL> SELECT name, open_mode FROM v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB1                           READ WRITE
PDB2                           READ WRITE




SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;


SQL> SELECT name, open_mode FROM v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB1                           READ WRITE
PDB2                           MOUNTED


The saved state can be discarded using the following statement.

SQL> ALTER PLUGGABLE DATABASE pdb1 DISCARD STATE;

SQL> COLUMN con_name FORMAT A20
SQL> COLUMN instance_name FORMAT A20

SQL> SELECT con_name, instance_name, state FROM dba_pdb_saved_states;

no rows selected


To connect to a PDB using the SQL*Plus CONNECT command:

Configure your environment so that you can open SQL*Plus.


Start SQL*Plus with the /NOLOG argument:

[oracle@sujeet ~]$ sqlplus /nolog
Issue a CONNECT command using easy connect or a net service name to connect to the PDB.

To connect to a PDB, connect to a service with a PDB property.

Connecting to a PDB in SQL*Plus Using the PDB's Net Service Name

The following command connects as the hr user using the PDB's service. Every PDB automatically has a service
whose name matches the PDB; this example assumes that the client is configured with a net service name
(here simply PDB) for that service.
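
A matching tnsnames.ora entry might look like the following sketch (the host, port and service name are assumptions for this example):

PDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = testsrv)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PDB)
    )
  )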

[oracle@sujeet ~]$ sqlplus /nolog
SQL*Plus: Release 12.1.0.2.0 Production on Fri Apr 14 08:51:55 2017
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

SQL> CONNECT hr@PDB
Enter password:
Connected.


[oracle@sujeet~]$ sqlplus /nolog
SQL> CONNECT sonu@PDB
Enter password:
Connected.

SQL> show user
USER is "sonu"


SQL> SELECT name, open_mode FROM v$pdbs;
NAME                           OPEN_MODE
------------------------------ ----------
PDB                            READ WRITE

SQL> CONNECT system@PDB
Enter password:
Connected.


Here is a brief list of some of the usage notes explained in the documentation:

  • The state is only saved, and visible in the DBA_PDB_SAVED_STATES view, if the container is in READ ONLY or READ WRITE mode.
  • The ALTER PLUGGABLE DATABASE ... SAVE STATE command does not error when run against a container in MOUNTED mode, but nothing is recorded, as this is the default state after a CDB restart.
  • Like other examples of the ALTER PLUGGABLE DATABASE command, PDBs can be identified individually, as a comma-separated list, or using the ALL or ALL EXCEPT keywords.
  • The INSTANCES clause can be added in RAC environments. The clause can identify instances individually, as a comma-separated list, or using the ALL or ALL EXCEPT keywords. Regardless of the INSTANCES clause, the SAVE/DISCARD STATE commands only affect the current instance.
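
Sketches of the syntax variants those notes describe (the PDB names are illustrative):

ALTER PLUGGABLE DATABASE ALL SAVE STATE;
ALTER PLUGGABLE DATABASE pdb1, pdb2 SAVE STATE;
ALTER PLUGGABLE DATABASE ALL EXCEPT pdb1 DISCARD STATE;
ALTER PLUGGABLE DATABASE pdb1 SAVE STATE INSTANCES = ALL;   -- RAC environments only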


How to Stop and Start a Pluggable Database

The following explains how to stop and start the container database and its containers (pluggable databases).

1. Shutdown a container database (CDB) and all pluggable databases (PDBs)

sqlplus '/ as sysdba'
SQL> show con_name
NB Make sure you are on the root CDB$ROOT
SQL> shutdown immediate
2. Startup the CDB

sqlplus '/ as sysdba'
SQL> startup
Note: When you start a CDB it does not automatically start the PDBs

3. Check the status of the PDBs

sqlplus '/ as sysdba'
SQL> select name, open_mode from v$pdbs;
Note: All PDBs will be in MOUNTED status.

4. Start a PDB

sqlplus '/ as sysdba'
SQL> alter pluggable database myplugdb3 open;
NB This will open pluggable database myplugdb3.
SQL> alter pluggable database all open;
NB This will open all pluggable databases.
5. Stop a PDB

sqlplus '/ as sysdba'
SQL> alter pluggable database myplugdb3 close immediate;
NB This will close pluggable database myplugdb3
SQL> alter pluggable database all close immediate;
NB This will close all pluggable databases
6. Using a trigger to open all pluggable databases.

sqlplus '/ as sysdba'
SQL> CREATE OR REPLACE TRIGGER pdb_startup AFTER STARTUP ON DATABASE
  2  BEGIN
  3    EXECUTE IMMEDIATE 'alter pluggable database all open';
  4  END pdb_startup;
  5  /


Start and Shutdown Pluggable Database

This post covers starting and shutting down a pluggable database. It is essentially the traditional
startup and shutdown process, but there is a small difference when you use a container database:
starting the container database does not open its pluggable databases, so you must open them
manually afterwards. Let's look at this process.

Start and Shutdown Pluggable Database: This operation can be done in two ways.
The first is to start or shut down directly from inside the pluggable database.
The second is the "alter pluggable database" command from the root container.
For the first way, you set the session's container parameter and then issue the usual commands.

1) You can start or shut down directly from inside the pluggable database.

SQL> alter session set container=PDB2;
Session altered.

SQL> shutdown immediate;
Pluggable Database closed.

SQL> select name,open_mode from v$pdbs;
NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB1                           READ WRITE
PDB2                           MOUNTED

You can start it the same way.

SQL> startup
Pluggable Database opened.

SQL> select name,open_mode from v$pdbs;
NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB1                           READ WRITE
PDB2                           READ WRITE

2) Alternatively, you can use the "alter pluggable database" command from the root container to start and
shut down a pluggable database. Connect to the container database.

SQL> alter pluggable database PDB1 close;
Pluggable database altered.

SQL> select name,open_mode from v$pdbs;
NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB1                           MOUNTED
PDB2                           READ WRITE
You can start the pluggable database the same way.

SQL> alter pluggable database PDB1 open;
Pluggable database altered.

SQL> select name,open_mode from v$pdbs;
NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB1                           READ WRITE
PDB2                           READ WRITE

You may have many pluggable databases in the container database, making one-by-one shutdown tedious.
You can close all pluggable databases with one command from the root container.

SQL> alter pluggable database all close;
Pluggable database altered.

SQL> select name,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB1                           MOUNTED
PDB2                           MOUNTED
To open all PDBs:

SQL> alter pluggable database all open;
Pluggable database altered.

SQL> select name,open_mode from v$pdbs;
NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB1                           READ WRITE
PDB2                           READ WRITE

You may want to close all pluggable databases except one. You can do this with the EXCEPT clause as follows.

SQL> alter pluggable database all except PDB2 close;
Pluggable database altered.

SQL> select name,open_mode from v$pdbs;
NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB1                           MOUNTED
PDB2                           READ WRITE

To open all pluggable databases except one, use the same syntax with the "open" command instead of "close".

Or you can specify a list of PDBs on which to perform the operation.

SQL> alter pluggable database pdb1,pdb2 close;
Pluggable database altered.

NOTE: When you shut down the container database, all PDBs shut down too. But when you start the container
database, the PDBs are not started automatically.
To start the PDBs we must either intervene manually or create a trigger such as the following.

SQL> CREATE OR REPLACE TRIGGER open_pdbs
AFTER STARTUP ON DATABASE
BEGIN
EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE ALL OPEN';
END open_pdbs;
/




Oracle Database 12c Feature: Multitenant Database


Imagine a situation where you have to create two different schemas in the same database, both with the same name. A typical example is Peoplesoft applications, which require a specific schema name, SYSADM, that can't be changed. So if you want to install two Peoplesoft applications in the same database, you will soon discover that it's not possible, since you can't have two schemas named SYSADM in the same database. So, what are your choices?
Well, you could create two different databases. In fact, prior to Oracle Database 12c that was your only choice. But with two different databases come two different sets of overheads: two Oracle instances (the memory areas such as the SGA and the processes such as pmon and smon) which consume memory and CPU cycles on the host. The more databases you have, the more the CPU and memory usage, all because you want to create multiple schemas with the same name.

Not any more, with the multitenancy option in Oracle Database 12c. Instead of creating a physical database for each SYSADM schema you want to create, you can create a virtual database for each schema. Each virtual database behaves like an independent database, but runs on top of a real, physical database which may be hidden from the end users. These virtual databases are called containers. The physical database that houses these containers is, in effect, a database of containers, and is known as a Container Database (CDB). You can pull out (or "unplug") a container from one CDB and place it (or "plug" it) into another CDB. This is why a container is also known as a Pluggable Database (PDB). For all practical purposes, from the perspective of the clients, the PDBs are just regular databases.

Please note a very important point: it is NOT necessary that the database be created as a CDB with PDBs inside it. You can also create a database exactly as it was (non-CDB) in the prior versions. The multitenancy option of Oracle Database 12c is required to create PDBs. That is an extra cost option; but there is no cost to create exactly one PDB inside a CDB. Later in this article you will see how to create a database as a PDB. To find out whether the database has been created as a CDB or not, just check the column called CDB in the view V$DATABASE.
SQL> select cdb from v$database;
 
CDB
---
YES

What is a Container Database


So, what is the big advantage of this setup, you may ask? Couldn't we just have created multiple plain vanilla databases instead of multiple PDBs? Yes, we could have; but then each of these plain "vanilla" databases would have had its own instance (processes and memory) overhead. PDBs do not have an instance associated with them, eliminating this overhead. Let's examine the setup with an example. Suppose you have a CDB (container database, the real database that houses other PDBs) called CONA which has a PDB called PDB1. If you check the Oracle instance, you will see that there is only one: that of the CDB. Let's check the ORACLE_SID first:
[oracle@prosrv1 ~]$ echo $ORACLE_SID
CONA
That's it. There is just one SID; not one for each PDB. Next, let's check for the processes, specifically the very important one known as "pmon":
[oracle@prosrv1 ~]$ ps -aef|grep pmon
oracle 7672 7323 0 11:18 pts/2 00:00:00 grep pmon
oracle 12519 1 0 Feb19 ? 00:00:00 asm_pmon_+ASM
oracle 21390 1 0 Feb19 ? 00:00:00 ora_pmon_CONA
As you can see, the only instance running is CONA (the CDB), besides, of course, the ASM instance. There is no instance for the PDB named PDB1. You can create as many of these PDBs as you like on this CDB called CONA; there will be no additional instances. PDBs are simply hosted on the CDB. So in effect these PDBs are like virtual machines running on a physical machine (the CDB), much as in a virtual machine context.
Since the CDB is the only real database, all the physical database components, such as the Automatic Diagnostic Repository (ADR), are associated with it. Let's check the ADR using the ADRCI command line utility:
[oracle@prosrv1 trace]$ adrci
ADRCI: Release 12.1.0.1.0 - Production on Sun Feb 24 12:18:12 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
ADR base = "/u02/oradb"
adrci> show homes
ADR Homes:
diag/rdbms/cona/CONA
As you see from the output, there is only one ADR home - that for CONA (the CDB). There is no separate ADR for the PDBs.
You can check the containers (or PDBs) created in a database in a view named V$PDBS, which is brand new in Oracle Database 12c.
select con_id, dbid, name
from v$pdbs;

    CON_ID       DBID NAME
---------- ---------- ------------------------------
         2 4050437773 PDB$SEED
         3 3315520345 PDB1
         4 3874438771 PDB2
         5 3924689769 PDB3
Note how the DBIDs are also different for each PDB. There are two striking oddities in this output:
  • There is no CON_ID of 1. The answer is simple: there is a special container called the "root" container, known as CDB$Root, that is created to hold the metadata. This container has the CON_ID of 1.
  • There is a PDB called PDB$SEED, which is something we didn't create. You will get the explanation of this PDB later in the article.
There are new built-in functions to identify PDBs from their details without querying the V$PDBS view. Here is an example how to identify the container ID from the name:
SQL> select con_name_to_id('PDB2') from dual;
CON_NAME_TO_ID('PDB2')
----------------------
                     4
And, here is how you can get the container ID from the DBID:
SQL> select con_dbid_to_id(3924689769) from dual;
CON_DBID_TO_ID(3924689769)
--------------------------
                         5


Operating on Specific PDBs


The next big question you may have is, considering the unusual nature of PDBs (they are virtual databases inside a real one), how you can operate on a specific PDB. There are several approaches. Let's examine them one by one.
  • Session Variable. You can set a session variable called container to the name of the PDB you want to operate on. First connect to the CDB as usual. Here is how I connected as the SYSDBA user: 

    [oracle@prosrv1 pluggable]$ sqlplus sys/oracle as sysdba
    SQL*Plus: Release 12.1.0.1.0 Production on Sat Mar 2 18:09:10 2013
    Copyright (c) 1982, 2013, Oracle. All rights reserved.
    Connected to:
    Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
    With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
    and Real Application Testing options
    SQL> alter session set container = pdb1;
    Session altered.
    
    Now all commands in this session will be executed in the context of the PDB called PDB1. For instance suppose you want to shutdown the PDB named PDB1, you would issue:
    SQL> shutdown immediate
    Pluggable Database closed.
    
    Only the PDB called PDB1 will be shut down; other PDBs will not be affected. 
  • Service Name. When you create a PDB, Oracle automatically adds it as a service in the listener. You can confirm it by looking at the listener status: 

    [oracle@prosrv1 trace]$ lsnrctl status
    LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 24-FEB-2013 12:20:14
    Copyright (c) 1991, 2013, Oracle. All rights reserved.
    Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
    STATUS of the LISTENER
    ------------------------
    Alias LISTENER
    Version TNSLSNR for Linux: Version 12.1.0.1.0 - Production
    Start Date 19-FEB-2013 21:09:05
    Uptime 4 days 15 hr. 11 min. 9 sec
    Trace Level off
    Security ON: Local OS Authentication
    SNMP OFF
    Listener Parameter File
    /u02/app/oracle/product/12.1.0/grid/network/admin/listener.ora
    Listener Log File
    /u02/app/oracle/diag/tnslsnr/prosrv1/listener/alert/log.xml
    Listening Endpoints Summary...
    (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
    (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=prosrv1.proligence.com)(PORT=1521)))
    Services Summary...
    Service "+ASM" has 1 instance(s).
    Instance "+ASM", status READY, has 1 handler(s) for this service...
    Service "CONA" has 1 instance(s).
    Instance "CONA", status READY, has 1 handler(s) for this service...
    Service "CONAXDB" has 1 instance(s).
    Instance "CONA", status READY, has 1 handler(s) for this service...
    Service "pdb1" has 1 instance(s).
    Instance "CONA", status READY, has 1 handler(s) for this service...
    The command completed successfully
    
    The service "pdb1" actually points to the PDB called PDB1. It's very important to note that that this is not a service name in initialization parameter of the database, as you can see from the service_names parameter of the database.
    SQL> show parameter service
    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    service_names                        string      CONA
    
    You can place that service name in an entry in the TNSNAMES.ORA file:
    PDB1 =
    (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = prosrv1.proligence.com)(PORT = 1521))
    (CONNECT_DATA =
    (SERVER = DEDICATED)
    (SERVICE_NAME = PDB1)
    )
    )
    
    Now you can connect to PDB1 using the connect string:
    [oracle@prosrv1 ~]$ sqlplus system/oracle@pdb1
    
  • Using TWO_TASK. A third way is to define the TWO_TASK operating system variable to point to the PDB you want to connect to:

    [oracle@prosrv1 schema]$ export TWO_TASK=PDB1
    
    And, then you can connect as usual without giving a connect string:
    [oracle@prosrv1 schema]$ sqlplus system/oracle
    
    I prefer this last approach because it simulates a database connection prior to the introduction of the PDB. If you connect to a specific PDB the majority of the time, all you have to do is set this variable in the shell initialization file (e.g. .profile in the case of the Bourne shell) of the user so that the variable is automatically set when you log in. To show which container (or PDB) you are connected to right now in SQL*Plus, use the following commands:
    SQL> show con_id
    CON_ID
    ------------------------------
    5
    
    SQL> show con_name
    CON_NAME
    ------------------------------
    PDB1
    

Creating PDBs


You can create the PDBs when creating the main (CDB) database, or later. If you use Database Configuration Assistant (DBCA) to create the CDB database, DBCA will ask whether it is to be created as a CDB and, if so, how many PDBs you want to create, and so on. But how do you create a new PDB when the main CDB has already been created? You have two options: the familiar DBCA interface and the manual approach. Let's see the steps in each approach.

DBCA Approach

  1. Start DBCA.
  2. DBCA displays a menu of management options.
  3. Choose "Manage Pluggable Databases".
  4. In the next screen, choose "Create a Pluggable Database."
  5. In the next screen you will see a list of container databases (CDBs). Choose the one where you want the PDB to be created. In this case we have only one CDB called CONA.
  6. In the next screen click on the option "Create a New Pluggable Database." 
  7. On the next screen you will have to answer some questions about the PDB you are about to create.
  8. Enter the name PDB2, or whatever else you want to name it. Let's examine the options on the screen. 
  9. The PDB uses some storage exclusive to its own use, which is not part of the root container CDB$Root. You will need to specify on this screen how you want that storage to be created. In this case I chose Oracle Managed Files (OMF), which lets Oracle put the files in the proper location. I could have chosen instead to put these files in a common location, which means I would have to remember to clean up the files later if I drop the PDB. The overall storage occupied by the CDB is the sum of the root container (CDB$Root), the seed PDB (PDB$Seed), and all the PDBs contained in it.
  10. I also checked the box called "Create a Default User Tablespace". Every PDB may contain its own USERS tablespace, which will be the default tablespace of its users if not explicitly specified. This is very useful if you want the default tablespace to be different for each PDB. That way you can make sure that users from one PDB do not take over the space in a common tablespace.
  11. You have to specify a special user who can administer the PDB. In the above screen I used the name "syspdb2" and entered its password.
  12. After the PDB is created, DBCA displays a confirmation message. 
  13. After the PDB creation, you may examine the alert log to see the various activities performed to create the PDB: 
    Sun Feb 24 10:52:35 2013
    CREATE PLUGGABLE DATABASE PDB2 ADMIN USER syspdb2 IDENTIFIED BY * ROLES=(CONNECT) file_name_convert=NONE
    ****************************************************************
    Pluggable Database PDB2 with pdb id - 4 is created as UNUSABLE.
    If any errors are encountered before the pdb is marked as NEW,
    then the pdb must be dropped
    ****************************************************************
    Deleting old file#5 from file$
    Deleting old file#7 from file$
    Adding new file#11 to file$(old file#5)
    Adding new file#12 to file$(old file#7)
    Successfully created internal service pdb2 at open
    ALTER SYSTEM: Flushing buffer cache inst=0 container=4 local
    ****************************************************************
    Post plug operations are now complete.
    Pluggable database PDB2 with pdb id - 4 is now marked as NEW.
    ****************************************************************
    Completed: CREATE PLUGGABLE DATABASE PDB2 ADMIN USER syspdb2 IDENTIFIED BY * ROLES=(CONNECT) file_name_convert=NONE
    alter pluggable database PDB2 open
    Pluggable database PDB2 dictionary check beginning
    Pluggable Database PDB2 Dictionary check complete
    Opening pdb PDB2 (4) with no Resource Manager plan active
    XDB installed.
    XDB initialized.
    Pluggable database PDB2 opened read write
    Completed: alter pluggable database PDB2 open
    CREATE SMALLFILE TABLESPACE "USERS" LOGGING DATAFILE SIZE 5M AUTOEXTEND ON NEXT
    1280K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO
    Sun Feb 24 10:53:24 2013
    Setting Resource Manager plan SCHEDULER[0x420C]:DEFAULT_MAINTENANCE_PLAN via
    scheduler window
    Setting Resource Manager plan DEFAULT_MAINTENANCE_PLAN at pdb PDB2 (4) via
    parameter
    Completed: CREATE SMALLFILE TABLESPACE "USERS" LOGGING DATAFILE SIZE 5M
    AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT
    SPACE MANAGEMENT AUTO
    ALTER DATABASE DEFAULT TABLESPACE "USERS"
    Completed: ALTER DATABASE DEFAULT TABLESPACE "USERS"
    Sun Feb 24 10:53:51 2013
    TABLE SYS.WRI$_OPTSTAT_HISTHEAD_HISTORY: ADDED INTERVAL PARTITION SYS_P223
    (41328) VALUES LESS THAN (TO_DATE(' 2013-02-25 00:00:00', 'SYYYY-MM-DD
    HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    Sun Feb 24 10:53:54 2013
    Thread 1 advanced to log sequence 70 (LGWR switch)
    Current log# 1 seq# 70 mem# 0: +CONDATA/CONA/ONLINELOG/group_1.262.807831747
    Current log# 1 seq# 70 mem# 1: +DATA/CONA/ONLINELOG/group_1.283.807831749
    TABLE SYS.WRI$_OPTSTAT_HISTGRM_HISTORY: ADDED INTERVAL PARTITION SYS_P226 (41328)
    VALUES LESS THAN (TO_DATE(' 2013-02-25 00:00:00', 'SYYYY-MM-DD HH24:MI:SS',
    'NLS_CALENDAR=GREGORIAN'))
    

Manual Approach

You don't have to fire up the DBCA interface; a simple SQL command does the trick. Connect to the CDB as SYSDBA:
$ sqlplus sys/oracle as sysdba
SQL> create pluggable database pdb3
2 admin user syspdb3 identified by syspdb3
3 roles=(CONNECT,DBA)
4 /
Pluggable database created.
You will learn about the different clauses used here (admin user, roles, etc.) later. The PDB is created but not open yet. You have to manually open it:
SQL> alter pluggable database pdb3 open;
Pluggable database altered.
Now that the database is open, you can connect to it using the different methods shown earlier. Note that you have to be authenticated with the SYSDBA role for this operation.
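
If you are not using Oracle Managed Files, the CREATE PLUGGABLE DATABASE statement also accepts a FILE_NAME_CONVERT clause that says where the copies of the seed's files should be placed. A sketch (both paths are assumptions):

SQL> create pluggable database pdb4
2 admin user syspdb4 identified by syspdb4
3 file_name_convert = ('/u02/oradata/CONA/pdbseed/', '/u02/oradata/CONA/pdb4/');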

Common and Local Users


With the introduction of PDBs comes a very important point not to be forgotten: there are now two different types of databases, CDB and PDB. Consequently, the concept of users in the database is bound to be affected. Let's connect to one of the PDBs (PDB1) and create a user called HRMASTER that will hold our HR application.
SQL> alter session set container = pdb1;
Session altered.
SQL> create user hrmaster identified by hrmaster;
User created.
This user is created in the PDB named pdb1 alone; not in the CDB database. You can exploit this fact by creating the user in different containers. You can create a user called HRMASTER in another PDB called, say, PDB2. Since PDB1 and PDB2 are considered two databases, this is absolutely possible. This is exactly how you will be able to host two Peoplesoft applications in the same database - by creating two different PDBs and creating a SYSADM user in each PDB.
Since this user HRMASTER is visible inside the PDB named PDB1 only, it is called a "local" user. HRMASTER user in PDB1 is an entirely different entity from the HRMASTER user in PDB2. However, if you want to create a user that is visible to all the PDBs in that CDB, you should create a *common* user. Logging into the container database (CDB) as a DBA user, you create a common user as shown below:
SQL> create user c##finmaster identified by finmaster container = all;
User created.
Note, the common user must have a special prefix of "C##" to differentiate itself from local users. Although the user is common, you have to grant the necessary privileges in the appropriate PDBs. Granting a privilege in one PDB will not automatically make the user have that privilege in another. If you want to grant a specific privilege in all the PDBs, e.g. you want to grant "create session" to the common user in all the containers, you could grant that privilege individually in all the PDBs or use the SQL like the following to grant in all PDBs in one statement:
SQL> grant create session to c##finmaster container = all;
Grant succeeded.
To confirm that this user is indeed common, you can connect to one of the PDBs:
[oracle@prosrv1 ~]$ sqlplus c##finmaster/finmaster@pdb1

... output truncated ...
SQL> show con_name
CON_NAME
------------------------------
PDB1
Remember, you did not create the user explicitly in the container PDB1. Since the user is common, it can connect to that PDB. Now test the same in another PDB - PDB2:
[oracle@prosrv1 ~]$ sqlplus c##finmaster/finmaster@pdb2

... output truncated ...

SQL> show con_name
CON_NAME
------------------------------
PDB2
However, if you attempt to connect as HRMASTER:
SQL> conn hrmaster/hrmaster@pdb2
ERROR:
ORA-01017: invalid username/password; logon denied
This error occurred because the user HRMASTER has not been created in the PDB named PDB2. It has been created as a local user in PDB1 alone; so the attempt to connect to PDB2 fails. Oracle already created some common users. Notable examples are SYS, SYSTEM, etc.
Now that you know the difference between a local user and a common user, you have to learn another very important difference. A common user is merely a user that can connect to a PDB without explicitly being created there. The schemas of the common users in the PDBs are different from each other. For instance C##FINMASTER schema in PDB1 cannot see anything owned by C##FINMASTER in PDB2. Let's see with an example. Let's connect to PDB1 and create a table T1:
[oracle@prosrv1 ~]$ sqlplus c##finmaster/finmaster@pdb1
... output truncated ...
SQL> show con_name
CON_NAME
------------------------------
PDB1

SQL> create table t1 (col1 number);
Table created.
Now connect to PDB2 using the same user and check for the table T1:
[oracle@prosrv1 ~]$ sqlplus c##finmaster/finmaster@pdb2

... output truncated ...
SQL> show con_name
CON_NAME
------------------------------
PDB2

SQL> desc t1
ERROR:
ORA-04043: object t1 does not exist
This error occurred because the schema is different. The common user is merely an authentication mechanism; the actual storage of data is different across different PDBs.
Considering there are two types of users now, how do you find which users are local and which are common? A special type of view prefixed with CDB_ shows information that relates to the entire CDB. You will learn more about this type of view in the section on the data dictionary; but here is one for now - CDB_USERS (for all the users in the CDB):
SQL> select con_id, common, username from cdb_users where username like '%MASTER';
    CON_ID COM USERNAME
---------- --- -------------
         1 YES C##FINMASTER
         3 NO  HRMASTER
         3 YES C##FINMASTER
         4 YES C##FINMASTER

The column COMMON shows you if the user is common or not. From the output you know C##FINMASTER is common while HRMASTER is not. You can also see that C##FINMASTER shows up in all containers while HRMASTER shows up only in container 3, where it was originally created.
Although common users can be created in a CDB, there is little use of that in a real life application. Ordinarily you will create local users in each PDB as required and that is what Oracle recommends.

Administration


So far you have learned how the PDBs are considered independent from each other, allowing you to create users with the same names while not proliferating actual databases. The next important topic you are probably interested in is how to manage this entire infrastructure. Since the PDBs are logically separate, it's quite conceivable that separate DBAs are responsible for managing them. In that case, you want to make sure the privileges of these DBAs fall within the context of the respective container and not outside of it. Earlier you saw how to create a PDB. Here it is once again:
SQL> create pluggable database pdb3
2 admin user syspdb3 identified by syspdb3
3 roles=(CONNECT,DBA);
Note the clause "admin user syspdb3 identified by syspdb3". It means the PDB has a user called syspdb3 which is an admin user. The next line "roles=(CONNECT,DBA)" indicates that the user has the CONNECT and DBA roles. This becomes the DBA user of the PDB. Let's see that by connecting as that user and confirming that the roles have enabled.
[oracle@prosrv1 trace]$ sqlplus syspdb3/syspdb3@pdb3
...
SQL> select * from session_roles;
ROLE
-------
CONNECT
DBA
PDB_DBA
... output truncated ...

Note that this is for this PDB alone; not in any other PDB. For instance, if you connect to PDB2, it will not work:
[oracle@prosrv1 ~]$ sqlplus syspdb3/syspdb3@pdb2
ERROR:
ORA-01017: invalid username/password; logon denied
Back in PDB3, since this user is a DBA, it can alter the parameters of the PDB as needed:
SQL> alter system set optimizer_index_cost_adj = 10;
System altered.
This is a very important point to consider. The parameter changed here applies only to PDB3, not to any other PDB. Let's confirm that. In PDB2:
SQL> conn syspdb2/syspdb2@pdb2
Connected.
SQL> show parameter optimizer_index_cost_adj
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_index_cost_adj             integer     100
However, in PDB3:
SQL> conn syspdb3/syspdb3@pdb3
Connected.
SQL> show parameter optimizer_index_cost_adj
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_index_cost_adj             integer     10

PDB2 has the old, unchanged value while PDB3 has the changed value. This is a very important property of the PDBs. You can change parameters in specific containers to suit the application. There is no need to force a common parameter value for all the containers in the CDB. A classic example is in the case of two containers - one for production and other for development. You may want to force values of a parameter for performance reasons. This could be a permanent setting or just temporarily for some experiments. You can change the value in only one container without affecting the others.
Note that not all the parameters can be modified in a PDB. A column ISPDB_MODIFIABLE in V$PARAMETER shows whether the parameter can be modified in a PDB or not. Here is an example:
SQL> select name, ispdb_modifiable
2 from v$parameter
3 where name in (
4 'optimizer_index_cost_adj',
5 'audit_trail'
6* )
SQL> /
NAME                           ISPDB
------------------------------ -----
audit_trail                    FALSE
optimizer_index_cost_adj       TRUE
The audit_trail parameter is for the entire CDB; you can't modify it for individual PDBs. This makes sense in many ways: since the audit trail belongs to the physical database, not a virtual one, it is not modifiable for individual PDBs. Similarly, some parameters such as db_block_buffers, which applies to an Oracle instance, are non-modifiable as well. A PDB doesn't have an instance, so the parameter has no relevance in the PDB context and hence is non-modifiable.
Additionally, you can also use any of the normal ALTER SYSTEM commands. A common example is identifying errant sessions and killing them. First we identify the session from V$SESSION. However, since V$SESSION shows background processes for the CDB as well, you need to trim it down to show only the current PDB. To do that, get the container ID and filter the output from v$session using it.
SQL> show con_id
CON_ID
------------------------------
5

SQL> select username, sid, serial#
2 from v$session
3 where con_id = 5;
USERNAME                              SID    SERIAL#
------------------------------ ---------- ----------
SYSPDB3                                49      54303
C##FINMASTER                          280      13919
2 rows selected.
SQL> alter system kill session '280,13919';
System altered.
There is a special case for starting and shutting down the PDBs. Remember, the PDBs themselves don't have an instance (processes and memory areas), controlfile, or redo logs. These elements belong to the CDB, and shutting the CDB down shuts down all the PDBs. Therefore there is no concept of an instance shutdown for a PDB. When you shut down a PDB, all that happens is that the PDB is closed. Similarly, starting up a PDB merely opens it; the instance is already started, since that belongs to the CDB. Let's see this with an example.
[oracle@prosrv1 pluggable]$ sqlplus sys/oracle@pdb1 as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Sat Mar 9 14:51:38 2013
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
SQL> shutdown
Pluggable Database closed.

Here is the corresponding entry from alert log:
2013-03-09 14:51:50.022000 -05:00
ALTER PLUGGABLE DATABASE CLOSE
ALTER SYSTEM: Flushing buffer cache inst=0 container=3 local
Pluggable database PDB1 closed
Completed: ALTER PLUGGABLE DATABASE CLOSE


Adding Services in PDBs


Remember, Oracle automatically creates a service with the same name as each PDB. This lets you connect to the PDBs directly from clients using the SERVICE_NAME clause in the TNS connect string. However, occasionally you may want to add your own services for the PDBs. To do so you can use the SRVCTL command with a special parameter "-pdb" to indicate the PDB the service should be created in:
[oracle@prosrv1 ~]$ srvctl add service -db CONA -s SERV1 -pdb PDB1
If you want to check on the service SERV1, use:
[oracle@prosrv1 ~]$ srvctl config service -db CONA -s SERV1
Service name: SERV1
Service is enabled
Cardinality: SINGLETON
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Global: false
Commit Outcome: false
Failover type:
Failover method: TAF failover retries:TAF failover delay:
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Pluggable database name: PDB1
Maximum lag time: ANY
SQL Translation Profile:
Retention: 86400 seconds
Replay Initiation Time: 300 seconds
Session State Consistency:

Remember a very important fact while defining services in PDBs: service names are unique within a single CDB. Therefore you can't create a service called SERV1 in another PDB. However, this is not a new rule specific to CDBs or PDBs. If you had a listener servicing multiple databases on a single server in prior versions, the service names had to be unique across all those databases as well. Here is what happens when you try to create this service in PDB2, which is in the same CDB named CONA:
[oracle@prosrv1 ~]$ srvctl add service -db CONA -s SERV1 -pdb PDB2

PRKO-3117 : Service SERV1 already exists in database CONA
The service you just created is not started right away. To start it, issue:
[oracle@prosrv1 ~]$ srvctl start service -db CONA -s SERV1

[oracle@prosrv1 ~]$ srvctl status service -db CONA -s SERV1
Service SERV1 is running
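
Once the service is running, clients can reach the PDB through it like any other service, for example with an easy connect string (the host name is the one used throughout this example):

[oracle@prosrv1 ~]$ sqlplus system/oracle@prosrv1.proligence.com:1521/SERV1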


Data Dictionary


You are obviously familiar with the DBA_* views (e.g. DBA_USERS, DBA_TABLES, etc.) in the Oracle Database. However, there is now an element of context in the database. Remember from our earlier discussions that a PDB, for all practical purposes, behaves like a regular database. Therefore, the DBA_* views show data for the individual PDB alone, not for all the PDBs inside the CDB. To get data across all the PDBs inside a CDB, you use a different kind of view, prefixed with CDB_. You saw an example of this type of view earlier: CDB_USERS, which shows the users across the entire CDB. The familiar DBA_USERS view shows the users in the current PDB alone.
Similarly, the dynamic performance views (the ones with the V$ prefix) show data for the specific PDB alone, not across all the PDBs. If you want the data from different PDBs, you should be connected to the root container (CDB$Root). Here is an example of the view V$PDBS, which shows information on all the PDBs. Connect to the root container and issue this SQL:
SQL> select con_id, name, open_time, create_scn, total_size
2* from v$pdbs
SQL> /

    CON_ID NAME                    OPEN_TIME                 CREATE_SCN TOTAL_SIZE
---------- ----------------------- ------------------------- ---------- ----------
         2 PDB$SEED                19-FEB-13 09.54.26.452 PM    1688774  283115520
         3 PDB1                    19-FEB-13 09.55.06.421 PM    1875166  288358400
         4 PDB2                    24-FEB-13 10.53.08.615 AM    2710636  288358400

You have to be careful when reading data from the dynamic performance views (the ones with the V$ prefix, e.g. V$SESSION). They may show some data from outside the context of the current PDB as well. The rows have a column called CON_ID. Depending on the container you are connected to, CON_ID is reported differently. Let's see it with an example. When you are connected to the root (the CDB directly), a container ID of 0 marks data that belongs to the entire CDB. Here is an example:
SQL> select sid, username, program
2 from v$session
3 where con_id = 0;

       SID USERNAME                       PROGRAM
---------- ------------------------------ -------------------------------------
         1                                oracle@prosrv1.proligence.com (PMON)
         6                                oracle@prosrv1.proligence.com (LGWR)
... output truncated ...

All the Oracle background processes are relevant to the CDB, not to the PDBs at all. Therefore the CON_ID is 0 for them. Now let's see the data where the CON_ID is 1.
SQL> l
1 select sid, username, program
2 from v$session
3* where con_id = 1
SQL> /

       SID USERNAME PROGRAM
---------- -------- -------------------------------------------
        42 SYS      oraagent.bin@prosrv1.proligence.com (TNS V1-V3)
        50 SYS      sqlplus@prosrv1.proligence.com (TNS V1-V3)
       237 SYS      oraagent.bin@prosrv1.proligence.com (TNS V1-V3)


You can see that these are the sessions connected to the database. Sessions are always relevant in the context of some container. In the context of CON_ID = 1, which is the root container, the user sessions such as SQL*Plus and the agents are relevant, not the background processes. Therefore sessions related to background processes like PMON and LGWR are not shown when CON_ID is 1. Now, let's repeat the exercise while connected to PDB3.
SQL> select sid, username, program
2 from v$session
3* where con_id = 0;

SID USERNAME PROGRAM
---------- -------- ------------------------------------------------
1 oracle@prosrv1.proligence.com (PMON)
2 oracle@prosrv1.proligence.com (VKTM)


Again, it shows the Oracle background processes of the instance - data relevant to the entire CDB, not to a specific PDB. Now check the data filtered for the current CON_ID:
SQL> show con_id
CON_ID
------------------------------
5

SQL> select sid, username, program
2 from v$session
3 where con_id = 5;

SID USERNAME PROGRAM
---------- -------- -----------------------------------------
50 SYSPDB3 sqlplus@prosrv1.proligence.com (TNS V1-V3)

This shows only the sessions relevant in the context of this container. User sessions belong to a specific container, not to the CDB as a whole, so they are shown only under their own container ID. Remember, the root (CDB$ROOT) is also a container, so sessions connected directly to it show up with container ID 1 (the ID of the root container). This distinction in the data presented - some rows specific to the container, others applying to the CDB as a whole - is seen in all V$ views except a few important ones: V$SYSSTAT, V$SYS_TIME_MODEL, V$SYSTEM_EVENT and V$SYSTEM_WAIT_CLASS. These four views present only the container-specific data. Here is an example:
SQL> show con_id
CON_ID
------------------------------
5

SQL> select con_id, count(1) from v$sysstat group by con_id
2 /
CON_ID COUNT(1)
---------- ----------
5 817

The view V$SYSSTAT contains data specific to container ID 5 (the current one) only, nothing related to the entire CDB. The same behavior applies to the other three views. However, when managing an entire CDB you will want to see the statistics of the different PDBs simultaneously, without connecting to one PDB at a time. Therefore a new set of views with the prefix V$CON_ is provided to give CDB-wide information for those four views. Here is an example where I want to find out the logical reads made by the different containers in this CDB:
SQL> show con_name
CON_NAME
------------------------------
CDB$ROOT

SQL> select con_id, value
2 from v$con_sysstat
3 where name = 'session logical reads';

CON_ID VALUE
---------- ----------
1 956853686
2 955759828
3 864433091
4 329765911
5 496782913
Looking at the data, I can clearly see that the first three containers exert most of the pressure on the database, while the last two exert far less.
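If you want the PDB names alongside the numbers, you can join V$CON_SYSSTAT to V$PDBS on CON_ID. A minimal sketch (note that the root, CON_ID 1, has no row in V$PDBS and therefore drops out of this join):
SQL> select p.name, s.value
2 from v$con_sysstat s, v$pdbs p
3 where s.con_id = p.con_id
4 and s.name = 'session logical reads'
5 order by s.value desc;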


Backup and Restore/Recover

Backup is a necessary feature of any database. Since the concept of the database itself has changed - from a physical database to a virtual pluggable database, where the CDB holds the common files and each PDB adds some files specific to itself - the question now is how RMAN is used: on the CDB as a whole, or selectively on a PDB? If we use it on a PDB, it can't stop at just the PDB-specific files, since common files, such as those of the root's SYSTEM and SYSAUX tablespaces, are also required in a recovery. So RMAN has to understand what to back up in the context of its connection. Well, have no fear; RMAN is smart enough to understand what it should back up. Let's start with a simple example where we connect to the pluggable database PDB1 instead of the physical CDB.
[oracle@prosrv1 pluggable]$ rman target=sys/oracle@pdb1
Recovery Manager: Release 12.1.0.1.0 - Production on Thu Mar 7 21:36:44 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
connected to target database: CONA (DBID=1690995584)
Note how the target database shows up as CONA (the CDB), not the PDB itself. This should be quite intuitive: since the physical database owns the datafiles, controlfile and redolog files, it makes sense for RMAN to attach there. However, when RMAN backs up the database, it backs up only the files relevant to the PDB:
RMAN> backup incremental level 1 database;
Starting backup at 07-MAR-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=270 device type=DISK
no parent backup or copy of datafile 9 found
no parent backup or copy of datafile 8 found
channel ORA_DISK_1: starting incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00009 name=+CONDATA/CONA/D62013E9C50254BDE04380A8840A3EE3/DATAFILE/sysaux.270.807832473
input datafile file number=00008 name=+CONDATA/CONA/D62013E9C50254BDE04380A8840A3EE3/DATAFILE/system.271.807832473
channel ORA_DISK_1: starting piece 1 at 07-MAR-13
channel ORA_DISK_1: finished piece 1 at 07-MAR-13
piece handle=+DATA/CONA/D62013E9C50254BDE04380A8840A3EE3/BACKUPSET/2013_03_07/nnndn0_tag20130307t213846_0.289.809473129 tag=TAG20130307T213846 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:15
channel ORA_DISK_1: starting incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00010 name=+CONDATA/CONA/D62013E9C50254BDE04380A8840A3EE3/DATAFILE/users.273.807832517
channel ORA_DISK_1: starting piece 1 at 07-MAR-13
channel ORA_DISK_1: finished piece 1 at 07-MAR-13
piece handle=+DATA/CONA/D62013E9C50254BDE04380A8840A3EE3/BACKUPSET/2013_03_07/nnndn1_tag20130307t213846_0.290.809473203 tag=TAG20130307T213846 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 07-MAR-13
Starting Control File and SPFILE Autobackup at 07-MAR-13
piece handle=+DATA/CONA/D62013E9C50254BDE04380A8840A3EE3/AUTOBACKUP/2013_03_07/s_809473204.291.809473205 comment=NONE
Finished Control File and SPFILE Autobackup at 07-MAR-13
To confirm that RMAN knows which files to back up, issue:
RMAN> report schema
2> ;
Report of database schema for database with db_unique_name CONA
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
8 300 SYSTEM *** +CONDATA/CONA/D62013E9C50254BDE04380A8840A3EE3/DATAFILE/system.271.807832473
9 2530 SYSAUX *** +CONDATA/CONA/D62013E9C50254BDE04380A8840A3EE3/DATAFILE/sysaux.270.807832473
10 5 USERS *** +CONDATA/CONA/D62013E9C50254BDE04380A8840A3EE3/DATAFILE/users.273.807832517
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
3 20 TEMP 32767 +CONDATA/CONA/D62013E9C50254BDE04380A8840A3EE3/TEMPFILE/temp.272.807832501
Note how RMAN reported only the relevant files of the PDB, not the entire CDB. If you check another PDB - PDB2 - you will see that a different set of files constitutes it. The tablespaces AP_DATA and AP_IDX were created in PDB2 alone, and they show up in the output below but not in the output for PDB1.
[oracle@prosrv1 pluggable]$ rman target=sys/oracle@pdb2
Recovery Manager: Release 12.1.0.1.0 - Production on Thu Mar 7 21:45:59 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
connected to target database: CONA (DBID=1690995584)
RMAN> report schema;
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CONA
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
11 290 SYSTEM *** +CONDATA/CONA/D67B69C12A8C19BAE04380A8840A62A8/DATAFILE/system.274.808224757
12 1930 SYSAUX *** +CONDATA/CONA/D67B69C12A8C19BAE04380A8840A62A8/DATAFILE/sysaux.275.808224763
13 357 USERS *** +CONDATA/CONA/D67B69C12A8C19BAE04380A8840A62A8/DATAFILE/users.277.808224803
16 100 AP_DATA *** +DATA/CONA/D67B69C12A8C19BAE04380A8840A62A8/DATAFILE/ap_data.286.808920195
17 100 AP_IDX *** +DATA/CONA/D67B69C12A8C19BAE04380A8840A62A8/DATAFILE/ap_idx.287.808920247
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
4 20 TEMP 32767 +CONDATA/CONA/D67B69C12A8C19BAE04380A8840A62A8/TEMPFILE/temp.276.808224783
Now contrast this with the output from the same report schema command while connected to the root container. Note that all the files - of all the PDBs - are reported here.
[oracle@prosrv1 pluggable]$ rman target=/
Recovery Manager: Release 12.1.0.1.0 - Production on Thu Mar 7 21:48:53 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
connected to target database: CONA (DBID=1690995584)
RMAN> report schema
2> ;
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CONA
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 830 SYSTEM *** +CONDATA/CONA/DATAFILE/system.258.807831649
3 3370 SYSAUX *** +CONDATA/CONA/DATAFILE/sysaux.257.807831595
4 260 UNDOTBS1 *** +CONDATA/CONA/DATAFILE/undotbs1.260.807831715
5 260 PDB$SEED:SYSTEM *** +CONDATA/CONA/DATAFILE/system.267.807831773
6 5 USERS *** +CONDATA/CONA/DATAFILE/users.259.807831713
7 670 PDB$SEED:SYSAUX *** +CONDATA/CONA/DATAFILE/sysaux.266.807831773
8 300 PDB1:SYSTEM *** +CONDATA/CONA/D62013E9C50254BDE04380A8840A3EE3/DATAFILE/system.271.807832473
9 2530 PDB1:SYSAUX *** +CONDATA/CONA/D62013E9C50254BDE04380A8840A3EE3/DATAFILE/sysaux.270.807832473
10 5 PDB1:USERS *** +CONDATA/CONA/D62013E9C50254BDE04380A8840A3EE3/DATAFILE/users.273.807832517
11 290 PDB2:SYSTEM *** +CONDATA/CONA/D67B69C12A8C19BAE04380A8840A62A8/DATAFILE/system.274.808224757
12 1930 PDB2:SYSAUX *** +CONDATA/CONA/D67B69C12A8C19BAE04380A8840A62A8/DATAFILE/sysaux.275.808224763
13 357 PDB2:USERS *** +CONDATA/CONA/D67B69C12A8C19BAE04380A8840A62A8/DATAFILE/users.277.808224803
14 280 PDB3:SYSTEM *** +CONDATA/CONA/D6D1419BB837449AE04380A8840A7601/DATAFILE/system.278.808593451
15 1340 PDB3:SYSAUX *** +CONDATA/CONA/D6D1419BB837449AE04380A8840A7601/DATAFILE/sysaux.279.808593451
16 100 PDB2:AP_DATA *** +DATA/CONA/D67B69C12A8C19BAE04380A8840A62A8/DATAFILE/ap_data.286.808920195
17 100 PDB2:AP_IDX *** +DATA/CONA/D67B69C12A8C19BAE04380A8840A62A8/DATAFILE/ap_idx.287.808920247
18 50 PDB3:USERS *** +DATA/CONA/D6D1419BB837449AE04380A8840A7601/DATAFILE/users.288.809001765
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 88 TEMP 32767 +CONDATA/CONA/TEMPFILE/temp.265.807831767
2 94 PDB$SEED:TEMP 32767 +CONDATA/CONA/DATAFILE/pdbseed_temp01.dbf
3 20 PDB1:TEMP 32767 +CONDATA/CONA/D62013E9C50254BDE04380A8840A3EE3/TEMPFILE/temp.272.807832501
4 20 PDB2:TEMP 32767 +CONDATA/CONA/D67B69C12A8C19BAE04380A8840A62A8/TEMPFILE/temp.276.808224783
5 20 PDB3:TEMP 32767 +CONDATA/CONA/D6D1419BB837449AE04380A8840A7601/TEMPFILE/temp.280.808593479
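Incidentally, while connected to the root you do not have to back up the entire CDB; RMAN accepts PDB-level syntax directly. A sketch, using the PDB names from this article:
RMAN> backup pluggable database pdb1, pdb2;
Similarly, "backup database root" backs up only the root container's files.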
After a backup is taken you may want to confirm its existence. In a multitenant environment you may be interested in the backups of a specific PDB, not all of them; it's quite likely that the backup of one PDB is more current than another's. To check for the backups of a specific PDB, use the familiar "list backup" command with an additional clause:
RMAN> list backup of pluggable database pdb1;
List of Backup Sets
===================
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
4 Full 1.92G DISK 00:01:40 07-MAR-13
BP Key: 4 Status: AVAILABLE Compressed: NO Tag: TAG20130307T215412
Piece Name:
+DATA/CONA/BACKUPSET/2013_03_07/nnndf0_tag20130307t215412_0.292.809474053
List of Datafiles in backup set 4
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
8 Full 5437070 07-MAR-13 +CONDATA/CONA/D62013E9C50254BDE04380A8840A3EE3/DATAFILE/system.271.807832473
9 Full 5437070 07-MAR-13 +CONDATA/CONA/D62013E9C50254BDE04380A8840A3EE3/DATAFILE/sysaux.270.807832473
10 Full 5437070 07-MAR-13 +CONDATA/CONA/D62013E9C50254BDE04380A8840A3EE3/DATAFILE/users.273.807832517
The same concept applies to recovery as well. Let's see how to recover a specific PDB without affecting the other PDBs in the parent CDB.
RMAN> alter pluggable database pdb1 close;
Statement processed
RMAN> restore database pdb1;
Starting restore at 07-MAR-13
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=29 device type=DISK
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00008 to
+CONDATA/CONA/D62013E9C50254BDE04380A8840A3EE3/DATAFILE/system.271.807832473
channel ORA_DISK_1: restoring datafile 00009 to
+CONDATA/CONA/D62013E9C50254BDE04380A8840A3EE3/DATAFILE/sysaux.270.807832473
channel ORA_DISK_1: restoring datafile 00010 to
+CONDATA/CONA/D62013E9C50254BDE04380A8840A3EE3/DATAFILE/users.273.807832517
channel ORA_DISK_1: reading from backup piece
+DATA/CONA/BACKUPSET/2013_03_07/nnndf0_tag20130307t215412_0.292.809474053
During recovery, the same principles you are used to in normal databases hold true. The only difference is that you should connect to the specific PDB, for example with the SYSBACKUP privilege.
SQL> conn sys/oracle@pdb1 as sysbackup
Connected.
SQL -- always a good idea to check the container (or PDB) you are connected to
SQL> show con_name
CON_NAME
------------------------------
PDB1

SQL> recover database;
ORA-00279: change 5437070 generated at 03/07/2013 21:54:13 needed for thread 1
ORA-00289: suggestion :
+DATA/CONA/ARCHIVELOG/2013_03_07/thread_1_seq_243.294.809474599
ORA-00280: change 5437070 for thread 1 is in sequence #243
... output truncated ...
ORA-00279: change 5463466 generated at 03/07/2013 22:09:45 needed for thread 1
ORA-00289: suggestion :
+DATA/CONA/ARCHIVELOG/2013_03_08/thread_1_seq_246.295.809485415
ORA-00280: change 5463466 for thread 1 is in sequence #246
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
Log applied.
Media recovery complete.

SQL> alter database open;
Database altered.
See how intuitive it is to use backup and recovery in a multitenant environment? It's no different from the same operations in a normal database environment.

Point in Time Recovery in PDB

Now that you have learned about backup and recovery, let's discuss another vexing topic - point in time recovery (PITR). PITR allows you to recover up to a specific point in time. In a multitenant environment, however, you have several PDBs, and you may want to perform PITR on just one of them. Can you do PITR on one PDB alone, leaving the rest current? Of course you can. Here is an example of a point in time recovery of a specific PDB - PDB1 - inside a CDB that contains other PDBs.
RMAN> run {
2> set until time '08-MAR-13';
3> restore pluggable database pdb1;
4> recover pluggable database pdb1;
5> }

Here is the output (truncated for brevity):
Starting recover at 08-MAR-13
using channel ORA_DISK_1
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified
Point-in-Time
List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1
Creating automatic instance, with SID='pvsj'
initialization parameters used for automatic instance:
db_name=CONA
db_unique_name=pvsj_pitr_pdb1_CONA
compatible=12.0.0.0.0
db_block_size=8192
db_files=200
sga_target=1G
processes=80
diagnostic_dest=/u02/oradb
#No auxiliary destination in use
enable_pluggable_database=true
_clone_one_pdb_recovery=true
control_files=+DATA/CONA/DATAFILE/current.302.809519541
#No auxiliary parameter file used
starting up automatic instance CONA
Oracle instance started
As you know, a point in time recovery requires restoration and recovery of the datafiles. Since all the PDBs share some files - more specifically, the undo tablespace datafile - recovering only a single PDB poses a challenge: the shared undo tablespace datafile can't be overwritten, so the PDB can't be restored in place. Therefore RMAN creates a brand new instance (named "pvsj" in this example) and restores all the necessary datafiles there. In that temporary instance, a special parameter, _clone_one_pdb_recovery, is set to true, which indicates that the restored database is actually a PDB.
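If you would rather control where RMAN places the files of that auxiliary instance, you can name the location yourself with the auxiliary destination clause. A sketch, reusing the earlier run block (the path /u02/aux is hypothetical):
RMAN> run {
2> set until time '08-MAR-13';
3> restore pluggable database pdb1;
4> recover pluggable database pdb1 auxiliary destination '/u02/aux';
5> }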
So, your knowledge of point in time recovery in RMAN does not have to go through an overhaul in this multitenant environment.

Resource Management


Now that you have a fair idea of PDBs, you realize that they are essentially virtual databases running inside a physical database (the CDB). All the components of the physical database - the processes, the memory, and so on - belong to the CDB, not the PDBs. Therefore it is entirely possible for a single PDB to hog all the system resources (such as CPU) inside a CDB, leaving nothing for the rest. There are OS-level utilities that can cap the CPU of a process; but the processes (PMON, SMON, etc.) belong to the CDB - not to a PDB - making that approach ineffective. The same is true for CDB-level resources that can't be defined at the PDB level. Take, for instance, the parallel query servers. The parameter parallel_max_servers limits the number of PQ servers that can exist in an instance; but this is defined at the CDB level, not per PDB. So it's possible for a single PDB to kick off all the allowed PQ servers, starving the other PDBs of even a single PQ server. Monopolization of resources is the single biggest risk to a multitenant database implementation.
Well, worry no more. Fortunately, the creators of this powerful feature have thought about this risk and built in a mechanism to prevent it. The answer lies in the Resource Manager, also known as the Database Resource Manager (DBRM). You are probably aware of the DBRM that has existed in Oracle Databases for quite a while; but that governs resources inside a single database. PDBs are databases-in-a-database, so that concept does not work in their case. The DBRM now has new features to force PDBs to play nicely together in a single CDB. In this section you will learn how to govern resources among the PDBs using this new CDB-level DBRM. We will not explore the existing DBRM features that govern resources among sessions inside one database. There are three new directive attributes that control resources at the PDB level.
  1. CPU allocation is controlled by an attribute called "shares", which determines how the CPU is divided among the PDBs in case of a crunch. So, if you have two PDBs - PDB1 and PDB2 - with shares of 1 and 2 respectively, you are telling the DBRM that PDB2 should get twice the CPU of PDB1. Note that the DBRM kicks in only when there is contention for CPU. If the total demand is less than 100%, everyone gets as much as they want; but if there is contention, PDB2 is guaranteed 2/(1+2), i.e. two-thirds of the available CPU, while PDB1, whose share is only 1, is guaranteed one-third.
  2. The second attribute is "utilization_limit", which puts a ceiling on the CPU consumption of a specific PDB even if spare CPU is available in the CDB. It is specified as a percentage of the total CPU and lets you cap a PDB's CPU consumption for any reason.
  3. The third attribute is "parallel_server_limit", which limits the number of parallel query servers a PDB can start, expressed as a percentage of the overall maximum number of parallel query servers in the CDB.
Let's see how to implement this with an example. Suppose we have three PDBs named PDB1, PDB2 and PDB3. PDB1 hosts the most important applications. If there is CPU contention, we want to give PDB1 50%, PDB2 25% and PDB3 25% of the available CPU. When there is plenty of CPU, we don't want to limit PDB1 at all, since it hosts critical apps; but we want to cap PDB2 and PDB3 so that they can never take up more than 50% and 70% of the CPU respectively. We also want to limit their parallel query servers to 50% and 70% of the value defined by parallel_max_servers.
To implement this structure, we will execute the following PL/SQL block:
begin
   dbms_resource_manager.clear_pending_area();
   dbms_resource_manager.create_pending_area();
   -- create the CDB resource plan
   dbms_resource_manager.create_cdb_plan(
      plan    => 'dayshift_cona_plan',
      comment => 'cdb plan for cona'
   );
   -- set the limits in the plan for PDB1
   dbms_resource_manager.create_cdb_plan_directive(
      plan                  => 'dayshift_cona_plan',
      pluggable_database    => 'pdb1',
      shares                => 2,
      utilization_limit     => 100,
      parallel_server_limit => 100
   );
   -- and now the same for PDB2
   dbms_resource_manager.create_cdb_plan_directive(
      plan                  => 'dayshift_cona_plan',
      pluggable_database    => 'pdb2',
      shares                => 1,
      utilization_limit     => 50,
      parallel_server_limit => 50
   );
   -- and now PDB3
   dbms_resource_manager.create_cdb_plan_directive(
      plan                  => 'dayshift_cona_plan',
      pluggable_database    => 'pdb3',
      shares                => 1,
      utilization_limit     => 70,
      parallel_server_limit => 70
   );
   dbms_resource_manager.validate_pending_area();
   dbms_resource_manager.submit_pending_area();
end;
/
With the plan in place, you should now enable it for the CDB to start enforcing it:
SQL> alter system set resource_manager_plan = 'DayShift_CONA_Plan' scope=both;
System altered.
That's it. Now the PDBs will play along nicely within the CDB.
There is a little problem, you may say, when you create a new PDB inside the same CDB: since you haven't added that new PDB to the plan yet, it would not be subject to any restriction. Well, rest assured that there is a way. You can define "default" directive values that apply to any PDB not explicitly named in the plan, including PDBs created in the future. Here is how you define those default plan directives:
begin
   -- like any DBRM change, this requires a pending area
   dbms_resource_manager.clear_pending_area();
   dbms_resource_manager.create_pending_area();
   dbms_resource_manager.update_cdb_default_directive(
      plan                      => 'dayshift_cona_plan',
      new_shares                => 1,
      new_utilization_limit     => 30,
      new_parallel_server_limit => 30
   );
   dbms_resource_manager.validate_pending_area();
   dbms_resource_manager.submit_pending_area();
end;
/
Now all newly created PDBs in this CDB will be subject to this default directive. You can, of course, still define explicit directives for them later.
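To verify what the plan now contains, including the default directive, you can query the CDB plan directive view in the data dictionary. A sketch (output formatting omitted):
SQL> select pluggable_database, shares,
2 utilization_limit, parallel_server_limit
3 from dba_cdb_rsrc_plan_directives
4 where plan = 'DAYSHIFT_CONA_PLAN';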

Cloning

A very important and useful feature of PDBs in a multitenant environment is the ability to clone the PDB quickly and easily. Let's see an example where I am cloning the PDB1 to another PDB called PDB4:
SQL> create pluggable database pdb4
2 from pdb1
3 /
Pluggable database created.

After the PDB is created, let's check its datafiles:
SQL> alter session set container = pdb4;
Session altered.
SQL> show con_name
CON_NAME
------------------------------
PDB4
SQL> select name
2 from v$datafile;
NAME
--------------------------------------------------------------------------------
+CONDATA/CONA/DATAFILE/undotbs1.260.807831715
+CONDATA/CONA/D784602E6E8B308FE04380A8840A2F06/DATAFILE/system.281.809621997
+CONDATA/CONA/D784602E6E8B308FE04380A8840A2F06/DATAFILE/sysaux.282.809622005
+CONDATA/CONA/D784602E6E8B308FE04380A8840A2F06/DATAFILE/users.283.809622065
4 rows selected.
Oracle automatically copies the source datafiles (those of PDB1) to new files and assigns them to PDB4. This is very similar to the "duplicate database" command in RMAN; but since the source and target are just virtual databases, not all the files need be copied and no separate database instance is created. So apart from some additional storage, cloning a PDB does not consume any incremental resources.
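Note that a freshly cloned PDB is left in the MOUNTED state; you have to open it before applications can connect. A quick check and fix from the root (a sketch):
SQL> select name, open_mode from v$pdbs where name = 'PDB4';
SQL> alter pluggable database pdb4 open;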
In this example I used Oracle Managed Files (OMF), which lets the Oracle Database determine the location of the files. While that is a good practice, it's not absolutely necessary. If you use specific locations, and the locations differ for the target PDB, you can use the file_name_convert clause to have the files copied to the desired location, as the sketch below shows.
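Here is what that might look like; the new PDB name and both paths are hypothetical:
SQL> create pluggable database pdb5 from pdb1
2 file_name_convert = ('/u01/oradata/pdb1/', '/u01/oradata/pdb5/');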
Cloning is, in fact, the primary way PDBs are created. But cloning needs a very important ingredient - a source PDB to clone from. In this example the source was PDB1. But when you created your very first PDB, you didn't specify a source; so where did Oracle get the files from?
There is indeed a source. Do you remember seeing a PDB named PDB$SEED? Let's check the PDBs in our CDB:
SQL> conn system/oracle
Connected.
SQL> select name
2 from v$pdbs;
NAME
------------------------------
PDB$SEED
PDB1
PDB2
PDB3
PDB4
5 rows selected.
Here you can see the newly created PDB - PDB4. The PDB named PDB$SEED is the "seed" container from which all other containers are cloned. So when you create a fresh new PDB, with the syntax shown below:
create pluggable database pdb6
admin user syspdb6 identified by syspdb6;
It's actually cloned from PDB$SEED. This is a very important fact to remember. It means that if you want new databases to be created in a certain way - e.g., with a SYSTEM tablespace of a specific size - you can make that change in the seed PDB, and every new PDB will carry the same settings, since Oracle simply copies the files from the seed to the new PDB.

Transport


The idea of cloning is not limited to the same CDB, or even the same server. You can clone a PDB from a different CDB, or even a different host. You can also "move" a PDB from one CDB to another. For instance, suppose you have an application running against a PDB on a host called serv1. To debug an issue in the app, the developers want to point a test app at that database; but there is a little problem - the database is inside the production firewall and the test app server can't connect to it. You are asked to create a copy of the database outside the firewall. Normally you would resort to backup and restore - a workable but time consuming process, not to mention the careful planning and additional work it requires. With a PDB, it's a breeze: you just "transport" the PDB to a different CDB. Let's see an example where we transport a PDB called PDB4 to a different host.
  1. If the PDB is open, you should close it. 

    SQL> alter pluggable database pdb4 close;
    
    Pluggable database altered.
    
  2. Create the meta-information on the PDB in an XML file. 

    SQL> alter pluggable database pdb4
    2 unplug into 'pdb4_meta.xml';
    Pluggable database altered.
    
    This file is generated in the $ORACLE_HOME/dbs directory on Unix and in the %ORACLE_HOME%\database folder on Windows. If you examine the contents of the XML file, you will see that it contains the information on the PDB and its constituents, such as the tablespaces and datafiles.
  3. Copy this file as well as the datafiles to the target server. You probably have a listing of the datafiles already; if not, simply refer to the XML file - all the files are listed there. If the datafiles are on ASM, which they most likely are, use the copy command of ASMCMD (see the sketch after this list).
  4. On the target server, connect to the CDB with SYSDBA privilege:

    $ sqlplus sys/oracle as sysdba
    
  5. Execute this: 

    SQL> create pluggable database pdb9
    2 using 'pdb4_meta.xml';
    
    Pluggable database created.
    
    If you check in the alert log:
    Successfully created internal service pdb9 at open
    ALTER SYSTEM: Flushing buffer cache inst=0 container=6 local
    ****************************************************************
    Post plug operations are now complete.
    Pluggable database PDB9 with pdb id - 6 is now marked as NEW.
    ****************************************************************
    Completed: create pluggable database pdb9
    
  6. This PDB is not open yet. You should open it: 

    SQL> alter pluggable database pdb9 open;
    
    Now the PDB is created and ready for use. You can confirm it by listing the datafiles.
    SQL> select name from v$datafile;
    NAME
    --------------------------------------------------------------------------------
    +CONDATA/CONA/DATAFILE/undotbs1.260.807831715
    +CONDATA/CONA/D78933868BCA4E94E04380A8840A6D4A/DATAFILE/system.284.809642687
    +CONDATA/CONA/D78933868BCA4E94E04380A8840A6D4A/DATAFILE/sysaux.294.809642695
    +CONDATA/CONA/D78933868BCA4E94E04380A8840A6D4A/DATAFILE/users.295.809642771
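As promised in step 3, here is a minimal sketch of moving a datafile out of ASM with ASMCMD and shipping it to the target host. The staging directory and target host name are hypothetical; the ASM file name is taken from the earlier listing:
[oracle@prosrv1 ~]$ asmcmd cp +CONDATA/CONA/D784602E6E8B308FE04380A8840A2F06/DATAFILE/users.283.809622065 /stage/pdb4_users01.dbf
[oracle@prosrv1 ~]$ scp /stage/pdb4_users01.dbf oracle@testsrv1:/stage/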
    

History of PDBs


In a multitenant database environment you will normally create many PDBs, clone them, transport them, and so on. With time you may forget what you did where, and how the different PDBs came into being. Instead of trying to jog your memory, you can query a new view for that information. The view CDB_PDB_HISTORY shows the various operations performed on the PDBs inside the CDB. Here is an example:
SQL> select PDB_NAME, OPERATION, OP_TIMESTAMP, CLONED_FROM_PDB_NAME
2 from cdb_pdb_history
3 /
PDB_NAME OPERATION OP_TIMEST CLONED_FRO
---------- ---------------- --------- ----------
PDB$SEED UNPLUG 01-FEB-13
PDB$SEED PLUG 19-FEB-13
PDB1 CREATE 19-FEB-13 PDB$SEED
PDB$SEED UNPLUG 01-FEB-13
PDB$SEED PLUG 19-FEB-13
PDB$SEED UNPLUG 01-FEB-13
PDB$SEED PLUG 19-FEB-13
PDB3 CREATE 28-FEB-13 PDB$SEED
PDB$SEED UNPLUG 01-FEB-13
PDB$SEED PLUG 19-FEB-13
PDB2 CREATE 24-FEB-13 PDB$SEED
PDB4 CREATE 28-FEB-13 PDB1
11 rows selected.
You can see not only the creation dates, but also that PDB4 was cloned from PDB1 on 28th Feb.

Conclusion


In this article you learned about the new multitenancy option in Oracle Database 12c that allows you to create several virtual databases, called pluggable databases (PDBs) or containers, in a single physical database called a container database (CDB). The CDB is the one that has the Oracle instance associated with it, i.e. the background processes such as pmon and smon, and the memory areas such as the buffer cache and large pool. The PDBs do not have their own instances; they take up residence inside the CDB, with some additional storage exclusive to them. This arrangement lets you address the PDBs as independent databases. So you can create a user called, say, SCOTT in every PDB, instead of creating a new database for each such user. This is highly useful for applications that require a specifically named schema; e.g., PeopleSoft requires a user called SYSADM. If you want to place several PeopleSoft applications inside one database, you can't, since you can't create more than one user named SYSADM. So you had to resort to creating several databases to hold several copies of the application, each with its own memory and resource overheads. In the multitenancy model you create several PDBs instead of actual databases. Each PDB can then have a user called SYSADM and run its own copy of the PeopleSoft application without consuming additional memory and CPU.
While these PDBs are hosted in a single CDB, they have many of the characteristics of a regular independent database. For instance, you can set a specific optimizer parameter to different values in different PDBs. You can, of course, start and shut down a PDB leaving the other PDBs in the CDB unaffected. You can back up PDBs independently, since RMAN knows which files are relevant to which PDB. You can restore individual PDBs, and even perform a point in time recovery of one PDB while the others are running.
This allows you to create a true "cloud database" where you can host as many databases as you want without increasing the memory and CPU footprint. This is particularly useful when budgetary constraints prevent the creation of additional databases, hampering development cycles. You can spin up a PDB very quickly to host a development effort, or clone it from another PDB in the same CDB or a different one, even on a different server. This ability to create virtual databases decouples database provisioning from hardware provisioning. All the PDBs share the same resources; if the overall demand becomes too high, all you have to do is add resources to the CDB and all the PDBs benefit immediately. Perhaps the biggest advantage is seen during upgrades: instead of upgrading multiple databases, you upgrade the CDB and all the PDBs in it are upgraded at once.
Multitenancy, introduced in Oracle Database 12c, redefines the concept of creating and managing an Oracle Database, yet it does not present a steep learning curve for existing database administrators to master and use effectively.
