ORA-00257: archiver error. Connect AS SYSDBA only until resolved.


Error while connecting to the database through SQL Developer:




Cause
The archiver process of the Oracle database received an error while trying to archive a redo log. The most likely cause of this message is that the destination device is out of space to store the redo log file.


Solution
To resolve this issue, contact the DBA to clear the archive log destination and make room for new archived logs. Once the issue is resolved, test the connection and rerun the task.
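If you can still connect as SYSDBA, a quick way to see which destination is failing is the standard V$ARCHIVE_DEST view (a minimal check; destination numbers and error text will vary per system):

SQL> select dest_id, dest_name, status, error
     from v$archive_dest
     where status <> 'INACTIVE';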


This command will show you where your archivelogs are being written to:


SQL> show parameter log_archive_dest
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest                     string      /u01/archivelog/orcl/

If the ‘log_archive_dest’ parameter is empty, then you are most likely using ‘db_recovery_file_dest’ to store your archivelogs. Run the command below to see that location.


SQL> show parameter recovery

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      /u01/fast_recovery_area
db_recovery_file_dest_size           big integer 100G

You will see at least these two parameters if you’re on Oracle 10g, 11g, or 12c. The first parameter, ‘db_recovery_file_dest’, is where your archivelogs will be written, and the second is how much space you are allocating not only for those files, but also for other files like backups, redo logs, controlfile snapshots, and a few other files that may be created here by default if you don’t specify a different location.


SQL> archive log list;

Now, note that if you are using a destination of USE_DB_RECOVERY_FILE_DEST, you can find the actual archive destination by:

SQL> show parameter db_recovery_file_dest;


To find out what value is being used for db_recovery_file_dest_size, use:

SQL> SELECT * FROM V$RECOVERY_FILE_DEST;

You may find that SPACE_USED is the same as SPACE_LIMIT, which means the recovery area is full.
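One way to confirm how full the recovery area is, and how much of it RMAN could reclaim, is the standard V$RECOVERY_AREA_USAGE view (output differs per system):

SQL> select file_type, percent_space_used, percent_space_reclaimable
     from v$recovery_area_usage;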

Note that while resolving ORA-00257 you may also encounter ORA-16020 ("less destinations available than specified by LOG_ARCHIVE_MIN_SUCCEED_DEST") when changing the archive destination. Use the proper archivelog path, keeping in mind that you may need to take extra measures if you are using the Flash Recovery Area, as you will receive further errors if you attempt to set LOG_ARCHIVE_DEST:

SQL> alter system set log_archive_dest_n = 'LOCATION=/archivelogpath REOPEN';

(where n is the destination number, e.g. log_archive_dest_1)

The last step in resolving ORA-00257 is to switch the logfile to verify that archiving now works:

SQL> alter system switch logfile;



According to the alert log, my db_recovery_file_dest was full and its size needed to be increased.
If you have enough space on the underlying file system, simply increase db_recovery_file_dest_size; otherwise, first increase the size of the storage and then modify this parameter.

 SQL> show parameter db_recover 

 NAME                       TYPE      VALUE 
 ------------------------------------ ----------- ------------------------------ 
 db_recovery_file_dest             string      /u01/app/oracle/fast_recovery_ 
                                area 
 db_recovery_file_dest_size        big integer 4500M 


SQL> alter system set db_recovery_file_dest_size=5G; 
 System altered.



Now you can invoke SQL*Plus:

$ sqlplus /nolog
SQL> conn / as sysdba
SQL> archive log list;
Check the archive destination and clear out the archived logs there. If archiving is no longer needed, you can instead disable archivelog mode:

SQL> shutdown immediate 
SQL> startup mount
SQL> alter database noarchivelog;
SQL> alter database open;
SQL> archive log list;


You should purge the archived logs with RMAN:

RMAN>delete archivelog all;

RMAN> crosscheck archivelog all;

RMAN> DELETE NOPROMPT EXPIRED ARCHIVELOG ALL;
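If you would rather purge only archivelogs that have already been backed up, a safer variant is the following (adjust the device type to wherever your backups go):

RMAN> delete archivelog all backed up 1 times to device type disk;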

SQL> SELECT * FROM V$RECOVERY_FILE_DEST;


SQL> select * from V$RECOVERY_AREA_USAGE;

You can review and adjust the RMAN retention policy, then purge whatever it makes obsolete:

RMAN> show all;

RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;

or, alternatively:

RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 1;

RMAN> delete obsolete;

RMAN> delete archivelog all completed before 'SYSDATE-1';

SQL> archive log list;

SQL> SELECT * FROM V$RECOVERY_FILE_DEST;

SQL> select * from V$RECOVERY_AREA_USAGE;

SQL> select * from v$flash_recovery_area_usage;


How to Pin Oracle objects in the shared pool

Pinning Oracle E-Business Suite Objects Into The Shared Pool (Doc ID 69925.1)


PURPOSE
Explain why, what and how to pin objects in the shared pool for Oracle Applications.

SCOPE
Oracle Applications DBAs who want to improve database performance by pinning packages into the shared pool.

DETAILS
Why pin objects into the shared pool?


Oracle Applications requires space in the ORACLE System Global Area (SGA) for stored packages and functions.
If SGA space is fragmented, there may not be enough space to load a package or function.
You should pre-allocate space in the SGA shared pool for packages, functions, and sequences by "pinning" them.

Pinning objects in the shared pool can provide a tremendous increase in database performance,
if it is done correctly. Since pinned objects reside in the SQL and PL/SQL memory areas,
they do not need to be loaded and parsed from the database, which saves considerable time.


Which objects should be pinned into the shared pool?


The most performance improvement can be gained from pinning large, frequently used packages.
Pinned objects are expensive in terms of memory space, since other, non-pinned objects need this memory space too.
In general, do not pin all objects or rarely used objects - this could even decrease database performance.

As a general rule, you should always pin the following packages which are owned by SYS:
(see Note 61623.1 SHARED POOL TUNING)

STANDARD
DBMS_STANDARD
DBMS_UTILITY
DBMS_DESCRIBE
DBMS_OUTPUT

and maybe other SYS packages that are often used (DBMS_LOCK, DBMS_ALERT, etc.).

The Applications objects that should be pinned are harder to identify and will vary from site to site, depending on what the users are doing.
To identify good candidates for pinning, you need to know which objects are being executed the most. To do this, let the system run long enough
to reach a steady state (several days to a week). Then initiate a SQL*Plus session as system (or sys or apps) and run the
following script $AD_TOP/sql/ADXCKPIN.sql. This will spool object execution and reload statistics into the output file ADXCKPIN.lst.

Example output: 

OBJECT                             TYPE         SPACE(K)  LOADS   EXECS KEPT
---------------------------------- ------------ -------- ------ ------- ----
APPS.FND_ATTACHMENT_UTIL_PKG       PACKAGE          15.2      1       9 NO
APPS.FND_ATTACHMENT_UTIL_PKG       PACKAGE BODY     13.7      1       8 NO
APPS.FND_CLIENT_INFO               PACKAGE           2.7      1     206 NO
APPS.FND_CLIENT_INFO               PACKAGE BODY     13.0      1     206 NO
APPS.FND_CONCURRENT                PACKAGE          15.2      1     199 NO
APPS.FND_CONCURRENT                PACKAGE BODY     24.2      1     197 NO


Another handy script:

SELECT substr(owner,1,10)||'.'||substr(name,1,35) "Object Name",
' Type: '||substr(type,1,12)||
' size: '||sharable_mem ||
' execs: '||executions||
' loads: '||loads||
' Kept: '||kept
FROM v$db_object_cache
WHERE type in ('TRIGGER','PROCEDURE','PACKAGE BODY','PACKAGE')
AND executions > 0
ORDER BY executions desc,
loads desc,
sharable_mem desc;

Choose objects with a high number of executions (EXECS), or very large (SPACE(K)), frequently used objects.
If the decision is between two objects that have been executed the same number of times,
then preference should be given to the larger object. From experience, very good results have been achieved by pinning only about 10 packages.


How to pin an object into the shared pool?


The PL/SQL scripts:

$AD_TOP/sql/ADXGNPIN.sql (for packages, functions)
$AD_TOP/sql/ADXGNPNS.sql (for sequences)

generate pinning scripts, which can be executed in SQL*Plus.

Do not run them without editing them first; otherwise, the scripts will try to pin all objects.

Create your own script to pin the packages, and pin them in descending order of their size.

The PL/SQL command to pin a package (e.g. APPS.FNDCP_TMSRV) manually is:


SQL> execute dbms_shared_pool.keep('APPS.FNDCP_TMSRV');

Note: The objects have to be pinned after each instance startup, and ideally immediately after the startup. Prior to Oracle 7.2 DBMS_SHARED_POOL.KEEP does not actually load all of the object to be KEPT into the shared pool. It is advisable to create a dummy procedure to execute each package to be KEPT. This dummy procedure can then be called after calling DBMS_SHARED_POOL.KEEP to ensure that the object is fully loaded. This is not a problem from 7.2 onwards. 
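Because pinning must be repeated after every instance startup, one common approach is a database startup trigger (a minimal sketch; it assumes DBMS_SHARED_POOL is installed, and the package list below is only illustrative - substitute your own pinning candidates):

SQL> create or replace trigger sys.pin_objects_on_startup
     after startup on database
     begin
       -- pin the SYS packages recommended above; add APPS packages as needed
       dbms_shared_pool.keep('SYS.STANDARD');
       dbms_shared_pool.keep('SYS.DBMS_STANDARD');
       dbms_shared_pool.keep('SYS.DBMS_UTILITY');
       dbms_shared_pool.keep('SYS.DBMS_DESCRIBE');
       dbms_shared_pool.keep('SYS.DBMS_OUTPUT');
     end;
     /

Trigger created.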

The Output Post-processor is running but has not picked up this request.

OPP Error in the Log file:

------------------------------------------------------------------------
The Output Post-processor is running but has not picked up this request.
No further attempts will be made to post-process this request, and the request will be marked
with Warning status.
Setting the profile option Concurrent: OPP Response Timeout to a higher value may be necessary.
------------------------------------------------------------------------

Solution:

A) Restart the Output Post Processor and re-test the issue.

Navigation:-

Concurrent→Manager→Administer
You can restart it using the Restart button, or you can terminate the manager and activate it again using the Terminate and Activate buttons.


B) If the above didn't solve your issue, increase the below profile options and re-test the issue:

Concurrent: OPP Process Timeout to 600 (300*2)

Concurrent: OPP Response Timeout to 240 (120*2)
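To check the current values of these two profile options from SQL*Plus, one option is FND_PROFILE (the internal names CONC_PP_PROCESS_TIMEOUT and CONC_PP_RESPONSE_TIMEOUT are an assumption here; run this as the APPS user so the profile context resolves):

SQL> select fnd_profile.value('CONC_PP_PROCESS_TIMEOUT')  "OPP Process Timeout",
            fnd_profile.value('CONC_PP_RESPONSE_TIMEOUT') "OPP Response Timeout"
     from dual;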

C) If A and B didn't work, increase the processes of the Output Post-processor (double it, i.e. if you have 1, increase it to 2) from the Concurrent→Manager→Define screen,

and restart the OPP. That should resolve this issue.

ORA-00257

Error: Cannot connect to database (ORA-00257: Archiver error. Connect AS SYSDBA only until resolved.)

Cause: The archiver process received an error while trying to archive a redo log. If the problem is not resolved soon,
the database will stop executing transactions. The most likely cause of this message is the destination device is out of space to store the redo log file.

Action: Check the archiver trace file for a detailed description of the problem. Also verify that the device specified in the initialization parameter
ARCHIVE_LOG_DEST is set up properly for archiving.

There are three possible solutions:
1. Without increasing DB_RECOVERY_FILE_DEST_SIZE.
2. By increasing DB_RECOVERY_FILE_DEST_SIZE.
3. By disabling archivelog mode.

Without increasing DB_RECOVERY_FILE_DEST_SIZE.

1. Check whether the database is in archive log mode and automatic archiving is enabled.

 SQL> archive log list;
 Database log mode              Archive Mode
 Automatic archival             Enabled
 Archive destination            USE_DB_RECOVERY_FILE_DEST
 Oldest online log sequence     29
 Next log sequence to archive   31
 Current log sequence           31

2. If archive destination is defined by USE_DB_RECOVERY_FILE_DEST, find the archive destination by:

SQL> show parameter db_recovery_file_dest;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      C:\oracle\product\10.2.0/flash_recovery_area
db_recovery_file_dest_size           big integer 2G
Check the value of db_recovery_file_dest_size.

3. Find the space used in flash recovery area by using following SQL:

col name format a40
col space_limit heading "Space Allocated (MB)" format 999999
col space_used heading "Space Used (MB)" format 99999
select name, round(space_limit/1048576) as space_limit, round(space_used/1048576) as space_used
from v$recovery_file_dest;

4. If SPACE_USED is equal to SPACE_LIMIT of db_recovery_file_dest, move the archive logs to a different destination.

5. Archive all the log files

SQL> alter system archive log all;
6. Just switch the logs to verify

 SQL> alter system switch logfile;

7. Alternatively, delete (archive log) files from DB_RECOVERY_FILE_DEST if you are sure you have backups and the archived logs
are no longer necessary.

 $rman target /
 RMAN>delete archivelog until time 'SYSDATE-1';
 or,
 RMAN>delete archivelog all;

By increasing DB_RECOVERY_FILE_DEST_SIZE.

1. See the path of flash recovery area.

 SQL> show parameter db_recovery_file_dest;

2. Increase the Flash Recovery Area

SQL> ALTER SYSTEM SET db_recovery_file_dest_size=50G SCOPE=BOTH;
System altered.



[oracle@oraprod pfile]$ sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Thu Feb 22 16:22:35 2018

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup mount
ORACLE instance started.

Total System Global Area 1.5032E+10 bytes
Fixed Size                 12349192 bytes
Variable Size            2348810488 bytes
Database Buffers         1.2616E+10 bytes
Redo Buffers               54759424 bytes
Database mounted.
SQL> alter database clear unarchived logfile group 1;

Database altered.

SQL> alter database clear unarchived logfile group 2;

Database altered.

SQL> alter database clear unarchived logfile group 3;

Database altered.

SQL> shutdown immediate

ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1.5032E+10 bytes
Fixed Size                 12349192 bytes
Variable Size            2348810488 bytes
Database Buffers         1.2616E+10 bytes
Redo Buffers               54759424 bytes
Database mounted.
Database opened.
SQL>
SQL>
SQL>
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PROD                         MOUNTED

SQL> alter pluggable database PROD open;

Pluggable database altered.

********************************************************************

archivelog mode disable

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

[oracle@oraprod ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Tue Mar 20 16:52:35 2018

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to an idle instance.

SQL>
SQL> startup mount
ORACLE instance started.

Total System Global Area 1.5032E+10 bytes
Fixed Size                 12349192 bytes
Variable Size            2348810488 bytes
Database Buffers         1.2616E+10 bytes
Redo Buffers               54759424 bytes
Database mounted.

SQL> alter database noarchivelog;

Database altered.

SQL>
SQL>
SQL> alter database open;

Database altered.

Talend Introduction

Talend ETL TOOL

ABOUT TALEND
Talend is a next-generation leader in cloud and big data integration software that helps companies become data driven by making data more accessible, improving its quality and quickly moving it where it’s needed for real-time decision making. Talend’s open-source, native, and unified integration platform, Data Fabric, enables customers to embrace new innovations and scale to meet the evolving data demands of the business.

The only open source vendor named a leader in the Gartner Magic Quadrant for Data Integration Tools, and the Forrester Big Data Fabric Wave, Talend’s innovative solutions are relied upon by 1500+ global enterprise customers across a range of industries, including Air France, GE, and Lenovo.

Talend Open Studio for Data Integration: The Powerful ETL Tool You Can Use Today
IT groups tasked with implementing extract, transform, and load (ETL) projects have traditionally been forced to choose between two time-consuming options: develop a custom ETL tool from scratch; or try to win approval for purchasing an expensive proprietary ETL tool. With Talend Open Studio for Data Integration, busy IT departments now have a new and better option: a powerful open source ETL tool that you can download for free and start using today.

A Feature-Rich ETL Tool

Talend Open Studio for Data Integration helps you to efficiently and effectively manage all facets of data extraction, data transformation, and data loading. This leading open source ETL tool boosts developer productivity with a rich set of features including:

An Eclipse-based graphical integrated development environment that enables easy data modeling, drag-and-drop job design, and  efficient reuse of completed work across projects and modules.
More than 900 components and built-in connectors that allow you to easily link a wide array of sources and targets.
Robust ETL functionality such as string manipulations, management of slowly changing dimensions, and automatic lookup handling.
The ability to execute extract, load, and transform (ELT) processes as well as ETL processes, even within the same job.
Proven in Production

Talend Open Studio for Data Integration is a proven data integration solution that's been downloaded millions of times and has hundreds of thousands of users. Organizations using Talend Open Studio for Data Integration in production environments range from small start-ups to some of the largest corporations in the world, as well as local and national government agencies.

The Benefits of Open Source

As an open source ETL tool, Talend Open Studio for Data Integration gives you the ability to access and extend source code to best suit your needs. This powerful, productivity-boosting ETL tool is surrounded by a large and active user community that shares insights and application extensions through platforms such as Talend Forum and Talend Exchange. Backed by Talend's ongoing and extensive R&D efforts, Talend Open Studio for Data Integration is frequently updated and enhanced in ways that reflect the experiences and needs of the user community.

A Seamless Path to Enterprise Scale

For enterprise-scale projects, Talend also offers the subscription-based Talend Data Integration. Talend Data Integration extends Talend Open Studio for Data Integration with enterprise features such as collaboration and versioning tools, monitoring and management tools, and world-class technical support. Projects built on Talend Open Studio for Data Integration can seamlessly transition to Talend Data Integration.

Learn more about Talend’s data integration solutions from the many resources on this web site, or download Talend Open Studio for Data Integration today and start benefiting from the leading open source data integration tool.

Why Talend?
Because supporting increasing data volumes, users, and use-cases requires you to constantly evolve your data infrastructure, from Java and data warehouses to the Cloud and Spark and The Next Big Thing. Only Talend allows you to effortlessly adopt new technologies so you can focus more on creating business value and less on data integration.
Our mission is to give you the data agility needed to enable every person in your organization to make more informed, real-time decisions every day.



INTRODUCTION:-

What is Talend? 

Talend is the first provider of open source data integration software. Its main product is Talend Open Studio. After three years of intense research and development investment the first version of that software was released in 2006. It is an Open Source project for data integration based on Eclipse RCP that primarily supports ETL-oriented implementations and is provided for on-premises deployment as well as in a software-as-a-service (SaaS) delivery model. Talend Open Studio is mainly used for integration between operational systems, as well as for ETL (Extract, Transform, Load) for Business Intelligence and Data Warehousing, and for migration.
Talend offers a completely new vision, reflected in the way it utilizes technology, as well as in its business model.
The company shatters the traditional proprietary model by supplying open, innovative and powerful software solutions with the flexibility to meet the data integration needs of all types of organizations.

Talend Open Studio is the most open, innovative and powerful data integration solution on the market today.

Talend ETL Tool

Talend Open Studio for Data Integration is one of the most powerful data integration ETL tools available in the market. TOS lets you easily manage all the steps involved in the ETL process, from the initial ETL design to the execution of the ETL data load. The tool is built on the Eclipse graphical development environment. Talend Open Studio provides a graphical environment in which you can easily map data between source and destination systems: drag and drop the required components from the palette into the workspace, configure them, and connect them together. It even provides a metadata repository from which you can easily reuse and repurpose your work, which will help you increase your efficiency and productivity over time. With this, you can conclude that Talend Open Studio for DI provides improved data integration along with strong connectivity, easy adaptability, and a smooth extraction and transformation process.


Main features and benefits of that solution:

Business modeling
Graphical development
Metadata-driven design and execution
Real-time debugging
Robust execution

Provided as a packaged, out-of-the-box, ready-to-install platform, Talend Open Studio meets data integration requirements of all organizations - regardless of their size or level of data integration expertise.

There are also available Talend Open Studio extensions:

Talend Integration Suite - The first Open Source enterprise data integration solution, Talend Integration Suite supports the tough requirements of enterprise development, and scales to the highest levels of data volumes and process complexity.

Talend On Demand - The industry's first data integration Software as a Service (SaaS), Talend On Demand consolidates Talend Open Studio metadata and project information in an online, shared repository hosted by Talend.

Talend Open Profiler - The first open source data profiling tool, Talend Open Profiler, allows business users or data management staff to define a set of indicators for each data element that needs to be analyzed or monitored. It produces sophisticated reports and graphs that let users gauge at a glance the level of quality of the data, and the status of the indicators that were defined.

Talend Data Quality - The first open source data quality solution with enterprise-grade features and technical support, Talend Data Quality is a graphical data quality management environment that processes data, such as addresses, phone numbers, spellings, synonyms and abbreviations. Talend Data Quality includes both data profiling and data cleansing capabilities.

What Is ETL Process?

ETL stands for Extract, Transform and Load. It refers to a trio of processes which are required to move the raw data from its source to a data warehouse or a database. Let me explain each of these processes in detail:

Extract

Extraction of data is the most important step of ETL; it involves accessing the data from all the storage systems. The storage systems can be RDBMS, Excel files, XML files, flat files, ISAM (Indexed Sequential Access Method), hierarchical databases (IMS), visual information, etc. Being the most vital step, it needs to be designed in such a way that it doesn't affect the source systems negatively. The extraction process also makes sure that every item's parameters are distinctively identified irrespective of its source system.

Transform

Transformation is the next process in the pipeline. In this step, the entire data set is analyzed and various functions are applied to transform it into the required format. Generally, the processes used for transforming the data are conversion, filtering, sorting, standardizing, clearing duplicates, translating, and verifying the consistency of various data sources.

Load

Loading is the final stage of the ETL process. In this step, the processed data, i.e. the extracted and transformed data, is loaded to a target data repository, which is usually a database. While performing this step, you should ensure that the load function is performed accurately, but with minimal resources. Also, while loading you have to maintain referential integrity so that you don't lose the consistency of the data. Once the data is loaded, you can pick up any chunk of data and compare it with other chunks easily. Now that you know about the ETL process, you might be wondering how to perform all of this? Well, the answer is simple: using ETL tools.
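To make the three stages concrete, here is a minimal database-only illustration in SQL (the staging and target table names are hypothetical; a tool like Talend generates equivalent logic from the graphical job design):

-- Extract: read raw rows from the staging table (hypothetical name)
-- Transform: trim/standardize names, default missing countries, drop duplicates
-- Load: write the cleansed rows into the target table
insert into dw_customers (customer_id, customer_name, country)
select distinct customer_id,
       upper(trim(customer_name)),
       nvl(country, 'UNKNOWN')
from   stg_customers
where  customer_id is not null;

commit;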


Various ETL Tools

But before I talk about ETL tools, let’s first understand what exactly is an ETL tool.

As I have already discussed, ETL consists of three separate processes which perform different functions. When all these processes are combined into a single programming tool, it helps in preparing the data and in managing the various databases. These tools have graphical interfaces, which speeds up the entire process of mapping tables and columns between the various source and target databases.

Some of the major benefits of the ETL Tools are:

It is very easy to use as it eliminates the need for writing the procedures and code.
Since the ETL Tools are GUI based they provide a visual flow of the system’s logic.
The ETL tools have built-in error-handling functionality because of which they have operational resilience.
When dealing with large and complex data, ETL tools provide a better data management by simplifying the tasks and assisting you with various functions.
ETL tools provide an advanced set of cleansing functions as compared to the traditional systems.
ETL tools have an enhanced business intelligence which directly impacts the strategic and operational decisions.
Because of the use of ETL tools, expenses are reduced significantly and businesses are able to generate higher revenue.
Performance of ETL tools is much better, as the structure of their platform simplifies the construction of a high-quality data warehousing system.
There are various ETL tools available in the market which are quite popularly used.


ORA-03113: end-of-file on communication channel


[oracle@oprd pfile]$ sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Thu Feb 22 16:22:35 2018

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup mount
ORACLE instance started.

Total System Global Area 1.5032E+10 bytes
Fixed Size                 12349192 bytes
Variable Size            2348810488 bytes
Database Buffers         1.2616E+10 bytes
Redo Buffers               54759424 bytes
Database mounted.

SQL> alter database clear unarchived logfile group 1;

Database altered.

SQL> alter database clear unarchived logfile group 2;

Database altered.

SQL> alter database clear unarchived logfile group 3;

Database altered.

SQL> shutdown immediate
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1.5032E+10 bytes
Fixed Size                 12349192 bytes
Variable Size            2348810488 bytes
Database Buffers         1.2616E+10 bytes
Redo Buffers               54759424 bytes
Database mounted.
Database opened.


SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB01                         MOUNTED

SQL> alter pluggable database PDB01 open;

Pluggable database altered.


What do I do to increase the size of the flash recovery area (FRA)?

Answer: The size of the flash recovery area (FRA) is determined by the db_recovery_file_dest_size parameter. To increase the size of the FRA, simply run this command:



SQL> alter system set db_recovery_file_dest_size=50g scope=both;

System altered.

SQL> show parameter db_rec

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      +RECO
db_recovery_file_dest_size           big integer 50G
db_recycle_cache_size                big integer 0
SQL>
SQL>
SQL>
SQL> create pfile from spfile;

File created.

SQL>
SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1.5032E+10 bytes
Fixed Size                 12349192 bytes
Variable Size            2348810488 bytes
Database Buffers         1.2616E+10 bytes
Redo Buffers               54759424 bytes
Database mounted.
Database opened.

SQL> show parameter db_rec

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      +RECO
db_recovery_file_dest_size           big integer 50G
db_recycle_cache_size                big integer 0


SQL> sho pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB01                         MOUNTED


SQL> alter pluggable database PDB01 open;

Pluggable database altered.

ORA-28374: typed master key not found in wallet

While trying to add a datafile, I faced the below issue on a 12c multitenant pluggable database.

SQL> create tablespace odsprod datafile size 5G;
create tablespace odsprod datafile size 5G
*
ERROR at line 1:
ORA-28374: typed master key not found in wallet


SQL> SELECT wrl_parameter, status, wallet_type FROM v$encryption_wallet;

WRL_PARAMETER
--------------------------------------------------------------------------------
STATUS                         WALLET_TYPE
------------------------------ --------------------

OPEN_NO_MASTER_KEY             AUTOLOGIN


SQL> ALTER SESSION SET CONTAINER = CDB$ROOT;

Session altered.

SQL> SELECT wrl_parameter, status, wallet_type FROM v$encryption_wallet;

WRL_PARAMETER                       STATUS                         WALLET_TYPE
------------------------------ --------------------
/u01/oracle/dcs/commonstore/wallets/tde/pdb01_iad1m2/   OPEN         AUTOLOGIN


SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE close;

keystore altered.

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE open IDENTIFIED BY <admin-password>  CONTAINER=all;

keystore altered.

SQL> ALTER SESSION SET CONTAINER =pdb01;

Session altered.

SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY <admin-password> with backup;

keystore altered.

SQL> SELECT wrl_parameter, status, wallet_type FROM v$encryption_wallet;

WRL_PARAMETER
--------------------------------------------------------------------------------
STATUS                         WALLET_TYPE
------------------------------ --------------------

OPEN                           PASSWORD


SQL> create tablespace pdb01 datafile size 5G;

Tablespace created.

How to configure TDE in pluggable database in 12c

How to configure TDE in pluggable database in 12c for standalone and RAC environment 


APPLIES TO:
Oracle Database - Enterprise Edition - Version 12.1.0.1 and later
Information in this document applies to any platform.

GOAL
This document details step-by-step instructions to configure TDE in a 12c pluggable database for standalone and RAC environments.

SOLUTION

 1. Add the below entry in sqlnet.ora file

ENCRYPTION_WALLET_LOCATION =
           (SOURCE = (METHOD = FILE)
                   (METHOD_DATA =
                              (DIRECTORY = /cdbrdbms/etc/$ORACLE_SID)
                   )
           )

2. Create Key store on CDB database and generate master encryption key for CDB

ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/cdbrdbms/etc/MTc12c1' IDENTIFIED BY "welcome1";
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "welcome1";
ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "welcome1" WITH BACKUP;

Note: If TDE was already configured for CDB / other PDBs , then skip step 1 and step 2.
  
3. Verify the wallet has been opened in CDB database
 
select * from v$encryption_wallet;

WRL_TYPE
--------------------
WRL_PARAMETER
--------------------------------------------------------------------------------
STATUS WALLET_TYPE WALLET_OR FULLY_BAC
------------------------------ -------------------- --------- ---------
CON_ID
----------
FILE
/cdbrdbms/etc/MTc12c1/
OPEN PASSWORD SINGLE NO
0


4. Connect to PDB
 
SQL> alter session set container=MTC12P2;
Session altered.

SQL> show con_name
CON_NAME
------------------------------
MTC12P2


5. Open the keystore in that PDB and generate master encryption key for the PDB
 
SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "welcome1";
keystore altered

SQL> show pdbs
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         4 MTC12P2                        READ WRITE NO

SQL> select * from v$encryption_wallet;
WRL_TYPE
--------------------
WRL_PARAMETER
--------------------------------------------------------------------------------
STATUS WALLET_TYPE WALLET_OR FULLY_BAC
------------------------------ -------------------- --------- ---------
CON_ID
----------
FILE
/cdbrdbms/etc/MTc12c1/
OPEN_NO_MASTER_KEY PASSWORD SINGLE UNDEFINED
0


 
SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "welcome1" with backup;
keystore altered.


SQL> select * from v$encryption_wallet;
WRL_TYPE
--------------------
WRL_PARAMETER
--------------------------------------------------------------------------------
STATUS WALLET_TYPE WALLET_OR FULLY_BAC
------------------------------ -------------------- --------- ---------
CON_ID
----------
FILE
/cdbrdbms/etc/MTc12c1/
OPEN PASSWORD SINGLE NO
0




6. Create encrypted tablespace
 
SQL> show pdbs
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         4 MTC12P2                        READ WRITE NO

SQL> create tablespace enc128_ts
     datafile '/cdbrdbms/64bit/app/oracle/oradata/MTc12c1/MTc12p2/Test_encrption.dbf'
     size 1M autoextend on next 1M
     encryption using 'AES128'
     default storage (encrypt);

Tablespace created.
 


7. For RAC environment
7.1 Make sure encryption_wallet_location parameter is configured in sqlnet.ora file of all other RAC nodes.

ENCRYPTION_WALLET_LOCATION =
         (SOURCE = (METHOD = FILE)
              (METHOD_DATA =
                       (DIRECTORY = /cdbrdbms/etc/$ORACLE_SID)
                )
          )


7.2 Copy the wallet file ewallet.p12 from first RAC node to all other RAC nodes ENCRYPTION_WALLET_LOCATION directory
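For example, from the first node (the hostname is illustrative; verify the wallet directory on each node first, since $ORACLE_SID can expand differently per instance):

$ scp /cdbrdbms/etc/MTc12c1/ewallet.p12 oracle@racnode2:/cdbrdbms/etc/MTc12c1/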

EXP-00111

ERROR :

EXP-00111: Table WORKER resides in an Encrypted Tablespace sujeet and will not be exported


Cause:

Encrypted Tablespace feature is not supported by classic export


Action:

Use DATAPUMP Utility to export this table.
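A minimal Data Pump export of the affected table might look like this (assuming the default DATA_PUMP_DIR directory object; substitute the owning schema and a real password):

$ expdp system/<password> tables=<owner>.WORKER directory=DATA_PUMP_DIR dumpfile=worker.dmp logfile=worker_exp.log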

ORA-28014: cannot drop administrative users

The below error is encountered when trying to drop a schema from a 12c database.

SQL> drop user BKP CASCADE;
drop user BKP CASCADE
*
ERROR at line 1:
ORA-28014: cannot drop administrative users

Solution :-

SQL> alter session set "_oracle_script"=true;

Session altered.

SQL> drop user BKP CASCADE;
User dropped.
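It is good practice to set the hidden parameter back in the same session once the user is dropped ("_oracle_script" is an undocumented parameter, so use it only for cases like this):

SQL> alter session set "_oracle_script"=false;

Session altered.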

Enable pluggable database (PDB) archivelog mode

Enabling archive log mode -12c pluggable database/Container database.

In 12c, redo logs are created at the container database (CDB) level, not at the pluggable database level, so enabling archivelog mode
at the pluggable database level is not possible; archiving is enabled for the CDB as a whole.

You can check the archive log mode either by querying v$database or by running 'archive log list':

SQL> select name,open_mode,log_mode from v$database;

NAME      OPEN_MODE            LOG_MODE
--------- -------------------- ------------
CDB      READ WRITE           NOARCHIVELOG

(OR)

SQL> archive log list
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     11
Next log sequence to archive   13
Current log sequence           13

***************  ***************
To enable the Archive-log mode
***************  ***************

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount;
ORACLE instance started.

Total System Global Area  788529152 bytes
Fixed Size      2929352 bytes
Variable Size    541068600 bytes
Database Buffers   239075328 bytes
Redo Buffers      5455872 bytes
Database mounted.

SQL> ALTER DATABASE ARCHIVELOG;

Database altered.

SQL> ALTER DATABASE OPEN;

Database altered.

SQL> select name,open_mode,log_mode from v$database;

NAME      OPEN_MODE            LOG_MODE
--------- -------------------- ------------
CDB      READ WRITE           ARCHIVELOG



SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     413
Next log sequence to archive   415
Current log sequence           415

SQL> show parameter DB_RECOVERY_FILE_DEST

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      /u03/app/oracle/fast_recovery_area
db_recovery_file_dest_size           big integer 27G

PDB Clone steps



Clone Oracle 12c Pluggable database from one Oracle Cloud Service to another

We can have multiple options for doing this data migration. For our example we will use remote cloning of a PDB using Database Links.
Note that this post will use the unique features of Oracle Database 12c Multitenant and Pluggable Databases (PDB).
This was a new and key feature of Oracle 12c which is not present in Oracle Database version 11g.
With PDBs in 12c you have got ease of data migration.
You may have to move PDBs between test and production environments, and this process makes such data migration simple.

Source Database:
Database Cloud Service: Production
PDB Name to be cloned: PROD01
Target Database:
Database Cloud Service: SUJEET
Activity
To clone PROD01 PDB from ‘Production’ database service to ‘SUJEET’ database service.
Below is the series of steps that will be followed to accomplish this goal.
STEP 1 : Enable communication between Source and Target by Security Rules
STEP 2 : Add Source PDB TNS Entry in Target tnsnames.ora
STEP 3 : Create DBLINK in Target to connect to Source PDB
STEP 4 : Change the Source wallet from 'Autologin' to 'Password'
STEP 5 : Export the keys for Source PDB from Source Wallet 
STEP 6 : Transfer the exported keys from Source to Target server
STEP 7 : Change the Target wallet from 'Autologin' to 'Password'
STEP 8 : Import the copied keys into Target Database Wallet 
STEP 9 : Set the source PDB to read only mode
STEP 10 : Clone the Source PDB into Target database 
STEP 11 : Revert back the source PDB to READ WRITE Mode
STEP 12 : Revert back the Source wallet from 'Password' to 'Autologin'
STEP 13 : Revert back the Target wallet from 'Password' to 'Autologin'
STEP 14 : Verify the newly cloned Target PDB

STEP 1 : Enable communication between Source and Target by Security Rules
From the Oracle Compute Cloud Service Dashboard navigate to the Network > Security Rules, and create a new rule.
http://expertoracle.com/wp-content/uploads/2017/01/img_586d9b38659e8.png
Input the below parameters as shown:
Give your security rule a name, and make sure it is enabled. This Security Rule is based on the Security Application Production/db_1/ora_listener, which opens the port 1521. The Source is brijesh/db_1/ora_db, and the Destination is Production/db_1/ora_db.
http://expertoracle.com/wp-content/uploads/2017/01/img_586d9c0c37cc2.png
You will be able to see the new security rule in the list.
http://expertoracle.com/wp-content/uploads/2017/01/img_586d9d812ccd2.png
STEP 2 : Add Source PDB TNS Entry in Target tnsnames.ora
Open PuTTY connections to both ‘Production’ and ‘Brijesh’ compute instances.
We will add the source PDB TNS entry, which is PRODPDB1 in our case. In the TNS entry we will put the private IP of the Production server. The private IP can be easily located on the Compute Cloud console instance information page, or you can run the "ifconfig eth0" command from the unix command prompt to get it.
Go to $ORACLE_HOME/network/admin and add the below TNS entry to tnsnames.ora:
PRODPDB1_CLONE =
 (DESCRIPTION =
 (ADDRESS = (PROTOCOL = TCP)(HOST = 10.196.139.62)(PORT = 1521))
 (CONNECT_DATA =
 (SERVER = DEDICATED)
 (SERVICE_NAME = prodpdb1.bgogia.oraclecloud.internal)
 )
 )
Verify that the connection is successful with the tnsping command:
http://expertoracle.com/wp-content/uploads/2017/01/img_586da154d2543.png

STEP 3 : Create DBLINK in Target to connect to Source PDB
Now log into the ‘Brijesh’ database using SQL*Plus to create the database link that will be used to create a new PDB in this database.
$ sqlplus / as sysdba

SQL> set lines 132
col owner format a8
col DB_LINK format a40
col USERNAME format a10
col HOST format a25

select * from dba_db_links;

SQL> create database link db_link1 connect to system identified by <production_system_password> using 'PRODPDB1_CLONE';

SQL> select * from dba_db_links;

Verify if DB LINK is working OK
SQL> select name from v$database@DB_LINK1.BGOGIA.ORACLECLOUD.INTERNAL;

NAME
---------
PROD

SQL> exit

STEP 4 : Change the Source wallet from ‘Autologin’ to ‘Password’
The Oracle Public Cloud uses Transparent Data Encryption to secure the datafiles. When you move pluggable databases you need to export/import the encryption keys. Before we can export the keys from the source PDB, we first need to change the wallet type from ‘Autologin’ to ‘Password’.
First take a backup of the /u01/app/oracle/admin/PROD/tde_wallet/cwallet.sso file.
SQL> select WRL_TYPE, WRL_PARAMETER, STATUS, WALLET_TYPE from v$encryption_wallet;

WRL_TYPE     WRL_PARAMETER                          WALLET_TYPE
---------- ---------------------------------------- ----------
FILE        /u01/app/oracle/admin/PROD/tde_wallet/   AUTOLOGIN


SQL> administer key management set keystore close;
keystore altered.

SQL> administer key management set keystore open identified by "vana_05Nir" container=all;
keystore altered.

SQL> select wrl_type,wrl_parameter,wallet_type from v$encryption_wallet;

WRL_TYPE          WRL_PARAMETER                              WALLET_TYPE
-------------  -------------------------------------------  --------------------
FILE             /u01/app/oracle/admin/PROD/tde_wallet/       PASSWORD


STEP 5 : Export the keys for Source PDB from Source Wallet
In this step we will export the key from Source PDB in a temporary wallet file
Connect to Source PDB using SQLPLUS
SQL> administer key management export encryption keys with secret "<put_password>" to '/tmp/prodpdb1_2.p12'
     identified by "<put_password>" with identifier in
     (select key_id from v$encryption_keys where creator_pdbname='PRODPDB1');

keystore altered.
/tmp/prodpdb1_2.p12 will have the required key
STEP 6 : Transfer the exported keys from Source to Target server
In this step we will copy the /tmp/prodpdb1_2.p12 from Source server to Target server using WINSCP software
Connect to WINSCP and provide all the below required details to connect to your Oracle Cloud Servers
http://expertoracle.com/wp-content/uploads/2017/01/img_586ddb26c1376.png
STEP 7 : Change the Target wallet from ‘Autologin’ to ‘Password’
Now we have to open the wallet with the password, because the key that we brought in from the source server cannot be imported while the wallet type is ‘autologin’.
First take a backup of the /u01/app/oracle/admin/brijesh/tde_wallet/cwallet.sso file.
Run below command by connecting to target PDB using SQLPLUS
SQL> administer key management set keystore close;
keystore altered.

SQL> administer key management set keystore open identified by "<put_password>";
keystore altered.

SQL> select wrl_type,wrl_parameter,wallet_type from v$encryption_wallet;

WRL_TYPE          WRL_PARAMETER                                     WALLET_TYPE
-------------  ---------------------------------------------------  ---------------------------------
FILE             /u01/app/oracle/admin/brijesh/tde_wallet/           PASSWORD

STEP 8 : Import the copied keys into Target Database Wallet
Run below command by connecting to target PDB using SQLPLUS
SQL> administer key management import encryption keys with secret "<put_password>" from
   '/tmp/prodpdb1_2.p12' identified by "<put_password>" WITH BACKUP USING '/tmp/tde_backup_001';
keystore altered.

STEP 9 : Set the source PDB to read only mode
Now from PuTTY we will connect to our ‘Production’ database service using SQL*Plus and set the source PRODPDB1 pluggable database to READ ONLY mode in the Oracle Database Cloud Service.
$ sqlplus / as sysdba

SQL> show pdbs

CON_ID      CON_NAME                      OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
 2          PDB$SEED                      READ ONLY    NO
 3          PRODPDB1                      READ WRITE   NO

SQL> alter pluggable database PRODPDB1 close immediate;

Pluggable database altered.

SQL> show pdbs

CON_ID        CON_NAME                     OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
 2            PDB$SEED                    READ ONLY   NO
 3            PRODPDB1                    MOUNTED

SQL> alter pluggable database PRODPDB1 open read only;
Pluggable database altered.

SQL> show pdbs

CON_ID       CON_NAME                     OPEN MODE   RESTRICTED
---------- ------------------------------ ---------- ----------
 2          PDB$SEED                      READ ONLY    NO
 3          PRODPDB1                      READ ONLY    NO

SQL> EXIT;


STEP 10 : Clone the Source PDB into Target database
Now from PuTTY we will connect to our ‘brijesh’ database service using SQL*Plus and clone the PRODPDB1 to the ‘brijesh’ database service.
$ sqlplus / as sysdba

SQL> create pluggable database clone_pdb from PRODPDB1@DB_LINK1 keystore identified by "<put_password>";
Pluggable database created.

SQL> show pdbs
CON_ID      CON_NAME                      OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
 2         PDB$SEED                        READ ONLY   NO
 3         PDB1                            READ WRITE  NO
 4         CLONE_PDB                       MOUNTED

SQL> alter pluggable database CLONE_PDB open;
Pluggable database altered.

SQL> show pdbs
CON_ID      CON_NAME                      OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
 2         PDB$SEED                       READ ONLY    NO
 3         PDB1                           READ WRITE   NO
 4         CLONE_PDB                      READ WRITE   NO

SQL> select name, status, message, action from pdb_plug_in_violations where name='CLONE_PDB';
no rows selected

STEP 11 : Revert back the source PDB to READ WRITE Mode
Execute on the Source database from SQLPLUS
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PRODPDB1                       READ ONLY  NO
SQL> alter pluggable database PRODPDB1 close immediate;
Pluggable database altered.

SQL> alter pluggable database PRODPDB1 open;

Pluggable database altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PRODPDB1                       READ WRITE NO

STEP 12 : Revert back the Source wallet from ‘Password’ to ‘Autologin’
We will now revert the container wallet from Password to Autologin type. To accomplish this, we reintroduce the cwallet.sso file and then restart the Container Database.
$ mv /u01/app/oracle/admin/PROD/tde_wallet/cwallet.sso_bkp /u01/app/oracle/admin/PROD/tde_wallet/cwallet.sso

[oracle@Production admin]$ sqlplus / as sysdba
SQL*Plus: Release 12.2.0.1.0 Production on Thu Jan 5 04:47:49 2017
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 2667577344 bytes
Fixed Size 8796336 bytes
Variable Size 687867728 bytes
Database Buffers 1962934272 bytes
Redo Buffers 7979008 bytes
Database mounted.
Database opened.

SQL> select wrl_type,wrl_parameter,wallet_type from v$encryption_wallet;

WRL_TYPE          WRL_PARAMETER                              WALLET_TYPE
-------------  -------------------------------------------  --------------------
FILE             /u01/app/oracle/admin/PROD/tde_wallet/       AUTOLOGIN

STEP 13 : Revert back the Target wallet from ‘Password’ to ‘Autologin’
As in the previous step, we will revert the Target wallet back to ‘autologin’:
SQL> select wrl_type,wrl_parameter,wallet_type from v$encryption_wallet;

WRL_TYPE          WRL_PARAMETER                              WALLET_TYPE
-------------  -------------------------------------------  --------------------
FILE             /u01/app/oracle/admin/brijesh/tde_wallet/       AUTOLOGIN

STEP 14 : Verify the newly cloned Target PDB
Connect to newly cloned PDB in your target container database and verify
[oracle@brijesh ~]$ sqlplus / as sysdba
SQL*Plus: Release 12.2.0.1.0 Production on Thu Jan 5 04:58:46 2017
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> show pdbs

CON_ID      CON_NAME                      OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
 2         PDB$SEED                       READ ONLY    NO
 3         PDB1                           READ WRITE   YES
 4         CLONE_PDB                      READ WRITE   NO

SQL> sho con_name
CON_NAME
------------------------------
CDB$ROOT

SQL> alter session set container=CLONE_PDB;
Session altered.

SQL> sho con_name
CON_NAME
------------------------------
CLONE_PDB

SQL> create table dummy_CLONE_PDB(x number);
Table created.

SQL> insert into dummy_CLONE_PDB values(1);
1 row created.

SQL> commit;
Commit complete.

This completes the cloning of Oracle 12c Pluggable database from one Oracle Cloud Service to another
Oracle Database 12.1.0.2c: Hot Cloning of Pluggable Databases
Cloning of a Pluggable Database (PDB) in Oracle Multitenant is a great feature and is very useful. Oracle Database 12c Release 1 has introduced various enhancements to the cloning of Pluggable Databases to overcome the limitations imposed by the initial release of Oracle Database 12c (12.1.0.1). In Oracle 12.1.0.2c:

PDBs can be hot cloned, i.e. you don’t need to put the source PDB in read-only for cloning so that it can be cloned to a CDB online. In Oracle 12.1.0.1c, a pluggable database can be cloned only if it is read-only.
PDBs can be remote cloned, i.e. the source PDB may be located in a remote container. This feature was listed in Oracle Database 12c Release 1 (12.1.0.1), but didn’t work.
There is no need to create the directories where the database files of clone PDB will be placed. If the directory is not already present, it will be automatically created as part of the cloning operation.
In this article, I will demonstrate hot cloning of a PDB locally as well as remotely. For the purpose of this demonstration, I have two virtual machines, each with a container database.

Current scenario:
Host: host01
Container Database 12.1.0.2c: cdb1 with pluggable database pdb1

Host: host02
Container Database 12.1.0.2c: destcdb with pluggable database pdb1

Demonstration 1: Hot cloning of PDB locally
We will clone the pluggable database pdb1 to pdb1clone in the same CDB, i.e. cdb1 on host host01

Source host: host01
Source PDB: pdb1 in container database cdb1

Destination host: host01
Destination PDB: pdb1clone in container database cdb1

Currently, there is only one PDB called pdb1, open in READ WRITE mode in the container database cdb1.

CDB1>select name, cdb from v$database;
NAME      CDB
--------- ---
CDB1      YES

CDB1>select con_id, name, open_mode from v$pdbs;
    CON_ID NAME                           OPEN_MODE
---------- ------------------------------ ----------
         2 PDB$SEED                       READ ONLY
         3 PDB1                           READ WRITE

CDB1>select name from v$datafile where con_id = 3;
NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/cdb1/pdb1/system01.dbf
/u01/app/oracle/oradata/cdb1/pdb1/sysaux01.dbf
/u01/app/oracle/oradata/cdb1/pdb1/SAMPLE_SCHEMA_users01.dbf
/u01/app/oracle/oradata/cdb1/pdb1/example01.dbf
Verify that the directory corresponding to the data files of the target PDB, i.e. pdb1clone, is not present:

[oracle@host01 cdb1]$ ls -l /u01/app/oracle/oradata/cdb1/
total 1782620
-rw-r----- 1 oracle asmadmin  17973248 Jul 26 15:03 control01.ctl
drwxr-x--- 2 oracle oinstall      4096 Jun 29 12:06 pdb1
drwxr-x--- 2 oracle oinstall      4096 Jun 29 11:59 pdbseed
-rw-r----- 1 oracle asmadmin  52429312 Jul 25 15:18 redo01.log
-rw-r----- 1 oracle asmadmin  52429312 Jul 25 15:18 redo02.log
-rw-r----- 1 oracle asmadmin  52429312 Jul 26 15:03 redo03.log
-rw-r----- 1 oracle asmadmin 650125312 Jul 26 15:02 sysaux01.dbf
-rw-r----- 1 oracle asmadmin 817897472 Jul 26 15:02 system01.dbf
-rw-r----- 1 oracle asmadmin  62922752 Jul 26 15:02 temp01.dbf
-rw-r----- 1 oracle asmadmin 173023232 Jul 26 15:01 undotbs01.dbf
-rw-r----- 1 oracle asmadmin   5251072 Jul 25 15:18 users01.dbf

[oracle@host01 cdb1]$ ls -l /u01/app/oracle/oradata/cdb1/ | grep pdb1
drwxr-x--- 2 oracle oinstall      4096 Jun 29 12:06 pdb1
Using the CREATE PLUGGABLE DATABASE ... FROM command we will clone the existing PDB (pdb1) to create a new PDB (pdb1clone) in the same container database (cdb1). This statement copies the files associated with the source PDB to a new location and associates the files with the target PDB.

CDB1>create pluggable database pdb1clone from pdb1
     file_name_convert = ('pdb1','pdb1clone');
Pluggable database created.
We can see that the new PDB called pdb1clone is in MOUNTED state when created and is opened successfully thereafter.

CDB1>sho pdbs
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB1CLONE                      MOUNTED

CDB1>alter pluggable database pdb1clone open;
Pluggable database altered.

CDB1>sho pdbs
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB1CLONE                      READ WRITE NO
CDB1>alter session set container=pdb1clone;
Session altered.

CDB1>sho con_name
CON_NAME
------------------------------
PDB1CLONE

CDB1>select count(*) from hr.employees;
  COUNT(*)
----------
       107
Note that the directory for the data files of the clone PDB pdb1clone has been created automatically in the location specified using FILE_NAME_CONVERT.

[oracle@host01 cdb1]$ ls -l /u01/app/oracle/oradata/cdb1/pdb1clone
total 2089832
-rw-r----- 1 oracle asmadmin 1304174592 Jul 25 15:53 example01.dbf
-rw-r----- 1 oracle asmadmin   20979712 Jul 25 15:52 pdb1clone_temp012016-06-29_12-06-27-PM.dbf
-rw-r----- 1 oracle asmadmin    5251072 Jul 25 15:53 SAMPLE_SCHEMA_users01.dbf
-rw-r----- 1 oracle asmadmin  555753472 Jul 25 15:53 sysaux01.dbf
-rw-r----- 1 oracle asmadmin  272637952 Jul 25 15:53 system01.dbf
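
As a quick cross-check (a minimal sketch; con_id 4 is the clone's container ID taken from the sho pdbs output above), you can also list the clone's datafiles from the root container:

CDB1>select name from v$datafile where con_id = 4;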
Hence, we have been able to hot clone a PDB locally without:

Placing the source PDB in READ ONLY mode
Creating the directory for the destination PDB
Demonstration 2: Hot cloning of PDB remotely
We will clone the pluggable database pdb1 in CDB cdb1 on host host01 to pdb1new in another CDB, i.e. destcdb on host host02:

Source host: host01
Source PDB: pdb1 in container database cdb1

Destination host: host02
Destination PDB: pdb1new in container database destcdb

Currently, there is only one PDB called pdb1 open in READ WRITE mode in destination container database destcdb:

DESTCDB>sho pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
On the target container database destcdb, we need to create a database link to connect to the source container database cdb1; this link will be used in the CREATE PLUGGABLE DATABASE statement.

DESTCDB>create database link cdb1_link
        connect to system identified by oracle using 'host01:1521/cdb1';
Database link created.
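
Optionally, you can sanity-check the link before cloning; a minimal sketch using a trivial remote query:

DESTCDB>select * from dual@cdb1_link;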
Verify that the source pluggable database (pdb1@cdb1) that we want to clone is in READ WRITE mode.

CDB1> select con_id, name, open_mode from v$pdbs where name = 'PDB1';

    CON_ID NAME                           OPEN_MODE
---------- ------------------------------ ----------
         3 PDB1                           READ WRITE
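
Note that hot cloning from a source PDB that is open READ WRITE relies on local undo mode (available from 12.2 onwards); a minimal sketch to check whether it is enabled:

CDB1>select property_value from database_properties
     where property_name = 'LOCAL_UNDO_ENABLED';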
Let’s execute the CREATE PLUGGABLE DATABASE statement using the database link (cdb1_link) as previously defined.

DESTCDB> create pluggable database pdb1new from pdb1@cdb1_link;
 create pluggable database pdb1new from pdb1@cdb1_link
                                                     *
ERROR at line 1:
ORA-65016: FILE_NAME_CONVERT must be specified
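This error is raised because destcdb does not use Oracle Managed Files, so Oracle cannot derive names for the new datafiles on its own. As an alternative to FILE_NAME_CONVERT, OMF could be enabled on the destination; a minimal sketch (the path shown is illustrative, not taken from the original setup):

DESTCDB>alter system set db_create_file_dest = '/u01/app/oracle/oradata/destcdb';

In this demonstration we resolve the error with FILE_NAME_CONVERT instead.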
Let’s find out the location of the datafiles for pdb1@cdb1 on host01:

CDB1>alter session set container = pdb1;
Session altered.
CDB1>select name from v$datafile;
NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/cdb1/undotbs01.dbf
/u01/app/oracle/oradata/cdb1/pdb1/system01.dbf
/u01/app/oracle/oradata/cdb1/pdb1/sysaux01.dbf
/u01/app/oracle/oradata/cdb1/pdb1/SAMPLE_SCHEMA_users01.dbf
/u01/app/oracle/oradata/cdb1/pdb1/example01.dbf
Verify that the directory corresponding to the data files of the target PDB, i.e. pdb1new, is not present on the target host host02:

[oracle@host02 destcdb]$ ls -l /u01/app/oracle/oradata/destcdb
total 1761816
-rw-r----- 1 oracle asmadmin  17973248 Jul 25 15:35 control01.ctl
drwxr-x--- 2 oracle oinstall      4096 Jul 24 16:04 PDB1
drwxr-x--- 2 oracle oinstall      4096 Jul 24 15:57 pdbseed
-rw-r----- 1 oracle asmadmin  52429312 Jul 24 16:09 redo01.log
-rw-r----- 1 oracle asmadmin  52429312 Jul 25 15:35 redo02.log
-rw-r----- 1 oracle asmadmin  52429312 Jul 24 16:08 redo03.log
-rw-r----- 1 oracle asmadmin 629153792 Jul 25 15:34 sysaux01.dbf
-rw-r----- 1 oracle asmadmin 817897472 Jul 25 15:35 system01.dbf
-rw-r----- 1 oracle asmadmin  62922752 Jul 25 15:26 temp01.dbf
-rw-r----- 1 oracle asmadmin 173023232 Jul 25 15:35 undotbs01.dbf
-rw-r----- 1 oracle asmadmin   5251072 Jul 25 15:19 users01.dbf
Let’s specify FILE_NAME_CONVERT and re-execute the CREATE PLUGGABLE DATABASE statement using the database link (cdb1_link) we previously defined:

DESTCDB>create pluggable database pdb1new from pdb1@cdb1_link
        file_name_convert = ('/u01/app/oracle/oradata/cdb1/pdb1','/u01/app/oracle/oradata/destcdb/pdb1new');
Pluggable database created.
By default, the new pluggable database is created in MOUNTED state and can then be opened.

DESTCDB>sho pdbs
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB1NEW                        MOUNTED

DESTCDB>alter pluggable database pdb1new open;
Pluggable database altered.

DESTCDB>sho pdbs
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB1NEW                        READ WRITE NO
DESTCDB>alter session set container=pdb1new;
Session altered.
DESTCDB>select count(*) from hr.employees;
  COUNT(*)
----------
       107
Verify that the directory for the data files of pdb1new has been created automatically on host02 in the location specified using FILE_NAME_CONVERT:

[oracle@host02 pdb1new]$ ls -l /u01/app/oracle/oradata/destcdb/pdb1new
total 2089832
-rw-r----- 1 oracle asmadmin 1304174592 Jul 25 15:41 example01.dbf
-rw-r----- 1 oracle asmadmin   20979712 Jul 25 15:39 pdb1_temp012016-06-29_12-06-27-PM.dbf
-rw-r----- 1 oracle asmadmin    5251072 Jul 25 15:41 SAMPLE_SCHEMA_users01.dbf
-rw-r----- 1 oracle asmadmin  555753472 Jul 25 15:41 sysaux01.dbf
-rw-r----- 1 oracle asmadmin  272637952 Jul 25 15:41 system01.dbf
Hence, we have been able to hot clone a PDB remotely without:

Placing the source PDB in READ ONLY mode
Creating the directory for the destination PDB.
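
If the clone was only needed for testing, it can be removed afterwards; a minimal sketch (INCLUDING DATAFILES permanently deletes the clone's files):

DESTCDB>alter pluggable database pdb1new close immediate;
DESTCDB>drop pluggable database pdb1new including datafiles;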

Step by step Clone PDB within CDB in Oracle Database 12c

Oracle Database 12c provides a new feature to clone a database in a few very simple steps.

Here, I am listing the steps to clone a PDB (Pluggable Database) within its CDB (Container Database).

The example shows cloning the database pdborcl to the new database pdbduporcl.

Below are the steps:

Step 1: Set the source PDB to READ ONLY mode
Step 2: Create a directory for the new clone PDB
Step 3: Configure OMF to point to the directory of the clone PDB (optional)
Step 4: Clone the PDB within the CDB
Step 5: Set the source PDB back to open mode
Step 6: Check the status of all PDBs

Step 1:- Setting the Source PDB to READ ONLY Mode


Connect as the SYS user and put the pdborcl database into READ ONLY mode.
-bash-4.1$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed May 9 11:28:40 2016
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> alter pluggable database pdborcl close immediate;
Pluggable database altered.

SQL> alter pluggable database pdborcl open read only;
Pluggable database altered.

SQL>
SQL> exit
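
You can confirm at this point that the source PDB is in READ ONLY mode; a minimal sketch:

SQL> select name, open_mode from v$pdbs where name = 'PDBORCL';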


Step 2:- Create directory for new clone PDB

Now create the directory structure for the new pdbduporcl database.

-bash-4.1$ pwd
/mnt/devops_0/Oracle/db1212/db1212dat/orcl12c


-bash-4.1$ mkdir pdbduporcl
-bash-4.1$ ls -ltr
total 2080842
drwxr-x---+ 2 db1212 oinstall         5 May  8 14:19 pdbseed
-rw-r-----+ 1 db1212 oinstall  52429312 May  9 01:31 redo02.log
-rw-r-----+ 1 db1212 oinstall  52429312 May  9 11:30 redo03.log
-rw-r-----+ 1 db1212 oinstall 206577664 May  9 11:31 temp01.dbf
-rw-r-----+ 1 db1212 oinstall   5251072 May  9 11:35 users01.dbf
-rw-r-----+ 1 db1212 oinstall 361766912 May  9 11:45 undotbs01.dbf
-rw-r-----+ 1 db1212 oinstall 838868992 May  9 11:45 system01.dbf
-rw-r-----+ 1 db1212 oinstall 744497152 May  9 11:45 sysaux01.dbf
-rw-r-----+ 1 db1212 oinstall  52429312 May  9 11:45 redo01.log
drwxr-x---+ 2 db1212 oinstall         7 May  9 11:45 pdborcl
-rw-r-----+ 1 db1212 oinstall  17973248 May  9 11:45 control01.ctl
drwxr-xr-x+ 2 db1212 oinstall         2 May  9 11:45 pdbduporcl


Step 3:- Configure OMF to point to the directory of the Clone PDB (Optional)

-bash-4.1$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Wed May 9 11:48:01 2016
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> alter system set db_create_file_dest='/mnt/devops_0/Oracle/db1212/db1212dat/orcl12c/pdbduporcl';
System altered.
SQL>
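
You can verify that the parameter took effect; a minimal sketch:

SQL> show parameter db_create_file_dest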


Step 4:- Clone the PDB within CDB

Now the below statement will clone the database from pdborcl to pdbduporcl.

-bash-4.1$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Wed May 9 11:49:53 2016
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL>
SQL> create pluggable database pdbduporcl from pdborcl;
Pluggable database created.
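
At this point the clone exists but is still in MOUNTED state; a minimal sketch to confirm:

SQL> show pdbs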


Open the new PDB:

SQL> alter pluggable database pdbduporcl open;
Pluggable database altered.


Add a TNS entry to the tnsnames.ora file.

pdbduporcl =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = orac12c)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = pdbduporcl)
    )
  )
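
You can optionally verify that the new entry resolves before connecting; a minimal sketch (this assumes the listener on host orac12c is running and the pdbduporcl service has registered with it):

-bash-4.1$ tnsping pdbduporcl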

Connect to the new PDB:

-bash-4.1$ sqlplus /nolog
SQL*Plus: Release 12.1.0.2.0 Production on Wed May 9 12:35:18 2016
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

SQL> conn sys/sysdba123@pdbduporcl as sysdba
Connected.
SQL>
SQL> show con_name
CON_NAME
------------------------------
PDBDUPORCL
SQL>


Step 5:- Set source PDB back to open mode.

Now, set the source PDB back to open mode and make it available to users.

-bash-4.1$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Wed May 9 12:37:41 2016
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL>
SQL> alter pluggable database pdborcl close immediate;
Pluggable database altered.

SQL>
SQL> alter pluggable database pdborcl open;
Pluggable database altered.
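
On 12.1.0.2 and later, you can optionally preserve the open state of the PDBs across instance restarts (without SAVE STATE, PDBs come up in MOUNTED state after a restart); a minimal sketch:

SQL> alter pluggable database pdborcl, pdbduporcl save state;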


Step 6:- Check status of all PDBs


Now check the status of all PDBs as below.

-bash-4.1$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Wed May 9 12:37:41 2016
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> select con_id, dbid, name, open_mode from v$pdbs;

    CON_ID       DBID NAME                           OPEN_MODE
---------- ---------- ------------------------------ ----------
         2 3707469898 PDB$SEED                       READ ONLY
         3 1332221668 PDBORCL                        READ WRITE
         4 1313681298 PDBDUPORCL                     READ WRITE

SQL>

ORA-00059: Maximum Number Of DB_FILES Exceeded in 19C database

When adding a datafile to my 19c database, I faced the below error. SQL> alter tablespace DATA  add datafile '/u01/data/data15.dbf...