ORA-28365: wallet is not open

The following error was encountered while starting the database:



SQL> startup
ORACLE instance started.

Total System Global Area 2.1475E+10 bytes
Fixed Size                 12170960 bytes
Variable Size            2818574640 bytes
Database Buffers         1.8589E+10 bytes
Redo Buffers               54935552 bytes
Database mounted.
ORA-00603: ORACLE server session terminated by fatal error
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-28365: wallet is not open
Process ID: 12599
Session ID: 260 Serial number: 21611


                                   Solution:
Start the instance in mount mode

[oracle@ora]$ sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Fri Jul 10 17:02:32 2020

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup mount
ORACLE instance started.

Total System Global Area 2.1475E+10 bytes
Fixed Size                 12170960 bytes
Variable Size            2818574640 bytes
Database Buffers         1.8589E+10 bytes
Redo Buffers               54935552 bytes
Database mounted.

Check Wallet status
SQL> select STATUS,WRL_PARAMETER WALLET_DIR,WALLET_TYPE from V$ENCRYPTION_WALLET;

STATUS     WALLET_DIR                                            WALLET_TYPE
---------- ----------------------------------------------------- -----------
CLOSED     /apps/DEV/product/12.2.0/dbhome_1/admin/DEV/wallet/   UNKNOWN


SQL> administer key management set keystore open identified by "tde1986";

keystore altered.

SQL> select * from v$encryption_wallet;

WRL_TYPE  WRL_PARAMETER                                         STATUS  WALLET_TYPE  WALLET_ORDER  FULLY_BACKED_UP  CON_ID
--------  ----------------------------------------------------  ------  -----------  ------------  ---------------  ------
FILE      /apps/DEV/product/12.2.0/dbhome_1/admin/DEV/wallet/   OPEN    PASSWORD     SINGLE        NO                    0


SQL> alter database open;


Database altered.
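To avoid hitting ORA-28365 on every restart, an auto-login keystore can be created from the existing password keystore so that it opens automatically at instance startup. This is only a sketch, reusing the wallet directory and password shown above; adjust both to your environment:

SQL> administer key management create auto_login keystore from keystore '/apps/DEV/product/12.2.0/dbhome_1/admin/DEV/wallet/' identified by "tde1986";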

EXP-00008: ORACLE error 1455 encountered


I am facing the below error when attempting a schema-level export on a 12.1 Oracle database.

[oracle@sonu EXPIMP]$ exp system/password owner=DEV file=DEV.dmp log=DEV.log statistics=none

EXP-00008: ORACLE error 1455 encountered
ORA-01455: converting column overflows integer datatype
EXP-00056: ORACLE error 1403 encountered
ORA-01403: no data found


Solution:

Try adding the below parameters to resolve this issue:

constraints=n indexes=n statistics=none

[oracle@sonu EXPIMP]$ exp system/password owner=DEV file=DEV.dmp log=DEV.log constraints=n indexes=n statistics=none


Issue resolved.
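If Data Pump is an option in your environment, a schema-level expdp export avoids the legacy exp statistics path altogether. This is only a sketch; it assumes the default DATA_PUMP_DIR directory object (or another directory object of your choice) is available:

[oracle@sonu EXPIMP]$ expdp system/password schemas=DEV directory=DATA_PUMP_DIR dumpfile=DEV.dmp logfile=DEV.log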



JOB_QUEUE_PROCESSES parameter in 12c database


JOB_QUEUE_PROCESSES controls the background job slave processes in Oracle; both DBMS_JOB jobs and Oracle Scheduler jobs run under this parameter.

If you set the value to zero, no Scheduler or DBMS_JOB jobs run in the background; all jobs are effectively disabled.

Note:
In 12.1, JOB_QUEUE_PROCESSES can be set to a value in the range 0 to 1000.
From 12.2 onwards, it can be set to a value in the range 0 to 4000.

Check the value of JOB_QUEUE_PROCESSES:

[oracle@ora ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Tue Jul 7 10:23:35 2020

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL>

SQL> Show parameter job_queue_processes

NAME                   TYPE     VALUE
---------------------- -------- -------
job_queue_processes    integer  4000

SQL> alter system set job_queue_processes=0;

System altered.

SQL> Show parameter job_queue_processes

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
job_queue_processes                  integer     0
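To re-enable job execution later, set the parameter back to a non-zero value. A sketch, assuming an spfile is in use and that 4000 (the value seen above) is what you want to restore:

SQL> alter system set job_queue_processes=4000 scope=both;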

ORA-20001: Statistics Advisor: Invalid task name for the current user


During the 12c database creation process, you may see the ORA-20001 error in the alert log when the "SYS.ORA$AT_OS_OPT_SY_<NN>" auto job runs. To fix the error, it is necessary to drop and recreate the Statistics Advisor tasks. The errors will be as follows.

In the alert log I am getting the below error:

Error
ORA-12012: error on auto execute of job "SYS"."ORA$AT_OS_OPT_SY_1200"
ORA-20001: Statistics Advisor: Invalid task name for the current user
ORA-06512: at "SYS.DBMS_STATS", line 47207
ORA-06512: at "SYS.DBMS_STATS_ADVISOR", line 882
ORA-06512: at "SYS.DBMS_STATS_INTERNAL", line 20059
ORA-06512: at "SYS.DBMS_STATS_INTERNAL", line 22201
ORA-06512: at "SYS.DBMS_STATS", line 47197
2020-06-24T23:21:43.781403-04:00

Cause:
The issue occurs when the database is created with DBCA and the advisor package is not created properly, so the Stats Advisor tasks are missing from the new database.
It is a known Oracle bug and can appear when a database is created with DBCA in Oracle 12.2.

Solution:

Connect as a user with the SYSDBA privilege and run the following query:

SQL> col name for a30
select name, ctime, how_created
from sys.wri$_adv_tasks
where owner_name = 'SYS'
and name in ('AUTO_STATS_ADVISOR_TASK','INDIVIDUAL_STATS_ADVISOR_TASK');

no rows selected

Initialize the package with the following command:

SQL> EXEC dbms_stats.init_package();

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

Verify that the advisor tasks were created; this fixes the issue:

SQL> col name for a30
select name, ctime, how_created
from sys.wri$_adv_tasks
where owner_name = 'SYS'
and name in ('AUTO_STATS_ADVISOR_TASK','INDIVIDUAL_STATS_ADVISOR_TASK');
NAME                           CTIME     HOW_CREATED
------------------------------ --------- ------------------------------
AUTO_STATS_ADVISOR_TASK        26-JUN-20 CMD
INDIVIDUAL_STATS_ADVISOR_TASK  26-JUN-20 CMD


If the tasks already exist but the error still occurs, connect as SYS via SQL*Plus and drop and recreate the tasks.
The drop operations can be done as follows.


SQL> select name, ctime, how_created,OWNER_NAME from sys.wri$_adv_tasks
where name in ('AUTO_STATS_ADVISOR_TASK','INDIVIDUAL_STATS_ADVISOR_TASK');

NAME                           CTIME     HOW_CREATED
------------------------------ --------- ------------------------------
OWNER_NAME
--------------------------------------------------------------------------------
AUTO_STATS_ADVISOR_TASK        26-JUN-20 CMD
SYS

INDIVIDUAL_STATS_ADVISOR_TASK  26-JUN-20 CMD
SYS


SQL> DECLARE
v_tname VARCHAR2(32767);
BEGIN
v_tname := 'AUTO_STATS_ADVISOR_TASK';
DBMS_STATS.DROP_ADVISOR_TASK(v_tname);
END;
/

PL/SQL procedure successfully completed.

SQL> DECLARE
v_tname VARCHAR2(32767);
BEGIN
v_tname := 'INDIVIDUAL_STATS_ADVISOR_TASK';
DBMS_STATS.DROP_ADVISOR_TASK(v_tname);
END;
/
 
PL/SQL procedure successfully completed.

SQL> col name for a30
select name, ctime, how_created
from sys.wri$_adv_tasks
where owner_name = 'SYS'
and name in ('AUTO_STATS_ADVISOR_TASK','INDIVIDUAL_STATS_ADVISOR_TASK');

no rows selected

It should then be re-created as follows.

SQL> EXEC DBMS_STATS.INIT_PACKAGE();

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

Verify that the advisor tasks were recreated; this fixes the issue:


SQL> col name for a30
select name, ctime, how_created
from sys.wri$_adv_tasks
where owner_name = 'SYS'
and name in ('AUTO_STATS_ADVISOR_TASK','INDIVIDUAL_STATS_ADVISOR_TASK');

NAME                           CTIME     HOW_CREATED
------------------------------ --------- ------------------------------
AUTO_STATS_ADVISOR_TASK        26-JUN-20 CMD
INDIVIDUAL_STATS_ADVISOR_TASK  26-JUN-20 CMD


Recheck the alert log to confirm the error no longer appears.

You are not setup as a buyer


Error in EBS R12.2.4: "You are not setup as a worker. To access this form you need to be a worker."

The user ID should have a proper employee assigned from the HR module.
HR Navigation: (Human Resources >> People >> Enter and Maintain); the person type should be 'Employee'.

Assigning the employee to the user ID:
Add this new employee to your user name (System Administrator >> Security >> User >> Define); add the newly created employee in the Person field.

Define this employee as a BUYER (an OAF page will open):
Add a new buyer (Purchasing >> Setup >> Personnel >> Buyers); click the Add Buyer button, search for your newly created employee, select it, and save. The new buyer is added.



Oracle Data Masking

                           
                                        Overview of Oracle Data Masking

Enterprises run the risk of breaching sensitive information when copying production data into non-production environments for the purposes of application development, testing, or data analysis. Oracle Data Masking helps reduce this risk by irreversibly replacing the original sensitive data with fictitious data so that production data can be shared safely with non-production users. Accessible through Oracle Enterprise Manager, Oracle Data Masking provides end-to-end secure automation for provisioning test databases from production in compliance with regulations.

                                           Data Masking Concepts

Data masking (also known as data scrambling and data anonymization) is the process of replacing sensitive information copied from production databases to test non-production databases with realistic, but scrubbed, data based on masking rules. Data masking is ideal for virtually any situation when confidential or regulated data needs to be shared with non-production users. These users may include internal users such as application developers, or external business partners such as offshore testing companies, suppliers and customers. These non-production users need to access some of the original data, but do not need to see every column of every table, especially when the information is protected by government regulations.

Data masking enables organizations to generate realistic and fully functional data with similar characteristics as the original data to replace sensitive or confidential information. This contrasts with encryption or Virtual Private Database, which simply hides data, and the original data can be retrieved with the appropriate access or key. With data masking, the original sensitive data cannot be retrieved or accessed.

Names, addresses, phone numbers, and credit card details are examples of data that require protection of the information content from inappropriate visibility. Live production database environments contain valuable and confidential data—access to this information is tightly controlled. However, each production system usually has replicated development copies, and the controls on such test environments are less stringent. This greatly increases the risks that the data might be used inappropriately. Data masking can modify sensitive database records so that they remain usable, but do not contain confidential or personally identifiable information. Yet, the masked test data resembles the original in appearance to ensure the integrity of the application.

                                               Roles of Data Masking Users

The following types of users participate in the data masking process for a typical enterprise:

Application database administrator or application developer

This user is knowledgeable about the application and database objects. This user may add additional custom database objects or extensions to packaged applications, such as the Oracle E-Business suite.

Information security administrator

This user defines information security policies, enforces security best practices, and also recommends the data to be hidden and protected.

                                     Related Oracle Security Offerings

Besides data masking, Oracle offers the following security products:

Virtual Private Database or Oracle Label Security — Hides rows and data depending on user access grants.

Transparent Data Encryption — Hides information stored on disk using encryption. Clients see unencrypted information.

DBMS_CRYPTO — Provides server packages that enable you to encrypt user data.

Database Vault — Provides greater access controls on data.

                                       Agent Compatibility for Data Masking

Data masking supports Oracle Database 9i and newer releases. If you have a version prior to 11.1, you can use it by implementing the following workaround.

Replace the following file...

AGENT_HOME/sysman/admin/scripts/db/reorg/reorganize.pl
... with this file:

OMS_HOME/sysman/admin/scripts/db/reorg/reorganize.pl

                                   Format Libraries and Masking Definitions

To mask data, the Data Masking Pack provides two main features:

                                         Masking format library

The format library contains a collection of ready-to-use masking formats. The library consists of format routines that you can use for masking. A masking format can either be one that you create, or one from the list of Oracle-supplied default masking formats.

As a matter of best practice, organizations should create masking formats for all commonly regulated information so that the formats can be applied to the sensitive data regardless of which database the sensitive data resides in. This ensures that all sensitive data is consistently masked across the entire organization.

                                                Masking definitions

A masking definition defines a data masking operation to be implemented on one or more tables in a database. Masking definitions associate table columns with formats to use for masking the data. They also maintain the relationship between columns that are not formally declared in the database using related columns.

You can create a new masking definition or use an existing definition for a masking operation. To create a masking definition, you specify the column of the table for which the data should be masked and the format of masked data. If the columns being masked are involved in unique, primary key, or foreign key constraints, data masking generates the values so that the constraints are not violated. Masking ensures uniqueness per character using decimal arithmetic. For example, a 5-character string generates a maximum of only 99999 unique values. Similarly, a 1-character string generates a maximum of only 9 unique values.

You would typically export masking definitions to files and import them on other systems. This is important when the test and production sites reside on different Oracle Management Systems or on entirely different sites.



                                Data Masking Workflow

The following figure shows that the production database is cloned to a staging region and then masked there. During the masking process, the staging and test areas are tightly controlled like a production site.









Data masking is an iterative and evolving process handled by the security administrator and implemented by the database administrator. When you first configure data masking, try out the masking definition on a test system, then add a greater number of columns to the masking definition and test it to make sure it functions correctly and does not break any application constraints. During this process, you should exercise care when removing all embedded references to the real data while maintaining referential integrity.
After data masking is configured to your satisfaction, you can use the existing definition to repeatedly mask after cloning. The masking definition, however, would need to evolve as new schema changes require new data and columns to be masked.
After the masking process is complete, you can distribute the database for wide availability. If you need to ship the database to another third-party site, you are required to use the Data Pump Export utility, and then ship the dump file to the remote site. However, if you are retaining the masked data in-house, the masked staging database can itself serve as the new test region (see the options below).



Data Masking Task Sequence

 The task sequence in this section demonstrates the data masking workflow and refers you to additional information about some of the tasks in the sequence. Before reviewing this sequence, note that there are two options for completing this process:

                              Exporting/importing to another database

You can clone the production database to a staging area, mask it, then export/ import it to another database before delivering it to in-house testers or external customers. This is the most secure approach.

                          Making the staging area the new test region

You can clone the production database to a mask staging area, then make the staging area the new test region. In this case, you should not grant testers SYSDBA access or access to the database files. Doing so would compromise security. The masked database contains the original data in unused blocks and in the free list. You can only purge this information by exporting/importing the data to another database.

The following basic steps guide you through the data masking process, with references to other sections for supporting information.

1. Review the application database and identify the sources of sensitive information.
2. Define mask formats for the sensitive data. The mask formats may be simple or complex depending on the information security needs of the organization.
3. Create a masking definition to associate table columns and edition view objects to these mask formats. Data masking determines the database foreign key relationships and adds foreign key columns to the mask.
4. Save the masking definition and generate the masking script.
5. Verify if the masked data meets the information security requirements. Otherwise, refine the masking definition, restore the altered tables, and reapply the masking definition until the optimal set of masking definitions has been identified.
6. Clone the production database to a staging area, selecting the masking definition to be used after cloning. Note that you can clone using Oracle Enterprise Manager, which enables you to add masking to the Oracle Enterprise Manager clone workflow. However, if you clone outside of Oracle Enterprise Manager, you must initiate masking from Oracle Enterprise Manager after cloning is complete. The cloned database should be controlled with the same privileges as the production system, because it still contains sensitive production data.
7. After cloning, make sure you change the passwords as well as update or disable any database links, streams, or references to external data sources. Back up the cloned database, or minimally the tables that contain masked data. This can help you restore the original data if the masking definition needs to be refined further.
8. After masking, test all of your applications, reports, and business processes to ensure they are functional. If everything is working, you can export the masking definition to keep it as a back-up.
9. After masking the staging site, make sure to drop any tables named MGMT_DM_TT before cloning to a test region. These temporary tables contain a mapping between the original sensitive column value and the mask values, and are therefore sensitive in nature. During masking, Oracle Enterprise Manager automatically drops these temporary tables for you with the default "Drop temporary tables created during masking" option. However, you can preserve these temporary tables by deselecting this option. In this case, you are responsible for deleting the temporary tables before cloning to the test region.
10. After masking is complete, ensure that all tables loaded for use by the substitute column format or table column format are going to be dropped. These tables contain the mask values that table column or substitute formats will use. It is recommended that you purge this information for security reasons.
11. Clone the database to a test region, or use it as the new test region. When cloning the database to an external or unsecured site, you should use Export or Import. Only supply the data in the database, rather than the database files themselves.
12. As part of cloning production for testing, provide the masking definition to the application database administrator to use in masking the database.

---------------------------------------Data Masking statement with example------------


Every organization maintains sensitive data about its operations, such as personally identifiable data, personal sensitive data, or commercially sensitive data, and is required to protect that data from falling into the wrong hands.

Data can be protected with common measures such as encryption, strong passwords, and minimal privileges, but sometimes these measures have to be balanced against application requirements and database performance.

To address these difficulties, Oracle introduced Data Redaction (often loosely called data masking) in version 12c to provide additional security for sensitive data.

The main advantage of a data redaction policy is that it dynamically returns realistic-looking or masked data to querying users instead of the real data.

In order to create redaction policies, the user needs EXECUTE permission on SYS.DBMS_REDACT:

SYS>> grant execute on sys.dbms_redact to data;

Data masking can be done dynamically or statically to protect sensitive data.

You can redact column data by using one of the following methods:

Full redaction
Partial redaction
Regular expressions
Random redaction
No redaction

Full redaction

Full redaction will redact all of the contents of the column data. The redacted value returned to the querying application user depends on the data type of the column.

Columns of the NUMBER data type are redacted with a zero (0).

Columns of the character data types are redacted with a single space.
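The screenshots of the demo table are not reproduced here. For reference, the objects used in the examples below could be created with something like the following sketch; the DMTEST table name, its U_ATM_PIN and U_ACC_NO columns, and the DATA/mask credentials come from the transcripts, but the exact column definitions and sample row are assumptions:

SYS>>create user data identified by mask;
SYS>>grant connect, resource, unlimited tablespace to data;

DATA>>create table dmtest (
        u_name    varchar2(30),
        u_acc_no  varchar2(19),     -- e.g. 1234-5678-9012-3456
        u_atm_pin number(6));       -- NUMBER columns redact to 0 under FULL redaction

DATA>>insert into dmtest values ('SCOTT', '1234-5678-9012-3456', 445566);
DATA>>commit;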



FULL DATA MASKING

DATA>>select * from dmtest;







DATA>>connect / as sysdba

Connected.



SYS>>grant execute on sys.dbms_redact to data;



SYS>>connect data/mask;

Connected.



Run the below procedure to fully mask the data.

DATA>>BEGIN
  2    DBMS_REDACT.ADD_POLICY(
  3      object_schema => 'DATA',
  4      object_name   => 'DMTEST',
  5      column_name   => 'U_ATM_PIN',
  6      policy_name   => 'REDACT_PIN',
  7      function_type => DBMS_REDACT.FULL,
  8      expression    => '1=1');
  9  END;
 10  /



PL/SQL procedure successfully completed.







DATA>>select * from dmtest;







FULL DATA UNMASKING

Run the below procedure to unmask the data for the DATA user (redaction will then apply only to other users).

DATA>>BEGIN
  2    DBMS_REDACT.ALTER_POLICY(
  3      object_schema => 'DATA',
  4      object_name   => 'DMTEST',
  5      policy_name   => 'REDACT_PIN',
  6      action        => DBMS_REDACT.MODIFY_EXPRESSION,
  7      column_name   => 'U_ATM_PIN',
  8      expression    => 'SYS_CONTEXT(''USERENV'',''SESSION_USER'') != ''DATA''');
  9  END;
 10  /





DATA>>select * from dmtest;







Partial redaction

Partial redaction will redact a portion of the column data.

In this case we are redacting the account number with asterisks (*), except for the last 4 digits. In the function_parameters string used below, V marks a position that can be redacted, F marks a fixed formatting character (the hyphen), * is the mask character, and 1 and 12 are the first and last V positions to redact.







Run the below procedure to partially mask the data.

DATA>>BEGIN
  2    DBMS_REDACT.ADD_POLICY(
  3      object_schema       => 'DATA',
  4      object_name         => 'DMTEST',
  5      column_name         => 'U_ACC_NO',
  6      policy_name         => 'REDACT_ACCNO',
  7      function_type       => DBMS_REDACT.PARTIAL,
  8      function_parameters => 'VVVVFVVVVFVVVVFVVVV,VVVV-VVVV-VVVV-VVVV,*,1,12',
  9      expression          => '1=1');
 10  END;
 11  /







DATA>>select * from dmtest;







Dropping the Redaction Policy

We can drop the redaction policy by executing the following procedure.



DATA>>BEGIN
  2    DBMS_REDACT.DROP_POLICY(
  3      object_schema => 'DATA',
  4      object_name   => 'DMTEST',
  5      policy_name   => 'REDACT_ACCNO');
  6  END;
DATA>>/



PL/SQL procedure successfully completed.
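To confirm which redaction policies remain after the drop, the REDACTION_POLICIES data dictionary view can be queried (a quick check; it needs the appropriate dictionary access):

SYS>>select object_owner, object_name, policy_name from redaction_policies;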









DATA>>select * from dmtest;







Regular expressions

Regular expression redaction uses regular expressions to look for patterns of data to redact.

For example, you can use regular expressions to redact email addresses, which can have varying character lengths. It is designed for use with character data only.
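As an illustration only, a regular-expression policy on a hypothetical CUSTMAIL table with an EMAIL column (not part of the original demo) could look like the sketch below; the table, column, and pattern are assumptions:

DATA>>BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema         => 'DATA',
    object_name           => 'CUSTMAIL',
    column_name           => 'EMAIL',
    policy_name           => 'REDACT_EMAIL',
    function_type         => DBMS_REDACT.REGEXP,
    regexp_pattern        => '(.+)@(.+)',   -- capture the user part and the domain
    regexp_replace_string => 'xxxx@\2',     -- hide the user part, keep the domain
    expression            => '1=1');
END;
/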



Random redaction

The redacted data presented to the querying application user appears as randomly generated values each time it is displayed, depending on the data type of the column.



DATA>>select * from USERINFO;









Run the below procedure to randomly mask the data.

DATA>>BEGIN
  2    DBMS_REDACT.ADD_POLICY(
  3      object_schema => 'DATA',
  4      object_name   => 'USERINFO',
  5      column_name   => 'PHNO',
  6      policy_name   => 'REDACT_PHNO',
  7      function_type => DBMS_REDACT.RANDOM,
  8      expression    => 'SYS_CONTEXT(''USERENV'',''SESSION_USER'') = ''APPUSER''');
  9  END;
 10  /



PL/SQL procedure successfully completed.







DATA>>select * from userinfo;







DATA>>select PHNO  from userinfo;







Grant the SELECT privilege to APPUSER so it can select data from the USERINFO table.
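The grant itself is not shown in the screenshot; it would be along these lines, run as the DATA user since it owns the table:

DATA>>grant select on userinfo to appuser;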





Connect as APPUSER and try to select from the DATA user's USERINFO table.







You will find that it shows random values for the PHNO column every time.



Connect as the DATA user and try to select the data.







Run the below procedure to show actual values to the APPUSER.

DATA>>BEGIN
  2    DBMS_REDACT.ALTER_POLICY(
  3      object_schema => 'DATA',
  4      object_name   => 'USERINFO',
  5      policy_name   => 'REDACT_PHNO',
  6      action        => DBMS_REDACT.MODIFY_EXPRESSION,
  7      expression    => 'SYS_CONTEXT(''USERENV'',''SESSION_USER'') != ''APPUSER''');
  8  END;
  9  /



PL/SQL procedure successfully completed.







Connect as APPUSER and try to select the data.









This time APPUSER sees the actual values for the PHNO column, because redaction now applies only to sessions other than APPUSER.

How to open the PDBs automatically when the CDB restarts

In Oracle 12c, when we start up the CDB, the PDBs will be in MOUNTED state.
We need to open them manually using ALTER PLUGGABLE DATABASE <pdb_name> OPEN.

To make the PDBs open automatically, we can use the save state option.

This feature is available from Oracle 12.1.0.2 onwards.

1.  Restart the container database:

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.


SQL> startup
ORACLE instance started.

Total System Global Area 1.4529E+10 bytes
Fixed Size                  7649368 bytes
Variable Size            8489273256 bytes
Database Buffers         5939134464 bytes
Redo Buffers               93011968 bytes
Database mounted.
Database opened.

2. Check the status of PDBS:

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           MOUNTED
         4 PDB2                           MOUNTED
         5 NONCDB                         MOUNTED


We can see the PDBs are in MOUNTED state after starting the CDB. Let's open them.

3. Open the PDBS:

SQL> alter pluggable database NONCDB open;

Pluggable database altered.

SQL>  alter pluggable database PDB1 open;

Pluggable database altered.

SQL> alter pluggable database PDB2 open;

Pluggable database altered.

SQL>  show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB2                           READ WRITE NO
         5 NONCDB                         READ WRITE NO

Now use the SAVE STATE command to save the current open mode, so that the next time the container database restarts,
the PDBs will be in READ WRITE mode automatically.

 4. Save the PDB state:

SQL> alter pluggable database NONCDB save state;

Pluggable database altered.

SQL>  alter pluggable database PDB1 save state;

Pluggable database altered.

SQL> alter pluggable database PDB2 save state;

Pluggable database altered.
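From 12.1.0.2 onwards, instead of saving the state of each PDB one by one, you can also do it in a single statement, for example:

SQL> alter pluggable database all save state;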


5. Check the saved state in DBA_PDB_SAVED_STATES:

SQL> select con_name, state from dba_pdb_saved_states;

CON_NAME                STATE
----------------------- --------------
NONCDB                  OPEN
PDB1                    OPEN
PDB2                    OPEN


6. Bounce the container database to check the PDB state:

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.


SQL> startup
ORACLE instance started.

Total System Global Area 1.4529E+10 bytes
Fixed Size                  7649368 bytes
Variable Size            8489273256 bytes
Database Buffers         5939134464 bytes
Redo Buffers               93011968 bytes
Database mounted.
Database opened.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB2                           READ WRITE NO
         5 NONCDB                         READ WRITE NO


We can observe that the PDBs are in READ WRITE mode automatically.

DISCARD STATE OPTION:

We can discard the saved state, so that the next time the CDB restarts, the PDB will start in MOUNTED state only.

Let's discard the state for PDB2.

SQL> select con_name, state from dba_pdb_saved_states;

CON_NAME                STATE
----------------------- --------------
NONCDB                  OPEN
PDB1                    OPEN
PDB2                    OPEN


SQL> alter pluggable database PDB2 discard state;

Pluggable database altered.

SQL>   select con_name, state from dba_pdb_saved_states;

CON_NAME                STATE
----------------------- --------------
NONCDB                  OPEN
PDB1                    OPEN


Bounce the CDB to check the PDB state:

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB2                           READ WRITE NO
         5 NONCDB                         READ WRITE NO

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area 1.4529E+10 bytes
Fixed Size                  7649368 bytes
Variable Size            8489273256 bytes
Database Buffers         5939134464 bytes
Redo Buffers               93011968 bytes
Database mounted.
Database opened.

SQL>  show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB2                           MOUNTED
         5 NONCDB                         READ WRITE NO




We can observe that only PDB2 started in MOUNTED state, because we discarded the saved state for PDB2.

As this feature is not available in 12.1.0.1, we can instead create a trigger to open the PDBs automatically when the container database starts.

CREATE TRIGGER open_all_pdbs
  AFTER STARTUP ON DATABASE
BEGIN
   EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE ALL OPEN';
END;
/
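Once the database is on 12.1.0.2 or later and saved states are in use, this trigger becomes redundant and can be dropped (assuming the trigger name used above):

SQL> drop trigger open_all_pdbs;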


