PLS-00905: object APPS.APPS is invalid

Today the development team reported an issue: any package that references objects with the apps.<object> schema prefix fails to compile with the following error:

SQL> alter package XX_XXXX_PKG compile body;

Warning: Package Body altered with compilation errors.

SQL> show error
Errors for PACKAGE BODY XX_XXXX_PKG:

LINE/COL ERROR
-------- -----------------------------------------------------------------
1130/16  PL/SQL: Statement ignored
1130/16  PLS-00905: object APPS.APPS is invalid
1379/16  PL/SQL: Statement ignored
1379/16  PLS-00905: object APPS.APPS is invalid
1563/16  PL/SQL: Statement ignored
1563/16  PLS-00905: object APPS.APPS is invalid
1791/16  PL/SQL: Statement ignored
1791/16  PLS-00905: object APPS.APPS is invalid
SQL>


Cause: 
An invalid package specification or stored subprogram was referenced.
A package specification or stored subprogram is invalid if its source code or
any database object it references has been DROPped, REPLACEd, or ALTERed since it was last compiled.

Action: 
Find out what invalidated the package specification or stored subprogram,
then make sure that Oracle can recompile it without errors.
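To find what invalidated it, the invalid objects can be listed first (a quick check; assumes access to the DBA views):

SQL > select owner, object_name, object_type, status
  from dba_objects
 where owner='APPS'
   and status='INVALID'
 order by 1,2,3;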

Step 1.

The package compiles successfully in other instances such as QA, but fails in the dev instance. If we remove the schema prefix (apps.) it compiles in dev as well, but coding standards recommend keeping the apps.<object> prefix.

Step 2.

 Now we should make sure that we have created the package in the right place.

Connect as the apps schema:

[Dev@oracle ~]$ sqlplus apps/apps

SQL > select owner, object_name, object_type
  from dba_objects
 where object_name='XX_XXXX_PKG'
 order by 1,2,3;

Output: the package exists in the right place.

OWNER           OBJECT_NAME       OBJECT_TYPE
APPS            XX_XXXX_PKG       PACKAGE
APPS            XX_XXXX_PKG       PACKAGE BODY


Step 3.

Checking the compiler log again, the reference is to APPS.APPS, which suggests that a developer mistakenly created an object named APPS in the APPS schema. That may be our case.

Now check :

SQL > select owner, object_name, object_type
  from dba_objects
 where OBJECT_NAME='APPS'
 order by 1,2,3;

One object named APPS was found.

Step 4.

Drop the APPS object:

SQL > drop package apps;

Step 5.

Try compiling the package in the dev instance again.

The issue should now be fixed.



Inventory load failed


Running opatch lsinventory fails with "Inventory load failed... OPatch cannot load inventory for the given Oracle Home":
[ora@oracle OPatch]$ opatch lsinventory

Oracle Interim Patch Installer version 12.1.0.1.3
Copyright (c) 2019, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/db/tech_st/12.1.0.2
Central Inventory : /u01/app/db/oraInventory
   from           : /u01/app/db/tech_st/12.1.0.2/oraInst.loc
OPatch version    : 12.1.0.1.3
OUI version       : 12.1.0.2.0
Log file location : /u01/app/db/tech_st/12.1.0.2/cfgtoollogs/opatch/opatchAM_1.log

List of Homes on this system:

Inventory load failed... OPatch cannot load inventory for the given Oracle Home.
Possible causes are:
   Oracle Home dir. path does not exist in Central Inventory
   Oracle Home is a symbolic link
   Oracle Home inventory is corrupted
LsInventorySession failed: OracleHomeInventory gets null oracleHomeInfo

OPatch failed with error code 73


Solution


[ora@oracle ~]$ cd $ORACLE_HOME/oui/bin
[ora@oracle bin]$ pwd
/u01/app/db/tech_st/12.1.0.2/oui/bin

[ora@oracle bin]$ vi attachHome.sh

[ora@oracle bin]$ ./attachHome.sh

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 8191 MB    Passed
The inventory pointer is located at /etc/oraInst.loc

'AttachHome' was successful.

Retest the issue:

[ora@oracle OPatch]$ opatch lsinventory

The issue has been resolved.



Recover a dropped table in Oracle 11g

How to recover a dropped table in Oracle 11g when flashback mode is off

After dropping the table and before restoring it from the recycle bin, log in as the table owner schema and run the following query:

SQL > SELECT OBJECT_NAME, ORIGINAL_NAME, TYPE FROM RECYCLEBIN;

If the table is listed there, restore it with the following command:

SQL > FLASHBACK TABLE <dropped_table_name> TO BEFORE DROP;

Recovering a dropped table is easy in Oracle, provided the table was not dropped with the PURGE option. If PURGE is used, the space occupied by the table is released immediately and the table is not moved into the recycle bin. If the table is dropped without the PURGE option, Oracle offers a very neat feature, the recycle bin, similar to the Recycle Bin in Windows. There are two recycle bin views in Oracle, USER_RECYCLEBIN and DBA_RECYCLEBIN; the synonym RECYCLEBIN points to your USER_RECYCLEBIN.

The recycle bin can be turned on and off with the RECYCLEBIN initialization parameter. When a table is dropped, it is renamed to a system-generated name beginning with BIN$ and recorded in the recycle bin. The important thing to know is that after a table has been dropped it has only been renamed; the table segments are still in the tablespace, unchanged. The space occupied by the table will not be reclaimed until the table has been purged from the recycle bin.

While in the recycle bin, the table can even be queried using the newly generated name that starts with BIN$.
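For example (an illustration using a sample table EMP as a stand-in; the BIN$ name is system generated, so substitute the one returned for your table):

SQL > SELECT object_name FROM recyclebin WHERE original_name = 'EMP';

SQL > SELECT COUNT(*) FROM "BIN$<generated_name>$0";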

The table can easily be recovered from the recycle bin using flashback drop, which will rename the table to its original name.

You can check flashback mode by running

SELECT FLASHBACK_ON FROM V$DATABASE;

First check that the RECYCLEBIN parameter is set to ON. The recycle bin is a data dictionary table that contains information about dropped objects. Dropped tables and any associated objects such as indexes, constraints, and nested tables are not removed and still occupy space. They continue to count against user space quotas until specifically purged from the recycle bin, or in the unlikely situation where the database must purge them because of tablespace space constraints.

SHOW PARAMETER RECYCLEBIN;

If RECYCLEBIN is set to OFF, perform the following steps:


ALTER SYSTEM SET RECYCLEBIN=ON SCOPE=SPFILE;

Verify that the RECYCLEBIN parameter is now set to ON.

Shut down the database:
SHUTDOWN IMMEDIATE

Restart the database
STARTUP

then run
SELECT * FROM RECYCLEBIN;

and see if your table is in there. If it is, use the following query to restore it:

FLASHBACK TABLE <table_name> TO BEFORE DROP;

Then check that the table is back:

SELECT * FROM USER_TABLES WHERE TABLE_NAME = '<TABLE_NAME>';

**********************************************************

SQL >  SELECT * FROM RECYCLEBIN;

SQL >  SELECT object_name, original_name FROM dba_recyclebin WHERE owner = 'Schema_name';


SQL >  SELECT object_name, original_name, createtime FROM recyclebin;

SQL >  desc dba_recyclebin;

SQL >  select OWNER,OBJECT_NAME,ORIGINAL_NAME,CREATETIME,DROPTIME from dba_recyclebin WHERE owner = 'Schema_name';

Log in as the dropped table's owner schema:

SQL > show user;

HR

SQL > FLASHBACK TABLE DROP_TABLE_NAME TO BEFORE DROP;



Clear Recyclebin

To remove all dropped objects from the recycle bin (current user):

PURGE RECYCLEBIN;

To remove all dropped objects from the recycle bin (system wide; available to SYSDBA only or, starting with version 12c, to users having the PURGE DBA_RECYCLEBIN system privilege):

PURGE DBA_RECYCLEBIN;

Tables can also be dropped without sending them to the recycle bin. Example:

DROP TABLE tba PURGE;

Tables inside the recycle bin can be purged individually. Example:

PURGE TABLE tba;

Drop a table:

SQL> DROP TABLE tba;

Undrop the table:

SQL> FLASHBACK TABLE tba TO BEFORE DROP;


ORA-48913: Writing into trace file failed, file size limit


Error detailed from alert log.

Error message: ORA-48913: Writing into trace file failed, file size limit [10485760] reached
ORA-609 : opiodr aborting process unknown ospid (4397_47562687353296)
ORA-609 : opiodr aborting process unknown ospid (28545_47989084123600)
ORA-609 : opiodr aborting process unknown ospid (4703_47579510747600)
Non critical error ORA-48913 caught while writing to trace file "/u01/prod/db/tech_st/11.1.0/admin/PROD1_prod1/
diag/rdbms/prod1/PROD1/trace/PROD1_dbrm_13082.trc"
Error message: ORA-48913: Writing into trace file failed, file size limit [10485760] reached


Cause :-

The MAX_DUMP_FILE_SIZE parameter is set too low, so a trace file hit the configured limit (10485760 bytes in this case).

We can increase the setting of MAX_DUMP_FILE_SIZE or set it to UNLIMITED.

MAX_DUMP_FILE_SIZE specifies the maximum size of trace files (excluding the alert file). Change this limit if you are concerned that trace files may use too much space.
A numerical value for MAX_DUMP_FILE_SIZE specifies the maximum size in operating system blocks.
A number followed by a K or M suffix specifies the file size in kilobytes or megabytes.
The special value UNLIMITED means that there is no upper limit on trace file size; dump files can be as large as the operating system permits.




Solution :-


SQL> show parameter MAX_DUMP_FILE_SIZE;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_dump_file_size                   string      20480
SQL>

You would need to raise the value of MAX_DUMP_FILE_SIZE to let trace files grow beyond the current 20480 operating-system blocks (20480 x 512-byte blocks = 10485760 bytes, the exact limit reported in the error). Use the ALTER SYSTEM command to change the value; you may also set it to UNLIMITED.


SQL> ALTER SYSTEM SET max_dump_file_size=unlimited;


Pluggable Database (PDB) useful SQL statements

[ora@oracle]$ sqlplus system/****@PROD_CDB

SQL> show con_name;

CON_NAME
------------------------------
CDB$ROOT

SQL> select name from v$database;

NAME
---------
PROD

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB                            MOUNTED


SQL> ALTER SESSION SET CONTAINER= PDB;

Session altered.

SQL> ALTER PLUGGABLE DATABASE OPEN;

Warning: PDB altered with errors.


SQL> ALTER DATABASE OPEN;

Database altered.


SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         3 PDB                            READ WRITE YES

SQL> shut immediate

SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;

System altered.


SQL> select value from v$parameter where name='PDB';

SQL> select cause, message from pdb_plug_in_violations where name = 'PDB';
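Once the violations reported above are fixed, restricted mode can usually be lifted and the open state saved so the PDB reopens automatically after a restart (a sketch, not part of the original session; SAVE STATE requires 12.1.0.2 or later):

SQL> ALTER SYSTEM DISABLE RESTRICTED SESSION;

SQL> ALTER PLUGGABLE DATABASE PDB SAVE STATE;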

SQL> select * from v$services;


SQL> select SERVICE_ID,NAME,NETWORK_NAME from v$services;

SQL> select NAME from v$services;


[ora@oracle]$ sqlplus apps/****@PDB


SQL> select username from dba_users where USERNAME= 'APPS';

no rows selected


ORA-03113: end-of-file on communication channel


SQL> startup
ORACLE instance started.

Total System Global Area 8589934592 bytes
Fixed Size                  3731384 bytes
Variable Size            1140850760 bytes
Database Buffers         7432306688 bytes
Redo Buffers               13045760 bytes
Database mounted.
ORA-03113: end-of-file on communication channel
Process ID: 1264
Session ID: 814 Serial number: 23355


The instance mounts but dies while opening, which typically points to online redo that cannot be applied. Clearing the unarchived log groups in MOUNT mode allowed the database to open. Note that clearing unarchived logs makes earlier backups unusable for complete recovery, so take a fresh backup afterwards.

[ora@sujeet ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Thu Aug 22 21:07:49 2019

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup nomount
ORACLE instance started.

Total System Global Area 8589934592 bytes
Fixed Size                  3731384 bytes
Variable Size            1140850760 bytes
Database Buffers         7432306688 bytes
Redo Buffers               13045760 bytes
SQL>
SQL> alter database mount;

Database altered.
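Before clearing anything, the state of the online log groups can be checked to see which groups are unarchived (a hedged addition, not part of the original transcript):

SQL> SELECT group#, sequence#, status, archived FROM v$log;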

SQL>
SQL>
SQL> alter database clear unarchived logfile group 1;

Database altered.

SQL> alter database clear unarchived logfile group 2;

Database altered.

SQL> alter database clear unarchived logfile group 3;

Database altered.

SQL> shutdown immediate
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
SQL>
SQL>
SQL> startup
ORACLE instance started.

Total System Global Area 8589934592 bytes
Fixed Size                  3731384 bytes
Variable Size            1140850760 bytes
Database Buffers         7432306688 bytes
Redo Buffers               13045760 bytes
Database mounted.
Database opened.
SQL>

Renaming Data Files in 11g database

How to rename a datafile (.dbf) in an 11g database

Rename Datafiles in Single Tablespace  (Database Open Mode) :-

Caution: Back up your tablespace before making any change to it.

Please find below steps to rename datafile.

We can use the ALTER TABLESPACE ... RENAME DATAFILE command, but the tablespace must be offline and the data file must be renamed at the OS level while the tablespace is offline:


Step 1:

[ora@oracle ~]$ sqlplus / as sysdba

SQL> ALTER TABLESPACE SYSAUX OFFLINE NORMAL;


Tablespace altered.
SQL> !

Step 2:

Rename undo07.dbf to sysaux08.dbf:


[ora@oracle ~]$ cd /u01/db/apps_st/data/

[ora@oracle data]$ ls -lrt|grep undo07.dbf
-rw-r----- 1 oraqa dba  3221233664 Aug  9 11:32 undo07.dbf

[ora@oracle data]$ chmod 755 undo07.dbf

[ora@oracle data]$ mv undo07.dbf sysaux08.dbf

[ora@oracle data]$ ls -lrt|grep undo07.dbf

[ora@oracle data]$ ls -lrt|grep sysaux08.dbf
-rwxr-xr-x 1 oraqa dba  3221233664 Aug  9 11:32 sysaux08.dbf


Step 3:

[ora@oracle data]$ sqlplus / as sysdba

SQL> ALTER TABLESPACE SYSAUX RENAME DATAFILE '/u01/db/apps_st/data/undo07.dbf' to '/u01/db/apps_st/data/sysaux08.dbf';

Tablespace altered.

Step 4:

SQL> ALTER TABLESPACE SYSAUX ONLINE;

Tablespace altered.

Step 5:

Verify the tablespace status.

SQL> SELECT status
FROM dba_tablespaces
WHERE tablespace_name = 'SYSAUX';

STATUS
---------
ONLINE

Verify datafile name.

SQL> select TABLESPACE_NAME,FILE_NAME from dba_data_files where TABLESPACE_NAME = 'SYSAUX';

TABLESPACE_NAME           FILE_NAME
--------------------------------------------------------------------------------
SYSAUX                    /u01/db/apps_st/data/sysaux08.dbf





Renaming datafile undo07.dbf to sysaux08.dbf is complete.







ORA-48913: Writing into trace file failed, file size limit [10485760] reached

ERROR:-
Non critical error ORA-48913 caught while writing to trace file
 "/apps/PROD/db/diag/rdbms/prod/PROD/trace/PROD_dbrm_6874.trc"
Error message: ORA-48913: Writing into trace file failed, file size limit [10485760] reached.


In some environments DBAs limit the size of trace files generated by the database. This includes all trace files generated under USER_DUMP_DEST/DIAGNOSTIC_DEST. The parameter that limits trace file size is MAX_DUMP_FILE_SIZE, and its numeric value is in OS blocks. After setting this value, if any trace file grows beyond the size specified by this parameter, ORA-48913 is recorded in the alert log file.

Cause
The MAX_DUMP_FILE_SIZE parameter is set too low.

Solution
We can increase the setting of MAX_DUMP_FILE_SIZE or set it to UNLIMITED.
MAX_DUMP_FILE_SIZE specifies the maximum size of trace files (excluding the alert file). Change this limit if you are concerned that trace files may use too much space.
A numerical value for MAX_DUMP_FILE_SIZE specifies the maximum size in operating system blocks.
A number followed by a K or M suffix specifies the file size in kilobytes or megabytes.
The special value UNLIMITED means that there is no upper limit on trace file size; dump files can be as large as the operating system permits.


SQL> show parameter max_dump_file

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_dump_file_size                   string      20000


SQL> alter system set max_dump_file_size=UNLIMITED scope=both;

System altered.


SQL> show parameter max_dump_file

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_dump_file_size                   string      UNLIMITED

OR

SQL> show parameter dump_file

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_dump_file_size                   string      20480


The numeric setting is in OS blocks; the relevant block size can be checked from the redo log block size (512 bytes here, and 20480 blocks x 512 bytes = 10485760 bytes, the exact limit reported in the error):

SQL> select max(lebsz) from x$kccle;

MAX(LEBSZ)
----------
       512

SQL> alter session set max_dump_file_size='1024M';

Session altered.

SQL> show parameter max_dump_file_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_dump_file_size                   string      1024M





UnsatisfiedLinkError exception loading native library: njni11


UnsatisfiedLinkError exception loading native library: njni11 while running Autoconfig on DB Node


Context Value Management will now update the Context file
UnsatisfiedLinkError exception loading native library: njni11


Fix
---

To fix this issue, edit the database node XML as below and re-run AutoConfig.

Database XML file path: /u01/db/tech_st/11.1.0/appsutil/VISPRD_hostname.xml

From
----
<          <JDK_TOP oa_var="s_jdktop">/u01/db/tech_st/11.1.0/appsutil/jre</JDK_TOP>
<          <JRE_TOP oa_var="s_jretop">/u01/db/tech_st/11.1.0/appsutil/jre</JRE_TOP>

To
---
>          <JDK_TOP oa_var="s_jdktop">/u01/db/tech_st/11.1.0/jdk/jre</JDK_TOP>
>          <JRE_TOP oa_var="s_jretop">/u01/db/tech_st/11.1.0/jdk/jre</JRE_TOP>
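The same edit can be scripted; a hedged sketch using sed, with the context file path from this article (back up the file first):

[ora@oracle ~]$ cd /u01/db/tech_st/11.1.0/appsutil
[ora@oracle appsutil]$ cp VISPRD_hostname.xml VISPRD_hostname.xml.bak
[ora@oracle appsutil]$ sed -i 's|11.1.0/appsutil/jre|11.1.0/jdk/jre|g' VISPRD_hostname.xml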


Re-run AutoConfig on the DB node.

The issue should now be resolved.


DBMS_AW_EXP: SYS.AW$EXPRESS: OLAP not enabled

Datapump Export (expdp) Raises Warnings Like "DBMS_AW_EXP: SYS.AW$EXPRESS: OLAP not enabled" (Doc ID 1638799.1)


APPLIES TO:
Oracle Database - Enterprise Edition - Version 11.2.0.4 and later
Oracle Database Cloud Schema Service - Version N/A and later
Oracle Database Exadata Express Cloud Service - Version N/A and later
Oracle Database Exadata Cloud Machine - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Information in this document applies to any platform.

SYMPTOMS
Expdp on Oracle Standard Edition / Standard Edition One produces the warnings below:

DBMS_AW_EXP: SYS.AW$EXPRESS: OLAP not enabled
DBMS_AW_EXP: SYS.AW$AWMD: OLAP not enabled
DBMS_AW_EXP: SYS.AW$AWCREATE: OLAP not enabled
DBMS_AW_EXP: SYS.AW$AWCREATE10G: OLAP not enabled
DBMS_AW_EXP: SYS.AW$AWXML: OLAP not enabled
DBMS_AW_EXP: SYS.AW$AWREPORT: OLAP not enabled

CAUSE
The issue was explained in
Bug 17718899 - EXPDP ON STANDARD EDITION ONE SHOWS MESSAGES "OLAP NOT ENABLED"
closed with status 'Not a Bug' with below comment:

OLAP is not available in Standard Edition One (SE1) or Standard Edition (SE) as per the Database Licensing Information document,
it is only available in Enterprise Edition (EE).

This message is intentionally output for DATAPUMP and coded as such in 11.2.0.4 and higher Standard Edition One (SE1) or Standard Edition (SE).

From the Datapump perspective these are warnings and it is safe to ignore them.



NOTE:
The same symptoms can be encountered with the Oracle Enterprise Edition when OLAP is not installed.


SOLUTION
The OLAP related warnings can be safely ignored for DataPump usage.


REFERENCES
BUG:17718899 - EXPDP ON STANDARD EDITION ONE SHOWS MESSAGES "OLAP NOT ENABLED"

ORA-39142: incompatible version number 5.1 in dump file


PROBLEM:

Exporting schemas/tables from a higher database version (12.2.0.1.0) and importing into a lower version (12.1.0.1.0) causes the following error:

ERROR:

Import: Release 12.1.0.2.0 - Production on Thu Jun 13 17:03:18 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-39142: incompatible version number 5.1 in dump file "/u01/expimp/schemas_name.dmp"


SOLUTION:

Set the parameter VERSION=12.1 in the export command; the 12.2 source database then writes a dump file that the 12.1 target can read, and the import completes without this error.

Source Database(12.2.0.1):

expdp system/****@prod schemas=schemas_name Version=12.1 directory=EXPDP dumpfile=schemas_name.dmp logfile=schemas_name.log

expdp system/****@PROD directory=EXPDP tables=table_name Version=12.1 dumpfile=schemas_name.dmp logfile=schemas_name.log

Target Database(12.1.0.1):

impdp system/****@prod  directory=EXPDP schemas=schemas_name dumpfile=schemas_name.dmp logfile=schemas_name.log

impdp system/****@PROD directory=EXPDP tables=table_name dumpfile=schemas_name.dmp logfile=schemas_name.log


APP-FND-01564: ORACLE error 1653 in submit others


CAUSE:

This typically occurs when writing a new record to the Oracle database and there is not enough space for it. The datafiles of the APPS_TS_TX_DATA tablespace should be set to AUTOEXTEND. If they are not, Oracle will not automatically increase the size of the datafile and will raise the above error.






SOLUTION:

If all the reports end in COMPLETE - NORMAL, the managers are working.

If not, follow the steps below.

Run a Concurrent Manager Recovery:

1. Stop the concurrent managers with adcmctl.sh.

2. If they do not stop, abort them:

adcmctl.sh abort apps/****

3. Kill any manager processes that remain.

4. Relink all (ADADMIN).

5. Retry Concurrent Manager Recovery.

Navigate:
Oracle Applications Manager > Concurrent Managers OR Concurrent Requests > Site Map > Diagnostics and Repair > Concurrent Manager Recovery.


OR



SOLUTION:

WARNING: This procedure should be performed by your Oracle database administrator.
The SYSTEM user must have the required privileges to perform this task.

Enable autoextend on the APPS_TS_TX_DATA tablespace by executing the following SQL command:

SQL > ALTER DATABASE DATAFILE '/u01/ORACLE/ORADATA/APPS_TS_TX_DATA.ORA' AUTOEXTEND ON NEXT 3072K;
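The current autoextend settings can be verified first, and a MAXSIZE cap added if unlimited growth is not wanted (a hedged example using this article's datafile path; adjust sizes to your environment):

SQL > SELECT file_name, autoextensible, bytes/1024/1024 AS size_mb
        FROM dba_data_files
       WHERE tablespace_name = 'APPS_TS_TX_DATA';

SQL > ALTER DATABASE DATAFILE '/u01/ORACLE/ORADATA/APPS_TS_TX_DATA.ORA' AUTOEXTEND ON NEXT 3072K MAXSIZE UNLIMITED;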



ORA-19588: archived log RECID 1003 STAMP 2001986 is no longer valid


Error in RMAN backup log

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on d1 channel at 03/04/2018 23:16:26
ORA-19588: archived log RECID 1003 STAMP 2001986 is no longer valid

Recovery Manager complete.



SOLUTION:

This is not an error but a warning. It means the archived log file this process was trying to back up had already been backed up by another process and then deleted.

This warning mostly has two causes.

1. Two backups, such as one full backup and one archivelog backup, are running in parallel. If the archivelog backup job tries to back up archives that were already backed up and deleted by the full backup job running at the same time, this warning appears.

So never schedule two backups at the same time.

2. Another reason can be a few invalid or deleted archived-log records in the control file.

To clean up invalid records, crosscheck the archive log:



RMAN> CROSSCHECK ARCHIVELOG ALL;

RMAN> DELETE EXPIRED ARCHIVELOG ALL;


It is recommended to use delete input option while backing up archivelog.


RMAN> backup archivelog until time 'sysdate -1' delete input;


If we want to take a backup of all archivelogs, then:

RMAN> backup archivelog all delete input ;
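Alternatively, the clash between concurrent jobs can be reduced by skipping archivelogs that already have a backup (standard RMAN syntax, offered here as a suggestion beyond the original note):

RMAN> backup archivelog all not backed up 1 times delete input;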

Autoconfig Fails While Running Scripts "afdbprf.sh" and "adcrobj.sh" with 12.1 database

AutoConfig errors after upgrading the database Oracle home

While implementing and running AutoConfig on the newly upgraded database home, the following errors appear:

[AutoConfig Error Report]
The following report lists errors AutoConfig encountered during each
phase of its execution.  Errors are grouped by directory and phase.
The report format is:
         


  [PROFILE PHASE]
  AutoConfig could not successfully execute the following scripts:
    Directory: /u01/oracle/sonu/db/tech_st/11.1.0/appsutil/install/sonu_visr12
      afdbprf.sh              INSTE8_PRF         1


  [APPLY PHASE]
  AutoConfig could not successfully execute the following scripts:
    Directory: /u01/oracle/sonu/db/tech_st/11.1.0/appsutil/install/sonu_visr12
      adcrobj.sh              INSTE8_APPLY       1




AutoConfig is exiting with status 2

On checking $ORACLE_HOME/oracore/zoneinfo, the files timezone.dat and timezlrg.dat are not present ...

Cause
Multiple timezlrg_*.dat and timezone_*.dat files are present, so a choice of timezone versions is available, but the default timezone.dat and timezlrg.dat links are missing.

Solution
To resolve the problem, create links to one of the timezone_nn.dat files. Unless you have a requirement for a specific version, choose the latest ...

[dba@sonu bin]$ cd $ORACLE_HOME/oracore/zoneinfo

[dba@sonu zoneinfo]$ ln -s timezone_18.dat timezone.dat
[dba@sonu zoneinfo]$ ln -s timezlrg_18.dat timezlrg.dat
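The links can be verified before re-running AutoConfig:

[dba@sonu zoneinfo]$ ls -l timezone.dat timezlrg.dat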

After the above steps, the script was run again and completed successfully.


[dba@sonu bin]$ perl adconfig.pl

Enter the full file path to the Context file: /u01/db/tech_st/12.1.0.2/appsutil/sonu.xml


Enter the APPS user password:

Using Context file          : /u01/db/tech_st/12.1.0.2/appsutil/sonu.xml


Context Value Management will now update the Context file


        Updating Context file...COMPLETED


        Attempting upload of Context file and templates to database...COMPLETED


Updating rdbms version in Context file to db112
Updating rdbms type in Context file to 32 bits
Configuring templates from ORACLE_HOME ...

AutoConfig completed successfully.

ORA-01804: failure to initialize timezone information

ERROR:-
ORA-01804: failure to initialize timezone information


SP2-0152: ORACLE may not be functioning properly

Cause: -

The timezone information file was not properly read.

Solution:-

1. Set the ORA_TZFILE environment variable:


$ export ORA_TZFILE=/home/oracle/app/oracle/product/12.1.0/oracore/zoneinfo/timezlrg_18.dat


[ora@sujeet ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Mar 6 10:27:42 2019

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL>

NID-00135: There are number active threads.


SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
PROD      MOUNTED

[oracle@oracle sujeet]$ nid TARGET=SYS/sys123 DBNAME=PROD SETNAME=YES

DBNEWID: Release 11.1.0.7.0 - Production on Thu Apr 26 18:04:18 2018

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

Connected to database ACSEBSP (DBID=2199446376)

Connected to server version 11.1.0

NID-00135: There are 1 active threads


Change of database name failed during validation - database is intact.
DBNEWID - Completed with validation errors.

Solution:-

Action: Ensure that all threads are closed before retrying the operation. Start and open the database to perform crash recovery, then shut down with the NORMAL or IMMEDIATE options to close it cleanly. Finally, try running the utility again.


SQL> alter database open;

Database altered.

SQL> shutdown immediate
Database closed.Database dismounted.
ORACLE instance shut down.

SQL> startup mount
ORACLE instance started.


[oracle@oracle sujeet]$ nid TARGET=SYS/sys123 DBNAME=PROD SETNAME=YES

The issue should now be resolved.

REP-0004: Warning: Unable to open user preference file.

Enter Password:
REP-0004: Warning: Unable to open user preference file.

=======================================================================
Cause:

Your local (customized) Oracle Reports preference file could not be opened.
This is just a warning, so the product will continue to run even if this occurs.

Action:

You can take the following actions to rectify this error:

Case 1: Make sure the file prefs.ora is located in your "home" directory.

Case 2: Make sure that you have the privileges necessary to access the file. If you don't, change your privileges accordingly.

Case 3: Consult your operating system documentation or contact your system administrator.

Solution :-


Check whether you have a prefs.ora file in your application user's home directory:

[applmgr@sujeet scripts]$ locate prefs.ora
/u01/apps/tech_st/10.1.2/inventory/Templates/tools/admin/prefs.ora
/u01/apps/tech_st/10.1.2/tools/admin/prefs.ora
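If prefs.ora is missing from the home directory, the packaged template can be copied there (a hedged suggestion based on the paths found above):

[applmgr@sujeet scripts]$ cp /u01/apps/tech_st/10.1.2/tools/admin/prefs.ora $HOME/prefs.ora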

ORA-00059: Maximum Number Of DB_FILES Exceeded in 19C database

When adding a datafile to my 19c database, I hit the error below. SQL> alter tablespace DATA  add datafile '/u01/data/data15.dbf&#...