500 Internal Server Error in R12



R12: Troubleshooting 500 Internal Server Error in Oracle E-Business Suite [ID 813523.1]
Applies to:
Oracle Applications Technology Stack - Version: 12.0 to 12.0.6 - Release: 12.0 to 12.0
Generic UNIX
IBM IA64 AIX
Generic Linux
HP-UX PA-RISC (32-bit)
Oracle Solaris on x86 (32-bit)
Oracle Solaris on SPARC (64-bit)

Purpose

This troubleshooting guide assists in solving the various types of '500 Internal Server Error' issues that appear in the browser window while accessing Oracle E-Business Suite R12. Although the browser error is very generic, the underlying error recorded in the log files can differ. The guide therefore walks through several example error scenarios, relating the problem symptoms to the various solutions available for resolution.

Oracle Application Server 10.1.3 (OracleAS 10.1.3) is installed as part of Oracle Applications Release 12, and consists of the following components:

* Oracle Process Manager and Notification Server (OPMN)
* Oracle HTTP Server (Apache)
* Oracle Containers for J2EE (OC4J)
* Configured Application Modules
OACORE - Core Application Modules
OAFM - Oracle Transport Agent, MapViewer
FORMS - Forms (using Servlet Mode)

500 Internal Server Error

HTTP response status codes beginning with the digit "5" indicate cases in which the server is aware that it has encountered an error or is otherwise incapable of performing the request. Any client (e.g. a Web browser) goes through the following cycle when it communicates with a Web server:

* Obtain an IP address from the site's host name. This lookup (conversion of host name to IP address) is performed by domain name servers (DNS).
* Open an IP socket connection to that IP address.
* Write an HTTP request stream through that socket.
* Receive an HTTP response stream back from the Web server. This stream contains status codes whose values are determined by the HTTP protocol.
* Parse the response for the status code and other useful information.

The error occurs in the final step, when the client receives an HTTP status code that it recognises as '500'.
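The final step above, parsing the response for its status code, can be sketched in shell. This is a minimal illustration; the helper names below are not part of any Oracle tool:

```shell
#!/bin/sh
# Extract the numeric status code from an HTTP status line,
# e.g. "HTTP/1.1 500 Internal Server Error" -> "500".
parse_http_status() {
    # The code is the second whitespace-separated field of the status line.
    printf '%s\n' "$1" | awk '{print $2}'
}

# Classify a code the way a browser does: any 5xx means the
# server itself failed while handling the request.
is_server_error() {
    case "$1" in
        5[0-9][0-9]) return 0 ;;
        *)           return 1 ;;
    esac
}
```

For example, `parse_http_status "HTTP/1.1 500 Internal Server Error"` prints `500`, which `is_server_error` then classifies as a server-side failure.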

The error '500 Internal Server Error' is a very generic message. It can only be resolved by fixes on the Web server side; it is generally not a client-side problem. It is therefore important to locate and analyse the logs, which should give further information about the error.

Error Block:

Check each of the error blocks below for the complete error details. If one matches the error details observed on your instance, refer to the suggested solution.
Symptom:1
Browser Error : If you receive the following error while accessing an E-Business R12 instance

500 Internal ServerError
java.lang.NoClassDefFoundError
at oracle.apps.fnd.sso.AppsLoginRedirect.AppsSetting(AppsLoginRedirect.java:120)
at oracle.apps.fnd.sso.AppsLoginRedirect.init(AppsLoginRedirect.java:161)

Caused by: java.lang.Throwable: java.lang.RuntimeException:
oracle.apps.jtf.base.resources.FrameworkException: Cache not initialized
at oracle.apps.jtf.cache.CacheManager.registerCacheComponent(CacheManager.java:4691)
at oracle.apps.jtf.cache.CacheManager.registerCacheComponent(CacheManager.java:1731)
at oracle.apps.jtf.cache.CacheAdmin.registerComponentCacheNoDBSave(CacheAdmin.java:309)
at oracle.apps.fnd.cache.Cache.registerCache(Cache.java:159)
at oracle.apps.fnd.cache.Cache.initCache(Cache.java:123)
at oracle.apps.fnd.cache.Cache.(Cache.java:89)
at oracle.apps.fnd.cache.AppsCache.(AppsCache.java:86)
at oracle.apps.fnd.cache.AolCaches.getCache(AolCaches.java:155)
at oracle.apps.fnd.profiles.Profiles.(Profiles.java:247)
at java.lang.J9VMInternals.initializeImpl(Native Method)

Application.log:

Error initializing servlet
java.lang.NoClassDefFoundError
at oracle.apps.fnd.sso.AppsLoginRedirect.AppsSetting(AppsLoginRedirect.java:120)
at oracle.apps.fnd.sso.AppsLoginRedirect.init(AppsLoginRedirect.java:161)
at com.evermind[Oracle Containers for J2EE 10g (10.1.3.0.0) ].server.http.HttpApplication.loadServlet(HttpApplication.java:2231)
at com.evermind[Oracle Containers for J2EE 10g (10.1.3.0.0) ].server.http.HttpApplication.findServlet(HttpApplication.java:4617)
The above error has been observed both intermittently and, in some cases, consistently while accessing an E-Business R12 instance via the browser. In both cases the issue is with the port value for the Java Object Cache (JOC): either a port conflict at startup, or multiple instances with the JOC in distributed mode attempting to use the same port, which results in intermittent errors.

Solution:1
1. Choose a correct (or alternative) value for the $CONTEXT_FILE variable s_java_object_cache_port.
Make sure the system profile "JTF_DIST_CACHE_PORT" specifies the same port as the value of the s_java_object_cache_port variable in the context file. You can check the system profile value by running the following SQL:

select fnd_profile.value('JTF_DIST_CACHE_PORT') from dual;
2. Check that the GUEST password is set properly in the following places:
- DBC file under $INST_TOP/appl/fnd/12.0.0/secure
- System Profile option: 'Guest User Password'
select fnd_web_sec.validate_login('GUEST','') from dual ; should return 'Y'
- In context file by running command : grep -i guest $CONTEXT_FILE

For any issue with a Guest user password mismatch, refer to the following note:
Note 443353.1 -- How To Successfully Change The Guest Password In E-Business Suite 11.5.10 and R12
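The port check in step 1 above can be scripted. A minimal sketch, assuming the context file stores the port in an element carrying the oa_var="s_java_object_cache_port" attribute; the helper names and the comparison flow are illustrative, and the profile value would come from the SQL shown in step 1:

```shell
#!/bin/sh
# Pull the value of an oa_var entry out of an AutoConfig context file.
# The XML element name varies, so match on the oa_var attribute instead.
ctx_value() {   # usage: ctx_value <context_file> <oa_var_name>
    sed -n "s/.*oa_var=\"$2\"[^>]*>\([^<]*\)<.*/\1/p" "$1" | head -1
}

# Compare the context-file value with the JTF_DIST_CACHE_PORT profile
# value (obtained separately, e.g. from the SQL query above).
check_joc_port() {  # usage: check_joc_port <context_file> <profile_port>
    ctx=$(ctx_value "$1" s_java_object_cache_port)
    if [ "$ctx" = "$2" ]; then
        echo "OK: java object cache port $ctx matches profile"
    else
        echo "MISMATCH: context=$ctx profile=$2"
    fi
}
```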

Symptom:2
Browser Error : The following error occurs on the AIX platform
500 Internal Server Error
java.lang.NoClassDefFoundError: oracle.apps.fnd.profiles.Profiles (initialization failure)
at java.lang.J9VMInternals.initialize(J9VMInternals.java:123)
at oracle.apps.fnd.sso.AppsLoginRedirect.AppsSetting(AppsLoginRedirect.java:120)
at oracle.apps.fnd.sso.AppsLoginRedirect.init(AppsLoginRedirect.java:161)
at com.evermind[Oracle Containers for J2EE 10g (10.1.3.0.0) ].server.http.HttpApplication.loadServlet(HttpApplication.java:2231)

Application.log
Exception in static block of jtf.cache.CacheManager. Stack trace is:
oracle.apps.jtf.base.resources.FrameworkException: IAS Cache initialization failed. The Distributed
Caching System failed to initialize on port: 12345. The list of hosts in the distributed cac
hing system is: . The port 12345 should be free on each host running the JVMs. The default port 12345 can be overridden
using -Doracle.apps.jtf.cache.IASCacheProvidercacheProvider.port=
Java Object Cache issues of this kind are generally specific to the AIX platform. This failure was the result of the Applications JDK missing required java.security updates specific to the application server, combined with a port-specific code issue within cache.jar.

Solution:2
1. Apply both the following patches:

Patch <<5261515>> "AS10.1.3: PATCH OVER AIX5L JDK15"
Patch <<5946958>> "JOC DISTRIBUTED MODE FAILURE WHEN REGISTERING GROUP MEMBERS"
Unpublished Bug <<5968938>> has been filed to include 5946958 in OracleAS 10.1.3.4

For more details, you can refer to the following note:
Note 422766.1 -- Cache Initialization Failure on AIX platform

2. Errors of this kind can also be caused by a firewall between the concurrent manager node and the web node, or between the database node and the web node. Ensure the required firewall ports are open.

Alternative Solution

In case the above solutions do not solve the issue, the following steps can also be carried out after taking a proper backup of the system:
1. Remove any *.lock files in $ORA_CONFIG_HOME/10.1.3/j2ee//persistence/<group_default_group_/

2. Set the parameter LONG_RUNNING_JVM=false in $ORA_CONFIG_HOME/10.1.3/j2ee/oacore/config/oc4j.properties and test the result.

3. Ensure that all the middle tier services are running with the following command:

$ adopmnctl.sh status -l
You are running adopmnctl.sh version 120.4.12000000.3
Checking status of OPMN managed processes...

Processes in Instance: test9_celaixclu9.celaixclu9.us.oracle.com
---------------------------------+--------------------+---------+---------
ias-component | process-type | pid | status
---------------------------------+--------------------+---------+---------
OC4JGroup:OC4J | OC4J:oafm | 548916 | Alive
OC4JGroup:OC4J | OC4J:forms | 307358 | Alive
OC4JGroup:OC4J | OC4J:oacore | 581638 | Alive
HTTP_Server | HTTP_Server | 598042 | Alive


4. Ensure that the instance meets the operating system and software requirements specified in the Oracle E-Business Suite R12 installation documentation.

5. If the instance was created by cloning another instance, or any issue with the listener configuration is suspected, create a clean AutoConfig environment and rerun AutoConfig. Refer to the following note:

Note 391406.1 -- How to get a clean Autoconfig Environment

6. Clear the browser cache and the E-Business Suite instance cache.

7. Compile all JSP files after sourcing the environment. Run the commands:

cd $FND_TOP/patch/115/bin
ojspCompile.pl --compile --flush

8. If feasible, completely restart the instance: stop all middle tier services, the database services and the OS services, then restart them all.
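The service check in step 3 can be automated by scanning the adopmnctl.sh status output for OPMN-managed processes that are not Alive. A minimal sketch; the find_dead_processes helper is illustrative, not part of the AD utilities:

```shell
#!/bin/sh
# Scan `adopmnctl.sh status -l` output and report any OPMN-managed
# process whose status column is not "Alive".
find_dead_processes() {
    # Data rows are pipe-separated: component | process-type | pid | status.
    # The separator rows use '+', so only rows containing '|' are parsed.
    awk -F'|' '/\|/ && $1 !~ /ias-component/ {
        gsub(/^[ \t]+|[ \t]+$/, "", $2)   # trim the process-type field
        gsub(/^[ \t]+|[ \t]+$/, "", $4)   # trim the status field
        if ($4 != "" && $4 != "Alive") print $2 ": " $4
    }'
}
```

Piping `adopmnctl.sh status -l | find_dead_processes` would print one line per process that needs attention, and nothing when everything is Alive.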

When you log a service request, be sure to provide the following information:

1. Access the following URL and complete all tests. Upload the resulting output to the associated Metalink SR:

http://host.domain:port/OA_HTML/jsp/fnd/aoljtest.jsp

(Replace the host, domain and port values with your instance details.)

2. Upload the instance's debug log files after reproducing the issue. You may refer to the following note for the same:
Note 422419.1 -- R12 - How To Enable and Collect Debug for HTTP, OC4J and OPMN

**************************************************************************

Pfile vs SPfile

Pfile vs SPfile [ID 249664.1]






Until Oracle 8i, DBAs used a text file called the pfile (parameter file) to store the database initialization parameters.

The pfile is read at instance startup time to obtain specific instance characteristics. Any changes made to the pfile only take effect when the database is restarted.

However, parameters that were dynamically alterable could be changed using the appropriate ALTER SYSTEM or ALTER SESSION statement, which would take effect immediately.

Oracle9i introduced a new feature called the spfile (server parameter file). The spfile is a binary file that contains the same information as the old pfile.

The spfile is a server-side initialization parameter file; parameters stored in this file are persistent across database startups.

This makes all the changes made to the instance using the ALTER SYSTEM statement persistent. Oracle requires that you start an instance for the first time using the pfile and then create the spfile.

The server parameter file (also called SPFILE) is in a single location where all the necessary parameters are defined and stored. The defined parameter values are applicable for all the instances in the cluster.

The SPFILE permits dynamic changes without requiring you to bring down the instance.

You can still use the client side parameter file to manage parameter settings in Real Application Clusters; however, administrative convenience is sacrificed and the advantage of dynamic change is lost.

By default, if you do not specify PFILE in your STARTUP command, Oracle will use a server parameter file.


SERVER PARAMETER FILE ( SPFILE )
================================

A server parameter file is basically a repository for initialization parameters.

Initialization parameters stored in a SPFILE are persistent, meaning any parameter changes made while an instance is running can persist across instance shutdown and startup.

In this way, all the initialization parameters manually updated by ALTER SYSTEM SET commands become persistent.

It also provides a basis for the Oracle database server to self-tune.

Another advantage, particularly for multi-instance RAC systems, is that a single copy of the parameter file can be used by all instances. Even though a single file is used to specify parameters, it has different format styles to support both the common values for all instances, as well as the specific values for an individual instance.

A server parameter file is initially built from the traditional text initialization parameter file, using the create SPFILE statement. It is a binary file that cannot be browsed or edited with a text editor.

Oracle provides other interfaces for viewing and modifying parameter settings. At system startup, the default behavior of the STARTUP command is to read a SPFILE to obtain initialization parameter settings. If the STARTUP command doesn't have a PFILE clause, it reads the SPFILE from a location
specified by the operating system.

If you choose to use the traditional text initialization parameter file, you must specify the PFILE clause when issuing the STARTUP command.


SETTING THE SERVER PARAMETER FILE VALUES
=========================================

Use the SID designator to set instance-specific parameter values in the server parameter file.

For settings across the database, use a '*', and for a specific instance, set the prefix with SID as indicated below.

*.OPEN_CURSORS=400       # For database-wide setting
RACDB1.OPEN_CURSORS=800  # For the RACDB1 instance

Note that even though OPEN_CURSORS is set to 400 for all instances by the first entry, the value of 800 remains in effect for the SID 'RACDB1'.
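The precedence rule above can be illustrated with a toy resolver. This is not how Oracle reads its binary SPFILE; it just mimics the lookup order (SID-specific entry first, then the '*' entry) on a text representation, and the resolve_param helper is hypothetical:

```shell
#!/bin/sh
# Resolve a parameter for a given SID from SPFILE-style text entries:
# a SID-prefixed entry wins over the '*' (database-wide) entry.
resolve_param() {   # usage: resolve_param <sid> <param> <spfile_text>
    sid=$1; param=$2; text=$3
    # First look for an instance-specific entry, e.g. RACDB1.OPEN_CURSORS=800
    specific=$(printf '%s\n' "$text" | sed -n "s/^$sid\.$param=\([^ #]*\).*/\1/p" | head -1)
    if [ -n "$specific" ]; then
        printf '%s\n' "$specific"
    else
        # Fall back to the database-wide entry, e.g. *.OPEN_CURSORS=400
        printf '%s\n' "$text" | sed -n "s/^\*\.$param=\([^ #]*\).*/\1/p" | head -1
    fi
}
```

With the two entries above, the resolver returns 800 for RACDB1 and 400 for any other SID.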

Some initialization parameters are dynamic since they can be modified using the ALTER SESSION or ALTER SYSTEM statement while an instance is running. Use the following syntax to dynamically alter
initialization parameters:

ALTER SESSION SET parameter_name = value
ALTER SYSTEM SET parameter_name = value [DEFERRED]

Use the SET clause of the ALTER SYSTEM statement to set or change initialization parameter values. Additionally, the SCOPE clause specifies the scope of a change as described below:

SCOPE = SPFILE

(For both static and dynamic parameters, changes are recorded in the spfile, to be given effect in the next restart.)

SCOPE = MEMORY

(For dynamic parameters, changes are applied in memory only. No static parameter change is allowed.)

SCOPE = BOTH

(For dynamic parameters, the change is applied in both the server parameter file and memory. No static parameter change is allowed.)

For dynamic parameters, we can also specify the DEFERRED keyword. When specified, the change is effective only for future sessions.

HERE ARE A FEW EXAMPLES

===========================

The following statement affects all instances. However, the values are only effective for the currently running instances; they are not written to the binary SPFILE.

ALTER SYSTEM SET OPEN_CURSORS=500 SID='*' SCOPE=MEMORY;

The next statement resets the value for the instance 'RACDB1'.
At this point, the database-wide setting becomes effective for SID of RACDB1.

ALTER SYSTEM RESET OPEN_CURSORS SCOPE=SPFILE sid='RACDB1';

To reset a parameter to its default value throughout the cluster database, use the command:

ALTER SYSTEM RESET OPEN_CURSORS SCOPE=SPFILE sid='*';


CREATING A SERVER PARAMETER FILE

=================================

The server parameter file is initially created from a text initialization parameter file (init.ora).

It must be created prior to its use in the STARTUP command.
The create SPFILE statement is used to create a server parameter file.

The following example creates a server parameter file from an initialization parameter file.

CREATE SPFILE FROM PFILE='/u01/oracle/product/920/dbs/initRAC1.ora';

Below is another example that illustrates creating a server parameter file and supplying a name.

CREATE SPFILE='/u01/oracle/product/920/dbs/racdb_spfile.ora'
FROM PFILE='/u01/oracle/product/920/dbs/init.ora';

EXPORTING THE SERVER PARAMETER FILE
===================================

We can export the server parameter file to create a traditional text initialization parameter file.

This would be useful for:
1) Creating backups of the server parameter file.
2) For diagnostic purposes to list all of the parameter values currently used by an instance.
3) Modifying the server parameter file by first exporting it, editing the output file, and then recreating it.

The following example creates a text initialization parameter file from the server parameter file:

CREATE PFILE FROM SPFILE;

The example below creates a text initialization parameter file from a server parameter file, where the names of the files are specified:

CREATE PFILE='/u01/oracle/product/920/dbs/racdb_init.ora'
FROM SPFILE='/u01/oracle/product/dbs/racdb_spfile.ora';

Refer to the 'Oracle9i Database Reference' for all the parameters that can be changed with an ALTER SYSTEM command.


IS MY DATABASE USING SPFILE ?

=============================

Am I using spfile or pfile ?

The following query will let you know:

1) SQL> SELECT name,value FROM v$parameter WHERE name = 'spfile';

NAME VALUE
---------- --------------------------------------------------
spfile /fsys1/oracle/product/9.2.0/spfileTEST.ora


2) SQL> show parameter spfile;

The v$spparameter view
The contents of the SPFILE can be obtained from the V$SPPARAMETER view:

SQL> ALTER SYSTEM SET timed_statistics=FALSE SCOPE=SPFILE;
System altered.

SQL> SELECT name,value FROM v$parameter WHERE name='timed_statistics';

NAME VALUE
-------------------- ---------------------
timed_statistics TRUE

SQL> SELECT name,value FROM v$spparameter WHERE name='timed_statistics';

NAME VALUE
-------------------- ---------------------
timed_statistics FALSE 


*****************************************************************************

Unable To Start Concurrent Manager And Failing With Message Concurrent Manager cannot find error description for CONC-Get plsql file name

Unable To Start Concurrent Manager And Failing With Message Concurrent Manager cannot find error description for CONC-Get plsql file names [ID 1161386.1]

In this Document
Symptoms
Cause
Solution
This document is being delivered to you via Oracle Support's Rapid Visibility (RaV) process and therefore has not been subject to an independent technical review.
Applies to:
Oracle Application Object Library - Version: 11.5.10.2 and later [Release: 11.5.10 and later ]
Information in this document applies to any platform.
Symptoms
On R11.5.10.2, it is not possible to start the concurrent manager using adcmctl, but it is possible to start it using adstrtal. The ICM log file shows the following error:

Concurrent Manager cannot find error description for CONC-Get plsql file names

Running the cmclean script does not fix the issue. FNDLIBR processes show as defunct on the operating system, as follows:

oracle 27109 25564 0 15:07 ? 00:00:00 [FNDLIBR]
Cause
The package fnd_file was invalid.
This can be verified by checking the results for the following query:

select obj.owner,obj.object_type, obj.object_name , err.text
from dba_errors err, dba_objects obj
where err.name =obj.object_name
and err.owner =obj.owner
and obj.status ='INVALID'
order by 1,2;

Solution

Please execute the following to re-create the package FND_FILE:

1. Navigate to the directory containing the package files:

cd $FND_TOP/patch/115/sql/

2. Login to SqlPlus as apps, and run the following:

@AFCPPIOB.pls

***************************************************************************

How to check which Techstack patchsets have been applied on 11i or R12

How to check which Techstack patchsets have been applied on 11i or R12 ? [ID 390864.1]

In this Document
Goal
Solution
References

Applies to:
Application Install - Version: 11.5.10 to 12.1.2 - Release: to 12.1
Information in this document applies to any platform.
Goal
How to check which Techstack patchsets have been applied
Solution 
For Single Tier Release 11i :

SET head off Lines 120 pages 100
col n_patch format A65
col bug_number format A10
col patch_name format A10
spool atg_pf_ptch_level.txt
select ' atg_pf ' FROM dual;
/

select bug_number, decode(bug_number,
'3438354', '11i.ATG_PF.H'
,'4017300' ,'11i.ATG_PF.H.RUP1'
,'4125550' ,'11i.ATG_PF.H.RUP2'
,'4334965' ,'11i.ATG_PF.H RUP3'
,'4676589' ,'11i.ATG_PF.H RUP4'
,'5382500' ,'11i.ATG_PF.H RUP5 HELP'
,'5473858' ,'11i.ATG_PF.H.5'
,'5674941' ,'11i.ATG_PF.H RUP5 SSO Integrat'
,'5903765' ,'11i.ATG_PF.H RUP6'
,'6117031' ,'11i.ATG_PF.H RUP6 SSO 10g Integration'
,'6330890' ,'11i.ATG_PF.H RUP6 HELP'
) n_patch, last_update_date
FROM ad_bugs
WHERE bug_number
IN ( '3438354', '4017300', '4125550', '4334965', '4676589', '5382500', '5473858', '5674941', '5903765', '6117031', '6330890' );
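For a quick check outside SQL*Plus, the same bug-number-to-patchset mapping used in the DECODE above can be expressed in shell. The atg_patch_name helper is illustrative and covers only a subset of the patches listed:

```shell
#!/bin/sh
# Map an ATG_PF bug number (as found in AD_BUGS) to its
# 11i.ATG_PF.H patchset name, mirroring the DECODE in the query above.
atg_patch_name() {
    case "$1" in
        3438354) echo '11i.ATG_PF.H' ;;
        4017300) echo '11i.ATG_PF.H.RUP1' ;;
        4125550) echo '11i.ATG_PF.H.RUP2' ;;
        4334965) echo '11i.ATG_PF.H RUP3' ;;
        4676589) echo '11i.ATG_PF.H RUP4' ;;
        5473858) echo '11i.ATG_PF.H.5' ;;
        5903765) echo '11i.ATG_PF.H RUP6' ;;
        *)       echo "unknown bug $1" ;;
    esac
}
```

This could be used, for example, to translate a list of bug numbers spooled from the database into readable patchset names.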

For Multi Tier Release 11i :

set serveroutput on size 100000
DECLARE
TYPE p_patch_array_type is varray(100) of varchar2(10);
TYPE a_abstract_array_type is varray(100) of varchar2(60);
p_patchlist p_patch_array_type;
a_abstract a_abstract_array_type;
p_appltop_name VARCHAR2(50);
p_patch_status VARCHAR2(15);
p_appl_top_id NUMBER;

CURSOR alist_cursor IS
SELECT appl_top_id, name
FROM ad_appl_tops;

procedure println(msg in varchar2)
IS
BEGIN
dbms_output.put_line(msg);
END;

BEGIN
open alist_cursor;

p_patchlist := p_patch_array_type( '3438354'
,'4017300'
,'4125550'
,'4334965'
,'4676589'
,'5382500'
,'5473858'
,'5674941'
,'5903765'
,'6117031'
,'6330890'
);
a_abstract := a_abstract_array_type( '11i.ATG_PF.H'
,'11i.ATG_PF.H.RUP1'
,'11i.ATG_PF.H.RUP2'
,'11i.ATG_PF.H RUP3'
,'11i.ATG_PF.H RUP4'
,'11i.ATG_PF.H RUP5 HELP'
,'11i.ATG_PF.H.5'
,'11i.ATG_PF.H RUP5 SSO Integrat'
,'11i.ATG_PF.H RUP6'
,'11i.ATG_PF.H RUP6 SSO 10g Integration'
,'11i.ATG_PF.H RUP6 HELP'
);


LOOP
FETCH alist_cursor INTO p_appl_top_id, p_appltop_name;
EXIT WHEN alist_cursor%NOTFOUND;
IF p_appltop_name NOT IN ('GLOBAL','*PRESEEDED*')
THEN
println(p_appltop_name || ':');
for i in 1..p_patchlist.count
LOOP
p_patch_status := ad_patch.is_patch_applied('11i', p_appl_top_id, p_patchlist(i));
println('..Patch ' || a_abstract(i)
||' '||p_patchlist(i)||' was '||
p_patch_status);
END LOOP;
END IF;
println('.');
END LOOP;
close alist_cursor;
END;
/

For Single Tier Release 12 :


SET head off Lines 120 pages 100
col n_patch format A65
col bug_number format A10
col patch_name format A10
spool atg_pf_ptch_level.txt
select ' atg_pf ' FROM dual;
/

select bug_number, decode(bug_number,
'6272680', 'R12.ATG_PF.A.delta.4'
,'6077669', 'R12.ATG_PF.A.delta.3'
,'5917344', 'R12.ATG_PF.A.delta.2'
) n_patch, last_update_date
FROM ad_bugs
WHERE bug_number
IN ('6272680', '6077669', '5917344');


For Multi Tier Release R12 :

set serveroutput on size 100000
DECLARE
TYPE p_patch_array_type is varray(100) of varchar2(10);
TYPE a_abstract_array_type is varray(100) of varchar2(60);
p_patchlist p_patch_array_type;
a_abstract a_abstract_array_type;
p_appltop_name VARCHAR2(50);
p_patch_status VARCHAR2(15);
p_appl_top_id NUMBER;

CURSOR alist_cursor IS
SELECT appl_top_id, name
FROM ad_appl_tops;

procedure println(msg in varchar2)
IS
BEGIN
dbms_output.put_line(msg);
END;

BEGIN
open alist_cursor;

p_patchlist := p_patch_array_type( '6272680'
,'6077669'
,'5917344'
);
a_abstract := a_abstract_array_type( 'R12.ATG_PF.A.delta.4'
,'R12.ATG_PF.A.delta.3'
,'R12.ATG_PF.A.delta.2'
);


LOOP
FETCH alist_cursor INTO p_appl_top_id, p_appltop_name;
EXIT WHEN alist_cursor%NOTFOUND;
IF p_appltop_name NOT IN ('GLOBAL','*PRESEEDED*')
THEN
println(p_appltop_name || ':');
for i in 1..p_patchlist.count
LOOP
p_patch_status := ad_patch.is_patch_applied('R12', p_appl_top_id, p_patchlist(i));
println('..Patch ' || a_abstract(i)
||' '||p_patchlist(i)||' was '||
p_patch_status);
END LOOP;
END IF;
println('.');
END LOOP;
close alist_cursor;
END;
/

NB: For newly released patches (11i ATG RUP7, the new R12 patches, and so on), edit the SELECT script and add the patch number you wish to check.

******************************************************************************

Archiver Best Practices


INTRODUCTION:

Archiving provides the mechanism needed to back up the changes to the
database. The archive files are essential in providing the information
necessary to recover a database. However, as transaction rates increase, it
becomes harder to devise an efficient archiving strategy that does not impede
database performance while still meeting your MTTR (Mean Time to Recover)
service levels. This paper describes some of the best practices for tuning the
archiver and provides suggestions for preventing archiving outages.

ONLINE REDO LOG AND ARCHIVE FILE CONFIGURATION:


Since the archiver reads from the log files, the log files must also be
configured properly to tune archiving. Log files are primarily written by the
log writer (LGWR) and read by the archiver (ARCH) or any process doing
recovery. The disks are write intensive and at times read intensive, but there
is generally low concurrency. We suggest that log files be Oracle-multiplexed
or hardware-mirrored. They should never be placed on the same disks as the
archive files. Ideally, they should be located on their own set of disks,
separated from all other files. The members or mirrored files should be on
different disks and controllers to prevent any single point of failure and
to increase throughput. Due to their IO behavior and importance, log files
should ideally be on raw disks with a recommended RAID 0+1 configuration
(mirroring and fine-grain striping). Striping introduces parallelism in the
disk writes and can thus speed up sequential writes by increasing the
amount of data whose write completes in a quantum of time.

Archive files are always on ufs (UNIX file systems), ideally with a
RAID 0+1 configuration. Again, fine-grained striping is recommended whenever
the archives are on dedicated disks. Archives should always be separated from
the log files.

ARCHIVER STEPS:

Generically, the archiver will
1) read the control file to find any unarchived logs,
2) open the online redo log members to be read,
3) allocate redo log buffers (log_archive_buffers),
4) issue (async) reads of the online redo log (log_archive_buffer_size),
   usually an aioread call if supported by the operating system;
   alternates redo log members per buffer stream,
5) fill the redo log buffers,
6) issue (async) writes to the archive files (log_archive_buffer_size);
   creates the ufs archive file if not already created,
   first checks whether the buffer is full or at end of log,
   makes an aiowrite call if supported by the operating system,
7) update the control file with the new information;
   modifies the archive log link list and changes redo log statuses,
8) start the loop again.
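The control flow of the loop above can be mimicked with a toy script that treats files as "online logs" and copies across any that are missing from the archive destination. The real archiver reads the control file and redo buffers; this sketch only mirrors steps 1, 6 and 7, and the helper name and file layout are assumptions:

```shell
#!/bin/sh
# Toy illustration of the archiver loop: find "online logs" in the
# log directory that are not yet present in the archive directory,
# copy them across, and report each one as archived.
archive_pending_logs() {   # usage: archive_pending_logs <log_dir> <arch_dir>
    for log in "$1"/*.log; do
        [ -f "$log" ] || continue
        name=$(basename "$log")
        if [ ! -f "$2/$name" ]; then      # step 1: still unarchived?
            cp "$log" "$2/$name"          # steps 2-6: read and write
            echo "archived $name"         # step 7: record completion
        fi
    done
}
```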

In Oracle Parallel Server, the archiver can also prompt idle instances to
archive their logs to prevent any thread of redo from falling far behind. This
is often referred to as kicking an idle instance. It helps ensure that online
redo logs are archived from all enabled threads so that media recovery in
a standby database environment does not fall behind.
 
ARCHIVER TUNING:

When encountering archiver busy waits or checkpoint busy waits warnings in the
Alert file, one should:

1) evaluate the number and size of the online redo logs
Most often, increasing the size and number of online redo log
groups will give the archiver more time to catch up.
Adding more online logs does not help when the archiver simply
cannot keep up with LGWR, but it can help with bursts of redo
generation, since it gives ARCH more time to average its processing
rate over time.

2) evaluate the checkpoint interval and frequency
Possible actions include adding DBWR processes,
increasing db_block_checkpoint_batch, and reducing db_block_buffers.
Turning on or allowing async IO capabilities definitely helps
alleviate most DBWR inefficiencies.

3) consider adding multiple archiver processes
Creating 'alter system archive log all' scripts to spawn archive
processes at some fixed interval may be required. Once spawned,
these processes will assist the archiver in archiving any unarchived
log in that thread of redo. Once the work is complete, the temporary
processes will go away.

4) tune the archiver process
change log_archive_buffer_size (max 128 in some ports)
change log_archive_buffers (max 8 in some ports)
On many platforms, a patch needs to be applied to increase
these values. In some ports, 7.3.2 removes this limitation.

5) check operating system supportability of async IO
Async reads should help tremendously.
Async writes may help if OS supports aio on file systems.
Please check with your vendor if the current version of
your operating system supports async IO to file systems (ufs).

6) Check for system or IO contention.

Check queue lengths, CPU waits and usage, disk/channel/controller
level bottlenecks. Please check operating system manuals for the
appropriate commands to monitor system performance. For example,
some UNIX ports can use "sar -d" and "iostat" to identify disk
bottlenecks.

It is common in environments with intensive batch processing to
see ARCH fall behind LGWR. In those cases, you should review the above
suggestions. In many cases, increasing the number and/or size of log groups,
as well as spawning extra archive processes, is sufficient.

ARCHIVING STRATEGY:
There are three primary goals:
* Ensure that all online redo logs are archived and backed up
successfully.
* Prevent any archiver busy waits.
* Keep all archives on disk from last database backup to reduce
recovery time.

Ensure that all online redo logs are archived and backed up
-----------------------------------------------------------
To accomplish this first goal, one needs to monitor the database, the
archiver's progress (by looking at V$LOG and the archive trace files), the
archive destination, and the tape management procedure. You should never back
up a log until the ARC column in V$LOG is set to YES. Scripts can be written
that log into the database and query V$LOG to build the set of archives to
write out to tape. The tape management procedure should use checksums to
ensure that each archive file was successfully backed up. Error checking and a
good reporting tool are essential for detecting and resolving archiving and
tape backup/restore errors. In 7.3, Oracle provides a checksumming mechanism
when copying redo from the online redo log to the archive files. This new
init.ora parameter is called log_block_checksum.
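A script along these lines might filter a V$LOG listing down to the logs whose ARC column is YES before handing them to the tape backup. The column layout below is an assumption for illustration; a real script would drive this from SQL*Plus:

```shell
#!/bin/sh
# Filter a V$LOG-style listing (SEQUENCE# ARC STATUS) down to the
# sequences that are safe to back up, i.e. those with ARC = YES.
backupable_sequences() {
    # Skip the header row, keep the sequence number of each YES row.
    awk 'NR > 1 && $2 == "YES" { print $1 }'
}
```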

Having multiple members in a logfile group is also advisable. If there
are multiple members, all members in a group are used to perform the
archive process. Assuming there are three members, the first chunk
is read from one member and written to the archive while a second
chunk is read from the second member, then a third chunk from the third
member, then back to the first, and the process continues in round-robin
fashion. If a corruption is found in one of the members, it is validated
(read again) and, if the corruption still exists, reading from that member
is stopped and the rest are used.

This is one of the big benefits of using multiple members instead of
mirroring at o/s level. Oracle knows about the multiple members so it can
optimize on archiving, but it does not know about the mirrors. One other big
benefit with using multiple members is that a separate write is issued for
each member, so the odds of having a totally corrupted redo log are
diminished (corruption written to one mirror will usually propagate to
all other copies).

Note: All archives from all enabled threads need to be backed up. If you have
an idle instance, it will still create archive header files that are essential
for media recovery. This applies to Oracle Parallel Server only.

Prevent any archiver busy waits
-------------------------------
To prevent archiver busy waits, the archiver should be tuned by adjusting
log_archive_buffers and log_archive_buffer_size. The tuning tips described
earlier in the paper should be followed.

Keep all archives on disk from last database backup
---------------------------------------------------
Keeping all archives on disk from last database backup will reduce recovery
time by bypassing the time required to restore the archives from tape. This
may reduce MTTR (Mean Time to Recover) dramatically.

You may be able to achieve this by creating several archive destinations
or having one large archive destination. For example, let's assume you have
several archive destinations. Archiver is writing to archive DEST1. When
DEST1 fills up to a certain threshold, say 80% (enough room for two more
archive files), you can switch the archive destination by issuing the
command 'alter system archive log start to DEST2'. Archiver will then archive
the NEXT log in the new destination.
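A sketch of the destination switch described above, assuming /u03/arch2 is the second destination (this is the Oracle7-era 'archive log start to' form; on releases with multiple LOG_ARCHIVE_DEST_n parameters you would instead alter those parameters):

```sql
SQL> alter system archive log start to '/u03/arch2';
```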

Tape backups can occur in DEST1 while archiver is writing to DEST2. This
reduces IO contention for those disks. Furthermore, depending on the size of
the destination, you can optimally keep a large number of archives on disk.
Before switching back to DEST1, we can purge the archives in DEST1 that
have been successfully backed up to tape.

Some sites have one very large archive destination instead of several archive
destinations. Again, scripts are created to monitor and to log in to the
database to determine which Oracle archives to back up to tape. These archives
are backed up as soon as possible. A purging algorithm is produced to purge
only those files that have been successfully backed up to tape and with a
timestamp that is older than the beginning of the last successful Oracle hot
backup. Unfortunately, there may be some additional disk contention with this
plan due to the IO concurrency from the archive process(es) and tape backup
process(es).

ARCHIVER MONITORING:
Other best practices include monitoring the statuses of log files to check for
STALE or INVALID logs. If the logs remain STALE, then you should investigate
any possible media problems and relocate or recreate new members to maintain
the level of resiliency for the logs. STALE logs imply that there are missing
writes in this log. Oracle considers incomplete logs as STALE; so, you
get STALE logs after a shutdown abort or if the LGWR process simply cannot
write to that redo log member. Archiver can easily detect if there are
missing changes in the redo log member by verifying the correctness of the
redo log block. If the archiver detects a problem, it will switch to another
member searching for a sound set of redo log blocks. The archiver will never
complain if it can create a "good" archive file from the composite information
of all the online redo log members.

If archiver falls behind often, then one can spawn extra archiver processes.
We recommend monitoring V$LOG to alert and spawn extra archiver processes
whenever there are more than 2 logs that need archiving.
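The V$LOG check suggested above can be a simple query; the alert would fire when it returns more than two rows (filtering out the CURRENT log, which always shows ARCHIVED = 'NO'):

```sql
SQL> select group#, sequence#, status
     from v$log
     where archived = 'NO'
     and status != 'CURRENT';
```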

Note: There is an enhancement request, bug 260126, to allow multiple
archiver processes.

Checking the alert.log for archiver or checkpoint errors, archiver and
log writer background trace files for errors, and archive destination for lack
of free space are essential in catching most potential archiving related
problems.
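A minimal shell sketch of that alert-log check, assuming the log path is passed as an argument; the function name is an assumption, and it flags ORA- errors and archiver (ARCn) messages:

```shell
#!/bin/sh
# Print ORA- errors and archiver process messages found in an alert log.
# Usage: scan_alert_log /path/to/alert_SID.log
scan_alert_log() {
  grep -E 'ORA-[0-9]+|ARC[0-9]+:' "$1" || echo "no errors found"
}
```

A cron job running this against background_dump_dest can catch archiver stalls early.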

*********************************************************************************

Backup and Recovery Scenarios

Backup and Recovery Scenarios [ID 94114.1]

In this Document
Purpose
Instructions for the Reader
Troubleshooting Details
BACKUP SCENARIOS
a) Consistent backups
b) Inconsistent backups
c) Database Archive mode
d) Backup Methods
e) Incremental backups
f) Support scenarios

RECOVERY SCENARIOS
1. Online Block Recovery.
2. Thread Recovery.
3. Media Recovery.
Media Failure and Recovery in Noarchivelog Mode
Media Failure and Recovery in Archivelog Mode
a) Point in Time recovery:
b) Recovery without control file
c) Recovery of missing datafile with rollback segments
d) Recovery of missing datafile without undo segments
e) Recovery with missing online redo logs
f) Recovery with missing archived redo logs
g) Recovery with resetlogs option
h) Recovery with corrupted undo segments.
i) Recovery with System Clock change.
j) Recovery with missing System tablespace.
k) Media Recovery of offline tablespace
l) Recovery of Read-Only tablespaces

References

Applies to:

Oracle Server - Personal Edition - Version: 7.2.3.0 to 10.2.0.4 - Release: 7.2.3 to 10.2
Oracle Server - Enterprise Edition - Version: 7.3.4.5 to 10.2.0.4 [Release: 7.3.4 to 10.2]
Oracle Server - Standard Edition - Version: 7.2.2.0 to 10.2.0.4 [Release: 7.2.2 to 10.2]
Information in this document applies to any platform.
***Checked for relevance on 01-Mar-2011***
Purpose
Describe various Backup and Recovery Scenarios.
Instructions for the Reader
A Troubleshooting Guide is provided to assist in debugging a specific issue. When possible, diagnostic tools are included in the document to assist in troubleshooting.
Troubleshooting Details

BACKUP SCENARIOS

 
a) Consistent backups

A consistent backup means that all data files and control files are consistent to a point in time, i.e. they have the same SCN. This is the only valid method of backup when the database is in NOARCHIVELOG mode.
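One way to confirm the files agree on a single SCN, after mounting the restored copy (a sketch; both queries should return the same single value for a consistent backup):

```sql
SQL> select checkpoint_change# from v$database;
SQL> select distinct checkpoint_change# from v$datafile_header;
```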
b) Inconsistent backups
An inconsistent backup is possible only when the database is in ARCHIVELOG mode. You must apply redo logs to the data files in order to restore the database to a consistent state. Inconsistent backups can be taken using RMAN when the database is open.
Inconsistent backups can also be taken using other OS tools, provided the tablespace (or database) is put into backup mode first, e.g.:
SQL> alter tablespace data begin backup;
SQL> alter database begin backup; (version 10 and above only)
c) Database Archive mode

The database can run in either Archivelog mode or noarchivelog mode. When you first create the database, you specify if it is to be in Archivelog mode. Then in the init.ora file you set the parameter log_archive_start=true so that archiving will start automatically on startup.
If the database has not been created with Archivelog mode enabled, you can issue the command whilst the database is mounted, not open.

SQL> alter database archivelog;
SQL> archive log start
SQL> alter database open;
SQL> archive log list

This command will show you the log mode and if automatic archival is set.
d) Backup Methods

Essentially, there are two backup methods, hot and cold, also known as online and offline, respectively. A cold backup is one taken when the database is shut down. The database must be shut down cleanly. A hot backup is one taken while the database is running. Commands for a hot backup:

For non RMAN backups:

1. Have the database in archivelog mode (see above)
2. SQL> archive log list
--This will show what the oldest online log sequence is. As a precaution, always keep all archived log files starting from the oldest online log sequence.
3. SQL> Alter tablespace tablespace_name BEGIN BACKUP;
or SQL> alter database begin backup (for v10 and above).
4. --Using an OS command, backup the datafile(s) of this tablespace.
5. SQL> Alter tablespace tablespace_name END BACKUP;
--- repeat steps 3, 4, 5 for each tablespace.
or SQL> alter database end backup; (for version 10 and above)
6. SQL> archive log list
---do this again to obtain the current log sequence. You will want to make sure you have a copy of this redo log file.
7. So to force an archived log, issue
SQL> ALTER SYSTEM SWITCH LOGFILE
A better way to force this would be:
SQL> alter system archive log current;
8. SQL> archive log list
This is done again to check if the log file had been archived and to find the latest archived sequence number.
9. Backup all archived log files determined from steps 2 and 8.
10. Back up the control file:
SQL> Alter database backup controlfile to 'filename';

For RMAN backups:

see Note 397315.1 RMAN - Sample Backup Scripts 10g
or the appropriate RMAN documentation.
e) Incremental backups

These are backups that are taken on blocks that have been modified since the last backup. These are useful as they don't take as much space or time. There are two kinds of incremental backups: cumulative and noncumulative.

Cumulative incremental backups include all blocks that were changed since the last backup at a lower level. This one reduces the work during restoration as only one backup contains all the changed blocks.
Noncumulative only includes blocks that were changed since the previous backup at the same or lower level.

Using RMAN, you issue the command "backup incremental level n".

Oracle v9 and below RMAN will back up empty blocks; Oracle v10.2 RMAN will not back up empty blocks.
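For example, a weekly level 0 base with daily differentials and a mid-week cumulative might look like this (the scheduling is illustrative):

```
RMAN> backup incremental level 0 database;            # base backup
RMAN> backup incremental level 1 database;            # noncumulative: changes since last level 1 or 0
RMAN> backup incremental level 1 cumulative database; # cumulative: all changes since the level 0
```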

f) Support scenarios
When the database crashes, you now have a backup: you restore it and
then recover the database. Also, don't forget to take a backup of the control
file whenever there is a schema change.
RECOVERY SCENARIOS

Note: All online datafiles must be at the same point in time when completing recovery.

There are several kinds of recovery you can perform, depending on the type of failure and the kind of backup you have. Essentially, if you are not running in archive log mode, then you can only recover the cold backup of the database and you will lose any new data and changes made since that backup was taken. If, however, the database is in Archivelog mode you will be able to restore the database up to the time of failure. There are three basic types of recovery:

1. Online Block Recovery.
This is performed automatically by Oracle (PMON). It occurs when a process dies while changing a buffer. Oracle will reconstruct the buffer using the online redo logs and write it to disk.

2. Thread Recovery.
This is also performed automatically by Oracle. It occurs when an instance crashes while the database is open. Oracle applies all the redo changes in the thread that occurred since the last time the thread was checkpointed.

3. Media Recovery.
This is required when a data file is restored from backup. The checkpoint count in the restored data file does not match the checkpoint count in the control file.
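Files needing media recovery can be spotted by comparing the controlfile and file-header checkpoint SCNs; for example:

```sql
SQL> select d.file#, d.checkpoint_change# ctl_scn, h.checkpoint_change# hdr_scn
     from v$datafile d, v$datafile_header h
     where d.file# = h.file#
     and d.checkpoint_change# != h.checkpoint_change#;
```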

Now let's explain a little about Redo vs Undo.

Redo information is recorded so that all changes that took place can be repeated during recovery. Undo information is recorded so that changes made by the current transaction, but not yet committed, can be undone. The redo logs are used to roll forward the changes made, both committed and uncommitted. Then, from the undo segments, the undo information is used to roll back the uncommitted changes.
Media Failure and Recovery in Noarchivelog Mode

In this case, your only option is to restore a backup of your Oracle files. The files you need are all datafiles, and control files. You only need to restore the password file or parameter files if they are lost or are corrupted.
Media Failure and Recovery in Archivelog Mode

In this case, there are several kinds of recovery you can perform, depending on what has been lost. The three basic kinds of recovery are:

1. Recover database - here you use the recover database command and the database must be closed and mounted. Oracle will recover all datafiles that are online.

2. Recover tablespace - use the recover tablespace command. The database can be open but the tablespace must be offline.

3. Recover datafile - use the recover datafile command. The database can be open but the specified datafile must be offline.

Note: You must have all archived logs since the backup you restored from, or else you will not have a complete recovery.

a) Point in Time recovery:
A typical scenario is that you dropped a table at say noon, and want to recover it. You will have to restore the appropriate datafiles and do a point-in-time recovery to a time just before noon.

Note: you will lose any transactions that occurred after noon. After you have recovered until noon, you must open the database with resetlogs. This resets the log sequence numbers and protects the database from having the unapplied redo logs applied later.

The four incomplete recovery scenarios all work the same:

Recover database until time '1999-12-01:12:00:00';
Recover database until cancel; (you type in cancel to stop)
Recover database until change n;
Recover database until cancel using backup controlfile;

Note: When performing an incomplete recovery, the datafiles must be online. Do a select * from v$recover_file to find out if there are any files which are offline. If you were to perform a recovery on a database which has tablespaces offline, and they had not been taken offline in a normal state, you will lose them when you issue the open resetlogs command. This is because the data file needs recovery from a point before the resetlogs option was used.
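Putting the noon scenario together, a sketch of the full sequence (the timestamp and restore step are illustrative):

```sql
-- restore all datafiles from the backup taken before noon (OS copy or RMAN)
SQL> startup mount;
SQL> recover database until time '1999-12-01:11:55:00';
SQL> alter database open resetlogs;
```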

b) Recovery without control file
If you have lost the current control file, or the current control file is inconsistent with the files that you need to recover, you must either recover using a backup control file or create a new control file. You can also recreate the control file based on the current one using the 'alter database backup controlfile to trace' command, which writes a script you can run to create a new one. The 'recover database using backup controlfile' command must be used whenever the control file in use is not the current one. The database must then be opened with the
resetlogs option.
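A sketch of both approaches mentioned above:

```sql
-- while the database is healthy, dump a script that can recreate the controlfile:
SQL> alter database backup controlfile to trace;

-- after restoring a backup controlfile, recovery must name it explicitly:
SQL> startup mount;
SQL> recover database using backup controlfile;
SQL> alter database open resetlogs;
```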

c) Recovery of missing datafile with rollback segments
The tricky part here is if you are performing online recovery; otherwise you can just use the recover datafile command. If you are performing an online recovery, you will need to create a new undo tablespace for the instance to use. The old tablespace, once recovered and once any uncommitted transactions have rolled back, can then be dropped.

d) Recovery of missing datafile without undo segments
There are three ways to recover in this scenario, as mentioned above.
1. recover database;
2. recover datafile 'c:\orant\database\usr1orcl.ora';
3. recover tablespace user_data;

e) Recovery with missing online redo logs
Missing online redo logs means that somehow you have lost your redo logs before they had a chance to be archived. This means that crash recovery cannot be performed, so media recovery is required instead. All datafiles will need to be restored and rolled forward until the last available archived log file is applied. This is thus an incomplete recovery, and as such the recover
database command is necessary.

As always, when an incomplete recovery is performed, you must open the database with resetlogs.
Note: the best way to avoid this kind of a loss, is to mirror your online log files.

f) Recovery with missing archived redo logs
If your archives are missing, the only way to recover the database is to restore from your latest backup. You will have lost any committed transactions whose redo was recorded only in the missing archived logs. Again, this is why Oracle strongly suggests mirroring your online redo logs and duplicating copies of the archives.

g) Recovery with resetlogs option
The resetlogs option should be a last resort; however, as we have seen above, it may be required after an incomplete recovery (recovery using a backup control file, or a point-in-time recovery). It is imperative that you back up the database immediately after you have opened it with resetlogs. It is possible to recover through a resetlogs, and this is made easier in Oracle v10, but it is still easier
to restore from the backup taken after the resetlogs.

h) Recovery with corrupted undo segments.

If an undo segment is corrupted, and contains uncommitted system data you may not be able to open the database.

The best alternative in this situation is to recover the corrupt block using the RMAN blockrecover command; next best would be to restore the datafile from backup and do a complete recovery.

If a backup does not exist, and the database is able to open (i.e. the corruption affects a non-system object), the first step is to find out which object is causing the rollback to appear corrupted. If we can determine that, we can drop that object.

So, how do we find out if it's actually a bad object?

1. Make sure that all tablespaces are online and all datafiles are online. This can be checked via the v$recover_file view.

2. Put the following in the init.ora:
event = "10015 trace name context forever, level 10"

This event will generate a trace file that will reveal information about the transaction Oracle is trying to roll back and most importantly, what object Oracle is trying to apply the undo to.

Note: In Oracle v9 and above this information can be found in the alert log.

Stop and start the database.

3. Check in the directory that is specified by the user_dump_dest parameter (in the init.ora or show parameter command) for a trace file that was generated at startup time.

4. In the trace file, there should be a message similar to: error recovery tx(#,#) object #.

TX(#,#) refers to transaction information.
The object # is the same as the object_id in sys.dba_objects.

5. Use the following query to find out what object Oracle is trying to perform recovery on.

select owner, object_name, object_type, status
from dba_objects where object_id = <object # from the trace file>;

6. Drop the offending object so the undo can be released. An export or relying on a backup may be necessary to restore the object after the corrupted undo segment is released.

i) Recovery with System Clock change.
You can end up with duplicate timestamps in the datafiles when the system clock changes. This usually occurs when daylight saving time comes into or goes out of effect. In this case, rather than a point-in-time recovery, recover to a specific log sequence or SCN.

j) Recovery with missing System tablespace.
The only option is to restore from a backup.

k) Media Recovery of offline tablespace

When a tablespace is offline, you cannot recover its datafiles using the recover database command, because that command only recovers online datafiles. Since the tablespace is offline, Oracle treats its datafiles as offline as well, so even if you recover the database and roll forward, the datafiles in this tablespace will not be touched. Instead, you need to perform a recover tablespace command. Alternatively, you could restore the datafiles from a cold backup, mount the database, and select from the v$datafile view to see if any of the datafiles are offline. If they are, bring them online, and then you can perform a recover database command.
l) Recovery of Read-Only tablespaces

If you have a current control file, then recovery of read-only tablespaces is no different from recovering read-write files. The issues with read-only tablespaces arise if you have to use a backup control file. If the tablespace is in read-only mode and hasn't changed to read-write since the last backup, then you will be able to perform media recovery using a backup control file by taking the tablespace offline. The reason is that when you use a backup control file, you must open the database with resetlogs, and Oracle won't let you read files from before the resetlogs was done. However, there is an exception for read-only tablespaces: you will be able to bring the datafiles online after you have opened the database.


*************************************************************************

Sample Backup Scripts 10g

RMAN - Sample Backup Scripts 10g

RMAN - Sample Backup Scripts 10g [ID 397315.1]
Modified 08-JUN-2010 Type HOWTO Status PUBLISHED
In this Document
Goal
Solution 
Applies to:
Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 10.2.0.1 - Release: 10.1 to 10.2
Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 10.2.0.5.0 [Release: 10.1 to 10.2]
Information in this document applies to any platform.
Goal
 Audience: Novice RMAN users.
The following note provides a DBA with several RMAN sample backup scripts. The scripts are very basic and can be executed as shown in the examples.

Solution
RMAN - Sample Backup Scripts 10g
• Backup up Whole Database Backups with RMAN
• Backing up Individual Tablespaces with RMAN
• Backing up Individual Datafiles and Datafile Copies with RMAN
• Backing up Control Files with RMAN
• Backing up Server Parameter Files with RMAN
• Backing up Archived Redo Logs with RMAN
• Backing up the Whole database including archivelogs
=====================================================================================

Making Whole Database Backups with RMAN


You can perform whole database backups with the database mounted or open. To perform a whole database backup from the RMAN prompt the BACKUP DATABASE command can be used. The simplest form of the command requires no parameters, as shown in this example:

RMAN> backup database;
In the example above, no backup location was specified, meaning that the backups will automatically be placed in the Flash Recovery Area (FRA). If the FRA has not been set up, then all backups default to $ORACLE_HOME/dbs.

How to check if the FRA has been set up:

SQL> show parameter recovery_file_dest

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string /recovery_area
db_recovery_file_dest_size big integer 50G

If your FRA is not set up (i.e. the values are null), please refer to the following note for assistance in setting it up.

Note 305648.1 What is a Flash Recovery Area and how to configure it ?

If you wish to place your backup outside the FRA, then the following RMAN syntax may be used.

RMAN> backup database format '/backups/PROD/df_t%t_s%s_p%p';

• Backing Up Individual Tablespaces with RMAN

RMAN allows individual tablespaces to be backed up with the database open or mounted.

RMAN> backup tablespace SYSTEM, UNDOTBS, USERS;

• Backing Up Individual Datafiles and Datafile Copies with RMAN

The flexibility of being able to back up a single datafile is also available. As seen below, you are able to reference the datafile via the file# or the file name. Multiple datafiles can be backed up at a time.

RMAN> backup datafile 2;

RMAN> backup datafile 2 format '/backups/PROD/df_t%t_s%s_p%p';

RMAN> backup datafile 1,2,3,6,7,8;

RMAN> backup datafile '/oradata/system01.dbf';

• Backing Up the current controlfile & Spfile

The controlfile and spfile are backed up in similar ways. Whenever a full database backup is performed, the controlfile and spfile are backed up; in fact, whenever file#1 is backed up, these two files are backed up also.

It is also good practice to back up the controlfile, especially after tablespaces or datafiles have been added or deleted.

If you are not using an RMAN catalog, it is even more important that you frequently back up your controlfile. You can also configure another method of controlfile backup, referred to as 'autobackup of controlfile'.

Refer to the manual for more information regarding this feature.

RMAN> backup current controlfile;

RMAN> backup current controlfile format '/backups/PROD/df_t%t_s%s_p%p';

RMAN> backup spfile;

• Backing Up Archivelogs

It is important that archivelogs are backed up in a timely manner and correctly removed to ensure the file system does not fill up. Below are a few different examples. Option one backs up all archivelogs to the FRA or default location. Option two backs up all archivelogs generated between 30 and 7 days ago, and option three backs up archivelogs from log sequence number XXX until log sequence YYY and then deletes them; it also backs up the archivelogs to a specified location.

RMAN> backup archivelog all;

RMAN> backup archivelog from time 'sysdate-30' until time 'sysdate-7';

RMAN> backup archivelog from logseq=XXX until logseq=YYY delete input format '/backups/PROD/%d_archive_%T_%u_s%s_p%p';

• Backing up the Whole database including archivelogs

Below is an example of how the whole database can be backed up while at the same time backing up the archivelogs and purging them following a successful backup. The first example backs up to the FRA; if you wish to redirect the output, the second command shows how this is achieved.

RMAN> backup database plus archivelog delete input;

RMAN> backup database plus archivelog delete input format '/backups/PROD/df_t%t_s%s_p%p';


************************************************END**************************************************

How to find location of Install, Autoconfig, Patching , Clone and other logs in R12




How to find location of Install, Autoconfig, Patching , Clone and other logs in R12 [ID 804603.1]
--------------------------------------------------------------------------------
In this Document
Goal
Solution
--------------------------------------------------------------------------------
Applies to:
Oracle Applications Manager - Version: 12.0
Information in this document applies to any platform.

Goal
How to find location of Install, Autoconfig, Patching , Clone and other logs in R12

Solution
Log files are useful in troubleshooting issues in Oracle Applications.

Here is the list of Log file location in Oracle Applications for Startup/Shutdown, Cloning, Patching, DB & Apps Listener and various components in Apps R12/12i:

Note: The instance top ($INST_TOP) is a new directory added in R12 to keep the log files and startup/stop scripts for the application tier.

A. Startup/Shutdown Log files for Application Tier in R12
=========================================================
i) Startup/Shutdown error message text files like adapcctl.txt, adcmctl.txt… :

$INST_TOP/apps/$CONTEXT_NAME/logs/appl/admin/log

ii) Startup/Shutdown error message related to tech stack (10.1.2, 10.1.3 forms/reports/web) :

$INST_TOP/apps/$CONTEXT_NAME/logs/ora/ (10.1.2 & 10.1.3)
$INST_TOP/apps/$CONTEXT_NAME/logs/ora/10.1.3/Apache/error_log[timestamp]
$INST_TOP/apps/$CONTEXT_NAME/logs/ora/10.1.3/opmn/ (OC4J~…, oa*, opmn.log)
$INST_TOP/apps/$CONTEXT_NAME/logs/ora/10.1.2/network/ (listener log)
$INST_TOP/apps/$CONTEXT_NAME/logs/appl/conc/log (CM log files)
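Since the 10.1.3 Apache error_log carries a timestamp suffix, a small helper to find the most recent file in any of these log directories can save time. A generic sketch (the function name is an assumption; pass the directory, e.g. $INST_TOP/apps/$CONTEXT_NAME/logs/ora/10.1.3/Apache):

```shell
#!/bin/sh
# Print the name of the most recently modified file in a log directory.
# Usage: newest_log /path/to/log/dir
newest_log() {
  ls -1t "$1" | head -n 1
}
```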

B. Log files related to cloning in R12
=======================================
Preclone (adpreclone.pl) log files in source instance

i) Database Tier-$ORACLE_HOME/appsutil/log/$CONTEXT_NAME/(StageDBTier_

******************************************************************************


 

ORA-1403 FRM-40735

When-New-Form-Instance Trigger Raised Unhandled Exception ORA-1403 FRM-40735

When-New-Form-Instance Trigger Raised Unhandled Exception ORA-1403 FRM-40735 [ID 265657.1]

--------------------------------------------------------------------------------

Applies to:
Oracle Order Management - Version: 11.5.9 to 11.5.10.3 - Release: 11.5 to 11.5
Information in this document applies to any platform.
Form:OEXOEORD.FMB - Sales Orders
Checked for relevance on 21-Jun-2010
Symptoms
While trying to enter a new customer in the Sales Orders Form,
the following errors are received:
.
FRM-40735: WHEN-NEW-FORM-INSTANCE trigger unhandled exception.
ORA-1403.
ORA-1403: no data found.
.
When taken to the Add Customer Form, the following errors are received:

FRM-41084: Error getting Group Cell.
FRM-40105: Unable to resolve reference to item INLINE_ADDRESS..
FRM-41045: Cannot find item: invalid ID.
Note: You cannot pass the account number because account number auto-generation is enabled.
Value for account_number must be unique.
.
Sales Orders Form:

FRM-40212: Invalid value for field SOLD_TO_CONTACT.

Errors in the Forms Trace (.FRD log) :
.
FRM-40815: Variable GLOBAL.GHR_INSTALLATION_STATE does not exist.
FRM-40815: Variable GLOBAL.OPM_GML_INSTALLED1 does not exist.
FRM-40815: Variable GLOBAL.ADD_CUSTOMER_PROFILE does not exist.
FRM-40815: Variable GLOBAL.TELESALES_CALL_ADD_CUSTOMER does not exist.
FRM-40815: Variable GLOBAL.CONTACT_SEARCH does not exist.
FRM-40815: Variable GLOBAL.RECALCULATE_CHARGE does not exist.
FRM-40815: Variable GLOBAL.TELESALES_CALL_ADD_CUSTOMER does not exist.
FRM-40815: Variable GLOBAL.RECALCULATE_CHARGE does not exist.
Cause
Profile Option Setting
Solution
1. Set profile option 'Default Country' to local country (i.e.: United States).
2. Set profile option OM:Add Customer to 'All'.


Note: Some customers have observed that the above changes do not take effect unless the Defaulting Rule Generator has been run after resetting these profile options.
References
BUG:3228397 - ADD CUSTOMER FROM SALES ORDERS FORM FRM-40735 ORA-01403



CONCURRENT_REQUESTS is very high and should be purged

Health Check Alert: The number of records in the table FND_CONCURRENT_REQUESTS is very high and should be purged to avoid performance issues

Health Check Alert: The number of records in the table FND_CONCURRENT_REQUESTS is very high and should be purged to avoid performance issues [ID 1095625.1]
Modified 04-JAN-2011 Type REFERENCE Status PUBLISHED
In this Document
Purpose
Scope
Health Check Alert: The number of records in the table FND_CONCURRENT_REQUESTS is very high and should be purged to avoid performance issues
Description
Risk
Recommendation
References
Applies to:

Oracle Application Object Library - Version: ALL
Information in this document applies to any platform.
Health Check Category: Performance
Severity Level: Warning
Purpose

This document provides a quick reference explaining the following Health Check Alert:

The number of records in the table FND_CONCURRENT_REQUESTS is very high and should be purged to avoid performance issues
Scope

This document is intended for Database Administrators (DBA) / System Administrators / Application Managers.
Health Check Alert: The number of records in the table FND_CONCURRENT_REQUESTS is very high and should be purged to avoid performance issues

IMPORTANT

My Oracle Support provides a proactive health check that automatically detects and notifies you of the existence of this issue before it impacts your business. To leverage this proactive support capability, install the Oracle Configuration Manager (OCM). Information on installing the Oracle Configuration Manager can be found on the Collector tab of My Oracle Support. To view the full portfolio of health checks available, see Note 868955.1
Description

Checks the number of records in the table FND_CONCURRENT_REQUESTS for any performance issues at the concurrent manager tier.

Risk

There are 500000+ records in the FND_CONCURRENT_REQUESTS table, which can cause performance issues with the concurrent processing sub-system. We recommend you purge the eligible records.
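You can check the current count yourself; the usual remedy is scheduling the seeded 'Purge Concurrent Request and/or Manager Data' concurrent program. A quick check:

```sql
SQL> select count(*) from fnd_concurrent_requests;
```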

How to configure printers in EBS 11i/R12

Configure printers in EBS 11i/R12

Oracle E-Business Suite offers two printing solutions to handle all your printing requirements. For most printing needs, the Pasta Utility offers quick setup and easy maintenance. For additional flexibility, Oracle E-Business Suite allows you to define your own printer drivers and print styles.

We can summarize this configuration as follows:
1. Setup the printer at the OS level
2. Add a valid entry in the hosts file (Printer Name and the IP Address)
3. Login to System Administrator responsibility
4. Navigate to Install > Printer > Register
5. Define a new printer by entering the Printer Name you have set in the hosts file
6. Save
7. Bounce the Concurrent Manager
8. Submit any standard concurrent request
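Step 2 can be verified from the command line. A minimal sketch, where the function name and the sample printer name hp4250prt are assumptions (on a real server you would check /etc/hosts):

```shell
#!/bin/sh
# Check whether a printer name has an entry in a hosts file.
# Usage: printer_in_hosts <printer-name> <hosts-file>
printer_in_hosts() {
  if grep -wq "$1" "$2"; then
    echo "printer $1 found"
  else
    echo "printer $1 missing"
  fi
}
```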

**************************************************END*******************************************************

Unable to Create Entity Object from 10.1.3 Jdev


 Unable to Create Entity Object from 10.1.3 Jdev

While creating my first workspace in Oracle JDeveloper 10.1.3, I encountered the following error:

Error:
Unable to Create Entity Object from 10.1.3 Jdev (R12). The check boxes for the table and synonym are not enabled.

Solution:
Add all the BC4J libraries to the project to resolve this issue.

Right-click the project, open Properties, select Business Components, and press OK. This will resolve the issue; you can then create the EO without any problem.


Oracle Exceptions


Oracle/PLSQL: Exception Handling

Oracle has a standard set of exceptions already named as follows:

DUP_VAL_ON_INDEX (ORA-00001): You tried to execute an INSERT or UPDATE statement that has created a duplicate value in a field restricted by a unique index.
TIMEOUT_ON_RESOURCE (ORA-00051): You were waiting for a resource and you timed out.
TRANSACTION_BACKED_OUT (ORA-00061): The remote portion of a transaction has rolled back.
INVALID_CURSOR (ORA-01001): You tried to reference a cursor that does not yet exist. This may have happened because you executed a FETCH or CLOSE before OPENing the cursor.
NOT_LOGGED_ON (ORA-01012): You tried to execute a call to Oracle before logging in.
LOGIN_DENIED (ORA-01017): You tried to log into Oracle with an invalid username/password combination.
NO_DATA_FOUND (ORA-01403): You tried one of the following: (1) you executed a SELECT INTO statement and no rows were returned; (2) you referenced an uninitialized row in a table; (3) you read past the end of file with the UTL_FILE package.
TOO_MANY_ROWS (ORA-01422): You tried to execute a SELECT INTO statement and more than one row was returned.
ZERO_DIVIDE (ORA-01476): You tried to divide a number by zero.
INVALID_NUMBER (ORA-01722): You tried to execute an SQL statement that tried to convert a string to a number, but it was unsuccessful.
STORAGE_ERROR (ORA-06500): You ran out of memory or memory was corrupted.
PROGRAM_ERROR (ORA-06501): This is a generic "contact Oracle support" message because an internal problem was encountered.
VALUE_ERROR (ORA-06502): You tried to perform an operation and there was an error on a conversion, truncation, or invalid constraining of numeric or character data.
CURSOR_ALREADY_OPEN (ORA-06511): You tried to open a cursor that is already open.
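A short PL/SQL handler block showing how these named exceptions are caught; the lookup against the classic EMP demo table is only an illustration:

```sql
SET SERVEROUTPUT ON
DECLARE
  v_name VARCHAR2(30);
BEGIN
  -- illustrative lookup; empno 9999 is assumed not to exist
  SELECT ename INTO v_name FROM emp WHERE empno = 9999;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    dbms_output.put_line('No such employee.');
  WHEN TOO_MANY_ROWS THEN
    dbms_output.put_line('More than one employee matched.');
  WHEN OTHERS THEN
    dbms_output.put_line('Unexpected: ' || SQLERRM);
END;
/
```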




ORA-00059: Maximum Number Of DB_FILES Exceeded in 19C database

When I am adding datafile to my 19C database facing the below error. SQL> alter tablespace DATA  add datafile '/u01/data/data15.dbf...