Password Policies in Oracle Application




 I recently received the requirement that user passwords should follow the client's password security policy. Oracle EBS delivers a number of password policies which may fulfill the need in your organization, but to make passwords more secure you have to extend the policies.

Of course it's possible to implement Oracle Access Manager combined with Oracle Internet Directory to hook into Active Directory, but that combination will not be used (yet :-)) and so will not be discussed in this article.

Profile Options
A number of profile options relate to the seeded password policies.
  • Signon Password Case: This profile enables case sensitivity for passwords. Making passwords case sensitive makes them more secure, because combinations of upper- and lowercase characters can be used. When passwords are not case sensitive, the password entered by the user is uppercased and validated against the encrypted password stored with the user.
  • Signon Password Custom: This profile provides the name of the custom Java class that enables the use of custom, client-specific password policies.
  • Signon Password Failure Limit: This profile provides the number of login attempts a user can make. When the number of attempts exceeds this setting, the user's account is locked.
  • Signon Password Hard To Guess: Setting this profile to Yes enforces the following password policies:
    1) the password contains at least one letter AND at least one number
    2) the password does not contain the username
    3) the password does not contain any repeating characters
  • Signon Password Length: This profile gives the minimum length of a user password.
  • Signon Password No Reuse: This profile provides the number of days a user must wait before reusing an earlier used password.
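As an illustration, the Hard To Guess rules can be sketched in Python. This is a sketch only: the real checks run inside Oracle EBS, and "repeating characters" is interpreted here as two identical adjacent characters, which is an assumption.

```python
def is_hard_to_guess(password, username):
    """Sketch of the three 'Signon Password Hard To Guess' rules.

    Illustrative only: the actual validation is performed inside
    Oracle EBS. 'Repeating characters' is assumed here to mean two
    identical adjacent characters.
    """
    has_letter = any(c.isalpha() for c in password)
    has_digit = any(c.isdigit() for c in password)
    if not (has_letter and has_digit):
        return False                      # rule 1: a letter AND a number
    if username.lower() in password.lower():
        return False                      # rule 2: must not contain username
    if any(a == b for a, b in zip(password, password[1:])):
        return False                      # rule 3: no repeating characters
    return True
```

A password such as "aab123" fails rule 3 under this interpretation, while "Xy12ab" passes all three checks.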
User setting
By enabling the password expiration option on user accounts, users are forced to change their password after a certain number of days or after logging in successfully a number of times.
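The expiration rule amounts to a simple either/or check, sketched below. The function and parameter names are invented for illustration; EBS stores these settings on the user record.

```python
def password_expired(days_since_change, logins_since_change,
                     expire_days=None, expire_logins=None):
    """Sketch of the password-expiration rule: a password must be
    changed after a number of days OR a number of successful logins,
    whichever limit is configured. Names are hypothetical."""
    if expire_days is not None and days_since_change >= expire_days:
        return True
    if expire_logins is not None and logins_since_change >= expire_logins:
        return True
    return False
```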


    ASM Interview Questions and Answers



    What is ASM in Oracle?

    Oracle ASM is Oracle's volume manager, specially designed for Oracle database data. It has been available since Oracle Database 10g, and many improvements have been made in 11g Release 1 and Release 2.

    ASM offers support for Oracle RAC clusters without the requirement to install third-party software, such as cluster-aware volume managers or filesystems.

    ASM is shipped as part of the database server software (Enterprise and Standard editions) and does not cost extra money to run.

    ASM simplifies administration of Oracle-related files by allowing the administrator to reference disk groups rather than individual disks and files, which are managed by ASM.

    The ASM functionality is an extension of the Oracle Managed Files (OMF) functionality that also includes striping and mirroring to provide balanced and secure storage. The ASM functionality can be used in combination with existing raw and cooked file systems, along with OMF and manually managed files.


    Advantages of ASM in Oracle?
    • Provides automatic load balancing over all the available disks, thus reducing hot spots in the file system.
    • Prevents fragmentation of disks, so you don't need to manually relocate data to tune I/O performance.
    • Adding disks is straightforward: ASM automatically performs online disk reorganization when you add or remove storage.
    • Uses redundancy features available in intelligent storage arrays.
    • The storage system can store all types of database files.
    • Using disk groups makes configuration easier, as files are placed into disk groups.
    • ASM provides striping and mirroring (fine and coarse grain; see below).
    • ASM and non-ASM Oracle files can coexist.

    Striping—ASM spreads data evenly across all disks in a disk group to optimize performance and utilization. This even distribution of database files eliminates the need for regular monitoring and I/O performance tuning.

    For example, if there are six disks in a disk group, pieces of each ASM file are written to all six disks. These pieces come in 1 MB chunks known as extents. When a database file is created, it is striped (divided into extents and distributed) across the six disks, and allocated disk space on all six disks grows evenly. When reading the file, file extents are read from all six disks in parallel, greatly increasing performance.
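The round-robin extent layout described above can be sketched as follows. This is illustrative only; real ASM allocation is considerably more sophisticated.

```python
def stripe_extents(file_size_mb, num_disks):
    """Distribute 1 MB extents round-robin across the disks of a
    disk group. Illustrative sketch of coarse striping only."""
    disks = {d: [] for d in range(num_disks)}
    for extent in range(file_size_mb):      # one 1 MB extent per MB of file
        disks[extent % num_disks].append(extent)
    return disks

# A 600 MB file on a six-disk group: each disk ends up holding 100 extents,
# so space grows evenly and reads can be serviced by all six disks.
layout = stripe_extents(600, 6)
```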

    Mirroring—ASM can increase availability by optionally mirroring any file. ASM mirrors at the file level, unlike operating system mirroring, which mirrors at the disk level. Mirroring means keeping redundant copies, or mirrored copies, of each extent of the file, to help avoid data loss caused by disk failures. The mirrored copy of each file extent is always kept on a different disk from the original copy. If a disk fails, ASM can continue to access affected files by accessing mirrored copies on the surviving disks in the disk group.

    ASM supports 2-way mirroring, where each file extent gets one mirrored copy, and 3-way mirroring, where each file extent gets two mirrored copies.

    Online storage reconfiguration and dynamic rebalancing—ASM permits you to add or remove disks from your disk storage system while the database is operating. When you add a disk, ASM automatically redistributes the data so that it is evenly spread across all disks in the disk group, including the new disk. This redistribution is known as rebalancing. It is done in the background and with minimal impact to database performance. When you request to remove a disk, ASM first rebalances by evenly relocating all file extents from the disk being removed to the other disks in the disk group.
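The rebalancing idea, moving just enough extents so that a newly added disk carries its share, can be sketched like this. It is illustrative only and is not ASM's actual algorithm.

```python
def rebalance(disks, new_disk):
    """Relocate extents from existing disks onto a newly added,
    empty disk until the group is evenly spread again.
    Illustrative sketch only, not ASM's real rebalance algorithm."""
    disks = {name: list(extents) for name, extents in disks.items()}
    disks[new_disk] = []
    target = sum(len(e) for e in disks.values()) // len(disks)
    for name, extents in disks.items():
        while name != new_disk and len(extents) > target:
            disks[new_disk].append(extents.pop())  # move one extent
    return disks

# Two disks holding 90 extents each; after adding 'd3' and rebalancing,
# all three disks hold 60 extents.
group = rebalance({"d1": list(range(0, 90)),
                   "d2": list(range(90, 180))}, "d3")
```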

    Managed file creation and deletion—ASM further reduces administration tasks by enabling files stored in ASM disk groups to be Oracle-managed files. ASM automatically assigns filenames when files are created, and automatically deletes files when they are no longer needed.

    What is an ASM instance in Oracle?
    The ASM functionality is controlled by an ASM instance. This is not a full database instance, just the memory structures, and as such it is very small and lightweight.

    Characteristics of an Oracle ASM instance
    --------------------------------------
    1. Does not have a controlfile, datafiles, or online redo logs.
    2. Does have an init.ora and a password file.
    3. For connecting remotely, create a password file and set the following in init.ora:
       remote_login_passwordfile=exclusive
       To create the password file:
       $ORACLE_HOME/bin/orapwd file=orapw+ASM1 password=yourpw entries=10
    4. An ASM instance cannot be in OPEN status, as there are no datafiles. It can be in MOUNT (although there is no controlfile) and NOMOUNT status. When in MOUNT status, databases can use the disk groups; MOUNT here actually means mounting the disk groups.


    What are ASM Background Processes in Oracle?
    Both an Oracle ASM instance and an Oracle Database instance are built on the same technology. Like a database instance, an Oracle ASM instance has memory structures (a System Global Area) and background processes. In addition, Oracle ASM has a minimal performance impact on a server. Rather than mounting a database, Oracle ASM instances mount disk groups to make Oracle ASM files available to database instances.

    There are at least two new background processes added for an ASM instance:

    ASM Instance Background Processes:
    ---------------------------------
    ARBx (ASM rebalance worker processes)
    ARBn performs the actual rebalance data extent movements in an Automatic Storage Management instance. There can be many of these processes running at a time, named ARB0, ARB1, and so on. These processes are managed by the RBAL process. The number of ARBx processes invoked is directly influenced by the ASM_POWER_LIMIT parameter.

    RBAL (Rebalancer)
    RBAL runs in both database and ASM instances. In the database instance, it performs a global open of the ASM disks. In an ASM instance, it coordinates rebalance activity for the disk resources controlled by ASM.

    Database Instance ASM Background Processes:
    ------------------------------------------
    In the database instances, there are three background processes to support ASM, namely:

    ASMB: this process contacts CSS using the disk group name and acquires the associated ASM connect string. The connect string is subsequently used to connect to the ASM instance.

    RBAL: performs global opens on all disks in the disk group. A global open means that more than one database instance can access the ASM disks at a time.

    O00x: a group of slave processes, with a numeric sequence starting at 000.

    What are the main components of ASM?
    The main components of ASM are disk groups, each of which comprises several physical disks that are controlled as a single unit. The physical disks are known as ASM disks, while the files that reside on the disks are known as ASM files. The locations and names of the files are controlled by ASM, but user-friendly aliases and directory structures can be defined for ease of reference.

    Failure groups are defined within a disk group to support the required level of redundancy. For two-way mirroring you would expect a disk group to contain two failure groups so individual files are written to two locations.

    What are ASM instance initialization parameters?
    INSTANCE_TYPE - Set to ASM or RDBMS depending on the instance type. The default is RDBMS.

    DB_UNIQUE_NAME - Specifies a globally unique name for the database. This defaults to +ASM but must be altered if you intend to run multiple ASM instances.

    ASM_POWER_LIMIT -The maximum power for a rebalancing operation on an ASM instance. The valid values range from 1 to 11, with 1 being the default. The higher the limit the more resources are allocated resulting in faster rebalancing operations. This value is also used as the default when the POWER clause is omitted from a rebalance operation.

    ASM_DISKGROUPS - The list of disk groups that should be mounted by an ASM instance during instance startup, or by the ALTER DISKGROUP ALL MOUNT statement. ASM configuration changes are automatically reflected in this parameter.

    ASM_DISKSTRING - Specifies a value that can be used to limit the disks considered for discovery. Altering the default value may improve the speed of disk group mount time and the speed of adding a disk to a disk group. Changing the parameter to a value which prevents the discovery of already mounted disks results in an error. The default value is NULL allowing all suitable disks to be considered.


    Why should we use separate ASM home?
    ASM should be installed separately from the database software in its own ORACLE_HOME directory. This will allow you the flexibility to patch and upgrade ASM and the database software independently.

    How many ASM instances should one have?

    Several databases can share a single ASM instance. So, although one can create multiple ASM instances on a single system, normal configurations should have one and only one ASM instance per system.

    For clustered systems, create one ASM instance per node (called +ASM1, +ASM2, etc).

    How many diskgroups should one have?
    Generally speaking, one should have only one disk group for all database files, and optionally a second one for recovery files (see the Fast Recovery Area, FRA).

    Data with different storage characteristics should be stored in different disk groups. Each disk group can have different redundancy (mirroring) settings (high, normal and external), different fail-groups, etc. However, it is generally not necessary to create many disk groups with the same storage characteristics (i.e. +DATA1, +DATA2, etc. all on the same type of disks).

    To get started, create 2 disk groups - one for data and one for recovery files. Here is an example:

    CREATE DISKGROUP data    EXTERNAL REDUNDANCY DISK '/dev/d1', '/dev/d2', '/dev/d3', ....;
    CREATE DISKGROUP recover EXTERNAL REDUNDANCY DISK '/dev/d10', '/dev/d11', '/dev/d12', ....;

    Here is an example of how you can enable automatic file management with such a setup:

    ALTER SYSTEM SET db_create_file_dest   = '+DATA' SCOPE=SPFILE;
    ALTER SYSTEM SET db_recovery_file_dest = '+RECOVER' SCOPE=SPFILE;

    You may also decide to introduce additional disk groups - for example, if you decide to put historic data on low cost disks, or if you want ASM to mirror critical data across 2 storage cabinets.

    What is ASM Rebalancing?

    Rebalancing is the redistribution of file extents so that they are evenly spread across all disks in a disk group, for example after disks are added or removed. The rebalancing speed is controlled by the ASM_POWER_LIMIT initialization parameter. Setting it to 0 will disable disk rebalancing.

    ALTER DISKGROUP data REBALANCE POWER 11;


    What happens when an Oracle ASM diskgroup is created?
    When an ASM diskgroup is created, a hierarchical filesystem structure is created.

    How does this filesystem structure appear?
    An Oracle ASM diskgroup's filesystem structure is similar to the UNIX or Windows filesystem hierarchy.

    Where are the Oracle ASM files stored?
    Oracle ASM files are stored within the Oracle ASM diskgroup. If we dig into the internals, Oracle ASM files are stored within the Oracle ASM filesystem structures.

    How are the Oracle ASM files stored within the Oracle ASM filesystem structure?
    Oracle ASM files are stored within the Oracle ASM filesystem structures as objects that RDBMS (Oracle database) instances access. An RDBMS instance treats Oracle ASM files as standard filesystem files.

    What are the Oracle ASM files that are stored within the Oracle ASM file hierarchy?
    Files stored in an Oracle ASM diskgroup / Oracle ASM file structures include:
    1) Datafiles
    2) Controlfiles
    3) Server parameter files (SPFILE)
    4) Redo log files

    What happens when you create a file/database file in ASM? What commands do you use to create database files?
    Some common commands used for creating database files are:
    1) Create tablespace
    2) Add datafile
    3) Add logfile
    For example:
    SQL> CREATE TABLESPACE TS1 DATAFILE '+DATA1' SIZE 10G;
    The above command creates a datafile in the DATA1 diskgroup.
     
    How can you access a database file in an ASM diskgroup from the RDBMS?

    Once the ASM file is created in the ASM diskgroup, a filename is generated. This file is then visible to the user via the standard RDBMS view V$DATAFILE.
     
    What is the syntax of ASM filenames?

    ASM filename syntax is as follows:
    +diskgroup_name/database_name/database_file_type/tag_name.file_number.incarnation_number
    where,
    +diskgroup_name - name of the diskgroup that contains this file
    database_name - name of the database that contains this file
    database_file_type - can be one among 20 different ASM file types (for example, DATAFILE)
    tag_name - corresponds to the tablespace name for datafiles and the group number for redo log files
    file_number - the file number in the ASM instance, used to correlate filenames in the database instance
    incarnation_number - derived from the timestamp; used to guarantee uniqueness
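The syntax above can be illustrated by splitting a fully qualified name into its parts. This is a sketch, and the example filename below is invented for illustration.

```python
def parse_asm_filename(name):
    """Split a fully qualified ASM filename of the form
    +diskgroup/database/file_type/tag.file_number.incarnation
    into its components. Sketch only; the sample name is made up."""
    diskgroup, database, file_type, tail = name.lstrip("+").split("/")
    tag, file_number, incarnation = tail.rsplit(".", 2)
    return {
        "diskgroup": diskgroup,
        "database": database,
        "file_type": file_type,
        "tag": tag,
        "file_number": int(file_number),
        "incarnation": int(incarnation),
    }

parts = parse_asm_filename("+DATA/orcl/datafile/users.259.685366091")
```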

    What is an incarnation number?
    An incarnation number is part of the ASM filename syntax. It is derived from the timestamp. Once a file is created, its incarnation number does not change.

    What is the use of an incarnation number in an Oracle ASM filename?
    The incarnation number distinguishes a newly created file from a deleted file that used the same file number.

    ASM's SPFILE resides inside ASM itself. This can be found out in a number of ways, for example by looking at the alert log of the ASM instance when it starts:
    Machine:        x86_64
    Using parameter settings in server-side spfile +DATA/asm/asmparameterfile/registry.253.766260991
    System parameters with non-default values:
      large_pool_size          = 12M
      instance_type            = "asm"
      remote_login_passwordfile= "EXCLUSIVE"
      asm_diskgroups           = "FLASH"
      asm_diskgroups           = "DATA"
      asm_power_limit          = 1
      diagnostic_dest          = "/opt/app/oracle"
    Or by using asmcmd's spget command, which shows the SPFILE location registered in the GPnP profile:
    ASMCMD> spget
    +DATA/asm/asmparameterfile/registry.253.766260991

    How to connect a client to the server in Oracle

    Quick Start to Oracle Net Connections

    This section is designed to help novice users set up and test a simple but common configuration: a connection between a client application and a database over a TCP/IP network.

    Prerequisites to Establishing Connectivity

    The tasks in this quick start guide show a TCP/IP connection between a client computer and a database server. The following is assumed about the database server and client computers:
    • Database Server Computer
      • It is running on the same network as the client.
      • An Oracle database is installed.
      • TCP/IP protocol support is installed.
      • A listener is configured.
    • Client Computer
      • It is running on the same network as the database server.
      • Oracle Client is installed.
      • TCP/IP protocol support is installed.

    Step 1

    Confirm Network Availability

    Before using Oracle Net to connect a client computer to a database server, confirm that the client computer can successfully communicate with the database server computer. Evaluating network connectivity can eliminate network-based errors.
    To confirm network connectivity:
    1. Confirm that the database server computer can communicate with itself with a loop-back test.
      A loopback test is a connection from the database server back to itself. Many network protocols provide a means of testing network connections. For TCP/IP networks, the PING utility can be used.
      In a TCP/IP network, each computer has a unique IP Address. A name resolution service, such as Domain Name System, can be used to map the IP address of a computer with its host name. If a name resolution service is not used, then the mapping is typically stored in a centrally maintained file called hosts. This file is located in the /etc directory on UNIX and the \winnt directory on Windows.

      For example, an entry for a database server computer named sales-server may look like the following:

      #IP address of server     host name       alias
      144.25.186.203            sales-server    sales.us.acme.com
       
      To use PING, enter the following at the command line:
      ping database_server_host
      
      
      The database_server_host is the host name of the database server computer. For example:
      ping sales-server
      
      
      If the loopback was unsuccessful, try using the IP address of the database server. For example:
      ping 144.25.186.203
      
      
    2. Verify the client computer can successfully communicate with the database server computer.
      This varies according to the network protocol. For TCP/IP, you can use PING, FTP or TELNET utilities. If the client computer cannot reach the server, verify that the network cabling and network interface cards are correctly connected. Contact your network administrator to correct these problems.
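As a rough stand-in for the TELNET-style check, a TCP-level probe of the listener port can be sketched in Python. This tests the transport only, not Oracle Net itself; 1521 is the standard default listener port.

```python
import socket

def can_reach(host, port, timeout=3.0):
    """Return True if a plain TCP connection to host:port succeeds.
    Illustrative reachability check only; it proves the network path
    and that something is listening, not that Oracle Net works."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. can_reach("sales-server", 1521) for the default listener port
```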

    Step 2


    Start the Listener and the Database Server

    The listener and database server must be running in order for the database server to receive connections.
    1. Start the listener with the Listener Control utility. From the command line, enter:
      lsnrctl
      LSNRCTL> START [listener_name]
      
      
      where listener_name is the name of the listener defined in the listener.ora file. It is not necessary to identify the listener if you are using the default listener, named LISTENER.
      A status message indicating that the listener has started successfully is displayed.
    2. Start the database:
      1. Start SQL*Plus without connecting to the database:
        sqlplus /nolog
        
        
      2. Connect to the database as SYSDBA:
        SQL> CONNECT username/password as sysdba
        
        
        For example, SYSTEM/MANAGER is a SYSDBA user.
      3. Enter the STARTUP command, specifying the database name and full path of the parameter file:
        SQL> STARTUP database_name pfile=file
        
        
        If you do not specify the PFILE option, the Oracle database uses the standard initialization parameter file located in the $ORACLE_BASE/admin/db_name/pfile/sid directory on UNIX platforms, and ORACLE_BASE\admin\db_name\pfile\sid directory on Windows. If you do not specify a database name, then the database uses the value of the DB_NAME parameter specified in the initialization parameter file.

    3. Confirm that database service registration with the listener has completed. From the Listener Control utility, enter:
      LSNRCTL> SERVICES [listener_name]
      
      
      The SERVICES command lists the services supported by the database, along with at least one available service handler.


    Step 3

    Configure the Client for Connection to a Database

    Once network connectivity has been verified, you can use easy connect naming to connect to the database.
    NOTE:
    Oracle Database 10g does not support the use of Oracle Names. Neither Oracle Database 10g clients nor Oracle Databases can use Oracle Names, including by LDAP proxy, to resolve naming. Oracle8i and Oracle9i clients can still use Oracle Names to resolve naming for an Oracle Database 10g database; however, customers are strongly recommended to migrate to LDAP to take advantage of the new features of Oracle Database 10g.
    The easy connect naming method can eliminate the need for service name lookup in the tnsnames.ora files for TCP/IP environments. This naming method provides out-of-the-box TCP/IP connectivity to databases. It extends the functionality of the host naming method by enabling clients to connect to a database server with an optional port and service name in addition to the host name of the database.
    CONNECT username/password@host[:port][/service_name][/instance_name]
    where host is the host name of the database server computer, port is the listener port (this defaults to 1521 and can usually be omitted), service_name is the service name of the database, and instance_name identifies a particular instance.

    If you have performed an Oracle Database server install in Typical mode, the default service name used by the Oracle instance is ORCL, and the following easy connect syntax can be used to connect to that instance:
    CONNECT username/password@host/ORCL
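Assembling such an easy connect string can be sketched with a small helper. This function is hypothetical, written for illustration only, and is not part of any Oracle client library.

```python
def easy_connect(user, password, host, port=None, service=None):
    """Assemble an easy-connect string of the form
    user/password@host[:port][/service]. Hypothetical helper for
    illustration; not an Oracle client API."""
    target = host
    if port is not None:
        target += ":%d" % port            # optional listener port
    if service is not None:
        target += "/" + service           # optional service name
    return "%s/%s@%s" % (user, password, target)

# e.g. easy_connect("scott", "tiger", "sales-server", service="ORCL")
```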
    

    Alternate Connection using Oracle Net Configuration Assistant

    If you do not wish to use the easy connect naming method, you can use Oracle Net Configuration Assistant to create a net service name, a simple name for the database service. The net service name resolves to the connect descriptor, that is, the network address of the database and the name of the database service. The client will use the net service name to connect to the database.
    The following example shows the net service name sales mapped to a connect descriptor for a database called sales.us.acme.com. A client can use sales mapped to connect to sales.us.acme.com.
    sales= 
     (DESCRIPTION= 
       (ADDRESS=(PROTOCOL=tcp)(HOST=sales-server)(PORT=1521))
       (CONNECT_DATA=
         (SERVICE_NAME=sales.us.acme.com)))
    
    
    To configure a client with a net service name:
    1. Start Net Configuration Assistant.
      The Welcome page appears.
    2. Select Local Net Service Name Configuration.
    3. Click Next.
      The Net Service Name Configuration page appears.
    4. Click Add, and then click Next.

      The Net Service Name Configuration, Database Version page appears:
    1. If the destination service is an Oracle9i or Oracle8i database, then select Oracle8i or later database or service. If the destination service is an Oracle8 release 8.0 database, then select Oracle8 release 8.0 database or service.
    2. Click Next.
      The Net Service Name Configuration, Service Name page appears.
    3. Enter the name that identifies the database.
      The service name is defined during database creation. If you are unsure what the service name is, ask the Database Administrator who created the database.
    4. Click Next.
      The Net Service Name Configuration, Select Protocols page appears.
    5. Select the protocol on which the listener is configured to listen. Note that this protocol must also be installed on the client. The listener is configured to listen on TCP/IP by default.
    6. Click Next.
      The page appropriate for the selected protocol appears.
    7. Enter the appropriate protocol parameters for the selected protocol in the fields provided, and then click Next.
      The Net Service Name Configuration, Test page appears.
    8. Click Yes to perform a test.
      During a test, Oracle Net Configuration Assistant contacts the remote database service, establishes a connection, and ends contact.
      A successful test results in the following message:
      Connecting...Test successful.
      
      
      If the test fails, it may be because:
      • Default username (scott) and password (tiger) are not valid
      • Protocol address information does not match the listener information
      • Listener is not running
      • Destination database service is down
      Depending on your diagnosis of the problem, perform one of the following tasks:
      • Click Change Login to change the username and password for the connection.
      • Click Back to review protocol address information.
      • Start the listener or database on the server.
    9. Click Next.
      The Net Service Name Configuration, Net Service Name page appears.
    10. Accept the default net service name or enter another net service name in the Net Service Name field. The name you enter should be unique to the client.
    11. Click Next.
      The Net Service Name Configuration, Another Net Service Name page appears.
    12. Click No, and then click Next.
      The Net Service Name Configuration, Configuration Done page appears.
    13. Click Next, and then click Finish.

       Step 4


    Connect to the Database

    From the client computer, connect to the database server as follows.
    1. Start SQL*Plus:
      sqlplus
      
      
    2. Connect to the database as follows:
      CONNECT username/password@net_service_name 
       
       
       

    Full form of computer terms


    * HTTP - Hyper Text Transfer Protocol.
    * HTTPS - Hyper Text Transfer Protocol Secure.
    * IP - Internet Protocol.
    * URL - Uniform Resource Locator.
    * USB - Universal Serial Bus.
    * VIRUS - not actually an acronym; "Vital Information Resources Under Siege" is a popular backronym.
    * 3G - 3rd Generation.
    * GSM - Global System for Mobile Communication.
    * CDMA - Code Division Multiple Access.
    * UMTS - Universal Mobile Telecommunication System.
    * SIM - Subscriber Identity Module.
    * AVI = Audio Video Interleave
    * RTS = Real Time Streaming
    * SIS = Symbian OS Installer File
    * AMR = Adaptive Multi-Rate Codec
    * JAD = Java Application Descriptor
    * JAR = Java Archive
    * 3GPP = 3rd Generation Partnership Project
    * 3GP = 3GPP multimedia file format
    * MP3 = MPEG-1 Audio Layer III
    * MP4 = MPEG-4 video file
    * AAC = Advanced Audio Coding
    * GIF = Graphics Interchange Format
    * JPEG = Joint Photographic Experts Group
    * BMP = Bitmap
    * SWF = Shockwave Flash
    * WMV = Windows Media Video
    * WMA = Windows Media Audio
    * WAV = Waveform Audio
    * PNG = Portable Network Graphics
    * DOC = Document (Microsoft Corporation)
    * PDF = Portable Document Format
    * M3G = Mobile 3D Graphics
    * M4A = MPEG-4 Audio File
    * NTH = Nokia Theme (series 40)
    * THM = Themes (Sony Ericsson)
    * MMF = Synthetic Music Mobile Application File
    * NRT = Nokia Ringtone
    * XMF = Extensible Music File
    * WBMP = Wireless Bitmap Image
    * DVX = DivX Video
    * HTML = Hyper Text Markup Language
    * WML = Wireless Markup Language
    * CD - Compact Disk.
    * DVD - Digital Versatile Disk.
    * CRT - Cathode Ray Tube.
    * DAT - Digital Audio Tape.
    * DOS - Disk Operating System.
    * GUI - Graphical User Interface.
    * ISP - Internet Service Provider.
    * TCP - Transmission Control Protocol.
    * UPS - Uninterruptible Power Supply.
    * HSDPA - High Speed Downlink Packet Access.
    * EDGE - Enhanced Data Rate for GSM [Global System for Mobile Communication] Evolution.
    * VHF - Very High Frequency.
    * UHF - Ultra High Frequency.
    * GPRS - General Packet Radio Service.
    * WAP - Wireless Application Protocol.
    * ARPANET - Advanced Research Project Agency Network.
    * IBM - International Business Machines.
    * HP - Hewlett Packard.
    * AM/FM - Amplitude/ Frequency Modulation.
    * WLAN - Wireless Local Area Network.

    Concurrent Requests Stuck in Pending Phase

    Summary of Possible Reasons and Solutions for the Problem Where All Concurrent Requests Are Stuck in the Pending Phase

    Solution:-

    Below are several different possible solutions to the problem where
    concurrent requests are stuck in pending status:
    1.  When shutting down the concurrent managers, are there any FNDLIBR
    processes still running at the OS level? If so, do a kill -9 on them,
    then restart the concurrent managers.
    2.  Try Relinking $FND_TOP.
    3.  Rebuild the concurrent manager views.

    As applmgr run the following:-

    From the OS:
    This is non-destructive.
    Concurrent Manager views can be rebuilt by running the following command at the command line:
    Ensure that the concurrent managers are shut down.
    FNDLIBR FND FNDCPBWV apps/apps SYSADMIN 'System Administrator' SYSADMIN
    Restart the concurrent managers.
    4.  The Profile Option 'Concurrent: OPS Request Partitioning' may be set
    incorrectly. This profile option should always be set to
    OFF, regardless of whether you are running OPS(RAC) or not, because the profile is obsolete.
    5.  The System Profile Option 'Concurrent: Active Request Limit' is possibly set to 0.
    a.  Log into Oracle Applications as SYSADMIN.
    b.  Select the System Administrator responsibility.
    c.  Navigate to Profile > System.
    d.  Query for %CONC%ACTIVE%.
    e.  Change the profile option 'Concurrent: Active Request Limit' to Null (blank).
    f.  Exit Oracle Applications and log in again for the change to take effect.
    g.  Run a new concurrent request.
    6.  The concurrent managers were brought down while an outstanding request
    was still running in the background. In this case, update the
    FND_CONCURRENT_REQUESTS table as follows:
    sql>
    update fnd_concurrent_requests
    set status_code='X', phase_code='C'
    where status_code='T';
    sql> commit;
    7.   The control_code for concurrent_queue_name = 'FNDCRM' is 'N' in the
    FND_CONCURRENT_QUEUES table,  which means 'Target node/queue unavailable'.
    This value should be NULL (CRM is running; target and actual process amount
    are the same), or 'A' ('Activate concurrent manager' control status).
    Set the control_code to 'A' in fnd_concurrent_queues for the Conflict Resolution Manager:
     a.  Log on to the Oracle Applications database server as 'applmgr'.
    b.  Verify the Applications environment is setup correctly
    ($ORACLE_HOME and $ORACLE_SID).
    c.  Logon to SQL*Plus as 'APPS' and run the following SQL statement:
     update fnd_concurrent_queues set control_code = 'A' where concurrent_queue_name = 'FNDCRM';
     commit;
     d.  Verify the status of the concurrent managers through the
    Concurrent -> Manager -> Administer form. 
    If the CRM is still not active, bounce (deactivate, activate) the Internal
    Concurrent Manager.  This is done through the Concurrent > Manager >  Administer form
    from the 'System Administrator' responsibility. It can also be done through
    the CONCSUB command at the command level.      
    Setting the control_code to 'A' in the fnd_concurrent_queues table for the
    Conflict Resolution Manager indicates that this concurrent manager
    is to be activated with the parameter values specified through this table
    for this manager (MAX_PROCESSES, CACHE_SIZE, etc).

    8.  What is the cache size?  

    Try increasing the cache size then
    stop/restart the concurrent managers.
    If concurrent requests are rarely prioritized and there are managers
    that service short-running requests, consider setting the cache size to
    equal at least twice the number of target processes.  This increases the
    throughput of the concurrent managers by attempting to avoid any sleep time. 
    For example:
    If more than one manager or worker processes the same type of requests
    with only a small cache size, it may be unable to process any jobs in a
    single processing cycle, because other processes have already run the cached requests.
    When this happens, it is important to note that the manager will sleep
    before refreshing its cache.  To increase manager throughput where there
    are sufficient requests of the required type in the queue, increase the
    cache size to improve the chance of the manager finding work to process
    and thus avoid having to enter a sleep phase.
    TIP: Ensure that the system is not resource-constrained before attempting
    to increase the rate of concurrent processing in this way, otherwise,
    these changes may actually reduce concurrent processing throughput because
    jobs take longer to run.
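Before working through the fixes above, it helps to see what is actually sitting in the queue. A quick diagnostic sketch, run as the APPS user (the columns used here come from the standard FND_CONCURRENT_REQUESTS table; adapt the filter to your needs):

```sql
-- List requests stuck in the Pending phase (phase_code = 'P'), oldest first.
-- status_code 'I' = Normal, 'Q' = Standby.
SELECT request_id, phase_code, status_code, requested_start_date
  FROM fnd_concurrent_requests
 WHERE phase_code = 'P'
 ORDER BY requested_start_date;
```

If this list keeps growing while the managers are up, that points at the manager/cache issues described above rather than at individual requests.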
     

    Row Chaining and row Migration in 10g

    What is Row Chaining?

    Row chaining happens when a row is too large to fit into a single database block. For example, if you use an 8KB block size for your database and you need to insert a 16KB row, Oracle will use two or more blocks and store the row in a chain of data blocks for that segment. Row chaining happens at the time the row is inserted.

    You can show the block size of the database with:

    SQL> conn / as sysdba
    SQL> show parameter db_block_size

    What is Row Migration?

    Row migration occurs when a row that originally fit into one data block is updated so
    that the overall row length increases, and the block's free space is already
    completely filled. In this case, Oracle migrates the data for the entire row
    to a new data block, assuming the entire row can fit in a new block. Oracle
    leaves the original row piece of the migrated row in place, pointing to the new block
    containing the migrated row: the rowid of a migrated row does not change.

    How to find Row Chaining and row migration?


    There are 3 ways to identify Row Chaining and Row Migration:

    1) by Analyze command
    2) USER_TABLES
    3) V$SYSSTAT

    Example :-

    SQL*Plus: Release 10.2.0.1.0 - Production on Sun Sep 22 13:00:32 2013
    Copyright (c) 1982, 2005, Oracle. All rights reserved.

    SQL> set linesize 1000
    SQL> set pagesize 1000
    SQL>
    SQL>
    SQL>
    SQL> conn sonu@bsnl
    Enter password:
    Connected.
    SQL>
    SQL>
    SQL> @D:\oracle\APPl\BEFTN\RDBMS\ADMIN\utlchain.sql

    Table created.

    SQL>
    SQL> desc CHAINED_ROWS ;
    Name
    -----------------------------

    OWNER_NAME
    TABLE_NAME
    CLUSTER_NAME
    PARTITION_NAME
    SUBPARTITION_NAME
    HEAD_ROWID
    ANALYZE_TIMESTAMP

    SQL> select count(*) from CHAINED_ROWS ;

    COUNT(*)
    ----------
    0


    SQL> SELECT owner, table_name, chain_cnt
    2 FROM user_tables
    3 WHERE chain_cnt > 0
    4* and owner = 'STLBAS'
    SQL>
    SQL> /

    no rows selected

    SQL>
    SQL>
    SQL> SELECT name, value FROM v$sysstat WHERE name = 'table fetch continued row';

    no rows selected

    SQL>

    Above, we could not find any row chaining. Below, we create a table that produces chained rows.


    SQL> drop table chain_row_test ;

    Table dropped.

    SQL> create table chain_row_test (a varchar(4000),b varchar(4000), c varchar(4000));

    Table created.

    SQL> insert into chain_row_test (a,b,c) values ( 1, rpad('*',40000000,'*'), rpad('*',2300000,'*') )

    SQL> /

    1 row created.

    SQL> commit ;

    Commit complete.

    SQL> SELECT count(*) FROM chained_rows;

    COUNT(*)
    ----------
    0

    SQL> analyze table chain_row_test list chained rows into chained_rows ;

    Table analyzed.

    SQL> SELECT count(*) FROM chained_rows;

    COUNT(*)
    ----------
    1

    SQL> select dbms_rowid.rowid_block_number(rowid) from chain_row_test;

    DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)
    ------------------------------------
    25620

    SQL> insert into chain_row_test (a,b,c) values ( 1, rpad('*',5620000000,'*'), rpad('*',55500000,'*') )
    SQL> /

    1 row created.

    SQL> commit ;

    Commit complete.

    SQL> select dbms_rowid.rowid_block_number(rowid) from chain_row_test;

    DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)
    ------------------------------------
    85530
    25620

    SQL> SELECT count(*) FROM chained_rows;

    COUNT(*)
    ----------
    1

    SQL> analyze table chain_row_test list chained rows into chained_rows ;

    Table analyzed.


    SQL> SELECT count(*) FROM chained_rows;

    COUNT(*)
    ----------
    3

    SQL> select dbms_rowid.rowid_block_number(rowid) from chain_row_test;

    DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)
    ------------------------------------
    85530
    25620

    SQL> ANALYZE TABLE chain_row_test LIST CHAINED ROWS;

    Table analyzed.

    SQL> SELECT name, value FROM v$sysstat WHERE name = 'table fetch continued row';
    NAME                                     VALUE
    ----------------------------------- ----------
    table fetch continued row                    3


    SQL> SELECT table_name, initial_extent, num_rows, blocks, empty_blocks, avg_space,
    2 chain_cnt, avg_row_len
    3 FROM user_tables
    4 WHERE table_name = 'CHAIN_ROW_TEST'
    5 /

    TABLE_NAME                     INITIAL_EXTENT   NUM_ROWS     BLOCKS EMPTY_BLOCKS  AVG_SPACE  CHAIN_CNT AVG_ROW_LEN
    ------------------------------ -------------- ---------- ---------- ------------ ---------- ---------- -----------
    CHAIN_ROW_TEST                          65536          5         16            3       4852          3        4826


    SQL> EXEC dbms_stats.gather_table_stats('sonu','CHAIN_ROW_TEST');

    PL/SQL procedure successfully completed.

    SQL>

    We can find which rows are chained:

    SQL> SELECT a, SUBSTR (b, 1, 10), SUBSTR (c, 1, 10)
    2 FROM chain_row_test
    3 WHERE ROWID IN (SELECT head_rowid
    4 FROM chained_rows);

    A
    -----------------------------------------------------------------------

    SUBSTR(B,1 SUBSTR(C,1
    ---------- ----------
    1
    ********** **********

    1
    ********** **********

    1
    ********** **********


    How to avoid and eliminate chained or migrated rows?

    1-ALTER TABLE ... MOVE command.

    You can move the table by using the ALTER TABLE ... MOVE statement, which enables you to relocate data of a nonpartitioned table, or of a partition of a partitioned table, into a new segment, and optionally into a different tablespace for which you have quota. This statement also lets you modify any of the storage attributes of the table or partition, including those which cannot be modified using ALTER TABLE. You can also use the ALTER TABLE ... MOVE statement with the COMPRESS keyword to store the new segment using table compression.

    Note: Moving a table changes the rowids of the rows in the table. This causes indexes on the table to be marked UNUSABLE, and DML accessing the table using these indexes will receive an ORA-01502 error. The indexes on the table must be dropped or rebuilt. Likewise, any statistics for the table become invalid and new statistics should be collected after moving the table.


    2-Increase PCTFREE.

    The PCTFREE parameter sets the minimum percentage of a data block to be reserved as free space for possible updates to rows that already exist in that block.
    If PCTFREE has been set to a low value, there is not enough room in the block for updates. To avoid migration, all tables that are updated should have their PCTFREE set so that there is enough space within the block for updates. You need to increase PCTFREE to avoid migrated rows. If you leave more free space available in the block for updates, then the row will have more room to grow.
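As an illustration (the table name, columns, and value here are made up), a CREATE TABLE statement that reserves 30% of each block for future row growth might look like:

```sql
-- Hypothetical table: PCTFREE 30 keeps 30% of every block free
-- so that updated rows can grow in place instead of migrating.
CREATE TABLE emp_history (
  empno    NUMBER(6),
  comments VARCHAR2(2000)
) PCTFREE 30;
```

A higher PCTFREE trades some storage efficiency for fewer migrated rows on update-heavy tables.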

    3- Import/Export can be taken as an approach to eliminate the migrated rows.

    4- Avoid creating tables with more than 255 columns.

    .dbc and .cfg file

    What is a .dbc and a .cfg file?

    The .dbc file (database connect descriptor file) is mostly used to define database connection parameters. It is used to connect to the database and to authenticate users against the FND_USER table in the database.

    How it works

    The .dbc file is the database connect descriptor file used to connect to the database; by default it is located in the $FND_TOP/secure directory. This file is quite important: whenever any program, such as Forms, wants to connect to the database, it uses the dbc file, in which you find GUEST_USER_PWD. The guest user does not connect directly to the database; the connection first goes via this file, which cross-verifies the password of the guest user kept in it.

    The DBC file is quite important, as whenever Java or any other program, such as Forms, wants to connect to the database, it uses the dbc file.

    Typical entries in a dbc file are:
    GUEST_USER_PWD, APPS_JDBC_URL, DB_HOST
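For illustration only, a dbc file typically contains name=value entries of this general shape (the host, SID, and values below are invented; the encrypted APPL_SERVER_ID is deliberately not shown):

```text
APPS_JDBC_URL=jdbc:oracle:thin:@dbhost.example.com:1521:PROD
GUEST_USER_PWD=GUEST/ORACLE
DB_HOST=dbhost.example.com
DB_PORT=1521
TWO_TASK=PROD
FNDNAM=APPS
GWYUID=APPLSYSPUB/PUB
APPL_SERVER_ID=<encrypted string>
```

Never hand-edit the encrypted values; regenerate the file with adgendbc.sh as described below.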


    The dbc file is used to set up the connectivity for a given database. Each vendor requires different values to determine the server name, database name, user and password, plus any native options including special instructions for the vendor's client API or utility loader. When you drag an Output Table/Input Table etc. onto the canvas, the DBC file lets the component know where to direct the activity. The DBC file is also the mechanism through which a given table can be accessed and its column definition pulled automatically into the given component's DML definition.

    There will be a lot of dbc files under $FND_SECURE; how do we determine which dbc file to use from $FND_SECURE?

    This value is determined from the profile option "Applications Database ID".
    Authentication:-

    In order to authenticate your user against the FND_USER table in the database, the request is passed from the core Apache module to mod_plsql, and it uses a few files such as wdbsvr.app in Apache/modplsql/conf, $CONTEXT_NAME.dbc, tnsnames.ora in 806/network/admin/$CONTEXT_NAME, the GUEST user, and a few other files; so focus on these files and directories and see what changed recently.

    .dbc file in R12 --> the dbc file in R12 is under the new $INST_TOP:

    $INST_TOP/appl/fnd/12.0.0/secure or $INSTANCE_TOP/apps//appl/fnd/12.0.0/secure.

    Last but not least, a .cfg file is a Unix-level file that is fed into a Korn shell script as an auto-parameter feeder for a parameter-driven shell script. Cfg files are Unix-defined, whereas dbc files are AbI-defined.

    A Database Connection (DBC) file holds information used by application servers to identify and authenticate with an application database. DBC files must be created on all tiers if it is a multi-tier system.

    To recreate the DBC file, log in to your system as the APPLMGR user and ensure that your environment is set correctly by running /.env.
    Run common_top/admin/install/adgendbc.sh (UNIX).
    This will create the DBC file for the current environment instance. Verify the log file to check the environment, which should be pointing to 806. The location of the log file is /admin/install/adgendbc.txt.

    DBC file Parameters

    Verify the dbc file exists and is in the right location: $FND_TOP/secure//_.dbc. Verify db_host using %uname -a. Verify db_sid from v$instance or echo $ORACLE_SID.

    Parameters:
    APPL_SERVER_ID, APPS_JDBC_DRIVER_TYPE=THIN, DB_PORT <>, DB_HOST, GWYUID, FNDNAM <>, TWO_TASK, GUEST_USER_PWD. GUEST_USER_PWD is an application account and should exist in Applications without any responsibility.
    Note: Make sure the value specified for DB_HOST is the database host, not the application host.
    Optional parameters:

    FND_MAX_JDBC_CONNECTIONS
    Maximum number of open connections in the JDBC connection cache. This number depends on the amount of memory available and the number of processes specified in the init.ora file of the database.

    FND_IN_USE_CONNECTION_TIMEOUT
    Maximum number of seconds a connection can be in use. In order to avoid connections being locked up for too long, the connection cache uses this parameter to forcibly close connections that have been held for over this time. If this parameter is unspecified, connections in use will not be cleaned up. This should be set to a number larger than the time taken to complete the largest transaction. It is difficult to estimate, and the recommendation is not to use this parameter.

    FND_UNUSED_CONNECTION_TIMEOUT
    Maximum number of seconds an unused connection can remain in the cache. The connection cache will close and remove from the cache any connection that has been idle for longer than this specified limit.
    Checking DBC file entries: Check the encrypted string for the APPL_SERVER_ID parameter in the DBC file against the entry in the database using the following SQL statement. These should be the same.
    Log in as apps/
    Select server_id, server_address from FND_APPLICATION_SERVERS;

    The output server_id should match APPL_SERVER_ID, and server_address should match the IP address of the db host.

    %nslookup Address: xxx.xx.xxx.xx Name: Address:
    Verify the hostname above with the db_host parameter in the dbc file.

    Checking db connectivity using DBC file :


    syntax:

    jre -v oracle.apps.fnd.security.AdminAppServer apps/ STATUS dbc=

    Output :
    Database Server
    ---------------
    DATABASE_ID: *********
    AUTHENTICATION: OFF

    Application Server
    ------------------
    APPL_SERVER_STATUS: VALID
    APPL_SERVER_ID: 2CF3A427AB3AE01EE0439C143428E01E24739190873060609592242102215611


    STATUS returns APPL_SERVER_ID and other db-related info. This verifies the DBC setup.

    Yellow bar

                        Yellow bar Issue in R12

    Yellow Warning Bar

    The Yellow Warning Bar is a warning that your applet is not running in a trusted mode.

    1. Create the digital certificate.
    1a. Go to the master web server and set up your environment pointing to the web server’s APPL_TOP.
    1b. Enter the following command:
    adjkey -initialize
    You are prompted for an entity name and an organization name.
    adjkey takes the following actions:
    - Creates a certificate (.cer file) in the admin directory under your APPL_TOP.
    - Imports this certificate into the identitydb.obj file, which is located in the
    user’s home directory
    - Creates adsign.txt in the admin directory of your APPL_TOP.
    This file is used to pass arguments to JRI for signing jar files.



    NOTE: If you have multiple web servers in your Release 11i environment, you must copy identitydb.obj and adsign.txt to the appropriate directories on all other web servers.
    DO NOT re-run the adjkey command on the other web servers. All Applications Java code should be signed using one digital certificate, unique to your site.Running adjkey multiple times will produce multiple certificates, causing each web server to have JAR files with different signatures.


    2. Repackage JInitiator with the new digital certificate.
    This step modifies JInitiator so that it recognizes your digital signature as a trusted entity.  This must be done each time you create a new digital certificate.
    During this step you run the adjbuild.sh script. Please follow the instructions on how to do this in the  “Repackage JInitiator with your digital certificate” section of Chapter 6 in Installing Oracle Applications,Release 11i.


    NOTE: If you have multiple web servers in your Release 11i environment, you must copy the resulting oajinit.exe
    file from the $OA_HTML (%OA_HTML% if on Windows NT) directory to the $OA_HTML (or %OA_HTML%) directory of all your other Release 11i web servers.

    3. Regenerate jar files
    3a.  Run AD Administration on each one of your Release 11i web servers
    3b.  Select “Maintain Applications Files” from the main menu
    3c.  Select “Generate product jar files” from the next menu.
    This will regenerate and sign all the jar files using your new digital certificate, which your newly repackaged JInitiator client software will recognize.

    NOTE: At this point, if your users attempt to connect to the Applications,they will see the yellow warning banner in their Applications windows indicating that the Java security authorization has failed.
    This is because none of the clients have yet received the new certificate bundled with JInitiator. Continue to the next step to correct this.

    4. Install the new JInitiator on your client PCs.
    The oajinit.exe file is the file users will download to install and configure the newly repackaged JInitiator.
    Before installing the new JInitiator, the users should deinstall their current JInitiator.
    This is a two step process:
    - Close all browser sessions and windows.
    - Deinstall JInitiator using the Windows Control Panel,Add/Remove Programs applet.
    a) Jinitiator needs to be completely removed from the machine using Control
    Panel -> Add/Remove programs -> Remove Jinitiator  1.1.7.27 Export
    b) Make sure the browser is completely shutdown – e.g. for Netscape, check the Taskbar to make sure all netscape.exe processes finish
    c) Delete the whole Jinitiator directory from c:\ Program Files\Oracle\ to make sure all files are removed
    d) Start Apps client and download a fresh version of Jinitiator

    These are located on the Middle Tier, webserver  -  should be owned by APPLMGR
    ADCERT.txt     (certificate directive file)

    UNIX    :     $APPL_TOP/admin/adcert.txt
    ADSIGN.txt     (used to pass arguments to JRI for signing JAR files.)

    UNIX    :     $APPL_TOP/admin/adsign.txt
    APPLTOP.cer    (Certificate file that gets imported into the identitydb.obj)

    UNIX    :     $APPL_TOP/admin/appltop.cer
    OAJINIT.exe    (Repackaged JInitiator executable that includes identitydb.obj)

    UNIX    :     $OA_HTML/oajinit.exe
    IDENTITYDB.obj (Identity Database File that holds trusted digital certificates)

    UNIX    :     $HOME/identitydb.obj

    Finding fragmentation at table level and tablespace level in Oracle and steps to remove them




    Use this query :-


    select owner,table_name,blocks,num_rows,avg_row_len,round(((blocks*8/1024)),2)||'MB' "TOTAL_SIZE",
        round((num_rows*avg_row_len/1024/1024),2)||'Mb' "ACTUAL_SIZE",
        round(((blocks*8/1024)-(num_rows*avg_row_len/1024/1024)),2) ||'MB' "FRAGMENTED_SPACE"
        from all_tables  WHERE Owner NOT IN (
           'SYS', 'OUTLN', 'SYSTEM', 'CTXSYS', 'DBSNMP',
           'LOGSTDBY_ADMINISTRATOR', 'ORDSYS',
           'ORDPLUGINS', 'OEM_MONITOR', 'WKSYS', 'WKPROXY',
           'WK_TEST', 'WKUSER', 'MDSYS', 'LBACSYS', 'DMSYS',
           'WMSYS', 'OLAPDBA', 'OLAPSVR', 'OLAP_USER',
           'OLAPSYS', 'EXFSYS', 'SYSMAN', 'MDDATA',
           'SI_INFORMTN_SCHEMA', 'XDB', 'ODM');

    Or this one: it will collect tables having more than 100 MB of fragmentation.


    select owner,table_name,blocks,num_rows,avg_row_len,round(((blocks*8/1024)),2)||'MB' "TOTAL_SIZE", round((num_rows*avg_row_len
    /1024/1024),2)||'Mb' "ACTUAL_SIZE", round(((blocks*8/1024)-(num_rows*avg_row_len/1024/1024)),2) ||'MB' "FRAGMENTED_SPACE" from
    dba_tables where owner in('a','b','c','d') and round(((blocks*8/1024)-(num_rows*avg_row_len/1024/1024)),2)
    > 100 order by 8 desc;


    ========================================================================

    Description

    This script lists details of the extents within a tablespace. This will help you determine whether a tablespace is fragmented. In a tablespace with little or no fragmentation, the biggest, smallest, and average free-extent sizes will be similar (e.g., as in a LOCALLY MANAGED tablespace of UNIFORM SIZE).

    Parameters

    None.

    SQL Source


    set pages 50

    PROMPT
    PROMPT Tablespace Freespace Fragmentation Report
    PROMPT

    column "Blocks" format 999999
    column "Free" format 999999
    column "Pieces" format 99999
    column "Biggest" format 999999
    column "Smallest" format 999999
    column "Average" format 999999
    column "Dead" format 9999
    select substr(ts.tablespace_name,1,12) "Tspace",
           tf.blocks "Blocks",
           sum(f.blocks) "Free",
           count(*) "Pieces",
           max(f.blocks) "Biggest",
           min(f.blocks) "Smallest",
           round(avg(f.blocks)) "Average",
           sum(decode(sign(f.blocks-5),-1,f.blocks,0)) "Dead"
    from   dba_free_space f,
           dba_data_files tf,
           dba_tablespaces ts
    where  ts.tablespace_name=f.tablespace_name
    and    ts.tablespace_name = tf.tablespace_name
    group by ts.tablespace_name,tf.blocks
    /

    exit


    Description

    This script lists how many chained or migrated rows there are within a table. It may help you determine whether a table needs to be rebuilt. For this script to be effective, you must analyze your tables regularly.

    Parameters

    None.

    SQL Source

    CLEAR
    SET HEAD ON
    SET VERIFY OFF
    set pages 100
    set lines 79

    PROMPT
    PROMPT Table Fragmentation Report
    PROMPT

    col owner form a12
    col table_name form a20
    col empty_blocks form 999,999 heading "Empty Blks"
    col blocks form 999,999 heading "Blks"
    col pct form 99

    select owner, table_name, num_rows, chain_cnt, (chain_cnt*100/num_rows) pct, empty_blocks, blocks
    from dba_tables
    where chain_cnt > 0
    and owner not in ('SYS','SYSTEM')
    /

    exit

    ================================================================

    select table_name,round((blocks*8),2) "size (kb)" ,
                                round((num_rows*avg_row_len/1024),2) "actual_data (kb)",
                                (round((blocks*8),2) - round((num_rows*avg_row_len/1024),2)) "wasted_space (kb)"
    from user_tab_partitions
    where (round((blocks*8),2) > round((num_rows*avg_row_len/1024),2))
    order by 4 desc;

    ===============================================================
    steps to remove them :-


    If you find the reclaimable space % value is more than 20%, then we can expect fragmentation in the table. To remove it, use the following options:

    1. Export and import the table (difficult to implement in a production environment)
    2. Move the table into a different or the same tablespace (depends upon the free space available in the tablespace)

    Here are the steps for the 2nd option:

    1. Collect the status of all the indexes on the table.
    select index_name,status from user_indexes where table_name like 'table_name';

    The status may be VALID or UNUSABLE.

    2. Command to move into a new tablespace:

    alter table table_name enable row movement; ---- run this command before moving the table
    alter table table_name move tablespace new_tablespace_name;

    3. Command to move into the old tablespace:

    alter table table_name move tablespace old_tablespace_name;

    If we have free space available in the tablespace which contains the table, then we can replace steps 2 and 3 with:

    alter table table_name move;
    alter table table_name disable row movement; ---- run this command after moving the table

    4. Rebuild all indexes
    We need to rebuild all the indexes on the table, because the MOVE command leaves every index in an UNUSABLE state.
    select index_name from user_indexes where table_name = 'TABLE_NAME';
    alter index index_name rebuild online;
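    If the table has many indexes, rebuilding them one by one is tedious. The PL/SQL loop below is a sketch that rebuilds every UNUSABLE index on a given table; the table name MY_TABLE is a placeholder.

```sql
-- Sketch: rebuild all UNUSABLE indexes on one table (MY_TABLE is a placeholder)
begin
  for idx in (select index_name
              from   user_indexes
              where  table_name = 'MY_TABLE'
              and    status = 'UNUSABLE')
  loop
    execute immediate 'alter index ' || idx.index_name || ' rebuild online';
  end loop;
end;
/
```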

    5. Check the status of all the indexes
    select index_name, status from user_indexes where table_name = 'TABLE_NAME';
    The value in the STATUS column must now be VALID.

    6. Crosscheck space is reclaimed in tablespace

    SELECT /* + RULE */  df.tablespace_name "Tablespace",
           df.bytes / (1024 * 1024) "Size (MB)",
           SUM(fs.bytes) / (1024 * 1024) "Free (MB)",
           Nvl(Round(SUM(fs.bytes) * 100 / df.bytes),1) "% Free",
           Round((df.bytes - SUM(fs.bytes)) * 100 / df.bytes) "% Used"
      FROM dba_free_space fs,
           (SELECT tablespace_name,SUM(bytes) bytes
              FROM dba_data_files
             GROUP BY tablespace_name) df
     WHERE fs.tablespace_name (+)  = df.tablespace_name
     GROUP BY df.tablespace_name,df.bytes
    UNION ALL
    SELECT /* + RULE */ df.tablespace_name tspace,
           fs.bytes / (1024 * 1024),
           SUM(df.bytes_free) / (1024 * 1024),
           Nvl(Round((SUM(fs.bytes) - df.bytes_used) * 100 / fs.bytes), 1),
           Round((SUM(fs.bytes) - df.bytes_free) * 100 / fs.bytes)
      FROM dba_temp_files fs,
           (SELECT tablespace_name,bytes_free,bytes_used
              FROM v$temp_space_header
             GROUP BY tablespace_name,bytes_free,bytes_used) df
     WHERE fs.tablespace_name (+)  = df.tablespace_name
     GROUP BY df.tablespace_name,fs.bytes,df.bytes_free,df.bytes_used
     ORDER BY 4 DESC;

    The query above will show an increase in the "Free (MB)" column for the tablespace that owns the defragmented table.

    7. Gather table stats:
    exec dbms_stats.gather_table_stats('schema_name','table_name');
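    Passing cascade => TRUE refreshes the statistics of the rebuilt indexes in the same call; the schema and table names below are placeholders.

```sql
-- Sketch: gather table and index stats together (names are placeholders)
exec dbms_stats.gather_table_stats(ownname => 'SCHEMA_NAME', tabname => 'TABLE_NAME', cascade => TRUE);
```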

    Comparing Two Schemas

    We can compare two schemas with the help of the queries below:

    Compare the two users in terms of tables:


    SELECT TABLE_NAME FROM ALL_TABLES
    WHERE TABLE_NAME NOT IN
    (SELECT TABLE_NAME FROM ALL_TABLES WHERE OWNER='EMP1')
    AND OWNER='EMP2';


    Compare the two users in terms of objects:


    SELECT OBJECT_NAME||' '||OBJECT_TYPE FROM ALL_OBJECTS
    WHERE OBJECT_NAME NOT IN
    (SELECT OBJECT_NAME FROM ALL_OBJECTS WHERE OWNER='EMP1')
    AND OWNER='EMP2';

    These queries are useful when we transfer data from one schema to another and want to verify the objects after a successful transfer.
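    The same comparison can be written more symmetrically with MINUS, which shows the differences in both directions. This is a sketch using the EMP1/EMP2 owners from the queries above:

```sql
-- Objects present in EMP2 but missing from EMP1
select object_name, object_type from all_objects where owner = 'EMP2'
minus
select object_name, object_type from all_objects where owner = 'EMP1';

-- Objects present in EMP1 but missing from EMP2
select object_name, object_type from all_objects where owner = 'EMP1'
minus
select object_name, object_type from all_objects where owner = 'EMP2';
```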

    Trace Files

    Trace files are stored in the Automatic Diagnostic Repository (ADR), in the trace directory under each ADR home. To help you locate individual trace files within this directory, you can use data dictionary views. For example, you can find the path to your current session's trace file or to the trace file for each Oracle Database process.

    To find the trace file for your current session:
    • Submit the following query:
      SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Default Trace File';
      
      The full path to the trace file is returned.
    To find all trace files for the current instance:
    • Submit the following query:
      SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Diag Trace';
      
      The path to the ADR trace directory for the current instance is returned.
    To determine the trace file for each Oracle Database process:
    • Submit the following query:
      SELECT PID, PROGRAM, TRACEFILE FROM V$PROCESS;
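      The same views can be combined to locate the trace file of any other session, not just your own. This is a sketch: the TRACEFILE column of V$PROCESS exists from 11g onward, and the SID value 123 is a placeholder.

```sql
-- Sketch: find the trace file of another session (123 is a placeholder SID)
select p.tracefile
from   v$session s, v$process p
where  p.addr = s.paddr
and    s.sid  = 123;
```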
      
      
      In Oracle 10g, trace files are found under this directory:

      /oracle/<SID>/saptrace/usertrace

      but in Oracle 11g they are under:

      /oracle/<SID>/saptrace/diag/rdbms/qas/<SID>/trace (here "qas" is the database name). A usertrace folder is still present in Oracle 11g at /oracle/<SID>/saptrace/usertrace.

       TRACE Utility

      The Oracle TRACE utility is used to measure timing statistics for a given query, a batch process, or an entire system. It is a fast method of finding where potential bottlenecks on the system reside. TRACE has the following functionality:
      • TRACE runs the query and generates statistics about an Oracle query that is executed.
      • TRACE helps developers analyze every section of a query.

      Simple Steps for TRACE with a Simple Query

      The steps for setting up and running Oracle's TRACE utility are as follows:
      1. Set the following init.ora parameters:
          TIMED_STATISTICS = TRUE
          MAX_DUMP_FILE_SIZE = 2000000 (Not 2M)
          USER_DUMP_DEST = /oracle8/rich_trc
          
        In Oracle7, the database must be shut down and restarted for these parameters to take effect. In Oracle8, the TIMED_STATISTICS parameter may be set via an ALTER SESSION (for an individual session) or ALTER SYSTEM (for the entire system) command. The USER_DUMP_DEST specifies the location to put the files and the MAX_DUMP_FILE_SIZE specifies the maximum file size.

      2. Enable TRACE for a SQL*Plus session (this starts TRACing for an individual session):
          alter session set sql_trace = true;
          
      3. Run the query to be TRACEd:
          select    table_name,
                    owner, 
                    initial_extent,
                    uniqueness
          from	    ind2
          where	    owner || '' = 'SCOTT';
          --    (Note: An index on "OWNER" is suppressed)
          
      4. Disable TRACE for the SQL*Plus session:
          alter session set sql_trace = false;
          
      5. You can also enable TRACE for all sessions by setting the SQL_TRACE parameter in the init.ora. You must shut down and restart the database for this to take effect. This is not suggested!
          SQL_TRACE = TRUE
          
        After running TRACE, your output file will look something like the following:
          5_19554.trc
          
        TIP: With TIMED_STATISTICS=TRUE set in the init.ora, TRACing begins as soon as the user issues the command to enable it. But be careful: setting SQL_TRACE=TRUE in the init.ora will cause the entire system and all queries to be TRACEd and could cause performance degradation.
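      On 10g and later releases, a supported alternative to setting SQL_TRACE directly is the DBMS_MONITOR package, which can switch tracing on and off for any session. A sketch, with placeholder SID and serial#:

```sql
-- Sketch: trace another session via DBMS_MONITOR (123/456 are placeholder sid/serial#)
exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 456, waits => TRUE, binds => FALSE);
-- ... let the workload run, then:
exec dbms_monitor.session_trace_disable(session_id => 123, serial_num => 456);
```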

      6. Run TKPROF to put the TRACE file into readable format:
          tkprof 5_19554.trc rich2.prf explain=system/manager
          
        The TKPROF utility translates the TRACE file generated by the SQLTRACE facility to a readable format. You can run TKPROF against a TRACE file that you have previously created, or you can run it while the program that is creating the TRACE file is still running. Options for TKPROF are listed next.
          tkprof tracefile output_file [sort = parameters]
          [print =number] [explain=username/password@sid]
          
        Command-Line Options:
        TRACEfile
        The name of the TRACE file containing the statistics generated by SQL_TRACE.
        output_file
        The name of the file where TKPROF writes its output.
        SORT= parameters
        The order in which to display the statements in the output. There are about 20 different options for sorting the output; you can even combine these options.
        PRINT=number
        The number of statements to include in the output. If this option is not included, TKPROF will list all statements in the output.
        EXPLAIN=username/password@sid
        Run the EXPLAIN PLAN on the user's SQL statements in the TRACE file. This option will create a PLAN_TABLE of its own, so the user will need to have privileges to create the table and space in which to create it. When TKPROF is finished, this table is dropped.

        New options in 7.3+:
        INSERT= filename
        This option creates a script to create a table and store the TRACE file statistics for each SQL statement TRACEd.
        RECORD= filename
        This option will produce a file of all the user's SQL statements.
        SYS= YES/NO
        This option allows the user to request the recursive SQL statements not be displayed in the output. The default is set to YES.
        SORT= parameters
        There is a tremendous number of sorting options available. My favorites are FCHCPU (CPU time of fetch), FCHDSK (disk reads for fetch), FCHCU and FCHQRY (memory reads for fetch), FCHROW (number of rows fetched), EXEDSK (disk reads during execute), EXECU and EXEQRY (memory reads during execute), EXEROW (rows processed during execute), EXECPU (execute CPU time), and PRSCNT (times parsed).
        TIP
        The TKPROF utility puts a TRACEd output into a readable format. Without running TKPROF, it would be difficult to read the output of a TRACE. By specifying "explain=username/password" (noted earlier), we are able to get the EXPLAIN PLAN execution path in addition to the execution statistics of the query.
      7. The output of the file rich2.prf (query with the index suppressed):
      8.   select    table_name,
                    owner,
                    initial_extent,
                    uniqueness
          from      ind2
          where     owner || '' = 'SCOTT';   --  (the index on "OWNER" is suppressed)
        
                    count   cpu   elap   disk   query   current   rows

        Parse:          1     1      2      0       0         0
        Execute:        1     0      0      0       0         2      0
        Fetch:          2    69    113    142     430         0     36
          
        Execution Plan (no index used):
        TABLE ACCESS (FULL) OF 'IND2'
          
        The output shows 142 disk reads and 430 memory reads (query + current). Such a high number of disk reads relative to memory reads is certainly a potential problem. The execution path shows a full table scan, confirming that we may have a potential problem.

        TIP: A TRACEd query with a large number of physical reads usually indicates a missing index. The disk column indicates the physical reads (usually where an index is not used), and the query column added to the current column indicates the memory reads (usually reads where an index is being used).

      9. Here's what happens when I rerun the query (after restarting the system) to be TRACEd, now using an index on the owner table:
          select    table_name,
                    owner,
                    initial_extent,
                    uniqueness
          from      ind2
          where     owner = 'SCOTT';
          (The index on "OWNER" is not suppressed)
        
                    count   cpu   elap   disk   query   current   rows

        Parse:          2     0      0      0       0         0
        Execute:        2     0      0      0       0         0      0
        Fetch:          4     6      6      0     148         0     72
          
        Execution Plan (index used):
        TABLE ACCESS (BY ROWID) OF 'IND2'
          INDEX (RANGE SCAN) OF 'IND2_1' (NON-UNIQUE)
          
        TIP: A TRACEd query output with only memory reads (query, i.e. consistent reads) indicates that an index is being used.

      The Sections of a TRACE Output

      The TRACE utility output has multiple sections, including the SQL statement, its statistics, and the EXPLAIN PLAN. Each of these topics is discussed in the following sections.
      The SQL Statement
      The first section of TKPROF output is the SQL statement. This statement will be exactly the same as the statement that was executed. If there were any hints or comments in the statement, they are retained in this output. This can be helpful when you are reviewing the output from multiple sessions: if you find a statement causing problems, you will be able to search for the exact statement. Remember, some of the statements from Oracle Forms are generated dynamically.
      The Statistics Section
      This section contains all the statistics for this SQL statement and all the recursive SQL statements generated to satisfy this statement. In this section there are eight columns, the first being the type of call to the database. There are three types of calls, parse, execute, and fetch. Each type of call will generate a separate line of statistics. The other seven columns are the statistics for each type of call.
      count
      The number of times this type of call was made.
      cpu
      The total CPU time for all of the calls of this type for this statement. If the TIMED_STATISTICS parameter in the init.ora is not set to TRUE, this statistic and the elapsed statistic will be 0.
      elapsed
      The total elapsed time for this call.
      disk
      The total number of data blocks retrieved from disk to satisfy this call.
      query
      The total number of data buffers retrieved from memory for this type of call (consistent reads). SELECT statements usually retrieve buffers in this mode.
      current
      The total number of data buffers retrieved from memory for this type of call (current mode). UPDATE, INSERT, and DELETE statements usually access buffers in this mode.
      rows
      The total number of rows processed by this statement. Rows processed by SELECT statements appear in the Fetch row; rows from INSERTs, UPDATEs, and DELETEs appear in the Execute row.
