ORA-16009


ORA-16009: remote archive log destination must be a STANDBY database 

Cause: The database associated with the archive log destination service name is other than the required STANDBY type database. Remote archival of redo log files is not allowed to non-STANDBY database instances. 

Action: Take the necessary steps to create the required compatible STANDBY database before retrying the ARCHIVE LOG processing. 


To check the database, run this query:

SQL> select database_role, open_mode from v$database;

DATABASE_ROLE    OPEN_MODE
---------------- --------------------
PHYSICAL STANDBY MOUNTED

Automatic Storage Management (ASM)


Introduction to Automatic Storage Management (ASM)


Overview of Oracle Automatic Storage Management (ASM)

ASM is a volume manager and a file system for Oracle database files that supports single-instance Oracle Database and Oracle Real Application Clusters (Oracle RAC) configurations. ASM is Oracle's recommended storage management solution that provides an alternative to conventional volume managers, file systems, and raw devices.
ASM uses disk groups to store datafiles; an ASM disk group is a collection of disks that ASM manages as a unit. Within a disk group, ASM exposes a file system interface for Oracle database files. The contents of files that are stored in a disk group are evenly distributed, or striped, to eliminate hot spots and to provide uniform performance across the disks. Performance is comparable to that of raw devices.
You can add or remove disks from a disk group while a database continues to access files from the disk group. When you add or remove disks from a disk group, ASM automatically redistributes the file contents and eliminates the need for downtime when redistributing the content.
The ASM volume manager functionality provides flexible server-based mirroring options. The ASM normal and high redundancy disk groups enable two-way and three-way mirroring respectively. You can use external redundancy to enable a Redundant Array of Inexpensive Disks (RAID) storage subsystem to perform the mirroring protection function.
ASM also uses the Oracle Managed Files (OMF) feature to simplify database file management. OMF automatically creates files in designated locations. OMF also names files, and it removes files and relinquishes the associated space when tablespaces or files are deleted.
ASM reduces the administrative overhead for managing database storage by consolidating data storage into a small number of disk groups. This enables you to consolidate the storage for multiple databases and to provide for improved I/O performance.
ASM files can coexist with other storage management options such as raw disks and third-party file systems. This capability simplifies the integration of ASM into pre-existing environments.
Oracle Enterprise Manager includes a wizard that enables you to migrate non-ASM database files to ASM. ASM also has easy-to-use management interfaces such as SQL*Plus, the ASMCMD command-line interface, and Oracle Enterprise Manager.
Understanding ASM Concepts

ASM Instances

An ASM instance is built on the same technology as an Oracle Database instance. An ASM instance has a System Global Area (SGA) and background processes that are similar to those of Oracle Database. However, because ASM performs fewer tasks than a database, an ASM SGA is much smaller than a database SGA. In addition, ASM has a minimal performance effect on a server. ASM instances mount disk groups to make ASM files available to database instances; ASM instances do not mount databases.
ASM metadata is the information that ASM uses to control a disk group and the metadata resides within the disk group. ASM metadata includes the following information:
  • The disks that belong to a disk group
  • The amount of space that is available in a disk group
  • The filenames of the files in a disk group
  • The location of disk group datafile data extents
  • A redo log that records information about atomically changing data blocks
ASM and database instances require shared access to the disks in a disk group. ASM instances manage the metadata of the disk group and provide file layout information to the database instances.
ASM instances can be clustered using Oracle Clusterware; there is one ASM instance for each cluster node. If there are several database instances for different databases on the same node, then the database instances share the same single ASM instance on that node.
If the ASM instance on a node fails, then all of the database instances on that node also fail. Unlike a file system failure, an ASM instance failure does not require restarting the operating system. In an Oracle RAC environment, the ASM and database instances on the surviving nodes automatically recover from an ASM instance failure on a node.
Figure 1-1 shows a single node configuration with one ASM instance and multiple database instances. The ASM instance manages the metadata and provides space allocation for the ASM files. When a database instance creates or opens an ASM file, it communicates those requests to the ASM instance. In response, the ASM instance provides file extent map information to the database instance.

Figure 1-1 ASM for Single-Instance Oracle Databases


In Figure 1-1, there are two disk groups: one disk group has four disks and the other has two disks. The database can access both disk groups. The configuration in Figure 1-1 shows multiple database instances, but only one ASM instance is needed to serve the multiple database instances.
Figure 1-2 shows an ASM cluster in an Oracle RAC environment where ASM provides a clustered pool of storage. There is one ASM instance for each node serving multiple Oracle RAC or single-instance databases in the cluster. All of the databases are consolidated and share the same two ASM disk groups.


Figure 1-2 ASM Cluster Configuration with Oracle RAC


A clustered storage pool can be shared by multiple single-instance Oracle Databases as shown in Figure 1-3. In this case, multiple databases share common disk groups. A shared ASM storage pool is achieved by using Oracle Clusterware. However, in such environments an Oracle RAC license is not required.
ASM instances that are on separate nodes do not need to be part of an ASM cluster and do not communicate with each other. However, multiple nodes that are not part of an ASM cluster cannot share a disk group. To share a disk group among multiple nodes, you must install Oracle Clusterware on all of the nodes, regardless of whether you install Oracle RAC on the nodes.

Figure 1-3 ASM Cluster with Single-Instance Oracle Databases


ASM Disk Groups

A disk group consists of multiple disks and is the fundamental object that ASM manages. Each disk group contains the metadata that is required for the management of space in the disk group.
Files are allocated from disk groups. Any ASM file is completely contained within a single disk group. However, a disk group might contain files belonging to several databases and a single database can use files from multiple disk groups. For most installations you need only a small number of disk groups, usually two, and rarely more than three.
Disk group components include disks, files, and allocation units. Figure 1-4 shows the relationships among ASM disk group components.
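For example, a minimal disk group creation statement might look like the following sketch; the disk group name DATA and the Linux device paths are illustrative assumptions:

CREATE DISKGROUP data EXTERNAL REDUNDANCY
   DISK '/dev/rdsk/disk1', '/dev/rdsk/disk2';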

Mirroring and Failure Groups

Mirroring protects data integrity by storing copies of data on multiple disks. The disk group type determines the mirroring levels with which Oracle creates files in a disk group.
When you create a disk group, you specify an ASM disk group type based on one of the following three redundancy levels:
  • Normal for 2-way mirroring
  • High for 3-way mirroring
  • External to not use ASM mirroring, such as when you configure hardware RAID for redundancy
The redundancy level controls how many disk failures are tolerated without dismounting the disk group or losing data.
ASM mirroring is more flexible than traditional RAID mirroring because you can specify the redundancy level for each file. Two files can share the same disk group with one file being mirrored while the other is not.
When ASM allocates an extent for a normal redundancy file, ASM allocates a primary copy and a secondary copy. ASM chooses the disk for the secondary copy from a failure group other than the one that holds the primary copy. Failure groups are used to place mirrored copies of data so that each copy is on a disk in a different failure group. As a result, the simultaneous failure of all disks in a failure group does not result in data loss.
You define the failure groups for a disk group when you create an ASM disk group. After a disk group is created, you cannot alter the redundancy level of the disk group. To change the redundancy level of a disk group, create another disk group with the appropriate redundancy and then move the files to the new disk group. Oracle recommends that you create failure groups of equal size to avoid space imbalance and uneven distribution of mirror data.
If you omit the failure group specification, then ASM automatically places each disk into its own failure group. Normal redundancy disk groups require at least two failure groups. High redundancy disk groups require at least three failure groups. Disk groups with external redundancy do not use failure groups.  
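As an illustration, the following sketch creates a normal redundancy disk group with two explicitly specified failure groups, one for each disk controller; the failure group names and device paths are assumptions:

CREATE DISKGROUP data NORMAL REDUNDANCY
   FAILGROUP controller1 DISK '/dev/rdsk/c1d1', '/dev/rdsk/c1d2'
   FAILGROUP controller2 DISK '/dev/rdsk/c2d1', '/dev/rdsk/c2d2';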
ASM Disks
ASM disks are the storage devices that are provisioned to ASM disk groups. Examples of ASM disks include:
  • A disk or partition from a storage array
  • An entire disk or the partitions of a disk
  • Logical volumes
  • Network-attached files (NFS)
When you add a disk to a disk group, you either assign a disk name or the disk is given an ASM disk name automatically. This name is different from the name used by the operating system. In a cluster, a disk may be assigned different operating system device names on different nodes, but the disk has the same ASM disk name on all of the nodes. In a cluster, an ASM disk must be accessible from all of the instances that share the disk group.
If the disks are the same size, then ASM spreads the files evenly across all of the disks in the disk group. This allocation pattern maintains every disk at the same capacity level and ensures that all of the disks in a disk group have the same I/O load. Because ASM load balances among all of the disks in a disk group, different ASM disks should not share the same physical drive.

Allocation Units

Every ASM disk is divided into allocation units (AU). An AU is the fundamental unit of allocation within a disk group. A file extent consists of one or more AUs. An ASM file consists of one or more file extents.
When you create a disk group, you can set the ASM AU size to be between 1 MB and 64 MB in powers of two: 1, 2, 4, 8, 16, 32, or 64 MB. Larger AU sizes typically provide performance advantages for data warehouse applications that use large sequential reads.
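For example, the following sketch sets a 4 MB allocation unit size at disk group creation; the disk group name and discovery path are assumptions, and the AU_SIZE attribute requires the appropriate disk group compatibility settings:

CREATE DISKGROUP data EXTERNAL REDUNDANCY
   DISK '/dev/rdsk/*'
   ATTRIBUTE 'au_size' = '4M';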

About ASM Files

Files that are stored in ASM disk groups are called ASM files. Each ASM file is contained within a single ASM disk group. Oracle Database communicates with ASM in terms of files. This is identical to the way Oracle Database uses files on any file system. You can store the following file types in ASM disk groups:
  • Control files
  • Datafiles, temporary datafiles, and datafile copies
  • SPFILEs
  • Online redo logs, archive logs, and Flashback logs
  • RMAN backups
  • Disaster recovery configurations
  • Change tracking bitmaps
  • Data Pump dumpsets
Note:
Oracle executables and ASCII files, such as alert logs and trace files, cannot be stored in ASM disk groups.
ASM automatically generates ASM file names as part of database operations, including tablespace creation. ASM file names begin with a plus sign (+) followed by a disk group name. You can specify user-friendly aliases for ASM files and create a hierarchical directory structure for the aliases.
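For example, a fully qualified ASM file name has the form +diskgroup/dbname/filetype/tag.file.incarnation, such as +data/orcl/datafile/users.259.685366091. The following sketch creates a user-friendly alias for such a file; the disk group, database, and file names are assumptions:

ALTER DISKGROUP data ADD ALIAS '+data/orcl/users.dbf'
   FOR '+data/orcl/datafile/users.259.685366091';

The following sections describe the ASM file components: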

Extents

The contents of ASM files are stored in a disk group as a set, or collection, of data extents that are stored on individual disks within disk groups. Each extent resides on an individual disk. Extents consist of one or more allocation units (AU). To accommodate increasingly larger files, ASM uses variable size extents.
Variable size extents enable support for larger ASM datafiles, reduce SGA memory requirements for very large databases, and improve performance for file create and open operations. The size of the extent map that defines a file can be smaller by a factor of 8 or 64, depending on the file size. The initial extent size is equal to the allocation unit size, and it increases by factors of 8 and 64 at predefined thresholds. This feature is automatic for newly created and resized datafiles when the disk group compatibility attributes are set to Oracle Release 11 or higher. For information about compatibility attributes, see "Disk Group Compatibility".
Figure 1-4 shows the ASM file extent relationship with allocation units. Extent size is always equal to the AU size for the first 20000 extent sets (0 - 19999). Figure 1-4 shows the first eight extents (0 to 7) distributed on four ASM disks. After the first 20000 extent sets, the extent size becomes 8*AU for the next 20000 extent sets (20000 - 39999). This is shown as bold rectangles labeled with the extent set numbers 20000 to 20007, and so on. The next increment for an ASM extent is 64*AU (not shown in the figure).
The ASM coarse striping is always equal to the disk group AU size, but the fine striping size always remains 128 KB in any configuration (not shown in the figure). The AU size is determined at creation time with the allocation unit size (AU_SIZE) disk group attribute. The values can be 1, 2, 4, 8, 16, 32, or 64 MB.

Figure 1-4 ASM File Allocation in a Disk Group



ASM Striping

ASM striping has two primary purposes:
  • To balance loads across all of the disks in a disk group
  • To reduce I/O latency
Coarse-grained striping provides load balancing for disk groups while fine-grained striping reduces latency for certain file types by spreading the load more widely.
To stripe data, ASM separates files into stripes and spreads data evenly across all of the disks in a disk group. The stripes are equal in size to the effective AU. The coarse-grained stripe size is always equal to the AU size. The fine-grained stripe size always equals 128 KB; this provides lower I/O latency for small I/O operations such as redo log writes.

File Templates

Templates are collections of attribute values that are used to specify file mirroring and striping attributes for an ASM file when it is created. When creating a file, you can include a template name and assign desired attributes based on an individual file rather than the file type.
A default template is provided for every Oracle file type, but you can customize templates to meet unique requirements. Each disk group has a default template associated with each file type.
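For example, the following sketch adds a custom template to a hypothetical disk group DATA; the template mirrors files and applies fine-grained striping, and the template and disk group names are assumptions:

ALTER DISKGROUP data ADD TEMPLATE reliable ATTRIBUTES (MIRROR FINE);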
ASM Disk Group Administration
This section describes ASM disk group administration and it contains the following topics:

About Discovering Disks

The disk discovery process locates the operating system names for disks that ASM can access. Disk discovery is also used to find all of the disks that comprise a disk group to be mounted. This can include the disks that you want to add to a disk group and the disks that you might consider adding to a disk group.
An ASM instance requires an ASM_DISKSTRING initialization parameter value to specify its discovery strings. Only pathnames that the ASM instance has permission to open are discovered. The exact syntax of a discovery string depends on the platform and ASMLIB libraries. The pathnames that an operating system accepts are always usable as discovery strings.
About Mounting Disk Groups
A disk group must be mounted by a local ASM instance before database instances can access the files in the disk group. Mounting the disk group requires discovering all of the disks and locating the files in the disk group that is being mounted.
You can explicitly dismount a disk group. Oracle reports an error if you attempt to dismount a disk group when any of the disk group files are open. It is possible to have disks fail in excess of the ASM redundancy setting. If this happens, then the disk group is forcibly dismounted. This shuts down any database instances that are using the disk group.
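For example, assuming a disk group named DATA, the mount and dismount statements are:

ALTER DISKGROUP data MOUNT;
ALTER DISKGROUP data DISMOUNT;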
About Adding and Dropping Disks
You can add a disk to an existing disk group to add space and to improve throughput. The discovery string specifies the disk or disks that you want to add. This can include disks that are already in the disk group as well as new disks. The disks that you add must be discovered by every ASM instance using the ASM_DISKSTRING initialization parameter. After you add a disk, ASM rebalancing operations move data onto the new disk. To minimize the rebalancing I/O, it is more efficient to add multiple disks at the same time.
You can drop a disk from a disk group if it fails or to re-provision capacity. You can also manually drop a disk that has excessive soft errors before the disk fails. Use the ASM disk name to drop a disk, not the discovery string device name. If an error occurs while writing to a disk, then Oracle drops the disk automatically.
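The following sketch adds two disks to a hypothetical disk group DATA and then drops a disk by its ASM disk name; the device paths and the disk name DATA_0005 are assumptions:

ALTER DISKGROUP data ADD DISK '/dev/rdsk/c3d1', '/dev/rdsk/c3d2';
ALTER DISKGROUP data DROP DISK data_0005;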
Online Storage Reconfigurations and Dynamic Rebalancing
Rebalancing a disk group moves data between disks to ensure that every file is evenly spread across all of the disks in a disk group. When all of the files are evenly dispersed, all of the disks are evenly filled to the same percentage; this ensures load balancing. Rebalancing does not relocate data based on I/O statistics nor is rebalancing started as a result of statistics. ASM rebalancing operations are controlled by the size of the disks in a disk group.
ASM automatically initiates a rebalance after storage configuration changes, such as when you add, drop, or resize disks. The power setting parameter determines the speed with which rebalancing operations occur.
You can manually start a rebalance to change the power setting of a running rebalance. A rebalance is automatically restarted if the instance on which the rebalancing is running stops; databases can remain operational during rebalancing operations. A rebalance has almost no effect on database performance because only one megabyte at a time is locked for relocation and only writes are blocked.
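For example, the following statement manually starts a rebalance, or changes the power of a running rebalance, on a hypothetical disk group DATA:

ALTER DISKGROUP data REBALANCE POWER 5;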



Preparing Storage for ASM

This chapter describes how to prepare your storage subsystem before you configure Automatic Storage Management (ASM). When preparing your storage to use ASM, first determine the storage option for your system and then prepare the disk storage for the specific operating system environment as described in this chapter. This chapter contains the following topics:

Preparing Disks for ASM

You can create an ASM disk group using one of the following storage resources:
  • Raw disk partition—A raw partition can be the entire disk drive or a section of a disk drive. However, the ASM disk cannot be in a partition that includes the partition table because the partition table can be overwritten.
  • Logical unit numbers (LUNs)—Using hardware RAID functionality to create LUNs is a recommended approach. Storage hardware RAID 0+1 or RAID 5, and other RAID configurations, can be provided to ASM as ASM disks.
  • Raw logical volumes (LVM)—LVMs are supported in less complicated configurations where an LVM is mapped to a LUN, or an LVM uses disks or raw partitions. LVM configurations are not recommended by Oracle because they create a duplication of functionality. Oracle also does not recommend using LVMs for mirroring because ASM already provides mirroring.
  • NFS files—ASM supports NFS files as ASM disks. Oracle Database has built-in support for the network file system (NFS) and does not depend on OS support for NFS. Although NFS and ASM have overlapping functionality, ASM can load balance or mirror across NFS files.
The procedures for preparing storage resources for ASM are:
  1. Identify or create the storage devices for ASM by identifying all of the storage resource device names that you can use to create an ASM disk group. For example, on Linux systems, device names are typically presented from the /dev directory with the /dev/device_name_identifier name syntax.
  2. Change the ownership and the permissions on storage device resources. For example, the following steps are required on Linux systems:
    • Change the user and group ownership of devices to oracle:dba
    • Change the device permissions to read/write
    • On older Linux versions, you must configure raw device binding
After you have configured ASM, ensure that disk discovery has been configured correctly by setting the ASM_DISKSTRING initialization parameter.
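As an illustration of step 2 on a Linux system, assuming a hypothetical candidate device /dev/sdb1, the ownership and permission changes might look like the following, run as the root user:

# chown oracle:dba /dev/sdb1
# chmod 660 /dev/sdb1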
ASM and Multipathing
Multipathing solutions provide failover by using redundant physical path components. These components include adapters, cables, and switches that reside between the server and the storage subsystem. If one or more of these components fails, then applications can still access their data, eliminating a single point of failure with the Storage Area Network (SAN), Host Bus Adapter, interface cable, or host port on a multiported storage array.
Multipathing is a software technology implemented at the operating system device driver level. Multipathing creates a pseudo device to facilitate the sharing and balancing of I/O operations across all of the available I/O paths. Multipathing also improves system performance by distributing the I/O load across all available paths. This provides a higher level of data availability through automatic failover and failback.
Although ASM is not designed with multipathing functionality, ASM does operate with multipathing technologies. Multipathing technologies are available from many sources. Storage vendors offer multipathing products to support their specific storage products, while software vendors usually develop multipathing products to support several server platforms and storage products.
Using ASM with Multipathing
ASM produces an error if ASM discovers multiple disk device paths. Because a single disk can appear multiple times in a multipath configuration, you must configure ASM to discover only the multipath disk.
With ASM, you can ensure the discovery of a multipath disk by setting the value of the initialization parameter ASM_DISKSTRING equal to the name of the pseudo device that represents the multipath disk. For example, if you are using EMC PowerPath multipathing software, you might set ASM_DISKSTRING to '/dev/rdsk/emcpower*'. When I/O is sent to the pseudo device, the multipath driver intercepts it and provides load balancing to the underlying subpaths. When using ASMLIB with ASM on Linux, you can ensure the discovery of the multipath disk by configuring ASM to scan the multipath disk first or to exclude the single path disks when scanning.
Recommendations for Storage Preparation
The following are guidelines for preparing storage for use with ASM:
  • Configure two disk groups, one for the datafiles and the other for the Flash Recovery Area. For availability purposes, one serves as a backup for the other.
  • Ensure that the LUNs that ASM disk groups use, which can be entire disk drives or partitions of disk drives, have similar storage performance and availability characteristics. In storage configurations with mixed speed drives, such as 10K and 15K RPM, I/O distribution is constrained by the slowest speed drive.
  • Be aware that the ASM data distribution policy is capacity-based. LUNs provided to ASM should have the same capacity within each disk group to avoid an imbalance.
  • Use the storage array hardware RAID 1 mirroring protection when possible to reduce the mirroring overhead on the server. Use ASM mirroring redundancy in the absence of a hardware RAID, or when you need host-based volume management functionality, such as mirroring across storage systems. You can also use ASM mirroring in configurations that mirror data between geographically separated sites over a storage interface.
    Hardware RAID 1 in some lower-cost storage products is inefficient and degrades the performance of the array. ASM redundancy delivers improved performance in lower-cost storage products.
  • Maximize the number of disks in a disk group for maximum data distribution and higher I/O bandwidth.
  • Create LUNs using the outside half of disk drives for higher performance. If possible, use small disks with the highest RPM.
  • Create large LUNs to reduce LUN management overhead.
  • Minimize I/O contention between ASM disks and other applications by dedicating disks to ASM disk groups rather than sharing those disks with other applications.
  • Choose a hardware RAID stripe size that is a power of 2 and less than or equal to the size of the ASM allocation unit.
  • Avoid using a Logical Volume Manager (LVM) because an LVM would be redundant. However, there are situations where certain multipathing or third party cluster solutions require an LVM. In these situations, use the LVM to represent a single LUN without striping or mirroring to minimize the performance impact.
  • For Linux, when possible, use the Oracle ASMLIB feature to address device naming and permission persistency.
    ASMLIB provides an alternative interface for the ASM-enabled kernel to discover and access block devices. ASMLIB provides storage and operating system vendors the opportunity to supply extended storage-related features. These features provide benefits such as improved performance and greater data integrity.

Storage Considerations for Database Administrators

If you are a database administrator who is responsible for configuring your system's storage, then you need to consider not only the initial capacity of your system, but also your plans for future growth. ASM simplifies the task of accommodating growth. However, your growth plans can affect choices such as the size of the LUNs that are presented as ASM disks.
You also need to consider that I/O performance depends on your host bus adapter (HBA) and your storage fabric, not just the storage disks. As you scale up the number of nodes in a cluster, you also need to scale up the storage subsystem.
For high availability, storage is only one component. Within storage, Oracle recommends that you configure the database work area to be separate from the recovery area. You also need to protect against disk failures by using hardware mirroring or host-based mirroring with a normal or high redundancy disk group. Furthermore, consider multipathing for HBAs and the fabric as part of storage availability. With ASM mirroring, the failure group configuration also affects high availability.


Administering ASM Instances

This chapter describes how to administer Automatic Storage Management (ASM) instances. It explains how to configure ASM instance parameters as well as how to set Oracle Database parameters for use with ASM. The chapter also describes ASM instance administration as well as upgrading, patching, and authentication for ASM instance access. You can also use procedures in this chapter to migrate a database to use ASM.
Administering an ASM instance is similar to administering an Oracle Database instance, but the process requires fewer procedures. You can use Oracle Enterprise Manager and SQL*Plus to perform ASM instance administration tasks. This chapter contains the following topics:
Operating With Different Releases of ASM and Database Instances Simultaneously
Automatic Storage Management (ASM) in Oracle Database 11g supports both older and newer software versions of Oracle database instances, including Oracle Database 10g. Both forward and backward compatibility are maintained between Oracle Database 10g and 11g, enabling combinations of 10.1, 10.2, and 11.1 releases for ASM and database instances to interoperate successfully. For compatibility between Oracle Clusterware and ASM, the Oracle Clusterware release must be greater than or equal to the ASM release.
There are additional compatibility considerations when using disk groups with different releases of ASM and database instances. For information about disk group compatibility attribute settings, see "Disk Group Compatibility".

When using different software versions, the database instance supports ASM functionality of the earliest release in use. For example:
  • A 10.1 database instance operating with an 11.1 ASM instance supports only ASM 10.1 features.
  • An 11.1 database instance operating with a 10.1 ASM instance supports only ASM 10.1 features.
The V$ASM_CLIENT view contains the SOFTWARE_VERSION and COMPATIBLE_VERSION columns with information about the software version number and instance compatibility level.
  • The SOFTWARE_VERSION column of V$ASM_CLIENT contains the software version number of the database or ASM instance for the selected disk group connection.
  • The COMPATIBLE_VERSION column contains the setting of the COMPATIBLE parameter of the database or ASM instance for the selected disk group connection.
You can query the V$ASM_CLIENT view on both ASM and database instances. For an example showing a query on the V$ASM_CLIENT view, see Example 4-4. For more information about the V$ASM_CLIENT and V$ASM_* views, see "Using Views to Obtain ASM Information".
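For example, a query similar to the following returns the version information for each client of a mounted disk group:

SELECT db_name, status, software_version, compatible_version
  FROM V$ASM_CLIENT;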

Configuring Initialization Parameters for an ASM Instance

This section discusses initialization parameter files and parameter settings for ASM instances. To install and initially configure an ASM instance, use Oracle Universal Installer (OUI) and Database Configuration Assistant (DBCA). Refer to your platform-specific Oracle Database Installation Guide for details about installing and configuring ASM.
After an ASM instance has been installed on a single-instance Oracle Database or in an Oracle Real Application Clusters (Oracle RAC) environment, the final ASM configuration can be performed. You only need to configure a few ASM-specific instance initialization parameters. The default values are sufficient in most cases.
This section contains the following topics:
Initialization Parameter Files for an ASM Instance
When installing ASM for a single-instance Oracle Database, DBCA creates a separate server parameter file (SPFILE) and password file for the ASM instance. When installing ASM in a clustered ASM environment where the ASM home is shared among all of the nodes, DBCA creates an SPFILE for ASM. In a clustered environment without a shared ASM home, DBCA creates a text-based initialization parameter file (PFILE) for ASM on each node.
You can use an SPFILE or PFILE as the ASM instance parameter file. If you use an SPFILE in a clustered ASM environment, then you must place the SPFILE on a shared raw device or on a cluster file system. If you do not use a shared ASM home, then the ASM instance uses a PFILE.
The same rules for file name, default location, and search order that apply to database initialization parameter files also apply to ASM initialization parameter files. For example, in single-instance UNIX and Linux Oracle Database environments, the server parameter file for ASM has the following path:
$ORACLE_HOME/dbs/spfile+ASM.ora
Setting ASM Initialization Parameters
There are several initialization parameters that you must set for an ASM instance. You can set these parameters when you create your database using DBCA. You can also set some of these parameters after database creation using Oracle Enterprise Manager or SQL ALTER SYSTEM or ALTER SESSION statements.
The INSTANCE_TYPE initialization parameter is the only required parameter in the ASM instance parameter file. The ASM* parameters use suitable defaults for most environments. You cannot use parameters with names that are prefixed with ASM* in database instance parameter files.
Some database initialization parameters are also valid for an ASM instance initialization file. In general, ASM selects the appropriate defaults for database parameters that are relevant to an ASM instance.
Automatic Memory Management for ASM
Automatic memory management automatically manages the memory-related parameters for both ASM and database instances with the MEMORY_TARGET parameter. Automatic memory management is enabled by default on an ASM instance, even when the MEMORY_TARGET parameter is not explicitly set. The default value used for MEMORY_TARGET is acceptable for most environments. This is the only parameter that you need to set for complete ASM memory management. Oracle strongly recommends that you use automatic memory management for ASM.
If you do not set a value for MEMORY_TARGET, but you do set values for other memory related parameters, Oracle internally calculates the optimum value for MEMORY_TARGET based on those memory parameter values. You can also increase MEMORY_TARGET dynamically, up to the value of the MEMORY_MAX_TARGET parameter, just as you can do for the database instance.
Although it is not recommended, you can disable automatic memory management by either setting the value for MEMORY_TARGET to 0 in the ASM parameter file or by running an ALTER SYSTEM SET MEMORY_TARGET=0 statement. When you disable automatic memory management, Oracle reverts to automatic shared memory management and automatic PGA memory management. If you want to revert to Oracle Database 10g release 2 (10.2) functionality to manually manage ASM SGA memory, also run the ALTER SYSTEM SET SGA_TARGET=0 statement. You can then manually manage ASM memory using the information in "ASM Parameter Setting Recommendations", which discusses ASM memory-based parameter settings. Unless otherwise specified, the behavior of all of the automatic memory management parameters in ASM instances is the same as in Oracle Database instances.
Note:
For a Linux environment, automatic memory management cannot work if /dev/shm is not available or is undersized.
Note:
The minimum MEMORY_TARGET for ASM is 256 MB. If you set MEMORY_TARGET to 100 MB, then Oracle increases the value for MEMORY_TARGET to 256 MB automatically.
ASM Parameter Setting Recommendations
This section contains information about the following parameters for ASM:
ASM_DISKGROUPS
The ASM_DISKGROUPS initialization parameter specifies a list of the names of disk groups that an ASM instance mounts at startup. Oracle ignores the value that you set for ASM_DISKGROUPS when you specify the NOMOUNT option at startup or when you issue the ALTER DISKGROUP ALL MOUNT statement. The default value of the ASM_DISKGROUPS parameter is a NULL string. If the parameter value is NULL or is not specified, then ASM does not mount any disk groups.
The ASM_DISKGROUPS parameter is dynamic. If you are using a server parameter file (SPFILE), then you should not need to manually alter the value of ASM_DISKGROUPS. ASM automatically adds a disk group to this parameter when the disk group is successfully created or mounted. ASM also automatically removes a disk group from this parameter when the disk group is dropped or dismounted. The following is an example of setting the ASM_DISKGROUPS parameter dynamically:
SQL> ALTER SYSTEM SET ASM_DISKGROUPS = 'CONTROLFILE, DATAFILE, LOGFILE, STANDBY'
When using a text initialization parameter file (PFILE), you must edit the initialization parameter file to add the name of any disk group that you want mounted automatically at instance startup. You must remove the name of any disk group that you no longer want automatically mounted. The following is an example of the ASM_DISKGROUPS parameter in the initialization file:
ASM_DISKGROUPS = CONTROLFILE, DATAFILE, LOGFILE, STANDBY
Note:
Issuing the ALTER DISKGROUP...ALL MOUNT or ALTER DISKGROUP...ALL DISMOUNT commands does not affect the value of ASM_DISKGROUPS.
ASM_DISKSTRING
The ASM_DISKSTRING initialization parameter specifies a comma-delimited list of strings that limits the set of disks that an ASM instance discovers. The discovery strings can include wildcard characters. Only disks that match one of the strings are discovered. The same disk cannot be discovered twice.
The discovery string format depends on the ASM library and the operating system that are in use. Pattern matching is supported; refer to your operating system-specific installation guide for information about the default pattern matching. For example, on a Linux server that does not use ASMLIB, to limit the discovery process to only include disks that are in the /dev/rdsk/ directory, set ASM_DISKSTRING to:
/dev/rdsk/*
The asterisk is required. To limit the discovery process to only include disks that have a name that ends in disk3 or disk4, set ASM_DISKSTRING to:
/dev/rdsk/*disk3, /dev/rdsk/*disk4
The ? character, when used as the first character of a path, expands to the Oracle home directory. Depending on the operating system, when you use the ? character elsewhere in the path, it is a wildcard for one character.
The default value of the ASM_DISKSTRING parameter is a NULL string. A NULL value causes ASM to search a default path for all disks in the system to which the ASM instance has read and write access. The default search path is platform-specific. Refer to your operating system specific installation guide for more information about the default search path.
ASM cannot use a disk unless all of the ASM instances in the cluster can discover the disk through one of their own discovery strings. The names do not need to be the same on every node, but all disks must be discoverable by all of the nodes in the cluster. This may require dynamically changing the initialization parameter to enable adding new storage.
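For example, the following sketch dynamically extends the discovery string to include a new storage location; both paths are assumptions:

ALTER SYSTEM SET ASM_DISKSTRING = '/dev/rdsk/*', '/dev/newdisks/*';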
ASM_POWER_LIMIT
The ASM_POWER_LIMIT initialization parameter specifies the default power for disk rebalancing. The default value is 1 and the range of allowable values is 0 to 11 inclusive. A value of 0 disables rebalancing. Higher numeric values enable the rebalancing operation to complete more quickly, but might result in higher I/O overhead.
ASM_PREFERRED_READ_FAILURE_GROUPS
The ASM_PREFERRED_READ_FAILURE_GROUPS initialization parameter value is a comma-delimited list of strings that specifies the failure groups that should be preferentially read by the given instance. This parameter is generally used only for clustered ASM instances and its value can be different on different nodes. For example:
diskgroup_name1.failure_group_name1, ...
The ASM_PREFERRED_READ_FAILURE_GROUPS parameter setting is instance specific. This parameter is only valid for clustered ASM instances and the default value is NULL.
Note:
The ASM_PREFERRED_READ_FAILURE_GROUPS parameter is valid only in Oracle RAC environments.
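For example, in a two-site extended cluster, the ASM instance at one site might prefer to read from its local failure group. The disk group name DATA, the failure group name SITEA, and the instance SID in this sketch are assumptions:

ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = 'data.sitea' SID='+ASM1';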
DB_CACHE_SIZE
You do not need to set a value for the DB_CACHE_SIZE initialization parameter if you use automatic memory management.
The setting for the DB_CACHE_SIZE parameter determines the size of the buffer cache. This buffer cache is used to store metadata blocks. The default value for this parameter is suitable for most environments.
DIAGNOSTIC_DEST
The DIAGNOSTIC_DEST initialization parameter specifies the directory where diagnostics for an instance are located. The value for an ASM instance is of the form:
diagnostic_dest/diag/asm/db_name/instance_name
For an ASM instance, db_name defaults to +asm.
INSTANCE_TYPE
The INSTANCE_TYPE initialization parameter must be set to ASM for an ASM instance. This is a required parameter and cannot be modified. The following is an example of the INSTANCE_TYPE parameter in the initialization file:
INSTANCE_TYPE = ASM
LARGE_POOL_SIZE
You do not need to set a value for the LARGE_POOL_SIZE initialization parameter if you use automatic memory management.
The setting for the LARGE_POOL_SIZE parameter is used for large allocations. The default value for this parameter is suitable for most environments.
PROCESSES
You do not need to set a value for the PROCESSES initialization parameter if you use automatic memory management.
The PROCESSES initialization parameter affects ASM, but generally you do not need to modify the setting. The default value provided is usually suitable.
REMOTE_LOGIN_PASSWORDFILE
The REMOTE_LOGIN_PASSWORDFILE initialization parameter specifies whether the ASM instance checks for a password file. This parameter operates the same for ASM and database instances.
SHARED_POOL_SIZE
You do not need to set a value for the SHARED_POOL_SIZE initialization parameter if you use automatic memory management.
The setting for the SHARED_POOL_SIZE parameter determines the amount of memory required to manage the instance. The setting for this parameter is also used to determine the amount of space that is allocated for extent storage. The default value for this parameter is suitable for most environments.
Setting Database Initialization Parameters for Use with ASM
When you do not use automatic memory management in a database instance, the SGA parameter settings for a database instance may require minor modifications to support ASM. When you use automatic memory management, the sizing data discussed in this section can be treated as informational only or as supplemental information to help determine the appropriate values that you should use for the SGA. Oracle highly recommends using automatic memory management.

The following are guidelines for SGA sizing on the database instance:
  • PROCESSES initialization parameter—Add 16 to the current value
  • LARGE_POOL_SIZE initialization parameter—Add an additional 600K to the current value
  • SHARED_POOL_SIZE initialization parameter—Aggregate the values from the following queries to obtain the current database storage size that is either already on ASM or will be stored in ASM. Next, determine the redundancy type and calculate the SHARED_POOL_SIZE using the aggregated value as input.
    SELECT SUM(bytes)/(1024*1024*1024) FROM V$DATAFILE;
    SELECT SUM(bytes)/(1024*1024*1024) FROM V$LOGFILE a, V$LOG b
           WHERE a.group#=b.group#;
    SELECT SUM(bytes)/(1024*1024*1024) FROM V$TEMPFILE 
           WHERE status='ONLINE'; 
    
    • For disk groups using external redundancy, every 100 GB of space needs 1 MB of extra shared pool plus 2 MB
    • For disk groups using normal redundancy, every 50 GB of space needs 1 MB of extra shared pool plus 4 MB
    • For disk groups using high redundancy, every 33 GB of space needs 1 MB of extra shared pool plus 6 MB
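As a worked example under these guidelines, a database with 300 GB of aggregated storage in a normal redundancy disk group would need approximately 300/50 = 6 MB plus 4 MB, or about 10 MB of additional SHARED_POOL_SIZE.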

Disk Group Attributes

Disk group attributes are essentially parameters that are bound to a disk group, rather than an instance. The disk group attributes are:
  • AU_SIZE, which specifies the disk group allocation unit size (see "Allocation Units")
  • COMPATIBLE.ASM
  • COMPATIBLE.RDBMS
  • DISK_REPAIR_TIME
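For example, the following sketch sets the repair timer attribute on a hypothetical disk group DATA; the value is an assumption:

ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '4.5h';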

Administering ASM Instances

The following section describes how to administer ASM instances under the following topics:

Administering ASM Instances with Server Control Utility

In addition to the ASM administration procedures that this section describes, you can use Server Control Utility (SRVCTL) in clustered ASM environments to perform the following ASM administration tasks:
  • Add and remove ASM instance records in the Oracle Cluster Registry (OCR)
  • Enable, disable, start, and stop ASM instances
  • Display the ASM instance configuration and status

Starting Up an ASM Instance

You start an ASM instance similarly to the way in which you start an Oracle database instance with some minor differences. When starting an ASM instance, note the following:
  • To connect to an ASM instance with SQL*Plus, set the ORACLE_SID environment variable to the ASM SID. The default ASM SID for a single-instance database is +ASM, and the default SID for ASM for an Oracle RAC node is +ASMnode_number where node_number is the number of the node. Depending on your operating system and whether you installed ASM in a separate ASM home, you might have to change other environment variables.
  • The initialization parameter file must contain the following entry:
    INSTANCE_TYPE = ASM
    This parameter indicates that an ASM instance, not a database instance, is starting.
  • When you run the STARTUP command, rather than trying to mount and open a database, this command attempts to mount the disk groups specified by the initialization parameter ASM_DISKGROUPS. If you have not entered a value for ASM_DISKGROUPS, then the ASM instance starts and Oracle displays an error that no disk groups were mounted. You can then mount disk groups with the ALTER DISKGROUP...MOUNT command.

    ASM provides a MOUNT FORCE option to enable ASM disk groups to be mounted in normal or high redundancy modes even though some ASM disks may be unavailable to the disk group at mount time. The default behavior without the FORCE option is to fail to mount a disk group that has damaged or missing disks.
    To successfully mount with the MOUNT FORCE option, ASM must be able to find at least one copy of the extents for all of the files in the disk group. In this case, ASM can successfully mount the disk group, but with potentially reduced redundancy. If all disks are available, then using the FORCE option causes the MOUNT command to fail as well. This discourages unnecessary and improper use of the feature.
    ASM puts the unavailable disks in an offline mode if ASM is unable to access them. ASM then begins timing the period that these disks are in an offline mode. If the disk offline time period exceeds the timer threshold, then ASM permanently drops those disks from the disk group. You can change the offline timer after a disk is put in an offline state by using the ALTER DISKGROUP OFFLINE statement.
    The MOUNT FORCE option is useful in situations where a disk is temporarily unavailable and you want to mount the disk group with reduced redundancy while you correct the situation that caused the outage.
    Note:
    An ASM instance mounts an incomplete disk group differently depending on the specified compatibility as discussed under the heading "Disk Group Compatibility".
  • The associated Oracle database instance does not need to be running when you start the associated ASM instance.
The following list describes how ASM interprets SQL*Plus STARTUP command parameters.
  • FORCE Parameter
    Issues a SHUTDOWN ABORT to the ASM instance before restarting it.
  • MOUNT or OPEN Parameter
    Mounts the disk groups specified in the ASM_DISKGROUPS initialization parameter. This is the default if no command parameter is specified.
  • NOMOUNT Parameter
    Starts up the ASM instance without mounting any disk groups.
  • RESTRICT Parameter
    Starts up an instance in restricted mode that enables access only to users with both the CREATE SESSION and RESTRICTED SESSION system privileges. The RESTRICT clause can be used in combination with the MOUNT, NOMOUNT, and OPEN clauses.

    In restricted mode, database instances cannot use the disk groups. In other words, databases cannot open files that are in that disk group. Also, the disk group cannot be mounted by any other instance in the cluster. Mounting the disk group in restricted mode enables only one ASM instance to mount the disk group. This mode is useful to mount the disk group for repairing configuration issues.
The following is a sample SQL*Plus session for starting an ASM instance.

SQLPLUS /NOLOG
SQL> CONNECT SYS AS SYSASM
Enter password: sys_password
Connected to an idle instance.

SQL> STARTUP
ASM instance started

Total System Global Area   71303168 bytes
Fixed Size                 1069292 bytes
Variable Size              45068052 bytes
ASM Cache                  25165824 bytes
ASM disk groups mounted

About Restricted Mode

You can use the STARTUP RESTRICT command to control access to an ASM instance while you perform maintenance. When an ASM instance is active in this mode, all of the disk groups that are defined in the ASM_DISKGROUPS parameter are mounted in RESTRICTED mode. This prevents databases from connecting to the ASM instance. In addition, the restricted clause of the ALTER SYSTEM statement is disabled for the ASM instance. The ALTER DISKGROUP diskgroup_name MOUNT statement is extended to enable ASM to mount a disk group in restricted mode.
When you mount a disk group in RESTRICTED mode, the disk group can only be mounted by one instance. Clients of ASM on that node cannot access that disk group while the disk group is mounted in RESTRICTED mode. The RESTRICTED mode enables you to perform maintenance tasks on a disk group in the ASM instance without interference from clients.
Rebalance operations that occur while a disk group is in RESTRICTED mode eliminate the lock and unlock extent map messaging that occurs between ASM instances in an Oracle RAC environment. This improves the overall rebalance throughput. At the end of a maintenance period, you must explicitly dismount the disk group and remount it in normal mode.
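For example, the following sketch mounts a hypothetical disk group DATA in restricted mode for maintenance and then remounts it in normal mode:

ALTER DISKGROUP data MOUNT RESTRICTED;
-- perform maintenance tasks, then return the disk group to normal mode
ALTER DISKGROUP data DISMOUNT;
ALTER DISKGROUP data MOUNT;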

Cluster Synchronization Services Requirements for ASM

The Cluster Synchronization Services (CSS) daemon provides cluster services for ASM, communication between the ASM and database instances, and other essential services. When DBCA creates a database, the CSS daemon is usually started and configured to start upon restart. If DBCA did not create the database, then you must ensure that the CSS daemon is running before you start the ASM instance.
CSS Daemon on UNIX and Linux Computers
To determine if the CSS daemon is running, run the command crsctl check cssd. If Oracle displays the message CSS appears healthy, then the CSS daemon is running. Otherwise, to start the CSS daemon and configure the host to always start the daemon upon restart, do the following:
  1. Log in to the host as the root user.
  2. Ensure that the entry $ORACLE_HOME/bin is in your PATH environment variable.
  3. Run the following command:
    localconfig add
CSS Daemon on Microsoft Windows Computers
You can also use the crsctl and localconfig commands to check the status of the CSS daemon or to start it. To use Windows GUI tools to determine whether the CSS daemon is properly configured and running, double-click the Services icon in the Windows Control Panel and locate the OracleCSService service. The service's status should be Started and its startup type should be Automatic.
Note:
Refer to your Windows documentation for information about how to start a Windows service and how to configure it for automatic startup.

Shutting Down an ASM Instance

The ASM shutdown process is initiated when you run the SHUTDOWN command in SQL*Plus. Before you run this command, ensure that the ORACLE_SID environment variable is set to the ASM SID so that you can connect to the ASM instance. Depending on your operating system and whether you installed ASM in a separate ASM home, you might have to change other environment variables before starting SQL*Plus. Oracle strongly recommends that you shut down all database instances that use the ASM instance before attempting to shut down the ASM instance.


SQLPLUS /NOLOG
SQL> CONNECT SYS AS SYSASM
Enter password: sys_password
Connected.
SQL> SHUTDOWN NORMAL

The following list describes the SHUTDOWN modes and describes the behavior of the ASM instance in each mode.
  • NORMAL Clause
    ASM waits for any in-progress SQL to complete before performing an orderly dismount of all of the disk groups and shutting down the ASM instance. Before the instance is shut down, ASM waits for all of the currently connected users to disconnect from the instance. If any database instances are connected to the ASM instance, then the SHUTDOWN command returns an error and leaves the ASM instance running. NORMAL is the default shutdown mode.
  • IMMEDIATE or TRANSACTIONAL Clause
    ASM waits for any in-progress SQL to complete before performing an orderly dismount of all of the disk groups and shutting down the ASM instance. ASM does not wait for users currently connected to the instance to disconnect. If any database instances are connected to the ASM instance, then the SHUTDOWN command returns an error and leaves the ASM instance running. Because the ASM instance does not contain any transactions, the TRANSACTIONAL mode is the same as the IMMEDIATE mode.
  • ABORT Clause
    The ASM instance immediately shuts down without the orderly dismount of disk groups. This causes recovery to occur upon the next ASM startup. If any database instance is connected to the ASM instance, then the database instance aborts.
ASM Background Processes
The following background processes are an integral part of Automatic Storage Management:
  • ARBn performs the actual rebalance data extent movements in an Automatic Storage Management instance. There can be many of these processes running at a time, named ARB0, ARB1, and so on.
  • ASMB runs in a database instance that is using an ASM disk group. ASMB communicates with the ASM instance, managing storage and providing statistics. ASMB can also run in the ASM instance. ASMB runs in ASM instances when the ASMCMD cp command runs or when the database instance first starts if the SPFILE is stored in ASM.
  • GMON maintains disk membership in ASM disk groups.
  • MARK marks ASM allocation units as stale following a missed write to an offline disk. This essentially tracks which extents require resync for offline disks.
  • RBAL runs in both database and ASM instances. In the database instance, it does a global open of ASM disks. In an ASM instance, it also coordinates rebalance activity for disk groups.
The processes described in the previous list are important for the ASM instance and should not be modified. In addition to the processes listed in this section, there are additional processes that run in both the ASM and database instances, such as database writer process (DBWn), log writer process (LGWR), Process Monitor Process (PMON), and System Monitor Process (SMON).
Also, there are ASM slave processes that run periodically to perform a specific task. For example, the Snnn transient slave process is responsible for performing the resync of extents at the time that the disk is brought online. The slave processes are not technically background processes.
For more information about Oracle database background processes, see the discussion about background processes in Oracle Database Concepts. The V$BGPROCESS view displays information about background processes.
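For example, a query similar to the following lists the background processes that are currently running on an instance:

SELECT name, description FROM V$BGPROCESS WHERE paddr <> '00';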

Using ASM Rolling Upgrades

ASM rolling upgrades enable you to independently upgrade or patch clustered ASM nodes without affecting database availability, thus providing greater uptime. Rolling upgrade means that all of the features of a clustered ASM environment function when one or more of the nodes in the cluster uses different software versions.
Note:
Rolling upgrades only apply to clustered ASM instances, and you can only perform rolling upgrades on environments with Oracle Database 11g or later. In other words, you cannot use this feature to upgrade from Oracle Database 10g to Oracle Database 11g.
To perform a rolling upgrade, your environment must be prepared. If you are using Oracle Clusterware, then your Oracle Clusterware must be fully upgraded to the next patch or release version before you start the ASM rolling upgrade. In addition, you should prepare your Oracle Clusterware in a rolling upgrade manner to ensure high availability and maximum uptime.
Before you patch or upgrade the ASM software on a node, you must place the ASM cluster into rolling upgrade mode. This enables you to begin an upgrade and operate your environment in multiversion software mode. Do this by issuing the following SQL statement, where number includes the version number, release number, update number, port release number, and port update number. Enter these values for number as a decimal-separated string enclosed in single quotation marks, for example, '11.1.0.7.0', as in the following example:
ALTER SYSTEM START ROLLING MIGRATION TO '11.1.0.7.0';
The instance from which you run this statement verifies whether the value that you specified for number is compatible with the current installed version of your software. When the upgrade begins, the behavior of the clustered ASM environment changes, and only the following operations are permitted on the ASM instance:
  • Disk group mount and dismount
  • Database file open, close, resize, and delete
  • Limited access to fixed views and fixed packages
    Note:
    You can query fixed views and run anonymous PL/SQL blocks using fixed packages, such as DBMS_DISKGROUP. However, only local views are available; Oracle disables all global views when a clustered ASM environment is in rolling upgrade mode.
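For example, while the cluster is in rolling upgrade mode you can still query a local fixed view such as V$ASM_OPERATION, which shows any long-running operations on the local instance (an illustrative sketch):

SQL> SELECT group_number, operation, state FROM v$asm_operation;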
After the rolling upgrade has been started, you can shut down each ASM instance and perform the software upgrade. On start up, the updated ASM instance can rejoin the cluster. When you have migrated all of the nodes in your clustered ASM environment to the latest software version, you can end the rolling upgrade mode.
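A minimal per-node sketch of this step follows; the software upgrade itself is performed with your usual installation tools:

SQL> CONNECT / AS SYSASM
SQL> SHUTDOWN IMMEDIATE
REM upgrade or patch the ASM software on this node, then restart the instance:
SQL> STARTUP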
If a disk goes offline when the ASM instance is in rolling upgrade mode, then the disk remains offline until the rolling upgrade has ended. Also, the timer for dropping the disk is stopped until the ASM cluster is out of rolling upgrade mode.
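To see any disks that are offline while the upgrade is in progress, you can query V$ASM_DISK (an illustrative sketch; MODE_STATUS shows OFFLINE for such disks, and REPAIR_TIMER reflects the suspended drop timer):

SQL> SELECT name, mode_status, repair_timer FROM v$asm_disk WHERE mode_status = 'OFFLINE';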
You can also use the same procedure to roll back node upgrades if you encounter problems with the upgrade. The ASM functionality is compatible with the lowest software version that is on any of the nodes in the cluster during an upgrade.
The upgrade fails if there are rebalancing operations occurring anywhere in the cluster. You must wait until the rebalance completes before attempting to start a rolling upgrade. In addition, as long as there is one instance active in the cluster, the rolling upgrade state is preserved.
New instances that join the cluster immediately switch to a rolling upgrade state on startup. In other words, if a rolling upgrade is in progress in a clustered ASM environment and if any new ASM instance joins the cluster, then the new ASM instance is notified that the cluster is in rolling upgrade mode. You can use the following SQL function to query the state of a clustered ASM environment:
SELECT SYS_CONTEXT('sys_cluster_properties', 'cluster_state') FROM DUAL;
If all of the instances in a clustered ASM environment stop running, then when any of the ASM instances restart, the restarted instance will not be in rolling upgrade mode. To perform the upgrade after your instances restart, you must re-run the commands to restart the rolling upgrade operation. When the rolling upgrade completes, run the following SQL statement:
ALTER SYSTEM STOP ROLLING MIGRATION;
After you run this statement, Oracle performs the following operations:
  • Validates that all of the members of the cluster are at the same software version. If there are one or more ASM instances that have different versions, then Oracle displays an error and the cluster continues to be in rolling upgrade mode.
  • Updates the cluster-wide state so that the ASM instances are no longer in rolling upgrade mode; the ASM instances begin supporting the full clustered ASM functionality.
  • Restarts any rebalance operations that were pending, provided the setting of the ASM_POWER_LIMIT parameter allows them to run, as shown in the example below.
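For example, you can check the current rebalance power setting, and adjust it if necessary, before ending the rolling upgrade (an illustrative sketch; the value 4 is only an example):

SQL> SHOW PARAMETER asm_power_limit
SQL> ALTER SYSTEM SET ASM_POWER_LIMIT = 4;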

Patching ASM Instances

For Oracle RAC environments, if you configure ASM in a home that is separate from the Oracle Database home, then when you apply patches you must apply them in a specific order. You must first ensure that your Oracle Clusterware version is at least equal to the version of the patch that you are applying. This may require you to patch the Oracle Clusterware home first. Then apply the patch to the ASM home, and finally, apply the patch to the Oracle Database home.
Note:
You must apply the patch to the ASM home before you apply it to the Oracle Database home.

Authentication for Accessing ASM Instances

The ASM and database instances must have equivalent operating system access rights. For example, the ASM instance and the database instance must have identical read and write permissions for the disks that comprise the related ASM disk group. For UNIX systems, this is typically provided through shared UNIX group membership. On Windows systems, the ASM service can run as Administrator.
An ASM instance does not have a data dictionary, so the only way to connect to an ASM instance is by using one of three system privileges, SYSASM, SYSDBA, or SYSOPER. There are three modes of connecting to ASM instances:
  • Local connection using operating system authentication
  • Local connection using password authentication
  • Remote connection by way of Oracle Net Services using password authentication
Note:
If you create an ASM instance using Database Configuration Assistant (DBCA), or if you create the ASM instance using Database Upgrade Assistant (DBUA), then the user SYS should have SYSASM privileges.
SYSASM Privilege for ASM
SYSASM is a system privilege that enables the separation of the SYSDBA database administration privilege from the ASM storage administration privilege. Access to the SYSASM privilege is granted by membership in an operating system group that is designated as the OSASM group. This is similar to SYSDBA and SYSOPER privileges, which are system privileges granted through membership in the groups designated as the OSDBA and OSOPER operating system groups. You can designate one group for all of these system privileges, or you can designate separate groups for each operating system privilege.
You can divide system privileges during ASM installation, so that database administrators, storage administrators, and database operators each have distinct operating system privilege groups. Use the Custom Installation option to designate separate operating system groups as the operating system authentication groups for privileges on ASM. Table 3-1 lists the operating system authentication groups for ASM, and the privileges that their members are granted:
Table 3-1 Operating System Authentication Groups for ASM

Group            Privilege Granted to Members
---------------  --------------------------------------------------------------
OSASM            SYSASM privilege, which provides full administrative
                 privilege for the ASM instance.
OSDBA for ASM    SYSDBA privilege on the ASM instance. This privilege grants
                 access to data stored on ASM and, in the current release,
                 also grants the SYSASM administrative privileges.
OSOPER for ASM   SYSOPER privilege on the ASM instance.
If you do not want to divide system privilege access into separate operating system groups, then you can designate a single operating system group whose members are granted the OSDBA, OSOPER, OSASM, OSDBA for ASM, and OSOPER for ASM privileges. The default operating system group name for all of these is dba. You can also specify separate OSASM, OSDBA for ASM, and OSOPER for ASM groups when you perform a custom installation of ASM, and separate OSDBA and OSOPER groups when performing a custom database installation.
Whether you create separate operating system privilege groups or use one group to provide operating system authentication for all system privileges, you should use SYSASM to connect to and administer an ASM instance. In Oracle 11g release 1, both SYSASM and SYSDBA are supported privileges; however, if you use the SYSDBA privilege to administer an ASM instance, then Oracle will write warning messages to the alert log, indicating that the SYSDBA privilege is deprecated on an ASM instance for administrative commands. In a future release, the privilege to administer an ASM instance with SYSDBA will be removed.
Operating system authentication using membership in the group or groups designated as OSDBA, OSOPER, and OSASM is valid on all Oracle platforms. Connecting to an ASM instance as SYSASM grants you full access to all of the available ASM disk groups and management functions.
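For example, once connected with the SYSASM privilege, you can list all of the disk groups that the instance manages (an illustrative query):

SQL> SELECT name, state, type FROM v$asm_diskgroup;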

Accessing an ASM Instance

This section describes how to connect to an ASM instance. In the examples where you provide a user name, you are prompted for a password.
Note:
The SYS user is created by default by DBCA during the installation process and is granted all three system privileges.
Use the following statement to connect locally to an ASM instance using operating system authentication:
sqlplus / AS SYSASM
Use the following statement to connect locally using password authentication:
sqlplus SYS AS SYSASM
Use the following statement to connect remotely using password authentication:
sqlplus sys@\"myhost.mydomain.com:1521/asm\" AS SYSASM
Use the following statement to connect to an ASM instance with SYSDBA privilege:
sqlplus / AS SYSDBA
Oracle writes messages to the alert log if you issue ASM administrative commands that, in future releases, will be available only with the SYSASM privilege.

Creating Users with the SYSASM Privilege

When you are logged in to an ASM instance as SYSASM, you can use the combination of CREATE USER and GRANT SQL statements to create a new user who has the SYSASM privilege. These commands update the password file for the local ASM instance. Similarly, you can revoke the SYSASM privilege from a user using the REVOKE command, and you can drop a user from the password file using the DROP USER command. The following example describes how to perform these operations for the user identified as new_user:
REM create a new user, then grant the SYSASM privilege
SQL> CREATE USER new_user IDENTIFIED by new_user_passwd;
SQL> GRANT SYSASM TO new_user;

REM connect the user to the ASM instance
SQL> CONNECT new_user AS SYSASM;
Enter password:

REM revoke the SYSASM privilege, then drop the user
SQL> REVOKE SYSASM FROM new_user;
SQL> DROP USER new_user;
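You can verify these changes by querying the password file users; in this release the V$PWFILE_USERS view includes a SYSASM column (an illustrative query):

SQL> SELECT username, sysasm FROM v$pwfile_users;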

Operating System Authentication for ASM

Membership in the operating system group designated as the OSASM group provides operating system authentication for the SYSASM system privilege. OSASM is provided exclusively for ASM. Initially, only the user that installs ASM is a member of the OSASM group, if you use a separate operating system group for that privilege. However, you can add other users. Members of the OSASM group are authorized to connect using the SYSASM privilege and have full access to ASM, including administrative access to all disk groups that are managed by that ASM instance.
On Linux and UNIX systems, the default operating system group designated as OSASM, OSOPER, and OSDBA is dba. On Windows systems, the default name designated as OSASM, OSOPER, and OSDBA is ora_dba.
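For example, on a Linux system an existing operating system user can be added to the default dba group as follows (a sketch; asm_admin is a hypothetical user name, and the group name will differ if you designated a separate OSASM group):

# run as root
usermod -a -G dba asm_admin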
Note:
The user who is the software owner for the Oracle Database home, which Oracle documentation describes as the oracle user, must be a member of the group that is designated as the OSDBA group for the ASM home. This is configured automatically when ASM and Oracle Database share the same Oracle home. If you install the ASM and database instances in separate homes, then you must ensure that you create a separate OSDBA group for ASM and that you designate the correct group memberships for each OSDBA group. Otherwise, the database instance will not be able to connect to the ASM instance.
Password File Authentication for ASM

Password file authentication for ASM can work both locally and remotely. To enable password file authentication, you must create a password file for ASM. A password file is also required to enable Oracle Enterprise Manager to connect to ASM remotely.
If you select the ASM storage option, then DBCA creates a password file for ASM when it initially configures the ASM disk groups. As with a database password file, the only user added to the password file when DBCA creates it is SYS. To add other users to the password file, you can use the CREATE USER and GRANT commands as described previously in "Creating Users with the SYSASM Privilege".
If you configure an ASM instance without using DBCA, then you must manually create a password file and GRANT the SYSASM privilege to user SYS.
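A minimal sketch of these manual steps, assuming a default ASM instance named +ASM on a UNIX-style system (the file location follows the common convention and the password is a placeholder):

$ orapwd file=$ORACLE_HOME/dbs/orapw+ASM password=change_on_install

REM then connect to the ASM instance and grant the privilege
SQL> CONNECT / AS SYSASM
SQL> GRANT SYSASM TO SYS;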
Migrating a Database to Use ASM
With a new installation of Oracle Database and ASM, you can initially create your database and select the ASM storage option. If you have an existing Oracle database that stores database files in the operating system file system or on raw devices, then you can migrate some or all of your datafiles to ASM storage.
Oracle provides several methods for migrating your database to ASM. Using ASM will enable you to realize the benefits of automation and simplicity in managing your database storage. You can use the following methods to migrate to ASM as described in this section:
Note:
You must upgrade to at least Oracle Database 10g before migrating your database to ASM.

Using Oracle Enterprise Manager to Migrate Databases to ASM

Enterprise Manager enables you to perform cold and hot database migration with a GUI. You can access the migration wizard from the Enterprise Manager Home page under the Change Database heading.
Manually Migrating to ASM Using Oracle Recovery Manager
You can use Oracle Recovery Manager (RMAN) to manually migrate to ASM. You can also use RMAN to migrate a single tablespace or datafile to ASM.
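A highly simplified RMAN sketch of a cold database migration, assuming a disk group named +DATA and that the server parameter file and control files have already been relocated to ASM (see the MAA white papers described in the next section for complete procedures):

RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP MOUNT;
RMAN> BACKUP AS COPY DATABASE FORMAT '+DATA';
RMAN> SWITCH DATABASE TO COPY;
RMAN> ALTER DATABASE OPEN;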
Migrating to ASM Best Practices White Papers on Oracle Technology Network (OTN)
The Oracle Maximum Availability Architecture (MAA) Web site provides excellent best practices technical white papers based on different scenarios, such as:
  • Minimal Downtime Migration to ASM
  • Platform Migration using Transportable Tablespaces
  • Platform Migration using Transportable Database

ORA-00059: Maximum Number Of DB_FILES Exceeded in 19C database

When adding a datafile to my 19c database, I encounter the error below. SQL> alter tablespace DATA  add datafile '/u01/data/data15.dbf...