Friday, 30 August 2013

Enable Oracle auditing


With increasing application complexity, data privacy laws, hosting, outsourcing, and the use of electronic records in litigation, the need for database and application audit reports is growing.

  • If a client sued your company, could you give a full report listing changes to their account data ?
  • If a code change lost your company millions of dollars, could you identify who ran it ?
  • If you found client credit card numbers on the internet, could you identify the leak ?

All transactions that run in an Oracle database will leave evidence in the table blocks, undo tablespace, redo logs, control files, or in memory, but a lot of that evidence is transient, and searching through many GB of files and memory dumps with a hex editor is expensive and time consuming. With a few simple commands you can start collecting audit information for future use to clearly identify who, what, where, and when. In addition to tracking changes, auditing has side benefits for system management like identifying unused tables, unused indexes, inactive accounts, failed transactions etc.

Oracle databases allow you to audit everything from a failed user login down to selecting a specific table attribute or row, but remember that auditing does use CPU and IO, so it will slow down the transactions you audit and increase disk usage. There is a cost/benefit trade-off between audit granularity and performance, so if DML execution time is critical on your system you should choose an audit method with minimum impact, only audit specific data like financial payments, or only audit DDL. Because audit can write a lot of data to tables or logs you need to define a policy for retention, purging, and archival. Remember that any audit records on a compromised system may have been modified to cover up the breach, so extra care should be taken to secure audit records on the OS, in the database, or preferably on a secure remote server.

As a rough guide, if you increase security on a system, you should increase auditing to confirm that security. If you are worried about access to sensitive data, then along with database security and auditing you should try to limit the number of access routes to that data, the number of copies of that data kept on disk, and also implement a data scrubbing process to execute when refreshing DEV and TEST environments from PROD.
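A scrub step for refreshed environments might look like the following minimal sketch, reusing the PAYROLL schema from the examples later in this article (the table and column names are illustrative only):

```
-- Hypothetical scrub run against a TEST environment after refresh from PROD
update payroll.employee
   set employee_name = 'EMPLOYEE_'||employee_id,
       salary        = null;
commit;
```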

  • Low security databases :- e.g. development and test databases, read-only historical databases, databases with transient data (e.g. not backed up). At an absolute minimum all databases should be auditing logons. This will produce minimal records of who logged in, with no impact on DML transaction times.
  • Medium security databases :- The majority of databases should fit in this category - e.g. stock exchange trading databases, telecoms billing databases, finance systems. These databases should audit SESSION, DDL, and DCL to identify changes in table structure, code, and privileges. This will produce minimal records with no impact on DML transaction times. If additional audit is needed, then enabling supplemental logging will capture all DDL and DML transactions with minimal additional server load so they can be retrieved by log mining if needed.
  • High security databases :- This includes databases with sensitive information like HR salary and bonus information, credit card details, password stores, audit stores etc. These databases should audit SESSION, DDL, and DCL, and data owners should identify what level of DML audit or even select audit is required, which tables should be audited, and what form of auditing is easiest to manage. Enabling supplemental logging will capture all DML and DDL transactions with minimal additional server load so they can be retrieved by log mining if needed. If the audit records need to be accessed on a regular basis, then an auditing solution which leaves records in tables would be a better solution than Logminer.

Below are some basic methods that Oracle provides for database auditing, with examples of how to access the audit information.

AUDIT_SYS_OPERATIONS parameter - Audit connections by SYS and SYSDBA to .aud files.
When you enable the parameter audit_sys_operations the database will write the actions of SYS sessions to audit files in the directory specified by AUDIT_FILE_DEST. This parameter should be enabled on ALL production databases.
Enable it
alter system set audit_sys_operations=TRUE scope=spfile;
-- then restart
shutdown immediate;

Verify it
show parameter audit_sys_operations;

What does it give me ?

Check the .aud files. You will find a log of the statements run by SYS sessions, including SESSION, DML, DDL, and select activity.

AUDIT_TRAIL parameter - Audit SESSION, DCL, DDL, DML, Select statements to table AUD$ or OS files.
Setting up Oracle audit using the audit_trail parameter will cause the database to write audit records to table SYS.AUD$ or OS files. The DBA can then choose which audit options to enable to capture information about SESSION, DDL, DCL, DML, and select statements. This is the easiest and most common method of auditing an Oracle database, and this parameter should be set on ALL production databases. Usually there will be a lot more DML statements than DDL and DCL, so many companies choose to only audit SESSION, DDL, and DCL.

AUDIT_TRAIL settings

Parameter Value Meaning
DB :- Enables database auditing and directs all audit records to the database audit trail (SYS.AUD$), except for records that are always written to the operating system audit trail
DB_EXTENDED :- Does all actions of AUDIT_TRAIL=DB and also populates the SQL bind and SQL text columns of the SYS.AUD$ table
XML :- Enables database auditing and directs all audit records in XML format to an operating system file
XML_EXTENDED :- Does all actions of AUDIT_TRAIL=XML, adding the SQL bind and SQL text columns
OS :- (recommended) Enables database auditing and directs all audit records to an operating system file

Enable it

alter system set audit_trail=DB,EXTENDED scope=spfile;
shutdown immediate;

-- Audit logon, DDL and DCL (the following is only a basic list of audit options - see documentation for more detail)

audit create session by access; -- this collects login details including OSUSER, HOST etc, but unfortunately not program.
audit audit system by access;
audit grant any privilege by access;
audit grant any object privilege by access;
audit grant any role by access;
audit create user by access;
audit create any table by access;
audit create public database link by access;
audit create any procedure by access;
audit alter user by access;
audit alter any table by access;
audit alter any procedure by access;
audit alter database by access;
audit alter system by access;
audit alter profile by access;
audit drop user by access;
audit drop any procedure by access;
audit drop any table by access;
audit drop profile by access;

-- If you choose to audit DML on a schema then it may generate a lot of data. Enable DML audit selectively.

audit select table, insert table, update table, delete table by payroll by access;

Verify it
show parameter audit_trail;

select * from dba_stmt_audit_opts;
select * from dba_priv_audit_opts;

What does it give me ?

Select from the AUD$, DBA_AUDIT_TRAIL, DBA_AUDIT_SESSION tables/views to find records relating to audited SESSION, DDL, DML, DCL, Select statements.

Basic audit output listing user actions
select os_username,username,timestamp,action_name,returncode from dba_audit_session;
 Audit report including the transaction statement for user PAYROLL.

select os_username,username,timestamp,action_name,sql_text
from dba_audit_trail where username = 'PAYROLL' order by timestamp;

-- Remember that any client-supplied information such as MODULE and OSUSER can be faked by the client, so it should be separately verified.

AUDIT VAULT - copy audit records to a remote server for protection and analysis.
Oracle Audit Vault is an Oracle product for storing and managing audit settings and audit records, which must be purchased in addition to your database license. An agent is installed on the database server, collects audit records from AUD$, FGA_LOG$, or OS files, and loads them into an Audit Vault warehouse which is secured by Database Vault. Audit Vault can also extract audit settings for viewing, modification, or copying between databases. Audit Vault comes with pre-built fact and dimension tables, along with load scripts and pre-built reports for viewing logons, transactions etc. You can use the Audit Vault console to configure auditing rules.

AWR / STATSPACK - record information from dynamic tables for later analysis.
AWR is part of the Database Diagnostic Pack, which is an extra cost option on top of your database license. AWR is intended to replace STATSPACK, but STATSPACK is still available in Oracle 11g. Both tools read from database dynamic views and write to WRH$ and STATS$ tables for later use in performance analysis reports, but these tables can also be used for checking transaction history. Because these tools were written for performance analysis they only log transactions with CPU and IO over pre-set thresholds, so the WRH$ and STATS$ tables are only useful for checking large transactions, as small transactions may not be recorded.
Enable it
Take a snapshot to save dynamic view information to permanent tables.
-- AWR
exec dbms_workload_repository.create_snapshot;
-- Statspack
exec statspack.snap;
Verify it
select snap_id, first_time from sys.wrh$_log;

select snap_id, snap_time from perfstat.stats$snapshot;

What does it give me ?

Select from the WRH$ and STATS$ tables to find large DDL, DML, Select transactions that started prior to the time the snap was taken.

--AWR report listing sql statements for table "employee"

select sql_text from wrh$_sqltext where sql_text like '%employee%';
--Statspack report listing sql statements for table "employee"
select sql_text from perfstat.STATS$SQLTEXT where sql_text like '%employee%';


COMMENTS - add statement and commit comments to identify transactions.
To more easily trace the source of statements in V$SQL or DBA_2PC_PENDING, developers can add statement and commit comments.

Example of a commit comment

--run an insert to a remote database with a commit comment
insert into payroll.employee_interface@prod values (1,'JOHN');
commit comment 'EMPLOYEE_INTERFACE LOAD';
What does it give me ?
select local_tran_id, tran_comment from dba_2pc_pending;
Example of a statement comment
insert /* EMPLOYEE_LOAD_PKG.LOAD_EMPLOYEE */ into payroll.employee select * from payroll.employee_interface;
What does it give me ?
select sql_text from v$sql where sql_text like '%EMPLOYEE_LOAD_PKG%';
DATABASE VAULT - an extra layer of security to stop and/or audit access to specific data and transactions.

If you have the Database Vault option enabled you can use it to audit on DDL and DML. For example if a new Realm was created to protect the PAYROLL schema, and set to audit on Success and Failure, then it would record all actions on that schema. Database Vault is an extra cost option on top of your database license.

Enable it

Create a new PAYROLL realm
exec dvsys.dbms_macadm.create_realm('PAYROLL','PAYROLL','Y',3);
Add the payroll schema to the realm
exec dvsys.dbms_macadm.add_auth_to_realm('PAYROLL','PAYROLL',1);
Secure all objects in the payroll schema.
exec dvsys.dbms_macadm.add_object_to_realm('PAYROLL','PAYROLL','%','%');
Verify it
select id#, name from dvsys.realm_t$ where name = 'PAYROLL';
select grantee from dvsys.realm_auth$ where realm_id# in (
select id# from dvsys.realm_t$ where name = 'PAYROLL');
select owner, object_name from dvsys.realm_object$ where realm_id# in (
select id# from dvsys.realm_t$ where name = 'PAYROLL');
What does it give me ?

The ability to select DDL, DML, SELECT audit information from tables owned by DVSYS, or view security reports in the Database Vault console.

select timestamp, username, userhost, action_name, returncode from dvsys.audit_trail$ where username = 'PAYROLL';
 Or you can use the Database Vault console to view security violations

DBMS_APPLICATION_INFO - populate MODULE and ACTION in V$SESSION.
You can call DBMS_APPLICATION_INFO.SET_MODULE and DBMS_APPLICATION_INFO.SET_ACTION from the application code to populate V$SESSION.MODULE and V$SESSION.ACTION. Well-written applications should use DBMS_APPLICATION_INFO to provide more information about running programs.
Enable it
exec dbms_application_info.set_module('EMPLOYEE_INTERFACE','IMPORT_EMPLOYEE_PKG');
exec dbms_application_info.set_action('VERIFY_EMPLOYEE_PKG');

What does it give me ?
The ability to more easily track the actions of a program via V$SESSION, and associated trace files.

select module, action from v$session where module = 'EMPLOYEE_INTERFACE';

DBMS_CRYPTO - collect checksums to be certain
Audit records found on a compromised system are always suspect, so to be absolutely certain of what has changed one option is to record row and object checksums and keep them in a remote secure location. Very few databases require this level of audit.
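A minimal sketch of row checksumming with DBMS_CRYPTO, reusing the PAYROLL.EMPLOYEE table from the examples in this article (the concatenation format is an assumption, and the session needs EXECUTE on DBMS_CRYPTO):

```
-- Generate a SHA-1 checksum per row; store the results on a secure remote server
-- for later comparison against a fresh run of the same query.
select employee_id,
       dbms_crypto.hash(
         utl_raw.cast_to_raw(employee_id||'|'||employee_name||'|'||salary),
         dbms_crypto.hash_sh1) as row_checksum
from payroll.employee;
```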

DBMS_ERRLOG - Error Logging into tables
Error logging tables make use of the DBMS_ERRLOG package and the "log errors into" clause.
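A short sketch of DML error logging, reusing the payroll tables from the other examples in this article:

```
-- Create the error log table ERR$_EMPLOYEE for PAYROLL.EMPLOYEE
exec dbms_errlog.create_error_log(dml_table_name => 'PAYROLL.EMPLOYEE');

-- Failed rows are logged to the error table instead of aborting the whole statement
insert into payroll.employee
select * from payroll.employee_interface
log errors into payroll.err$_employee ('EMPLOYEE_LOAD') reject limit unlimited;

-- Review the failures and the tag that identifies the load
select ora_err_number$, ora_err_mesg$, ora_err_tag$ from payroll.err$_employee;
```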

DBMS_FGA - Audit specific statements to table FGA_LOG$
Fine Grained Audit allows you to audit very specific changes to reduce audit records and limit the impact on performance - e.g. only audit a select statement when it includes the SALARY attribute, or only audit changes to bank account details.
Enable it

Audit select of attribute PAYROLL.EMPLOYEE.SALARY
begin
  dbms_fga.add_policy(
    object_schema   => 'PAYROLL',
    object_name     => 'EMPLOYEE',
    policy_name     => 'EMPLOYEE_SALARY_SELECT',
    audit_column    => 'SALARY',
    statement_types => 'SELECT');
end;
/

Verify it
select * from dba_audit_policies;
What does it give me ?
The ability to audit very specific actions to minimise audit records - e.g only selects that include a specific attribute.
select timestamp,db_user,object_schema,object_name,scn,sql_text
from dba_fga_audit_trail
where db_user = 'PAYROLL';

DBMS_FLASHBACK / FLASHBACK QUERY - select old data from UNDO tablespace
Flashback query selects from the UNDO tablespace. Undo records are aged out of the undo tablespace, so to retain them longer you may want to increase undo retention, increase the size of the undo tablespace, or use UNDO RETENTION GUARANTEE when you create your undo tablespace. As UNDO records are logged by normal database transactions, there is no extra workload to run Flashback Query, but undo records are normally aged out of the undo tablespace, so this is not a good solution for checking what happened last month. For long term storage of UNDO records see FLASHBACK ARCHIVE below.
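For example, retention can be extended like this (the undo tablespace name is an assumption - check V$TABLESPACE on your system):

```
-- Ask the database to keep undo for 7 days (best effort unless guaranteed)
alter system set undo_retention = 604800 scope=both;
-- Guarantee retention, at the risk of DML failing if undo space runs out
alter tablespace undotbs1 retention guarantee;
```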

Using DBMS_FLASHBACK to select a previous state from the employee table

exec dbms_flashback.enable_at_time(to_timestamp('2009-07-08 22:37:24', 'YYYY-MM-DD HH24:MI:SS'));
select salary from payroll.employee where employee_id = 1;
exec dbms_flashback.disable;

 Using the "as of timestamp" clause to select a previous state from the employee table
select salary
from payroll.employee as of timestamp to_timestamp('2009-07-08 22:37:24', 'YYYY-MM-DD HH24:MI:SS')
where employee_id = 1;

DBMS_LOGMNR / SUPPLEMENTAL LOGGING - Log transaction statements to REDO logs.
Supplemental logging adds the transaction statement into the redo logs. Logminer can then extract and store those transactions from the archive logs. Supplemental logging only adds a small performance impact, and you only need to mine the data you want, when you want it, so this is a low impact option for auditing all DDL and DML, but it does not audit selects.
Enable it
shutdown immediate;
startup mount exclusive;
alter database add supplemental log data;
shutdown immediate;

Verify it
select supplemental_log_data_min from v$database;
What does it give me?
Since Logminer records the redo and undo for all transactions in the database including DDL, DML, DCL, triggers, audit etc there is a huge amount of data collected, including all DDL and DML on the tables with supplemental logging, but unfortunately not select statements.

Mine a specific log

select name, first_time from v$archived_log;
exec dbms_logmnr.add_logfile(logfilename => '/ora01/flash_recovery_area/AUDIT1/archivelog/2009_07_08/o1_mf_1_108_55b65m23_.arc', OPTIONS => DBMS_LOGMNR.NEW);
exec dbms_logmnr.start_logmnr(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
Generate a report on the transactions that ran against table PAYROLL.EMPLOYEE.
select scn, operation, sql_redo, sql_undo from V$LOGMNR_CONTENTS where table_name = 'EMPLOYEE' and username = 'PAYROLL';

CHANGE DATA CAPTURE - Log table changes to a CDC table.
Change Data Capture uses Logminer to log data changes to a change table.
Enable it.

Create the change table.
exec dbms_logmnr_cdc_publish.create_change_table ('payroll','payroll_ct','SYNC_SET','payroll','employee','employee_id NUMBER,salary NUMBER','BOTH', 'Y','N','N','Y','N','Y','N','','N');
Create a logminer subscription handle
variable subhandle NUMBER;
execute dbms_logmnr_cdc_subscribe.get_subscription_handle (CHANGE_SET => 'SYNC_SET', DESCRIPTION => 'Changes to employee table', SUBSCRIPTION_HANDLE => :subhandle);
Create a logminer subscription
execute dbms_logmnr_cdc_subscribe.subscribe (subscription_handle => :subhandle, source_schema => 'payroll', source_table => 'employee', column_list => 'employee_id, salary');
Activate the subscription
execute dbms_logmnr_cdc_subscribe.activate_subscription (SUBSCRIPTION_HANDLE => :subhandle);
Verify it.
select count(*) from payroll_ct;
What does it give me ?

Select from the change table to list DML data changes.

variable viewname varchar2(40)
execute dbms_logmnr_cdc_subscribe.extend_window (subscription_handle => :subhandle);
execute dbms_logmnr_cdc_subscribe.prepare_subscriber_view (SUBSCRIPTION_HANDLE => :subhandle,
SOURCE_SCHEMA => 'payroll',
SOURCE_TABLE => 'employee',
VIEW_NAME => :viewname);
print viewname
column myview new_value subscribed_view noprint
select :viewname myview FROM dual;
select * FROM &subscribed_view;

MATERIALIZED VIEWS - copy tables to a remote location for comparison.
Materialised views can be used to copy tables to a remote location for comparison with the primary dataset. Materialised view logs can also be used to identify rows that have changed since the last refresh.
Enable it
grant create materialized view to payroll_audit;
Create the mview log
connect payroll
create materialized view log on employee;
grant select on payroll.employee to payroll_audit;
grant select on MLOG$_EMPLOYEE to payroll_audit;
Create the mview
connect payroll_audit
create materialized view payroll_audit.employee_mview refresh fast as select * from payroll.employee;
exec dbms_mview.refresh('EMPLOYEE_MVIEW','F');
Verify it
select * from employee_mview;
What does it give me ?
Mviews can be used to copy user data and audit records to a secure database at regular intervals, and you can select mview log entries for transactions since the last mview refresh.
select employee_id, dmltype$$ from payroll.MLOG$_EMPLOYEE;

DBMS_SESSION - set CLIENT_IDENTIFIER to track sessions.
You can call DBMS_SESSION.SET_IDENTIFIER from the application to populate V$SESSION.CLIENT_IDENTIFIER with additional information. This is useful for keeping track of session information and trace files.
Enable it.
exec dbms_session.set_identifier('EMPLOYEE_BATCH');
What does it give me ?
The ability to more easily track user sessions in the database.
select sid from v$session where client_identifier = 'EMPLOYEE_BATCH';
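The same identifier can also drive targeted tracing via DBMS_MONITOR:

```
-- Trace every session that sets this client identifier, including wait events
exec dbms_monitor.client_id_trace_enable(client_id => 'EMPLOYEE_BATCH', waits => TRUE, binds => FALSE);
-- Switch it off again when done
exec dbms_monitor.client_id_trace_disable(client_id => 'EMPLOYEE_BATCH');
```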
DBMS_STREAMS_ADM - capture, propagate, transform, and apply DDL and DML changes to another table or another database.
Streams combines Logminer and AQ to provide a method to replicate data to another location. This can be used to copy data to a secure schema or remote database for later comparison with the primary data.

It is also possible with Streams to send archivelogs to a remote server and do the Logmining there to avoid anyone intercepting the logs, mined data, or AQ transactions before they are replicated.

Enable it.

Example code to create a Streams Capture Rule.
begin
  dbms_streams_adm.add_table_rules(
    table_name     => 'payroll.employee',
    streams_type   => 'capture',
    streams_name   => 'payroll_employee_stream',
    queue_name     => 'strmadmin.employee_queue',
    include_dml    => true,
    include_ddl    => true,
    inclusion_rule => true);
end;
/
Example code to create a Streams Propagate Rule.
begin
  dbms_streams_adm.add_table_propagation_rules(
    table_name             => 'payroll.employee',
    streams_name           => 'payroll_employee_propagate',
    source_queue_name      => 'strmadmin.employee_queue',
    destination_queue_name => 'strmadmin.employee_queue@AUDIT',
    include_dml            => true,
    include_ddl            => true,
    source_database        => 'PROD',
    inclusion_rule         => true);
end;
/
Example code to create a Streams Apply Rule.
begin
  dbms_streams_adm.add_table_rules(
    table_name      => 'payroll.employee',
    streams_type    => 'apply',
    streams_name    => 'payroll_employee_apply',
    queue_name      => 'strmadmin.employee_queue',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'PROD',
    inclusion_rule  => true);
end;
/
Verify it.
update employee@prod set salary = 100 where employee_id=1;
select salary from employee@AUDIT where employee_id=1;
What does it give me ?

DML and DDL replicated to an audit log table, or a secure remote database. It is also possible to add rules to transform data into log formats or skip certain transactions.

DBMS_WORKLOAD_CAPTURE - records all transactions over a period to a file for playback.
DBMS_WORKLOAD_CAPTURE captures database activity to a file for playback. The file format is binary, so it is not easily used for audit purposes, but it is possible to grep specific statements out of the file. This tool records all DML, DDL, and select statements, so it can create very large files very quickly and is really only suitable for short-term auditing.

Enable it
create or replace directory replay as '/ora01/replay';
-- Start capture to file
begin
  dbms_workload_capture.start_capture(
    name     => 'test_capture',
    dir      => 'REPLAY',
    duration => NULL);
end;
/
-- End capture to file
exec dbms_workload_capture.finish_capture;
What does it give me ?
Files containing all DDL, DML, Select transactions run during the capture period. The output files can be played back, or searched for specific table names etc.

FLASHBACK DATABASE - return the database to a previous state.

Flashback database relies on additional flashback logs being written to the db_recovery_file_dest. This is an extra set of logs for the database to write, so it will increase total IO and CPU usage on the database. Flashback logs can take up a lot of space, so usually DBAs will need to set a target retention period to limit the logs kept on disk, meaning that flashback database is only useful to roll back changes that happened recently - e.g. in the last 24 hours.
Enable it

shutdown immediate;
startup mount exclusive;
alter database flashback on;
shutdown immediate;
Verify it
show parameter db_recovery_file_dest;
show parameter db_recovery_file_dest_size;
select flashback_on from v$database;
select * from v$flashback_database_log;
What does it give me ?

The ability to flashback the whole database to a previous point in time to compare previous data values.
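A sketch of the flashback operation itself (the timestamp is illustrative); opening read only first lets you inspect the flashed-back data before deciding whether to make it permanent with open resetlogs:

```
shutdown immediate;
startup mount exclusive;
flashback database to timestamp
  to_timestamp('2013-08-29 09:00:00','YYYY-MM-DD HH24:MI:SS');
-- inspect the flashed-back data before making the change permanent
alter database open read only;
```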

FLASHBACK ARCHIVE - store UNDO records in a permanent tablespace for long-term storage.
Flashback Archive / Oracle Total Recall extends the use of flashback query by saving the undo records in a tablespace so they are retained for a longer period. As it saves data from the UNDO tablespace, there should not be a performance impact on transactions. Oracle Total Recall is an extra cost option in Oracle 11g.

Enable it
create flashback archive default payroll_fla tablespace payroll_data quota 1G retention 1 year;
alter table payroll.employee flashback archive payroll_fla;
What does it give me ?

The same result as flashback query, but a much longer retention period.
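Queries use the same "as of" syntax as flashback query, but once the table is archived they can reach much further back than the undo tablespace allows - e.g.:

```
-- With PAYROLL.EMPLOYEE in flashback archive PAYROLL_FLA, query data from 6 months ago
select salary
from payroll.employee as of timestamp (systimestamp - interval '6' month)
where employee_id = 1;
```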

PROFILES - Log extra information to USER$ and USER_HISTORY$
User profiles can be useful to record additional information about login failures and password changes, and to stop reuse of passwords.
Enable it
create profile payroll_profile limit failed_login_attempts 10;
alter profile payroll_profile limit password_reuse_max 10;
alter user payroll profile payroll_profile;
Verify it
select * from dba_profiles where profile = 'PAYROLL_PROFILE';
select profile from dba_users where username = 'PAYROLL';
What does it give me ?

-- Password history
select u.name, uh.password, uh.password_date
from user_history$ uh, user$ u
where u.user# = uh.user# and u.name = 'PAYROLL';
-- Users who have had failed logins since the last successful login.
select name, ltime, lcount from user$ where lcount > 1;

RECYCLEBIN - store dropped tables for emergency restore.
The Oracle Recyclebin feature allows DBAs to see what tables have been dropped (until the recyclebin has been purged). Recyclebin functionality only impacts dropping tables, so won't cause a performance impact to DML transactions. The DBA can purge the recyclebin at regular intervals, and the objects in the recyclebin can be purged automatically by the database if it needs the space.
Enable it
alter system set recyclebin=on deferred;
Verify it
show parameter recyclebin;
What does it give me ?

The ability to select from dropped tables, and restore them if needed.
select object_name, original_name from user_recyclebin;
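And to restore a dropped table from the recyclebin:

```
-- Restore the most recently dropped copy of the table
flashback table payroll.employee_interface to before drop;
```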

SQL TRACE - log session transactions to trace files.

  • It is possible to trace every database session by setting the init.ora parameter sql_trace=TRUE, but the volume of trace files generated will probably be too big to search.
  • Oracle recommends setting parameter trace_enabled=TRUE so that system errors are logged to trace files.
  • If you know the SID or PID you can enable tracing to record what a session is running. You can also automate this via a trigger.

In 10g trace files are written to user_dump_dest. In 11g trace files are written to $ORACLE_BASE/diag/rdbms/DB_NAME/$ORACLE_SID/trace. Because so much information is logged, tracing is usually only enabled for a specific session for a short period of time, and DBAs need to decide what level of information to trace (e.g. waits, bind variables etc.)
Enable it.

Session tracing can be enabled via multiple methods including ALTER SESSION, ALTER SYSTEM, DBMS_SESSION, DBMS_SUPPORT, DBMS_SYSTEM, DBMS_MONITOR.

alter session set tracefile_identifier = 'robert_trace'; -- so that the trace files from this session are easily identified
alter session set events '10046 trace name context forever, level 8';
alter system set sql_trace=true;
exec dbms_session.set_sql_trace(sql_trace => TRUE);
exec dbms_support.start_trace(waits=>TRUE, binds=>FALSE);
exec sys.dbms_system.set_ev(72,21237,10046,12,'');
select sid, serial#, username from v$session;
exec dbms_monitor.session_trace_enable(session_id=>75, serial_num=>21237);
What does it give me ?
Any DDL, DCL, DML, Select transaction run after session tracing is enabled will be logged to the trace file.

TRIGGERS - custom triggers and PL/SQL to log audit records to custom tables.

Triggers allow very specific auditing based on custom code, so can record exactly the information you want, but because the trigger fires during the transaction, the transaction needs to wait for the audit code to complete, impacting performance. Triggers can also call packages like UTL_FILE to write OS logfiles, UTL_MAIL to email DBAs, or UTL_HTTP to post messages to central logging websites. As this audit method uses custom code, it will require development and testing effort, but can be good for event driven, targeted auditing.

LOGON triggers
Login triggers can be useful for recording specific information not available in AUD$ - e.g. audit session to AUD$ does not record the program used.
Audit user logon details to a table
Enable it.

create table payroll_audit.audit_users (
username VARCHAR2(30),
osuser VARCHAR2(30),
sid NUMBER,
host VARCHAR2(30),
ip_address VARCHAR2(30),
program VARCHAR2(48),
logon_time DATE
);

create or replace trigger sys.logon_audit_trigger
after logon on database
begin
insert into payroll_audit.audit_users values (
sys_context('USERENV','SESSION_USER'),
sys_context('USERENV','OS_USER'),
sys_context('USERENV','SID'),
sys_context('USERENV','HOST'),
sys_context('USERENV','IP_ADDRESS'),
sys_context('USERENV','MODULE'),
sysdate);
end;
/
What does it give me ?

Select from the audit table to find out who logged into what schema.
select logon_time,username,program from audit_users
order by logon_time;
Trace actions by a particular user.
Enable it.

create or replace trigger payroll_trace_trigger
after logon on database
when (user='PAYROLL')
begin
execute immediate 'alter session set sql_trace=true';
end;
/
What does it give me ?
Check the trace file to see all DML, DDL, Select transactions run by the session.
Write custom login messages to the alert log.

Enable it.
create or replace trigger alert_trigger
after logon on database
when (user='PAYROLL')
begin
dbms_system.ksdwrt(2, 'ORA-20000 Login by user PAYROLL');
end;
/
What did it give me ?
Check the alert log to see when there were logins by user PAYROLL.

DDL Triggers

You can use DDL triggers and the Oracle System Events to capture current statements.

Enable it.
create table ddl_audit
(audit_date date,
username varchar2(30),
instance_number integer,
database_name varchar2(9),
object_type varchar2(19),
object_owner varchar2(30),
object_name varchar2(128),
sql_text varchar2(2000));

create or replace trigger before_ddl_trg before ddl on database
declare
l_sql_text ora_name_list_t;
l_count NUMBER;
l_puser VARCHAR2(30) := NULL;
l_sql VARCHAR2(2000);
begin
l_puser := sys_context('USERENV', 'SESSION_USER');

l_count := ora_sql_txt(l_sql_text);
for i in 1..l_count loop
l_sql := l_sql||l_sql_text(i);
end loop;

insert into ddl_audit (audit_date, username, instance_number, database_name, object_type, object_owner, object_name, sql_text)
values (sysdate, l_puser, ora_instance_num, ora_database_name, ora_dict_obj_type, ora_dict_obj_owner, ora_dict_obj_name, l_sql);
exception
when others then
null;
end;
/
show errors;
What does it give me ?

Select from the table to get a list of DDL statements executed in the database.
select * from ddl_audit;
DML Triggers

A well designed application will have table attributes for recording who changed the row, and when. Although these attributes can be populated by the application it is better to populate with a trigger so that ad-hoc updates and data fixes are also recorded.

Enable it.
create table payroll.employee(
EMPLOYEE_ID NUMBER,
EMPLOYEE_NAME VARCHAR2(100),
SALARY NUMBER,
NOTES VARCHAR2(2000),
CREATED_BY VARCHAR2(30), -- audit attribute populated by trigger
CREATED_DATE DATE, -- audit attribute populated by trigger
LAST_MODIFIED_BY VARCHAR2(30), -- audit attribute populated by trigger
LAST_MODIFIED_DATE DATE); -- audit attribute populated by trigger

alter table payroll.employee add constraint employee_pk primary key (employee_id);
In addition it is possible to use triggers to record row history in a logging table.
create table payroll_audit.employee_audit(
DML CHAR(1),
EMPLOYEE_ID NUMBER,
EMPLOYEE_NAME VARCHAR2(100),
SALARY NUMBER,
NOTES VARCHAR2(2000),
CREATED_BY VARCHAR2(30),
CREATED_DATE DATE,
LAST_MODIFIED_BY VARCHAR2(30),
LAST_MODIFIED_DATE DATE);
create or replace trigger payroll.employee_audit_trg
before insert or update or delete on payroll.employee
for each row
declare
dml char(1);
begin
if inserting then

dml := 'I';

-- update the audit details for the record

:new.created_by := sys_context('USERENV','SESSION_USER');
:new.created_date := sysdate;
:new.last_modified_by := sys_context('USERENV','SESSION_USER');
:new.last_modified_date := sysdate;

-- add a record to the audit table

insert into payroll_audit.employee_audit (dml, employee_id, employee_name, salary, notes, created_by, created_date, last_modified_by, last_modified_date) values (dml, :new.employee_id, :new.employee_name, :new.salary, :new.notes, :new.created_by, :new.created_date, :new.last_modified_by, :new.last_modified_date);

elsif updating then

dml := 'U';

-- update the audit details for the record

:new.last_modified_by := sys_context('USERENV','SESSION_USER');
:new.last_modified_date := sysdate;

-- add a record to the audit table
insert into payroll_audit.employee_audit (dml, employee_id, employee_name, salary, notes, created_by, created_date, last_modified_by, last_modified_date) values (dml, :new.employee_id, :new.employee_name, :new.salary, :new.notes, :new.created_by, :new.created_date, :new.last_modified_by, :new.last_modified_date);

elsif deleting then

dml := 'D';

-- add a record to the audit table
insert into payroll_audit.employee_audit (dml, employee_id, employee_name, salary, notes, created_by, created_date, last_modified_by, last_modified_date) values (dml, :old.employee_id, :old.employee_name, :old.salary, :old.notes, :old.created_by, :old.created_date, sys_context('USERENV','SESSION_USER'), sysdate);

end if;
end;
/
show errors;
What does it give me ?

You can easily check which records have been modified, by who, and see the record history.
select employee_id, last_modified_by, last_modified_date from employee where employee_id =1;

select dml, employee_id, salary, last_modified_by, last_modified_date from employee_audit where employee_id = 1;

Other places to find data that may be useful for audit purposes.

Backups, exports, export logs, development systems, standby databases

Previous backups, export files, export logs, development clones and physical and logical standby databases can contain an earlier dataset which can be used to identify what rows or objects have been modified.
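As a sketch, assuming a previous export has been imported into a scratch schema (PAYROLL_RESTORE is a hypothetical name), a MINUS query shows rows that differ between the restored copy and the live schema:

```sql
-- Rows in the restored copy that are missing or changed in the live schema
-- (candidates for deleted or updated records)
select employee_id, employee_name, salary
from   payroll_restore.employee
minus
select employee_id, employee_name, salary
from   payroll.employee;

-- Rows added or changed since the export was taken
select employee_id, employee_name, salary
from   payroll.employee
minus
select employee_id, employee_name, salary
from   payroll_restore.employee;
```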


Alert log

The alert log contains information about failed transactions, jobs, checkpoints, create tablespace, and alter system statements, which can be useful in an audit.
Datafiles, Redo logs, Archive logs.

It is possible to analyze redo logs, datafiles, and undo segments to identify changes, objects which have been dropped, or rows which have been deleted.
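For example, LogMiner can extract the redo and undo SQL for changes to a table from an archive log. A minimal sketch, assuming a hypothetical archive log path (supplemental logging improves the detail captured):

```sql
-- Register an archive log and start LogMiner using the online data dictionary
execute dbms_logmnr.add_logfile('/oradata/arch/arch_1_1234.arc', dbms_logmnr.new);
execute dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);

-- Show who changed the table, and the SQL to redo or reverse each change
select scn, timestamp, username, operation, sql_redo, sql_undo
from   v$logmnr_contents
where  seg_owner = 'PAYROLL'
and    seg_name  = 'EMPLOYEE';

execute dbms_logmnr.end_logmnr;
```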

DBA_DML_LOCKS, DBA_DDL_LOCKS - Check who has locks on a table or who is executing a package. This can be a good indicator of current activity on the database.
select name, session_id, mode_held from dba_dml_locks where owner = 'PAYROLL';
select owner, name, mode_held from dba_ddl_locks where owner = 'PAYROLL';

A job scheduled via DBMS_JOB will be listed in the DBA_JOBS view.

select job, log_user, what, last_date, this_date, next_date, failures, broken from dba_jobs;
A job scheduled via DBMS_SCHEDULER will be listed in the DBA_SCHEDULER_JOBS view, with execution history listed in DBA_SCHEDULER_JOB_RUN_DETAILS.
select owner, job_name, last_start_date, next_run_date, failure_count from dba_scheduler_jobs;
select owner, job_name, log_date from dba_scheduler_job_run_details;
DBA_OBJECTS, OBJ$ tables - Check when objects are created or last modified
The DBA_OBJECTS view is based on the SYS.OBJ$ table, which contains the data dictionary listing of all user objects, when they were created, and when they were last modified. If something has changed, this is the first place to look.
select name, ctime, mtime, stime, spare6 from obj$ where ctime > sysdate -1 or mtime > sysdate -1;

Comparing the last_number from dba_sequences to the associated table attribute can help to identify if anyone has removed the last record. Take into account that sequence values are cached in memory, so are not guaranteed to be consecutive; use this information with caution.
select sequence_owner, sequence_name, last_number from dba_sequences;
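As a sketch, assuming employee_id is populated from a sequence called EMPLOYEE_SEQ (a hypothetical name), compare the sequence high water mark to the highest key in the table:

```sql
-- Last number allocated (plus cached values) by the sequence
select last_number, cache_size
from   dba_sequences
where  sequence_owner = 'PAYROLL'
and    sequence_name  = 'EMPLOYEE_SEQ';

-- Highest key actually present in the table; a gap much larger than
-- CACHE_SIZE may indicate that recent rows have been deleted
select max(employee_id) from payroll.employee;
```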
If you have previously collected 100% statistics on tables and indexes using DBMS_STATS, then the recorded values can be used to identify what has changed by selecting from DBA_TABLES, DBA_INDEXES, DBA_TAB_HISTOGRAMS.
select count(*) from payroll.employee;
select table_name, num_rows from dba_tables where owner = 'PAYROLL';
select index_name, num_rows from dba_indexes where owner = 'PAYROLL';
select * from dba_tab_histograms where owner = 'PAYROLL';

Check who granted privileges to a user
select grantor, grantee, privilege, owner, table_name from dba_tab_privs;
The SQL*Net listener log lists connection requests to the database, including IP address, username, OS user, and program. It is usually in $ORACLE_HOME/network/log


The SYS.USER$ table allows an auditor to identify users that have been created, or have changed their password, in the last 24 hours.
select name, ctime, ptime from user$ where ctime > sysdate -1 or ptime > sysdate -1;

V$ACTIVE_SESSION_HISTORY, V$SQLAREA

These views contain a good overview of recent activity on the database.
select last_active_time, parsing_user_id, sql_text from v$sql order by last_active_time;
If an index is being monitored and it has been used, then it may be possible to detect it in v$object_usage.
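A sketch of index monitoring, assuming the employee_pk index created earlier (v$object_usage only shows indexes owned by the current user):

```sql
-- Start monitoring the index
alter index payroll.employee_pk monitoring usage;

-- Later, connected as PAYROLL, check whether the index has been used
select index_name, monitoring, used, start_monitoring
from   v$object_usage;

-- Stop monitoring when finished
alter index payroll.employee_pk nomonitoring usage;
```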

The V$SESSION view can be used to identify who is currently logged on to the database. More detail can be added by the application calling DBMS_APPLICATION_INFO.
select username, osuser, machine, program, module, action from v$session where username = 'PAYROLL';

Every company, application, and database will have different audit requirements, and there are decisions to be made to balance audit granularity vs performance and disk usage, but the decision to collect audit data needs to be made before you need that data. I hope that the audit examples given in this document will help in making those decisions.


Managing Oracle audit trails

The Oracle audit command writes the audit information to specific data dictionary views.

BEWARE: These audit trail rows can quickly fill-up your SYSTEM tablespace, and special care needs to be taken to ensure that you do not "lock-up" your database, by filling the SYSTEM tablespace. 

 Check these tables:
  • dba_audit_exists
  • dba_audit_object
  • dba_audit_session
  • dba_audit_statement
  • dba_audit_trail
We also have these metadata views for Oracle auditing options:
  • dba_obj_audit_opts
  • dba_priv_audit_opts
  • dba_stmt_audit_opts
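As a sketch of that housekeeping, check the size of the audit trail, and (from 11g) use DBMS_AUDIT_MGMT to move it out of the SYSTEM tablespace; AUDIT_TS is a hypothetical tablespace name:

```sql
-- How big is the standard audit trail ?
select count(*) from sys.aud$;

-- From 11g, relocate the standard audit trail to a dedicated tablespace
begin
  dbms_audit_mgmt.set_audit_trail_location(
    audit_trail_type           => dbms_audit_mgmt.audit_trail_aud_std,
    audit_trail_location_value => 'AUDIT_TS');
end;
/
```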

Auditing the audit trail

Today, we need to audit the audit trail itself to prevent "inside jobs", and this Oracle command will record all changes to the audit trail:

   audit all on sys.aud$
   by access;
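To confirm the option has been set, the object audit options for SYS.AUD$ can be checked (a sketch; the single-letter columns show success/failure flags per action type):

```sql
select owner, object_name, alt, upd, del, ins, sel
from   dba_obj_audit_opts
where  owner = 'SYS'
and    object_name = 'AUD$';
```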

Auditing user activity with the Oracle audit command

Oracle has syntax for auditing specific user activity.  To audit the activity of user sujeet we could issue these audit commands:

Audit all Oracle user activity. 

This audits everything including DDL (create table), DML (inserts, updates, deletes) and login/logoff events:
     audit all by sujeet by access;

Audit all Oracle user viewing activity:

   audit select table by sujeet by access;

Audit all Oracle user data change activity:

   audit update table, delete table,
      insert table by
sujeet by access;

Audit all Oracle user procedure execution activity:

   audit execute procedure by sujeet by access;
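Once these options are in place, the records collected for the user can be reviewed, for example:

```sql
select os_username, username, timestamp, action_name, obj_name, returncode
from   dba_audit_trail
where  username = 'SUJEET'
order  by timestamp;
```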
We can also query the dba_audit_trail view.  Here are the column descriptions from the Oracle documentation:
Column Datatype NULL Description
OS_USERNAME VARCHAR2(255)   Operating system login username of the user whose actions were audited
USERNAME VARCHAR2(30)   Name (not ID number) of the user whose actions were audited
USERHOST VARCHAR2(128)   Client host machine name
TERMINAL VARCHAR2(255)   Identifier of the user's terminal
TIMESTAMP DATE   Date and time of the creation of the audit trail entry (date and time of user login for entries created by AUDIT SESSION) in the local database session time zone
OWNER VARCHAR2(30)   Creator of the object affected by the action
OBJ_NAME VARCHAR2(128)   Name of the object affected by the action
ACTION NUMBER NOT NULL Numeric action type code. The corresponding name of the action type is in the ACTION_NAME column.
ACTION_NAME VARCHAR2(28)   Name of the action type corresponding to the numeric code in the ACTION column
NEW_OWNER VARCHAR2(30)   Owner of the object named in the NEW_NAME column
NEW_NAME VARCHAR2(128)   New name of the object after a RENAME or the name of the underlying object
OBJ_PRIVILEGE VARCHAR2(16)   Object privileges granted or revoked by a GRANT or REVOKE statement
SYS_PRIVILEGE VARCHAR2(40)   System privileges granted or revoked by a GRANT or REVOKE statement
ADMIN_OPTION VARCHAR2(1)   Indicates whether the role or system privilege was granted with the ADMIN option
GRANTEE VARCHAR2(30)   Name of the grantee specified in a GRANT or REVOKE statement
AUDIT_OPTION VARCHAR2(40)   Auditing option set with the AUDIT statement
SES_ACTIONS VARCHAR2(19)   Session summary (a string of 16 characters, one for each action type in the order ALTER, AUDIT, COMMENT, DELETE, GRANT, INDEX, INSERT, LOCK, RENAME, SELECT, UPDATE, REFERENCES, and EXECUTE. Positions 14, 15, and 16 are reserved for future use. The characters are:
  • - - None
  • S - Success
  • F - Failure
  • B - Both
LOGOFF_TIME DATE   Date and time of user log off
LOGOFF_LREAD NUMBER   Logical reads for the session
LOGOFF_PREAD NUMBER   Physical reads for the session
LOGOFF_LWRITE NUMBER   Logical writes for the session
LOGOFF_DLOCK VARCHAR2(40)   Deadlocks detected during the session
COMMENT_TEXT VARCHAR2(4000)   Text comment on the audit trail entry, providing more information about the statement audited Also indicates how the user was authenticated. The method can be one of the following:
  • DATABASE - Authentication was done by password
  • NETWORK - Authentication was done by Oracle Net Services or the Advanced Security option
  • PROXY - Client was authenticated by another user; the name of the proxy user follows the method type
SESSIONID NUMBER NOT NULL Numeric ID for each Oracle session
ENTRYID NUMBER NOT NULL Numeric ID for each audit trail entry in the session
STATEMENTID NUMBER NOT NULL Numeric ID for each statement run
RETURNCODE NUMBER NOT NULL Oracle error code generated by the action. Some useful values:
  • 0 - Action succeeded
  • 2004 - Security violation
PRIV_USED VARCHAR2(40)   System privilege used to execute the action
CLIENT_ID VARCHAR2(64)   Client identifier in each Oracle session
SESSION_CPU NUMBER   Amount of CPU time used by each Oracle session
EXTENDED_TIMESTAMP TIMESTAMP(6) WITH TIME ZONE   Timestamp of the creation of the audit trail entry (timestamp of user login for entries created by AUDIT SESSION) in UTC (Coordinated Universal Time) time zone
PROXY_SESSIONID NUMBER   Proxy session serial number, if an enterprise user has logged in through the proxy mechanism
GLOBAL_UID VARCHAR2(32)   Global user identifier for the user, if the user has logged in as an enterprise user
INSTANCE_NUMBER NUMBER   Instance number as specified by the INSTANCE_NUMBER initialization parameter
OS_PROCESS VARCHAR2(16)   Operating System process identifier of the Oracle process
TRANSACTIONID RAW(8)   Transaction identifier of the transaction in which the object is accessed or modified
SCN NUMBER   System change number (SCN) of the query
SQL_BIND NVARCHAR2(2000)   Bind variable data of the query
SQL_TEXT NVARCHAR2(2000)   SQL text of the query

A sample query against dba_audit_trail for standard Oracle auditing:

select 'standard audit', sessionid,
    proxy_sessionid, statementid, entryid, extended_timestamp, global_uid,
    username, client_id, null, os_username, userhost, os_process, terminal,
    instance_number, owner, obj_name, null, new_owner,
    new_name, action, action_name, audit_option, transactionid, returncode,
    scn, comment_text, sql_bind, sql_text,
    obj_privilege, sys_privilege, admin_option, grantee, priv_used,
    ses_actions, logoff_time, logoff_lread, logoff_pread, logoff_lwrite,
    logoff_dlock, session_cpu
from dba_audit_trail;

Thursday, 29 August 2013

Locally Managed Tablespace convert to Dictionary Managed Tablespaces

When Oracle allocates space to a segment (like a table or index), a group of contiguous free blocks, called an extent, is added to the segment. Metadata regarding extent allocation and unallocated extents are either stored in the data dictionary, or in the tablespace itself. Tablespaces that record extent allocation in the dictionary, are called dictionary managed tablespaces, and tablespaces that record extent allocation in the tablespace header, are called locally managed tablespaces.

SQL> select tablespace_name, extent_management, allocation_type from dba_tablespaces;

TABLESPACE_NAME                EXTENT_MAN ALLOCATIO
------------------------------ ---------- ---------
SYSTEM                         DICTIONARY USER
SYS_UNDOTS                     LOCAL      SYSTEM
TEMP                           LOCAL      UNIFORM

Dictionary Managed Tablespaces (DMT):

Oracle uses the data dictionary (tables in the SYS schema) to track allocated and free extents for tablespaces that are in "dictionary managed" mode. Free space is recorded in the SYS.FET$ table, and used space in the SYS.UET$ table. Whenever space is required in one of these tablespaces, the ST (space transaction) enqueue must be obtained to do inserts and deletes against these tables. As only one process can acquire the ST enqueue at a given time, this often leads to contention.
Execute the following statement to create a dictionary managed tablespace:
SQL> CREATE TABLESPACE ts1 DATAFILE '/oradata/ts1_01.dbf' SIZE 50M
     EXTENT MANAGEMENT DICTIONARY;

Locally Managed Tablespaces (LMT):

Using LMT, each tablespace manages its own free and used space within a bitmap structure stored in one of the tablespace's data files. Each bit corresponds to a database block or group of blocks. Execute one of the following statements to create a locally managed tablespace:
SQL> CREATE TABLESPACE ts2 DATAFILE '/oradata/ts2_01.dbf' SIZE 50M
     EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

SQL> CREATE TABLESPACE ts3 DATAFILE '/oradata/ts3_01.dbf' SIZE 50M
     EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;
Note the difference between AUTOALLOCATE and UNIFORM SIZE:
AUTOALLOCATE specifies that extent sizes are system managed. Oracle will choose "optimal" next extent sizes starting with 64KB. As the segment grows larger extent sizes will increase to 1MB, 8MB, and eventually to 64MB. This is the recommended option for a low or unmanaged environment.
UNIFORM specifies that the tablespace is managed with uniform extents of SIZE bytes (use K or M to specify the extent size in kilobytes or megabytes). The default size is 1M. The uniform extent size of a locally managed tablespace cannot be overridden when a schema object, such as a table or an index, is created.
Also note, if you specify LOCAL, you cannot specify DEFAULT STORAGE, MINIMUM EXTENT, or TEMPORARY.
Advantages of Locally Managed Tablespaces:

  • Eliminates the need for recursive SQL operations against the data dictionary (UET$ and FET$ tables)
  • Reduce contention on data dictionary tables (single ST enqueue)
  • Locally managed tablespaces eliminate the need to periodically coalesce free space (automatically tracks adjacent free space)
  • Changes to the extent bitmaps do not generate rollback information
Locally Managed SYSTEM Tablespace:

From Oracle9i release 9.2 one can change the SYSTEM tablespace to locally managed. Further, if you create a database with DBCA (Database Configuration Assistant), it will have a locally managed SYSTEM tablespace by default. The following restrictions apply:
  • No dictionary-managed tablespace in the database can be READ WRITE.
  • You cannot create new dictionary managed tablespaces
  • You cannot convert any dictionary managed tablespaces to local
Thus, it is best only to convert the SYSTEM tablespace to LMT after
all other tablespaces are migrated to LMT.

Segment Space Management in LMT:

From Oracle 9i, one can not only have bitmap managed tablespaces, but also bitmap managed segments when setting Segment Space Management to AUTO for a tablespace. Look at this example:
SQL> CREATE TABLESPACE ts4 DATAFILE '/oradata/ts4_01.dbf' SIZE 50M
     EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
Segment Space Management eliminates the need to specify and tune the PCTUSED, FREELISTS, and FREELIST GROUPS storage parameters for schema objects. The Automatic Segment Space Management feature improves the performance of concurrent DML operations significantly, since different parts of the bitmap can be used simultaneously, eliminating serialization for free space lookups against the FREELISTS. This is of particular importance when using RAC, or if "buffer busy waits" are detected.
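To check which tablespaces already use automatic segment space management, a quick look at dba_tablespaces:

```sql
select tablespace_name, extent_management, segment_space_management
from   dba_tablespaces;
```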

Convert between LMT and DMT:

The DBMS_SPACE_ADMIN package allows DBAs to quickly and easily convert between LMT and DMT mode. Look at these examples:
SQL> exec dbms_space_admin.Tablespace_Migrate_TO_Local('ts1');
PL/SQL procedure successfully completed.

SQL> exec dbms_space_admin.Tablespace_Migrate_FROM_Local('ts2');
PL/SQL procedure successfully completed.

Migrate Dictionary managed Tablespaces to locally managed

Locally Managed Tablespaces

A tablespace that can manage extent allocation by itself is called locally managed tablespace. These tablespaces maintain a bitmap in each datafile to keep track of the freed or used status of blocks in that datafile. Each bit in the bitmap corresponds to a block or a group of blocks. When an extent is allocated or freed for reuse, Oracle changes the bitmap values to show the new status of the blocks. These changes do not generate rollback information because they do not update tables in the data dictionary (except for special cases such as tablespace quota information).
The storage parameters NEXT, PCTINCREASE, MINEXTENTS, MAXEXTENTS, and DEFAULT STORAGE are not valid for extents that are managed locally !
Extent Allocation
Locally managed tablespaces can have uniform extent sizes or variable extent sizes that are determined by the system. Any of the options, UNIFORM or AUTOALLOCATE can be mentioned while creating the tablespace. For UNIFORM extents you can specify an extent size. The default size is 1MB. For AUTOALLOCATE extents you can specify the size of the initial extent and Oracle determines the optimal size of the additional extents, with a minimum extent size of 64KB. That is why these are called system-managed extents.
How Extents are Allocated
Oracle looks for free space to allocate to a new extent by first determining a candidate datafile in the tablespace and then searching the datafile's bitmap for the required number of adjacent free blocks. If that datafile does not have enough adjacent free space, Oracle looks in another datafile. When extents are deallocated, Oracle modifies the bitmap in the datafile.
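The allocation behaviour is easy to observe from dba_extents; for example, under AUTOALLOCATE the BYTES column typically starts at 64KB and steps up as the segment grows (a sketch against the PAYROLL schema used earlier):

```sql
select segment_name, extent_id, blocks, bytes
from   dba_extents
where  owner = 'PAYROLL'
and    segment_name = 'EMPLOYEE'
order  by extent_id;
```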
Create A Locally Managed Tablespace
SQL> CREATE TABLESPACE users
  DATAFILE 'users.dbf' SIZE 50M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;
Advantages:
  1. Local management of extents avoids recursive space management operations, which can occur in dictionary managed tablespaces.
  2. Local management of extents automatically tracks adjacent free space, eliminating the need to coalesce free extents.
  3. Reliance on the data dictionary is reduced.
Restrictions:
  1. Temporary tablespaces that manage their extents locally can only use UNIFORM extent allocation.
  2. For permanent tablespaces the default extent size for system managed extents is 64KB.
  3. The storage parameters NEXT, PCTINCREASE, MINEXTENTS, MAXEXTENTS, and DEFAULT STORAGE are not valid for locally managed tablespaces.

Change normal tablespaces to locally managed

Check extent management of the tablespaces with following command:
SQL> SELECT tablespace_name,extent_management
       FROM dba_tablespaces;
TABLESPACE_NAME                EXTENT_MAN
------------------------------ ----------
SYSTEM                         DICTIONARY
TEMP                           LOCAL
USERS                          DICTIONARY
TAB                            DICTIONARY
IDX                            DICTIONARY
SYSAUX                         LOCAL
UNDO                           LOCAL

First change all dictionary managed tablespaces except the SYSTEM tablespace to locally managed with the following procedure:

SQL> execute DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL ('USERS');
PL/SQL procedure successfully completed.

The tablespace must be kept online and read/write during migration. Note that temporary tablespaces cannot be migrated.

Change SYSTEM tablespaces to locally managed

Before the SYSTEM tablespace can be migrated to locally managed format, you should ensure the following:
  • The database has a default temporary tablespace which is not SYSTEM
  • There are not any rollback segments in dictionary managed tablespaces
  • There is at least one online rollback segment in a locally managed tablespace, or an undo tablespace (if using automatic undo management mode) should be online.
  • All tablespaces other than the tablespace containing the undo space (undo tablespace or the tablespace containing the rollback segment) and the default temporary tablespace are in read-only mode.
  • There is a complete backup of the system.
  • The system is in restricted mode.
Note that we already have an UNDO tablespace. The following query determines whether the SYSTEM tablespace is locally managed:
SQL> SELECT ts# FROM ts$ WHERE ts# = 0 AND bitmapped <> 0;
If 0 rows are returned, then the SYSTEM tablespace is dictionary managed. Otherwise, the SYSTEM tablespace is locally managed.
Steps to change the SYSTEM tablespace to locally managed
SQL> shutdown immediate
SQL> startup restrict

SQL> execute DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('SYSTEM');
ERROR at line 1:
ORA-10644: SYSTEM tablespace cannot be default temporary tablespace
ORA-06512: at "SYS.DBMS_SPACE_ADMIN", line 227
ORA-06512: at line 1

SQL> alter database default temporary tablespace TEMP;

SQL> execute DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('SYSTEM');
ERROR at line 1:
ORA-10647: Tablespace other than SYSTEM, UNDO, TEMP not
found in read only mode
ORA-06512: at "SYS.DBMS_SPACE_ADMIN", line 227
ORA-06512: at line 1

SQL> select tablespace_name from dba_tablespaces;
SQL> alter tablespace USERS read only;
SQL> alter tablespace TAB read only;
SQL> alter tablespace IDX read only;

SQL> execute DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('SYSTEM');
ERROR at line 1:
ORA-10648: Tablespace SYSAUX is not offline
ORA-06512: at "SYS.DBMS_SPACE_ADMIN", line 227
ORA-06512: at line 1

SQL> alter tablespace SYSAUX offline;

SQL> execute DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('SYSTEM');
PL/SQL procedure successfully completed.

Afterwards, bring SYSAUX back online, return the other tablespaces to read write, and restart the database in normal mode.