Oracle LogMiner


PURPOSE
This paper details the mechanics of what Log Miner does, and describes
the commands and environment it uses.

SCOPE & APPLICATION
For DBAs requiring further information about Log Miner.

The ability to provide a readable interface to the redo logs has been
requested by customers for a long time. The ALTER SYSTEM DUMP LOGFILE
interface has been around for a long time, though its usefulness outside
Support is limited. There have been a number of third-party products,
e.g. BMC's PATROL DB-Logmaster ( formerly SQL*Trax ), which provide some
functionality in this area. With Oracle release 8.1 there is a facility
in the Oracle kernel to do the same. LogMiner allows the DBA to audit
changes to data and perform analysis on the redo to determine trends,
aid in capacity planning, Point-in-Time Recovery etc.


1. WHAT DOES LOG MINER DO?
==========================

Log Miner can be used against online or archived logs from either
the 'current' database or a 'foreign' database. The reason for this is that
it uses an external dictionary file to access meta-data, rather than
the 'current' data dictionary.

It is important that this dictionary file is kept in step with the
database which is being analyzed. If the dictionary used is out
of step with the redo, then analysis will be considerably more difficult.
Building the external dictionary will be discussed in detail in section 3.

Log Miner scans the log/logs it is interested in, and generates, using the
dictionary file meta-data, a set of SQL statements which would have the
same effect on the database as applying the corresponding redo record.

Log Miner prints out the 'final' SQL that would have gone against
the database, e.g.

Insert into Table x Values ( 5 );
Update Table x set COLUMN=newvalue WHERE ROWID='<>';
Delete from Table x WHERE ROWID='<>' AND COLUMN=value AND COLUMN=value;

We do not actually see the SQL that was issued, rather an executable
SQL statement that would have the same EFFECT. Since the undo is stored in
the same redo record, Log Miner also generates the undo SQL which would be
necessary to roll this change out.

For SQL which rolls back, no undo SQL is generated, and the rollback
flag is set instead. An insert followed by a rollback therefore looks like:

REDO            UNDO            ROLLBACK

insert sql      delete sql      0
delete sql      <null>          1

Because it operates against the physical redo records, multi-row operations
are not recorded as single statements. E.g. DELETE FROM EMP WHERE DEPTNO=30
might delete 100 rows in the SALES department in a single statement;
the corresponding Log Miner output would show one row of output per row
deleted in the database.
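
Such a multi-row delete can be picked out of the analysis with a query
like the following ( a sketch; the table name is illustrative ):

SQL> select scn, operation, sql_redo from v$logmnr_contents
     where seg_name = 'EMP' and operation = 'DELETE';

Each deleted row appears as its own DELETE ... WHERE ROWID='...' statement
in SQL_REDO, all sharing the same transaction id ( XIDUSN/XIDSLT/XIDSQN ).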

2. WHAT IT DOESN'T DO
=====================

1. 'Trace' Application SQL - use SQL_Trace/10046

Since Log Miner only generates low-level SQL, not what was issued,
you cannot use Log Miner to see exactly what the application was
doing. What you can see is which user changed which data
at what time.

2. 'Replicate' an application - since not every operation is covered,
   and DDL is not supported. ( The recursive insert into tab$ etc.
   is visible; the CREATE TABLE itself isn't. )

3. Access data dictionary SQL in a readable form - especially
   UPDATE USER$ SET PASSWORD=<newpassword>

Other Known Current Limitations
===============================

Log Miner cannot cope with objects.
Log Miner cannot cope with chained/migrated rows.
Log Miner produces fairly unreadable output if there is no record of
the table in the dictionary file. See section 3 for an example of the output.

The database where the analysis is being performed must have a block size
at least equal to that of the originating database. See [NOTE:117580.1]

3. FUNCTIONALITY
================

The Log Miner feature is made up of three procedures in the Log Miner
( dbms_logmnr ) package, and one in the Dictionary ( dbms_logmnr_d ) package.

These are built by the following scripts ( run by catproc.sql ):

$ORACLE_HOME/rdbms/admin/dbmslogmnrd.sql
$ORACLE_HOME/rdbms/admin/dbmslogmnr.sql
$ORACLE_HOME/rdbms/admin/prvtlogmnr.plb

1. dbms_logmnr_d.build

This procedure builds the dictionary file used by the main Log Miner
package to resolve object names, and column datatypes. It should be
generated relatively frequently, since otherwise newer objects will
not be recorded.

It is possible to generate a dictionary file from an 8.0 database and
use it to analyze V80 redo logs. In order to do this, run dbmslogmnrd.sql
against the V80 database, then follow the procedure as below. All
analysis of the logfiles will have to take place while connected to
an 8.1 database, since dbms_logmnr cannot operate against V80 because it
uses trusted callouts.

Any redo relating to tables which are not included in the dictionary
file is dumped raw, e.g. if Log Miner cannot resolve the table and
column references, then the following is output ( for an insert statement ):

insert into UNKNOWN.objn:XXXX(Col[x],....) VALUES
( HEXTORAW('xxxxxx'), HEXTORAW('xxxxx')......)

PARAMETERS
==========

1. The name of the dictionary file you want to produce.
2. The name of the directory where you want the file produced.

The directory must be writeable by the server, i.e. included in the
UTL_FILE_DIR path.
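
For example, assuming the dictionary is to be written to the directory
used in the examples in this note, the init.ora would contain ( the path
is illustrative ):

utl_file_dir = /export/home/sme81/aholland/testcases/logminer

The instance must be restarted for a change to UTL_FILE_DIR to take effect.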

EXAMPLE
=======

BEGIN
    dbms_logmnr_d.build(
        dictionary_filename => 'miner_dictionary.dic',
        dictionary_location => '/export/home/sme81/aholland/testcases/logminer'
    );
END;
/

The dbms_logmnr package actually performs the redo analysis.

2. dbms_logmnr.add_logfile

This procedure registers the logfiles to be analyzed in this session.
It must be called once for each logfile. It populates the fixed table
X$logmnr_logs ( V$logmnr_logs ) with a row corresponding to the logfile.

Parameters
===========

1. The logfile to be analyzed.
2. Option:
   DBMS_LOGMNR.NEW ( or SESSION )
        registers the first file, placing it in PGA memory.
        This initialises the V$logmnr_logs table.
   DBMS_LOGMNR.ADDFILE
        adds another logfile to the V$logmnr_logs PGA memory.
        Has the same effect as NEW if there are no rows there
        presently.
   DBMS_LOGMNR.REMOVEFILE
        removes a row from V$logmnr_logs.

Example
=======

Include all the online logs for analysis:

BEGIN
    dbms_logmnr.add_logfile(
        '/export/home/sme81/aholland/database/files/redo03.log',
        DBMS_LOGMNR.NEW );
    dbms_logmnr.add_logfile(
        '/export/home/sme81/aholland/database/files/redo02.log',
        DBMS_LOGMNR.ADDFILE );
    dbms_logmnr.add_logfile(
        '/export/home/sme81/aholland/database/files/redo01.log',
        DBMS_LOGMNR.ADDFILE );
END;
/

A full path should be supplied, though an environment variable
is accepted. Note that the variable is NOT expanded in V$LOGMNR_LOGS.
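
To check which files have been registered, and exactly how the filenames
were stored, query V$LOGMNR_LOGS ( a sketch ):

SQL> select log_id, filename from v$logmnr_logs;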

3. dbms_logmnr.start_logmnr

This procedure populates V$logmnr_dictionary, V$logmnr_parameters
and V$logmnr_contents.

Parameters
==========

1. StartScn        Default 0
2. EndScn          Default 0
3. StartTime       Default '01-jan-1988'
4. EndTime         Default '01-jan-2988'
5. DictFileName    Default ''
6. Options         Default 0    Debug flag - uninvestigated as yet

A point to note here is that comparisons are made between the
SCNs, the times entered, and the range of values in the files.
If the SCN range OR the start/end time range is not wholly contained
in the registered logs, then the start_logmnr command will fail with the
rather general ORA-1280 Fatal LogMiner Error.
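
A minimal invocation, relying on the wide SCN/time defaults so that
whatever is in the registered logs qualifies, need only name the
dictionary file ( the path is the one used in the build example above ):

BEGIN
    dbms_logmnr.start_logmnr(
        dictfilename => '/export/home/sme81/aholland/testcases/logminer/miner_dictionary.dic'
    );
END;
/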

4. dbms_logmnr.end_logmnr

This is called with no parameters.

/* THIS IS VERY IMPORTANT FOR SUPPORT */

This procedure MUST be called prior to exiting the session that
was performing the analysis. This is because of the way the PGA
is used to store the dictionary definitions from the dictionary
file, and the V$LOGMNR_CONTENTS output. It appears that the
developers could not get access/agreement from whoever owns
the memory management code, and hence wrote their own. (?)

If you do not call end_logmnr, you will silently get an
ORA-00600 [723] ........ on logoff. This OERI is triggered
because the PGA is bigger at logoff than it was at logon,
which is considered a space leak by the KSM code. The main problem
from a support perspective is that it is silent, i.e. not signalled back
to the user's screen, because by then they have gone. This will generate
a lot of TARs initially, as DBAs try the new functionality and get
ORA-00600s every time they use the feature.

The way to spot Log Miner leaks is that the trace file produced by the
OERI 723 will have a PGA heap dumped with many chunks of type 'Freeable'
with a description of "KRVD:alh".
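
A clean end to an analysis session therefore looks like this, run
before disconnecting:

BEGIN
    dbms_logmnr.end_logmnr;
END;
/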

4. OUTPUT
=========

Effectively, the output from Log Miner is the contents of V$logmnr_contents.
The output is only visible during the life of the session which runs
start_logmnr. This is because all the Log Miner memory is PGA memory, so
it is neither visible to other sessions, nor is it persistent.
As the session logs off, either dbms_logmnr.end_logmnr is run to clear
out the PGA, or an OERI 723 is signalled as described above.

Typically users are going to want to output sql_redo based on queries
by timestamp, segment_name or rowid.
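
E.g. to see who changed a particular table during a given window
( a sketch; the table name and times are illustrative ):

SQL> select timestamp, username, sql_redo
     from v$logmnr_contents
     where seg_name = 'EMP'
     and timestamp between to_date('01-JAN-1999 09:00','DD-MON-YYYY HH24:MI')
                       and to_date('01-JAN-1999 17:00','DD-MON-YYYY HH24:MI');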

v$logmnr_contents
Name Null? Type
------------------------------- -------- ----
SCN NUMBER
TIMESTAMP DATE
THREAD# NUMBER
LOG_ID NUMBER
XIDUSN NUMBER
XIDSLT NUMBER
XIDSQN NUMBER
RBASQN NUMBER
RBABLK NUMBER
RBABYTE NUMBER
UBAFIL NUMBER
UBABLK NUMBER
UBAREC NUMBER
UBASQN NUMBER
ABS_FILE# NUMBER
REL_FILE# NUMBER
DATA_BLK# NUMBER
DATA_OBJ# NUMBER
DATA_OBJD# NUMBER
SEG_OWNER VARCHAR2(32)
SEG_NAME VARCHAR2(32)
SEG_TYPE VARCHAR2(32)
TABLE_SPACE VARCHAR2(32)
ROW_ID VARCHAR2(19)
SESSION# NUMBER
SERIAL# NUMBER
USERNAME VARCHAR2(32)
ROLLBACK NUMBER
OPERATION VARCHAR2(32)
SQL_REDO VARCHAR2(4000)
SQL_UNDO VARCHAR2(4000)
RS_ID VARCHAR2(32)
SSN NUMBER
CSF NUMBER
INFO VARCHAR2(32)
STATUS NUMBER
PH1_NAME VARCHAR2(32)
PH1_REDO VARCHAR2(4000)
PH1_UNDO VARCHAR2(4000)
PH2_NAME VARCHAR2(32)
PH2_REDO VARCHAR2(4000)
PH2_UNDO VARCHAR2(4000)
PH3_NAME VARCHAR2(32)
PH3_REDO VARCHAR2(4000)
PH3_UNDO VARCHAR2(4000)
PH4_NAME VARCHAR2(32)
PH4_REDO VARCHAR2(4000)
PH4_UNDO VARCHAR2(4000)
PH5_NAME VARCHAR2(32)
PH5_REDO VARCHAR2(4000)
PH5_UNDO VARCHAR2(4000)

SQL> set heading off
SQL> select scn, username, sql_undo from v$logmnr_contents
     where seg_name = 'EMP';

12134756 scott insert (...) into emp;
12156488 scott delete from emp where empno = ...
12849455 scott update emp set mgr = ...

This will return the results of the SQL statement without the column
headings. The columns that you are really going to want to query are the
"sql_undo" and "sql_redo" values, because they give the transaction details
and syntax.

5. PLACEHOLDERS
===============

In order to allow users to be able to query directly on specific data values,
there are up to five PLACEHOLDERs included at the end of v$logmnr_contents.
When enabled, a user can query on the specific BEFORE and AFTER values of
a specific field, rather than running a %LIKE% query against the
SQL_UNDO/SQL_REDO fields.
This is implemented via an external file called logmnr.opt. ( see the Supplied
Packages manual entry on dbms_logmnr for further details) The file must exist
in the same directory as the dictionary file used, and contains the prototype
mappings of the PHx fields to the fields in the table being analyzed.

Example entry
=============
colmap = SCOTT EMP ( EMPNO, 1, ENAME, 2, SAL, 3 );

In the above example, when a redo record is encountered for the SCOTT.EMP
table, the full statement redo and undo information populates the SQL_REDO and
SQL_UNDO columns respectively; however, the PH3_NAME, PH3_REDO and PH3_UNDO
columns will also be populated with 'SAL', <NEWVALUE> and <OLDVALUE>
respectively, which means that the analyst can query in the form:

SELECT * FROM V$LOGMNR_CONTENTS
WHERE SEG_NAME = 'EMP'
AND PH3_NAME = 'SAL'
AND PH3_REDO = 1000000;

The returned PH3_UNDO column would contain the value prior to the update. This
enables much more efficient queries to be run against the V$LOGMNR_CONTENTS
view, and if, for instance, a CTAS was issued to store a physical copy, the
column can be indexed.
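
E.g. ( a sketch; the table and index names are made up for illustration ):

SQL> create table logmnr_sal_audit as
     select * from v$logmnr_contents;

SQL> create index logmnr_sal_audit_i1 on logmnr_sal_audit ( ph3_redo );

The copied table persists after the Log Miner session ends, so the
placeholder columns can be queried repeatedly and cheaply.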





Copyright © 2007, All rights reserved by RudnikConsulting Inc