Friday, October 14, 2011

How To Use the LogMiner Utility in Oracle 10g

LogMiner is an Oracle utility. Using LogMiner, one can query the contents of online redo log files and archived log files. It can be used as a powerful data audit tool, as well as a tool for sophisticated data analysis. LogMiner helps the DBA find changed records in redo log files by using a set of PL/SQL procedures and functions. LogMiner extracts all DDL and DML activity from the redo log files for viewing by a DBA via the dynamic performance view V$LOGMNR_CONTENTS.

LogMiner Configuration :
There are three basic objects in a LogMiner configuration. They are as follows :

1.) Source Database : The source database is the database that produces all the redo log files that we want LogMiner to analyze.

2.) LogMiner Dictionary : The LogMiner dictionary allows LogMiner to provide table and column names, instead of internal object IDs, when it presents the redo log data that we request. LogMiner uses the dictionary to translate internal object identifiers and datatypes to object names and external data formats. Without a dictionary, LogMiner returns internal object IDs and presents data as binary data. For example, consider the following SQL statement:

SQL> insert into hr.jobs(job_id, job_title, min_salary, max_salary) values ('TT_WT', 'Technical Writer', 4000, 11000);

Without the dictionary, LogMiner will display:

SQL> insert into "UNKNOWN"."OBJ# 45522"("COL 1","COL 2","COL 3","COL 4") values (HEXTORAW('45465f4748'), HEXTORAW('546563686e6963616c20577269746572'), HEXTORAW('c229'), HEXTORAW('c3020b'));

LogMiner Dictionary Options :
LogMiner requires a dictionary to translate object IDs into object names when it returns redo data to you. LogMiner gives three options for supplying the dictionary:

i.) Using the Online Catalog : Oracle recommends this option when you will have access to the source database from which the redo log files were created and when no changes to the column definitions in the tables of interest are anticipated. This is the most efficient and easy-to-use option. To direct LogMiner to use the dictionary currently in use for the database, specify the online catalog as the dictionary source when starting LogMiner, as follows (a minimal end-to-end sketch of this option appears after the three options below):

SQL> execute dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);

ii.) Extracting a LogMiner Dictionary to the Redo Log Files : Oracle recommends this option when we do not expect to have access to the source database from which the redo log files were created, or if we anticipate that changes will be made to the column definitions in the tables of interest. To extract a LogMiner dictionary to the redo log files, the database must be open and in ARCHIVELOG mode, and archiving must be enabled. While the dictionary is being extracted to the redo log stream, no DDL statements can be executed; therefore, the dictionary extracted to the redo log files is guaranteed to be consistent. To extract dictionary information to the redo log files, use the DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_REDO_LOGS option. Do not specify a filename or location.

SQL> execute dbms_logmnr_d.build(options => dbms_logmnr_d.store_in_redo_logs);

iii.) Extracting the LogMiner Dictionary to a Flat File : This option is maintained for backward compatibility with previous releases. It does not guarantee transactional consistency. Oracle recommends that we use either the online catalog or extract the dictionary from the redo log files instead. Execute the PL/SQL procedure DBMS_LOGMNR_D.BUILD, specifying a filename for the dictionary and a directory path name for the file. This procedure creates the dictionary file. For example, enter the following to create the file dictionary.ora in c:\dictionary:

SQL> execute dbms_logmnr_d.build('dictionary.ora', 'c:\dictionary', dbms_logmnr_d.store_in_flat_file);
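To make option i concrete, here is a minimal sketch of a complete session against the online catalog. The archived log filename is hypothetical; substitute one of your own from v$logfile or v$archived_log:

SQL> execute dbms_logmnr.add_logfile(logfilename => 'D:\ORACLE\ARCHIVE\ARC00042_0001.001', options => dbms_logmnr.new);
SQL> execute dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);
SQL> select scn, operation, sql_redo from v$logmnr_contents where seg_owner = 'SCOTT';
SQL> execute dbms_logmnr.end_logmnr;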
3.) Redo Log Files : The redo log files contain the changes made to the database or the database dictionary.

In this post I will demonstrate "Extracting the LogMiner Dictionary to a Flat File". This case requires a little bit of setup.

Step 1 : Make a folder, say "dict"

C:\>mkdir c:\dict

Step 2 : Add the parameter utl_file_dir to the initialization file

This specifies the location where the dictionary will be created.

SQL> alter system set utl_file_dir='C:\dict' scope=spfile;
SQL> shut immediate
SQL> startup

Step 3 : Enable supplemental logging

Normally Oracle records only the change vector in the redo log files, i.e. just the information that is required to reconstruct the operation at recovery time. If we want additional information in the redo log, then we need to enable supplemental logging prior to generating the log files that will be analyzed by LogMiner. Therefore, at the very least, we will enable minimal supplemental logging, as the following SQL statements show:

SQL> select supplemental_log_data_min from v$database;

SUPPLEME
--------
NO

/* Minimal supplemental logging is not enabled. */

SQL> alter database add supplemental log data;
Database altered.

SQL> select supplemental_log_data_min from v$database;

SUPPLEME
--------
YES

/* Minimal supplemental logging is now enabled. */

SQL> alter system switch logfile;
System altered.
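Minimal supplemental logging is all this demo needs. As an optional aside (a sketch, not required for the steps below): if you want LogMiner's reconstructed SQL to identify rows by primary key rather than by ROWID, for instance when the statements may be replayed on a different database where ROWIDs differ, identification key logging can also be enabled:

SQL> alter database add supplemental log data (primary key) columns;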
Step 4 : Check the current redo logfile

SQL> select g.group#, g.status, m.member from v$log g, v$logfile m where g.group# = m.group# and g.status = 'CURRENT';

GROUP# STATUS   MEMBER
------ -------- -------------------------------------------------
     3 CURRENT  D:\ORACLE\PRODUCT\10.2.0\ORADATA\NOIDA\REDO03.LOG

/* We start fresh with a new log file, which is group 3. */

Step 5 : Record the dictionary information into the file

SQL> create table scott.test_logmnr (id number, name varchar2(10));
Table created.

SQL> begin
       dbms_logmnr_d.build (
         dictionary_filename => 'dictionary.dic',
         dictionary_location => 'C:\dict',
         options             => dbms_logmnr_d.store_in_flat_file
       );
     end;
     /
PL/SQL procedure successfully completed.

/* This has recorded the dictionary information into the file "C:\dict\dictionary.dic". */

Step 6 : Make a few changes in a schema

Now let's make a few user data changes by logging in as user SCOTT.

SQL> conn scott/tiger@noida
Connected.
SQL> insert into test_logmnr values (1, 'TEST1');
1 row created.
SQL> insert into test_logmnr values (2, 'TEST2');
1 row created.
SQL> commit;
Commit complete.
SQL> update test_logmnr set name = 'TEST';
2 rows updated.
SQL> commit;
Commit complete.
SQL> delete from test_logmnr;
2 rows deleted.
SQL> commit;
Commit complete.

Step 7 : Connect to the SYS schema and add the redo logfile for LogMiner analysis

After these changes, let's log back in as SYSDBA and start a LogMiner session.

SQL> conn / as sysdba
Connected.
SQL> select g.group#, g.status, m.member from v$log g, v$logfile m where g.group# = m.group# and g.status = 'CURRENT';

GROUP# STATUS   MEMBER
------ -------- -------------------------------------------------
     3 CURRENT  D:\ORACLE\PRODUCT\10.2.0\ORADATA\NOIDA\REDO03.LOG

/* Log group 3 is still current; that means all of the changes we made are in this redo group. */

SQL> begin
       dbms_logmnr.add_logfile (
         logfilename => 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\NOIDA\REDO03.LOG',
         options     => dbms_logmnr.new
       );
     end;
     /
PL/SQL procedure successfully completed.

/* DBMS_LOGMNR.ADD_LOGFILE builds up the list of redo log files for LogMiner analysis. The first file is added with options => dbms_logmnr.new and the rest are added with options => dbms_logmnr.addfile. */

SQL> select filename from v$logmnr_logs;

FILENAME
--------------------------------------------------
D:\ORACLE\PRODUCT\10.2.0\ORADATA\NOIDA\REDO03.LOG

/* The dictionary view v$logmnr_logs contains the list of log files that have been added via DBMS_LOGMNR.ADD_LOGFILE. */
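In this demo all of the changes fit in a single redo log, so one ADD_LOGFILE call is enough. Had the changes spanned a log switch, each additional file would be appended to the same list with dbms_logmnr.addfile; the archived log filename below is hypothetical:

SQL> begin
       dbms_logmnr.add_logfile (
         logfilename => 'D:\ORACLE\ARCHIVE\ARC00042_0001.001',  -- hypothetical archived log
         options     => dbms_logmnr.addfile
       );
     end;
     /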
Step 8 : Populate v$logmnr_contents

SQL> begin
       dbms_logmnr.start_logmnr (
         dictfilename => 'C:\dict\dictionary.dic',
         options      => dbms_logmnr.print_pretty_sql +
                         dbms_logmnr.no_sql_delimiter +
                         dbms_logmnr.ddl_dict_tracking
       );
     end;
     /
PL/SQL procedure successfully completed.

/* DBMS_LOGMNR.START_LOGMNR starts a LogMiner session. It populates the dictionary view v$logmnr_contents with the contents of the log files in the list we built with DBMS_LOGMNR.ADD_LOGFILE. v$logmnr_contents is accessible only to the session that started LogMiner, and only until DBMS_LOGMNR.END_LOGMNR is called. Many options can be passed to START_LOGMNR that affect the data representation in v$logmnr_contents: dbms_logmnr.print_pretty_sql formats the SQL statements to enhance readability; dbms_logmnr.no_sql_delimiter omits the ";" from the end of the SQL statements, which is useful when the SQL is meant to be re-executed in PL/SQL routines; dbms_logmnr.ddl_dict_tracking tracks DDL statements in the log files. */

Step 9 : Copy the contents of v$logmnr_contents to a user table

SQL> drop table myLogAnalysis;
Table dropped.

SQL> create table myLogAnalysis as select * from v$logmnr_contents;
Table created.

/* It is always better to copy the contents of v$logmnr_contents to a user table and then perform the analysis there, since querying v$logmnr_contents is quite expensive. Moreover, the user table can be indexed for better query performance. */

Step 10 : End the LogMiner session

SQL> begin
       dbms_logmnr.end_logmnr();
     end;
     /
PL/SQL procedure successfully completed.

/* DBMS_LOGMNR.END_LOGMNR ends the LogMiner session; v$logmnr_contents is no longer accessible, but our user table myLogAnalysis, a copy of v$logmnr_contents, is still available. */

The query below shows the changes made by the user SCOTT where the segment is either TEST_LOGMNR or there is no segment at all (i.e. transaction start and end statements). The output shows the system change number for each change, the segment on which the change was made, the SQL statement to redo the change, and the SQL statement to undo it.

SQL> select scn, seg_name, sql_redo, sql_undo
     from myLogAnalysis
     where username = 'SCOTT'
     and (seg_owner is null or seg_owner = 'SCOTT');

   SCN SEG_NAME    SQL_REDO                            SQL_UNDO
------ ----------- ----------------------------------- -----------------------------------
639968 TEST_LOGMNR create table scott.test_logmnr
                   (id number,
                    name varchar2(10))

640039             set transaction read write

640039 TEST_LOGMNR insert into "SCOTT"."TEST_LOGMNR"   delete from "SCOTT"."TEST_LOGMNR"
                   values                              where
                      "ID" = 1,                           "ID" = 1 and
                      "NAME" = 'TEST1'                    "NAME" = 'TEST1' and
                                                          ROWID = 'AAAM7vAAEAAAALcAAA'

640041 TEST_LOGMNR insert into "SCOTT"."TEST_LOGMNR"   delete from "SCOTT"."TEST_LOGMNR"
                   values                              where
                      "ID" = 2,                           "ID" = 2 and
                      "NAME" = 'TEST2'                    "NAME" = 'TEST2' and
                                                          ROWID = 'AAAM7vAAEAAAALcAAB'

640044             commit

640047             set transaction read write

640047 TEST_LOGMNR update "SCOTT"."TEST_LOGMNR"        update "SCOTT"."TEST_LOGMNR"
                   set                                 set
                      "NAME" = 'TEST'                     "NAME" = 'TEST1'
                   where                               where
                      "NAME" = 'TEST1' and                "NAME" = 'TEST' and
                      ROWID = 'AAAM7vAAEAAAALcAAA'        ROWID = 'AAAM7vAAEAAAALcAAA'

640047 TEST_LOGMNR update "SCOTT"."TEST_LOGMNR"        update "SCOTT"."TEST_LOGMNR"
                   set                                 set
                      "NAME" = 'TEST'                     "NAME" = 'TEST2'
                   where                               where
                      "NAME" = 'TEST2' and                "NAME" = 'TEST' and
                      ROWID = 'AAAM7vAAEAAAALcAAB'        ROWID = 'AAAM7vAAEAAAALcAAB'

640050             commit

640052             set transaction read write

640058 TEST_LOGMNR delete from "SCOTT"."TEST_LOGMNR"   insert into "SCOTT"."TEST_LOGMNR"
                   where                               values
                      "ID" = 1 and                        "ID" = 1,
                      "NAME" = 'TEST' and                 "NAME" = 'TEST'
                      ROWID = 'AAAM7vAAEAAAALcAAA'

640058 TEST_LOGMNR delete from "SCOTT"."TEST_LOGMNR"   insert into "SCOTT"."TEST_LOGMNR"
                   where                               values
                      "ID" = 2 and                        "ID" = 2,
                      "NAME" = 'TEST' and                 "NAME" = 'TEST'
                      ROWID = 'AAAM7vAAEAAAALcAAB'

640066             commit

We have more options to filter the data according to our requirements. Here are a few examples:

Filtering Data by Time : To filter data by time, set the STARTTIME and ENDTIME parameters. The procedure expects date values; use the TO_DATE function to specify date and time, as in this example:

SQL> execute dbms_logmnr.start_logmnr(dictfilename => 'c:\dict\dictionary.dic', STARTTIME => TO_DATE('20-Jul-2011 04:30:00', 'DD-MON-YYYY HH:MI:SS'), ENDTIME => TO_DATE('20-Jul-2011 04:45:00', 'DD-MON-YYYY HH:MI:SS'));

If no STARTTIME or ENDTIME parameters are specified, the entire redo log list is read from start to end for each SELECT statement issued. Timestamps should not be used to infer the ordering of redo records; we can infer the order of redo records by using the SCN.

Filtering Data by SCN : To filter data by SCN (system change number), use the STARTSCN and ENDSCN parameters, as in this example:

SQL> execute dbms_logmnr.start_logmnr(dictfilename => 'c:\dict\dictionary.dic', STARTSCN => 100, ENDSCN => 150);

The STARTSCN and ENDSCN parameters override the STARTTIME and ENDTIME parameters in situations where all are specified. If no STARTSCN or ENDSCN parameters are specified, the entire redo log list is read from start to end for each SELECT statement issued.
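One more filtering option worth knowing, shown here as a sketch only (the SCN range is borrowed from the demo output above and the dictionary filename from Step 5): dbms_logmnr.committed_data_only restricts v$logmnr_contents to rows belonging to committed transactions, filtering out rolled-back and in-flight changes, and ordering by SCN gives the true sequence of changes:

SQL> execute dbms_logmnr.start_logmnr(dictfilename => 'c:\dict\dictionary.dic', STARTSCN => 639968, ENDSCN => 640066, options => dbms_logmnr.committed_data_only);

SQL> select scn, operation, sql_redo from v$logmnr_contents order by scn;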
Accessing LogMiner Information : LogMiner information is contained in the following views. We can use SQL to query them as we would any other view.

V$LOGMNR_CONTENTS : Shows changes made to user and table information.

V$LOGMNR_DICTIONARY : Shows information about the LogMiner dictionary file, provided the dictionary was created using the STORE_IN_FLAT_FILE option. The information shown includes the database name and status information.

V$LOGMNR_LOGS : Shows information about the specified redo logs. There is one row for each redo log.

V$LOGMNR_PARAMETERS : Shows information about optional LogMiner parameters, including starting and ending system change numbers (SCNs) and starting and ending times.
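As a quick sketch of querying these views (this assumes a LogMiner session is currently active in the same session; the column names are the usual 10g ones, so verify them with DESC on your release):

SQL> select filename, low_scn, next_scn from v$logmnr_logs;
SQL> select start_scn, end_scn from v$logmnr_parameters;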