What Does a COMMIT Do?
Now, earlier I mentioned that we were using a Java program and not PL/SQL for a reason—and that reason is a PL/SQL commit-time optimization. I said that our call to LGWR is by default a synchronous one and that we wait for it to complete its write. That is true in every version of the Oracle database for every programmatic language except PL/SQL.
The PL/SQL engine, realizing that the client does not know whether or not a COMMIT has happened in the PL/SQL routine until the PL/SQL routine is completed, does an asynchronous commit. It does not wait for LGWR to complete; rather, it returns from the COMMIT call immediately.
However, when the PL/SQL routine is completed, when we return from the database to the client, the PL/SQL routine will wait for LGWR to complete any of the outstanding COMMITs. So, if you commit 100 times in PL/SQL and then return to the client, you will likely find you waited for LGWR once—not 100 times—due to this optimization. Does this imply that committing frequently in PL/SQL is a good or OK idea? No, not at all—just that it is not as bad an idea as it is in other languages. The guiding rule is to commit when your logical unit of work is complete—not before.
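To make that guiding rule concrete, here is a sketch (the table T and the loop bounds are illustrative only) contrasting a commit inside the loop with a single commit issued when the logical unit of work is done:

```sql
begin
    for i in 1 .. 1000
    loop
        insert into t values ( i );
        -- commit;  -- committing here fragments the transaction and, in any
                    -- language other than PL/SQL, would mean 1,000 synchronous
                    -- waits on LGWR
    end loop;
    commit;  -- commit once, when the logical unit of work is complete
end;
/
```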
Note This commit-time optimization in PL/SQL may be suspended when you are performing distributed transactions or using Data Guard in maximum availability mode. Since there are two participants, PL/SQL must wait for the commit to actually be complete before continuing. It can also be suspended by directly invoking COMMIT WORK WRITE WAIT in PL/SQL.
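If you need a specific commit to be durable before the PL/SQL routine returns, for example before signaling another process that the data is on disk, you can request the synchronous behavior explicitly. A minimal sketch (the table T is assumed to exist):

```sql
begin
    insert into t values ( 1 );
    commit work write wait;  -- forces a synchronous wait for LGWR to finish
                             -- its write, even inside PL/SQL
end;
/
```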
To demonstrate that a COMMIT is a “flat response time” operation, we’ll generate varying amounts of redo and time the INSERTs and COMMITs. As we do these INSERTs and COMMITs, we’ll measure the amount of redo our session generates using this small utility function:
SQL> create or replace function get_stat_val( p_name in varchar2 ) return number
as
    l_val number;
begin
    select b.value
      into l_val
      from v$statname a, v$mystat b
     where a.statistic# = b.statistic#
       and a.name = p_name;
    return l_val;
end;
/
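Once created, the function can be called from SQL or PL/SQL. For example, to see how much redo your current session has generated so far (the value returned will of course vary from session to session):

```sql
SQL> select get_stat_val( 'redo size' ) from dual;
```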
Note The owner of the previous function will need to have been directly granted the SELECT privilege on the V$ views V_$STATNAME and V_$MYSTAT.
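Since the grant must be direct (a privilege received via a role is not visible in definer's rights PL/SQL), the grants would look something like the following, run as a suitably privileged user. The grantee SCOTT here is just an example:

```sql
SQL> grant select on sys.v_$statname to scott;
SQL> grant select on sys.v_$mystat   to scott;
```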
Drop the table T (if it exists) and create an empty table T of the same structure as BIG_TABLE:
SQL> drop table t purge;
SQL> create table t
as
select *
  from big_table
 where 1=0;
Table created.
Note Directions on how to create and populate the BIG_TABLE table used in many examples are in the “Setting Up Your Environment” section at the very front of this book.
And we’ll measure the CPU and elapsed time used to commit our transaction using the DBMS_UTILITY package routines GET_CPU_TIME and GET_TIME. The actual PL/SQL block used to generate the workload and report on it is
SQL> set serverout on
SQL> declare
    l_redo number;
    l_cpu  number;
    l_ela  number;
begin
    dbms_output.put_line
    ( '-' || ' Rows' || ' Redo' || ' CPU' || ' Elapsed' );
    for i in 1 .. 6
    loop
        l_redo := get_stat_val( 'redo size' );
        insert into t select * from big_table where rownum <= power(10,i);
        l_cpu := dbms_utility.get_cpu_time;
        l_ela := dbms_utility.get_time;
        commit work write wait;
        dbms_output.put_line
        ( '-' ||
          to_char( power( 10, i ), '9,999,999' ) ||
          to_char( (get_stat_val('redo size')-l_redo), '999,999,999' ) ||
          to_char( (dbms_utility.get_cpu_time-l_cpu), '999,999' ) ||
          to_char( (dbms_utility.get_time-l_ela), '999,999' ) );
    end loop;
end;
/
Note Times are in hundredths of seconds. Your results may vary depending on variables such as the number of records in BIG_TABLE, size of your log buffer, size and number of redo logs, number of log writer processes, and I/O subsystem.
As you can see, as we generate varying amounts of redo, from about 6500 bytes to 67MB, the difference in time to COMMIT is not measurable using a timer with a one-hundredth of a second resolution. As we were processing and generating the redo log, LGWR was constantly flushing our buffered redo information to disk in the background. So, when we generated 67MB of redo log information, LGWR was busy flushing every 1MB or so. When it came to the COMMIT, there wasn't much left to do—not much more than when we created ten rows of data. You should expect to see similar (but not exactly the same) results, regardless of the amount of redo generated.
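One way to observe LGWR's background flushing for yourself is to watch the "redo writes" system statistic around a large, uncommitted INSERT. This is a sketch only; it assumes SELECT access on V$SYSSTAT, and since V$SYSSTAT is instance-wide, other sessions' activity is included in the counts:

```sql
SQL> select value from v$sysstat where name = 'redo writes';
SQL> insert into t select * from big_table where rownum <= 1000000;
SQL> select value from v$sysstat where name = 'redo writes';
```

The second query should show a noticeably larger value than the first, even though no COMMIT was issued in between: LGWR was writing our redo to disk all along.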