Exadata Smart Flash Logging Explained

I’ve seen some posts around the blogosphere where people attempt to explain (or should I say guess at) how Exadata Smart Flash Logging works, and most of them are wrong. Hopefully this post will help clear up some of the misconceptions out there.

The following is an excerpt from the paper entitled “Exadata Smart Flash Cache Features and the Oracle Exadata Database Machine” that goes into technical detail on the Exadata Smart Flash Logging feature.

Smart Flash Logging works as follows. When receiving a redo log write request, Exadata will do
parallel writes to the on-disk redo logs as well as a small amount of space reserved in the flash
hardware. When either of these writes has successfully completed the database will be
immediately notified of completion. If the disk drives hosting the logs experience slow response
times, then the Exadata Smart Flash Cache will provide a faster log write response time.
Conversely, if the Exadata Smart Flash Cache is temporarily experiencing slow response times
(e.g., due to wear leveling algorithms), then the disk drive will provide a faster response time.
Given the speed advantage the Exadata flash hardware has over disk drives, log writes should be
written to Exadata Smart Flash Cache, almost all of the time, resulting in very fast redo write
performance. This algorithm will significantly smooth out redo write response times and provide
overall better database performance.

The Exadata Smart Flash Cache is not used as a permanent store for redo data – it is just a
temporary store for the purpose of providing fast redo write response time. The Exadata Smart
Flash Cache is a cache for storing redo data until this data is safely written to disk. The Exadata
Storage Server comes with a substantial amount of flash storage. A small amount is allocated for
database logging and the remainder will be used for caching user data. The best practices and
configuration of redo log sizing, duplexing and mirroring do not change when using Exadata
Smart Flash Logging. Smart Flash Logging handles all crash and recovery scenarios without
requiring any additional or special administrator intervention beyond what would normally be
needed for recovery of the database from redo logs. From an end user perspective, the system
behaves in a completely transparent manner and the user need not be aware that flash is being
used as a temporary store for redo. The only behavioral difference will be consistently low
latencies for redo log writes.

By default, 512 MB of the Exadata flash is allocated to Smart Flash Logging. Relative to the 384
GB of flash in each Exadata cell this is an insignificant investment for a huge performance
benefit. This default allocation will be sufficient for most situations. Statistics are maintained to
indicate the number and frequency of redo writes serviced by flash and those that could not be
serviced, due to, for example, insufficient flash space being allocated for Smart Flash Logging.
For a database with a high redo generation rate, or when many databases are consolidated on to
one Exadata Database Machine, the size of the flash allocated to Smart Flash Logging may need
to be enlarged. In addition, for consolidated deployments, the Exadata I/O Resource Manager
(IORM) has been enhanced to enable or disable Smart Flash Logging for the different databases
running on the Database Machine, reserving flash for the most performance critical databases.
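
If you want to see how Smart Flash Logging is behaving on your own system, the storage cells expose this through CellCLI. The commands below are a hedged sketch from memory: the FLASHLOG object, the FL_* metric names, and the IORM flashLog directive attribute should all be verified against your Exadata Storage Server software version before relying on them.

# show the flash log configuration on a cell (512 MB by default)
cellcli -e "list flashlog detail"

# flash log metrics, e.g. how often flash vs. disk completed the redo write first
# (metric names such as FL_FLASH_FIRST and FL_DISK_FIRST are assumptions to verify)
cellcli -e "list metriccurrent where objectType = 'FLASHLOG'"

# for consolidated environments, IORM can enable/disable Smart Flash Logging per database
# (the flashLog directive attribute and this dbplan syntax are a sketch; check your release's docs)
cellcli -e "alter iormplan dbplan=((name=LOWPRIO, flashLog=off), (name=other, flashLog=on))"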

Reading Active SQL Monitor Reports Offline

Active SQL Monitor Reports require a few files from the Internet to render the report in the browser. That’s no big deal if you have an Internet connection, but what if you do not? Generally, if you load an Active SQL Monitor Report without an Internet connection, you will just see an empty page in your browser. The little trick I use to work around this is to keep a local copy of the required swf and javascript files. Here is how I do that on my Mac, assuming a couple of things:

  1. You know how to turn on the web server/Web Sharing in System Preferences > Sharing
  2. You know how to get wget (or use curl [part of OS X]; a rough curl variant is sketched after the wget commands below)
# assuming you already have the web server running and have wget
cd /Library/WebServer/Documents
wget --mirror http://download.oracle.com/otn_software/emviewers/scripts/flashver.js
wget --mirror http://download.oracle.com/otn_software/emviewers/scripts/loadswf.js
wget --mirror http://download.oracle.com/otn_software/emviewers/scripts/document.js
wget --mirror http://download.oracle.com/otn_software/emviewers/sqlmonitor/11/sqlmonitor.swf
ln -s download.oracle.com/otn_software otn_software
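
If you prefer curl, here is a rough equivalent of the wget commands above. It is just a sketch of the same idea, using curl's --create-dirs flag to build the local directory layout that wget --mirror produces (finish with the same ln -s as above):

# curl variant of the wget commands above (run from /Library/WebServer/Documents)
curl --create-dirs -o download.oracle.com/otn_software/emviewers/scripts/flashver.js http://download.oracle.com/otn_software/emviewers/scripts/flashver.js
curl --create-dirs -o download.oracle.com/otn_software/emviewers/scripts/loadswf.js http://download.oracle.com/otn_software/emviewers/scripts/loadswf.js
curl --create-dirs -o download.oracle.com/otn_software/emviewers/scripts/document.js http://download.oracle.com/otn_software/emviewers/scripts/document.js
curl --create-dirs -o download.oracle.com/otn_software/emviewers/sqlmonitor/11/sqlmonitor.swf http://download.oracle.com/otn_software/emviewers/sqlmonitor/11/sqlmonitor.swf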

Now edit /etc/hosts and add

127.0.0.1 download.oracle.com

Now when you load an Active SQL Monitor Report, it will access those files from your local web server. Don’t forget to undo the /etc/hosts entry once you are back on the Internet. Also, keep in mind that these files may change, so re-download them from time to time.

Option 2 – Firefox

If Firefox is your browser of choice, then there is another option. Having successfully rendered an Active SQL Monitor Report while online, you can work offline by using File > Work Offline. This should allow the use of a cached version of the swf and javascript files.

Creating Optimizer Trace Files

Many Oracle DBAs are probably familiar with what Optimizer trace files are and likely know how to create them. When I say “Optimizer trace”, more than likely you think of event 10053, right? Then SQL like this is probably familiar:

alter session set tracefile_identifier='MY_10053';
alter session set events '10053 trace name context forever';
select /* hard parse comment */ * from emp where ename = 'SCOTT';
alter session set events '10053 trace name context off';
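
The resulting trace file ends up in the diagnostic trace directory with the TRACEFILE_IDENTIFIER string embedded in its name. One simple way to locate it from the same session in 11g (this lookup query is my addition, not part of the original commands):

-- full path of the current session's trace file
select value from v$diag_info where name = 'Default Trace File';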

In 11g, a new diagnostic events infrastructure was implemented, and there are various levels of debug output that you can control for SQL compilation. ORADEBUG shows us the hierarchy.

SQL> oradebug doc component SQL_Compiler

  SQL_Compiler                    SQL Compiler
    SQL_Parser                    SQL Parser (qcs)
    SQL_Semantic                  SQL Semantic Analysis (kkm)
    SQL_Optimizer                 SQL Optimizer
      SQL_Transform               SQL Transformation (kkq, vop, nso)
        SQL_MVRW                  SQL Materialized View Rewrite
        SQL_VMerge                SQL View Merging (kkqvm)
        SQL_Virtual               SQL Virtual Column (qksvc, kkfi)
      SQL_APA                     SQL Access Path Analysis (apa)
      SQL_Costing                 SQL Cost-based Analysis (kko, kke)
        SQL_Parallel_Optimization SQL Parallel Optimization (kkopq)
    SQL_Code_Generator            SQL Code Generator (qka, qkn, qke, kkfd, qkx)
      SQL_Parallel_Compilation    SQL Parallel Compilation (kkfd)
      SQL_Expression_Analysis     SQL Expression Analysis (qke)
      SQL_Plan_Management         SQL Plan Managment (kkopm)
    MPGE                          MPGE (qksctx)

My personal preference for Optimizer tracing is to stick with the most detailed level, in this case SQL_Compiler vs. just SQL_Optimizer.

Given that, we can do the following in 11g:

alter session set tracefile_identifier='MY_SQL_Compiler_TRACE';
alter session set events 'trace [SQL_Compiler.*]';
select /* hard parse comment */ * from emp where ename = 'SCOTT';
alter session set events 'trace [SQL_Compiler.*] off';
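
If you only want the narrower optimizer output, the same event syntax works with the SQL_Optimizer component from the hierarchy above (a variation I am adding purely for illustration):

alter session set tracefile_identifier='MY_SQL_Optimizer_TRACE';
alter session set events 'trace [SQL_Optimizer.*]';
select /* hard parse comment */ * from emp where ename = 'SCOTT';
alter session set events 'trace [SQL_Optimizer.*] off';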

One of the big drawbacks of using the 10053 event or the SQL_Compiler event is that two things need to happen: 1) you have to have the SQL text, and 2) a hard parse needs to take place (so there is actually SQL compilation). What if you want to get an Optimizer trace file for a statement that has already been executed in your database and is in the cursor cache? Chances are you know how to do #1 & #2, but it’s kind of a pain, right? Even more of a pain if the query is pages of SQL or you don’t have the application schema password, etc.
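
If the statement you care about is already in the cursor cache, its SQL_ID is easy to find, and (as described next) from 11.2 onward that SQL_ID is really all you need. A quick illustrative query, where the LIKE predicate is just an example to adjust for your own statement:

-- find the cursor you want traced in the shared pool
select sql_id, child_number, sql_text
from v$sql
where sql_text like '%hard parse comment%';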

In 11gR2 (11.2) a procedure called DUMP_TRACE was added to DBMS_SQLDIAG. The DUMP_TRACE procedure didn’t make it into the DBMS_SQLDIAG documentation, but here is the declaration:

-- $ORACLE_HOME/rdbms/admin/dbmsdiag.sql
-------------------------------- dump_trace ---------------------------------
-- NAME: 
--     dump_trace - Dump Optimizer Trace
--
-- DESCRIPTION:
--     This procedure dumps the optimizer or compiler trace for a given SQL
--     statement identified by a SQL ID and an optional child number. 
--
-- PARAMETERS:
--     p_sql_id          (IN)  -  identifier of the statement in the cursor 
--                                cache
--     p_child_number    (IN)  -  child number
--     p_component       (IN)  -  component name
--                                Valid values are Optimizer and Compiler
--                                The default is Optimizer
--     p_file_id         (IN)  -  file identifier
------------------------------------------------------------------------------
PROCEDURE dump_trace(
              p_sql_id         IN varchar2,
              p_child_number   IN number   DEFAULT 0,
              p_component      IN varchar2 DEFAULT 'Optimizer',
              p_file_id        IN varchar2 DEFAULT null);

As you can see, you can specify either Optimizer or Compiler as the component name, the equivalents of the SQL_Optimizer and SQL_Compiler components respectively. Conveniently, you can use P_FILE_ID to add a trace file identifier to the resulting trace file. The four commands used above can then be simplified to a single call. For example:

SQL> begin
  2    dbms_sqldiag.dump_trace(p_sql_id=>'6yf5xywktqsa7',
  3                            p_child_number=>0,
  4                            p_component=>'Compiler',
  5                            p_file_id=>'MY_TRACE_DUMP');
  6  end;
  7  /

PL/SQL procedure successfully completed.

If we look at the trace file, we can see that DBMS_SQLDIAG.DUMP_TRACE added a /* SQL Analyze(1443,0) */ comment and did the hard parse for us (Thanks!).

Enabling tracing for cur#=9 sqlid=as9bkjstppk0a recursive
Parsing cur#=9 sqlid=as9bkjstppk0a len=91 
sql=/* SQL Analyze(1443,0) */ select /* hard parse comment */ * from emp where ename = 'SCOTT'
End parsing of cur#=9 sqlid=as9bkjstppk0a
Semantic Analysis cur#=9 sqlid=as9bkjstppk0a
OPTIMIZER INFORMATION

******************************************
----- Current SQL Statement for this session (sql_id=as9bkjstppk0a) -----
/* SQL Analyze(1443,0) */ select /* hard parse comment */ * from emp where ename = 'SCOTT'
----- PL/SQL Stack -----
----- PL/SQL Call Stack -----
  object      line  object
  handle    number  name
0x16fd3a368       145  package body SYS.DBMS_SQLTUNE_INTERNAL
0x16fd3a368     12085  package body SYS.DBMS_SQLTUNE_INTERNAL
0x18e7fead8      1229  package body SYS.DBMS_SQLDIAG
0x16fdbddd0         1  anonymous block
*******************************************

Hopefully you don’t find yourself having to get too many Optimizer Trace Dumps, but if you do and you’re on 11.2, the hard work has been done for you.

Footnote:
Due to a bug in DBMS_ASSERT, you will need to specify a value for P_COMPONENT. If you leave it NULL, it will error like so:

SQL> begin
  2    dbms_sqldiag.dump_trace(p_sql_id=>'6yf5xywktqsa7',
  3                            p_child_number=>0,
  4                            p_component=>NULL,
  5                            p_file_id=>'MY_TRACE_DUMP');
  6  end;
  7  /
begin
*
ERROR at line 1:
ORA-44003: invalid SQL name
ORA-06512: at 'SYS.DBMS_ASSERT', line 160
ORA-06512: at 'SYS.DBMS_SQLDIAG', line 1182
ORA-06512: at line 2

Crowdsourcing Active SQL Monitor Reports

As my loyal readers will know, I have been a big (maybe BIG) fan of the SQL Monitor Report since its introduction in 11g. It would not surprise me if I have looked at over 1,000 SQL Monitor Reports in the past 4+ years, so I’m pretty familiar with these bad boys. Since I find them so valuable (and many customers are now upgrading to 11g), I’ve decided to do a deep dive into the SQL Monitor Report at both Oracle OpenWorld 2011 in October and UKOUG in December. I think I have some pretty interesting and educational examples, but for anyone willing to share Active SQL Monitor Reports from their system, I thought I would extend the offer to have them publicly discussed at either of these sessions (or even in a future blog post). Sound cool? I think so, though I may be slightly biased.

The Rules & Requirements

Here are some rules, requirements, restrictions, etc.:

  1. The SQL Monitor Report requires Oracle Database 11g and the Oracle Database Tuning Pack.
  2. By sending me your SQL Monitor Report you implicitly grant permission to me to use it however I want (in my sessions, on my blog, on my refrigerator, etc.).
  3. If you want to scrub it (remove the SQL Text, rename tables, etc.), feel free, but if you make the report unusable, it will end up in the bit bucket.
  4. I will only consider SQL Monitor Reports that are of type EM or ACTIVE, not TEXT or HTML or XML.
  5. I prefer the statement uses Parallel Execution, but will accept serial statements nonetheless.
  6. Active SQL Monitor Reports can be saved either from the EM/DB Console SQL Monitoring page or via SQL*Plus (see the script below).
  7. Once you save your Active SQL Monitor Report, validate it is functional from your browser (don’t send me broken stuff).

In order to participate in this once-in-a-lifetime offer, just email the Active SQL Monitor Report file as an attachment to sqlmon@structureddata.org. If you are going to be attending my session at either OOW11 or UKOUG11, let me know; if I choose your report, I’ll notify you so you can bring your friends, significant other, boss, etc. Thanks in advance!

--
-- script to create an Active SQL Monitor Report given a SQL ID
-- 11.2 and newer (EM/ACTIVE types are not in 11.1)
--
set pagesize 0 echo off timing off linesize 1000 trimspool on trim on long 2000000 longchunksize 2000000 feedback off
spool sqlmon_4vbqtp97hwqk8.html

select dbms_sqltune.report_sql_monitor(report_level=>'ALL', type=>'EM', sql_id=>'4vbqtp97hwqk8') monitor_report from dual;

spool off
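
If you need a SQL_ID to feed into that script, recently monitored executions are listed in V$SQL_MONITOR; a simple query along these lines (column list trimmed to the basics) will do:

-- pick the SQL_ID of a monitored execution for the report script above
select sql_id, status, username, sql_exec_start
from v$sql_monitor
order by sql_exec_start desc;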

11.2.0.2 Patch Set For Oracle Database Server

Update: Mon Sep 13 16:02:36 PDT 2010
The 11.2.0.2 patch set is now available for download (for real). If you downloaded it on Friday, there is no need to re-download it. It was mistakenly made public before it could be validated (MD5 sums tested, etc.), but it has since been validated. Enjoy!

Just a quick post to note that the 11.2.0.2 patch set for Oracle Database Server has been released for the x86 and x86-64 platforms. The patch set number is 10098816, and it is available for download from My Oracle Support.

Be sure to give note 1189783.1, Important Changes to Oracle Database Patch Sets Starting With 11.2.0.2, a read, as several things have changed in the patching process.