
Archive for the ‘Tuning’ Category.

Oracle Exadata V2 – Flash Cache

One of the things I didn’t really talk about in my first post on Exadata was the flash cache component of the storage servers. It is a key component of the “OLTP” claims that Oracle is making for the platform. So let’s talk about the hardware first. The storage servers have 4 of the Sun Flash Accelerator F20 PCIe cards. These cards hold 96G each for a total of 384G on each storage server. That’s well over a terabyte on the smallest quarter rack configuration.

Note that they are only installed in the storage servers and not in the database servers. The cards are usually configured exclusively as Flash Cache, but can optionally have a portion defined as a “ram disk”.

Oracle has a White Paper here:

Exadata Smart Flash Cache and the Sun Oracle Database Machine

This white paper was published in late 2009 and it is specific to V2. It has some good information and is well worth reading. One of the comments I found interesting was the discussion of carving a piece of the Flash Cache out as a “disk”. Here’s the quote:

These high-performance logical flash disks can be used to store frequently accessed data. To use them requires advance planning to ensure adequate space is reserved for the tablespaces stored on them. In addition, backup of the data on the flash disks must be done in case media recovery is required, just as it would be done for data stored on conventional disks. This option is primarily useful for highly write intensive workloads where the disk write rate is higher than the disks can keep up with.

Do not confuse the use of these cards in the storage server with the new 11gR2 feature “Database Flash Cache”. That feature allows an extended SGA (level 2) cache to be created on a database server (if you are using Solaris or Oracle Enterprise Linux) and has nothing to do with the Exadata Smart Flash Cache, which resides on the Exadata storage servers. Think of the Database Flash Cache as an extended SGA and the Exadata Smart Flash Cache as a large “smart” disk cache. I say smart because it implements some of the same type of Oracle cache management features as the SGA.
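
By the way, the Database Flash Cache is configured with a pair of initialization parameters on the database tier. Here’s a minimal sketch just to show which knobs are involved (the file path and size are made up for illustration):

SQL> -- 11gR2 Database Flash Cache (a database server feature, nothing to do with Exadata storage)
SQL> alter system set db_flash_cache_file = '/flash/ora_flash_cache.dat' scope=spfile;
SQL> alter system set db_flash_cache_size = 24G scope=spfile;
SQL> -- restart the instance for the parameters to take effect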

Kevin Closson has a couple of good posts outlining the differences between Database Flash Cache and Exadata Smart Flash Cache here:

Pardon Me, Where Is That Flash Cache? Part I.
Pardon Me, Where Is That Flash Cache? Part II.

Note also that Exadata Smart Flash Cache does not affect writes (i.e. it is not a write cache).

So how do we see what’s going on with the Exadata Smart Flash Cache? Well, there are a couple of ways.

  1. We can use the cellcli utility on the storage servers themselves.
  2. We can look in v$sesstat (one of the best ways to do that is with Tanel Poder’s snapper script, by the way); see the sketch after the cellcli output below.

Here’s a little output from the system showing method 1 (cellcli):

[root@dm01cel01 ~]# cellcli
CellCLI: Release 11.2.1.2.3 - Production on Fri Apr 30 16:09:29 CDT 2010

Copyright (c) 2007, 2009, Oracle.  All rights reserved.
Cell Efficiency Ratio: 38M

CellCLI> LIST METRICCURRENT WHERE objectType = 'FLASHCACHE'
         FC_BYKEEP_OVERWR                FLASHCACHE      0.0 MB
         FC_BYKEEP_OVERWR_SEC            FLASHCACHE      0.0 MB/sec
         FC_BYKEEP_USED                  FLASHCACHE      300.6 MB
         FC_BY_USED                      FLASHCACHE      135,533.7 MB
         FC_IO_BYKEEP_R                  FLASHCACHE      10,399.4 MB
         FC_IO_BYKEEP_R_SEC              FLASHCACHE      0.0 MB/sec
         FC_IO_BYKEEP_W                  FLASHCACHE      6,378.3 MB
         FC_IO_BYKEEP_W_SEC              FLASHCACHE      0.0 MB/sec
         FC_IO_BY_R                      FLASHCACHE      480,628.3 MB
         FC_IO_BY_R_MISS                 FLASHCACHE      55,142.4 MB
         FC_IO_BY_R_MISS_SEC             FLASHCACHE      0.0 MB/sec
         FC_IO_BY_R_SEC                  FLASHCACHE      0.1 MB/sec
         FC_IO_BY_R_SKIP                 FLASHCACHE      1,448,220.2 MB
         FC_IO_BY_R_SKIP_SEC             FLASHCACHE      12.8 MB/sec
         FC_IO_BY_W                      FLASHCACHE      178,761.9 MB
         FC_IO_BY_W_SEC                  FLASHCACHE      0.1 MB/sec
         FC_IO_ERRS                      FLASHCACHE      0
         FC_IO_RQKEEP_R                  FLASHCACHE      1051647 IO requests
         FC_IO_RQKEEP_R_MISS             FLASHCACHE      291829 IO requests
         FC_IO_RQKEEP_R_MISS_SEC         FLASHCACHE      0.0 IO/sec
         FC_IO_RQKEEP_R_SEC              FLASHCACHE      0.0 IO/sec
         FC_IO_RQKEEP_R_SKIP             FLASHCACHE      0 IO requests
         FC_IO_RQKEEP_R_SKIP_SEC         FLASHCACHE      0.0 IO/sec
         FC_IO_RQKEEP_W                  FLASHCACHE      176405 IO requests
         FC_IO_RQKEEP_W_SEC              FLASHCACHE      0.0 IO/sec
         FC_IO_RQ_R                      FLASHCACHE      21095663 IO requests
         FC_IO_RQ_R_MISS                 FLASHCACHE      1574404 IO requests
         FC_IO_RQ_R_MISS_SEC             FLASHCACHE      0.6 IO/sec
         FC_IO_RQ_R_SEC                  FLASHCACHE      1.6 IO/sec
         FC_IO_RQ_R_SKIP                 FLASHCACHE      4879720 IO requests
         FC_IO_RQ_R_SKIP_SEC             FLASHCACHE      26.8 IO/sec
         FC_IO_RQ_W                      FLASHCACHE      5665344 IO requests
         FC_IO_RQ_W_SEC                  FLASHCACHE      2.9 IO/sec
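
And here’s method 2. A minimal sketch of the v$sesstat approach (the statistic names are as they appear in v$statname on 11.2; snapper essentially automates this kind of before/after sampling):

SQL> select sn.name, ms.value
  2  from v$mystat ms, v$statname sn
  3  where ms.statistic# = sn.statistic#
  4  and sn.name in ('cell flash cache read hits',
  5                  'physical read requests optimized')
  6  /

The ‘cell flash cache read hits’ statistic counts read requests that were satisfied from the Exadata Smart Flash Cache for the current session.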

Continue reading ‘Oracle Exadata V2 – Flash Cache’ »

What is Exadata?

Well I’ve been holed up playing with an Exadata V2 machine for the past several weeks. Wow. Very interesting technology.

I must say that I believe the concept of offloading SQL processing to the storage layer is a game changer and I wouldn’t be at all surprised to see this as a standard feature a few years from now. What that means for other storage vendors is unclear at this point. So for this first post on the topic let me just describe the configuration (and some potential upgrades).

The basic architecture consists of a set of database servers and a set of storage servers.

Database Servers:

  • Sun x4170 (1RU x64 server)
  • 2 – Quad-core Intel Xeon E5540 2.53GHz processors
  • 72G RAM (18 x 4G DIMMs – max of 144G using 8G DIMMs)
  • Dual-Port QDR InfiniBand Host Channel Adapter
  • HBA with 512MB Battery Backed Write Cache (only for internal disks???)
  • 4 – 146G internal drives (SAS 10,000 RPM)
  • dual hot swappable power supplies
  • no spare/empty slots!


Storage Servers:

  • Sun x4275 (2RU x64 server)
  • 2 – Quad-core Intel Xeon E5540 (2.53GHz) processors
  • 24G RAM
  • Dual-Port QDR InfiniBand Host Channel Adapter
  • HBA with 512MB Battery Backed Write Cache (only for internal disks???)
  • dual hot swappable power supplies
  • 4 – 96G Sun Flash PCIe Cards (total of 384 GB)
  • 12 – 600 GB 15,000 RPM SAS or 2 TB 7,200 RPM SATA drives

Continue reading ‘What is Exadata?’ »

GATHER_PLAN_STATISTICS

Hey, I just thought of something. One of the things I do fairly regularly when struggling with a SQL statement that is not behaving goes like this:

  1. create a copy of the statement in a test script
  2. add the GATHER_PLAN_STATISTICS hint
  3. duplicate all the bind variables (with values used at parse if possible)
  4. run it
  5. run xplan on it

This provides the ability to see how the optimizer’s estimates line up with reality using the DBMS_XPLAN.DISPLAY_CURSOR function with the ALLSTATS format option. It’s very easy to see where the optimizer is having problems by comparing the A-Rows column (actual rows) with the E-Rows column (estimated rows). Anywhere there is a big difference (think orders of magnitude), there is a potential problem. Well, actually it’s a little more complicated than that, because the A-Rows are cumulative while the E-Rows are not. You have to multiply E-Rows by Starts (or divide A-Rows by the number of executions) in order to compare apples to apples. If you need a little more info see this post by Jonathan Lewis or this one by Greg Rahn. Here’s how the output looks in case you haven’t seen it before:

SYS@LAB1024> @fs
Enter value for sql_text: %gather_plan%
Enter value for sql_id: 

SQL_ID         CHILD  PLAN_HASH      EXECS     AVG_ETIME      AVG_LIO SQL_TEXT
------------- ------ ---------- ---------- ------------- ------------ ------------------------------------------------------------
dvfumy503zfnw      0 1660344652          3           .00           25 select /*+ gather_plan_statistics */ t1.x, t2.y from kso.t1
                                                                      t1, kso.t2 t2 where t1.y = t2.x and rownum < 10


SYS@LAB1024> !cat dplan_allstats.sql
set lines 180
select * from table(dbms_xplan.display_cursor('&sql_id','&child_no','allstats +peeked_binds'))
/

SYS@LAB1024> @dplan_allstats
Enter value for sql_id: dvfumy503zfnw
Enter value for child_no: 

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  dvfumy503zfnw, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ t1.x, t2.y from kso.t1 t1, kso.t2 t2 where t1.y = t2.x and
rownum < 10

Plan hash value: 1660344652

--------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------
|*  1 |  COUNT STOPKEY                |              |      3 |        |     27 |00:00:00.01 |      75 |
|   2 |   NESTED LOOPS                |              |      3 |      9 |     27 |00:00:00.01 |      75 |
|   3 |    TABLE ACCESS FULL          | T1           |      3 |  53629 |     27 |00:00:00.01 |      15 |
|   4 |    TABLE ACCESS BY INDEX ROWID| T2           |     27 |      1 |     27 |00:00:00.01 |      60 |
|*  5 |     INDEX UNIQUE SCAN         | SYS_C0014104 |     27 |      1 |     27 |00:00:00.01 |      33 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<10)
   5 - access("T1"."Y"="T2"."X")


23 rows selected.

Just a quick note on the correlation between STARTS, E-ROWS, and A-ROWS. I picked a bit of a bad example due to the "rownum < 10" predicate. It causes processing to stop on the full table scan after 9 rows are retrieved. But it does show a mismatch in the number of estimated rows and the number of actual rows. Keep in mind that the actual rows are cumulative, so it's the total rows this step has handled regardless of the number of executions. This statement has been executed 3 times (see the 3 under Starts on line 1). And the full table scan was executed (Started) 3 times (once per execution). So the optimizer was expecting 53629 rows for each execution, but got on average 9 rows per execution. Note also that the Starts column is the actual number of times the operation was executed, not the number of times that the optimizer thinks it will be executed. Here's another little bit of output to show that Starts is actual, not estimated:


SYS@LAB1024> -- here's a query without the "rownum<10" predicate
SYS@LAB1024> @fs
Enter value for sql_text: %t1_nl_join_agg.sql%
Enter value for sql_id: 

SQL_ID         CHILD    PLAN_HASH        EXECS     AVG_ETIME      AVG_LIO SQL_TEXT
------------- ------ ------------ ------------ ------------- ------------ ------------------------------------------------------------
a36w6pfkgp2sy      0    174109304            1          3.77       53,745 select /*+ gather_plan_statistics */ avg(x*2) from ( select
                                                                          /* t1_nl_join_agg.sql */ t1.x, t2.y from kso.t1 t1, kso.t2
                                                                          t2 where t1.y = t2.x )


1 row selected.

SYS@LAB1024> @dplan_allstats
Enter value for sql_id: a36w6pfkgp2sy
Enter value for child_no: 

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  a36w6pfkgp2sy, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ avg(x*2) from ( select /* t1_nl_join_agg.sql */ t1.x,
t2.y from kso.t1 t1, kso.t2 t2 where t1.y = t2.x )

Plan hash value: 174109304

-------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
-------------------------------------------------------------------------------------------------------
|   1 |  SORT AGGREGATE     |              |      1 |      1 |      1 |00:00:03.77 |   53745 |    169 |
|   2 |   NESTED LOOPS      |              |      1 |  53629 |  25994 |00:00:02.94 |   53745 |    169 |
|   3 |    TABLE ACCESS FULL| T1           |      1 |  53629 |  53629 |00:00:00.80 |     114 |    105 |
|*  4 |    INDEX UNIQUE SCAN| SYS_C0014104 |  53629 |      1 |  25994 |00:00:02.41 |   53631 |     64 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("T1"."Y"="T2"."X")


21 rows selected.

SYS@LAB1024> -- the stats are 100% correct on the full table scan (line 3), 
SYS@LAB1024> -- but off by 2X on the NL join (25994 vs. 53629 on line 2)
SYS@LAB1024> -- so let's mess up the stats on T1 to see if Starts is optimizer guess or actual
SYS@LAB1024> -- set rows in T1 to 1234
SYS@LAB1024> 
SYS@LAB1024> exec dbms_stats.set_table_stats(ownname => 'KSO', tabname => 'T1', numrows => 1234);

PL/SQL procedure successfully completed.

SYS@LAB1024> @flush_sql
Enter value for sql_id: a36w6pfkgp2sy

PL/SQL procedure successfully completed.

SYS@LAB1024> @t1_nl_join_agg.sql

    AVG(X*2)
------------
108794.25383

1 row selected.

SYS@LAB1024> @dplan_allstats
Enter value for sql_id: a36w6pfkgp2sy     
Enter value for child_no: 

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  a36w6pfkgp2sy, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ avg(x*2) from ( select /* t1_nl_join_agg.sql */
t1.x, t2.y from kso.t1 t1, kso.t2 t2 where t1.y = t2.x )

Plan hash value: 174109304

----------------------------------------------------------------------------------------------
| Id  | Operation           | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------
|   1 |  SORT AGGREGATE     |              |      1 |      1 |      1 |00:00:03.63 |   53745 |
|   2 |   NESTED LOOPS      |              |      1 |   1234 |  25994 |00:00:03.20 |   53745 |
|   3 |    TABLE ACCESS FULL| T1           |      1 |   1234 |  53629 |00:00:00.86 |     114 |
|*  4 |    INDEX UNIQUE SCAN| SYS_C0014104 |  53629 |      1 |  25994 |00:00:02.41 |   53631 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("T1"."Y"="T2"."X")


21 rows selected.

SYS@LAB1024> -- note that Starts on the Index Lookup inside the NL Join is 53629 (line 4)
SYS@LAB1024> -- this is the number of times that operation was actually executed,
SYS@LAB1024> -- not 1234, which is the number of times the optimizer thought it would be executed
SYS@LAB1024> -- so Starts and A-rows are actual values, E-Rows is the optimizer estimate
SYS@LAB1024> -- (at least for NL Join)
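
By the way, the same estimate-versus-actual comparison can be pulled straight from V$SQL_PLAN_STATISTICS_ALL (the view DBMS_XPLAN reads). A minimal sketch that normalizes the estimate by multiplying E-Rows by Starts:

SQL> select id, operation, name, last_starts starts,
  2  cardinality e_rows, cardinality*last_starts est_total_rows,
  3  last_output_rows a_rows
  4  from v$sql_plan_statistics_all
  5  where sql_id = '&sql_id'
  6  and child_number = &child_no
  7  order by id
  8  /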

Back to the topic at hand: my normal steps (i.e. generate a test script, add the GATHER_PLAN_STATISTICS hint, etc.).

I have a script that helps generate the test script (see this post: Creating Test Scripts with Bind Variables). But there are definite drawbacks to this approach. It can be difficult to duplicate the exact environment that the optimizer sees when running the production code, even if you are testing on the production instance. The bottom line is that you have to be pretty careful to make sure that your test script is really doing the same thing that the production code is doing.

Which brings me to my idea (finally)! Here’s a quick way to get extended plan statistics for any statement without modifying the code or creating a test version. Drum roll please ….

Just create a one line SQL Profile (using create_1_hint_sql_profile.sql) with the GATHER_PLAN_STATISTICS hint. You can use dplan_allstats.sql to see the extended statistics. Here’s an example:

> !sql
sqlplus "/ as sysdba"

SQL*Plus: Release 11.2.0.1.0 Production on Thu Jan 28 17:49:56 2010

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS@LAB112> -- note: SQL_ID 84q0zxfzn5u6s is the statement I want to look at
SYS@LAB112>
SYS@LAB112> @create_1_hint_sql_profile
Enter value for sql_id: 84q0zxfzn5u6s
Enter value for profile_name (PROFILE_sqlid_MANUAL): PROFILE_84q0zxfzn5u6s_GPS
Enter value for category (DEFAULT): 
Enter value for force_matching (false): 
Enter value for hint: GATHER_PLAN_STATISTICS
Profile PROFILE_84q0zxfzn5u6s_GPS created.

PL/SQL procedure successfully completed.

SYS@LAB112> @sql_profiles
Enter value for sql_text: 
Enter value for name: 

NAME                           CATEGORY        STATUS   SQL_TEXT                                                               FOR
------------------------------ --------------- -------- ---------------------------------------------------------------------- ---
PROFILE_fgn6qzrvrjgnz          DEFAULT         DISABLED select /*+ index(a SKEW_COL1) */ avg(pk_col) from kso.skew a           NO
PROFILE_69k5bhm12sz98          DEFAULT         DISABLED SELECT dbin.instance_number,        dbin.db_name, dbin.instance_name,  NO
PROFILE_8js5bhfc668rp          DEFAULT         DISABLED select /*+ index(a SKEW_COL2_COL1) */ avg(pk_col) from kso.skew a wher NO
PROFILE_bxd77v75nynd8          DEFAULT         DISABLED select /*+ parallel (a 4) */ avg(pk_col) from kso.skew a where col1 >  NO
PROFILE_8hjn3vxrykmpf          DEFAULT         DISABLED select /*+ invalid_hint (doda) */ avg(pk_col) from kso.skew where col1 NO
PROFILE_7ng34ruy5awxq          DEFAULT         DISABLED select i.obj#,i.ts#,i.file#,i.block#,i.intcols,i.type#,i.flags,i.prope NO
PROFILE_84q0zxfzn5u6s_GPS      DEFAULT         ENABLED  select avg(pk_col) from kso.skew                                       NO

7 rows selected.

SYS@LAB112> @sql_profile_hints
Enter value for profile_name: PROFILE_84q0zxfzn5u6s_GPS

HINT
------------------------------------------------------------------------------------------------------------------------------------------------------
GATHER_PLAN_STATISTICS

1 rows selected.

SYS@LAB112> @avgskewi

AVG(PK_COL)
-----------
   15636133

1 row selected.

SYS@LAB112> @dplan_allstats
Enter value for sql_id: 84q0zxfzn5u6s
Enter value for child_no: 

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  84q0zxfzn5u6s, child number 0
-------------------------------------
select avg(pk_col) from kso.skew where col1 = 136133

Plan hash value: 3723858078

----------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |           |      1 |        |      1 |00:00:00.01 |      35 |
|   1 |  SORT AGGREGATE              |           |      1 |      1 |      1 |00:00:00.01 |      35 |
|   2 |   TABLE ACCESS BY INDEX ROWID| SKEW      |      1 |     35 |     32 |00:00:00.01 |      35 |
|*  3 |    INDEX RANGE SCAN          | SKEW_COL1 |      1 |     35 |     32 |00:00:00.01 |       3 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("COL1"=136133)

Note
-----
   - SQL profile PROFILE_84q0zxfzn5u6s_GPS used for this statement
   - SQL plan baseline SQL_84Q0ZXFZN5U6S_3723858078 used for this statement


25 rows selected.

SYS@LAB112> @sql_hints
Enter value for sql_id: 84q0zxfzn5u6s
Enter value for child_no: 0

OUTLINE_HINTS
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('11.2.0.1')
DB_VERSION('11.2.0.1')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
INDEX_RS_ASC(@"SEL$1" "SKEW"@"SEL$1" ("SKEW"."COL1"))

6 rows selected.

Works great! Now you don’t have to do all that work just to see the rowsource execution stats.
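
In case you’re wondering what’s going on under the covers, a one-hint SQL Profile like this can be created directly with the (lightly documented) DBMS_SQLTUNE.IMPORT_SQL_PROFILE procedure. A minimal sketch, which is presumably close to what create_1_hint_sql_profile.sql does:

declare
  l_hints sys.sqlprof_attr;
begin
  -- the "profile" is just a collection of hint text
  l_hints := sys.sqlprof_attr('GATHER_PLAN_STATISTICS');
  dbms_sqltune.import_sql_profile(
    sql_text    => 'select avg(pk_col) from kso.skew where col1 = 136133',
    profile     => l_hints,
    name        => 'PROFILE_84q0zxfzn5u6s_GPS',
    category    => 'DEFAULT',
    force_match => false);
end;
/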

Note that the Outline Hints stored in the OTHER_XML column of V$SQL_PLAN have no record of the GATHER_PLAN_STATISTICS hint, even though it was clearly executed (otherwise we’d have gotten a warning message in the XPLAN output and no A-Rows column).

Note also that this example was done on an 11gR2 database and that the statement was found in the SQL Management Base (i.e. there was a Baseline in place for this statement already). The Notes section of the plan output shows that both the Profile and the Baseline were used. This got me wondering if it really merged the hints from the Profile and the Baseline. The short answer is YES, it appears that it does. I’ve got an example, but since that’s a different topic and this post is already pretty long, I’ll just save it for another post.

As always, your comments are welcomed.

UPDATE: This has proved pretty useful, so I wrote a little script that just prompts for a sql_id and creates a SQL Profile with the GATHER_PLAN_STATISTICS hint. I have just uploaded it to the site here: gps.sql

UPDATE 2 (4/5/12): I have recently re-written the gps.sql script to use the SQL Patch facility which allows it to be used with statements that already have a SQL Profile attached to them.
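
For reference, a single-hint SQL Patch can be created with the internal (and even more lightly documented) DBMS_SQLDIAG_INTERNAL.I_CREATE_PATCH procedure; a rough sketch of the idea, using the 11g calling convention that is commonly published (the patch name is made up):

begin
  sys.dbms_sqldiag_internal.i_create_patch(
    sql_text  => 'select avg(pk_col) from kso.skew where col1 = 136133',
    hint_text => 'GATHER_PLAN_STATISTICS',
    name      => 'PATCH_84q0zxfzn5u6s_GPS');
end;
/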

Why Isn’t Oracle Using My Outline / Profile / Baseline?

I seem to have spent a lot of time during the last week having conversations about why Oracle is not locking plans, even when an Outline or SQL Profile has been created. I mean, their whole purpose in life is to keep the optimizer from changing plans, right? Here’s a bit of an email I sent with my thoughts on the issue during a conversation on the Oracle-L list.

First, I think you need to convince yourself as to whether the profile is being used or not. There is a column in V$SQL (SQL_PROFILE) which will have the name of the profile if one is being used. Also, DBMS_XPLAN will show at the bottom of its output that a profile is being used.
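
A minimal sketch of that first check (the SQL_PLAN_BASELINE column only exists in 11g; drop it on 10g):

SQL> select sql_id, child_number, sql_profile, sql_plan_baseline
  2  from v$sql
  3  where sql_id = '&sql_id'
  4  /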

If the profile is being used, but the plan is not what you expected, there are a couple of likely culprits.

First, if it’s a profile generated by SQL Tuning Advisor it’s quite possible that it will have one or more OPT_ESTIMATE hints – which apply scaling factors to various operations. These types of profiles can and do switch plans fairly easily, because they are not even attempting to lock anything. As the stats change, the plans can change.

The second common possibility is that Oracle decided to use an index hint that doesn’t specify the index name, but rather the columns that it would like to use an index on. For example:

INDEX_RS_ASC(@"SEL$1" "TEST_TABLE"@"SEL$1" ("TEST_TABLE"."COL2" "TEST_TABLE"."TEST_TABLE_ID"))

This hint does not specify an index, but rather columns to use an index on. I’m not sure when this form became a preferred approach but it certainly seems to show up pretty often. So that leaves the optimizer with the flexibility to pick an index based on stats. In systems with lots of indexes, this makes it more likely that a statement will switch plans even though it’s using an Outline or Profile. In fact, you may have a statement with multiple plans in the shared pool, pick one to create an Outline or Profile with, enable it, and have the new child use a plan that’s different from the plan of the child you used to create it with. Very frustrating.
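
Finding statements in that situation is easy enough; a quick sketch:

SQL> -- statements with more than one plan in the shared pool
SQL> select sql_id, count(distinct plan_hash_value) plans
  2  from v$sql
  3  group by sql_id
  4  having count(distinct plan_hash_value) > 1
  5  order by 2 desc
  6  /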

So as I said in the email, the most likely cause for SQL Profiles not working is the non-specific form of index hints that are being used. However, as I was doing some research for this post, I found a couple of situations where SQL Profiles just flat don’t work as advertised. I’ll show you an example, but first here are a few scripts that I will use (I sometimes don’t set “echo on” because it makes the examples so messy – so just hit the links to view the scripts):

Updated: 6/14/13
Note that I have rewritten the create_sql_profile.sql script so that it is not dependent on the rg_sqlprof1 script and to add additional functionality. Please use the newer one. Also I have rewritten the sql_profile_hints.sql script so that it works with 10g and 11g. I have disabled the links to the old versions in the list below.

sql_hints.sql – shows the hints stored in the v$sql_plan.other_xml column
sql_profile_hints.sql – shows the hints associated with a SQL Profile
sql_profile_hints11.sql – shows the hints associated with a SQL Profile (for 11g)
create_sql_profile.sql – creates a profile for a statement in the shared pool (wrapper for rg_sqlprof1.sql)
rg_sqlprof1.sql – does the real work to create a profile for a statement in the shared pool
fix_sql_profile_hint.sql – replaces a hint in a profile with whatever you want

Here’s the set up: a simple single table query that wants to do a full table scan and a second version that uses a hint to force an index. A profile created on the hinted statement should keep it from changing, but instead, it changes the plan back to a full table scan. Here’s the example:

Continue reading ‘Why Isn’t Oracle Using My Outline / Profile / Baseline?’ »

How to Attach a SQL Profile to a Different Statement

One of the old tricks with Outlines was to switch hints between two statements. This allowed hints to be applied to a statement to influence the plan that the optimizer chose. The original statement (without the hints) could then be made to behave by swapping the Outlines. It was always a questionable approach in my mind. However, DBMS_SQLTUNE has a built-in procedure to import hints into a SQL Profile, which makes it feel a little less risky. So here’s a quick little script to do the old “Outline Switcheroo Trick”, but with SQL Profiles. It’s based on some work done a few months back by Randolf Geist and myself.

This iteration has a couple of new scripts:

create_sql_profile2.sql – creates a profile for one statement based on hints from another (wrapper for rg_sqlprof3.sql)
rg_sqlprof3.sql – modified version of Randolf’s original script, pulls hints from v$sql_plan
sql_profile_hints.sql – shows the hints in a SQL Profile for 10g

NOTE: I have posted an update to this approach here:

How to Attach a SQL Profile to a Different Statement – Take 2

You may want to skip the example below and refer to it instead (don’t miss the caveats at the bottom of this post though).

Here’s an example:

> sqlplus "/ as sysdba"

SQL*Plus: Release 10.2.0.3.0 - Production on Tue Jul 28 15:38:22 2009

Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> @flush_pool

System altered.

Elapsed: 00:00:00.31
SQL> set echo on
SQL> set timing on
SQL> @avgskewi
SQL> select avg(pk_col) from kso.skew
  2  where col1 = 23489
  3  /

AVG(PK_COL)
-----------


Elapsed: 00:00:00.01
SQL> @avgskewi_hint
SQL> select /*+ full(skew) */ avg(pk_col) from kso.skew
  2  where col1 = 23489
  3  /

AVG(PK_COL)
-----------


Elapsed: 00:00:11.23
SQL> set timing off
SQL> @find_sql
SQL> select sql_id, child_number, plan_hash_value plan_hash, executions execs,
  2  (elapsed_time/1000000)/decode(nvl(executions,0),0,1,executions) avg_etime,
  3  buffer_gets/decode(nvl(executions,0),0,1,executions) avg_lio,
  4  sql_text
  5  from v$sql s
  6  where upper(sql_text) like upper(nvl('&sql_text',sql_text))
  7  and sql_text not like '%from v$sql where sql_text like nvl(%'
  8  and sql_id like nvl('&sql_id',sql_id)
  9  order by 1, 2, 3
 10  /
Enter value for sql_text: %skew%
Enter value for sql_id: 

SQL_ID         CHILD  PLAN_HASH      EXECS     AVG_ETIME      AVG_LIO SQL_TEXT
------------- ------ ---------- ---------- ------------- ------------ ------------------------------------------------------------
7s0b9ygcrj77u      0 3723858078          1           .01          134 select avg(pk_col) from kso.skew where col1 = 23489
9r9wq9xqsw6mu      0  568322376          1         10.97      173,731 select /*+ full(skew) */ avg(pk_col) from kso.skew where col
                                                                      1 = 23489

SQL> @dplan
SQL> select * from table(dbms_xplan.display_cursor('&sql_id','&child_no','typical'))
  2  /
Enter value for sql_id: 7s0b9ygcrj77u
Enter value for child_no: 

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  7s0b9ygcrj77u, child number 0
-------------------------------------
select avg(pk_col) from kso.skew where col1 = 23489

Plan hash value: 3723858078

------------------------------------------------------------------------------------------
| Id  | Operation                    | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |           |       |       |    53 (100)|          |
|   1 |  SORT AGGREGATE              |           |     1 |    11 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| SKEW      |    54 |   594 |    53   (2)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | SKEW_COL1 |    54 |       |     3   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("COL1"=23489)


20 rows selected.

SQL> @dplan
SQL> select * from table(dbms_xplan.display_cursor('&sql_id','&child_no','typical'))
  2  /
Enter value for sql_id: 9r9wq9xqsw6mu
Enter value for child_no: 

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  9r9wq9xqsw6mu, child number 0
-------------------------------------
select /*+ full(skew) */ avg(pk_col) from kso.skew where col1 = 23489

Plan hash value: 568322376

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       | 31719 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |    11 |            |          |
|*  2 |   TABLE ACCESS FULL| SKEW |    54 |   594 | 31719  (37)| 00:00:43 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("COL1"=23489)


19 rows selected.

SQL> -- so the hinted statement uses a full table scan while the unhinted one uses an index
SQL> -- let's try creating a profile from the hinted version and putting it on the non-hinted statement
SQL> set echo off
SQL> @create_sql_profile2
Enter value for sql_id to generate profile from: 9r9wq9xqsw6mu
Enter value for child_no to generate profile from: 0
Enter value for sql_id to attach profile to: 7s0b9ygcrj77u
Enter value for child_no to attach profile to: 0
Enter value for category: 
Enter value for force_matching: 

PL/SQL procedure successfully completed.

SQL> @sql_profiles
Enter value for sql_text: %skew%
Enter value for name: 

NAME                           CATEGORY        STATUS   SQL_TEXT                                                               FOR
------------------------------ --------------- -------- ---------------------------------------------------------------------- ---
PROFILE_922pr090z0bvm          DEFAULT         ENABLED  select sql_id, dbms_lob.substr(sql_text,3999,1) sql_text from dba_hist NO
PROFILE_7s0b9ygcrj77u_attach   DEFAULT         ENABLED  select avg(pk_col) from kso.skew                                       NO

SQL> @sql_profile_hints
Enter value for profile_name: PROFILE_7s0b9ygcrj77u

HINT
------------------------------------------------------------------------------------------------------------------------------------------------------
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('10.2.0.3')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
FULL(@"SEL$1" "SKEW"@"SEL$1")

SQL> @flush_pool

System altered.

SQL> @avgskewi

AVG(PK_COL)
-----------


SQL> @find_sql
Enter value for sql_text: %skew%
Enter value for sql_id: 

SQL_ID         CHILD  PLAN_HASH      EXECS     AVG_ETIME      AVG_LIO SQL_TEXT
------------- ------ ---------- ---------- ------------- ------------ ------------------------------------------------------------
7s0b9ygcrj77u      0  568322376          1         10.29      174,071 select avg(pk_col) from kso.skew where col1 = 23489

SQL> @dplan
Enter value for sql_id: 7s0b9ygcrj77u
Enter value for child_no: 

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  7s0b9ygcrj77u, child number 0
-------------------------------------
select avg(pk_col) from kso.skew where col1 = 23489

Plan hash value: 568322376

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       | 31719 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |    11 |            |          |
|*  2 |   TABLE ACCESS FULL| SKEW |    54 |   594 | 31719  (37)| 00:00:43 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("COL1"=23489)

Note
-----
   - SQL profile "PROFILE_7s0b9ygcrj77u" used for this statement


23 rows selected.

SQL> -- so this worked: the statement is now using a profile generated from the hints of another statement


A couple of notes:

  • Beware of table aliases as they can cause hints to fail.
  • Beware of any structural changes as they can cause hints to fail.
  • Beware of the non-specific format of the INDEX hint.
    (it leaves the optimizer with more flexibility than you probably want it to have)
    (for more details, see this post: Why Isn’t Oracle Using My Outline / Profile / Baseline?)

Your comments are always welcomed.

Oracle 11g SQL Plan Management – SQL Plan Baselines

Note, this post applies to 11gR1 (specifically 11.1.0.7).

11g has a new method of dealing with plan instability (i.e. the tendency of a statement to flip flop between plans). The goal has always been to provide the best performance, but until 11g there has not been a mechanism whose design goal was eliminating backwards movement (“performance regressions”, as the Oracle documentation calls them). That is, not allowing a statement to switch to a plan that is significantly slower than the one it has already been executing. This new mechanism depends on a construct called a Baseline. So for this post I am not going to focus on the process of evolving plans to prevent “performance regressions”, but rather on the Baselines themselves. (Hopefully I’ll get to the plan management part soon.)

Here’s a little excerpt from the 11gR1 documentation:

Each time a SQL statement is compiled, the optimizer first uses a cost-based search method to build a best-cost plan, then tries to find a matching plan in the SQL plan baseline. If a match is found, the optimizer will proceed using this plan. Otherwise, it evaluates the cost of each accepted plan in the SQL plan baseline and selects the plan with the lowest cost. The best-cost plan found by the optimizer that does not match any plans in the plan history for the SQL statement represents a new plan, and is added as a non-accepted plan to the plan history. The new plan is not used until it is verified to not cause a performance regression. However, if a change in the system (such as a dropped index) causes all accepted plans to become non-reproducible, the optimizer will select the best-cost plan. Thus, the presence of a SQL plan baseline causes the optimizer to use conservative plan selection strategy for the SQL statement.

Clear as mud?

Now my description:

Baselines are the latest evolution in Oracle’s efforts to allow a plan to be locked. The previous efforts were Outlines and SQL Profiles. Both of those efforts were based on the idea that hints could be applied to limit the optimizer’s choices down to one. That approach seems a little flawed. Why not just save the plan and be done with it? I believe that’s what Baselines are intended to do. Unfortunately, they don’t appear to do it yet. But they do have an advantage over Outlines and SQL Profiles in that they at least save the plan_hash_value, so they know if they are reproducing the correct plan or not. (see my previous post, Do Baselines Use Hints? – Take 2) And hopefully, they will evolve to the point where the actual plan is saved along with the hash value – we’ll see.

So anyway, here’s some basic housekeeping stuff on Baselines:

  • Baselines will be used by default in 11gR1, if they exist. There is a parameter to control whether they are used or not (OPTIMIZER_USE_SQL_PLAN_BASELINES). It is set to TRUE by default.
  • Baselines will not be created by default in 11gR1. So, much like with the older Outlines or SQL Profiles, you must do something to create them.
  • There is a view called DBA_SQL_PLAN_BASELINES that exposes the Baselines that have been created.
  • Just like Outlines and SQL Profiles, Baselines apply to all instances in a RAC environment (they are not localized to a specific instance).

Baselines can be created a couple of different ways. They can be created automatically by the database as it’s running by setting the OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES parameter. They can also be created for statements in a SQL Tuning Set. But I’m not going to talk about either one of those approaches in this post, as I don’t think they really make much sense in the case where you want to use a Baseline to lock a plan for a specific statement. Instead, I am going to discuss creating a baseline for a statement that is in the shared pool via the DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE function.

All the function needs is a SQL_ID and a PLAN_HASH_VALUE. Optionally, a parameter can be used to define the Baseline as FIXED. If it’s FIXED then it gets priority over any other Baselines for that statement, except other FIXED Baselines. Confused? Well, it’s not exactly the most straightforward setup. I’m kind of a simple guy, so at this point I’m thinking one FIXED Baseline is enough.

Before you get too bored, here’s a quick example:

SQL> select avg(pk_col) from kso.little_skew where col1 = 99320;

AVG(PK_COL)
-----------
     628922

SQL> @find_sql
Enter value for sql_text: %little%
Enter value for sql_id: 

SQL_ID         CHILD  PLAN_HASH      EXECS     AVG_ETIME      AVG_LIO SQL_TEXT
------------- ------ ---------- ---------- ------------- ------------ ------------------------------------------------------------
4dd2qjxbjav6u      0 2709260180          1           .02           89 select avg(pk_col) from kso.little_skew where col1 = 99320

SQL> @dplan
Enter value for sql_id: 4dd2qjxbjav6u
Enter value for child_no: 0

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  4dd2qjxbjav6u, child number 0
-------------------------------------
select avg(pk_col) from kso.little_skew where col1 = 99320

Plan hash value: 2709260180

-------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                  |       |       |     4 (100)|          |
|   1 |  SORT AGGREGATE              |                  |     1 |     9 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| LITTLE_SKEW      |     1 |     9 |     4   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | LITTLE_SKEW_COL1 |     1 |       |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("COL1"=99320)

20 rows selected.

SQL> !cat create_baseline.sql
var ret number
exec :ret := dbms_spm.load_plans_from_cursor_cache(-
    sql_id=>'&sql_id', -
    plan_hash_value=>&plan_hash_value,-
    fixed=>'&fixed');

SQL> @create_baseline
Enter value for sql_id: 4dd2qjxbjav6u
Enter value for plan_hash_value: 2709260180
Enter value for fixed: NO

PL/SQL procedure successfully completed.

SQL> !cat baselines.sql
set lines 155
col sql_text for a50 trunc
col last_executed for a28
col enabled for a7
col plan_hash_value for a16
col last_executed for a16
select spb.sql_handle, spb.plan_name, spb.sql_text,
spb.enabled, spb.accepted, spb.fixed,
to_char(spb.last_executed,'dd-mon-yy HH24:MI') last_executed
from
dba_sql_plan_baselines spb
where spb.sql_text like nvl('%'||'&sql_text'||'%',spb.sql_text)
and spb.sql_handle like nvl('&name',spb.sql_handle)
and spb.plan_name like nvl('&plan_name',spb.plan_name)
/

SQL> @baselines
Enter value for sql_text: %little%
Enter value for name:
Enter value for plan_name: 

SQL_HANDLE                     PLAN_NAME                      SQL_TEXT                                           ENABLED ACC FIX LAST_EXECUTED
------------------------------ ------------------------------ -------------------------------------------------- ------- --- --- ----------------
SYS_SQL_b77256b604589dec       SYS_SQL_PLAN_04589dec2fdfd157  select avg(pk_col) from kso.little_skew where col1 YES     YES NO

SQL> select avg(pk_col) from kso.little_skew where col1 = 99320;

AVG(PK_COL)
-----------
     628922

SQL> select avg(pk_col) from kso.little_skew where col1 = 99320;

AVG(PK_COL)
-----------
     628922

SQL> @find_sql
Enter value for sql_text:
Enter value for sql_id: 4dd2qjxbjav6u

SQL_ID         CHILD  PLAN_HASH      EXECS     AVG_ETIME      AVG_LIO SQL_TEXT
------------- ------ ---------- ---------- ------------- ------------ ------------------------------------------------------------
4dd2qjxbjav6u      0 2709260180          1           .02           89 select avg(pk_col) from kso.little_skew where col1 = 99320
4dd2qjxbjav6u      1 2709260180          1           .00            5 select avg(pk_col) from kso.little_skew where col1 = 99320

SQL> @dplan
Enter value for sql_id: 4dd2qjxbjav6u
Enter value for child_no: 1

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  4dd2qjxbjav6u, child number 1
-------------------------------------
select avg(pk_col) from kso.little_skew where col1 = 99320

Plan hash value: 2709260180

-------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                  |       |       |     4 (100)|          |
|   1 |  SORT AGGREGATE              |                  |     1 |     9 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| LITTLE_SKEW      |     1 |     9 |     4   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | LITTLE_SKEW_COL1 |     1 |       |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("COL1"=99320)

Note
-----
   - SQL plan baseline SYS_SQL_PLAN_04589dec2fdfd157 used for this statement

24 rows selected.

So that’s a quick example of how easy it is to create a Baseline. Once a Baseline is created it behaves much like an Outline or SQL Profile, in that it will be applied to any SQL statement where the normalized text matches. It’s interesting to note that even though Outlines are way outdated in 11g, they still take precedence over Baselines. So if you have an Outline and a Baseline on the same statement, the Outline will be used. Here’s an example of that behavior:
Continue reading ‘Oracle 11g SQL Plan Management – SQL Plan Baselines’ »

SQL Profiles

Well I was wrong! SQL Profiles are better than Outlines. For a while now I have been saying that I didn’t like SQL Profiles because they were less stable than Outlines. Turns out that the OPT_ESTIMATE hint used by SQL Profiles which are created by the SQL Tuning Advisor is what I really didn’t like. I just didn’t know it.

Let me back up for a minute. I posted about Oracle’s SQL Tuning Advisor a while back. It’s a feature that was added to Oracle in version 10g. It basically looks at a SQL statement and tries to come up with a better execution plan than the one the optimizer has picked. Since it is allowed as much time as it wants to do its analysis, the advisor can sometimes find better approaches. That’s because it can actually validate the optimizer’s original estimates by running various steps in a given plan and comparing the actual results to the estimates. When it’s all done, if it has found a better plan, it offers to implement that new plan via a SQL Profile. Those offered Profiles often have a lightly documented hint (OPT_ESTIMATE) that allows it to scale the optimizer estimates for various operations – essentially it’s a fudge factor. The problem with this hint is that, far from locking a plan in place, it is locking an empirically derived fudge factor in place. This still leaves the optimizer with a lot of flexibility when it comes to choosing a plan. It also sets up a commonly occurring situation where the fudge factors stop making sense as the statistics change. Thus the observation that SQL Profiles tend to sour over time.


I have to give credit to Randolf Geist for making me take a second look at SQL Profiles. He commented on my Outlines post last week and recommended I give his post on SQL Profiles a look. I did, and it really got me thinking. One of the things I liked best about the post was that he created a couple of scripts to pull the existing hints from a statement in the shared pool or the AWR tables, and create a SQL Profile from those hints using the DBMS_SQLTUNE.IMPORT_SQL_PROFILE procedure. This makes perfect sense because the hints are stored with every plan (that’s what DBMS_XPLAN uses to spit them out if you ask for them). Unfortunately this procedure is only lightly documented. He also had a nice script for pulling the hints from the V$SQL_PLAN table, which I have made use of as well.

So as always I have created a few scripts (borrowing from Randolf mostly).

create_sql_profile.sql – uses cursor from the shared pool
create_sql_profile_awr.sql – uses AWR tables
sql_profile_hints.sql – shows the hints in a SQL Profile for 10g

So here’s a little example:
Note: unstable_plans.sql and awr_plan_stats.sql are discussed here: Unstable Plans (Plan Instability)

SQL> @unstable_plans
Enter value for min_stddev:
Enter value for min_etime:

SQL_ID        SUM(EXECS)   MIN_ETIME   MAX_ETIME   NORM_STDDEV
------------- ---------- ----------- ----------- -------------
0qa98gcnnza7h          4       42.08      208.80        2.8016

SQL> @awr_plan_stats
Enter value for sql_id: 0qa98gcnnza7h

SQL_ID        PLAN_HASH_VALUE        EXECS          ETIME    AVG_ETIME        AVG_LIO
------------- --------------- ------------ -------------- ------------ --------------
0qa98gcnnza7h       568322376            3          126.2       42.079      124,329.7
0qa98gcnnza7h      3723858078            1          208.8      208.796   28,901,466.0

SQL> @create_sql_profile_awr
Enter value for sql_id: 0qa98gcnnza7h
Enter value for plan_hash_value: 568322376
Enter value for category:
Enter value for force_matching:

PL/SQL procedure successfully completed.

SQL> @sql_profiles
Enter value for sql_text:
Enter value for name: PROFIL%

NAME                           CATEGORY        STATUS   SQL_TEXT                                                               FOR
------------------------------ --------------- -------- ---------------------------------------------------------------------- ---
PROFILE_0qa98gcnnza7h          DEFAULT         ENABLED  select avg(pk_col) from kso.skew                                       NO

SQL> set echo on
SQL> @sql_profile_hints
SQL> set lines 155
SQL> col hint for a150
SQL> select attr_val hint
  2  from dba_sql_profiles p, sqlprof$attr h
  3  where p.signature = h.signature
  4  and name like ('&profile_name')
  5  order by attr#
  6  /
Enter value for profile_name: PROFILE_0qa98gcnnza7h

HINT
------------------------------------------------------------------------------------------------------------------------------------------------------
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('10.2.0.3')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
FULL(@"SEL$1" "SKEW"@"SEL$1")

SQL> @sql_hints_awr
SQL> select
  2  extractvalue(value(d), '/hint') as outline_hints
  3  from
  4  xmltable('/*/outline_data/hint'
  5  passing (
  6  select
  7  xmltype(other_xml) as xmlval
  8  from
  9  dba_hist_sql_plan
 10  where
 11  sql_id = '&sql_id'
 12  and plan_hash_value = &plan_hash_value
 13  and other_xml is not null
 14  )
 15  ) d;
Enter value for sql_id: 0qa98gcnnza7h
Enter value for plan_hash_value: 568322376

OUTLINE_HINTS
-----------------------------------------------------------------------------------------------------------------------------------------------------------
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('10.2.0.3')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
FULL(@"SEL$1" "SKEW"@"SEL$1")

A couple of additional points:

  • Outlines and SQL Profiles both take the same approach to controlling execution plans. They both attempt to force the optimizer down a certain path by applying hints behind the scenes. This is in my opinion an almost impossible task. The more complex the statement, the more difficult the task becomes. The newest kid on the block in this area (in 11g) is called a baseline and while it doesn’t abandon the hinting approach altogether, it does at least store the plan_hash_value – so it can tell if it regenerated the correct plan or not.
  • It does not appear that Outlines are being actively pursued by Oracle development anymore. So while they still work in 11g, they are becoming a little less reliable (and they were a bit quirky to begin with).
  • SQL Profiles have the ability to replace literals with bind variables similar to the cursor_sharing parameter. This means you can have a SQL Profile that will match multiple statements which use literals without having to set cursor_sharing for the whole instance.
  • Outlines take precedence over SQL Profiles. You can create both on the same statement and if you do, the outline will be used and the SQL Profile will be ignored. This is true in 11g as well, by the way.
  • Outlines don’t appear to use the OPT_ESTIMATE hint. So I believe it is still a valid approach to accept a SQL Profile as offered by the SQL Tuning Advisor and then create an Outline on top of it. It seems to work pretty well most of the time. (be sure and check the hints and the plan that gets produced)
  • Manually created SQL Profiles also don’t appear to use the OPT_ESTIMATE hint. So I believe it is also a valid approach to accept a SQL Profile as offered by the SQL Tuning Advisor and then create a SQL Profile on top of it. Note that you’ll have to use a different category, then drop the original, then enable the new SQL Profile. Which means this approach is a bit more work than just creating an Outline in the DEFAULT category. (A rough sketch of this workflow follows the list.)
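
Here’s that sketch, using the profile names from the example below (the manual profile gets created in a side category first, then swapped into DEFAULT):

-- 1. create the manual profile in a side category (create_sql_profile.sql prompts for the category)
-- 2. drop the SQL Tuning Advisor profile
exec dbms_sqltune.drop_sql_profile(name => 'SYS_SQLPROF_012061f471d50001');
-- 3. move the manual profile into the DEFAULT category so it takes effect
exec dbms_sqltune.alter_sql_profile(-
    name           => 'PROFILE_fknfhx8wth51q',-
    attribute_name => 'CATEGORY',-
    value          => 'DEFAULT');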

Have a look at the difference between SQL Tuning Advisor generated hints and those created by a manual SQL Profile or an Outline (note that I have tried to change the object names to protect the innocent, and in so doing may have made it slightly more difficult to follow):

SQL>
SQL> select * from table(dbms_xplan.display_cursor('&sql_id','&child_no',''));
Enter value for sql_id: fknfhx8wth51q
Enter value for child_no: 1

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  fknfhx8wth51q, child number 1
-------------------------------------
SELECT /* test4 */ col1, col2, col3 ...


Plan hash value: 3163842146

----------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name                     | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                          |       |       |  1778 (100)|          |
|   1 |  NESTED LOOPS                 |                          |  1039 | 96627 |  1778   (1)| 00:00:33 |
|   2 |   NESTED LOOPS                |                          |   916 | 57708 |  1778   (1)| 00:00:33 |
|*  3 |    TABLE ACCESS BY INDEX ROWID| TABLE_XXXX_LOOKUP        |   446 | 17840 |   891   (1)| 00:00:17 |
|*  4 |     INDEX RANGE SCAN          | INDEX_XXXX_IS_CPCI       | 12028 |       |    18   (0)| 00:00:01 |
|   5 |    TABLE ACCESS BY INDEX ROWID| TABLE_XXXX_IDENT         |     2 |    46 |     2   (0)| 00:00:01 |
|*  6 |     INDEX RANGE SCAN          | INDEX_XXXXIP_17_FK       |     2 |       |     1   (0)| 00:00:01 |
|*  7 |   INDEX UNIQUE SCAN           | PK_TABLE_XXXX_ASSIGNMENT |     1 |    30 |     0   (0)|          |
----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter(( 
...
   4 - access("L"."COL1"=:N1)
   6 - access("L"."COL2"="I"."COL1")

Note
-----
   - SQL profile SYS_SQLPROF_012061f471d50001 used for this statement


85 rows selected.

SQL> @sql_profile_hints
Enter value for name: SYS_SQLPROF_012061f471d50001

HINT
------------------------------------------------------------------------------------------------------------------------------------------------------
OPT_ESTIMATE(@"SEL$1", TABLE, "L"@"SEL$1", SCALE_ROWS=0.0536172171)
OPT_ESTIMATE(@"SEL$1", INDEX_SKIP_SCAN, "A"@"SEL$1", PK_TABLE_XXXX_ASSIGNMENT, SCALE_ROWS=4)
COLUMN_STATS("APP_OWNER"."TABLE_XXXX_ASSIGNMENT", "COL1", scale, length=6 distinct=1234 nulls=0 min=1000000014 max=1026369632)
COLUMN_STATS("APP_OWNER"."TABLE_XXXX_ASSIGNMENT", "COL2", scale, length=12 distinct=2 nulls=0)
COLUMN_STATS("APP_OWNER"."TABLE_XXXX_ASSIGNMENT", "COL3", scale, length=12 distinct=2 nulls=0)
TABLE_STATS("APP_OWNER"."TABLE_XXXX_ASSIGNMENT", scale, blocks=5 rows=2400)
OPTIMIZER_FEATURES_ENABLE(default)

7 rows selected.

SQL> -- no direct hints - only stats and scaling on the profile created by the SQL Tuning Advisor
SQL> -- (i.e. the dreaded OPT_ESTIMATE hints and no directive type hints like INDEX or USE_NL)
SQL>
SQL> -- now let's try an outline on top of it
SQL> @create_outline

Session altered.

Enter value for sql_id: fknfhx8wth51q
Enter value for child_number: 1
Enter value for outline_name: KSOTEST1
Outline KSOTEST1 created.

PL/SQL procedure successfully completed.

SQL> @outline_hints
Enter value for name: KSOTEST1

HINT
------------------------------------------------------------------------------------------------------------------------------------------------------
USE_NL(@"SEL$1" "A"@"SEL$1")
USE_NL(@"SEL$1" "I"@"SEL$1")
LEADING(@"SEL$1" "L"@"SEL$1" "I"@"SEL$1" "A"@"SEL$1")
INDEX(@"SEL$1" "A"@"SEL$1" ("TABLE_XXXX_ASSIGNMENT"."COL1" "TABLE_XXXX_ASSIGNMENT"."COL2" "TABLE_XXXX_ASSIGNMENT"."COL3"))
INDEX_RS_ASC(@"SEL$1" "I"@"SEL$1" ("TABLE_XXXX_IDENT"."COL1"))
INDEX_RS_ASC(@"SEL$1" "L"@"SEL$1" ("TABLE_XXXX_LOOKUP"."COL1" "TABLE_XXXX_LOOKUP"."COL2"))
OUTLINE_LEAF(@"SEL$1")
ALL_ROWS
DB_VERSION('11.1.0.7')
OPTIMIZER_FEATURES_ENABLE('11.1.0.7')
IGNORE_OPTIM_EMBEDDED_HINTS

11 rows selected.

SQL> -- no OPT_ESTIMATE hints on the outline
SQL> -- directive type hints - INDEX, USE_NL, etc...
SQL> 
SQL> -- now let's try creating a manual profile
SQL> @create_sql_profile.sql
Enter value for sql_id: fknfhx8wth51q
Enter value for child_no: 1
Enter value for category: TEST
Enter value for force_matching:

PL/SQL procedure successfully completed.

SQL> @sql_profile_hints
Enter value for name: PROFILE_fknfhx8wth51q

HINT
------------------------------------------------------------------------------------------------------------------------------------------------------
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('11.1.0.7')
DB_VERSION('11.1.0.7')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
INDEX_RS_ASC(@"SEL$1" "L"@"SEL$1" ("TABLE_XXXX_LOOKUP"."COL1" "TABLE_XXXX_LOOKUP"."COL2"))
INDEX_RS_ASC(@"SEL$1" "I"@"SEL$1" ("TABLE_XXXX_IDENT"."COL1"))
INDEX(@"SEL$1" "A"@"SEL$1" ("TABLE_XXXX_ASSIGNMENT"."COL1" "TABLE_XXXX_ASSIGNMENT"."COL2" "TABLE_XXXX_ASSIGNMENT"."COL3"))
LEADING(@"SEL$1" "L"@"SEL$1" "I"@"SEL$1" "A"@"SEL$1")
USE_NL(@"SEL$1" "I"@"SEL$1")
USE_NL(@"SEL$1" "A"@"SEL$1")

11 rows selected.

SQL> -- no OPT_ESTIMATE with the SQL Profile we created manually !
SQL> -- again it's directive - USE_NL, INDEX, LEADING, etc...
SQL>
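By the way, the create_sql_profile.sql script isn’t listed here, but a minimal sketch of the underlying call looks something like this (I’m hard-coding a few of the hints from the listing above for illustration; a real script would pull the full outline hint set from the cursor):

declare
  l_sql_text clob;
begin
  -- grab the statement text from the existing cursor
  select sql_fulltext
    into l_sql_text
    from v$sql
   where sql_id = 'fknfhx8wth51q'
     and child_number = 1;

  -- attach the outline-style hints as a profile in the TEST category
  dbms_sqltune.import_sql_profile(
    sql_text    => l_sql_text,
    profile     => sys.sqlprof_attr(
                     'LEADING(@"SEL$1" "L"@"SEL$1" "I"@"SEL$1" "A"@"SEL$1")',
                     'USE_NL(@"SEL$1" "I"@"SEL$1")',
                     'USE_NL(@"SEL$1" "A"@"SEL$1")'),
    name        => 'PROFILE_fknfhx8wth51q',
    category    => 'TEST',
    force_match => false);
end;
/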

So I apologize to all you SQL Profiles out there who have been lumped together by my prejudiced view, just because of the acts of a few of your brethren (i.e. the ones created by the SQL Tuning Advisor). SQL Profiles do indeed have all the capabilities of Outlines and are probably a better choice than Outlines in most cases.

Thanks again to Randolf Geist for his comments and his ideas on creating manual SQL Profiles.

Bind Variable Peeking – Drives Me Nuts!

In the constant battle to provide consistent performance, Oracle took a giant step backwards with the 9i version by introducing an “Enhancement” called Bind Variable Peeking. I’ll explain what I mean in a minute, but first a bit of history.

When Oracle introduced histograms in 8i, they provided a mechanism for the optimizer to recognize that the values in a column were not distributed evenly. That is, in a table with 100 rows and 10 distinct values, the default assumption the optimizer would make, in the absence of a histogram, would be that no matter which value you picked, you would always get 100/10 or 10 rows back. Histograms let the optimizer know when that is not the case. The classic example would be 100 records with 2 distinct values where one value, say “Y”, occurred 99 times and the other value, say “N”, occurred only once. Without a histogram the optimizer would assume that whether you requested records with a “Y” or an “N”, you would get half the records back (100/2 = 50), and so it would always favor a full table scan over using an index on the column. A histogram, assuming it was accurate (we’ll come back to that later), would let the optimizer know that the distribution was not uniform (i.e. not spread out evenly – commonly called skewed) and that a “Y” would get basically the whole table, while an “N” would get only 1%. This would allow the optimizer to pick an appropriate plan regardless of which value was specified in the Where Clause.

So let’s consider the implications of that. Would that improve the response time for the query where the value was “Y”? The answer is no. In this simple case, the default costing algorithm is close enough and produces the same plan that the histogram produces. The full table scan takes just as long whether the optimizer thought it was getting 50 rows or 99 rows. But what about the case where we specified the value of “N”? In this case, with a histogram we would pick up the index on that column and presumably get a much better response time than the full table scan. This is an important point: generally speaking, it is only the outliers, the exceptional cases if you will, where the histogram really makes a difference.
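For example, a histogram on our hypothetical column could be gathered with something like this (a sketch – the table and column names are made up to match the example):

-- request a histogram (up to 254 buckets) on the skewed column only
begin
  dbms_stats.gather_table_stats(
    ownname          => user,
    tabname          => 'TABLE1',
    method_opt       => 'FOR COLUMNS COLUMN1 SIZE 254',
    estimate_percent => 100,   -- full sample for an accurate histogram
    cascade          => true);
end;
/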

So at first glance, all appeared well with the world. But there was a fly in the ointment. You had to use literals in your SQL statements for the optimizer to be able to use the histograms. So you had to write your statements like this:

SELECT XYZ FROM TABLE1 WHERE COLUMN1 = 'Y';

SELECT XYZ FROM TABLE1 WHERE COLUMN1 = 'N';

Not a problem in our simple example, because you only have two possibilities. But consider a statement with 2 or 3 skewed columns, each with a couple of hundred distinct values. The possible combinations could quickly grow into the millions. Not a good thing for the shared pool.

Enter our star: Bind Variable Peeking, a feature introduced in 9i that allowed the optimizer to peek at the value of bind variables and then use a histogram to pick an appropriate plan, just as it would do with literals. The problem with the new feature was that it only looked at the variables once, when the statement was parsed. So let’s make our simple example a little more realistic by assuming we have a 10 million row table where 99% of the rows have a value of “Y” and 1% have a value of “N”. In our example, if the statement was passed a “Y” the first time it was executed, the full table scan plan would be locked in and would be used until the statement had to be re-parsed, even if the value “N” was passed to it in subsequent executions.

So let’s consider the implication of that. When you get the full table scan plan (because you passed a “Y” the first time), it behaves the same way no matter which value you pass subsequently. Always a full table scan, always the same amount of work and the same basic elapsed time. From a user standpoint that seems reasonable. The performance is consistent. (This is the way it would work without a histogram, by the way.) On the other hand, if the index plan gets picked because the parse occurs with a value of “N”, the executions where the value is “N” will be even faster than they were before, but any execution with a value of “Y” will be incredibly slow. This is not at all what the user expects. They expect the response time to be about the same every time they execute a piece of code. And this is the problem with bind variable peeking. It’s basically just Russian Roulette. It all depends on what value you happen to pass the statement when it’s parsed (which could be any execution, by the way).
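Here’s a minimal illustration using our hypothetical table (an untested sketch – the point is that the plan gets locked in at hard parse time):

variable val varchar2(1)

-- hard parse: optimizer peeks 'Y' and locks in the full scan plan
exec :val := 'Y'
select count(*) from table1 where column1 = :val;

-- same cursor is reused: still the full scan plan, even for 'N'
exec :val := 'N'
select count(*) from table1 where column1 = :val;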

So is Bind Variable Peeking a feature or a bug? Well technically it’s not a bug because it works the way it’s designed. I just happen to believe that it was not a good decision to implement it that way. But what other choices did the optimizer development group have?

  • They could have evaluated the bind variables and re-parsed for every execution of every statement using bind variables. This would eliminate the advantage of having bind variables in the first place and would never work for high transaction rate systems. So it was basically not an option.
  • They could have just said no, and made us use literals in order to get the benefit of histograms (probably not a bad option in retrospect – the fact that they added _optim_peek_user_binds probably means that they decided later to give us that option via setting this hidden parameter).
  • They could have implemented a system where they could identify statements that might benefit from different plans based on the values of bind variables. Then peek at those variables for every execution of those “bind sensitive” statements (sound familiar? – that’s what they finally did in 11g with Adaptive Cursor Sharing).

So why is it such a pervasive problem? And I do believe it is a pervasive problem with 10g in particular. A couple of reasons come to mind:

  1. We’ve been taught to always use bind variables. It’s a best practice which allows SQL statements to be shared, thus eliminating a great deal of work/contention. Using bind variables is an absolute necessity when building scalable, high transaction rate systems. (Of course that doesn’t mean you can’t bend the rule occasionally.)
  2. 10g changed its default stats gathering method to automatically gather histograms. So in a typical 10g database there are a huge number of histograms, many of them inappropriate (i.e. on columns that don’t have significantly skewed distributions) and many of them created with very small sample sizes, causing the histograms to be less than accurate. Note that 11g appears to be better on both counts – that is to say, 11g seems to create fewer inappropriate histograms and seems to create much more accurate histograms with small sample sizes. But the jury is still out on 11g stats gathering, as it has not been widely adopted at this point in time. (See the sketch after this list for one way to rein in the 10g behavior.)
  3. In my humble opinion, Bind Variable Peeking is not that well understood. When I talk to people about the issue, they usually have heard of it and have a basic idea what the problem is, but their behavior (in terms of the code they write and how they manage their databases) indicates that they don’t really have a good handle on the issue.
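As an aside, one way to rein in the 10g histogram behavior mentioned in item 2 is to put the default back to no histograms and then create them deliberately where the data is genuinely skewed. A sketch (test before changing a production default):

begin
  -- the 10g default is 'FOR ALL COLUMNS SIZE AUTO'; this turns off
  -- automatic histogram creation for the default gather jobs
  dbms_stats.set_param('METHOD_OPT', 'FOR ALL COLUMNS SIZE 1');
end;
/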

So what’s the best way to deal with this issue? Well, recognizing that you have a problem is the first step to recovery, so being able to identify that you have a problem with plan stability is an appropriate first step. Direct queries against the Statspack or AWR tables are probably the best way to identify the issue. I’ve previously posted a couple of scripts that I find useful for this purpose (unstable_plans.sql, awr_plan_stats.sql, awr_plan_change.sql). What you’re looking for is statements that flip-flop back and forth between 2 or more plans. Note that there are other reasons for statements to change plans, but Bind Variable Peeking is the number one suspect. Here’s an example of their usage:

SQL> @unstable_plans
SQL> break on plan_hash_value on startup_time skip 1
SQL> select * from (
  2  select sql_id, sum(execs), min(avg_etime) min_etime, max(avg_etime) max_etime, stddev_etime/min(avg_etime) norm_stddev
  3  from (
  4  select sql_id, plan_hash_value, execs, avg_etime,
  5  stddev(avg_etime) over (partition by sql_id) stddev_etime
  6  from (
  7  select sql_id, plan_hash_value,
  8  sum(nvl(executions_delta,0)) execs,
  9  (sum(elapsed_time_delta)/decode(sum(nvl(executions_delta,0)),0,1,sum(executions_delta))/1000000) avg_etime
 10  -- sum((buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta))) avg_lio
 11  from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
 12  where ss.snap_id = S.snap_id
 13  and ss.instance_number = S.instance_number
 14  and executions_delta > 0
 15  and elapsed_time_delta > 0
 16  group by sql_id, plan_hash_value
 17  )
 18  )
 19  group by sql_id, stddev_etime
 20  )
 21  where norm_stddev > nvl(to_number('&min_stddev'),2)
 22  and max_etime > nvl(to_number('&min_etime'),.1)
 23  order by norm_stddev
 24  /
Enter value for min_stddev:
Enter value for min_etime:

SQL_ID        SUM(EXECS)   MIN_ETIME   MAX_ETIME   NORM_STDDEV
------------- ---------- ----------- ----------- -------------
1tn90bbpyjshq         20         .06         .24        2.2039
0qa98gcnnza7h         16       20.62      156.72        4.6669
7vgmvmy8vvb9s        170         .04         .39        6.3705
32whwm2babwpt        196         .02         .26        8.1444
5jjx6dhb68d5v         51         .03         .47        9.3888
71y370j6428cb        155         .01         .38       19.7416
66gs90fyynks7        163         .02         .55       21.1603
b0cxc52zmwaxs        197         .02         .68       23.6470
31a13pnjps7j3        196         .02        1.03       35.1301
7k6zct1sya530        197         .53       49.88       65.2909

10 rows selected.

SQL> @find_sql
SQL> select sql_id, child_number, plan_hash_value plan_hash, executions execs,
  2  (elapsed_time/1000000)/decode(nvl(executions,0),0,1,executions) avg_etime,
  3  buffer_gets/decode(nvl(executions,0),0,1,executions) avg_lio,
  4  sql_text
  5  from v$sql s
  6  where upper(sql_text) like upper(nvl('&sql_text',sql_text))
  7  and sql_text not like '%from v$sql where sql_text like nvl(%'
  8  and sql_id like nvl('&sql_id',sql_id)
  9  order by 1, 2, 3
 10  /
Enter value for sql_text:
Enter value for sql_id: 0qa98gcnnza7h

SQL_ID         CHILD  PLAN_HASH        EXECS     AVG_ETIME      AVG_LIO SQL_TEXT
------------- ------ ---------- ------------ ------------- ------------ ------------------------------------------------------------
0qa98gcnnza7h      0  568322376            3          9.02      173,807 select avg(pk_col) from kso.skew where col1 > 0

SQL> @awr_plan_stats
SQL> break on plan_hash_value on startup_time skip 1
SQL> select sql_id, plan_hash_value, sum(execs) execs, sum(etime) etime, sum(etime)/sum(execs) avg_etime, sum(lio)/sum(execs) avg_lio
  2  from (
  3  select ss.snap_id, ss.instance_number node, begin_interval_time, sql_id, plan_hash_value,
  4  nvl(executions_delta,0) execs,
  5  elapsed_time_delta/1000000 etime,
  6  (elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
  7  buffer_gets_delta lio,
  8  (buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta)) avg_lio
  9  from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
 10  where sql_id = nvl('&sql_id','4dqs2k5tynk61')
 11  and ss.snap_id = S.snap_id
 12  and ss.instance_number = S.instance_number
 13  and executions_delta > 0
 14  )
 15  group by sql_id, plan_hash_value
 16  order by 5
 17  /
Enter value for sql_id: 0qa98gcnnza7h

SQL_ID        PLAN_HASH_VALUE        EXECS          ETIME    AVG_ETIME        AVG_LIO
------------- --------------- ------------ -------------- ------------ --------------
0qa98gcnnza7h       568322376           14          288.7       20.620      172,547.4
0qa98gcnnza7h      3723858078            2          313.4      156.715   28,901,466.0

SQL> @awr_plan_change
SQL> break on plan_hash_value on startup_time skip 1
SQL> select ss.snap_id, ss.instance_number node, begin_interval_time, sql_id, plan_hash_value,
  2  nvl(executions_delta,0) execs,
  3  (elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
  4  (buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta)) avg_lio
  5  from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
  6  where sql_id = nvl('&sql_id','4dqs2k5tynk61')
  7  and ss.snap_id = S.snap_id
  8  and ss.instance_number = S.instance_number
  9  and executions_delta > 0
 10  order by 1, 2, 3
 11  /
Enter value for sql_id: 0qa98gcnnza7h

   SNAP_ID   NODE BEGIN_INTERVAL_TIME            SQL_ID        PLAN_HASH_VALUE        EXECS    AVG_ETIME        AVG_LIO
---------- ------ ------------------------------ ------------- --------------- ------------ ------------ --------------
     21857      1 20-MAR-09 04.00.08.872 PM      0qa98gcnnza7h       568322376            1       31.528      173,854.0
     22027      1 27-MAR-09 05.00.08.006 PM      0qa98gcnnza7h                            1      139.141      156,807.0
     22030      1 27-MAR-09 08.00.15.380 PM      0qa98gcnnza7h                            3       12.451      173,731.0
     22031      1 27-MAR-09 08.50.04.757 PM      0qa98gcnnza7h                            2        8.771      173,731.0
     22032      1 27-MAR-09 08.50.47.031 PM      0qa98gcnnza7h      3723858078            1      215.876   28,901,466.0
     22033      1 27-MAR-09 08.57.37.614 PM      0qa98gcnnza7h       568322376            2        9.804      173,731.0
     22034      1 27-MAR-09 08.59.12.432 PM      0qa98gcnnza7h      3723858078            1       97.554   28,901,466.0
     22034      1 27-MAR-09 08.59.12.432 PM      0qa98gcnnza7h       568322376            2        8.222      173,731.5
     22035      1 27-MAR-09 09.12.00.422 PM      0qa98gcnnza7h                            3        9.023      173,807.3

9 rows selected.

So back to the question: what’s the best way to deal with the issue? In general, the best ways to eliminate Bind Variable Peeking problems are as follows:

  1. Only create histograms on skewed columns.
  2. Use literals in where clauses on columns where you have histograms and want to use them. Note that it’s not necessary to use literals for every possible value of a skewed column. There may be only a few outlier values that result in significantly different plans. With a little extra code you can use literals for those values and bind variables for the rest of the values that don’t matter.
  3. If you can’t modify the code, consider turning off Bind Variable Peeking by setting the _OPTIM_PEEK_USER_BINDS parameter to false (see the sketch after this list). You won’t get the absolute best performance for every possible statement, but you will get much more consistent performance, which is, in my opinion, more important than getting the absolute best performance. Keep in mind that this is a hidden parameter and so should be carefully tested and probably discussed with Oracle support prior to implementing it in any production system.
  4. You can also consider stronger methods of forcing the optimizer’s hand such as Outlines (see my previous posts on Unstable Plans and on Outlines). This option provides a quick method of locking in a single plan, but it’s not fool proof. Even with outlines, there is some possibility that the plan can change. Also note that this option is only palatable in situations where you have a relatively small number of problem SQL statements.
  5. Upgrade to 11g and let Adaptive Cursor Sharing take care of all your problems for you (don’t bet on it working without a little effort – I’ll try to do a post on that soon).
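For reference, item 3 boils down to something like this (again, it’s a hidden parameter – test it carefully and talk to Oracle support first):

-- try it at the session level first
alter session set "_optim_peek_user_binds" = false;

-- then, if appropriate, instance wide (assumes an spfile)
alter system set "_optim_peek_user_binds" = false scope=both;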

In summary, using literals with histograms on columns with skewed data distributions is really the only effective way to deal with the issue and still retain the ability for the optimizer to choose the absolute best execution plans. However, if circumstances prevent this approach, there are other techniques that can be applied. These should be considered temporary fixes, but may work well while a longer term solution is contemplated. From a philosophical standpoint, I strongly believe that consistency is more important than absolute speed. So when a choice must be made, I would always favor slightly reduced but consistent performance over anything that didn’t provide that consistency.

Your comments are always welcome. Please let me know what you think.

Statistics Gathering

Karen Morton just posted a great paper on her blog about statistics gathering. The paper is titled “Managing Statistics for Optimal Query Performance”. I was excited to see it because I think gathering stats is one of the most important and least well understood aspects of managing an Oracle environment. I must admit that I was expecting a recommended method or framework for gathering stats, but actually the paper is really more about how the statistics are used along with general guidelines for gathering them, rather than a direct recommendation on how to gather them. Nevertheless, it is one of the best papers I’ve seen on the subject. She’s going to present the paper at the Hotsos Symposium to be held in Dallas the week of March 9th. I’m going to be there and am really looking forward to hearing what she has to say on the subject.

By the way, I can’t recommend this conference highly enough. If you really want to understand how Oracle works, this is the place to be. You should know that I am not generally a fan of formal training classes. I have often been disappointed because I felt like my time would have been better spent researching the subject matter myself. On the other hand, I have found a lot of value in working closely on a project with someone who knows the subject matter well, kind of like a mentor. But generally speaking, the formal classes have been less satisfying, except in the rare case where you get a great instructor who wrote the class. This symposium format, on the other hand, allows you to listen to a collection of really knowledgeable Oracle people packed into a short period of time. I have been to the Hotsos Symposiums for several years in a row and I always come away with pages of notes on things I want to investigate further. And the participants are, generally speaking, a collection of very bright Oracle people. So even the conversations between the presentations are often very interesting. Finally, they run two presentations at a time, which allows you to pick the one that is most interesting. I have often found it hard to choose (don’t tell anyone, but I have on more than one occasion listened to the first half of one and then the second half of the other). So like I said, I find it to be a very productive few days.

But I digress. Karen’s paper is pretty long (24 pages) but it covers a vast amount of ground. There are a number of one liners that could be expanded into full papers. In the paper she discusses a number of topics, including dealing with shortcomings of the optimizer in 10g. One of those issues is bind variable peeking (probably my least favorite optimizer feature, quirk, bug, … whatever you want to call it). I must say that I think it has caused far more problems than it solved, and I frankly don’t know what they were thinking when they put that feature in. I wrote a little about a way to get around it using outlines here. By the way, this reminds me of a cartoon I drew 20 years ago that looked very similar to this one (that I lifted off of Steve Karam’s blog).

Of course, as Karen points out, the right way to deal with bind variable peeking issues is to understand your data and use literals where they are appropriate, keeping in mind the number of additional statements that will need to be parsed and dealt with in the shared pool. She also points out that code could be written to selectively use literals for specific values, giving you a mix of literals and bind variables for the same statement. This approach should allow you to minimize the impact on the shared pool while still providing the optimizer with the data it needs to make good decisions (this is a great idea, but I’ve never seen anyone actually do it). And of course she points out that 11g has a much better mechanism for dealing with this whole issue.

Another idea that really got me thinking was the use of dynamic sampling. Karen clearly shows one of the advantages of dynamic sampling in the case of correlated predicates (i.e. the optimizer assumes predicates like car_model = ‘Mustang’ and car_make = ‘Ford’ are independent, when clearly they are not). She shows how dynamic sampling can be very useful in conjunction with normal statistics in this situation. (Rats, now I’m going to have to go play around with that a bit – so much to do, so little time.)
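If you want to play along, the hint form looks something like this (the table name is made up; level 4 and above gets the optimizer sampling tables with complex predicates on multiple columns):

-- hypothetical example: ask for dynamic sampling on the cars table
select /*+ dynamic_sampling(c 4) */ count(*)
  from cars c
 where c.car_make = 'Ford'
   and c.car_model = 'Mustang';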

Finally, she discusses some of the statistics gathering options and differences in 9i, 10g, and 11g. The automatic creation of histograms is one of the main differences between 9i and 10g, and she discusses this issue but doesn’t go into too much detail on it. I must admit that I think 10g’s default setup does a very poor job when it comes to histograms. This is the one area I would have liked to see addressed a little more fully, but at 24 pages already I can understand why she had to draw the line somewhere. Anyway, by default 10g creates histograms on columns based on several factors including their use in where clauses. Unfortunately, histograms often get created on columns where their usefulness is questionable at best, and they regularly get created with very small sample sizes. The small sample sizes often result in significant inaccuracies. I personally think that allowing the gather stats job to automatically create histograms in 10g is a really bad idea.

Anyway, this is a paper that is well worth reading in my opinion. Typical Hotsos Symposium fare!

Oracle 11g Real Time SQL Monitoring

One of the interesting new features of 11gR1 is the automatic tracking of long running SQL statements. Oracle calls this new feature Real-Time SQL Monitoring. There are a couple of parameters that must be set to enable this behavior:

  • Parameter STATISTICS_LEVEL must be set to ALL or TYPICAL (the default)
  • Parameter CONTROL_MANAGEMENT_PACK_ACCESS must be set to DIAGNOSTIC+TUNING (the default)

Not all statements are tracked. Only statements that are considered long running invoke this new facility. Either of the following two conditions qualifies a statement for tracking:

  • The statement must take more than 5 seconds (or so) to execute
  • Or the statement is executed in parallel mode

There are a couple of new views that have been added to expose this feature: V$SQL_MONITOR and V$SQL_PLAN_MONITOR. The data appears to be updated once per second and contains very detailed information, including row counts for each step in the plan. These views contain a record of each execution of the qualifying statements. While it is possible to query these views directly, a function (REPORT_SQL_MONITOR) has been provided in the DBMS_SQLTUNE package that produces a nicely formatted report in html, xml, or (my favorite) text format. Here is a script that queries the V$SQL_MONITOR view (sql_monitor.sql) and another that executes the aforementioned function (report_sql_monitor.sql). Here is an example of their usage:

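The scripts add prompts and formatting, but they boil down to something like this (a sketch – the sql_id is just for illustration):

-- one row per monitored execution
select sql_id, status, sql_exec_id,
       elapsed_time/1000000 etime_secs,
       buffer_gets
  from v$sql_monitor
 order by sql_exec_start;

-- text report for a monitored statement
set long 1000000
select dbms_sqltune.report_sql_monitor(
         sql_id => '0qa98gcnnza7h',
         type   => 'TEXT') report
from dual;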