
Archive for the ‘Plan Stability’ Category.

SQL Profiles

Well I was wrong! SQL Profiles are better than Outlines. For a while now I have been saying that I didn’t like SQL Profiles because they were less stable than Outlines. It turns out that the OPT_ESTIMATE hint used by SQL Profiles created by the SQL Tuning Advisor is what I really didn’t like. I just didn’t know it.

Let me back up for a minute. I posted about Oracle’s SQL Tuning Advisor a while back. It’s a feature that was added to Oracle in version 10g. It basically looks at a SQL statement and tries to come up with a better execution plan than the one the optimizer has picked. Since it is allowed as much time as it wants to do its analysis, the advisor can sometimes find better approaches. That’s because it can actually validate the optimizer’s original estimates by running various steps in a given plan and comparing the actual results to the estimates. When it’s all done, if it has found a better plan, it offers to implement that new plan via a SQL Profile. Those offered Profiles often contain a lightly documented hint (OPT_ESTIMATE) that allows them to scale the optimizer’s estimates for various operations – essentially a fudge factor. The problem with this hint is that, far from locking a plan in place, it is locking an empirically derived fudge factor in place. This still leaves the optimizer with a lot of flexibility when it comes to choosing a plan. It also sets up a commonly occurring situation where the fudge factors stop making sense as the statistics change. Thus the observation that SQL Profiles tend to sour over time.


I have to give credit to Randolf Geist for making me take a second look at SQL Profiles. He commented on my Outlines post last week and recommended I give his post on SQL Profiles a look. I did, and it really got me thinking. One of the things I liked best about the post was that he created a couple of scripts to pull the existing hints from a statement in the shared pool or the AWR tables, and create a SQL Profile from those hints using the DBMS_SQLTUNE.IMPORT_SQL_PROFILE procedure. This makes perfect sense because the hints are stored with every plan (that’s what DBMS_XPLAN uses to spit them out if you ask for them). Unfortunately this procedure is only lightly documented. He also had a nice script for pulling the hints from the V$SQL_PLAN table, which I have made use of as well.
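For reference, here’s a minimal sketch of what a call to IMPORT_SQL_PROFILE looks like (remember, it’s only lightly documented, so verify the signature on your version). The hint list and profile name here are made up for illustration – the scripts below pull the real hints from V$SQL_PLAN or DBA_HIST_SQL_PLAN instead of hard coding them.

declare
  l_hints sys.sqlprof_attr;
begin
  -- hypothetical hint list; normally extracted from the plan's other_xml
  l_hints := sys.sqlprof_attr(
               'IGNORE_OPTIM_EMBEDDED_HINTS',
               'ALL_ROWS',
               'OUTLINE_LEAF(@"SEL$1")',
               'FULL(@"SEL$1" "SKEW"@"SEL$1")');
  dbms_sqltune.import_sql_profile(
    sql_text    => 'select avg(pk_col) from kso.skew',
    profile     => l_hints,
    name        => 'PROFILE_manual_example',   -- made-up name
    category    => 'DEFAULT',
    force_match => false);
end;
/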

So, as always, I have created a few scripts (borrowing mostly from Randolf):

create_sql_profile.sql – uses cursor from the shared pool
create_sql_profile_awr.sql – uses AWR tables
sql_profile_hints.sql – shows the hints in a SQL Profile for 10g

So here’s a little example:
Note: unstable_plans.sql and awr_plan_stats.sql are discussed here: Unstable Plans (Plan Instability)

SQL> @unstable_plans
Enter value for min_stddev:
Enter value for min_etime:

SQL_ID        SUM(EXECS)   MIN_ETIME   MAX_ETIME   NORM_STDDEV
------------- ---------- ----------- ----------- -------------
0qa98gcnnza7h          4       42.08      208.80        2.8016

SQL> @awr_plan_stats
Enter value for sql_id: 0qa98gcnnza7h

SQL_ID        PLAN_HASH_VALUE        EXECS          ETIME    AVG_ETIME        AVG_LIO
------------- --------------- ------------ -------------- ------------ --------------
0qa98gcnnza7h       568322376            3          126.2       42.079      124,329.7
0qa98gcnnza7h      3723858078            1          208.8      208.796   28,901,466.0

SQL> @create_sql_profile_awr
Enter value for sql_id: 0qa98gcnnza7h
Enter value for plan_hash_value: 568322376
Enter value for category:
Enter value for force_matching:

PL/SQL procedure successfully completed.

SQL> @sql_profiles
Enter value for sql_text:
Enter value for name: PROFIL%

NAME                           CATEGORY        STATUS   SQL_TEXT                                                               FOR
------------------------------ --------------- -------- ---------------------------------------------------------------------- ---
PROFILE_0qa98gcnnza7h          DEFAULT         ENABLED  select avg(pk_col) from kso.skew                                       NO

SQL> set echo on
SQL> @sql_profile_hints
SQL> set lines 155
SQL> col hint for a150
SQL> select attr_val hint
  2  from dba_sql_profiles p, sqlprof$attr h
  3  where p.signature = h.signature
  4  and name like ('&profile_name')
  5  order by attr#
  6  /
Enter value for profile_name: PROFILE_0qa98gcnnza7h

HINT
------------------------------------------------------------------------------------------------------------------------------------------------------
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('10.2.0.3')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
FULL(@"SEL$1" "SKEW"@"SEL$1")

SQL> @sql_hints_awr
SQL> select
  2  extractvalue(value(d), '/hint') as outline_hints
  3  from
  4  xmltable('/*/outline_data/hint'
  5  passing (
  6  select
  7  xmltype(other_xml) as xmlval
  8  from
  9  dba_hist_sql_plan
 10  where
 11  sql_id = '&sql_id'
 12  and plan_hash_value = &plan_hash_value
 13  and other_xml is not null
 14  )
 15  ) d;
Enter value for sql_id: 0qa98gcnnza7h
Enter value for plan_hash_value: 568322376

OUTLINE_HINTS
-----------------------------------------------------------------------------------------------------------------------------------------------------------
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('10.2.0.3')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
FULL(@"SEL$1" "SKEW"@"SEL$1")

A couple of additional points:

  • Outlines and SQL Profiles both take the same approach to controlling execution plans. They both attempt to force the optimizer down a certain path by applying hints behind the scenes. This is in my opinion an almost impossible task. The more complex the statement, the more difficult the task becomes. The newest kid on the block in this area (in 11g) is called a baseline and while it doesn’t abandon the hinting approach altogether, it does at least store the plan_hash_value – so it can tell if it regenerated the correct plan or not.
  • It does not appear that Outlines are being actively pursued by Oracle development anymore. So while they still work in 11g, they are becoming a little less reliable (and they were a bit quirky to begin with).
  • SQL Profiles have the ability to replace literals with bind variables similar to the cursor_sharing parameter. This means you can have a SQL Profile that will match multiple statements which use literals without having to set cursor_sharing for the whole instance.
  • Outlines take precedence over SQL Profiles. You can create both on the same statement and if you do, the outline will be used and the SQL Profile will be ignored. This is true in 11g as well, by the way.
  • Outlines don’t appear to use the OPT_ESTIMATE hint. So I believe it is still a valid approach to accept a SQL Profile as offered by the SQL Tuning Advisor and then create an Outline on top of it. It seems to work pretty well most of the time. (Be sure to check the hints and the plan that gets produced.)
  • Manually created SQL Profiles also don’t appear to use the OPT_ESTIMATE hint. So I believe it is also a valid approach to accept a SQL Profile as offered by the SQL Tuning Advisor and then create a SQL Profile on top of it. Note that you’ll have to create the new Profile in a different category, then drop the original, then move the new Profile into the DEFAULT category (see the sketch after this list), which makes this approach a bit more work than just creating an Outline in the DEFAULT category.
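Here’s a rough sketch of that category swap. The profile names are the ones from the example below, but treat them as placeholders and check DBA_SQL_PROFILES for the real names on your system.

begin
  -- drop the Tuning Advisor profile (the one carrying the OPT_ESTIMATE hints)
  dbms_sqltune.drop_sql_profile(name => 'SYS_SQLPROF_012061f471d50001');
  -- move the manually created profile from its test category into DEFAULT
  dbms_sqltune.alter_sql_profile(name           => 'PROFILE_fknfhx8wth51q',
                                 attribute_name => 'CATEGORY',
                                 value          => 'DEFAULT');
end;
/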

Have a look at the difference between the hints generated by the SQL Tuning Advisor and those created by a manual SQL Profile or an Outline (note that I have changed the object names to protect the innocent, and in so doing may have made it slightly more difficult to follow):

SQL>
SQL> select * from table(dbms_xplan.display_cursor('&sql_id','&child_no',''));
Enter value for sql_id: fknfhx8wth51q
Enter value for child_no: 1

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  fknfhx8wth51q, child number 1
-------------------------------------
SELECT /* test4 */ col1, col2, col3 ...


Plan hash value: 3163842146

----------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name                     | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                          |       |       |  1778 (100)|          |
|   1 |  NESTED LOOPS                 |                          |  1039 | 96627 |  1778   (1)| 00:00:33 |
|   2 |   NESTED LOOPS                |                          |   916 | 57708 |  1778   (1)| 00:00:33 |
|*  3 |    TABLE ACCESS BY INDEX ROWID| TABLE_XXXX_LOOKUP        |   446 | 17840 |   891   (1)| 00:00:17 |
|*  4 |     INDEX RANGE SCAN          | INDEX_XXXX_IS_CPCI       | 12028 |       |    18   (0)| 00:00:01 |
|   5 |    TABLE ACCESS BY INDEX ROWID| TABLE_XXXX_IDENT         |     2 |    46 |     2   (0)| 00:00:01 |
|*  6 |     INDEX RANGE SCAN          | INDEX_XXXXIP_17_FK       |     2 |       |     1   (0)| 00:00:01 |
|*  7 |   INDEX UNIQUE SCAN           | PK_TABLE_XXXX_ASSIGNMENT |     1 |    30 |     0   (0)|          |
----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter(( 
...
   4 - access("L"."COL1"=:N1)
   6 - access("L"."COL2"="I"."COL1")

Note
-----
   - SQL profile SYS_SQLPROF_012061f471d50001 used for this statement


85 rows selected.

SQL> @sql_profile_hints
Enter value for name: SYS_SQLPROF_012061f471d50001

HINT
------------------------------------------------------------------------------------------------------------------------------------------------------
OPT_ESTIMATE(@"SEL$1", TABLE, "L"@"SEL$1", SCALE_ROWS=0.0536172171)
OPT_ESTIMATE(@"SEL$1", INDEX_SKIP_SCAN, "A"@"SEL$1", PK_TABLE_XXXX_ASSIGNMENT, SCALE_ROWS=4)
COLUMN_STATS("APP_OWNER"."TABLE_XXXX_ASSIGNMENT", "COL1", scale, length=6 distinct=1234 nulls=0 min=1000000014 max=1026369632)
COLUMN_STATS("APP_OWNER"."TABLE_XXXX_ASSIGNMENT", "COL2", scale, length=12 distinct=2 nulls=0)
COLUMN_STATS("APP_OWNER"."TABLE_XXXX_ASSIGNMENT", "COL3", scale, length=12 distinct=2 nulls=0)
TABLE_STATS("APP_OWNER"."TABLE_XXXX_ASSIGNMENT", scale, blocks=5 rows=2400)
OPTIMIZER_FEATURES_ENABLE(default)

7 rows selected.

SQL> -- no direct hints - only stats and scaling on the profile created by the SQL Tuning Advisor
SQL> -- (i.e. the dreaded OPT_ESTIMATE hints and no directive type hints like INDEX or USE_NL)
SQL>
SQL> -- now let's try an outline on top of it
SQL> @create_outline

Session altered.

Enter value for sql_id: fknfhx8wth51q
Enter value for child_number: 1
Enter value for outline_name: KSOTEST1
Outline KSOTEST1 created.

PL/SQL procedure successfully completed.

SQL> @outline_hints
Enter value for name: KSOTEST1

HINT
------------------------------------------------------------------------------------------------------------------------------------------------------
USE_NL(@"SEL$1" "A"@"SEL$1")
USE_NL(@"SEL$1" "I"@"SEL$1")
LEADING(@"SEL$1" "L"@"SEL$1" "I"@"SEL$1" "A"@"SEL$1")
INDEX(@"SEL$1" "A"@"SEL$1" ("TABLE_XXXX_ASSIGNMENT"."COL1" "TABLE_XXXX_ASSIGNMENT"."COL2" "TABLE_XXXX_ASSIGNMENT"."COL3"))
INDEX_RS_ASC(@"SEL$1" "I"@"SEL$1" ("TABLE_XXXX_IDENT"."COL1"))
INDEX_RS_ASC(@"SEL$1" "L"@"SEL$1" ("TABLE_XXXX_LOOKUP"."COL1" "TABLE_XXXX_LOOKUP"."COL2"))
OUTLINE_LEAF(@"SEL$1")
ALL_ROWS
DB_VERSION('11.1.0.7')
OPTIMIZER_FEATURES_ENABLE('11.1.0.7')
IGNORE_OPTIM_EMBEDDED_HINTS

11 rows selected.

SQL> -- no OPT_ESTIMATE hints on the outline
SQL> -- directive type hints - INDEX, USE_NL, etc...
SQL> 
SQL> -- now let's try creating a manual profile
SQL> @create_sql_profile.sql
Enter value for sql_id: fknfhx8wth51q
Enter value for child_no: 1
Enter value for category: TEST
Enter value for force_matching:

PL/SQL procedure successfully completed.

SQL> @sql_profile_hints
Enter value for name: PROFILE_fknfhx8wth51q

HINT
------------------------------------------------------------------------------------------------------------------------------------------------------
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('11.1.0.7')
DB_VERSION('11.1.0.7')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
INDEX_RS_ASC(@"SEL$1" "L"@"SEL$1" ("TABLE_XXXX_LOOKUP"."COL1" "TABLE_XXXX_LOOKUP"."COL2"))
INDEX_RS_ASC(@"SEL$1" "I"@"SEL$1" ("TABLE_XXXX_IDENT"."COL1"))
INDEX(@"SEL$1" "A"@"SEL$1" ("TABLE_XXXX_ASSIGNMENT"."COL1" "TABLE_XXXX_ASSIGNMENT"."COL2" "TABLE_XXXX_ASSIGNMENT"."COL3"))
LEADING(@"SEL$1" "L"@"SEL$1" "I"@"SEL$1" "A"@"SEL$1")
USE_NL(@"SEL$1" "I"@"SEL$1")
USE_NL(@"SEL$1" "A"@"SEL$1")

11 rows selected.

SQL> -- no OPT_ESTIMATE with the SQL Profile we created manually!
SQL> -- again it's directive - USE_NL, INDEX, LEADING, etc...
SQL>

So I apologize to all you SQL Profiles out there who have been lumped together by my prejudiced view, just because of the acts of a few of your brethren (i.e. the ones created by the SQL Tuning Advisor). SQL Profiles do indeed have all the capabilities of Outlines, and they are probably a better choice than Outlines in most cases.

Thanks again to Randolf Geist for his comments and his ideas on creating manual SQL Profiles.

Bind Variable Peeking – Drives Me Nuts!

In the constant battle to provide consistent performance, Oracle took a giant step backwards with the 9i version by introducing an “Enhancement” called Bind Variable Peeking. I’ll explain what I mean in a minute, but first a bit of history.

When Oracle introduced histograms in 8i, they provided a mechanism for the optimizer to recognize that the values in a column were not distributed evenly. That is, in a table with 100 rows and 10 distinct values, the default assumption the optimizer would make, in the absence of a histogram, would be that no matter which value you picked you would always get 100/10, or 10, rows back. Histograms let the optimizer know if that was not the case. The classic example would be 100 records with 2 distinct values where one value, say “Y”, occurred 99 times and the other value, say “N”, occurred only 1 time. Without a histogram the optimizer would always assume that whether you requested records with a “Y” or an “N”, you would get half the records back (100/2 = 50), and therefore it would always opt for a full table scan as opposed to using an index on the column. A histogram, assuming it was accurate (we’ll come back to that later), would let the optimizer know that the distribution was not uniform (i.e. not spread out evenly – commonly called skewed) and that a “Y” would get basically the whole table, while an “N” would get only 1%. This would allow the optimizer to pick an appropriate plan regardless of which value was specified in the Where Clause.
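Here’s a tiny sketch of that setup, using a made-up table (T1) rather than anything from the examples below. The SIZE clause in METHOD_OPT is what asks DBMS_STATS to build the histogram.

-- made-up demo table: 99 rows with 'Y' and 1 row with 'N'
create table t1 as
select rownum pk_col, decode(rownum, 100, 'N', 'Y') col1
from dual connect by level <= 100;

create index t1_col1_idx on t1(col1);

-- gather stats including a histogram on the skewed column
exec dbms_stats.gather_table_stats(user, 'T1', method_opt => 'FOR COLUMNS COL1 SIZE 254')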

So let’s consider the implications of that. Would that improve the response time for the query where the value was “Y”? The answer is no. In this simple case, the default costing algorithm is close enough and produces the same plan that the histogram produces. The full table scan takes just as long whether the optimizer thought it was getting 50 rows or 99 rows. But what about the case where we specified the value of “N”? In this case, with a histogram we would pick up the index on that column and presumably get a much better response time than the full table scan. This is an important point. Generally speaking it is only the outliers, the exceptional cases if you will, where the histogram really makes a difference.

So at first glance, all appeared well with the world. But there was a fly in the ointment. You had to use literals in your SQL statements for the optimizer to be able to use the histograms. So you had to write your statements like this:

SELECT XYZ FROM TABLE1 WHERE COLUMN1 = 'Y';

SELECT XYZ FROM TABLE1 WHERE COLUMN1 = 'N';

Not a problem in our simple example, because you only have two possibilities. But consider a statement with 2 or 3 skewed columns, each with a couple of hundred distinct values. The possible combinations could quickly grow into the millions. Not a good thing for the shared pool.

Enter our star: Bind Variable Peeking, a new feature introduced in 9i that allows the optimizer to peek at the value of bind variables and then use a histogram to pick an appropriate plan, just like it would do with literals. The problem with the new feature was that it only looked at the variables once, when the statement was parsed. So let’s make our simple example a little more realistic by assuming we have a 10 million row table where 99% of the rows have a value of “Y” and 1% have a value of “N”. In our example, if the first time the statement was executed it was passed a “Y”, the full table scan plan would be locked in and would be used until the statement had to be re-parsed, even if the value “N” was passed in subsequent executions.

So let’s consider the implication of that. When you get the full table scan plan (because you passed a “Y” the first time), it behaves the same way no matter which value you pass subsequently. Always a full table scan, always the same amount of work and the same basic elapsed time. From a user standpoint that seems reasonable. The performance is consistent. (This is the way it would work without a histogram, by the way.) On the other hand, if the index plan gets picked because the parse occurs with a value of “N”, the executions where the value is “N” will be even faster than they were before, but the executions with a value of “Y” will be incredibly slow. This is not at all what the user expects. They expect the response time to be about the same every time they execute a piece of code. And this is the problem with bind variable peeking. It’s basically Russian roulette. It just depends on which value you happen to pass the statement when it’s parsed (which could be any execution, by the way).
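A quick way to see this for yourself (using the made-up T1 table from the sketch above): whichever value happens to be in the bind variable at hard parse time determines the plan everyone gets afterward.

variable v1 varchar2(1)

-- first execution hard parses the statement; the peeked value picks the plan
exec :v1 := 'Y'
select count(pk_col) from t1 where col1 = :v1;

-- subsequent executions reuse the same (full scan) plan,
-- even though 'N' would be better served by the index
exec :v1 := 'N'
select count(pk_col) from t1 where col1 = :v1;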

So is Bind Variable Peeking a feature or a bug? Well technically it’s not a bug because it works the way it’s designed. I just happen to believe that it was not a good decision to implement it that way. But what other choices did the optimizer development group have?

  • They could have evaluated the bind variables and re-parsed for every execution of every statement using bind variables. This would eliminate the advantage of having bind variables in the first place and would never work for high transaction systems. So it was basically not an option.
  • They could have just said no, and made us use literals in order to get the benefit of histograms (probably not a bad option in retrospect – the fact that they added _optim_peek_user_binds probably means that they decided later to give us that option via setting this hidden parameter).
  • They could have implemented a system whereby they could identify statements that might benefit from different plans based on the values of bind variables, and then peek at those variables for every execution of those “bind sensitive” statements. (Sound familiar? That’s what they finally did in 11g with Adaptive Cursor Sharing – see the query sketch after this list.)
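If you’re on 11g and curious whether Adaptive Cursor Sharing has kicked in for a statement, the bind sensitivity flags in V$SQL are a quick check (sketch below – substitute the sql_id you’re interested in):

-- 11g only: has the cursor been marked bind sensitive / bind aware?
select sql_id, child_number, is_bind_sensitive, is_bind_aware, plan_hash_value
from v$sql
where sql_id = '&sql_id';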

So why is it such a pervasive problem? And I do believe it is a pervasive problem with 10g in particular. A few reasons come to mind:

  1. We’ve been taught to always use bind variables. It’s a best practice which allows SQL statements to be shared, thus eliminating a great deal of work/contention. Using bind variables is an absolute necessity when building scalable high transaction rate systems. (Of course that doesn’t mean that you can’t bend the rule occasionally.)
  2. 10g changed its default stats gathering method to automatically gather histograms. So in a typical 10g database there are a huge number of histograms, many of them inappropriate (i.e. on columns that don’t have significantly skewed distributions) and many of them created with very small sample sizes, causing the histograms to be less than accurate (see the sketch after this list for how to check the default). Note that 11g appears to be better on both counts – that is to say, 11g seems to create fewer inappropriate histograms and seems to create much more accurate histograms with small sample sizes. But the jury is still out on 11g stats gathering as it has not been widely adopted at this point in time.
  3. In my humble opinion, Bind Variable Peeking is not that well understood. When I talk to people about the issue, they usually have heard of it and have a basic idea what the problem is, but their behavior (in terms of the code they write and how they manage their databases) indicates that they don’t really have a good handle on the issue.
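As an aside, it’s easy to check what your 10g database is doing by default (SIZE AUTO in METHOD_OPT is what triggers the automatic histogram creation). This is just a sketch – test any change to the default before rolling it out.

-- what method_opt will the automatic stats job use?
select dbms_stats.get_param('METHOD_OPT') from dual;

-- one (heavy-handed) way to stop automatic histogram creation
exec dbms_stats.set_param('METHOD_OPT', 'FOR ALL COLUMNS SIZE 1')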

So what’s the best way to deal with this issue? Well, recognizing that you have a problem is the first step to recovery, so identifying statements with unstable plans is an appropriate place to start. Direct queries against the Statspack or AWR tables are probably the best way to identify the issue. I’ve previously posted a couple of scripts that I find useful for this purpose (unstable_plans.sql, awr_plan_stats.sql, awr_plan_change.sql). What you’re looking for is statements that flip flop back and forth between 2 or more plans. Note that there are other reasons for statements to change plans, but Bind Variable Peeking is the number one suspect. Here’s an example of their usage:

SQL> @unstable_plans
SQL> break on plan_hash_value on startup_time skip 1
SQL> select * from (
  2  select sql_id, sum(execs), min(avg_etime) min_etime, max(avg_etime) max_etime, stddev_etime/min(avg_etime) norm_stddev
  3  from (
  4  select sql_id, plan_hash_value, execs, avg_etime,
  5  stddev(avg_etime) over (partition by sql_id) stddev_etime
  6  from (
  7  select sql_id, plan_hash_value,
  8  sum(nvl(executions_delta,0)) execs,
  9  (sum(elapsed_time_delta)/decode(sum(nvl(executions_delta,0)),0,1,sum(executions_delta))/1000000) avg_etime
 10  -- sum((buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta))) avg_lio
 11  from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
 12  where ss.snap_id = S.snap_id
 13  and ss.instance_number = S.instance_number
 14  and executions_delta > 0
 15  and elapsed_time_delta > 0
 16  group by sql_id, plan_hash_value
 17  )
 18  )
 19  group by sql_id, stddev_etime
 20  )
 21  where norm_stddev > nvl(to_number('&min_stddev'),2)
 22  and max_etime > nvl(to_number('&min_etime'),.1)
 23  order by norm_stddev
 24  /
Enter value for min_stddev:
Enter value for min_etime:

SQL_ID        SUM(EXECS)   MIN_ETIME   MAX_ETIME   NORM_STDDEV
------------- ---------- ----------- ----------- -------------
1tn90bbpyjshq         20         .06         .24        2.2039
0qa98gcnnza7h         16       20.62      156.72        4.6669
7vgmvmy8vvb9s        170         .04         .39        6.3705
32whwm2babwpt        196         .02         .26        8.1444
5jjx6dhb68d5v         51         .03         .47        9.3888
71y370j6428cb        155         .01         .38       19.7416
66gs90fyynks7        163         .02         .55       21.1603
b0cxc52zmwaxs        197         .02         .68       23.6470
31a13pnjps7j3        196         .02        1.03       35.1301
7k6zct1sya530        197         .53       49.88       65.2909

10 rows selected.

SQL> @find_sql
SQL> select sql_id, child_number, plan_hash_value plan_hash, executions execs,
  2  (elapsed_time/1000000)/decode(nvl(executions,0),0,1,executions) avg_etime,
  3  buffer_gets/decode(nvl(executions,0),0,1,executions) avg_lio,
  4  sql_text
  5  from v$sql s
  6  where upper(sql_text) like upper(nvl('&sql_text',sql_text))
  7  and sql_text not like '%from v$sql where sql_text like nvl(%'
  8  and sql_id like nvl('&sql_id',sql_id)
  9  order by 1, 2, 3
 10  /
Enter value for sql_text:
Enter value for sql_id: 0qa98gcnnza7h

SQL_ID         CHILD  PLAN_HASH        EXECS     AVG_ETIME      AVG_LIO SQL_TEXT
------------- ------ ---------- ------------ ------------- ------------ ------------------------------------------------------------
0qa98gcnnza7h      0  568322376            3          9.02      173,807 select avg(pk_col) from kso.skew where col1 > 0

SQL> @awr_plan_stats
SQL> break on plan_hash_value on startup_time skip 1
SQL> select sql_id, plan_hash_value, sum(execs) execs, sum(etime) etime, sum(etime)/sum(execs) avg_etime, sum(lio)/sum(execs) avg_lio
  2  from (
  3  select ss.snap_id, ss.instance_number node, begin_interval_time, sql_id, plan_hash_value,
  4  nvl(executions_delta,0) execs,
  5  elapsed_time_delta/1000000 etime,
  6  (elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
  7  buffer_gets_delta lio,
  8  (buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta)) avg_lio
  9  from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
 10  where sql_id = nvl('&sql_id','4dqs2k5tynk61')
 11  and ss.snap_id = S.snap_id
 12  and ss.instance_number = S.instance_number
 13  and executions_delta > 0
 14  )
 15  group by sql_id, plan_hash_value
 16  order by 5
 17  /
Enter value for sql_id: 0qa98gcnnza7h

SQL_ID        PLAN_HASH_VALUE        EXECS          ETIME    AVG_ETIME        AVG_LIO
------------- --------------- ------------ -------------- ------------ --------------
0qa98gcnnza7h       568322376           14          288.7       20.620      172,547.4
0qa98gcnnza7h      3723858078            2          313.4      156.715   28,901,466.0

SQL> @awr_plan_change
SQL> break on plan_hash_value on startup_time skip 1
SQL> select ss.snap_id, ss.instance_number node, begin_interval_time, sql_id, plan_hash_value,
  2  nvl(executions_delta,0) execs,
  3  (elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
  4  (buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta)) avg_lio
  5  from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
  6  where sql_id = nvl('&sql_id','4dqs2k5tynk61')
  7  and ss.snap_id = S.snap_id
  8  and ss.instance_number = S.instance_number
  9  and executions_delta > 0
 10  order by 1, 2, 3
 11  /
Enter value for sql_id: 0qa98gcnnza7h

   SNAP_ID   NODE BEGIN_INTERVAL_TIME            SQL_ID        PLAN_HASH_VALUE        EXECS    AVG_ETIME        AVG_LIO
---------- ------ ------------------------------ ------------- --------------- ------------ ------------ --------------
     21857      1 20-MAR-09 04.00.08.872 PM      0qa98gcnnza7h       568322376            1       31.528      173,854.0
     22027      1 27-MAR-09 05.00.08.006 PM      0qa98gcnnza7h                            1      139.141      156,807.0
     22030      1 27-MAR-09 08.00.15.380 PM      0qa98gcnnza7h                            3       12.451      173,731.0
     22031      1 27-MAR-09 08.50.04.757 PM      0qa98gcnnza7h                            2        8.771      173,731.0
     22032      1 27-MAR-09 08.50.47.031 PM      0qa98gcnnza7h      3723858078            1      215.876   28,901,466.0
     22033      1 27-MAR-09 08.57.37.614 PM      0qa98gcnnza7h       568322376            2        9.804      173,731.0
     22034      1 27-MAR-09 08.59.12.432 PM      0qa98gcnnza7h      3723858078            1       97.554   28,901,466.0
     22034      1 27-MAR-09 08.59.12.432 PM      0qa98gcnnza7h       568322376            2        8.222      173,731.5
     22035      1 27-MAR-09 09.12.00.422 PM      0qa98gcnnza7h                            3        9.023      173,807.3

9 rows selected.

So back to the question: what’s the best way to deal with the issue? In general, the best way to eliminate problems caused by Bind Variable Peeking is as follows:

  1. Only create histograms on skewed columns.
  2. Use literals in where clauses on columns where you have histograms and want to use them. Note that it’s not necessary to use literals for every possible value of a skewed column. There may be only a few outlier values that result in significantly different plans. With a little extra code you can use literals for those values and bind variables for the rest of the values that don’t matter.
  3. If you can’t modify the code, consider turning off Bind Variable Peeking by setting the _OPTIM_PEEK_USER_BINDS parameter to false (see the sketch after this list). You won’t get the absolute best performance for every possible statement, but you will get much more consistent performance, which is, in my opinion, more important than getting the absolute best performance. Keep in mind that this is a hidden parameter and so should be carefully tested and probably discussed with Oracle support prior to implementing it in any production system.
  4. You can also consider stronger methods of forcing the optimizer’s hand, such as Outlines (see my previous posts on Unstable Plans and on Outlines). This option provides a quick method of locking in a single plan, but it’s not foolproof. Even with outlines, there is some possibility that the plan can change. Also note that this option is only palatable in situations where you have a relatively small number of problem SQL statements.
  5. Upgrade to 11g and let Adaptive Cursor Sharing take care of all your problems for you (don’t bet on it working without a little effort – I’ll try to do a post on that soon).
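For completeness, here’s what turning off bind peeking looks like. Again, it’s a hidden parameter – test it and talk to Oracle support before touching a production system.

-- session level, for testing
alter session set "_optim_peek_user_binds" = false;

-- instance wide (assumes an spfile)
alter system set "_optim_peek_user_binds" = false scope=both;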

In summary, using literals with histograms on columns with skewed data distributions is really the only effective way to deal with the issue and still retain the ability for the optimizer to choose the absolute best execution plans. However, if circumstances prevent this approach, there are other techniques that can be applied. These should be considered temporary fixes, but they may work well while a longer term solution is contemplated. From a philosophical standpoint, I strongly believe that consistency is more important than absolute speed. So when a choice must be made, I would always favor slightly reduced but consistent performance over anything that didn’t provide that consistency.

Your comments are always welcome. Please let me know what you think.

Oracle Outlines – aka Plan Stability

Roughly 10 years ago, Oracle introduced a way to lock down execution plans. This was a response to the inherent unpredictability of the early cost based optimizer. The Oracle marketing machine even gave the feature the name “Plan Stability”. It appeared to be a good approach, basically allowing a DBA to lock an existing plan for a given statement. The mechanism employed to accomplish this was to create hints that didn’t allow the optimizer much (if any) flexibility to come up with an alternative plan. These hints are stored in the database in a structure called an OUTLINE, or sometimes a STORED OUTLINE. The optimizer could then apply these hints behind the scenes whenever a SQL statement that matched was executed. By the way, “matching” basically means that the text of the statement matches. Originally outlines had to match character for character, just like the normal rules for sharing SQL statements, but for some reason Oracle later decided that the matching algorithm should be somewhat relaxed as compared to Oracle’s standard. What that means is that in 10gR2, by default, whitespace is collapsed and differences in case are ignored. So (at least as far as outlines are concerned) “select * from dual” is the same as “SELECT     *       FROM DuAl”. You’ll still get two different statements in the shared pool, but they will use the same outline, if one exists.
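A quick way to see whether cursors in the shared pool are actually picking up an outline is the OUTLINE_CATEGORY column in V$SQL (a simple sketch):

-- cursors currently using a stored outline
select sql_id, child_number, outline_category, sql_text
from v$sql
where outline_category is not null;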

With 9i, Oracle started to enhance this feature by adding the ability to edit the outlines themselves, but they never really completed the job. They pretty much quit doing anything with the feature after 9i. In fact, the script that creates the DBMS_OUTLN package ($ORACLE_HOME/rdbms/admin/dbmsol.sql) has not been updated since early in 2004 (with the exception of a tweak to keep it working in 11g). Anyway, I think it is a great feature with two primary uses.

  • First, it can be used to freeze a plan for a statement. This is especially helpful in situations where bind variable peeking is causing Oracle to alternate between a couple of plans.
  • Second, it can be very helpful when dealing with an application where the code cannot be modified. Outlines provide a means of altering the execution plan for a statement without changing the code or making changes to the basic database configuration.

Lots of people have written about outlines, so I don’t want to just repeat information that is already available. But there doesn’t seem to be a single spot that pulls together all (at least what I consider to be all) the important stuff. Also, most of the stuff I have seen about outlines was written for 8i or 9i. As this is being written, 11gR1 has been out for over a year (although it has still not been widely adopted), and 10gR2 is far and away the most prevalent version in production. So, here we go.

Outlines can be created two ways.

  1. You can use the CREATE OUTLINE statement – which allows you to give your outline a name, but requires you to include the SQL statement as part of your CREATE OUTLINE statement. Therefore you can’t see what the execution plan is before creating the outline. Not very useful in my opinion.
  2. You can use the CREATE_OUTLINE procedure in the DBMS_OUTLN package – which doesn’t allow you to give your outline a name, but does let you specify a specific child cursor of a specific SQL statement in the shared pool. This means you can check the execution plan before creating the outline and that you can be sure the statement exactly matches what is being sent from the application.

Here’s an example:

CREATE OUTLINE myoutline FOR CATEGORY temp ON select * from dual;

EXEC DBMS_OUTLN.CREATE_OUTLINE('82734682234',0,'DEFAULT');
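The arguments to DBMS_OUTLN.CREATE_OUTLINE are the hash value and child number of the cursor in the shared pool, plus the outline category. A quick way to look those up is a query like this (a sketch – plug in the sql_id you’re after):

select hash_value, child_number, plan_hash_value, sql_text
from v$sql
where sql_id = '&sql_id';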

Continue reading ‘Oracle Outlines – aka Plan Stability’ »

Unstable Plans (Oracle Plan Stability/Instability)

One of the most frustrating things about the Cost Based Optimizer in Oracle is the sometimes erratic performance. It can seem almost random at times. Resolving these issues can also be challenging. It’s a little like taking your car to the mechanic when it makes a funny noise. It never seems to make the noise when the mechanic is there. Fortunately we have ASH/AWR which captures a ton of information about what our databases are doing.

Here are a couple of scripts that I find useful.

The first one can be used to show statements that have experienced significant variance in execution time (it can be modified to look for variance in the amount of logical I/O, but I’ll leave that as an exercise for the reader). I called the script unstable_plans.sql. It uses an analytic function to calculate a standard deviation on the average elapsed time by plan. So statements that have multiple plans with wild variations in response time between plans will be returned by the script. The script prompts for a couple of values. The first is the minimum number of standard deviations. The second is the minimum elapsed time (I usually don’t care if a statement executes sometimes in .005 seconds and sometimes in .02 seconds, even though this is a large swing statistically). Both of these inputs are defaulted, by the way.

Continue reading ‘Unstable Plans (Oracle Plan Stability/Instability)’ »