12c Adaptive Optimization – Part 2 – Hints

This is the second post on follow-up questions from the Redgate webinar I did on 12c Adaptive Optimization. The first post is here: 12c Adaptive Optimization – Part 1. Since there were several comments and questions about hints and how they interact with Adaptive Plans, I decided to limit this second post to that topic.

Q: Regarding turning off the adaptive optimization (particularly adaptive joins), will there also be a hint to disable it for a particular SQL?
Q: Can we pick and choose SQLs not to run this collector for?

A: There are no specific hints to enable or disable Adaptive Plans as of 12.1.0.1. However, the OPT_PARAM hint does work with both the OPTIMIZER_ADAPTIVE_FEATURES parameter and the “_optimizer_adaptive_plans” parameter.

Here’s an example:


SYS@db12c1> -- statement that wants to generate an adaptive plan
SYS@db12c1> select product_name
  2  from oe.order_items o, oe.product_information p
  3  where o.unit_price=15 and o.quantity > 1
  4  and o.product_id = p.product_id
  5  /

PRODUCT_NAME
--------------------------------------------------
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 

13 rows selected.

SYS@db12c1> @prev_sql

SQL_ID         CHILD  PLAN_HASH  EXECS  AVG_ETIME SQL_TEXT
------------- ------ ---------- ------ ---------- ----------------------------------------------------------------------
3ycnqgx5nc8nn      0 1553478007      1        .00 select product_name from oe.order_items o, oe.product_information p wh

SYS@db12c1> @dplan_adaptive
Enter value for sql_id: 3ycnqgx5nc8nn
Enter value for child_no: 

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  3ycnqgx5nc8nn, child number 0
-------------------------------------
select product_name from oe.order_items o, oe.product_information p
where o.unit_price=15 and o.quantity > 1 and o.product_id = p.product_id

Plan hash value: 1553478007

----------------------------------------------------------------------------------------------------------
|   Id  | Operation                     | Name                   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------------
|     0 | SELECT STATEMENT              |                        |       |       |     7 (100)|          |
|  *  1 |  HASH JOIN                    |                        |     4 |   128 |     7   (0)| 00:00:01 |
|-    2 |   NESTED LOOPS                |                        |       |       |            |          |
|-    3 |    NESTED LOOPS               |                        |     4 |   128 |     7   (0)| 00:00:01 |
|-    4 |     STATISTICS COLLECTOR      |                        |       |       |            |          |
|  *  5 |      TABLE ACCESS STORAGE FULL| ORDER_ITEMS            |     4 |    48 |     3   (0)| 00:00:01 |
|- *  6 |     INDEX UNIQUE SCAN         | PRODUCT_INFORMATION_PK |     1 |       |     0   (0)|          |
|-    7 |    TABLE ACCESS BY INDEX ROWID| PRODUCT_INFORMATION    |     1 |    20 |     1   (0)| 00:00:01 |
|     8 |   TABLE ACCESS STORAGE FULL   | PRODUCT_INFORMATION    |     1 |    20 |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("O"."PRODUCT_ID"="P"."PRODUCT_ID")
   5 - storage(("O"."UNIT_PRICE"=15 AND "O"."QUANTITY">1))
       filter(("O"."UNIT_PRICE"=15 AND "O"."QUANTITY">1))
   6 - access("O"."PRODUCT_ID"="P"."PRODUCT_ID")

Note
-----
   - this is an adaptive plan (rows marked '-' are inactive)


Reoptimized plan:
-----------------
This cursor is marked for automatic reoptimization.  The plan that is
expected to be chosen on the next execution is displayed below.

Plan hash value: 1553478007

--------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name                | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |                     |    13 |   416 |     8   (0)| 00:00:01 |
|*  1 |  HASH JOIN                 |                     |    13 |   416 |     8   (0)| 00:00:01 |
|*  2 |   TABLE ACCESS STORAGE FULL| ORDER_ITEMS         |    13 |   156 |     3   (0)| 00:00:01 |
|   3 |   TABLE ACCESS STORAGE FULL| PRODUCT_INFORMATION |   288 |  5760 |     5   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("O"."PRODUCT_ID"="P"."PRODUCT_ID")
   2 - storage("O"."UNIT_PRICE"=15 AND "O"."QUANTITY">1)
       filter("O"."UNIT_PRICE"=15 AND "O"."QUANTITY">1)

Note
-----
   - this is an adaptive plan

60 rows selected.

SYS@db12c1> -- so the previous statement used an adaptive plan picking a HJ over the NLJ
SYS@db12c1>
SYS@db12c1> -- now let's turn off adaptive plans via the OPT_PARAM hint
SYS@db12c1> -- (set _optimizer_adaptive_plans or optimizer_adaptive_features to false)
SYS@db12c1>
SYS@db12c1> select /*+ OPT_PARAM('_optimizer_adaptive_plans','false') */ product_name
  2  from oe.order_items o, oe.product_information p
  3  where o.unit_price=15 and o.quantity > 1
  4  and o.product_id = p.product_id
  5  /

PRODUCT_NAME
--------------------------------------------------
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 

13 rows selected.

SYS@db12c1> @x

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  04g4xyu3788qm, child number 0
-------------------------------------
select /*+ OPT_PARAM('_optimizer_adaptive_plans','false') */
product_name from oe.order_items o, oe.product_information p where
o.unit_price=15 and o.quantity > 1 and o.product_id = p.product_id

Plan hash value: 1255158658

-------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                   | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                        |       |       |     7 (100)|          |
|   1 |  NESTED LOOPS                |                        |       |       |            |          |
|   2 |   NESTED LOOPS               |                        |     4 |   128 |     7   (0)| 00:00:01 |
|*  3 |    TABLE ACCESS STORAGE FULL | ORDER_ITEMS            |     4 |    48 |     3   (0)| 00:00:01 |
|*  4 |    INDEX UNIQUE SCAN         | PRODUCT_INFORMATION_PK |     1 |       |     0   (0)|          |
|   5 |   TABLE ACCESS BY INDEX ROWID| PRODUCT_INFORMATION    |     1 |    20 |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - storage(("O"."UNIT_PRICE"=15 AND "O"."QUANTITY">1))
       filter(("O"."UNIT_PRICE"=15 AND "O"."QUANTITY">1))
   4 - access("O"."PRODUCT_ID"="P"."PRODUCT_ID")


26 rows selected.

SYS@db12c1> -- So the plan has reverted to the NL Join and is not marked as adaptive

So, even though there is no specific hint at this point, the OPT_PARAM hint can be used to control this behavior on a statement-by-statement basis.
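
For what it's worth, the documented parameter works the same way in the hint, if you'd rather not reference an underscore parameter (this is just a sketch of the variant - I haven't shown a separate run of it here, and keep in mind that OPTIMIZER_ADAPTIVE_FEATURES also controls the adaptive statistics features, so it's a bigger hammer than the hidden parameter):

select /*+ OPT_PARAM('optimizer_adaptive_features','false') */ product_name
from oe.order_items o, oe.product_information p
where o.unit_price=15 and o.quantity > 1
and o.product_id = p.product_id
/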

Q: How does AP (Adaptive Plans) treat query hints?
Q: Does adaptive join selection potentially override query hints?

A: As to whether AP can override hints, it does not appear that it can. If you specify a join method with a valid hint, a 10053 (Wolfgang) trace will show that APs are not used because of the hint. For example, if you use a hint to specify a nested loops join, the optimizer will not allow AP to override that directive, and the 10053 trace will show this behavior.

Here’s an example:

SYS@db12c1> select product_name
  2  from oe.order_items o, oe.product_information p
  3  where o.unit_price=15 and o.quantity > 1
  4  and o.product_id = p.product_id
  5  /

PRODUCT_NAME
--------------------------------------------------
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 

13 rows selected.

SYS@db12c1> @x

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  3ycnqgx5nc8nn, child number 0
-------------------------------------
select product_name from oe.order_items o, oe.product_information p
where o.unit_price=15 and o.quantity > 1 and o.product_id = p.product_id

Plan hash value: 1553478007

--------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name                | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |                     |       |       |     8 (100)|          |
|*  1 |  HASH JOIN                 |                     |    13 |   416 |     8   (0)| 00:00:01 |
|*  2 |   TABLE ACCESS STORAGE FULL| ORDER_ITEMS         |    13 |   156 |     3   (0)| 00:00:01 |
|   3 |   TABLE ACCESS STORAGE FULL| PRODUCT_INFORMATION |   288 |  5760 |     5   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("O"."PRODUCT_ID"="P"."PRODUCT_ID")
   2 - storage(("O"."UNIT_PRICE"=15 AND "O"."QUANTITY">1))
       filter(("O"."UNIT_PRICE"=15 AND "O"."QUANTITY">1))

Note
-----
   - this is an adaptive plan


27 rows selected.

SYS@db12c1> -- from 10053
SYS@db12c1> !grep -i inflection adapt*trc
Searching for inflection point (join #1) between 0.00 and 12.76
AP: Computing costs for inflection point at min value 0.00
DP: Using binary search for inflection point search
DP: Costing Nested Loops Join for inflection point at card 0.00
DP: Costing Hash Join for inflection point at card 0.00
AP: Computing costs for inflection point at max value 12.76
DP: Costing Nested Loops Join for inflection point at card 12.76
DP: Costing Hash Join for inflection point at card 12.76
AP: Searching for inflection point at value 1.00
DP: Costing Nested Loops Join for inflection point at card 6.38
DP: Costing Hash Join for inflection point at card 6.38
AP: Searching for inflection point at value 6.38
DP: Costing Nested Loops Join for inflection point at card 3.19
DP: Costing Hash Join for inflection point at card 3.19
AP: Searching for inflection point at value 3.19
DP: Costing Nested Loops Join for inflection point at card 4.78
DP: Costing Hash Join for inflection point at card 4.78
AP: Searching for inflection point at value 4.78
DP: Costing Nested Loops Join for inflection point at card 5.58
DP: Costing Hash Join for inflection point at card 5.58
DP: Costing Nested Loops Join for inflection point at card 5.58
DP: Found point of inflection for NLJ vs. HJ: card = 5.58

SYS@db12c1> -- now with valid join hint
SYS@db12c1> select /*+ leading(o) use_nl(p) */ product_name
  2  from oe.order_items o, oe.product_information p
  3  where o.unit_price=15 and o.quantity > 1
  4  and o.product_id = p.product_id
  5  /

PRODUCT_NAME
--------------------------------------------------
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 
Screws 

13 rows selected.

SYS@db12c1> @x

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  bytr421c0c2n7, child number 0
-------------------------------------
select /*+ leading(o) use_nl(p) */ product_name from oe.order_items o,
oe.product_information p where o.unit_price=15 and o.quantity > 1 and
o.product_id = p.product_id

Plan hash value: 1255158658

-------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                   | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                        |       |       |    16 (100)|          |
|   1 |  NESTED LOOPS                |                        |       |       |            |          |
|   2 |   NESTED LOOPS               |                        |    13 |   416 |    16   (0)| 00:00:01 |
|*  3 |    TABLE ACCESS STORAGE FULL | ORDER_ITEMS            |    13 |   156 |     3   (0)| 00:00:01 |
|*  4 |    INDEX UNIQUE SCAN         | PRODUCT_INFORMATION_PK |     1 |       |     0   (0)|          |
|   5 |   TABLE ACCESS BY INDEX ROWID| PRODUCT_INFORMATION    |     1 |    20 |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - storage(("O"."UNIT_PRICE"=15 AND "O"."QUANTITY">1))
       filter(("O"."UNIT_PRICE"=15 AND "O"."QUANTITY">1))
   4 - access("O"."PRODUCT_ID"="P"."PRODUCT_ID")


26 rows selected.

SYS@db12c1> !grep -i inflection /u01/app/oracle/diag/rdbms/db12c/db12c1/trace/db12c1_ora_32529.trc

SYS@db12c1> !grep AP: non-adapt*.trc
AP: Adaptive joins bypassed for table P @ SEL$1 due to join method is hinted
AP: Adaptive joins bypassed for table P @ SEL$1 due to join method is hinted

So it appears that AP will not override valid hints (as evidenced by the lines in the 10053 trace file showing "Adaptive joins bypassed … due to join method is hinted"). Keep in mind though that this is only one test case, so it's possible that in some circumstances AP could override hints, but now you know what to look for to validate. 🙂
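
If you want to reproduce this check on your own system, a 10053 trace for the hinted statement can be generated along these lines (a minimal sketch - the tracefile identifier is optional and just makes the file easier to find, and note that the trace is only written on a hard parse, so you may need to flush the cursor or change the statement text slightly to force one):

alter session set tracefile_identifier = 'nonadapt';
alter session set events '10053 trace name context forever, level 1';

select /*+ leading(o) use_nl(p) */ product_name
from oe.order_items o, oe.product_information p
where o.unit_price=15 and o.quantity > 1
and o.product_id = p.product_id
/

alter session set events '10053 trace name context off';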

Here are a few more hint-related questions:

Q: Does adaptive optimization reduce the need for using hints?
A: Maybe. If you are hinting to avoid shortcomings in the optimizer where it chooses the wrong join method, you might not need to do that anymore. Likewise, if you are hinting to force a particular distribution method for PX statements, you may not need to do that anymore. It's certainly a step in the right direction.
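
For reference, this is the kind of distribution hint I'm talking about - something like the following (just a sketch; the table names, aliases, and the choice of hash/hash distribution are made up for illustration):

select /*+ parallel(8) leading(s) use_hash(c) pq_distribute(c hash hash) */ count(*)
from   big_sales s, big_customers c
where  s.cust_id = c.cust_id;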

Q: Can we force dynamic sampling for a statement, regardless of what Oracle thinks it should do?
A: Yes, the DYNAMIC_SAMPLING hint has been available since 9.2.
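
For example, at the statement level (the table name and the sampling level here are just for illustration):

select /*+ dynamic_sampling(4) */ col1, count(*)
from   some_big_table
group  by col1;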

Q: If we had a SQL Plan Baseline set for a particular SQL in 11g and we were to upgrade to 12c, would the Baseline be used or would adaptive plans be used?
A: The hints in the Baseline would be used and should reproduce the 11g plan. See the example above where valid hints disable AP.

Q: Is there a way to grab a previous (good) plan using the profile technique and assign it to the current statement whose execution plan changed for the worse? So far I have been doing this manually using your scripts.
A: Yes – SQL Profiles are just a collection of hints that get applied to a statement, so they can be used to control plans even when AP is enabled.
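
Under the covers those scripts basically pass the outline hints from the good plan to DBMS_SQLTUNE.IMPORT_SQL_PROFILE. A very rough sketch of the idea follows (the statement text, the hints, and the profile name are placeholders, and I've left out the part that pulls the hints out of the OTHER_XML outline data in V$SQL_PLAN or the AWR tables):

declare
  l_hints sys.sqlprof_attr;
begin
  -- placeholder hints - in real life these come from the outline data of the good plan
  l_hints := sys.sqlprof_attr(
               'FULL(@"SEL$1" "O"@"SEL$1")',
               'USE_HASH(@"SEL$1" "P"@"SEL$1")');

  dbms_sqltune.import_sql_profile(
    sql_text    => 'select ... the full text of the problem statement ...',  -- placeholder
    profile     => l_hints,
    name        => 'PROFILE_GOOD_PLAN',                                      -- placeholder name
    force_match => true);
end;
/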

Q: How does this play with SQL Plan Management?
A: Final plans can be captured and baselines created for them. This feature behaves as expected: subsequent parses will try to reproduce the final plan (using the hints in the baseline if necessary).
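
For example, once the resolved plan is in the cursor cache it can be loaded into a baseline something like this (a sketch, using the sql_id and plan hash value from the example at the top of this post):

variable cnt number
begin
  :cnt := dbms_spm.load_plans_from_cursor_cache(
            sql_id          => '3ycnqgx5nc8nn',
            plan_hash_value => 1553478007);   -- the resolved (final) plan
end;
/
print cnt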

So that's it for the hint-related questions. The final post in this series will cover the remainder of the questions.

12c Adaptive Optimization – Part 1

Last week I did a webinar on 12c Adaptive Optimization. The talk was recorded. The slides are here: 12c Adaptive Optimization V2 PDF. The recording can be found here: 12c Adaptive Optimization Recording. There were a number of follow-up questions and emails, so I thought I'd summarize them here. Since there were so many questions (I guess I must not have done that good of a job of explaining how it works), I will break them up into 2 or 3 posts. For this first one I will just cut and paste from a couple of email follow-ups.

Here’s the first question(s):

Hi Kerry,

I followed your webinar today, and I have two questions about it.
First, what will happen if the plan changes during the fetch operation?
Is that possible? If so, how does it know which rows have already been fetched?

The second question is more practical. In the examples you showed, the queries were basic, but if you have an execution plan with more than a hundred operations, and during execution an adaptive plan is decided on with changes in join method, the plan can change a lot.
If we want to identify the step that will modify the plan, do we have to identify it as the step just before the statistics collector operation, or will it be more complex to identify?

Thanks in advance for your answers

And here’s my Answer(s):

Hi

I'll have to find a little time to test the prepare/open/fetch stuff to verify where the initial rows are actually retrieved, but if I had to guess, it would be on the first fetch call, regardless of how many records the fetch requests. It could also be on the open, though. It's interesting to see how the optimizer comes up with the inflection point, by the way (although I don't know enough about the internal algorithm to know exactly what they are doing, it's clear they are guessing by splitting the difference). But here's a little output from a Wolfgang (10053) trace file.

SYS@db12c1> !grep inflection adaptive.trc
Searching for inflection point (join #1) between 0.00 and 12.76
AP: Computing costs for inflection point at min value 0.00
DP: Using binary search for inflection point search
DP: Costing Nested Loops Join for inflection point at card 0.00
DP: Costing Hash Join for inflection point at card 0.00
AP: Computing costs for inflection point at max value 12.76
DP: Costing Nested Loops Join for inflection point at card 12.76
DP: Costing Hash Join for inflection point at card 12.76
AP: Searching for inflection point at value 1.00
DP: Costing Nested Loops Join for inflection point at card 6.38
DP: Costing Hash Join for inflection point at card 6.38
AP: Searching for inflection point at value 6.38
DP: Costing Nested Loops Join for inflection point at card 3.19
DP: Costing Hash Join for inflection point at card 3.19
AP: Searching for inflection point at value 3.19
DP: Costing Nested Loops Join for inflection point at card 4.78
DP: Costing Hash Join for inflection point at card 4.78
AP: Searching for inflection point at value 4.78
DP: Costing Nested Loops Join for inflection point at card 5.58
DP: Costing Hash Join for inflection point at card 5.58
DP: Costing Nested Loops Join for inflection point at card 5.58
DP: Found point of inflection for NLJ vs. HJ: card = 5.58


On the identification of what’s going on in more complicated plans, the general pattern appears to be like this:

Hash Join
  NL Join
    Statistics Collector

This is regardless of whether the final plan ends up using the HJ or the NL. In some cases the NL is abandoned, in other cases the HJ is abandoned. (By the way, the optimization only appears to kick in on steps where the default plan would use a NL.)

* Note that I was wrong in my assertion that the optimization only kicks in for NLJ steps as pointed out by Stephan in the comments section. It can kick in on HJ steps as well, although they don’t appear nearly as often. 🙂

So anyway, a NL would look like this:

- Hash Join
    NL Join
-     Statistics Collector

And a HJ like this:

  Hash Join 
-   NL Join
-     Statistics Collector

Here’s an example of a more complex plan – in this case a couple of hash joins are discarded in favor of NL.


SYS@db12c1> @dplan_adaptive
Enter value for sql_id: 95stx63r9dc34
Enter value for child_no: 1

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  95stx63r9dc34, child number 1
-------------------------------------
select /* test dp2c6pq28u5jr */ count(*), sum(blocks) FROM dba_segments
where    OWNER = 'XDB' and TABLESPACE_NAME = 'SYSAUX'

Plan hash value: 1481365994

----------------------------------------------------------------------------------------------------------------
|   Id  | Operation                               | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------------------
|     0 | SELECT STATEMENT                        |                    |       |       |  1441 (100)|          |
|     1 |  SORT AGGREGATE                         |                    |     1 |   104 |            |          |
|     2 |   VIEW                                  | SYS_DBA_SEGS       |     9 |   936 |  1441   (1)| 00:00:01 |
|     3 |    UNION-ALL                            |                    |       |       |            |          |
|     4 |     NESTED LOOPS                        |                    |     6 |   852 |  1356   (1)| 00:00:01 |
|     5 |      NESTED LOOPS                       |                    |     6 |   810 |  1356   (1)| 00:00:01 |
|  *  6 |       HASH JOIN                         |                    |    67 |  6767 |  1350   (1)| 00:00:01 |
|  *  7 |        FILTER                           |                    |       |       |            |          |
|  *  8 |         HASH JOIN RIGHT OUTER           |                    |   278 | 11954 |    89   (0)| 00:00:01 |
|     9 |          TABLE ACCESS STORAGE FULL      | USER$              |    71 |  1278 |     3   (0)| 00:00:01 |
|    10 |          NESTED LOOPS                   |                    | 19743 |   482K|    86   (0)| 00:00:01 |
|    11 |           TABLE ACCESS BY INDEX ROWID   | TS$                |     1 |    11 |     1   (0)| 00:00:01 |
|  * 12 |            INDEX UNIQUE SCAN            | I_TS1              |     1 |       |     0   (0)|          |
|    13 |           TABLE ACCESS STORAGE FULL     | OBJ$               | 19743 |   269K|    85   (0)| 00:00:01 |
|    14 |        VIEW                             | SYS_OBJECTS        |  4731 |   267K|  1261   (1)| 00:00:01 |
|    15 |         UNION-ALL                       |                    |       |       |            |          |
|  * 16 |          TABLE ACCESS STORAGE FULL      | TAB$               |  1533 | 33726 |   312   (0)| 00:00:01 |
|    17 |          TABLE ACCESS STORAGE FULL      | TABPART$           |   262 |  4192 |     5   (0)| 00:00:01 |
|    18 |          TABLE ACCESS STORAGE FULL      | CLU$               |    10 |   140 |   312   (0)| 00:00:01 |
|  * 19 |          TABLE ACCESS STORAGE FULL      | IND$               |  2164 | 41116 |   312   (0)| 00:00:01 |
|    20 |          TABLE ACCESS STORAGE FULL      | INDPART$           |   194 |  3104 |     4   (0)| 00:00:01 |
|  * 21 |          TABLE ACCESS STORAGE FULL      | LOB$               |   512 | 10752 |   309   (0)| 00:00:01 |
|    22 |          TABLE ACCESS STORAGE FULL      | TABSUBPART$        |    32 |   480 |     2   (0)| 00:00:01 |
|    23 |          TABLE ACCESS STORAGE FULL      | INDSUBPART$        |     1 |    52 |     2   (0)| 00:00:01 |
|    24 |          TABLE ACCESS STORAGE FULL      | LOBFRAG$           |    23 |   414 |     2   (0)| 00:00:01 |
|  * 25 |       TABLE ACCESS CLUSTER              | SEG$               |     1 |    34 |     1   (0)| 00:00:01 |
|  * 26 |        INDEX UNIQUE SCAN                | I_FILE#_BLOCK#     |     1 |       |     0   (0)|          |
|  * 27 |      INDEX UNIQUE SCAN                  | I_FILE2            |     1 |     7 |     0   (0)|          |
|  * 28 |     FILTER                              |                    |       |       |            |          |
|  * 29 |      HASH JOIN RIGHT OUTER              |                    |     3 |   405 |    85   (0)| 00:00:01 |
|    30 |       TABLE ACCESS STORAGE FULL         | USER$              |    71 |  1278 |     3   (0)| 00:00:01 |
|    31 |       VIEW                              | VW_JF_SET$A8769BAB |   246 | 28782 |    82   (0)| 00:00:01 |
|    32 |        UNION-ALL                        |                    |       |       |            |          |
|    33 |         NESTED LOOPS                    |                    |     4 |   272 |    33   (0)| 00:00:01 |
|- * 34 |          HASH JOIN                      |                    |     4 |   244 |    33   (0)| 00:00:01 |
|    35 |           NESTED LOOPS                  |                    |     4 |   244 |    33   (0)| 00:00:01 |
|-   36 |            STATISTICS COLLECTOR         |                    |       |       |            |          |
|    37 |             NESTED LOOPS                |                    |    36 |  1044 |     3   (0)| 00:00:01 |
|    38 |              TABLE ACCESS BY INDEX ROWID| TS$                |     1 |    11 |     1   (0)| 00:00:01 |
|  * 39 |               INDEX UNIQUE SCAN         | I_TS1              |     1 |       |     0   (0)|          |
|  * 40 |              TABLE ACCESS STORAGE FULL  | UNDO$              |    36 |   648 |     2   (0)| 00:00:01 |
|  * 41 |            TABLE ACCESS CLUSTER         | SEG$               |     1 |    32 |     1   (0)| 00:00:01 |
|  * 42 |             INDEX UNIQUE SCAN           | I_FILE#_BLOCK#     |     1 |       |     0   (0)|          |
|- * 43 |           TABLE ACCESS STORAGE FULL     | SEG$               |     1 |    32 |     1   (0)| 00:00:01 |
|  * 44 |          INDEX UNIQUE SCAN              | I_FILE2            |     1 |     7 |     0   (0)|          |
|- * 45 |         HASH JOIN                       |                    |   241 | 13255 |    25   (0)| 00:00:01 |
|    46 |          NESTED LOOPS                   |                    |   241 | 13255 |    25   (0)| 00:00:01 |
|-   47 |           STATISTICS COLLECTOR          |                    |       |       |            |          |
|    48 |            NESTED LOOPS                 |                    |     5 |    90 |     2   (0)| 00:00:01 |
|    49 |             TABLE ACCESS BY INDEX ROWID | TS$                |     1 |    11 |     1   (0)| 00:00:01 |
|  * 50 |              INDEX UNIQUE SCAN          | I_TS1              |     1 |       |     0   (0)|          |
|    51 |             INDEX FULL SCAN             | I_FILE2            |     5 |    35 |     1   (0)| 00:00:01 |
|  * 52 |           TABLE ACCESS CLUSTER          | SEG$               |    48 |  1776 |     5   (0)| 00:00:01 |
|  * 53 |            INDEX RANGE SCAN             | I_FILE#_BLOCK#     |     1 |       |     2   (0)| 00:00:01 |
|- * 54 |          TABLE ACCESS STORAGE FULL      | SEG$               |    48 |  1776 |     5   (0)| 00:00:01 |
|- * 55 |         HASH JOIN                       |                    |     1 |    55 |    25   (0)| 00:00:01 |
|    56 |          NESTED LOOPS                   |                    |     1 |    55 |    25   (0)| 00:00:01 |
|-   57 |           STATISTICS COLLECTOR          |                    |       |       |            |          |
|    58 |            NESTED LOOPS                 |                    |     5 |    90 |     2   (0)| 00:00:01 |
|    59 |             TABLE ACCESS BY INDEX ROWID | TS$                |     1 |    11 |     1   (0)| 00:00:01 |
|  * 60 |              INDEX UNIQUE SCAN          | I_TS1              |     1 |       |     0   (0)|          |
|    61 |             INDEX FULL SCAN             | I_FILE2            |     5 |    35 |     1   (0)| 00:00:01 |
|  * 62 |           TABLE ACCESS CLUSTER          | SEG$               |     1 |    37 |     5   (0)| 00:00:01 |
|  * 63 |            INDEX RANGE SCAN             | I_FILE#_BLOCK#     |     1 |       |     2   (0)| 00:00:01 |
|- * 64 |          TABLE ACCESS STORAGE FULL      | SEG$               |     1 |    37 |     5   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("O"."OBJ#"="SO"."OBJECT_ID" AND "O"."TYPE#"="SO"."OBJECT_TYPE_ID")
   7 - filter(NVL("U"."NAME",'SYS')='XDB')
   8 - access("O"."OWNER#"="U"."USER#")
  12 - access("TS"."NAME"='SYSAUX')
  16 - filter(BITAND("T"."PROPERTY",1024)=0)
  19 - filter(("I"."TYPE#"=1 OR "I"."TYPE#"=2 OR "I"."TYPE#"=3 OR "I"."TYPE#"=4 OR "I"."TYPE#"=6 OR
              "I"."TYPE#"=7 OR "I"."TYPE#"=8 OR "I"."TYPE#"=9))
  21 - filter((BITAND("L"."PROPERTY",64)=0 OR BITAND("L"."PROPERTY",128)=128))
  25 - filter("S"."TYPE#"="SO"."SEGMENT_TYPE_ID")
  26 - access("S"."TS#"="TS"."TS#" AND "S"."FILE#"="SO"."HEADER_FILE" AND
              "S"."BLOCK#"="SO"."HEADER_BLOCK")
       filter("S"."TS#"="SO"."TS_NUMBER")
  27 - access("S"."TS#"="F"."TS#" AND "S"."FILE#"="F"."RELFILE#")
  28 - filter(NVL("U"."NAME",'SYS')='XDB')
  29 - access("ITEM_1"="U"."USER#")
  34 - access("S"."TS#"="TS"."TS#" AND "S"."TS#"="UN"."TS#" AND "S"."BLOCK#"="UN"."BLOCK#" AND
              "S"."FILE#"="UN"."FILE#")
  39 - access("TS"."NAME"='SYSAUX')
  40 - storage("UN"."STATUS$"<>1)
       filter("UN"."STATUS$"<>1)
  41 - filter(("S"."TYPE#"=1 OR "S"."TYPE#"=10))
  42 - access("S"."TS#"="UN"."TS#" AND "S"."FILE#"="UN"."FILE#" AND "S"."BLOCK#"="UN"."BLOCK#")
       filter("S"."TS#"="TS"."TS#")
  43 - filter(("S"."TYPE#"=1 OR "S"."TYPE#"=10))
  44 - access("UN"."TS#"="F"."TS#" AND "UN"."FILE#"="F"."RELFILE#")
  45 - access("S"."FILE#"="F"."RELFILE#" AND "S"."TS#"="F"."TS#" AND "S"."TS#"="TS"."TS#")
  50 - access("TS"."NAME"='SYSAUX')
  52 - filter(("S"."TYPE#"<>6 AND "S"."TYPE#"<>5 AND "S"."TYPE#"<>8 AND "S"."TYPE#"<>10 AND
              "S"."TYPE#"<>11 AND "S"."TYPE#"<>1))
  53 - access("S"."TS#"="TS"."TS#" AND "S"."FILE#"="F"."RELFILE#")
       filter("S"."TS#"="F"."TS#")
  54 - filter(("S"."TYPE#"<>6 AND "S"."TYPE#"<>5 AND "S"."TYPE#"<>8 AND "S"."TYPE#"<>10 AND
              "S"."TYPE#"<>11 AND "S"."TYPE#"<>1))
  55 - access("S"."FILE#"="F"."RELFILE#" AND "S"."TS#"="F"."TS#" AND "S"."TS#"="TS"."TS#")
  60 - access("TS"."NAME"='SYSAUX')
  62 - filter("S"."TYPE#"=11)
  63 - access("S"."TS#"="TS"."TS#" AND "S"."FILE#"="F"."RELFILE#")
       filter("S"."TS#"="F"."TS#")
  64 - filter("S"."TYPE#"=11)

Note
-----
   - this is an adaptive plan (rows marked '-' are inactive)

Hope that helps.

Kerry



The second email:


Thanks, Kerry!
So as I understand it, just one execution can now create several child cursors with different final plans? (As many as there are collectors in the plan?)
Will they have different plan hash values?
And how will other sessions choose a child cursor during execution if it creates many child cursors? Especially interesting is how we will analyze such plans through AWR if the statistics are split between several plan hash values…

And I said:

No – only one cursor is created. It can have multiple adaptations – i.e. there may be multiple places where a decision between NL and HJ is made – but in the end it decides on one plan and that's it. A new cursor will only be created if something more normal triggers a new cursor (adaptive cursor sharing, optimizer environment changes, cardinality feedback kicks in, etc.).

Kerry
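
One easy way to confirm the "one cursor" behavior, by the way, is to look at V$SQL, which in 12c has a column flagging whether the adaptive plan for a child cursor has been resolved. A quick sketch, using the sql_id from the dba_segments example above (as I recall, IS_RESOLVED_ADAPTIVE_PLAN shows Y once the plan is resolved and is null for non-adaptive cursors, but verify that on your own system):

select sql_id, child_number, is_resolved_adaptive_plan, plan_hash_value, executions
from   v$sql
where  sql_id = '95stx63r9dc34';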

That’s it for now. In Part 2 I’ll address some questions regarding interaction with hints.

APPEND_VALUES and SYS_DL_CURSOR Hints with HCC

The APPEND_VALUES hint was introduced in 11.2 to allow direct path inserts with bind variables using the VALUES clause, e.g.

INSERT INTO XYZ (COL1, COL2) VALUES (:A, :B);

The feature was designed to allow bulk inserts via arrays of hundreds or thousands of records in a single insert statement. Prior to 11.2, there was no documented way to do a direct path insert other than with the APPEND hint, which only worked on inserts that used a SELECT clause, e.g.

INSERT INTO XYZ SELECT * from ZYX;

There was, however, an undocumented hint (SYS_DL_CURSOR) which did a bulk insert of sorts (not a full append). You may have seen this hint if you use Informatica. I was recently asked about using Informatica with Hybrid Columnar Compression (HCC) on Exadata, which prompted a little research on these two methods of loading data and whether or not they are compatible with HCC. So first off, here's a test with the APPEND_VALUES hint (using my check_row_comp.sql script):


KSO@dbm1> !cat bulk_insert1.sql

CREATE TABLE t1
(c1    NUMBER,
 c2    NUMBER,
 c3    VARCHAR2(50),
 c4    VARCHAR2(50),
 c5    DATE,
 c6    DATE)
compress for query high;

DECLARE
  -- Define a collection
  TYPE t1_tbl_type IS TABLE OF t1%ROWTYPE;
  t1_tbl    t1_tbl_type := t1_tbl_type();
BEGIN

  -- Populate the collection
  FOR i IN 1..32000 LOOP
    t1_tbl.EXTEND;
    t1_tbl(i).c1 := i;
    t1_tbl(i).c2 := i*i;
    t1_tbl(i).c3 := 'i=' || TO_CHAR(i);
    t1_tbl(i).c4 := 'i*i=' || TO_CHAR(i*i);
    t1_tbl(i).c5 := SYSDATE;
    t1_tbl(i).c6 := SYSDATE;
  END LOOP;

  -- Bulk Insert the collection into table T1
  FORALL i IN 1..t1_tbl.COUNT
    INSERT /*+ append_values */ INTO t1
      (c1,
       c2,
       c3,
       c4,
       c5,
       c6)
    VALUES
      (t1_tbl(i).c1,
       t1_tbl(i).c2,
       t1_tbl(i).c3,
       t1_tbl(i).c4,
       t1_tbl(i).c5,
       t1_tbl(i).c6);

END;
/

KSO@dbm1> @bulk_insert1

Table created.


PL/SQL procedure successfully completed.

KSO@dbm1> select count(*) from t1;
select count(*) from t1
                     *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel


KSO@dbm1> -- correct behavior for direct path insert
KSO@dbm1> commit;

Commit complete.

KSO@dbm1> select rowid from t1 where rownum < 10;

ROWID
------------------
AAARlqAAIAAB/krAAA
AAARlqAAIAAB/krAAB
AAARlqAAIAAB/krAAC
AAARlqAAIAAB/krAAD
AAARlqAAIAAB/krAAE
AAARlqAAIAAB/krAAF
AAARlqAAIAAB/krAAG
AAARlqAAIAAB/krAAH
AAARlqAAIAAB/krAAI

9 rows selected.

KSO@dbm1> @check_row_comp.sql
Enter value for owner: KSO
Enter value for table_namr: T1
Enter value for rowid: AAARlqAAIAAB/krAAH

OLD_ROWID            COMPRESSION_TYPE
-------------------- -------------------------
8.522539.7           HCC Query High

So the APPEND_VALUES hint behaves as expected: it does a proper append and applies HCC. There are a couple of corner cases worth mentioning, though. First, small inserts (a single row, for example) will not trigger the HCC processing. See the following example.


KSO@dbm1> insert /*+ append_values */ into t1 (c1) values (-1);

1 row created.

KSO@dbm1> select count(*) from t1 where c1 = -1;
select count(*) from t1 where c1 = -1
                     *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel


KSO@dbm1> commit;

Commit complete.

KSO@dbm1> select rowid from t1 where c1 = -1;

ROWID
------------------
AAARlqAAIAAEfDLAAA

KSO@dbm1> @check_row_comp.sql
Enter value for owner: KSO
Enter value for table_namr: T1
Enter value for rowid: AAARlqAAIAAEfDLAAA

OLD_ROWID            COMPRESSION_TYPE
-------------------- -------------------------
8.1175755.0          No Compression

So a single row insert was done in direct path mode, but HCC was not applied due to the small size of the insert.

The second corner case is that objects owned by SYS (and stored in the SYSTEM tablespace) appear to be ineligible for HCC processing. See the example below, which shows the same test but with T1 owned by SYS. It also shows that an alter table move does not compress the rows when the object is owned by SYS and stored in SYSTEM. Note: there could be something else that is preventing the compression, but objects in SYSTEM that are not owned by SYS, and objects that are owned by SYS but not stored in SYSTEM, both seem to behave correctly. As Tanel says, "the rabbit hole always gets deeper". (I'll leave those tests as an exercise for the reader rather than clutter up this post.) Here's the simple case of creating a table in the SYSTEM tablespace that is owned by SYS.

SYS@dbm1> @bulk_insert1

Table created.


PL/SQL procedure successfully completed.

SYS@dbm1> select count(*) from t1;
select count(*) from t1
                     *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel


SYS@dbm1> commit;

Commit complete.

SYS@dbm1> select rowid from t1 where rownum < 10;

ROWID
------------------
AAARltAABAAAUnRAAA
AAARltAABAAAUnRAAB
AAARltAABAAAUnRAAC
AAARltAABAAAUnRAAD
AAARltAABAAAUnRAAE
AAARltAABAAAUnRAAF
AAARltAABAAAUnRAAG
AAARltAABAAAUnRAAH
AAARltAABAAAUnRAAI

9 rows selected.

SYS@dbm1> @check_row_comp.sql
Enter value for owner: SYS
Enter value for table_namr: T1
Enter value for rowid: AAARltAABAAAUnRAAG

OLD_ROWID            COMPRESSION_TYPE
-------------------- -------------------------
1.84433.6            No Compression

SYS@dbm1> @table_size
Enter value for owner: SYS
Enter value for table_name: T1
Enter value for type: 
Enter value for tablespace_name: 

OWNER                SEGMENT_NAME                   TYPE               TOTALSIZE_MEGS TABLESPACE_NAME
-------------------- ------------------------------ ------------------ -------------- ------------------------------
SYS                  T1                             TABLE                         2.0 SYSTEM
                                                                       --------------
sum                                                                               2.0

SYS@dbm1> alter table sys.t1 move compress for archive high;

Table altered.

SYS@dbm1> @table_size
Enter value for owner: SYS
Enter value for table_name: T1
Enter value for type: 
Enter value for tablespace_name: 

OWNER                SEGMENT_NAME                   TYPE               TOTALSIZE_MEGS TABLESPACE_NAME
-------------------- ------------------------------ ------------------ -------------- ------------------------------
SYS                  T1                             TABLE                         2.0 SYSTEM
                                                                       --------------
sum                                                                               2.0

SYS@dbm1> select rowid from t1 where rownum < 10;

ROWID
------------------
AAARluAABAAAXNRAAA
AAARluAABAAAXNRAAB
AAARluAABAAAXNRAAC
AAARluAABAAAXNRAAD
AAARluAABAAAXNRAAE
AAARluAABAAAXNRAAF
AAARluAABAAAXNRAAG
AAARluAABAAAXNRAAH
AAARluAABAAAXNRAAI

9 rows selected.

SYS@dbm1> @check_row_comp.sql
Enter value for owner: SYS
Enter value for table_namr: T1
Enter value for rowid: AAARluAABAAAXNRAAG

OLD_ROWID            COMPRESSION_TYPE
-------------------- -------------------------
1.95057.6            No Compression

So again, in this case, it is clearly a direct path insert, but HCC is not applied.

So what about the older SYS_DL_CURSOR hint? Well, the short story is that it doesn't do a real direct path load, so it doesn't work with HCC. Here's a quick demonstration (back in a regular user account).

KSO@dbm1> !cat bulk_insert2.sql
CREATE TABLE t1
(c1    NUMBER,
 c2    NUMBER,
 c3    VARCHAR2(50),
 c4    VARCHAR2(50),
 c5    DATE,
 c6    DATE)
compress for query high;

DECLARE
  -- Define a collection
  TYPE t1_tbl_type IS TABLE OF t1%ROWTYPE;
  t1_tbl    t1_tbl_type := t1_tbl_type();
BEGIN

  -- Populate the collection
  FOR i IN 1..32000 LOOP
    t1_tbl.EXTEND;
    t1_tbl(i).c1 := i;
    t1_tbl(i).c2 := i*i;
    t1_tbl(i).c3 := 'i=' || TO_CHAR(i);
    t1_tbl(i).c4 := 'i*i=' || TO_CHAR(i*i);
    t1_tbl(i).c5 := SYSDATE;
    t1_tbl(i).c6 := SYSDATE;
  END LOOP;

  -- Bulk Insert the collection into table T1
  FORALL i IN 1..t1_tbl.COUNT
    INSERT /*+ sys_dl_cursor */ INTO t1
      (c1,
       c2,
       c3,
       c4,
       c5,
       c6)
    VALUES
      (t1_tbl(i).c1,
       t1_tbl(i).c2,
       t1_tbl(i).c3,
       t1_tbl(i).c4,
       t1_tbl(i).c5,
       t1_tbl(i).c6);

END;
/

KSO@dbm1> drop table t1;

Table dropped.

KSO@dbm1> @bulk_insert2

Table created.


PL/SQL procedure successfully completed.

KSO@dbm1> select count(*) from t1;

  COUNT(*)
----------
     32000

KSO@dbm1> -- Not a good sign 
KSO@dbm1> -- direct path should not allow a select without a commit or rollback
KSO@dbm1> -- let's check anyway
KSO@dbm1> select rowid from t1 where rownum < 10;

ROWID
------------------
AAARlzAAIAAB/krAAA
AAARlzAAIAAB/krAAB
AAARlzAAIAAB/krAAC
AAARlzAAIAAB/krAAD
AAARlzAAIAAB/krAAE
AAARlzAAIAAB/krAAF
AAARlzAAIAAB/krAAG
AAARlzAAIAAB/krAAH
AAARlzAAIAAB/krAAI

9 rows selected.

KSO@dbm1> @check_row_comp.sql
Enter value for owner: KSO
Enter value for table_namr: T1
Enter value for rowid: AAARlzAAIAAB/krAAG

OLD_ROWID            COMPRESSION_TYPE
-------------------- -------------------------
8.522539.6           No Compression

So no love with SYS_DL_CURSOR. Of course, you can still do the more normal load followed by an alter table move or exchange partition type of processing. As always, please let me know if you have any questions or comments.
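
For completeness, the "load it, then compress it" approach is just the alter table move shown earlier, run in a regular schema (the index rebuild is only needed if the table has indexes; the index name here is hypothetical):

alter table t1 move compress for query high;

-- any indexes go unusable after the move and would need a rebuild
-- alter index t1_ix1 rebuild;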

System Statistics – “Exadata” Mode

Here's a very quick note on a relatively recent addition to System Statistics: the so-called "Exadata mode". This mode is intended to give the optimizer a little more info about the storage system on the Exadata platform. Here's some info from an MOS note.

——————————————————————————————
Oracle Sun Database Machine Setup/Configuration Best Practices [ID 1274318.1]

Verify Exadata specific optimizer statistics have been gathered

 
Benefit / Impact

Gathering Exadata specific system statistics ensure the optimizer is aware of Exadata scan speed. Accurately accounting for the speed of scan operations will ensure the Optimizer chooses an optimal execution plan in a Exadata environment. The following command gathers Exadata specific system statistics

exec dbms_stats.gather_system_stats('EXADATA');

Risk:

Lack of Exadata specific stats can lead to less performant optimizer plans.

Action / Repair:

To see if Exadata specific optimizer stats have been gathered, run the following query on a system with at least 11.2.0.2 BP18 or 11.2.0.2 BP8 Oracle software. If PVAL1 returns null or is not set, Exadata specific stats have not been gathered.

select pname, PVAL1 from aux_stats$ where pname='MBRC';

——————————————————————————————-

And here’s an example of gathering Exadata mode system stats on a real Exadata as performed by a real Enkitec employee.


SYS@dbm1> exec dbms_stats.delete_system_stats;

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SYS@dbm1> @system_stats

PNAME                               PVAL1
------------------------------ ----------
STATUS
DSTART
DSTOP
FLAGS                                   0
CPUSPEEDNW                           2797
IOSEEKTIM                              10
IOTFRSPEED                           4096
SREADTIM
MREADTIM
CPUSPEED
MBRC
MAXTHR
SLAVETHR

13 rows selected.

Elapsed: 00:00:00.00
SYS@dbm1> exec dbms_stats.gather_system_stats('EXADATA');

PL/SQL procedure successfully completed.

Elapsed: 00:01:45.51
SYS@dbm1> select  pname, pval1 from sys.aux_stats$;

PNAME                               PVAL1
------------------------------ ----------
STATUS
DSTART
DSTOP
FLAGS                                   1
CPUSPEEDNW                           2832
IOSEEKTIM                               9
IOTFRSPEED                          88419
SREADTIM
MREADTIM
CPUSPEED
MBRC                                  128
MAXTHR
SLAVETHR

13 rows selected.
Elapsed: 00:00:00.00

So just a couple of quick notes. First, gathering "Exadata" mode system stats does spend some time gathering NOWORKLOAD stats (1:45 in this case), but it also sets MBRC, to 128 in this case.
Note that this value is not hardcoded; it actually comes from the db parameter db_file_multiblock_read_count, which in our case defaults to 128 (see the comment from Chris Antognini below). Increasing MBRC makes sense because it tends to push the optimizer towards full table scans, which can obviously be a lot faster on Exadata due to Smart Scan offloading. The default used for costing is 8, and pushing it to 128 does make full scans more attractive to the optimizer. Note also that the IOTFRSPEED was dramatically increased (although this doesn't always happen).
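
If you want to see what value would be picked up on your system before gathering, it's easy enough to check the parameter (a quick sketch):

show parameter db_file_multiblock_read_count

-- or

select value from v$parameter where name = 'db_file_multiblock_read_count';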

Here’s the section from a 10053 (Wolfgang) trace just for verification.

-----------------------------
SYSTEM STATISTICS INFORMATION
-----------------------------
  Using NOWORKLOAD Stats
  CPUSPEEDNW: 2832 millions instructions/sec (default is 100)
  IOTFRSPEED: 88419 bytes per millisecond (default is 4096)
  IOSEEKTIM: 9 milliseconds (default is 10)
  MBRC: 128 blocks (default is 8)

I do think that Exadata mode system stats should be set (collected) on new Exadata implementations.

12c – parallel_degree_level (control for auto DOP)

I heard JP Dijcks speak at RMOUG in 2012 about a new parameter that would show up in 12c called parallel_degree_level. It's basically a knob that you can turn to dial up (or down) the calculated DOP when setting parallel_degree_policy=auto. Early on (11.2.0.1), auto DOP seemed to vastly overestimate what the DOP should be. In a later version (11.2.0.3), it often seems to underestimate it. I've said in the past that I thought auto DOP was too hard to control and thus too scary for production systems. I've also said that I thought auto DOP was the wave of the future. I think this parameter alone may make it possible to use the feature in production, because it gives us the ability to dial in the level of parallelism that works for our system. So here's a quick demo:

SYS@db12c1> @parms
Enter value for parameter: parallel_degree
Enter value for isset: 
Enter value for show_hidden: 

NAME                                               VALUE                                                                  ISDEFAUL ISMODIFIED ISSET
-------------------------------------------------- ---------------------------------------------------------------------- -------- ---------- ----------
parallel_degree_level                              100                                                                    TRUE     FALSE      FALSE
parallel_degree_limit                              16                                                                     FALSE    FALSE      TRUE
parallel_degree_policy                             AUTO                                                                   FALSE    TRUE       TRUE

3 rows selected.

Elapsed: 00:00:00.05
SYS@db12c1> alter session set parallel_degree_policy=auto;

Session altered.

Elapsed: 00:00:00.00
SYS@db12c1> select count(*) from kso.TT_CLUSTER_ONAME;

    COUNT(*)
------------
    79429632

1 row selected.

Elapsed: 00:00:01.96
SYS@db12c1> @x

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  apvrg0vpxxw8k, child number 2
-------------------------------------
select count(*) from kso.TT_CLUSTER_ONAME

Plan hash value: 2036413816

--------------------------------------------------------------------
| Id  | Operation                      | Name             | E-Rows |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                  |        |
|   1 |  SORT AGGREGATE                |                  |      1 |
|   2 |   PX COORDINATOR               |                  |        |
|   3 |    PX SEND QC (RANDOM)         | :TQ10000         |      1 |
|   4 |     SORT AGGREGATE             |                  |      1 |
|   5 |      PX BLOCK ITERATOR         |                  |     79M|
|*  6 |       TABLE ACCESS STORAGE FULL| TT_CLUSTER_ONAME |     79M|
--------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - storage(:Z>=:Z AND :Z<=:Z)

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 14
   - parallel scans affinitized

31 rows selected.

Elapsed: 00:00:00.02
SYS@db12c1> alter session set parallel_degree_level=10;

Session altered.

Elapsed: 00:00:00.01
SYS@db12c1> select count(*) from kso.TT_CLUSTER_ONAME;

    COUNT(*)
------------
    79429632

1 row selected.

Elapsed: 00:00:19.95
SYS@db12c1> @x

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  apvrg0vpxxw8k, child number 4
-------------------------------------
select count(*) from kso.TT_CLUSTER_ONAME

Plan hash value: 2036413816

------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name             | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                  |       |   174K(100)|          |        |      |            |
|   1 |  SORT AGGREGATE                |                  |     1 |            |          |        |      |            |
|   2 |   PX COORDINATOR               |                  |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)         | :TQ10000         |     1 |            |          |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE             |                  |     1 |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR         |                  |    79M|   174K  (1)| 00:00:07 |  Q1,00 | PCWC |            |
|*  6 |       TABLE ACCESS STORAGE FULL| TT_CLUSTER_ONAME |    79M|   174K  (1)| 00:00:07 |  Q1,00 | PCWP |            |
------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - storage(:Z>=:Z AND :Z<=:Z)

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 2
   - parallel scans affinitized


31 rows selected.

Elapsed: 00:00:00.09
SYS@db12c1> alter session set parallel_degree_level=100;

Session altered.

Elapsed: 00:00:00.00
SYS@db12c1> select count(*) from kso.TT_CLUSTER_ONAME;

    COUNT(*)
------------
    79429632

1 row selected.

Elapsed: 00:00:04.07
SYS@db12c1> @x
Enter value for sql_id: apvrg0vpxxw8k
Enter value for child_no: 

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  apvrg0vpxxw8k, child number 2
-------------------------------------
select count(*) from kso.TT_CLUSTER_ONAME

Plan hash value: 2036413816

------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name             | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                  |       | 24875 (100)|          |        |      |            |
|   1 |  SORT AGGREGATE                |                  |     1 |            |          |        |      |            |
|   2 |   PX COORDINATOR               |                  |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)         | :TQ10000         |     1 |            |          |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE             |                  |     1 |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR         |                  |    79M| 24875   (1)| 00:00:01 |  Q1,00 | PCWC |            |
|*  6 |       TABLE ACCESS STORAGE FULL| TT_CLUSTER_ONAME |    79M| 24875   (1)| 00:00:01 |  Q1,00 | PCWP |            |
------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - storage(:Z>=:Z AND :Z<=:Z)

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 14
   - parallel scans affinitized


31 rows selected.

Elapsed: 00:00:00.09
SYS@db12c1> alter session set parallel_degree_level=200;

Session altered.

SYS@db12c1> select count(*) from kso.TT_CLUSTER_ONAME;

  COUNT(*)
----------
  79429632

Elapsed: 00:00:00.59
SYS@db12c1> @x

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  apvrg0vpxxw8k, child number 5
-------------------------------------
select count(*) from kso.TT_CLUSTER_ONAME

Plan hash value: 2036413816

--------------------------------------------------------------------
| Id  | Operation                      | Name             | E-Rows |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                  |        |
|   1 |  SORT AGGREGATE                |                  |      1 |
|   2 |   PX COORDINATOR               |                  |        |
|   3 |    PX SEND QC (RANDOM)         | :TQ10000         |      1 |
|   4 |     SORT AGGREGATE             |                  |      1 |
|   5 |      PX BLOCK ITERATOR         |                  |     79M|
|*  6 |       TABLE ACCESS STORAGE FULL| TT_CLUSTER_ONAME |     79M|
--------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - storage(:Z>=:Z AND :Z<=:Z)

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 16 because of degree limit
   - parallel scans affinitized


31 rows selected.

Elapsed: 00:00:00.12
SYS@db12c1> alter session set parallel_degree_limit=32;

Session altered.

Elapsed: 00:00:00.00
SYS@db12c1> select count(*) from kso.TT_CLUSTER_ONAME;

  COUNT(*)
----------
  79429632

Elapsed: 00:00:07.53
SYS@db12c1> @x

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  apvrg0vpxxw8k, child number 6
-------------------------------------
select count(*) from kso.TT_CLUSTER_ONAME

Plan hash value: 2036413816

--------------------------------------------------------------------
| Id  | Operation                      | Name             | E-Rows |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                  |        |
|   1 |  SORT AGGREGATE                |                  |      1 |
|   2 |   PX COORDINATOR               |                  |        |
|   3 |    PX SEND QC (RANDOM)         | :TQ10000         |      1 |
|   4 |     SORT AGGREGATE             |                  |      1 |
|   5 |      PX BLOCK ITERATOR         |                  |     79M|
|*  6 |       TABLE ACCESS STORAGE FULL| TT_CLUSTER_ONAME |     79M|
--------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - storage(:Z>=:Z AND :Z<=:Z)

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 28
   - parallel scans affinitized


31 rows selected.

Elapsed: 00:00:00.06

So as you can see, parallel_degree_level is basically a percentage. The default is 100; setting it to 10 decreases the calculated DOP to roughly 10% of its default value, while increasing it to 200 doubles it.
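
To put numbers on it from the runs above: the unadjusted computed DOP was 14, so a setting of 10 works out to roughly 14 x 10% = 1.4 (which came out as a DOP of 2), while a setting of 200 works out to 14 x 2 = 28. Note that in the first run at 200 the result was capped at 16 by parallel_degree_limit, and the full 28 only showed up after the limit was raised to 32.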

So just to reiterate, the auto DOP calculations have gotten progressively better over the last couple of years, but I think the simple addition of this new parameter makes it a much more palatable option.

SQL Translation Framework

My favorite new Oracle Database 12c feature is the SQL Translation Framework. The feature grew out of SQL Developer’s ability to translate SQL from non-Oracle RDBMSs. For example, there is a pre-built Sybase ASE translation package that is designed to translate the Sybase dialect of SQL into the Oracle dialect. That’s what the feature was originally designed for, but the developers decided to move it into the database and allow us to write our own translations, which opens up a whole world of possibilities.

The first thought that occurred to me when I saw this feature listed in the 12c New Features doc was that I might be able to use it to fix badly written SQL behind the scenes. I’ve written and talked quite a bit about using hint-based mechanisms (Outlines, SQL Profiles, Baselines, and SQL Patches) to alter execution plans without having to change the code. Those techniques work great most of the time, but there are cases where hints alone can’t fix the problem. Sometimes it is necessary to change the SQL statement text itself to get the desired results, and the SQL Translation Framework gives us the toolkit to do just that. And by the way, although I do have a tendency to use Oracle features for purposes for which they were not originally intended, in this case I think the developers knew full well that the feature could be used to address performance issues by rewriting SQL. As proof, here is a snippet from the 12c Release 1 Migration Guide:

In addition to translating non-Oracle SQL statements, the SQL Translation Framework can also be used to substitute an Oracle SQL statement with another Oracle statement to address a semantic or a performance issue. In this way, you can address an application issue without patching the client application.

So let’s dive in. There are two main components to the framework. The first is a PL/SQL package that programmatically translates statements (called the Translator in the docs). The second is a set of mappings for individual SQL statements (called a SQL Translation Profile). There are a few requirements to use this feature:

1. You must create a SQL Translation Profile (using dbms_sql_translator.create_profile)
2. You must assign a session to use the Translation Profile (generally with an alter session command)
3. You must set the 10601 system event

While the Translation Profile is required, it does not have to be assigned a translator. In other words, you can map individual statements without writing a PL/SQL package. Of course, if you have a system with a lot of problems caused by the same coding pattern, you could potentially use a translator to rewrite those statements on the fly.
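I won’t build a full translator here, but a minimal sketch of one might look something like this. To be clear, the MY_TRANSLATOR package and the rewrite rule it applies are purely hypothetical, and the TRANSLATE_SQL / TRANSLATE_ERROR interface and the ATTR_TRANSLATOR attribute are as I read them in the docs, so test before trusting any of it:

-- hypothetical translator package; the procedure names and signatures
-- follow the interface described in the SQL Translation Framework docs
create or replace package my_translator as
  procedure translate_sql(sql_text in clob, translated_text out clob);
  procedure translate_error(error_code          in  binary_integer,
                            translated_code     out binary_integer,
                            translated_sqlstate out varchar2);
end my_translator;
/
create or replace package body my_translator as
  procedure translate_sql(sql_text in clob, translated_text out clob) is
  begin
    -- made-up rewrite rule: swap one badly written statement for a hand-tuned one
    if sql_text = 'select count(*) from hr.countries' then
      translated_text := 'select count(*) from hr.regions';
    else
      translated_text := null;  -- null means "leave the statement as is"
    end if;
  end translate_sql;

  procedure translate_error(error_code          in  binary_integer,
                            translated_code     out binary_integer,
                            translated_sqlstate out varchar2) is
  begin
    translated_code     := error_code;  -- no error translation
    translated_sqlstate := null;
  end translate_error;
end my_translator;
/
-- attach the translator to a translation profile
-- (here the FOO profile created in the example that follows)
exec dbms_sql_translator.set_attribute('FOO', dbms_sql_translator.attr_translator, 'SYS.MY_TRANSLATOR')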

Here’s a quick example for a simple case of mapping individual statements.

SYS@LAB1211> exec dbms_sql_translator.create_profile('FOO');

PL/SQL procedure successfully completed.

SYS@LAB1211> select object_name, object_type from dba_objects where object_name like 'FOO';

OBJECT_NAME                    OBJECT_TYPE
------------------------------ -----------------------
FOO                            SQL TRANSLATION PROFILE

SYS@LAB1211> exec dbms_sql_translator.register_sql_translation('FOO','select count(*) from hr.countries','select count(*) from hr.jobs');

PL/SQL procedure successfully completed.

SYS@LAB1211> exec dbms_sql_translator.register_sql_translation('FOO','select count(*) from countries','select count(*) from jobs');

PL/SQL procedure successfully completed.

SYS@LAB1211> exec dbms_sql_translator.register_sql_translation('FOO','select 1 from hr.countries','select count(*) from hr.countries');

PL/SQL procedure successfully completed.

SYS@LAB1211> grant all on sql translation profile foo to hr;

Grant succeeded.

SYS@LAB1211> alter session set sql_translation_profile = FOO;

Session altered.

SYS@LAB1211> alter session set events = '10601 trace name context forever, level 32';

Session altered.

SYS@LAB1211> set echo on
SYS@LAB1211> select count(*) from hr.countries;

  COUNT(*)
----------
        19

SYS@LAB1211> select /*+ fix_wrong_results */ count(*) from hr.countries;

  COUNT(*)
----------
        25

SYS@LAB1211> @x

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  aaajpnhn25nza, child number 0
-------------------------------------
select /*+ fix_wrong_results */ count(*) from hr.countries

Plan hash value: 1399856367

----------------------------------------------------------------------------
| Id  | Operation        | Name            | Rows  | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                 |       |     1 (100)|          |
|   1 |  SORT AGGREGATE  |                 |     1 |            |          |
|   2 |   INDEX FULL SCAN| COUNTRY_C_ID_PK |    25 |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------


14 rows selected.

SYS@LAB1211> select count(*) from hr.countries;

  COUNT(*)
----------
        19

SYS@LAB1211> @x

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  c95vwg4jwqqfd, child number 0
-------------------------------------
select count(*) from hr.jobs

Plan hash value: 3870222678

-----------------------------------------------------------
| Id  | Operation        | Name      | Rows  | Cost (%CPU)|
-----------------------------------------------------------
|   0 | SELECT STATEMENT |           |       |     1 (100)|
|   1 |  SORT AGGREGATE  |           |     1 |            |
|   2 |   INDEX FULL SCAN| JOB_ID_PK |    19 |     0   (0)|
-----------------------------------------------------------


14 rows selected.


12c – New SQL_ID Calculation

Updated 7/7/13: Well, I’m a doofus! This is not a generic problem. It is a bug, but it only shows up when using a specific new feature I was playing with on my 12.1 database (the SQL Translation Framework). There is no need to worry about this unless you are using that feature (thanks to Stefen for pointing this out), so you probably don’t need to read this post at all. The comments might be worth looking at though. 🙂

=========================================================

Shoot! The SQL_ID calculation is different between 11.2 and 12.1. This is a bummer because we’ve gotten used to being able to go back and forth between versions to verify plans (after upgrading to 11g, for example). It was also convenient to be able to track changes in performance statistics before and after an upgrade. Fortunately there is a workaround: the old_hash_value column has been carried through to 12c. See here:


SQL*Plus: Release 11.2.0.3.0 Production on Fri Jul 5 18:51:43 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining
and Real Application Testing options


INSTANCE_NAME    STARTUP_TIME      CURRENT_TIME         DAYS    SECONDS
---------------- ----------------- ----------------- ------- ----------
LAB1123          26-JUN-2013 19:02 05-JUL-2013 18:51    8.99     776979

SYS@LAB1123> select sql_id, hash_value, old_hash_value, plan_hash_value, sql_text from v$sql where sql_text = 'select 1 from dual';

SQL_ID        HASH_VALUE OLD_HASH_VALUE PLAN_HASH_VALUE SQL_TEXT
------------- ---------- -------------- --------------- ----------------------------------------
520mkxqpf15q8 2866845384      271604965      1388734953 select 1 from dual

========================================================

SQL*Plus: Release 12.1.0.1.0 Production on Fri Jul 5 18:33:12 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options


INSTANCE_NAME    STARTUP_TIME      CURRENT_TIME         DAYS    SECONDS
---------------- ----------------- ----------------- ------- ----------
LAB1211          02-JUL-2013 10:21 05-JUL-2013 18:33    3.34     288715

SYS@LAB1211> select sql_id, hash_value, old_hash_value, plan_hash_value, sql_text from v$sql where sql_text = 'select 1 from dual';

SQL_ID        HASH_VALUE OLD_HASH_VALUE PLAN_HASH_VALUE SQL_TEXT
------------- ---------- -------------- --------------- ----------------------------------------
3zcn52u5tvfqh 2342370000      271604965      1388734953 select 1 from dual

So as you can see, the sql_id and hash_value have changed between versions but the old_hash_value remains consistent. It also appears that the plan_hash_value calculation is unchanged, at least for simple plans. Anyway, a little reworking of some scripts should allow us to do the same sorts of things we’ve done in the past, albeit with a little more effort. Maybe Tanel will do us all a favor and write a function to calculate the old sql_id in 12c. That would make it a little easier. 🙂
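For example, a reworked script might look a cursor up by old_hash_value instead of sql_id. Using the value from the listings above, something like this (just a sketch) should find the matching cursor on either version:

select sql_id, hash_value, old_hash_value, plan_hash_value, sql_text
from   v$sql
where  old_hash_value = 271604965;   -- value captured on the 11.2 system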

12c Adaptive Optimization

Since everyone seems to be all twitterpated about Oracle Database 12c this week, I thought I’d post a quick note to let you know that the slides from the presentation on 12c Adaptive Optimization I did at the Hotsos Symposium 2013 (with a lot of help from Maria) are now available in the Whitepapers / Presentations section of this blog.

While I’m on the topic, I found this little blurb in the Oracle Database 12c Release 1 New Features Guide:

Zero Effort Performance

That’s the section that talks about the Adaptive Optimization stuff. I think the documentation folks meant that they were describing performance features that didn’t require any manual intervention, but it sort of reads like the features are really easy to describe, or maybe that the writers weren’t going to work very hard on describing them. At any rate, the wording struck me as humorous. 🙂

SQL Gone Bad – But Plan Not Changed? – Part 2

In Part 1 of this series I talked about the basic problem, which is that the plan_hash_value is not based on the whole plan. One of the main missing components is the set of predicates associated with a plan, but there are other missing parts, as was pointed out in the first of Randolf Geist’s posts on the topic. At any rate, predicates seem to be the most critical of the missing pieces.

The purpose of this second post is to talk about diagnosis. Basically, how do you identify that a part of the plan which is not included in the plan_hash_value, specifically a predicate, has changed?

So first I thought I would show a few examples of statements with the same sql_id and plan_hash_value that have other plan differences (in the predicate section). To do this I used a method proposed by Randolf Geist a few years back in his second post on the topic, which covered Alternative Ways to Calculate a PLAN_HASH_VALUE. In that post, Randolf shows several ways to compute a hash value on any or all of the columns in the v$sql_plan view. I wrote a simple script around one of those methods (find_pred_mismatch.sql), and as you might guess from the name, I did not include all the columns in v$sql_plan but limited it to identifying statements with mismatched predicates. To be more explicit, the script locates statements in the shared pool that have multiple child cursors where more than one set of predicates goes with a single plan_hash_value. Here’s an example:

SYS@DEMO1> @find_pred_mismatch

Type created.


SQL_ID        PLAN_HASH_VALUE CHILD_NUMBER   THE_HASH ARE_H
------------- --------------- ------------ ---------- -----
063m5s0cvrr19      1502175119            0 2709091620 DIFF!
093fgfvygm51m      3114044936            0 3689661040 DIFF!
0cn2wm9d7zq8d      1540383338            0 3746559344 DIFF!
0pt4jfmq9f1q0      3078496799            0 1309675335 DIFF!
155cwuv2pfp1d       768389713            0 2982291916 DIFF!
18c2yb5aj919t      1032442798            0 1714458756 DIFF!
1n9crga6mbw2x      4174841998            0 2752042257 DIFF!
1ytxrt5qp9qdd      2707146560            0 3757837347 DIFF!
23buxzfxyp1vy      3143617369            0 2089881285 DIFF!
23nad9x295gkf       891847141            0 4056778555 DIFF!
24zvjzuyrxh3w      1877711096            0 1680905434 DIFF!
28n17ru48jnh5      1665111388            0 3584687734 DIFF!
2j0fw19fph49j      1337823903            0 2431841938 DIFF!
2kd6nusgzc3uw      3151266570            0 3024843785 DIFF!
2rpwgryn7pxz5      3329544515            0  452505826 DIFF!
35nhk48nxwc0v      2553960494            0  117262982 DIFF!
3bc73t2h9mwxc      1420457580            0 1226225583 DIFF!
3gputsqv4u1j3      3161304745            0 2252819340 DIFF!
3zauy2zqryrsx      1420457580            0 1128973296 DIFF!
42q1qby3huf2c      3069437562            0 4008632079 DIFF!
47mm81hm9sggy      1836210496            0 1554180227 DIFF!
4g46czps3t55u      2714703948            0 4063401817 DIFF!
4n2gca427719q      1360319091            0 4013571180 DIFF!
4tpsnbkt1dm5j      2960949352            0 3341004783 DIFF!
5dyhfnkzta2zm      3767331201            0 4238766232 DIFF!
5h91zx386wbht       293199272            0  949799344 DIFF!
5s34t44u10q4g      2693539438            0  839739072 DIFF!
5uw1u291s3m0k       219265157            0  642280427 DIFF!
61tn3mam0vq0b      2012170170            0 2048362545 DIFF!
63t3ufgq37m0c      1155609947            0  844291465 DIFF!
69k5bhm12sz98      3091659676            0  356777601 DIFF!
6cp74g22fzahf        76968983            0 1617454724 DIFF!
6wm3n4d7bnddg      1772758172            0 1148123313 DIFF!
78kp0fcyxavzb      2960949352            0 1085639264 DIFF!
7ah4afrggrw5c      4213028598            0 4285032606 DIFF!
7g4rxwbvhdh3q      3170022080            0 2083442940 DIFF!
7hspvruktu52b      4016032974            0 2538340188 DIFF!
84p3g5b5bsfvn       681044650            0 3826083810 DIFF!
86521pa77y28j      3760090177            0 3887843475 DIFF!
8ak9gkw2mjhvr      1526940012            0 2946674232 DIFF!
8p9z2ztb272bm       408663731            0 3293625021 DIFF!
aca4xvmz0rzup       427901911            0 4215668999 DIFF!
akh9zqqkx3wj7      2306922995            0 2084689096 DIFF!
akx4284f2vjnv      3948068913            0 2662025793 DIFF!
amycufzt6uq5f      3283312188            0 1896511712 DIFF!
atnkqhrp3t7xa      2196914545            0   26873046 DIFF!
aw2x7hh2a9ag0      1148557212            0  719001678 DIFF!
b41wak2bb7atw       108532975            0 1699960507 DIFF!
bhvyz9bgyrhb2      1134671139            0 2402404248 DIFF!
c8gnrhxma4tas      4024720576            0 2473084105 DIFF!
cc7vvmrsxzyq1      1849127868            0 1912933403 DIFF!
cjtaqp92v10bn       922118807            0 2313573387 DIFF!
ckfgcyv05xptf      2869192503            0 3932622422 DIFF!
cw860p03hy5ff      1502175119            0 2915988156 DIFF!
cyw0c6qyrvsdd       192117504            0 2551186960 DIFF!
d53nc7j6n1057      1356236608            0  582788179 DIFF!
dyj1ssw8jw54f      1836210496            0   66902761 DIFF!
fkjkrv5ycz96u      2247257083            0 1809299677 DIFF!
gdn3ysuyssf82      4024720576            0 2473084105 DIFF!
gwbdd5m45ugpm      3180770434            0  235716193 DIFF!

60 rows selected.

SYS@DEMO1> select * from table(dbms_xplan.display_cursor('&sql_id','&child_no','typical'));
Enter value for sql_id: 24zvjzuyrxh3w
Enter value for child_no: 

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  24zvjzuyrxh3w, child number 0
-------------------------------------
SELECT script FROM sys.metaxsl$ WHERE xmltag=:1 AND transform=:2  AND
model=:3

Plan hash value: 1877711096

--------------------------------------------------------------------------------------
| Id  | Operation                 | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |          |       |       |     3 (100)|          |
|*  1 |  TABLE ACCESS STORAGE FULL| METAXSL$ |     3 |    99 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - storage(("XMLTAG"=:1 AND "TRANSFORM"=:2 AND "MODEL"=:3))
       filter(("XMLTAG"=:1 AND "TRANSFORM"=:2 AND "MODEL"=:3))

SQL_ID  24zvjzuyrxh3w, child number 1
-------------------------------------
SELECT script FROM sys.metaxsl$ WHERE xmltag=:1 AND transform=:2  AND
model=:3

Plan hash value: 1877711096

----------------------------------------------
| Id  | Operation                 | Name     |
----------------------------------------------
|   0 | SELECT STATEMENT          |          |
|*  1 |  TABLE ACCESS STORAGE FULL| METAXSL$ |
----------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - storage(("MODEL"=:3 AND "TRANSFORM"=:2 AND "XMLTAG"=:1))
       filter(("MODEL"=:3 AND "TRANSFORM"=:2 AND "XMLTAG"=:1))

Note
-----
   - rule based optimizer used (consider using cbo)


44 rows selected.
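If you don’t have find_pred_mismatch.sql handy, a rough approximation of the idea can be put together with ORA_HASH. This is not the actual script and it is cruder than Randolf’s version (summing hash values per child is not collision-proof, and scanning v$sql_plan for the whole shared pool is not free on a busy system), but it is good enough to flag candidates for a closer look with dbms_xplan:

-- flag sql_id / plan_hash_value combinations whose child cursors
-- do not all hash their predicate columns to the same value
with child_hashes as (
  select sql_id,
         plan_hash_value,
         child_number,
         sum(ora_hash(id || '|' || access_predicates || '|' || filter_predicates)) as pred_hash
  from   v$sql_plan
  group  by sql_id, plan_hash_value, child_number
)
select   sql_id,
         plan_hash_value,
         count(distinct pred_hash) as distinct_pred_hashes
from     child_hashes
group by sql_id, plan_hash_value
having   count(distinct pred_hash) > 1
order by sql_id;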


SQL Gone Bad – But Plan Not Changed? – Part 1

Last week an interesting issue popped up on a mission-critical production app (MCPA). A statement that was run as part of a nightly batch process ran long. In fact, the statement never finished and the job had to be killed and restarted. This particular system is prone to plan stability issues due to various factors outside the scope of this post, so the first thing the guys checked was whether there had been a plan change. Surprisingly, the plan_hash_value was the same as it had been for the past several months. The statement was very simple, and a quick look at the xplan output showed that the plan was indeed the same with one exception: the predicate section was slightly different.

As a quick diversion, you probably already know that the plan_hash_value is calculated from only partial information about the plan. Arguably it covers the most important parts, but some important pieces of the plan are not included (namely the stuff that shows up in the predicate section). Randolf Geist explained well which parts of the plan are used in calculating the plan_hash_value in a post on How PLAN_HASH_VALUES Are Calculated several years ago. His summary was this:

So in summary the following conclusions can be made:

– The same PLAN_HASH_VALUE is merely an indicator that the same operations on objects of the same name are performed in the same order.

– It tells nothing about the similarity of the expected runtime performance of the execution plan, due to various reasons as demonstrated. The most significant information that is not covered by the PLAN_HASH_VALUE are the filter and access predicates, but there are other attributes, too, that are not part of the hash value calculation.

The production issue got me thinking about several things:

    Could I come up with a simple test case to show a severe performance degradation between executions of a statement with the same plan_hash_value because of a change in the predicates? (the answer is that it’s pretty easy, actually; a sketch of one way to do it follows this list)

    What forensics could be used to determine when this situation has occurred?

    How do you fix the problem?
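Here is a sketch of one way to build such a test case (my own illustration, not necessarily how the original test case was constructed): keep the SQL text and the plan operations identical, but rebuild a two-column index with its columns reversed. The operation names and object names don’t change, so the plan_hash_value stays the same, but the predicate section, and the amount of index that has to be scanned, changes dramatically.

set serveroutput off   -- so display_cursor picks up the last statement run

-- same sql_id and same plan_hash_value, very different predicate sections
create table pred_test as
select mod(rownum, 10)     as skinny_col,   -- equality predicate is selective
       trunc(rownum / 10)  as fat_col,      -- range predicate matches almost everything
       rpad('x', 100, 'x') as padding       -- just to give the table some bulk
from   dual
connect by level <= 100000;

create index pred_test_ix on pred_test (skinny_col, fat_col);

variable b1 number
variable b2 number
exec :b1 := 7
exec :b2 := 0

-- both predicates can be used as access predicates on the (skinny_col, fat_col) index;
-- the hint keeps the access path (an index range scan) stable across both runs
select /*+ index_rs_asc(t pred_test_ix) */ count(*) from pred_test t
where  t.skinny_col = :b1 and t.fat_col > :b2;

select * from table(dbms_xplan.display_cursor);

-- rebuild the index with the columns reversed; the index name (and therefore
-- the plan operations and the plan_hash_value) stays the same
drop index pred_test_ix;
create index pred_test_ix on pred_test (fat_col, skinny_col);

-- same statement, same plan_hash_value, but the range predicate is now on the
-- leading column, so far more of the index has to be visited and the predicate
-- section of the plan changes
select /*+ index_rs_asc(t pred_test_ix) */ count(*) from pred_test t
where  t.skinny_col = :b1 and t.fat_col > :b2;

select * from table(dbms_xplan.display_cursor);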
