Does parallel_degree_limit work with parallel_degree_policy=manual?

The Oracle 11g parameter parallel_degree_limit is designed to put a cap on the maximum DOP for a statement.

The Oracle Database Reference 11g Release 2 (11.2) says this:

A numeric value for this parameter specifies the maximum degree of parallelism the optimizer can choose for a SQL statement when automatic degree of parallelism is active. Automatic degree of parallelism is only enabled if PARALLEL_DEGREE_POLICY is set to AUTO or LIMITED.

But that’s not entirely correct, because it turns out you can enable auto DOP via a hint. A PARALLEL hint without a valid object to act on (i.e. the statement-level form of the hint) enables auto DOP for the statement. Here is an example:

SYS@DEMO1> @parms
Enter value for parameter: parallel_degree
Enter value for isset: 
Enter value for show_hidden: 
 
NAME                                               VALUE                                                                  ISDEFAUL ISMODIFIED ISSET
-------------------------------------------------- ---------------------------------------------------------------------- -------- ---------- ----------
parallel_degree_limit                              CPU                                                                    FALSE    TRUE       TRUE
parallel_degree_policy                             MANUAL                                                                 FALSE    TRUE       TRUE
 
Elapsed: 00:00:00.00
SYS@DEMO1> select /*+ parallel */ count(*) from eo00.SALES_1M2;
 
  COUNT(*)
----------
   1000000
 
Elapsed: 00:00:00.02
SYS@DEMO1> @x
 
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  aszs6rg3fttrt, child number 2
-------------------------------------
select /*+ parallel */ count(*) from eo00.SALES_1M2
 
Plan hash value: 3155295854
 
-----------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name      | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |           |       |    25 (100)|          |        |      |            |
|   1 |  SORT AGGREGATE                |           |     1 |            |          |        |      |            |
|   2 |   PX COORDINATOR               |           |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)         | :TQ10000  |     1 |            |          |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE             |           |     1 |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR         |           |   982K|    25   (4)| 00:00:01 |  Q1,00 | PCWC |            |
|*  6 |       TABLE ACCESS STORAGE FULL| SALES_1M2 |   982K|    25   (4)| 00:00:01 |  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   6 - storage(:Z>=:Z AND :Z<=:Z)
 
Note
-----
   - automatic DOP: Computed Degree of Parallelism is 7
 
 
28 rows selected.
 
Elapsed: 00:00:00.02

So as you can see, the hint enabled auto DOP, even though parallel_degree_policy is set to MANUAL. Now let’s see if parallel_degree_limit kicks in for such a case.

SYS@DEMO1> alter session set parallel_degree_limit=4;
 
Session altered.
 
Elapsed: 00:00:00.00
SYS@DEMO1> select /*+ parallel */ count(*) from eo00.SALES_1M2;
 
  COUNT(*)
----------
   1000000
 
Elapsed: 00:00:00.02
SYS@DEMO1> @x
 
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  aszs6rg3fttrt, child number 1
-------------------------------------
select /*+ parallel */ count(*) from eo00.SALES_1M2
 
Plan hash value: 3155295854
 
-----------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name      | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |           |       |    44 (100)|          |        |      |            |
|   1 |  SORT AGGREGATE                |           |     1 |            |          |        |      |            |
|   2 |   PX COORDINATOR               |           |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)         | :TQ10000  |     1 |            |          |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE             |           |     1 |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR         |           |   982K|    44   (3)| 00:00:01 |  Q1,00 | PCWC |            |
|*  6 |       TABLE ACCESS STORAGE FULL| SALES_1M2 |   982K|    44   (3)| 00:00:01 |  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   6 - storage(:Z>=:Z AND :Z<=:Z)
 
Note
-----
   - automatic DOP: Computed Degree of Parallelism is 4 because of degree limit
 
 
28 rows selected.
 
Elapsed: 00:00:00.02

Yes, it does. So what does it mean? Well, for one thing, the documentation (and some presentations I have seen recently) is slightly wrong. But the real moral of the story is that just because you have parallel_degree_policy set to MANUAL doesn’t mean you are not using auto DOP.

Does this mean that the other features enabled by auto DOP (parallel statement queuing and in-memory parallel) will kick in on these kinds of statements as well? I’ll leave that as an exercise for the reader.
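If anyone does want to take a crack at it, here’s a rough sketch of one way to check (not something I ran for this post): statements held by parallel statement queuing show up in v$sql_monitor with a QUEUED status, and the waiting sessions sit on the resmgr:pq queued event.

-- rough sketch (untested here): look for queuing and PX server allocation
select sql_id, status, px_servers_requested, px_servers_allocated
  from v$sql_monitor
 where sql_id = '&sql_id'
 order by sql_exec_start;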

12c Adaptive Optimization – Part 3

This is the third and final post on follow-up questions from the Redgate webinar I did on 12c Adaptive Optimization (the link goes to a recording of the webcast, by the way).

Also, here are links to the 2 earlier posts:

    12c Adaptive Optimization – Part 1.
    12c Adaptive Optimization – Part 2 (Hints).

So here are the last set of questions along with my responses:

Q: Is this feature on by default or you have to set a parameter to make sure of it?
A: It’s on by default but can be turned off by the methods listed in the presentation.
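For reference, here’s a minimal sketch of the two main parameter knobs (settings only, not a test run from this post):

-- sketch: turn the 12.1 adaptive features off entirely (session or system level)
alter session set optimizer_adaptive_features = false;

-- or leave the machinery on but in report-only mode (collect, don't act)
alter session set optimizer_adaptive_reporting_only = true;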

Q: Is there any drawback of adaptive execution plan?
A: New features (especially auto-magic ones) always make people nervous, but I don’t see too many potential pitfalls with this one. The fact that it is enabled by default out of the box is also a good indicator that the developers themselves have a lot of confidence in it. There is certainly more work going on to collect statistics and buffer rows, but it seems quite minimal and only happens on the first execution. So my basic answer is no, I don’t foresee any major drawbacks.

Q: For adaptive plans, usually queries are more complex, with multiple combinations of hash joins and nested loops. But adaptive plans only switches to one “sub plan”, correct? How does it account for all the various combinations?
A: A sub-plan is limited to a single join. There can obviously be many joins in a single plan and thus many sub-plans. But each sub-plan will result in either a HJ or a NLJ. At the end there will be only one final plan. See my previous post (Part 1) for an example of a more complex plan with multiple sub-plans.

Q: parallel distribution methods: why not use broadcast all the time? :)
A: :)

Q: Would adaptive optim switch to a better index if it finds itself sitting on a wrong index?
A: I presume the question is with regard to Adaptive Plans kicking in on the first execution; if so, the answer is no. At this point only join methods and PX distribution methods can be changed. I expect this will be expanded over time though.

Q: Does the same plan_hash_value mean the same final plan?
A: Yes – plan hash value is computed based on the final plan with no regard to the fact that the plan was adaptive.

Q: How correlated are plan_hash_values with final plans? How can we find the same final plans?
A: Plan hash value is computed based on the final plan, so the correlation is very high. :)
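If you want to check this yourself, here’s a rough sketch (plug in your own sql_id) using the IS_RESOLVED_ADAPTIVE_PLAN column that was added to V$SQL in 12.1 – children with the same plan_hash_value ended up with the same final plan, whether or not it was arrived at adaptively:

-- rough sketch: compare final plans across child cursors for one statement
select sql_id, child_number, plan_hash_value, is_resolved_adaptive_plan
  from v$sql
 where sql_id = '&sql_id';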

Q: Dynamic sampling would not put an excessive pressure on the CPU?
A: I guess it could, but it’s been around for some time and I haven’t been involved in any situations where the time spent on dynamic sampling was an issue. Setting it to 11 may give us some chances to see such a thing though. More often the issues arise when dynamic sampling does not come up with a good picture of the data due to the limited size of the sample.

Q: Is dynamic sampling = 11 actually a good blanket setting, or do you not trust the optimizer that much? What do you use and why?
A: The optimizer_dynamic_sampling parameter still defaults to 2 in 12c. That alone makes me cautious about setting it to the new totally auto-magic value of 11. If the developers have enough confidence in a new feature to make it the default, then I will be more trusting. I prefer to stick with default values unless I have to make a change to address a specific issue. I have worked on a few systems that change the default setting, but 11 has not been one of those values (yet). I need to do more testing with it.

Q: Gotta love Spinal Tap… crank it up to 11 !
A: Rock and Roll!

Q: Is there any effect on CPU utilisation because of adaptive optimisation?
A: There is definitely some extra overhead in collecting statistics and buffering rows but it should be minimal and it should only affect the initial execution.

Q: Can HJ be changed to NL in the first execution? What is the threshold for such a change?
A: Yes – Adaptive Plans kick in on the first execution. The threshold depends on the specific case. See the inflection point calculation (from a 10053 trace) in part 2 of this series for an example.
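If you want to see the inflection point calculation for your own statement, here’s a quick sketch of generating the 10053 trace (the tracefile_identifier is just there to make the file easy to find, and the trace is only written on a hard parse):

-- sketch: generate an optimizer (10053) trace and then grep it for "inflection"
alter session set tracefile_identifier = 'adaptive';
alter session set events '10053 trace name context forever, level 1';
-- ... hard parse and run the statement of interest here ...
alter session set events '10053 trace name context off';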

Q: This means that if it is abandoned once it will also be abandoned if run again?
A: Yes, assuming no other changes occur. But there are many things that can change, such as Adaptive Cursor Sharing, Cardinality Feedback, etc… and of course the data itself and/or the statistics about the data can change over time as well. Just to be clear, the choice between the two join methods is only made during the first execution after a hard parse, so once a statement is loaded into the cache, the plan will be static until something changes that causes a new child cursor to be created.

Q: At what data volumes does Adaptive Optimization become likely to be helpful?
A: Any volume that causes a NLJ to result in significantly different elapsed time than HJ.

Q: Does AWR show these updated adaptive plans with minus ?
A: That’s a good question. Yes, you can use the dbms_xplan.display_awr with the ‘adaptive’ format option (see the example below).

 
SYS@db12c1> select * from table(dbms_xplan.display_awr('&sql_id',nvl('&plan_hash_value',null),null,'adaptive'));
Enter value for sql_id: 6qg99cfg26kwb
Enter value for plan_hash_value: 
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 6qg99cfg26kwb
--------------------
SELECT COUNT(UNQ) UNQ, COUNT(PFX) PFX FROM (SELECT /*+ first_rows(1)
leading(cc) */ CD.TYPE# UNQ, NULL PFX FROM SYS.CCOL$ CC, SYS.CDEF$ CD
WHERE CC.OBJ# = :B2 AND CC.INTCOL# = :B1 AND CD.CON# = CC.CON# AND
CD.OBJ# = CC.OBJ# AND CD.ENABLED IS NOT NULL AND CD.INTCOLS = 1 AND
CD.TYPE# IN (2,3) AND BITAND(CD.DEFER, 2+4) = 4 AND ROWNUM < 2 UNION
ALL SELECT /*+ first_rows(1) leading(i) */ CASE WHEN I.INTCOLS = 1 AND
BITAND(I.PROPERTY,1) = 1 THEN 3 ELSE NULL END UNQ, CASE WHEN IC.POS# =
1 THEN 1 ELSE NULL END PFX FROM SYS.IND$ I, SYS.ICOL$ IC WHERE I.BO# =
:B2 AND I.BO# = IC.BO# AND IC.INTCOL# = :B1 AND I.OBJ# = IC.OBJ# AND
BITAND(I.FLAGS,1025) = 0 AND ROWNUM < 2 )
 
Plan hash value: 1065215175
 
----------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |         |       |       |     6 (100)|          |
|   1 |  SORT AGGREGATE                          |         |     1 |    16 |            |          |
|   2 |   VIEW                                   |         |     2 |    32 |     6   (0)| 00:00:01 |
|   3 |    UNION-ALL                             |         |       |       |            |          |
|   4 |     COUNT STOPKEY                        |         |       |       |            |          |
|-  5 |      HASH JOIN                           |         |     1 |    35 |     3   (0)| 00:00:01 |
|   6 |       NESTED LOOPS                       |         |     1 |    35 |     3   (0)| 00:00:01 |
|-  7 |        STATISTICS COLLECTOR              |         |       |       |            |          |
|   8 |         TABLE ACCESS CLUSTER             | CCOL$   |     1 |    13 |     2   (0)| 00:00:01 |
|   9 |          INDEX UNIQUE SCAN               | I_COBJ# |     1 |       |     1   (0)| 00:00:01 |
|  10 |        TABLE ACCESS CLUSTER              | CDEF$   |     1 |    22 |     1   (0)| 00:00:01 |
|- 11 |       TABLE ACCESS BY INDEX ROWID BATCHED| CDEF$   |     1 |    22 |     1   (0)| 00:00:01 |
|- 12 |        INDEX RANGE SCAN                  | I_CDEF2 |     1 |       |     1   (0)| 00:00:01 |
|  13 |     COUNT STOPKEY                        |         |       |       |            |          |
|- 14 |      HASH JOIN                           |         |     1 |    38 |     3   (0)| 00:00:01 |
|  15 |       NESTED LOOPS                       |         |     1 |    38 |     3   (0)| 00:00:01 |
|- 16 |        STATISTICS COLLECTOR              |         |       |       |            |          |
|  17 |         TABLE ACCESS CLUSTER             | IND$    |     1 |    21 |     2   (0)| 00:00:01 |
|  18 |          INDEX UNIQUE SCAN               | I_OBJ#  |     1 |       |     1   (0)| 00:00:01 |
|  19 |        TABLE ACCESS CLUSTER              | ICOL$   |     1 |    17 |     1   (0)| 00:00:01 |
|- 20 |       TABLE ACCESS CLUSTER               | ICOL$   |     1 |    17 |     1   (0)| 00:00:01 |
|- 21 |        INDEX UNIQUE SCAN                 | I_OBJ#  |     1 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------
 
Note
-----
   - this is an adaptive plan (rows marked '-' are inactive)
 
 
46 rows selected.

Q: Is there any way to encourage the optimizer to collect the information but not act on it?
A: Yes, set optimizer_adaptive_reporting_only = true.

Q: Does adaptive distribution for parallel processing work as expected on a Virtual server where resources can be spread over several other servers?
A: No idea (in fact I’m not even sure I understand the question). Give it a test and let us know what you find out. :)

Q: Does Adaptive Optimization help oracle optimize somewhat complex nested views? I know nested views are not recommended but we sometimes have to live with what we inherited.
A: I don’t think this particular feature is going to help nested views specifically. But who knows. The optimizer seems to get lost occasionally with deeply nested views. By the way, there is an interesting new procedure in 12c called dbms_utility.expand_sql_text which spits out the fully expanded version of a SQL statement that accesses data through views. Tom Kyte has blogged about it here: 12c – SQL Text Expansion
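Here’s a quick sketch of calling it (the input statement is just an example – point it at your own nested-view query):

-- sketch: expand a view-based query into its underlying base-table SQL
set serveroutput on
declare
  l_expanded clob;
begin
  dbms_utility.expand_sql_text(
    input_sql_text  => 'select owner, segment_name from dba_segments where rownum < 2',
    output_sql_text => l_expanded);
  dbms_output.put_line(l_expanded);
end;
/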

Q: We regularly have hash join problems traceable to temp space limits. Shifting to nested loops has proven necessary in 10 and 11. Early detection and shifting to nested loops would be important for us.
A: I’m not sure this feature is really going to help you much in that regard unless the optimizer is erroneously picking the HJ based on incorrect estimates. If you’re just forcing the NLJ to avoid poor I/O performance on temp, though, it probably won’t help. In that case you need to figure out how to sort less or use more memory (increase the PGA, use manual workarea sizing, use more PX slaves, etc…).

Q: So if a sort/merge join is used, this feature would not switch to a nested loops/hash join if the sort/merge join is a bad plan?
A: No, it applies only to HJ and NLJ as of 12.1.0.1.

Q: What happens with the rows that were read up to inflection point? Does Oracle start reading from the scratch again?
A: The rows are buffered so they don’t need to be re-read.

Q: Will the SQL scripts that were demonstrated for reviewing the SQL plan information be made available?
A: Most are on this blog already (use the search box to locate them) but let me know if you can’t find any of the ones I used.

Q: It’s is a contraction for it is or it has. Its is a possessive pronoun meaning, more or less, of it or belonging to it.
A: Duly noted (and fixed in the presentation). :)

Q: Can we *force* plan change in mid-execution?
A: No. You can enable or disable the feature, but the optimizer decides whether to switch or not.

Q: How long statistics collector runs if it does not switch?
A: It should only run until the inflection point (the point at which it makes the decision), but I have not actually tested this.

Q: Is there a way adaptive can be disabled for PDB and enabled for others?
A: Yes, the optimizer_adaptive_features parameter can be set separately for each PDB (see the example below).

> rlwrap sqlplus / as sysdba
 
SQL*Plus: Release 12.1.0.1.0 Production on Mon Dec 9 19:53:03 2013
 
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
 
 
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
 
 
INSTANCE_NAME    STARTUP_TIME      CURRENT_TIME         DAYS    SECONDS
---------------- ----------------- ----------------- ------- ----------
CONTAIN1         02-DEC-2013 03:22 09-DEC-2013 19:53    7.69     664225
 
SYS@CONTAIN1> @whoami_pdb
 
    CON_ID CON_NAME   USERNAME             USER#        SID    SERIAL# PREV_HASH_VALUE SCHEMANAME                     OS_PID
---------- ---------- --------------- ---------- ---------- ---------- --------------- ------------------------------ -------
         1 CDB$ROOT   SYS                      0         24        295      3265981639 SYS                            4481
 
SYS@CONTAIN1> @connect_pdb
Enter value for pdb_name: plug1
 
Session altered.
 
SYS@CONTAIN1:PLUG1> @parms
Enter value for parameter: optimizer_adaptive_features
Enter value for isset: 
Enter value for show_hidden: 
 
NAME                                               VALUE                                                                  ISDEFAUL ISMODIFIED ISSET
-------------------------------------------------- ---------------------------------------------------------------------- -------- ---------- ----------
optimizer_adaptive_features                        TRUE                                                                   TRUE     TRUE       TRUE
 
SYS@CONTAIN1:PLUG1> alter system set optimizer_adaptive_features=false;
 
System altered.
 
SYS@CONTAIN1:PLUG1> @parms
Enter value for parameter: optimizer_adaptive_features
Enter value for isset: 
Enter value for show_hidden: 
 
NAME                                               VALUE                                                                  ISDEFAUL ISMODIFIED ISSET
-------------------------------------------------- ---------------------------------------------------------------------- -------- ---------- ----------
optimizer_adaptive_features                        FALSE                                                                  TRUE     TRUE       TRUE
 
SYS@CONTAIN1:PLUG1> @connect_pdb
Enter value for pdb_name: plug2
 
Session altered.
 
SYS@CONTAIN1:PLUG2> @whoami_pdb
 
    CON_ID CON_NAME   USERNAME             USER#        SID    SERIAL# PREV_HASH_VALUE SCHEMANAME                     OS_PID
---------- ---------- --------------- ---------- ---------- ---------- --------------- ------------------------------ -------
         4 PLUG2      SYS                      0         24        295      2710464132 SYS                            4481
 
SYS@CONTAIN1:PLUG2> @parms
Enter value for parameter: optimizer_adaptive_features
Enter value for isset: 
Enter value for show_hidden: 
 
NAME                                               VALUE                                                                  ISDEFAUL ISMODIFIED ISSET
-------------------------------------------------- ---------------------------------------------------------------------- -------- ---------- ----------
optimizer_adaptive_features                        TRUE                                                                   TRUE     TRUE       TRUE
 
SYS@CONTAIN1:PLUG2> connect / as sysdba
Connected.
 
INSTANCE_NAME    STARTUP_TIME      CURRENT_TIME         DAYS    SECONDS
---------------- ----------------- ----------------- ------- ----------
CONTAIN1         02-DEC-2013 03:22 09-DEC-2013 19:54    7.69     664324
 
SYS@CONTAIN1> @parms
Enter value for parameter: optimizer_adaptive_features
Enter value for isset: 
Enter value for show_hidden: 
 
NAME                                               VALUE                                                                  ISDEFAUL ISMODIFIED ISSET
-------------------------------------------------- ---------------------------------------------------------------------- -------- ---------- ----------
optimizer_adaptive_features                        TRUE                                                                   TRUE     TRUE       TRUE

So you can set the optimizer_adaptive_features parameter separately for each PDB. Note: here are links to a couple of the scripts I used in this post:

    connect_pdb.sql
    whoami_pdb.sql

There was another good question that I don’t have time to look into at the moment.

Q: In the Pro*C sequence PREPARE, OPEN, FETCH, at what point(s) might Oracle switch plans? If during FETCH, how does Oracle return the next row/array?

Maybe I’ll get around to that later but if anyone wants to give it a shot and post the results in the comments section that would be great. :)

12c Adaptive Optimization – Part 2 – Hints

This is the second post on follow-up questions from the Redgate webinar I did on 12c Adaptive Optimization. The first post is here: 12c Adaptive Optimization – Part 1. Since there were several comments and questions about hints and how they interact with Adaptive Plans, I decided to limit this second post to that topic.

Q: Regarding turning off the adaptive optimization (particularly adaptive joins), will there also be a hint to disable it for a particular SQL?
Q: Can we pick and choose which SQLs not to run this collector for?

A: There are no specific hints to enable or disable Adaptive Plans as of 12.1.0.1. However, the OPT_PARAM hint does work with both the OPTIMIZER_ADAPTIVE_FEATURES parameter and the “_optimizer_adaptive_plans” parameter.

Here’s an example:

 
SYS@db12c1> -- statement that wants to generate an adaptive plan
SYS@db12c1> select product_name
  2  from oe.order_items o, oe.product_information p
  3  where o.unit_price=15 and o.quantity > 1
  4  and o.product_id = p.product_id
  5  /
 
PRODUCT_NAME
--------------------------------------------------
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
 
13 rows selected.
 
SYS@db12c1> @prev_sql
 
SQL_ID         CHILD  PLAN_HASH  EXECS  AVG_ETIME SQL_TEXT
------------- ------ ---------- ------ ---------- ----------------------------------------------------------------------
3ycnqgx5nc8nn      0 1553478007      1        .00 select product_name from oe.order_items o, oe.product_information p wh
 
SYS@db12c1> @dplan_adaptive
Enter value for sql_id: 3ycnqgx5nc8nn
Enter value for child_no: 
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  3ycnqgx5nc8nn, child number 0
-------------------------------------
select product_name from oe.order_items o, oe.product_information p
where o.unit_price=15 and o.quantity > 1 and o.product_id = p.product_id
 
Plan hash value: 1553478007
 
----------------------------------------------------------------------------------------------------------
|   Id  | Operation                     | Name                   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------------
|     0 | SELECT STATEMENT              |                        |       |       |     7 (100)|          |
|  *  1 |  HASH JOIN                    |                        |     4 |   128 |     7   (0)| 00:00:01 |
|-    2 |   NESTED LOOPS                |                        |       |       |            |          |
|-    3 |    NESTED LOOPS               |                        |     4 |   128 |     7   (0)| 00:00:01 |
|-    4 |     STATISTICS COLLECTOR      |                        |       |       |            |          |
|  *  5 |      TABLE ACCESS STORAGE FULL| ORDER_ITEMS            |     4 |    48 |     3   (0)| 00:00:01 |
|- *  6 |     INDEX UNIQUE SCAN         | PRODUCT_INFORMATION_PK |     1 |       |     0   (0)|          |
|-    7 |    TABLE ACCESS BY INDEX ROWID| PRODUCT_INFORMATION    |     1 |    20 |     1   (0)| 00:00:01 |
|     8 |   TABLE ACCESS STORAGE FULL   | PRODUCT_INFORMATION    |     1 |    20 |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - access("O"."PRODUCT_ID"="P"."PRODUCT_ID")
   5 - storage(("O"."UNIT_PRICE"=15 AND "O"."QUANTITY">1))
       filter(("O"."UNIT_PRICE"=15 AND "O"."QUANTITY">1))
   6 - access("O"."PRODUCT_ID"="P"."PRODUCT_ID")
 
Note
-----
   - this is an adaptive plan (rows marked '-' are inactive)
 
 
Reoptimized plan:
-----------------
This cursor is marked for automatic reoptimization.  The plan that is
expected to be chosen on the next execution is displayed below.
 
Plan hash value: 1553478007
 
--------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name                | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |                     |    13 |   416 |     8   (0)| 00:00:01 |
|*  1 |  HASH JOIN                 |                     |    13 |   416 |     8   (0)| 00:00:01 |
|*  2 |   TABLE ACCESS STORAGE FULL| ORDER_ITEMS         |    13 |   156 |     3   (0)| 00:00:01 |
|   3 |   TABLE ACCESS STORAGE FULL| PRODUCT_INFORMATION |   288 |  5760 |     5   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - access("O"."PRODUCT_ID"="P"."PRODUCT_ID")
   2 - storage("O"."UNIT_PRICE"=15 AND "O"."QUANTITY">1)
       filter("O"."UNIT_PRICE"=15 AND "O"."QUANTITY">1)
 
Note
-----
   - this is an adaptive plan
 
60 rows selected.
 
SYS@db12c1> -- so the previous statement used an adaptive plan picking a HJ over the NLJ
SYS@db12c1>
SYS@db12c1> -- now let's turn off adaptive plans via the OPT_PARAM hint
SYS@db12c1> -- (set _optimizer_adaptive_plans or optimizer_adaptive_features to false)
SYS@db12c1>
SYS@db12c1> select /*+ OPT_PARAM('_optimizer_adaptive_plans','false') */ product_name
  2  from oe.order_items o, oe.product_information p
  3  where o.unit_price=15 and o.quantity > 1
  4  and o.product_id = p.product_id
  5  /
 
PRODUCT_NAME
--------------------------------------------------
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
 
13 rows selected.
 
SYS@db12c1> @x
 
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  04g4xyu3788qm, child number 0
-------------------------------------
select /*+ OPT_PARAM('_optimizer_adaptive_plans','false') */
product_name from oe.order_items o, oe.product_information p where
o.unit_price=15 and o.quantity > 1 and o.product_id = p.product_id
 
Plan hash value: 1255158658
 
-------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                   | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                        |       |       |     7 (100)|          |
|   1 |  NESTED LOOPS                |                        |       |       |            |          |
|   2 |   NESTED LOOPS               |                        |     4 |   128 |     7   (0)| 00:00:01 |
|*  3 |    TABLE ACCESS STORAGE FULL | ORDER_ITEMS            |     4 |    48 |     3   (0)| 00:00:01 |
|*  4 |    INDEX UNIQUE SCAN         | PRODUCT_INFORMATION_PK |     1 |       |     0   (0)|          |
|   5 |   TABLE ACCESS BY INDEX ROWID| PRODUCT_INFORMATION    |     1 |    20 |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   3 - storage(("O"."UNIT_PRICE"=15 AND "O"."QUANTITY">1))
       filter(("O"."UNIT_PRICE"=15 AND "O"."QUANTITY">1))
   4 - access("O"."PRODUCT_ID"="P"."PRODUCT_ID")
 
 
26 rows selected.
 
SYS@db12c1> -- So the plan has reverted to the NL Join and is not marked as adaptive

So, even though there is no specific hint at this point, the OPT_PARAM hint can be used to control this behavior on a statement by statement basis.
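For completeness, the same thing can be done with the documented parameter (I didn’t include a run of this variant above, but as mentioned, OPT_PARAM accepts it as well):

-- variant: use the documented parameter instead of the hidden one
select /*+ OPT_PARAM('optimizer_adaptive_features','false') */ product_name
  from oe.order_items o, oe.product_information p
 where o.unit_price=15 and o.quantity > 1
   and o.product_id = p.product_id;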

Q: How does AP (Adaptive Plans) treat query HINTS?
Q: Does adaptive join selection potentially override query hints?

A: As to whether AP can override hints, it does not appear that it can. If you specify a join method with a valid hint – a nested loops hint, for example – the optimizer will not allow AP to override that directive, and a 10053 (Wolfgang) trace will show that Adaptive Plans were bypassed because of the hint.

Here’s an example:

SYS@db12c1> select product_name
  2  from oe.order_items o, oe.product_information p
  3  where o.unit_price=15 and o.quantity > 1
  4  and o.product_id = p.product_id
  5  /
 
PRODUCT_NAME
--------------------------------------------------
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
 
13 rows selected.
 
SYS@db12c1> @x
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  3ycnqgx5nc8nn, child number 0
-------------------------------------
select product_name from oe.order_items o, oe.product_information p
where o.unit_price=15 and o.quantity > 1 and o.product_id = p.product_id
 
Plan hash value: 1553478007
 
--------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name                | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |                     |       |       |     8 (100)|          |
|*  1 |  HASH JOIN                 |                     |    13 |   416 |     8   (0)| 00:00:01 |
|*  2 |   TABLE ACCESS STORAGE FULL| ORDER_ITEMS         |    13 |   156 |     3   (0)| 00:00:01 |
|   3 |   TABLE ACCESS STORAGE FULL| PRODUCT_INFORMATION |   288 |  5760 |     5   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - access("O"."PRODUCT_ID"="P"."PRODUCT_ID")
   2 - storage(("O"."UNIT_PRICE"=15 AND "O"."QUANTITY">1))
       filter(("O"."UNIT_PRICE"=15 AND "O"."QUANTITY">1))
 
Note
-----
   - this is an adaptive plan
 
 
27 rows selected.
 
SYS@db12c1> -- from 10053
SYS@db12c1> !grep -i inflection adapt*trc
Searching for inflection point (join #1) between 0.00 and 12.76
AP: Computing costs for inflection point at min value 0.00
DP: Using binary search for inflection point search
DP: Costing Nested Loops Join for inflection point at card 0.00
DP: Costing Hash Join for inflection point at card 0.00
AP: Computing costs for inflection point at max value 12.76
DP: Costing Nested Loops Join for inflection point at card 12.76
DP: Costing Hash Join for inflection point at card 12.76
AP: Searching for inflection point at value 1.00
DP: Costing Nested Loops Join for inflection point at card 6.38
DP: Costing Hash Join for inflection point at card 6.38
AP: Searching for inflection point at value 6.38
DP: Costing Nested Loops Join for inflection point at card 3.19
DP: Costing Hash Join for inflection point at card 3.19
AP: Searching for inflection point at value 3.19
DP: Costing Nested Loops Join for inflection point at card 4.78
DP: Costing Hash Join for inflection point at card 4.78
AP: Searching for inflection point at value 4.78
DP: Costing Nested Loops Join for inflection point at card 5.58
DP: Costing Hash Join for inflection point at card 5.58
DP: Costing Nested Loops Join for inflection point at card 5.58
DP: Found point of inflection for NLJ vs. HJ: card = 5.58
 
SYS@db12c1> -- now with valid join hint
SYS@db12c1> select /*+ leading(o) use_nl(p) */ product_name
  2  from oe.order_items o, oe.product_information p
  3  where o.unit_price=15 and o.quantity > 1
  4  and o.product_id = p.product_id
  5  /
 
PRODUCT_NAME
--------------------------------------------------
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
Screws <B.28.S>
 
13 rows selected.
 
SYS@db12c1> @x
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  bytr421c0c2n7, child number 0
-------------------------------------
select /*+ leading(o) use_nl(p) */ product_name from oe.order_items o,
oe.product_information p where o.unit_price=15 and o.quantity > 1 and
o.product_id = p.product_id
 
Plan hash value: 1255158658
 
-------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                   | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                        |       |       |    16 (100)|          |
|   1 |  NESTED LOOPS                |                        |       |       |            |          |
|   2 |   NESTED LOOPS               |                        |    13 |   416 |    16   (0)| 00:00:01 |
|*  3 |    TABLE ACCESS STORAGE FULL | ORDER_ITEMS            |    13 |   156 |     3   (0)| 00:00:01 |
|*  4 |    INDEX UNIQUE SCAN         | PRODUCT_INFORMATION_PK |     1 |       |     0   (0)|          |
|   5 |   TABLE ACCESS BY INDEX ROWID| PRODUCT_INFORMATION    |     1 |    20 |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   3 - storage(("O"."UNIT_PRICE"=15 AND "O"."QUANTITY">1))
       filter(("O"."UNIT_PRICE"=15 AND "O"."QUANTITY">1))
   4 - access("O"."PRODUCT_ID"="P"."PRODUCT_ID")
 
 
26 rows selected.
 
SYS@db12c1> !grep -i inflection /u01/app/oracle/diag/rdbms/db12c/db12c1/trace/db12c1_ora_32529.trc
 
SYS@db12c1> !grep AP: non-adapt*.trc
AP: Adaptive joins bypassed for table P @ SEL$1 due to join method is hinted
AP: Adaptive joins bypassed for table P @ SEL$1 due to join method is hinted

So it appears that AP will not override valid hints (as evidenced by the lines in the 10053 trace file showing “Adaptive joins bypassed … due to join method is hinted”). Keep in mind though that this is only one test case, so it’s possible that in some circumstances AP could override hints – but now you know what to look for to validate. :)

Here are a few more hint related questions:

Q: Does adaptive optimization reduce the need for using hints?
A: Maybe. If you are hinting to avoid shortcomings in the optimizer where it chooses the wrong join method, you might not need to do that any more. Likewise, if you are hinting to force a particular distribution method for PX statements, you may not need to do that any more. It’s certainly a step in the right direction.

Q: Can we force dynamic sampling for a statement, regardless of what Oracle thinks it should do?
A: Yes, the DYNAMIC_SAMPLING hint has been available since 9.2.
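For example (a quick sketch reusing the tables from the earlier examples), the statement-level form of the hint overrides whatever optimizer_dynamic_sampling is set to for that one statement:

-- sketch: force dynamic sampling at level 4 for this statement only
select /*+ dynamic_sampling(4) */ count(*)
  from oe.order_items o
 where o.unit_price=15 and o.quantity > 1;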

Q: If we had a SQL Plan Baseline set for a particular SQL in 11g and we were to upgrade to 12c, would the Baseline be used or would adaptive plans be used?
A: The hints in the Baseline would be used and should reproduce the 11g plan. See the example above where valid hints disable AP.

Q: Is there a way to grab a previous plan (good) using the profile technique and assign it to the current statement that changed the exec plan which is bad? So far I have been doing this manually using your scripts.
A: Yes – Profiles are just a collection of hints that get applied to a statement. So they can be used to control plans even if AP is enabled.

Q: How does this play with SQL Plan Management?
A: Final plans can be captured and baselines created for them. This feature behaves as expected. Subsequent parses will try to reproduce the final plan (using hints in the baseline if necessary).
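As a quick sketch of that workflow (hypothetical sql_id and plan_hash_value – substitute your own), once the final plan is in the cursor cache it can be loaded into a baseline like any other plan:

-- sketch: capture the final plan from the cursor cache as a SQL Plan Baseline
set serveroutput on
declare
  l_plans pls_integer;
begin
  l_plans := dbms_spm.load_plans_from_cursor_cache(
               sql_id          => '&sql_id',
               plan_hash_value => &plan_hash_value);
  dbms_output.put_line(l_plans || ' plan(s) loaded');
end;
/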

So that’s it for the hint related questions. The final post in this series will cover the remainder of the questions.

12c Adaptive Optimization – Part 1

Last week I did a webinar on 12c Adaptive Optimization. The talk was recorded: the slides are here: 12c Adaptive Optimization V2 PDF, and the recording can be found here: 12c Adaptive Optimization Recording. There were a number of follow-up questions and emails, so I thought I’d summarize them here. Since there were so many questions (I guess I must not have done that good of a job of explaining how it works), I will break them up into 2 or 3 posts. So for this first one I will just cut and paste from a couple of email follow-ups.

Here’s the first question(s):

Hi Kerry,

I followed your webinar today, and I have two questions about it.
First, what will happened if the plan changed during the fetch operation ?
Is it possible ? If then, how does it know which rows has been already fetched ?

Second question is more a practical question. Indeed, in the examples you showed, it uses basic queries, but in the case you have an execution plan with more than hundreds of operation, and if during the execution an adaptive plan is decided with changes in join method, the plan can change a lot.
If we want to identify the step that will modify the plan, do we have to identify it as the step just before the statistic collector op, or will it be more complex to identify ?

Thanks in advance for you answers

And here’s my Answer(s):

Hi

I’ll have to find a little time to test the prepare, open, fetch stuff to verify where the initial rows are actually retrieved, but if I had to guess it would be on the first fetch call, regardless of how many records the fetch requests. It could also be on the open, though. It’s interesting to see how the optimizer comes up with the inflection point, by the way (although I don’t know enough about the internal algorithm to know exactly what they are doing, it’s clear they are narrowing in by splitting the difference – a binary search). But here’s a little output from a wolfgang (10053) trace file.

SYS@db12c1> !grep inflection adaptive.trc
Searching for inflection point (join #1) between 0.00 and 12.76
AP: Computing costs for inflection point at min value 0.00
DP: Using binary search for inflection point search
DP: Costing Nested Loops Join for inflection point at card 0.00
DP: Costing Hash Join for inflection point at card 0.00
AP: Computing costs for inflection point at max value 12.76
DP: Costing Nested Loops Join for inflection point at card 12.76
DP: Costing Hash Join for inflection point at card 12.76
AP: Searching for inflection point at value 1.00
DP: Costing Nested Loops Join for inflection point at card 6.38
DP: Costing Hash Join for inflection point at card 6.38
AP: Searching for inflection point at value 6.38
DP: Costing Nested Loops Join for inflection point at card 3.19
DP: Costing Hash Join for inflection point at card 3.19
AP: Searching for inflection point at value 3.19
DP: Costing Nested Loops Join for inflection point at card 4.78
DP: Costing Hash Join for inflection point at card 4.78
AP: Searching for inflection point at value 4.78
DP: Costing Nested Loops Join for inflection point at card 5.58
DP: Costing Hash Join for inflection point at card 5.58
DP: Costing Nested Loops Join for inflection point at card 5.58
DP: Found point of inflection for NLJ vs. HJ: card = 5.58


On the identification of what’s going on in more complicated plans, the general pattern appears to be like this:

Hash Join
  NL Join
    Statistics Collector

This is the case regardless of whether the final plan ends up using the HJ or the NL. In some cases the NL is abandoned, in other cases the HJ is abandoned. (By the way, the optimization only appears to kick in on steps where the default plan would use a NL.)

* Note that I was wrong in my assertion that the optimization only kicks in for NLJ steps, as pointed out by Stephan in the comments section. It can kick in on HJ steps as well, although they don’t appear nearly as often. :)

So anyway, a NL would look like this:

- Hash Join
    NL Join
-     Statistics Collector

And a HJ like this:

  Hash Join 
-   NL Join
-     Statistics Collector

Here’s an example of a more complex plan – in this case a couple of hash joins are discarded in favor of NL.


SYS@db12c1> @dplan_adaptive
Enter value for sql_id: 95stx63r9dc34
Enter value for child_no: 1
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  95stx63r9dc34, child number 1
-------------------------------------
select /* test dp2c6pq28u5jr */ count(*), sum(blocks) FROM dba_segments
where    OWNER = 'XDB' and TABLESPACE_NAME = 'SYSAUX'
 
Plan hash value: 1481365994
 
----------------------------------------------------------------------------------------------------------------
|   Id  | Operation                               | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------------------
|     0 | SELECT STATEMENT                        |                    |       |       |  1441 (100)|          |
|     1 |  SORT AGGREGATE                         |                    |     1 |   104 |            |          |
|     2 |   VIEW                                  | SYS_DBA_SEGS       |     9 |   936 |  1441   (1)| 00:00:01 |
|     3 |    UNION-ALL                            |                    |       |       |            |          |
|     4 |     NESTED LOOPS                        |                    |     6 |   852 |  1356   (1)| 00:00:01 |
|     5 |      NESTED LOOPS                       |                    |     6 |   810 |  1356   (1)| 00:00:01 |
|  *  6 |       HASH JOIN                         |                    |    67 |  6767 |  1350   (1)| 00:00:01 |
|  *  7 |        FILTER                           |                    |       |       |            |          |
|  *  8 |         HASH JOIN RIGHT OUTER           |                    |   278 | 11954 |    89   (0)| 00:00:01 |
|     9 |          TABLE ACCESS STORAGE FULL      | USER$              |    71 |  1278 |     3   (0)| 00:00:01 |
|    10 |          NESTED LOOPS                   |                    | 19743 |   482K|    86   (0)| 00:00:01 |
|    11 |           TABLE ACCESS BY INDEX ROWID   | TS$                |     1 |    11 |     1   (0)| 00:00:01 |
|  * 12 |            INDEX UNIQUE SCAN            | I_TS1              |     1 |       |     0   (0)|          |
|    13 |           TABLE ACCESS STORAGE FULL     | OBJ$               | 19743 |   269K|    85   (0)| 00:00:01 |
|    14 |        VIEW                             | SYS_OBJECTS        |  4731 |   267K|  1261   (1)| 00:00:01 |
|    15 |         UNION-ALL                       |                    |       |       |            |          |
|  * 16 |          TABLE ACCESS STORAGE FULL      | TAB$               |  1533 | 33726 |   312   (0)| 00:00:01 |
|    17 |          TABLE ACCESS STORAGE FULL      | TABPART$           |   262 |  4192 |     5   (0)| 00:00:01 |
|    18 |          TABLE ACCESS STORAGE FULL      | CLU$               |    10 |   140 |   312   (0)| 00:00:01 |
|  * 19 |          TABLE ACCESS STORAGE FULL      | IND$               |  2164 | 41116 |   312   (0)| 00:00:01 |
|    20 |          TABLE ACCESS STORAGE FULL      | INDPART$           |   194 |  3104 |     4   (0)| 00:00:01 |
|  * 21 |          TABLE ACCESS STORAGE FULL      | LOB$               |   512 | 10752 |   309   (0)| 00:00:01 |
|    22 |          TABLE ACCESS STORAGE FULL      | TABSUBPART$        |    32 |   480 |     2   (0)| 00:00:01 |
|    23 |          TABLE ACCESS STORAGE FULL      | INDSUBPART$        |     1 |    52 |     2   (0)| 00:00:01 |
|    24 |          TABLE ACCESS STORAGE FULL      | LOBFRAG$           |    23 |   414 |     2   (0)| 00:00:01 |
|  * 25 |       TABLE ACCESS CLUSTER              | SEG$               |     1 |    34 |     1   (0)| 00:00:01 |
|  * 26 |        INDEX UNIQUE SCAN                | I_FILE#_BLOCK#     |     1 |       |     0   (0)|          |
|  * 27 |      INDEX UNIQUE SCAN                  | I_FILE2            |     1 |     7 |     0   (0)|          |
|  * 28 |     FILTER                              |                    |       |       |            |          |
|  * 29 |      HASH JOIN RIGHT OUTER              |                    |     3 |   405 |    85   (0)| 00:00:01 |
|    30 |       TABLE ACCESS STORAGE FULL         | USER$              |    71 |  1278 |     3   (0)| 00:00:01 |
|    31 |       VIEW                              | VW_JF_SET$A8769BAB |   246 | 28782 |    82   (0)| 00:00:01 |
|    32 |        UNION-ALL                        |                    |       |       |            |          |
|    33 |         NESTED LOOPS                    |                    |     4 |   272 |    33   (0)| 00:00:01 |
|- * 34 |          HASH JOIN                      |                    |     4 |   244 |    33   (0)| 00:00:01 |
|    35 |           NESTED LOOPS                  |                    |     4 |   244 |    33   (0)| 00:00:01 |
|-   36 |            STATISTICS COLLECTOR         |                    |       |       |            |          |
|    37 |             NESTED LOOPS                |                    |    36 |  1044 |     3   (0)| 00:00:01 |
|    38 |              TABLE ACCESS BY INDEX ROWID| TS$                |     1 |    11 |     1   (0)| 00:00:01 |
|  * 39 |               INDEX UNIQUE SCAN         | I_TS1              |     1 |       |     0   (0)|          |
|  * 40 |              TABLE ACCESS STORAGE FULL  | UNDO$              |    36 |   648 |     2   (0)| 00:00:01 |
|  * 41 |            TABLE ACCESS CLUSTER         | SEG$               |     1 |    32 |     1   (0)| 00:00:01 |
|  * 42 |             INDEX UNIQUE SCAN           | I_FILE#_BLOCK#     |     1 |       |     0   (0)|          |
|- * 43 |           TABLE ACCESS STORAGE FULL     | SEG$               |     1 |    32 |     1   (0)| 00:00:01 |
|  * 44 |          INDEX UNIQUE SCAN              | I_FILE2            |     1 |     7 |     0   (0)|          |
|- * 45 |         HASH JOIN                       |                    |   241 | 13255 |    25   (0)| 00:00:01 |
|    46 |          NESTED LOOPS                   |                    |   241 | 13255 |    25   (0)| 00:00:01 |
|-   47 |           STATISTICS COLLECTOR          |                    |       |       |            |          |
|    48 |            NESTED LOOPS                 |                    |     5 |    90 |     2   (0)| 00:00:01 |
|    49 |             TABLE ACCESS BY INDEX ROWID | TS$                |     1 |    11 |     1   (0)| 00:00:01 |
|  * 50 |              INDEX UNIQUE SCAN          | I_TS1              |     1 |       |     0   (0)|          |
|    51 |             INDEX FULL SCAN             | I_FILE2            |     5 |    35 |     1   (0)| 00:00:01 |
|  * 52 |           TABLE ACCESS CLUSTER          | SEG$               |    48 |  1776 |     5   (0)| 00:00:01 |
|  * 53 |            INDEX RANGE SCAN             | I_FILE#_BLOCK#     |     1 |       |     2   (0)| 00:00:01 |
|- * 54 |          TABLE ACCESS STORAGE FULL      | SEG$               |    48 |  1776 |     5   (0)| 00:00:01 |
|- * 55 |         HASH JOIN                       |                    |     1 |    55 |    25   (0)| 00:00:01 |
|    56 |          NESTED LOOPS                   |                    |     1 |    55 |    25   (0)| 00:00:01 |
|-   57 |           STATISTICS COLLECTOR          |                    |       |       |            |          |
|    58 |            NESTED LOOPS                 |                    |     5 |    90 |     2   (0)| 00:00:01 |
|    59 |             TABLE ACCESS BY INDEX ROWID | TS$                |     1 |    11 |     1   (0)| 00:00:01 |
|  * 60 |              INDEX UNIQUE SCAN          | I_TS1              |     1 |       |     0   (0)|          |
|    61 |             INDEX FULL SCAN             | I_FILE2            |     5 |    35 |     1   (0)| 00:00:01 |
|  * 62 |           TABLE ACCESS CLUSTER          | SEG$               |     1 |    37 |     5   (0)| 00:00:01 |
|  * 63 |            INDEX RANGE SCAN             | I_FILE#_BLOCK#     |     1 |       |     2   (0)| 00:00:01 |
|- * 64 |          TABLE ACCESS STORAGE FULL      | SEG$               |     1 |    37 |     5   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   6 - access("O"."OBJ#"="SO"."OBJECT_ID" AND "O"."TYPE#"="SO"."OBJECT_TYPE_ID")
   7 - filter(NVL("U"."NAME",'SYS')='XDB')
   8 - access("O"."OWNER#"="U"."USER#")
  12 - access("TS"."NAME"='SYSAUX')
  16 - filter(BITAND("T"."PROPERTY",1024)=0)
  19 - filter(("I"."TYPE#"=1 OR "I"."TYPE#"=2 OR "I"."TYPE#"=3 OR "I"."TYPE#"=4 OR "I"."TYPE#"=6 OR
              "I"."TYPE#"=7 OR "I"."TYPE#"=8 OR "I"."TYPE#"=9))
  21 - filter((BITAND("L"."PROPERTY",64)=0 OR BITAND("L"."PROPERTY",128)=128))
  25 - filter("S"."TYPE#"="SO"."SEGMENT_TYPE_ID")
  26 - access("S"."TS#"="TS"."TS#" AND "S"."FILE#"="SO"."HEADER_FILE" AND
              "S"."BLOCK#"="SO"."HEADER_BLOCK")
       filter("S"."TS#"="SO"."TS_NUMBER")
  27 - access("S"."TS#"="F"."TS#" AND "S"."FILE#"="F"."RELFILE#")
  28 - filter(NVL("U"."NAME",'SYS')='XDB')
  29 - access("ITEM_1"="U"."USER#")
  34 - access("S"."TS#"="TS"."TS#" AND "S"."TS#"="UN"."TS#" AND "S"."BLOCK#"="UN"."BLOCK#" AND
              "S"."FILE#"="UN"."FILE#")
  39 - access("TS"."NAME"='SYSAUX')
  40 - storage("UN"."STATUS$"<>1)
       filter("UN"."STATUS$"<>1)
  41 - filter(("S"."TYPE#"=1 OR "S"."TYPE#"=10))
  42 - access("S"."TS#"="UN"."TS#" AND "S"."FILE#"="UN"."FILE#" AND "S"."BLOCK#"="UN"."BLOCK#")
       filter("S"."TS#"="TS"."TS#")
  43 - filter(("S"."TYPE#"=1 OR "S"."TYPE#"=10))
  44 - access("UN"."TS#"="F"."TS#" AND "UN"."FILE#"="F"."RELFILE#")
  45 - access("S"."FILE#"="F"."RELFILE#" AND "S"."TS#"="F"."TS#" AND "S"."TS#"="TS"."TS#")
  50 - access("TS"."NAME"='SYSAUX')
  52 - filter(("S"."TYPE#"<>6 AND "S"."TYPE#"<>5 AND "S"."TYPE#"<>8 AND "S"."TYPE#"<>10 AND
              "S"."TYPE#"<>11 AND "S"."TYPE#"<>1))
  53 - access("S"."TS#"="TS"."TS#" AND "S"."FILE#"="F"."RELFILE#")
       filter("S"."TS#"="F"."TS#")
  54 - filter(("S"."TYPE#"<>6 AND "S"."TYPE#"<>5 AND "S"."TYPE#"<>8 AND "S"."TYPE#"<>10 AND
              "S"."TYPE#"<>11 AND "S"."TYPE#"<>1))
  55 - access("S"."FILE#"="F"."RELFILE#" AND "S"."TS#"="F"."TS#" AND "S"."TS#"="TS"."TS#")
  60 - access("TS"."NAME"='SYSAUX')
  62 - filter("S"."TYPE#"=11)
  63 - access("S"."TS#"="TS"."TS#" AND "S"."FILE#"="F"."RELFILE#")
       filter("S"."TS#"="F"."TS#")
  64 - filter("S"."TYPE#"=11)
 
Note
-----
   - this is an adaptive plan (rows marked '-' are inactive)

Hope that helps.

Kerry



The second email:


Thanks, Kerry!
So as i understand, just one execution can now create several child cursors with different final plans? (As many collectors there are in the plan?)
They will have different plan hash values?
And how other sessions will choose child for them during execution which creates many child cursors? Especially interesting, how we will analyze such plans through AWR if statistics will be splitted between several plan hash values…

And I said:

No – only one cursor is created. It can have multiple adaptations – i.e. there may be multiple places where a decision between NL and HJ is made – but in the end it decides on one plan and that’s it. A new cursor will only be created if something more normal triggers a new cursor (adaptive cursor sharing, optimizer environment changes, cardinality feedback kicks in, etc…)

Kerry
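To follow up on the AWR part of that question: since each child cursor ends up with exactly one final plan, the AWR statistics roll up under that plan’s hash value just like they do for any other statement. Here’s a rough sketch of how I’d look at it (plug in your own sql_id):

-- rough sketch: AWR execution statistics broken out by plan hash value
select plan_hash_value, count(*) snaps, sum(executions_delta) execs
  from dba_hist_sqlstat
 where sql_id = '&sql_id'
 group by plan_hash_value;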

That’s it for now. In Part 2 I’ll address some questions regarding interaction with hints.

APPEND_VALUES and SYS_DL_CURSOR Hints with HCC

The APPEND_VALUES hint was introduced in 11.2 to allow direct path inserts with bind variables using the VALUES clause, e.g.:

INSERT INTO XYZ (COL1, COL2) VALUES (:A, :B);

The feature was designed to allow bulk inserting of arrays of hundreds or thousands of records in a single insert statement. Prior to 11.2, there was no documented way to do a direct path insert other than with the APPEND hint, which only worked on inserts that used a SELECT clause, e.g.:

INSERT INTO XYZ SELECT * from ZYX;

There was, however, an undocumented hint (SYS_DL_CURSOR) which did a bulk insert of sorts (not a full append). You may have seen this hint if you use Informatica. I was recently asked about the use of Informatica with Hybrid Columnar Compression (HCC) on Exadata, which prompted a little research on these two methods of loading data and whether they are compatible with HCC. So first off, here’s a test with the APPEND_VALUES hint (using my check_row_comp.sql script):

 
KSO@dbm1> !cat bulk_insert1.sql
 
CREATE TABLE t1
(c1    NUMBER,
 c2    NUMBER,
 c3    VARCHAR2(50),
 c4    VARCHAR2(50),
 c5    DATE,
 c6    DATE)
compress for query high;
 
DECLARE
  -- Define a collection
  TYPE t1_tbl_type IS TABLE OF t1%ROWTYPE;
  t1_tbl    t1_tbl_type := t1_tbl_type();
BEGIN
 
  -- Populate the collection
  FOR i IN 1..32000 LOOP
    t1_tbl.EXTEND;
    t1_tbl(i).c1 := i;
    t1_tbl(i).c2 := i*i;
    t1_tbl(i).c3 := 'i=' || TO_CHAR(i);
    t1_tbl(i).c4 := 'i*i=' || TO_CHAR(i*i);
    t1_tbl(i).c5 := SYSDATE;
    t1_tbl(i).c6 := SYSDATE;
  END LOOP;
 
  -- Bulk Insert the collection into table T1
  FORALL i IN 1..t1_tbl.COUNT
    INSERT /*+ append_values */ INTO t1
      (c1,
       c2,
       c3,
       c4,
       c5,
       c6)
    VALUES
      (t1_tbl(i).c1,
       t1_tbl(i).c2,
       t1_tbl(i).c3,
       t1_tbl(i).c4,
       t1_tbl(i).c5,
       t1_tbl(i).c6);
 
END;
/
 
KSO@dbm1> @bulk_insert1
 
Table created.
 
 
PL/SQL procedure successfully completed.
 
KSO@dbm1> select count(*) from t1;
select count(*) from t1
                     *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel
 
 
KSO@dbm1> -- correct behavior for direct path insert
KSO@dbm1> commit;
 
Commit complete.
 
KSO@dbm1> select rowid from t1 where rownum < 10;
 
ROWID
------------------
AAARlqAAIAAB/krAAA
AAARlqAAIAAB/krAAB
AAARlqAAIAAB/krAAC
AAARlqAAIAAB/krAAD
AAARlqAAIAAB/krAAE
AAARlqAAIAAB/krAAF
AAARlqAAIAAB/krAAG
AAARlqAAIAAB/krAAH
AAARlqAAIAAB/krAAI
 
9 rows selected.
 
KSO@dbm1> @check_row_comp.sql
Enter value for owner: KSO
Enter value for table_namr: T1
Enter value for rowid: AAARlqAAIAAB/krAAH
 
OLD_ROWID            COMPRESSION_TYPE
-------------------- -------------------------
8.522539.7           HCC Query High

So the APPEND_VALUES hint behaves as expected: it does a proper append and HCC is applied. There are a couple of corner cases worth mentioning, though. First, small inserts (a single row, for example) will not trigger the HCC processing. See the following example.

 
KSO@dbm1> insert /*+ append_values */ into t1 (c1) values (-1);
 
1 row created.
 
KSO@dbm1> select count(*) from t1 where c1 = -1;
select count(*) from t1 where c1 = -1
                     *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel
 
 
KSO@dbm1> commit;
 
Commit complete.
 
KSO@dbm1> select rowid from t1 where c1 = -1;
 
ROWID
------------------
AAARlqAAIAAEfDLAAA
 
KSO@dbm1> @check_row_comp.sql
Enter value for owner: KSO
Enter value for table_namr: T1
Enter value for rowid: AAARlqAAIAAEfDLAAA
 
OLD_ROWID            COMPRESSION_TYPE
-------------------- -------------------------
8.1175755.0          No Compression

So a single row insert was done in direct path mode, but HCC was not applied due to the small size of the insert.
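
By the way, check_row_comp.sql isn’t reproduced in this post. If you want to build something similar, DBMS_COMPRESSION.GET_COMPRESSION_TYPE will tell you how an individual row is physically stored. Here’s a minimal sketch of the idea – it is not the actual script, and it skips the OLD_ROWID decoding you can see in the output above:

set serveroutput on
-- Report the compression type of a single row, identified by owner, table
-- name and rowid (SQL*Plus will prompt for the three substitution variables).
declare
  l_type number;
begin
  l_type := dbms_compression.get_compression_type(
              ownname => upper('&owner'),
              tabname => upper('&table_name'),
              row_id  => chartorowid('&row_id'));
  dbms_output.put_line(
    case l_type
      when dbms_compression.comp_nocompress       then 'No Compression'
      when dbms_compression.comp_for_oltp         then 'OLTP Compression'
      when dbms_compression.comp_for_query_low    then 'HCC Query Low'
      when dbms_compression.comp_for_query_high   then 'HCC Query High'
      when dbms_compression.comp_for_archive_low  then 'HCC Archive Low'
      when dbms_compression.comp_for_archive_high then 'HCC Archive High'
      else 'Unknown compression type: '||l_type
    end);
end;
/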

The second corner case is that objects owned by SYS and stored in the SYSTEM tablespace appear to be ineligible for HCC processing. The example below shows the same test with T1 owned by SYS; it also shows that an ALTER TABLE MOVE does not compress the rows when the object is owned by SYS and stored in SYSTEM. Note: there could be something else preventing the compression, but objects in SYSTEM that are not owned by SYS, and objects owned by SYS but not stored in SYSTEM, both seem to behave correctly. As Tanel says, “the rabbit hole always gets deeper”. (I’ll leave those tests as an exercise for the reader rather than clutter up this post; there’s a rough sketch of one variation after the SYS example below.) Here’s the simple case of a table owned by SYS and stored in the SYSTEM tablespace.

SYS@dbm1> @bulk_insert1
 
Table created.
 
 
PL/SQL procedure successfully completed.
 
SYS@dbm1> select count(*) from t1;
select count(*) from t1
                     *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel
 
 
SYS@dbm1> commit;
 
Commit complete.
 
SYS@dbm1> select rowid from t1 where rownum < 10;
 
ROWID
------------------
AAARltAABAAAUnRAAA
AAARltAABAAAUnRAAB
AAARltAABAAAUnRAAC
AAARltAABAAAUnRAAD
AAARltAABAAAUnRAAE
AAARltAABAAAUnRAAF
AAARltAABAAAUnRAAG
AAARltAABAAAUnRAAH
AAARltAABAAAUnRAAI
 
9 rows selected.
 
SYS@dbm1> @check_row_comp.sql
Enter value for owner: SYS
Enter value for table_namr: T1
Enter value for rowid: AAARltAABAAAUnRAAG
 
OLD_ROWID            COMPRESSION_TYPE
-------------------- -------------------------
1.84433.6            No Compression
 
SYS@dbm1> @table_size
Enter value for owner: SYS
Enter value for table_name: T1
Enter value for type: 
Enter value for tablespace_name: 
 
OWNER                SEGMENT_NAME                   TYPE               TOTALSIZE_MEGS TABLESPACE_NAME
-------------------- ------------------------------ ------------------ -------------- ------------------------------
SYS                  T1                             TABLE                         2.0 SYSTEM
                                                                       --------------
sum                                                                               2.0
 
SYS@dbm1> alter table sys.t1 move compress for archive high;
 
Table altered.
 
SYS@dbm1> @table_size
Enter value for owner: SYS
Enter value for table_name: T1
Enter value for type: 
Enter value for tablespace_name: 
 
OWNER                SEGMENT_NAME                   TYPE               TOTALSIZE_MEGS TABLESPACE_NAME
-------------------- ------------------------------ ------------------ -------------- ------------------------------
SYS                  T1                             TABLE                         2.0 SYSTEM
                                                                       --------------
sum                                                                               2.0
 
SYS@dbm1> select rowid from t1 where rownum < 10;
 
ROWID
------------------
AAARluAABAAAXNRAAA
AAARluAABAAAXNRAAB
AAARluAABAAAXNRAAC
AAARluAABAAAXNRAAD
AAARluAABAAAXNRAAE
AAARluAABAAAXNRAAF
AAARluAABAAAXNRAAG
AAARluAABAAAXNRAAH
AAARluAABAAAXNRAAI
 
9 rows selected.
 
SYS@dbm1> @check_row_comp.sql
Enter value for owner: SYS
Enter value for table_namr: T1
Enter value for rowid: AAARluAABAAAXNRAAG
 
OLD_ROWID            COMPRESSION_TYPE
-------------------- -------------------------
1.95057.6            No Compression

So again, in this case, it is clearly a direct path insert, but HCC is not applied.
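
If you want to chase the rabbit a little further yourself, one of the variations I mentioned looks roughly like this – the same bulk_insert1.sql test, but with the SYS-owned table created in a non-SYSTEM tablespace (USERS here is just a placeholder for whatever tablespace you have handy, and of course don’t create test objects under SYS on a database you care about):

-- Variation on bulk_insert1.sql: SYS-owned table in a non-SYSTEM tablespace.
CREATE TABLE sys.t1
(c1    NUMBER,
 c2    NUMBER,
 c3    VARCHAR2(50),
 c4    VARCHAR2(50),
 c5    DATE,
 c6    DATE)
tablespace users
compress for query high;

-- ...then run the same FORALL / append_values block from bulk_insert1.sql,
-- commit, grab a rowid, and check it with check_row_comp.sql.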

So what about the older SYS_DL_CURSOR hint? Well, the short story is that it doesn’t do a real direct path load, so it doesn’t work with HCC. Here’s a quick demonstration (back in a regular user account).

KSO@dbm1> !cat bulk_insert2.sql
CREATE TABLE t1
(c1    NUMBER,
 c2    NUMBER,
 c3    VARCHAR2(50),
 c4    VARCHAR2(50),
 c5    DATE,
 c6    DATE)
compress for query high;
 
DECLARE
  -- Define a collection
  TYPE t1_tbl_type IS TABLE OF t1%ROWTYPE;
  t1_tbl    t1_tbl_type := t1_tbl_type();
BEGIN
 
  -- Populate the collection
  FOR i IN 1..32000 LOOP
    t1_tbl.EXTEND;
    t1_tbl(i).c1 := i;
    t1_tbl(i).c2 := i*i;
    t1_tbl(i).c3 := 'i=' || TO_CHAR(i);
    t1_tbl(i).c4 := 'i*i=' || TO_CHAR(i*i);
    t1_tbl(i).c5 := SYSDATE;
    t1_tbl(i).c6 := SYSDATE;
  END LOOP;
 
  -- Bulk Insert the collection into table T1
  FORALL i IN 1..t1_tbl.COUNT
    INSERT /*+ sys_dl_cursor */ INTO t1
      (c1,
       c2,
       c3,
       c4,
       c5,
       c6)
    VALUES
      (t1_tbl(i).c1,
       t1_tbl(i).c2,
       t1_tbl(i).c3,
       t1_tbl(i).c4,
       t1_tbl(i).c5,
       t1_tbl(i).c6);
 
END;
/
 
KSO@dbm1> drop table t1;
 
Table dropped.
 
KSO@dbm1> @bulk_insert2
 
Table created.
 
 
PL/SQL procedure successfully completed.
 
KSO@dbm1> select count(*) from t1;
 
  COUNT(*)
----------
     32000
 
KSO@dbm1> -- Not a good sign 
KSO@dbm1> -- direct path should not allow a select without a commit or rollback
KSO@dbm1> -- let's check anyway
KSO@dbm1> select rowid from t1 where rownum < 10;
 
ROWID
------------------
AAARlzAAIAAB/krAAA
AAARlzAAIAAB/krAAB
AAARlzAAIAAB/krAAC
AAARlzAAIAAB/krAAD
AAARlzAAIAAB/krAAE
AAARlzAAIAAB/krAAF
AAARlzAAIAAB/krAAG
AAARlzAAIAAB/krAAH
AAARlzAAIAAB/krAAI
 
9 rows selected.
 
KSO@dbm1> @check_row_comp.sql
Enter value for owner: KSO
Enter value for table_namr: T1
Enter value for rowid: AAARlzAAIAAB/krAAG
 
OLD_ROWID            COMPRESSION_TYPE
-------------------- -------------------------
8.522539.6           No Compression

So no love with SYS_DL_CURSOR. Of course you can still get HCC the more conventional way: load the data, then do an ALTER TABLE MOVE or a partition exchange; a minimal sketch of that is below. As always, please let me know if you have any questions or comments.
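
For reference, the load-and-move approach looks something like this. It’s just a sketch – some_staging_table and t1_idx are placeholder names, and a partition exchange works along similar lines for partitioned tables:

-- Conventional (or SYS_DL_CURSOR) load first...
INSERT INTO t1 SELECT * FROM some_staging_table;
COMMIT;

-- ...then rebuild the segment so HCC is applied.
ALTER TABLE t1 MOVE COMPRESS FOR QUERY HIGH;

-- The move leaves any indexes on the table unusable, so rebuild them, e.g.:
-- ALTER INDEX t1_idx REBUILD;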