12c Adaptive Optimization – Part 1
Last week I did a webinar on 12c Adaptive Optimization. The talk was recorded. The slides are here: 12c Adaptive Optimization V2 PDF. The recording can be found here: 12c Adaptive Optimization Recording. There were a number of follow-up questions and emails, so I thought I’d summarize here. Since there were so many questions (I guess I must not have done that good a job of explaining how it works), I will break them up into 2 or 3 posts. For this first one I will just cut and paste from a couple of email follow-ups.
Here’s the first question(s):
Hi Kerry,
I followed your webinar today, and I have two questions about it.
First, what will happen if the plan changes during the fetch operation? Is that possible? If so, how does it know which rows have already been fetched?
The second question is more practical. The examples you showed use basic queries, but if you have an execution plan with hundreds of operations, and during execution an adaptive plan is decided on with changes in join method, the plan can change a lot.
If we want to identify the step that will modify the plan, can we simply look for the step just before the statistics collector operation, or will it be more complex to identify? Thanks in advance for your answers
And here’s my Answer(s):
Hi
I’ll have to find a little time to test the prepare, open, fetch stuff to verify where the initial rows are actually retrieved, but if I had to guess it would be on the first fetch call, regardless of how many records the fetch requests. It could also be on the open, though. By the way, it’s interesting to see how the optimizer comes up with the inflection point (I don’t know enough about the internal algorithm to know exactly what they are doing, but it’s clear they are narrowing in by splitting the difference). Here’s a little output from a wolfgang (10053) trace file:
SYS@db12c1> !grep inflection adaptive.trc
Searching for inflection point (join #1) between 0.00 and 12.76
AP: Computing costs for inflection point at min value 0.00
DP: Using binary search for inflection point search
DP: Costing Nested Loops Join for inflection point at card 0.00
DP: Costing Hash Join for inflection point at card 0.00
AP: Computing costs for inflection point at max value 12.76
DP: Costing Nested Loops Join for inflection point at card 12.76
DP: Costing Hash Join for inflection point at card 12.76
AP: Searching for inflection point at value 1.00
DP: Costing Nested Loops Join for inflection point at card 6.38
DP: Costing Hash Join for inflection point at card 6.38
AP: Searching for inflection point at value 6.38
DP: Costing Nested Loops Join for inflection point at card 3.19
DP: Costing Hash Join for inflection point at card 3.19
AP: Searching for inflection point at value 3.19
DP: Costing Nested Loops Join for inflection point at card 4.78
DP: Costing Hash Join for inflection point at card 4.78
AP: Searching for inflection point at value 4.78
DP: Costing Nested Loops Join for inflection point at card 5.58
DP: Costing Hash Join for inflection point at card 5.58
DP: Costing Nested Loops Join for inflection point at card 5.58
DP: Found point of inflection for NLJ vs. HJ: card = 5.58
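By the way, if you want to see this for yourself, a 10053 trace on a statement that ends up with an adaptive plan will show the same inflection point search. Here’s a minimal sketch of how to generate one – the statement and table names are made up, so substitute a join of your own, and your trace file location will differ:

alter session set tracefile_identifier = 'adaptive';
alter session set events '10053 trace name context forever, level 1';

select /* adaptive_test */ count(*) from my_driver d, my_probe p where d.id = p.d_id and d.status = 'ACTIVE';

alter session set events '10053 trace name context off';

The statement has to be hard parsed for the optimizer trace to be written, so change the comment (or flush the cursor) if you’ve already run it, and then grep the resulting trace file for "inflection" as above.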
On the identification of what’s going on in more complicated plans, the general pattern appears to be like this:
Hash Join
  NL Join
    Statistics Collector

This is the pattern regardless of whether the final plan ends up using the HJ or the NL. In some cases the NL is abandoned, in other cases the HJ is abandoned. (By the way, the optimization only appears to kick in on steps where the default plan would use a NL.)
* Note that I was wrong in my assertion that the optimization only kicks in for NLJ steps, as Stefan pointed out in the comments section. It can kick in on HJ steps as well, although they don’t appear nearly as often. 🙂
So anyway, a NL would look like this (rows marked '-' are inactive):

-  Hash Join
     NL Join
-      Statistics Collector

And a HJ like this:

   Hash Join
-    NL Join
-      Statistics Collector

Here’s an example of a more complex plan – in this case a couple of hash joins are discarded in favor of NL.
SYS@db12c1> @dplan_adaptive
Enter value for sql_id: 95stx63r9dc34
Enter value for child_no: 1
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 95stx63r9dc34, child number 1
-------------------------------------
select /* test dp2c6pq28u5jr */ count(*), sum(blocks) FROM dba_segments
where OWNER = 'XDB' and TABLESPACE_NAME = 'SYSAUX'
Plan hash value: 1481365994
----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 1441 (100)| |
| 1 | SORT AGGREGATE | | 1 | 104 | | |
| 2 | VIEW | SYS_DBA_SEGS | 9 | 936 | 1441 (1)| 00:00:01 |
| 3 | UNION-ALL | | | | | |
| 4 | NESTED LOOPS | | 6 | 852 | 1356 (1)| 00:00:01 |
| 5 | NESTED LOOPS | | 6 | 810 | 1356 (1)| 00:00:01 |
| * 6 | HASH JOIN | | 67 | 6767 | 1350 (1)| 00:00:01 |
| * 7 | FILTER | | | | | |
| * 8 | HASH JOIN RIGHT OUTER | | 278 | 11954 | 89 (0)| 00:00:01 |
| 9 | TABLE ACCESS STORAGE FULL | USER$ | 71 | 1278 | 3 (0)| 00:00:01 |
| 10 | NESTED LOOPS | | 19743 | 482K| 86 (0)| 00:00:01 |
| 11 | TABLE ACCESS BY INDEX ROWID | TS$ | 1 | 11 | 1 (0)| 00:00:01 |
| * 12 | INDEX UNIQUE SCAN | I_TS1 | 1 | | 0 (0)| |
| 13 | TABLE ACCESS STORAGE FULL | OBJ$ | 19743 | 269K| 85 (0)| 00:00:01 |
| 14 | VIEW | SYS_OBJECTS | 4731 | 267K| 1261 (1)| 00:00:01 |
| 15 | UNION-ALL | | | | | |
| * 16 | TABLE ACCESS STORAGE FULL | TAB$ | 1533 | 33726 | 312 (0)| 00:00:01 |
| 17 | TABLE ACCESS STORAGE FULL | TABPART$ | 262 | 4192 | 5 (0)| 00:00:01 |
| 18 | TABLE ACCESS STORAGE FULL | CLU$ | 10 | 140 | 312 (0)| 00:00:01 |
| * 19 | TABLE ACCESS STORAGE FULL | IND$ | 2164 | 41116 | 312 (0)| 00:00:01 |
| 20 | TABLE ACCESS STORAGE FULL | INDPART$ | 194 | 3104 | 4 (0)| 00:00:01 |
| * 21 | TABLE ACCESS STORAGE FULL | LOB$ | 512 | 10752 | 309 (0)| 00:00:01 |
| 22 | TABLE ACCESS STORAGE FULL | TABSUBPART$ | 32 | 480 | 2 (0)| 00:00:01 |
| 23 | TABLE ACCESS STORAGE FULL | INDSUBPART$ | 1 | 52 | 2 (0)| 00:00:01 |
| 24 | TABLE ACCESS STORAGE FULL | LOBFRAG$ | 23 | 414 | 2 (0)| 00:00:01 |
| * 25 | TABLE ACCESS CLUSTER | SEG$ | 1 | 34 | 1 (0)| 00:00:01 |
| * 26 | INDEX UNIQUE SCAN | I_FILE#_BLOCK# | 1 | | 0 (0)| |
| * 27 | INDEX UNIQUE SCAN | I_FILE2 | 1 | 7 | 0 (0)| |
| * 28 | FILTER | | | | | |
| * 29 | HASH JOIN RIGHT OUTER | | 3 | 405 | 85 (0)| 00:00:01 |
| 30 | TABLE ACCESS STORAGE FULL | USER$ | 71 | 1278 | 3 (0)| 00:00:01 |
| 31 | VIEW | VW_JF_SET$A8769BAB | 246 | 28782 | 82 (0)| 00:00:01 |
| 32 | UNION-ALL | | | | | |
| 33 | NESTED LOOPS | | 4 | 272 | 33 (0)| 00:00:01 |
|- * 34 | HASH JOIN | | 4 | 244 | 33 (0)| 00:00:01 |
| 35 | NESTED LOOPS | | 4 | 244 | 33 (0)| 00:00:01 |
|- 36 | STATISTICS COLLECTOR | | | | | |
| 37 | NESTED LOOPS | | 36 | 1044 | 3 (0)| 00:00:01 |
| 38 | TABLE ACCESS BY INDEX ROWID| TS$ | 1 | 11 | 1 (0)| 00:00:01 |
| * 39 | INDEX UNIQUE SCAN | I_TS1 | 1 | | 0 (0)| |
| * 40 | TABLE ACCESS STORAGE FULL | UNDO$ | 36 | 648 | 2 (0)| 00:00:01 |
| * 41 | TABLE ACCESS CLUSTER | SEG$ | 1 | 32 | 1 (0)| 00:00:01 |
| * 42 | INDEX UNIQUE SCAN | I_FILE#_BLOCK# | 1 | | 0 (0)| |
|- * 43 | TABLE ACCESS STORAGE FULL | SEG$ | 1 | 32 | 1 (0)| 00:00:01 |
| * 44 | INDEX UNIQUE SCAN | I_FILE2 | 1 | 7 | 0 (0)| |
|- * 45 | HASH JOIN | | 241 | 13255 | 25 (0)| 00:00:01 |
| 46 | NESTED LOOPS | | 241 | 13255 | 25 (0)| 00:00:01 |
|- 47 | STATISTICS COLLECTOR | | | | | |
| 48 | NESTED LOOPS | | 5 | 90 | 2 (0)| 00:00:01 |
| 49 | TABLE ACCESS BY INDEX ROWID | TS$ | 1 | 11 | 1 (0)| 00:00:01 |
| * 50 | INDEX UNIQUE SCAN | I_TS1 | 1 | | 0 (0)| |
| 51 | INDEX FULL SCAN | I_FILE2 | 5 | 35 | 1 (0)| 00:00:01 |
| * 52 | TABLE ACCESS CLUSTER | SEG$ | 48 | 1776 | 5 (0)| 00:00:01 |
| * 53 | INDEX RANGE SCAN | I_FILE#_BLOCK# | 1 | | 2 (0)| 00:00:01 |
|- * 54 | TABLE ACCESS STORAGE FULL | SEG$ | 48 | 1776 | 5 (0)| 00:00:01 |
|- * 55 | HASH JOIN | | 1 | 55 | 25 (0)| 00:00:01 |
| 56 | NESTED LOOPS | | 1 | 55 | 25 (0)| 00:00:01 |
|- 57 | STATISTICS COLLECTOR | | | | | |
| 58 | NESTED LOOPS | | 5 | 90 | 2 (0)| 00:00:01 |
| 59 | TABLE ACCESS BY INDEX ROWID | TS$ | 1 | 11 | 1 (0)| 00:00:01 |
| * 60 | INDEX UNIQUE SCAN | I_TS1 | 1 | | 0 (0)| |
| 61 | INDEX FULL SCAN | I_FILE2 | 5 | 35 | 1 (0)| 00:00:01 |
| * 62 | TABLE ACCESS CLUSTER | SEG$ | 1 | 37 | 5 (0)| 00:00:01 |
| * 63 | INDEX RANGE SCAN | I_FILE#_BLOCK# | 1 | | 2 (0)| 00:00:01 |
|- * 64 | TABLE ACCESS STORAGE FULL | SEG$ | 1 | 37 | 5 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
6 - access("O"."OBJ#"="SO"."OBJECT_ID" AND "O"."TYPE#"="SO"."OBJECT_TYPE_ID")
7 - filter(NVL("U"."NAME",'SYS')='XDB')
8 - access("O"."OWNER#"="U"."USER#")
12 - access("TS"."NAME"='SYSAUX')
16 - filter(BITAND("T"."PROPERTY",1024)=0)
19 - filter(("I"."TYPE#"=1 OR "I"."TYPE#"=2 OR "I"."TYPE#"=3 OR "I"."TYPE#"=4 OR "I"."TYPE#"=6 OR
"I"."TYPE#"=7 OR "I"."TYPE#"=8 OR "I"."TYPE#"=9))
21 - filter((BITAND("L"."PROPERTY",64)=0 OR BITAND("L"."PROPERTY",128)=128))
25 - filter("S"."TYPE#"="SO"."SEGMENT_TYPE_ID")
26 - access("S"."TS#"="TS"."TS#" AND "S"."FILE#"="SO"."HEADER_FILE" AND
"S"."BLOCK#"="SO"."HEADER_BLOCK")
filter("S"."TS#"="SO"."TS_NUMBER")
27 - access("S"."TS#"="F"."TS#" AND "S"."FILE#"="F"."RELFILE#")
28 - filter(NVL("U"."NAME",'SYS')='XDB')
29 - access("ITEM_1"="U"."USER#")
34 - access("S"."TS#"="TS"."TS#" AND "S"."TS#"="UN"."TS#" AND "S"."BLOCK#"="UN"."BLOCK#" AND
"S"."FILE#"="UN"."FILE#")
39 - access("TS"."NAME"='SYSAUX')
40 - storage("UN"."STATUS$"<>1)
filter("UN"."STATUS$"<>1)
41 - filter(("S"."TYPE#"=1 OR "S"."TYPE#"=10))
42 - access("S"."TS#"="UN"."TS#" AND "S"."FILE#"="UN"."FILE#" AND "S"."BLOCK#"="UN"."BLOCK#")
filter("S"."TS#"="TS"."TS#")
43 - filter(("S"."TYPE#"=1 OR "S"."TYPE#"=10))
44 - access("UN"."TS#"="F"."TS#" AND "UN"."FILE#"="F"."RELFILE#")
45 - access("S"."FILE#"="F"."RELFILE#" AND "S"."TS#"="F"."TS#" AND "S"."TS#"="TS"."TS#")
50 - access("TS"."NAME"='SYSAUX')
52 - filter(("S"."TYPE#"<>6 AND "S"."TYPE#"<>5 AND "S"."TYPE#"<>8 AND "S"."TYPE#"<>10 AND
"S"."TYPE#"<>11 AND "S"."TYPE#"<>1))
53 - access("S"."TS#"="TS"."TS#" AND "S"."FILE#"="F"."RELFILE#")
filter("S"."TS#"="F"."TS#")
54 - filter(("S"."TYPE#"<>6 AND "S"."TYPE#"<>5 AND "S"."TYPE#"<>8 AND "S"."TYPE#"<>10 AND
"S"."TYPE#"<>11 AND "S"."TYPE#"<>1))
55 - access("S"."FILE#"="F"."RELFILE#" AND "S"."TS#"="F"."TS#" AND "S"."TS#"="TS"."TS#")
60 - access("TS"."NAME"='SYSAUX')
62 - filter("S"."TYPE#"=11)
63 - access("S"."TS#"="TS"."TS#" AND "S"."FILE#"="F"."RELFILE#")
filter("S"."TS#"="F"."TS#")
64 - filter("S"."TYPE#"=11)
Note
-----
- this is an adaptive plan (rows marked '-' are inactive)
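By the way, if you don’t have a script like dplan_adaptive handy, DBMS_XPLAN will produce the same style of output if you add the +ADAPTIVE format switch – that’s what brings in the inactive rows (marked '-') and the statistics collectors. Something like this, using the sql_id and child number from above:

select * from table(dbms_xplan.display_cursor('95stx63r9dc34', 1, 'TYPICAL +ADAPTIVE'));

Without the +ADAPTIVE switch you only get the rows of the final (resolved) plan.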
Hope that helps.
Kerry
The second email:
Thanks, Kerry!
So as I understand it, just one execution can now create several child cursors with different final plans? (As many as there are collectors in the plan?)
Will they have different plan hash values?
And how will other sessions choose a child cursor during an execution that creates many child cursors? I’m especially interested in how we would analyze such plans through AWR if the statistics are split between several plan hash values…
And I said:
No – only one cursor is created. It can have multiple adaptations – i.e. there may be multiple places where a decision between NL and HJ is made – but in the end it decides on one plan and that’s it. A new cursor will only be created if something more normal triggers a new cursor (adaptive cursor sharing, optimizer environment changes, cardinality feedback kicks in, etc…)
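You can see this in V$SQL by the way – 12c adds a couple of columns that make it easy to check. IS_RESOLVED_ADAPTIVE_PLAN shows whether the child had an adaptive plan and whether it has been resolved yet, and IS_REOPTIMIZABLE shows whether statistics feedback has marked the cursor for re-optimization on the next parse. A quick sketch using the sql_id from the earlier example:

select child_number, plan_hash_value,
       is_resolved_adaptive_plan, is_reoptimizable
from   v$sql
where  sql_id = '95stx63r9dc34';

Each child shows a single plan_hash_value, regardless of how many statistics collectors the plan contained.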
Kerry
That’s it for now. In Part 2 I’ll address some questions regarding interaction with hints.
Hi Kerry,
> by the way, the optimization only appears to kick in on steps where the default plan would use a NL
The optimization also occurs in the case of a default plan with a HJ. I wrote a blog post about Adaptive Plans in September, and in my demo case the default plan was using a HJ; when the inflection point (in my case 1.53 rows) is not reached, it switches to a NL.
Blog and demo can be found here: http://scn.sap.com/community/oracle/blog/2013/09/24/oracle-db-optimizer-part-vii–looking-under-the-hood-of-adaptive-query-optimization-adaptive-plans-oracle-12c
Best Regards
Stefan
Hi Stefan,
I was hoping someone would call me on that. Maria and I had a conversation several months ago about whether it could go both directions, as at the time I was convinced I had seen a case of a HJ default going to a NL, but she said she didn’t think so. When I was writing up this post I tried to prove to myself that I had seen one, but was unsuccessful at finding one of the system generated queries on my test system that started with a HJ and ended with a NL. I had actually typed up an email to Maria to see if she had ever checked with the developers on whether it could go both directions. I was able to reproduce your test case and voilà, there’s the proof. Thanks for pointing it out. I will fix the text of the post as well.
Kerry
Kerry,
Have you tried the adaptive optimization features on Exadata? I’m wondering if Exadata’s tendency to do hash joins and full scans (smart scans) efficiently would change the point at which a hash join should be favored over a nested loops join, compared with a non-Exadata system.
– Bobby
Hi Bobby,
That’s a great question. I have not tested it yet on Exadata. The 12c version of cellsrv just came out a couple of weeks ago. Prior to that you could run 12c DBs on Exa, but smart scans did not work. We have a system in our lab with 12.1.1.1.0 cellsrv now, so I will do some testing as time permits. I’ll try to do a blog post on it as well. 🙂
Kerry
Thanks for your reply.
– Bobby
Kerry,
What is the prerequisite for a SQL statement to be selected for adaptive optimization? I have a lot of statements with a NULL value for IS_RESOLVED_ADAPTIVE_PLAN and I’m trying to find out why they were not selected for adaptive optimization.
–vk
They only get an adaptive plan if the optimizer notices a significant variation in optimizer calculation vs reality when it tries to execute the plan. Hopefully, most of the time that won’t be the case.
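If you want to dig into specific statements, a couple of quick sanity checks help (a sketch, assuming 12.1, where the feature is controlled by the optimizer_adaptive_features parameter): verify the feature is enabled, and then see which cursors actually got an adaptive plan. IS_RESOLVED_ADAPTIVE_PLAN stays NULL when the statement simply never got one.

show parameter optimizer_adaptive_features
show parameter optimizer_features_enable

select sql_id, child_number, is_resolved_adaptive_plan
from   v$sql
where  is_resolved_adaptive_plan is not null
and    rownum <= 20;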
Kerry