12c Adaptive Optimization – Part 3
This is the third and final post on follow-up questions from the Redgate webinar I did on 12c Adaptive Optimization (the link goes to a recording of the webcast, by the way).
Also, here are links to the two earlier posts:
So here is the last set of questions, along with my responses:
Q: Is this feature on by default, or do you have to set a parameter to make sure of it?
A: It’s on by default but can be turned off by the methods listed in the presentation.
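For example, here's a minimal sketch using the optimizer_adaptive_features parameter (the same one that shows up in the PDB example at the end of this post):

-- turn the whole adaptive feature set off for the current session only
alter session set optimizer_adaptive_features = false;

-- or instance-wide
alter system set optimizer_adaptive_features = false;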
Q: Is there any drawback of adaptive execution plans?
A: New features (especially auto-magic ones) always make people nervous, but I don’t see too many potential pitfalls with this one. The fact that it is enabled by default out of the box is also a good indicator that the developers themselves have a lot of confidence in it. There is certainly more work going on to collect statistics and buffer rows, but it seems quite minimal and only happens on the first execution. So my basic answer is no, I don’t foresee any major drawbacks.
Q: For adaptive plans, usually queries are more complex, with multiple combinations of hash joins and nested loops. But adaptive plans only switch to one “sub-plan”, correct? How does it account for all the various combinations?
A: A sub-plan is limited to a single join. There can obviously be many joins in a single plan and thus many sub-plans. But each sub-plan will result in either a HJ or a NLJ. At the end there will be only one final plan. See my previous post (Part 1) for an example of a more complex plan with multiple sub-plans.
Q: Parallel distribution methods: why not use broadcast all the time? 🙂
A: 🙂
Q: Would adaptive optimization switch to a better index if it finds itself sitting on a wrong index?
A: I presume the question is with regard to Adaptive Plans kicking in on the first execution. If so, the answer is no. At this point only join methods and px distribution methods can be changed. I expect this will be expanded over time though.
Q: Does the same plan_hash_value mean the same final plan?
A: Yes – plan hash value is computed based on the final plan with no regard to the fact that the plan was adaptive.
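One quick way to look is V$SQL, which has a new IS_RESOLVED_ADAPTIVE_PLAN column in 12c (Y once the final plan of an adaptive statement has been settled, null for non-adaptive cursors, if memory serves). A minimal sketch:

select sql_id, child_number, plan_hash_value, is_resolved_adaptive_plan
from v$sql
where sql_id = '&sql_id';   -- substitute the sql_id of interest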
Q: How correlated are plan_hash_values with final plans? How can we find the same final plans?
A: Plan hash value is computed based on the final plan, so the correlation is very high. 🙂
Q: Wouldn't dynamic sampling put excessive pressure on the CPU?
A: I guess it could, but it’s been around for some time and I haven’t been involved in any situations where the time spent on dynamic sampling was an issue. Setting it to 11 may give us some chances to see such a thing though. More often the issues arise when dynamic sampling does not come up with a good picture of the data due to the limited size of the sample.
Q: Is dynamic sampling = 11 actually a good blanket setting, or do you not trust the optimizer that much? What do you use and why?
A: The optimizer_dynamic_sampling parameter still defaults to 2 in 12c. That alone makes me cautious about setting it to the new totally auto-magic value of 11. If the developers have enough confidence in a new feature to make it the default, then I will be more trusting. I prefer to stick with default values unless I have to make a change to address a specific issue. I have worked on a few systems that change the default setting, but 11 has not been one of those values (yet). I need to do more testing with it.
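If you do want to experiment with it, a minimal sketch of trying the new value at the session level before touching the instance-wide default:

-- check the current (default) setting
show parameter optimizer_dynamic_sampling

-- try the fully automatic setting for this session only
alter session set optimizer_dynamic_sampling = 11;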
Q: Gotta love Spinal Tap… crank it up to 11!
A: Rock and Roll!
Q: Is there any effect on CPU utilisation because of adaptive optimisation?
A: There is definitely some extra overhead in collecting statistics and buffering rows, but it should be minimal and it should only affect the initial execution.
Q: Can HJ be changed to NL on the first execution? What is the threshold for such a change?
A: Yes – Adaptive Plans kick in on the first execution. The threshold depends on the specific case. See Part 2 of this series for an example of calculating the inflection point (from a 10053 trace).
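For reference, a minimal sketch of generating a 10053 trace so you can see the inflection-point calculation yourself (the statement must be hard parsed while the event is set; the trace file lands in the diag trace directory):

alter session set events '10053 trace name context forever, level 1';
-- run (hard parse) the statement of interest here
alter session set events '10053 trace name context off';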
Q: This means that if it is abandoned once it will also be abandoned if run again?
A: Yes, assuming no other changes occur. But there are many things that can change, such as Adaptive Cursor Sharing, Cardinality Feedback, etc… and of course the data itself and/or the statistics about the data can change over time as well. Just to be clear, the choice between the two join methods is only made during the first execution after a hard parse, so once a statement is loaded into the cache, the plan will be static until something changes that causes a new child cursor to be created.
Q: At what data volumes does Adaptive Optimization become likely to be helpful?
A: Any volume that causes a NLJ to result in significantly different elapsed time than a HJ.
Q: Does AWR show these updated adaptive plans with a minus?
A: That’s a good question. Yes, you can use dbms_xplan.display_awr with the ‘adaptive’ format option (see the example below).
SYS@db12c1> select * from table(dbms_xplan.display_awr('&sql_id',nvl('&plan_hash_value',null),null,'adaptive'));
Enter value for sql_id: 6qg99cfg26kwb
Enter value for plan_hash_value:
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 6qg99cfg26kwb
--------------------
SELECT COUNT(UNQ) UNQ, COUNT(PFX) PFX FROM (SELECT /*+ first_rows(1)
leading(cc) */ CD.TYPE# UNQ, NULL PFX FROM SYS.CCOL$ CC, SYS.CDEF$ CD
WHERE CC.OBJ# = :B2 AND CC.INTCOL# = :B1 AND CD.CON# = CC.CON# AND
CD.OBJ# = CC.OBJ# AND CD.ENABLED IS NOT NULL AND CD.INTCOLS = 1 AND
CD.TYPE# IN (2,3) AND BITAND(CD.DEFER, 2+4) = 4 AND ROWNUM < 2 UNION
ALL SELECT /*+ first_rows(1) leading(i) */ CASE WHEN I.INTCOLS = 1 AND
BITAND(I.PROPERTY,1) = 1 THEN 3 ELSE NULL END UNQ, CASE WHEN IC.POS# =
1 THEN 1 ELSE NULL END PFX FROM SYS.IND$ I, SYS.ICOL$ IC WHERE I.BO# =
:B2 AND I.BO# = IC.BO# AND IC.INTCOL# = :B1 AND I.OBJ# = IC.OBJ# AND
BITAND(I.FLAGS,1025) = 0 AND ROWNUM < 2 )
Plan hash value: 1065215175
----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 6 (100)| |
| 1 | SORT AGGREGATE | | 1 | 16 | | |
| 2 | VIEW | | 2 | 32 | 6 (0)| 00:00:01 |
| 3 | UNION-ALL | | | | | |
| 4 | COUNT STOPKEY | | | | | |
|- 5 | HASH JOIN | | 1 | 35 | 3 (0)| 00:00:01 |
| 6 | NESTED LOOPS | | 1 | 35 | 3 (0)| 00:00:01 |
|- 7 | STATISTICS COLLECTOR | | | | | |
| 8 | TABLE ACCESS CLUSTER | CCOL$ | 1 | 13 | 2 (0)| 00:00:01 |
| 9 | INDEX UNIQUE SCAN | I_COBJ# | 1 | | 1 (0)| 00:00:01 |
| 10 | TABLE ACCESS CLUSTER | CDEF$ | 1 | 22 | 1 (0)| 00:00:01 |
|- 11 | TABLE ACCESS BY INDEX ROWID BATCHED| CDEF$ | 1 | 22 | 1 (0)| 00:00:01 |
|- 12 | INDEX RANGE SCAN | I_CDEF2 | 1 | | 1 (0)| 00:00:01 |
| 13 | COUNT STOPKEY | | | | | |
|- 14 | HASH JOIN | | 1 | 38 | 3 (0)| 00:00:01 |
| 15 | NESTED LOOPS | | 1 | 38 | 3 (0)| 00:00:01 |
|- 16 | STATISTICS COLLECTOR | | | | | |
| 17 | TABLE ACCESS CLUSTER | IND$ | 1 | 21 | 2 (0)| 00:00:01 |
| 18 | INDEX UNIQUE SCAN | I_OBJ# | 1 | | 1 (0)| 00:00:01 |
| 19 | TABLE ACCESS CLUSTER | ICOL$ | 1 | 17 | 1 (0)| 00:00:01 |
|- 20 | TABLE ACCESS CLUSTER | ICOL$ | 1 | 17 | 1 (0)| 00:00:01 |
|- 21 | INDEX UNIQUE SCAN | I_OBJ# | 1 | | 1 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------
Note
-----
- this is an adaptive plan (rows marked '-' are inactive)
46 rows selected.
Q: Is there any way to encourage the optimizer to collect the information but not act on it?
A: Yes, set optimizer_adaptive_reporting_only = true.
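A minimal sketch (the parameter can also be set at the session level); in reporting-only mode the adaptive information is still gathered, but the default plan is the one that actually gets used:

alter system set optimizer_adaptive_reporting_only = true;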
Q: Does adaptive distribution for parallel processing work as expected on a virtual server where resources can be spread over several other servers?
A: No idea (in fact I'm not even sure I understand the question). Give it a test and let us know what you find out. 🙂
Q: Does Adaptive Optimization help Oracle optimize somewhat complex nested views? I know nested views are not recommended, but we sometimes have to live with what we inherited.
A: I don't think this particular feature is going to help nested views specifically. But who knows. The optimizer seems to get lost occasionally with deeply nested views. By the way, there is an interesting new procedure in 12c called dbms_utility.expand_sql_text which spits out the fully expanded version of a SQL statement that accesses data through views. Tom Kyte has blogged about it here: 12c - SQL Text Expansion
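Here's a minimal sketch of calling it from SQL*Plus (my_nested_view is just a hypothetical name; substitute one of your own views):

variable expanded clob

begin
  dbms_utility.expand_sql_text(
    input_sql_text  => 'select * from my_nested_view',  -- hypothetical view name
    output_sql_text => :expanded);
end;
/

print expanded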
Q: We regularly have hash join problems traceable to temp space limits. Shifting to nested loops has proven necessary in 10 and 11. Early detection and shifting to nested loops would be important for us.
A: I'm not sure this feature is really going to help you much in that regard unless the optimizer is erroneously picking the HJ based on incorrect estimates. If you're just forcing the NLJ to avoid poor I/O performance on the temp stuff, though, it probably won't help. In that case you need to figure out how to sort less or use more memory (increase pga, or use manual workarea size, or use more slaves in px, etc.).
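For what it's worth, a rough sketch of the memory-related knobs I'm referring to (the values are placeholders, not recommendations):

-- give automatic workarea management more memory to play with
alter system set pga_aggregate_target = 8G;

-- or size the hash workarea manually for a specific session
alter session set workarea_size_policy = manual;
alter session set hash_area_size = 1073741824;  -- placeholder: 1GB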
Q: So if a sort/merge join is used, then this feature would not go to a nested loop/hash join if the sort/merge join is a bad plan?
A: No, it applies only to HJ and NLJ as of 12.1.0.1.
Q: What happens with the rows that were read up to the inflection point? Does Oracle start reading from scratch again?
A: The rows are buffered so they don't need to be re-read.
Q: Will the SQL scripts that were demonstrated for reviewing the SQL plan information be made available?
A: Most are on this blog already (use the search box to locate them) but let me know if you can't find any of the ones I used.
Q: It's is a contraction for it is or it has. Its is a possessive pronoun meaning, more or less, of it or belonging to it.
A: Duly noted (and fixed in the presentation). 🙂
Q: Can we *force* a plan change in mid-execution?
A: No. You can enable or disable the feature, but the optimizer decides whether to switch or not.
Q: How long does the statistics collector run if it does not switch?
A: It should only run until the inflection point (the point at which it makes the decision), but I have not actually tested this.
Q: Is there a way adaptive can be disabled for one PDB and enabled for others?
A: Yes, the optimizer_adaptive_features parameter can be set separately for each PDB (see the example below).
> rlwrap sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Mon Dec 9 19:53:03 2013
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
INSTANCE_NAME STARTUP_TIME CURRENT_TIME DAYS SECONDS
---------------- ----------------- ----------------- ------- ----------
CONTAIN1 02-DEC-2013 03:22 09-DEC-2013 19:53 7.69 664225
SYS@CONTAIN1> @whoami_pdb
CON_ID CON_NAME USERNAME USER# SID SERIAL# PREV_HASH_VALUE SCHEMANAME OS_PID
---------- ---------- --------------- ---------- ---------- ---------- --------------- ------------------------------ -------
1 CDB$ROOT SYS 0 24 295 3265981639 SYS 4481
SYS@CONTAIN1> @connect_pdb
Enter value for pdb_name: plug1
Session altered.
SYS@CONTAIN1:PLUG1> @parms
Enter value for parameter: optimizer_adaptive_features
Enter value for isset:
Enter value for show_hidden:
NAME VALUE ISDEFAUL ISMODIFIED ISSET
-------------------------------------------------- ---------------------------------------------------------------------- -------- ---------- ----------
optimizer_adaptive_features TRUE TRUE TRUE TRUE
SYS@CONTAIN1:PLUG1> alter system set optimizer_adaptive_features=false;
System altered.
SYS@CONTAIN1:PLUG1> @parms
Enter value for parameter: optimizer_adaptive_features
Enter value for isset:
Enter value for show_hidden:
NAME VALUE ISDEFAUL ISMODIFIED ISSET
-------------------------------------------------- ---------------------------------------------------------------------- -------- ---------- ----------
optimizer_adaptive_features FALSE TRUE TRUE TRUE
SYS@CONTAIN1:PLUG1> @connect_pdb
Enter value for pdb_name: plug2
Session altered.
SYS@CONTAIN1:PLUG2> @whoami_pdb
CON_ID CON_NAME USERNAME USER# SID SERIAL# PREV_HASH_VALUE SCHEMANAME OS_PID
---------- ---------- --------------- ---------- ---------- ---------- --------------- ------------------------------ -------
4 PLUG2 SYS 0 24 295 2710464132 SYS 4481
SYS@CONTAIN1:PLUG2> @parms
Enter value for parameter: optimizer_adaptive_features
Enter value for isset:
Enter value for show_hidden:
NAME VALUE ISDEFAUL ISMODIFIED ISSET
-------------------------------------------------- ---------------------------------------------------------------------- -------- ---------- ----------
optimizer_adaptive_features TRUE TRUE TRUE TRUE
SYS@CONTAIN1:PLUG2> connect / as sysdba
Connected.
INSTANCE_NAME STARTUP_TIME CURRENT_TIME DAYS SECONDS
---------------- ----------------- ----------------- ------- ----------
CONTAIN1 02-DEC-2013 03:22 09-DEC-2013 19:54 7.69 664324
SYS@CONTAIN1> @parms
Enter value for parameter: optimizer_adaptive_features
Enter value for isset:
Enter value for show_hidden:
NAME VALUE ISDEFAUL ISMODIFIED ISSET
-------------------------------------------------- ---------------------------------------------------------------------- -------- ---------- ----------
optimizer_adaptive_features TRUE TRUE TRUE TRUE
So you can set the optimizer_adaptive_features parameter separately for each PDB. Note: here are links to the couple of scripts I used in this post:
There was another good question that I don't have time to look into at the moment.
Q: In the Pro*C sequence PREPARE, OPEN, FETCH, at what point(s) might Oracle switch plans? If during FETCH, how does Oracle return the next row/array?
Maybe I'll get around to that later, but if anyone wants to give it a shot and post the results in the comments section, that would be great. 🙂