12c Adaptive Optimization – Part 1

Last week I did a webinar on 12c Adaptive Optimization. The talk was recorded. The slides are here: 12c Adaptive Optimization V2 PDF. The recording can be found here: 12c Adaptive Optimization Recording. There were a number of follow-up questions and emails, so I thought I'd summarize them here. Since there were so many questions (I guess I must not have done that good a job of explaining how it works), I will break them up into 2 or 3 posts. For this first one I will just cut and paste from a couple of email follow-ups.

Here’s the first question(s):

Hi Kerry,

I followed your webinar today, and I have two questions about it.
First, what will happen if the plan changes during the fetch operation?
Is that possible? If so, how does it know which rows have already been fetched?

Second question is more of a practical one. In the examples you showed, the queries were fairly basic, but if you have an execution plan with hundreds of operations, and during execution an adaptive plan is decided on with changes in join method, the plan can change a lot.
If we want to identify the step that will modify the plan, do we have to identify it as the step just before the statistics collector operation, or will it be more complex to identify?

Thanks in advance for your answers

And here’s my Answer(s):

Hi

I'll have to find a little time to test the prepare/open/fetch stuff to verify where the initial rows are actually retrieved, but if I had to guess it would be on the first fetch call, regardless of how many records the fetch requests. It could also be on the open, though. By the way, it's interesting to see how the optimizer comes up with the inflection point (I don't know enough about the internal algorithm to know exactly what they are doing, but it's clear they are homing in on it by splitting the difference). Here's a little output from a Wolfgang (10053) trace file.

SYS@db12c1> !grep inflection adaptive.trc
Searching for inflection point (join #1) between 0.00 and 12.76
AP: Computing costs for inflection point at min value 0.00
DP: Using binary search for inflection point search
DP: Costing Nested Loops Join for inflection point at card 0.00
DP: Costing Hash Join for inflection point at card 0.00
AP: Computing costs for inflection point at max value 12.76
DP: Costing Nested Loops Join for inflection point at card 12.76
DP: Costing Hash Join for inflection point at card 12.76
AP: Searching for inflection point at value 1.00
DP: Costing Nested Loops Join for inflection point at card 6.38
DP: Costing Hash Join for inflection point at card 6.38
AP: Searching for inflection point at value 6.38
DP: Costing Nested Loops Join for inflection point at card 3.19
DP: Costing Hash Join for inflection point at card 3.19
AP: Searching for inflection point at value 3.19
DP: Costing Nested Loops Join for inflection point at card 4.78
DP: Costing Hash Join for inflection point at card 4.78
AP: Searching for inflection point at value 4.78
DP: Costing Nested Loops Join for inflection point at card 5.58
DP: Costing Hash Join for inflection point at card 5.58
DP: Costing Nested Loops Join for inflection point at card 5.58
DP: Found point of inflection for NLJ vs. HJ: card = 5.58
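
By the way, here's a rough sketch of the "split the difference" idea you can see in the trace above. This is just a toy PL/SQL block of mine with made-up cost curves (the real optimizer obviously costs the actual NLJ and HJ sub-plans at each candidate cardinality), but it shows the binary search shape:

DECLARE
  lo   NUMBER := 0;        -- minimum cardinality estimate
  hi   NUMBER := 12.76;    -- maximum cardinality estimate
  mid  NUMBER;

  -- made-up cost curves, just so the block runs
  FUNCTION nlj_cost (card NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN 10 + 25 * card;   -- NL cost grows with cardinality
  END;

  FUNCTION hj_cost (card NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN 150;              -- HJ cost roughly flat
  END;
BEGIN
  FOR i IN 1 .. 20 LOOP
    mid := (lo + hi) / 2;    -- split the difference
    IF nlj_cost(mid) < hj_cost(mid) THEN
      lo := mid;             -- NL still cheaper, inflection point must be higher
    ELSE
      hi := mid;             -- HJ cheaper, inflection point must be lower
    END IF;
  END LOOP;
  dbms_output.put_line('inflection point around card = ' || ROUND(mid, 2));
END;
/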


On the identification of what’s going on in more complicated plans, the general pattern appears to be like this:

Hash Join
  NL Join
    Statistics Collector

This holds regardless of whether the final plan ends up using the HJ or the NL. In some cases the NL is abandoned, in other cases the HJ is abandoned. (By the way, the optimization only appears to kick in on steps where the default plan would use a NL.)

* Note that I was wrong in my assertion that the optimization only kicks in for NLJ steps as pointed out by Stephan in the comments section. It can kick in on HJ steps as well, although they don’t appear nearly as often. :)

So anyway, a plan that resolves to a NL would look like this:

- Hash Join
    NL Join
-     Statistics Collector

And one that resolves to a HJ would look like this:

  Hash Join 
-   NL Join
-     Statistics Collector
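
By the way, if you want to see the full adaptive plan for a statement (inactive rows, statistics collectors and all) and don't have a script handy for it, dbms_xplan can do it via the format option - something like this, using the sql_id from the example that follows (the '+adaptive' modifier is what pulls in the inactive rows):

select * from table(dbms_xplan.display_cursor(sql_id          => '95stx63r9dc34',
                                              cursor_child_no => 1,
                                              format          => '+adaptive'));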

Here's an example of a more complex plan – in this case several hash joins are discarded in favor of nested loops.


SYS@db12c1> @dplan_adaptive
Enter value for sql_id: 95stx63r9dc34
Enter value for child_no: 1
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  95stx63r9dc34, child number 1
-------------------------------------
select /* test dp2c6pq28u5jr */ count(*), sum(blocks) FROM dba_segments
where    OWNER = 'XDB' and TABLESPACE_NAME = 'SYSAUX'
 
Plan hash value: 1481365994
 
----------------------------------------------------------------------------------------------------------------
|   Id  | Operation                               | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------------------
|     0 | SELECT STATEMENT                        |                    |       |       |  1441 (100)|          |
|     1 |  SORT AGGREGATE                         |                    |     1 |   104 |            |          |
|     2 |   VIEW                                  | SYS_DBA_SEGS       |     9 |   936 |  1441   (1)| 00:00:01 |
|     3 |    UNION-ALL                            |                    |       |       |            |          |
|     4 |     NESTED LOOPS                        |                    |     6 |   852 |  1356   (1)| 00:00:01 |
|     5 |      NESTED LOOPS                       |                    |     6 |   810 |  1356   (1)| 00:00:01 |
|  *  6 |       HASH JOIN                         |                    |    67 |  6767 |  1350   (1)| 00:00:01 |
|  *  7 |        FILTER                           |                    |       |       |            |          |
|  *  8 |         HASH JOIN RIGHT OUTER           |                    |   278 | 11954 |    89   (0)| 00:00:01 |
|     9 |          TABLE ACCESS STORAGE FULL      | USER$              |    71 |  1278 |     3   (0)| 00:00:01 |
|    10 |          NESTED LOOPS                   |                    | 19743 |   482K|    86   (0)| 00:00:01 |
|    11 |           TABLE ACCESS BY INDEX ROWID   | TS$                |     1 |    11 |     1   (0)| 00:00:01 |
|  * 12 |            INDEX UNIQUE SCAN            | I_TS1              |     1 |       |     0   (0)|          |
|    13 |           TABLE ACCESS STORAGE FULL     | OBJ$               | 19743 |   269K|    85   (0)| 00:00:01 |
|    14 |        VIEW                             | SYS_OBJECTS        |  4731 |   267K|  1261   (1)| 00:00:01 |
|    15 |         UNION-ALL                       |                    |       |       |            |          |
|  * 16 |          TABLE ACCESS STORAGE FULL      | TAB$               |  1533 | 33726 |   312   (0)| 00:00:01 |
|    17 |          TABLE ACCESS STORAGE FULL      | TABPART$           |   262 |  4192 |     5   (0)| 00:00:01 |
|    18 |          TABLE ACCESS STORAGE FULL      | CLU$               |    10 |   140 |   312   (0)| 00:00:01 |
|  * 19 |          TABLE ACCESS STORAGE FULL      | IND$               |  2164 | 41116 |   312   (0)| 00:00:01 |
|    20 |          TABLE ACCESS STORAGE FULL      | INDPART$           |   194 |  3104 |     4   (0)| 00:00:01 |
|  * 21 |          TABLE ACCESS STORAGE FULL      | LOB$               |   512 | 10752 |   309   (0)| 00:00:01 |
|    22 |          TABLE ACCESS STORAGE FULL      | TABSUBPART$        |    32 |   480 |     2   (0)| 00:00:01 |
|    23 |          TABLE ACCESS STORAGE FULL      | INDSUBPART$        |     1 |    52 |     2   (0)| 00:00:01 |
|    24 |          TABLE ACCESS STORAGE FULL      | LOBFRAG$           |    23 |   414 |     2   (0)| 00:00:01 |
|  * 25 |       TABLE ACCESS CLUSTER              | SEG$               |     1 |    34 |     1   (0)| 00:00:01 |
|  * 26 |        INDEX UNIQUE SCAN                | I_FILE#_BLOCK#     |     1 |       |     0   (0)|          |
|  * 27 |      INDEX UNIQUE SCAN                  | I_FILE2            |     1 |     7 |     0   (0)|          |
|  * 28 |     FILTER                              |                    |       |       |            |          |
|  * 29 |      HASH JOIN RIGHT OUTER              |                    |     3 |   405 |    85   (0)| 00:00:01 |
|    30 |       TABLE ACCESS STORAGE FULL         | USER$              |    71 |  1278 |     3   (0)| 00:00:01 |
|    31 |       VIEW                              | VW_JF_SET$A8769BAB |   246 | 28782 |    82   (0)| 00:00:01 |
|    32 |        UNION-ALL                        |                    |       |       |            |          |
|    33 |         NESTED LOOPS                    |                    |     4 |   272 |    33   (0)| 00:00:01 |
|- * 34 |          HASH JOIN                      |                    |     4 |   244 |    33   (0)| 00:00:01 |
|    35 |           NESTED LOOPS                  |                    |     4 |   244 |    33   (0)| 00:00:01 |
|-   36 |            STATISTICS COLLECTOR         |                    |       |       |            |          |
|    37 |             NESTED LOOPS                |                    |    36 |  1044 |     3   (0)| 00:00:01 |
|    38 |              TABLE ACCESS BY INDEX ROWID| TS$                |     1 |    11 |     1   (0)| 00:00:01 |
|  * 39 |               INDEX UNIQUE SCAN         | I_TS1              |     1 |       |     0   (0)|          |
|  * 40 |              TABLE ACCESS STORAGE FULL  | UNDO$              |    36 |   648 |     2   (0)| 00:00:01 |
|  * 41 |            TABLE ACCESS CLUSTER         | SEG$               |     1 |    32 |     1   (0)| 00:00:01 |
|  * 42 |             INDEX UNIQUE SCAN           | I_FILE#_BLOCK#     |     1 |       |     0   (0)|          |
|- * 43 |           TABLE ACCESS STORAGE FULL     | SEG$               |     1 |    32 |     1   (0)| 00:00:01 |
|  * 44 |          INDEX UNIQUE SCAN              | I_FILE2            |     1 |     7 |     0   (0)|          |
|- * 45 |         HASH JOIN                       |                    |   241 | 13255 |    25   (0)| 00:00:01 |
|    46 |          NESTED LOOPS                   |                    |   241 | 13255 |    25   (0)| 00:00:01 |
|-   47 |           STATISTICS COLLECTOR          |                    |       |       |            |          |
|    48 |            NESTED LOOPS                 |                    |     5 |    90 |     2   (0)| 00:00:01 |
|    49 |             TABLE ACCESS BY INDEX ROWID | TS$                |     1 |    11 |     1   (0)| 00:00:01 |
|  * 50 |              INDEX UNIQUE SCAN          | I_TS1              |     1 |       |     0   (0)|          |
|    51 |             INDEX FULL SCAN             | I_FILE2            |     5 |    35 |     1   (0)| 00:00:01 |
|  * 52 |           TABLE ACCESS CLUSTER          | SEG$               |    48 |  1776 |     5   (0)| 00:00:01 |
|  * 53 |            INDEX RANGE SCAN             | I_FILE#_BLOCK#     |     1 |       |     2   (0)| 00:00:01 |
|- * 54 |          TABLE ACCESS STORAGE FULL      | SEG$               |    48 |  1776 |     5   (0)| 00:00:01 |
|- * 55 |         HASH JOIN                       |                    |     1 |    55 |    25   (0)| 00:00:01 |
|    56 |          NESTED LOOPS                   |                    |     1 |    55 |    25   (0)| 00:00:01 |
|-   57 |           STATISTICS COLLECTOR          |                    |       |       |            |          |
|    58 |            NESTED LOOPS                 |                    |     5 |    90 |     2   (0)| 00:00:01 |
|    59 |             TABLE ACCESS BY INDEX ROWID | TS$                |     1 |    11 |     1   (0)| 00:00:01 |
|  * 60 |              INDEX UNIQUE SCAN          | I_TS1              |     1 |       |     0   (0)|          |
|    61 |             INDEX FULL SCAN             | I_FILE2            |     5 |    35 |     1   (0)| 00:00:01 |
|  * 62 |           TABLE ACCESS CLUSTER          | SEG$               |     1 |    37 |     5   (0)| 00:00:01 |
|  * 63 |            INDEX RANGE SCAN             | I_FILE#_BLOCK#     |     1 |       |     2   (0)| 00:00:01 |
|- * 64 |          TABLE ACCESS STORAGE FULL      | SEG$               |     1 |    37 |     5   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   6 - access("O"."OBJ#"="SO"."OBJECT_ID" AND "O"."TYPE#"="SO"."OBJECT_TYPE_ID")
   7 - filter(NVL("U"."NAME",'SYS')='XDB')
   8 - access("O"."OWNER#"="U"."USER#")
  12 - access("TS"."NAME"='SYSAUX')
  16 - filter(BITAND("T"."PROPERTY",1024)=0)
  19 - filter(("I"."TYPE#"=1 OR "I"."TYPE#"=2 OR "I"."TYPE#"=3 OR "I"."TYPE#"=4 OR "I"."TYPE#"=6 OR
              "I"."TYPE#"=7 OR "I"."TYPE#"=8 OR "I"."TYPE#"=9))
  21 - filter((BITAND("L"."PROPERTY",64)=0 OR BITAND("L"."PROPERTY",128)=128))
  25 - filter("S"."TYPE#"="SO"."SEGMENT_TYPE_ID")
  26 - access("S"."TS#"="TS"."TS#" AND "S"."FILE#"="SO"."HEADER_FILE" AND
              "S"."BLOCK#"="SO"."HEADER_BLOCK")
       filter("S"."TS#"="SO"."TS_NUMBER")
  27 - access("S"."TS#"="F"."TS#" AND "S"."FILE#"="F"."RELFILE#")
  28 - filter(NVL("U"."NAME",'SYS')='XDB')
  29 - access("ITEM_1"="U"."USER#")
  34 - access("S"."TS#"="TS"."TS#" AND "S"."TS#"="UN"."TS#" AND "S"."BLOCK#"="UN"."BLOCK#" AND
              "S"."FILE#"="UN"."FILE#")
  39 - access("TS"."NAME"='SYSAUX')
  40 - storage("UN"."STATUS$"<>1)
       filter("UN"."STATUS$"<>1)
  41 - filter(("S"."TYPE#"=1 OR "S"."TYPE#"=10))
  42 - access("S"."TS#"="UN"."TS#" AND "S"."FILE#"="UN"."FILE#" AND "S"."BLOCK#"="UN"."BLOCK#")
       filter("S"."TS#"="TS"."TS#")
  43 - filter(("S"."TYPE#"=1 OR "S"."TYPE#"=10))
  44 - access("UN"."TS#"="F"."TS#" AND "UN"."FILE#"="F"."RELFILE#")
  45 - access("S"."FILE#"="F"."RELFILE#" AND "S"."TS#"="F"."TS#" AND "S"."TS#"="TS"."TS#")
  50 - access("TS"."NAME"='SYSAUX')
  52 - filter(("S"."TYPE#"<>6 AND "S"."TYPE#"<>5 AND "S"."TYPE#"<>8 AND "S"."TYPE#"<>10 AND
              "S"."TYPE#"<>11 AND "S"."TYPE#"<>1))
  53 - access("S"."TS#"="TS"."TS#" AND "S"."FILE#"="F"."RELFILE#")
       filter("S"."TS#"="F"."TS#")
  54 - filter(("S"."TYPE#"<>6 AND "S"."TYPE#"<>5 AND "S"."TYPE#"<>8 AND "S"."TYPE#"<>10 AND
              "S"."TYPE#"<>11 AND "S"."TYPE#"<>1))
  55 - access("S"."FILE#"="F"."RELFILE#" AND "S"."TS#"="F"."TS#" AND "S"."TS#"="TS"."TS#")
  60 - access("TS"."NAME"='SYSAUX')
  62 - filter("S"."TYPE#"=11)
  63 - access("S"."TS#"="TS"."TS#" AND "S"."FILE#"="F"."RELFILE#")
       filter("S"."TS#"="F"."TS#")
  64 - filter("S"."TYPE#"=11)
 
Note
-----
   - this is an adaptive plan (rows marked '-' are inactive)

Hope that helps.

Kerry



The second email:


Thanks, Kerry!
So as I understand it, a single execution can now create several child cursors with different final plans? (As many as there are collectors in the plan?)
Will they have different plan hash values?
And how will other sessions choose a child cursor during execution if many child cursors are created? Especially interesting is how we will analyze such plans through AWR if the statistics are split between several plan hash values…

And I said:

No – only one cursor is created. It can have multiple adaptations – i.e. there may be multiple places where a decision between NL and HJ is made – but in the end it settles on one plan and that's it. A new cursor will only be created if something more normal triggers a new cursor (adaptive cursor sharing, an optimizer environment change, cardinality feedback kicking in, etc.).

Kerry
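
One more note on the AWR question: 12c adds a couple of columns to v$sql that make it easy to see which cursors ended up with an adaptive plan and whether the final plan has been resolved yet, e.g.:

select sql_id, child_number, plan_hash_value, is_resolved_adaptive_plan, is_reoptimizable
from   v$sql
where  sql_id = '95stx63r9dc34';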

That’s it for now. In Part 2 I’ll address some questions regarding interaction with hints.

APPEND_VALUES and SYS_DL_CURSOR Hints with HCC

The APPEND_VALUES hint was introduced in 11.2 to allow direct path inserts with variables using the VALUES clause. i.e.

INSERT INTO XYZ (COL1, COL2) VALUES (:A, :B);

The feature was designed to allow bulk inserting via arrays of hundreds or thousands of records in a single insert statement. Prior to 11.2, there was no documented way to do a direct path insert other than with the APPEND hint, which only works on inserts that use a SELECT. i.e.

INSERT INTO XYZ SELECT * from ZYX;

There was, however, an undocumented hint (SYS_DL_CURSOR) which did a bulk insert of sorts (not a full append). You may have seen this hint if you use Informatica. I was recently asked about the use of Informatica with Hybrid Columnar Compression (HCC) on Exadata, which prompted a little research on these two methods of loading data and whether or not they are compatible with HCC. So first off, here's a test with the APPEND_VALUES hint, using my check_row_comp.sql script (a rough equivalent of which is sketched just below, in case you don't have it):
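
The check_row_comp.sql script just reports the compression type of a single row. If you don't have it, a rough equivalent can be built on dbms_compression.get_compression_type. This is only a sketch (the prompts and the decode below, which maps the 11.2 constants, are not lifted from my real script):

select decode(dbms_compression.get_compression_type('&owner', '&table_name', chartorowid('&rowid')),
              1,  'No Compression',
              2,  'Basic/OLTP Compression',
              4,  'HCC Query High',
              8,  'HCC Query Low',
              16, 'HCC Archive Low',
              32, 'HCC Archive High',
              'Unknown') compression_type
from dual;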

 
KSO@dbm1> !cat bulk_insert1.sql
 
CREATE TABLE t1
(c1    NUMBER,
 c2    NUMBER,
 c3    VARCHAR2(50),
 c4    VARCHAR2(50),
 c5    DATE,
 c6    DATE)
compress for query high;
 
DECLARE
  -- Define a collection
  TYPE t1_tbl_type IS TABLE OF t1%ROWTYPE;
  t1_tbl    t1_tbl_type := t1_tbl_type();
BEGIN
 
  -- Populate the collection
  FOR i IN 1..32000 LOOP
    t1_tbl.EXTEND;
    t1_tbl(i).c1 := i;
    t1_tbl(i).c2 := i*i;
    t1_tbl(i).c3 := 'i=' || TO_CHAR(i);
    t1_tbl(i).c4 := 'i*i=' || TO_CHAR(i*i);
    t1_tbl(i).c5 := SYSDATE;
    t1_tbl(i).c6 := SYSDATE;
  END LOOP;
 
  -- Bulk Insert the collection into table T1
  FORALL i IN 1..t1_tbl.COUNT
    INSERT /*+ append_values */ INTO t1
      (c1,
       c2,
       c3,
       c4,
       c5,
       c6)
    VALUES
      (t1_tbl(i).c1,
       t1_tbl(i).c2,
       t1_tbl(i).c3,
       t1_tbl(i).c4,
       t1_tbl(i).c5,
       t1_tbl(i).c6);
 
END;
/
 
KSO@dbm1> @bulk_insert1
 
Table created.
 
 
PL/SQL procedure successfully completed.
 
KSO@dbm1> select count(*) from t1;
select count(*) from t1
                     *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel
 
 
KSO@dbm1> -- correct behavior for direct path insert
KSO@dbm1> commit;
 
Commit complete.
 
KSO@dbm1> select rowid from t1 where rownum < 10;
 
ROWID
------------------
AAARlqAAIAAB/krAAA
AAARlqAAIAAB/krAAB
AAARlqAAIAAB/krAAC
AAARlqAAIAAB/krAAD
AAARlqAAIAAB/krAAE
AAARlqAAIAAB/krAAF
AAARlqAAIAAB/krAAG
AAARlqAAIAAB/krAAH
AAARlqAAIAAB/krAAI
 
9 rows selected.
 
KSO@dbm1> @check_row_comp.sql
Enter value for owner: KSO
Enter value for table_namr: T1
Enter value for rowid: AAARlqAAIAAB/krAAH
 
OLD_ROWID            COMPRESSION_TYPE
-------------------- -------------------------
8.522539.7           HCC Query High

So the APPEND_VALUES hint behaves as expected: it does a proper direct path insert and HCC is applied. There are a couple of corner cases worth mentioning, though. First, small inserts (a single row, for example) will not trigger the HCC processing. See the following example.

 
KSO@dbm1> insert /*+ append_values */ into t1 (c1) values (-1);
 
1 row created.
 
KSO@dbm1> select count(*) from t1 where c1 = -1;
select count(*) from t1 where c1 = -1
                     *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel
 
 
KSO@dbm1> commit;
 
Commit complete.
 
KSO@dbm1> select rowid from t1 where c1 = -1;
 
ROWID
------------------
AAARlqAAIAAEfDLAAA
 
KSO@dbm1> @check_row_comp.sql
Enter value for owner: KSO
Enter value for table_namr: T1
Enter value for rowid: AAARlqAAIAAEfDLAAA
 
OLD_ROWID            COMPRESSION_TYPE
-------------------- -------------------------
8.1175755.0          No Compression

So a single row insert was done in direct path mode, but HCC was not applied due to the small size of the insert.

The second corner case is that objects owned by SYS (and stored in the SYSTEM tablespace) appear to be ineligible for HCC processing. See the example below, which shows the same test with T1 owned by SYS. It also shows that an ALTER TABLE MOVE does not compress the rows when the object is owned by SYS and stored in SYSTEM. Note: there could be something else preventing the compression, but objects in SYSTEM that are not owned by SYS, and objects that are owned by SYS but not stored in SYSTEM, both seem to behave correctly. As Tanel says, "the rabbit hole always gets deeper". (I'll leave those tests as an exercise for the reader rather than clutter up this post.) Here's the simple case of creating a table in the SYSTEM tablespace that is owned by SYS.

SYS@dbm1> @bulk_insert1
 
Table created.
 
 
PL/SQL procedure successfully completed.
 
SYS@dbm1> select count(*) from t1;
select count(*) from t1
                     *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel
 
 
SYS@dbm1> commit;
 
Commit complete.
 
SYS@dbm1> select rowid from t1 where rownum < 10;
 
ROWID
------------------
AAARltAABAAAUnRAAA
AAARltAABAAAUnRAAB
AAARltAABAAAUnRAAC
AAARltAABAAAUnRAAD
AAARltAABAAAUnRAAE
AAARltAABAAAUnRAAF
AAARltAABAAAUnRAAG
AAARltAABAAAUnRAAH
AAARltAABAAAUnRAAI
 
9 rows selected.
 
SYS@dbm1> @check_row_comp.sql
Enter value for owner: SYS
Enter value for table_namr: T1
Enter value for rowid: AAARltAABAAAUnRAAG
 
OLD_ROWID            COMPRESSION_TYPE
-------------------- -------------------------
1.84433.6            No Compression
 
SYS@dbm1> @table_size
Enter value for owner: SYS
Enter value for table_name: T1
Enter value for type: 
Enter value for tablespace_name: 
 
OWNER                SEGMENT_NAME                   TYPE               TOTALSIZE_MEGS TABLESPACE_NAME
-------------------- ------------------------------ ------------------ -------------- ------------------------------
SYS                  T1                             TABLE                         2.0 SYSTEM
                                                                       --------------
sum                                                                               2.0
 
SYS@dbm1> alter table sys.t1 move compress for archive high;
 
Table altered.
 
SYS@dbm1> @table_size
Enter value for owner: SYS
Enter value for table_name: T1
Enter value for type: 
Enter value for tablespace_name: 
 
OWNER                SEGMENT_NAME                   TYPE               TOTALSIZE_MEGS TABLESPACE_NAME
-------------------- ------------------------------ ------------------ -------------- ------------------------------
SYS                  T1                             TABLE                         2.0 SYSTEM
                                                                       --------------
sum                                                                               2.0
 
SYS@dbm1> select rowid from t1 where rownum < 10;
 
ROWID
------------------
AAARluAABAAAXNRAAA
AAARluAABAAAXNRAAB
AAARluAABAAAXNRAAC
AAARluAABAAAXNRAAD
AAARluAABAAAXNRAAE
AAARluAABAAAXNRAAF
AAARluAABAAAXNRAAG
AAARluAABAAAXNRAAH
AAARluAABAAAXNRAAI
 
9 rows selected.
 
SYS@dbm1> @check_row_comp.sql
Enter value for owner: SYS
Enter value for table_namr: T1
Enter value for rowid: AAARluAABAAAXNRAAG
 
OLD_ROWID            COMPRESSION_TYPE
-------------------- -------------------------
1.95057.6            No Compression

So again, in this case, it is clearly a direct path insert, but HCC is not applied.

So what about the older SYS_DL_CURSOR hint? Well, the short story is that it doesn't do a real direct path load, so it doesn't work with HCC. Here's a quick demonstration (back in a regular user account).

KSO@dbm1> !cat bulk_insert2.sql
CREATE TABLE t1
(c1    NUMBER,
 c2    NUMBER,
 c3    VARCHAR2(50),
 c4    VARCHAR2(50),
 c5    DATE,
 c6    DATE)
compress for query high;
 
DECLARE
  -- Define a collection
  TYPE t1_tbl_type IS TABLE OF t1%ROWTYPE;
  t1_tbl    t1_tbl_type := t1_tbl_type();
BEGIN
 
  -- Populate the collection
  FOR i IN 1..32000 LOOP
    t1_tbl.EXTEND;
    t1_tbl(i).c1 := i;
    t1_tbl(i).c2 := i*i;
    t1_tbl(i).c3 := 'i=' || TO_CHAR(i);
    t1_tbl(i).c4 := 'i*i=' || TO_CHAR(i*i);
    t1_tbl(i).c5 := SYSDATE;
    t1_tbl(i).c6 := SYSDATE;
  END LOOP;
 
  -- Bulk Insert the collection into table T1
  FORALL i IN 1..t1_tbl.COUNT
    INSERT /*+ sys_dl_cursor */ INTO t1
      (c1,
       c2,
       c3,
       c4,
       c5,
       c6)
    VALUES
      (t1_tbl(i).c1,
       t1_tbl(i).c2,
       t1_tbl(i).c3,
       t1_tbl(i).c4,
       t1_tbl(i).c5,
       t1_tbl(i).c6);
 
END;
/
 
KSO@dbm1> drop table t1;
 
Table dropped.
 
KSO@dbm1> @bulk_insert2
 
Table created.
 
 
PL/SQL procedure successfully completed.
 
KSO@dbm1> select count(*) from t1;
 
  COUNT(*)
----------
     32000
 
KSO@dbm1> -- Not a good sign 
KSO@dbm1> -- direct path should not allow a select without a commit or rollback
KSO@dbm1> -- let's check anyway
KSO@dbm1> select rowid from t1 where rownum < 10;
 
ROWID
------------------
AAARlzAAIAAB/krAAA
AAARlzAAIAAB/krAAB
AAARlzAAIAAB/krAAC
AAARlzAAIAAB/krAAD
AAARlzAAIAAB/krAAE
AAARlzAAIAAB/krAAF
AAARlzAAIAAB/krAAG
AAARlzAAIAAB/krAAH
AAARlzAAIAAB/krAAI
 
9 rows selected.
 
KSO@dbm1> @check_row_comp.sql
Enter value for owner: KSO
Enter value for table_namr: T1
Enter value for rowid: AAARlzAAIAAB/krAAG
 
OLD_ROWID            COMPRESSION_TYPE
-------------------- -------------------------
8.522539.6           No Compression

So no love with SYS_DL_CURSOR. Of course you can still do the more conventional processing: load the data, then compress it with an ALTER TABLE MOVE or a partition exchange. As always, please let me know if you have any questions or comments.
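
For example, a load via SYS_DL_CURSOR (or any conventional insert) followed by something along these lines would get the rows compressed after the fact (not run here, just the standard move approach):

alter table t1 move compress for query high;
-- then re-check a row with check_row_comp.sql to confirm the compression type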

Enkitec is an Anagram for Necktie

 


 

I work for a small company named Enkitec. I've been asked several times recently where the name Enkitec came from. In fact, someone told me just today that they had heard that "Enkitec is an anagram for necktie, which guys that work for Enkitec are universally reluctant to wear". While that statement is true, it's not the reason the company was named Enkitec. Someone just happened to notice it a couple of years after the company was formed and mentioned it in a presentation, and thus the myth was born. The real story is that we were trying to figure out what to call the company and couldn't come up with anything we liked (with a matching domain name that was still available). So we did what all marketing-challenged people do: we hired a freelance marketing wiz kid to come up with some ideas for us. We met with him a few times over a couple of weeks and eventually settled on Enkitec. It means absolutely nothing, by the way, but the domain name was available! :)

The thing that sticks out in my memory, though, is that the marketing guy wore these dark-rimmed glasses that made him look smart. One day I made some comment about how they improved his image and said something about how, if I were the suspicious type, I would think they were just for show. He then proceeded to admit that he didn't actually need glasses and that the lenses were just plain glass. One of my favorite all-time marketing stories!


System Statistics – “Exadata” Mode

Here's a very quick note on a relatively recent addition to System Statistics: the so-called "Exadata mode". This mode is intended to give the optimizer a little more info about the storage system on the Exadata platform. Here's some info from an MOS note.

——————————————————————————————
Oracle Sun Database Machine Setup/Configuration Best Practices [ID 1274318.1]

Verify Exadata specific optimizer statistics have been gathered

 
Benefit / Impact

Gathering Exadata specific system statistics ensures the optimizer is aware of Exadata scan speed. Accurately accounting for the speed of scan operations will ensure the optimizer chooses an optimal execution plan in an Exadata environment. The following command gathers Exadata specific system statistics:

exec dbms_stats.gather_system_stats('EXADATA');

Risk:

Lack of Exadata specific stats can lead to less performant optimizer plans.

Action / Repair:

To see if Exadata specific optimizer stats have been gathered, run the following query on a system with at least 11.2.0.2 BP18 or 11.2.0.3 BP8 Oracle software. If PVAL1 returns null or is not set, Exadata specific stats have not been gathered.

select pname, PVAL1 from aux_stats$ where pname='MBRC';

——————————————————————————————-

And here’s an example of gathering Exadata mode system stats on a real Exadata as performed by a real Enkitec employee.

 
SYS@dbm1> exec dbms_stats.delete_system_stats;
 
PL/SQL procedure successfully completed.
 
Elapsed: 00:00:00.00
SYS@dbm1> @system_stats
 
PNAME                               PVAL1
------------------------------ ----------
STATUS
DSTART
DSTOP
FLAGS                                   0
CPUSPEEDNW                           2797
IOSEEKTIM                              10
IOTFRSPEED                           4096
SREADTIM
MREADTIM
CPUSPEED
MBRC
MAXTHR
SLAVETHR
 
13 rows selected.
 
Elapsed: 00:00:00.00
SYS@dbm1> exec dbms_stats.gather_system_stats('EXADATA');
 
PL/SQL procedure successfully completed.
 
Elapsed: 00:01:45.51
SYS@dbm1> select  pname, pval1 from sys.aux_stats$;
 
PNAME                               PVAL1
------------------------------ ----------
STATUS
DSTART
DSTOP
FLAGS                                   1
CPUSPEEDNW                           2832
IOSEEKTIM                               9
IOTFRSPEED                          88419
SREADTIM
MREADTIM
CPUSPEED
MBRC                                  128
MAXTHR
SLAVETHR
 
13 rows selected.
Elapsed: 00:00:00.00

So just a couple of quick notes. First, gathering "Exadata" mode system stats does spend some time gathering NOWORKLOAD stats (1:45 in this case), but it also sets MBRC explicitly, to 128 in this case. Note that 128 is not hardcoded; it actually comes from the db parameter db_file_multiblock_read_count, which in our case defaults to 128 (see the comment from Chris Antognini below). Increasing MBRC makes sense because it tends to push the optimizer toward full table scans, which can obviously be a lot faster on Exadata due to Smart Scan offloading. The default used for costing is 8, and pushing it to 128 does make full scans more attractive to the optimizer. Note also that IOTFRSPEED was dramatically increased (although this doesn't always happen).

Here’s the section from a 10053 (Wolfgang) trace just for verification.

-----------------------------
SYSTEM STATISTICS INFORMATION
-----------------------------
  Using NOWORKLOAD Stats
  CPUSPEEDNW: 2832 millions instructions/sec (default is 100)
  IOTFRSPEED: 88419 bytes per millisecond (default is 4096)
  IOSEEKTIM: 9 milliseconds (default is 10)
  MBRC: 128 blocks (default is 8)
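
For what it's worth, if you plug those values into the commonly quoted noworkload formulas (my assumption here: sreadtim = ioseektim + db_block_size/iotfrspeed and mreadtim = ioseektim + mbrc * db_block_size/iotfrspeed, with an 8K block size), the synthesized read times come out roughly like this:

sreadtim = 9 + 8192 / 88419              ~  9.1 ms
mreadtim = 9 + (128 * 8192) / 88419      ~ 20.9 ms

So a 128-block multiblock read is costed at only a little over twice the cost of a single block read, which is a big part of what makes full scans look so much cheaper than they do with the default values.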

I do think that Exadata mode system stats should be set (collected) on new Exadata implementations.

12c – parallel_degree_level (control for auto DOP)

I heard JP Dijcks speak at RMOUG in 2012 about a new parameter that would show up in 12c called parallel_degree_level. It's basically a knob you can turn to dial the calculated DOP up (or down) when parallel_degree_policy is set to AUTO. Early on (11.2.0.1), auto DOP seemed to vastly overestimate what the DOP should be. In later versions (11.2.0.3), it often seems to underestimate it. I've said in the past that I thought auto DOP was too hard to control and thus too scary for production systems. I've also said that I thought auto DOP was the wave of the future. This parameter alone may make it possible to use the feature in production, because it gives us the ability to dial in the level of parallelism that works for our system. So here's a quick demo:

SYS@db12c1> @parms
Enter value for parameter: parallel_degree
Enter value for isset: 
Enter value for show_hidden: 
 
NAME                                               VALUE                                                                  ISDEFAUL ISMODIFIED ISSET
-------------------------------------------------- ---------------------------------------------------------------------- -------- ---------- ----------
parallel_degree_level                              100                                                                    TRUE     FALSE      FALSE
parallel_degree_limit                              16                                                                     FALSE    FALSE      TRUE
parallel_degree_policy                             AUTO                                                                   FALSE    TRUE       TRUE
 
3 rows selected.
 
Elapsed: 00:00:00.05
SYS@db12c1> alter session set parallel_degree_policy=auto;
 
Session altered.
 
Elapsed: 00:00:00.00
SYS@db12c1> select count(*) from kso.TT_CLUSTER_ONAME;
 
    COUNT(*)
------------
    79429632
 
1 row selected.
 
Elapsed: 00:00:01.96
SYS@db12c1> @x
 
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  apvrg0vpxxw8k, child number 2
-------------------------------------
select count(*) from kso.TT_CLUSTER_ONAME
 
Plan hash value: 2036413816
 
--------------------------------------------------------------------
| Id  | Operation                      | Name             | E-Rows |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                  |        |
|   1 |  SORT AGGREGATE                |                  |      1 |
|   2 |   PX COORDINATOR               |                  |        |
|   3 |    PX SEND QC (RANDOM)         | :TQ10000         |      1 |
|   4 |     SORT AGGREGATE             |                  |      1 |
|   5 |      PX BLOCK ITERATOR         |                  |     79M|
|*  6 |       TABLE ACCESS STORAGE FULL| TT_CLUSTER_ONAME |     79M|
--------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   6 - storage(:Z>=:Z AND :Z<=:Z)
 
Note
-----
   - automatic DOP: Computed Degree of Parallelism is 14
   - parallel scans affinitized
 
31 rows selected.
 
Elapsed: 00:00:00.02
SYS@db12c1> alter session set parallel_degree_level=10;
 
Session altered.
 
Elapsed: 00:00:00.01
SYS@db12c1> select count(*) from kso.TT_CLUSTER_ONAME;
 
    COUNT(*)
------------
    79429632
 
1 row selected.
 
Elapsed: 00:00:19.95
SYS@db12c1> @x
 
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  apvrg0vpxxw8k, child number 4
-------------------------------------
select count(*) from kso.TT_CLUSTER_ONAME
 
Plan hash value: 2036413816
 
------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name             | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                  |       |   174K(100)|          |        |      |            |
|   1 |  SORT AGGREGATE                |                  |     1 |            |          |        |      |            |
|   2 |   PX COORDINATOR               |                  |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)         | :TQ10000         |     1 |            |          |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE             |                  |     1 |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR         |                  |    79M|   174K  (1)| 00:00:07 |  Q1,00 | PCWC |            |
|*  6 |       TABLE ACCESS STORAGE FULL| TT_CLUSTER_ONAME |    79M|   174K  (1)| 00:00:07 |  Q1,00 | PCWP |            |
------------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   6 - storage(:Z>=:Z AND :Z<=:Z)
 
Note
-----
   - automatic DOP: Computed Degree of Parallelism is 2
   - parallel scans affinitized
 
 
31 rows selected.
 
Elapsed: 00:00:00.09
SYS@db12c1> alter session set parallel_degree_level=100;
 
Session altered.
 
Elapsed: 00:00:00.00
SYS@db12c1> select count(*) from kso.TT_CLUSTER_ONAME;
 
    COUNT(*)
------------
    79429632
 
1 row selected.
 
Elapsed: 00:00:04.07
SYS@db12c1> @x
Enter value for sql_id: apvrg0vpxxw8k
Enter value for child_no: 
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  apvrg0vpxxw8k, child number 2
-------------------------------------
select count(*) from kso.TT_CLUSTER_ONAME
 
Plan hash value: 2036413816
 
------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name             | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                  |       | 24875 (100)|          |        |      |            |
|   1 |  SORT AGGREGATE                |                  |     1 |            |          |        |      |            |
|   2 |   PX COORDINATOR               |                  |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)         | :TQ10000         |     1 |            |          |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE             |                  |     1 |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR         |                  |    79M| 24875   (1)| 00:00:01 |  Q1,00 | PCWC |            |
|*  6 |       TABLE ACCESS STORAGE FULL| TT_CLUSTER_ONAME |    79M| 24875   (1)| 00:00:01 |  Q1,00 | PCWP |            |
------------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   6 - storage(:Z>=:Z AND :Z<=:Z)
 
Note
-----
   - automatic DOP: Computed Degree of Parallelism is 14
   - parallel scans affinitized
 
 
31 rows selected.
 
Elapsed: 00:00:00.09
SYS@db12c1> alter session set parallel_degree_level=200;
 
Session altered.
 
SYS@db12c1> select count(*) from kso.TT_CLUSTER_ONAME;
 
  COUNT(*)
----------
  79429632
 
Elapsed: 00:00:00.59
SYS@db12c1> @x
 
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  apvrg0vpxxw8k, child number 5
-------------------------------------
select count(*) from kso.TT_CLUSTER_ONAME
 
Plan hash value: 2036413816
 
--------------------------------------------------------------------
| Id  | Operation                      | Name             | E-Rows |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                  |        |
|   1 |  SORT AGGREGATE                |                  |      1 |
|   2 |   PX COORDINATOR               |                  |        |
|   3 |    PX SEND QC (RANDOM)         | :TQ10000         |      1 |
|   4 |     SORT AGGREGATE             |                  |      1 |
|   5 |      PX BLOCK ITERATOR         |                  |     79M|
|*  6 |       TABLE ACCESS STORAGE FULL| TT_CLUSTER_ONAME |     79M|
--------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   6 - storage(:Z>=:Z AND :Z<=:Z)
 
Note
-----
   - automatic DOP: Computed Degree of Parallelism is 16 because of degree limit
   - parallel scans affinitized
 
 
31 rows selected.
 
Elapsed: 00:00:00.12
SYS@db12c1> alter session set parallel_degree_limit=32;
 
Session altered.
 
Elapsed: 00:00:00.00
SYS@db12c1> select count(*) from kso.TT_CLUSTER_ONAME;
 
  COUNT(*)
----------
  79429632
 
Elapsed: 00:00:07.53
SYS@db12c1> @x
 
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  apvrg0vpxxw8k, child number 6
-------------------------------------
select count(*) from kso.TT_CLUSTER_ONAME
 
Plan hash value: 2036413816
 
--------------------------------------------------------------------
| Id  | Operation                      | Name             | E-Rows |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                  |        |
|   1 |  SORT AGGREGATE                |                  |      1 |
|   2 |   PX COORDINATOR               |                  |        |
|   3 |    PX SEND QC (RANDOM)         | :TQ10000         |      1 |
|   4 |     SORT AGGREGATE             |                  |      1 |
|   5 |      PX BLOCK ITERATOR         |                  |     79M|
|*  6 |       TABLE ACCESS STORAGE FULL| TT_CLUSTER_ONAME |     79M|
--------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   6 - storage(:Z>=:Z AND :Z<=:Z)
 
Note
-----
   - automatic DOP: Computed Degree of Parallelism is 28
   - parallel scans affinitized
 
 
31 rows selected.
 
Elapsed: 00:00:00.06

So as you can see, parallel_degree_level is basically a percentage. The default is 100; setting it to 10 cuts the calculated DOP to roughly 10% of what it would otherwise be, while setting it to 200 doubles it (still subject to parallel_degree_limit, as the example shows).
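
Using the numbers from the demo above as a rough sanity check:

base calculation (parallel_degree_level=100)  ->  computed DOP 14
parallel_degree_level=10   ->  14 * 10/100  ~ 1.4, and the computed DOP came out as 2
parallel_degree_level=200  ->  14 * 200/100 = 28, initially capped at 16 by
                               parallel_degree_limit, then 28 once the limit was raised to 32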

So just to reiterate, the auto DOP calculations have gotten progressively better over the last couple of years, but I think the simple addition of this new parameter makes it a much more palatable option.