Archive for January 2011

storage(:Z>=:Z AND :Z

Ever wonder why that weird line shows up in the Predicate Information section of an execution plan on Exadata? Me too! The STORAGE keyword tells us it’s a filter applied at the storage cell layer, but the rest looks nonsensical. Well, I recently ran across a very brief mention of it in a Metalink note. (I know it’s not called Metalink any more, but I’m kind of set in my ways.) The note said it was related to the distribution of rows to PX slaves. Ah ha! Let’s test it out. Here’s a plan with the predicate, just so you can see what it looks like.

KSO@arcp> @dplan
Enter value for sql_id: a9axwj6ym3b29
Enter value for child_no: 

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  a9axwj6ym3b29, child number 0
-------------------------------------
select /*+  parallel_index(t, "ISD_SI_I03",8)  dbms_stats
cursor_sharing_exact use_weak_name_resl dynamic_sampling(0)
no_monitoring no_substrb_pad  no_expand index_ffs(t,"ISD_SI_I03") */
count(*) as nrw,count(distinct sys_op_lbid(725425,'L',t.rowid)) as
nlb,count(distinct hextoraw(sys_op_descend("SUPPORT_LEVEL")||sys_op_desc
end("SUPPORT_LEVEL_KEY")||sys_op_descend("NAME")||sys_op_descend("VALUE"
))) as ndk,sys_op_countchg(substrb(t.rowid,1,15),1) as clf from
"ISD"."SUPPORT_INFO" sample block (   .0503530742,1)  t where
"SUPPORT_LEVEL" is not null or "SUPPORT_LEVEL_KEY" is not null or
"NAME" is not null or "VALUE" is not null

Plan hash value: 1766555783

---------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                            |            |       |       |   672 (100)|          |        |      |            |
|   1 |  SORT GROUP BY                              |            |     1 |    56 |            |          |        |      |            |
|   2 |   PX COORDINATOR                            |            |       |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)                      | :TQ10001   |     1 |    56 |            |          |  Q1,01 | P->S | QC (RAND)  |
|   4 |     SORT GROUP BY                           |            |     1 |    56 |            |          |  Q1,01 | PCWP |            |
|   5 |      PX RECEIVE                             |            |     1 |    56 |            |          |  Q1,01 | PCWP |            |
|   6 |       PX SEND HASH                          | :TQ10000   |     1 |    56 |            |          |  Q1,00 | P->P | HASH       |
|   7 |        SORT GROUP BY                        |            |     1 |    56 |            |          |  Q1,00 | PCWP |            |
|   8 |         PX BLOCK ITERATOR                   |            |   397K|    21M|   672   (1)| 00:00:01 |  Q1,00 | PCWC |            |
|*  9 |          INDEX STORAGE SAMPLE FAST FULL SCAN| ISD_SI_I03 |   397K|    21M|   672   (1)| 00:00:01 |  Q1,00 | PCWP |            |
---------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   9 - storage(:Z>=:Z AND :Z<=:Z AND ("SUPPORT_LEVEL" IS NOT NULL OR "SUPPORT_LEVEL_KEY" IS NOT NULL OR "NAME" IS NOT NULL OR
              "VALUE" IS NOT NULL))
       filter(("SUPPORT_LEVEL" IS NOT NULL OR "SUPPORT_LEVEL_KEY" IS NOT NULL OR "NAME" IS NOT NULL OR "VALUE" IS NOT NULL))


37 rows selected.

Notice that the plan is for a PX statement. So how can we convince ourselves that this is a PX-related predicate? We can try two tests.

  1. Let's see if we can find any SQL statements that have the predicate that aren't PX statements.
  2. Let's see if we can find any PX statements that don't have the predicate.

So here we go.
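Here’s a rough sketch of how those two checks might be run against the library cache. It’s a minimal sketch and makes one assumption that isn’t from the original post: that the :Z predicate is stored in the ACCESS_PREDICATES or FILTER_PREDICATES columns of V$SQL_PLAN.

-- Test 1: cursors whose plans contain the :Z>=:Z predicate but no PX operations
select distinct p.sql_id
from   v$sql_plan p
where  (p.access_predicates like '%:Z>=:Z%' or p.filter_predicates like '%:Z>=:Z%')
and    not exists (select null from v$sql_plan px
                   where  px.sql_id = p.sql_id
                   and    px.child_number = p.child_number
                   and    px.operation like 'PX%');

-- Test 2: PX cursors whose plans do not contain the :Z>=:Z predicate anywhere
select distinct p.sql_id
from   v$sql_plan p
where  p.operation like 'PX%'
and    not exists (select null from v$sql_plan z
                   where  z.sql_id = p.sql_id
                   and    z.child_number = p.child_number
                   and    (z.access_predicates like '%:Z>=:Z%'
                       or  z.filter_predicates like '%:Z>=:Z%'));

If the predicate really is PX-related, the first query should come back empty.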
Continue reading ‘storage(:Z>=:Z AND :Z’ »

EHCC and the GET_COMPRESSION_TYPE function

Well, I turned in the HCC chapter of the Exadata book last week and, as is usually the case, I immediately ran across something kind of cool on the topic I had just finished writing about. (We still have several editing passes to go, so I can add it later.) Anyway, although I don’t have time to post much these days, I thought this one would make a quick little snippet. So here it is.

The Compression Advisor is part of the DBMS_COMPRESSION package; specifically, it is the GET_COMPRESSION_RATIO procedure. That procedure is worthy of a separate post, but I won’t discuss it here except to say that as of 11.2.0.2 you can use it to test HCC compression ratios on non-Exadata platforms. That’s pretty cool, but what I wanted to tell you about is a handy little function in the same package called GET_COMPRESSION_TYPE. This function can tell you exactly what level of compression has been applied to a single row. That can come in handy for investigating the inner workings of HCC (or OLTP or BASIC compression, for that matter).

As you probably already know, HCC is only applied to records loaded via direct path writes. Any updates cause rows to be migrated out of that storage format into blocks flagged for OLTP compression. Of course, OLTP compression on a block only kicks in when the block is “full”. On top of this, altering a table to change its compression setting does not actually change the storage format of any existing records (unless you use the MOVE keyword). So you could load some data and then change the designation (say from QUERY LOW to QUERY HIGH); rows inserted after the change will be stored in the new format (assuming they are loaded via direct path writes, of course). So why am I telling you all this? Well, because I ran across a statement in some Oracle documentation saying that you can check which compression method a table is stored with by looking at the COMPRESS_FOR column in the DBA_TABLES view. That column does reveal what the table’s designation is. However, the setting only tells you how rows inserted in the future will be compressed. It tells you absolutely nothing about the way current rows are stored.
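The table-level designation is easy enough to see. A quick illustration (owner and table name are placeholders):

-- Shows only the table-level setting, i.e. how future direct path loads
-- will be compressed. It says nothing about how existing rows are stored.
select owner, table_name, compression, compress_for
from   dba_tables
where  owner = 'ISD'
and    table_name = 'SUPPORT_INFO';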

As for the mechanics, it appears that each row has a bitmask associated with it showing what compression format is being used. So I wrote a little script to give me what I want to see (check_row_comp.sql) using the DBMS_COMPRESSION.GET_COMPRESSION_TYPE function. Here’s an example of its use.
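Not the actual check_row_comp.sql, but a minimal sketch of the underlying call might look like this (the owner, table name, and row selection are placeholders; the constant names are the ones documented for DBMS_COMPRESSION in 11.2):

set serveroutput on
declare
  l_rowid rowid;
  l_type  number;
begin
  -- grab one rowid from the table of interest (placeholder names)
  select rowid into l_rowid from isd.support_info where rownum = 1;

  -- ask what compression format that particular row is stored in
  l_type := dbms_compression.get_compression_type('ISD', 'SUPPORT_INFO', l_rowid);

  dbms_output.put_line(
    case l_type
      when dbms_compression.comp_nocompress       then 'No compression'
      when dbms_compression.comp_for_oltp         then 'OLTP compression'
      when dbms_compression.comp_for_query_low    then 'HCC QUERY LOW'
      when dbms_compression.comp_for_query_high   then 'HCC QUERY HIGH'
      when dbms_compression.comp_for_archive_low  then 'HCC ARCHIVE LOW'
      when dbms_compression.comp_for_archive_high then 'HCC ARCHIVE HIGH'
      else 'Unknown type: ' || l_type
    end);
end;
/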

Continue reading ‘EHCC and the GET_COMPRESSION_TYPE function’ »

EHCC Mechanics – Proof that whole CU’s are not decompressed

I saw an interesting post recently where Greg Rahn talked about HCC mechanics. He claimed that an update to a record stored in HCC format did not require decompressing the whole Compression Unit (CU), which consists of several Oracle blocks. I’m assuming he meant that all the records contained in the CU did not get written back to storage in a non-HCC format due to a single record being updated. Greg then showed an example proving that row migration occurred for an updated record. He didn’t show that the other records had not been decompressed, though. So since I was already working on an HCC chapter for the upcoming Apress Exadata book, I thought I would take time off from the book writing to post this (hopefully the editors will forgive me).

Here’s the recipe: basically we’ll update a single row, see that its rowid has changed, verify that we can still get to the record via its original rowid, and check whether the TABLE FETCH CONTINUED ROW statistic gets updated when we access the row via its original rowid, thus proving basic row migration (this is what Greg has already shown). Then we’ll look at block dumps for the original and new blocks to see what’s there.
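A rough sketch of the first part of that recipe (table, column, and rowid values are placeholders; the block dump step is covered in the rest of the post):

-- note the row's original rowid
select rowid from hcc_test where id = 1;

update hcc_test set pad = 'updated' where id = 1;
commit;

-- the rowid reported for the row should now be different
select rowid from hcc_test where id = 1;

-- check the migration statistic before and after fetching by the old rowid
select n.name, s.value
from   v$statname n, v$mystat s
where  n.statistic# = s.statistic#
and    n.name = 'table fetch continued row';

-- the row is still reachable via its original rowid (placeholder value here)
select * from hcc_test where rowid = 'AAAWbzAAEAAAACXAAA';

-- re-run the statistic query; an increase indicates the fetch followed a
-- migrated row pointer
select n.name, s.value
from   v$statname n, v$mystat s
where  n.statistic# = s.statistic#
and    n.name = 'table fetch continued row';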

Continue reading ‘EHCC Mechanics – Proof that whole CU’s are not decompressed’ »

Licensing Requirements for SQL Profiles

Jonathan Lewis reminded me on my last post that using SQL Profiles (because they are part of the SQL Tuning Advisor) requires a license for the Oracle Tuning Pack (which in turn requires a license for the Diagnostics Pack). He also mentioned that Baselines do not require any additional license (at least creating and using Baselines on SQL statements). It’s been a while since I worked on a database that didn’t have both packs, and frankly I wasn’t sure I had a good handle on what was allowed and what wasn’t, so I thought it might be worthwhile to check. There is an easy way to do that, by the way. I did a post a while back on the Tuning Pack and Diagnostic Pack license requirements for running AWR, and on how to check what is enabled using the CONTROL_MANAGEMENT_PACK_ACCESS parameter. Here’s a link to that post:

Oracle Management Packs

Here’s an example using the same technique to show that SQL Profiles are indeed disabled by turning off the Diagnostic and Tuning Packs (at least on 11.2.0.2).
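A minimal sketch of the technique follows. The IMPORT_SQL_PROFILE call is just one (undocumented) way to create a profile and is not necessarily the one used in the full example; the statement text, hint, and profile name are placeholders.

-- CONTROL_MANAGEMENT_PACK_ACCESS is dynamic, so no restart is required
alter system set control_management_pack_access = 'NONE';

show parameter control_management_pack_access

-- with the packs disabled, creating a SQL Profile should fail with a
-- licensing error rather than silently succeeding
begin
  dbms_sqltune.import_sql_profile(
    sql_text => 'select count(*) from dual',
    profile  => sqlprof_attr('FULL(@"SEL$1" "DUAL"@"SEL$1")'),
    name     => 'LICENSE_CHECK_PROFILE');
end;
/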

Continue reading ‘Licensing Requirements for SQL Profiles’ »

SQL Profiles Disable Automatic Dynamic Sampling

I had an interesting email exchange last week with a fellow Oracle practitioner, Bryan Grenn, about differences between SQL Profiles and Baselines. Basically, Bryan observed that SQL Profiles appeared to disable Dynamic Sampling on Global Temporary Tables while Baselines did not. This caused the optimizer’s cost calculations (and therefore the estimated elapsed runtimes) to be incorrect – sometimes grossly incorrect. So I did a little follow-up testing with a GTT. I used two test cases, inserting either one row or 100,000 rows into the GTT. With Dynamic Sampling the plans were different (as you might expect). I then tested with Profiles to see how the statement behaved. Here is some of the output I generated during the test (note that this test was done on Exadata, but non-Exadata environments exhibit the same behavior):
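A minimal sketch of that kind of test setup (object names are made up; the Profile/Baseline comparison and the actual output are in the rest of the post):

-- GTT with no gathered stats, so the optimizer falls back to dynamic sampling
create global temporary table gtt_test (id number, pad varchar2(100))
on commit preserve rows;

-- case 1: one row
insert into gtt_test values (1, rpad('x', 100, 'x'));
commit;
select count(*) from gtt_test t where t.id > 0;
select * from table(dbms_xplan.display_cursor);
-- the Note section should report "dynamic sampling used for this statement"

-- case 2: 100,000 rows
truncate table gtt_test;
insert into gtt_test
select rownum, rpad('x', 100, 'x') from dual connect by level <= 100000;
commit;
select count(*) from gtt_test t where t.id > 0;
select * from table(dbms_xplan.display_cursor);
-- the cardinality estimate should track the much larger row count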
Continue reading ‘SQL Profiles Disable Automatic Dynamic Sampling’ »