SQL Translation Framework

My favorite new Oracle Database 12c feature is the SQL Translation Framework. The feature grew out of SQL Developer’s ability to translate SQL from non-Oracle RDBMSs. For example, there is a pre-built Sybase ASE translation package designed to translate the Sybase dialect of SQL into the Oracle dialect. That’s the use case the feature was built for, but the developers decided to move it into the database and to allow us to write our own translations, which opens up a whole world of possibilities.

The first thought that occurred to me when I saw this feature listed in the 12c New Features doc was that I might be able to use it to fix badly written SQL behind the scenes. I’ve written and talked quite a bit about using hint-based mechanisms (Outlines, SQL Profiles, Baselines, and SQL Patches) to alter execution plans without having to change the code. Those techniques work great most of the time, but there are cases where hints alone can’t fix the problem. In some cases it is necessary to change the SQL statement text to get the desired results, and the SQL Translation Framework gives us the tool kit we need to do just that. By the way, although I do have a tendency to use Oracle features for purposes for which they were not originally intended, in this case I think the developers knew full well that the feature could be used to address performance issues by rewriting SQL. As proof, here is a snippet from the 12c Release 1 Migration Guide:

In addition to translating non-Oracle SQL statements, the SQL Translation Framework can also be used to substitute an Oracle SQL statement with another Oracle statement to address a semantic or a performance issue. In this way, you can address an application issue without patching the client application.

So let’s dive in. There are two main components to the framework. The first is a PL/SQL package that programmatically translates code (called the Translator in the docs). The second is a set of maps for individual SQL statements (called a SQL Translation Profile). There are a few requirements to use this feature.

1. You must create a SQL Translation Profile (using dbms_sql_translator.create_profile)
2. You must assign a session to use the Translation Profile (generally with an alter session command)
3. You must set the 10601 system event

While the Translation Profile is required, it does not have to be assigned a Translator. In other words, you can map individual statements without writing a PL/SQL package. Of course, if you have a system with a lot of problems caused by the same coding pattern, you could potentially use the framework to rewrite those statements on the fly.
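For the curious, here’s a rough sketch of what a custom Translator might look like. The package name and the rewrite rule are my own inventions, but the translate_sql/translate_error signatures are the ones the framework expects, and returning a null translation tells the framework to leave the statement unchanged.

```sql
-- Hypothetical translator package (names are mine, not from the docs)
create or replace package fix_bad_sql as
  procedure translate_sql(sql_text in clob, translated_text out clob);
  procedure translate_error(error_code           in  binary_integer,
                            translated_code      out binary_integer,
                            translated_sql_state out varchar2);
end fix_bad_sql;
/

create or replace package body fix_bad_sql as
  procedure translate_sql(sql_text in clob, translated_text out clob) is
  begin
    -- Rewrite statements matching a known-bad coding pattern on the fly
    if sql_text like '%some_bad_pattern%' then
      translated_text := replace(sql_text, 'some_bad_pattern', 'better_pattern');
    else
      translated_text := null;  -- null means "no translation, run as-is"
    end if;
  end translate_sql;

  procedure translate_error(error_code           in  binary_integer,
                            translated_code      out binary_integer,
                            translated_sql_state out varchar2) is
  begin
    translated_code      := error_code;  -- pass errors through unchanged
    translated_sql_state := null;
  end translate_error;
end fix_bad_sql;
/

-- Attach the translator to an existing profile
exec dbms_sql_translator.set_attribute('FOO', dbms_sql_translator.attr_translator, 'SYS.FIX_BAD_SQL')
```

With a translator attached, any statement the session submits gets a chance to be rewritten, not just the ones explicitly registered with register_sql_translation.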

Here’s a quick example for a simple case of mapping individual statements.

SYS@LAB1211> exec dbms_sql_translator.create_profile('FOO');

PL/SQL procedure successfully completed.

SYS@LAB1211> select object_name, object_type from dba_objects where object_name like 'FOO';

OBJECT_NAME                    OBJECT_TYPE
------------------------------ -----------------------
FOO                            SQL TRANSLATION PROFILE

SYS@LAB1211> exec dbms_sql_translator.register_sql_translation('FOO','select count(*) from hr.countries','select count(*) from hr.jobs');

PL/SQL procedure successfully completed.

SYS@LAB1211> exec dbms_sql_translator.register_sql_translation('FOO','select count(*) from countries','select count(*) from jobs');

PL/SQL procedure successfully completed.

SYS@LAB1211> exec dbms_sql_translator.register_sql_translation('FOO','select 1 from hr.countries','select count(*) from hr.countries');

PL/SQL procedure successfully completed.

SYS@LAB1211> grant all on sql translation profile foo to hr;

Grant succeeded.

SYS@LAB1211> alter session set sql_translation_profile = FOO;

Session altered.

SYS@LAB1211> alter session set events = '10601 trace name context forever, level 32';

Session altered.

SYS@LAB1211> set echo on
SYS@LAB1211> select count(*) from hr.countries;

  COUNT(*)
----------
        19

SYS@LAB1211> select /*+ fix_wrong_results */ count(*) from hr.countries;

  COUNT(*)
----------
        25

SYS@LAB1211> @x

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  aaajpnhn25nza, child number 0
-------------------------------------
select /*+ fix_wrong_results */ count(*) from hr.countries

Plan hash value: 1399856367

----------------------------------------------------------------------------
| Id  | Operation        | Name            | Rows  | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                 |       |     1 (100)|          |
|   1 |  SORT AGGREGATE  |                 |     1 |            |          |
|   2 |   INDEX FULL SCAN| COUNTRY_C_ID_PK |    25 |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------


14 rows selected.

SYS@LAB1211> select count(*) from hr.countries;

  COUNT(*)
----------
        19

SYS@LAB1211> @x

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  c95vwg4jwqqfd, child number 0
-------------------------------------
select count(*) from hr.jobs

Plan hash value: 3870222678

-----------------------------------------------------------
| Id  | Operation        | Name      | Rows  | Cost (%CPU)|
-----------------------------------------------------------
|   0 | SELECT STATEMENT |           |       |     1 (100)|
|   1 |  SORT AGGREGATE  |           |     1 |            |
|   2 |   INDEX FULL SCAN| JOB_ID_PK |    19 |     0   (0)|
-----------------------------------------------------------


14 rows selected.


12c – New SQL_ID Calculation

Updated 7/7/13: Well I’m a doofus! This is not a generic problem. It is a bug, but it only happens when using a specific new feature I was playing with on my 12.1 database (the SQL Translation Framework). No need to worry about this unless you’re using that feature (thanks to Stefan for pointing this out). So you probably don’t need to read this at all. The comments might be worth looking at though. 🙂

=========================================================

Shoot! The SQL_ID calculation is different between 11.2 and 12.1. This is a bummer because we’ve gotten used to being able to go back and forth between versions to verify plans, for example after upgrading to 11g. It was also convenient to be able to track changes in performance statistics before and after an upgrade. Fortunately there is a workaround: the old_hash_value column has been carried through to 12c. See here:


SQL*Plus: Release 11.2.0.3.0 Production on Fri Jul 5 18:51:43 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining
and Real Application Testing options


INSTANCE_NAME    STARTUP_TIME      CURRENT_TIME         DAYS    SECONDS
---------------- ----------------- ----------------- ------- ----------
LAB1123          26-JUN-2013 19:02 05-JUL-2013 18:51    8.99     776979

SYS@LAB1123> select sql_id, hash_value, old_hash_value, plan_hash_value, sql_text from v$sql where sql_text = 'select 1 from dual';

SQL_ID        HASH_VALUE OLD_HASH_VALUE PLAN_HASH_VALUE SQL_TEXT
------------- ---------- -------------- --------------- ----------------------------------------
520mkxqpf15q8 2866845384      271604965      1388734953 select 1 from dual

========================================================

SQL*Plus: Release 12.1.0.1.0 Production on Fri Jul 5 18:33:12 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options


INSTANCE_NAME    STARTUP_TIME      CURRENT_TIME         DAYS    SECONDS
---------------- ----------------- ----------------- ------- ----------
LAB1211          02-JUL-2013 10:21 05-JUL-2013 18:33    3.34     288715

SYS@LAB1211> select sql_id, hash_value, old_hash_value, plan_hash_value, sql_text from v$sql where sql_text = 'select 1 from dual';

SQL_ID        HASH_VALUE OLD_HASH_VALUE PLAN_HASH_VALUE SQL_TEXT
------------- ---------- -------------- --------------- ----------------------------------------
3zcn52u5tvfqh 2342370000      271604965      1388734953 select 1 from dual

So as you can see, the sql_id and hash_value have changed between versions but the old_hash_value remains consistent. It also appears that the plan_hash_value calculation is unchanged, at least for simple plans. Anyway, a little reworking of some scripts should allow us to do the same sorts of things we’ve done in the past, albeit with a little more effort. Maybe Tanel will do us all a favor and write a function to calculate the old sql_id in 12c. That would make it a little easier. 🙂
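For what it’s worth, the sql_id computation has been reverse engineered and published by several people over the years. Here’s a quick Python sketch of the commonly published algorithm (MD5 of the statement text plus a trailing NUL byte, then base-32 encoding of the last 64 bits of the digest). The function names are mine, and this is the community-published reconstruction rather than anything official:

```python
import hashlib
import struct

# Community-published reconstruction of Oracle's sql_id computation
# (not official documentation).
BASE32 = "0123456789abcdfghjkmnpqrstuvwxyz"  # note: e, i, l and o are skipped

def sql_id(sql_text):
    """Compute the 13-character sql_id for a statement's exact text."""
    # Oracle hashes the statement text plus a trailing NUL byte with MD5.
    digest = hashlib.md5(sql_text.encode("utf-8") + b"\x00").digest()
    # The 3rd and 4th 32-bit words of the digest, read little-endian,
    # form a 64-bit number; the low 32 bits are v$sql.hash_value.
    msb, lsb = struct.unpack("<IIII", digest)[2:4]
    sqln = (msb << 32) | lsb
    # Base-32 encode into 13 characters, most significant digit first.
    chars = []
    for _ in range(13):
        chars.append(BASE32[sqln & 31])
        sqln >>= 5
    return "".join(reversed(chars))

def hash_value(sql_text):
    """v$sql.hash_value is just the low 32 bits of the same MD5 digest."""
    digest = hashlib.md5(sql_text.encode("utf-8") + b"\x00").digest()
    return struct.unpack("<IIII", digest)[3]

print(sql_id("select 1 from dual"), hash_value("select 1 from dual"))
```

If the reconstruction is right, feeding it ‘select 1 from dual’ should reproduce the sql_id and hash_value shown in the 11.2 listing above.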

12c Adaptive Optimization

Since everyone seems to be all twitterpated about Oracle Database 12c this week, I thought I’d post a quick note to let you know that the slides from the presentation on 12c Adaptive Optimization I did at the Hotsos Symposium 2013 (with a lot of help from Maria) are now available in the Whitepapers / Presentations section of this blog.

While I’m on the topic, I found this little blurb in the Oracle Database 12c Release 1 New Features Guide:

Zero Effort Performance

That’s the section that talks about the Adaptive Optimization stuff. I think the documentation folks meant that they were describing performance features that didn’t require any manual intervention, but it sort of reads like the features are really easy to describe, or maybe that the writers weren’t going to work very hard on describing them. At any rate, the wording struck me as humorous. 🙂

SQL Gone Bad – But Plan Not Changed? – Part 2

In Part 1 of this series I talked about the basic problem, which is that plan_hash_values are not based on the whole plan. One of the main missing components is the predicates associated with a plan, but there are other missing parts as was pointed out in Part 1 of Randolf Geist’s post on the topic. At any rate, predicates seem to be the most critical of the missing parts.

The purpose of this second post is to talk about diagnosis: basically, how do you identify that some part of a plan not covered by the plan_hash_value has changed, specifically a predicate?

So first I thought I would show a few examples of statements with the same sql_id and plan_hash_value that have other plan differences (in the predicate section). To do this I used a method Randolf Geist proposed a few years back in his second post on the topic, Alternative Ways to Calculate a PLAN_HASH_VALUE. In that post, Randolf shows several ways to compute a hash value on any or all of the columns in the v$sql_plan view. I wrote a simple script around one of those methods (find_pred_mismatch.sql), and as you might guess from the name, I limited this version to identifying statements with mismatched predicates rather than including all the columns in v$sql_plan. To be more explicit, the script locates statements in the shared pool that have multiple child cursors where more than one set of predicates goes with a single plan_hash_value. Here’s an example:

SYS@DEMO1> @find_pred_mismatch

Type created.


SQL_ID        PLAN_HASH_VALUE CHILD_NUMBER   THE_HASH ARE_H
------------- --------------- ------------ ---------- -----
063m5s0cvrr19      1502175119            0 2709091620 DIFF!
093fgfvygm51m      3114044936            0 3689661040 DIFF!
0cn2wm9d7zq8d      1540383338            0 3746559344 DIFF!
0pt4jfmq9f1q0      3078496799            0 1309675335 DIFF!
155cwuv2pfp1d       768389713            0 2982291916 DIFF!
18c2yb5aj919t      1032442798            0 1714458756 DIFF!
1n9crga6mbw2x      4174841998            0 2752042257 DIFF!
1ytxrt5qp9qdd      2707146560            0 3757837347 DIFF!
23buxzfxyp1vy      3143617369            0 2089881285 DIFF!
23nad9x295gkf       891847141            0 4056778555 DIFF!
24zvjzuyrxh3w      1877711096            0 1680905434 DIFF!
28n17ru48jnh5      1665111388            0 3584687734 DIFF!
2j0fw19fph49j      1337823903            0 2431841938 DIFF!
2kd6nusgzc3uw      3151266570            0 3024843785 DIFF!
2rpwgryn7pxz5      3329544515            0  452505826 DIFF!
35nhk48nxwc0v      2553960494            0  117262982 DIFF!
3bc73t2h9mwxc      1420457580            0 1226225583 DIFF!
3gputsqv4u1j3      3161304745            0 2252819340 DIFF!
3zauy2zqryrsx      1420457580            0 1128973296 DIFF!
42q1qby3huf2c      3069437562            0 4008632079 DIFF!
47mm81hm9sggy      1836210496            0 1554180227 DIFF!
4g46czps3t55u      2714703948            0 4063401817 DIFF!
4n2gca427719q      1360319091            0 4013571180 DIFF!
4tpsnbkt1dm5j      2960949352            0 3341004783 DIFF!
5dyhfnkzta2zm      3767331201            0 4238766232 DIFF!
5h91zx386wbht       293199272            0  949799344 DIFF!
5s34t44u10q4g      2693539438            0  839739072 DIFF!
5uw1u291s3m0k       219265157            0  642280427 DIFF!
61tn3mam0vq0b      2012170170            0 2048362545 DIFF!
63t3ufgq37m0c      1155609947            0  844291465 DIFF!
69k5bhm12sz98      3091659676            0  356777601 DIFF!
6cp74g22fzahf        76968983            0 1617454724 DIFF!
6wm3n4d7bnddg      1772758172            0 1148123313 DIFF!
78kp0fcyxavzb      2960949352            0 1085639264 DIFF!
7ah4afrggrw5c      4213028598            0 4285032606 DIFF!
7g4rxwbvhdh3q      3170022080            0 2083442940 DIFF!
7hspvruktu52b      4016032974            0 2538340188 DIFF!
84p3g5b5bsfvn       681044650            0 3826083810 DIFF!
86521pa77y28j      3760090177            0 3887843475 DIFF!
8ak9gkw2mjhvr      1526940012            0 2946674232 DIFF!
8p9z2ztb272bm       408663731            0 3293625021 DIFF!
aca4xvmz0rzup       427901911            0 4215668999 DIFF!
akh9zqqkx3wj7      2306922995            0 2084689096 DIFF!
akx4284f2vjnv      3948068913            0 2662025793 DIFF!
amycufzt6uq5f      3283312188            0 1896511712 DIFF!
atnkqhrp3t7xa      2196914545            0   26873046 DIFF!
aw2x7hh2a9ag0      1148557212            0  719001678 DIFF!
b41wak2bb7atw       108532975            0 1699960507 DIFF!
bhvyz9bgyrhb2      1134671139            0 2402404248 DIFF!
c8gnrhxma4tas      4024720576            0 2473084105 DIFF!
cc7vvmrsxzyq1      1849127868            0 1912933403 DIFF!
cjtaqp92v10bn       922118807            0 2313573387 DIFF!
ckfgcyv05xptf      2869192503            0 3932622422 DIFF!
cw860p03hy5ff      1502175119            0 2915988156 DIFF!
cyw0c6qyrvsdd       192117504            0 2551186960 DIFF!
d53nc7j6n1057      1356236608            0  582788179 DIFF!
dyj1ssw8jw54f      1836210496            0   66902761 DIFF!
fkjkrv5ycz96u      2247257083            0 1809299677 DIFF!
gdn3ysuyssf82      4024720576            0 2473084105 DIFF!
gwbdd5m45ugpm      3180770434            0  235716193 DIFF!

60 rows selected.

SYS@DEMO1> select * from table(dbms_xplan.display_cursor('&sql_id','&child_no','typical'));
Enter value for sql_id: 24zvjzuyrxh3w
Enter value for child_no: 

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  24zvjzuyrxh3w, child number 0
-------------------------------------
SELECT script FROM sys.metaxsl$ WHERE xmltag=:1 AND transform=:2  AND
model=:3

Plan hash value: 1877711096

--------------------------------------------------------------------------------------
| Id  | Operation                 | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |          |       |       |     3 (100)|          |
|*  1 |  TABLE ACCESS STORAGE FULL| METAXSL$ |     3 |    99 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - storage(("XMLTAG"=:1 AND "TRANSFORM"=:2 AND "MODEL"=:3))
       filter(("XMLTAG"=:1 AND "TRANSFORM"=:2 AND "MODEL"=:3))

SQL_ID  24zvjzuyrxh3w, child number 1
-------------------------------------
SELECT script FROM sys.metaxsl$ WHERE xmltag=:1 AND transform=:2  AND
model=:3

Plan hash value: 1877711096

----------------------------------------------
| Id  | Operation                 | Name     |
----------------------------------------------
|   0 | SELECT STATEMENT          |          |
|*  1 |  TABLE ACCESS STORAGE FULL| METAXSL$ |
----------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - storage(("MODEL"=:3 AND "TRANSFORM"=:2 AND "XMLTAG"=:1))
       filter(("MODEL"=:3 AND "TRANSFORM"=:2 AND "XMLTAG"=:1))

Note
-----
   - rule based optimizer used (consider using cbo)


44 rows selected.
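
The gist of the detection logic can be sketched with a simple ora_hash version. This is an approximation I’m writing here to illustrate the idea, not the actual find_pred_mismatch.sql (which follows Randolf’s method more closely):

```sql
-- Approximate sketch: flag sql_id / plan_hash_value pairs whose child
-- cursors do not all share the same access and filter predicates
select sql_id, plan_hash_value,
       count(distinct pred_hash) as predicate_versions
from (
  select sql_id, plan_hash_value, child_number,
         sum(ora_hash(id || '|' || access_predicates || '|'
                         || filter_predicates)) as pred_hash
  from   v$sql_plan
  group  by sql_id, plan_hash_value, child_number
)
group by sql_id, plan_hash_value
having count(distinct pred_hash) > 1;
```

Any row returned means at least two child cursors share a plan_hash_value but hash differently on their predicate sections, which is exactly the situation the DIFF! listing above reports.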


SQL Gone Bad – But Plan Not Changed? – Part 1

Last week an interesting issue popped up on a mission critical production app (MCPA). A statement that ran as part of a nightly batch process ran long. In fact, the statement never finished and the job had to be killed and restarted. This particular system is prone to plan stability issues due to various factors outside the scope of this post, so the first thing the guys checked was whether there had been a plan change. Surprisingly, the plan_hash_value was the same as it had been for the past several months. The statement was very simple, and a quick look at the xplan output showed that the plan was indeed the same, with one exception: the predicate section was slightly different.

As a quick diversion, you probably already know that the plan_hash_value is calculated from only partial information about the plan. Arguably it covers the most important parts, but some important parts of the plan are not included (namely the stuff that shows up in the predicate section). Randolf Geist explained which parts of the plan are used in calculating the plan_hash_value in his post How PLAN_HASH_VALUEs Are Calculated several years ago. His summary was this:

So in summary the following conclusions can be made:

– The same PLAN_HASH_VALUE is merely an indicator that the same operations on objects of the same name are performed in the same order.

– It tells nothing about the similarity of the expected runtime performance of the execution plan, due to various reasons as demonstrated. The most significant information that is not covered by the PLAN_HASH_VALUE are the filter and access predicates, but there are other attributes, too, that are not part of the hash value calculation.

The production issue got me thinking about several things:

    Could I come up with a simple test case to show a severe performance degradation between executions of a statement with the same plan_hash_value because of a change in the predicates? (the answer is it’s pretty easy actually)

    What forensics could be used to determine when this situation has occurred?

    How do you fix the problem?


Extreme Exadata Expo Speakers Announced

Thanks to everyone who submitted abstracts for our upcoming E4 conference. Unfortunately, there were more quality submissions than we had room for. Maybe next year we should expand the event to three days. 🙂 But in the meantime, we have assembled what I believe is an excellent lineup of speakers. I’ll just mention a few highlights here:

Tom Kyte will be doing the keynote. Enough said!

Maria Colgan and Roger MacNicol will be doing a 3-hour combined session on smart scans. Maria will attack the topic from the top-down (optimizer) point of view (since she is the product manager for the optimizer) and Roger will attack it from the bottom up (since he is the lead developer for the smart scan code). This should be an awesome session; Tanel Poder has already said he is going to line up the night before.

Ferhat Sgonul will be talking about Turkcell’s usage of Exadata. Turkcell is one of the earliest adopters of Exadata and has had great success with it over the last several years, so this should be a very interesting case study.

Karl Arao and Tyler Muth will do a joint presentation on visualization techniques for performance data from Exadata environments. The plan is for them to compare and contrast their approaches using the same data set. Tyler usually uses R and Karl likes Tableau; may the best violin chart win.

Tyler Muth will also be doing a deep dive presentation on bloom filters and how they can be offloaded with smart scans. This is a topic about which there is little information, so it should be quite interesting.

Frits Hoogland will be doing a deep dive on how Oracle does multi-block I/O. This is of special interest with regard to Exadata because the direct path read mechanism for doing multi-block I/O is a requirement for enabling smart scans, so understanding how it works is one of the keys to getting the most out of the platform.

Sue Lee (product manager for resource manager) will be doing a session on how to deal with mixed workloads. I’m really interested in this session as IORM and DBRM are critical for managing Exadata, particularly when it is used as a consolidation platform.

There are many other well-known speakers, including Martin Bach, Andy Colvin, Gwen Shapira, Mark Rittman, Tim Fox and Tanel Poder.

Here’s a link to see the complete line up of E4 speakers.

While we’re on the subject, I should mention that there will be several talks on Hadoop-related topics and the rapidly expanding role Hadoop is playing in our industry. The idea of pushing the work to the storage is not unique to Exadata; it is also the main driver behind Hadoop. So I’m extremely pleased to announce that Doug Cutting will be speaking at E4 as well.

So that’s all for the marketing related stuff on E4. I hope you can join us in Dallas.

E4 2013 – Exadata Conference Call for Papers Closing


Just a quick note to remind you that the call for papers for E4 is closing in a few days (on April 30). So if you have anything interesting related to Exadata that you’d like to share, we’d love to hear from you. By the way, you don’t have to be a world-renowned speaker on Oracle technology to get accepted. You just need an interesting story to tell about your experience. Implementations, migrations, and consolidation stories are all worthy of consideration. Any interaction between Exadata and Big Data platforms will also be of great interest to the attendees. Of course, the more detail you can provide the better.

Here’s a link to the submission page:

Submit an Abstract for Enkitec’s Extreme Exadata Expo

Feel free to shoot me an email if you have ideas for a talk that you want to discuss before formally submitting an abstract.

Bad Advice?

I got an email a few days ago . . .

Subject: Exadata / your blogs / you aren’t saying to put HINTS on everything, Are you…

. . . A peer of mine found a couple of your blogs listed below which discuss Profiles and Hints. The position he is inferring upon you is that you are stating unequivocally that we should create lots of indexes, add hints, create profiles and outlines on our Exadata machine. Having read your book and achieve actual results in line with that book, I know that pretty much the opposite is true. . . .

Here’s what I said in my reply (edited a bit to clean up typos and to be a bit more precise):

I do not advocate widespread usage of hints or hint-based mechanisms (Outlines, Patches, Profiles). However, because I have written quite a bit about using these techniques, I fear I may have inadvertently left people with the impression that I think hints should be used everywhere. I do not. I view them as expedient ways to change plans or lock in specific plans that I want, but in most cases I use them as band-aids to temporarily cover up other problems. I haven’t really written a blog post specifically on my view of the proper usage of these techniques, so maybe I should do that. When I do presentations on hint-based mechanisms to control execution plans, I always have a slide with a picture of a vial of morphine. I use that metaphor to caution people not to overuse them. There is a presentation on my blog in the whitepapers/presentations section called “Controlling Execution Plans” (or something to that effect) that you can download which has that slide in it. Also I wrote a chapter on controlling execution plans in a book called Pro Oracle SQL that probably has some prose on when and why to use them (and not use them). But in general, I think they should only be used to give temporary relief until underlying problems can be fixed, or in cases where the optimizer just can’t deal with a specific situation (like correlated columns for example), or in cases where you’re using a packaged app that has badly written code that you can’t change. The newest version of the optimizer (which should be released soon) actually stores plans and not just hints with Baselines, so I will leave out the case where you want to manage plans for stability’s sake.

Since you mentioned Exadata, I’ll add that the issue of mixed-workload databases on Exadata, and when the optimizer uses indexes vs. when it doesn’t, is not straightforward. In general, the current version of the optimizer doesn’t know it’s running on Exadata, so it tends to cost full table scans too high, meaning it often picks indexes when a full scan would be faster. There are some things that can be done to push the optimizer towards full scans. Think of anything that pushes the optimizer in that direction: a large db_file_multiblock_read_count, for example. Also, system statistics have recently been updated with an “Exadata” mode, which basically just sets the MBRC to 128, which does the same sort of thing. If that’s not enough to get most of the statements behaving the way you want, then more exceptional measures can be taken, such as making indexes invisible and using alter session in specific processing to let the optimizer see invisible indexes, or doing away with some indexes altogether. Hints (or hint-based mechanisms) can be used as well. But my goal would be to get the optimizer to pick the right plan on its own in the vast majority of cases before resorting to the more extreme measures.
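To make that paragraph concrete, these are the sorts of commands I’m talking about. The index name is made up for illustration, and as always, test on a non-production system first:

```sql
-- Gather "Exadata aware" system stats (sets MBRC to 128)
exec dbms_stats.gather_system_stats('EXADATA')

-- Or push multi-block reads directly (session level, for testing the effect)
alter session set db_file_multiblock_read_count = 128;

-- Make a problem index invisible, but let selected sessions still use it
alter index hr.my_overused_index invisible;   -- hypothetical index name
alter session set optimizer_use_invisible_indexes = true;
```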

Hope this helps.

Here’s a copy of the slide I referenced in the email and have been using recently:


Randolf Geist – Dynamic Sampling Posts

I was doing a little research for an upcoming presentation on Oracle 12c Adaptive Optimization and came across a series of posts on Dynamic Sampling by Randolf Geist (one of my favorite Oracle smart guys). I could not find a complete list of the series either through Google or the search function on the site where they were posted, so I guessed at a couple of the URLs and got lucky. I thought the list might be useful at some point in the future, so I wrote it down on my blog (which is where I often keep private notes), but then decided it might be useful to others as well, so I’m publishing it.

Dynamic Sampling – Intro – Part 1
Dynamic Sampling – Intro – Part 2
Dynamic Sampling – Controlling the Activity – Part 1
Dynamic Sampling – Controlling the Activity – Part 2
Dynamic Sampling – Real Life Data – Part 1
Dynamic Sampling – Real Life Data – Part 2

Here are a couple of other posts Randolf has done on the topic of Dynamic Sampling.

Volatile Data, Dynamic Sampling and Shared Cursors
Dynamic Sampling on Multiple Partitions – Bugs

Note these posts are all referring to 11gR2 or earlier releases.

New Exadata Prototype

I got a look at a new prototype for the next-generation Exadata last week while doing some work with a company in Europe. Apparently there is a big push to be environmentally friendly there now, so Oracle is trying to come up with a model that uses less power and is biodegradable. The word on the street is that it won’t be available until after Release 2 of the 12c database.

The new model has a few drawbacks though. For one thing, it only lasts a few weeks before you must either replace it or hire some rocket surgeon consultants to come in and tune it. From the early version of the prototype I saw, it does appear to be smaller and tastier than previous models.

That’s a picture of the lead designer (JP) showing off the prototype. The code name for the project is “Exanana”, by the way. The new model should be available in select supermarkets after lunch (er, launch). Here’s another picture of JP and one of the other designers (Paul) hamming it up for the camera.

I probably should have saved this post for April 1st!