
Bind Variable Peeking – Drives Me Nuts!

In the constant battle to provide consistent performance, Oracle took a giant step backwards with the 9i version by introducing an “Enhancement” called Bind Variable Peeking. I’ll explain what I mean in a minute, but first a bit of history.

When Oracle introduced histograms in 8i, they provided a mechanism for the optimizer to recognize that the values in a column were not distributed evenly. That is, in a table with 100 rows and 10 distinct values, the default assumption the optimizer would make, in the absence of a histogram, is that no matter which value you picked, you would always get 100/10, or 10, rows back. Histograms let the optimizer know when that is not the case. The classic example would be 100 records with 2 distinct values where one value, say “Y”, occurred 99 times and the other value, say “N”, occurred only once. Without a histogram the optimizer would assume that whether you requested records with a “Y” or an “N”, you would get half the records back (100/2 = 50), and so it would always favor a full table scan over using an index on the column. A histogram, assuming it was accurate (we’ll come back to that later), would let the optimizer know that the distribution was skewed (i.e. not spread out evenly) and that a “Y” would return basically the whole table, while an “N” would return only 1% of it. This allows the optimizer to pick an appropriate plan regardless of which value is specified in the Where Clause.

So let’s consider the implications of that. Would that improve the response time for the query where the value was “Y”? The answer is no. In this simple case, the default costing algorithm is close enough and produces the same plan that the histogram produces. The full table scan takes just as long whether the optimizer thought it was getting 50 rows or 99 rows. But what about the case where we specified the value of “N”? In this case, with a histogram we would pick up the index on that column and presumably get a much better response time than the full table scan. This is an important point. Generally speaking, it is only the outliers, the exceptional cases if you will, where the histogram really makes a difference.
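For reference, giving the optimizer that knowledge is just a matter of asking dbms_stats for a histogram on the column. A minimal sketch (table and column names are made up for illustration):

-- assume T1.COL1 holds the skewed Y/N data; 254 buckets is the maximum
exec dbms_stats.gather_table_stats( -
  ownname => user, tabname => 'T1', -
  method_opt => 'FOR COLUMNS COL1 SIZE 254', -
  cascade => true )

-- verify the histogram and what the optimizer now knows about the column
select column_name, num_distinct, num_buckets, histogram
from user_tab_col_statistics
where table_name = 'T1';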

So at first glance, all appeared well with the world. But there was a fly in the ointment. You had to use literals in your SQL statements for the optimizer to be able to use the histograms. So you had to write your statements like this:

SELECT XYZ FROM TABLE1 WHERE COLUMN1 = 'Y';

SELECT XYZ FROM TABLE1 WHERE COLUMN1 = 'N';

Not a problem in our simple example, because you only have two possibilities. But consider a statement with 2 or 3 skewed columns, each with a couple of hundred distinct values. The possible combinations could quickly grow into the millions. Not a good thing for the shared pool.

Enter our star: Bind Variable Peeking, a new feature introduced in 9i that allows the optimizer to peek at the values of bind variables and then use a histogram to pick an appropriate plan, just like it would do with literals. The problem with the new feature was that it only looked at the variables once, when the statement was parsed. So let’s make our simple example a little more realistic by assuming we have a 10 million row table where 99% of the rows have a value of “Y” and 1% have a value of “N”. In that case, if the first time the statement was executed it was passed a “Y”, the full table scan plan would be locked in and would be used until the statement had to be re-parsed, even if the value “N” was passed in subsequent executions.

So let’s consider the implication of that. When you get the full table scan plan (because you passed a “Y” the first time) it behaves the same way no matter which value you pass subsequently. Always a full table scan, always the same amount of work and the same basic elapsed time. From a user standpoint that seems reasonable. The performance is consistent. (This is the way it would work without a histogram, by the way.) On the other hand, if the index plan gets picked because the parse occurs with a value of “N”, the executions where the value is “N” will be even faster than they were before, but the executions with a value of “Y” will be incredibly slow. This is not at all what the user expects. They expect the response time to be about the same every time they execute a piece of code. And this is the problem with bind variable peeking. It’s basically just Russian Roulette. It just depends on what value you happen to pass the statement when it’s parsed (which could be any execution, by the way).
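To see the roulette wheel spin, here’s a quick sketch (made-up table and column names again; the point is that the plan is fixed at hard parse time):

variable v1 varchar2(1)

exec :v1 := 'Y'
select count(*) from t1 where col1 = :v1;  -- hard parse here: the peek sees 'Y' and locks in the full scan

exec :v1 := 'N'
select count(*) from t1 where col1 = :v1;  -- same cursor, same full scan, even though 'N' wants the index

-- one plan for both executions until something forces a re-parse
select sql_id, child_number, plan_hash_value, executions
from v$sql
where sql_text like 'select count(*) from t1 where col1 = :v1';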

So is Bind Variable Peeking a feature or a bug? Well technically it’s not a bug because it works the way it’s designed. I just happen to believe that it was not a good decision to implement it that way. But what other choices did the optimizer development group have?

  • They could have evaluated the bind variables and re-parsed for every execution of every statement using bind variables. This would eliminate the advantage of having bind variables in the first place and would never work for high transaction systems. So it was basically not an option.
  • They could have just said no, and made us use literals in order to get the benefit of histograms (probably not a bad option in retrospect – the fact that they added _optim_peek_user_binds probably means that they decided later to give us that option via setting this hidden parameter).
  • They could have implemented a system where they could identify statements that might benefit from different plans based on the values of bind variables. Then peek at those variables for every execution of those “bind sensitive” statements (sound familiar? – that’s what they finally did in 11g with Adaptive Cursor Sharing).
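As an aside, when they finally did address this in 11g, they exposed the bookkeeping in a couple of new V$SQL columns, so you can watch Adaptive Cursor Sharing decide that a statement deserves more than one plan:

select sql_id, child_number, plan_hash_value,
       is_bind_sensitive, is_bind_aware
from v$sql
where sql_id = '&sql_id';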

So why is it such a pervasive problem? And I do believe it is a pervasive problem with 10g in particular. A couple of reasons come to mind:

  1. We’ve been taught to always use bind variables. It’s a best practice which allows SQL statements to be shared, thus eliminating a great deal of work/contention. Using bind variables is an absolute necessity when building scalable high transaction rate systems. (Of course that doesn’t mean that you can’t bend the rule occasionally.)
  2. 10g changed its default stats gathering method to automatically gather histograms. So in a typical 10g database there are a huge number of histograms, many of them inappropriate (i.e. on columns that don’t have significantly skewed distributions) and many of them created with very small sample sizes, causing the histograms to be less than accurate (a way to change this default is sketched after this list). Note that 11g appears to be better on both counts – that is to say, 11g seems to create fewer inappropriate histograms and seems to create much more accurate histograms with small sample sizes. But the jury is still out on 11g stats gathering, as it has not been widely adopted at this point in time.
  3. In my humble opinion, Bind Variable Peeking is not that well understood. When I talk to people about the issue, they usually have heard of it and have a basic idea what the problem is, but their behavior (in terms of the code they write and how they manage their databases) indicates that they don’t really have a good handle on the issue.
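As mentioned in point 2, the histogram explosion comes from the 10g default METHOD_OPT of 'FOR ALL COLUMNS SIZE AUTO'. One way to rein it in (a sketch; test it before turning it loose on a production system) is to change the database-wide default and then add histograms back deliberately, only where the data is genuinely skewed:

-- change the 10g default so the nightly stats job stops creating histograms everywhere
exec dbms_stats.set_param('METHOD_OPT', 'FOR ALL COLUMNS SIZE 1')

-- then gather histograms selectively on the columns that actually need them
exec dbms_stats.gather_table_stats( user, 'T1', -
  method_opt => 'FOR COLUMNS COL1 SIZE 254' )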

So what’s the best way to deal with this issue? Well, recognizing that you have a problem is the first step to recovery, so being able to identify that you have a problem with plan stability is an appropriate first step. Direct queries against the Statspack or AWR tables are probably the best way to identify the issue. I’ve posted a couple of scripts previously that I find useful for this purpose (unstable_plans.sql, awr_plan_stats.sql, awr_plan_change.sql). What you’re looking for is statements that flip-flop back and forth between two or more plans. Note that there are other reasons for statements to change plans, but Bind Variable Peeking is the number one suspect. Here’s an example of their usage:

SQL> @unstable_plans
SQL> break on plan_hash_value on startup_time skip 1
SQL> select * from (
  2  select sql_id, sum(execs), min(avg_etime) min_etime, max(avg_etime) max_etime, stddev_etime/min(avg_etime) norm_stddev
  3  from (
  4  select sql_id, plan_hash_value, execs, avg_etime,
  5  stddev(avg_etime) over (partition by sql_id) stddev_etime
  6  from (
  7  select sql_id, plan_hash_value,
  8  sum(nvl(executions_delta,0)) execs,
  9  (sum(elapsed_time_delta)/decode(sum(nvl(executions_delta,0)),0,1,sum(executions_delta))/1000000) avg_etime
 10  -- sum((buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta))) avg_lio
 11  from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
 12  where ss.snap_id = S.snap_id
 13  and ss.instance_number = S.instance_number
 14  and executions_delta > 0
 15  and elapsed_time_delta > 0
 16  group by sql_id, plan_hash_value
 17  )
 18  )
 19  group by sql_id, stddev_etime
 20  )
 21  where norm_stddev > nvl(to_number('&min_stddev'),2)
 22  and max_etime > nvl(to_number('&min_etime'),.1)
 23  order by norm_stddev
 24  /
Enter value for min_stddev:
Enter value for min_etime:

SQL_ID        SUM(EXECS)   MIN_ETIME   MAX_ETIME   NORM_STDDEV
------------- ---------- ----------- ----------- -------------
1tn90bbpyjshq         20         .06         .24        2.2039
0qa98gcnnza7h         16       20.62      156.72        4.6669
7vgmvmy8vvb9s        170         .04         .39        6.3705
32whwm2babwpt        196         .02         .26        8.1444
5jjx6dhb68d5v         51         .03         .47        9.3888
71y370j6428cb        155         .01         .38       19.7416
66gs90fyynks7        163         .02         .55       21.1603
b0cxc52zmwaxs        197         .02         .68       23.6470
31a13pnjps7j3        196         .02        1.03       35.1301
7k6zct1sya530        197         .53       49.88       65.2909

10 rows selected.

SQL> @find_sql
SQL> select sql_id, child_number, plan_hash_value plan_hash, executions execs,
  2  (elapsed_time/1000000)/decode(nvl(executions,0),0,1,executions) avg_etime,
  3  buffer_gets/decode(nvl(executions,0),0,1,executions) avg_lio,
  4  sql_text
  5  from v$sql s
  6  where upper(sql_text) like upper(nvl('&sql_text',sql_text))
  7  and sql_text not like '%from v$sql where sql_text like nvl(%'
  8  and sql_id like nvl('&sql_id',sql_id)
  9  order by 1, 2, 3
 10  /
Enter value for sql_text:
Enter value for sql_id: 0qa98gcnnza7h

SQL_ID         CHILD  PLAN_HASH        EXECS     AVG_ETIME      AVG_LIO SQL_TEXT
------------- ------ ---------- ------------ ------------- ------------ ------------------------------------------------------------
0qa98gcnnza7h      0  568322376            3          9.02      173,807 select avg(pk_col) from kso.skew where col1 > 0

SQL> @awr_plan_stats
SQL> break on plan_hash_value on startup_time skip 1
SQL> select sql_id, plan_hash_value, sum(execs) execs, sum(etime) etime, sum(etime)/sum(execs) avg_etime, sum(lio)/sum(execs) avg_lio
  2  from (
  3  select ss.snap_id, ss.instance_number node, begin_interval_time, sql_id, plan_hash_value,
  4  nvl(executions_delta,0) execs,
  5  elapsed_time_delta/1000000 etime,
  6  (elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
  7  buffer_gets_delta lio,
  8  (buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta)) avg_lio
  9  from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
 10  where sql_id = nvl('&sql_id','4dqs2k5tynk61')
 11  and ss.snap_id = S.snap_id
 12  and ss.instance_number = S.instance_number
 13  and executions_delta > 0
 14  )
 15  group by sql_id, plan_hash_value
 16  order by 5
 17  /
Enter value for sql_id: 0qa98gcnnza7h

SQL_ID        PLAN_HASH_VALUE        EXECS          ETIME    AVG_ETIME        AVG_LIO
------------- --------------- ------------ -------------- ------------ --------------
0qa98gcnnza7h       568322376           14          288.7       20.620      172,547.4
0qa98gcnnza7h      3723858078            2          313.4      156.715   28,901,466.0

SQL> @awr_plan_change
SQL> break on plan_hash_value on startup_time skip 1
SQL> select ss.snap_id, ss.instance_number node, begin_interval_time, sql_id, plan_hash_value,
  2  nvl(executions_delta,0) execs,
  3  (elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
  4  (buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta)) avg_lio
  5  from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
  6  where sql_id = nvl('&sql_id','4dqs2k5tynk61')
  7  and ss.snap_id = S.snap_id
  8  and ss.instance_number = S.instance_number
  9  and executions_delta > 0
 10  order by 1, 2, 3
 11  /
Enter value for sql_id: 0qa98gcnnza7h

   SNAP_ID   NODE BEGIN_INTERVAL_TIME            SQL_ID        PLAN_HASH_VALUE        EXECS    AVG_ETIME        AVG_LIO
---------- ------ ------------------------------ ------------- --------------- ------------ ------------ --------------
     21857      1 20-MAR-09 04.00.08.872 PM      0qa98gcnnza7h       568322376            1       31.528      173,854.0
     22027      1 27-MAR-09 05.00.08.006 PM      0qa98gcnnza7h                            1      139.141      156,807.0
     22030      1 27-MAR-09 08.00.15.380 PM      0qa98gcnnza7h                            3       12.451      173,731.0
     22031      1 27-MAR-09 08.50.04.757 PM      0qa98gcnnza7h                            2        8.771      173,731.0
     22032      1 27-MAR-09 08.50.47.031 PM      0qa98gcnnza7h      3723858078            1      215.876   28,901,466.0
     22033      1 27-MAR-09 08.57.37.614 PM      0qa98gcnnza7h       568322376            2        9.804      173,731.0
     22034      1 27-MAR-09 08.59.12.432 PM      0qa98gcnnza7h      3723858078            1       97.554   28,901,466.0
     22034      1 27-MAR-09 08.59.12.432 PM      0qa98gcnnza7h       568322376            2        8.222      173,731.5
     22035      1 27-MAR-09 09.12.00.422 PM      0qa98gcnnza7h                            3        9.023      173,807.3

9 rows selected.

So back to the question: what’s the best way to deal with the issue? In general, the best way to eliminate Bind Variable Peeking problems is as follows:

  1. Only create histograms on skewed columns.
  2. Use literals in where clauses on columns where you have histograms and want to use them. Note that it’s not necessary to use literals for every possible value of a skewed column. There may be only a few outlier values that result in significantly different plans. With a little extra code you can use literals for those values and bind variables for the rest of the values that don’t matter.
  3. If you can’t modify the code, consider turning off Bind Variable Peeking by setting the _OPTIM_PEEK_USER_BINDS parameter to false (the syntax is sketched after this list). You won’t get the absolute best performance for every possible statement, but you will get much more consistent performance, which is, in my opinion, more important than getting the absolute best performance. Keep in mind that this is a hidden parameter, so it should be carefully tested and probably discussed with Oracle support prior to implementing it in any production system.
  4. You can also consider stronger methods of forcing the optimizer’s hand, such as Outlines (see my previous posts on Unstable Plans and on Outlines). This option provides a quick method of locking in a single plan, but it’s not foolproof. Even with outlines, there is some possibility that the plan can change. Also note that this option is only palatable in situations where you have a relatively small number of problem SQL statements.
  5. Upgrade to 11g and let Adaptive Cursor Sharing take care of all your problems for you (don’t bet on it working without a little effort – I’ll try to do a post on that soon).
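For reference, the syntax for option 3 looks like this (a hidden parameter, so the usual caveats about testing and checking with Oracle support apply):

-- try it at the session level first
alter session set "_optim_peek_user_binds" = false;

-- or instance wide, after appropriate testing
alter system set "_optim_peek_user_binds" = false scope=both;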

In summary, using literals with histograms on columns with skewed data distributions is really the only effective way to deal with the issue and still retain the ability for the optimizer to choose the absolute best execution plans. However, if circumstances prevent this approach, there are other techniques that can be applied. These should be considered temporary fixes, but they may work well while a longer term solution is contemplated. From a philosophical standpoint, I strongly believe that consistency is more important than absolute speed. So when a choice must be made, I would always favor slightly reduced but consistent performance over anything that didn’t provide that consistency.

Your comments are always welcome. Please let me know what you think.

DOUG Presentation – 11g

I did a little talk at the Dallas Area Users Group this afternoon. The talk was about 11g stuff. Here’s a link to the presentation materials.

DOUG 11g presentation materials

It’s a zip file with the PowerPoint and several text files with examples. Also, I promised to upload Randy Johnson’s slides from our presentation several months ago at an Oracle Tech day. His material included info on new features of RMAN and compression in 11g. I’ll add that here as soon as I get it from him.

Please let me know if you have any questions or comments.

Oracle Performance For Developers …

This week I attended the Hotsos Symposium – It was great as usual. There are more smart guys at this event every year than you can shake a stick at. In fact, I often learn as much from the attendees as I do from the presenters.  Here’s a fancy link to the presentation I gave:

Note: I struggled a bit with how to label myself, since I don’t really have an official title. I thought about calling myself a “Senior Oracle Specialist”, but that sounded a little too puffed up. Especially the “Specialist” part. So then I thought maybe “Senior Oracle Guy” would be a little more down to earth. That was better, but it sounded a little too old, like a Senior Citizen. And since I am still in my late 40’s (OK, very late 40’s) I am still quite a ways from being a “Senior” I think. Then I thought maybe I should go with something more generic like “Nice Guy and All Around Prince of a Fellow”, but that seemed a little too uninformative (and besides, my former partner used to have that on his business cards). So I decided to go back to the “Oracle guy” idea and considered using something like “Very Experienced Oracle Guy”. That sounded OK, but “Very Experienced” is really just code for old. So I was back to that, how to say old, but not too old. “Oldish” – that’s what I ended up with, mainly because I ran out of time to think about it any more (probably a good thing).

I was originally scheduled to deliver my talk on Tuesday afternoon. But when I checked in on Monday morning, Becky Goodman asked me if I would mind swapping time slots with Stephan Haisley, who had a “conflict”. His slot was first thing in the morning on Wednesday. So I said sure. Only later did I find out that the conflict was related to the Tuesday night party, which has a tendency to stretch into the wee hours of the morning. Stephan’s a smart guy and he was thinking ahead. He realized that he probably wouldn’t be at his best, first thing on Wednesday morning. As Clint Eastwood said, “A man’s gotta know his limitations”.

Anyway, the talk went pretty well but I did have one embarrassing moment. I’ve been doing Oracle stuff for a long time, so I often run into people that I haven’t seen for a while (sometimes a very long while). I’m pretty good with faces and places, but names sometimes escape me. Isn’t it odd how our brains work? I can remember minute details about some arcane unix command that I haven’t used in 10 years, but a guy’s name that I worked closely with for half a decade can escape me. How does that happen?

I’ve gotten used to it, but occasionally something even more bizarre happens. Like getting a couple of bits of memory cross wired. This actually happens more often than you would think. Try this on a friend. Get them to say “Silk” five times as quickly as they can.  Like … “Silk, Silk, Silk, Silk, Silk” …  Then immediately ask them what cows drink. Almost without fail they will say “Milk”. Of course they know that cows don’t drink milk. They know that cows drink water. But for some reason the word “Milk” just comes rolling off their tongue. Why? Because the word “Milk” sounds almost the same as the word “Silk” and you’ve just made them access the part of their brain that stores the word “Silk” several times in a row. In addition, you have asked them a question with a word (cow) that is very closely associated with the word “Milk”. And finally, milk is a liquid that people drink. So there are 3 very strong associations in your brain, even though you know that it is not the correct answer to the question.

So what’s the point? Well … the first day of the Symposium, I ran into a guy that I have known for several years and that I had in fact shared office space with just a couple of years ago. His name is Jeff Holt, and he co-wrote a book with a guy named Cary Millsap called Optimizing Oracle Performance. So I see Jeff, walk over with a big grin on my face, shake hands with him and say “Hi Kevin!”.

And he just looks at me like I’m crazy (which he does pretty well, by the way). And I realize what I’ve done and say “I’m sorry Jeff, I do know what your name is”. And he looks somewhat dubious but accepts my apology. The thing is, I have done this to Jeff several times in the past. I explained to Jeff that there is a perfectly reasonable explanation for me calling him by the wrong name. I used to work with a guy named Kevin Holt and for some inexplicable reason, Kevin’s name always comes out when I think about Jeff. Maybe it’s because my brain stores data by last_name and the cells holding the first names have become damaged in some way, maybe I’ve used the name “Kevin Holt” a lot more than the name “Jeff Holt”, maybe my brain was more impressionable when I was younger and so the earlier memory is stronger. I’m not sure. Anyway, I pretty much just wrote it off as one of those questions for which there is no answer.

But I digress, back to the embarrassing moment during my presentation: So the talk is going along well and I get to this page where I reference Cary and Jeff’s book and I look at the big overhead and the reference looks like this:

Cary Millsap & Kevin Holt. Optimizing Oracle Performance
O’Reilly, 2003.

Of course to me it looked like this:

Cary Millsap & Jeff Holt. Optimizing Oracle Performance
O’Reilly, 2003.

Yes that’s right. Not only did I call him by the wrong name when I ran into him, but I actually typed it wrong on my presentation. To make matters worse, Cary Millsap is in the audience with a puzzled look on his face. So I have to apologize to him while the rest of the audience looks on. Then as soon as the talk is over, I fix the presentation materials and resubmit them (hopefully wiping out any trace of my cross-wired brain). This whole experience gets me really thinking about how my brain is working and why it continues to make this repeated error. It seems unlikely that just knowing two guys with the same last name would cause such a problem. I know lots of people with the same last name, and I don’t get their first names mixed up.

So I start racking my brain to see if I can come up with any other explanation. What other associations do I have with the name “Kevin”? Well for starters, my only brother’s name is Kevin. We were born only a year apart so when I was a kid, almost every time I heard my name it was closely followed by his name (usually it was at the top of some adult’s lungs due to some trouble we were stirring up). In fact, the old folks often couldn’t be bothered  to keep us straight, so even when we weren’t together (which was rare) they often just combined our names (Kervin was the most common version – Kevrry was a lot less common – for obvious reasons). So anyway, I do have a very strong association between my name and my brother’s name. Then it occurred to me that my first name sounds just like Cary Millsap’s first name. Hmmmm. Cary and Jeff are closely associated (at least in my mind) as I mentioned before. They co-authored the book and have worked together at the same company (first Hotsos and then Method-R) for the last decade or so. I’ve known Cary a long time, but only met Jeff 4 or 5 years ago. So I’ve only ever known Jeff to be associated with Cary. You probably can see where this is heading. I believe my brain does something like this old school fill in the blank problem:

__________________________________________________________________________________________

Fill in the Blank with the Word that Connects the Other Two Words

Cary (which sounds like Kerry)  ____________   Holt

__________________________________________________________________________________________

It’s like a little pattern matching or free association thing. My brain just wants to put the word “Kevin” in that spot as the link between the other two words.

By the way, there have been lots of studies done over the years about how our brains store memories, how we retrieve them, how we forget things, etc… Some of those studies have indicated that most long term memory is semantic based while short term memory is more acoustic based. So most people would tend to mix up words that sound alike (like milk and silk) in short term memory while mixing up words that mean the same thing (like auto and car) in long term memories. Of course there are other studies that prove we all have some long term acoustic memory (being able to identify specific accents for example). The fact that I am a long time musician and that I play mostly by ear is probably a contributing factor as well. I am said to have a “good ear” which means that I can reproduce music pretty accurately after a very short exposure to it. So I think all that extra exercise my brain has done has made me more likely to store long term memories with an acoustic or “sound alike” kind of memory organization. By the way, if you are interested in this kind of stuff, there is an excerpt from a survey of the literature which discusses several of these studies here: Human Memory by Elizabeth Loftus.

So that’s my story and my rationalization for why it happened. And for whatever it’s worth, I’m sorry Kevin, err… I mean Jeff! – I guess my brain just has a mind of its own.

Statistics Gathering

Karen Morton just posted a great paper on her blog about statistics gathering. The paper is titled “Managing Statistics for Optimal Query Performance”. I was excited to see it because I think gathering stats is one of the most important and least well understood aspects of managing an Oracle environment. I must admit that I was expecting a recommended method or framework for gathering stats, but actually the paper is really more about how the statistics are used along with general guidelines for gathering them, rather than a direct recommendation on how to gather them. Nevertheless, it is one of the best papers I’ve seen on the subject. She’s going to present the paper at the Hotsos Symposium to be held in Dallas the week of March 9th. I’m going to be there and am really looking forward to hearing what she has to say on the subject.

By the way, I can’t recommend this conference highly enough. If you really want to understand how Oracle works, this is the place to be. You should know that I am not generally a fan of formal training classes. I have often been disappointed because I felt like my time would have been better spent researching the subject matter myself. On the other hand, I have found a lot of value in working closely on a project with someone who knows the subject matter well, kind of like a mentor. But generally speaking, the formal classes have been less satisfying, except in the rare case where you get the great instructor that wrote the class. This symposium format on the other hand allows you to listen to a collection of really knowledgeable Oracle people packed into a short period of time. I have been to the Hotsos Symposiums for several years in a row and I always come away with pages of notes on things I want to investigate further. And the participants are, generally speaking, a collection of very bright Oracle people. So even the conversations between the presentations are often very interesting. Finally, they run two presentations at a time which allows you to pick the presentation that is most interesting. I have often found it hard to choose (don’t tell anyone, but I have on more than one occasion listened to the first half of one and then the second half of the other). So like I said, I find it to be a very productive few days.

But I digress, Karen’s paper is pretty long (24 pages) but it covers a vast amount of ground. There are a number of one liners that could be expanded into full papers. In the paper she discusses a number of topics including dealing with shortcomings of the optimizer in 10g. One of those issues is bind variable peeking (probably my least favorite optimizer feature, quirk, bug, … whatever you want to call it). I must say that I think it has caused far more problems than it solved, and I frankly don’t know what they were thinking when they put that feature in. I wrote a little about a way to get around it using outlines here. By the way, this reminds me of a cartoon I drew 20 years ago that looked very similar to this one (that I lifted off of Steve Karam’s blog)

Of course as Karen points out, the right way to deal with bind variable peeking issues is to understand your data and use literals where they are appropriate, keeping in mind the number of additional statements that will need to be parsed and dealt with in the shared pool. She also points out that code could be written to selectively use literals for specific values, giving you a mix of literals and bind variables for the same statement. This approach should allow you to minimize the impact on the shared pool while still providing the optimizer with the data it needs to make good decisions (this is a great idea but I’ve never seen anyone actually do it). And of course she points out that 11g has a much better mechanism for dealing with this whole issue.

Another idea that really got me thinking was the use of dynamic sampling. Karen clearly shows one of the advantages of dynamic sampling in the case of correlated predicates (i.e. the optimizer assumes predicates like car_model = ‘Mustang’ and car_make = ‘Ford’ are independent, when clearly they are not). She shows how dynamic sampling can be very useful in conjunction with normal statistics in this situation. (Rats, now I’m going to have to go play around with that a bit – so much to do, so little time.)
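For the curious, invoking it for a single statement is just a hint. A sketch (table and column names invented for the example; a level of 4 or higher is what tells the optimizer to sample when there are multiple predicates on the same table):

select /*+ dynamic_sampling(c 4) */ count(*)
from cars c
where car_make = 'Ford'
and car_model = 'Mustang';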

Finally, she discusses some of the statistics gathering options and differences in 9i, 10g, and 11g. The automatic creation of histograms is one of the main differences between 9i and 10g and she discusses this issue, but doesn’t go into too much detail on it. I must admit that I think 10g’s default setup does a very poor job when it comes to histograms. This is the one area I would have liked to see addressed a little more fully, but at 24 pages already I can understand why she had to draw the line somewhere. Anyway, by default 10g creates histograms on columns based on several factors including their use in where clauses. Unfortunately, histograms often get created on columns where their usefulness is questionable at best, and they regularly get created with very small sample sizes. The small sample sizes often result in significant inaccuracies. I personally think that allowing the gather stats job to automatically create histograms in 10g is a really bad idea.

Anyway, this is a paper that is well worth reading in my opinion. Typical Hotsos Symposium fare!

Saving Rows from Corrupt Blocks

Recently we ran into a database with one (or more) incomplete transactions that had not been able to roll back due to a file in the undo tablespace that had been deleted. And as luck would have it, the log files were somehow lost as well (so we couldn’t just recover the undo file). So the database was up and running, but the undo file was missing, and any time you hit one of the records that needed info from the missing undo file to rebuild a consistent version, it would fail with an ORA-00376 error (file cannot be read at this time). Technically that is not a corrupt block, just an uncommitted record that has been written, with missing undo. Make sense? Anyway, we narrowed it down to a couple of sub-partitions and were able to export the data from all the other sub-partitions. To get the data out of the affected sub-partitions we decided to use an approach based on the old rowid format, which contained file_id, block_id, and row_num.

Toon Koppelaars did an excellent write-up on this basic approach here. But it was written a while back, and unfortunately the rowid format was changed somewhere around the Oracle 8i time frame, making it a little more difficult. However, in their infinite wisdom, the developers at Oracle added the DBMS_ROWID package, which allows us to work around the issue.

So here’s the pseudocode for what we did:

Get Object Name (object_name)
Get Max Rows Per Block (max_rows_per_block)
Get List of Blocks by Extent for Object

For Each Extent
  For Each Block 
    For row in 1 to max_rows_per_block
      insert into saved_table select * from object_name 
        where rowid = dbms_rowid.rowid_create(file,block,row);
    End Row Loop
  End Block Loop
End Extent Loop
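And here’s a rough, runnable version of that loop to show the shape of it. This is only a sketch: the owner/table names are hard-coded, the max rows per block value is an assumption you’d need to determine for your own data, and it assumes you’ve pre-created BROKE_SAVED as an empty copy of the table. The real script (save_u.sql, linked below) does quite a bit more.

declare
  l_obj_id   number;
  l_max_rows constant pls_integer := 255;  -- assumption: max rows per block
begin
  select data_object_id into l_obj_id
  from dba_objects
  where owner = 'MF' and object_name = 'BROKE_NO_DEPENDENCIES';

  for ext in (select relative_fno, block_id, blocks
              from dba_extents
              where owner = 'MF'
              and segment_name = 'BROKE_NO_DEPENDENCIES') loop
    for blk in ext.block_id .. ext.block_id + ext.blocks - 1 loop
      for rw in 0 .. l_max_rows - 1 loop
        begin
          insert into broke_saved
          select * from mf.broke_no_dependencies
          where rowid = dbms_rowid.rowid_create(1, l_obj_id, ext.relative_fno, blk, rw);
        exception
          when others then null;  -- skip anything we can't read (e.g. ORA-00376)
        end;
      end loop;
    end loop;
  end loop;
  commit;
end;
/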

The actual script we used is a little more complicated. It actually created a Bad Blocks table as well, so we’d know how many blocks were skipped, and it had some error checking. I later embellished it a bit to make the object name dynamic (which was considerably more work than I thought it would be). Here’s the actual script I ended up with: save_u.sql. NOTE: I am not the world’s greatest PL/SQL guy, so if you have any suggestions, let me know! But it seems to do the job. You may also find this script (obj_blocks.sql) useful for getting a list of all the blocks mapped to a specific object. And here’s a couple of scripts to create functions that might come in handy: create_new_rowid.sql (creates function new_rowid which returns the new format rowid if you give it the obj#, file#, block#, and row#) and create_old_rowid.sql (which returns the old format rowid if given the new format id). Here’s a quick example of the scripts in use:

> sqlplus / as sysdba

SQL*Plus: Release 10.2.0.3.0 - Production on Thu Feb 12 11:35:14 2009

Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options

SQL> @obj
Enter value for owner: MF
Enter value for name: BROKE%
Enter value for type: 

OWNER           OBJECT_NAME                    OBJECT_TYPE         STATUS  T
--------------- ------------------------------ ------------------- ------- -
MF              BROKE_NO_DEPENDENCIES          TABLE               VALID   N
MF              BROKE_YES_DEPENDENCIES         TABLE               VALID   N

SQL> set echo on
SQL> @save_u
Enter value for owner_name: mf
Enter value for table_name: BROKE_NO_DEPENDENCIES

WARNING: This script may issue a DROP TABLE command. Do not execute it unless you have read through it
and are comfortable you know what it does.

Ready? (hit ctl-C to quit)  
Enter value for owner_name: mf
Enter value for table_name: BROKE_NO_DEPENDENCIES    


Saved 18800 records in BROKE_NO_DEPENDENCIES_SAVED.
35 bad records in BROKE_NO_DEPENDENCIES_BAD.

PL/SQL procedure successfully completed.

SQL> set echo on
SQL> @create_old_rowid
SQL> create or replace function old_rowid (p_rowid rowid)
  2  return varchar as
  3  
  4    rowid_type NUMBER;
  5    object_id NUMBER;
  6    fileno NUMBER;
  7    blockno   NUMBER;
  8    rowno  NUMBER;
  9  
 10  BEGIN
 11  
 12     dbms_rowid.rowid_info(p_rowid, rowid_type, object_id, fileno, blockno, rowno);
 13  /*
 14     dbms_output.put_line('Row Typ-' || TO_CHAR(rowid_type));
 15     dbms_output.put_line('Obj No-' || TO_CHAR(object_id));
 16     dbms_output.put_line('RFNO-' || TO_CHAR(fileno));
 17     dbms_output.put_line('Block No-' || TO_CHAR(blockno));
 18     dbms_output.put_line('Row No-' || TO_CHAR(rowno));
 19  */
 20  return(to_char(fileno)||'.'||to_char(blockno)||'.'||to_char(rowno));
 21  
 22  END;
 23  /

Function created.

SQL> @create_new_rowid
SQL> create or replace function new_rowid (p_object_id number, p_old_rowid varchar)
  2  return varchar as
  3  
  4    new_rowid varchar2(30);
  5    fileno NUMBER;
  6    blockno   NUMBER;
  7    rowno  NUMBER;
  8  
  9  BEGIN
 10  
 11    fileno := substr(p_old_rowid,1,instr(p_old_rowid,'.')-1);
 12    blockno := substr(p_old_rowid,instr(p_old_rowid,'.')+1,instr(p_old_rowid,'.',1,2)-instr(p_old_rowid,'.'));
 13    rowno := substr(p_old_rowid,instr(p_old_rowid,'.',1,2)+1,100);
 14    new_rowid := DBMS_ROWID.ROWID_CREATE(1, p_object_id, fileno , blockno , rowno);
 15  
 16    return(new_rowid);
 17  
 18  END;
 19  /

Function created.

SQL> select rowid,old_rowid(rowid) old_rowid from mf.BROKE_NO_DEPENDENCIES where rownum < 5
SQL> /

ROWID              OLD_ROWID
------------------ ------------------------------
AAACYmAAEAAAAAMAAt 4.12.45
AAACYmAAEAAAACIAAt 4.136.45
AAACYmAAEAAAAAMAAF 4.12.5
AAACYmAAEAAAACIAAF 4.136.5

One other thing to keep in mind, as Toon mentioned: it may be possible to retrieve data from an index, even if the underlying data block is messed up. Selecting only the values of the indexed columns allows Oracle to completely bypass the data blocks. So if, for example, you found that a set of blocks was inaccessible, you may be able to construct statements that would retrieve at least some of the data from the indexes, like so:

select /*+ index (messed_up_object messed_up_object_pk) */ indexed_column1, indexed_column2 
from messed_up_object 
where rowid in (select dbms_rowid.rowid_create(1,object,file,block,row) from bad_rows_table);

Your comments are always welcome!

Oracle Fudge

One of my favorite holiday treats was my MeeMaw’s fudge brownies. Note: I did a brief poll (only 5 people so not statistically significant) but nevertheless, 100% of the people I surveyed had a grandparent that they called either MeeMaw or PopPa. And 40% had both a MeeMaw and a PopPa. Of course all 5 of the pollees were native Texans. Anyway, here’s what my MeeMaw’s fudge brownies looked like.

Oracle has a long history of baking fudge as well.

So here’s a little Oracle Fudge for you!

11gR1 has 4 parameters with the word fudge in them.

_nested_loop_fudge
_parallelism_cost_fudge_factor
_px_broadcast_fudge_factor
_query_rewrite_fudge

These four “fudge” parameters have been around with the same default values since at least 8.1.7. Maybe the elves will fix these in version 12.
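If you want to check for fudge in your own database, a query along these lines against the x$ structures (connected as SYS) should sniff it out:

select p.ksppinm name, v.ksppstvl value, p.ksppdesc description
from x$ksppi p, x$ksppcv v
where p.indx = v.indx
and p.ksppinm like '%fudge%'
order by 1;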

And in keeping with the holiday theme, I’m reminded of the song “My Favorite Things” (often sung at Christmas) that goes:

“blah, blah, blah, blah, blah, blah,
These are a few of my favorite things”
(think Julie Andrews in Sound of Music)

Anyway, here’s a few of my favorite parameters (and my interpretation of what they mean):

db_cache_advice – If you turn this one on, Oracle will tell you what to do with your money.
db_cache_size – And this one will tell you how much money you have.
db_ultra_safe – Oddly enough, this one defaults to OFF. Seems like you’d want your database to be “Ultra Safe”.
ifile – Looks like someone from Apple slipped this one in (you know – iPod, iPhone, iMac, etc…).
large_pool_size – Just how big is your pool?
skip_unusable_indexes – Defaults to TRUE. I guess if you want, you can tell Oracle to use those unusable indexes.
_addm_skiprules – Yeah, rules suck!
_ash_size – Do these pants make my butt look big?
_backup_max_gap_size – How big does the doorway have to be to get your butt through it?
_asm_disk_repair_time – Uh oh, time to repair those disks.
_avoid_prepare – Why get ready ahead of time.
_awr_disabled_flush_tables – Not sure but it sounds stinky.
_awr_sql_child_limit – Population control?
_bloom_pruning_enabled – Trim the roses!
_bwr_for_flushed_pi – Not sure what this one does, but flushing pie seems like such a waste.
_cvw_enable_weak_checking – I’d prefer strong checking please! Get that weak stuff out of here!
_db_aging_cool_count – I used to be cool, I think.
_db_aging_hot_criteria – ???
_db_block_bad_write_check – I hope our database is not writing bad checks!
_db_block_check_for_debug – I think a developer from Chicago named this one (and it should be: _db_block_check_for_the_bug).
_db_cache_crx_check – I don’t know what a crx check is, but cashing any kind of check should be good, right?
_db_large_dirty_queue – Just like in the laundry room at home.
_db_row_overlap_checking – Do your rows overlap? Perhaps we should check that.
_disable_fast_aggregation – Why would anyone use this, “No thanks, I want really slow aggregation”
_dtree_pruning_enabled – Trim D-Tree too while yer at it!
_extended_pruning_enabled – Cut ’em way back!
_disable_recoverable_recovery – Hmmmm???? I guess if you don’t want your recovery to be recoverable you can set this one.
_dummy_instance – I’ve thought this many times (it’s basically the same as the _stupid_database parameter).
_dispatcher_rate_scale – How much are we paying that dispatcher anyway?
_fairness_threshold – My queries should always run faster than everyone else’s, I think that’s fair.
_flashback_fuzzy_barrier – Fuzzy Wuzzy was a bear, Fuzzy Wuzzy had no hair, Fuzzy Wuzzy wasn’t very fuzzy was he.
_gc_defer_time – I’ve wanted to do this many times in the past.
_ges_dd_debug – Sounds like a speech impediment, b,b,but maybe not.
_hard_protection – Well if it was easy, everyone would be doing it.
_imr_avoid_double_voting – They needed this in Florida during the 2004 election.
_in_memory_undo – Same as the _forget parameter.
_kdli_STOP_dba – Keep the DBA from messing up the system.
_kdli_delay_flushes – Don’t flush until a specified threshold is reached.
_kdlwp_flush_threshold – The amount of poo that triggers flushing (see _kdli_delay_flushes).
_kdli_memory_protect – Same as the _dont_forget parameter (i.e. the opposite of the _forget parameter).
_kdli_squeeze – One of my favorite bands.
_kebm_nstrikes – The number of strikes before you are out (defaults to 3 – no joke).
_kebm_suspension_time – How long before convicted felons can return to playing football.
_kill_enqueue_blocker – What the defensive linemen try to do on every play.
_kill_java_threads_on_eoc – I always turn this one on, because anything that kills java threads is OK in my book.
_kfm_disable_set_fence – Good fences make good neighbors.
_kse_signature_limit – The cash advance limit on your credit card.
_kse_snap_ring_size
_ksi_clientlocks_enabled – Can be used on clients when they won’t follow your advice.
_lm_better_ddvictim – Not sure what this one does, but it has the word victim in it, scary!
_lm_master_weight – Set this to get control of your diet.
_lm_tx_delta – Ah the Texas delta, I think it’s some where near Galveston.
_max_exponential_sleep – The older I get, the longer the naps.
_memory_sanity_check – Do I seem crazy to you?
_mv_generalized_oj_refresh_opt – In general, orange juice is refreshing!
_olapi_memory_operation_history_retention – Same as the _dont_forget parameter (see _kdli_memory_protect).
_olap_wrap_errors – When you cut the wrapping paper too short and it won’t go around the present and you have to cut a little strip to cover the gap.
_optimizer_ignore_hints – No matter what you hear in there, no matter how cruelly I beg you, no matter how terribly I may scream, don’t open that door.
_optimizer_random_plan – This one is self evident and has defaulted to TRUE since the CBO first came out.
_optimizer_squ_bottomup – Bottoms Up!
_parallel_fake_class_pct – “The higher you hold your pinky, the fancier you are.” – Patrick from Sponge Bob.
_parallel_syspls_obey_force – The parameter that allows you to get the sysadmins (syspls) to do what you tell them.
_pct_refresh_double_count_prevented – This would have been useful in Florida during the 2004 election.
_pdml_gim_staggered – Gim must have had too much to drink.
_pred_move_around – If your dad was in the air force and you moved from base to base while growing up, setting this parameter will make you feel right at home.
_px_no_stealing – This parameter is set to TRUE by default and it’s actually against the law to change it.
_px_nss_planb – Use this parameter if plan A doesn’t work.
_shrunk_aggs_enabled – I don’t like shrunk aggs, I like the big-uns, wif bacun!
_spin_count – Have you ever seen fans at a baseball game put their heads on the end of a bat and turn circles and then run? You get the idea.
_two_pass_reverse_polish_enabled – I don’t think this one is politically correct.
_use_best_fit – One size fits all does not fit all.
_write_clones – Send a letter to your siblings.
_ultrafast_latch_statistics – Anything that is ultrafast has got to be fantastic!
_xsolapi_densify_cubes – Densify??? I think they made that word up.
_xsolapi_optimize_suppression – Allows us to keep the common man down as effectively as possible.
_xsolapi_stringify_order_levels – Stringify??? I think they made that word up too!
_xsolapi_use_models – It will make our advertising look better!

Here a few that allow you to turn on (or off) special checks:

_disable_acid_check – My acid is just fine thanks.
_disable_cpu_check – Yep, this machine has got at least one.
_disable_health_check – I don’t need no stinking annual checkup!
_disable_image_check – And I really don’t care how I look!

Here’s a few parameters dealing with death:

_px_execution_services_enabled – Apparently we can set up a special service for executions.
_xsolapi_share_executors – And if you have a lot of killing to do, it’s wasteful to not share executors.
_imr_splitbrain_res_wait – Sounds like it would hurt (if not kill).
_ksv_spawn_control_all – Creates zombies.
_cgs_zombie_member_kill_wait – Specifies how long to wait before slaying zombies!
_imr_evicted_member_kill – Boy I hope I don’t get evicted!
_ksu_diag_kill_time – Killing Time!
_ksuitm_dont_kill_dumper – The rumor is that this one was named by an animal rights activist and it was supposed to be “Don’t Kill Thumper”.
_lm_rcvr_hang_kill – Death by hanging!
_ksv_pool_hang_kill_to – If hanging doesn’t work, drown them in the pool!
_hang_detection – Turn this on so we’ll know when anybody is getting hung.

And just in case the 2000+ parameters in 11g aren’t enough, they have a few spares:

_first_spare_parameter
_second_spare_parameter
_third_spare_parameter
_fourth_spare_parameter
_fifth_spare_parameter
_sixth_spare_parameter
_seventh_spare_parameter

Anyway, that’s it for now. I hope you have a happy holiday and a …

Merry Christmas!

Low Tech Solutions to High Tech Problems

When I got to work today I walked into my co-worker’s (Michael’s) office and saw this:

Data was scrolling by on the screen in rapid fashion. So I asked him what he was doing and he said he got tired of mashing the inner-butt-n (that’s the way we say “pressing the return key” in Texas). Works for me. He could have probably written a custom shell script with proper error checking and whatnot, but why, when the stapler was sitting right there?

I always thought the best programmers were basically lazy. They always seem to find ways to get more done in less time. When I was a young programmer my goal was to write a batch job that would run all month. That way I’d only have to come in on the first to kick it off for the next month. I never quite got there but I had fun trying.

Which reminds me of something that happened at my first programming job. I worked for an oil company that had more money than sense. We had two of everything. We actually had two Crays. Anyway, my boss had one of the very first transportable computers, a Grid. The Grids were very futuristic back in 1982. They looked pretty similar to what we have today. So anyway, my boss told me this story after returning from a trip with his brand new Grid. He said he was on the airplane and decided to get his new toy out and play with it. So he gets it out of the bag and sets it up on the tray (I guess it was after the flight had taken off due to the electronics restrictions, oh yeah, they didn’t have those then!) – so anyway, he starts getting all the stuff out of the bag and getting organized, and by this time he says everyone within 10 rows is staring at him because no one had ever seen a laptop computer before. And he’s looking around smiling at everyone, thinking yeah this is pretty cool. Then he gets the last part out of the bag, … a power cord.

and he looks at the power cord …

and he looks around the cabin for a place to plug it in …

and he looks at the power cord again …

They didn’t have batteries on those early models. (they didn’t have ethernet jacks either, but they did have a 1200 baud modem BUILT IN!)  So anyway, he sheepishly puts the computer back in the bag and pretends to sleep for the rest of the flight.

Here’s a picture of the Grid computer (notice the wire running out the back <grin>):

My favorite low tech solution though was provided by a friend of mine that got a job right out of college working for an oil company. His first assignment was to fix a bug in an extremely complex reservoir simulation program. Apparently they had been trying to fix the bug for months.  The bug manifested itself by producing a result for one of the calculations that was always off by 1. And they just couldn’t figure out where the error was. They ran test case after test case through it and it was always off by 1. My friend worked on it for a day and then demoed it for them and it worked flawlessly. When asked how he did it, he said “Well, I just went to the end of the program and added 1 to the result”.

Your comments are always welcome.

Oracle Outlines – aka Plan Stability

Roughly 10 years ago, Oracle introduced a way to lock down execution plans. This was a response to the inherent unpredictability of the early cost based optimizer. The Oracle marketing machine even gave the feature the name “Plan Stability”. It appeared to be a good approach, basically allowing a DBA to lock an existing plan for a given statement. The mechanism employed to accomplish this was to create hints that didn’t allow the optimizer much (if any) flexibility to come up with an alternative plan. These hints are stored in the database in a structure called an OUTLINE or sometimes a STORED OUTLINE. The optimizer could then apply these hints behind the scenes whenever a SQL statement that matched was executed. By the way, “matching” basically means that the text of the statement matches. Originally outlines had to match character for character just like the normal rules for sharing SQL statements, but for some reason, Oracle later decided that the matching algorithm should be somewhat relaxed as compared to Oracle’s standard. What that means is that in 10gR2 by default whitespace is collapsed and differences in case are ignored. So (at least as far as outlines are concerned)  “select * from dual” is the same as “SELECT     *       FROM DuAl”. You’ll still get two different statements in the shared pool but they will use the same outline, if one exists.

With 9i, Oracle started to enhance this feature by adding the ability to edit the outlines themselves, but they never really completed the job. They pretty much quit doing anything with the feature after 9i. In fact, the script that creates the DBMS_OUTLN package ($ORACLE_HOME/rdbms/admin/dbmsol.sql) has not been updated since early in 2004 (with the exception of a tweak to keep it working in 11g). Anyway, I think it is a great feature with two primary uses.

  • First, it can be used to freeze a plan for a statement. This is especially helpful in situations where bind variable peeking is causing Oracle to alternate between a couple of plans.
  • Second, it can be very helpful when dealing with an application where the code cannot be modified. Outlines provide a means of altering the execution plan for a statement without changing the code or making changes to the basic database configuration.

Lots of people have written about outlines, so I don’t want to just repeat information that is already available. But there doesn’t seem to be a single spot that pulls together all (at least what I consider to be all) the important stuff. Also, most of the stuff I have seen about outlines was written for 8i or 9i. As this is being written, 11gR1 has been out for over a year (although it has still not been widely adopted), and 10gR2 is far and away the most prevalent version in production. So, here we go.

Outlines can be created two ways.

  1. You can use the CREATE OUTLINE statement – which allows you to give your outline a name, but requires you to include the SQL statement as part of your CREATE OUTLINE statement. Therefore you can’t see what the execution plan is before creating the outline. Not very useful in my opinion.
  2. You can use the CREATE_OUTLINE procedure in the DBMS_OUTLN package – which doesn’t allow you to give your outline a name, but does let you specify a specific child cursor of a specific SQL statement in the shared pool. This means you can check the execution plan before creating the outline and that you can be sure the statement exactly matches what is being sent from the application.

Here’s an example:

CREATE OUTLINE myoutline FOR CATEGORY temp ON select * from dual;

EXEC DBMS_OUTLN.CREATE_OUTLINE('82734682234',0,'DEFAULT');
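A couple of notes on that second form: the first argument is the statement’s HASH_VALUE (not its SQL_ID), which you can look up in V$SQL, and outlines don’t actually get used until stored outlines are turned on. Something like this:

-- find the hash_value and child number for the statement you want to lock down
select hash_value, child_number, plan_hash_value
from v$sql
where sql_id = '&sql_id';

-- outlines are only applied while this is set (TRUE uses the DEFAULT category,
-- or you can name a specific category); note that it's not a real init.ora
-- parameter, so it has to be re-issued after each instance restart
alter system set use_stored_outlines = true;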


Flush a Single SQL Statement – Take 2

I posted earlier about the ability to flush a single SQL statement out of the shared pool in 11g (also back ported to 10.2.0.4 with a bit of extra work) here. If you are on an earlier release of Oracle though, you can accomplish the same thing by creating an outline on the statement using the DBMS_OUTLN.CREATE_OUTLINE procedure. I just discovered this recently, so let me know if I just missed this trick. Anyway, prior to noticing this effect of creating an outline, my best options to flush a statement were:

  • flush the shared pool – not a very appealing option in a production environment (although I see almost the same effect frequently at sites that gather stats every night).
  • modify an object that the statement depends on – I usually would add a comment to one of the tables used by the statement. Unfortunately, all statements that use the table will be flushed, so this technique can also be a little hard on a production system, but it’s certainly better than flushing the whole shared pool. 
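The core of the trick is short enough to sketch inline (assuming you’ve already looked up the hash_value and a child number for the statement):

-- creating the outline invalidates and flushes every child cursor of the statement
exec dbms_outln.create_outline(&hash_value, &child_no, 'DEFAULT')

-- we only wanted the flush, so find and drop the outline that just got created
-- (create_outline doesn't tell you the name it generated, hence the lookup)
select name from dba_outlines where timestamp > sysdate - 5/86400;
drop outline &outline_name;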

So I wrote a little script (like I always do) and called it flush_sql10.sql. There are a couple of things to be aware of with it.

  1. I don’t like having to find the hash_value that the create_outline procedure uses, so I prompt for the sql_id and then let the script find the hash_value.
  2. The create_outline procedure requires a child number, but the flushing effect is not limited to the specified child. All children for the specified statement will be flushed.
  3. The script drops the outline after creating it (since the purpose is to flush the statement, not to create an outline). This part is a little dicey since the create_outline procedure does not return an identifier for the outline that gets created. Nor does it allow you set a name for the outline. So I coded it to drop the most recently created outline (which should be sufficient, since it would be highly unlikely that more than one person would be creating outlines at the same time). But wait, don’t answer yet, I also limited the drop to outlines created in the last 5 seconds. Bottom line, it is unlikely that an unintended outline would be accidentally dropped by the script. (you have however been forewarned!)
  4. There’s no error checking. Any errors stop execution of the script and are passed back to the user. The most common error is to not give it a valid SQL_ID, CHILD_NO combination. In this case the create_outline procedure fails and the script exits with a “no data found” message.

Here’s an example:


> sqlplus / as sysdba

SQL*Plus: Release 10.2.0.3.0 - Production on Fri Dec 12 08:31:08 2008

Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> @find_sql
Enter value for sql_text: %skew%
Enter value for sql_id:

SQL_ID         CHILD  PLAN_HASH      EXECS     AVG_ETIME      AVG_LIO INVALIDATIONS    SQL_TEXT
------------- ------ ---------- ---------- ------------- ------------ ------------- -----------------------------------------------------
0gza16w5ka67q      0 1528838287          1           .01          249             0 SELECT   count(*)    FROM kso.skew where pk_col < 10
688rj6tv1bav0      0 3723858078          1          2.15       37,270             1 select avg(pk_col) from kso.skew where col1 = 1
688rj6tv1bav0      1  568322376          1          5.09      173,731             1 select avg(pk_col) from kso.skew where col1 = 1
7cbu7dgt0vh6y      0 1528838287          1           .00          226             0 select count(*) from kso.skew where pk_col < 10

SQL> @flush_sql_ol

Session altered.

Enter value for sql_id: 688rj6tv1bav0
Enter value for child_number: 1

SQL Statement 688rj6tv1bav0 flushed.
(Note also that outline SYS_OUTLINE_08121120170934217 was dropped.)

PL/SQL procedure successfully completed.

SQL> @find_sql
Enter value for sql_text: %skew%
Enter value for sql_id:

SQL_ID         CHILD  PLAN_HASH      EXECS     AVG_ETIME      AVG_LIO INVALIDATIONS    SQL_TEXT
------------- ------ ---------- ---------- ------------- ------------ ------------- -----------------------------------------------------
0gza16w5ka67q      0 1528838287          1           .01          249             0 SELECT   count(*)    FROM kso.skew where pk_col < 10
7cbu7dgt0vh6y      0 1528838287          1           .00          226             0 select count(*) from kso.skew where pk_col < 10

2008 Dallas 100 – Enkitec

I went to the Dallas 100 awards banquet last night. The Dallas 100 is an annual award for the fastest growing privately held companies that are based in the Dallas / Fort Worth area. Enkitec was ranked 42nd in our first year to be eligible. They have it every year at the Morton Meyerson. It’s a beautiful place, by the way, with one of the world’s greatest pipe organs.

Ray Hunt was the speaker. He had a couple of insightful things to say (of course I’ll have to paraphrase).

He said the oil business was the only business he was aware of where you could look forward at the first of every year and know that 90% of the decisions you’ll make that year will be wrong. Which means that the other 10% have to make up for all those wrong decisions and still provide a reasonable profit to the business. He said his background in the oil business made him very tolerant of failure and that America in general was very tolerant of failure which he thought was a good thing. He then told a story about one of his managers telling him that his department had not made any mistakes in the previous 12 months. To Ray that meant one of two things (and both were bad). Either the manager had lost complete touch with reality, or his department was not being aggressive enough.

He went on to list his top 5 attributes of successful companies. He said he thought the most important characteristic of a successful company was its Culture.

He defined Culture as the combination of the shared values and the shared work ethic of a group of individuals.

Other attributes that made his top 5 list were:

  • Adaptability – the ability to change
  • Agility – the ability to move quickly
  • Differentiation – the ability to recognize, retain, and enhance that which sets you apart
  • Contrarian – the ability to question the accepted practices