
Archive for March 2009

Bind Variable Peeking – Drives Me Nuts!

In the constant battle to provide consistent performance, Oracle took a giant step backwards with the 9i version by introducing an “Enhancement” called Bind Variable Peeking. I’ll explain what I mean in a minute, but first a bit of history.

When Oracle introduced histograms in 8i, they provided a mechanism for the optimizer to recognize that the values in a column were not distributed evenly. That is, in a table with 100 rows and 10 distinct values, the default assumption the optimizer would make, in the absence of a histogram, would be that no matter which value you picked, you would always get 100/10 or 10 rows back. Histograms let the optimizer know when that was not the case. The classic example would be 100 records with 2 distinct values where one value, say “Y”, occurred 99 times and the other value, say “N”, occurred only 1 time. Without a histogram the optimizer would always assume that whether you requested records with a “Y” or an “N”, you would get half the records back (100/2 = 50), and therefore it would always favor a full table scan over using an index on the column. A histogram, assuming it was accurate (we’ll come back to that later), would let the optimizer know that the distribution was not uniform (i.e. not spread out evenly – commonly called skewed) and that a “Y” would return basically the whole table, while an “N” would return only 1% of it. This would allow the optimizer to pick an appropriate plan regardless of which value was specified in the Where Clause.
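To make that concrete, here’s a minimal sketch of setting up such a skewed column and gathering a histogram on it. The SKEW_DEMO table and index names are made up for illustration; DBMS_STATS.GATHER_TABLE_STATS and its METHOD_OPT syntax are the standard stats gathering API.

-- Hypothetical demo table: 100 rows, 99 with 'Y' and 1 with 'N'
create table skew_demo as
select case when rownum = 1 then 'N' else 'Y' end col1, rownum pk_col
from dual connect by rownum <= 100;

create index skew_demo_i1 on skew_demo(col1);

-- Gather stats with a histogram on COL1 (254 buckets is the maximum)
begin
  dbms_stats.gather_table_stats(
    ownname          => user,
    tabname          => 'SKEW_DEMO',
    method_opt       => 'FOR COLUMNS COL1 SIZE 254',
    estimate_percent => 100);
end;
/

With the histogram in place, the optimizer should cost the “N” predicate at roughly one row and pick the index, while the “Y” predicate still gets the full table scan.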

So let’s consider the implications of that. Would that improve the response time for the query where the value was “Y”? The answer is no. In this simple case, the default costing algorithm is close enough and produces the same plan that the histogram produces. The full table scan takes just as long whether the optimizer thought it was getting 50 rows or 99 rows. But what about the case where we specified the value of “N”? In this case, with a histogram we would pick up the index on that column and presumably get a much better response time than the full table scan. This is an important point. Generally speaking, it is only the outliers, the exceptional cases if you will, where the histogram really makes a difference.

So at first glance, all appeared well with the world. But there was a fly in the ointment. You had to use literals in your SQL statements for the optimizer to be able use the histograms. So you had to write your statements like this:

SELECT XYZ FROM TABLE1 WHERE COLUMN1 = 'Y';

SELECT XYZ FROM TABLE1 WHERE COLUMN1 = 'N';

Not a problem in our simple example, because you only have two possibilities. But consider a statement with 2 or 3 skewed columns, each with a couple of hundred distinct values. The possible combinations could quickly grow into the millions. Not a good thing for the shared pool.

Enter our star: Bind Variable Peeking, a feature introduced in 9i that allows the optimizer to peek at the value of bind variables and then use a histogram to pick an appropriate plan, just like it would do with literals. The problem with the new feature was that it only looked at the variables once, when the statement was parsed. So let’s make our simple example a little more realistic by assuming we have a 10 million row table where 99% of the rows have a value of “Y” and 1% have a value of “N”. In that case, if the first time the statement was executed it was passed a “Y”, the full table scan plan would be locked in and would be used until the statement had to be re-parsed, even if the value “N” was passed in subsequent executions.
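Here’s what that looks like from SQL*Plus, using the hypothetical TABLE1 from above; whichever value happens to be in the bind variable at hard parse time determines the plan that every later execution inherits:

variable v1 varchar2(1)

exec :v1 := 'Y'
-- first execution: hard parse, the peek sees 'Y' and locks in the full table scan
select xyz from table1 where column1 = :v1;

exec :v1 := 'N'
-- subsequent execution: the cursor is shared, so we still get the full table scan
select xyz from table1 where column1 = :v1;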

So let’s consider the implication of that. When you get the full table scan plan (because you passed a “Y” the first time), it behaves the same way no matter which value you pass subsequently. Always a full table scan, always the same amount of work, and the same basic elapsed time. From a user standpoint that seems reasonable. The performance is consistent. (This is the way it would work without a histogram, by the way.) On the other hand, if the index plan gets picked because the parse occurs with a value of “N”, the executions where the value is “N” will be even faster than they were before, but any execution with a value of “Y” will be incredibly slow. This is not at all what the user expects. Users expect the response time to be about the same every time they execute a piece of code. And this is the problem with bind variable peeking. It’s basically Russian Roulette. The plan depends on whatever value happens to be passed to the statement when it’s parsed (which could be any execution, by the way).

So is Bind Variable Peeking a feature or a bug? Well technically it’s not a bug because it works the way it’s designed. I just happen to believe that it was not a good decision to implement it that way. But what other choices did the optimizer development group have?

  • They could have evaluated the bind variables and re-parsed for every execution of every statement using bind variables. This would eliminate the advantage of having bind variables in the first place and would never work for high transaction systems. So it was basically not an option.
  • They could have just said no, and made us use literals in order to get the benefit of histograms (probably not a bad option in retrospect – the fact that they added _optim_peek_user_binds probably means that they decided later to give us that option via setting this hidden parameter).
  • They could have implemented a system where they could identify statements that might benefit from different plans based on the values of bind variables. Then peek at those variables for every execution of those “bind sensitive” statements (sound familiar? – that’s what they finally did in 11g with Adaptive Cursor Sharing).
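As an aside, if you’re on 11g you can see Adaptive Cursor Sharing at work via two columns it added to V$SQL; a quick check might look like this:

select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware
from v$sql
where sql_id = '&sql_id';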

So why is it such a pervasive problem? And I do believe it is a pervasive problem with 10g in particular. A couple of reasons come to mind:

  1. We’ve been taught to always use bind variables. It’s a best practice that allows SQL statements to be shared, eliminating a great deal of work and contention. Using bind variables is an absolute necessity when building scalable, high transaction rate systems. (Of course that doesn’t mean you can’t bend the rule occasionally.)
  2. 10g changed its default stats gathering method to automatically gather histograms. So in a typical 10g database there are a huge number of histograms, many of them inappropriate (i.e. on columns that don’t have significantly skewed distributions) and many of them created with very small sample sizes, making the histograms less than accurate (see the example after this list). Note that 11g appears to be better on both counts – that is to say, 11g seems to create fewer inappropriate histograms and seems to create much more accurate histograms with small sample sizes. But the jury is still out on 11g stats gathering, as it has not been widely adopted at this point in time.
  3. In my humble opinion, Bind Variable Peeking is not that well understood. When I talk to people about the issue, they usually have heard of it and have a basic idea what the problem is, but their behavior (in terms of the code they write and how they manage their databases) indicates that they don’t really have a good handle on the issue.
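For reference on point 2, here’s the difference in DBMS_STATS terms (TABLE1 is again just a placeholder). The first call is effectively what the 10g automatic stats job does; the second gathers no histograms at all unless you request them explicitly elsewhere:

-- 10g default: let Oracle decide which columns get histograms
exec dbms_stats.gather_table_stats(user, 'TABLE1', method_opt => 'FOR ALL COLUMNS SIZE AUTO')

-- No histograms unless explicitly requested
exec dbms_stats.gather_table_stats(user, 'TABLE1', method_opt => 'FOR ALL COLUMNS SIZE 1')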

So what’s the best way to deal with this issue? Well, recognizing that you have a problem is the first step to recovery, so being able to identify that you have a problem with plan stability is an appropriate first step. Direct queries against the Statspack or AWR tables are probably the best way to identify the issue. I’ve previously posted a couple of scripts that I find useful for this purpose (unstable_plans.sql, awr_plan_stats.sql, awr_plan_change.sql). What you’re looking for is statements that flip-flop back and forth between two or more plans. Note that there are other reasons for statements to change plans, but Bind Variable Peeking is the number one suspect. Here’s an example of their usage:

SQL> @unstable_plans
SQL> break on plan_hash_value on startup_time skip 1
SQL> select * from (
  2  select sql_id, sum(execs), min(avg_etime) min_etime, max(avg_etime) max_etime, stddev_etime/min(avg_etime) norm_stddev
  3  from (
  4  select sql_id, plan_hash_value, execs, avg_etime,
  5  stddev(avg_etime) over (partition by sql_id) stddev_etime
  6  from (
  7  select sql_id, plan_hash_value,
  8  sum(nvl(executions_delta,0)) execs,
  9  (sum(elapsed_time_delta)/decode(sum(nvl(executions_delta,0)),0,1,sum(executions_delta))/1000000) avg_etime
 10  -- sum((buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta))) avg_lio
 11  from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
 12  where ss.snap_id = S.snap_id
 13  and ss.instance_number = S.instance_number
 14  and executions_delta > 0
 15  and elapsed_time_delta > 0
 16  group by sql_id, plan_hash_value
 17  )
 18  )
 19  group by sql_id, stddev_etime
 20  )
 21  where norm_stddev > nvl(to_number('&min_stddev'),2)
 22  and max_etime > nvl(to_number('&min_etime'),.1)
 23  order by norm_stddev
 24  /
Enter value for min_stddev:
Enter value for min_etime:

SQL_ID        SUM(EXECS)   MIN_ETIME   MAX_ETIME   NORM_STDDEV
------------- ---------- ----------- ----------- -------------
1tn90bbpyjshq         20         .06         .24        2.2039
0qa98gcnnza7h         16       20.62      156.72        4.6669
7vgmvmy8vvb9s        170         .04         .39        6.3705
32whwm2babwpt        196         .02         .26        8.1444
5jjx6dhb68d5v         51         .03         .47        9.3888
71y370j6428cb        155         .01         .38       19.7416
66gs90fyynks7        163         .02         .55       21.1603
b0cxc52zmwaxs        197         .02         .68       23.6470
31a13pnjps7j3        196         .02        1.03       35.1301
7k6zct1sya530        197         .53       49.88       65.2909

10 rows selected.

SQL> @find_sql
SQL> select sql_id, child_number, plan_hash_value plan_hash, executions execs,
  2  (elapsed_time/1000000)/decode(nvl(executions,0),0,1,executions) avg_etime,
  3  buffer_gets/decode(nvl(executions,0),0,1,executions) avg_lio,
  4  sql_text
  5  from v$sql s
  6  where upper(sql_text) like upper(nvl('&sql_text',sql_text))
  7  and sql_text not like '%from v$sql where sql_text like nvl(%'
  8  and sql_id like nvl('&sql_id',sql_id)
  9  order by 1, 2, 3
 10  /
Enter value for sql_text:
Enter value for sql_id: 0qa98gcnnza7h

SQL_ID         CHILD  PLAN_HASH        EXECS     AVG_ETIME      AVG_LIO SQL_TEXT
------------- ------ ---------- ------------ ------------- ------------ ------------------------------------------------------------
0qa98gcnnza7h      0  568322376            3          9.02      173,807 select avg(pk_col) from kso.skew where col1 > 0

SQL> @awr_plan_stats
SQL> break on plan_hash_value on startup_time skip 1
SQL> select sql_id, plan_hash_value, sum(execs) execs, sum(etime) etime, sum(etime)/sum(execs) avg_etime, sum(lio)/sum(execs) avg_lio
  2  from (
  3  select ss.snap_id, ss.instance_number node, begin_interval_time, sql_id, plan_hash_value,
  4  nvl(executions_delta,0) execs,
  5  elapsed_time_delta/1000000 etime,
  6  (elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
  7  buffer_gets_delta lio,
  8  (buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta)) avg_lio
  9  from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
 10  where sql_id = nvl('&sql_id','4dqs2k5tynk61')
 11  and ss.snap_id = S.snap_id
 12  and ss.instance_number = S.instance_number
 13  and executions_delta > 0
 14  )
 15  group by sql_id, plan_hash_value
 16  order by 5
 17  /
Enter value for sql_id: 0qa98gcnnza7h

SQL_ID        PLAN_HASH_VALUE        EXECS          ETIME    AVG_ETIME        AVG_LIO
------------- --------------- ------------ -------------- ------------ --------------
0qa98gcnnza7h       568322376           14          288.7       20.620      172,547.4
0qa98gcnnza7h      3723858078            2          313.4      156.715   28,901,466.0

SQL> @awr_plan_change
SQL> break on plan_hash_value on startup_time skip 1
SQL> select ss.snap_id, ss.instance_number node, begin_interval_time, sql_id, plan_hash_value,
  2  nvl(executions_delta,0) execs,
  3  (elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
  4  (buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta)) avg_lio
  5  from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
  6  where sql_id = nvl('&sql_id','4dqs2k5tynk61')
  7  and ss.snap_id = S.snap_id
  8  and ss.instance_number = S.instance_number
  9  and executions_delta > 0
 10  order by 1, 2, 3
 11  /
Enter value for sql_id: 0qa98gcnnza7h

   SNAP_ID   NODE BEGIN_INTERVAL_TIME            SQL_ID        PLAN_HASH_VALUE        EXECS    AVG_ETIME        AVG_LIO
---------- ------ ------------------------------ ------------- --------------- ------------ ------------ --------------
     21857      1 20-MAR-09 04.00.08.872 PM      0qa98gcnnza7h       568322376            1       31.528      173,854.0
     22027      1 27-MAR-09 05.00.08.006 PM      0qa98gcnnza7h                            1      139.141      156,807.0
     22030      1 27-MAR-09 08.00.15.380 PM      0qa98gcnnza7h                            3       12.451      173,731.0
     22031      1 27-MAR-09 08.50.04.757 PM      0qa98gcnnza7h                            2        8.771      173,731.0
     22032      1 27-MAR-09 08.50.47.031 PM      0qa98gcnnza7h      3723858078            1      215.876   28,901,466.0
     22033      1 27-MAR-09 08.57.37.614 PM      0qa98gcnnza7h       568322376            2        9.804      173,731.0
     22034      1 27-MAR-09 08.59.12.432 PM      0qa98gcnnza7h      3723858078            1       97.554   28,901,466.0
     22034      1 27-MAR-09 08.59.12.432 PM      0qa98gcnnza7h       568322376            2        8.222      173,731.5
     22035      1 27-MAR-09 09.12.00.422 PM      0qa98gcnnza7h                            3        9.023      173,807.3

9 rows selected.

So back to the question: what’s the best way to deal with the issue? In general, the best way to eliminate problems caused by Bind Variable Peeking is as follows:

  1. Only create histograms on skewed columns.
  2. Use literals in where clauses on columns where you have histograms and want to use them. Note that it’s not necessary to use literals for every possible value of a skewed column. There may be only a few outlier values that result in significantly different plans. With a little extra code you can use literals for those values and bind variables for the rest of the values that don’t matter (see the sketch after this list).
  3. If you can’t modify the code, consider turning off Bind Variable Peeking by setting the _OPTIM_PEEK_USER_BINDS parameter to false. You won’t get the absolute best performance for every possible statement, but you will get much more consistent performance, which is, in my opinion, more important than getting the absolute best performance. Keep in mind that this is a hidden parameter and so should be carefully tested and probably discussed with Oracle support prior to implementing it in any production system.
  4. You can also consider stronger methods of forcing the optimizer’s hand, such as Outlines (see my previous posts on Unstable Plans and on Outlines). This option provides a quick method of locking in a single plan, but it’s not foolproof. Even with outlines, there is some possibility that the plan can change. Also note that this option is only palatable in situations where you have a relatively small number of problem SQL statements.
  5. Upgrade to 11g and let Adaptive Cursor Sharing take care of all your problems for you (don’t bet on it working without a little effort – I’ll try to do a post on that soon).
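To illustrate point 2, here’s a hedged PL/SQL sketch using the hypothetical TABLE1 and COLUMN1 from earlier. The rare “N” value gets a literal so the histogram can do its job, while the common values share a single cursor through a bind (static PL/SQL binds the variable automatically):

declare
  l_val varchar2(1) := 'N';  -- value supplied at runtime
  l_cnt number;
begin
  if l_val = 'N' then
    -- outlier value: use a literal so the optimizer can see it
    select count(*) into l_cnt from table1 where column1 = 'N';
  else
    -- common values: one shared cursor; these values all want the same plan anyway
    select count(*) into l_cnt from table1 where column1 = l_val;
  end if;
end;
/

And for point 3, the parameter is set like any other, although the double quotes are required because of the leading underscore (again, test carefully before touching a production system):

alter system set "_optim_peek_user_binds" = false;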

In summary, using literals with histograms on columns with skewed data distributions is really the only effective way to deal with the issue and still retain the ability for the optimizer to choose the absolute best execution plans. However, if circumstances prevent this approach, there are other techniques that can be applied. These should be considered temporary fixes, but they may work well while a longer term solution is contemplated. From a philosophical standpoint, I strongly believe that consistency is more important than absolute speed. So when a choice must be made, I would always favor slightly reduced but consistent performance over anything that didn’t provide that consistency.

Your comments are always welcome. Please let me know what you think.

DOUG Presentation – 11g

I did a little talk at the Dallas Area Users Group this afternoon. The talk was about 11g stuff. Here’s a link to the presentation materials.

DOUG 11g presentation materials

It’s a zip file with the PowerPoint and several text files with examples. Also, I promised to upload Randy Johnson’s slides from our presentation several months ago at an Oracle Tech Day. His material included info on new features of RMAN and compression in 11g. I’ll add that here as soon as I get it from him.

Please let me know if you have any questions or comments.

Oracle Performance For Developers …

This week I attended the Hotsos Symposium – It was great as usual. There are more smart guys at this event every year than you can shake a stick at. In fact, I often learn as much from the attendees as I do from the presenters.  Here’s a fancy link to the presentation I gave:

Note: I struggled a bit with how to label myself, since I don’t really have an official title. I thought about calling myself a “Senior Oracle Specialist”, but that sounded a little too puffed up. Especially the “Specialist” part. So then I thought maybe “Senior Oracle Guy” would be a little more down to earth. That was better, but it sounded a little too old, like a Senior Citizen. And since I am still in my late 40’s (OK, very late 40’s) I am still quite a ways from being a “Senior”, I think. Then I thought maybe I should go with something more generic like “Nice Guy and All Around Prince of a Fellow”, but that seemed a little too uninformative (and besides, my former partner used to have that on his business cards). So I decided to go back to the “Oracle guy” idea and considered using something like “Very Experienced Oracle Guy”. That sounded OK, but “Very Experienced” is really just code for old. So I was back to that, how to say old but not too old. “Oldish” – that’s what I ended up with, mainly because I ran out of time to think about it any more (probably a good thing).

I was originally scheduled to deliver my talk on Tuesday afternoon. But when I checked in on Monday morning, Becky Goodman asked me if I would mind swapping time slots with Stephan Haisley, who had a “conflict”. His slot was first thing in the morning on Wednesday. So I said sure. Only later did I find out that the conflict was related to the Tuesday night party, which has a tendency to stretch into the wee hours of the morning. Stephan’s a smart guy and he was thinking ahead. He realized that he probably wouldn’t be at his best, first thing on Wednesday morning. As Clint Eastwood said, “A man’s gotta know his limitations”.

Anyway, the talk went pretty well but I did have one embarrassing moment. I’ve been doing Oracle stuff for a long time, so I often run into people that I haven’t seen for a while (sometimes a very long while). I’m pretty good with faces and places, but names sometimes escape me. Isn’t it odd how our brains work? I can remember minute details about some arcane unix command that I haven’t used in 10 years, but a guy’s name that I worked closely with for half a decade can escape me. How does that happen?

I’ve gotten used to it, but occasionally something even more bizarre happens. Like getting a couple of bits of memory cross wired. This actually happens more often than you would think. Try this on a friend. Get them to say “Silk” five times as quickly as they can.  Like … “Silk, Silk, Silk, Silk, Silk” …  Then immediately ask them what cows drink. Almost without fail they will say “Milk”. Of course they know that cows don’t drink milk. They know that cows drink water. But for some reason the word “Milk” just comes rolling off their tongue. Why? Because the word “Milk” sounds almost the same as the word “Silk” and you’ve just made them access the part of their brain that stores the word “Silk” several times in a row. In addition, you have asked them a question with a word (cow) that is very closely associated with the word “Milk”. And finally, milk is a liquid that people drink. So there are 3 very strong associations in your brain, even though you know that it is not the correct answer to the question.

So what’s the point? Well … the first day of the Symposium, I ran into a guy that I have known for several years and had in fact shared office space with just a couple of years ago. His name is Jeff Holt, and he co-wrote a book called Optimizing Oracle Performance with a guy named Cary Millsap. So I see Jeff, walk over with a big grin on my face, shake hands with him and say “Hi Kevin!”.

And he just looks at me like I’m crazy (which he does pretty well, by the way). And I realize what I’ve done and say “I’m sorry Jeff, I do know what your name is”. And he looks somewhat dubious but accepts my apology. The thing is, I have done this to Jeff several times in the past. I explained to Jeff that there is a perfectly reasonable explanation for me calling him by the wrong name. I used to work with a guy named Kevin Holt and for some inexplicable reason, Kevin’s name always comes out when I think about Jeff. Maybe it’s because my brain stores data by last_name and the cells holding the first names have become damaged in some way, maybe I’ve used the name “Kevin Holt” a lot more than the name “Jeff Holt”, maybe my brain was more impressionable when I was younger and so the earlier memory is stronger. I’m not sure. Anyway, I pretty much just wrote it off as one of those questions for which there is no answer.

But I digress, back to the embarrassing moment during my presentation: So the talk is going along well and I get to this page where I reference Cary and Jeff’s book and I look at the big overhead and the reference looks like this:

Cary Millsap & Kevin Holt. Optimizing Oracle Performance
O’Reilly, 2003.

Of course to me it looked like this:

Cary Millsap & Kevin Holt. Optimizing Oracle Performance
O’Reilly, 2003.

Yes that’s right. Not only did I call him by the wrong name when I ran into him, but I actually typed it wrong on my presentation. To make matters worse, Cary Millsap is in the audience with a puzzled look on his face. So I have to apologize to him while the rest of the audience looks on. Then as soon as the talk is over, I fix the presentation materials and resubmit them (hopefully wiping out any trace of my cross wired brain). This whole experience gets me really thinking about how my brain works and why it continues to make this repeated error. It seems unlikely that just knowing two guys with the same last name would cause such a problem. I know lots of people with the same last name, and I don’t get their first names mixed up.

So I start racking my brain to see if I can come up with any other explanation. What other associations do I have with the name “Kevin”? Well for starters, my only brother’s name is Kevin. We were born only a year apart so when I was a kid, almost every time I heard my name it was closely followed by his name (usually it was at the top of some adult’s lungs due to some trouble we were stirring up). In fact, the old folks often couldn’t be bothered  to keep us straight, so even when we weren’t together (which was rare) they often just combined our names (Kervin was the most common version – Kevrry was a lot less common – for obvious reasons). So anyway, I do have a very strong association between my name and my brother’s name. Then it occurred to me that my first name sounds just like Cary Millsap’s first name. Hmmmm. Cary and Jeff are closely associated (at least in my mind) as I mentioned before. They co-authored the book and have worked together at the same company (first Hotsos and then Method-R) for the last decade or so. I’ve known Cary a long time, but only met Jeff 4 or 5 years ago. So I’ve only ever known Jeff to be associated with Cary. You probably can see where this is heading. I believe my brain does something like this old school fill in the blank problem:

__________________________________________________________________________________________

Fill in the Blank with the Word that Connects the Other Two Words

Cary (which sounds like Kerry)  ____________   Holt

__________________________________________________________________________________________

It’s like a little pattern matching or free association thing. My brain just wants to put the word “Kevin” in that spot as the link between the other two words.

By the way, there have been lots of studies done over the years about how our brains store memories, how we retrieve them, how we forget things, etc… Some of those studies have indicated that most long term memory is semantic based while short term memory is more acoustic based. So most people would tend to mix up words that sound alike (like milk and silk) in short term memory while mixing up words that mean the same thing (like auto and car) in long term memories. Of course there are other studies that prove we all have some long term acoustic memory (being able to identify specific accents for example). The fact that I am a long time musician and that I play mostly by ear is probably a contributing factor as well. I am said to have a “good ear” which means that I can reproduce music pretty accurately after a very short exposure to it. So I think all that extra exercise my brain has done has made me more likely to store long term memories with an acoustic or “sound alike” kind of memory organization. By the way, if you are interested in this kind of stuff, there is an excerpt from a survey of the literature which discusses several of these studies here: Human Memory by Elizabeth Loftus.

So that’s my story and my rationalization for why it happened. And for whatever it’s worth, I’m sorry Kevin, err… I mean Jeff! – I guess my brain just has a mind of its own.