September 21, 2011, 3:59 pm
Oracle today announced a new database appliance product. It’s called Oracle Database Appliance (ODA). I’m not crazy about the name, but I really like the product. Here’s a picture of the one in Enkitec’s lab:
[photo: the ODA in Enkitec’s lab, with a yellow “Comet” sticky note on the front]
The project was code named “Comet” – thus the yellow sticky note. 😉
I really like that name better than ODA, so I think I will just stick with Comet.
Enkitec participated in the beta test program for the product and we were very impressed, particularly with the speed at which the product could be deployed and configured. There is a new tool called the “OAK Configurator” that is sort of like the Exadata OneCommand for configuring the system. Keep an eye out for Karl Arao‘s upcoming post with screen shots of the tool in action.
I’m sure there will be plenty of people talking about the specs so I won’t get carried away with that. But I will tell you that it’s basically 4 terabytes of usable storage, 2-node RAC with up to 24 cores, and an SSD component that is used for improving redo write speeds (more on that later), all in a 4U chassis. Andy Colvin already has a really good post on the hardware components that are included in the Oracle Database Appliance (along with pictures of the bits and bobs inside the chassis).
I should point out that while I have heard people refer to Comet as a “Baby Exadata”, I really don’t view it that way. That’s because it DOES NOT have separate storage and compute tiers. So there is no Exadata Smart Scan / Offloading secret sauce here. It also does not provide the ability to utilize Exadata’s Hybrid Columnar Compression. On the other hand, like Exadata, it is a pre-configured and tested system that can be dropped in a data center and be ready for use almost immediately (it took us only a couple of hours to set it up and create a database). Pretty unbelievable really.
So much like my favorite Bill Clinton quote, whether ODA is a “Baby Exadata” or not really depends on your definition of the word “is”. It is a hardware platform that is built specifically to run Oracle databases, but it does not embed any of the unique Exadata software components. Nevertheless, it is an extremely capable platform that will appeal to a wide variety of companies running Oracle databases.
And best of all, the list price for this puppy is only $50K. I predict this thing is going to sell like hot cakes!
September 12, 2011, 5:25 pm
I ran into a really ugly SQL statement last week. It was the most expensive statement running on the system for the last several weeks. On top of the fact that the statement ran for hours, it also had a number of “issues”. The statement had the following characteristics:
- Several DISTINCT Operators – these tend to show up in statements where developers have left off a join condition
- ANSI Join Syntax – generally Oracle converts this syntax back to Oracle join syntax under the covers. I realize this is standard SQL syntax, but it still makes me nervous due to previous experience with problems caused by this conversion process.
- Functions Used in WHERE Clauses (DECODE and NVL in particular) – can disable indexes
- UNION – forces a sort of the unioned result sets so that duplicates can be eliminated (which is often not necessary). Depending on how the statement is coded, UNIONs can also sometimes result in multiple accesses of the same objects (i.e. if the developer has used a UNION in place of a more robust where clause). See the quick UNION vs. UNION ALL sketch after this list.
- Numerous != and <> Comparison Operators – Not necessarily a problem but often can be coded in a more efficient manner
- Several OR’s – OR’s are tricky. I get suspicious when there are numerous other problems in a statement as well.
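Here’s the quick UNION vs. UNION ALL sketch mentioned above. The tables and columns (T1, T2, COL1, REGION) are made up, so this is just an illustration: if duplicates are impossible (or acceptable), UNION ALL avoids the extra duplicate-elimination step that UNION forces.

-- UNION sorts/hashes the combined result set to eliminate duplicates
select col1 from t1 where region = 'EAST'
union
select col1 from t2 where region = 'WEST';

-- UNION ALL simply concatenates the two result sets - no duplicate elimination
select col1 from t1 where region = 'EAST'
union all
select col1 from t2 where region = 'WEST';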
I’ll refrain from publishing the actual statement to protect the guilty, but one of the most interesting bits of the statement looked something like this:
...
WHERE a.col1 = DECODE ('XYZ', '*', a.col1 ,'XYZ')
...
The processing of the where clause can be roughly translated to the following pseudo code:
if 'XYZ' = '*' then
   x = a.col1
else
   x = 'XYZ'
end if
...
WHERE a.col1 = x
...
Since the literal ‘XYZ’ will never be equal to the literal ‘*’, the value returned by this DECODE will always be ‘XYZ’. Therefore the statement could have been written much more simply as:
...
WHERE a.col1 = 'XYZ'
...
So this is an extremely strange way to write the statement and clouds the intention considerably, but does it hurt performance? Well actually no. The optimizer is smart enough to know that this DECODE will always result in the same literal value and so it appears that Oracle does not run the DECODE statement for each row. Here’s a quick example from a 10.2.0.4 database. (note that I used dplan.sql and fss.sql in this code snippet)
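If you want to poke at this sort of thing on your own system, a rough generic sketch (made-up table T and column COL1, using DBMS_XPLAN.DISPLAY_CURSOR directly rather than my scripts) is to run both forms and pull the plans - the Predicate Information section shows how the optimizer actually treated the DECODE, and the timings show whether it matters:

-- generic sketch: table T and column COL1 are made up
set serveroutput off
set timing on

-- original form, with the DECODE in the where clause
select count(*) from t a
 where a.col1 = decode('XYZ', '*', a.col1, 'XYZ');

select * from table(dbms_xplan.display_cursor);

-- simplified form
select count(*) from t a
 where a.col1 = 'XYZ';

select * from table(dbms_xplan.display_cursor);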
Continue reading ‘Funny Developer Tricks – Decode’ »
August 19, 2011, 8:58 am
Cary Millsap is teaching a new one-day class on Oracle trace data next week in Dallas (well, Southlake actually). It’s a class he has personally been developing recently and will be teaching himself, and I am planning on attending. Here’s a link to the sign-up page, which has all the details:
Mastering Oracle Trace Data
Check it out.
August 16, 2011, 9:45 am
I modified my create_1_hint_sql_profile.sql script (which I blogged about here: Single Hint Profiles) to allow any arbitrary text string, including quote characters. This is a script that I use fairly often to apply a hint to a single SQL statement that is executing in a production system where we can’t touch the code for some reason. For example, it’s sometimes useful to add a MONITOR hint or a GATHER_PLAN_STATISTICS hint to a statement that’s behaving badly so we can get more information about what the optimizer is thinking. I recently updated the script to allow special characters in the hint syntax. This feature is useful when you want to add something like an OPT_PARAM hint that takes quoted arguments. The change makes use of the q-Quote feature, which I blogged about here: q-Quote. (The original version just barfed on quotes being input as part of the hint.)
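The q-quote syntax itself is pretty simple. Here’s a one-liner (just selecting the hint text from DUAL) showing how it lets you pass a string full of single quotes without doubling them up:

-- q-quote (10g and later): the [ ] pair delimits the literal, so the embedded
-- single quotes around the OPT_PARAM arguments don't need to be escaped
select q'[opt_param('cell_offload_processing' 'true')]' as hint_text from dual;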
Here’s an example of how to use the updated script:
SYS@SANDBOX1> alter session set cell_offload_processing=false;
Session altered.
Elapsed: 00:00:00.00
SYS@SANDBOX1> select avg(pk_col) from kso.skew3 where col1 < 0;
AVG(PK_COL)
-----------
1849142.5
1 row selected.
Elapsed: 00:00:28.08
SYS@SANDBOX1> @fsx
Enter value for sql_text: select avg(pk_col) from kso.skew3 where col1 < 0
Enter value for sql_id:
SQL_ID CHILD PLAN_HASH EXECS AVG_ETIME AVG_PX OFFLOAD IO_SAVED_% SQL_TEXT
------------- ------ ---------- ------ ---------- ------ ------- ---------- ----------------------------------------------------------------------
a6j7wgqf84jvg 0 2684249835 1 28.07 0 No .00 select avg(pk_col) from kso.skew3 where col1 < 0
1 row selected.
Elapsed: 00:00:00.02
SYS@SANDBOX1> @create_1_hint_sql_profile.sql
Enter value for sql_id: a6j7wgqf84jvg
Enter value for profile_name (PROFILE_sqlid_MANUAL):
Enter value for category (DEFAULT):
Enter value for force_matching (false):
Enter value for hint_text: opt_param('cell_offload_processing' 'true')
Profile PROFILE_a6j7wgqf84jvg_MANUAL created.
Elapsed: 00:00:00.07
SYS@SANDBOX1> @sql_profile_hints
Enter value for profile_name: PROFILE_a6j7wgqf84jvg_MANUAL
HINT
------------------------------------------------------------------------------------------------------------------------------------------------------
opt_param('cell_offload_processing' 'true')
1 rows selected.
Elapsed: 00:00:00.04
SYS@SANDBOX1> select avg(pk_col) from kso.skew3 where col1 < 0;
AVG(PK_COL)
-----------
1849142.5
1 row selected.
Elapsed: 00:00:05.11
SYS@SANDBOX1> @fsx
Enter value for sql_text: select avg(pk_col) from kso.skew3 where col1 < 0
Enter value for sql_id:
SQL_ID CHILD PLAN_HASH EXECS AVG_ETIME AVG_PX OFFLOAD IO_SAVED_% SQL_TEXT
------------- ------ ---------- ------ ---------- ------ ------- ---------- ----------------------------------------------------------------------
a6j7wgqf84jvg 0 2684249835 1 28.07 0 No .00 select avg(pk_col) from kso.skew3 where col1 < 0
a6j7wgqf84jvg 1 2684249835 1 5.10 0 Yes 99.99 select avg(pk_col) from kso.skew3 where col1 < 0
In the example I turned off cell offload processing with the ALTER SESSION and ran a SQL statement that took 28 seconds. Then I used my fsx.sql script to verify that the statement was not offloaded and to find the SQL_ID. Next I created a one-hint Profile with an OPT_PARAM hint that set the cell_offload_processing parameter back to TRUE, using the new version of my create_1_hint_sql_profile.sql script. Next I used my sql_profile_hints.sql script to verify the text of the hint that was added to the Profile. It looked good, including the quotes. When I executed the statement a second time it ran in 5 seconds. I then used fsx.sql again to see that the statement was offloaded for the second execution (child 1).
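For the curious, this kind of one-hint Profile is typically built under the covers with DBMS_SQLTUNE.IMPORT_SQL_PROFILE. Here’s a stripped-down sketch (not the full create_1_hint_sql_profile.sql script; the SQL_ID, profile name and hint are just the values from the example above):

declare
  l_sql_text clob;
begin
  -- grab the full text of the statement we want to attach the hint to
  select sql_fulltext into l_sql_text
    from v$sqlarea
   where sql_id = 'a6j7wgqf84jvg';

  -- create a SQL Profile containing the single hint
  dbms_sqltune.import_sql_profile(
    sql_text    => l_sql_text,
    profile     => sys.sqlprof_attr(q'[opt_param('cell_offload_processing' 'true')]'),
    name        => 'PROFILE_a6j7wgqf84jvg_MANUAL',
    force_match => false,
    replace     => true);
end;
/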
August 4, 2011, 2:57 am
I ran across a great post by Marcin Przepiorowski (How to find SQL_ID and PLAN_HASH_VALUE in Oracle SQL Plan Management Baselines) thanks to Karl Arao‘s tweet. Marcin posted a bit of code for calculating a SQL_ID using the text of a SQL statement (based on some research by Tanel Poder). I was aware of Tanel’s research but had not taken the time to try to figure out how to code it. The context of Marcin’s post was wanting to relate a Baseline to a SQL_ID. This is something I’ve wanted to do in the past, so I was quite interested in the post. As a bonus, Marcin also demonstrated a technique to pull the PLAN_HASH_VALUE associated with a Baseline. Of course I can never leave well enough alone, so I had to re-arrange Marcin’s code a little bit to suit my own purposes. So I created a function that returns either the PLAN_HASH_VALUE or the SQL_ID of a Baseline. Here’s the code to create the function: create_baseline_info.sql, and a script that uses it: baselines2.sql.
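Separate from the SQL_ID calculation, it’s worth mentioning that if you just want to eyeball the plan a Baseline reproduces (including its plan hash value line), DBMS_XPLAN has a built-in for that. A quick sketch with a placeholder SQL handle:

-- &sql_handle is a placeholder - take it from DBA_SQL_PLAN_BASELINES.SQL_HANDLE
select * from table(
         dbms_xplan.display_sql_plan_baseline(
           sql_handle => '&sql_handle',
           format     => 'TYPICAL'));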
Here’s an example of how to use them:
Continue reading ‘Baselines and SQL_ID’ »
July 28, 2011, 4:43 am
I think I made a mistake. I recently set up a twitter account @KerryOracleGuy (of course plain old @KerryOsborne was already taken). I only signed up because I wanted to see what @TanelPoder was saying; I wasn’t planning on tweeting at all. Well, the first day a bunch of people signed up to follow me. The whole thing seems a little strange, although it is pretty easy to keep up with what people are doing. So I guess I’m going to give it a try and see if I like it. I have tweeted a few times and I think I am starting to get the hang of it (it’s hard to teach an old dog new tricks). I have figured out how to use tiny URLs now. I think I need to figure out how to tweet pictures as well. I’ve heard there is a Twit Pic for doing that. If I use it though, I guess that would make me a “Twit Pic-er”, which I don’t much like the sound of. So maybe I’ll stick to text based tweets for now.
July 21, 2011, 9:58 am
I ran into an interesting issue last week having to do with plan stability. The problem description went something like this:
“I have a statement that runs relatively quickly the first time I run it (around 12 seconds). Subsequent executions always run much slower, usually around 20 or 30 minutes. If I flush the shared pool and run it again elapsed time is back to 12 seconds or so.”
The query looked a little like this:
Continue reading ‘Cardinality Feedback’ »
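Since the title gives it away, I’ll add that a quick, generic way to check whether cardinality feedback is involved on 11.2 looks something like this (the SQL_ID is a placeholder; this is not the query from the post):

-- the plan output includes a note when feedback has kicked in:
-- "cardinality feedback used for this statement"
select * from table(dbms_xplan.display_cursor('&sql_id', null, 'TYPICAL'));

-- and V$SQL_SHARED_CURSOR flags the child cursors it created
select sql_id, child_number, use_feedback_stats
  from v$sql_shared_cursor
 where sql_id = '&sql_id';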
July 15, 2011, 5:31 pm
Karen Morton, Cary Millsap and I will be participating in an online panel discussion about Oracle Performance on July 28th. Since we have all worked together in the past, we thought it would be fun to listen to each other answer questions. Embarcadero is sponsoring this event and invited us to participate. Here’s a graphic from the mailer they sent out.
I only point it out because Cary and Karen look like they are posing for a picture, while I, as usual, look like someone just poured a drink down my pants. That’s normal though. I’ve been told I have a great face for radio.
You can sign up here: Register Now!
July 6, 2011, 9:13 am
I will be participating in an Exadata Virtual Conference on August 3rd and 4th, which has been organized by Tanel Poder. This conference will follow the same format as the one Tanel and I did last year with Jonathan Lewis and Cary Millsap. It will be two half days, which should provide some flexibility for people with busy schedules. The online format allows all participants to interact directly via chat while the speakers are presenting and then via voice during question and answer sessions. This is a great opportunity to talk to some guys that have done a bunch of work with Exadata. Andy Colvin will be discussing patching, which has been problematic for some shops. Andy has done more patching than anyone I know. Randy Johnson will be discussing Resource Management on Exadata, which is a key to successful consolidation projects. I will be talking about how Smart Scans work under the covers and covering techniques for determining when and where they are (or are not) occurring. Tanel will be covering in-depth tuning and diagnostic techniques specific to Exadata. Of course, Randy, Tanel and I collaborated on the upcoming Apress book, Expert Oracle Exadata, which should be out a couple of weeks before the conference. Here’s a link to the page with all the conference details:
Exadata Virtual Conference
Note that there is a discount for early registration. I hope to see you there.
June 14, 2011, 7:42 am
Here’s a link to a brand new blog by a very experienced Exadata guy. Andy has done a bunch of Exadata projects over the last year and a half and has done more patching than anyone I know. There should be some interesting info coming from him. His first post is about an X2-8 that he is currently working on (with pictures). Here’s the link to the home page of his blog:
Andy Colvin’s Oracle Blog
Check him out.