Archive for the ‘Wall of Shame’ Category.

Funny Developer Tricks – Decode

I ran into a really ugly SQL statement last week. It was the most expensive statement running on the system for the last several weeks. On top of the fact that the statement ran for hours, it also had a number of “issues”. The statement had the following characteristics:

  • Several DISTINCT Operators – these tend to show up in statements where developers have left off a join condition
  • ANSI Join Syntax – generally Oracle converts this syntax back to Oracle join syntax under the covers. I realize this is standard SQL syntax, but it still makes me nervous due to previous experience with problems caused by this conversion process.
  • Functions Used in WHERE Clauses (DECODE and NVL in particular) – can disable indexes
  • UNION – forces a sort of the unioned result sets so that duplicates can be eliminated (which is often not necessary – see the UNION ALL sketch after this list). Depending on how the statement is coded, UNIONs can also result in multiple accesses of the same objects (e.g. if the developer has used a UNION in place of a more robust where clause).
  • Numerous != and <> Comparison Operators – not necessarily a problem, but these can often be coded in a more efficient manner
  • Several ORs – ORs are tricky; I get suspicious when they show up alongside numerous other problems in a statement.
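As an aside on the UNION point above, here’s a minimal sketch of the alternative (the table and column names are made up for illustration). If the two branches can never return the same row, or duplicates simply don’t matter, UNION ALL gives the same answer without the sort/duplicate-elimination step that UNION forces:

-- Hypothetical example: UNION ALL skips the duplicate elimination that UNION requires
select order_id, status from orders_current where status = 'OPEN'
union all
select order_id, status from orders_archive where status = 'OPEN';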

I’ll refrain from publishing the actual statement to protect the guilty, but one of the most interesting bits of the statement looked something like this:

...
WHERE a.col1 = DECODE('XYZ', '*', a.col1, 'XYZ')
...

The processing of the WHERE clause can be roughly translated into the following pseudocode:

 
if 'XYZ' = '*' then 
  x = a.col1
else
  x = 'XYZ'
end if
 
...
WHERE a.col1 = x
...

Since the literal ‘XYZ’ will never be equal to the literal ‘*’, the value returned by this DECODE will always be ‘XYZ’. Therefore the statement could have been written much more simply as:

...
WHERE a.col1 = 'XYZ'
...

So this is an extremely strange way to write the statement, and it clouds the intention considerably – but does it hurt performance? Well, actually, no. The optimizer is smart enough to know that this DECODE will always result in the same literal value, so it appears that Oracle does not evaluate the DECODE for each row. Here’s a quick example from a 10.2.0.4 database. (Note that I used dplan.sql and fss.sql in this code snippet.)
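Before that, here is a simpler sanity check one could run – the table name below is made up and this is not the test from the post; the idea is just to look at how the optimizer reports the constant DECODE in the Predicate Information section of a plan:

-- Sketch only (hypothetical table): check the Predicate Information section
-- to see how the optimizer handles the constant DECODE
explain plan for
select count(*) from some_table a
where a.col1 = decode('XYZ', '*', a.col1, 'XYZ');

select * from table(dbms_xplan.display(null, null, 'BASIC +PREDICATE'));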

Continue reading ‘Funny Developer Tricks – Decode’ »

Most Expensive SQL Statement Ever

You know the cost calculation that the cost based optimizer (CBO) uses to determine which execution plan to choose for a SQL statement, right? If you don’t, you should immediately stop reading this and pick up a good novel instead. Ah, you’re still here? Well, I got an interesting email today from one of my co-workers saying he had to kill a query yesterday. Actually, that’s a big part of his current job – killing runaway queries apparently takes up most of his time between 8 and 5. Anyway, he sent me this execution plan today with no commentary beyond “just have a look at this”.

---------------------------------------------------------------------------------------------------
| Id  | Operation              | Name             | Rows  | Bytes|TempSpc| Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |                  |       |       |      |    65P(100)|          |
|   1 |  SORT ORDER BY         |                  |    18E|    15E|   15E|    65P (78)|999:59:59 |
|   2 |   COUNT                |                  |       |       |      |            |          |
|*  3 |    FILTER              |                  |       |       |      |            |          |
|   4 |     NESTED LOOPS       |                  |    18E|    15E|      |    14P  (3)|999:59:59 |
|   5 |      NESTED LOOPS      |                  |   984G|   216T|      |    14G  (3)|999:59:59 |
|   6 |       TABLE ACCESS FULL| CAT_6000_6001TBL |  7270K|  1074M|      |   176K  (3)| 00:15:46 |
|   7 |       TABLE ACCESS FULL| CAT_6000TBL      |   135K|    11M|      |  1950   (3)| 00:00:11 |
|   8 |      INDEX FULL SCAN   | PK_OBJECTS       |    32M|   306M|      | 15207   (3)| 00:01:22 |
---------------------------------------------------------------------------------------------------

So I had a look. Yes – that’s a 65P in the cost column. I’ve seen worse (but not in a production system). Cost is not always a good indication of run time, by the way; it’s just a normalized estimate, after all. But the estimates for the number of rows and bytes (18E and 15E) are very impressive as well. This query ran for several hours before my buddy finally killed it. As you might expect, the query was missing a join condition between a couple of large tables (7M and 32M rows).
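Just to show where those row estimates come from (this is my back-of-the-envelope arithmetic based on the Rows column above, not anything taken from a trace):

-- 7,270K * 135K  ~= 9.8e11, roughly the 984G at plan line 5
-- 984G   * 32M   ~= 3.1e19, which is beyond ~1.8e19 and so displays as 18E
select 7270000 * 135000            as rows_after_first_join,
       7270000 * 135000 * 32000000 as rows_after_second_join
from dual;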

Here’s a test I worked up to see how big a number I could get.

SYS@LAB1024> !cat dplan.sql
set lines 150
select * from table(dbms_xplan.display_cursor('&sql_id','&child_no','typical'))
/
 
SYS@LAB1024> @dplan
Enter value for sql_id: gf5nnx0pyfqq2
Enter value for child_no: 
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  gf5nnx0pyfqq2, child number 0
-------------------------------------
select a.col2, sum(a.col1) from kso.skew a, kso.skew b group by a.col2
 
Plan hash value: 321450672
 
-----------------------------------------------------------------------------------
| Id  | Operation               | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |         |       |       |   689G(100)|          |
|   1 |  HASH GROUP BY          |         |     1 |    16 |   689G (84)|999:59:59 |
|   2 |   MERGE JOIN CARTESIAN  |         |  1024T|    14P|   145G (22)|999:59:59 |
|   3 |    TABLE ACCESS FULL    | SKEW    |    32M|   488M| 10032  (18)| 00:01:21 |
|   4 |    BUFFER SORT          |         |    32M|       |   689G (84)|999:59:59 |
|   5 |     INDEX FAST FULL SCAN| SKEW_PK |    32M|       |  4558  (22)| 00:00:37 |
-----------------------------------------------------------------------------------
 
 
17 rows selected.
 
SYS@LAB1024> @dplan
Enter value for sql_id: 12p7fuydx3dd5
Enter value for child_no: 
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  12p7fuydx3dd5, child number 0
-------------------------------------
select a.col2, sum(a.col1) from kso.skew a, kso.skew b, kso.skew c group by
a.col2
 
Plan hash value: 175710540
 
------------------------------------------------------------------------------------
| Id  | Operation                | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |         |       |       |    18E(100)|          |
|   1 |  HASH GROUP BY           |         |     1 |    16 |    18E (81)|999:59:59 |
|   2 |   MERGE JOIN CARTESIAN   |         |    18E|    15E|  4670P (22)|999:59:59 |
|   3 |    MERGE JOIN CARTESIAN  |         |  1024T|    14P|   145G (22)|999:59:59 |
|   4 |     TABLE ACCESS FULL    | SKEW    |    32M|   488M| 10032  (18)| 00:01:21 |
|   5 |     BUFFER SORT          |         |    32M|       |   145G (22)|999:59:59 |
|   6 |      INDEX FAST FULL SCAN| SKEW_PK |    32M|       |  4558  (22)| 00:00:37 |
|   7 |    BUFFER SORT           |         |    32M|       |    18E (81)|999:59:59 |
|   8 |     INDEX FAST FULL SCAN | SKEW_PK |    32M|       |  4558  (22)| 00:00:37 |
------------------------------------------------------------------------------------
 
 
21 rows selected.
 
SYS@LAB1024> @dplan
Enter value for sql_id: 7b53dxh6w6mpj
Enter value for child_no: 
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  7b53dxh6w6mpj, child number 0
-------------------------------------
select a.col2, sum(a.col1) from kso.skew a, kso.skew b, kso.skew c, kso.skew
d group by a.col2
 
Plan hash value: 3965951819
 
-------------------------------------------------------------------------------------
| Id  | Operation                 | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |         |       |       |    18E(100)|          |
|   1 |  HASH GROUP BY            |         |     1 |    16 |    18E  (0)|999:59:59 |
|   2 |   MERGE JOIN CARTESIAN    |         |    18E|    15E|    18E  (0)|999:59:59 |
|   3 |    MERGE JOIN CARTESIAN   |         |    18E|    15E|  4670P (22)|999:59:59 |
|   4 |     MERGE JOIN CARTESIAN  |         |  1024T|    14P|   145G (22)|999:59:59 |
|   5 |      TABLE ACCESS FULL    | SKEW    |    32M|   488M| 10032  (18)| 00:01:21 |
|   6 |      BUFFER SORT          |         |    32M|       |   145G (22)|999:59:59 |
|   7 |       INDEX FAST FULL SCAN| SKEW_PK |    32M|       |  4558  (22)| 00:00:37 |
|   8 |     BUFFER SORT           |         |    32M|       |  4670P (22)|999:59:59 |
|   9 |      INDEX FAST FULL SCAN | SKEW_PK |    32M|       |  4558  (22)| 00:00:37 |
|  10 |    BUFFER SORT            |         |    32M|       |    18E  (0)|999:59:59 |
|  11 |     INDEX FAST FULL SCAN  | SKEW_PK |    32M|       |  4558  (22)| 00:00:37 |
-------------------------------------------------------------------------------------
 
 
24 rows selected.

So it looks like the cost tops out at 18E, as does the estimated number of rows. Oddly, the number of bytes appears to top out at 15E. So the production query had maxed out the rows and bytes estimates, although the cost was significantly under the max. Still, 65P is the biggest cost I’ve seen in a production system. Anyone seen a bigger one?
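For what it’s worth – and this is purely my speculation, not anything confirmed by the plans above – 18E is suspiciously close to 2^64, which suggests the row and cost estimates are simply being capped at the largest value the underlying counters can represent:

-- 2^64 = 18,446,744,073,709,551,616, i.e. roughly 18E in dbms_xplan notation
select power(2, 64) as two_to_the_64 from dual;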

P.S. I have two categories for SQL related posts. “Developer Tricks” and “Wall of Shame”. This one gets both tags.

Funny Developer Tricks – (substr(cust_id,1,length(:b1))

Ha. This one was a little surprising. I ran across a SQL statement that gets fired off hundreds of times in rapid succession, each execution taking several seconds – far too long for something executed thousands of times in a row. The statement looked like this (cleaned up to protect the guilty):

b1 := '10355P034001SGL00066';
b2 := '10355P034001SGL00066';
 
select count(cust_id) 
from customers
where substr(cust_id,1,length(:b1)) = :b2;

What was the developer trying to do? How can we fix it? One possible rewrite is sketched below – your comments are welcome.
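Assuming :b1 and :b2 always hold the same value, as they do in the example above (that is an assumption on my part), the predicate is really just a prefix match, and a prefix match can be written with LIKE so that an index on cust_id can be considered instead of applying SUBSTR to every row:

-- Possible rewrite (assumes :b1 and :b2 are always identical and contain no
-- wildcard characters): a simple prefix match that can use an index on cust_id
select count(cust_id)
from customers
where cust_id like :b1 || '%';

And if the bind always contains a complete cust_id, a plain equality predicate (cust_id = :b1) would be simpler still.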

Continue reading ‘Funny Developer Tricks – (substr(cust_id,1,length(:b1))’ »

Funny Developer Tricks – first_rows(999999999)

I ran across a funny SQL statement recently (funny strange, not funny ha ha – well actually funny ha ha too I guess). It had a first_rows hint like so:

select /*+ FIRST_ROWS (999999999)  */ 
"MANDT" , "OPBEL" , "OPUPW" , "OPUPK" , "OPUPZ" , "BUKRS" , "GSBER" , "BUPLA" , "SEGMENT" , 
"AUGST" , "GPART" , "VTREF" , "VTRE2" , "VKONT" , "ABWBL" , "ABWTP" , "ABWKT" , "APPLK" , 
"HVORG" , "TVORG" , "KOFIZ" , "SPART" , "HKONT", "MWSKZ" , "MWSZKZ" , "XANZA" , "STAKZ" , 
"BLDAT" , "BUDAT" , "OPTXT" , "WAERS" , "FAEDN", "FAEDS" , "VERKZ" , "STUDT" , "SKTPZ" , 
"XMANL" , "KURSF" , "BETRH" , "BETRW" , "BETR2" , "BETR3" , "SKFBT" , "SBETH" , "SBETW" , 
"SBET2" , "SBET3" , "MWSKO" , "MWVKO" , "TXRUL" , "SPZAH" , "PYMET" , "PYBUK" , "PERNR" , 
"GRKEY" , "PERSL" , "XAESP" , "AUGDT" , "AUGBL" , "AUGBD" , "AUGRD" , "AUGWA" , "AUGBT" , 
"AUGBS" , "AUGSK" , "AUGVD" , "AUGOB" , "WHANG" , "WHGRP" , "XEIPH" , "MAHNV" , "MANSP" , 
"XAUGP" , "ABRZU" , "ABRZO" , "FDGRP" , "FDLEV" , "FDZTG", "FDWBT" , "XTAUS" , "AUGRS" , 
"PYGRP" , "PDTYP" , "SPERZ" , "INFOZ" , "TXJCD" , "TXDAT" ,"VBUND" , "KONTT" , "KONTL" , 
"OPSTA" , "BLART" , "EMGPA" , "EMBVT" , "EMADR" , "IKEY" , "EUROU" , "XRAGL" , "ASTKZ" , 
"ASBLG" , "XBLNR" , "INKPS" , "RNDPS" , "QSSKZ" , "QSSEW" , "QSPTP" , "QSSHB" , "QBSHB" , 
"QSZNR" , "RFUPK" , "STRKZ" , "FITPR" , "XPYOR" , "LANDL" , "INTBU", "EMCRD" , "C4EYE" , 
"C4EYP" , "SCTAX" , "STTAX" , "STZAL" , "ORUPZ" , "NEGBU" , "SUBAP" , "PSWSL" , "PSWBT" , 
"PSWTX" , "PSGRP" , "FINRE" , "RDSTA" , "RDSTB" , "DEAKTIV" , "SGRKEY", "SOLLDAT" , "RECPT" , 
"TOCOLLECT" , "EINMALANF" , "VORAUSZAHL" , "APERIODIC" , "ABRABS" , "GRBBP" , "ASMETH" , 
"INT_CROSSREFNO" , "ETHPPM" , "PAYFREQID" , "INVOICING_PARTY" , "PPMST" , "LOGNO" , "APERIODICT" , 
"ADD_REFOBJ" , "ADD_REFOBJID" , "ADD_SERVICE" , "ZZAGENCY" , "ZZ_EXT_REF" , "ZZ_PAY_AGENT" , 
"ZZFUNDSOURCE" , "ZZINSTALLMENT" , "Z_PROD_ID" , "ZZUSERNAME" , "ZZWF_STAT" , "ZZPAYCHANNEL" 
FROM "DFKKOP" 
WHERE "MANDT" = :A0 -- NDV=1
AND "BUKRS" = :A1 -- NDV=1 
AND "AUGST" = :A2 -- NDV=2 
AND "FAEDN" < :A3 -- less than today probably all records
AND ( "PYMET" = :A4 OR "PYMET" = :A5 ) -- NDV=8
AND ROWNUM <= :A6; -- less than 1B

Yes – that’s a first_rows hint with about a billion as the number of rows to optimize for.

The reason I noticed it is that it runs for 15 hours before getting a Snapshot Too Old error. The attempted solution was to restart it the next day (thinking maybe it would run better the second time, I guess). The table has roughly 100M rows. There was no index on PYMET, which is unfortunate since the two values requested account for only about 0.15% of the rows (not 15% – 0.15%). The optimizer chooses an index on MANDT, BUKRS, AUGST and FAEDN, and as you might expect, it doesn’t work very well (see the NDV comments I added to the statement).
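For context, 0.15% of roughly 100M rows is only about 150K rows, so something along these lines would likely have been far more selective than the index the optimizer actually picked. This is purely hypothetical – the index name and single-column choice are my own assumption:

-- Hypothetical only, not something that existed on the system:
-- an index covering the selective PYMET predicate
create index dfkkop_pymet_ix on dfkkop (pymet);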

Funny things:

The First_Rows hint asks the Oracle optimizer to return the first billion records as fast as possible (even though the table only has about 100M rows).

The documentation for the First_Rows hint in 11g looks like this:

The FIRST_ROWS hint instructs Oracle to optimize an individual SQL statement for fast response, choosing the plan that returns the first n rows most efficiently. For integer, specify the number of rows to return.

For example, the optimizer uses the query optimization approach to optimize the following statement for best response time:

SELECT /*+ FIRST_ROWS(10) */ employee_id, last_name, salary, job_id
FROM employees
WHERE department_id = 20;

In this example each department contains many employees. The user wants the first 10 employees of department 20 to be displayed as quickly as possible.

So I can see where the developers might have interpreted this as the ever elusive “Go Fast” hint.

The developers also added “and rownum < 999999999” to the where clause, which limits the number of rows that can be returned. I’m not sure whether they knew it or not, but this clause also has the same effect as the hint. That is to say, the clause causes the optimizer to modify its calculations as if a first_rows_N hint had been applied. Maybe the developers weren’t getting the “go fast” behavior they expected from the hint and, after doing some research, found that the “rownum <” syntax would basically do the same thing. I’m guessing that’s the case because I can’t see why they would really want to limit the number of rows coming back, but I’m not sure.

It’s a very odd statement, because the First_Rows hint tends to push the optimizer towards index usage, and this statement was behaving badly precisely because it was using an index (a full table scan only took about 1 hour). Regardless of what the developers were trying to do, the fact that they used such a big number caused the optimizer to ignore the hint anyway. Since the table only had 100M rows and the parameter was 1B, the hint was ignored (well, at least the “First K Rows” modifications to the optimizer calculations were not used). The same applies to the “rownum <” induced behavior, by the way.
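Incidentally, a quick way to see the “rownum <” scaling in action (the table, column and values below are made up – this is just a sketch, not the statement from the post):

-- With a small ROWNUM limit the optimizer costs the plan as though only the
-- first few rows need to be produced (the same "first K rows" arithmetic the
-- hint triggers); with a limit far larger than the table's estimated
-- cardinality, the adjustment effectively does nothing.
explain plan for
select * from big_table where status = 'OPEN' and rownum <= 10;

select * from table(dbms_xplan.display);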

Here are a couple of excerpts from a 10053 trace file showing some of the details:
Continue reading ‘Funny Developer Tricks – first_rows(999999999)’ »

Funny Developer Tricks – upper(number)

I saw a funny one today. I’ll paraphrase:

select * from table_x
where upper(acct_number) = '876876'
or upper(acct_number) = '826531';

Nice huh?

1. It obviously disables any indexes on acct_number (unless they had a function-based index).
2. It looks like they are probably storing numeric data in a character data type (or implicitly converting a number to a character string).
3. They’re not using bind variables, so they’re not helping themselves from a parsing standpoint (a possible rewrite addressing points 1–3 is sketched after this list).
4. Finally, they’re making darn sure they take care of any mixed-case numbers!
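A possible cleanup, assuming acct_number really does hold nothing but digit strings so the UPPER is a no-op (that is an assumption on my part – the point is just to drop the function and use binds):

-- Possible rewrite: no function wrapped around the column (so an ordinary
-- index on acct_number can be considered) and bind variables instead of literals
select *
from table_x
where acct_number in (:acct1, :acct2);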

Some fun.

(by the way, those lower case numbers cause me problems all the time)