Archive for the ‘Oracle’ Category.

Exadata vs. IBM P-Series

Earlier this year I participated in a Total Cost of Ownership (TCO) study that was run by a company called the FactPoint Group. The goal was to compare the cost of purchasing and running Oracle on Exadata vs. the cost of purchasing and running Oracle on IBM P-Series hardware. The findings are published here:

Exadata vs. IBM P-Series

Fair warning: the study was funded by Oracle, and the first 10 minutes are a sales pitch for Exadata, but the data collected by FactPoint for the TCO study fills the rest of the presentation. That part was very interesting (to me, anyway). They interviewed people from 5 companies that had been running in production on Exadata for at least 6 months and 5 companies that had production systems running on IBM P-Series machines. As a bonus, 2 of the Exadata customers had previously run their systems on IBM P-Series hardware or currently had other systems running on P-Series machines.

The TCO calculations were based on systems that were equivalent on a CPU-count basis. That is, they priced out an IBM P7 system that had the same number of cores as an Exadata Half Rack and used that for the calculations. I didn’t think that was really a fair comparison, since the performance of the two systems was vastly different. In fact, one of the customers that had moved from IBM to Exadata had the following to say:

“This data warehouse was originally on a P570 and the nightly load took 5 hours. With Exadata it takes 30 minutes, with the Exadata 1/4 rack running at only 5% CPU utilization. Weekly stats updating went from 60 hours to 35 minutes. Backups reduced from 14 hours to 45 minutes.”

This is only one of many quotes from the customers included in the presentation. The quotes were quite revealing and, to my way of thinking, were the most interesting part of the presentation. Another oddity of the cost calculations was that they included RAC licenses in the Exadata costs while not including RAC in the IBM costs. Again, I felt that this was an unfair comparison, as RAC provided HA capabilities to the Exadata platform that the IBM platform simply didn’t have. Nevertheless, the study found that Exadata was about 35% less expensive. I wonder what the result would have been if they had tried to cost an IBM system that performed as well as an Exadata and provided the same HA capabilities.

Updated 11/23/2012 – Note that the official white paper produced as a result of this study is now available here: Cost Comparison – Oracle Exadata Database Machine vs. IBM Power Systems

E4 Wrap Up – Part I – OLTP Bashing

Well, the Enkitec Extreme Exadata Expo (E4) is now officially over. I thoroughly enjoyed the event. I personally think Richard Foote stole the show with his clear and concise explanation of why a full table scan is not a straightforward operation on Exadata, and why that makes it so difficult for the optimizer to properly cost it. But Maria Colgan came out with a fiery talk on the optimizer that gave him a good run for his money (she actually had the highest average rating from the attendees that filled out evaluation forms, by the way – so congratulations, Maria!). Of course there were many excellent presentations from many very well-known Oracle practitioners. Overall it was an excellent conference (in my humble opinion), due in large part to the high quality of the speakers and the effort they put into the presentations. I am also thankful that Intel agreed to sponsor the event and that Oracle supported it by allowing so many of their technical folks to participate.

While I felt that the overall message presented at the conference was pretty balanced, I did leave with a couple of general impressions that didn’t really feel quite right. Of course, having the ability to express one’s opinion is one of the founding principles of our country, so I am going to do a series of posts on generalities I heard expressed that I didn’t completely agree with.

The first was the impression I got that some people think Exadata isn’t good at OLTP. No one really said that explicitly. They said things like “it wasn’t designed for OLTP” and “OLTP workloads don’t take advantage of Exadata’s secret sauce” (I may have even made similar comments myself). While these types of statements are not incorrect, they left me with the feeling that some people thought Exadata just flat wasn’t good at OLTP.

I disagree with this blanket sentiment for several reasons:

  1. While it’s true that OLTP workloads generally don’t make the best use of the main feature that makes Exadata so special (namely offloading), I have to say that in my experience it has shown itself to be a very capable platform for handling the single-block access pattern that characterizes what we often describe as OLTP workloads. I’ve observed many systems running on Exadata that have average physical single-block read times in the sub-1ms range (there’s a quick way to check this – see the sketch after this list). These are very good times and compare favorably to systems that store all their data on SSD storage. So the flash cache feature actually works very well, which is not too surprising when you consider that Oracle has been working on caching algorithms for several decades.

  2. I think part of the reason for the general impression that OLTP doesn’t work well on Exadata is the human tendency to make snap judgements based on reality vs. expectations, rather than actually thinking through the relevant facts. For example, when you go to a movie that has been hyped as one of the best of the year and a great cinematic achievement, you are more likely to come away feeling that the movie was not that great, simply because it didn’t live up to your expectations. Whereas a little-known movie is more likely to impress you, simply because you weren’t expecting that much. When you sit down and actually evaluate the movies side by side, you will probably come to the conclusion that the hyped movie was indeed better (people don’t usually bestow accolades on totally worthless stuff). I think that, at least to some degree, OLTP-type workloads on Exadata suffer from the same issue. The expectations are so high for the platform in general that even good to excellent results fall short of the massive expectations that have been created based on some of the impressive results with data warehouse type workloads. But that doesn’t mean that the platform is not capable of matching the performance of any other platform you could build at a similar price point.

  3. I don’t think I’ve ever seen a true OLTP workload. That is to say, I can’t recall ever looking at a system that didn’t have some long-running reporting component or batch process falling outside the simple single-block access (OLTP) category. So I believe that the vast majority of systems categorized as OLTP should more correctly be called “mixed” workloads. In these types of systems, the offloading capability of Exadata can certainly make a big difference for the long-running components of the system, but it can also improve performance on the single-block access stuff by reducing the contention for resources caused by the long-running queries that are unavoidable on the standard Oracle architecture.

  4. Very few of the Exadata systems we’ve worked on over the last few years are supporting a single application or even a single database. Consolidation has become the name of the game for many (maybe most) Exadata implementations. I did a presentation at last year’s Hotsos Symposium where I compiled statistics from 51 Exadatas that we had worked on; 67% were being used as consolidation platforms. This makes it even more likely that an OLTP-type workload will benefit from running on the Exadata platform.
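
Here’s the sketch I promised in point 1 – a minimal example (assuming access to V$SYSTEM_EVENT) of how to eyeball average single-block read latency since instance startup; this is not a polished script, just the general idea:

-- Hedged sketch: average single-block read times from V$SYSTEM_EVENT.
-- On Exadata, single-block reads (including flash cache hits) show up
-- under "cell single block physical read"; on non-Exadata platforms
-- the equivalent event is "db file sequential read".
select event,
       total_waits,
       round(time_waited_micro / nullif(total_waits, 0) / 1000, 3) avg_wait_ms
from   v$system_event
where  event in ('cell single block physical read',
                 'db file sequential read');

On the systems I mentioned above, the avg_wait_ms column for the cell single block physical read event is where the sub-1ms numbers come from.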

So does Exadata run standalone “pure OLTP” workloads 10X faster than any other standard Oracle-based system you could build yourself?

No it does not.

But it does work as well as almost any system you could build, regardless of how much money you spend on the components. By way of proof, I’ll tell you a story about a system that we benchmarked on an Exadata V2 quarter rack. The benchmark was a batch process that updated well over a billion records, one row at a time, via an index. The system we were comparing against was an M5000 / Solaris system with 32 cores and all data stored on an SSD SAN. The benchmark showed Exadata to be a little over 4 times faster. This was primarily because most of the work was logical I/O that was serviced by the buffer cache on both platforms. The faster CPUs in the Exadata accounted for most of the gains. Nevertheless, the system was not migrated to an Exadata. A new system was built using a faster Intel-based server, which made up the CPU speed difference (and in fact exceeded the specs of the V2), and a more capable SSD-based SAN was installed. The resulting system ran the benchmark in about the same time as the original V2 quarter rack (actually it was very slightly faster). Unfortunately, the SAN alone cost more than the Exadata. And the real-life system also did a bunch of other stuff, like some long running ad hoc queries. Guess which platform dealt with those better. 😉 Here’s a slide that summarizes some of the results.

In fairness, I should point out that there is a subset of “OLTP” workloads that are very write intensive. Since writes to data files in Oracle are usually asynchronous, while writes to log files usually must complete before a transaction can complete, it’s usually writes to log files that are the bottleneck in these types of systems. However, if the synchronous log file writes can be avoided (or optimized), much higher transaction rates can occur and writes to DB files can become a bottleneck. In those cases, pure write IOPS can be a limiting factor. My opinion is that such systems are relatively rare. But they do exist. Exadata is not currently the best possible option for these extremely write intensive workloads. I say currently because at the time of this writing the storage software does not include any sort of write back cache for buffering writes to data files. However, this is a feature that is expected to be released in the near future. 😉
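
If you want to check whether a system falls into that write-intensive bucket, here’s a hedged sketch (again assuming access to V$SYSTEM_EVENT) of how I’d start looking:

-- Hedged sketch: compare foreground commit waits ("log file sync")
-- with LGWR's actual write time ("log file parallel write"). If log
-- file sync dominates the wait profile and roughly tracks the LGWR
-- write time, commits are being gated by redo writes.
select event,
       total_waits,
       round(time_waited_micro / nullif(total_waits, 0) / 1000, 2) avg_wait_ms
from   v$system_event
where  event in ('log file sync', 'log file parallel write')
order  by event;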

So that’s it for the OLTP topic.

Stay tuned for Part II, where I’ll discuss another general impression with which I didn’t really agree…

Exadoop

We started on an interesting mad scientist kind of project a couple of days ago.

One of our long time customers bought an Exadata last month. They went live with one system last week and are in the process of migrating several others. The Exadata has an interesting configuration. The sizing exercise done prior to the purchase indicated a need for 3 compute nodes, but the data volume was relatively small. In the end, a half rack was purchased and all four compute nodes were licensed, but 4 of the 7 storage servers were not licensed. So it’s basically a half rack with only 3 storage servers.

Meanwhile, we had been talking with them about Hadoopie kind of stuff. They are in the telecom space and are interested in pulling data via a packet sniffer which captures info directly from the TCP traffic. During the talks we discussed hardware requirements for building a Hadoop cluster, as they didn’t really have any spare hardware available to test with. That’s when the crazy science project idea was born. Someone (who shall remain nameless) suggested that we build the pilot Hadoop cluster on the 4 unused storage nodes from the Exadata half rack. Since the storage servers use basically the same hardware as is used in the Oracle Big Data Appliance (BDA), it’s kind of like having a mini BDA. Of course the storage servers have slower CPUs and a little less memory, so it’s not apples to apples, but the servers do have InfiniBand and the same 3T drives, so it’s pretty similar. And since they already had the servers sitting there …

So now we have a mini Hadoop cluster installed (CDH3) with 3 data nodes (roughly 100T of raw storage). We also set up the Oracle Big Data Connectors on one of the Exadata compute nodes, which allows us to create external tables on files stored in HDFS. Pretty cool. Let the games begin!
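
To give a flavor of the external table setup, here’s a rough sketch (all object names hypothetical, not our actual DDL). The Direct Connector for HDFS generates location files and streams HDFS content to the access driver through its hdfs_stream preprocessor script:

-- Hedged sketch: an ORACLE_LOADER external table whose data lives in
-- HDFS. PACKET_DIR and HDFS_BIN_DIR are hypothetical directory objects;
-- packet_data.loc is a location file generated by the connector.
create table packet_data_ext (
  capture_time  timestamp,
  src_ip        varchar2(15),
  dst_ip        varchar2(15),
  bytes_sent    number
)
organization external (
  type oracle_loader
  default directory packet_dir
  access parameters (
    records delimited by newline
    preprocessor hdfs_bin_dir:'hdfs_stream'
    fields terminated by ','
  )
  location ('packet_data.loc')
)
reject limit unlimited;

Once that’s in place you can query the HDFS files with plain SQL, joins and all, straight from the compute node.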

Oh, and by the way, I’ll probably be talking about this project a bit at E4 (Enkitec Extreme Exadata Expo) on Aug. 13-14 in Dallas.

Expert Oracle Exadata Translated into Chinese

Last year at Oracle Open World I was introduced to a guy named Zhang Leyi (Kamus). He said he had been hired to translate our Expert Oracle Exadata book into Chinese. There were a couple of other guys on the team as well – Kaya Huang 黄凯耀 and Jacky Zhang 张瑞. Well, they have apparently finished their work and the book is due to be released this month. Kamus sent us a copy of the cover (see below). It looks very sexy, don’t you think? We exchanged a few emails with the guys while they were working on it, mainly just clarifying our intentions and helping them fix typos in the original version. Unfortunately, I wasn’t as responsive as I should have been (sorry about that, guys), but Tanel and Randy did a pretty good job, I think. Not surprisingly, the most difficult part seemed to be translating some of the “Kevin Says” sections, as he has a way of packing a lot into very few words.

So anyway, congratulations to the guys for getting through the project. I hope they send me a copy. 😉

Free Instant SQL Formatter

Well apparently you can teach an old dog new tricks. I recently sent the following email to Enkitec’s internal techie email list:

Begin forwarded message:

Have you ever tried to decipher a really ugly, unformatted SQL statement that someone gave you or that you pulled out of one of the database views or an AWR report? Tim Fox turned me on to the SQL formatting capability of SQL Developer a few weeks ago (actually, Brian Hill found it first and showed Tim). It is awesome! It’s a little unintuitive to use, though. Here’s how I use it.

1. Start up SQL Developer (there are versions for Mac OS and Windows, by the way)
2. Click File->New and choose the SQL File option
– this opens a Query Builder pane (you don’t need to connect to a database)
3. Pick any random file to open
4. Paste your nasty 10-page-long query into the Query Builder pane, replacing the text from your random file
5. Right-click on the text in the window (brings up a long menu)
6. Click the Format menu item (it’s at the bottom of the menu in version 3.0)

Voilà – nicely formatted SQL text

The latest version of SQL Developer (3.0.x) is actually much better than the previous version by the way.

I haven’t closed SQL Developer since I found out about this capability!

SQL Developer is a free tool that can be downloaded from Oracle’s web site here:

http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/index.html

Note version 3.1 is now available but I haven’t tried it yet.

Displaying SQL Baseline Plans

Since I’m on vacation and not “really” working, I thought I might have time to write up a quick blog post. The idea for this one was triggered by one of Maria Colgan’s presentations at Hotsos last week. Maria was talking about SQL Plan Management and Baselines and somehow got me thinking about the DBMS_XPLAN option to display plans for Baselines. This is a pretty neat feature that allows you to see the plan associated with a Baseline (well, sort of).

The 11.2 documentation (Oracle® Database PL/SQL Packages and Types Reference) says this about the DISPLAY_SQL_PLAN_BASELINE function:

This procedure uses plan information stored in the plan baseline to explain and display the plans. It is possible that the plan_id stored in the SQL management base may not match with the plan_id of the generated plan. A mismatch between stored plan_id and generated plan_id means that it is a non-reproducible plan. Such a plan is deemed invalid and is bypassed by the optimizer during SQL compilation.

But what does that mean? Well, in short, it means that Baselines don’t store plans; they store hints that, when fed to the optimizer, will hopefully cause it to come up with the desired plan. Baselines also store a plan_hash_value, so it’s possible to tell whether the hints worked or not. Baselines do not actually store all the steps of a plan. So if that’s the case, then it’s obviously not possible for the display_sql_plan_baseline function to show the plan if the optimizer can’t reproduce it for some reason. When the docs say “it is possible that the plan_id stored in the SQL management base may not match with the plan_id of the generated plan”, that’s what they are talking about. I decided to create a test case to see what happens when the generated plan can’t match the original. Here’s the basic idea:

  1. run a statement that uses an index and check the plan
  2. create a Baseline on the statement using the index (using my create_baseline.sql script)
  3. check the hints stored with the baseline (using my baselines_hints.sql script)
  4. run the statement again and check the real plan to see that the Baseline was used
  5. use the display_sql_plan_baseline function to show the Baseline plan
  6. make the index invisible (thus rendering the Baseline plan non-reproducible)
  7. execute the statement again and check the real plan
  8. use the display_sql_plan_baseline function to show the Baseline plan
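
For reference, steps 5 and 8 boil down to a call like the following minimal sketch (the handle and plan name come from DBA_SQL_PLAN_BASELINES; the filter and substitution variables below are placeholders, not values from my test):

-- Hedged sketch: find the Baseline, then ask DBMS_XPLAN to display it.
select sql_handle, plan_name, enabled, accepted
from   dba_sql_plan_baselines
where  sql_text like '%your statement%';   -- hypothetical filter

select *
from   table(dbms_xplan.display_sql_plan_baseline(
         sql_handle => '&sql_handle',      -- from the query above
         plan_name  => '&plan_name',
         format     => 'typical'));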

So here’s the test:

Continue reading ‘Displaying SQL Baseline Plans’ »

Upcoming Speaking Engagements

I’ll be speaking at a couple of conferences over the next month or so. Here’s the schedule:

February 16th, in Denver at RMOUG Training Days 2012 – Topic: DIY Exadata
February 23rd, in Redwood Shores at NoCOUG Winter Conference – Topic: DIY Exadata
March 7th, in Dallas at Hotsos Symposium 2012 – Topic: Exadata Optimization – Case Studies

Hope to see you at one of these events.

C. J. Date Speaking in Dallas

Chris Date is one of the founding fathers of relational databases. Having worked with Ted Codd at IBM during the time when relational databases were being defined gives Chris a perspective that most of us just don’t have. I’ve had the good fortune to hear him speak in the past (at the Hotsos Symposium) and thought I would do a quick post to highlight the fact that he is scheduled to speak in Dallas the week of Jan 30. Method-R is hosting the event in the Enkitec training facilities in Dallas. So maybe I’ll get to hang around with Chris and Cary that week – that would be cool! Anyway, there are actually 2 classes:

SQL and Relational Theory: How to Write Accurate SQL Code
Normal Forms and All That Jazz: a Database Professional’s Guide to Database Design Theory

And here’s a link to the registration page:   C. J. Date Seminar Registration

By the way, I think every developer and every database architect should have a clear understanding of how the SQL language is designed to work and how relational databases were intended to be laid out. Chris obviously has a unique insight into those topics. One of the tenets of Chris’s teaching is that SQL is a complicated language, and since comprehensive testing is almost never really feasible, it is important to write SQL using a disciplined approach based on the underlying relational theory. As a side note, I was talking to a few cohorts around the coffee pot today and was shocked to hear that one of the guys had a CS degree but was not required to take a relational theory class. Back when I got started, that was the first class that people took, probably because there were almost no real implementations of the theory at that point. Oracle was just getting started and DB2 was still a distant gleam in Mr. Codd’s eye. But I digress.

It does seem to me that we have an awful lot of systems running on Oracle these days that were designed and written by people without a strong background in relational database fundamentals. I can’t even begin to count the number of times I’ve worked on systems that performed poorly due to poor SQL coding techniques and/or poor database design. Chris’s courses are designed to help you avoid these issues. So this is your chance to learn how to know for sure that your SQL is correct.

Hope to see you there!

Christmas Explain Plan Patterns

Here’s a lovely Candy Striped pattern in an Explain Plan. Looks like the traditional Christmas candy canes. Just in time for the Holiday Season!

| 156 |     FAST DUAL                                                |                                |     1 |       |     2   (0)| 00:00:01 |
| 157 |      FAST DUAL                                               |                                |     1 |       |     2   (0)| 00:00:01 |
| 158 |       FAST DUAL                                              |                                |     1 |       |     2   (0)| 00:00:01 |
| 159 |        FAST DUAL                                             |                                |     1 |       |     2   (0)| 00:00:01 |
| 160 |         FAST DUAL                                            |                                |     1 |       |     2   (0)| 00:00:01 |
| 161 |          FAST DUAL                                           |                                |     1 |       |     2   (0)| 00:00:01 |
| 162 |           FAST DUAL                                          |                                |     1 |       |     2   (0)| 00:00:01 |
| 163 |            FAST DUAL                                         |                                |     1 |       |     2   (0)| 00:00:01 |
| 164 |             FAST DUAL                                        |                                |     1 |       |     2   (0)| 00:00:01 |
| 165 |              FAST DUAL                                       |                                |     1 |       |     2   (0)| 00:00:01 |
| 166 |               FAST DUAL                                      |                                |     1 |       |     2   (0)| 00:00:01 |
| 167 |                FAST DUAL                                     |                                |     1 |       |     2   (0)| 00:00:01 |
| 168 |                 FAST DUAL                                    |                                |     1 |       |     2   (0)| 00:00:01 |
| 169 |                  FAST DUAL                                   |                                |     1 |       |     2   (0)| 00:00:01 |
| 170 |                   FAST DUAL                                  |                                |     1 |       |     2   (0)| 00:00:01 |
| 171 |                    FAST DUAL                                 |                                |     1 |       |     2   (0)| 00:00:01 |
| 172 |                     FAST DUAL                                |                                |     1 |       |     2   (0)| 00:00:01 |
| 173 |                      FAST DUAL                               |                                |     1 |       |     2   (0)| 00:00:01 |
| 174 |                       FAST DUAL                              |                                |     1 |       |     2   (0)| 00:00:01 |
| 175 |                        FAST DUAL                             |                                |     1 |       |     2   (0)| 00:00:01 |
| 176 |                         FAST DUAL                            |                                |     1 |       |     2   (0)| 00:00:01 |
| 177 |                          FAST DUAL                           |                                |     1 |       |     2   (0)| 00:00:01 |
| 178 |                           FAST DUAL                          |                                |     1 |       |     2   (0)| 00:00:01 |
| 179 |                            FAST DUAL                         |                                |     1 |       |     2   (0)| 00:00:01 |
| 180 |                             FAST DUAL                        |                                |     1 |       |     2   (0)| 00:00:01 |
| 181 |                              FAST DUAL                       |                                |     1 |       |     2   (0)| 00:00:01 |
| 182 |                               FAST DUAL                      |                                |     1 |       |     2   (0)| 00:00:01 |
| 183 |                                FAST DUAL                     |                                |     1 |       |     2   (0)| 00:00:01 |
| 184 |                                 FAST DUAL                    |                                |     1 |       |     2   (0)| 00:00:01 |
| 185 |                                  FAST DUAL                   |                                |     1 |       |     2   (0)| 00:00:01 |
| 186 |                                   FAST DUAL                  |                                |     1 |       |     2   (0)| 00:00:01 |
| 187 |                                    FAST DUAL                 |                                |     1 |       |     2   (0)| 00:00:01 |
| 188 |                                     FAST DUAL                |                                |     1 |       |     2   (0)| 00:00:01 |
| 189 |                                      FAST DUAL               |                                |     1 |       |     2   (0)| 00:00:01 |
| 190 |                                       FAST DUAL              |                                |     1 |       |     2   (0)| 00:00:01 |
| 191 |                                        FAST DUAL             |                                |     1 |       |     2   (0)| 00:00:01 |
| 192 |                                         FAST DUAL            |                                |     1 |       |     2   (0)| 00:00:01 |
| 193 |                                          FAST DUAL           |                                |     1 |       |     2   (0)| 00:00:01 |
| 194 |                                           FAST DUAL          |                                |     1 |       |     2   (0)| 00:00:01 |
| 195 |                                            FAST DUAL         |                                |     1 |       |     2   (0)| 00:00:01 |
| 196 |                                             FAST DUAL        |                                |     1 |       |     2   (0)| 00:00:01 |
| 197 |                                              FAST DUAL       |                                |     1 |       |     2   (0)| 00:00:01 |
| 198 |                                               FAST DUAL      |                                |     1 |       |     2   (0)| 00:00:01 |
| 199 |                                                FAST DUAL     |                                |     1 |       |     2   (0)| 00:00:01 |
| 200 |                                                 FAST DUAL    |                                |     1 |       |     2   (0)| 00:00:01 |
| 201 |                                                  FAST DUAL   |                                |     1 |       |     2   (0)| 00:00:01 |
| 202 |                                                   FAST DUAL  |                                |     1 |       |     2   (0)| 00:00:01 |
| 203 |                                                    FAST DUAL |                                |     1 |       |     2   (0)| 00:00:01 |
| 204 | L                                                   FAST DUA |                                |     1 |       |     2   (0)| 00:00:01 |
| 205 | AL                                                   FAST DU |                                |     1 |       |     2   (0)| 00:00:01 |
| 206 | UAL                                                   FAST D |                                |     1 |       |     2   (0)| 00:00:01 |
| 207 | DUAL                                                   FAST  |                                |     1 |       |     2   (0)| 00:00:01 |
| 208 |  DUAL                                                   FAST |                                |     1 |       |     2   (0)| 00:00:01 |
| 209 | T DUAL                                                   FAS |                                |     1 |       |     2   (0)| 00:00:01 |
| 210 | ST DUAL                                                   FA |                                |     1 |       |     2   (0)| 00:00:01 |
| 211 | AST DUAL                                                   F |                                |     1 |       |     2   (0)| 00:00:01 |
| 212 |.FAST DUAL                                                    |                                |     1 |       |     2   (0)| 00:00:01 |
| 213 |. FAST DUAL                                                   |                                |     1 |       |     2   (0)| 00:00:01 |
| 214 |.  FAST DUAL                                                  |                                |     1 |       |     2   (0)| 00:00:01 |
| 215 |.   FAST DUAL                                                 |                                |     1 |       |     2   (0)| 00:00:01 |
| 216 |.    FAST DUAL                                                |                                |     1 |       |     2   (0)| 00:00:01 |
| 217 |.     FAST DUAL                                               |                                |     1 |       |     2   (0)| 00:00:01 |
| 218 |.      FAST DUAL                                              |                                |     1 |       |     2   (0)| 00:00:01 |
| 219 |.       FAST DUAL                                             |                                |     1 |       |     2   (0)| 00:00:01 |
| 220 |.        FAST DUAL                                            |                                |     1 |       |     2   (0)| 00:00:01 |
| 221 |.         FAST DUAL                                           |                                |     1 |       |     2   (0)| 00:00:01 |
| 222 |.          FAST DUAL                                          |                                |     1 |       |     2   (0)| 00:00:01 |
| 223 |.           FAST DUAL                                         |                                |     1 |       |     2   (0)| 00:00:01 |
| 224 |.            FAST DUAL                                        |                                |     1 |       |     2   (0)| 00:00:01 |
| 225 |.             FAST DUAL                                       |                                |     1 |       |     2   (0)| 00:00:01 |
| 226 |.              FAST DUAL                                      |                                |     1 |       |     2   (0)| 00:00:01 |
| 227 |.               FAST DUAL                                     |                                |     1 |       |     2   (0)| 00:00:01 |
| 228 |.                FAST DUAL                                    |                                |     1 |       |     2   (0)| 00:00:01 |
| 229 |.                 FAST DUAL                                   |                                |     1 |       |     2   (0)| 00:00:01 |
| 230 |.                  FAST DUAL                                  |                                |     1 |       |     2   (0)| 00:00:01 |
| 231 |.                   FAST DUAL                                 |                                |     1 |       |     2   (0)| 00:00:01 |
| 232 |.                    FAST DUAL                                |                                |     1 |       |     2   (0)| 00:00:01 |
| 233 |.                     FAST DUAL                               |                                |     1 |       |     2   (0)| 00:00:01 |
| 234 |.                      FAST DUAL                              |                                |     1 |       |     2   (0)| 00:00:01 |
| 235 |.                       FAST DUAL                             |                                |     1 |       |     2   (0)| 00:00:01 |

Tuning Oracle to Make a Query Slower

I had an interesting little project this morning. Of course it takes longer to write it down than to actually do it, but it was kind of interesting, and since I haven’t done a post in quite some time (and it’s the day before Thanksgiving, so it’s pretty quiet at the office anyway) I decided to share. One of the Enkitec guys (Tim Fox) was doing a performance comparison between various platforms: Exadata using its InfiniBand storage network, an Oracle Database Appliance (ODA) using its direct attached storage, and a standard database on a Dell box using EMC fibre channel attached storage. The general test idea was simple – see how the platforms stacked up for a query that required a full scan of a large table. More specifically, what Tim wanted to see was the relative speed at which the various storage platforms could return data. The expectation was that the direct attached storage would be fastest and the fibre channel storage would be slowest (especially since we only had a single 2G HBA). He tested the ODA and the Exadata and got basically what he expected, but when he went to test the database on the Dell he was surprised that it was actually faster than either of the other two tests.

So here’s some output from the initial tests. First the Exadata – it’s an X2 quarter rack with one extra storage server. Note that we had to set cell_offload_processing to false to turn off the Exadata storage optimizations, thus giving us a measurement of the hardware capabilities without the Exadata offloading.

> !sqlp
sqlp
 
SQL*Plus: Release 11.2.0.2.0 Production on Wed Nov 23 11:08:28 2011
 
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
 
 
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
 
SYS@DEMO1> @uptime
 
INSTANCE_NAME    STARTUP_TIME      CURRENT_TIME         DAYS    SECONDS
---------------- ----------------- ----------------- ------- ----------
DEMO1            07-NOV-2011 12:37 23-NOV-2011 11:08   15.94    1377058
 
SYS@DEMO1> set sqlprompt "_USER'@'EXADATA'>' "
SYS@EXADATA> 
SYS@EXADATA> ! cat /etc/redhat-release
Enterprise Linux Enterprise Linux Server release 5.5 (Carthage)
 
SYS@EXADATA> ! uname -a
Linux enkdb03.enkitec.com 2.6.18-194.3.1.0.3.el5 #1 SMP Tue Aug 31 22:41:13 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
 
SYS@EXADATA> alter session set "_serial_direct_read"=always;
 
Session altered.
 
SYS@EXADATA> alter session set cell_offload_processing=false;
 
Session altered.
 
SYS@EXADATA> set autotrace on
SYS@EXADATA> set timing on
SYS@EXADATA> select count(*) from instructor.class_sales;
 
  COUNT(*)
----------
  90000000
 
Elapsed: 00:00:43.01
 
Execution Plan
----------------------------------------------------------
Plan hash value: 3145879882
 
----------------------------------------------------------------------------------
| Id  | Operation                  | Name        | Rows  | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |             |     1 |   314K  (1)| 00:00:02 |
|   1 |  SORT AGGREGATE            |             |     1 |            |          |
|   2 |   TABLE ACCESS STORAGE FULL| CLASS_SALES |    90M|   314K  (1)| 00:00:02 |
----------------------------------------------------------------------------------
 
 
Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
    1168567  consistent gets
    1168557  physical reads
          0  redo size
        526  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
 
SYS@EXADATA> set autotrace off
SYS@EXADATA> @fss
Enter value for sql_text: select count(*) from instructor.class_sales
Enter value for sql_id: 
 
SQL_ID         CHILD      EXECS   AVG_ROWS     AVG_ETIME       AVG_CPU       AVG_PIO      AVG_LIO SQL_TEXT
------------- ------ ---------- ---------- ------------- ------------- ------------- ------------ ----------------------------------------
b2br1x82p9862      0          1          1         43.00          3.16  1,168,557.00    1,168,567 select count(*) from instructor.class_sa
 
Elapsed: 00:00:00.08

So the test on the Exadata took 43 seconds to read and transport roughly 1.2 million 8K blocks – call it 9.6 GB, or a little over 220 MB/s. The same test on the ODA looked like this:

Continue reading ‘Tuning Oracle to Make a Query Slower’ »