
Archive for the ‘Exadata’ Category.

EHCC and the GET_COMPRESSION_TYPE function

Well, I turned in the HCC chapter of the Exadata book last week, and of course, as is usually the case, I immediately ran across something kind of cool on the topic I had just finished writing about. (We still have several editing passes to go, though, so I can add it later.) Anyway, although I don’t have time to post much these days, I thought this one would make a quick little snippet. So here it is.

The Compression Advisor is part of the DBMS_COMPRESSION package. Specifically, it is the GET_COMPRESSION_RATIO procedure. This procedure is worthy of a separate post, but I won’t discuss it here except to say that as of 11.2.0.2 you can use it to test HCC compression ratios on non-Exadata platforms. That’s pretty cool, but what I wanted to tell you about is a handy little function in the same package called GET_COMPRESSION_TYPE. This function can tell you exactly what level of compression has been applied to a single row. This can come in handy for investigating the inner workings of HCC (or OLTP or BASIC compression, for that matter).

As you probably already know, HCC is only applied to records loaded via direct path writes. Any update causes the affected rows to be migrated out of that storage format into blocks flagged for OLTP compression. Of course, OLTP compression on a block only kicks in when the block is “full”. On top of this, altering a table to change its compression does not actually change the storage format of any existing records (unless you use the MOVE keyword). So you could load some data and then change the designation (say, from QUERY LOW to QUERY HIGH). Rows inserted after the change will be stored in the new format (assuming the records are loaded via direct path writes, of course). So why am I telling you all this? Well, because I ran across a statement in some Oracle documentation that said you can check to see what compression method a table is stored with by looking at the COMPRESS_FOR column in the DBA_TABLES view. This column does reveal what the table designation is. However, the setting actually only tells you how rows inserted in the future will be compressed. It tells you absolutely nothing about the way current rows are stored.
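To illustrate the distinction, here’s a minimal sketch (using a hypothetical table KSO.SKEW_HCC) showing that the designation can be changed without rewriting a single existing row – DBA_TABLES immediately reflects the new setting anyway:

-- assume the table was originally loaded via direct path with QUERY LOW
alter table kso.skew_hcc compress for query high;

select table_name, compression, compress_for
from   dba_tables
where  owner = 'KSO'
and    table_name = 'SKEW_HCC';

-- COMPRESS_FOR now shows QUERY HIGH, but rows loaded before the ALTER are
-- still stored as QUERY LOW until they are rebuilt (ALTER TABLE ... MOVE)
-- or reloaded via direct path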

As for the mechanics, it appears that each row has a bitmask associated with it showing what compression format is being used. So I wrote a little script to give me what I want to see (check_row_comp.sql) using the DBMS_COMPRESSION.GET_COMPRESSION_TYPE function. Here’s an example of its use.
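(The full script and output are behind the link below, but in rough terms a call to the function looks something like this – the table name and rowid are placeholders, and the constant names are the ones documented for the 11.2 version of the package:)

set serveroutput on
declare
  l_comp_type number;
begin
  -- pass the owner, table name, and the rowid of the row you want to check
  l_comp_type := dbms_compression.get_compression_type(
                   ownname => 'KSO',
                   tabname => 'SKEW_HCC',
                   row_id  => 'AAAWbXAAIAAGQvTAAA');   -- placeholder rowid
  dbms_output.put_line(
    case l_comp_type
      when dbms_compression.comp_nocompress       then 'No compression'
      when dbms_compression.comp_for_oltp         then 'OLTP compression'
      when dbms_compression.comp_for_query_low    then 'HCC QUERY LOW'
      when dbms_compression.comp_for_query_high   then 'HCC QUERY HIGH'
      when dbms_compression.comp_for_archive_low  then 'HCC ARCHIVE LOW'
      when dbms_compression.comp_for_archive_high then 'HCC ARCHIVE HIGH'
      else 'Unknown type: '||l_comp_type
    end);
end;
/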

Continue reading ‘EHCC and the GET_COMPRESSION_TYPE function’ »

EHCC Mechanics – Proof that whole CU’s are not decompressed

I saw an interesting post recently where Greg Rahn talked about HCC mechanics. He claimed that an update to a record stored in HCC format did not require decompressing the whole Compression Unit (CU), which consists of several Oracle blocks. I’m assuming by this he meant that all the records contained in the CU did not get written back to storage in a non-HCC format due to a single record being updated. Greg then showed an example proving that row migration occurred for an updated record. He didn’t show that the other records had not been decompressed, though. So since I was already working on an HCC chapter for the upcoming Apress Exadata book, I thought I would take time off from the book writing to post this (hopefully the editors will forgive me).

Here’s the recipe: basically, we’ll update a single row, see that its rowid has changed, verify that we can still get to the record via its original rowid, and check whether the TABLE FETCH CONTINUED ROW statistic gets updated when we access the row via its original rowid, thus proving basic row migration (this is what Greg has already shown). Then we’ll look at block dumps for the original and new blocks to see what’s there.
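A bare-bones sketch of the first part of that recipe might look something like the following (the table, column, key value, and rowid are just placeholders):

-- note the rowid before the update
select rowid, pk_col from kso.skew_hcc where pk_col = 16367;

update kso.skew_hcc set col1 = col1 + 1 where pk_col = 16367;
commit;

-- the rowid reported now is different ...
select rowid, pk_col from kso.skew_hcc where pk_col = 16367;

-- ... but the original rowid still finds the record
select pk_col from kso.skew_hcc where rowid = 'AAAWbXAAIAAGQvTAAA';

-- and this statistic should tick up when the original rowid is used
select sn.name, ms.value
from   v$mystat ms, v$statname sn
where  ms.statistic# = sn.statistic#
and    sn.name = 'table fetch continued row';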

Continue reading ‘EHCC Mechanics – Proof that whole CU’s are not decompressed’ »

Cellcli Command Syntax – Top 10 List

Exadata storage software uses the cellcli utility as its command line interface. Unfortunately, although the documentation set that comes with Exadata does have many examples of cellcli commands, and even a chapter dedicated to cellcli, it does not include any reference material on the syntax itself (particularly the LIST command). So I thought I would write up a few of the things I’ve learned while picking around at it. But first a little bit of a rant on why they wrote yet another command line interface.

<RANT>They already had SQL*Plus, for crying out loud. Why not just use that? SQL*Plus has all kinds of functionality for using variables, formatting output, etc… And on top of that, they invented a new syntax. Why use LIST instead of SELECT? They used WHERE and LIKE, so why not SELECT? I find it more than a little annoying (in case you couldn’t tell). I’m told that storage admins don’t like SQL, and that’s one of the reasons for not using straight SQL syntax. That seems pretty silly, since the storage is designed specifically for use with Oracle databases. </RANT>

So anyway, here’s my quick top ten list of things you should know:

  1. cellcli does have a handful of SQL*Plus commands (START (@), SET ECHO ON, SPOOL, DESCRIBE, HELP)
  2. SELECT is replaced by LIST and it must be the first keyword on the command line
  3. There is no FROM keyword (the LIST keyword must be immediately followed by the ObjectType which is equivalent to a table name)
  4. There is a DESCRIBE command which displays the attributes (columns) that make up an ObjectType (table)
  5. Column names are specified with the ATTRIBUTES keyword followed by the columns you wish to be displayed
  6. There is a default set of columns for each Object that will be returned if the ATTRIBUTES keyword is not specified
  7. There is a WHERE clause that can be applied to any attribute and multiple conditions can be ANDed together (no OR though)
  8. There is no ORDER BY equivalent
  9. The DETAIL keyword can be appended to any LIST command to change the output from column-oriented to row-oriented
  10. The LIKE operator works but instead of the standard SQL wildcard, %, cellcli uses regex – so ‘%’ = ‘.*’

So here are a few examples:

CellCLI> help 

 HELP [topic]
   Available Topics:
        ALTER
        ALTER ALERTHISTORY
        ALTER CELL
        ALTER CELLDISK
        ALTER GRIDDISK
        ALTER IORMPLAN
        ALTER LUN
        ALTER THRESHOLD
        ASSIGN KEY
        CALIBRATE
        CREATE
        CREATE CELL
        CREATE CELLDISK
        CREATE FLASHCACHE
        CREATE GRIDDISK
        CREATE KEY
        CREATE THRESHOLD
        DESCRIBE
        DROP
        DROP ALERTHISTORY
        DROP CELL
        DROP CELLDISK
        DROP FLASHCACHE
        DROP GRIDDISK
        DROP THRESHOLD
        EXPORT CELLDISK
        IMPORT CELLDISK
        LIST
        LIST ACTIVEREQUEST
        LIST ALERTDEFINITION
        LIST ALERTHISTORY
        LIST CELL
        LIST CELLDISK
        LIST FLASHCACHE
        LIST FLASHCACHECONTENT
        LIST GRIDDISK
        LIST IORMPLAN
        LIST KEY
        LIST LUN
        LIST METRICCURRENT
        LIST METRICDEFINITION
        LIST METRICHISTORY
        LIST PHYSICALDISK
        LIST THRESHOLD
        SET
        SPOOL
        START

CellCLI> help set

  Usage: SET <variable> <value>

  Purpose: Sets a variable to alter the CELLCLI environment settings for your
           current session.

  Arguments:
    variable and value represent one of the following clauses:
    DATEFORMAT { STANDARD | LOCAL }
    ECHO { ON | OFF }

  Examples:
    set dateformat local
    set echo on 


CellCLI> help list

  Enter HELP LIST <object_type> for specific help syntax.
    <object_type>:  {ACTIVEREQUEST | ALERTHISTORY | ALERTDEFINITION | CELL 
                     | CELLDISK | FLASHCACHE | FLASHCACHECONTENT | GRIDDISK
                     | IORMPLAN | KEY | LUN 
                     | METRICCURRENT | METRICDEFINITION | METRICHISTORY 
                     | PHYSICALDISK | THRESHOLD }

CellCLI> help list FLASHCACHECONTENT

  Usage: LIST FLASHCACHECONTENT [<filters>] [<attribute_list>] [DETAIL] 

  Purpose: Displays specified attributes for flash cache entries.

  Arguments:
    <filters>:  An expression which determines the entries to be displayed.
    <attribute_list>: The attributes that are to be displayed.
                      ATTRIBUTES {ALL | attr1 [, attr2]... }

  Options:
    [DETAIL]: Formats the display as an attribute on each line, with
              an attribute descriptor preceding each value.

  Examples:
    LIST FLASHCACHECONTENT DETAIL

So as you can see, the help system gives you a bit of the syntax for each command. You may also have noticed a couple of SQL*Plus carry-overs. SET, SPOOL, and START work pretty much as expected. Note that @ is equivalent to START and that the only things you can SET are ECHO and DATEFORMAT. Now for a couple of queries (er, LIST commands):

CellCLI> desc flashcachecontent
         ^
CELL-01504: Invalid command syntax.

CellCLI> describe flashcachecontent
        cachedKeepSize
        cachedSize
        dbID
        dbUniqueName
        hitCount
        hoursToExpiration
        missCount
        objectNumber
        tableSpaceNumber

CellCLI> set echo on

CellCLI> @fc_content

> CellCLI> list flashcachecontent where dbUniqueName like 'EXDB' and hitcount > 100 attributes dbUniqueName, objectNumber, cachedKeepSize, cachedSize, hitCount, missCount
         EXDB    2       0       4194304         600     208
         EXDB    40      0       2424832         376     60
         EXDB    224     0       1802240         115     80
         EXDB    267     0       458752          128     9
         EXDB    383     0       2547712         157     27
         EXDB    423     0       1867776         180     41
         EXDB    471     0       4071424         552     85
         EXDB    472     0       1277952         114     22
         EXDB    474     0       13246464        286     326
         EXDB    475     0       5914624         519     124
         EXDB    503     0       5308416         669     455
         EXDB    5710    0       3735552         363     90
         EXDB    6207    0       393216          112     9
         EXDB    6213    0       3842048         359     147
         EXDB    6216    0       1245184         184     29
         EXDB    6373    0       3481600         222     61
         EXDB    56085   0       4194304         822     129
         EXDB    66849   0       438763520       1221    3322
         EXDB    71493   0       5636096         302     127
         EXDB    71497   0       1351680         320     22
         EXDB    71573   0       2760704         101     37
         EXDB    71775   0       1801412608      34994   46315

CellCLI> list flashcachecontent where dbUniqueName like 'EX.?.?' and hitcount > 100 -

> attributes dbUniqueName, objectNumber, cachedKeepSize, cachedSize 
         EXDB    2       0       4194304
         EXDB    18      0       1179648
         EXDB    37      0       622592
         EXDB    40      0       2424832
         EXDB    63      0       524288
         EXDB    104     0       688128
         EXDB    224     0       3407872
         EXDB    267     0       458752
         EXDB    383     0       2670592
         EXDB    420     0       1507328
         EXDB    423     0       1867776
         EXDB    424     0       720896
         EXDB    471     0       4071424
         EXDB    472     0       1277952
         EXDB    473     0       2351104
         EXDB    474     0       13574144
         EXDB    475     0       5521408
         EXDB    503     0       5308416
         EXDB    5702    0       262144
         EXDB    5709    0       2416640
         EXDB    5710    0       3735552
         EXDB    6207    0       393216
         EXDB    6210    0       131072
         EXDB    6213    0       4227072
         EXDB    6216    0       1245184
         EXDB    6373    0       3579904
         EXDB    56085   0       4194304
         EXDB    66849   0       438763520
         EXDB    71493   0       5636096
         EXDB    71497   0       1351680
         EXDB    71573   0       2801664
         EXDB    71775   0       1801412608

CellCLI> list flashcachecontent where dbUniqueName like 'EX.?.?' and hitcount > 100 and objectNumber like '.*775'
         2356637742      6       71775

CellCLI> list flashcachecontent where dbUniqueName like '.*X.?.?' and objectNumber like '.*775' detail                                      
         cachedKeepSize:         0
         cachedSize:             1801412608
         dbID:                   2356637742
         dbUniqueName:           EXDB
         hitCount:               34994
         missCount:              46315
         objectNumber:           71775
         tableSpaceNumber:       6

CellCLI> list flashcachecontent where dbUniqueName like 'EX.?.?' and hitcount > 100 and objectNumber like '.*775'
         2356637742      6       71775

CellCLI> list flashcachecontent attributes objectNumber, hitCount, missCount where dbUniqueName like 'EX.?.?' and hitcount > 100 and objectNumber like '.*775'
         71775   34994   46315

So DESC doesn’t work as an abbreviation of DESCRIBE. Notice that there are no headings for column-oriented output. As you can see, you can run “scripts” and SET ECHO ON to display the commands in any scripts that you execute. One of the LIST commands was strung across two lines by using the continuation operator (-). The LIST commands look a lot like SQL, except for LIST being used instead of SELECT and the regex expressions for matching when using the LIKE keyword. Also notice that in the last command a number was matched with a regex expression, implying a data type conversion, although all data may simply be treated as text. You can see that the ATTRIBUTES and WHERE keywords can appear anywhere on the command line after the “LIST objectName” keywords; in other words, they are not positional – either one can come first. Finally, the DETAIL keyword turns the output sideways. Or, as the help says, it “Formats the display as an attribute on each line, with an attribute descriptor preceding each value.”

So the cellcli interface is really not that bad; I just happen to like SQL*Plus better. 😉 I do think it would have been a simple matter to reuse SQL*Plus, since they already have all the functionality built into it, but maybe there were other concerns that I’m not aware of. But cellcli works. And by the way, cellcli has the ability to scroll through previous commands and edit them via the arrow keys, which is quite handy. The editing capability is definitely a step forward from SQL*Plus on unix-like platforms (although you can use rlwrap to accomplish this – see this post for more details on that: Using rlwrap on Windows). And regex also provides a very powerful pattern matching capability, although to my way of thinking it’s still a little confusing to have SQL-like syntax mixed with regex. Maybe if they just added the ability to use the % wildcard in addition to the regex I would feel better about it.

Oracle XMonth

I spent most of the last week in California at Oracle’s XMonth (it was really a week so I’m not sure why they called it XMonth). Which reminds me of a tuning engagement I did a few years ago. I showed up at the client’s facilities and we started discussing the problem. They told me that their daily job was running too slow and it was causing major problems. So I asked them what time the daily job ran each day. And they told me that it ran every 20 minutes. Ha! “And why do you call it the daily job?”, I asked. They weren’t really sure. The job was taking about an hour to complete as I recall, but I digress.

At XMonth, they had three tracks: Exadata (and two others that I can’t remember). I did learn a few things I thought were worth sharing.

  1. Exalogic is not shipping yet, but we did cover it in some detail. Exalogic’s biggest advantage appears to be the Infiniband fabric. It can be connected directly to any DB server using IB (Exadata, for example) and can communicate via the extremely low-latency RDS protocol.
  2. Oracle has relaxed their “no changes to the Exadata configuration” stance (but only very slightly). They said that it was OK to change out the Cisco switch and replace it with some other equivalent switch.
  3. A competitive analysis of Sun’s server line was provided. It included T series, X series, and M series servers along with Exadata. Exadata pretty much kicks all other options in the teeth (in my opinion). M series are still suitable for very large applications that are unable to scale out via RAC – such as Oracle’s own MRP package, which uses the dbms_pipe package, limiting its ability to scale in a RAC environment. But in general, the advice to the sales team was that if you are in a competitive situation, in most cases you should lead with Exadata.

So that’s about it. Oh they also let me talk about our experiences with Exadata. That was fun and I got to try my hand at a virtual presentation, as there were participants all over the world following along with Webex. The software has gotten pretty good for doing these kinds of presentations by the way. It was good practice for the Virtual Oracle Conference we have coming up next month. (note that there are only a couple of days left to sign up at the discounted rate)

Exadata V3 – Oops – EXADATA X2-8

Oracle had the new version of the Exadata Machine on display at Oracle Open World this week. It’s called the Exadata X2-8. That’s a catchy name. It sounds very Iron Manish! In fact they had these fellows on display next to the demo machines.

 

The X2-8 uses two 4U Sun Fire X4800 servers, each of which has eight 8-core Intel CPUs (X7560) and 1 terabyte of memory, along with 14 Exadata Storage Servers. Here’s a link to the spec sheet for the X2-8 and for the Oracle Sun X4800. Below are a couple of pictures. The first one shows one of the database servers (X4800) with one of the CPU modules pulled out.

The storage cells have not changed much from the original V2. They still have 2 CPUs, 12 drives, and 384G of flash cache, although I’ve been told they have newer (faster) six-core Intel CPUs. (I did get a look at an unpublished spec sheet on an Oracle employee’s iPhone, and it said they were using the 5670 CPUs.) Oh, and they will allow you to run Solaris on the database servers. Of course, they will have to finish a new version of Solaris 11 before that can happen. It’s worth noting that the X2-8 can be ordered, but they don’t have a firm delivery date yet.

So this configuration is definitely aimed at the high end of the market and addresses a couple of issues. The increased memory will allow us to have a more robust consolidation platform. It will also allow bigger OLTP-type systems to run better (i.e., the additional memory means we can support many more concurrent users and have much larger buffer caches). Note that Exadata’s offloading generally reduces memory requirements, but nevertheless, very large systems, particularly ones with lots of fast OLTP-type transactions and lots of users, will be better served by this type of configuration. Also note that there is no little brother version of the X2-8. It comes in a full rack only, which makes sense because there are only two database servers. I don’t believe the price has been set yet, but the word on the street is that the hardware will be about 50% more than the full rack version with the small 2-CPU servers (now renamed X2-2, by the way).

I did a post a couple of weeks ago (Thoughts on Exadata V3) about what I thought we might get to see in the way of Exadata changes. We got most of the things I was expecting, but not all of them. Among the things we got are bigger/beefier servers with more memory and available slots for HBAs to provide some additional connection capabilities (although I’m not sure Oracle is going to want people to open the machines up and put additional controllers in). I did see mention of an HBA in the X4800 spec sheet, so they may actually have one in there already (I need to check that out). They also announced that they will be offering a version of Solaris that can run on the database servers, which I was expecting, although they are still using Intel chips. The thing I was expecting that didn’t happen was a change of mindset about flexibility of configuration. They seem pretty set on maintaining a fixed configuration that will be the same for every customer. That is probably not such a bad idea. Certainly it’s easier to support and faster to deploy. But you know how customers are: they want their hamburgers with extra mustard and no pickles. So we’ll see how that works out over time. But for now, it’s a fixed menu. To quote Henry Ford, “You can get it in any color you want as long as it’s black”.

So that’s all I can think of at this point. Please let me know if you have any questions and I’ll see what I can find out.

 

Do Storage Indexes Work with Bind Variables?

I saw a post today where the subject of Exadata Storage Indexes was being discussed. One of the things that caught my eye was a discussion of whether Storage Indexes work with bind variables. One of the posters observed that since smart scan is aimed at data warehouse type queries, bind variables are pretty much irrelevant. Which is true. Still, it’s an interesting question. So I thought I’d give it a quick test.

As usual I used a couple of scripts:

fsx.sql – queries v$sql and shows whether a statement has been offloaded or not (slightly modified to remove 2 columns)
mystats.sql – just queries v$mystat
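Neither script is anything fancy. As a rough approximation (the real scripts may differ a bit), mystats.sql boils down to a join of V$MYSTAT to V$STATNAME, and the OFFLOADABLE column in fsx.sql is presumably derived from whether IO_CELL_OFFLOAD_ELIGIBLE_BYTES in V$SQL is greater than zero:

-- roughly what mystats.sql does (the name filter is supplied at the prompt)
select sn.name, ms.value
from   v$mystat ms, v$statname sn
where  ms.statistic# = sn.statistic#
and    sn.name like '%&name%'
order  by sn.name;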

We’ll look at a test with a number column first.

SYS@LABRAT1> -- Do SI's work with bind variables? - Yes
SYS@LABRAT1>
SYS@LABRAT1> -- first here's basic info on my test table (SKEW3) 
SYS@LABRAT1>
SYS@LABRAT1> desc kso.skew3
 Name                                                                                   Null?    Type
 -------------------------------------------------------------------------------------- -------- ----------------------------------------------------------
 PK_COL                                                                                          NUMBER
 COL1                                                                                            NUMBER
 COL2                                                                                            VARCHAR2(30)
 COL3                                                                                            DATE
 COL4                                                                                            VARCHAR2(1)

SYS@LABRAT1> select count(*) from kso.skew3;

  COUNT(*)
----------
 384000048

1 row selected.

Elapsed: 00:00:26.53
SYS@LABRAT1>  -- 27 seconds to do a full scan with no where clause (there are no indexes)
SYS@LABRAT1> @mystats
Enter value for name: storage

NAME                                                                             VALUE
---------------------------------------------------------------------- ---------------
cell physical IO bytes saved by storage index                                        0

SYS@LABRAT1> -- no Storage Index usage by this session yet
SYS@LABRAT1> -- let's try a query using a variable
SYS@LABRAT1> set echo on
SYS@LABRAT1> @test_bv_si
SYS@LABRAT1> 
SYS@LABRAT1> variable X NUMBER
SYS@LABRAT1> 
SYS@LABRAT1> begin
  2  
  3  :X := -1;
  4  
  5  end;
  6  
  7  /

PL/SQL procedure successfully completed.

SYS@LABRAT1> 
SYS@LABRAT1> select count(*) from kso.skew3 where col1 = :x;

  COUNT(*)
----------
         0

Elapsed: 00:00:00.08

SYS@LABRAT1> set echo off
SYS@LABRAT1> @mystats
Enter value for name: storage

NAME                                                                             VALUE
---------------------------------------------------------------------- ---------------
cell physical IO bytes saved by storage index                              16025346048

SYS@LABRAT1> -- so it used the storage index
SYS@LABRAT1> @fsx
Enter value for sql_text: select count(*) from kso.skew3 where col1 = :x 
Enter value for sql_id: 
Enter value for inst_id: 

 INST SQL_ID         CHILD  PLAN_HASH      EXECS     AVG_ETIME AVG_PX OFFLOADABLE IO_SAVED_% SQL_TEXT
----- ------------- ------ ---------- ---------- ------------- ------ ----------- ---------- --------------------------------------------------
    1 1nsxv1zpawmsa      0 2684249835          2           .08      0 Yes             100.00 select count(*) from kso.skew3 where col1 = :x

1 row selected.

SYS@LABRAT1> @dplan
Enter value for sql_id: 1nsxv1zpawmsa
Enter value for child_no: 

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  1nsxv1zpawmsa, child number 0
-------------------------------------
select count(*) from kso.skew3 where col1 = :x

Plan hash value: 2684249835

------------------------------------------------------------------------------------
| Id  | Operation                  | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |       |       |       |   533K(100)|          |
|   1 |  SORT AGGREGATE            |       |     1 |     5 |            |          |
|*  2 |   TABLE ACCESS STORAGE FULL| SKEW3 |   385 |  1925 |   533K  (1)| 01:46:43 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - storage("COL1"=:X)
       filter("COL1"=:X)


20 rows selected.

So the Storage Index was clearly used on this statement using a SQL*Plus NUMBER variable. Here’s some 10046 trace data showing that the smart scan wait event was used – note also the “enq: KO – fast object checkpoint” wait event, which occurs before the direct path reads (which are replaced by the “cell smart table scan” event in Exadata land).
Continue reading ‘Do Storage Indexes Work with Bind Variables?’ »

Thoughts on Exadata V3

I expect we’ll see some announcements at Oracle Open World in a couple of weeks (I don’t know if they will really call it V3 yet by the way).

DISCLAIMER: This is all complete conjecture on my part. I don’t work for Oracle and I have not had any conversations with any Oracle employees about official future directions or plans. (I probably would have had to sign a non-disclosure before I could have any of those discussions and then I wouldn’t be able to do this post).


Anyway, here’s the list of things I am thinking we might see:

Bigger Database Servers – I mean physically bigger, with open slots, so we can put HBAs in them to attach to external storage for migration and backup purposes. Sun 4275s, perhaps, since they are already using them as storage servers. The only real issue is that we’d run out of space in a full rack configuration, but if the machines are beefier, perhaps we wouldn’t need as many DB servers anyway.

Bigger Database Servers – I mean more memory and more processing power. Faster chips and bigger DIMMs are a no-brainer. Just put them in the existing 4170 boxes. But how about different models altogether? M series, perhaps (which would also mean a change in O/S). It should be fairly easy to do, actually, as the DB already runs on Solaris. It might make the Sun shops really happy as well. 😉

More Options – I expect we’ll see a little more flexibility in the configurations, because “One Size Fits All” really doesn’t (or at least many people think that it doesn’t). Anyone want a 2/3 rack?

Incrementally Better Software – It’s a great leap forward already, but I expect more things to be offloaded to the storage layer (some of the analytic functions, some of the aggregate functions, etc…)

Exadata SAN – I could see Oracle announcing a stand-alone storage unit with a variable number of “trays”. They might even announce some software for doing some of the more SAN-like features (think Snap Mirror).

By the way, I doubt they’ll be buying Netapp just yet (need to drive the price of the stock down a bit first I think).

Well that’s what I’m thinking. What do you think?

Exadata Book


Well, I guess it’s official. A couple of weeks ago I committed to write an Exadata book for Apress, along with my intrepid co-author Randy Johnson. For those of you who don’t know Randy, he’s a very experienced Oracle guy with a wealth of knowledge, particularly around RAC. I think the two of us make a pretty good team – making up for each other’s weaknesses (oh wait, I should say we have “Complementary Skill Sets” – yeah, that sounds better).

Anyway, it turns out that writing a book is a lot of work! The way Tom Kyte turns them out, I thought it must be pretty easy, but I’ve always been a little overconfident. So I’m starting to realize that I may not have time to do as many blog posts as I might like. But I must say that I am really excited about the subject matter! So I think it will be worth the effort. By the way, that’s not the official cover art (or even the official title, as far as I know). I just hacked that together with Adobe Illustrator. 😉

As I have said many times, I think the Exadata storage software will usher in a whole new era in relational databases. Not just for Oracle, because you know the other guys will be trying to follow in their footsteps. But I think Oracle is miles ahead at this point, and they own their own hardware platform. Oracle claims that it has been their most successful product launch ever and I believe it. They are starting to pop up like weeds. It will be very interesting to see what the future holds for this platform. To be honest, I think we’re just seeing the tip of the iceberg at this point. Anyway, I appreciate the guys at Apress having the faith in us to take on this project. I hope we don’t disappoint them (I don’t think we will).

Notes on Applying Exadata Bundle Patch (BP5)

Randy Johnson has done a brief post after applying BP5 on our Exadata Lab machine. Looks like it went pretty smoothly with the exception of a problem with DBFS and some misleading comments in the README file regarding using the RDS protocol (both of which we had in play). Here’s a link to his post:

Exadata Bundle Patch 5 Gotcha’s

Running Oracle Exadata V2 on Dell Hardware

Well we had to give it a shot.

So we created an Oracle Exadata Storage Server Software CELLBOOT USB flash drive. I’m not kidding, that’s what the Oracle/Sun guys decided to call it. They didn’t even use an acronym in the manual (I guess “ESSSCB USB FD” doesn’t roll off the tongue much better than the whole thing anyway). We used the make_cellboot_usb utility to create the thing from one of our storage servers, which, by the way, was not that easy to do, since the USB ports are in the back of the 4275s and they are not easy to get to with all the cabling that’s back there. Anyway, once we had the little bugger created, we pulled it out of the back of the rack and booted a Dell Latitude D630 off of it. Here’s a picture:

Notice the thumb drive is all lit up like a Christmas tree.

Here is a close up of the screen (in case your eyes are going bad like mine):

So we tried a couple of different options but eventually got to this screen:

Notice the ERROR line in the middle of the screen. Somebody wisely put a check in the boot procedure to verify the machine type; presumably, if it’s not a Sun 4170, it throws an error. We thought about hacking the system but decided not to at this point, as we had real work to do. (Maybe later, when we’ve got nothing else to do.)