The first thing that comes to most people’s minds when database table compression is mentioned is the savings it yields in disk space. While reducing the footprint of data on disk is relevant, I would argue it is the lesser of the benefits for data warehouses. Disk capacity is cheap and generally plentiful; disk bandwidth (scan speed), however, is proportional to the number of spindles, no matter what the disk capacity, and thus is more expensive. Table compression reduces the footprint that a given data set occupies on the disk drives, so the amount of physical data that must be read off the disk platters is reduced compared to the uncompressed version. For example, if 4000 GB of raw data compresses to 1000 GB, it can be read off the same disk drives 4X as fast because only 1/4 of the data (relative to the uncompressed size) is read and transferred off the spindles. Likewise, table compression allows the database buffer cache to contain more data without increasing the memory allocation, because more rows fit in a compressed block/page than in an uncompressed one.
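One simple way to see what compression buys you on your own data is to build a compressed copy of a table with a direct path CTAS and compare the segment sizes; the table names here are hypothetical, substitute your own:

```sql
-- Build a compressed copy of a hypothetical SALES_FACT table.
-- CTAS is a direct path operation, so the rows are compressed as they are loaded.
create table sales_fact_comp compress
as select * from sales_fact;

-- Compare the segment sizes; uncompressed MB / compressed MB = compression ratio.
select segment_name, round(bytes / 1024 / 1024) mb
from   user_segments
where  segment_name in ('SALES_FACT', 'SALES_FACT_COMP');
```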
Row major table compression comes in two flavors with the Oracle database: BASIC and OLTP. In 11.1 these were also known by the key phrases COMPRESS or COMPRESS FOR DIRECT_LOAD OPERATIONS and COMPRESS FOR ALL OPERATIONS. The BASIC/DIRECT_LOAD compression has been part of the Oracle database since version 9 and ALL OPERATIONS/OLTP compression was introduced in 11.1 with the Advanced Compression option.
Oracle row major table compression works by storing the column values for a given block in a symbol table at the beginning of the block. The more repeated values per block, even across columns, the better the compression ratio. Sorting data can increase the compression ratio as ordering the data will generally allow more repeat values per block. Specific compression ratios and gains from sorting data are very data dependent but compression ratios are generally between 2x and 4x.
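To illustrate the sorting point, here is a sketch of using an ORDER BY during a direct path CTAS to cluster repeating values together; the table and column names are purely illustrative, and the actual gain is data dependent:

```sql
-- Ordering on low-cardinality columns clusters repeated values into the same
-- blocks, which generally allows more symbol table reuse per block.
create table sales_fact_sorted compress
as select *
   from   sales_fact
   order  by store_id, product_id;
```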
Compression does add some CPU overhead when direct path loading data, but there is no measurable performance overhead when reading data, as the Oracle database can operate on compressed blocks directly without having to first uncompress them. The additional CPU required when bulk loading data is generally well worth the downstream gains for data warehouses, because most data in a well designed data warehouse is written once and read many times. Insert-only and infrequently modified tables are ideal candidates for BASIC compression. If the tables have significant DML performed against them, then OLTP compression (or no compression) would be advised.
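Using the 11.2 keywords, the two flavors look like the following; the table definitions are minimal sketches, not a recommendation:

```sql
-- BASIC: rows are compressed only when loaded via direct path operations
-- (CTAS, insert /*+ append */, SQL*Loader direct path).
create table fact_history (
  trx_id  number,
  trx_dt  date,
  amount  number
) compress basic;

-- OLTP: rows are also compressed on conventional inserts and updates;
-- requires the Advanced Compression option.
create table fact_current (
  trx_id  number,
  trx_dt  date,
  amount  number
) compress for oltp;
```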
Given that most Oracle data warehouses that I have seen are constrained by I/O bandwidth (see Balanced Hardware Configuration) it is highly recommended to leverage compression so the logical table scan rate can increase proportionally to the compression ratio. This will result in faster table and partition scans on the same hardware.
Oracle Documentation References:
- Performance Tuning Guide: Table Compression
- VLDB and Partitioning Guide: Partitioning and Table Compression
- Guidelines for Managing Tables: Consider Using Table Compression
- Table Compression in Oracle9i Release 2: A Performance Analysis
- Table Compression in Oracle Database 10g Release 2
- Advanced Compression with Oracle Database 11g Release 2
If you want to build a house that will stand the test of time, you need to build on a solid foundation. The same goes for architecting computer systems that run databases. If the underlying hardware is not sized appropriately it will likely lead to people blaming software. All too often I see data warehouse systems that are poorly architected for the given workload requirements. I frequently tell people, “you can’t squeeze blood from a turnip“, meaning if the hardware resources are not there for the software to use, how can you expect the software to scale?
Undersizing data warehouse systems has become an epidemic with open platforms – platforms that let you run on any brand and configuration of hardware. This problem has been magnified over time as the size of databases has grown significantly, generally outpacing the experience of those managing them. This has caused the “big three” database vendors to come up with suggested or recommended hardware configurations for their database platforms:
- Oracle: Optimized Warehouse Initiative
- Microsoft: SQL Server Fast Track Data Warehouse
- IBM: Balanced Configuration Unit (BCU)
Simply put, the reasoning behind those initiatives was to help customers architect systems that are well balanced and sized appropriately for the size of their data warehouse.
Balanced Hardware Configurations
The foundation for a well performing data warehouse (or any system for that matter) is the hardware that it runs on. There are three main hardware resources to consider when sizing your data warehouse hardware. Those are:
- Number of CPUs
- Number of storage devices (HDDs or SSDs)
- I/O bandwidth between CPUs and storage devices
NB: I’ve purposely left off memory (RAM) as most systems are pretty well sized at 2GB or 4GB per CPU core these days.
A balanced system has the following characteristics:
As you can see, each of the three components is sized proportionally to the others. This allows for maximum system throughput, as no single resource becomes the bottleneck before any other. This was one of the critical design decisions that went into the Oracle Database Machine.
Most DBAs and System Admins know what the disk capacity numbers are for their systems, but when it comes to I/O bandwidth or scan rates, most are unaware of what the system is capable of in theory, let alone in practice. Perhaps I/O bandwidth utilization should be included in the system metrics that are collected for your databases. You do collect system metrics, right?
There are several “exchanges” that data must flow through from storage devices to host CPUs, many of which could become bottlenecks. Those include:
- Back-end Fibre Channel loops (the fibre between the drive shelves and the storage array server processor)
- Front-end Fibre Channel ports
- Storage array server processors (SP)
- Host HBAs
One should understand the throughput capacity of each of these components to ensure that one (or more) of them do not restrict the flow of data to the CPUs prematurely.
Unbalanced Hardware Configurations
All too frequently systems are not architected in a balanced way and end up constrained in one of the following three scenarios:
From the production systems that I have seen, the main deficiency is in I/O bandwidth (both I/O channel and HDD). I believe there are several reasons for this. First, too many companies capacity plan for their data warehouse based on the size the data occupies on disk alone. That is, they purchase the number of HDDs for the system based on the drive capacity, not on the I/O bandwidth requirement. Think of it like this: if you were to purchase 2 TB of mirrored disk capacity (4 TB total), would you rather purchase 28 x 146 GB drives or 14 x 300 GB drives (or even 4 x 1 TB drives)? You may ask: well, what is the difference (other than price); in each case you have the same net capacity, correct? Indeed, both configurations do have the same capacity, but I/O bandwidth (how fast you can read data off the HDDs) is proportional to the number of HDDs, not the capacity. It should be obvious, then, that 28 HDDs can deliver 2X the disk I/O bandwidth that 14 HDDs can, which means it will take 2X as long to read the same amount of data off of 14 HDDs as off 28 HDDs.
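The back-of-the-envelope math is easy to do in SQL*Plus; the ~70 MB/s per-drive sequential scan rate below is an assumed, illustrative number, not a measured one:

```sql
-- Aggregate scan bandwidth = number of drives x per-drive scan rate (assumed 70 MB/s).
select 28 * 70 as mb_s_28_drives,   -- 1960 MB/s
       14 * 70 as mb_s_14_drives,   --  980 MB/s
        4 * 70 as mb_s_4_drives     --  280 MB/s
from   dual;
```

Same capacity, very different scan rates.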
Unfortunately what tends to happen is that the bean counter types will see only two things:
- The disk capacity (space) you want to purchase (or the capacity that is required)
- The price per MB/GB/TB
This is where someone worthy of the title systems architect needs to stand up and explain the concept of I/O bandwidth and the impact it has on data warehouse performance (your systems architect does know this, correct?). This is generally a difficult discussion because I/O bandwidth is not a line item on a purchase order; it is a derived metric that requires both thought and engineering (which means someone had to do some thinking about the requirements for this system!).
When sizing the hardware for your data warehouse, consider your workload and understand the following (and calculate numbers for them!):
- What rate (in MB/GB per second) can the CPUs consume data?
- What rate can storage devices produce data (scan rate)?
- What rate can the data be delivered from the storage array(s) to the host HBAs?
If you are unable to answer these questions in theory then you need to sit down and do some calculations. Then you need to use some micro benchmarks (like Oracle ORION) and prove out those calculations. This will give you the “speed limit” and a metric against which you can measure your database workload. All computer systems must obey the laws of physics! There is no way around that.
Kevin Closson has several good blog posts on related topics including:
- SAN Admins: Please Give Me As Much Capacity From As Few Spindles As Possible!
- Hard Drives Are Arcane Technology. So Why Can’t I Realize Their Full Bandwidth Potential?
as well as numerous others.
Oracle Documentation References:
At the 2009 Oracle OpenWorld Unconference back in October I led a chalk and talk session entitled The Core Performance Fundamentals Of Oracle Data Warehousing. Since this was a chalk and talk I spared the audience any PowerPoint slides, but I had several people request that I turn it into a presentation so they could share it with others. After some thought, I decided that a series of blog posts would probably be a better way to share this information, especially since I tend to use slides as a speaking outline, not a condensed version of a white paper. This will be the first of a series of posts discussing what I consider to be the key features and technologies behind well performing Oracle data warehouses.
As an Oracle database performance engineer who has done numerous customer data warehouse benchmarks and POCs over the past 5+ years, I’ve seen many data warehouse systems that have been plagued with problems on nearly every DBMS commonly used in data warehousing. Interestingly enough, many of these systems were facing many of the same problems. I’ve compiled a list of topics that I consider to be key features and/or technologies for Oracle data warehouses:
Core Performance Fundamental Topics
- Balanced Hardware Configuration
- Table Compression
- Parallel Execution
- Data Loading
- Row vs. Set Processing
- Indexing and Materialized Views
In the upcoming posts, I’ll deep dive into each one of these topics discussing why these areas are key for a well performing Oracle data warehouse. Stay tuned…
A number of weeks back I came across a paper/presentation by Riyaj Shamsudeen entitled Battle of the Nodes: RAC Performance Myths (available here). As I was looking through it I saw one example that struck me as very odd (Myth #3 – Interconnect Performance) and I contacted him about it. After further review Riyaj commented that he had made a mistake in his analysis and offered up a new example. I thought I’d take the time to discuss this, as parallel execution seems to be one of those areas where many misconceptions and misunderstandings exist.
The Original Example
I thought I’d quickly discuss why I questioned the initial example. The original query Riyaj cited is this one:
select /*+ full(tl) parallel (tl,4) */ avg (n1), max (n1), avg (n2), max (n2), max (v1) from t_large tl;
As you can see this is a very simple single table aggregation without a group by. The reason that I questioned the validity of this example in the context of interconnect performance is that the parallel execution servers (parallel query slaves) will each return exactly one row from the aggregation and then send that single row to the query coordinator (QC) which will then perform the final aggregation. Given that, it would seem impossible that this query could cause any interconnect issues at all.
Riyaj’s Test Case #1
Recognizing the original example was somehow flawed, Riyaj came up with a new example (I’ll reference as TC#1) which consisted of the following query:
select /*+ parallel (t1, 8, 2) parallel (t2, 8, 2) */
       min (t1.customer_trx_line_id + t2.customer_trx_line_id),
       max (t1.set_of_books_id + t2.set_of_books_id),
       avg (t1.set_of_books_id + t2.set_of_books_id),
       avg (t1.quantity_ordered + t2.quantity_ordered),
       max (t1.attribute_category),
       max (t2.attribute1),
       max (t1.attribute2)
from   (select * from big_table where rownum <= 100000000) t1,
       (select * from big_table where rownum <= 100000000) t2
where  t1.customer_trx_line_id = t2.customer_trx_line_id;
The execution plan for this query is:
----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |    TQ |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |           |     1 |   249 |       |  2846K  (4)| 01:59:01 |       |      |            |
|   1 |  SORT AGGREGATE            |           |     1 |   249 |       |            |          |       |      |            |
|*  2 |   HASH JOIN                |           |   100M|    23G|   762M|  2846K  (4)| 01:59:01 |       |      |            |
|   3 |    VIEW                    |           |   100M|    10G|       |  1214K  (5)| 00:50:46 |       |      |            |
|*  4 |     COUNT STOPKEY          |           |       |       |       |            |          |       |      |            |
|   5 |      PX COORDINATOR        |           |       |       |       |            |          |       |      |            |
|   6 |       PX SEND QC (RANDOM)  | :TQ10000  |   416M|  6749M|       |  1214K  (5)| 00:50:46 | Q1,00 | P->S | QC (RAND)  |
|*  7 |        COUNT STOPKEY       |           |       |       |       |            |          | Q1,00 | PCWC |            |
|   8 |         PX BLOCK ITERATOR  |           |   416M|  6749M|       |  1214K  (5)| 00:50:46 | Q1,00 | PCWC |            |
|   9 |          TABLE ACCESS FULL | BIG_TABLE |   416M|  6749M|       |  1214K  (5)| 00:50:46 | Q1,00 | PCWP |            |
|  10 |    VIEW                    |           |   100M|    12G|       |  1214K  (5)| 00:50:46 |       |      |            |
|* 11 |     COUNT STOPKEY          |           |       |       |       |            |          |       |      |            |
|  12 |      PX COORDINATOR        |           |       |       |       |            |          |       |      |            |
|  13 |       PX SEND QC (RANDOM)  | :TQ20000  |   416M|    10G|       |  1214K  (5)| 00:50:46 | Q2,00 | P->S | QC (RAND)  |
|* 14 |        COUNT STOPKEY       |           |       |       |       |            |          | Q2,00 | PCWC |            |
|  15 |         PX BLOCK ITERATOR  |           |   416M|    10G|       |  1214K  (5)| 00:50:46 | Q2,00 | PCWC |            |
|  16 |          TABLE ACCESS FULL | BIG_TABLE |   416M|    10G|       |  1214K  (5)| 00:50:46 | Q2,00 | PCWP |            |
----------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("T1"."n1"="T2"."n1")
   4 - filter(ROWNUM<=100000000)
   7 - filter(ROWNUM<=100000000)
  11 - filter(ROWNUM<=100000000)
  14 - filter(ROWNUM<=100000000)
This is a rather synthetic query, but there are a few things that I would like to point out. First, this query uses a parallel hint with three values, representing table/degree/instances; however, instances has been deprecated (see the 10.2 parallel hint documentation). In this case the requested DOP is calculated as degree * instances, or 16, not a DOP of 8 across 2 instances. Note that the rownum filter is causing all the rows from the tables to be sent back to the QC for the COUNT STOPKEY operation, forcing that part of the execution plan to serialize, denoted by the P->S in the IN-OUT column.
Riyaj had enabled sql trace for the QC and the TKProf output is such:
      Rows     Row Source Operation
   -------     ---------------------------------------------------
         1     SORT AGGREGATE (cr=152 pr=701158 pw=701127 time=1510221226 us)
  98976295      HASH JOIN (cr=152 pr=701158 pw=701127 time=1244490336 us)
 100000000       VIEW (cr=76 pr=0 pw=0 time=200279054 us)
 100000000        COUNT STOPKEY (cr=76 pr=0 pw=0 time=200279023 us)
 100000000         PX COORDINATOR (cr=76 pr=0 pw=0 time=100270084 us)
         0          PX SEND QC (RANDOM) :TQ10000 (cr=0 pr=0 pw=0 time=0 us)
         0           COUNT STOPKEY (cr=0 pr=0 pw=0 time=0 us)
         0            PX BLOCK ITERATOR (cr=0 pr=0 pw=0 time=0 us)
         0             TABLE ACCESS FULL BIG_TABLE_NAME_CHANGED_12 (cr=0 pr=0 pw=0 time=0 us)
 100000000       VIEW (cr=76 pr=0 pw=0 time=300298770 us)
 100000000        COUNT STOPKEY (cr=76 pr=0 pw=0 time=200298726 us)
 100000000         PX COORDINATOR (cr=76 pr=0 pw=0 time=200279954 us)
         0          PX SEND QC (RANDOM) :TQ20000 (cr=0 pr=0 pw=0 time=0 us)
         0           COUNT STOPKEY (cr=0 pr=0 pw=0 time=0 us)
         0            PX BLOCK ITERATOR (cr=0 pr=0 pw=0 time=0 us)
         0             TABLE ACCESS FULL BIG_TABLE_NAME_CHANGED_12 (cr=0 pr=0 pw=0 time=0 us)
Note that the Rows column contains zeros for many of the row sources because this trace is only for the QC, not the slaves, and thus only QC rows will show up in the trace file. Something to be aware of if you decide to use sql trace with parallel execution.
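If you do want row source details from the parallel execution servers, each slave writes its own trace file; one approach (a sketch, not the only way) is to enable the 10046 event in the QC session, which is inherited by the slaves it spawns:

```sql
-- Extended SQL trace enabled in the QC session is inherited by the PX servers
-- spawned for that session; each slave writes its own trace file.
alter session set events '10046 trace name context forever, level 8';

-- ... run the parallel query here ...

alter session set events '10046 trace name context off';
```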
Off To The Lab: Myth Busting Or Bust!
I wanted to take a query like TC#1 and run it in my own environment so I could do more monitoring of it. Given the alleged myth had to do with interconnect traffic of cross-instance (inter-node) parallel execution, I wanted to be certain to gather the appropriate data. I ran several tests using a similar query on a similarly sized data set (by row count) as the initial example. I ran all my experiments on an eight node Oracle Real Application Clusters (11g) database, each node having two quad-core CPUs. The interconnect is InfiniBand and the protocol used is RDS (Reliable Datagram Sockets).
Before I get into the experiments I think it is worth mentioning that Oracle’s parallel execution (PX), which includes Parallel Query (PQ), PDML & PDDL, can consume vast amounts of resources. This is by design. You see, the idea of Oracle PX is to dedicate a large amount of resources (processes) to a problem by breaking it up into many smaller pieces and then operate on those pieces in parallel. Thus the more parallelism that is used to solve a problem, the more resources it will consume, assuming those resources are available. That should be fairly obvious, but I think it is worth stating.
For my experiments I used a table that contains just over 468M rows.
Below is my version of TC#1. The query is a self-join on a unique key and the table is range partitioned by DAY_KEY into 31 partitions. Note that I create an AWR snapshot immediately before and after the query.
exec dbms_workload_repository.create_snapshot

select /* &&1 */
       /*+ parallel (t1, 16) parallel (t2, 16) */
       min (t1.bsns_unit_key + t2.bsns_unit_key),
       max (t1.day_key + t2.day_key),
       avg (t1.day_key + t2.day_key),
       max (t1.bsns_unit_typ_cd),
       max (t2.curr_ind),
       max (t1.load_dt)
from   dwb_rtl_trx t1,
       dwb_rtl_trx t2
where  t1.trx_nbr = t2.trx_nbr;

exec dbms_workload_repository.create_snapshot
Experiment Results Using Fixed DOP=16
I ran my version of TC#1 across a varying number of nodes by using Oracle services (instance_groups and parallel_instance_group have been deprecated in 11g), but kept the DOP constant at 16 for all the tests. Below is a table of the experiment results.
SQL Monitor Report
AWR SQL Report
Seemingly contrary to what many people would probably guess, the execution times got better the more nodes that participated in the query, even though the DOP was constant at 16 throughout each of the tests.
Measuring The Interconnect Traffic
One of the new additions to the AWR report in 11g was the inclusion of interconnect traffic by client. This section is near the bottom of the report and looks like such:
This allows the PQ message traffic to be tracked, whereas in prior releases it could not be.
Even though AWR contains the throughput numbers (as in megabytes per second), I thought it would be interesting to see how much data was being transferred, so I used the following query directly against the AWR data. I added a filter predicate to return only rows where DIFF_RECEIVED_MB >= 10 so that the instances that were not part of the execution are filtered out, as well as the single instance execution.
break on snap_id skip 1
compute sum of DIFF_RECEIVED_MB on SNAP_ID
compute sum of DIFF_SENT_MB on SNAP_ID

select *
from   (select snap_id,
               instance_number,
               round ((bytes_sent - lag (bytes_sent, 1) over
                      (order by instance_number, snap_id)) / 1024 / 1024) diff_sent_mb,
               round ((bytes_received - lag (bytes_received, 1) over
                      (order by instance_number, snap_id)) / 1024 / 1024) diff_received_mb
        from   dba_hist_ic_client_stats
        where  name = 'ipq'
        and    snap_id between 910 and 917
        order  by snap_id, instance_number)
where  snap_id in (911, 913, 915, 917)
and    diff_received_mb >= 10
/

   SNAP_ID INSTANCE_NUMBER DIFF_SENT_MB DIFF_RECEIVED_MB
---------- --------------- ------------ ----------------
       913               1        11604            10688
                         2        10690            11584
**********                 ------------ ----------------
sum                               22294            22272

       915               1         8353             8350
                         2         8133             8418
                         3         8396             8336
                         4         8514             8299
**********                 ------------ ----------------
sum                               33396            33403

       917               1         5033             4853
                         2         4758             4888
                         3         4956             4863
                         4         5029             4852
                         5         4892             4871
                         6         4745             4890
                         7         4753             4889
                         8         4821             4881
**********                 ------------ ----------------
sum                               38987            38987
As you can see from the data, the more nodes that were involved in the execution, the more interconnect traffic there was, however, the execution times were best with 8 nodes.
Further Explanation Of Riyaj’s Issue
If you read Riyaj’s post, you noticed that he observed worse elapsed times when running on two nodes versus one, not better as I did. How could this be? It was noted in the comment thread of that post that the configuration was using Gig-E as the interconnect in a Solaris IPMP active-passive configuration. This means the interconnect throughput would be capped at roughly 125MB/s (1000Mbps), the wire speed of Gig-E. This is by all means an inadequate configuration for cross-instance parallel execution.
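The ceiling is simple arithmetic, easily checked in SQL*Plus:

```sql
-- Gig-E wire speed: 1000 megabits/s divided by 8 bits per byte = 125 MB/s,
-- and that is before any protocol overhead.
select 1000 / 8 as gige_mb_per_sec from dual;
```

Compare that with the multi-gigabyte-per-second numbers in the interconnect traffic data above and the problem is obvious.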
There is a whitepaper entitled Oracle SQL Parallel Execution that discusses many of the aspects of Oracle’s parallel execution. I would highly recommend reading it. This paper specifically mentions:
If you use a relatively weak interconnect, relative to the I/O bandwidth from the server to the storage configuration, then you may be better off restricting parallel execution to a single node or to a limited number of nodes; inter-node parallel execution will not scale with an undersized interconnect.
I would assert that this is precisely the root cause (insufficient interconnect bandwidth for cross-instance PX) behind the issues that Riyaj observed, thus making his execution slower on two nodes than one node.
The Advantage Of Hash Partitioning/Subpartitioning And Full Partition-Wise Joins
At the end of my comment on Riyaj’s blog, I mentioned:
If a DW frequently uses large table to large table joins, then hash partitioning or subpartitioning would yield added gains as partition-wise joins will be used.
I thought that it would be both beneficial and educational to extend TC#1 and implement hash subpartitioning so that the impact could be measured on both query elapsed time and interconnect traffic. In order for a full partition-wise join to take place, the table must be partitioned/subpartitioned on the join key column, so in this case I’ve hash subpartitioned on TRX_NBR. See the Oracle Documentation on Partition-Wise Joins for a more detailed discussion on PWJ.
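For reference, a skeleton of such a range/hash composite partitioned table might look like the following; the column list, partition names, and subpartition count are illustrative, not the actual DDL used in the tests:

```sql
create table dwb_rtl_trx (
  trx_nbr        number,
  day_key        number,
  bsns_unit_key  number
  -- remaining columns omitted for brevity
)
partition by range (day_key)
subpartition by hash (trx_nbr) subpartitions 32
(
  partition day_01 values less than (2),
  partition day_02 values less than (3)
  -- one partition per DAY_KEY value, 31 in all
);
```

Because both sides of the self-join are (sub)partitioned identically on the join key, the optimizer can pair up matching subpartitions and join them independently.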
Off To The Lab: Partition-Wise Joins
I’ve run through the exact same test matrix with the new range/hash partitioning model and below are the results.
SQL Monitor Report
AWR SQL Report
As you can see by the elapsed times, the range/hash partitioning model with the full partition-wise join has decreased the overall execution time by around a factor of 2X compared to the range only partitioned version.
Now let’s take a look at the interconnect traffic for the PX messages:
break on snap_id skip 1
compute sum of DIFF_RECEIVED_MB on SNAP_ID
compute sum of DIFF_SENT_MB on SNAP_ID

select *
from   (select snap_id,
               instance_number,
               round ((bytes_sent - lag (bytes_sent, 1) over
                      (order by instance_number, snap_id)) / 1024 / 1024) diff_sent_mb,
               round ((bytes_received - lag (bytes_received, 1) over
                      (order by instance_number, snap_id)) / 1024 / 1024) diff_received_mb
        from   dba_hist_ic_client_stats
        where  name = 'ipq'
        and    snap_id between 1041 and 1048
        order  by snap_id, instance_number)
where  snap_id in (1042, 1044, 1046, 1048)
and    diff_received_mb >= 10
/

no rows selected
Hmm. No rows selected?!? I had previously put in the predicate “DIFF_RECEIVED_MB >= 10” to filter out the nodes that were not participating in the parallel execution. Let me remove that predicate and rerun the query.
   SNAP_ID INSTANCE_NUMBER DIFF_SENT_MB DIFF_RECEIVED_MB
---------- --------------- ------------ ----------------
      1042               1            8                6
                         2            2                3
                         3            2                3
                         4            2                3
                         5            2                3
                         6            2                3
                         7            2                3
                         8            2                3
**********                 ------------ ----------------
sum                                  22               27

      1044               1            7                7
                         2            3                2
                         3            2                2
                         4            2                2
                         5            2                2
                         6            2                2
                         7            2                2
                         8            2                2
**********                 ------------ ----------------
sum                                  22               21

      1046               1            1                2
                         2            1                2
                         3            1                2
                         4            3                1
                         5            1                1
                         6            1                1
                         7            1                1
                         8            1                1
**********                 ------------ ----------------
sum                                  10               11

      1048               1            6                5
                         2            1                2
                         3            3                2
                         4            1                2
                         5            1                2
                         6            1                2
                         7            1                2
                         8            1                2
**********                 ------------ ----------------
sum                                  15               19
Wow, there is almost no interconnect traffic at all. Let me verify with the AWR report from the 8 node execution.
The AWR report confirms that there is next to no interconnect traffic for the PWJ version of TC#1. The reason for this is that since the table is hash subpartitioned on the join column, each pair of matching subpartitions can be joined directly, minimizing the data sent between parallel execution servers. If you look at the execution plan (see the AWR SQL Report) from the first set of experiments, you will notice that the distribution method for each of the tables is HASH, but in the range/hash version of TC#1 there is no redistribution at all for either of the two tables. The full partition-wise join behaves logically the same way a shared-nothing database would: each of the parallel execution servers works on its partition, which does not require data from any other partition because of the hash partitioning on the join column. The main difference is that in a shared-nothing database the data is physically hash distributed amongst the nodes (each node contains a subset of all the data), whereas all nodes in an Oracle RAC database have access to all the data.
Personally I see no myth about cross-instance (inter-node) parallel execution and interconnect traffic, but frequently I see misunderstandings and misconceptions. As shown by the data in my experiment, TC#1 (w/o hash subpartitioning) running on eight nodes is more than 2X faster than running on one node using exactly the same DOP. Interconnect traffic is not a bad thing as long as the interconnect is designed to support the workload. Sizing the interconnect is really no different than sizing any other component of your cluster (memory/CPU/disk space/storage bandwidth). If it is undersized, performance will suffer. Depending on the number and speed of the host CPUs and the speed and bandwidth of the interconnect, your results may vary.
By hash subpartitioning the table, the interconnect traffic was all but eliminated and the query execution times were around 2X faster than the non-hash subpartitioned version of TC#1. This is obviously a much more scalable solution and one of the main reasons to leverage hash (sub)partitioning in a data warehouse.
Oracle Corporation had its F4Q09 earnings call today and the Exadata comments started right away with the earnings press release:
“The Exadata Database Machine is well on its way to being the most successful new product launch in Oracle’s 30 year history,” said Oracle CEO Larry Ellison. “Several of Teradata’s largest customers are performance testing — then buying — Oracle Exadata Database Machines. In a recent competitive benchmark, a Teradata machine took over six hours to process a query that our Exadata Database Machine ran in less than 30 minutes. They bought Exadata.”
During the earnings call Larry Ellison discusses Exadata and the competition:
…I’m going to talk about Exadata again. I said last quarter that Exadata is shaping up to be our most exciting and successful new product introduction in Oracle’s 30 year history and [in the] last quarter Exadata continues to grow and win competitive deals in the marketplace against our three primary competitors. It’s turning out that Teradata is our number one competitor…Netezza and IBM are kind of tied for second.
Ellison describes some of the Exadata sales from this quarter which include:
- A well-known California SmartPhone and computer manufacturer (win vs. Netezza) who commented that Exadata ran about 100 times faster in some cases than their standard Oracle environment
- Research in Motion
- A large East Coast insurance company
- Thomson Reuters
- A Japanese telco (biggest Teradata customer in Japan) who benchmarked Exadata and found it to be dramatically faster than Teradata
- Barclays Capital (UK)
- A number of banks in Western Europe and Germany
Larry Ellison follows with:
It was just a great quarter for Exadata, a product that is relatively new to the marketplace that is persuading people to move from their existing environments because Exadata is faster and the hardware costs less.
In the Q&A Larry Ellison responds to John DiFucci on Exadata:
By the way every customer I mentioned and alluded to were actual sales. Now some of these, because the Exadata product is so new, quite often will install in kind of a try and buy situation, but I can’t think of a case where we installed the machine that they didn’t buy. So we’re winning these benchmarks. Sometimes we’re beating Teradata. I think in my quote, I said we’ve beat Teradata on one of the queries by 20 to one. So we think it’s a brand new technology, we think we’re a lot faster than the competition. The benchmarks are proving out with real customer data, we’re proving to be much faster than the competition. Every single deal I mentioned were cases where the customer bought the system. There are obviously other evaluations going on and we expect the Exadata sales to accelerate.
Today, June 10th, marks the Yahoo! Hadoop Summit ’09 and the crew at Facebook have a writeup on the Facebook Engineering page entitled: Hive – A Petabyte Scale Data Warehouse Using Hadoop.
I found this a very interesting read given some of the Hadoop/MapReduce comments from David J. DeWitt and Michael Stonebraker as well as their SIGMOD 2009 paper, A Comparison of Approaches to Large-Scale Data Analysis. Now I’m not about to jump into this whole dbms-is-better-than-mapreduce argument, but I found Facebook’s story line interesting:
When we started at Facebook in 2007 all of the data processing infrastructure was built around a data warehouse built using a commercial RDBMS. The data that we were generating was growing very fast – as an example we grew from a 15TB data set in 2007 to a 2PB data set today. The infrastructure at that time was so inadequate that some daily data processing jobs were taking more than a day to process and the situation was just getting worse with every passing day. We had an urgent need for infrastructure that could scale along with our data and it was at that time we then started exploring Hadoop as a way to address our scaling needs.
[The] Hive/Hadoop cluster at Facebook stores more than 2PB of uncompressed data and routinely loads 15 TB of data daily
Wow, 2PB of uncompressed data and growing at around 15TB daily. A part of me wonders how much value there is in 2PB of data or if companies are suffering from OCD when it comes to data. Either way it’s interesting to see how much data is being generated/collected and how engineers are dealing with it.
Oracle and HP have taken back the #1 spot by setting a new performance record in the 1TB TPC-H benchmark. The HP/Oracle result puts the Oracle database ahead of both the Exasol (currently #2 & #3) and ParAccel (currently #4) results in the race for performance at the 1TB scale factor and places Oracle in the >1 million queries per hour (QphH) club, which is no small achievement. Compared to the next best result from HP/Oracle (currently #5), this result has over 9X the query throughput (1,166,976 QphH vs. 123,323 QphH) at around 1/4 the cost (5.42 USD vs. 20.54 USD) demonstrating significantly more performance for the money.
Some of the interesting bits from the hardware side:
- 4 HP BladeSystem c7000 Enclosures
- 64 HP ProLiant BL460c Servers
- 128 Quad-Core Intel Xeon X5450 “Harpertown” Processors (512 cores)
- 2TB Total System Memory (RAM)
- 6 HP Oracle Exadata Storage Servers
As you can see, this was a 64 node Oracle Real Application Cluster (RAC), each node having 2 processors (8 cores). This is also the first TPC-H benchmark from Oracle that used Exadata as the storage platform.
Congratulations to the HP/Oracle team on the great accomplishment!