Important Notice #1: the information on this page refers to the TPC-H benchmark tables and queries, but the results described are not official TPC-H results and should not be interpreted as such; nor should they be compared to official TPC-H results in any way. TPC-H is a trademark of the Transaction Processing Performance Council.
Important Notice #2: this page contains comparative results for LucidDB and MySQL, but the tests were carried out without expertise in MySQL performance tuning, and so the results should not be interpreted as what could be expected from a system tuned by an expert. Suggestions for configuration improvements are welcome! MySQL is a registered trademark of MySQL AB in the United States, the European Union and other countries.
- --Jvs 18:29, 29 July 2008 (PDT): Peter Zaitsev has pointed out that MySQL load performance would be better if indexes were predefined before the data is loaded (rather than created individually afterward), so we'll be trying that on the next run. He also recommended using the rewritten queries with joins instead of subqueries against MySQL (not just LucidDB).
This wiki page presents some LucidDB performance results based on the table and query definitions from the TPC-H benchmark. For the purposes of comparison between LucidDB's column store architecture and a typical row store, results from MySQL running on the same machine are also presented for the identical analytic workload. Scale factor 10 (10GB of source flatfile data) was used for the cross-DBMS comparison. Some information on how to run the tests yourself is also provided. Join the luciddb-users mailing list at sourceforge.net if you need help.
Also, for some results obtained independently by others outside of the LucidDB project, see this blog post.
Scale Factor 10 Benchmark Machine Specifications
The machine used to run the scale factor 10 benchmark for both LucidDB and MySQL had the following hardware:
- 8GB RAM
- 2GHz dual-core CPU
- 1MB L2 cache
- Seagate ST3300555SS SATA drive, 10K RPM
All files (source flatfiles, data files, and temp files) were stored on the same ext3 filesystem on the single drive (no RAID). The Linux I/O scheduler was changed from the default (cfq) to deadline.
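The scheduler switch can be made at runtime through sysfs; a minimal sketch, assuming the drive appears as sda (the device name is an assumption; adjust for your system, and note this requires root):

```shell
# Show the available schedulers for the device; the active one is bracketed.
cat /sys/block/sda/queue/scheduler
# Select the deadline scheduler. This takes effect immediately but does not
# persist across reboots (use a boot parameter or init script for that).
echo deadline > /sys/block/sda/queue/scheduler
```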
The operating system was 64-bit RHEL 5.
For LucidDB, the JRockit R27.4 JVM was used.
LucidDB Test Execution
The LucidDB version used for testing was a prerelease of 0.7.4. If you're going to run the test yourself, download the full 0.7.4 source tarball and unpack it. http://p4web.eigenbase.org/open/dev/thirdparty/tpch.tar.gz contains the dbgen benchmark code from the TPC website plus some LucidDB-specific helper scripts, while luciddb/test/sql/tpch from the source tarball contains all of the LucidDB-specific scripts.
LucidDB Storage Preallocation
For optimal load performance, LucidDB data and temp files were preallocated via the lucidDbAllocFile utility. The allocSpace.10G.sh script can be used after editing the path for LUCIDDB_DIR to point to your installed LucidDB bin directory. Make sure LucidDB is not running when you execute the utility, otherwise you'll encounter a file lock error.
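The invocation looks roughly like this; the flag name, page counts, and paths below are illustrative assumptions (allocSpace.10G.sh contains the actual values used for this run):

```shell
# Preallocate pages in the data and temp files while the server is stopped;
# running this against a live server triggers the file lock error noted above.
cd /path/to/luciddb/bin
./lucidDbAllocFile --append-pages=500000 ../catalog/db.dat
./lucidDbAllocFile --append-pages=200000 ../catalog/temp.dat
```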
LucidDB Buffer Pool Size
The LucidDB buffer pool was set to 6GB. See LucidDbBufferPoolSizing for the procedure.
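The buffer pool is sized via the cache page parameters; a hedged sketch, assuming the default 32KB page size (so 6GB works out to 6 * 1024 * 1024 KB / 32 KB = 196608 pages) — verify the parameter names and procedure against LucidDbBufferPoolSizing:

```sql
-- Assumption: default 32KB pages, so 6GB = 196608 pages.
alter system set "cachePagesMax" = 196608;
alter system set "cachePagesInit" = 196608;
```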
LucidDB I/O Scheduler
The LucidDB I/O Scheduler was set to aioLinux (forcing usage of libaio and O_DIRECT):
alter system set "deviceSchedulerType" = 'aioLinux';
Restart LucidDB after changing this setting and make sure that it comes back up successfully; if it doesn't, you probably need to install the libaio package on your machine.
A prefetch queue size of 50 pages was found to be the optimal setting for the disk configuration used:
alter system set "prefetchPagesMax" = 50;
Changing this setting also requires a restart in order to take effect. The optimal setting is dependent on the disk configuration.
(The numbers reported below were with the default setting of 12.)
For benchmark loads and queries run without any concurrency, LucidDB can be configured to give each statement the maximum possible memory:
alter system set "expectedConcurrentStatements" = 1;
Changing this setting does not require a restart.
(The numbers reported below were with the default setting of 4.)
The dbgen utility provided on the TPC website was used to generate the datafiles. The utility can't generate files larger than 2GB, so at scale factor 10, it's necessary to generate the files split up into smaller pieces. Large tables were split up into 10 parts, and then reconcatenated back into a single file per table (since the limit is in the utility, not in LucidDB's flatfile reader, and it's more efficient for LucidDB to load each table in one shot).
(--Jvs 19:07, 30 March 2009 (EDT): According to http://n2.nabble.com/large-file-support-in-TPC-H-dbgen-td2559525.html, it should be possible to get dbgen to generate one big file.)
The createdbMultiProcess script in tpch.tar.gz takes care of the generation step above, as well as some additional munging needed to make the files parsable by LucidDB.
$ ./createdbMultiProcess 10G 10
Reconcatenation can be performed with the Unix cat command.
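Putting the two steps together, a sketch of the per-table workflow (dbgen's chunking flags are per the TPC-supplied utility: -C gives the total chunk count, -S selects which chunk, -T L selects lineitem; the stand-in files below just demonstrate the cat step):

```shell
# On the real machine each chunk would be produced by dbgen, e.g.:
#   for i in $(seq 1 10); do ./dbgen -s 10 -T L -C 10 -S $i; done
# The reconcatenation itself is just cat, shown here with tiny stand-in chunks:
cd "$(mktemp -d)"
printf '1|part-one|\n2|part-one|\n' > lineitem.tbl.1
printf '3|part-two|\n' > lineitem.tbl.2
cat lineitem.tbl.1 lineitem.tbl.2 > lineitem.tbl
wc -l < lineitem.tbl
```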
LucidDB Initialization SQL
The template script init.sql.tmpl executes DDL to create some of the necessary objects such as the flatfile links. It also selects row counts from the flatfile links to verify that the files are accessible. Edit it to match your site-specifics, and then execute it. You can ignore the commented-out portions (they are obsolete leftovers from Broadbase).
LucidDB Table Definitions
The script create_tables.sql defines the TPC-H base tables. The definitions include the primary keys, but no other indexes.
(Note that the flat file external tables are quoted as lowercase, e.g. "lineitem", whereas the corresponding base tables are unquoted and hence ALLCAPS, e.g. LINEITEM.)
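For example (the schema names here are illustrative, not necessarily those used in the scripts):

```sql
-- Quoted identifier: matched case-sensitively, so this hits the lowercase
-- flatfile external table.
select count(*) from FLATFILE_LINKS."lineitem";
-- Unquoted identifier: folded to uppercase, so this resolves to the
-- base table LINEITEM.
select count(*) from TPCH.LINEITEM;
```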
LucidDB Data Load
The script load_tables.sql copies data from the flatfile links into the base tables. Besides loading the data, this also updates the primary key index.
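The load for each table boils down to an INSERT ... SELECT from the flatfile link; a sketch for one table (schema names are assumptions):

```sql
-- Copies flatfile rows into the column-store base table; the primary key
-- index is maintained as part of the same statement.
insert into TPCH.LINEITEM
select * from FLATFILE_LINKS."lineitem";
```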
LucidDB Index Creation
Non-primary-key indexes are created by script create_indexes.sql. LucidDB does not support foreign key constraints, but indexes on the foreign key columns are still needed for efficient query processing. Additional indexes useful to the optimizer are also created.
This script also runs ANALYZE to collect statistics needed by the optimizer.
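A hedged sketch of the kind of DDL involved (the index name and column choice are illustrative, not copied from create_indexes.sql):

```sql
-- Index on a foreign key column of the fact table:
create index LINEITEM_PARTKEY_IDX on TPCH.LINEITEM(L_PARTKEY);
-- Statistics collection for the optimizer:
analyze table TPCH.LINEITEM estimate statistics for all columns;
```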
Queries are in individual script files named according to query number, e.g. 1.sql.
- Queries 2, 4, 11, 15, 16, and 17 are rewritten into an equivalent form which avoids subqueries, since LucidDB's optimizer does not yet handle all subquery decorrelation efficiently by itself.
- Queries 19, 21, and 22 are omitted since LucidDB's optimizer cannot yet handle them, and no one has come up with an equivalent rewrite.
- Queries 2, 3, 10, and 18 should be using !set rowlimit to control the result set size, but the scripts have not been updated yet; doing so may reduce the result rendering time.
- Queries which perform date arithmetic after parameter substitution in the original spec have been rewritten with the result of the arithmetic; LucidDB's optimizer is capable of doing the equivalent constant reduction rewrite (the scripts can be edited to verify this).
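The rowlimit mentioned above is a sqlline client directive rather than SQL; it would look like this (the value 100 is illustrative):

```sql
!set rowlimit 100
```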
After the server was restarted post-load, all queries were run through exactly once in sequence in the same session.
LucidDB storage breakdowns were obtained with this query:
select table_schem as schema_name, table_name, index_name, pages as page_count from sys_boot.jdbc_metadata.index_info_internal;
MySQL Test Execution
The MySQL version used was 5.0.22, with the MyISAM storage engine.
For the data load and indexing, the following my.cnf settings were used (after some experimentation to attempt to find the best ones):
skip-locking
key_buffer = 3000M
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 64M
bulk_insert_buffer_size = 2M
However, for queries, the default my.cnf settings were used, since attempts to use large-memory settings seemed to hurt query times in most cases.
The script Media:mysql_load.sql was used to create the schema, load the data, create the indexes, and analyze statistics. PRIMARY KEY and index definitions were identical to those used for LucidDB. However, unlike LucidDB, primary keys were added as a separate step (instead of being loaded together with the data).
The script Media:mysql_query.sql was used to run queries; the queries were not modified from the TPC-H specification. As with LucidDB, queries were run in sequence in a single session after restarting the server post-load.
Query 11 returned incorrect results (empty result set). Later investigation showed that the query was parameterized incorrectly for both MySQL and LucidDB, so the results for both should be ignored. (This will be corrected for a future run.)
During a trial run, query 18 did not complete after several hours, and the machine was thrashing, so it was omitted from the real run.
Storage was measured by shutting the server down and examining data and index file sizes:
# ls -l
total 16377720
-rw-rw---- 1 mysql mysql       8850 Apr 13 23:28 customer.frm
-rw-rw---- 1 mysql mysql  238173452 Apr 13 23:28 customer.MYD
-rw-rw---- 1 mysql mysql   38496256 Apr 13 23:28 customer.MYI
-rw-rw---- 1 mysql mysql         65 Apr 13 18:16 db.opt
-rw-rw---- 1 mysql mysql       9226 Apr 14 04:50 lineitem.frm
-rw-rw---- 1 mysql mysql 5296436176 Apr 14 05:01 lineitem.MYD
-rw-rw---- 1 mysql mysql 7186396160 Apr 14 06:01 lineitem.MYI
-rw-rw---- 1 mysql mysql       8692 Apr 14 06:06 nation.frm
-rw-rw---- 1 mysql mysql       2316 Apr 14 06:06 nation.MYD
-rw-rw---- 1 mysql mysql       4096 Apr 14 06:06 nation.MYI
-rw-rw---- 1 mysql mysql       8928 Apr 14 06:07 orders.frm
-rw-rw---- 1 mysql mysql 1472903276 Apr 14 06:09 orders.MYD
-rw-rw---- 1 mysql mysql  585344000 Apr 14 06:14 orders.MYI
-rw-rw---- 1 mysql mysql       8874 Apr 14 06:15 part.frm
-rw-rw---- 1 mysql mysql  234451704 Apr 14 06:15 part.MYD
-rw-rw---- 1 mysql mysql  156191744 Apr 14 06:16 part.MYI
-rw-rw---- 1 mysql mysql       8748 Apr 14 06:03 partsupp.frm
-rw-rw---- 1 mysql mysql 1160039052 Apr 14 06:04 partsupp.MYD
-rw-rw---- 1 mysql mysql  370024448 Apr 14 06:06 partsupp.MYI
-rw-rw---- 1 mysql mysql       8648 Apr 13 23:33 region.frm
-rw-rw---- 1 mysql mysql        412 Apr 13 23:33 region.MYD
-rw-rw---- 1 mysql mysql       3072 Apr 13 23:33 region.MYI
-rw-rw---- 1 mysql mysql       8804 Apr 13 19:20 supplier.frm
-rw-rw---- 1 mysql mysql   13888184 Apr 13 19:20 supplier.MYD
-rw-rw---- 1 mysql mysql    1848320 Apr 13 19:20 supplier.MYI
Scale Factor 10 Results
All times are in seconds; all storage sizes are in bytes. Index storage includes PRIMARY KEY.
Results for queries 1-18 are graphed here (times in seconds):
(As noted above, results for query 11 should be ignored due to a problem with the runs.)
This is how long it took to load the data (ANALYZE time not included):
LucidDB takes longer to load the base data, but is able to take advantage of the resulting compression later while building indexes.
Storage comparison for the LINEITEM table (which dominates the database) is graphed here (measured in bytes):
If we drill into the LucidDB storage by column for this table, we find something interesting:
Most of the storage is eaten up by the comment text field, whereas the other columns compress quite well. With column store, the comment column is only accessed if the query actually needs it, whereas with row store, the I/O cost is always paid up front.
The LucidDB index storage breakdown for this table is also enlightening:
Most of the storage is eaten up by the primary key index, which is hard to compress; the other index bitmaps compress quite well. The primary key on the fact table is not accessed much except at load time, so in terms of query I/O, it isn't relevant.
SF 10 Numbers
(Only fragments of the full results table survive here: the query 11 row is marked "bad run" for both systems, and the LOAD TABLE row notes that the LucidDB time does not include primary key creation, while the MySQL time does.)
Scale Factor 30 Results
To test LucidDB scalability, a separate test was carried out comparing LucidDB SF 10 against LucidDB SF 30. (No MySQL run was attempted at SF 30.) These test runs used a different machine with a larger filesystem mounted on SAN (which is why the SF 10 numbers are better than the ones above); other than that, the hardware and LucidDB configuration were similar to the runs described above (in particular, still 8 GB RAM).
Left-hand bars show the SF 10 results; right-hand bars show the SF 30 results. The middle bars are computed as three times the SF 10 results (for comparing the expected results given perfect linear scaleup vs the actual SF 30 results).
SF 30 Numbers
| ||LucidDB SF 10||LucidDB SF 30|
| ||Time (sec)||Time (sec)|
|ANALYZE TABLE (ESTIMATE STAT)||64||228|
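The linear-scaleup comparison for the ANALYZE row works out as follows (simple arithmetic, shown for clarity):

```shell
# Perfect linear scaleup would predict 3x the SF 10 time at SF 30.
predicted=$((3 * 64))
echo "predicted: ${predicted} sec, measured: 228 sec"
```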