From 65eaa36aa25b3b794c657c6f05058923208f2281 Mon Sep 17 00:00:00 2001 From: qiancai Date: Wed, 5 Mar 2025 11:53:58 +0800 Subject: [PATCH 1/6] remove the benchmark files not for v8.5 --- benchmark/benchmark-sysbench-v2.md | 130 -------- benchmark/benchmark-sysbench-v3.md | 143 --------- benchmark/benchmark-sysbench-v4-vs-v3.md | 208 ------------- benchmark/benchmark-sysbench-v5-vs-v4.md | 223 -------------- .../benchmark-sysbench-v5.1.0-vs-v5.0.2.md | 186 ----------- .../benchmark-sysbench-v5.2.0-vs-v5.1.1.md | 186 ----------- .../benchmark-sysbench-v5.3.0-vs-v5.2.2.md | 203 ------------ .../benchmark-sysbench-v5.4.0-vs-v5.3.0.md | 203 ------------ .../benchmark-sysbench-v6.0.0-vs-v5.4.0.md | 199 ------------ .../benchmark-sysbench-v6.1.0-vs-v6.0.0.md | 199 ------------ .../benchmark-sysbench-v6.2.0-vs-v6.1.0.md | 199 ------------ benchmark/benchmark-tpch.md | 109 ------- ...-performance-benchmarking-with-sysbench.md | 289 ------------------ ...v3.0-performance-benchmarking-with-tpcc.md | 98 ------ ...v4.0-performance-benchmarking-with-tpcc.md | 131 -------- ...v4.0-performance-benchmarking-with-tpch.md | 209 ------------- ...v5.0-performance-benchmarking-with-tpcc.md | 149 --------- ...v5.1-performance-benchmarking-with-tpcc.md | 93 ------ ...v5.2-performance-benchmarking-with-tpcc.md | 93 ------ ...v5.3-performance-benchmarking-with-tpcc.md | 129 -------- ...v5.4-performance-benchmarking-with-tpcc.md | 129 -------- ...v5.4-performance-benchmarking-with-tpch.md | 128 -------- ...v6.0-performance-benchmarking-with-tpcc.md | 125 -------- ...v6.0-performance-benchmarking-with-tpch.md | 8 - ...v6.1-performance-benchmarking-with-tpcc.md | 124 -------- ...v6.1-performance-benchmarking-with-tpch.md | 8 - ...v6.2-performance-benchmarking-with-tpcc.md | 124 -------- ...v6.2-performance-benchmarking-with-tpch.md | 8 - system-variables.md | 2 +- 29 files changed, 1 insertion(+), 4034 deletions(-) delete mode 100644 benchmark/benchmark-sysbench-v2.md delete mode 100644 benchmark/benchmark-sysbench-v3.md delete mode 100644 benchmark/benchmark-sysbench-v4-vs-v3.md delete mode 100644 benchmark/benchmark-sysbench-v5-vs-v4.md delete mode 100644 benchmark/benchmark-sysbench-v5.1.0-vs-v5.0.2.md delete mode 100644 benchmark/benchmark-sysbench-v5.2.0-vs-v5.1.1.md delete mode 100644 benchmark/benchmark-sysbench-v5.3.0-vs-v5.2.2.md delete mode 100644 benchmark/benchmark-sysbench-v5.4.0-vs-v5.3.0.md delete mode 100644 benchmark/benchmark-sysbench-v6.0.0-vs-v5.4.0.md delete mode 100644 benchmark/benchmark-sysbench-v6.1.0-vs-v6.0.0.md delete mode 100644 benchmark/benchmark-sysbench-v6.2.0-vs-v6.1.0.md delete mode 100644 benchmark/benchmark-tpch.md delete mode 100644 benchmark/v3.0-performance-benchmarking-with-sysbench.md delete mode 100644 benchmark/v3.0-performance-benchmarking-with-tpcc.md delete mode 100644 benchmark/v4.0-performance-benchmarking-with-tpcc.md delete mode 100644 benchmark/v4.0-performance-benchmarking-with-tpch.md delete mode 100644 benchmark/v5.0-performance-benchmarking-with-tpcc.md delete mode 100644 benchmark/v5.1-performance-benchmarking-with-tpcc.md delete mode 100644 benchmark/v5.2-performance-benchmarking-with-tpcc.md delete mode 100644 benchmark/v5.3-performance-benchmarking-with-tpcc.md delete mode 100644 benchmark/v5.4-performance-benchmarking-with-tpcc.md delete mode 100644 benchmark/v5.4-performance-benchmarking-with-tpch.md delete mode 100644 benchmark/v6.0-performance-benchmarking-with-tpcc.md delete mode 100644 benchmark/v6.0-performance-benchmarking-with-tpch.md delete mode 
100644 benchmark/v6.1-performance-benchmarking-with-tpcc.md delete mode 100644 benchmark/v6.1-performance-benchmarking-with-tpch.md delete mode 100644 benchmark/v6.2-performance-benchmarking-with-tpcc.md delete mode 100644 benchmark/v6.2-performance-benchmarking-with-tpch.md diff --git a/benchmark/benchmark-sysbench-v2.md b/benchmark/benchmark-sysbench-v2.md deleted file mode 100644 index 2fdc749b3aa00..0000000000000 --- a/benchmark/benchmark-sysbench-v2.md +++ /dev/null @@ -1,130 +0,0 @@ ---- -title: TiDB Sysbench Performance Test Report -- v2.0.0 vs. v1.0.0 -aliases: ['/docs/dev/benchmark/benchmark-sysbench-v2/','/docs/dev/benchmark/sysbench-v2/'] -summary: TiDB 2.0 GA outperforms TiDB 1.0 GA in `Select` and `Insert` tests, with a 10% increase in `Select` query performance and a slight improvement in `Insert` query performance. However, the OLTP performance of both versions is almost the same. ---- - -# TiDB Sysbench Performance Test Report -- v2.0.0 vs. v1.0.0 - -## Test purpose - -This test aims to compare the performances of TiDB 1.0 and TiDB 2.0. - -## Test version, time, and place - -TiDB version: v1.0.8 vs. v2.0.0-rc6 - -Time: April 2018 - -Place: Beijing, China - -## Test environment - -IDC machine - -| Type | Name | -| -------- | --------- | -| OS | linux (CentOS 7.3.1611) | -| CPU | 40 vCPUs, Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz | -| RAM | 128GB | -| DISK | Optane 500GB SSD * 1 | - -## Test plan - -### TiDB version information - -### v1.0.8 - -| Component | GitHash | -| -------- | --------- | -| TiDB | 571f0bbd28a0b8155a5ee831992c986b90d21ab7 | -| TiKV | 4ef5889947019e3cb55cc744f487aa63b42540e7 | -| PD | 776bcd940b71d295a2c7ed762582bc3aff7d3c0e | - -### v2.0.0-rc6 - -| Component | GitHash | -| :--------: | :---------: | -| TiDB | 82d35f1b7f9047c478f4e1e82aa0002abc8107e7 | -| TiKV | 7ed4f6a91f92cad5cd5323aaebe7d9f04b77cc79 | -| PD | 2c8e7d7e33b38e457169ce5dfb2f461fced82d65 | - -### TiKV parameter configuration - -- v1.0.8 - - ``` - sync-log = false - grpc-concurrency = 8 - grpc-raft-conn-num = 24 - ``` - -- v2.0.0-rc6 - - ``` - sync-log = false - grpc-concurrency = 8 - grpc-raft-conn-num = 24 - use-delete-range: false - ``` - -### Cluster topology - -| Machine IP | Deployment instance | -|--------------|------------| -| 172.16.21.1 | 1*tidb 1*pd 1*sysbench | -| 172.16.21.2 | 1*tidb 1*pd 1*sysbench | -| 172.16.21.3 | 1*tidb 1*pd 1*sysbench | -| 172.16.11.4 | 1*tikv | -| 172.16.11.5 | 1*tikv | -| 172.16.11.6 | 1*tikv | -| 172.16.11.7 | 1*tikv | -| 172.16.11.8 | 1*tikv | -| 172.16.11.9 | 1*tikv | - -## Test result - -### Standard `Select` test - -| Version | Table count | Table size | Sysbench threads |QPS | Latency (avg/.95) | -| :---: | :---: | :---: | :---: | :---: | :---: | -| v2.0.0-rc6 | 32 | 10 million | 128 * 3 | 201936 | 1.9033 ms/5.67667 ms | -| v2.0.0-rc6 | 32 | 10 million | 256 * 3 | 208130 | 3.69333 ms/8.90333 ms | -| v2.0.0-rc6 | 32 | 10 million | 512 * 3 | 211788 | 7.23333 ms/15.59 ms | -| v2.0.0-rc6 | 32 | 10 million | 1024 * 3 | 212868 | 14.5933 ms/43.2133 ms | -| v1.0.8 | 32 | 10 million | 128 * 3 | 188686 | 2.03667 ms/5.99 ms | -| v1.0.8 | 32 | 10 million | 256 * 3 | 195090 |3.94 ms/9.12 ms | -| v1.0.8 | 32 | 10 million | 512 * 3 | 203012 | 7.57333 ms/15.3733 ms | -| v1.0.8 | 32 | 10 million | 1024 * 3 | 205932 | 14.9267 ms/40.7633 ms | - -According to the statistics above, the `Select` query performance of TiDB 2.0 GA has increased by about 10% at most than that of TiDB 1.0 GA. 
- -### Standard OLTP test - -| Version | Table count | Table size | Sysbench threads | TPS | QPS | Latency (avg/.95) | -| :---: | :---: | :---: | :---: | :---: | :---: | :---:| -| v2.0.0-rc6 | 32 | 10 million | 128 * 3 | 5404.22 | 108084.4 | 87.2033 ms/110 ms | -| v2.0.0-rc6 | 32 | 10 million | 256 * 3 | 5578.165 | 111563.3 | 167.673 ms/275.623 ms | -| v2.0.0-rc6 | 32 | 10 million | 512 * 3 | 5874.045 | 117480.9 | 315.083 ms/674.017 ms | -| v2.0.0-rc6 | 32 | 10 million | 1024 * 3 | 6290.7 | 125814 | 529.183 ms/857.007 ms | -| v1.0.8 | 32 | 10 million | 128 * 3 | 5523.91 | 110478 | 69.53 ms/88.6333 ms | -| v1.0.8 | 32 | 10 million | 256 * 3 | 5969.43 | 119389 |128.63 ms/162.58 ms | -| v1.0.8 | 32 | 10 million | 512 * 3 | 6308.93 | 126179 | 243.543 ms/310.913 ms | -| v1.0.8 | 32 | 10 million | 1024 * 3 | 6444.25 | 128885 | 476.787ms/635.143 ms | - -According to the statistics above, the OLTP performance of TiDB 2.0 GA and TiDB 1.0 GA is almost the same. - -### Standard `Insert` test - -| Version | Table count | Table size | Sysbench threads | QPS | Latency (avg/.95) | -| :---: | :---: | :---: | :---: | :---: | :---: | -| v2.0.0-rc6 | 32 | 10 million | 128 * 3 | 31707.5 | 12.11 ms/21.1167 ms | -| v2.0.0-rc6 | 32 | 10 million | 256 * 3 | 38741.2 | 19.8233 ms/39.65 ms | -| v2.0.0-rc6 | 32 | 10 million | 512 * 3 | 45136.8 | 34.0267 ms/66.84 ms | -| v2.0.0-rc6 | 32 | 10 million | 1024 * 3 | 48667 | 63.1167 ms/121.08 ms | -| v1.0.8 | 32 | 10 million | 128 * 3 | 31125.7 | 12.3367 ms/19.89 ms | -| v1.0.8 | 32 | 10 million | 256 * 3 | 36800 | 20.8667 ms/35.3767 ms | -| v1.0.8 | 32 | 10 million | 512 * 3 | 44123 | 34.8067 ms/63.32 ms | -| v1.0.8 | 32 | 10 million | 1024 * 3 | 48496 | 63.3333 ms/118.92 ms | - -According to the statistics above, the `Insert` query performance of TiDB 2.0 GA has increased slightly than that of TiDB 1.0 GA. diff --git a/benchmark/benchmark-sysbench-v3.md b/benchmark/benchmark-sysbench-v3.md deleted file mode 100644 index fd583653db0d4..0000000000000 --- a/benchmark/benchmark-sysbench-v3.md +++ /dev/null @@ -1,143 +0,0 @@ ---- -title: TiDB Sysbench Performance Test Report -- v2.1 vs. v2.0 -aliases: ['/docs/dev/benchmark/benchmark-sysbench-v3/','/docs/dev/benchmark/sysbench-v3/'] -summary: TiDB 2.1 outperforms TiDB 2.0 in the `Point Select` test, with a 50% increase in query performance. However, the `Update Non-Index` and `Update Index` tests show similar performance between the two versions. The test was conducted in September 2018 in Beijing, China, using a specific test environment and configuration. ---- - -# TiDB Sysbench Performance Test Report -- v2.1 vs. v2.0 - -## Test purpose - -This test aims to compare the performance of TiDB 2.1 and TiDB 2.0 for OLTP where the working set fits in memory. - -## Test version, time, and place - -TiDB version: v2.1.0-rc.2 vs. v2.0.6 - -Time: September, 2018 - -Place: Beijing, China - -## Test environment - -IDC machine: - -| Type | Name | -| :-: | :-: | -| OS | Linux (CentOS 7.3.1611) | -| CPU | 40 vCPUs, Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz | -| RAM | 128GB | -| DISK | Optane 500GB SSD \* 1 | - -Sysbench version: 1.1.0 - -## Test plan - -Use Sysbench to import **16 tables, with 10,000,000 rows in each table**. With the HAProxy, requests are sent to the cluster at an incremental concurrent number. A single concurrent test lasts 5 minutes. 
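The report does not list the exact commands behind this plan. As a minimal sketch, assuming the `oltp_*` Lua scripts bundled with Sysbench 1.1.0 and placeholder variables `$haproxy_host`/`$haproxy_port` for the HAProxy endpoint, the incremental-concurrency run might look like this:

```bash
# Hypothetical driver loop for the test plan above: each workload is run at
# increasing concurrency for 5 minutes (300 s) against the HAProxy endpoint.
for threads in 64 128 256 512 1024; do
    for workload in oltp_point_select oltp_update_non_index oltp_update_index; do
        sysbench "$workload" \
            --threads="$threads" \
            --time=300 \
            --rand-type=uniform \
            --db-driver=mysql \
            --mysql-db=sbtest \
            --mysql-host="$haproxy_host" \
            --mysql-port="$haproxy_port" \
            --mysql-user=root \
            --mysql-password=password \
            run --tables=16 --table-size=10000000
    done
done
```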
- -### TiDB version information - -### v2.1.0-rc.2 - -| Component | GitHash | -| :-: | :-: | -| TiDB | 08e56cd3bae166b2af3c2f52354fbc9818717f62 | -| TiKV | 57e684016dafb17dc8a6837d30224be66cbc7246 | -| PD | 6a7832d2d6e5b2923c79683183e63d030f954563 | - -### v2.0.6 - -| Component | GitHash | -| :-: | :-: | -| TiDB | b13bc08462a584a085f377625a7bab0cc0351570 | -| TiKV | 57c83dc4ebc93d38d77dc8f7d66db224760766cc | -| PD | b64716707b7279a4ae822be767085ff17b5f3fea | - -### TiDB parameter configuration - -The default TiDB configuration is used in both v2.1 and v2.0. - -### TiKV parameter configuration - -The following TiKV configuration is used in both v2.1 and v2.0: - -```txt -[readpool.storage] -normal-concurrency = 8 -[server] -grpc-concurrency = 8 -[raftstore] -sync-log = false -[rocksdb.defaultcf] -block-cache-size = "60GB" -[rocksdb.writecf] -block-cache-size = "20GB" -``` - -### Cluster topology - -| Machine IP | Deployment instance | -| :-: | :-: | -| 172.16.30.31 | 1\*Sysbench 1\*HAProxy | -| 172.16.30.32 | 1\*TiDB 1\*pd 1\*TiKV | -| 172.16.30.33 | 1\*TiDB 1\*TiKV | -| 172.16.30.34 | 1\*TiDB 1\*TiKV | - -## Test result - -### `Point Select` test - -| Version | Threads | QPS | 95% Latency (ms) | -| :-: | :-: | :-: | :-: | -| v2.1 | 64 | 111481.09 | 1.16 | -| v2.1 | 128 | 145102.62 | 2.52 | -| v2.1 | 256 | 161311.9 | 4.57 | -| v2.1 | 512 | 184991.19 | 7.56 | -| v2.1 | 1024 | 230282.74 | 10.84 | -| v2.0 | 64 | 75285.87 | 1.93 | -| v2.0 | 128 | 92141.79 | 3.68 | -| v2.0 | 256 | 107464.93 | 6.67 | -| v2.0 | 512 | 121350.61 | 11.65 | -| v2.0 | 1024 | 150036.31 | 17.32 | - -![point select](/media/sysbench_v3_point_select.png) - -According to the statistics above, the `Point Select` query performance of TiDB 2.1 has increased by **50%** than that of TiDB 2.0. - -### `Update Non-Index` test - -| Version | Threads | QPS | 95% Latency (ms) | -| :-: | :-: | :-: | :-: | -| v2.1 | 64 | 18946.09 | 5.77 | -| v2.1 | 128 | 22022.82 | 12.08 | -| v2.1 | 256 | 24679.68 | 25.74 | -| v2.1 | 512 | 25107.1 | 51.94 | -| v2.1 | 1024 | 27144.92 | 106.75 | -| v2.0 | 64 | 16316.85 | 6.91 | -| v2.0 | 128 | 20944.6 | 11.45 | -| v2.0 | 256 | 24017.42 | 23.1 | -| v2.0 | 512 | 25994.33 | 46.63 | -| v2.0 | 1024 | 27917.52 | 92.42 | - -![update non-index](/media/sysbench_v3_update_non_index.png) - -According to the statistics above, the `Update Non-Index` write performance of TiDB 2.1 and TiDB 2.0 is almost the same. - -### `Update Index` test - -| Version | Threads | QPS | 95% Latency (ms) | -| :-: | :-: | :-: | :-: | -| v2.1 | 64 | 9934.49 | 12.08 | -| v2.1 | 128 | 10505.95 | 25.28 | -| v2.1 | 256 | 11007.7 | 55.82 | -| v2.1 | 512 | 11198.81 | 106.75 | -| v2.1 | 1024 | 11591.89 | 200.47 | -| v2.0 | 64 | 9754.68 | 11.65 | -| v2.0 | 128 | 10603.31 | 24.38 | -| v2.0 | 256 | 11011.71 | 50.11 | -| v2.0 | 512 | 11162.63 | 104.84 | -| v2.0 | 1024 | 12067.63 | 179.94 | - -![update index](/media/sysbench_v3_update_index.png) - -According to the statistics above, the `Update Index` write performance of TiDB 2.1 and TiDB 2.0 is almost the same. diff --git a/benchmark/benchmark-sysbench-v4-vs-v3.md b/benchmark/benchmark-sysbench-v4-vs-v3.md deleted file mode 100644 index 41fd3a544de22..0000000000000 --- a/benchmark/benchmark-sysbench-v4-vs-v3.md +++ /dev/null @@ -1,208 +0,0 @@ ---- -title: TiDB Sysbench Performance Test Report -- v4.0 vs. v3.0 -summary: Compare the Sysbench performance of TiDB 4.0 and TiDB 3.0. -aliases: ['/docs/dev/benchmark/benchmark-sysbench-v4-vs-v3/'] ---- - -# TiDB Sysbench Performance Test Report -- v4.0 vs. 
v3.0 - -## Test purpose - -This test aims to compare the Sysbench performance of TiDB 4.0 and TiDB 3.0 in the Online Transactional Processing (OLTP) scenario. - -## Test environment (AWS EC2) - -### Hardware configuration - -| Service type | EC2 type | Instance count | -|:----------|:----------|:----------| -| PD | m5.xlarge | 3 | -| TiKV | i3.4xlarge| 3 | -| TiDB | c5.4xlarge| 3 | -| Sysbench | m5.4xlarge| 1 | - -### Software version - -| Service type | Software version | -|:----------|:-----------| -| PD | 3.0 and 4.0 | -| TiDB | 3.0 and 4.0 | -| TiKV | 3.0 and 4.0 | -| Sysbench | 1.0.20 | - -### Parameter configuration - -#### TiDB v3.0 configuration - -{{< copyable "" >}} - -```yaml -log.level: "error" -performance.max-procs: 20 -prepared-plan-cache.enabled: true -tikv-client.max-batch-wait-time: 2000000 -``` - -#### TiKV v3.0 configuration - -{{< copyable "" >}} - -```yaml -storage.scheduler-worker-pool-size: 5 -raftstore.store-pool-size: 3 -raftstore.apply-pool-size: 3 -rocksdb.max-background-jobs: 3 -raftdb.max-background-jobs: 3 -raftdb.allow-concurrent-memtable-write: true -server.grpc-concurrency: 6 -readpool.storage.normal-concurrency: 10 -readpool.coprocessor.normal-concurrency: 5 -``` - -#### TiDB v4.0 configuration - -{{< copyable "" >}} - -```yaml -log.level: "error" -performance.max-procs: 20 -prepared-plan-cache.enabled: true -tikv-client.max-batch-wait-time: 2000000 -``` - -#### TiKV v4.0 configuration - -{{< copyable "" >}} - -```yaml -storage.scheduler-worker-pool-size: 5 -raftstore.store-pool-size: 3 -raftstore.apply-pool-size: 3 -rocksdb.max-background-jobs: 3 -raftdb.max-background-jobs: 3 -raftdb.allow-concurrent-memtable-write: true -server.grpc-concurrency: 6 -readpool.unified.min-thread-count: 5 -readpool.unified.max-thread-count: 20 -readpool.storage.normal-concurrency: 10 -pessimistic-txn.pipelined: true -``` - -#### Global variable configuration - -{{< copyable "sql" >}} - -```sql -set global tidb_hashagg_final_concurrency=1; -set global tidb_hashagg_partial_concurrency=1; -set global tidb_disable_txn_auto_retry=0; -``` - -## Test plan - -1. Deploy TiDB v4.0 and v3.0 using TiUP. -2. Use Sysbench to import 16 tables, each table with 10 million rows of data. -3. Execute the `analyze table` statement on each table. -4. Back up the data used for restore before different concurrency tests, which ensures data consistency for each test. -5. Start the Sysbench client to perform the `point_select`, `read_write`, `update_index`, and `update_non_index` tests. Perform stress tests on TiDB via AWS NLB. In each type of test, the warm-up takes 1 minute and the test takes 5 minutes. -6. After each type of test is completed, stop the cluster, overwrite the cluster with the backup data in step 4, and restart the cluster. - -### Prepare test data - -Execute the following command to prepare the test data: - -{{< copyable "shell-regular" >}} - -```bash -sysbench oltp_common \ - --threads=16 \ - --rand-type=uniform \ - --db-driver=mysql \ - --mysql-db=sbtest \ - --mysql-host=$aws_nlb_host \ - --mysql-port=$aws_nlb_port \ - --mysql-user=root \ - --mysql-password=password \ - prepare --tables=16 --table-size=10000000 -``` - -### Perform the test - -Execute the following command to perform the test. 
- -{{< copyable "shell-regular" >}} - -```bash -sysbench $testname \ - --threads=$threads \ - --time=300 \ - --report-interval=1 \ - --rand-type=uniform \ - --db-driver=mysql \ - --mysql-db=sbtest \ - --mysql-host=$aws_nlb_host \ - --mysql-port=$aws_nlb_port \ - run --tables=16 --table-size=10000000 -``` - -## Test results - -### Point Select performance - -| Threads | v3.0 QPS | v3.0 95% latency (ms) | v4.0 QPS | v4.0 95% latency (ms) | QPS improvement | -|:----------|:----------|:----------|:----------|:----------|:----------| -| 150 | 117085.701 | 1.667 | 118165.1357 | 1.608 | 0.92% | -| 300 | 200621.4471| 2.615 | 207774.0859 | 2.032 | 3.57% | -| 600 | 283928.9323| 4.569 | 320673.342 | 3.304 | 12.94% | -| 900 | 343218.2624| 6.686 | 383913.3855 | 4.652 | 11.86% | -| 1200 | 347200.2366| 8.092 | 408929.4372 | 6.318 | 17.78% | -| 1500 | 366406.2767| 10.562 | 418268.8856 | 7.985 | 14.15% | - -Compared with v3.0, the Point Select performance of TiDB v4.0 has increased by 14%. - -![Point Select](/media/sysbench-v4vsv3-point-select.png) - -### Update Non-index performance - -| Threads | v3.0 QPS | v3.0 95% latency (ms) | v4.0 QPS | v4.0 95% latency (ms) | QPS improvement | -|:----------|:----------|:----------|:----------|:----------|:----------| -| 150 | 15446.41024 | 11.446 | 16954.39971 | 10.844 | 9.76% | -| 300 | 22276.15572| 17.319 | 24364.44689 | 16.706 | 9.37% | -| 600 | 28784.88353| 29.194 | 31635.70833 | 28.162 | 9.90% | -| 900 | 32194.15548| 42.611 | 35787.66078 | 38.942 | 11.16% | -| 1200 | 33954.69114| 58.923 | 38552.63158 | 51.018 | 13.54% | -| 1500 | 35412.0032| 74.464 | 40859.63755 | 62.193 | 15.38% | - -Compared with v3.0, the Update Non-index performance of TiDB v4.0 has increased by 15%. - -![Update Non-index](/media/sysbench-v4vsv3-update-non-index.png) - -### Update Index performance - -| Threads | v3.0 QPS | v3.0 95% latency (ms) | v4.0 QPS | v4.0 95% latency (ms) | QPS improvement | -|:----------|:----------|:----------|:----------|:----------|:----------| -| 150 | 11164.40571 | 16.706 | 11954.73635 | 16.408 | 7.08% | -| 300 | 14460.98057| 28.162 | 15243.40899 | 28.162 | 5.41% | -| 600 | 17112.73036| 53.85 | 18535.07515 | 50.107 | 8.31% | -| 900 | 18233.83426| 86.002 | 20339.6901 | 70.548 | 11.55% | -| 1200 | 18622.50283| 127.805 | 21390.25122 | 94.104 | 14.86% | -| 1500 | 18980.34447| 170.479 | 22359.996 | 114.717 | 17.81% | - -Compared with v3.0, the Update Index performance of TiDB v4.0 has increased by 17%. - -![Update Index](/media/sysbench-v4vsv3-update-index.png) - -### Read-write performance - -| Threads | v3.0 QPS | v3.0 95% latency (ms) | v4.0 QPS | v4.0 95% latency (ms) | QPS improvement | -|:----------|:----------|:----------|:----------|:----------|:----------| -| 150 | 43768.33633 | 71.83 | 53912.63705 | 59.993 | 23.18% | -| 300 | 55655.63589| 121.085 | 71327.21336 | 97.555 | 28.16% | -| 600 | 64642.96992| 223.344 | 84487.75483 | 176.731 | 30.70% | -| 900 | 68947.25293| 325.984 | 90177.94612 | 257.95 | 30.79% | -| 1200 | 71334.80099| 434.829 | 92779.71507 | 344.078 | 30.06% | -| 1500 | 72069.9115| 580.017 | 95088.50812 | 434.829 | 31.94% | - -Compared with v3.0, the read-write performance of TiDB v4.0 has increased by 31%. 
- -![Read Write](/media/sysbench-v4vsv3-read-write.png) diff --git a/benchmark/benchmark-sysbench-v5-vs-v4.md b/benchmark/benchmark-sysbench-v5-vs-v4.md deleted file mode 100644 index 87900a3a21222..0000000000000 --- a/benchmark/benchmark-sysbench-v5-vs-v4.md +++ /dev/null @@ -1,223 +0,0 @@ ---- -title: TiDB Sysbench Performance Test Report -- v5.0 vs. v4.0 -summary: TiDB v5.0 outperforms v4.0 in Sysbench performance tests. Point Select performance improved by 2.7%, Update Non-index by 81%, Update Index by 28%, and Read Write by 9%. The test aimed to compare performance in the OLTP scenario using AWS EC2. Test results were presented in tables and graphs. ---- - -# TiDB Sysbench Performance Test Report -- v5.0 vs. v4.0 - -## Test purpose - -This test aims at comparing the Sysbench performance of TiDB v5.0 and TiDB v4.0 in the Online Transactional Processing (OLTP) scenario. - -## Test environment (AWS EC2) - -### Hardware configuration - -| Service type | EC2 type | Instance count | -|:----------|:----------|:----------| -| PD | m5.xlarge | 3 | -| TiKV | i3.4xlarge| 3 | -| TiDB | c5.4xlarge| 3 | -| Sysbench | c5.9xlarge| 1 | - -### Software version - -| Service type | Software version | -|:----------|:-----------| -| PD | 4.0 and 5.0 | -| TiDB | 4.0 and 5.0 | -| TiKV | 4.0 and 5.0 | -| Sysbench | 1.0.20 | - -### Parameter configuration - -#### TiDB v4.0 configuration - -{{< copyable "" >}} - -```yaml -log.level: "error" -performance.max-procs: 20 -prepared-plan-cache.enabled: true -tikv-client.max-batch-wait-time: 2000000 -``` - -#### TiKV v4.0 configuration - -{{< copyable "" >}} - -```yaml -storage.scheduler-worker-pool-size: 5 -raftstore.store-pool-size: 3 -raftstore.apply-pool-size: 3 -rocksdb.max-background-jobs: 3 -raftdb.max-background-jobs: 3 -raftdb.allow-concurrent-memtable-write: true -server.grpc-concurrency: 6 -readpool.unified.min-thread-count: 5 -readpool.unified.max-thread-count: 20 -readpool.storage.normal-concurrency: 10 -pessimistic-txn.pipelined: true -``` - -#### TiDB v5.0 configuration - -{{< copyable "" >}} - -```yaml -log.level: "error" -performance.max-procs: 20 -prepared-plan-cache.enabled: true -tikv-client.max-batch-wait-time: 2000000 -``` - -#### TiKV v5.0 configuration - -{{< copyable "" >}} - -```yaml -storage.scheduler-worker-pool-size: 5 -raftstore.store-pool-size: 3 -raftstore.apply-pool-size: 3 -rocksdb.max-background-jobs: 8 -raftdb.max-background-jobs: 4 -raftdb.allow-concurrent-memtable-write: true -server.grpc-concurrency: 6 -readpool.unified.min-thread-count: 5 -readpool.unified.max-thread-count: 20 -readpool.storage.normal-concurrency: 10 -pessimistic-txn.pipelined: true -server.enable-request-batch: false -``` - -#### TiDB v4.0 global variable configuration - -{{< copyable "sql" >}} - -```sql -set global tidb_hashagg_final_concurrency=1; -set global tidb_hashagg_partial_concurrency=1; -``` - -#### TiDB v5.0 global variable configuration - -{{< copyable "sql" >}} - -```sql -set global tidb_hashagg_final_concurrency=1; -set global tidb_hashagg_partial_concurrency=1; -set global tidb_enable_async_commit = 1; -set global tidb_enable_1pc = 1; -set global tidb_guarantee_linearizability = 0; -set global tidb_enable_clustered_index = 1; - -``` - -## Test plan - -1. Deploy TiDB v5.0 and v4.0 using TiUP. -2. Use Sysbench to import 16 tables, each table with 10 million rows of data. -3. Execute the `analyze table` statement on each table. -4. Back up the data used for restore before different concurrency tests, which ensures data consistency for each test. -5. 
Start the Sysbench client to perform the `point_select`, `read_write`, `update_index`, and `update_non_index` tests. Perform stress tests on TiDB via AWS NLB. In each type of test, the warm-up takes 1 minute and the test takes 5 minutes. -6. After each type of test is completed, stop the cluster, overwrite the cluster with the backup data in step 4, and restart the cluster. - -### Prepare test data - -Execute the following command to prepare the test data: - -{{< copyable "shell-regular" >}} - -```bash -sysbench oltp_common \ - --threads=16 \ - --rand-type=uniform \ - --db-driver=mysql \ - --mysql-db=sbtest \ - --mysql-host=$aws_nlb_host \ - --mysql-port=$aws_nlb_port \ - --mysql-user=root \ - --mysql-password=password \ - prepare --tables=16 --table-size=10000000 -``` - -### Perform the test - -Execute the following command to perform the test. - -{{< copyable "shell-regular" >}} - -```bash -sysbench $testname \ - --threads=$threads \ - --time=300 \ - --report-interval=1 \ - --rand-type=uniform \ - --db-driver=mysql \ - --mysql-db=sbtest \ - --mysql-host=$aws_nlb_host \ - --mysql-port=$aws_nlb_port \ - run --tables=16 --table-size=10000000 -``` - -## Test results - -### Point Select performance - -| Threads | v4.0 QPS | v4.0 95% latency (ms) | v5.0 QPS | v5.0 95% latency (ms) | QPS improvement | -|:----------|:----------|:----------|:----------|:----------|:----------| -| 150 | 159451.19 | 1.32 | 177876.25 | 1.23 | 11.56% | -| 300 | 244790.38 | 1.96 | 252675.03 | 1.82 | 3.22% | -| 600 | 322929.05 | 3.75 | 331956.84 | 3.36 | 2.80% | -| 900 | 364840.05 | 5.67 | 365655.04 | 5.09 | 0.22% | -| 1200 | 376529.18 | 7.98 | 366507.47 | 7.04 | -2.66% | -| 1500 | 368390.52 | 10.84 | 372476.35 | 8.90 | 1.11% | - -Compared with v4.0, the Point Select performance of TiDB v5.0 has increased by 2.7%. - -![Point Select](/media/sysbench_v5vsv4_point_select.png) - -### Update Non-index performance - -| Threads | v4.0 QPS | v4.0 95% latency (ms) | v5.0 QPS | v5.0 95% latency (ms) | QPS improvement | -|:----------|:----------|:----------|:----------|:----------|:----------| -| 150 | 17243.78 | 11.04 | 30866.23 | 6.91 | 79.00% | -| 300 | 25397.06 | 15.83 | 45915.39 | 9.73 | 80.79% | -| 600 | 33388.08 | 25.28 | 60098.52 | 16.41 | 80.00% | -| 900 | 38291.75 | 36.89 | 70317.41 | 21.89 | 83.64% | -| 1200 | 41003.46 | 55.82 | 76376.22 | 28.67 | 86.27% | -| 1500 | 44702.84 | 62.19 | 80234.58 | 34.95 | 79.48% | - -Compared with v4.0, the Update Non-index performance of TiDB v5.0 has increased by 81%. - -![Update Non-index](/media/sysbench_v5vsv4_update_non_index.png) - -### Update Index performance - -| Threads | v4.0 QPS | v4.0 95% latency (ms) | v5.0 QPS | v5.0 95% latency (ms) | QPS improvement | -|:----------|:----------|:----------|:----------|:----------|:----------| -| 150 | 11736.21 | 17.01 | 15631.34 | 17.01 | 33.19% | -| 300 | 15435.95 | 28.67 | 19957.06 | 22.69 | 29.29% | -| 600 | 18983.21 | 49.21 | 23218.14 | 41.85 | 22.31% | -| 900 | 20855.29 | 74.46 | 26226.76 | 53.85 | 25.76% | -| 1200 | 21887.64 | 102.97 | 28505.41 | 69.29 | 30.24% | -| 1500 | 23621.15 | 110.66 | 30341.06 | 82.96 | 28.45% | - -Compared with v4.0, the Update Index performance of TiDB v5.0 has increased by 28%. 
- -![Update Index](/media/sysbench_v5vsv4_update_index.png) - -### Read Write performance - -| Threads | v4.0 QPS | v4.0 95% latency (ms) | v5.0 QPS | v5.0 95% latency (ms) | QPS improvement | -|:----------|:----------|:----------|:----------|:----------|:----------| -| 150 | 59979.91 | 61.08 | 66098.57 | 55.82 | 10.20% | -| 300 | 77118.32 | 102.97 | 84639.48 | 90.78 | 9.75% | -| 600 | 90619.52 | 183.21 | 101477.46 | 167.44 | 11.98% | -| 900 | 97085.57 | 267.41 | 109463.46 | 240.02 | 12.75% | -| 1200 | 106521.61 | 331.91 | 115416.05 | 320.17 | 8.35% | -| 1500 | 116278.96 | 363.18 | 118807.5 | 411.96 | 2.17% | - -Compared with v4.0, the read-write performance of TiDB v5.0 has increased by 9%. - -![Read Write](/media/sysbench_v5vsv4_read_write.png) diff --git a/benchmark/benchmark-sysbench-v5.1.0-vs-v5.0.2.md b/benchmark/benchmark-sysbench-v5.1.0-vs-v5.0.2.md deleted file mode 100644 index a9d398939913f..0000000000000 --- a/benchmark/benchmark-sysbench-v5.1.0-vs-v5.0.2.md +++ /dev/null @@ -1,186 +0,0 @@ ---- -title: TiDB Sysbench Performance Test Report -- v5.1.0 vs. v5.0.2 -summary: TiDB v5.1.0 shows a 19.4% improvement in Point Select performance compared to v5.0.2. However, the Read Write and Update Index performance is slightly reduced in v5.1.0. The test was conducted on AWS EC2 using Sysbench with specific hardware and software configurations. The test plan involved deploying, importing data, and performing stress tests. Overall, v5.1.0 demonstrates improved Point Select performance but reduced performance in other areas. ---- - -# TiDB Sysbench Performance Test Report -- v5.1.0 vs. v5.0.2 - -## Test overview - -This test aims at comparing the Sysbench performance of TiDB v5.1.0 and TiDB v5.0.2 in the Online Transactional Processing (OLTP) scenario. The results show that compared with v5.0.2, the Point Select performance of v5.1.0 is improved by 19.4%, and the performance of the Read Write and Update Index is slightly reduced. - -## Test environment (AWS EC2) - -### Hardware configuration - -| Service type | EC2 type | Instance count | -|:----------|:----------|:----------| -| PD | m5.xlarge | 3 | -| TiKV | i3.4xlarge| 3 | -| TiDB | c5.4xlarge| 3 | -| Sysbench | c5.9xlarge| 1 | - -### Software version - -| Service type | Software version | -|:----------|:-----------| -| PD | v5.0.2 and v5.1.0 | -| TiDB | v5.0.2 and v5.1.0 | -| TiKV | v5.0.2 and v5.1.0 | -| Sysbench | 1.0.20 | - -### Parameter configuration - -TiDB v5.1.0 and TiDB v5.0.2 use the same configuration. 
- -#### TiDB parameter configuration - -{{< copyable "" >}} - -```yaml -log.level: "error" -performance.max-procs: 20 -prepared-plan-cache.enabled: true -tikv-client.max-batch-wait-time: 2000000 -``` - -#### TiKV parameter configuration - -{{< copyable "" >}} - -```yaml -storage.scheduler-worker-pool-size: 5 -raftstore.store-pool-size: 3 -raftstore.apply-pool-size: 3 -rocksdb.max-background-jobs: 8 -raftdb.max-background-jobs: 4 -raftdb.allow-concurrent-memtable-write: true -server.grpc-concurrency: 6 -readpool.unified.min-thread-count: 5 -readpool.unified.max-thread-count: 20 -readpool.storage.normal-concurrency: 10 -pessimistic-txn.pipelined: true -server.enable-request-batch: false -``` - -#### TiDB global variable configuration - -{{< copyable "sql" >}} - -```sql -set global tidb_hashagg_final_concurrency=1; -set global tidb_hashagg_partial_concurrency=1; -set global tidb_enable_async_commit = 1; -set global tidb_enable_1pc = 1; -set global tidb_guarantee_linearizability = 0; -set global tidb_enable_clustered_index = 1; -``` - -## Test plan - -1. Deploy TiDB v5.1.0 and v5.0.2 using TiUP. -2. Use Sysbench to import 16 tables, each table with 10 million rows of data. -3. Execute the `analyze table` statement on each table. -4. Back up the data used for restore before different concurrency tests, which ensures data consistency for each test. -5. Start the Sysbench client to perform the `point_select`, `read_write`, `update_index`, and `update_non_index` tests. Perform stress tests on TiDB via HAProxy. The test takes 5 minutes. -6. After each type of test is completed, stop the cluster, overwrite the cluster with the backup data in step 4, and restart the cluster. - -### Prepare test data - -Execute the following command to prepare the test data: - -{{< copyable "shell-regular" >}} - -```bash -sysbench oltp_common \ - --threads=16 \ - --rand-type=uniform \ - --db-driver=mysql \ - --mysql-db=sbtest \ - --mysql-host=$aws_nlb_host \ - --mysql-port=$aws_nlb_port \ - --mysql-user=root \ - --mysql-password=password \ - prepare --tables=16 --table-size=10000000 -``` - -### Perform the test - -Execute the following command to perform the test: - -{{< copyable "shell-regular" >}} - -```bash -sysbench $testname \ - --threads=$threads \ - --time=300 \ - --report-interval=1 \ - --rand-type=uniform \ - --db-driver=mysql \ - --mysql-db=sbtest \ - --mysql-host=$aws_nlb_host \ - --mysql-port=$aws_nlb_port \ - run --tables=16 --table-size=10000000 -``` - -## Test results - -### Point Select performance - -| Threads | v5.0.2 QPS | v5.0.2 95% latency (ms) | v5.1.0 QPS | v5.1.0 95% latency (ms) | QPS improvement | -|:----------|:----------|:----------|:----------|:----------|:----------| -|150|137732.27|1.86|158861.67|2|15.34%| -|300|201420.58|2.91|238038.44|2.71|18.18%| -|600|303631.52|3.49|428573.21|2.07|41.15%| -|900|383628.13|3.55|464863.22|3.89|21.18%| -|1200|391451.54|5.28|413656.74|13.46|5.67%| -|1500|410276.93|7.43|471418.78|10.65|14.90%| - -Compared with v5.0.2, the Point Select performance of v5.1.0 is improved by 19.4%. 
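The 19.4% headline figure matches the arithmetic mean of the per-concurrency QPS improvements in the table above; assuming that is how it was derived, a quick check:

```bash
# Average of the per-thread QPS improvement percentages listed above.
echo "15.34 18.18 41.15 21.18 5.67 14.90" \
    | awk '{ for (i = 1; i <= NF; i++) sum += $i; printf "%.1f%%\n", sum / NF }'
# Output: 19.4%
```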
- -![Point Select](/media/sysbench_v510vsv502_point_select.png) - -### Update Non-index performance - -| Threads | v5.0.2 QPS | v5.0.2 95% latency (ms) | v5.1.0 QPS | v5.1.0 95% latency (ms) | QPS improvement | -|:----------|:----------|:----------|:----------|:----------|:----------| -|150|29248.2|7.17|29362.7|8.13|0.39%| -|300|40316.09|12.52|39651.52|13.7|-1.65%| -|600|51011.11|22.28|47047.9|27.66|-7.77%| -|900|58814.16|27.66|59331.84|28.67|0.88%| -|1200|65286.52|32.53|67745.39|31.37|3.77%| -|1500|68300.86|39.65|67899.17|44.17|-0.59%| - -Compared with v5.0.2, the Update Non-index performance of v5.1.0 is reduced by 0.8%. - -![Update Non-index](/media/sysbench_v510vsv502_update_non_index.png) - -### Update Index performance - -| Threads | v5.0.2 QPS | v5.0.2 95% latency (ms) | v5.1.0 QPS | v5.1.0 95% latency (ms) | QPS improvement | -|:----------|:----------|:----------|:----------|:----------|:----------| -|150|15066.54|14.73|14829.31|14.73|-1.57%| -|300|18535.92|24.83|17401.01|29.72|-6.12%| -|600|22862.73|41.1|21923.78|44.98|-4.11%| -|900|25286.74|57.87|24916.76|58.92|-1.46%| -|1200|27566.18|70.55|27800.62|69.29|0.85%| -|1500|28184.76|92.42|28679.72|86|1.76%| - -Compared with v5.0.2, the Update Index performance of v5.1.0 is reduced by 1.8%. - -![Update Index](/media/sysbench_v510vsv502_update_index.png) - -### Read Write performance - -| Threads | v5.0.2 QPS | v5.0.2 95% latency (ms) | v5.1.0 QPS | v5.1.0 95% latency (ms) | QPS improvement | -|:----------|:----------|:----------|:----------|:----------|:----------| -|150|66415.33|56.84|66591.49|57.87|0.27%| -|300|82488.39|97.55|81226.41|101.13|-1.53%| -|600|99195.36|173.58|97357.86|179.94|-1.85%| -|900|107382.76|253.35|101665.95|267.41|-5.32%| -|1200|112389.23|337.94|107426.41|350.33|-4.42%| -|1500|113548.73|450.77|109805.26|442.73|-3.30%| - -Compared with v5.0.2, the Read Write performance of v5.1.0 is reduced by 2.7%. - -![Read Write](/media/sysbench_v510vsv502_read_write.png) diff --git a/benchmark/benchmark-sysbench-v5.2.0-vs-v5.1.1.md b/benchmark/benchmark-sysbench-v5.2.0-vs-v5.1.1.md deleted file mode 100644 index ae539a95e551b..0000000000000 --- a/benchmark/benchmark-sysbench-v5.2.0-vs-v5.1.1.md +++ /dev/null @@ -1,186 +0,0 @@ ---- -title: TiDB Sysbench Performance Test Report -- v5.2.0 vs. v5.1.1 -summary: TiDB v5.2.0 shows an 11.03% improvement in Point Select performance compared to v5.1.1. However, other scenarios show a slight reduction in performance. The hardware and software configurations, test plan, and results are detailed in the report. ---- - -# TiDB Sysbench Performance Test Report -- v5.2.0 vs. v5.1.1 - -## Test overview - -This test aims at comparing the Sysbench performance of TiDB v5.2.0 and TiDB v5.1.1 in the Online Transactional Processing (OLTP) scenario. The results show that compared with v5.1.1, the Point Select performance of v5.2.0 is improved by 11.03%, and the performance of other scenarios is slightly reduced. - -## Test environment (AWS EC2) - -### Hardware configuration - -| Service type | EC2 type | Instance count | -|:----------|:----------|:----------| -| PD | m5.xlarge | 3 | -| TiKV | i3.4xlarge| 3 | -| TiDB | c5.4xlarge| 3 | -| Sysbench | c5.9xlarge| 1 | - -### Software version - -| Service type | Software version | -|:----------|:-----------| -| PD | v5.1.1 and v5.2.0 | -| TiDB | v5.1.1 and v5.2.0 | -| TiKV | v5.1.1 and v5.2.0 | -| Sysbench | 1.1.0-ead2689 | - -### Parameter configuration - -TiDB v5.2.0 and TiDB v5.1.1 use the same configuration. 
- -#### TiDB parameter configuration - -{{< copyable "" >}} - -```yaml -log.level: "error" -performance.max-procs: 20 -prepared-plan-cache.enabled: true -tikv-client.max-batch-wait-time: 2000000 -``` - -#### TiKV parameter configuration - -{{< copyable "" >}} - -```yaml -storage.scheduler-worker-pool-size: 5 -raftstore.store-pool-size: 3 -raftstore.apply-pool-size: 3 -rocksdb.max-background-jobs: 8 -raftdb.max-background-jobs: 4 -raftdb.allow-concurrent-memtable-write: true -server.grpc-concurrency: 6 -readpool.unified.min-thread-count: 5 -readpool.unified.max-thread-count: 20 -readpool.storage.normal-concurrency: 10 -pessimistic-txn.pipelined: true -server.enable-request-batch: false -``` - -#### TiDB global variable configuration - -{{< copyable "sql" >}} - -```sql -set global tidb_hashagg_final_concurrency=1; -set global tidb_hashagg_partial_concurrency=1; -set global tidb_enable_async_commit = 1; -set global tidb_enable_1pc = 1; -set global tidb_guarantee_linearizability = 0; -set global tidb_enable_clustered_index = 1; -``` - -## Test plan - -1. Deploy TiDB v5.2.0 and v5.1.1 using TiUP. -2. Use Sysbench to import 16 tables, each table with 10 million rows of data. -3. Execute the `analyze table` statement on each table. -4. Back up the data used for restore before different concurrency tests, which ensures data consistency for each test. -5. Start the Sysbench client to perform the `point_select`, `read_write`, `update_index`, and `update_non_index` tests. Perform stress tests on TiDB via HAProxy. The test takes 5 minutes. -6. After each type of test is completed, stop the cluster, overwrite the cluster with the backup data in step 4, and restart the cluster. - -### Prepare test data - -Execute the following command to prepare the test data: - -{{< copyable "shell-regular" >}} - -```bash -sysbench oltp_common \ - --threads=16 \ - --rand-type=uniform \ - --db-driver=mysql \ - --mysql-db=sbtest \ - --mysql-host=$aws_nlb_host \ - --mysql-port=$aws_nlb_port \ - --mysql-user=root \ - --mysql-password=password \ - prepare --tables=16 --table-size=10000000 -``` - -### Perform the test - -Execute the following command to perform the test: - -{{< copyable "shell-regular" >}} - -```bash -sysbench $testname \ - --threads=$threads \ - --time=300 \ - --report-interval=1 \ - --rand-type=uniform \ - --db-driver=mysql \ - --mysql-db=sbtest \ - --mysql-host=$aws_nlb_host \ - --mysql-port=$aws_nlb_port \ - run --tables=16 --table-size=10000000 -``` - -## Test results - -### Point Select performance - -| Threads | v5.1.1 QPS | v5.1.1 95% latency (ms) | v5.2.0 QPS | v5.2.0 95% latency (ms) | QPS improvement | -|:----------|:----------|:----------|:----------|:----------|:----------| -|150|143014.13|2.35|174402.5|1.23|21.95%| -|300|199133.06|3.68|272018|1.64|36.60%| -|600|389391.65|2.18|393536.4|2.11|1.06%| -|900|468338.82|2.97|447981.98|3.3|-4.35%| -|1200|448348.52|5.18|468241.29|4.65|4.44%| -|1500|454376.79|7.04|483888.42|6.09|6.49%| - -Compared with v5.1.1, the Point Select performance of v5.2.0 is improved by 11.03%. 
- -![Point Select](/media/sysbench_v511vsv520_point_select.png) - -### Update Non-index performance - -| Threads | v5.1.1 QPS | v5.1.1 95% latency (ms) | v5.2.0 QPS | v5.2.0 95% latency (ms) | QPS improvement | -|:----------|:----------|:----------|:----------|:----------|:----------| -|150|31198.68|6.43|30714.73|6.09|-1.55%| -|300|43577.15|10.46|42997.92|9.73|-1.33%| -|600|57230.18|17.32|56168.81|16.71|-1.85%| -|900|65325.11|23.1|64098.04|22.69|-1.88%| -|1200|71528.26|28.67|69908.15|28.67|-2.26%| -|1500|76652.5|33.12|74371.79|33.72|-2.98%| - -Compared with v5.1.1, the Update Non-index performance of v5.2.0 is reduced by 1.98%. - -![Update Non-index](/media/sysbench_v511vsv520_update_non_index.png) - -### Update Index performance - -| Threads | v5.1.1 QPS | v5.1.1 95% latency (ms) | v5.2.0 QPS | v5.2.0 95% latency (ms) | QPS improvement | -|:----------|:----------|:----------|:----------|:----------|:----------| -|150|15641.04|13.22|15320|13.46|-2.05%| -|300|19787.73|21.89|19161.35|22.69|-3.17%| -|600|24566.74|36.89|23616.07|38.94|-3.87%| -|900|27516.57|50.11|26270.04|54.83|-4.53%| -|1200|29421.10|63.32|28002.65|69.29|-4.82%| -|1500|30957.84|77.19|28624.44|95.81|-7.54%| - -Compared with v5.1.1, the Update Index performance of v5.2.0 is reduced by 4.33%. - -![Update Index](/media/sysbench_v511vsv520_update_index.png) - -### Read Write performance - -| Threads | v5.1.1 QPS | v5.1.1 95% latency (ms) | v5.2.0 QPS | v5.2.0 95% latency (ms) | QPS improvement | -|:----------|:----------|:----------|:----------|:----------|:----------| -|150|68471.02|57.87|69246|54.83|1.13%| -|300|86573.09|97.55|85340.42|94.10|-1.42%| -|600|101760.75|176.73|102221.31|173.58|0.45%| -|900|111877.55|248.83|109276.45|257.95|-2.32%| -|1200|117479.4|337.94|114231.33|344.08|-2.76%| -|1500|119662.91|419.45|116663.28|434.83|-2.51%| - -Compared with v5.1.1, the Read Write performance of v5.2.0 is reduced by 1.24%. - -![Read Write](/media/sysbench_v511vsv520_read_write.png) diff --git a/benchmark/benchmark-sysbench-v5.3.0-vs-v5.2.2.md b/benchmark/benchmark-sysbench-v5.3.0-vs-v5.2.2.md deleted file mode 100644 index 8759d0928e53e..0000000000000 --- a/benchmark/benchmark-sysbench-v5.3.0-vs-v5.2.2.md +++ /dev/null @@ -1,203 +0,0 @@ ---- -title: TiDB Sysbench Performance Test Report -- v5.3.0 vs. v5.2.2 -summary: TiDB v5.3.0 and v5.2.2 were compared in a Sysbench performance test for Online Transactional Processing (OLTP). Results show that v5.3.0 performance is nearly the same as v5.2.2. Point Select performance of v5.3.0 is reduced by 0.81%, Update Non-index performance is improved by 0.95%, Update Index performance is improved by 1.83%, and Read Write performance is reduced by 0.62%. ---- - -# TiDB Sysbench Performance Test Report -- v5.3.0 vs. v5.2.2 - -## Test overview - -This test aims at comparing the Sysbench performance of TiDB v5.3.0 and TiDB v5.2.2 in the Online Transactional Processing (OLTP) scenario. The results show that the performance of v5.3.0 is nearly the same as that of v5.2.2. 
- -## Test environment (AWS EC2) - -### Hardware configuration - -| Service type | EC2 type | Instance count | -|:----------|:----------|:----------| -| PD | m5.xlarge | 3 | -| TiKV | i3.4xlarge| 3 | -| TiDB | c5.4xlarge| 3 | -| Sysbench | c5.9xlarge| 1 | - -### Software version - -| Service type | Software version | -|:----------|:-----------| -| PD | v5.2.2 and v5.3.0 | -| TiDB | v5.2.2 and v5.3.0 | -| TiKV | v5.2.2 and v5.3.0 | -| Sysbench | 1.1.0-ead2689 | - -### Parameter configuration - -TiDB v5.3.0 and TiDB v5.2.2 use the same configuration. - -#### TiDB parameter configuration - -{{< copyable "" >}} - -```yaml -log.level: "error" -performance.max-procs: 20 -prepared-plan-cache.enabled: true -tikv-client.max-batch-wait-time: 2000000 -``` - -#### TiKV parameter configuration - -{{< copyable "" >}} - -```yaml -storage.scheduler-worker-pool-size: 5 -raftstore.store-pool-size: 3 -raftstore.apply-pool-size: 3 -rocksdb.max-background-jobs: 8 -raftdb.max-background-jobs: 4 -raftdb.allow-concurrent-memtable-write: true -server.grpc-concurrency: 6 -readpool.unified.min-thread-count: 5 -readpool.unified.max-thread-count: 20 -readpool.storage.normal-concurrency: 10 -pessimistic-txn.pipelined: true -``` - -#### TiDB global variable configuration - -{{< copyable "sql" >}} - -```sql -set global tidb_hashagg_final_concurrency=1; -set global tidb_hashagg_partial_concurrency=1; -set global tidb_enable_async_commit = 1; -set global tidb_enable_1pc = 1; -set global tidb_guarantee_linearizability = 0; -set global tidb_enable_clustered_index = 1; -``` - -#### HAProxy configuration - haproxy.cfg - -For more details about how to use HAProxy on TiDB, see [Best Practices for Using HAProxy in TiDB](/best-practices/haproxy-best-practices.md). - -```yaml -global # Global configuration. - chroot /var/lib/haproxy # Changes the current directory and sets superuser privileges for the startup process to improve security. - pidfile /var/run/haproxy.pid # Writes the PIDs of HAProxy processes into this file. - maxconn 4000 # The maximum number of concurrent connections for a single HAProxy process. - user haproxy # Same with the UID parameter. - group haproxy # Same with the GID parameter. A dedicated user group is recommended. - nbproc 64 # The number of processes created when going daemon. When starting multiple processes to forward requests, ensure that the value is large enough so that HAProxy does not block processes. - daemon # Makes the process fork into background. It is equivalent to the command line "-D" argument. It can be disabled by the command line "-db" argument. - -defaults # Default configuration. - log global # Inherits the settings of the global configuration. - retries 2 # The maximum number of retries to connect to an upstream server. If the number of connection attempts exceeds the value, the backend server is considered unavailable. - timeout connect 2s # The maximum time to wait for a connection attempt to a backend server to succeed. It should be set to a shorter time if the server is located on the same LAN as HAProxy. - timeout client 30000s # The maximum inactivity time on the client side. - timeout server 30000s # The maximum inactivity time on the server side. - -listen tidb-cluster # Database load balancing. - bind 0.0.0.0:3390 # The Floating IP address and listening port. - mode tcp # HAProxy uses layer 4, the transport layer. - balance roundrobin # The server with the fewest connections receives the connection. 
"leastconn" is recommended where long sessions are expected, such as LDAP, SQL and TSE, rather than protocols using short sessions, such as HTTP. The algorithm is dynamic, which means that server weights might be adjusted on the fly for slow starts for instance. - server tidb-1 10.9.18.229:4000 check inter 2000 rise 2 fall 3 # Detects port 4000 at a frequency of once every 2000 milliseconds. If it is detected as successful twice, the server is considered available; if it is detected as failed three times, the server is considered unavailable. - server tidb-2 10.9.39.208:4000 check inter 2000 rise 2 fall 3 - server tidb-3 10.9.64.166:4000 check inter 2000 rise 2 fall 3 -``` - -## Test plan - -1. Deploy TiDB v5.3.0 and v5.2.2 using TiUP. -2. Use Sysbench to import 16 tables, each table with 10 million rows of data. -3. Execute the `analyze table` statement on each table. -4. Back up the data used for restore before different concurrency tests, which ensures data consistency for each test. -5. Start the Sysbench client to perform the `point_select`, `read_write`, `update_index`, and `update_non_index` tests. Perform stress tests on TiDB via HAProxy. For each concurrency under each workload, the test takes 20 minutes. -6. After each type of test is completed, stop the cluster, overwrite the cluster with the backup data in step 4, and restart the cluster. - -### Prepare test data - -Run the following command to prepare the test data: - -{{< copyable "shell-regular" >}} - -```bash -sysbench oltp_common \ - --threads=16 \ - --rand-type=uniform \ - --db-driver=mysql \ - --mysql-db=sbtest \ - --mysql-host=$aws_nlb_host \ - --mysql-port=$aws_nlb_port \ - --mysql-user=root \ - --mysql-password=password \ - prepare --tables=16 --table-size=10000000 -``` - -### Perform the test - -Run the following command to perform the test: - -{{< copyable "shell-regular" >}} - -```bash -sysbench $testname \ - --threads=$threads \ - --time=1200 \ - --report-interval=1 \ - --rand-type=uniform \ - --db-driver=mysql \ - --mysql-db=sbtest \ - --mysql-host=$aws_nlb_host \ - --mysql-port=$aws_nlb_port \ - run --tables=16 --table-size=10000000 -``` - -## Test results - -### Point Select performance - -| Threads | v5.2.2 TPS | v5.3.0 TPS | v5.2.2 95% latency (ms) | v5.3.0 95% latency (ms) | TPS improvement (%) | -|:----------|:----------|:----------|:----------|:----------|:----------| -|300|267673.17|267516.77|1.76|1.67|-0.06| -|600|369820.29|361672.56|2.91|2.97|-2.20| -|900|417143.31|416479.47|4.1|4.18|-0.16| - -Compared with v5.2.2, the Point Select performance of v5.3.0 is reduced slightly by 0.81%. - -![Point Select](/media/sysbench_v522vsv530_point_select.png) - -### Update Non-index performance - -| Threads | v5.2.2 TPS | v5.3.0 TPS | v5.2.2 95% latency (ms) | v5.3.0 95% latency (ms) | TPS improvement (%) | -|:----------|:----------|:----------|:----------|:----------|:----------| -|300|39715.31|40041.03|11.87|12.08|0.82| -|600|50239.42|51110.04|20.74|20.37|1.73| -|900|57073.97|57252.74|28.16|27.66|0.31| - -Compared with v5.2.2, the Update Non-index performance of v5.3.0 is improved slightly by 0.95%. 
- -![Update Non-index](/media/sysbench_v522vsv530_update_non_index.png) - -### Update Index performance - -| Threads | v5.2.2 TPS | v5.3.0 TPS | v5.2.2 95% latency (ms) | v5.3.0 95% latency (ms) | TPS improvement (%) | -|:----------|:----------|:----------|:----------|:----------|:----------| -|300|17634.03|17821.1|25.74|25.74|1.06| -|600|20998.59|21534.13|46.63|45.79|2.55| -|900|23420.75|23859.64|64.47|62.19|1.87| - -Compared with v5.2.2, the Update Index performance of v5.3.0 is improved slightly by 1.83%. - -![Update Index](/media/sysbench_v522vsv530_update_index.png) - -### Read Write performance - -| Threads | v5.2.2 TPS | v5.3.0 TPS | v5.2.2 95% latency (ms) | v5.3.0 95% latency (ms) | TPS improvement (%) | -|:----------|:----------|:----------|:----------|:----------|:----------| -|300|3872.01|3848.63|106.75|106.75|-0.60| -|600|4514.17|4471.77|200.47|196.89|-0.94| -|900|4877.05|4861.45|287.38|282.25|-0.32| - -Compared with v5.2.2, the Read Write performance of v5.3.0 is reduced slightly by 0.62%. - -![Read Write](/media/sysbench_v522vsv530_read_write.png) \ No newline at end of file diff --git a/benchmark/benchmark-sysbench-v5.4.0-vs-v5.3.0.md b/benchmark/benchmark-sysbench-v5.4.0-vs-v5.3.0.md deleted file mode 100644 index c1b0c055e805f..0000000000000 --- a/benchmark/benchmark-sysbench-v5.4.0-vs-v5.3.0.md +++ /dev/null @@ -1,203 +0,0 @@ ---- -title: TiDB Sysbench Performance Test Report -- v5.4.0 vs. v5.3.0 -summary: TiDB v5.4.0 shows improved performance of 2.59% to 4.85% in write-heavy workloads compared to v5.3.0. Results show performance improvements in point select, update non-index, update index, and read write scenarios. ---- - -# TiDB Sysbench Performance Test Report -- v5.4.0 vs. v5.3.0 - -## Test overview - -This test aims at comparing the Sysbench performance of TiDB v5.4.0 and TiDB v5.3.0 in the Online Transactional Processing (OLTP) scenario. The results show that performance of v5.4.0 is improved by 2.59% ~ 4.85% in the write-heavy workload. - -## Test environment (AWS EC2) - -### Hardware configuration - -| Service type | EC2 type | Instance count | -|:----------|:----------|:----------| -| PD | m5.xlarge | 3 | -| TiKV | i3.4xlarge| 3 | -| TiDB | c5.4xlarge| 3 | -| Sysbench | c5.9xlarge| 1 | - -### Software version - -| Service type | Software version | -|:----------|:-----------| -| PD | v5.3.0 and v5.4.0 | -| TiDB | v5.3.0 and v5.4.0 | -| TiKV | v5.3.0 and v5.4.0 | -| Sysbench | 1.1.0-ead2689 | - -### Parameter configuration - -TiDB v5.4.0 and TiDB v5.3.0 use the same configuration. 
- -#### TiDB parameter configuration - -{{< copyable "" >}} - -```yaml -log.level: "error" -performance.max-procs: 20 -prepared-plan-cache.enabled: true -tikv-client.max-batch-wait-time: 2000000 -``` - -#### TiKV parameter configuration - -{{< copyable "" >}} - -```yaml -storage.scheduler-worker-pool-size: 5 -raftstore.store-pool-size: 3 -raftstore.apply-pool-size: 3 -rocksdb.max-background-jobs: 8 -raftdb.max-background-jobs: 4 -raftdb.allow-concurrent-memtable-write: true -server.grpc-concurrency: 6 -readpool.unified.min-thread-count: 5 -readpool.unified.max-thread-count: 20 -readpool.storage.normal-concurrency: 10 -pessimistic-txn.pipelined: true -``` - -#### TiDB global variable configuration - -{{< copyable "sql" >}} - -```sql -set global tidb_hashagg_final_concurrency=1; -set global tidb_hashagg_partial_concurrency=1; -set global tidb_enable_async_commit = 1; -set global tidb_enable_1pc = 1; -set global tidb_guarantee_linearizability = 0; -set global tidb_enable_clustered_index = 1; -``` - -#### HAProxy configuration - haproxy.cfg - -For more details about how to use HAProxy on TiDB, see [Best Practices for Using HAProxy in TiDB](/best-practices/haproxy-best-practices.md). - -{{< copyable "" >}} - -```yaml -global # Global configuration. - chroot /var/lib/haproxy # Changes the current directory and sets superuser privileges for the startup process to improve security. - pidfile /var/run/haproxy.pid # Writes the PIDs of HAProxy processes into this file. - maxconn 4000 # The maximum number of concurrent connections for a single HAProxy process. - user haproxy # The same with the UID parameter. - group haproxy # The same with the GID parameter. A dedicated user group is recommended. - nbproc 64 # The number of processes created when going daemon. When starting multiple processes to forward requests, ensure that the value is large enough so that HAProxy does not block processes. - daemon # Makes the process fork into background. It is equivalent to the command line "-D" argument. It can be disabled by the command line "-db" argument. -defaults # Default configuration. - log global # Inherits the settings of the global configuration. - retries 2 # The maximum number of retries to connect to an upstream server. If the number of connection attempts exceeds the value, the backend server is considered unavailable. - timeout connect 2s # The maximum time to wait for a connection attempt to a backend server to succeed. It should be set to a shorter time if the server is located on the same LAN as HAProxy. - timeout client 30000s # The maximum inactivity time on the client side. - timeout server 30000s # The maximum inactivity time on the server side. -listen tidb-cluster # Database load balancing. - bind 0.0.0.0:3390 # The Floating IP address and listening port. - mode tcp # HAProxy uses layer 4, the transport layer. - balance roundrobin # The server with the fewest connections receives the connection. "leastconn" is recommended where long sessions are expected, such as LDAP, SQL and TSE, rather than protocols using short sessions, such as HTTP. The algorithm is dynamic, which means that server weights might be adjusted on the fly for slow starts for instance. - server tidb-1 10.9.18.229:4000 check inter 2000 rise 2 fall 3 # Detects port 4000 at a frequency of once every 2000 milliseconds. If it is detected as successful twice, the server is considered available; if it is detected as failed three times, the server is considered unavailable. 
- server tidb-2 10.9.39.208:4000 check inter 2000 rise 2 fall 3 - server tidb-3 10.9.64.166:4000 check inter 2000 rise 2 fall 3 -``` - -## Test plan - -1. Deploy TiDB v5.4.0 and v5.3.0 using TiUP. -2. Use Sysbench to import 16 tables, each table with 10 million rows of data. -3. Execute the `analyze table` statement on each table. -4. Back up the data used for restore before different concurrency tests, which ensures data consistency for each test. -5. Start the Sysbench client to perform the `point_select`, `read_write`, `update_index`, and `update_non_index` tests. Perform stress tests on TiDB via HAProxy. For each concurrency under each workload, the test takes 20 minutes. -6. After each type of test is completed, stop the cluster, overwrite the cluster with the backup data in step 4, and restart the cluster. - -### Prepare test data - -Run the following command to prepare the test data: - -{{< copyable "shell-regular" >}} - -```bash -sysbench oltp_common \ - --threads=16 \ - --rand-type=uniform \ - --db-driver=mysql \ - --mysql-db=sbtest \ - --mysql-host=$aws_nlb_host \ - --mysql-port=$aws_nlb_port \ - --mysql-user=root \ - --mysql-password=password \ - prepare --tables=16 --table-size=10000000 -``` - -### Perform the test - -Run the following command to perform the test: - -{{< copyable "shell-regular" >}} - -```bash -sysbench $testname \ - --threads=$threads \ - --time=1200 \ - --report-interval=1 \ - --rand-type=uniform \ - --db-driver=mysql \ - --mysql-db=sbtest \ - --mysql-host=$aws_nlb_host \ - --mysql-port=$aws_nlb_port \ - run --tables=16 --table-size=10000000 -``` - -## Test results - -### Point Select performance - -| Threads | v5.3.0 TPS | v5.4.0 TPS | v5.3.0 95% latency (ms) | v5.4.0 95% latency (ms) | TPS improvement (%) | -|:----------|:----------|:----------|:----------|:----------|:----------| -|300|266041.84|264345.73|1.96|2.07|-0.64| -|600|351782.71|348715.98|3.43|3.49|-0.87| -|900|386553.31|399777.11|5.09|4.74|3.42| - -Compared with v5.3.0, the Point Select performance of v5.4.0 is slightly improved by 0.64%. - -![Point Select](/media/sysbench_v530vsv540_point_select.png) - -### Update Non-index performance - -| Threads | v5.3.0 TPS | v5.4.0 TPS | v5.3.0 95% latency (ms) | v5.4.0 95% latency (ms) | TPS improvement (%) | -|:----------|:----------|:----------|:----------|:----------|:----------| -|300|40804.31|41187.1|11.87|11.87|0.94| -|600|51239.4|53172.03|20.74|19.65|3.77| -|900|57897.56|59666.8|27.66|27.66|3.06| - -Compared with v5.3.0, the Update Non-index performance of v5.4.0 is improved by 2.59%. - -![Update Non-index](/media/sysbench_v530vsv540_update_non_index.png) - -### Update Index performance - -| Threads | v5.3.0 TPS | v5.4.0 TPS | v5.3.0 95% latency (ms) | v5.4.0 95% latency (ms) | TPS improvement (%) | -|:----------|:----------|:----------|:----------|:----------|:----------| -|300|17737.82|18716.5|26.2|24.83|5.52| -|600|21614.39|22670.74|44.98|42.61|4.89| -|900|23933.7|24922.05|62.19|61.08|4.13| - -Compared with v5.3.0, the Update Index performance of v5.4.0 is improved by 4.85%. 
- -![Update Index](/media/sysbench_v530vsv540_update_index.png) - -### Read Write performance - -| Threads | v5.3.0 TPS | v5.4.0 TPS | v5.3.0 95% latency (ms) | v5.4.0 95% latency (ms) | TPS improvement (%) | -|:----------|:----------|:----------|:----------|:----------|:----------| -|300|3810.78|3929.29|108.68|106.75|3.11| -|600|4514.28|4684.64|193.38|186.54|3.77| -|900|4842.49|4988.49|282.25|277.21|3.01| - -Compared with v5.3.0, the Read Write performance of v5.4.0 is improved by 3.30%. - -![Read Write](/media/sysbench_v530vsv540_read_write.png) diff --git a/benchmark/benchmark-sysbench-v6.0.0-vs-v5.4.0.md b/benchmark/benchmark-sysbench-v6.0.0-vs-v5.4.0.md deleted file mode 100644 index 0af44a97299aa..0000000000000 --- a/benchmark/benchmark-sysbench-v6.0.0-vs-v5.4.0.md +++ /dev/null @@ -1,199 +0,0 @@ ---- -title: TiDB Sysbench Performance Test Report -- v6.0.0 vs. v5.4.0 -summary: TiDB v6.0.0 shows a 16.17% improvement in read-write workload performance compared to v5.4.0. Other workloads show similar performance between the two versions. Test results show performance comparisons for point select, update non-index, update index, and read-write workloads. ---- - -# TiDB Sysbench Performance Test Report -- v6.0.0 vs. v5.4.0 - -## Test overview - -This test aims at comparing the Sysbench performance of TiDB v6.0.0 and TiDB v5.4.0 in the Online Transactional Processing (OLTP) scenario. The results show that performance of v6.0.0 is significantly improved by 16.17% in the read-write workload. The performance of other workload is basically the same as in v5.4.0. - -## Test environment (AWS EC2) - -### Hardware configuration - -| Service type | EC2 type | Instance count | -|:----------|:----------|:----------| -| PD | m5.xlarge | 3 | -| TiKV | i3.4xlarge| 3 | -| TiDB | c5.4xlarge| 3 | -| Sysbench | c5.9xlarge| 1 | - -### Software version - -| Service type | Software version | -|:----------|:-----------| -| PD | v5.4.0 and v6.0.0 | -| TiDB | v5.4.0 and v6.0.0 | -| TiKV | v5.4.0 and v6.0.0 | -| Sysbench | 1.1.0-df89d34 | - -### Parameter configuration - -TiDB v6.0.0 and TiDB v5.4.0 use the same configuration. - -#### TiDB parameter configuration - -{{< copyable "" >}} - -```yaml -log.level: "error" -prepared-plan-cache.enabled: true -tikv-client.max-batch-wait-time: 2000000 -``` - -#### TiKV parameter configuration - -{{< copyable "" >}} - -```yaml -storage.scheduler-worker-pool-size: 5 -raftstore.store-pool-size: 3 -raftstore.apply-pool-size: 3 -rocksdb.max-background-jobs: 8 -raftdb.max-background-jobs: 4 -raftdb.allow-concurrent-memtable-write: true -server.grpc-concurrency: 6 -readpool.storage.normal-concurrency: 10 -pessimistic-txn.pipelined: true -``` - -#### TiDB global variable configuration - -{{< copyable "sql" >}} - -```sql -set global tidb_hashagg_final_concurrency=1; -set global tidb_hashagg_partial_concurrency=1; -set global tidb_enable_async_commit = 1; -set global tidb_enable_1pc = 1; -set global tidb_guarantee_linearizability = 0; -set global tidb_enable_clustered_index = 1; -``` - -#### HAProxy configuration - haproxy.cfg - -For more details about how to use HAProxy on TiDB, see [Best Practices for Using HAProxy in TiDB](/best-practices/haproxy-best-practices.md). - -{{< copyable "" >}} - -```yaml -global # Global configuration. - pidfile /var/run/haproxy.pid # Writes the PIDs of HAProxy processes into this file. - maxconn 4000 # The maximum number of concurrent connections for a single HAProxy process. - user haproxy # The same with the UID parameter. 
- group haproxy # The same with the GID parameter. A dedicated user group is recommended. - nbproc 64 # The number of processes created when going daemon. When starting multiple processes to forward requests, ensure that the value is large enough so that HAProxy does not block processes. - daemon # Makes the process fork into background. It is equivalent to the command line "-D" argument. It can be disabled by the command line "-db" argument. -defaults # Default configuration. - log global # Inherits the settings of the global configuration. - retries 2 # The maximum number of retries to connect to an upstream server. If the number of connection attempts exceeds the value, the backend server is considered unavailable. - timeout connect 2s # The maximum time to wait for a connection attempt to a backend server to succeed. It should be set to a shorter time if the server is located on the same LAN as HAProxy. - timeout client 30000s # The maximum inactivity time on the client side. - timeout server 30000s # The maximum inactivity time on the server side. -listen tidb-cluster # Database load balancing. - bind 0.0.0.0:3390 # The Floating IP address and listening port. - mode tcp # HAProxy uses layer 4, the transport layer. - balance leastconn # The server with the fewest connections receives the connection. "leastconn" is recommended where long sessions are expected, such as LDAP, SQL and TSE, rather than protocols using short sessions, such as HTTP. The algorithm is dynamic, which means that server weights might be adjusted on the fly for slow starts for instance. - server tidb-1 10.9.18.229:4000 check inter 2000 rise 2 fall 3 # Detects port 4000 at a frequency of once every 2000 milliseconds. If it is detected as successful twice, the server is considered available; if it is detected as failed three times, the server is considered unavailable. - server tidb-2 10.9.39.208:4000 check inter 2000 rise 2 fall 3 - server tidb-3 10.9.64.166:4000 check inter 2000 rise 2 fall 3 -``` - -## Test plan - -1. Deploy TiDB v6.0.0 and v5.4.0 using TiUP. -2. Use Sysbench to import 16 tables, each table with 10 million rows of data. -3. Execute the `analyze table` statement on each table. -4. Back up the data used for restore before different concurrency tests, which ensures data consistency for each test. -5. Start the Sysbench client to perform the `point_select`, `read_write`, `update_index`, and `update_non_index` tests. Perform stress tests on TiDB via HAProxy. For each concurrency under each workload, the test takes 20 minutes. -6. After each type of test is completed, stop the cluster, overwrite the cluster with the backup data in step 4, and restart the cluster. 
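Steps 5 and 6 above describe a sweep over four workloads and several thread counts, with a backup restore between workloads, but the report only lists a single `prepare` and a single `run` command. The following is a minimal driver sketch of that loop, not part of the original report. It assumes the same `$aws_nlb_host`/`$aws_nlb_port` variables as the commands below, uses the standard sysbench OLTP Lua script names, and stands in a hypothetical `restore_cluster` helper for step 6.

```bash
#!/bin/bash
# Sketch of the per-workload, per-concurrency test loop (steps 5 and 6).
# restore_cluster is a placeholder for "stop the cluster, overwrite it with
# the backup from step 4, and restart it".
set -euo pipefail

workloads="oltp_point_select oltp_read_write oltp_update_index oltp_update_non_index"
thread_counts="300 600 900"

for workload in ${workloads}; do
    for threads in ${thread_counts}; do
        # 1200 seconds = 20 minutes per concurrency, as stated in step 5.
        sysbench "${workload}" \
            --threads="${threads}" \
            --time=1200 \
            --report-interval=1 \
            --rand-type=uniform \
            --db-driver=mysql \
            --mysql-db=sbtest \
            --mysql-host="${aws_nlb_host}" \
            --mysql-port="${aws_nlb_port}" \
            --mysql-user=root \
            --mysql-password=password \
            run --tables=16 --table-size=10000000 | tee "${workload}_${threads}.log"
    done
    restore_cluster   # placeholder for step 6
done
```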
- -### Prepare test data - -Run the following command to prepare the test data: - -{{< copyable "shell-regular" >}} - -```bash -sysbench oltp_common \ - --threads=16 \ - --rand-type=uniform \ - --db-driver=mysql \ - --mysql-db=sbtest \ - --mysql-host=$aws_nlb_host \ - --mysql-port=$aws_nlb_port \ - --mysql-user=root \ - --mysql-password=password \ - prepare --tables=16 --table-size=10000000 -``` - -### Perform the test - -Run the following command to perform the test: - -{{< copyable "shell-regular" >}} - -```bash -sysbench $testname \ - --threads=$threads \ - --time=1200 \ - --report-interval=1 \ - --rand-type=uniform \ - --db-driver=mysql \ - --mysql-db=sbtest \ - --mysql-host=$aws_nlb_host \ - --mysql-port=$aws_nlb_port \ - run --tables=16 --table-size=10000000 -``` - -## Test results - -### Point Select performance - -| Threads | v5.4.0 TPS | v6.0.0 TPS | v5.4.0 95% latency (ms) | v6.0.0 95% latency (ms) | TPS improvement (%) | -|:----------|:----------|:----------|:----------|:----------|:----------| -|300|260085.19|265207.73|1.82|1.93|1.97| -|600|378098.48|365173.66|2.48|2.61|-3.42| -|900|441294.61|424031.23|3.75|3.49|-3.91| - -Compared with v5.4.0, the Point Select performance of v6.0.0 is slightly dropped by 1.79%. - -![Point Select](/media/sysbench_v540vsv600_point_select.png) - -### Update Non-index performance - -| Threads | v5.4.0 TPS | v6.0.0 TPS | v5.4.0 95% latency (ms) | v6.0.0 95% latency (ms) | TPS improvement (%) | -|:----------|:----------|:----------|:----------|:----------|:----------| -|300|41528.7|40814.23|11.65|11.45|-1.72| -|600|53220.96|51746.21|19.29|20.74|-2.77| -|900|59977.58|59095.34|26.68|28.16|-1.47| - -Compared with v5.4.0, the Update Non-index performance of v6.0.0 is slightly dropped by 1.98%. - -![Update Non-index](/media/sysbench_v540vsv600_update_non_index.png) - -### Update Index performance - -| Threads | v5.4.0 TPS | v6.0.0 TPS | v5.4.0 95% latency (ms) | v6.0.0 95% latency (ms) | TPS improvement (%) | -|:----------|:----------|:----------|:----------|:----------|:----------| -|300|18659.11|18187.54|23.95|25.74|-2.53| -|600|23195.83|22270.81|40.37|44.17|-3.99| -|900|25798.31|25118.78|56.84|57.87|-2.63| - -Compared with v5.4.0, the Update Index performance of v6.0.0 is dropped by 3.05%. - -![Update Index](/media/sysbench_v540vsv600_update_index.png) - -### Read Write performance - -| Threads | v5.4.0 TPS | v6.0.0 TPS | v5.4.0 95% latency (ms) | v6.0.0 95% latency (ms) | TPS improvement (%) | -|:----------|:----------|:----------|:----------|:----------|:----------| -|300|4141.72|4829.01|97.55|82.96|16.59| -|600|4892.76|5693.12|173.58|153.02|16.36| -|900|5217.94|6029.95|257.95|235.74|15.56| - -Compared with v5.4.0, the Read Write performance of v6.0.0 is significantly improved by 16.17%. - -![Read Write](/media/sysbench_v540vsv600_read_write.png) diff --git a/benchmark/benchmark-sysbench-v6.1.0-vs-v6.0.0.md b/benchmark/benchmark-sysbench-v6.1.0-vs-v6.0.0.md deleted file mode 100644 index a61d18baf57ea..0000000000000 --- a/benchmark/benchmark-sysbench-v6.1.0-vs-v6.0.0.md +++ /dev/null @@ -1,199 +0,0 @@ ---- -title: TiDB Sysbench Performance Test Report -- v6.1.0 vs. v6.0.0 -summary: TiDB v6.1.0 shows improved performance in write-heavy workloads compared to v6.0.0, with a 2.33% ~ 4.61% improvement. The test environment includes AWS EC2 instances and Sysbench 1.1.0-df89d34. Both versions use the same parameter configuration. Test plan involves deploying, importing data, and performing stress tests. 
Results show slight drop in Point Select performance, while Update Non-index, Update Index, and Read Write performance are improved by 2.90%, 4.61%, and 2.23% respectively. ---- - -# TiDB Sysbench Performance Test Report -- v6.1.0 vs. v6.0.0 - -## Test overview - -This test aims at comparing the Sysbench performance of TiDB v6.1.0 and TiDB v6.0.0 in the Online Transactional Processing (OLTP) scenario. The results show that performance of v6.1.0 is improved in the write workload. The performance of write-heavy workload is improved by 2.33% ~ 4.61%. - -## Test environment (AWS EC2) - -### Hardware configuration - -| Service type | EC2 type | Instance count | -|:----------|:----------|:----------| -| PD | m5.xlarge | 3 | -| TiKV | i3.4xlarge| 3 | -| TiDB | c5.4xlarge| 3 | -| Sysbench | c5.9xlarge| 1 | - -### Software version - -| Service type | Software version | -|:----------|:-----------| -| PD | v6.0.0 and v6.1.0 | -| TiDB | v6.0.0 and v6.1.0 | -| TiKV | v6.0.0 and v6.1.0 | -| Sysbench | 1.1.0-df89d34 | - -### Parameter configuration - -TiDB v6.1.0 and TiDB v6.0.0 use the same configuration. - -#### TiDB parameter configuration - -{{< copyable "" >}} - -```yaml -log.level: "error" -prepared-plan-cache.enabled: true -tikv-client.max-batch-wait-time: 2000000 -``` - -#### TiKV parameter configuration - -{{< copyable "" >}} - -```yaml -storage.scheduler-worker-pool-size: 5 -raftstore.store-pool-size: 3 -raftstore.apply-pool-size: 3 -rocksdb.max-background-jobs: 8 -server.grpc-concurrency: 6 -readpool.storage.normal-concurrency: 10 -``` - -#### TiDB global variable configuration - -{{< copyable "sql" >}} - -```sql -set global tidb_hashagg_final_concurrency=1; -set global tidb_hashagg_partial_concurrency=1; -set global tidb_enable_async_commit = 1; -set global tidb_enable_1pc = 1; -set global tidb_guarantee_linearizability = 0; -set global tidb_enable_clustered_index = 1; -set global tidb_prepared_plan_cache_size=1000; -``` - -#### HAProxy configuration - haproxy.cfg - -For more details about how to use HAProxy on TiDB, see [Best Practices for Using HAProxy in TiDB](/best-practices/haproxy-best-practices.md). - -{{< copyable "" >}} - -```yaml -global # Global configuration. - pidfile /var/run/haproxy.pid # Writes the PIDs of HAProxy processes into this file. - maxconn 4000 # The maximum number of concurrent connections for a single HAProxy process. - user haproxy # The same with the UID parameter. - group haproxy # The same with the GID parameter. A dedicated user group is recommended. - nbproc 64 # The number of processes created when going daemon. When starting multiple processes to forward requests, ensure that the value is large enough so that HAProxy does not block processes. - daemon # Makes the process fork into background. It is equivalent to the command line "-D" argument. It can be disabled by the command line "-db" argument. - -defaults # Default configuration. - log global # Inherits the settings of the global configuration. - retries 2 # The maximum number of retries to connect to an upstream server. If the number of connection attempts exceeds the value, the backend server is considered unavailable. - timeout connect 2s # The maximum time to wait for a connection attempt to a backend server to succeed. It should be set to a shorter time if the server is located on the same LAN as HAProxy. - timeout client 30000s # The maximum inactivity time on the client side. - timeout server 30000s # The maximum inactivity time on the server side. 
- -listen tidb-cluster # Database load balancing. - bind 0.0.0.0:3390 # The Floating IP address and listening port. - mode tcp # HAProxy uses layer 4, the transport layer. - balance leastconn # The server with the fewest connections receives the connection. "leastconn" is recommended where long sessions are expected, such as LDAP, SQL and TSE, rather than protocols using short sessions, such as HTTP. The algorithm is dynamic, which means that server weights might be adjusted on the fly for slow starts for instance. - server tidb-1 10.9.18.229:4000 check inter 2000 rise 2 fall 3 # Detects port 4000 at a frequency of once every 2000 milliseconds. If it is detected as successful twice, the server is considered available; if it is detected as failed three times, the server is considered unavailable. - server tidb-2 10.9.39.208:4000 check inter 2000 rise 2 fall 3 - server tidb-3 10.9.64.166:4000 check inter 2000 rise 2 fall 3 -``` - -## Test plan - -1. Deploy TiDB v6.1.0 and v6.0.0 using TiUP. -2. Use Sysbench to import 16 tables, each table with 10 million rows of data. -3. Execute the `analyze table` statement on each table. -4. Back up the data used for restore before different concurrency tests, which ensures data consistency for each test. -5. Start the Sysbench client to perform the `point_select`, `read_write`, `update_index`, and `update_non_index` tests. Perform stress tests on TiDB via HAProxy. For each concurrency under each workload, the test takes 20 minutes. -6. After each type of test is completed, stop the cluster, overwrite the cluster with the backup data in step 4, and restart the cluster. - -### Prepare test data - -Run the following command to prepare the test data: - -{{< copyable "shell-regular" >}} - -```bash -sysbench oltp_common \ - --threads=16 \ - --rand-type=uniform \ - --db-driver=mysql \ - --mysql-db=sbtest \ - --mysql-host=$aws_nlb_host \ - --mysql-port=$aws_nlb_port \ - --mysql-user=root \ - --mysql-password=password \ - prepare --tables=16 --table-size=10000000 -``` - -### Perform the test - -Run the following command to perform the test: - -{{< copyable "shell-regular" >}} - -```bash -sysbench $testname \ - --threads=$threads \ - --time=1200 \ - --report-interval=1 \ - --rand-type=uniform \ - --db-driver=mysql \ - --mysql-db=sbtest \ - --mysql-host=$aws_nlb_host \ - --mysql-port=$aws_nlb_port \ - run --tables=16 --table-size=10000000 -``` - -## Test results - -### Point Select performance - -| Threads | v6.0.0 TPS | v6.1.0 TPS | v6.0.0 95% latency (ms) | v6.1.0 95% latency (ms) | TPS improvement (%) | -|:----------|:----------|:----------|:----------|:----------|:----------| -|300|268934.84|265353.15|1.89|1.96|-1.33| -|600|365217.96|358976.94|2.57|2.66|-1.71| -|900|420799.64|407625.11|3.68|3.82|-3.13| - -Compared with v6.0.0, the Point Select performance of v6.1.0 slightly drops by 2.1%. - -![Point Select](/media/sysbench_v600vsv610_point_select.png) - -### Update Non-index performance - -| Threads | v6.0.0 TPS | v6.1.0 TPS | v6.0.0 95% latency (ms) | v6.1.0 95% latency (ms) | TPS improvement (%) | -|:----------|:----------|:----------|:----------|:----------|:----------| -|300|41778.95|42991.9|11.24|11.45|2.90 | -|600|52045.39|54099.58|20.74|20.37|3.95| -|900|59243.35|62084.65|27.66|26.68|4.80| - -Compared with v6.0.0, the Update Non-index performance of v6.1.0 is improved by 3.88%. 
- -![Update Non-index](/media/sysbench_v600vsv610_update_non_index.png) - -### Update Index performance - -| Threads | v6.0.0 TPS | v6.1.0 TPS | v6.0.0 95% latency (ms) | v6.1.0 95% latency (ms) | TPS improvement (%) | -|:----------|:----------|:----------|:----------|:----------|:----------| -|300|18085.79|19198.89|25.28|23.95|6.15| -|600|22210.8|22877.58|42.61|41.85|3.00| -|900|25249.81|26431.12|55.82|53.85|4.68| - -Compared with v6.0.0, the Update Index performance of v6.1.0 is improved by 4.61%. - -![Update Index](/media/sysbench_v600vsv610_update_index.png) - -### Read Write performance - -| Threads | v6.0.0 TPS | v6.1.0 TPS | v6.0.0 95% latency (ms) | v6.1.0 95% latency (ms) | TPS improvement (%) | -|:----------|:----------|:----------|:----------|:----------|:----------| -|300|4856.23|4914.11|84.47|82.96|1.19| -|600|5676.46|5848.09|161.51|150.29|3.02| -|900|6072.97|6223.95|240.02|223.34|2.49| - -Compared with v6.0.0, the Read Write performance of v6.1.0 is improved by 2.23%. - -![Read Write](/media/sysbench_v600vsv610_read_write.png) diff --git a/benchmark/benchmark-sysbench-v6.2.0-vs-v6.1.0.md b/benchmark/benchmark-sysbench-v6.2.0-vs-v6.1.0.md deleted file mode 100644 index f98bbf3130b13..0000000000000 --- a/benchmark/benchmark-sysbench-v6.2.0-vs-v6.1.0.md +++ /dev/null @@ -1,199 +0,0 @@ ---- -title: TiDB Sysbench Performance Test Report -- v6.2.0 vs. v6.1.0 -summary: TiDB v6.2.0 and v6.1.0 show similar performance in the Sysbench test. Point Select performance slightly drops by 3.58%. Update Non-index and Update Index performance are basically unchanged, reduced by 0.85% and 0.47% respectively. Read Write performance is reduced by 1.21%. ---- - -# TiDB Sysbench Performance Test Report -- v6.2.0 vs. v6.1.0 - -## Test overview - -This test aims at comparing the Sysbench performance of TiDB v6.2.0 and TiDB v6.1.0 in the Online Transactional Processing (OLTP) scenario. The results show that performance of v6.2.0 is basically the same as that of v6.1.0. The performance of Point Select slightly drops by 3.58%. - -## Test environment (AWS EC2) - -### Hardware configuration - -| Service type | EC2 type | Instance count | -|:----------|:----------|:----------| -| PD | m5.xlarge | 3 | -| TiKV | i3.4xlarge| 3 | -| TiDB | c5.4xlarge| 3 | -| Sysbench | c5.9xlarge| 1 | - -### Software version - -| Service type | Software version | -|:----------|:-----------| -| PD | v6.1.0 and v6.2.0 | -| TiDB | v6.1.0 and v6.2.0 | -| TiKV | v6.1.0 and v6.2.0 | -| Sysbench | 1.1.0-df89d34 | - -### Parameter configuration - -TiDB v6.2.0 and TiDB v6.1.0 use the same configuration. 
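The parameters in the following subsections are cluster-level configuration items. Under a TiUP deployment (step 1 of the test plan later in this report), they would typically be placed under `server_configs` in the topology file, or applied to an existing cluster roughly as in the sketch below; the cluster name `benchmark` is only an example.

```bash
# Sketch: apply the TiDB/TiKV parameters listed below to a TiUP-managed
# cluster. "benchmark" is a hypothetical cluster name.
tiup cluster edit-config benchmark   # add the items under server_configs.tidb / server_configs.tikv
tiup cluster reload benchmark        # rolling reload so the new configuration takes effect
```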
- -#### TiDB parameter configuration - -{{< copyable "" >}} - -```yaml -log.level: "error" -prepared-plan-cache.enabled: true -tikv-client.max-batch-wait-time: 2000000 -``` - -#### TiKV parameter configuration - -{{< copyable "" >}} - -```yaml -storage.scheduler-worker-pool-size: 5 -raftstore.store-pool-size: 3 -raftstore.apply-pool-size: 3 -rocksdb.max-background-jobs: 8 -server.grpc-concurrency: 6 -readpool.unified.max-thread-count: 10 -``` - -#### TiDB global variable configuration - -{{< copyable "sql" >}} - -```sql -set global tidb_hashagg_final_concurrency=1; -set global tidb_hashagg_partial_concurrency=1; -set global tidb_enable_async_commit = 1; -set global tidb_enable_1pc = 1; -set global tidb_guarantee_linearizability = 0; -set global tidb_enable_clustered_index = 1; -set global tidb_prepared_plan_cache_size=1000; -``` - -#### HAProxy configuration - haproxy.cfg - -For more details about how to use HAProxy on TiDB, see [Best Practices for Using HAProxy in TiDB](/best-practices/haproxy-best-practices.md). - -{{< copyable "" >}} - -```yaml -global # Global configuration. - pidfile /var/run/haproxy.pid # Writes the PIDs of HAProxy processes into this file. - maxconn 4000 # The maximum number of concurrent connections for a single HAProxy process. - user haproxy # The same with the UID parameter. - group haproxy # The same with the GID parameter. A dedicated user group is recommended. - nbproc 64 # The number of processes created when going daemon. When starting multiple processes to forward requests, ensure that the value is large enough so that HAProxy does not block processes. - daemon # Makes the process fork into background. It is equivalent to the command line "-D" argument. It can be disabled by the command line "-db" argument. - -defaults # Default configuration. - log global # Inherits the settings of the global configuration. - retries 2 # The maximum number of retries to connect to an upstream server. If the number of connection attempts exceeds the value, the backend server is considered unavailable. - timeout connect 2s # The maximum time to wait for a connection attempt to a backend server to succeed. It should be set to a shorter time if the server is located on the same LAN as HAProxy. - timeout client 30000s # The maximum inactivity time on the client side. - timeout server 30000s # The maximum inactivity time on the server side. - -listen tidb-cluster # Database load balancing. - bind 0.0.0.0:3390 # The Floating IP address and listening port. - mode tcp # HAProxy uses layer 4, the transport layer. - balance leastconn # The server with the fewest connections receives the connection. "leastconn" is recommended where long sessions are expected, such as LDAP, SQL and TSE, rather than protocols using short sessions, such as HTTP. The algorithm is dynamic, which means that server weights might be adjusted on the fly for slow starts for instance. - server tidb-1 10.9.18.229:4000 check inter 2000 rise 2 fall 3 # Detects port 4000 at a frequency of once every 2000 milliseconds. If it is detected as successful twice, the server is considered available; if it is detected as failed three times, the server is considered unavailable. - server tidb-2 10.9.39.208:4000 check inter 2000 rise 2 fall 3 - server tidb-3 10.9.64.166:4000 check inter 2000 rise 2 fall 3 -``` - -## Test plan - -1. Deploy TiDB v6.2.0 and v6.1.0 using TiUP. -2. Use Sysbench to import 16 tables, each table with 10 million rows of data. -3. Execute the `analyze table` statement on each table. -4. 
Back up the data used for restore before different concurrency tests, which ensures data consistency for each test. -5. Start the Sysbench client to perform the `point_select`, `read_write`, `update_index`, and `update_non_index` tests. Perform stress tests on TiDB via HAProxy. For each concurrency under each workload, the test takes 20 minutes. -6. After each type of test is completed, stop the cluster, overwrite the cluster with the backup data in step 4, and restart the cluster. - -### Prepare test data - -Run the following command to prepare the test data: - -{{< copyable "shell-regular" >}} - -```bash -sysbench oltp_common \ - --threads=16 \ - --rand-type=uniform \ - --db-driver=mysql \ - --mysql-db=sbtest \ - --mysql-host=$aws_nlb_host \ - --mysql-port=$aws_nlb_port \ - --mysql-user=root \ - --mysql-password=password \ - prepare --tables=16 --table-size=10000000 -``` - -### Perform the test - -Run the following command to perform the test: - -{{< copyable "shell-regular" >}} - -```bash -sysbench $testname \ - --threads=$threads \ - --time=1200 \ - --report-interval=1 \ - --rand-type=uniform \ - --db-driver=mysql \ - --mysql-db=sbtest \ - --mysql-host=$aws_nlb_host \ - --mysql-port=$aws_nlb_port \ - run --tables=16 --table-size=10000000 -``` - -## Test results - -### Point Select performance - -| Threads | v6.1.0 TPS | v6.2.0 TPS | v6.1.0 95% latency (ms) | v6.2.0 95% latency (ms) | TPS improvement (%) | -| :------ | :--------- | :--------- | :---------------------- | :---------------------- | :----------- | -| 300 | 243530.01 | 236885.24 | 1.93 | 2.07 | -2.73 | -| 600 | 304121.47 | 291395.84 | 3.68 | 4.03 | -4.18 | -| 900 | 327301.23 | 314720.02 | 5 | 5.47 | -3.84 | - -Compared with v6.1.0, the Point Select performance of v6.2.0 slightly drops by 3.58%. - -![Point Select](/media/sysbench_v610vsv620_point_select.png) - -### Update Non-index performance - -| Threads | v6.1.0 TPS | v6.2.0 TPS | v6.1.0 95% latency (ms) | v6.2.0 95% latency (ms) | TPS improvement (%) | -| :------ | :--------- | :--------- | :---------------------- | :---------------------- | :----------- | -| 300 | 42608.8 | 42372.82 | 11.45 | 11.24 | -0.55 | -| 600 | 54264.47 | 53672.69 | 18.95 | 18.95 | -1.09 | -| 900 | 60667.47 | 60116.14 | 26.2 | 26.68 | -0.91 | - -Compared with v6.1.0, the Update Non-index performance of v6.2.0 is basically unchanged, reduced by 0.85%. - -![Update Non-index](/media/sysbench_v610vsv620_update_non_index.png) - -### Update Index performance - -| Threads | v6.1.0 TPS | v6.2.0 TPS | v6.1.0 95% latency (ms) | v6.2.0 95% latency (ms) | TPS improvement (%) | -| :------ | :--------- | :--------- | :---------------------- | :---------------------- | :----------- | -| 300 | 19384.75 | 19353.58 | 23.52 | 23.52 | -0.16 | -| 600 | 24144.78 | 24007.57 | 38.25 | 37.56 | -0.57 | -| 900 | 26770.9 | 26589.84 | 51.94 | 52.89 | -0.68 | - -Compared with v6.1.0, the Update Index performance of v6.2.0 is basically unchanged, reduced by 0.47%. - -![Update Index](/media/sysbench_v610vsv620_update_index.png) - -### Read Write performance - -| Threads | v6.1.0 TPS | v6.2.0 TPS | v6.1.0 95% latency (ms) | v6.2.0 95% latency (ms) | TPS improvement (%) | -| :------ | :--------- | :--------- | :---------------------- | :---------------------- | :----------- | -| 300 | 4849.67 | 4797.59 | 86 | 84.47 | -1.07 | -| 600 | 5643.89 | 5565.17 | 161.51 | 161.51 | -1.39 | -| 900 | 5954.91 | 5885.22 | 235.74 | 235.74 | -1.17 | - -Compared with v6.1.0, the Read Write performance of v6.2.0 is reduced by 1.21%. 
- -![Read Write](/media/sysbench_v610vsv620_read_write.png) diff --git a/benchmark/benchmark-tpch.md b/benchmark/benchmark-tpch.md deleted file mode 100644 index 2689a52021d3d..0000000000000 --- a/benchmark/benchmark-tpch.md +++ /dev/null @@ -1,109 +0,0 @@ ---- -title: TiDB TPC-H 50G Performance Test Report V2.0 -aliases: ['/docs/dev/benchmark/benchmark-tpch/','/docs/dev/benchmark/tpch/'] -summary: TiDB TPC-H 50G Performance Test compared TiDB 1.0 and TiDB 2.0 in an OLAP scenario. Test results show that TiDB 2.0 outperformed TiDB 1.0 in most queries, with significant improvements in query processing time. Some queries in TiDB 1.0 did not return results, while others had high memory consumption. Future releases plan to support VIEW and address these issues. ---- - -# TiDB TPC-H 50G Performance Test Report - -## Test purpose - -This test aims to compare the performances of TiDB 1.0 and TiDB 2.0 in the OLAP scenario. - -> **Note:** -> -> Different test environments might lead to different test results. - -## Test environment - -### Machine information - -System information: - -| Machine IP | Operation system | Kernel version | File system type | -|--------------|------------------------|------------------------------|--------------| -| 172.16.31.2 | Ubuntu 17.10 64bit | 4.13.0-16-generic | ext4 | -| 172.16.31.3 | Ubuntu 17.10 64bit | 4.13.0-16-generic | ext4 | -| 172.16.31.4 | Ubuntu 17.10 64bit | 4.13.0-16-generic | ext4 | -| 172.16.31.6 | CentOS 7.4.1708 64bit | 3.10.0-693.11.6.el7.x86\_64 | ext4 | -| 172.16.31.8 | CentOS 7.4.1708 64bit | 3.10.0-693.11.6.el7.x86\_64 | ext4 | -| 172.16.31.10 | CentOS 7.4.1708 64bit | 3.10.0-693.11.6.el7.x86\_64 | ext4 | - -Hardware information: - -| Type | Name | -|------------|------------------------------------------------------| -| CPU | 40 vCPUs, Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz | -| RAM | 128GB, 16GB RDIMM * 8, 2400MT/s, dual channel, x8 bitwidth | -| DISK | Intel P4500 4T SSD * 2 | -| Network Card | 10 Gigabit Ethernet | - -### TPC-H - -[tidb-bench/tpch](https://github.com/pingcap/tidb-bench/tree/master/tpch) - -### Cluster topology - -| Machine IP | Deployment Instance | -|--------------|---------------------| -| 172.16.31.2 | TiKV \* 2 | -| 172.16.31.3 | TiKV \* 2 | -| 172.16.31.6 | TiKV \* 2 | -| 172.16.31.8 | TiKV \* 2 | -| 172.16.31.10 | TiKV \* 2 | -| 172.16.31.10 | PD \* 1 | -| 172.16.31.4 | TiDB \* 1 | - -### Corresponding TiDB version information - -TiDB 1.0: - -| Component | Version | Commit Hash | -|--------|-------------|--------------------------------------------| -| TiDB | v1.0.9 | 4c7ee3580cd0a69319b2c0c08abdc59900df7344 | -| TiKV | v1.0.8 | 2bb923a4cd23dbf68f0d16169fd526dc5c1a9f4a | -| PD | v1.0.8 | 137fa734472a76c509fbfd9cb9bc6d0dc804a3b7 | - -TiDB 2.0: - -| Component | Version | Commit Hash | -|--------|-------------|--------------------------------------------| -| TiDB | v2.0.0-rc.6 | 82d35f1b7f9047c478f4e1e82aa0002abc8107e7 | -| TiKV | v2.0.0-rc.6 | 8bd5c54966c6ef42578a27519bce4915c5b0c81f | -| PD | v2.0.0-rc.6 | 9b824d288126173a61ce7d51a71fc4cb12360201 | - -## Test result - -| Query ID | TiDB 2.0 | TiDB 1.0 | -|-----------|--------------------|------------------| -| 1 | 33.915s | 215.305s | -| 2 | 25.575s | Nan | -| 3 | 59.631s | 196.003s | -| 4 | 30.234s | 249.919s | -| 5 | 31.666s | OOM | -| 6 | 13.111s | 118.709s | -| 7 | 31.710s | OOM | -| 8 | 31.734s | 800.546s | -| 9 | 34.211s | 630.639s | -| 10 | 30.774s | 133.547s | -| 11 | 27.692s | 78.026s | -| 12 | 27.962s | 124.641s | -| 13 | 27.676s | 174.695s | -| 14 | 
19.676s | 110.602s | -| 15 | NaN | Nan | -| 16 | 24.890s | 40.529s | -| 17 | 245.796s | NaN | -| 18 | 91.256s | OOM | -| 19 | 37.615s | NaN | -| 20 | 44.167s | 212.201s | -| 21 | 31.466s | OOM | -| 22 | 31.539s | 125.471s | - -![TPC-H Query Result](/media/tpch-query-result.png) - -It should be noted that: - -- In the diagram above, the orange bars represent the query results of Release 1.0 and the blue bars represent the query results of Release 2.0. The y-axis represents the processing time of queries in seconds, the shorter the faster. -- Query 15 is tagged with "NaN" because VIEW is currently not supported in either TiDB 1.0 or 2.0. We have plans to provide VIEW support in a future release. -- Queries 2, 17, and 19 in the TiDB 1.0 column are tagged with "NaN" because TiDB 1.0 did not return results for these queries. -- Queries 5, 7, 18, and 21 in the TiDB 1.0 column are tagged with "OOM" because the memory consumption was too high. diff --git a/benchmark/v3.0-performance-benchmarking-with-sysbench.md b/benchmark/v3.0-performance-benchmarking-with-sysbench.md deleted file mode 100644 index 30afbc7bb9e54..0000000000000 --- a/benchmark/v3.0-performance-benchmarking-with-sysbench.md +++ /dev/null @@ -1,289 +0,0 @@ ---- -title: TiDB Sysbench Performance Test Report -- v3.0 vs. v2.1 -aliases: ['/docs/dev/benchmark/v3.0-performance-benchmarking-with-sysbench/','/docs/dev/benchmark/sysbench-v4/'] -summary: TiDB v3.0 outperformed v2.1 in all tests, with higher QPS and lower latency. Configuration changes in v3.0 contributed to the improved performance. ---- - -# TiDB Sysbench Performance Test Report -- v3.0 vs. v2.1 - -## Test purpose - -This test aims to compare the performance of TiDB 3.0 and TiDB 2.1 in the OLTP scenario. - -## Test version, time, and place - -TiDB version: v3.0.0 vs. v2.1.13 - -Time: June, 2019 - -Place: Beijing - -## Test environment - -This test runs on AWS EC2 and uses the CentOS-7.6.1810-Nitro (ami-028946f4cffc8b916) image. The components and types of instances are as follows: - -| Component | Instance type | -| :--- | :-------- | -| PD | r5d.xlarge | -| TiKV | c5d.4xlarge | -| TiDB | c5.4xlarge | - -Sysbench version: 1.0.17 - -## Test plan - -Use Sysbench to import **16 tables, with 10,000,000 rows in each table**. Start three sysbench to add pressure to three TiDB instances. The number of concurrent requests increases incrementally. A single concurrent test lasts 5 minutes. 
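The plan above drives three TiDB instances with three Sysbench processes, while the `prepare` and `run` commands that follow target a single `$tidb_host`. A rough sketch of launching the three clients in parallel is shown here; the host list comes from the cluster topology table later in this report, and port 4000 (the TiDB default) plus the per-client thread count are assumptions.

```bash
# Sketch: one Sysbench client per TiDB instance, run in parallel. $testname
# and $threads are the same variables used by the single-host run command below.
tidb_hosts="172.31.7.80 172.31.5.163 172.31.11.123"

for tidb_host in ${tidb_hosts}; do
    sysbench "${testname}" \
        --threads="${threads}" \
        --time=300 \
        --report-interval=15 \
        --rand-type=uniform \
        --rand-seed=$RANDOM \
        --db-driver=mysql \
        --mysql-db=sbtest \
        --mysql-host="${tidb_host}" \
        --mysql-port=4000 \
        --mysql-user=root \
        --mysql-password=password \
        run --tables=16 --table-size=10000000 &
done
wait   # wait for all three clients to finish before collecting results
```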
- -Prepare data using the following command: - -{{< copyable "shell-regular" >}} - -```sh -sysbench oltp_common \ - --threads=16 \ - --rand-type=uniform \ - --db-driver=mysql \ - --mysql-db=sbtest \ - --mysql-host=$tidb_host \ - --mysql-port=$tidb_port \ - --mysql-user=root \ - --mysql-password=password \ - prepare --tables=16 --table-size=10000000 -``` - -Then test TiDB using the following command: - -{{< copyable "shell-regular" >}} - -```sh -sysbench $testname \ - --threads=$threads \ - --time=300 \ - --report-interval=15 \ - --rand-type=uniform \ - --rand-seed=$RANDOM \ - --db-driver=mysql \ - --mysql-db=sbtest \ - --mysql-host=$tidb_host \ - --mysql-port=$tidb_port \ - --mysql-user=root \ - --mysql-password=password \ - run --tables=16 --table-size=10000000 -``` - -### TiDB version information - -### v3.0.0 - -| Component | GitHash | -| :--- | :-------------------------------------- | -| TiDB | `8efbe62313e2c1c42fd76d35c6f020087eef22c2` | -| TiKV | `a467f410d235fa9c5b3c355e3b620f81d3ac0e0c` | -| PD | `70aaa5eee830e21068f1ba2d4c9bae59153e5ca3` | - -### v2.1.13 - -| Component | GitHash | -| :--- | :-------------------------------------- | -| TiDB | `6b5b1a6802f9b8f5a22d8aab24ac80729331e1bc` | -| TiKV | `b3cf3c8d642534ea6fa93d475a46da285cc6acbf` | -| PD | `886362ebfb26ef0834935afc57bcee8a39c88e54` | - -### TiDB parameter configuration - -Enable the prepared plan cache in both TiDB v2.1 and v3.0 (`point select` and `read write` are not enabled in v2.1 for optimization reasons): - -```toml -[prepared-plan-cache] -enabled = true -``` - -Then configure global variables: - -{{< copyable "sql" >}} - -```sql -set global tidb_hashagg_final_concurrency=1; -``` - -{{< copyable "sql" >}} - -```sql -set global tidb_hashagg_partial_concurrency=1; -``` - -{{< copyable "sql" >}} - -```sql -set global tidb_disable_txn_auto_retry=0; -``` - -In addition, make the following configuration in v3.0: - -```toml -[tikv-client] -max-batch-wait-time = 2000000 -``` - -### TiKV parameter configuration - -Configure the global variable in both TiDB v2.1 and v3.0: - -```toml -log-level = "error" -[readpool.storage] -normal-concurrency = 10 -[server] -grpc-concurrency = 6 -[rocksdb.defaultcf] -block-cache-size = "14GB" -[rocksdb.writecf] -block-cache-size = "8GB" -[rocksdb.lockcf] -block-cache-size = "1GB" -``` - -In addition, make the following configuration in v3.0: - -```toml -[raftstore] -apply-pool-size = 3 -store-pool-size = 3 -``` - -### Cluster topology - -| Machine IP | Deployment instance | -| :-------------------------------------- | :--------- | -| 172.31.8.8 | 3 * Sysbench | -| 172.31.7.80, 172.31.5.163, 172.31.11.123 | PD | -| 172.31.4.172, 172.31.1.155, 172.31.9.210 | TiKV | -| 172.31.7.80, 172.31.5.163, 172.31.11.123 | TiDB | - -## Test result - -### `Point Select` test - -**v2.1:** - -| Threads | QPS | 95% latency(ms) | -| :------- | :-------- | :------------- | -| 150 | 240304.06 | 1.61 | -| 300 | 276635.75 | 2.97 | -| 600 | 307838.06 | 5.18 | -| 900 | 323667.93 | 7.30 | -| 1200 | 330925.73 | 9.39 | -| 1500 | 336250.38 | 11.65 | - - - -**v3.0:** - -| Threads | QPS | 95% latency(ms) | -| :------- | :-------- | :-------------- | -| 150 | 334219.04 | 0.64 | -| 300 | 456444.86 | 1.10 | -| 600 | 512177.48 | 2.11 | -| 900 | 525945.13 | 3.13 | -| 1200 | 534577.36 | 4.18 | -| 1500 | 533944.64 | 5.28 | - -![point select](/media/sysbench_v4_point_select.png) - -### `Update Non-Index` test - -**v2.1:** - -| Threads | QPS | 95% latency (ms) | -| :------- | :------- | :-------------- | -| 150 | 21785.37 | 8.58 | 
-| 300 | 28979.27 | 13.70 | -| 600 | 34629.72 | 24.83 | -| 900 | 36410.06 | 43.39 | -| 1200 | 37174.15 | 62.19 | -| 1500 | 37408.88 | 87.56 | - -**v3.0:** - -| Threads | QPS | 95% latency (ms) | -| :------- | :------- | :-------------- | -| 150 | 28045.75 | 6.67 | -| 300 | 39237.77 | 9.91 | -| 600 | 49536.56 | 16.71 | -| 900 | 55963.73 | 22.69 | -| 1200 | 59904.02 | 29.72 | -| 1500 | 62247.95 | 42.61 | - -![update non-index](/media/sysbench_v4_update_non_index.png) - -### `Update Index` test - -**v2.1:** - -| Threads | QPS | 95% latency(ms) | -| :------- | :------- | :-------------- | -| 150 | 14378.24 | 13.22 | -| 300 | 16916.43 | 24.38 | -| 600 | 17636.11 | 57.87 | -| 900 | 17740.92 | 95.81 | -| 1200 | 17929.24 | 130.13 | -| 1500 | 18012.80 | 161.51 | - -**v3.0:** - -| Threads | QPS | 95% latency(ms) | -| :------- | :------- | :-------------- | -| 150 | 19047.32 | 10.09 | -| 300 | 24467.64 | 16.71 | -| 600 | 28882.66 | 31.94 | -| 900 | 30298.41 | 57.87 | -| 1200 | 30419.40 | 92.42 | -| 1500 | 30643.55 | 125.52 | - -![update index](/media/sysbench_v4_update_index.png) - -### `Read Write` test - -**v2.1:** - -| Threads | QPS | 95% latency(ms) | -| :------- | :-------- | :-------------- | -| 150 | 85140.60 | 44.98 | -| 300 | 96773.01 | 82.96 | -| 600 | 105139.81 | 153.02 | -| 900 | 110041.83 | 215.44 | -| 1200 | 113242.70 | 277.21 | -| 1500 | 114542.19 | 337.94 | - - - -**v3.0:** - -| Threads | QPS | 95% latency(ms) | -| :------- | :-------- | :-------------- | -| 150 | 105692.08 | 35.59 | -| 300 | 129769.69 | 58.92 | -| 600 | 141430.86 | 114.72 | -| 900 | 144371.76 | 170.48 | -| 1200 | 143344.37 | 223.34 | -| 1500 | 144567.91 | 277.21 | - -![read write](/media/sysbench_v4_read_write.png) diff --git a/benchmark/v3.0-performance-benchmarking-with-tpcc.md b/benchmark/v3.0-performance-benchmarking-with-tpcc.md deleted file mode 100644 index 3e0daae7b5dd1..0000000000000 --- a/benchmark/v3.0-performance-benchmarking-with-tpcc.md +++ /dev/null @@ -1,98 +0,0 @@ ---- -title: TiDB TPC-C Performance Test Report -- v3.0 vs. v2.1 -aliases: ['/docs/dev/benchmark/v3.0-performance-benchmarking-with-tpcc/','/docs/dev/benchmark/tpcc/'] -summary: TiDB v3.0 outperforms v2.1 in TPC-C performance test. With 1000 warehouses, v3.0 achieved 450% higher performance than v2.1. ---- - -# TiDB TPC-C Performance Test Report -- v3.0 vs. v2.1 - -## Test purpose - -This test aims to compare the TPC-C performance of TiDB 3.0 and TiDB 2.1. - -## Test version, time, and place - -TiDB version: v3.0.0 vs. v2.1.13 - -Time: June, 2019 - -Place: Beijing - -## Test environment - -IDC machine: - -| Type | Name | -| :-- | :-- | -| OS | Linux (CentOS 7.3.1611) | -| CPU | 40 vCPUs, Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz | -| RAM | 128GB | -| DISK | 1.5TB SSD \* 2 | - -This test uses the open-source BenchmarkSQL 5.0 as the TPC-C testing tool and adds the support for the MySQL protocol. You can download the testing program by using the following command: - -{{< copyable "shell-regular" >}} - -```shell -git clone -b 5.0-mysql-support-opt https://github.com/pingcap/benchmarksql.git -``` - -## Test plan - -Use BenchmarkSQL to load the data of **1000 warehouses** into the TiDB cluster. By using HAProxy, send concurrent requests to the cluster at an incremental number. A single concurrent test lasts 10 minutes. 
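The report does not show the BenchmarkSQL invocation itself. Below is a hedged sketch of the concurrency sweep, reusing the `props.mysql`/`runBenchmark.sh` workflow described in the other TPC-C reports in this document; the values 128/256/512 match the thread counts in the result table, and mapping them to the `terminals` property is an assumption.

```bash
# Sketch: sweep the concurrency levels from the result table by rewriting the
# terminals property in props.mysql and rerunning BenchmarkSQL. Assumes the
# 1000-warehouse data has already been loaded and that props.mysql points at
# the HAProxy address in front of the TiDB instances.
cd benchmarksql/run

for terminals in 128 256 512; do
    sed -i "s/^terminals=.*/terminals=${terminals}/" props.mysql
    sed -i "s/^runMins=.*/runMins=10/" props.mysql    # 10 minutes per round
    ./runBenchmark.sh ./props.mysql | tee "tpcc_${terminals}.log"
done
```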
- -### TiDB version information - -### v3.0.0 - -| Component | GitHash | -| :-- | :-- | -| TiDB | 46c38e15eba43346fb3001280c5034385171ee20 | -| TiKV | a467f410d235fa9c5b3c355e3b620f81d3ac0e0c | -| PD | 70aaa5eee830e21068f1ba2d4c9bae59153e5ca3 | - -### v2.1.13 - -| Component | GitHash | -| :-- | :-- | -| TiDB | 6b5b1a6802f9b8f5a22d8aab24ac80729331e1bc | -| TiKV | b3cf3c8d642534ea6fa93d475a46da285cc6acbf | -| PD | 886362ebfb26ef0834935afc57bcee8a39c88e54 | - -### TiDB parameter configuration - -```toml -[log] -level = "error" -[performance] -max-procs = 20 -[prepared_plan_cache] -enabled = true -``` - -### TiKV parameter configuration - -The default TiKV configuration is used in both v2.1 and v3.0. - -### Cluster topology - -| Machine IP | Deployment instance | -| :-- | :-- | -| 172.16.4.75 | 2\*TiDB 2\*TiKV 1\*pd | -| 172.16.4.76 | 2\*TiDB 2\*TiKV 1\*pd | -| 172.16.4.77 | 2\*TiDB 2\*TiKV 1\*pd | - -## Test result - -| Version | Threads | tpmC | -| :-- | :-- | :-- | -| v3.0 | 128 | 44068.55 | -| v3.0 | 256 | 47094.06 | -| v3.0 | 512 | 48808.65 | -| v2.1 | 128 | 10641.71 | -| v2.1 | 256 | 10861.62 | -| v2.1 | 512 | 10965.39 | - -![point select](/media/tpcc-2.1-3.0.png) - -According to the testing statistics, the performance of TiDB 3.0 **has increased by 450%** than that of TiDB 2.1. diff --git a/benchmark/v4.0-performance-benchmarking-with-tpcc.md b/benchmark/v4.0-performance-benchmarking-with-tpcc.md deleted file mode 100644 index 1169a9a92f492..0000000000000 --- a/benchmark/v4.0-performance-benchmarking-with-tpcc.md +++ /dev/null @@ -1,131 +0,0 @@ ---- -title: TiDB TPC-C Performance Test Report -- v4.0 vs. v3.0 -summary: Compare the TPC-C performance of TiDB 4.0 and TiDB 3.0 using BenchmarkSQL. -aliases: ['/docs/dev/benchmark/v4.0-performance-benchmarking-with-tpcc/'] ---- - -# TiDB TPC-C Performance Test Report -- v4.0 vs. v3.0 - -## Test purpose - -This test aims to compare the TPC-C performance of TiDB 4.0 and TiDB 3.0 in the Online Transactional Processing (OLTP) scenario. 
- -## Test environment (AWS EC2) - -### Hardware configuration - -| Service type | EC2 type | Instance count | -|:----------|:----------|:----------| -| PD | m5.xlarge | 3 | -| TiKV | i3.4xlarge| 3 | -| TiDB | c5.4xlarge| 3 | -| TPC-C | m5.4xlarge| 1 | - -### Software version - -| Service type | Software version | -|:----------|:-----------| -| PD | 3.0 and 4.0 | -| TiDB | 3.0 and 4.0 | -| TiKV | 3.0 and 4.0 | -| BenchmarkSQL | None | - -### Parameter configuration - -#### TiDB v3.0 configuration - -{{< copyable "" >}} - -```yaml -log.level: "error" -performance.max-procs: 20 -prepared-plan-cache.enabled: true -tikv-client.max-batch-wait-time: 2000000 -``` - -#### TiKV v3.0 configuration - -{{< copyable "" >}} - -```yaml -storage.scheduler-worker-pool-size: 5 -raftstore.store-pool-size: 3 -raftstore.apply-pool-size: 3 -rocksdb.max-background-jobs: 3 -raftdb.max-background-jobs: 3 -raftdb.allow-concurrent-memtable-write: true -server.grpc-concurrency: 6 -readpool.storage.normal-concurrency: 10 -readpool.coprocessor.normal-concurrency: 5 -``` - -#### TiDB v4.0 configuration - -{{< copyable "" >}} - -```yaml -log.level: "error" -performance.max-procs: 20 -prepared-plan-cache.enabled: true -tikv-client.max-batch-wait-time: 2000000 -``` - -#### TiKV v4.0 configuration - -{{< copyable "" >}} - -```yaml -storage.scheduler-worker-pool-size: 5 -raftstore.store-pool-size: 3 -raftstore.apply-pool-size: 3 -rocksdb.max-background-jobs: 3 -raftdb.max-background-jobs: 3 -raftdb.allow-concurrent-memtable-write: true -server.grpc-concurrency: 6 -readpool.unified.min-thread-count: 5 -readpool.unified.max-thread-count: 20 -readpool.storage.normal-concurrency: 10 -pessimistic-txn.pipelined: true -``` - -#### Global variable configuration - -{{< copyable "sql" >}} - -```sql -set global tidb_hashagg_final_concurrency=1; -set global tidb_hashagg_partial_concurrency=1; -set global tidb_disable_txn_auto_retry=0; -``` - -## Test plan - -1. Deploy TiDB v4.0 and v3.0 using TiUP. - -2. Use BenchmarkSQL to import the TPC-C 5000 Warehouse data. - - 1. Compile BenchmarkSQL: - - {{< copyable "bash" >}} - - ```bash - git clone https://github.com/pingcap/benchmarksql && cd benchmarksql && ant - ``` - - 2. Enter the `run` directory, edit the `props.mysql` file according to the actual situation, and modify the `conn`, `warehouses`, `loadWorkers`, `terminals`, and `runMins` configuration items. - - 3. Execute the `runSQL.sh ./props.mysql sql.mysql/tableCreates.sql` command. - - 4. Execute the `runSQL.sh ./props.mysql sql.mysql/indexCreates.sql` command. - - 5. Run MySQL client and execute the `analyze table` statement on every table. - -3. Execute the `runBenchmark.sh ./props.mysql` command. - -4. Extract the tpmC data of New Order from the result. - -## Test result - -According to the test statistics, the TPC-C performance of TiDB v4.0 has **increased by 50%** compared with that of TiDB v3.0. - -![TPC-C](/media/tpcc-v4vsv3.png) diff --git a/benchmark/v4.0-performance-benchmarking-with-tpch.md b/benchmark/v4.0-performance-benchmarking-with-tpch.md deleted file mode 100644 index 801db52f05b69..0000000000000 --- a/benchmark/v4.0-performance-benchmarking-with-tpch.md +++ /dev/null @@ -1,209 +0,0 @@ ---- -title: TiDB TPC-H Performance Test Report -- v4.0 vs. v3.0 -summary: Compare the TPC-H performance of TiDB 4.0 and TiDB 3.0. -aliases: ['/docs/dev/benchmark/v4.0-performance-benchmarking-with-tpch/'] ---- - -# TiDB TPC-H Performance Test Report -- v4.0 vs. 
v3.0 - -## Test purpose - -This test aims to compare the TPC-H performance of TiDB 4.0 and TiDB 3.0 in the online analytical processing (OLAP) scenario. - -Because [TiFlash](/tiflash/tiflash-overview.md) is introduced in TiDB v4.0, which enhances TiDB's Hybrid Transactional and Analytical Processing (HTAP) capabilities, test objects in this report are as follows: - -+ TiDB v3.0 that reads data only from TiKV. -+ TiDB v4.0 that reads data only from TiKV. -+ TiDB v4.0 that reads data from TiKV and TiFlash automatically based on intelligent choice. - -## Test environment (AWS EC2) - -### Hardware configuration - -| Service type | EC2 type | Instance count | -|:----------------|:------------|:----| -| PD | m5.xlarge | 3 | -| TiDB | c5.4xlarge | 2 | -| TiKV & TiFlash | i3.4xlarge | 3 | -| TPC-H | m5.xlarge | 1 | - -### Software version - -| Service type | Software version | -|:----------|:-----------| -| PD | 3.0 and 4.0 | -| TiDB | 3.0 and 4.0 | -| TiKV | 3.0 and 4.0 | -| TiFlash | 4.0 | -| tiup-bench | 0.2 | - -### Parameter configuration - -#### v3.0 - -For v3.0, TiDB, TiKV, and PD use the default parameter configuration. - -##### Variable configuration - -{{< copyable "sql" >}} - -```sql -set global tidb_distsql_scan_concurrency = 30; -set global tidb_projection_concurrency = 16; -set global tidb_hashagg_partial_concurrency = 16; -set global tidb_hashagg_final_concurrency = 16; -set global tidb_hash_join_concurrency = 16; -set global tidb_index_lookup_concurrency = 16; -set global tidb_index_lookup_join_concurrency = 16; -``` - -#### v4.0 - -For v4.0, TiDB uses the default parameter configuration. - -##### TiKV configuration - -{{< copyable "" >}} - -```yaml -readpool.storage.use-unified-pool: false -readpool.coprocessor.use-unified-pool: true -``` - -##### PD configuration - -{{< copyable "" >}} - -```yaml -replication.enable-placement-rules: true -``` - -##### TiFlash configuration - -{{< copyable "" >}} - -```yaml -logger.level: "info" -learner_config.log-level: "info" -``` - -##### Variable configuration - -> **Note:** -> -> There might be session variable(s). It is recommended that all queries are executed in the current session. - -{{< copyable "sql" >}} - -```sql -set global tidb_allow_batch_cop = 1; -set session tidb_opt_distinct_agg_push_down = 1; -set global tidb_distsql_scan_concurrency = 30; -set global tidb_projection_concurrency = 16; -set global tidb_hashagg_partial_concurrency = 16; -set global tidb_hashagg_final_concurrency = 16; -set global tidb_hash_join_concurrency = 16; -set global tidb_index_lookup_concurrency = 16; -set global tidb_index_lookup_join_concurrency = 16; -``` - -## Test plan - -### Hardware prerequisite - -To avoid TiKV and TiFlash racing for disk and I/O resources, mount the two NVMe SSD disks configured on EC2 to `/data1` and `/data2`. Deploy TiKV on `/data1` and deploy TiFlash on `/data2`. - -### Test process - -1. Deploy TiDB v4.0 and v3.0 using [TiUP](/tiup/tiup-overview.md#tiup-overview). - -2. Use the bench tool of TiUP to import the TPC-H data with the scale factor 10. 
- - * Execute the following command to import data into v3.0: - - {{< copyable "bash" >}} - - ```bash - tiup bench tpch prepare \ - --host ${tidb_v3_host} --port ${tidb_v3_port} --db tpch_10 \ - --sf 10 \ - --analyze --tidb_build_stats_concurrency 8 --tidb_distsql_scan_concurrency 30 - ``` - - * Execute the following command to import data into v4.0: - - {{< copyable "bash" >}} - - ```bash - tiup bench tpch prepare \ - --host ${tidb_v4_host} --port ${tidb_v4_port} --db tpch_10 --password ${password} \ - --sf 10 \ - --tiflash \ - --analyze --tidb_build_stats_concurrency 8 --tidb_distsql_scan_concurrency 30 - ``` - -3. Execute the TPC-H queries. - - 1. Download the TPC-H SQL query file: - - {{< copyable "" >}} - - ```bash - git clone https://github.com/pingcap/tidb-bench.git && cd tpch/queries - ``` - - 2. Execute TPC-H queries and record the executing time of each query. - - * For TiDB v3.0, use the MySQL client to connect to TiDB, execute the queries, and record the execution time of each query. - * For TiDB v4.0, use the MySQL client to connect to TiDB, and choose one of the following operations based on where data is read from: - * If data is read only from TiKV, set `set @@session.tidb_isolation_read_engines = 'tikv,tidb';`, execute the queries, and record the execution time of each query. - * If data is read from TiKV and TiFlash automatically based on cost-based intelligent choice, set `set @@session.tidb_isolation_read_engines = 'tikv,tiflash,tidb';`, execute the query, and record the execution time of each query. - -4. Extract and organize the data of query execution time. - -## Test result - -> **Note:** -> -> The tables on which SQL statements are executed in this test only have primary keys and do not have secondary indexes. Therefore, the test result below is not influenced by indexes. - -| Query ID | v3.0 | v4.0 TiKV Only | v4.0 TiKV/TiFlash Automatically | -| :-------- | :----------- | :------------ | :-------------- | -| 1 | 7.78 s | 7.45 s | 2.09 s | -| 2 | 3.15 s | 1.71 s | 1.71 s | -| 3 | 6.61 s | 4.10 s | 4.05 s | -| 4 | 2.98 s | 2.56 s | 1.87 s | -| 5 | 20.35 s | 5.71 s | 8.53 s | -| 6 | 4.75 s | 2.44 s | 0.39 s | -| 7 | 7.97 s | 3.72 s | 3.59 s | -| 8 | 5.89 s | 3.22 s | 8.59 s | -| 9 | 34.08 s | 11.87 s | 15.41 s | -| 10 | 4.83 s | 2.75 s | 3.35 s | -| 11 | 3.98 s | 1.60 s | 1.59 s | -| 12 | 5.63 s | 3.40 s | 1.03 s | -| 13 | 5.41 s | 4.56 s | 4.02 s | -| 14 | 5.19 s | 3.10 s | 0.78 s | -| 15 | 10.25 s | 1.82 s | 1.26 s | -| 16 | 2.46 s | 1.51 s | 1.58 s | -| 17 | 23.76 s | 12.38 s | 8.52 s | -| 18 | 17.14 s | 16.38 s | 16.06 s | -| 19 | 5.70 s | 4.59 s | 3.20 s | -| 20 | 4.98 s | 1.89 s | 1.29 s | -| 21 | 11.12 s | 6.23 s | 6.26 s | -| 22 | 4.49 s | 3.05 s | 2.31 s | - -![TPC-H](/media/tpch-v4vsv3.png) - -In the performance diagram above: - -+ Blue lines represent v3.0; -+ Red lines represent v4.0 (data read only from TiKV); -+ Yellow lines represent v4.0 (data read from TiKV and TiFlash automatically based on intelligent choice). -+ The y-axis represents the execution time of the query. The less the time, the better the performance. - -Result description: - -+ **v4.0 TiKV Only** means that TiDB reads data only from TiKV. The result shows that the TPC-H performance increased after TiDB and TiKV are upgraded to v4.0. -+ **v4.0 TiKV/TiFlash Automatically** means that the TiDB optimizer automatically determines whether to read data from the TiFlash replica according to the cost estimation. 
The result shows that the TPC-H performance increased in the full HTAP form of v4.0. - -From the diagram above, you can see that TPC-H performance increases by about 100% on average over a set of 22 queries. diff --git a/benchmark/v5.0-performance-benchmarking-with-tpcc.md b/benchmark/v5.0-performance-benchmarking-with-tpcc.md deleted file mode 100644 index 5472e67664cca..0000000000000 --- a/benchmark/v5.0-performance-benchmarking-with-tpcc.md +++ /dev/null @@ -1,149 +0,0 @@ ---- -title: TiDB TPC-C Performance Test Report -- v5.0 vs. v4.0 -summary: TiDB v5.0 outperforms v4.0 in TPC-C performance, showing a 36% increase. ---- - -# TiDB TPC-C Performance Test Report -- v5.0 vs. v4.0 - -## Test purpose - -This test aims at comparing the TPC-C performance of TiDB v5.0 and TiDB v4.0 in the Online Transactional Processing (OLTP) scenario. - -## Test environment (AWS EC2) - -### Hardware configuration - -| Service type | EC2 type | Instance count | -|:----------|:----------|:----------| -| PD | m5.xlarge | 3 | -| TiKV | i3.4xlarge| 3 | -| TiDB | c5.4xlarge| 3 | -| TPC-C | c5.9xlarge| 1 | - -### Software version - -| Service type | Software version | -|:----------|:-----------| -| PD | 4.0 and 5.0 | -| TiDB | 4.0 and 5.0 | -| TiKV | 4.0 and 5.0 | -| BenchmarkSQL | None | - -### Parameter configuration - -#### TiDB v4.0 configuration - -{{< copyable "" >}} - -```yaml -log.level: "error" -performance.max-procs: 20 -prepared-plan-cache.enabled: true -tikv-client.max-batch-wait-time: 2000000 -``` - -#### TiKV v4.0 configuration - -{{< copyable "" >}} - -```yaml -pessimistic-txn.pipelined: true -raftdb.allow-concurrent-memtable-write: true -raftdb.max-background-jobs: 4 -raftstore.apply-max-batch-size: 2048 -raftstore.apply-pool-size: 3 -raftstore.store-max-batch-size: 2048 -raftstore.store-pool-size: 3 -readpool.storage.normal-concurrency: 10 -readpool.unified.max-thread-count: 20 -readpool.unified.min-thread-count: 5 -rocksdb.max-background-jobs: 8 -server.grpc-concurrency: 6 -storage.scheduler-worker-pool-size: 20 -``` - -#### TiDB v5.0 configuration - -{{< copyable "" >}} - -```yaml -log.level: "error" -performance.max-procs: 20 -prepared-plan-cache.enabled: true -tikv-client.max-batch-wait-time: 2000000 -``` - -#### TiKV v5.0 configuration - -{{< copyable "" >}} - -```yaml -pessimistic-txn.pipelined: true -raftdb.allow-concurrent-memtable-write: true -raftdb.max-background-jobs: 4 -raftstore.apply-max-batch-size: 2048 -raftstore.apply-pool-size: 3 -raftstore.store-max-batch-size: 2048 -raftstore.store-pool-size: 3 -readpool.storage.normal-concurrency: 10 -readpool.unified.max-thread-count: 20 -readpool.unified.min-thread-count: 5 -rocksdb.max-background-jobs: 8 -server.grpc-concurrency: 6 -storage.scheduler-worker-pool-size: 20 -server.enable-request-batch: false -``` - -#### TiDB v4.0 global variable configuration - -{{< copyable "sql" >}} - -```sql -set global tidb_hashagg_final_concurrency=1; -set global tidb_hashagg_partial_concurrency=1; -``` - -#### TiDB v5.0 global variable configuration - -{{< copyable "sql" >}} - -```sql -set global tidb_hashagg_final_concurrency=1; -set global tidb_hashagg_partial_concurrency=1; -set global tidb_enable_async_commit = 1; -set global tidb_enable_1pc = 1; -set global tidb_guarantee_linearizability = 0; -set global tidb_enable_clustered_index = 1; -``` - -## Test plan - -1. Deploy TiDB v5.0 and v4.0 using TiUP. - -2. Use BenchmarkSQL to import the TPC-C 5000 Warehouse data. - - 1. 
Compile BenchmarkSQL: - - {{< copyable "bash" >}} - - ```bash - git clone https://github.com/pingcap/benchmarksql && cd benchmarksql && ant - ``` - - 2. Enter the `run` directory, edit the `props.mysql` file according to the actual situation, and modify the `conn`, `warehouses`, `loadWorkers`, `terminals`, and `runMins` configuration items. - - 3. Execute the `runSQL.sh ./props.mysql sql.mysql/tableCreates.sql` command. - - 4. Execute the `runSQL.sh ./props.mysql sql.mysql/indexCreates.sql` command. - - 5. Run MySQL client and execute the `analyze table` statement on every table. - -3. Execute the `runBenchmark.sh ./props.mysql` command. - -4. Extract the tpmC data of New Order from the result. - -## Test result - -According to the test statistics, the TPC-C performance of TiDB v5.0 has **increased by 36%** compared with that of TiDB v4.0. - -![TPC-C](/media/tpcc_v5vsv4_corrected_v2.png) diff --git a/benchmark/v5.1-performance-benchmarking-with-tpcc.md b/benchmark/v5.1-performance-benchmarking-with-tpcc.md deleted file mode 100644 index a9d8a26503b69..0000000000000 --- a/benchmark/v5.1-performance-benchmarking-with-tpcc.md +++ /dev/null @@ -1,93 +0,0 @@ ---- -title: TiDB TPC-C Performance Test Report -- v5.1.0 vs. v5.0.2 -summary: TiDB v5.1.0 TPC-C performance is 2.8% better than v5.0.2. Parameter configuration is the same for both versions. Test plan includes deployment, database creation, data import, stress testing, and result extraction. ---- - -# TiDB TPC-C Performance Test Report -- v5.1.0 vs. v5.0.2 - -## Test overview - -This test aims to compare the TPC-H performance of TiDB v5.1.0 and TiDB v5.0.2 in the online analytical processing (OLAP) scenario. The results show that compared with v5.0.2, the TPC-C performance of v5.1.0 is improved by 2.8%. - -## Test environment (AWS EC2) - -## Hardware configuration - -| Service type | EC2 type | Instance count | -|:----------|:----------|:----------| -| PD | m5.xlarge | 3 | -| TiKV | i3.4xlarge| 3 | -| TiDB | c5.4xlarge| 3 | -| TPC-C | c5.9xlarge| 1 | - -### Software version - -| Service type | Software version | -|:----------|:-----------| -| PD | v5.0.2 and v5.1.0 | -| TiDB | v5.0.2 and v5.1.0 | -| TiKV | v5.0.2 and v5.1.0 | -| TiUP | 1.5.1 | - -### Parameter configuration - -TiDB v5.1.0 and TiDB v5.0.2 use the same configuration. - -#### TiDB parameter configuration - -{{< copyable "" >}} - -```yaml -log.level: "error" -performance.max-procs: 20 -prepared-plan-cache.enabled: true -tikv-client.max-batch-wait-time: 2000000 -``` - -#### TiKV parameter configuration - -{{< copyable "" >}} - -```yaml -pessimistic-txn.pipelined: true -raftdb.allow-concurrent-memtable-write: true -raftdb.max-background-jobs: 4 -raftstore.apply-max-batch-size: 2048 -raftstore.apply-pool-size: 3 -raftstore.store-max-batch-size: 2048 -raftstore.store-pool-size: 3 -readpool.storage.normal-concurrency: 10 -readpool.unified.max-thread-count: 20 -readpool.unified.min-thread-count: 5 -rocksdb.max-background-jobs: 8 -server.grpc-concurrency: 6 -storage.scheduler-worker-pool-size: 20 -server.enable-request-batch: false -``` - -#### TiDB global variable configuration - -{{{< copyable "sql" >}} - -```sql -set global tidb_hashagg_final_concurrency=1; -set global tidb_hashagg_partial_concurrency=1; -set global tidb_enable_async_commit = 1; -set global tidb_enable_1pc = 1; -set global tidb_guarantee_linearizability = 0; -set global tidb_enable_clustered_index = 1; -``` - -## Test plan - -1. Deploy TiDB v5.1.0 and v5.0.2 using TiUP. -2. 
Create a database named `tpcc`: `create database tpcc;`. -3. Use BenchmarkSQL to import the TPC-C 5000 Warehouse data: `tiup bench tpcc prepare --warehouses 5000 --db tpcc -H 127.0.0.1 -p 4000`. -4. Execute the `tiup bench tpcc run -U root --db tpcc --host 127.0.0.1 --port 4000 --time 300s --warehouses 5000 --threads {{thread}}` command to perform stress tests on TiDB via HAProxy. -5. Extract the tpmC data of New Order from the result. - -## Test result - -Compared with v5.0.2, the TPC-C performance of v5.1.0 is **improved by 2.8%**. - -![TPC-C](/media/tpcc_v510_vs_v502.png) diff --git a/benchmark/v5.2-performance-benchmarking-with-tpcc.md b/benchmark/v5.2-performance-benchmarking-with-tpcc.md deleted file mode 100644 index 8d26565e7f607..0000000000000 --- a/benchmark/v5.2-performance-benchmarking-with-tpcc.md +++ /dev/null @@ -1,93 +0,0 @@ ---- -title: TiDB TPC-C Performance Test Report -- v5.2.0 vs. v5.1.1 -summary: TiDB v5.2.0 TPC-C performance is 4.22% lower than v5.1.1. Test environment AWS EC2. Hardware and software configurations are the same for both versions. Test plan includes deployment, database creation, data import, stress testing, and result extraction. ---- - -# TiDB TPC-C Performance Test Report -- v5.2.0 vs. v5.1.1 - -## Test overview - -This test aims to compare the TPC-C performance of TiDB v5.2.0 and TiDB v5.1.1 in the online transactional processing (OLTP) scenario. The results show that compared with v5.1.1, the TPC-C performance of v5.2.0 is reduced by 4.22%. - -## Test environment (AWS EC2) - -### Hardware configuration - -| Service type | EC2 type | Instance count | -|:----------|:----------|:----------| -| PD | m5.xlarge | 3 | -| TiKV | i3.4xlarge| 3 | -| TiDB | c5.4xlarge| 3 | -| TPC-C | c5.9xlarge| 1 | - -### Software version - -| Service type | Software version | -|:----------|:-----------| -| PD | v5.1.1 and v5.2.0 | -| TiDB | v5.1.1 and v5.2.0 | -| TiKV | v5.1.1 and v5.2.0 | -| TiUP | 1.5.1 | - -### Parameter configuration - -TiDB v5.2.0 and TiDB v5.1.1 use the same configuration. - -#### TiDB parameter configuration - -{{< copyable "" >}} - -```yaml -log.level: "error" -performance.max-procs: 20 -prepared-plan-cache.enabled: true -tikv-client.max-batch-wait-time: 2000000 -``` - -#### TiKV parameter configuration - -{{< copyable "" >}} - -```yaml -pessimistic-txn.pipelined: true -raftdb.allow-concurrent-memtable-write: true -raftdb.max-background-jobs: 4 -raftstore.apply-max-batch-size: 2048 -raftstore.apply-pool-size: 3 -raftstore.store-max-batch-size: 2048 -raftstore.store-pool-size: 3 -readpool.storage.normal-concurrency: 10 -readpool.unified.max-thread-count: 20 -readpool.unified.min-thread-count: 5 -rocksdb.max-background-jobs: 8 -server.grpc-concurrency: 6 -storage.scheduler-worker-pool-size: 20 -server.enable-request-batch: false -``` - -#### TiDB global variable configuration - -{{< copyable "sql" >}} - -```sql -set global tidb_hashagg_final_concurrency=1; -set global tidb_hashagg_partial_concurrency=1; -set global tidb_enable_async_commit = 1; -set global tidb_enable_1pc = 1; -set global tidb_guarantee_linearizability = 0; -set global tidb_enable_clustered_index = 1; -``` - -## Test plan - -1. Deploy TiDB v5.2.0 and v5.1.1 using TiUP. -2. Create a database named `tpcc`: `create database tpcc;`. -3. Use BenchmarkSQL to import the TPC-C 5000 Warehouse data: `tiup bench tpcc prepare --warehouses 5000 --db tpcc -H 127.0.0.1 -p 4000`. -4.
Execute the `tiup bench tpcc run -U root --db tpcc --host 127.0.0.1 --port 4000 --time 300s --warehouses 5000 --threads {{thread}}` command to perform stress tests on TiDB via HAProxy. -5. Extract the tpmC data of New Order from the result. - -## Test result - -Compared with v5.1.1, the TPC-C performance of v5.2.0 is **reduced by 4.22%**. - -![TPC-C](/media/tpcc_v511_vs_v520.png) diff --git a/benchmark/v5.3-performance-benchmarking-with-tpcc.md b/benchmark/v5.3-performance-benchmarking-with-tpcc.md deleted file mode 100644 index a5a4c35261850..0000000000000 --- a/benchmark/v5.3-performance-benchmarking-with-tpcc.md +++ /dev/null @@ -1,129 +0,0 @@ ---- -title: TiDB TPC-C Performance Test Report -- v5.3.0 vs. v5.2.2 -summary: TiDB v5.3.0 TPC-C performance is slightly reduced by 2.99% compared to v5.2.2. The test used AWS EC2 with specific hardware and software configurations. The test plan involved deploying TiDB, creating a database, importing data, and running stress tests. The result showed a decrease in performance across different thread counts. ---- - -# TiDB TPC-C Performance Test Report -- v5.3.0 vs. v5.2.2 - -## Test overview - -This test aims at comparing the TPC-C performance of TiDB v5.3.0 and TiDB v5.2.2 in the online transactional processing (OLTP) scenario. The result shows that compared with v5.2.2, the TPC-C performance of v5.3.0 is reduced by 2.99%. - -## Test environment (AWS EC2) - -### Hardware configuration - -| Service type | EC2 type | Instance count | -|:----------|:----------|:----------| -| PD | m5.xlarge | 3 | -| TiKV | i3.4xlarge| 3 | -| TiDB | c5.4xlarge| 3 | -| TPC-C | c5.9xlarge| 1 | - -### Software version - -| Service type | Software version | -|:----------|:-----------| -| PD | v5.2.2 and v5.3.0 | -| TiDB | v5.2.2 and v5.3.0 | -| TiKV | v5.2.2 and v5.3.0 | -| TiUP | 1.5.1 | - -### Parameter configuration - -TiDB v5.3.0 and TiDB v5.2.2 use the same configuration. - -#### TiDB parameter configuration - -{{< copyable "" >}} - -```yaml -log.level: "error" -performance.max-procs: 20 -prepared-plan-cache.enabled: true -tikv-client.max-batch-wait-time: 2000000 -``` - -#### TiKV parameter configuration - -{{< copyable "" >}} - -```yaml -pessimistic-txn.pipelined: true -raftdb.allow-concurrent-memtable-write: true -raftdb.max-background-jobs: 4 -raftstore.apply-max-batch-size: 2048 -raftstore.apply-pool-size: 3 -raftstore.store-max-batch-size: 2048 -raftstore.store-pool-size: 3 -readpool.storage.normal-concurrency: 10 -readpool.unified.max-thread-count: 20 -readpool.unified.min-thread-count: 5 -rocksdb.max-background-jobs: 8 -server.grpc-concurrency: 6 -storage.scheduler-worker-pool-size: 20 -``` - -#### TiDB global variable configuration - -{{< copyable "sql" >}} - -```sql -set global tidb_hashagg_final_concurrency=1; -set global tidb_hashagg_partial_concurrency=1; -set global tidb_enable_async_commit = 1; -set global tidb_enable_1pc = 1; -set global tidb_guarantee_linearizability = 0; -set global tidb_enable_clustered_index = 1; -``` - -#### HAProxy configuration - haproxy.cfg - -For more details about how to use HAProxy on TiDB, see [Best Practices for Using HAProxy in TiDB](/best-practices/haproxy-best-practices.md). - -```yaml -global # Global configuration. - chroot /var/lib/haproxy # Changes the current directory and sets superuser privileges for the startup process to improve security. - pidfile /var/run/haproxy.pid # Writes the PIDs of HAProxy processes into this file.
- maxconn 4000 # The maximum number of concurrent connections for a single HAProxy process. - user haproxy # Same with the UID parameter. - group haproxy # Same with the GID parameter. A dedicated user group is recommended. - nbproc 64 # The number of processes created when going daemon. When starting multiple processes to forward requests, ensure that the value is large enough so that HAProxy does not block processes. - daemon # Makes the process fork into background. It is equivalent to the command line "-D" argument. It can be disabled by the command line "-db" argument. - -defaults # Default configuration. - log global # Inherits the settings of the global configuration. - retries 2 # The maximum number of retries to connect to an upstream server. If the number of connection attempts exceeds the value, the backend server is considered unavailable. - timeout connect 2s # The maximum time to wait for a connection attempt to a backend server to succeed. It should be set to a shorter time if the server is located on the same LAN as HAProxy. - timeout client 30000s # The maximum inactivity time on the client side. - timeout server 30000s # The maximum inactivity time on the server side. - -listen tidb-cluster # Database load balancing. - bind 0.0.0.0:3390 # The Floating IP address and listening port. - mode tcp # HAProxy uses layer 4, the transport layer. - balance roundrobin # The server with the fewest connections receives the connection. "leastconn" is recommended where long sessions are expected, such as LDAP, SQL and TSE, rather than protocols using short sessions, such as HTTP. The algorithm is dynamic, which means that server weights might be adjusted on the fly for slow starts for instance. - server tidb-1 10.9.18.229:4000 check inter 2000 rise 2 fall 3 # Detects port 4000 at a frequency of once every 2000 milliseconds. If it is detected as successful twice, the server is considered available; if it is detected as failed three times, the server is considered unavailable. - server tidb-2 10.9.39.208:4000 check inter 2000 rise 2 fall 3 - server tidb-3 10.9.64.166:4000 check inter 2000 rise 2 fall 3 -``` - -## Test plan - -1. Deploy TiDB v5.3.0 and v5.2.2 using TiUP. -2. Create a database named `tpcc`: `create database tpcc;`. -3. Use BenchmarkSQL to import the TPC-C 5000 Warehouse data: `tiup bench tpcc prepare --warehouses 5000 --db tpcc -H 127.0.0.1 -p 4000`. -4. Run the `tiup bench tpcc run -U root --db tpcc --host 127.0.0.1 --port 4000 --time 1800s --warehouses 5000 --threads {{thread}}` command to perform stress tests on TiDB via HAProxy. For each concurrency, the test takes 30 minutes. -5. Extract the tpmC data of New Order from the result. - -## Test result - -Compared with v5.2.2, the TPC-C performance of v5.3.0 is **reduced slightly by 2.99%**. - -| Threads | v5.2.2 tpmC | v5.3.0 tpmC | tpmC improvement (%) | -|:----------|:----------|:----------|:----------| -|50|42228.8|41580|-1.54| -|100|49400|48248.2|-2.33| -|200|54436.6|52809.4|-2.99| -|400|57026.7|54117.1|-5.10| - -![TPC-C](/media/tpcc_v522_vs_v530.png) \ No newline at end of file diff --git a/benchmark/v5.4-performance-benchmarking-with-tpcc.md b/benchmark/v5.4-performance-benchmarking-with-tpcc.md deleted file mode 100644 index 05be43f4ca314..0000000000000 --- a/benchmark/v5.4-performance-benchmarking-with-tpcc.md +++ /dev/null @@ -1,129 +0,0 @@ ---- -title: TiDB TPC-C Performance Test Report -- v5.4.0 vs. v5.3.0 -summary: TiDB v5.4.0 TPC-C performance is 3.16% better than v5.3.0. 
The improvement is consistent across different thread counts 2.80% (50 threads), 4.27% (100 threads), 3.45% (200 threads), and 2.11% (400 threads). ---- - -# TiDB TPC-C Performance Test Report -- v5.4.0 vs. v5.3.0 - -## Test overview - -This test aims at comparing the TPC-C performance of TiDB v5.4.0 and v5.3.0 in the Online Transactional Processing (OLTP) scenario. The results show that compared with v5.3.0, the TPC-C performance of v5.4.0 is improved by 3.16%. - -## Test environment (AWS EC2) - -### Hardware configuration - -| Service type | EC2 type | Instance count | -|:----------|:----------|:----------| -| PD | m5.xlarge | 3 | -| TiKV | i3.4xlarge| 3 | -| TiDB | c5.4xlarge| 3 | -| TPC-C | c5.9xlarge| 1 | - -### Software version - -| Service type | Software version | -|:----------|:-----------| -| PD | v5.3.0 and v5.4.0 | -| TiDB | v5.3.0 and v5.4.0 | -| TiKV | v5.3.0 and v5.4.0 | -| TiUP | 1.5.1 | - -### Parameter configuration - -TiDB v5.4.0 and TiDB v5.3.0 use the same configuration. - -#### TiDB parameter configuration - -{{< copyable "" >}} - -```yaml -log.level: "error" -performance.max-procs: 20 -prepared-plan-cache.enabled: true -tikv-client.max-batch-wait-time: 2000000 -``` - -#### TiKV parameter configuration - -{{< copyable "" >}} - -```yaml -pessimistic-txn.pipelined: true -raftdb.allow-concurrent-memtable-write: true -raftdb.max-background-jobs: 4 -raftstore.apply-max-batch-size: 2048 -raftstore.apply-pool-size: 3 -raftstore.store-max-batch-size: 2048 -raftstore.store-pool-size: 3 -readpool.storage.normal-concurrency: 10 -readpool.unified.max-thread-count: 20 -readpool.unified.min-thread-count: 5 -rocksdb.max-background-jobs: 8 -server.grpc-concurrency: 6 -storage.scheduler-worker-pool-size: 20 -``` - -#### TiDB global variable configuration - -{{< copyable "sql" >}} - -```sql -set global tidb_hashagg_final_concurrency=1; -set global tidb_hashagg_partial_concurrency=1; -set global tidb_enable_async_commit = 1; -set global tidb_enable_1pc = 1; -set global tidb_guarantee_linearizability = 0; -set global tidb_enable_clustered_index = 1; -``` - -#### HAProxy configuration - haproxy.cfg - -For more details about how to use HAProxy on TiDB, see [Best Practices for Using HAProxy in TiDB](/best-practices/haproxy-best-practices.md). - -{{< copyable "" >}} - -```yaml -global # Global configuration. - chroot /var/lib/haproxy # Changes the current directory and sets superuser privileges for the startup process to improve security. - pidfile /var/run/haproxy.pid # Writes the PIDs of HAProxy processes into this file. - maxconn 4000 # The maximum number of concurrent connections for a single HAProxy process. - user haproxy # The same with the UID parameter. - group haproxy # The same with the GID parameter. A dedicated user group is recommended. - nbproc 64 # The number of processes created when going daemon. When starting multiple processes to forward requests, ensure that the value is large enough so that HAProxy does not block processes. - daemon # Makes the process fork into background. It is equivalent to the command line "-D" argument. It can be disabled by the command line "-db" argument. -defaults # Default configuration. - log global # Inherits the settings of the global configuration. - retries 2 # The maximum number of retries to connect to an upstream server. If the number of connection attempts exceeds the value, the backend server is considered unavailable. - timeout connect 2s # The maximum time to wait for a connection attempt to a backend server to succeed. 
It should be set to a shorter time if the server is located on the same LAN as HAProxy. - timeout client 30000s # The maximum inactivity time on the client side. - timeout server 30000s # The maximum inactivity time on the server side. -listen tidb-cluster # Database load balancing. - bind 0.0.0.0:3390 # The Floating IP address and listening port. - mode tcp # HAProxy uses layer 4, the transport layer. - balance roundrobin # The server with the fewest connections receives the connection. "leastconn" is recommended where long sessions are expected, such as LDAP, SQL and TSE, rather than protocols using short sessions, such as HTTP. The algorithm is dynamic, which means that server weights might be adjusted on the fly for slow starts for instance. - server tidb-1 10.9.18.229:4000 check inter 2000 rise 2 fall 3 # Detects port 4000 at a frequency of once every 2000 milliseconds. If it is detected as successful twice, the server is considered available; if it is detected as failed three times, the server is considered unavailable. - server tidb-2 10.9.39.208:4000 check inter 2000 rise 2 fall 3 - server tidb-3 10.9.64.166:4000 check inter 2000 rise 2 fall 3 -``` - -### Prepare test data - -1. Deploy TiDB v5.4.0 and v5.3.0 using TiUP. -2. Create a database named `tpcc`: `create database tpcc;`. -3. Use BenchmarkSQL to import the TPC-C 5000 Warehouse data: `tiup bench tpcc prepare --warehouses 5000 --db tpcc -H 127.0.0.1 -P 4000`. -4. Run the `tiup bench tpcc run -U root --db tpcc --host 127.0.0.1 --port 4000 --time 1800s --warehouses 5000 --threads {{thread}}` command to perform stress tests on TiDB via HAProxy. For each concurrency, the test takes 30 minutes. -5. Extract the tpmC data of New Order from the result. - -## Test result - -Compared with v5.3.0, the TPC-C performance of v5.4.0 is **improved by 3.16%**. - -| Threads | v5.3.0 tpmC | v5.4.0 tpmC | tpmC improvement (%) | -|:----------|:----------|:----------|:----------| -|50|43002.4|44204.4|2.80| -|100|50162.7|52305|4.27| -|200|55768.2|57690.7|3.45| -|400|56836.8|58034.6|2.11| - -![TPC-C](/media/tpcc_v530_vs_v540.png) diff --git a/benchmark/v5.4-performance-benchmarking-with-tpch.md b/benchmark/v5.4-performance-benchmarking-with-tpch.md deleted file mode 100644 index c459c56268f35..0000000000000 --- a/benchmark/v5.4-performance-benchmarking-with-tpch.md +++ /dev/null @@ -1,128 +0,0 @@ ---- -title: TiDB TPC-H Performance Test Report -- v5.4 MPP mode vs. Greenplum 6.15.0 and Apache Spark 3.1.1 -summary: TiDB v5.4 MPP mode outperforms Greenplum 6.15.0 and Apache Spark 3.1.1 in TPC-H 100 GB performance test. TiDB's MPP mode is 2-3 times faster. Test results show TiDB v5.4 has significantly lower query execution times compared to Greenplum and Apache Spark. ---- - -# TiDB TPC-H Performance Test Report -- TiDB v5.4 MPP mode vs. Greenplum 6.15.0 and Apache Spark 3.1.1 - -## Test overview - -This test aims at comparing the TPC-H 100 GB performance of TiDB v5.4 in the MPP mode with that of Greenplum and Apache Spark, two mainstream analytics engines, in their latest versions. The test result shows that the performance of TiDB v5.4 in the MPP mode is two to three times faster than that of the other two solutions under TPC-H workload. - -In v5.0, TiDB introduces the MPP mode for [TiFlash](/tiflash/tiflash-overview.md), which significantly enhances TiDB's Hybrid Transactional and Analytical Processing (HTAP) capabilities. 
Test objects in this report are as follows: - -+ TiDB v5.4 columnar storage in the MPP mode -+ Greenplum 6.15.0 -+ Apache Spark 3.1.1 + Parquet - -## Test environment - -### Hardware prerequisite - -| Instance type | Instance count | -|:----------|:----------| -| PD | 1 | -| TiDB | 1 | -| TiKV | 3 | -| TiFlash | 3 | - -+ CPU: Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz, 40 cores -+ Memory: 189 GB -+ Disks: NVMe 3TB * 2 - -### Software version - -| Service type | Software version | -|:----------|:-----------| -| TiDB | 5.4 | -| Greenplum | 6.15.0 | -| Apache Spark | 3.1.1 | - -### Parameter configuration - -#### TiDB v5.4 - -For the v5.4 cluster, TiDB uses the default parameter configuration except for the following configuration items. - -In the configuration file `users.toml` of TiFlash, configure `max_memory_usage` as follows: - -```toml -[profiles.default] -max_memory_usage = 10000000000000 -``` - -Set session variables with the following SQL statements: - -```sql -set @@tidb_isolation_read_engines='tiflash'; -set @@tidb_allow_mpp=1; -set @@tidb_mem_quota_query = 10 << 30; -``` - -All TPC-H test tables are replicated to TiFlash in columnar format, with no additional partitions or indexes. - -#### Greenplum - -Except for the initial 3 nodes, the Greenplum cluster is deployed using an additional master node. Each segment server contains 8 segments, which means 4 segments per NVMe SSD. So there are 24 segments in total. The storage format is append-only/column-oriented storage and partition keys are used as primary keys. - -{{< copyable "" >}} - -``` -log_statement = all -gp_autostats_mode = none -statement_mem = 2048MB -gp_vmem_protect_limit = 16384 -``` - -#### Apache Spark - -The test of Apache Spark uses Apache Parquet as the storage format and stores the data on HDFS. The HDFS system consists of three nodes. Each node has two assigned NVMe SSD disks as the data disks. The Spark cluster is deployed in standalone mode, using NVMe SSD disks as the local directory of `spark.local.dir` to speed up the shuffle spill, with no additional partitions or indexes. - -{{< copyable "" >}} - -``` ---driver-memory 20G ---total-executor-cores 120 ---executor-cores 5 ---executor-memory 15G -``` - -## Test result - -> **Note:** -> -> The following test results are the average data of three tests. All numbers are in seconds. - -| Query ID | TiDB v5.4 | Greenplum 6.15.0 | Apache Spark 3.1.1 + Parquet | -| :-------- | :----------- | :------------ | :-------------- | -| 1 | 8.08 | 64.1307 | 52.64 | -| 2 | 2.53 | 4.76612 | 11.83 | -| 3 | 4.84 | 15.62898 | 13.39 | -| 4 | 10.94 | 12.88318 | 8.54 | -| 5 | 12.27 | 23.35449 | 25.23 | -| 6 | 1.32 | 6.033 | 2.21 | -| 7 | 5.91 | 12.31266 | 25.45 | -| 8 | 6.71 | 11.82444 | 23.12 | -| 9 | 44.19 | 22.40144 | 35.2 | -| 10 | 7.13 | 12.51071 | 12.18 | -| 11 | 2.18 | 2.6221 | 10.99 | -| 12 | 2.88 | 7.97906 | 6.99 | -| 13 | 6.84 | 10.15873 | 12.26 | -| 14 | 1.69 | 4.79394 | 3.89 | -| 15 | 3.29 | 10.48785 | 9.82 | -| 16 | 5.04 | 4.64262 | 6.76 | -| 17 | 11.7 | 74.65243 | 44.65 | -| 18 | 12.87 | 64.87646 | 30.27 | -| 19 | 4.75 | 8.08625 | 4.7 | -| 20 | 8.89 | 15.47016 | 8.4 | -| 21 | 24.44 | 39.08594 | 34.83 | -| 22 | 1.23 | 7.67476 | 4.59 | - -![TPC-H](/media/tidb-v5.4-tpch-100-vs-gp-spark.png) - -In the performance diagram above: - -- Blue lines represent TiDB v5.4; -- Red lines represent Greenplum 6.15.0; -- Yellow lines represent Apache Spark 3.1.1. -- The y-axis represents the execution time of the query. The less the time is, the better the performance is. 
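The report states that all TPC-H tables are replicated to TiFlash in columnar format but does not show the statements used. The following is a minimal sketch of how such replication might be set up and verified from a MySQL client; the `tpch` database name, the `127.0.0.1:4000` endpoint, the `root` user, and the replica count of 2 are assumptions for illustration, not values taken from this report.

```bash
# Sketch: replicate the eight TPC-H tables to TiFlash and check replication progress.
# Host, port, user, database name, and replica count are illustrative assumptions.
for t in nation region part supplier partsupp customer orders lineitem; do
  mysql -h 127.0.0.1 -P 4000 -u root -e "ALTER TABLE tpch.${t} SET TIFLASH REPLICA 2;"
done

# AVAILABLE turns to 1 once a table is fully replicated to TiFlash.
mysql -h 127.0.0.1 -P 4000 -u root -e \
  "SELECT TABLE_NAME, REPLICA_COUNT, AVAILABLE, PROGRESS FROM information_schema.tiflash_replica WHERE TABLE_SCHEMA = 'tpch';"
```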
diff --git a/benchmark/v6.0-performance-benchmarking-with-tpcc.md b/benchmark/v6.0-performance-benchmarking-with-tpcc.md deleted file mode 100644 index e8fa4c4a9e278..0000000000000 --- a/benchmark/v6.0-performance-benchmarking-with-tpcc.md +++ /dev/null @@ -1,125 +0,0 @@ ---- -title: TiDB TPC-C Performance Test Report -- v6.0.0 vs. v5.4.0 -summary: TiDB v6.0.0 TPC-C performance is 24.20% better than v5.4.0. The improvement is consistent across different thread counts, with the highest improvement at 26.97% for 100 threads. ---- - -# TiDB TPC-C Performance Test Report -- v6.0.0 vs. v5.4.0 - -## Test overview - -This test aims at comparing the TPC-C performance of TiDB v6.0.0 and v5.4.0 in the Online Transactional Processing (OLTP) scenario. The results show that compared with v5.4.0, the TPC-C performance of v6.0.0 is improved by 24.20%. - -## Test environment (AWS EC2) - -### Hardware configuration - -| Service type | EC2 type | Instance count | -|:----------|:----------|:----------| -| PD | m5.xlarge | 3 | -| TiKV | i3.4xlarge| 3 | -| TiDB | c5.4xlarge| 3 | -| TPC-C | c5.9xlarge| 1 | - -### Software version - -| Service type | Software version | -| :----------- | :---------------- | -| PD | v5.4.0 and v6.0.0 | -| TiDB | v5.4.0 and v6.0.0 | -| TiKV | v5.4.0 and v6.0.0 | -| TiUP | 1.9.3 | -| HAProxy | 2.5.0 | - -### Parameter configuration - -TiDB v6.0.0 and TiDB v5.4.0 use the same configuration. - -#### TiDB parameter configuration - -{{< copyable "" >}} - -```yaml -log.level: "error" -prepared-plan-cache.enabled: true -tikv-client.max-batch-wait-time: 2000000 -``` - -#### TiKV parameter configuration - -{{< copyable "" >}} - -```yaml -pessimistic-txn.pipelined: true -raftdb.allow-concurrent-memtable-write: true -raftdb.max-background-jobs: 4 -raftstore.apply-max-batch-size: 2048 -raftstore.apply-pool-size: 3 -raftstore.store-max-batch-size: 2048 -raftstore.store-pool-size: 3 -readpool.storage.normal-concurrency: 10 -rocksdb.max-background-jobs: 8 -server.grpc-concurrency: 6 -``` - -#### TiDB global variable configuration - -{{< copyable "sql" >}} - -```sql -set global tidb_hashagg_final_concurrency=1; -set global tidb_hashagg_partial_concurrency=1; -set global tidb_enable_async_commit = 1; -set global tidb_enable_1pc = 1; -set global tidb_guarantee_linearizability = 0; -set global tidb_enable_clustered_index = 1; -``` - -#### HAProxy configuration - haproxy.cfg - -For more details about how to use HAProxy on TiDB, see [Best Practices for Using HAProxy in TiDB](/best-practices/haproxy-best-practices.md). - -{{< copyable "" >}} - -```yaml -global # Global configuration. - pidfile /var/run/haproxy.pid # Writes the PIDs of HAProxy processes into this file. - maxconn 4000 # The maximum number of concurrent connections for a single HAProxy process. - user haproxy # The same with the UID parameter. - group haproxy # The same with the GID parameter. A dedicated user group is recommended. - nbproc 64 # The number of processes created when going daemon. When starting multiple processes to forward requests, ensure that the value is large enough so that HAProxy does not block processes. - daemon # Makes the process fork into background. It is equivalent to the command line "-D" argument. It can be disabled by the command line "-db" argument. -defaults # Default configuration. - log global # Inherits the settings of the global configuration. - retries 2 # The maximum number of retries to connect to an upstream server. 
If the number of connection attempts exceeds the value, the backend server is considered unavailable. - timeout connect 2s # The maximum time to wait for a connection attempt to a backend server to succeed. It should be set to a shorter time if the server is located on the same LAN as HAProxy. - timeout client 30000s # The maximum inactivity time on the client side. - timeout server 30000s # The maximum inactivity time on the server side. -listen tidb-cluster # Database load balancing. - bind 0.0.0.0:3390 # The Floating IP address and listening port. - mode tcp # HAProxy uses layer 4, the transport layer. - balance leastconn # The server with the fewest connections receives the connection. "leastconn" is recommended where long sessions are expected, such as LDAP, SQL and TSE, rather than protocols using short sessions, such as HTTP. The algorithm is dynamic, which means that server weights might be adjusted on the fly for slow starts for instance. - server tidb-1 10.9.18.229:4000 check inter 2000 rise 2 fall 3 # Detects port 4000 at a frequency of once every 2000 milliseconds. If it is detected as successful twice, the server is considered available; if it is detected as failed three times, the server is considered unavailable. - server tidb-2 10.9.39.208:4000 check inter 2000 rise 2 fall 3 - server tidb-3 10.9.64.166:4000 check inter 2000 rise 2 fall 3 -``` - -### Prepare test data - -1. Deploy TiDB v6.0.0 and v5.4.0 using TiUP. -2. Create a database named `tpcc`: `create database tpcc;`. -3. Use BenchmarkSQL to import the TPC-C 5000 Warehouse data: `tiup bench tpcc prepare --warehouses 5000 --db tpcc -H 127.0.0.1 -p 4000`. -4. Run the `tiup bench tpcc run -U root --db tpcc --host 127.0.0.1 --port 4000 --time 1800s --warehouses 5000 --threads {{thread}}` command to perform stress tests on TiDB via HAProxy. For each concurrency, the test takes 30 minutes. -5. Extract the tpmC data of New Order from the result. - -## Test result - -Compared with v5.4.0, the TPC-C performance of v6.0.0 is **improved by 24.20%**. - -| Threads | v5.4.0 tpmC | v6.0.0 tpmC | tpmC improvement (%) | -|:----------|:----------|:----------|:----------| -|50|44822.8|54956.6|22.61| -|100|52150.3|66216.6|26.97| -|200|57344.9|72116.7|25.76| -|400|58675|71254.8|21.44| - -![TPC-C](/media/tpcc_v540_vs_v600.png) diff --git a/benchmark/v6.0-performance-benchmarking-with-tpch.md b/benchmark/v6.0-performance-benchmarking-with-tpch.md deleted file mode 100644 index b22e4e3b0afab..0000000000000 --- a/benchmark/v6.0-performance-benchmarking-with-tpch.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: Performance Comparison between TiFlash and Greenplum/Spark -summary: Performance Comparison between TiFlash and Greenplum/Spark. Refer to TiDB v5.4 TPC-H performance benchmarking report for details. ---- - -# Performance Comparison between TiFlash and Greenplum/Spark - -Refer to [TiDB v5.4 TPC-H performance benchmarking report](https://docs.pingcap.com/tidb/stable/v5.4-performance-benchmarking-with-tpch). \ No newline at end of file diff --git a/benchmark/v6.1-performance-benchmarking-with-tpcc.md b/benchmark/v6.1-performance-benchmarking-with-tpcc.md deleted file mode 100644 index 17265b670510b..0000000000000 --- a/benchmark/v6.1-performance-benchmarking-with-tpcc.md +++ /dev/null @@ -1,124 +0,0 @@ ---- -title: TiDB TPC-C Performance Test Report -- v6.1.0 vs. v6.0.0 -summary: TiDB v6.1.0 TPC-C performance is 2.85% better than v6.0.0. TiDB and TiKV parameter configurations are the same for both versions. HAProxy is used for load balancing.
Results show performance improvement across different thread counts. ---- - -# TiDB TPC-C Performance Test Report -- v6.1.0 vs. v6.0.0 - -## Test overview - -This test aims at comparing the TPC-C performance of TiDB v6.1.0 and v6.0.0 in the Online Transactional Processing (OLTP) scenario. The results show that compared with v6.0.0, the TPC-C performance of v6.1.0 is improved by 2.85%. - -## Test environment (AWS EC2) - -### Hardware configuration - -| Service type | EC2 type | Instance count | -|:----------|:----------|:----------| -| PD | m5.xlarge | 3 | -| TiKV | i3.4xlarge| 3 | -| TiDB | c5.4xlarge| 3 | -| TPC-C | c5.9xlarge| 1 | - -### Software version - -| Service type | Software version | -| :----------- | :---------------- | -| PD | v6.0.0 and v6.1.0 | -| TiDB | v6.0.0 and v6.1.0 | -| TiKV | v6.0.0 and v6.1.0 | -| TiUP | 1.9.3 | -| HAProxy | 2.5.0 | - -### Parameter configuration - -TiDB v6.1.0 and TiDB v6.0.0 use the same configuration. - -#### TiDB parameter configuration - -{{< copyable "" >}} - -```yaml -log.level: "error" -prepared-plan-cache.enabled: true -tikv-client.max-batch-wait-time: 2000000 -``` - -#### TiKV parameter configuration - -{{< copyable "" >}} - -```yaml -raftstore.apply-max-batch-size: 2048 -raftstore.apply-pool-size: 3 -raftstore.store-max-batch-size: 2048 -raftstore.store-pool-size: 2 -readpool.storage.normal-concurrency: 10 -server.grpc-concurrency: 6 -``` - -#### TiDB global variable configuration - -{{< copyable "sql" >}} - -```sql -set global tidb_hashagg_final_concurrency=1; -set global tidb_hashagg_partial_concurrency=1; -set global tidb_enable_async_commit = 1; -set global tidb_enable_1pc = 1; -set global tidb_guarantee_linearizability = 0; -set global tidb_enable_clustered_index = 1; -set global tidb_prepared_plan_cache_size=1000; -``` - -#### HAProxy configuration - haproxy.cfg - -For more details about how to use HAProxy on TiDB, see [Best Practices for Using HAProxy in TiDB](/best-practices/haproxy-best-practices.md). - -{{< copyable "" >}} - -```yaml -global # Global configuration. - pidfile /var/run/haproxy.pid # Writes the PIDs of HAProxy processes into this file. - maxconn 4000 # The maximum number of concurrent connections for a single HAProxy process. - user haproxy # The same with the UID parameter. - group haproxy # The same with the GID parameter. A dedicated user group is recommended. - nbproc 64 # The number of processes created when going daemon. When starting multiple processes to forward requests, ensure that the value is large enough so that HAProxy does not block processes. - daemon # Makes the process fork into background. It is equivalent to the command line "-D" argument. It can be disabled by the command line "-db" argument. - -defaults # Default configuration. - log global # Inherits the settings of the global configuration. - retries 2 # The maximum number of retries to connect to an upstream server. If the number of connection attempts exceeds the value, the backend server is considered unavailable. - timeout connect 2s # The maximum time to wait for a connection attempt to a backend server to succeed. It should be set to a shorter time if the server is located on the same LAN as HAProxy. - timeout client 30000s # The maximum inactivity time on the client side. - timeout server 30000s # The maximum inactivity time on the server side. - -listen tidb-cluster # Database load balancing. - bind 0.0.0.0:3390 # The Floating IP address and listening port. - mode tcp # HAProxy uses layer 4, the transport layer. 
- balance leastconn # The server with the fewest connections receives the connection. "leastconn" is recommended where long sessions are expected, such as LDAP, SQL and TSE, rather than protocols using short sessions, such as HTTP. The algorithm is dynamic, which means that server weights might be adjusted on the fly for slow starts for instance. - server tidb-1 10.9.18.229:4000 check inter 2000 rise 2 fall 3 # Detects port 4000 at a frequency of once every 2000 milliseconds. If it is detected as successful twice, the server is considered available; if it is detected as failed three times, the server is considered unavailable. - server tidb-2 10.9.39.208:4000 check inter 2000 rise 2 fall 3 - server tidb-3 10.9.64.166:4000 check inter 2000 rise 2 fall 3 -``` - -### Prepare test data - -1. Deploy TiDB v6.1.0 and v6.0.0 using TiUP. -2. Create a database named `tpcc`: `create database tpcc;`. -3. Use BenchmarkSQL to import the TPC-C 5000 Warehouse data: `tiup bench tpcc prepare --warehouses 5000 --db tpcc -H 127.0.0.1 -p 4000`. -4. Run the `tiup bench tpcc run -U root --db tpcc --host 127.0.0.1 --port 4000 --time 1800s --warehouses 5000 --threads {{thread}}` command to perform stress tests on TiDB via HAProxy. For each concurrency, the test takes 30 minutes. -5. Extract the tpmC data of New Order from the result. - -## Test result - -Compared with v6.0.0, the TPC-C performance of v6.1.0 is **improved by 2.85%**. - -| Threads | v6.0.0 tpmC | v6.1.0 tpmC | tpmC improvement (%) | -|:----------|:----------|:----------|:----------| -|50|59059.2|60424.4|2.31| -|100|69357.6|71235.5|2.71| -|200|71364.8|74117.8|3.86| -|400|72694.3|74525.3|2.52| - -![TPC-C](/media/tpcc_v600_vs_v610.png) diff --git a/benchmark/v6.1-performance-benchmarking-with-tpch.md b/benchmark/v6.1-performance-benchmarking-with-tpch.md deleted file mode 100644 index b22e4e3b0afab..0000000000000 --- a/benchmark/v6.1-performance-benchmarking-with-tpch.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: Performance Comparison between TiFlash and Greenplum/Spark -summary: Performance Comparison between TiFlash and Greenplum/Spark. Refer to TiDB v5.4 TPC-H performance benchmarking report for details. ---- - -# Performance Comparison between TiFlash and Greenplum/Spark - -Refer to [TiDB v5.4 TPC-H performance benchmarking report](https://docs.pingcap.com/tidb/stable/v5.4-performance-benchmarking-with-tpch). \ No newline at end of file diff --git a/benchmark/v6.2-performance-benchmarking-with-tpcc.md b/benchmark/v6.2-performance-benchmarking-with-tpcc.md deleted file mode 100644 index 455f3ca53a90f..0000000000000 --- a/benchmark/v6.2-performance-benchmarking-with-tpcc.md +++ /dev/null @@ -1,124 +0,0 @@ ---- -title: TiDB TPC-C Performance Test Report -- v6.2.0 vs. v6.1.0 -summary: TiDB v6.2.0 TPC-C performance declined by 2.00% compared to v6.1.0. The test used AWS EC2 with specific hardware and software configurations. Test data was prepared and stress tests were conducted via HAProxy. Results showed a decline in performance across different thread counts. ---- - -# TiDB TPC-C Performance Test Report -- v6.2.0 vs. v6.1.0 - -## Test overview - -This test aims at comparing the TPC-C performance of TiDB v6.2.0 and v6.1.0 in the Online Transactional Processing (OLTP) scenario. The results show that compared with v6.1.0, the TPC-C performance of v6.2.0 is reduced by 2.00%.
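The per-thread deltas in the result table later in this report can be reproduced from the raw tpmC readings. The sketch below shows the arithmetic for the 200-thread row as an example; how the overall 2.00% figure is aggregated across concurrencies is not stated in the report, so this is illustrative only.

```bash
# Sketch: percentage change between two tpmC readings, (curr - prev) / prev * 100.
# Values are the 200-thread row (v6.1.0 vs. v6.2.0) from the result table below.
awk -v prev=75818.6 -v curr=73090.4 'BEGIN { printf "%.2f%%\n", (curr - prev) / prev * 100 }'
# Expected output: -3.60%
```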
- -## Test environment (AWS EC2) - -### Hardware configuration - -| Service type | EC2 type | Instance count | -|:----------|:----------|:----------| -| PD | m5.xlarge | 3 | -| TiKV | i3.4xlarge| 3 | -| TiDB | c5.4xlarge| 3 | -| TPC-C | c5.9xlarge| 1 | - -### Software version - -| Service type | Software version | -| :----------- | :---------------- | -| PD | v6.1.0 and v6.2.0 | -| TiDB | v6.1.0 and v6.2.0 | -| TiKV | v6.1.0 and v6.2.0 | -| TiUP | 1.9.3 | -| HAProxy | 2.5.0 | - -### Parameter configuration - -TiDB v6.2.0 and TiDB v6.1.0 use the same configuration. - -#### TiDB parameter configuration - -{{< copyable "" >}} - -```yaml -log.level: "error" -prepared-plan-cache.enabled: true -tikv-client.max-batch-wait-time: 2000000 -``` - -#### TiKV parameter configuration - -{{< copyable "" >}} - -```yaml -raftstore.apply-max-batch-size: 2048 -raftstore.apply-pool-size: 3 -raftstore.store-max-batch-size: 2048 -raftstore.store-pool-size: 2 -readpool.storage.normal-concurrency: 10 -server.grpc-concurrency: 6 -``` - -#### TiDB global variable configuration - -{{< copyable "sql" >}} - -```sql -set global tidb_hashagg_final_concurrency=1; -set global tidb_hashagg_partial_concurrency=1; -set global tidb_enable_async_commit = 1; -set global tidb_enable_1pc = 1; -set global tidb_guarantee_linearizability = 0; -set global tidb_enable_clustered_index = 1; -set global tidb_prepared_plan_cache_size=1000; -``` - -#### HAProxy configuration - haproxy.cfg - -For more details about how to use HAProxy on TiDB, see [Best Practices for Using HAProxy in TiDB](/best-practices/haproxy-best-practices.md). - -{{< copyable "" >}} - -```yaml -global # Global configuration. - pidfile /var/run/haproxy.pid # Writes the PIDs of HAProxy processes into this file. - maxconn 4000 # The maximum number of concurrent connections for a single HAProxy process. - user haproxy # The same with the UID parameter. - group haproxy # The same with the GID parameter. A dedicated user group is recommended. - nbproc 64 # The number of processes created when going daemon. When starting multiple processes to forward requests, ensure that the value is large enough so that HAProxy does not block processes. - daemon # Makes the process fork into background. It is equivalent to the command line "-D" argument. It can be disabled by the command line "-db" argument. - -defaults # Default configuration. - log global # Inherits the settings of the global configuration. - retries 2 # The maximum number of retries to connect to an upstream server. If the number of connection attempts exceeds the value, the backend server is considered unavailable. - timeout connect 2s # The maximum time to wait for a connection attempt to a backend server to succeed. It should be set to a shorter time if the server is located on the same LAN as HAProxy. - timeout client 30000s # The maximum inactivity time on the client side. - timeout server 30000s # The maximum inactivity time on the server side. - -listen tidb-cluster # Database load balancing. - bind 0.0.0.0:3390 # The Floating IP address and listening port. - mode tcp # HAProxy uses layer 4, the transport layer. - balance leastconn # The server with the fewest connections receives the connection. "leastconn" is recommended where long sessions are expected, such as LDAP, SQL and TSE, rather than protocols using short sessions, such as HTTP. The algorithm is dynamic, which means that server weights might be adjusted on the fly for slow starts for instance. 
- server tidb-1 10.9.18.229:4000 check inter 2000 rise 2 fall 3 # Detects port 4000 at a frequency of once every 2000 milliseconds. If it is detected as successful twice, the server is considered available; if it is detected as failed three times, the server is considered unavailable. - server tidb-2 10.9.39.208:4000 check inter 2000 rise 2 fall 3 - server tidb-3 10.9.64.166:4000 check inter 2000 rise 2 fall 3 -``` - -### Prepare test data - -1. Deploy TiDB v6.2.0 and v6.1.0 using TiUP. -2. Create a database named `tpcc`: `create database tpcc;`. -3. Use BenchmarkSQL to import the TPC-C 5000 Warehouse data: `tiup bench tpcc prepare --warehouses 5000 --db tpcc -H 127.0.0.1 -P 4000`. -4. Run the `tiup bench tpcc run -U root --db tpcc --host 127.0.0.1 --port 4000 --time 1800s --warehouses 5000 --threads {{thread}}` command to perform stress tests on TiDB via HAProxy. For each concurrency, the test takes 30 minutes. -5. Extract the tpmC data of New Order from the result. - -## Test result - -Compared with v6.1.0, the TPC-C performance of v6.2.0 is **reduced by 2.00%**. - -| Threads | v6.1.0 tpmC | v6.2.0 tpmC | tpmC improvement (%) | -| :------ | :---------- | :---------- | :------------ | -| 50 | 62212.4 | 61874.4 | -0.54 | -| 100 | 72790.7 | 71317.5 | -2.02 | -| 200 | 75818.6 | 73090.4 | -3.60 | -| 400 | 74515.3 | 73156.9 | -1.82 | - -![TPC-C](/media/tpcc_v610_vs_v620.png) diff --git a/benchmark/v6.2-performance-benchmarking-with-tpch.md b/benchmark/v6.2-performance-benchmarking-with-tpch.md deleted file mode 100644 index 834a7bb3276bf..0000000000000 --- a/benchmark/v6.2-performance-benchmarking-with-tpch.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: Performance Comparison between TiFlash and Greenplum/Spark -summary: Performance Comparison between TiFlash and Greenplum/Spark. Refer to TiDB v5.4 TPC-H performance benchmarking report at the provided link. ---- - -# Performance Comparison between TiFlash and Greenplum/Spark - -Refer to [TiDB v5.4 TPC-H performance benchmarking report](https://docs.pingcap.com/tidb/stable/v5.4-performance-benchmarking-with-tpch). diff --git a/system-variables.md b/system-variables.md index 8048345c154c9..314017d4b3c89 100644 --- a/system-variables.md +++ b/system-variables.md @@ -1688,7 +1688,7 @@ mysql> SELECT job_info FROM mysql.analyze_jobs ORDER BY end_time DESC LIMIT 1; - Unit: Rows - This variable is used to set the batch size during the `re-organize` phase of the DDL operation. For example, when TiDB executes the `ADD INDEX` operation, the index data needs to be backfilled by `tidb_ddl_reorg_worker_cnt` (the number) concurrent workers. Each worker backfills the index data in batches. - If `tidb_ddl_enable_fast_reorg` is set to `OFF`, `ADD INDEX` is executed as a transaction. If there are many update operations such as `UPDATE` and `REPLACE` in the target columns during the `ADD INDEX` execution, a larger batch size indicates a larger probability of transaction conflicts. In this case, it is recommended that you set the batch size to a smaller value. The minimum value is 32. - - If the transaction conflict does not exist, or if `tidb_ddl_enable_fast_reorg` is set to `ON`, you can set the batch size to a large value. This makes data backfilling faster but also increases the write pressure on TiKV. For a proper batch size, you also need to refer to the value of `tidb_ddl_reorg_worker_cnt`. See [Interaction Test on Online Workloads and `ADD INDEX` Operations](https://docs.pingcap.com/tidb/dev/online-workloads-and-add-index-operations) for reference.
+ - If the transaction conflict does not exist, or if `tidb_ddl_enable_fast_reorg` is set to `ON`, you can set the batch size to a large value. This makes data backfilling faster but also increases the write pressure on TiKV. For a proper batch size, you also need to refer to the value of `tidb_ddl_reorg_worker_cnt`. See [Interaction Test on Online Workloads and `ADD INDEX` Operations](https://docs.pingcap.com/tidb/dev/online-workloads-and-add-index-operations) for reference.
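As a concrete illustration of the tuning advice in the hunk above, the following sketch inspects and adjusts the two reorg-related variables from a MySQL client. The `127.0.0.1:4000` endpoint and the chosen values are assumptions for illustration only, not recommendations from this document.

```bash
# Sketch: inspect and adjust the DDL reorg variables discussed above.
# Endpoint, user, and the example values are illustrative assumptions.
mysql -h 127.0.0.1 -P 4000 -u root <<'SQL'
SHOW GLOBAL VARIABLES LIKE 'tidb_ddl_reorg%';
-- Smaller batches reduce the chance of transaction conflicts during ADD INDEX
-- when the target columns are updated concurrently (the minimum batch size is 32).
SET GLOBAL tidb_ddl_reorg_batch_size = 128;
SET GLOBAL tidb_ddl_reorg_worker_cnt = 4;
SQL
```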