diff --git a/TOC-tidb-cloud.md b/TOC-tidb-cloud.md index 786fcf99e3a9e..1687c43728683 100644 --- a/TOC-tidb-cloud.md +++ b/TOC-tidb-cloud.md @@ -245,6 +245,7 @@ - [Connect AWS DMS to TiDB Cloud clusters](/tidb-cloud/tidb-cloud-connect-aws-dms.md) - Explore Data - [Chat2Query (Beta) in SQL Editor](/tidb-cloud/explore-data-with-chat2query.md) + - [SQL Proxy Account](/tidb-cloud/sql-proxy-account.md) - Vector Search (Beta) - [Overview](/tidb-cloud/vector-search-overview.md) - Get Started diff --git a/TOC.md b/TOC.md index b3ac7de614459..4d80df6c7ca10 100644 --- a/TOC.md +++ b/TOC.md @@ -1044,6 +1044,7 @@ - v7.2 - [7.2.0-DMR](/releases/release-7.2.0.md) - v7.1 + - [7.1.6](/releases/release-7.1.6.md) - [7.1.5](/releases/release-7.1.5.md) - [7.1.4](/releases/release-7.1.4.md) - [7.1.3](/releases/release-7.1.3.md) diff --git a/br/br-pitr-manual.md b/br/br-pitr-manual.md index edaba953e2037..55d59459aa65c 100644 --- a/br/br-pitr-manual.md +++ b/br/br-pitr-manual.md @@ -289,7 +289,7 @@ Usage example: ```shell ./br log truncate --until='2022-07-26 21:20:00+0800' \ -–-storage='s3://backup-101/logbackup?access-key=${access-key}&secret-access-key=${secret-access-key}"' +--storage='s3://backup-101/logbackup?access-key=${access-key}&secret-access-key=${secret-access-key}' ``` Expected output: @@ -329,7 +329,7 @@ The `--storage` parameter is used to specify the backup storage address. 
Current Usage example: ```shell -./br log metadata –-storage='s3://backup-101/logbackup?access-key=${access-key}&secret-access-key=${secret-access-key}"' +./br log metadata --storage='s3://backup-101/logbackup?access-key=${access-key}&secret-access-key=${secret-access-key}' ``` Expected output: @@ -380,7 +380,7 @@ Usage example: ```shell ./br restore point --pd="${PD_IP}:2379" --storage='s3://backup-101/logbackup?access-key=${access-key}&secret-access-key=${secret-access-key}"' ---full-backup-storage='s3://backup-101/snapshot-202205120000?access-key=${access-key}&secret-access-key=${secret-access-key}"' +--full-backup-storage='s3://backup-101/snapshot-202205120000?access-key=${access-key}&secret-access-key=${secret-access-key}' Full Restore <--------------------------------------------------------------------------------------------------------------------------------------------------------> 100.00% *** ***["Full Restore success summary"] ****** [total-take=3.112928252s] [restore-data-size(after-compressed)=5.056kB] [Size=5056] [BackupTS=434693927394607136] [total-kv=4] [total-kv-size=290B] [average-speed=93.16B/s] diff --git a/configure-store-limit.md b/configure-store-limit.md index 9db2ee397406d..bf03441287b96 100644 --- a/configure-store-limit.md +++ b/configure-store-limit.md @@ -69,6 +69,10 @@ store limit 1 5 remove-peer // store 1 can at most delete 5 peers per mi ### Principles of store limit v2 +> **Warning:** +> +> Store limit v2 is an experimental feature. It is not recommended that you use it in the production environment. This feature might be changed or removed without prior notice. If you find a bug, you can report an [issue](https://github.com/pingcap/tidb/issues) on GitHub. + When [`store-limit-version`](/pd-configuration-file.md#store-limit-version-new-in-v710) is set to `v2`, store limit v2 takes effect. In v2 mode, the limit of operators are dynamically adjusted based on the capability of TiKV snapshots. 
When TiKV has fewer pending tasks, PD increases its scheduling tasks. Otherwise, PD reduces the scheduling tasks for the node. Therefore, you do not need to manually set `store limit` to speed up the scheduling process. In v2 mode, the execution speed of TiKV becomes the main bottleneck during migration. You can check whether the current scheduling speed has reached the upper limit through the **TiKV Details** > **Snapshot** > **Snapshot Speed** panel. To increase or decrease the scheduling speed of a node, you can adjust the TiKV snapshot limit ([`snap-io-max-bytes-per-sec`](/tikv-configuration-file.md#snap-io-max-bytes-per-sec)). diff --git a/faq/sql-faq.md b/faq/sql-faq.md index 07591b11987a8..7d7aa30421f74 100644 --- a/faq/sql-faq.md +++ b/faq/sql-faq.md @@ -32,7 +32,9 @@ In addition, you can also use the [SQL binding](/sql-plan-management.md#sql-bind ## How to prevent the execution of a particular SQL statement? -You can create [SQL bindings](/sql-plan-management.md#sql-binding) with the [`MAX_EXECUTION_TIME`](/optimizer-hints.md#max_execution_timen) hint to limit the execution time of a particular statement to a small value (for example, 1ms). In this way, the statement is terminated automatically by the threshold. +For TiDB v7.5.0 or later versions, you can use the [`QUERY WATCH`](/sql-statements/sql-statement-query-watch.md) statement to terminate specific SQL statements. For more details, see [Manage queries that consume more resources than expected (Runaway Queries)](/tidb-resource-control.md#query-watch-parameters). + +For versions earlier than TiDB v7.5.0, you can create [SQL bindings](/sql-plan-management.md#sql-binding) with the [`MAX_EXECUTION_TIME`](/optimizer-hints.md#max_execution_timen) hint to limit the execution time of a particular statement to a small value (for example, 1ms). In this way, the statement is terminated automatically by the threshold. 
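For the `QUERY WATCH` route on v7.5.0 or later, a minimal sketch of adding a watch item (statement syntax as described in the resource control docs; the host, port, and user below are placeholder connection parameters — verify the exact syntax against your TiDB version):

```shell
# Add a watch item that kills any statement whose SQL text exactly matches.
# -h/-P/-u are placeholders for your TiDB endpoint.
mysql -h 127.0.0.1 -P 4000 -u root -e \
  "QUERY WATCH ADD ACTION KILL SQL TEXT EXACT TO 'SELECT * FROM t1, t2 WHERE t1.id = t2.id';"

# Confirm the watch item is registered.
mysql -h 127.0.0.1 -P 4000 -u root -e \
  "SELECT * FROM INFORMATION_SCHEMA.RUNAWAY_WATCHES;"
```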
For example, to prevent the execution of `SELECT * FROM t1, t2 WHERE t1.id = t2.id`, you can use the following SQL binding to limit the execution time of the statement to 1ms: diff --git a/hardware-and-software-requirements.md b/hardware-and-software-requirements.md index b57a0f8d1fa64..96bb62fcd1788 100644 --- a/hardware-and-software-requirements.md +++ b/hardware-and-software-requirements.md @@ -34,7 +34,7 @@ In v7.5 LTS, TiDB ensures multi-level quality standards for various combinations - + @@ -62,8 +62,7 @@ In v7.5 LTS, TiDB ensures multi-level quality standards for various combinations > **Note:** > - > - According to [CentOS Linux EOL](https://blog.centos.org/2023/04/end-dates-are-coming-for-centos-stream-8-and-centos-linux-7/), the upstream support for CentOS Linux 7 ends on June 30, 2024. TiDB will end the support for CentOS 7 in the 8.5 LTS version. It is recommended to use Rocky Linux 9.1 or a later version. - > - According to [CentOS Linux EOL](https://www.centos.org/centos-linux-eol/), the upstream support for CentOS Linux 8 ended on December 31, 2021. The upstream [support for CentOS Stream 8](https://blog.centos.org/2023/04/end-dates-are-coming-for-centos-stream-8-and-centos-linux-7/) ended on May 31, 2024. CentOS Stream 9 continues to be supported by the CentOS organization. + > According to [CentOS Linux EOL](https://blog.centos.org/2023/04/end-dates-are-coming-for-centos-stream-8-and-centos-linux-7/), the upstream support for CentOS Linux 7 ends on June 30, 2024. TiDB ends the support for CentOS 7 starting from the 8.4 DMR version. It is recommended to use Rocky Linux 9.1 or a later version. + For the following combinations of operating systems and CPU architectures, you can compile, build, and deploy TiDB. In addition, you can also use the basic features of OLTP, OLAP, and the data tools. 
However, TiDB **does not guarantee enterprise-level production quality**: @@ -88,7 +87,7 @@ In v7.5 LTS, TiDB ensures multi-level quality standards for various combinations x86_64 - CentOS 8 Stream + CentOS Stream 8 @@ -114,19 +113,12 @@ In v7.5 LTS, TiDB ensures multi-level quality standards for various combinations > > - For Oracle Enterprise Linux, TiDB supports the Red Hat Compatible Kernel (RHCK), but does not support the Unbreakable Enterprise Kernel provided by Oracle Enterprise Linux. > - Support for Ubuntu 16.04 will be removed in future versions of TiDB. Upgrading to Ubuntu 18.04 or later is strongly recommended. + > - CentOS Stream 8 reaches [End of Builds](https://blog.centos.org/2023/04/end-dates-are-coming-for-centos-stream-8-and-centos-linux-7/) on May 31, 2024. + If you are using the 32-bit version of an operating system listed in the preceding two tables, TiDB **is not guaranteed** to be compilable, buildable or deployable on the 32-bit operating system and the corresponding CPU architecture, or TiDB does not actively adapt to the 32-bit operating system. + Other operating system versions not mentioned above might work but are not officially supported. -> **Note:** -> -> - For Oracle Enterprise Linux, TiDB supports the Red Hat Compatible Kernel (RHCK) and does not support the Unbreakable Enterprise Kernel provided by Oracle Enterprise Linux. -> - According to [CentOS Linux EOL](https://www.centos.org/centos-linux-eol/), the upstream support for CentOS Linux 8 ended on December 31, 2021. CentOS Stream 8 continues to be supported by the CentOS organization. -> - Support for Ubuntu 16.04 will be removed in future versions of TiDB. Upgrading to Ubuntu 18.04 or later is strongly recommended. 
-> - If you are using the 32-bit version of an operating system listed in the preceding table, TiDB **is not guaranteed** to be compilable, buildable or deployable on the 32-bit operating system and the corresponding CPU architecture, or TiDB does not actively adapt to the 32-bit operating system. -> - Other operating system versions not mentioned above might work but are not officially supported. - ### Libraries required for compiling and running TiDB | Libraries required for compiling and running TiDB | Version | diff --git a/package-lock.json b/package-lock.json index fd224f072eb07..2e983b5f14b33 100644 --- a/package-lock.json +++ b/package-lock.json @@ -21,6 +21,9 @@ "micromark-extension-mdxjs": "^1.0.0", "octokit": "^3.1.0", "unist-util-visit": "^4.1.0" + }, + "devDependencies": { + "prettier": "3.3.3" } }, "node_modules/@octokit/app": { @@ -1568,6 +1571,22 @@ "is-hexadecimal": "^2.0.0" } }, + "node_modules/prettier": { + "version": "3.3.3", + "resolved": "https://registry.npmjs.org/prettier/-/prettier-3.3.3.tgz", + "integrity": "sha512-i2tDNA0O5IrMO757lfrdQZCc2jPNDVntV0m/+4whiDfWaTKfMNgR7Qz0NAeGz/nRqF4m5/6CLzbP4/liHt12Ew==", + "dev": true, + "license": "MIT", + "bin": { + "prettier": "bin/prettier.cjs" + }, + "engines": { + "node": ">=14" + }, + "funding": { + "url": "https://github.com/prettier/prettier?sponsor=1" + } + }, "node_modules/proxy-from-env": { "version": "1.1.0", "resolved": "https://registry.npmjs.org/proxy-from-env/-/proxy-from-env-1.1.0.tgz", @@ -3076,6 +3095,12 @@ "is-hexadecimal": "^2.0.0" } }, + "prettier": { + "version": "3.3.3", + "resolved": "https://registry.npmjs.org/prettier/-/prettier-3.3.3.tgz", + "integrity": "sha512-i2tDNA0O5IrMO757lfrdQZCc2jPNDVntV0m/+4whiDfWaTKfMNgR7Qz0NAeGz/nRqF4m5/6CLzbP4/liHt12Ew==", + "dev": true + }, "proxy-from-env": { "version": "1.1.0", "resolved": "https://registry.npmjs.org/proxy-from-env/-/proxy-from-env-1.1.0.tgz", diff --git a/package.json b/package.json index 7367d12460c5a..92ae96ed20d4f 100644 --- 
a/package.json +++ b/package.json @@ -17,5 +17,8 @@ "micromark-extension-mdxjs": "^1.0.0", "octokit": "^3.1.0", "unist-util-visit": "^4.1.0" + }, + "devDependencies": { + "prettier": "3.3.3" } } diff --git a/pd-control.md b/pd-control.md index 6d21aa3e83764..b1e2db9702be3 100644 --- a/pd-control.md +++ b/pd-control.md @@ -332,7 +332,7 @@ Usage: - `store-limit-mode` is used to control the mode of limiting the store speed. The optional modes are `auto` and `manual`. In `auto` mode, the stores are automatically balanced according to the load (deprecated). -- `store-limit-version` controls the version of the store limit formula. In v1 mode, you can manually modify the `store limit` to limit the scheduling speed of a single TiKV. The v2 mode is an experimental feature. In v2 mode, you do not need to manually set the `store limit` value, as PD dynamically adjusts it based on the capability of TiKV snapshots. For more details, refer to [Principles of store limit v2](/configure-store-limit.md#principles-of-store-limit-v2). +- `store-limit-version` controls the version of the store limit formula. In v1 mode, you can manually modify the `store limit` to limit the scheduling speed of a single TiKV. The v2 mode is an experimental feature. It is not recommended that you use it in the production environment. In v2 mode, you do not need to manually set the `store limit` value, as PD dynamically adjusts it based on the capability of TiKV snapshots. For more details, refer to [Principles of store limit v2](/configure-store-limit.md#principles-of-store-limit-v2). 
```bash config set store-limit-version v2 // using store limit v2 diff --git a/releases/release-6.2.0.md b/releases/release-6.2.0.md index 1c7d9e57dc1b7..5279f419fac43 100644 --- a/releases/release-6.2.0.md +++ b/releases/release-6.2.0.md @@ -261,7 +261,7 @@ In v6.2.0-DMR, the key new features and improvements are as follows: | [tidb_enable_noop_variables](/system-variables.md#tidb_enable_noop_variables-new-in-v620) | Newly added | This variable controls whether to show `noop` variables in the result of `SHOW [GLOBAL] VARIABLES`. | | [tidb_min_paging_size](/system-variables.md#tidb_min_paging_size-new-in-v620) | Newly added | This variable is used to set the maximum number of rows during the coprocessor paging request process. | | [tidb_txn_commit_batch_size](/system-variables.md#tidb_txn_commit_batch_size-new-in-v620) | Newly added | This variable is used to control the batch size of transaction commit requests that TiDB sends to TiKV. | -| tidb_enable_change_multi_schema | Deleted | This variable is used to control whether multiple columns or indexes can be altered in one `ALTER TABLE` statement. | +| tidb_enable_change_multi_schema | Deleted | This variable is deleted because, starting from v6.2.0, you can alter multiple columns or indexes in one `ALTER TABLE` statement by default. | | [tidb_enable_outer_join_reorder](/system-variables.md#tidb_enable_outer_join_reorder-new-in-v610) | Modified | This variable controls whether the Join Reorder algorithm of TiDB supports Outer Join. In v6.1.0, the default value is `ON`, which means the Join Reorder's support for Outer Join is enabled by default. From v6.2.0, the default value is `OFF`, which means the support is disabled by default. 
| ### Configuration file parameters diff --git a/releases/release-7.1.0.md b/releases/release-7.1.0.md index 4e0e45abd80fc..b4b2acd32c9f3 100644 --- a/releases/release-7.1.0.md +++ b/releases/release-7.1.0.md @@ -386,7 +386,7 @@ Compared with the previous LTS 6.5.0, 7.1.0 not only includes new features, impr + PD - - Add a controller that automatically adjusts the size of the store limit based on the execution details of the snapshot. To enable this controller, set `store-limit-version` to `v2`. Once enabled, you do not need to manually adjust the `store limit` configuration to control the speed of scaling in or scaling out [#6147](https://github.com/tikv/pd/issues/6147) @[bufferflies](https://github.com/bufferflies) + - Add a controller that automatically adjusts the size of the store limit based on the execution details of the snapshot. To enable this controller, set `store-limit-version` to `v2` (experimental). Once enabled, you do not need to manually adjust the `store limit` configuration to control the speed of scaling in or scaling out [#6147](https://github.com/tikv/pd/issues/6147) @[bufferflies](https://github.com/bufferflies) - Add historical load information to avoid frequent scheduling of Regions with unstable loads by the hotspot scheduler when the storage engine is raft-kv2 [#6297](https://github.com/tikv/pd/issues/6297) @[bufferflies](https://github.com/bufferflies) - Add a leader health check mechanism. 
When the PD server where the etcd leader is located cannot be elected as the leader, PD actively switches the etcd leader to ensure that the PD leader is available [#6403](https://github.com/tikv/pd/issues/6403) @[nolouch](https://github.com/nolouch) diff --git a/releases/release-7.1.6.md b/releases/release-7.1.6.md new file mode 100644 index 0000000000000..30ffa0becccdb --- /dev/null +++ b/releases/release-7.1.6.md @@ -0,0 +1,300 @@ +--- +title: TiDB 7.1.6 Release Notes +summary: Learn about the compatibility changes, improvements, and bug fixes in TiDB 7.1.6. +--- + +# TiDB 7.1.6 Release Notes + +Release date: November 21, 2024 + +TiDB version: 7.1.6 + +Quick access: [Quick start](https://docs.pingcap.com/tidb/v7.1/quick-start-with-tidb) | [Production deployment](https://docs.pingcap.com/tidb/v7.1/production-deployment-using-tiup) + +## Compatibility changes + +- Set a default limit of 2048 for DDL historical tasks retrieved through the [TiDB HTTP API](https://github.com/pingcap/tidb/blob/release-7.1/docs/tidb_http_api.md) to prevent OOM issues caused by excessive historical tasks [#55711](https://github.com/pingcap/tidb/issues/55711) @[joccau](https://github.com/joccau) +- In earlier versions, when processing a transaction containing `UPDATE` changes, if the primary key or non-null unique index value is modified in an `UPDATE` event, TiCDC splits this event into `DELETE` and `INSERT` events. Starting from v7.1.6, when using the MySQL sink, TiCDC splits an `UPDATE` event into `DELETE` and `INSERT` events if the transaction `commitTS` for the `UPDATE` change is less than TiCDC `thresholdTS` (which is the current timestamp fetched from PD when TiCDC starts replicating the corresponding table to the downstream). This behavior change addresses the issue of downstream data inconsistencies caused by the potentially incorrect order of `UPDATE` events received by TiCDC, which can lead to an incorrect order of split `DELETE` and `INSERT` events. 
For more information, see [documentation](https://docs.pingcap.com/tidb/v7.1/ticdc-split-update-behavior#split-update-events-for-mysql-sinks). [#10918](https://github.com/pingcap/tiflow/issues/10918) @[lidezhu](https://github.com/lidezhu) +- You must set the line terminator when using TiDB Lightning `strict-format` to import CSV files [#37338](https://github.com/pingcap/tidb/issues/37338) @[lance6716](https://github.com/lance6716) + +## Improvements + ++ TiDB + + - Adjust estimation results from 0 to 1 for equality conditions that do not hit TopN when statistics are entirely composed of TopN and the modified row count in the corresponding table statistics is non-zero [#47400](https://github.com/pingcap/tidb/issues/47400) @[terry1purcell](https://github.com/terry1purcell) + - Remove stores without Regions during MPP load balancing [#52313](https://github.com/pingcap/tidb/issues/52313) @[xzhangxian1008](https://github.com/xzhangxian1008) + - Improve the MySQL compatibility of expression default values displayed in the output of `SHOW CREATE TABLE` [#52939](https://github.com/pingcap/tidb/issues/52939) @[CbcWestwolf](https://github.com/CbcWestwolf) + - By batch deleting TiFlash placement rules, improve the processing speed of data GC after performing the `TRUNCATE` or `DROP` operation on partitioned tables [#54068](https://github.com/pingcap/tidb/issues/54068) @[Lloyd-Pottiger](https://github.com/Lloyd-Pottiger) + - Improve sync load performance to reduce latency in loading statistics [#52294](https://github.com/pingcap/tidb/issues/52294) @[hawkingrei](https://github.com/hawkingrei) + ++ TiKV + + - Add slow logs for peer and store messages [#16600](https://github.com/tikv/tikv/issues/16600) @[Connor1996](https://github.com/Connor1996) + - Optimize the compaction trigger mechanism of RocksDB to accelerate disk space reclamation when handling a large number of DELETE versions [#17269](https://github.com/tikv/tikv/issues/17269) @[AndreMouche](https://github.com/AndreMouche) + 
- Optimize the jittery access delay when restarting TiKV due to waiting for the log to be applied, improving the stability of TiKV [#15874](https://github.com/tikv/tikv/issues/15874) @[LykxSassinator](https://github.com/LykxSassinator) + - Remove unnecessary async blocks to reduce memory usage [#16540](https://github.com/tikv/tikv/issues/16540) @[overvenus](https://github.com/overvenus) + ++ TiFlash + + - Optimize the execution efficiency of `LENGTH()` and `ASCII()` functions [#9344](https://github.com/pingcap/tiflash/issues/9344) @[xzhangxian1008](https://github.com/xzhangxian1008) + - Mitigate the issue that TiFlash might panic due to updating certificates after TLS is enabled [#8535](https://github.com/pingcap/tiflash/issues/8535) @[windtalker](https://github.com/windtalker) + - Improve the cancel mechanism of the JOIN operator, so that the JOIN operator can respond to cancel requests in a timely manner [#9430](https://github.com/pingcap/tiflash/issues/9430) @[windtalker](https://github.com/windtalker) + - Reduce lock conflicts under highly concurrent data read operations and optimize short query performance [#9125](https://github.com/pingcap/tiflash/issues/9125) @[JinheLin](https://github.com/JinheLin) + - Improve the garbage collection speed of outdated data in the background for tables with clustered indexes [#9529](https://github.com/pingcap/tiflash/issues/9529) @[JaySon-Huang](https://github.com/JaySon-Huang) + ++ Tools + + + Backup & Restore (BR) + + - Enhance the tolerance of log backup to merge operations. 
When encountering a reasonably long merge operation, log backup tasks are less likely to enter the error state [#16554](https://github.com/tikv/tikv/issues/16554) @[YuJuncen](https://github.com/YuJuncen) + - BR cleans up empty SST files during data recovery [#16005](https://github.com/tikv/tikv/issues/16005) @[Leavrth](https://github.com/Leavrth) + - Increase the number of retries for failures caused by DNS errors [#53029](https://github.com/pingcap/tidb/issues/53029) @[YuJuncen](https://github.com/YuJuncen) + - Increase the number of retries for failures caused by the absence of a leader in a Region [#54017](https://github.com/pingcap/tidb/issues/54017) @[Leavrth](https://github.com/Leavrth) + - Except for the `br log restore` subcommand, all other `br log` subcommands support skipping the loading of the TiDB `domain` data structure to reduce memory consumption [#52088](https://github.com/pingcap/tidb/issues/52088) @[Leavrth](https://github.com/Leavrth) + - Support checking whether the disk space in TiKV is sufficient before TiKV downloads each SST file. 
If the space is insufficient, BR terminates the restore and returns an error [#17224](https://github.com/tikv/tikv/issues/17224) @[RidRisR](https://github.com/RidRisR) + - Support setting Alibaba Cloud access credentials through environment variables [#45551](https://github.com/pingcap/tidb/issues/45551) @[RidRisR](https://github.com/RidRisR) + - Reduce unnecessary log printing during backup [#55902](https://github.com/pingcap/tidb/issues/55902) @[Leavrth](https://github.com/Leavrth) + + + TiCDC + + - Support directly outputting raw events when the downstream is a Message Queue (MQ) or cloud storage [#11211](https://github.com/pingcap/tiflow/issues/11211) @[CharlesCheung96](https://github.com/CharlesCheung96) + - Improve memory stability during data recovery using redo logs to reduce the probability of OOM [#10900](https://github.com/pingcap/tiflow/issues/10900) @[CharlesCheung96](https://github.com/CharlesCheung96) + - When the downstream is TiDB with the `SUPER` permission granted, TiCDC supports querying the execution status of `ADD INDEX DDL` from the downstream database to avoid data replication failure due to timeout in retrying executing the DDL statement in some cases [#10682](https://github.com/pingcap/tiflow/issues/10682) @[CharlesCheung96](https://github.com/CharlesCheung96) + + + TiDB Data Migration (DM) + + - Upgrade `go-mysql` to 1.9.1 to support connecting to MySQL server 8.0 using passwords longer than 19 characters [#11603](https://github.com/pingcap/tiflow/pull/11603) @[fishiu](https://github.com/fishiu) + +## Bug fixes + ++ TiDB + + - Fix the issue of inconsistent data indexes caused by concurrent DML operations when adding a unique index [#52914](https://github.com/pingcap/tidb/issues/52914) @[wjhuang2016](https://github.com/wjhuang2016) + - Fix the issue that comparing a column of `YEAR` type with an unsigned integer that is out of range causes incorrect results [#50235](https://github.com/pingcap/tidb/issues/50235) 
@[qw4990](https://github.com/qw4990) + - Fix the issue that `INDEX_HASH_JOIN` cannot exit properly when SQL is abnormally interrupted [#54688](https://github.com/pingcap/tidb/issues/54688) @[wshwsh12](https://github.com/wshwsh12) + - Fix the issue that the network partition during adding indexes using the Distributed eXecution Framework (DXF) might cause inconsistent data indexes [#54897](https://github.com/pingcap/tidb/issues/54897) @[tangenta](https://github.com/tangenta) + - Fix the issue that using `SHOW WARNINGS;` to obtain warnings might cause a panic [#48756](https://github.com/pingcap/tidb/issues/48756) @[xhebox](https://github.com/xhebox) + - Fix the issue that querying the `INFORMATION_SCHEMA.CLUSTER_SLOW_QUERY` table might cause TiDB to panic [#54324](https://github.com/pingcap/tidb/issues/54324) @[tiancaiamao](https://github.com/tiancaiamao) + - Fix the issue of abnormally high memory usage caused by `memTracker` not being detached when the `HashJoin` or `IndexLookUp` operator is the driven side sub-node of the `Apply` operator [#54005](https://github.com/pingcap/tidb/issues/54005) @[XuHuaiyu](https://github.com/XuHuaiyu) + - Fix the issue that recursive CTE queries might result in invalid pointers [#54449](https://github.com/pingcap/tidb/issues/54449) @[hawkingrei](https://github.com/hawkingrei) + - Fix the issue that an empty projection causes TiDB to panic [#49109](https://github.com/pingcap/tidb/issues/49109) @[winoros](https://github.com/winoros) + - Fix the issue that TiDB might return incorrect query results when you query tables with virtual columns in transactions that involve data modification operations [#53951](https://github.com/pingcap/tidb/issues/53951) @[qw4990](https://github.com/qw4990) + - Fix the issue that for tables containing auto-increment columns with `AUTO_ID_CACHE=1`, setting `auto_increment_increment` and `auto_increment_offset` system variables to non-default values might cause incorrect auto-increment ID allocation 
[#52622](https://github.com/pingcap/tidb/issues/52622) @[tiancaiamao](https://github.com/tiancaiamao) + - Fix the issue that subqueries included in the `ALL` function might cause incorrect results [#52755](https://github.com/pingcap/tidb/issues/52755) @[hawkingrei](https://github.com/hawkingrei) + - Fix the issue that predicates cannot be pushed down properly when the filter condition of a SQL query contains virtual columns and the execution condition contains `UnionScan` [#54870](https://github.com/pingcap/tidb/issues/54870) @[qw4990](https://github.com/qw4990) + - Fix the issue that subqueries in an `UPDATE` list might cause TiDB to panic [#52687](https://github.com/pingcap/tidb/issues/52687) @[winoros](https://github.com/winoros) + - Fix the issue that indirect placeholder `?` references in a `GROUP BY` statement cannot find columns [#53872](https://github.com/pingcap/tidb/issues/53872) @[qw4990](https://github.com/qw4990) + - Fix the issue that disk files might not be deleted after the `Sort` operator spills and a query error occurs [#55061](https://github.com/pingcap/tidb/issues/55061) @[wshwsh12](https://github.com/wshwsh12) + - Fix the issue of reusing wrong point get plans for `SELECT ... FOR UPDATE` [#54652](https://github.com/pingcap/tidb/issues/54652) @[qw4990](https://github.com/qw4990) + - Fix the issue that `max_execute_time` settings at multiple levels interfere with each other [#50914](https://github.com/pingcap/tidb/issues/50914) @[jiyfhust](https://github.com/jiyfhust) + - Fix the issue that the histogram and TopN in the primary key column statistics are not loaded after restarting TiDB [#37548](https://github.com/pingcap/tidb/issues/37548) @[hawkingrei](https://github.com/hawkingrei) + - Fix the issue that the TopN operator might be pushed down incorrectly [#37986](https://github.com/pingcap/tidb/issues/37986) @[qw4990](https://github.com/qw4990) + - Fix the issue that the performance of the `SELECT ... WHERE ... 
ORDER BY ...` statement execution is poor in some cases [#54969](https://github.com/pingcap/tidb/issues/54969) @[tiancaiamao](https://github.com/tiancaiamao) + - Fix the issue that TiDB reports an error in the log when closing the connection in some cases [#53689](https://github.com/pingcap/tidb/issues/53689) @[jackysp](https://github.com/jackysp) + - Fix the issue that the illegal column type `DECIMAL(0,0)` can be created in some cases [#53779](https://github.com/pingcap/tidb/issues/53779) @[tangenta](https://github.com/tangenta) + - Fix the issue that obtaining the column information using `information_schema.columns` returns warning 1356 when a subquery is used as a column definition in a view definition [#54343](https://github.com/pingcap/tidb/issues/54343) @[lance6716](https://github.com/lance6716) + - Fix the issue that the optimizer incorrectly estimates the number of rows as 1 when accessing a unique index with the query condition `column IS NULL` [#56116](https://github.com/pingcap/tidb/issues/56116) @[hawkingrei](https://github.com/hawkingrei) + - Fix the issue that `SELECT INTO OUTFILE` does not work when clustered indexes are used as predicates [#42093](https://github.com/pingcap/tidb/issues/42093) @[qw4990](https://github.com/qw4990) + - Fix the issue of incorrect WARNINGS information when using Optimizer Hints [#53767](https://github.com/pingcap/tidb/issues/53767) @[hawkingrei](https://github.com/hawkingrei) + - Fix the issue that the Sync Load QPS monitoring metric is incorrect [#53558](https://github.com/pingcap/tidb/issues/53558) @[hawkingrei](https://github.com/hawkingrei) + - Fix the issue that executing `CREATE OR REPLACE VIEW` concurrently might result in the `table doesn't exist` error [#53673](https://github.com/pingcap/tidb/issues/53673) @[tangenta](https://github.com/tangenta) + - Fix the issue that restoring a table with `AUTO_ID_CACHE=1` using the `RESTORE` statement might cause a `Duplicate entry` error 
[#52680](https://github.com/pingcap/tidb/issues/52680) @[tiancaiamao](https://github.com/tiancaiamao) + - Fix the issue that the `SUB_PART` value in the `INFORMATION_SCHEMA.STATISTICS` table is `NULL` [#55812](https://github.com/pingcap/tidb/issues/55812) @[Defined2014](https://github.com/Defined2014) + - Fix the overflow issue of the `Longlong` type in predicates [#45783](https://github.com/pingcap/tidb/issues/45783) @[hawkingrei](https://github.com/hawkingrei) + - Fix the issue that incorrect results are returned when the cached execution plans contain the comparison between date types and `unix_timestamp` [#48165](https://github.com/pingcap/tidb/issues/48165) @[qw4990](https://github.com/qw4990) + - Fix the issue that the `LENGTH()` condition is unexpectedly removed when the collation is `utf8_bin` or `utf8mb4_bin` [#53730](https://github.com/pingcap/tidb/issues/53730) @[elsa0520](https://github.com/elsa0520) + - Fix the issue that when an `UPDATE` or `DELETE` statement contains a recursive CTE, the statement might report an error or not take effect [#55666](https://github.com/pingcap/tidb/issues/55666) @[time-and-fate](https://github.com/time-and-fate) + - Fix the issue that TiDB might hang or return incorrect results when executing a query containing a correlated subquery and CTE [#55551](https://github.com/pingcap/tidb/issues/55551) @[guo-shaoge](https://github.com/guo-shaoge) + - Fix the issue that statistics for string columns with non-binary collations might fail to load when initializing statistics [#55684](https://github.com/pingcap/tidb/issues/55684) @[winoros](https://github.com/winoros) + - Fix the issue that IndexJoin produces duplicate rows when calculating hash values in the Left Outer Anti Semi type [#52902](https://github.com/pingcap/tidb/issues/52902) @[yibin87](https://github.com/yibin87) + - Fix the issue that a query statement that contains `UNION` might return incorrect results [#52985](https://github.com/pingcap/tidb/issues/52985) 
@[XuHuaiyu](https://github.com/XuHuaiyu) + - Fix the issue that empty `groupOffset` in `StreamAggExec` might cause TiDB to panic [#53867](https://github.com/pingcap/tidb/issues/53867) @[xzhangxian1008](https://github.com/xzhangxian1008) + - Fix the issue that RANGE partitioned tables that are not strictly self-incrementing can be created [#54829](https://github.com/pingcap/tidb/issues/54829) @[Defined2014](https://github.com/Defined2014) + - Fix the issue that the query might get stuck when terminated because the memory usage exceeds the limit set by `tidb_mem_quota_query` [#55042](https://github.com/pingcap/tidb/issues/55042) @[yibin87](https://github.com/yibin87) + - Fix the issue that the `STATE` field in the `INFORMATION_SCHEMA.TIDB_TRX` table is empty due to the `size` of the `STATE` field not being defined [#53026](https://github.com/pingcap/tidb/issues/53026) @[cfzjywxk](https://github.com/cfzjywxk) + - Fix the data race issue in `IndexNestedLoopHashJoin` [#49692](https://github.com/pingcap/tidb/issues/49692) @[solotzg](https://github.com/solotzg) + - Fix the issue that a wrong TableDual plan causes empty query results [#50051](https://github.com/pingcap/tidb/issues/50051) @[onlyacat](https://github.com/onlyacat) + - Fix the issue that the `tot_col_size` column in the `mysql.stats_histograms` table might be a negative number [#55126](https://github.com/pingcap/tidb/issues/55126) @[qw4990](https://github.com/qw4990) + - Fix the issue that data conversion from the `FLOAT` type to the `UNSIGNED` type returns incorrect results [#41736](https://github.com/pingcap/tidb/issues/41736) @[guo-shaoge](https://github.com/guo-shaoge) + - Fix the issue that TiDB fails to reject unauthenticated user connections in some cases when using the `auth_socket` authentication plugin [#54031](https://github.com/pingcap/tidb/issues/54031) @[lcwangchao](https://github.com/lcwangchao) + - Fix the issue that the `memory_quota` hint might not work in subqueries 
[#53834](https://github.com/pingcap/tidb/issues/53834) @[qw4990](https://github.com/qw4990) + - Fix the issue that the metadata lock fails to prevent DDL operations from executing in the plan cache scenario [#51407](https://github.com/pingcap/tidb/issues/51407) @[wjhuang2016](https://github.com/wjhuang2016) + - Fix the issue that using `CURRENT_DATE()` as the default value for a column results in incorrect query results [#53746](https://github.com/pingcap/tidb/issues/53746) @[tangenta](https://github.com/tangenta) + - Fix the issue that the `COALESCE()` function returns incorrect result type for `DATE` type parameters [#46475](https://github.com/pingcap/tidb/issues/46475) @[xzhangxian1008](https://github.com/xzhangxian1008) + - Reset the parameters in the `Open` method of `PipelinedWindow` to fix the unexpected error that occurs when the `PipelinedWindow` is used as a child node of `Apply` due to the reuse of previous parameter values caused by repeated opening and closing operations [#53600](https://github.com/pingcap/tidb/issues/53600) @[XuHuaiyu](https://github.com/XuHuaiyu) + - Fix the incorrect result of the TopN operator in correlated subqueries [#52777](https://github.com/pingcap/tidb/issues/52777) @[yibin87](https://github.com/yibin87) + - Fix the issue that the recursive CTE operator incorrectly tracks memory usage [#54181](https://github.com/pingcap/tidb/issues/54181) @[guo-shaoge](https://github.com/guo-shaoge) + - Fix the issue that an error occurs when using `SHOW COLUMNS` to view columns in a view [#54964](https://github.com/pingcap/tidb/issues/54964) @[lance6716](https://github.com/lance6716) + - Fix the issue that reducing the value of `tidb_ttl_delete_worker_count` during TTL job execution makes the job fail to complete [#55561](https://github.com/pingcap/tidb/issues/55561) @[lcwangchao](https://github.com/lcwangchao) + - Fix the issue that using a view does not work in recursive CTE [#49721](https://github.com/pingcap/tidb/issues/49721) 
@[hawkingrei](https://github.com/hawkingrei) + - Fix the issue that TiDB does not create corresponding statistics metadata (`stats_meta`) when creating a table with foreign keys [#53652](https://github.com/pingcap/tidb/issues/53652) @[hawkingrei](https://github.com/hawkingrei) + - Fix the issue that the query might return incorrect results instead of an error after being killed [#50089](https://github.com/pingcap/tidb/issues/50089) @[D3Hunter](https://github.com/D3Hunter) + - Fix the issue that the statistics synchronous loading mechanism might fail unexpectedly under high query concurrency [#52294](https://github.com/pingcap/tidb/issues/52294) @[hawkingrei](https://github.com/hawkingrei) + - Fix the issue that certain filter conditions in queries might cause the planner module to report an `invalid memory address or nil pointer dereference` error [#53582](https://github.com/pingcap/tidb/issues/53582) [#53580](https://github.com/pingcap/tidb/issues/53580) [#53594](https://github.com/pingcap/tidb/issues/53594) [#53603](https://github.com/pingcap/tidb/issues/53603) @[YangKeao](https://github.com/YangKeao) + - Fix the issue that the TiDB synchronously loading statistics mechanism retries to load empty statistics indefinitely and prints the `fail to get stats version for this histogram` log [#52657](https://github.com/pingcap/tidb/issues/52657) @[hawkingrei](https://github.com/hawkingrei) + - Fix the issue that the `TIMESTAMPADD()` function goes into an infinite loop when the first argument is `month` and the second argument is negative [#54908](https://github.com/pingcap/tidb/issues/54908) @[xzhangxian1008](https://github.com/xzhangxian1008) + - Fix the issue that TiDB might crash when `tidb_mem_quota_analyze` is enabled and the memory used by updating statistics exceeds the limit [#52601](https://github.com/pingcap/tidb/issues/52601) @[hawkingrei](https://github.com/hawkingrei) + - Fix the issue that `duplicate entry` might occur when adding unique indexes 
[#56161](https://github.com/pingcap/tidb/issues/56161) @[tangenta](https://github.com/tangenta) + - Fix the issue that the query latency of stale reads increases, caused by information schema cache misses [#53428](https://github.com/pingcap/tidb/issues/53428) @[crazycs520](https://github.com/crazycs520) + - Fix the issue that the `Distinct_count` information in GlobalStats might be incorrect [#53752](https://github.com/pingcap/tidb/issues/53752) @[hawkingrei](https://github.com/hawkingrei) + - Fix the issue that executing the `SELECT DISTINCT CAST(col AS DECIMAL), CAST(col AS SIGNED) FROM ...` query might return incorrect results [#53726](https://github.com/pingcap/tidb/issues/53726) @[hawkingrei](https://github.com/hawkingrei) + - Fix the issue that the `read_from_storage` hint might not take effect when the query has an available Index Merge execution plan [#56217](https://github.com/pingcap/tidb/issues/56217) @[AilinKid](https://github.com/AilinKid) + - Fix the issue that the `TIMESTAMPADD()` function returns incorrect results [#41052](https://github.com/pingcap/tidb/issues/41052) @[xzhangxian1008](https://github.com/xzhangxian1008) + - Fix the issue that `PREPARE`/`EXECUTE` statements with the `CONV` expression containing a `?` argument might result in incorrect query results when executed multiple times [#53505](https://github.com/pingcap/tidb/issues/53505) @[qw4990](https://github.com/qw4990) + - Fix the issue that the memory used by transactions might be tracked multiple times [#53984](https://github.com/pingcap/tidb/issues/53984) @[ekexium](https://github.com/ekexium) + - Fix the issue that column pruning without using shallow copies of slices might cause TiDB to panic [#52768](https://github.com/pingcap/tidb/issues/52768) @[winoros](https://github.com/winoros) + - Fix the issue that a SQL binding containing window functions might not take effect in some cases [#55981](https://github.com/pingcap/tidb/issues/55981) @[winoros](https://github.com/winoros) + - 
Fix the issue that TiDB might panic when parsing index data [#47115](https://github.com/pingcap/tidb/issues/47115) @[zyguan](https://github.com/zyguan) + - Fix the issue that TiDB might report an error due to GC when loading statistics at startup [#53592](https://github.com/pingcap/tidb/issues/53592) @[you06](https://github.com/you06) + - Fix the issue that an error occurs when a DML statement contains nested generated columns [#53967](https://github.com/pingcap/tidb/issues/53967) @[wjhuang2016](https://github.com/wjhuang2016) + - Fix the issue that TiDB panics when executing the `SHOW ERRORS` statement with a predicate that is always `true` [#46962](https://github.com/pingcap/tidb/issues/46962) @[elsa0520](https://github.com/elsa0520) + - Fix the issue that improper use of metadata locks might lead to writing anomalous data when using the plan cache under certain circumstances [#53634](https://github.com/pingcap/tidb/issues/53634) @[zimulala](https://github.com/zimulala) + - Fix the issue of data index inconsistency caused by retries during index addition [#55808](https://github.com/pingcap/tidb/issues/55808) @[lance6716](https://github.com/lance6716) + - Fix the issue that unstable unique IDs of columns might cause the `UPDATE` statement to return errors [#53236](https://github.com/pingcap/tidb/issues/53236) @[winoros](https://github.com/winoros) + - Fix the issue that after a statement within a transaction is killed by OOM, if TiDB continues to execute the next statement within the same transaction, you might get an error `Trying to start aggressive locking while it's already started` and a panic occurs [#53540](https://github.com/pingcap/tidb/issues/53540) @[MyonKeminta](https://github.com/MyonKeminta) + - Fix the issue that executing `RECOVER TABLE BY JOB JOB_ID;` might cause TiDB to panic [#55113](https://github.com/pingcap/tidb/issues/55113) @[crazycs520](https://github.com/crazycs520) + - Fix the issue that executing `ADD INDEX` might fail after modifying 
the PD member in the distributed execution framework [#48680](https://github.com/pingcap/tidb/issues/48680) @[lance6716](https://github.com/lance6716) + - Fix the issue that two DDL Owners might exist at the same time [#54689](https://github.com/pingcap/tidb/issues/54689) @[joccau](https://github.com/joccau) + - Fix the issue that TiDB rolling restart during the execution of `ADD INDEX` might cause the adding index operation to fail [#52805](https://github.com/pingcap/tidb/issues/52805) @[tangenta](https://github.com/tangenta) + - Fix the issue that the `LOAD DATA ... REPLACE INTO` operation causes data inconsistency [#56408](https://github.com/pingcap/tidb/issues/56408) @[fzzf678](https://github.com/fzzf678) + - Fix the issue that the `AUTO_INCREMENT` field is not correctly set after importing data using the `IMPORT INTO` statement [#56476](https://github.com/pingcap/tidb/issues/56476) @[D3Hunter](https://github.com/D3Hunter) + - Fix the issue that TiDB does not check for the existence of local files before restoring from a checkpoint [#53009](https://github.com/pingcap/tidb/issues/53009) @[lance6716](https://github.com/lance6716) + - Fix the issue that the DM schema tracker cannot create indexes longer than the default length [#55138](https://github.com/pingcap/tidb/issues/55138) @[lance6716](https://github.com/lance6716) + - Fix the issue that `ALTER TABLE` does not handle the `AUTO_INCREMENT` field correctly [#47899](https://github.com/pingcap/tidb/issues/47899) @[D3Hunter](https://github.com/D3Hunter) + - Fix the issue that unreleased session resources might lead to memory leaks [#56271](https://github.com/pingcap/tidb/issues/56271) @[lance6716](https://github.com/lance6716) + - Fix the issue that float or integer overflow affects the plan cache [#46538](https://github.com/pingcap/tidb/issues/46538) @[hawkingrei](https://github.com/hawkingrei) + - Fix the issue that part of the memory of the `IndexLookUp` operator is not tracked 
[#56440](https://github.com/pingcap/tidb/issues/56440) @[wshwsh12](https://github.com/wshwsh12) + - Fix the issue that stale read does not strictly verify the timestamp of the read operation, resulting in a small probability of affecting the consistency of the transaction when an offset exists between the TSO and the real physical time [#56809](https://github.com/pingcap/tidb/issues/56809) @[MyonKeminta](https://github.com/MyonKeminta) + - Fix the issue that TTL might fail if TiKV is not selected as the storage engine [#56402](https://github.com/pingcap/tidb/issues/56402) @[YangKeao](https://github.com/YangKeao) + - Fix the issue that TTL tasks cannot be canceled when there is a write conflict [#56422](https://github.com/pingcap/tidb/issues/56422) @[YangKeao](https://github.com/YangKeao) + - Fix the issue that inserting oversized numbers in scientific notation causes an error `ERROR 1264 (22003)`, to make the behavior consistent with MySQL [#47787](https://github.com/pingcap/tidb/issues/47787) @[lcwangchao](https://github.com/lcwangchao) + - Fix the issue that when canceling a TTL task, the corresponding SQL is not killed forcibly [#56511](https://github.com/pingcap/tidb/issues/56511) @[lcwangchao](https://github.com/lcwangchao) + - Fix the issue that the `INSERT ... 
ON DUPLICATE KEY` statement is not compatible with `mysql_insert_id` [#55965](https://github.com/pingcap/tidb/issues/55965) @[tiancaiamao](https://github.com/tiancaiamao) + - Fix the issue that audit log filtering does not take effect when SQL cannot build an execution plan [#50988](https://github.com/pingcap/tidb/issues/50988) @[CbcWestwolf](https://github.com/CbcWestwolf) + - Fix the issue that existing TTL tasks are executed unexpectedly frequently in a cluster that is upgraded from v6.5 to v7.5 or later [#56539](https://github.com/pingcap/tidb/issues/56539) @[lcwangchao](https://github.com/lcwangchao) + - Fix the issue that the `CAST` function does not support explicitly setting the character set [#55677](https://github.com/pingcap/tidb/issues/55677) @[Defined2014](https://github.com/Defined2014) + - Fix the issue that TiDB does not check the index length limitation when executing `ADD INDEX` [#56930](https://github.com/pingcap/tidb/issues/56930) @[fzzf678](https://github.com/fzzf678) + ++ TiKV + + - Add the `RawKvMaxTimestampNotSynced` error, log detailed error information in `errorpb.Error.max_ts_not_synced`, and add a retry mechanism for the `must_raw_put` operation when this error occurs [#16789](https://github.com/tikv/tikv/issues/16789) @[pingyu](https://github.com/pingyu) + - Fix a traffic control issue that might occur after deleting large tables or partitions [#17304](https://github.com/tikv/tikv/issues/17304) @[SpadeA-Tang](https://github.com/SpadeA-Tang) + - Fix the panic issue that occurs when read threads access outdated indexes in the MemTable of the Raft Engine [#17383](https://github.com/tikv/tikv/issues/17383) @[LykxSassinator](https://github.com/LykxSassinator) + - Fix the issue that CDC and log-backup do not limit the timeout of `check_leader` using the `advance-ts-interval` configuration, causing the `resolved_ts` lag to be too large when TiKV restarts normally in some cases [#17107](https://github.com/tikv/tikv/issues/17107) 
@[SpadeA-Tang](https://github.com/SpadeA-Tang) + - Fix the issue that SST files imported by TiDB Lightning are lost after TiKV is restarted [#15912](https://github.com/tikv/tikv/issues/15912) @[lance6716](https://github.com/lance6716) + - Fix the issue that TiKV might panic due to ingesting deleted `sst_importer` SST files [#15053](https://github.com/tikv/tikv/issues/15053) @[lance6716](https://github.com/lance6716) + - Fix the issue that when there are a large number of Regions in a TiKV instance, TiKV might be OOM during data import [#16229](https://github.com/tikv/tikv/issues/16229) @[SpadeA-Tang](https://github.com/SpadeA-Tang) + - Fix the issue that bloom filters are incompatible between earlier versions (earlier than v7.1) and later versions [#17272](https://github.com/tikv/tikv/issues/17272) @[v01dstar](https://github.com/v01dstar) + - Fix the issue that setting the gRPC message compression method via `grpc-compression-type` does not take effect on messages sent from TiKV to TiDB [#17176](https://github.com/tikv/tikv/issues/17176) @[ekexium](https://github.com/ekexium) + - Fix the issue of unstable test cases, ensuring that each test uses an independent temporary directory to avoid online configuration changes affecting other test cases [#16871](https://github.com/tikv/tikv/issues/16871) @[glorv](https://github.com/glorv) + - Fix the issue that when a large number of transactions are queuing for lock release on the same key and the key is frequently updated, excessive pressure on deadlock detection might cause TiKV OOM issues [#17394](https://github.com/tikv/tikv/issues/17394) @[MyonKeminta](https://github.com/MyonKeminta) + - Fix the issue that the decimal part of the `DECIMAL` type is incorrect in some cases [#16913](https://github.com/tikv/tikv/issues/16913) @[gengliqi](https://github.com/gengliqi) + - Fix the issue that the `CONV()` function in queries might overflow during numeric system conversion, leading to TiKV panic 
[#16969](https://github.com/tikv/tikv/issues/16969) @[gengliqi](https://github.com/gengliqi) + - Fix the issue that TiKV might panic when a stale replica processes Raft snapshots, triggered by a slow split operation and immediate removal of the new replica [#17469](https://github.com/tikv/tikv/issues/17469) @[hbisheng](https://github.com/hbisheng) + - Fix the issue that highly concurrent Coprocessor requests might cause TiKV OOM [#16653](https://github.com/tikv/tikv/issues/16653) @[overvenus](https://github.com/overvenus) + - Fix the issue that prevents master key rotation when the master key is stored in a Key Management Service (KMS) [#17410](https://github.com/tikv/tikv/issues/17410) @[hhwyt](https://github.com/hhwyt) + - Fix the issue that the output of the `raft region` command in tikv-ctl does not include the Region status information [#17037](https://github.com/tikv/tikv/issues/17037) @[glorv](https://github.com/glorv) + - Fix the issue that the **Storage async write duration** monitoring metric on the TiKV panel in Grafana is inaccurate [#17579](https://github.com/tikv/tikv/issues/17579) @[overvenus](https://github.com/overvenus) + - Fix the issue that TiKV converts the time zone incorrectly for Brazil and Egypt [#16220](https://github.com/tikv/tikv/issues/16220) @[overvenus](https://github.com/overvenus) + ++ PD + + - Fix the memory leak issue in label statistics [#8700](https://github.com/tikv/pd/issues/8700) @[lhy1024](https://github.com/lhy1024) + - Fix the issue that resource groups print excessive logs [#8159](https://github.com/tikv/pd/issues/8159) @[nolouch](https://github.com/nolouch) + - Fix the performance jitter issue caused by frequent creation of random number generator [#8674](https://github.com/tikv/pd/issues/8674) @[rleungx](https://github.com/rleungx) + - Fix the memory leak issue in Region statistics [#8710](https://github.com/tikv/pd/issues/8710) @[rleungx](https://github.com/rleungx) + - Fix the memory leak issue in hotspot cache 
[#8698](https://github.com/tikv/pd/issues/8698) @[lhy1024](https://github.com/lhy1024) + - Fix the issue that `evict-leader-scheduler` fails to work properly when it is repeatedly created with the same Store ID [#8756](https://github.com/tikv/pd/issues/8756) @[okJiang](https://github.com/okJiang) + - Fix the issue that setting `replication.strictly-match-label` to `true` causes TiFlash to fail to start [#8480](https://github.com/tikv/pd/issues/8480) @[rleungx](https://github.com/rleungx) + - Fix the issue that changing the log level via the configuration file does not take effect [#8117](https://github.com/tikv/pd/issues/8117) @[rleungx](https://github.com/rleungx) + - Fix the issue that resource groups could not effectively limit resource usage under high concurrency [#8435](https://github.com/tikv/pd/issues/8435) @[nolouch](https://github.com/nolouch) + - Fix the data race issue that PD encounters during operator checks [#8263](https://github.com/tikv/pd/issues/8263) @[lhy1024](https://github.com/lhy1024) + - Fix the issue that a resource group encounters quota limits when requesting tokens for more than 500 ms [#8349](https://github.com/tikv/pd/issues/8349) @[nolouch](https://github.com/nolouch) + - Fix the issue that some logs are not redacted [#8419](https://github.com/tikv/pd/issues/8419) @[rleungx](https://github.com/rleungx) + - Fix the issue that no error is reported when binding a role to a resource group [#54417](https://github.com/pingcap/tidb/issues/54417) @[JmPotato](https://github.com/JmPotato) + - Fix the issue that PD's Region API cannot be requested when a large number of Regions exist [#55872](https://github.com/pingcap/tidb/issues/55872) @[rleungx](https://github.com/rleungx) + - Fix the issue that a large number of retries occur when canceling resource groups queries [#8217](https://github.com/tikv/pd/issues/8217) @[nolouch](https://github.com/nolouch) + - Fix the issue that the encryption manager is not initialized before use 
[#8384](https://github.com/tikv/pd/issues/8384) @[rleungx](https://github.com/rleungx) + - Fix the issue that the `Filter target` monitoring metric for PD does not provide scatter range information [#8125](https://github.com/tikv/pd/issues/8125) @[HuSharp](https://github.com/HuSharp) + - Fix the data race issue of resource groups [#8267](https://github.com/tikv/pd/issues/8267) @[HuSharp](https://github.com/HuSharp) + - Fix the issue that setting the TiKV configuration item [`coprocessor.region-split-size`](/tikv-configuration-file.md#region-split-size) to a value less than 1 MiB causes PD panic [#8323](https://github.com/tikv/pd/issues/8323) @[JmPotato](https://github.com/JmPotato) + - Fix the issue that when using a wrong parameter in `evict-leader-scheduler`, PD does not report errors correctly and some schedulers are unavailable [#8619](https://github.com/tikv/pd/issues/8619) @[rleungx](https://github.com/rleungx) + - Fix the issue that slots are not fully deleted in a resource group client, which causes the number of the allocated tokens to be less than the specified value [#7346](https://github.com/tikv/pd/issues/7346) @[guo-shaoge](https://github.com/guo-shaoge) + - Fix the issue that down peers might not recover when using Placement Rules [#7808](https://github.com/tikv/pd/issues/7808) @[rleungx](https://github.com/rleungx) + ++ TiFlash + + - Fix the issue that TiFlash metadata might become corrupted and cause the process to panic when upgrading a cluster from a version earlier than v6.5.0 to v6.5.0 or later [#9039](https://github.com/pingcap/tiflash/issues/9039) @[JaySon-Huang](https://github.com/JaySon-Huang) + - Fix the issue that some queries might report a column type mismatch error after late materialization is enabled [#9175](https://github.com/pingcap/tiflash/issues/9175) @[JinheLin](https://github.com/JinheLin) + - Fix the issue that some queries might report errors when late materialization is enabled 
[#9472](https://github.com/pingcap/tiflash/issues/9472) @[Lloyd-Pottiger](https://github.com/Lloyd-Pottiger) + - Fix the issue that some JSON functions unsupported by TiFlash are pushed down to TiFlash [#9444](https://github.com/pingcap/tiflash/issues/9444) @[windtalker](https://github.com/windtalker) + - Fix the issue that setting the SSL certificate configuration to an empty string in TiFlash incorrectly enables TLS and causes TiFlash to fail to start [#9235](https://github.com/pingcap/tiflash/issues/9235) @[JaySon-Huang](https://github.com/JaySon-Huang) + - Fix the issue that a network partition (network disconnection) between TiFlash and any PD might cause read request timeout errors [#9243](https://github.com/pingcap/tiflash/issues/9243) @[Lloyd-Pottiger](https://github.com/Lloyd-Pottiger) + - Fix the issue that a large number of duplicate rows might be read in FastScan mode after importing data via BR or TiDB Lightning [#9118](https://github.com/pingcap/tiflash/issues/9118) @[JinheLin](https://github.com/JinheLin) + - Fix the issue that TiFlash fails to parse the table schema when the table contains Bit-type columns with a default value that contains invalid characters [#9461](https://github.com/pingcap/tiflash/issues/9461) @[Lloyd-Pottiger](https://github.com/Lloyd-Pottiger) + - Fix the issue that queries with virtual generated columns might return incorrect results after late materialization is enabled [#9188](https://github.com/pingcap/tiflash/issues/9188) @[JinheLin](https://github.com/JinheLin) + - Fix the issue that TiFlash might fail to synchronize schemas after executing `ALTER TABLE ... 
EXCHANGE PARTITION` across databases [#7296](https://github.com/pingcap/tiflash/issues/7296) @[JaySon-Huang](https://github.com/JaySon-Huang) + - Fix the issue that TiFlash might panic when a database is deleted shortly after creation [#9266](https://github.com/pingcap/tiflash/issues/9266) @[JaySon-Huang](https://github.com/JaySon-Huang) + - Fix the issue that when using the `CAST()` function to convert a string to a datetime with a time zone or invalid characters, the result is incorrect [#8754](https://github.com/pingcap/tiflash/issues/8754) @[solotzg](https://github.com/solotzg) + - Fix the issue that TiFlash might return transiently incorrect results in high-concurrency read scenarios [#8845](https://github.com/pingcap/tiflash/issues/8845) @[JinheLin](https://github.com/JinheLin) + - Fix the issue that the `SUBSTRING_INDEX()` function might cause TiFlash to crash in some corner cases [#9116](https://github.com/pingcap/tiflash/issues/9116) @[wshwsh12](https://github.com/wshwsh12) + - Fix the issue that frequent `EXCHANGE PARTITION` and `DROP TABLE` operations over a long period in a cluster might slow down the replication of TiFlash table metadata and degrade the query performance [#9227](https://github.com/pingcap/tiflash/issues/9227) @[JaySon-Huang](https://github.com/JaySon-Huang) + - Fix the issue that a query with an empty key range fails to correctly generate read tasks on TiFlash, which might block TiFlash queries [#9108](https://github.com/pingcap/tiflash/issues/9108) @[JinheLin](https://github.com/JinheLin) + - Fix the issue that the sign in the result of the `CAST AS DECIMAL` function is incorrect in certain cases [#9301](https://github.com/pingcap/tiflash/issues/9301) @[guo-shaoge](https://github.com/guo-shaoge) + - Fix the issue that the `SUBSTRING()` function does not support the `pos` and `len` arguments for certain integer types, causing query errors [#9473](https://github.com/pingcap/tiflash/issues/9473) @[gengliqi](https://github.com/gengliqi) + 
- Fix the issue that executing `DROP TABLE` on large tables might cause TiFlash OOM [#9437](https://github.com/pingcap/tiflash/issues/9437) @[JaySon-Huang](https://github.com/JaySon-Huang) + ++ Tools + + + Backup & Restore (BR) + + - Fix the issue that BR integration test cases are unstable, and add a new test case to simulate snapshot or log backup file corruption [#53835](https://github.com/pingcap/tidb/issues/53835) @[Leavrth](https://github.com/Leavrth) + - Fix the issue that DDLs requiring backfilling, such as `ADD INDEX` and `MODIFY COLUMN`, might not be correctly recovered during incremental restore [#54426](https://github.com/pingcap/tidb/issues/54426) @[3pointer](https://github.com/3pointer) + - Fix the issue that after a log backup PITR task fails and you stop it, the safepoints related to that task are not properly cleared in PD [#17316](https://github.com/tikv/tikv/issues/17316) @[Leavrth](https://github.com/Leavrth) + - Fix the issue that log backup might be paused after the advancer owner migration [#53561](https://github.com/pingcap/tidb/issues/53561) @[RidRisR](https://github.com/RidRisR) + - Fix the inefficiency issue in scanning DDL jobs during incremental backups [#54139](https://github.com/pingcap/tidb/issues/54139) @[3pointer](https://github.com/3pointer) + - Fix the issue that the backup performance during checkpoint backups is affected due to interruptions in seeking Region leaders [#17168](https://github.com/tikv/tikv/issues/17168) @[Leavrth](https://github.com/Leavrth) + - Fix the issue that BR logs might print sensitive credential information when log backup is enabled [#55273](https://github.com/pingcap/tidb/issues/55273) @[RidRisR](https://github.com/RidRisR) + - Fix the issue that BR fails to correctly identify errors due to multiple nested retries during the restore process [#54053](https://github.com/pingcap/tidb/issues/54053) @[RidRisR](https://github.com/RidRisR) + - Fix the issue that TiKV might panic when resuming a paused log 
backup task with unstable network connections to PD [#17020](https://github.com/tikv/tikv/issues/17020) @[YuJuncen](https://github.com/YuJuncen) + - Fix the issue that backup tasks might get stuck if TiKV becomes unresponsive during the backup process [#53480](https://github.com/pingcap/tidb/issues/53480) @[Leavrth](https://github.com/Leavrth) + - Fix the issue that the checkpoint path of backup and restore is incompatible with some external storage [#55265](https://github.com/pingcap/tidb/issues/55265) @[Leavrth](https://github.com/Leavrth) + - Fix the issue that the Region fetched from PD does not have a Leader when restoring data using BR or importing data using TiDB Lightning in physical import mode [#51124](https://github.com/pingcap/tidb/issues/51124) [#50501](https://github.com/pingcap/tidb/issues/50501) @[Leavrth](https://github.com/Leavrth) + - Fix the issue that the transfer of PD leaders might cause BR to panic when restoring data [#53724](https://github.com/pingcap/tidb/issues/53724) @[Leavrth](https://github.com/Leavrth) + - Fix the issue that after pausing, stopping, and rebuilding the log backup task, the task status is normal, but the checkpoint does not advance [#53047](https://github.com/pingcap/tidb/issues/53047) @[RidRisR](https://github.com/RidRisR) + - Fix the issue that log backups cannot resolve residual locks promptly, causing the checkpoint to fail to advance [#57134](https://github.com/pingcap/tidb/issues/57134) @[3pointer](https://github.com/3pointer) + + + TiCDC + + - Fix the issue that the default value of `TIMEZONE` type is not set according to the correct time zone [#10931](https://github.com/pingcap/tiflow/issues/10931) @[3AceShowHand](https://github.com/3AceShowHand) + - Fix the issue that TiCDC might panic when the Sorter module reads disk data [#10853](https://github.com/pingcap/tiflow/issues/10853) @[hicqu](https://github.com/hicqu) + - Fix the issue that data inconsistency might occur when restarting Changefeed repeatedly when 
performing a large number of `UPDATE` operations in a multi-node environment [#11219](https://github.com/pingcap/tiflow/issues/11219) @[lidezhu](https://github.com/lidezhu) + - Fix the issue that after filtering out `add table partition` events is configured in `ignore-event`, TiCDC does not replicate other types of DML changes for related partitions to the downstream [#10524](https://github.com/pingcap/tiflow/issues/10524) @[CharlesCheung96](https://github.com/CharlesCheung96) + - Fix the issue that TiCDC might get stuck when replicating data to Kafka [#9855](https://github.com/pingcap/tiflow/issues/9855) @[hicqu](https://github.com/hicqu) + - Fix the issue that `DROP PRIMARY KEY` and `DROP UNIQUE KEY` statements are not replicated correctly [#10890](https://github.com/pingcap/tiflow/issues/10890) @[asddongmen](https://github.com/asddongmen) + - Fix the issue that the Processor module might get stuck when the downstream Kafka is inaccessible [#11340](https://github.com/pingcap/tiflow/issues/11340) @[asddongmen](https://github.com/asddongmen) + + + TiDB Data Migration (DM) + + - Fix the issue that DM does not set the default database when processing the `ALTER DATABASE` statement, which causes a replication error [#11503](https://github.com/pingcap/tiflow/issues/11503) @[lance6716](https://github.com/lance6716) + - Fix the issue that multiple DM-master nodes might simultaneously become leaders, leading to data inconsistency [#11602](https://github.com/pingcap/tiflow/issues/11602) @[GMHDBJD](https://github.com/GMHDBJD) + - Fix the connection blocking issue by upgrading `go-mysql` [#11041](https://github.com/pingcap/tiflow/issues/11041) @[D3Hunter](https://github.com/D3Hunter) + - Fix the issue that data replication is interrupted when the index length exceeds the default value of `max-index-length` [#11459](https://github.com/pingcap/tiflow/issues/11459) @[michaelmdeng](https://github.com/michaelmdeng) + - Fix the issue that DM returns an error when replicating the 
`ALTER TABLE ... DROP PARTITION` statement for LIST partitioned tables [#54760](https://github.com/pingcap/tidb/issues/54760) @[lance6716](https://github.com/lance6716) + + + TiDB Lightning + + - Fix the issue that TiDB Lightning fails to receive oversized messages sent from TiKV [#56114](https://github.com/pingcap/tidb/issues/56114) @[fishiu](https://github.com/fishiu) + - Fix the issue that TiKV data might be corrupted when importing data after disabling the import mode of TiDB Lightning [#15003](https://github.com/tikv/tikv/issues/15003) [#47694](https://github.com/pingcap/tidb/issues/47694) @[lance6716](https://github.com/lance6716) + - Fix the issue that transaction conflicts occur during data import using TiDB Lightning [#49826](https://github.com/pingcap/tidb/issues/49826) @[lance6716](https://github.com/lance6716) + - Fix the issue that TiDB Lightning might fail to import data when EBS BR is running [#49517](https://github.com/pingcap/tidb/issues/49517) @[mittalrishabh](https://github.com/mittalrishabh) + - Fix the issue that TiDB Lightning reports a `verify allocator base failed` error when two instances simultaneously start parallel import tasks and are assigned the same task ID [#55384](https://github.com/pingcap/tidb/issues/55384) @[ei-sugimoto](https://github.com/ei-sugimoto) + - Fix the issue that killing the PD Leader causes TiDB Lightning to report the `invalid store ID 0` error during data import [#50501](https://github.com/pingcap/tidb/issues/50501) @[Leavrth](https://github.com/Leavrth) + + + Dumpling + + - Fix the issue that Dumpling reports an error when exporting tables and views at the same time [#53682](https://github.com/pingcap/tidb/issues/53682) @[tangenta](https://github.com/tangenta) + + + TiDB Binlog + + - Fix the issue that deleting rows during the execution of `ADD COLUMN` might report an error `data and columnID count not match` when TiDB Binlog is enabled [#53133](https://github.com/pingcap/tidb/issues/53133) 
@[tangenta](https://github.com/tangenta)

diff --git a/releases/release-7.4.0.md b/releases/release-7.4.0.md
index fcc60a550e1d6..af131dc40c5e3 100644
--- a/releases/release-7.4.0.md
+++ b/releases/release-7.4.0.md
@@ -199,6 +199,14 @@ Quick access: [Quick start](https://docs.pingcap.com/tidb/v7.4/quick-start-with-
     For more information, see [documentation](/system-variables.md#tidb_opt_enable_hash_join-new-in-v656-v712-and-v740).
 
+* Memory control for the statistics cache is generally available (GA) [#45367](https://github.com/pingcap/tidb/issues/45367) @[hawkingrei](https://github.com/hawkingrei)
+
+    TiDB instances can cache table statistics to accelerate execution plan generation and improve SQL performance. Starting from v6.1.0, TiDB introduces the system variable [`tidb_stats_cache_mem_quota`](/system-variables.md#tidb_stats_cache_mem_quota-new-in-v610). By configuring this system variable, you can set a memory usage limit for the statistics cache. When the cache reaches its limit, TiDB automatically evicts inactive cache entries, helping control instance memory usage and improve stability.
+
+    Starting from v7.4.0, this feature becomes generally available (GA).
+
+    For more information, see [documentation](/system-variables.md#tidb_stats_cache_mem_quota-new-in-v610).
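As a quick illustration of the variable described above, a minimal sketch (the 2 GiB quota is an arbitrary example value; run only against a test cluster):

```sql
-- Cap the statistics cache at 2 GiB per TiDB instance (value in bytes)
SET GLOBAL tidb_stats_cache_mem_quota = 2147483648;
-- Verify the current setting
SHOW VARIABLES LIKE 'tidb_stats_cache_mem_quota';
```

When the cached statistics exceed this quota, TiDB evicts the least recently used entries rather than failing queries.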
+ ### SQL * TiDB supports partition type management [#42728](https://github.com/pingcap/tidb/issues/42728) @[mjonss](https://github.com/mjonss) diff --git a/releases/release-notes.md b/releases/release-notes.md index 22b566ee5be47..074d9549bece8 100644 --- a/releases/release-notes.md +++ b/releases/release-notes.md @@ -29,6 +29,7 @@ summary: TiDB has released multiple versions, including 7.5.1, 7.5.0, 7.4.0-DMR, ## 7.1 +- [7.1.6](/releases/release-7.1.6.md): 2024-11-21 - [7.1.5](/releases/release-7.1.5.md): 2024-04-26 - [7.1.4](/releases/release-7.1.4.md): 2024-03-11 - [7.1.3](/releases/release-7.1.3.md): 2023-12-21 diff --git a/releases/release-timeline.md b/releases/release-timeline.md index 42a2e06dae8dc..dc6efec245f94 100644 --- a/releases/release-timeline.md +++ b/releases/release-timeline.md @@ -11,6 +11,7 @@ This document shows all the released TiDB versions in reverse chronological orde | Version | Release Date | | :--- | :--- | +| [7.1.6](/releases/release-7.1.6.md) | 2024-11-21 | | [7.5.4](/releases/release-7.5.4.md) | 2024-10-15 | | [6.5.11](/releases/release-6.5.11.md) | 2024-09-20 | | [7.5.3](/releases/release-7.5.3.md) | 2024-08-05 | diff --git a/sql-statements/sql-statement-insert.md b/sql-statements/sql-statement-insert.md index d7c4e35436cb9..f497a8239aef8 100644 --- a/sql-statements/sql-statement-insert.md +++ b/sql-statements/sql-statement-insert.md @@ -41,6 +41,10 @@ OnDuplicateKeyUpdate ::= ( 'ON' 'DUPLICATE' 'KEY' 'UPDATE' AssignmentList )? ``` +> **Note:** +> +> Starting from v6.6.0, TiDB supports [Resource Control](/tidb-resource-control.md). You can use this feature to execute SQL statements with different priorities in different resource groups. By configuring proper quotas and priorities for these resource groups, you can gain better scheduling control for SQL statements with different priorities. When resource control is enabled, statement priority (`PriorityOpt`) will no longer take effect. 
It is recommended that you use [Resource Control](/tidb-resource-control.md) to manage resource usage for different SQL statements. + ## Examples ```sql diff --git a/sql-statements/sql-statement-replace.md b/sql-statements/sql-statement-replace.md index 397b51fa0b230..d44db793a3af0 100644 --- a/sql-statements/sql-statement-replace.md +++ b/sql-statements/sql-statement-replace.md @@ -32,6 +32,10 @@ InsertValues ::= | 'SET' ColumnSetValue? ( ',' ColumnSetValue )* ``` +> **Note:** +> +> Starting from v6.6.0, TiDB supports [Resource Control](/tidb-resource-control.md). You can use this feature to execute SQL statements with different priorities in different resource groups. By configuring proper quotas and priorities for these resource groups, you can gain better scheduling control for SQL statements with different priorities. When resource control is enabled, statement priority (`PriorityOpt`) will no longer take effect. It is recommended that you use [Resource Control](/tidb-resource-control.md) to manage resource usage for different SQL statements. + ## Examples ```sql diff --git a/sql-statements/sql-statement-select.md b/sql-statements/sql-statement-select.md index 67fdd2f239f3b..15aeca289b921 100644 --- a/sql-statements/sql-statement-select.md +++ b/sql-statements/sql-statement-select.md @@ -111,6 +111,10 @@ TableSampleOpt ::= |`LOCK IN SHARE MODE` | To guarantee compatibility, TiDB parses these three modifiers, but will ignore them. | | `TABLESAMPLE` | To get a sample of rows from the table. | +> **Note:** +> +> Starting from v6.6.0, TiDB supports [Resource Control](/tidb-resource-control.md). You can use this feature to execute SQL statements with different priorities in different resource groups. By configuring proper quotas and priorities for these resource groups, you can gain better scheduling control for SQL statements with different priorities. When resource control is enabled, statement priority (`HIGH_PRIORITY`) will no longer take effect. 
It is recommended that you use [Resource Control](/tidb-resource-control.md) to manage resource usage for different SQL statements. + ## Examples ### SELECT diff --git a/sql-statements/sql-statement-update.md b/sql-statements/sql-statement-update.md index 88f019146a2e3..0e8f2d088abbf 100644 --- a/sql-statements/sql-statement-update.md +++ b/sql-statements/sql-statement-update.md @@ -33,6 +33,10 @@ The `UPDATE` statement is used to modify data in a specified table. ![WhereClauseOptional](/media/sqlgram/WhereClauseOptional.png) +> **Note:** +> +> Starting from v6.6.0, TiDB supports [Resource Control](/tidb-resource-control.md). You can use this feature to execute SQL statements with different priorities in different resource groups. By configuring proper quotas and priorities for these resource groups, you can gain better scheduling control for SQL statements with different priorities. When resource control is enabled, statement priority (`LOW_PRIORITY` and `HIGH_PRIORITY`) will no longer take effect. It is recommended that you use [Resource Control](/tidb-resource-control.md) to manage resource usage for different SQL statements. + ## Examples ```sql diff --git a/sync-diff-inspector/sync-diff-inspector-overview.md b/sync-diff-inspector/sync-diff-inspector-overview.md index 15c2c8e155240..4a00e0f7e49a2 100644 --- a/sync-diff-inspector/sync-diff-inspector-overview.md +++ b/sync-diff-inspector/sync-diff-inspector-overview.md @@ -72,6 +72,9 @@ check-thread-count = 4 # If enabled, SQL statements is exported to fix inconsistent tables. export-fix-sql = true +# Only compares the data instead of the table structure. This configuration item is an experimental feature. It is not recommended that you use it in the production environment. +check-data-only = false + # Only compares the table structure instead of the data. 
check-struct-only = false

diff --git a/system-variables.md b/system-variables.md
index a26d5fd05f37c..739f713ccefcb 100644
--- a/system-variables.md
+++ b/system-variables.md
@@ -2840,16 +2840,12 @@ For a system upgraded to v5.0 from an earlier version, if you have not modified
 
 ### tidb_gc_life_time New in v5.0
 
-> **Note:**
->
-> This variable is read-only for [TiDB Cloud Serverless](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-cloud-serverless).
-
 - Scope: GLOBAL
 - Persists to cluster: Yes
 - Applies to hint [SET_VAR](/optimizer-hints.md#set_varvar_namevar_value): No
 - Type: Duration
 - Default value: `10m0s`
-- Range: `[10m0s, 8760h0m0s]`
+- Range: `[10m0s, 8760h0m0s]` for TiDB Self-Managed and [TiDB Cloud Dedicated](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-cloud-dedicated), `[10m0s, 168h0m0s]` for [TiDB Cloud Serverless](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-cloud-serverless)
 - The time limit during which data is retained for each GC, in the format of Go Duration. When a GC happens, the current time minus this value is the safe point.
 
 > **Note:**
@@ -4621,7 +4617,7 @@ SHOW WARNINGS;
 
 ### tidb_read_consistency New in v5.4.0
 
 - Scope: SESSION
-- Applies to hint [SET_VAR](/optimizer-hints.md#set_varvar_namevar_value): Yes
+- Applies to hint [SET_VAR](/optimizer-hints.md#set_varvar_namevar_value): Yes (Note that if [non-transactional DML statements](/non-transactional-dml.md) exist, modifying the value of this variable using the hint might not take effect.)
 - Type: String
 - Default value: `strict`
 - This variable is used to control the read consistency for an auto-commit read statement.
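As a sketch of the two variables touched by the hunks above (the table `t` is hypothetical; run only against a test cluster, since extending GC retention increases storage usage):

```sql
-- tidb_gc_life_time: retain pre-GC data for 24 hours (must stay within the documented range)
SET GLOBAL tidb_gc_life_time = '24h';
-- tidb_read_consistency supports the SET_VAR hint for a single auto-commit read
SELECT /*+ SET_VAR(tidb_read_consistency=weak) */ * FROM t;
```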
@@ -4631,7 +4627,7 @@ SHOW WARNINGS; ### tidb_read_staleness New in v5.4.0 - Scope: SESSION -- Applies to hint [SET_VAR](/optimizer-hints.md#set_varvar_namevar_value): Yes +- Applies to hint [SET_VAR](/optimizer-hints.md#set_varvar_namevar_value): No - Type: Integer - Default value: `0` - Range: `[-2147483648, 0]` diff --git a/ticdc/ticdc-split-update-behavior.md b/ticdc/ticdc-split-update-behavior.md index 0a07e74b99fcf..86be8105de940 100644 --- a/ticdc/ticdc-split-update-behavior.md +++ b/ticdc/ticdc-split-update-behavior.md @@ -7,7 +7,7 @@ summary: Introduce the behavior changes about whether TiCDC splits `UPDATE` even ## Split `UPDATE` events for MySQL sinks -Starting from v7.5.2, when using the MySQL sink, any TiCDC node that receives a request for replicating a table will fetch the current timestamp `thresholdTS` from PD before starting the replication to the downstream. Based on the value of this timestamp, TiCDC decides whether to split `UPDATE` events: +Starting from v6.5.10, v7.1.6, and v7.5.2, when using the MySQL sink, any TiCDC node that receives a request for replicating a table will fetch the current timestamp `thresholdTS` from PD before starting the replication to the downstream. Based on the value of this timestamp, TiCDC decides whether to split `UPDATE` events: - For transactions containing one or multiple `UPDATE` changes, if the transaction `commitTS` is less than `thresholdTS`, TiCDC splits the `UPDATE` event into a `DELETE` event and an `INSERT` event before writing them to the Sorter module. - For `UPDATE` events with the transaction `commitTS` greater than or equal to `thresholdTS`, TiCDC does not split them. For more information, see GitHub issue [#10918](https://github.com/pingcap/tiflow/issues/10918). @@ -138,7 +138,7 @@ Starting from v6.5.10, v7.1.6, and v7.5.3, when using a non-MySQL sink, TiCDC su | v7.1.1 | Canal/Open | ✗ | ✓ | | | v7.1.1 | CSV/Avro | ✗ | ✗ | Split but does not sort. 
See [#9086](https://github.com/pingcap/tiflow/issues/9658) | | v7.1.2 ~ v7.1.5 | ALL | ✓ | ✗ | | -| \>= v7.1.6 (not released yet) | ALL | ✓ (Default value: `output-raw-change-event = false`) | ✓ (Optional: `output-raw-change-event = true`) | | +| \>= v7.1.6 | ALL | ✓ (Default value: `output-raw-change-event = false`) | ✓ (Optional: `output-raw-change-event = true`) | | #### Release 7.5 compatibility diff --git a/tidb-cloud/backup-and-restore.md b/tidb-cloud/backup-and-restore.md index 4d90cd3f1b611..f2413aba3e336 100644 --- a/tidb-cloud/backup-and-restore.md +++ b/tidb-cloud/backup-and-restore.md @@ -91,8 +91,9 @@ To configure the backup schedule, perform the following steps: > **Note** > - > - After you delete a cluster, the automatic backup files will be retained for a specified period, as set in backup retention. You need to delete the backup files accordingly. - > - After you delete a cluster, the existing manual backup files will be retained until you manually delete them, or your account is closed. + > - All auto-backups, except the latest one, will be deleted if their lifetime exceeds the retention period. The latest auto-backup will not be deleted unless you delete it manually. This ensures that you can restore cluster data if accidental deletion occurs. + > - After you delete a cluster, auto-backups with a lifetime within the retention period will be moved to the recycle bin. + > - After you delete a cluster, existing manual backups will be retained until manually deleted or your account is closed. 
### Turn on dual region backup diff --git a/tidb-cloud/branch-github-integration.md b/tidb-cloud/branch-github-integration.md index abdb2d3a339ca..98e41a47faa05 100644 --- a/tidb-cloud/branch-github-integration.md +++ b/tidb-cloud/branch-github-integration.md @@ -58,8 +58,8 @@ After you connect your TiDB Cloud Serverless cluster to your GitHub repository, | Pull request changes | TiDB Cloud Branching app behaviors | |------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| Create a pull request | When you create a pull request in the repository, the [TiDB Cloud Branching](https://github.com/apps/tidb-cloud-branching) app creates a branch for your TiDB Cloud Serverless cluster. The branch name is in the `${github_branch_name}_${pr_id}_${commit_sha}` format. Note that the number of branches has a [limit](/tidb-cloud/branch-overview.md#limitations-and-quotas). | -| Push new commits to a pull request | Every time you push a new commit to a pull request in the repository, the [TiDB Cloud Branching](https://github.com/apps/tidb-cloud-branching) app deletes the previous TiDB Cloud Serverless branch and creates a new branch for the latest commit. | +| Create a pull request | When you create a pull request in the repository, the [TiDB Cloud Branching](https://github.com/apps/tidb-cloud-branching) app creates a branch for your TiDB Cloud Serverless cluster. When `branch.mode` is set to `reset`, the branch name follows the `${github_branch_name}_${pr_id}` format. When `branch.mode` is set to `reserve`, the branch name follows the `${github_branch_name}_${pr_id}_${commit_sha}` format. 
Note that the number of branches has a [limit](/tidb-cloud/branch-overview.md#limitations-and-quotas). | +| Push new commits to a pull request | When `branch.mode` is set to `reset`, every time you push a new commit to a pull request in the repository, the [TiDB Cloud Branching](https://github.com/apps/tidb-cloud-branching) app resets the TiDB Cloud Serverless branch. When `branch.mode` is set to `reserve`, the app creates a new branch for the latest commit. | | Close or merge a pull request | When you close or merge a pull request, the [TiDB Cloud Branching](https://github.com/apps/tidb-cloud-branching) app deletes the branch for this pull request. | | Reopen a pull request | When you reopen a pull request, the [TiDB Cloud Branching](https://github.com/apps/tidb-cloud-branching) app creates a branch for the lasted commit of the pull request. | @@ -94,16 +94,19 @@ github: - ".*_db" ``` -### branch.autoReserved +### branch.mode -**Type:** boolean. **Default:** `false`. +**Type:** string. **Default:** `reset`. -If it is set to `true`, the TiDB Cloud Branching app will not delete the TiDB Cloud Serverless branch that is created in the previous commit. +Specify how the TiDB Cloud Branching app handles branch updates: + +- If it is set to `reset`, the TiDB Cloud Branching app will update the existing branch with the latest data. +- If it is set to `reserve`, the TiDB Cloud Branching app will create a new branch for your latest commit. ```yaml github: branch: - autoReserved: false + mode: reset ``` ### branch.autoDestroy diff --git a/tidb-cloud/branch-manage.md b/tidb-cloud/branch-manage.md index 9a0bf8d7294a3..da9f3b88a36a8 100644 --- a/tidb-cloud/branch-manage.md +++ b/tidb-cloud/branch-manage.md @@ -24,8 +24,19 @@ To create a branch, perform the following steps: 1. 
In the [TiDB Cloud console](https://tidbcloud.com/), navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page of your project, and then click the name of your target TiDB Cloud Serverless cluster to go to its overview page. 2. Click **Branches** in the left navigation pane. -3. Click **Create Branch** in the upper-right corner. -4. Enter the branch name, and then click **Create**. +3. In the upper-right corner of the **Branches** page, click **Create Branch**. A dialog is displayed. + + Alternatively, to create a branch from an existing parent branch, locate the row of your target parent branch, and then click **...** > **Create Branch** in the **Action** column. + +4. In the **Create Branch** dialog, configure the following options: + + - **Name**: enter a name for the branch. + - **Parent branch**: select the original cluster or an existing branch. `main` represents the current cluster. + - **Include data up to**: choose one of the following: + - **Current point in time**: create a branch from the current state. + - **Specific date and time**: create a branch from a specified time. + +5. Click **Create**. Depending on the data size in your cluster, the branch creation will be completed in a few minutes. @@ -67,6 +78,22 @@ To delete a branch, perform the following steps: 4. Click **Delete** in the drop-down list. 5. Confirm the deletion. +## Reset a branch + +Resetting a branch synchronizes it with the latest data from its parent. + +> **Note:** +> +> This operation is irreversible. Before resetting a branch, make sure that you have backed up any important data. + +To reset a branch, perform the following steps: + +1. In the [TiDB Cloud console](https://tidbcloud.com/), navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page of your project, and then click the name of your target TiDB Cloud Serverless cluster to go to its overview page. +2. Click **Branches** in the left navigation pane. +3. 
In the row of your target branch to be reset, click **...** in the **Action** column. +4. Click **Reset** in the drop-down list. +5. Confirm the reset. + ## What's next - [Integrate TiDB Cloud Serverless branching into your GitHub CI/CD pipeline](/tidb-cloud/branch-github-integration.md) diff --git a/tidb-cloud/branch-overview.md b/tidb-cloud/branch-overview.md index 292d471dca847..6908ebf402984 100644 --- a/tidb-cloud/branch-overview.md +++ b/tidb-cloud/branch-overview.md @@ -11,7 +11,7 @@ With TiDB Cloud Serverless branches, developers can work in parallel, iterate ra ## Implementations -When a branch for a cluster is created, the data in the branch diverges from the original cluster. This means that subsequent changes made in either the original cluster or the branch will not be synchronized with each other. +When a branch for a cluster is created, the data in the branch diverges from the original cluster or its parent branch at a specific point in time. This means that subsequent changes made in either the parent or the branch will not be synchronized with each other. To ensure fast and seamless branch creation, TiDB Cloud Serverless uses a copy-on-write technique for sharing data between the original cluster and its branches. This process usually completes within a few minutes and is imperceptible to users, ensuring that it does not affect the performance of your original cluster. @@ -41,6 +41,11 @@ Currently, TiDB Cloud Serverless branches are in beta and free of charge. - For each branch of a free cluster, 10 GiB storage is allowed. For each branch of a scalable cluster, 100 GiB storage is allowed. Once the storage is reached, the read and write operations on this branch will be throttled until you reduce the storage. +- When [creating a branch](/tidb-cloud/branch-manage.md#create-a-branch) from a specific point in time: + + - For branches of a free cluster, you can select any time within the last 24 hours. 
+ - For branches of a scalable cluster, you can select any time within the last 14 days. + If you need more quotas, [contact TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md). ## What's next diff --git a/tidb-cloud/sql-proxy-account.md b/tidb-cloud/sql-proxy-account.md new file mode 100644 index 0000000000000..6b0d69630b121 --- /dev/null +++ b/tidb-cloud/sql-proxy-account.md @@ -0,0 +1,90 @@ +--- +title: SQL Proxy Account +summary: Learn about the SQL proxy account in TiDB Cloud. +--- + +# SQL Proxy Account + +A SQL proxy account is a SQL user account that is automatically created by TiDB Cloud to access the database via [SQL Editor](/tidb-cloud/explore-data-with-chat2query.md) or [Data Service](https://docs.pingcap.com/tidbcloud/api/v1beta1/dataservice) on behalf of a TiDB Cloud user. For example, `testuser@pingcap.com` is a TiDB Cloud user account, while `3jhEcSimm7keKP8.testuser._41mqK6H4` is its corresponding SQL proxy account. + +SQL proxy accounts provide a secure, token-based authentication mechanism for accessing the database in TiDB Cloud. By eliminating the need for traditional username and password credentials, SQL proxy accounts enhance security and simplify access management. + +The key benefits of SQL proxy accounts are as follows: + +- Enhanced security: mitigates risks associated with static credentials by using JWT tokens. +- Streamlined access: restricts access specifically to the SQL Editor and Data Service, ensuring precise control. +- Ease of management: simplifies authentication for developers and administrators working with TiDB Cloud. + +## Identify the SQL proxy account + +If you want to identify whether a specific SQL account is a SQL proxy account, take the following steps: + +1. Examine the `mysql.user` table: + + ```sql + USE mysql; + SELECT user FROM user WHERE plugin = 'tidb_auth_token'; + ``` + +2. Check grants for the SQL account. 
If roles like `role_admin`, `role_readonly`, or `role_readwrite` are listed, then it is a SQL proxy account.
+
+    ```sql
+    SHOW GRANTS FOR 'username';
+    ```
+
+## How the SQL proxy account is created
+
+The SQL proxy account is automatically created during TiDB Cloud cluster initialization for the TiDB Cloud user who is granted a role with permissions in the cluster.
+
+## How the SQL proxy account is deleted
+
+When a user is removed from [an organization](/tidb-cloud/manage-user-access.md#remove-an-organization-member) or [a project](/tidb-cloud/manage-user-access.md#remove-a-project-member), or their role changes to one that does not have access to the cluster, the SQL proxy account is automatically deleted.
+
+Note that if a SQL proxy account is manually deleted, it will be automatically recreated when the user logs in to the TiDB Cloud console next time.
+
+## SQL proxy account username
+
+The SQL proxy account username might be exactly the same as the TiDB Cloud username, or a form derived from it, depending on the length of the TiDB Cloud user's email address. The rules are as follows:
+
+| Environment | Email length | Username format |
+| ----------- | ------------ | --------------- |
+| TiDB Cloud Dedicated | <= 32 characters | Full email address |
+| TiDB Cloud Dedicated | > 32 characters | `prefix($email, 23)_prefix(base58(sha1($email)), 8)` |
+| TiDB Cloud Serverless | <= 15 characters | `serverless_unique_prefix + "." + email` |
+| TiDB Cloud Serverless | > 15 characters | `serverless_unique_prefix + "."
+ prefix($email, 6)_prefix(base58(sha1($email)), 8)` |
+
+Examples:
+
+| Environment | Email address | SQL proxy account username |
+| ----------- | ----- | -------- |
+| TiDB Cloud Dedicated | `user@pingcap.com` | `user@pingcap.com` |
+| TiDB Cloud Dedicated | `longemailaddressexample@pingcap.com` | `longemailaddressexample_48k1jwL9` |
+| TiDB Cloud Serverless | `u1@pingcap.com` | `{user_name_prefix}.u1@pingcap.com` |
+| TiDB Cloud Serverless | `longemailaddressexample@pingcap.com` | `{user_name_prefix}.longem_48k1jwL9` |
+
+> **Note:**
+>
+> In the preceding table, `{user_name_prefix}` is a unique prefix generated by TiDB Cloud to distinguish TiDB Cloud Serverless clusters. For details, see the [user name prefix](/tidb-cloud/select-cluster-tier.md#user-name-prefix) of TiDB Cloud Serverless clusters.
+
+## SQL proxy account password
+
+Since SQL proxy accounts are JWT token-based, it is not necessary to manage passwords for these accounts. The security token is automatically managed by the system.
+
+## SQL proxy account roles
+
+The SQL proxy account's role depends on the TiDB Cloud user's IAM role:
+
+- Organization level:
+    - Organization Owner: `role_admin`
+    - Organization Billing Admin: no proxy account
+    - Organization Member: no proxy account
+    - Organization Console Audit Admin: no proxy account
+
+- Project level:
+    - Project Owner: `role_admin`
+    - Project Data Access Read-Write: `role_readwrite`
+    - Project Data Access Read-Only: `role_readonly`
+
+## SQL proxy account access control
+
+SQL proxy accounts are JWT token-based and can only be used through Data Service and SQL Editor. You cannot access the TiDB Cloud cluster using a SQL proxy account with a username and password.
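The two identification steps described earlier in this page can be combined into one quick check; a sketch (the account name shown is the example from this page):

```sql
-- List accounts that authenticate through JWT tokens (candidate SQL proxy accounts)
SELECT user, host FROM mysql.user WHERE plugin = 'tidb_auth_token';
-- A proxy account's grants list roles such as `role_admin`, `role_readonly`, or `role_readwrite`
SHOW GRANTS FOR '3jhEcSimm7keKP8.testuser._41mqK6H4';
```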
diff --git a/tidb-cloud/ticloud-help.md b/tidb-cloud/ticloud-help.md index 88966528564c0..11aa4732d3e6d 100644 --- a/tidb-cloud/ticloud-help.md +++ b/tidb-cloud/ticloud-help.md @@ -19,10 +19,10 @@ To get help for the `auth` command: ticloud help auth ``` -To get help for the `serveless create` command: +To get help for the `serverless create` command: ```shell -ticloud help serveless create +ticloud help serverless create ``` ## Flags diff --git a/tidb-cloud/tidb-cloud-auditing.md b/tidb-cloud/tidb-cloud-auditing.md index 9eb89aabbe37d..1e3c11065ed8f 100644 --- a/tidb-cloud/tidb-cloud-auditing.md +++ b/tidb-cloud/tidb-cloud-auditing.md @@ -24,9 +24,9 @@ The audit logging feature is disabled by default. To audit a cluster, you need t - You are using a TiDB Cloud Dedicated cluster. Audit logging is not available for TiDB Cloud Serverless clusters. - You are in the `Organization Owner` or `Project Owner` role of your organization. Otherwise, you cannot see the database audit-related options in the TiDB Cloud console. For more information, see [User roles](/tidb-cloud/manage-user-access.md#user-roles). -## Enable audit logging for AWS or Google Cloud +## Enable audit logging -To allow TiDB Cloud to write audit logs to your cloud bucket, you need to enable audit logging first. +TiDB Cloud supports recording the audit logs of a TiDB Cloud Dedicated cluster to your cloud storage service. Before enabling database audit logging, configure your cloud storage service on the cloud provider where the cluster is located. ### Enable audit logging for AWS @@ -40,12 +40,17 @@ For more information, see [Creating a bucket](https://docs.aws.amazon.com/Amazon #### Step 2. Configure Amazon S3 access -1. Get the TiDB Cloud account ID and the External ID of the TiDB cluster that you want to enable audit logging. +1. Get the TiDB Cloud Account ID and the External ID of the TiDB cluster that you want to enable audit logging. - 1. 
In the TiDB Cloud console, choose a project and a cluster deployed on AWS.
-    2. Select **Settings** > **Audit Settings**. The **Audit Logging** dialog is displayed.
-    3. In the **Audit Logging** dialog, click **Show AWS IAM policy settings**. The corresponding TiDB Cloud Account ID and TiDB Cloud External ID of the TiDB cluster are displayed.
-    4. Record the TiDB Cloud Account ID and the External ID for later use.
+    1. In the TiDB Cloud console, navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page of your project.
+
+        > **Tip:**
+        >
+        > If you have multiple projects, you can click in the lower-left corner and switch to another project.
+
+    2. Click the name of your target cluster to go to its overview page, and then click **DB Audit Logging** in the left navigation pane.
+    3. On the **DB Audit Logging** page, click **Enable** in the upper-right corner.
+    4. In the **Enable Database Audit Logging** dialog, locate the **AWS IAM Policy Settings** section, and record the **TiDB Cloud Account ID** and **TiDB Cloud External ID** for later use.
 
 2. In the AWS Management Console, go to **IAM** > **Access Management** > **Policies**, and then check whether there is a storage bucket policy with the `s3:PutObject` write-only permission.
 
@@ -79,23 +84,23 @@ For more information, see [Creating a bucket](https://docs.aws.amazon.com/Amazon
 
 #### Step 3. Enable audit logging
 
-In the TiDB Cloud console, go back to the **Audit Logging** dialog box where you got the TiDB Cloud account ID and the External ID values, and then take the following steps:
+In the TiDB Cloud console, go back to the **Enable Database Audit Logging** dialog box where you got the TiDB Cloud account ID and the External ID values, and then take the following steps:
 
 1. In the **Bucket URI** field, enter the URI of your S3 bucket where the audit log files are to be written.
 2. In the **Bucket Region** drop-down list, select the AWS region where the bucket locates.
 3.
In the **Role ARN** field, fill in the Role ARN value that you copied in [Step 2. Configure Amazon S3 access](#step-2-configure-amazon-s3-access).
-4. Click **Test Connectivity** to verify whether TiDB Cloud can access and write to the bucket.
+4. Click **Test Connection** to verify whether TiDB Cloud can access and write to the bucket.
 
-    If it is successful, **Pass** is displayed. Otherwise, check your access configuration.
+    If it is successful, **The connection is successful** is displayed. Otherwise, check your access configuration.
 
-5. In the upper-right corner, toggle the audit setting to **On**.
+5. Click **Enable** to enable audit logging for the cluster.
 
     TiDB Cloud is ready to write audit logs for the specified cluster to your Amazon S3 bucket.
 
 > **Note:**
 >
-> - After enabling audit logging, if you make any new changes to the bucket URI, location, or ARN, you must click **Restart** to load the changes and rerun the **Test Connectivity** check to make the changes effective.
-> - To remove Amazon S3 access from TiDB Cloud, simply delete the trust policy that you added.
+> - After enabling audit logging, if you make any new changes to the bucket URI, location, or ARN, you must click **Test Connection** again to verify that TiDB Cloud can connect to the bucket. Then, click **Enable** to apply the changes.
+> - To remove TiDB Cloud's access to your Amazon S3 bucket, simply delete the trust policy granted to this cluster in the AWS Management Console.
 
 ### Enable audit logging for Google Cloud
 
@@ -111,9 +116,15 @@ For more information, see [Creating storage buckets](https://cloud.google.com/st
 
 1. Get the Google Cloud Service Account ID of the TiDB cluster that you want to enable audit logging.
 
-    1. In the TiDB Cloud console, choose a project and a cluster deployed on Google Cloud Platform.
-    2. Select **Settings** > **Audit Settings**. The **Audit Logging** dialog box is displayed.
-    3.
Click **Show Google Cloud Server Account ID**, and then copy the Service Account ID for later use.
+    1. In the TiDB Cloud console, navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page of your project.
+
+        > **Tip:**
+        >
+        > If you have multiple projects, you can click in the lower-left corner and switch to another project.
+
+    2. Click the name of your target cluster to go to its overview page, and then click **DB Audit Logging** in the left navigation pane.
+    3. On the **DB Audit Logging** page, click **Enable** in the upper-right corner.
+    4. In the **Enable Database Audit Logging** dialog, locate the **Google Cloud Server Account ID** section, and record the **Service Account ID** for later use.

2. In the Google Cloud console, go to **IAM & Admin** > **Roles**, and then check whether a role with the following write-only permissions of the storage container exists.

@@ -138,22 +149,22 @@ For more information, see [Creating storage buckets](https://cloud.google.com/st

#### Step 3. Enable audit logging

-In the TiDB Cloud console, go back to the **Audit Logging** dialog box where you got the TiDB Cloud account ID, and then take the following steps:
+In the TiDB Cloud console, go back to the **Enable Database Audit Logging** dialog box where you got the TiDB Cloud account ID, and then take the following steps:

1. In the **Bucket URI** field, enter your full GCS bucket name.
2. In the **Bucket Region** field, select the GCS region where the bucket locates.
-3. Click **Test Connectivity** to verify whether TiDB Cloud can access and write to the bucket.
+3. Click **Test Connection** to verify whether TiDB Cloud can access and write to the bucket.

-    If it is successful, **Pass** is displayed. Otherwise, check your access configuration.
+    If it is successful, **The connection is successful** is displayed. Otherwise, check your access configuration.

-4. In the upper-right corner, toggle the audit setting to **On**.
+4.
Click **Enable** to enable audit logging for the cluster.

-    TiDB Cloud is ready to write audit logs for the specified cluster to your Amazon S3 bucket.
+    TiDB Cloud is ready to write audit logs for the specified cluster to your GCS bucket.

> **Note:**
>
-> - After enabling audit logging, if you make any new changes to bucket URI or location, you must click **Restart** to load the changes and rerun the **Test Connectivity** check to make the changes effective.
-> - To remove GCS access from TiDB Cloud, simply delete the principal that you added.
+> - After enabling audit logging, if you make any new changes to the bucket URI or location, you must click **Test Connection** again to verify that TiDB Cloud can connect to the bucket. Then, click **Enable** to apply the changes.
+> - To remove TiDB Cloud's access to your GCS bucket, delete the principal granted to this cluster in the Google Cloud console.

## Specify auditing filter rules

diff --git a/tidb-cloud/tidb-cloud-faq.md b/tidb-cloud/tidb-cloud-faq.md
index 96434051913e4..94d298725e660 100644
--- a/tidb-cloud/tidb-cloud-faq.md
+++ b/tidb-cloud/tidb-cloud-faq.md
@@ -41,7 +41,7 @@ No.

### What versions of TiDB are supported on TiDB Cloud?

-- Starting from October 22, 2024, the default TiDB version for new TiDB Cloud Dedicated clusters is v7.5.4.
+- Starting from November 26, 2024, the default TiDB version for new TiDB Cloud Dedicated clusters is v8.1.1.
- Starting from February 21, 2024, the TiDB version for TiDB Cloud Serverless clusters is v7.1.3.

For more information, see [TiDB Cloud Release Notes](/tidb-cloud/tidb-cloud-release-notes.md).
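As context for the audit-logging changes above: the AWS setup steps check for a bucket policy that grants only the `s3:PutObject` write permission. The sketch below illustrates what such a write-only policy can look like. It is an assumption-labeled illustration, not taken from this pull request: `your-audit-log-bucket` is a placeholder bucket name, and a real policy for TiDB Cloud audit logging may need additional statements or conditions.

```python
import json

# Hypothetical write-only bucket policy for audit log files.
# "your-audit-log-bucket" is a placeholder; substitute your own bucket ARN.
write_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",  # write-only: no read or delete actions
            "Resource": "arn:aws:s3:::your-audit-log-bucket/*",
        }
    ],
}

print(json.dumps(write_only_policy, indent=2))
```

A policy shaped like this lets the external service write audit log files into the bucket without being able to read, list, or delete them.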
diff --git a/tidb-cloud/tidb-cloud-release-notes.md b/tidb-cloud/tidb-cloud-release-notes.md index 092e143b64b22..e08780259bfd3 100644 --- a/tidb-cloud/tidb-cloud-release-notes.md +++ b/tidb-cloud/tidb-cloud-release-notes.md @@ -8,6 +8,34 @@ aliases: ['/tidbcloud/supported-tidb-versions','/tidbcloud/release-notes'] This page lists the release notes of [TiDB Cloud](https://www.pingcap.com/tidb-cloud/) in 2024. +## November 26, 2024 + +**General changes** + +- Upgrade the default TiDB version of new [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated) clusters from [v7.5.4](https://docs.pingcap.com/tidb/v7.5/release-7.5.4) to [v8.1.1](https://docs.pingcap.com/tidb/stable/release-8.1.1). + +- [TiDB Cloud Serverless](/tidb-cloud/select-cluster-tier.md#tidb-cloud-serverless) reduces costs for large data writes by up to 80% for the following scenarios: + + - When you perform write operations larger than 16 MiB in [autocommit mode](/transaction-overview.md#autocommit). + - When you perform write operations larger than 16 MiB in [optimistic transaction model](/optimistic-transaction.md). + - When you [import data into TiDB Cloud](/tidb-cloud/tidb-cloud-migration-overview.md#import-data-from-files-to-tidb-cloud). + + This improvement enhances the efficiency and cost-effectiveness of your data operations, providing greater savings as your workload scales. + +## November 19, 2024 + +**General changes** + +- [TiDB Cloud Serverless branching (beta)](/tidb-cloud/branch-overview.md) introduces the following improvements to branch management: + + - **Flexible branch creation**: When creating a branch, you can select a specific cluster or branch as the parent and specify a precise point in time to use from the parent. This gives you precise control over the data in your branch. + + - **Branch reset**: You can reset a branch to synchronize it with the latest state of its parent. 
+ + - **Improved GitHub integration**: The [TiDB Cloud Branching](https://github.com/apps/tidb-cloud-branching) GitHub App introduces the [`branch.mode`](/tidb-cloud/branch-github-integration.md#branchmode) parameter, which controls the behavior during pull request synchronization. In the default mode `reset`, the app resets the branch to match the latest changes in the pull request. + + For more information, see [Manage TiDB Cloud Serverless Branches](/tidb-cloud/branch-manage.md) and [Integrate TiDB Cloud Serverless Branching (Beta) with GitHub](/tidb-cloud/branch-github-integration.md). + ## November 12, 2024 **General changes** diff --git a/tidb-cloud/vector-search-data-types.md b/tidb-cloud/vector-search-data-types.md index c3d3f03f03365..0c390ab6f03e5 100644 --- a/tidb-cloud/vector-search-data-types.md +++ b/tidb-cloud/vector-search-data-types.md @@ -18,9 +18,9 @@ Using vector data types provides the following advantages over using the [`JSON` - Dimension enforcement: You can specify a dimension to forbid inserting vectors with different dimensions. - Optimized storage format: Vector data types are optimized for handling vector data, offering better space efficiency and performance compared to `JSON` types. -> **Note:** +> **Note** > -> Vector data types are only available for [TiDB Cloud Serverless](/tidb-cloud/select-cluster-tier.md#tidb-cloud-serverless) clusters. +> TiDB Vector Search is only available for TiDB Self-Managed (TiDB >= v8.4) and [TiDB Cloud Serverless](/tidb-cloud/select-cluster-tier.md#tidb-cloud-serverless). It is not available for [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated). ## Syntax @@ -231,9 +231,9 @@ Currently, direct casting between Vector and other data types (such as `JSON`) i Note that vector data type columns stored in a table cannot be converted to other data types using `ALTER TABLE ... MODIFY COLUMN ...`. 
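The data-type changes above revolve around `VECTOR` columns, which hold lists of floats and, when declared with a fixed dimension such as `VECTOR(3)`, reject vectors of any other dimension. As a rough client-side analogy (the helper name and text format handling below are our own sketch, not a TiDB API), the dimension check can be mimicked before insertion:

```python
def parse_vector(text, expected_dim=None):
    """Parse a '[1,2,3]'-style vector literal into a list of floats,
    optionally enforcing a fixed dimension as a VECTOR(n) column would."""
    values = [float(v) for v in text.strip().strip("[]").split(",") if v.strip()]
    if expected_dim is not None and len(values) != expected_dim:
        raise ValueError(f"expected {expected_dim} dimensions, got {len(values)}")
    return values

print(parse_vector("[1, 2, 3]", expected_dim=3))  # → [1.0, 2.0, 3.0]
```

With `expected_dim` set, a mismatched literal raises before it ever reaches the database, mirroring the server-side dimension enforcement.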
-## Restrictions +## Limitations -For restrictions on vector data types, see [Vector search limitations](/tidb-cloud/vector-search-limitations.md) and [Vector index restrictions](/tidb-cloud/vector-search-index.md#restrictions). +See [Vector data type limitations](/tidb-cloud/vector-search-limitations.md#vector-data-type-limitations). ## MySQL compatibility @@ -243,4 +243,4 @@ Vector data types are TiDB specific, and are not supported in MySQL. - [Vector Functions and Operators](/tidb-cloud/vector-search-functions-and-operators.md) - [Vector Search Index](/tidb-cloud/vector-search-index.md) -- [Improve Vector Search Performance](/tidb-cloud/vector-search-improve-performance.md) \ No newline at end of file +- [Improve Vector Search Performance](/tidb-cloud/vector-search-improve-performance.md) diff --git a/tidb-cloud/vector-search-functions-and-operators.md b/tidb-cloud/vector-search-functions-and-operators.md index 4d20fc3e30aab..4081a2f4fcc45 100644 --- a/tidb-cloud/vector-search-functions-and-operators.md +++ b/tidb-cloud/vector-search-functions-and-operators.md @@ -9,7 +9,7 @@ This document lists the functions and operators available for Vector data types. > **Note** > -> Vector data types and these vector functions are only available for [TiDB Cloud Serverless](/tidb-cloud/select-cluster-tier.md#tidb-cloud-serverless) clusters. +> TiDB Vector Search is only available for TiDB Self-Managed (TiDB >= v8.4) and [TiDB Cloud Serverless](/tidb-cloud/select-cluster-tier.md#tidb-cloud-serverless). It is not available for [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated). 
## Vector functions @@ -17,8 +17,8 @@ The following functions are designed specifically for [Vector data types](/tidb- **Vector distance functions:** -| Function Name | Description | -| --------------------------------------------------------- | ---------------------------------------------------------------- | +| Function Name | Description | +| ----------------------------------------------------------- | ---------------------------------------------------------------- | | [`VEC_L2_DISTANCE`](#vec_l2_distance) | Calculates L2 distance (Euclidean distance) between two vectors | | [`VEC_COSINE_DISTANCE`](#vec_cosine_distance) | Calculates the cosine distance between two vectors | | [`VEC_NEGATIVE_INNER_PRODUCT`](#vec_negative_inner_product) | Calculates the negative of the inner product between two vectors | @@ -26,8 +26,8 @@ The following functions are designed specifically for [Vector data types](/tidb- **Other vector functions:** -| Function Name | Description | -| ------------------------------- | --------------------------------------------------- | +| Function Name | Description | +| --------------------------------- | --------------------------------------------------- | | [`VEC_DIMS`](#vec_dims) | Returns the dimension of a vector | | [`VEC_L2_NORM`](#vec_l2_norm) | Calculates the L2 norm (Euclidean norm) of a vector | | [`VEC_FROM_TEXT`](#vec_from_text) | Converts a string into a vector | @@ -48,8 +48,8 @@ For more information about how vector arithmetic works, see [Vector Data Type | **Aggregate (GROUP BY) functions:** -| Name | Description | -| :----------------------- | :----------------------------------------------- | +| Name | Description | +| :------------------------------------------------------------------------------------------------------------ | :----------------------------------------------- | | [`COUNT()`](https://dev.mysql.com/doc/refman/8.0/en/aggregate-functions.html#function_count) | Return a count of the number of rows returned | | 
[`COUNT(DISTINCT)`](https://dev.mysql.com/doc/refman/8.0/en/aggregate-functions.html#function_count-distinct) | Return the count of a number of different values | | [`MAX()`](https://dev.mysql.com/doc/refman/8.0/en/aggregate-functions.html#function_max) | Return the maximum value | @@ -57,8 +57,8 @@ For more information about how vector arithmetic works, see [Vector Data Type | **Comparison functions and operators:** -| Name | Description | -| ---------------------------------------- | ----------------------------------------------------- | +| Name | Description | +| ------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------- | | [`BETWEEN ... AND ...`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#operator_between) | Check whether a value is within a range of values | | [`COALESCE()`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#function_coalesce) | Return the first non-NULL argument | | [`=`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#operator_equal) | Equal operator | @@ -67,8 +67,8 @@ For more information about how vector arithmetic works, see [Vector Data Type | | [`>=`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#operator_greater-than-or-equal) | Greater than or equal operator | | [`GREATEST()`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#function_greatest) | Return the largest argument | | [`IN()`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#operator_in) | Check whether a value is within a set of values | -| [`IS NULL`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#operator_is-null) | Test whether a value is `NULL` | -| [`ISNULL()`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#function_isnull) | Test whether the argument is `NULL` | +| [`IS 
NULL`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#operator_is-null) | Test whether a value is `NULL` | +| [`ISNULL()`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#function_isnull) | Test whether the argument is `NULL` | | [`LEAST()`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#function_least) | Return the smallest argument | | [`<`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#operator_less-than) | Less than operator | | [`<=`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#operator_less-than-or-equal) | Less than or equal operator | @@ -80,19 +80,19 @@ For more information about how vectors are compared, see [Vector Data Type | Com **Control flow functions:** -| Name | Description | -| :------------------------------------------------------------------------------------------------ | :--------------------------- | -| [`CASE`](https://dev.mysql.com/doc/refman/8.0/en/flow-control-functions.html#operator_case) | Case operator | -| [`IF()`](https://dev.mysql.com/doc/refman/8.0/en/flow-control-functions.html#function_if) | If/else construct | -| [`IFNULL()`](https://dev.mysql.com/doc/refman/8.0/en/flow-control-functions.html#function_ifnull) | Null if/else construct | +| Name | Description | +| :------------------------------------------------------------------------------------------------ | :----------------------------- | +| [`CASE`](https://dev.mysql.com/doc/refman/8.0/en/flow-control-functions.html#operator_case) | Case operator | +| [`IF()`](https://dev.mysql.com/doc/refman/8.0/en/flow-control-functions.html#function_if) | If/else construct | +| [`IFNULL()`](https://dev.mysql.com/doc/refman/8.0/en/flow-control-functions.html#function_ifnull) | Null if/else construct | | [`NULLIF()`](https://dev.mysql.com/doc/refman/8.0/en/flow-control-functions.html#function_nullif) | Return `NULL` if expr1 = expr2 | **Cast functions:** -| Name | Description | -| 
:------------------------------------------------------------------------------------------ | :----------------------------- |
+| Name                                                                                        | Description                        |
+| :------------------------------------------------------------------------------------------ | :--------------------------------- |
| [`CAST()`](https://dev.mysql.com/doc/refman/8.0/en/cast-functions.html#function_cast)       | Cast a value as a string or vector |
-| [`CONVERT()`](https://dev.mysql.com/doc/refman/8.0/en/cast-functions.html#function_convert) | Cast a value as a string |
+| [`CONVERT()`](https://dev.mysql.com/doc/refman/8.0/en/cast-functions.html#function_convert) | Cast a value as a string           |

For more information about how to use `CAST()`, see [Vector Data Type | Cast](/tidb-cloud/vector-search-data-types.md#cast).

@@ -222,7 +222,7 @@ Examples:

VEC_L2_NORM(vector)
```

-Calculates the [L2 norm](https://en.wikipedia.org/wiki/Norm_(mathematics)) (Euclidean norm) of a vector using the following formula:
+Calculates the [L2 norm](https://en.wikipedia.org/wiki/Norm_(mathematics)) (Euclidean norm) of a vector using the following formula:

$NORM(p)=\sqrt {\sum \limits _{i=1}^{n}{p_{i}^{2}}}$

diff --git a/tidb-cloud/vector-search-get-started-using-python.md b/tidb-cloud/vector-search-get-started-using-python.md
index 0fcd84d098497..be7f8e0ff236d 100644
--- a/tidb-cloud/vector-search-get-started-using-python.md
+++ b/tidb-cloud/vector-search-get-started-using-python.md
@@ -11,7 +11,7 @@ Throughout this tutorial, you will develop this AI application using [TiDB Vecto

> **Note**
>
-> TiDB Vector Search is currently in beta and is not available for [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated) clusters.
+> TiDB Vector Search is only available for TiDB Self-Managed (TiDB >= v8.4) and [TiDB Cloud Serverless](/tidb-cloud/select-cluster-tier.md#tidb-cloud-serverless). It is not available for [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated).
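The distance functions listed above and the L2 norm formula can be checked numerically. The following is a plain-Python sketch of the same math — an illustration of what `VEC_L2_NORM`, `VEC_L2_DISTANCE`, `VEC_COSINE_DISTANCE`, and `VEC_NEGATIVE_INNER_PRODUCT` compute, not TiDB's actual implementation:

```python
import math

def vec_l2_norm(p):
    # NORM(p) = sqrt(sum of p_i^2), matching the formula above
    return math.sqrt(sum(x * x for x in p))

def vec_l2_distance(a, b):
    # Euclidean distance between two vectors of the same dimension
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def vec_cosine_distance(a, b):
    # 1 - cosine similarity
    dot = sum(x * y for x, y in zip(a, b))
    return 1.0 - dot / (vec_l2_norm(a) * vec_l2_norm(b))

def vec_negative_inner_product(a, b):
    # Negative of the dot product
    return -sum(x * y for x, y in zip(a, b))

print(vec_l2_norm([3.0, 4.0]))  # → 5.0
```

For example, two orthogonal unit vectors have a cosine distance of exactly 1, and the inner-product-based distance of `[1, 2]` and `[3, 4]` is `-(1*3 + 2*4) = -11`.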
## Prerequisites @@ -54,28 +54,28 @@ pip install sqlalchemy pymysql sentence-transformers tidb-vector python-dotenv 3. Ensure the configurations in the connection dialog match your operating environment. - - **Connection Type** is set to `Public`. - - **Branch** is set to `main`. - - **Connect With** is set to `SQLAlchemy`. - - **Operating System** matches your environment. + - **Connection Type** is set to `Public`. + - **Branch** is set to `main`. + - **Connect With** is set to `SQLAlchemy`. + - **Operating System** matches your environment. - > **Tip:** - > - > If your program is running in Windows Subsystem for Linux (WSL), switch to the corresponding Linux distribution. + > **Tip:** + > + > If your program is running in Windows Subsystem for Linux (WSL), switch to the corresponding Linux distribution. 4. Click the **PyMySQL** tab and copy the connection string. - > **Tip:** - > - > If you have not set a password yet, click **Generate Password** to generate a random password. + > **Tip:** + > + > If you have not set a password yet, click **Generate Password** to generate a random password. 5. In the root directory of your Python project, create a `.env` file and paste the connection string into it. - The following is an example for macOS: + The following is an example for macOS: - ```dotenv - TIDB_DATABASE_URL="mysql+pymysql://.root:@gateway01..prod.aws.tidbcloud.com:4000/test?ssl_ca=/etc/ssl/cert.pem&ssl_verify_cert=true&ssl_verify_identity=true" - ``` + ```dotenv + TIDB_DATABASE_URL="mysql+pymysql://.root:@gateway01..prod.aws.tidbcloud.com:4000/test?ssl_ca=/etc/ssl/cert.pem&ssl_verify_cert=true&ssl_verify_identity=true" + ``` ### Step 4. 
Initialize the embedding model @@ -192,4 +192,4 @@ Therefore, according to the output, the swimming animal is most likely a fish, o ## See also - [Vector Data Types](/tidb-cloud/vector-search-data-types.md) -- [Vector Search Index](/tidb-cloud/vector-search-index.md) \ No newline at end of file +- [Vector Search Index](/tidb-cloud/vector-search-index.md) diff --git a/tidb-cloud/vector-search-get-started-using-sql.md b/tidb-cloud/vector-search-get-started-using-sql.md index ffa2b01b9a1b4..7546b060a5c79 100644 --- a/tidb-cloud/vector-search-get-started-using-sql.md +++ b/tidb-cloud/vector-search-get-started-using-sql.md @@ -16,7 +16,7 @@ This tutorial demonstrates how to get started with TiDB Vector Search just using > **Note** > -> TiDB Vector Search is currently in beta and is not available for [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated) clusters. +> TiDB Vector Search is only available for TiDB Self-Managed (TiDB >= v8.4) and [TiDB Cloud Serverless](/tidb-cloud/select-cluster-tier.md#tidb-cloud-serverless). It is not available for [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated). ## Prerequisites @@ -39,9 +39,9 @@ To complete this tutorial, you need: 5. Copy the connection command and paste it into your terminal. The following is an example for macOS: - ```bash - mysql -u '.root' -h '' -P 4000 -D 'test' --ssl-mode=VERIFY_IDENTITY --ssl-ca=/etc/ssl/cert.pem -p'' - ``` + ```bash + mysql -u '.root' -h '' -P 4000 -D 'test' --ssl-mode=VERIFY_IDENTITY --ssl-ca=/etc/ssl/cert.pem -p'' + ``` ### Step 2. 
Create a vector table diff --git a/tidb-cloud/vector-search-index.md b/tidb-cloud/vector-search-index.md index 10a3d56e0e597..00e6dce61ec25 100644 --- a/tidb-cloud/vector-search-index.md +++ b/tidb-cloud/vector-search-index.md @@ -11,18 +11,6 @@ In TiDB, you can create and use vector search indexes for such approximate neare Currently, TiDB supports the [HNSW (Hierarchical Navigable Small World)](https://en.wikipedia.org/wiki/Hierarchical_navigable_small_world) vector search index algorithm. -## Restrictions - -- TiFlash nodes must be deployed in your cluster in advance. -- Vector search indexes cannot be used as primary keys or unique indexes. -- Vector search indexes can only be created on a single vector column and cannot be combined with other columns (such as integers or strings) to form composite indexes. -- A distance function must be specified when creating and using vector search indexes. Currently, only cosine distance `VEC_COSINE_DISTANCE()` and L2 distance `VEC_L2_DISTANCE()` functions are supported. -- For the same column, creating multiple vector search indexes using the same distance function is not supported. -- Directly dropping columns with vector search indexes is not supported. You can drop such a column by first dropping the vector search index on that column and then dropping the column itself. -- Modifying the type of a column with a vector index is not supported. -- Setting vector search indexes as [invisible](/sql-statements/sql-statement-alter-index.md) is not supported. -- Building vector search indexes on TiFlash nodes with [encryption at rest](https://docs.pingcap.com/tidb/stable/encryption-at-rest) enabled is not supported. - ## Create the HNSW vector index [HNSW](https://en.wikipedia.org/wiki/Hierarchical_navigable_small_world) is one of the most popular vector indexing algorithms. The HNSW index provides good performance with relatively high accuracy, up to 98% in specific cases. 
@@ -31,24 +19,24 @@ In TiDB, you can create an HNSW index for a column with a [vector data type](/ti - When creating a table, use the following syntax to specify the vector column for the HNSW index: - ```sql - CREATE TABLE foo ( - id INT PRIMARY KEY, - embedding VECTOR(5), - VECTOR INDEX idx_embedding ((VEC_COSINE_DISTANCE(embedding))) - ); - ``` + ```sql + CREATE TABLE foo ( + id INT PRIMARY KEY, + embedding VECTOR(5), + VECTOR INDEX idx_embedding ((VEC_COSINE_DISTANCE(embedding))) + ); + ``` - For an existing table that already contains a vector column, use the following syntax to create an HNSW index for the vector column: - ```sql - CREATE VECTOR INDEX idx_embedding ON foo ((VEC_COSINE_DISTANCE(embedding))); - ALTER TABLE foo ADD VECTOR INDEX idx_embedding ((VEC_COSINE_DISTANCE(embedding))); + ```sql + CREATE VECTOR INDEX idx_embedding ON foo ((VEC_COSINE_DISTANCE(embedding))); + ALTER TABLE foo ADD VECTOR INDEX idx_embedding ((VEC_COSINE_DISTANCE(embedding))); - -- You can also explicitly specify "USING HNSW" to build the vector search index. - CREATE VECTOR INDEX idx_embedding ON foo ((VEC_COSINE_DISTANCE(embedding))) USING HNSW; - ALTER TABLE foo ADD VECTOR INDEX idx_embedding ((VEC_COSINE_DISTANCE(embedding))) USING HNSW; - ``` + -- You can also explicitly specify "USING HNSW" to build the vector search index. + CREATE VECTOR INDEX idx_embedding ON foo ((VEC_COSINE_DISTANCE(embedding))) USING HNSW; + ALTER TABLE foo ADD VECTOR INDEX idx_embedding ((VEC_COSINE_DISTANCE(embedding))) USING HNSW; + ``` > **Note:** > @@ -64,7 +52,7 @@ When creating an HNSW vector index, you need to specify the distance function fo The vector index can only be created for fixed-dimensional vector columns, such as a column defined as `VECTOR(3)`. It cannot be created for non-fixed-dimensional vector columns (such as a column defined as `VECTOR`) because vector distances can only be calculated between vectors with the same dimension. 
-For restrictions and limitations of vector search indexes, see [Restrictions](#restrictions). +For other limitations, see [Vector index limitations](/tidb-cloud/vector-search-limitations.md#vector-index-limitations). ## Use the vector index @@ -126,17 +114,17 @@ SELECT * FROM INFORMATION_SCHEMA.TIFLASH_INDEXES; - You can check the `ROWS_STABLE_INDEXED` and `ROWS_STABLE_NOT_INDEXED` columns for the index build progress. When `ROWS_STABLE_NOT_INDEXED` becomes 0, the index build is complete. - As a reference, indexing a 500 MiB vector dataset with 768 dimensions might take up to 20 minutes. The indexer can run in parallel for multiple tables. Currently, adjusting the indexer priority or speed is not supported. + As a reference, indexing a 500 MiB vector dataset with 768 dimensions might take up to 20 minutes. The indexer can run in parallel for multiple tables. Currently, adjusting the indexer priority or speed is not supported. - You can check the `ROWS_DELTA_NOT_INDEXED` column for the number of rows in the Delta layer. Data in the storage layer of TiFlash is stored in two layers: Delta layer and Stable layer. The Delta layer stores recently inserted or updated rows and is periodically merged into the Stable layer according to the write workload. This merge process is called Compaction. - The Delta layer is always not indexed. To achieve optimal performance, you can force the merge of the Delta layer into the Stable layer so that all data can be indexed: + The Delta layer is always not indexed. To achieve optimal performance, you can force the merge of the Delta layer into the Stable layer so that all data can be indexed: - ```sql - ALTER TABLE COMPACT; - ``` + ```sql + ALTER TABLE COMPACT; + ``` - For more information, see [`ALTER TABLE ... COMPACT`](/sql-statements/sql-statement-alter-table-compact.md). + For more information, see [`ALTER TABLE ... COMPACT`](/sql-statements/sql-statement-alter-table-compact.md). 
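The build-progress check described above — querying `TIFLASH_INDEXES` until `ROWS_STABLE_NOT_INDEXED` reaches 0 — naturally becomes a polling loop. The following is a minimal sketch with the SQL query stubbed out: `fetch_not_indexed` is a hypothetical callable of ours that would, in practice, run the `INFORMATION_SCHEMA.TIFLASH_INDEXES` query shown above and return the remaining row count.

```python
import time

def wait_for_index_built(fetch_not_indexed, poll_interval=0.0, max_polls=100):
    """Poll until ROWS_STABLE_NOT_INDEXED reaches 0, i.e. the index build is done."""
    for _ in range(max_polls):
        remaining = fetch_not_indexed()
        if remaining == 0:
            return True
        time.sleep(poll_interval)
    return False

# Stubbed progress counts standing in for repeated TIFLASH_INDEXES queries.
progress = iter([1000, 400, 0])
print(wait_for_index_built(lambda: next(progress)))  # → True
```

Bounding the loop with `max_polls` keeps the helper from hanging if the Delta layer keeps receiving writes faster than compaction indexes them.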
In addition, you can monitor the execution progress of the DDL job by executing `ADMIN SHOW DDL JOBS;` and checking the `row count`. However, this method is not fully accurate, because the `row count` value is obtained from the `rows_stable_indexed` field in `TIFLASH_INDEXES`. You can use this approach as a reference for tracking the progress of indexing. @@ -245,6 +233,10 @@ Explanation of some important fields: See [`EXPLAIN`](/sql-statements/sql-statement-explain.md), [`EXPLAIN ANALYZE`](/sql-statements/sql-statement-explain-analyze.md), and [EXPLAIN Walkthrough](/explain-walkthrough.md) for interpreting the output. +## Limitations + +See [Vector index limitations](/tidb-cloud/vector-search-limitations.md#vector-index-limitations). + ## See also - [Improve Vector Search Performance](/tidb-cloud/vector-search-improve-performance.md) diff --git a/tidb-cloud/vector-search-integrate-with-django-orm.md b/tidb-cloud/vector-search-integrate-with-django-orm.md index 5ca099bfe520a..61c9012ca54d8 100644 --- a/tidb-cloud/vector-search-integrate-with-django-orm.md +++ b/tidb-cloud/vector-search-integrate-with-django-orm.md @@ -9,7 +9,7 @@ This tutorial walks you through how to use [Django](https://www.djangoproject.co > **Note** > -> TiDB Vector Search is currently in beta and is not available for [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated) clusters. +> TiDB Vector Search is only available for TiDB Self-Managed (TiDB >= v8.4) and [TiDB Cloud Serverless](/tidb-cloud/select-cluster-tier.md#tidb-cloud-serverless). It is not available for [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated). ## Prerequisites @@ -73,40 +73,40 @@ For more information, refer to [django-tidb repository](https://github.com/pingc 3. Ensure the configurations in the connection dialog match your operating environment. 
- - **Connection Type** is set to `Public` - - **Branch** is set to `main` - - **Connect With** is set to `General` - - **Operating System** matches your environment. + - **Connection Type** is set to `Public` + - **Branch** is set to `main` + - **Connect With** is set to `General` + - **Operating System** matches your environment. - > **Tip:** - > - > If your program is running in Windows Subsystem for Linux (WSL), switch to the corresponding Linux distribution. + > **Tip:** + > + > If your program is running in Windows Subsystem for Linux (WSL), switch to the corresponding Linux distribution. 4. Copy the connection parameters from the connection dialog. - > **Tip:** - > - > If you have not set a password yet, click **Generate Password** to generate a random password. + > **Tip:** + > + > If you have not set a password yet, click **Generate Password** to generate a random password. 5. In the root directory of your Python project, create a `.env` file and paste the connection parameters to the corresponding environment variables. - - `TIDB_HOST`: The host of the TiDB cluster. - - `TIDB_PORT`: The port of the TiDB cluster. - - `TIDB_USERNAME`: The username to connect to the TiDB cluster. - - `TIDB_PASSWORD`: The password to connect to the TiDB cluster. - - `TIDB_DATABASE`: The database name to connect to. - - `TIDB_CA_PATH`: The path to the root certificate file. - - The following is an example for macOS: - - ```dotenv - TIDB_HOST=gateway01.****.prod.aws.tidbcloud.com - TIDB_PORT=4000 - TIDB_USERNAME=********.root - TIDB_PASSWORD=******** - TIDB_DATABASE=test - TIDB_CA_PATH=/etc/ssl/cert.pem - ``` + - `TIDB_HOST`: The host of the TiDB cluster. + - `TIDB_PORT`: The port of the TiDB cluster. + - `TIDB_USERNAME`: The username to connect to the TiDB cluster. + - `TIDB_PASSWORD`: The password to connect to the TiDB cluster. + - `TIDB_DATABASE`: The database name to connect to. + - `TIDB_CA_PATH`: The path to the root certificate file. 
+ + The following is an example for macOS: + + ```dotenv + TIDB_HOST=gateway01.****.prod.aws.tidbcloud.com + TIDB_PORT=4000 + TIDB_USERNAME=********.root + TIDB_PASSWORD=******** + TIDB_DATABASE=test + TIDB_CA_PATH=/etc/ssl/cert.pem + ``` ### Step 5. Run the demo diff --git a/tidb-cloud/vector-search-integrate-with-jinaai-embedding.md b/tidb-cloud/vector-search-integrate-with-jinaai-embedding.md index 80e915f2d6f4b..71e79db40fd1d 100644 --- a/tidb-cloud/vector-search-integrate-with-jinaai-embedding.md +++ b/tidb-cloud/vector-search-integrate-with-jinaai-embedding.md @@ -9,7 +9,7 @@ This tutorial walks you through how to use [Jina AI](https://jina.ai/) to genera > **Note** > -> TiDB Vector Search is currently in beta and is not available for [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated) clusters. +> TiDB Vector Search is only available for TiDB Self-Managed (TiDB >= v8.4) and [TiDB Cloud Serverless](/tidb-cloud/select-cluster-tier.md#tidb-cloud-serverless). It is not available for [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated). ## Prerequisites @@ -59,33 +59,33 @@ Get the Jina AI API key from the [Jina AI Embeddings API](https://jina.ai/embedd 3. Ensure the configurations in the connection dialog match your operating environment. - - **Connection Type** is set to `Public` - - **Branch** is set to `main` - - **Connect With** is set to `SQLAlchemy` - - **Operating System** matches your environment. + - **Connection Type** is set to `Public` + - **Branch** is set to `main` + - **Connect With** is set to `SQLAlchemy` + - **Operating System** matches your environment. - > **Tip:** - > - > If your program is running in Windows Subsystem for Linux (WSL), switch to the corresponding Linux distribution. + > **Tip:** + > + > If your program is running in Windows Subsystem for Linux (WSL), switch to the corresponding Linux distribution. 4. 
Switch to the **PyMySQL** tab and click the **Copy** icon to copy the connection string. - > **Tip:** - > - > If you have not set a password yet, click **Create password** to generate a random password. + > **Tip:** + > + > If you have not set a password yet, click **Create password** to generate a random password. 5. Set the Jina AI API key and the TiDB connection string as environment variables in your terminal, or create a `.env` file with the following environment variables: - ```dotenv - JINAAI_API_KEY="****" - TIDB_DATABASE_URL="{tidb_connection_string}" - ``` + ```dotenv + JINAAI_API_KEY="****" + TIDB_DATABASE_URL="{tidb_connection_string}" + ``` - The following is an example connection string for macOS: + The following is an example connection string for macOS: - ```dotenv - TIDB_DATABASE_URL="mysql+pymysql://.root:@gateway01..prod.aws.tidbcloud.com:4000/test?ssl_ca=/etc/ssl/cert.pem&ssl_verify_cert=true&ssl_verify_identity=true" - ``` + ```dotenv + TIDB_DATABASE_URL="mysql+pymysql://.root:@gateway01..prod.aws.tidbcloud.com:4000/test?ssl_ca=/etc/ssl/cert.pem&ssl_verify_cert=true&ssl_verify_identity=true" + ``` ### Step 5. Run the demo diff --git a/tidb-cloud/vector-search-integrate-with-langchain.md b/tidb-cloud/vector-search-integrate-with-langchain.md index 500f5b7d36c12..04b518a82b31d 100644 --- a/tidb-cloud/vector-search-integrate-with-langchain.md +++ b/tidb-cloud/vector-search-integrate-with-langchain.md @@ -9,7 +9,7 @@ This tutorial demonstrates how to integrate the [vector search](/tidb-cloud/vect > **Note** > -> TiDB Vector Search is currently in beta and is not available for [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated) clusters. +> TiDB Vector Search is only available for TiDB Self-Managed (TiDB >= v8.4) and [TiDB Cloud Serverless](/tidb-cloud/select-cluster-tier.md#tidb-cloud-serverless). It is not available for [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated). 
> **Tip** > @@ -66,33 +66,33 @@ Take the following steps to obtain the cluster connection string and configure e 3. Ensure the configurations in the connection dialog match your operating environment. - - **Connection Type** is set to `Public`. - - **Branch** is set to `main`. - - **Connect With** is set to `SQLAlchemy`. - - **Operating System** matches your environment. + - **Connection Type** is set to `Public`. + - **Branch** is set to `main`. + - **Connect With** is set to `SQLAlchemy`. + - **Operating System** matches your environment. 4. Click the **PyMySQL** tab and copy the connection string. - > **Tip:** - > - > If you have not set a password yet, click **Generate Password** to generate a random password. + > **Tip:** + > + > If you have not set a password yet, click **Generate Password** to generate a random password. 5. Configure environment variables. - This document uses [OpenAI](https://platform.openai.com/docs/introduction) as the embedding model provider. In this step, you need to provide the connection string obtained from the previous step and your [OpenAI API key](https://platform.openai.com/docs/quickstart/step-2-set-up-your-api-key). + This document uses [OpenAI](https://platform.openai.com/docs/introduction) as the embedding model provider. In this step, you need to provide the connection string obtained from the previous step and your [OpenAI API key](https://platform.openai.com/docs/quickstart/step-2-set-up-your-api-key). - To configure the environment variables, run the following code. You will be prompted to enter your connection string and OpenAI API key: + To configure the environment variables, run the following code. You will be prompted to enter your connection string and OpenAI API key: - ```python - # Use getpass to securely prompt for environment variables in your terminal. - import getpass - import os + ```python + # Use getpass to securely prompt for environment variables in your terminal. 
+ import getpass + import os - # Copy your connection string from the TiDB Cloud console. - # Connection string format: "mysql+pymysql://:@:4000/?ssl_ca=/etc/ssl/cert.pem&ssl_verify_cert=true&ssl_verify_identity=true" - tidb_connection_string = getpass.getpass("TiDB Connection String:") - os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") - ``` + # Copy your connection string from the TiDB Cloud console. + # Connection string format: "mysql+pymysql://:@:4000/?ssl_ca=/etc/ssl/cert.pem&ssl_verify_cert=true&ssl_verify_identity=true" + tidb_connection_string = getpass.getpass("TiDB Connection String:") + os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") + ``` ### Step 4. Load the sample document diff --git a/tidb-cloud/vector-search-integrate-with-llamaindex.md b/tidb-cloud/vector-search-integrate-with-llamaindex.md index 54f16467b9838..117afbbecf9b0 100644 --- a/tidb-cloud/vector-search-integrate-with-llamaindex.md +++ b/tidb-cloud/vector-search-integrate-with-llamaindex.md @@ -9,7 +9,7 @@ This tutorial demonstrates how to integrate the [vector search](/tidb-cloud/vect > **Note** > -> TiDB Vector Search is currently in beta and is not available for [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated) clusters. +> TiDB Vector Search is only available for TiDB Self-Managed (TiDB >= v8.4) and [TiDB Cloud Serverless](/tidb-cloud/select-cluster-tier.md#tidb-cloud-serverless). It is not available for [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated). > **Tip** > @@ -65,33 +65,33 @@ Take the following steps to obtain the cluster connection string and configure e 3. Ensure the configurations in the connection dialog match your operating environment. - - **Connection Type** is set to `Public`. - - **Branch** is set to `main`. - - **Connect With** is set to `SQLAlchemy`. - - **Operating System** matches your environment. + - **Connection Type** is set to `Public`. 
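The `getpass`-based setup shown above prompts on every run. For non-interactive runs (for example, in CI), a common variant reads the variables from the environment first and prompts only as a fallback. This is a hedged sketch, not part of the tutorials; the `get_setting` helper and the placeholder key are hypothetical:

```python
import os

def get_setting(name, default=None):
    # Prefer an existing environment variable; prompt interactively
    # only when it is absent, so scripted runs stay non-interactive.
    value = os.environ.get(name, default)
    if value is None:
        import getpass
        value = getpass.getpass(f"{name}: ")
    return value

# Illustrative placeholder so the example runs without prompting.
os.environ.setdefault("OPENAI_API_KEY", "sk-demo-placeholder")
api_key = get_setting("OPENAI_API_KEY")
```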
+ - **Branch** is set to `main`. + - **Connect With** is set to `SQLAlchemy`. + - **Operating System** matches your environment. 4. Click the **PyMySQL** tab and copy the connection string. - > **Tip:** - > - > If you have not set a password yet, click **Generate Password** to generate a random password. + > **Tip:** + > + > If you have not set a password yet, click **Generate Password** to generate a random password. 5. Configure environment variables. - This document uses [OpenAI](https://platform.openai.com/docs/introduction) as the embedding model provider. In this step, you need to provide the connection string obtained from from the previous step and your [OpenAI API key](https://platform.openai.com/docs/quickstart/step-2-set-up-your-api-key). + This document uses [OpenAI](https://platform.openai.com/docs/introduction) as the embedding model provider. In this step, you need to provide the connection string obtained from the previous step and your [OpenAI API key](https://platform.openai.com/docs/quickstart/step-2-set-up-your-api-key). - To configure the environment variables, run the following code. You will be prompted to enter your connection string and OpenAI API key: + To configure the environment variables, run the following code. You will be prompted to enter your connection string and OpenAI API key: - ```python - # Use getpass to securely prompt for environment variables in your terminal. - import getpass - import os + ```python + # Use getpass to securely prompt for environment variables in your terminal. + import getpass + import os - # Copy your connection string from the TiDB Cloud console. - # Connection string format: "mysql+pymysql://:@:4000/?ssl_ca=/etc/ssl/cert.pem&ssl_verify_cert=true&ssl_verify_identity=true" - tidb_connection_string = getpass.getpass("TiDB Connection String:") - os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") - ``` + # Copy your connection string from the TiDB Cloud console.
+ # Connection string format: "mysql+pymysql://:@:4000/?ssl_ca=/etc/ssl/cert.pem&ssl_verify_cert=true&ssl_verify_identity=true" + tidb_connection_string = getpass.getpass("TiDB Connection String:") + os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") + ``` ### Step 4. Load the sample document diff --git a/tidb-cloud/vector-search-integrate-with-peewee.md b/tidb-cloud/vector-search-integrate-with-peewee.md index 0e6dd89d8332a..0a72a34135ef8 100644 --- a/tidb-cloud/vector-search-integrate-with-peewee.md +++ b/tidb-cloud/vector-search-integrate-with-peewee.md @@ -9,7 +9,7 @@ This tutorial walks you through how to use [peewee](https://docs.peewee-orm.com/ > **Note** > -> TiDB Vector Search is currently in beta and is not available for [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated) clusters. +> TiDB Vector Search is only available for TiDB Self-Managed (TiDB >= v8.4) and [TiDB Cloud Serverless](/tidb-cloud/select-cluster-tier.md#tidb-cloud-serverless). It is not available for [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated). ## Prerequisites @@ -63,40 +63,40 @@ pip install peewee pymysql python-dotenv tidb-vector 3. Ensure the configurations in the connection dialog match your operating environment. - - **Connection Type** is set to `Public`. - - **Branch** is set to `main`. - - **Connect With** is set to `General`. - - **Operating System** matches your environment. + - **Connection Type** is set to `Public`. + - **Branch** is set to `main`. + - **Connect With** is set to `General`. + - **Operating System** matches your environment. - > **Tip:** - > - > If your program is running in Windows Subsystem for Linux (WSL), switch to the corresponding Linux distribution. + > **Tip:** + > + > If your program is running in Windows Subsystem for Linux (WSL), switch to the corresponding Linux distribution. 4. Copy the connection parameters from the connection dialog. 
- > **Tip:** - > - > If you have not set a password yet, click **Generate Password** to generate a random password. + > **Tip:** + > + > If you have not set a password yet, click **Generate Password** to generate a random password. 5. In the root directory of your Python project, create a `.env` file and paste the connection parameters to the corresponding environment variables. - - `TIDB_HOST`: The host of the TiDB cluster. - - `TIDB_PORT`: The port of the TiDB cluster. - - `TIDB_USERNAME`: The username to connect to the TiDB cluster. - - `TIDB_PASSWORD`: The password to connect to the TiDB cluster. - - `TIDB_DATABASE`: The database name to connect to. - - `TIDB_CA_PATH`: The path to the root certificate file. - - The following is an example for macOS: - - ```dotenv - TIDB_HOST=gateway01.****.prod.aws.tidbcloud.com - TIDB_PORT=4000 - TIDB_USERNAME=********.root - TIDB_PASSWORD=******** - TIDB_DATABASE=test - TIDB_CA_PATH=/etc/ssl/cert.pem - ``` + - `TIDB_HOST`: The host of the TiDB cluster. + - `TIDB_PORT`: The port of the TiDB cluster. + - `TIDB_USERNAME`: The username to connect to the TiDB cluster. + - `TIDB_PASSWORD`: The password to connect to the TiDB cluster. + - `TIDB_DATABASE`: The database name to connect to. + - `TIDB_CA_PATH`: The path to the root certificate file. + + The following is an example for macOS: + + ```dotenv + TIDB_HOST=gateway01.****.prod.aws.tidbcloud.com + TIDB_PORT=4000 + TIDB_USERNAME=********.root + TIDB_PASSWORD=******** + TIDB_DATABASE=test + TIDB_CA_PATH=/etc/ssl/cert.pem + ``` ### Step 5. 
Run the demo diff --git a/tidb-cloud/vector-search-integrate-with-sqlalchemy.md b/tidb-cloud/vector-search-integrate-with-sqlalchemy.md index 4fe443a471ced..f9fea8e3f11cd 100644 --- a/tidb-cloud/vector-search-integrate-with-sqlalchemy.md +++ b/tidb-cloud/vector-search-integrate-with-sqlalchemy.md @@ -9,7 +9,7 @@ This tutorial walks you through how to use [SQLAlchemy](https://www.sqlalchemy.o > **Note** > -> TiDB Vector Search is currently in beta and is not available for [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated) clusters. +> TiDB Vector Search is only available for TiDB Self-Managed (TiDB >= v8.4) and [TiDB Cloud Serverless](/tidb-cloud/select-cluster-tier.md#tidb-cloud-serverless). It is not available for [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated). ## Prerequisites @@ -63,28 +63,28 @@ pip install pymysql python-dotenv sqlalchemy tidb-vector 3. Ensure the configurations in the connection dialog match your environment. - - **Connection Type** is set to `Public`. - - **Branch** is set to `main`. - - **Connect With** is set to `SQLAlchemy`. - - **Operating System** matches your environment. + - **Connection Type** is set to `Public`. + - **Branch** is set to `main`. + - **Connect With** is set to `SQLAlchemy`. + - **Operating System** matches your environment. - > **Tip:** - > - > If your program is running in Windows Subsystem for Linux (WSL), switch to the corresponding Linux distribution. + > **Tip:** + > + > If your program is running in Windows Subsystem for Linux (WSL), switch to the corresponding Linux distribution. 4. Click the **PyMySQL** tab and copy the connection string. - > **Tip:** - > - > If you have not set a password yet, click **Generate Password** to generate a random password. + > **Tip:** + > + > If you have not set a password yet, click **Generate Password** to generate a random password. 5. 
In the root directory of your Python project, create a `.env` file and paste the connection string into it. - The following is an example for macOS: + The following is an example for macOS: - ```dotenv - TIDB_DATABASE_URL="mysql+pymysql://.root:@gateway01..prod.aws.tidbcloud.com:4000/test?ssl_ca=/etc/ssl/cert.pem&ssl_verify_cert=true&ssl_verify_identity=true" - ``` + ```dotenv + TIDB_DATABASE_URL="mysql+pymysql://.root:@gateway01..prod.aws.tidbcloud.com:4000/test?ssl_ca=/etc/ssl/cert.pem&ssl_verify_cert=true&ssl_verify_identity=true" + ``` ### Step 5. Run the demo diff --git a/tidb-cloud/vector-search-integration-overview.md b/tidb-cloud/vector-search-integration-overview.md index 9d7b7bc247492..2e3be236cf743 100644 --- a/tidb-cloud/vector-search-integration-overview.md +++ b/tidb-cloud/vector-search-integration-overview.md @@ -9,14 +9,14 @@ This document provides an overview of TiDB Vector Search integration, including > **Note** > -> TiDB Vector Search is currently in beta and is not available for [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated) clusters. +> TiDB Vector Search is only available for TiDB Self-Managed (TiDB >= v8.4) and [TiDB Cloud Serverless](/tidb-cloud/select-cluster-tier.md#tidb-cloud-serverless). It is not available for [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated). ## AI frameworks TiDB provides official support for the following AI frameworks, enabling you to easily integrate AI applications developed based on these frameworks with TiDB Vector Search. 
| AI frameworks | Tutorial | -|---------------|---------------------------------------------------------------------------------------------------| +| ------------- | ------------------------------------------------------------------------------------------------- | | Langchain | [Integrate Vector Search with LangChain](/tidb-cloud/vector-search-integrate-with-langchain.md) | | LlamaIndex | [Integrate Vector Search with LlamaIndex](/tidb-cloud/vector-search-integrate-with-llamaindex.md) | @@ -31,7 +31,7 @@ You can either use self-deployed open-source embedding models or third-party emb The following table lists some mainstream embedding service providers and the corresponding integration tutorials. | Embedding service providers | Tutorial | -|-----------------------------|---------------------------------------------------------------------------------------------------------------------| +| --------------------------- | ------------------------------------------------------------------------------------------------------------------- | | Jina AI | [Integrate Vector Search with Jina AI Embeddings API](/tidb-cloud/vector-search-integrate-with-jinaai-embedding.md) | ## Object Relational Mapping (ORM) libraries diff --git a/tidb-cloud/vector-search-limitations.md b/tidb-cloud/vector-search-limitations.md index c533bcc673b37..a3b72c488cd77 100644 --- a/tidb-cloud/vector-search-limitations.md +++ b/tidb-cloud/vector-search-limitations.md @@ -9,21 +9,26 @@ This document describes the known limitations of TiDB Vector Search. > **Note** > -> TiDB Vector Search is only available for [TiDB Cloud Serverless](/tidb-cloud/select-cluster-tier.md#tidb-cloud-serverless) clusters. It is not available for TiDB Cloud Dedicated. +> TiDB Vector Search is only available for TiDB Self-Managed (TiDB >= v8.4) and [TiDB Cloud Serverless](/tidb-cloud/select-cluster-tier.md#tidb-cloud-serverless). 
It is not available for [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated). ## Vector data type limitations - Each [vector](/tidb-cloud/vector-search-data-types.md) supports up to 16383 dimensions. - Vector data types cannot store `NaN`, `Infinity`, or `-Infinity` values. - Vector data types cannot store double-precision floating-point numbers. If you insert or store double-precision floating-point numbers in vector columns, TiDB converts them to single-precision floating-point numbers. -- Vector columns cannot be used as primary keys or as part of a primary key. -- Vector columns cannot be used as unique indexes or as part of a unique index. -- Vector columns cannot be used as partition keys or as part of a partition key. -- Currently, TiDB does not support modifying a vector column to other data types (such as `JSON` and `VARCHAR`). +- Vector columns cannot be used in primary keys, unique indexes, or partition keys. To accelerate vector search performance, use [Vector Search Index](/tidb-cloud/vector-search-index.md). +- A table can have multiple vector columns. However, there is [a limit on the total number of columns in a table](/tidb-limitations.md#limitations-on-a-single-table). +- Currently, TiDB does not support dropping a vector column with a vector index. To drop such a column, drop the vector index first, then drop the vector column. +- Currently, TiDB does not support modifying a vector column to other data types such as `JSON` and `VARCHAR`. ## Vector index limitations -See [Vector search restrictions](/tidb-cloud/vector-search-index.md#restrictions). +- A vector index is used only for vector search. It cannot accelerate other queries, such as range queries or equality queries. Therefore, you cannot create a vector index on a non-vector column or on multiple vector columns. +- A table can have multiple vector indexes.
However, there is [a limit on the total number of indexes in a table](/tidb-limitations.md#limitations-on-a-single-table). +- Creating multiple vector indexes on the same column is allowed only if they use different distance functions. +- Currently, only `VEC_COSINE_DISTANCE()` and `VEC_L2_DISTANCE()` are supported as the distance functions for vector indexes. +- Currently, TiDB does not support dropping a vector column with a vector index. To drop such a column, drop the vector index first, then drop the vector column. +- Currently, TiDB does not support setting a vector index as [invisible](/sql-statements/sql-statement-alter-index.md). ## Compatibility with TiDB tools @@ -34,4 +39,4 @@ See [Vector search restrictions](/tidb-cloud/vector-search-index.md#restrictions We value your feedback and are always here to help: - [Join our Discord](https://discord.gg/zcqexutz2R) -- [Visit our Support Portal](https://tidb.support.pingcap.com/) \ No newline at end of file +- [Visit our Support Portal](https://tidb.support.pingcap.com/) diff --git a/tidb-cloud/vector-search-overview.md b/tidb-cloud/vector-search-overview.md index 1c207a84d9951..612022eae176a 100644 --- a/tidb-cloud/vector-search-overview.md +++ b/tidb-cloud/vector-search-overview.md @@ -9,7 +9,7 @@ TiDB Vector Search (beta) provides an advanced search solution for performing se > **Note** > -> TiDB Vector Search is currently in beta and is not available for [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated) clusters. +> TiDB Vector Search is only available for TiDB Self-Managed (TiDB >= v8.4) and [TiDB Cloud Serverless](/tidb-cloud/select-cluster-tier.md#tidb-cloud-serverless). It is not available for [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated).
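Because only `VEC_COSINE_DISTANCE()` and `VEC_L2_DISTANCE()` are supported as distance functions for vector indexes, it helps to keep their definitions in mind when choosing one. The following pure-Python sketch illustrates the two metrics conceptually; it is not TiDB's server-side implementation:

```python
import math

def l2_distance(a, b):
    # Euclidean distance, the metric behind VEC_L2_DISTANCE().
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_distance(a, b):
    # 1 - cosine similarity, the metric behind VEC_COSINE_DISTANCE().
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

print(l2_distance([3, 4], [0, 0]))      # 5.0
print(cosine_distance([1, 0], [0, 1]))  # 1.0
```

Cosine distance ignores vector magnitude (useful for normalized embeddings), while L2 distance does not, which is one reason an index on the same column may use either function but not both at once.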
## Concepts diff --git a/tiflash/tiflash-configuration.md b/tiflash/tiflash-configuration.md index 17b44ad18ebf7..8fc16488d5025 100644 --- a/tiflash/tiflash-configuration.md +++ b/tiflash/tiflash-configuration.md @@ -250,6 +250,15 @@ delta_index_cache_size = 0 ## New in v7.4.0. This item controls whether to enable the TiFlash resource control feature. When it is set to true, TiFlash uses the pipeline execution model. enable_resource_control = true + ## New in v6.0.0. This item is used for the MinTSO scheduler. It specifies the maximum number of threads that one resource group can use. The default value is 5000. For details about the MinTSO scheduler, see https://docs.pingcap.com/tidb/v7.5/tiflash-mintso-scheduler. + task_scheduler_thread_soft_limit = 5000 + + ## New in v6.0.0. This item is used for the MinTSO scheduler. It specifies the maximum number of threads in the global scope. The default value is 10000. For details about the MinTSO scheduler, see https://docs.pingcap.com/tidb/v7.5/tiflash-mintso-scheduler. + task_scheduler_thread_hard_limit = 10000 + + ## New in v6.4.0. This item is used for the MinTSO scheduler. It specifies the maximum number of queries that can run simultaneously in a TiFlash instance. The default value is 0, which means twice the number of vCPUs. For details about the MinTSO scheduler, see https://docs.pingcap.com/tidb/v7.5/tiflash-mintso-scheduler. + task_scheduler_active_set_soft_limit = 0 + ## Security settings take effect starting from v4.0.5. [security] ## New in v5.0. This configuration item enables or disables log redaction. 
If the configuration value diff --git a/tiflash/tiflash-mintso-scheduler.md b/tiflash/tiflash-mintso-scheduler.md index 6cb5eda77e866..0fc21995580cd 100644 --- a/tiflash/tiflash-mintso-scheduler.md +++ b/tiflash/tiflash-mintso-scheduler.md @@ -62,3 +62,7 @@ The scheduling process of the MinTSO Scheduler is as follows: ![TiFlash MinTSO Scheduler v2](/media/tiflash/tiflash_mintso_v2.png) By introducing soft limit and hard limit, the MinTSO scheduler effectively avoids system deadlock while controlling the number of system threads. In high concurrency scenarios, however, most queries might only have part of their MPP tasks scheduled. Queries with only part of MPP tasks scheduled cannot execute normally, leading to low system execution efficiency. To avoid this situation, TiFlash introduces a query-level limit for the MinTSO scheduler, called active_set_soft_limit. This limit allows only MPP tasks of up to active_set_soft_limit queries to participate in scheduling; MPP tasks of other queries do not participate in scheduling, and only after the current queries finish can new queries participate in scheduling. This limit is only a soft limit because for the MinTSO query, all its MPP tasks can be scheduled directly as long as the number of system threads does not exceed the hard limit. + +## See also + +- [Configure TiFlash](/tiflash/tiflash-configuration.md): learn how to configure the MinTSO scheduler.
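As a conceptual illustration of the soft and hard limits described above (a toy sketch only; the real MinTSO scheduler in TiFlash tracks far more state), an admission check might hold ordinary queries to the soft limit while letting the MinTSO query proceed up to the hard limit, so that one query can always make progress and the system cannot deadlock:

```python
def can_schedule(requested, active_threads, soft_limit, hard_limit, is_min_tso):
    # Ordinary queries must stay within the soft limit; the MinTSO
    # query (the one with the smallest TSO) may use threads up to
    # the hard limit so it is never starved.
    limit = hard_limit if is_min_tso else soft_limit
    return active_threads + requested <= limit

# With values resembling the defaults task_scheduler_thread_soft_limit=5000
# and task_scheduler_thread_hard_limit=10000:
print(can_schedule(200, 4900, 5000, 10000, is_min_tso=False))  # False
print(can_schedule(200, 4900, 5000, 10000, is_min_tso=True))   # True
```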