diff --git a/.circleci/config.yml b/.circleci/config.yml
index f8dd032727d69..32c36be6ba0d6 100644
--- a/.circleci/config.yml
+++ b/.circleci/config.yml
@@ -3,15 +3,23 @@ version: 2
jobs:
lint:
docker:
- - image: circleci/ruby:2.4.1-node
+ - image: circleci/node:lts
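+ # circleci/node ships npm, which the steps below use to install the Markdown lint and link-check tools globally.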
working_directory: ~/pingcap/docs
steps:
- checkout
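+ # Point npm's global prefix at a user-writable directory so that later
+ # "npm install -g" steps work without sudo (avoids npm EACCES errors).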
+ - run:
+ name: Setup
+ command: |
+ mkdir ~/.npm-global
+ npm config set prefix '~/.npm-global'
+ echo 'export PATH=~/.npm-global/bin:$PATH' >> $BASH_ENV
+ echo 'export NODE_PATH=~/.npm-global/lib/node_modules:$NODE_PATH' >> $BASH_ENV
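+ # CircleCI sources $BASH_ENV at the start of every following step, so
+ # the PATH and NODE_PATH exports above persist across steps.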
+
- run:
name: "Install markdownlint"
command: |
- sudo npm install -g markdownlint-cli@0.17.0
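+ # sudo is no longer needed: the Setup step made the global prefix user-writable.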
+ npm install -g markdownlint-cli@0.17.0
- run:
name: "Lint README"
@@ -29,6 +37,11 @@ jobs:
command: |
scripts/verify-links.sh
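+ # Anchor checking complements verify-links.sh: it validates that in-page
+ # anchors in links resolve, not just that the target files exist.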
+ - run:
+ name: "Check link anchors"
+ command: |
+ scripts/verify-link-anchors.sh
+
build:
docker:
- image: andelf/doc-build:0.1.9
diff --git a/auto-random.md b/auto-random.md
index c8c1c644f3c98..20bf582040bd3 100644
--- a/auto-random.md
+++ b/auto-random.md
@@ -11,7 +11,7 @@ aliases: ['/docs/dev/auto-random/','/docs/dev/reference/sql/attributes/auto-rand
>
> `AUTO_RANDOM` is still an experimental feature. It is **NOT** recommended that you use this attribute in the production environment. In later TiDB versions, the syntax or semantics of `AUTO_RANDOM` might change.
-Before using the `AUTO_RANDOM` attribute, set `allow-auto-random = true` in the `experimental` section of the TiDB configuration file. Refer to [`allow-auto-random`](/tidb-configuration-file.md#allow-auto-random) for details.
+Before using the `AUTO_RANDOM` attribute, set `allow-auto-random = true` in the `experimental` section of the TiDB configuration file. Refer to [`allow-auto-random`](/tidb-configuration-file.md#allow-auto-random-new-in-v310) for details.
## User scenario
diff --git a/br/backup-and-restore-tool.md b/br/backup-and-restore-tool.md
index 9f8467ab135ef..d02c17d00e672 100644
--- a/br/backup-and-restore-tool.md
+++ b/br/backup-and-restore-tool.md
@@ -82,7 +82,7 @@ The SST file is named in the format of `storeID_regionID_regionEpoch_keyHash_cf`
- `regionID` is the Region ID;
- `regionEpoch` is the version number of the Region;
- `keyHash` is the Hash (sha256) value of the startKey of a range, which ensures the uniqueness of a key;
-- `cf` indicates the [Column Family](/tune-tikv-memory-performance.md#tune-tikv-performance) of RocksDB (`default` or `write` by default).
+- `cf` indicates the [Column Family](/tune-tikv-memory-performance.md) of RocksDB (`default` or `write` by default).
### Restoration principle
diff --git a/certificate-authentication.md b/certificate-authentication.md
index ac0d7326442fc..35d3cbd1f4759 100644
--- a/certificate-authentication.md
+++ b/certificate-authentication.md
@@ -259,7 +259,7 @@ First, connect TiDB using the client to configure the login verification. Then,
The user certificate information can be specified by `require subject`, `require issuer`, `require san`, and `require cipher`, which are used to check the X509 certificate attributes.
-+ `require subject`: Specifies the `subject` information of the client certificate when you log in. With this option specified, you do not need to configure `require ssl` or x509. The information to be specified is consistent with the entered `subject` information in [Generate client keys and certificates](#generate-client-keys-and-certificates).
++ `require subject`: Specifies the `subject` information of the client certificate when you log in. With this option specified, you do not need to configure `require ssl` or x509. The information to be specified is consistent with the entered `subject` information in [Generate client keys and certificates](#generate-client-key-and-certificate).
To get this option, execute the following command:
@@ -502,4 +502,4 @@ Also replace the old CA certificate with the combined certificate so that the cl
sudo openssl x509 -req -in server-req.new.pem -days 365000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.new.pem
```
-3. Configure the TiDB server to use the new server key and certificate. See [Configure TiDB server](#configure-tidb-server) for details.
+3. Configure the TiDB server to use the new server key and certificate. See [Configure TiDB server](#configure-tidb-and-the-client-to-use-certificates) for details.
diff --git a/check-cluster-status-using-sql-statements.md b/check-cluster-status-using-sql-statements.md
index aedb6ef059a64..fe0b44698abaf 100644
--- a/check-cluster-status-using-sql-statements.md
+++ b/check-cluster-status-using-sql-statements.md
@@ -22,7 +22,7 @@ The `INFORMATION_SCHEMA` system database offers system tables as follows to quer
You can also use the following statements to obtain some useful information for troubleshooting and querying the TiDB cluster status.
- `ADMIN SHOW DDL`: obtains the ID of TiDB with the `DDL owner` role and `IP:PORT`.
-- The feature of `SHOW ANALYZE STATUS` is the same with that of [the `ANALYZE_STATUS` table](/system-tables/system-table-information-schema.md#analyze-status-table).
+- The feature of `SHOW ANALYZE STATUS` is the same as that of [the `ANALYZE_STATUS` table](/system-tables/system-table-information-schema.md#analyze_status-table).
- Specific `EXPLAIN` statements
- `EXPLAIN ANALYZE`: obtains some detailed information for execution of a SQL statement.
- `EXPLAIN FOR CONNECTION`: obtains the execution plan for the query executed last in a connection. Can be used along with `SHOW PROCESSLIST`.
diff --git a/dashboard/dashboard-faq.md b/dashboard/dashboard-faq.md
index 6c59d6f51f67f..e6732b3fcd9b6 100644
--- a/dashboard/dashboard-faq.md
+++ b/dashboard/dashboard-faq.md
@@ -15,7 +15,7 @@ This document summarizes the frequently asked questions (FAQs) and answers about
When multiple Placement Driver (PD) instances are deployed in a cluster, only one of the PD instances actually runs the TiDB Dashboard service. If you access other PD instances instead of this one, your browser redirects you to another address. If the firewall or reverse proxy is not properly configured for accessing TiDB Dashboard, when you visit the Dashboard, you might be redirected to an internal address that is protected by the firewall or reverse proxy.
-- See [TiDB Dashboard Multi-PD Instance Deployment](/dashboard/dashboard-ops-deploy.md#) to learn the working principle of TiDB Dashboard with multiple PD instances.
+- See [TiDB Dashboard Multi-PD Instance Deployment](/dashboard/dashboard-ops-deploy.md) to learn the working principle of TiDB Dashboard with multiple PD instances.
- See [Use TiDB Dashboard through a Reverse Proxy](/dashboard/dashboard-ops-reverse-proxy.md) to learn how to correctly configure a reverse proxy.
- See [Secure TiDB Dashboard](/dashboard/dashboard-ops-security.md) to learn how to correctly configure the firewall.
diff --git a/dashboard/dashboard-statement-details.md b/dashboard/dashboard-statement-details.md
index f4af75c9fd4a7..6393d860d3ab4 100644
--- a/dashboard/dashboard-statement-details.md
+++ b/dashboard/dashboard-statement-details.md
@@ -11,7 +11,7 @@ Click any item in the list to enter the detail page of the SQL statement to view
- The overview of SQL statements, which includes the SQL template, the SQL template ID, the current time range of displayed SQL executions, the number of execution plans and the database in which the SQL statement is executed (see area 1 in the image below).
- The execution plan list: If the SQL statement has multiple execution plans, this list is displayed. You can select different execution plans, and the details of the selected plans are displayed below the list. If there is only one execution plan, the list is not displayed (see area 2 below).
-- Execution detail of plans, which displays the detailed information of the selected execution plans. See [Execution plan in details](#execution-plan-in-details) (area 3 in the image below).
+- Execution detail of plans, which displays the detailed information of the selected execution plans. See [Execution plan in details](#execution-details-of-plans) (area 3 in the image below).
![Details](/media/dashboard/dashboard-statement-detail.png)
diff --git a/dashboard/dashboard-statement-list.md b/dashboard/dashboard-statement-list.md
index fe2c0fa47fbae..b0b9e23eb7ebd 100644
--- a/dashboard/dashboard-statement-list.md
+++ b/dashboard/dashboard-statement-list.md
@@ -56,7 +56,7 @@ On the setting page, you can disable or enable the SQL statements feature. When
- Collect interval: The length of the period for each SQL statement analysis, which is 30 minutes by default. The SQL statements feature summarizes and counts all SQL statements within a period of time. If the period is too long, the granularity of the summary is coarse, which is not good for locating problems; if the period is too short, the granularity of the statistics is fine, which is good for locating problems but results in more records and more memory usage within the same data retention duration. Therefore, you need to adjust this value based on the actual situation, and lower it appropriately when locating problems.
- Data retain duration: The retention duration of summary information, which is 1 day by default. Data retained longer than this duration will be deleted from system tables.
-See [Configurations of Statement Summary Tables](/statement-summary-tables.md#configurations) for details.
+See [Configurations of Statement Summary Tables](/statement-summary-tables.md#parameter-configuration) for details.
> **Note:**
>
diff --git a/get-started-with-tidb-binlog.md b/get-started-with-tidb-binlog.md
index 08fe8282fce7a..250ae201e7f56 100644
--- a/get-started-with-tidb-binlog.md
+++ b/get-started-with-tidb-binlog.md
@@ -329,7 +329,7 @@ You should see the same rows that you inserted into TiDB when querying the Maria
## binlogctl
-Information about Pumps and Drainers that have joined the cluster is stored in PD. You can use the binlogctl tool query and manipulate information about their states. See [binlogctl guide](/tidb-binlog/maintain-tidb-binlog-cluster.md#binlogctl-guide) for more information.
+Information about Pumps and Drainers that have joined the cluster is stored in PD. You can use the binlogctl tool to query and manipulate information about their states. See [binlogctl guide](/tidb-binlog/binlog-control.md) for more information.
Use `binlogctl` to get a view of the current status of Pumps and Drainers in the cluster:
diff --git a/glossary.md b/glossary.md
index 91b1cd128fcc8..6ce84b4c6a935 100644
--- a/glossary.md
+++ b/glossary.md
@@ -13,7 +13,7 @@ aliases: ['/docs/dev/glossary/']
ACID refers to the four key properties of a transaction: atomicity, consistency, isolation, and durability. Each of these properties is described below.
-- **Atomicity** means that either all the changes of an operation are performed, or none of them are. TiDB ensures the atomicity of the [Region](#region) that stores the Primary Key to achieve the atomicity of transactions.
+- **Atomicity** means that either all the changes of an operation are performed, or none of them are. TiDB ensures the atomicity of the [Region](#regionpeerraft-group) that stores the Primary Key to achieve the atomicity of transactions.
- **Consistency** means that transactions always bring the database from one consistent state to another. In TiDB, data consistency is ensured before writing data to the memory.
diff --git a/pessimistic-transaction.md b/pessimistic-transaction.md
index a74be46898593..bb51f7a81b628 100644
--- a/pessimistic-transaction.md
+++ b/pessimistic-transaction.md
@@ -41,7 +41,7 @@ The `BEGIN PESSIMISTIC;` and `BEGIN OPTIMISTIC;` statements take precedence over
## Behaviors
-Pessimistic transactions in TiDB behave similarly to those in MySQL. See the minor differences in [Difference with MySQL InnoDB](#difference-with-mysql-innoDB).
+Pessimistic transactions in TiDB behave similarly to those in MySQL. See the minor differences in [Difference with MySQL InnoDB](#difference-with-mysql-innodb).
- When you perform the `SELECT FOR UPDATE` statement, transactions read the **latest** committed data and apply a pessimistic lock on the data being read.
diff --git a/quick-start-with-tidb.md b/quick-start-with-tidb.md
index 7bfd75644ce11..5ccf47abbab96 100644
--- a/quick-start-with-tidb.md
+++ b/quick-start-with-tidb.md
@@ -127,7 +127,7 @@ The smallest TiDB cluster topology is as follows:
Other requirements for the target machine:
- The `root` user and its password are required
-- [Stop the firewall service of the target machine](/production-deployment-using-tiup.md#how-to-stop-the-firewall-service-of-deployment-machines), or open the port needed by the TiDB cluster nodes
+- [Stop the firewall service of the target machine](/check-before-deployment.md#check-and-stop-the-firewall-service-of-target-machines), or open the port needed by the TiDB cluster nodes
- Currently, TiUP only supports deploying TiDB on the x86_64 (AMD64) architecture (the ARM architecture will be supported in TiDB 4.0 GA):
- It is recommended to use CentOS 7.3 or later versions on AMD64
diff --git a/releases/release-2.1-ga.md b/releases/release-2.1-ga.md
index a5b85a839a0fc..a2e7559f2aa4d 100644
--- a/releases/release-2.1-ga.md
+++ b/releases/release-2.1-ga.md
@@ -179,7 +179,7 @@ On November 30, 2018, TiDB 2.1 GA is released. See the following updates in this
- [calling `jq` to format the JSON output](/pd-control.md#jq-formatted-json-output-usage)
- - [checking the Region information of the specified store](/pd-control.md#region-store-store-id)
+ - [checking the Region information of the specified store](/pd-control.md#region-store-store_id)
- [checking topN Region list sorted by versions](/pd-control.md#region-topconfver-limit)
diff --git a/scripts/verify-link-anchors.sh b/scripts/verify-link-anchors.sh
new file mode 100755
index 0000000000000..445159d537f15
--- /dev/null
+++ b/scripts/verify-link-anchors.sh
@@ -0,0 +1,14 @@
+#!/bin/bash
+#
+# In addition to verify-links.sh, this script also checks link anchors.
+#
+# See https://docs.npmjs.com/resolving-eacces-permissions-errors-when-installing-packages-globally if you run into permission problems when executing npm install.
+
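+# Resolve the repository root so the script can be run from any directory.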
+ROOT=$(unset CDPATH && cd $(dirname "${BASH_SOURCE[0]}")/.. && pwd)
+cd $ROOT
+
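+# Install remark and the pingcap docs anchor lint rule (fetched from GitHub).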
+npm install -g remark-cli remark-lint breeswish/remark-lint-pingcap-docs-anchor
+
+echo "info: checking links anchors under $ROOT directory..."
+
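+# --frail turns lint warnings into a non-zero exit so CI fails on broken
+# anchors; --quiet suppresses output for files that pass.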
+remark --ignore-path .gitignore -u lint -u remark-lint-pingcap-docs-anchor . --frail --quiet
diff --git a/scripts/verify-links.sh b/scripts/verify-links.sh
index 300b75d8401c7..70b5b38c8aba6 100755
--- a/scripts/verify-links.sh
+++ b/scripts/verify-links.sh
@@ -10,13 +10,12 @@
# - When a file was moved, all other references are required to be updated for now, even if alias are given
# - This is recommended because of less redirects and better anchors support.
#
+# See https://docs.npmjs.com/resolving-eacces-permissions-errors-when-installing-packages-globally if you run into permission problems when executing npm install.
ROOT=$(unset CDPATH && cd $(dirname "${BASH_SOURCE[0]}")/.. && pwd)
cd $ROOT
-if ! which markdown-link-check &>/dev/null; then
- sudo npm install -g markdown-link-check@3.7.3
-fi
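+# Always install the pinned version (no existence check) so every CI run
+# uses the same release; the user-level npm prefix makes sudo unnecessary.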
+npm install -g markdown-link-check@3.8.1
VERBOSE=${VERBOSE:-}
CONFIG_TMP=$(mktemp)
@@ -50,7 +49,7 @@ fi
while read -r tasks; do
for task in $tasks; do
(
- output=$(markdown-link-check --color --config "$CONFIG_TMP" "$task" -q)
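+ # Drop --color so the output captured into $ERROR_REPORT contains no ANSI escape codes.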
+ output=$(markdown-link-check --config "$CONFIG_TMP" "$task" -q)
if [ $? -ne 0 ]; then
printf "$output" >> $ERROR_REPORT
fi
diff --git a/sql-plan-management.md b/sql-plan-management.md
index bf2b71341332e..924bd176c42df 100644
--- a/sql-plan-management.md
+++ b/sql-plan-management.md
@@ -91,7 +91,7 @@ This statement removes a specified execution plan binding at the GLOBAL or SESSI
Generally, the binding in the SESSION scope is mainly used for testing or in special situations. For a binding to take effect in all TiDB processes, you need to use the GLOBAL binding. A created SESSION binding shields the corresponding GLOBAL binding until the end of the SESSION, even if the SESSION binding is dropped before the session closes. In this case, no binding takes effect and the plan is selected by the optimizer.
-The following example is based on the example in [create binding](#create-binding) in which the SESSION binding shields the GLOBAL binding:
+The following example is based on the example in [create binding](#create-a-binding) in which the SESSION binding shields the GLOBAL binding:
```sql
-- Drops the binding created in the SESSION scope.
diff --git a/sql-statements/sql-statement-recover-table.md b/sql-statements/sql-statement-recover-table.md
index a473dad56f046..20e949573c3f3 100644
--- a/sql-statements/sql-statement-recover-table.md
+++ b/sql-statements/sql-statement-recover-table.md
@@ -63,7 +63,7 @@ When you use `RECOVER TABLE` in the upstream TiDB during TiDB Binlog replication
+ Latency occurs during replication between upstream and downstream databases. An error instance: `snapshot is older than GC safe point 2019-07-10 13:45:57 +0800 CST`.
-For the above three situations, you can resume data replication from TiDB Binlog with a [full import of the deleted table](/ecosystem-tool-user-guide.md#full-backup-and-restore-of-tidb-cluster-data-1).
+For the above three situations, you can resume data replication from TiDB Binlog with a [full import of the deleted table](/ecosystem-tool-user-guide.md#backup-and-restore).
## Examples
diff --git a/system-tables/system-table-inspection-result.md b/system-tables/system-table-inspection-result.md
index 2a054cf6deab1..235522ba9ecd8 100644
--- a/system-tables/system-table-inspection-result.md
+++ b/system-tables/system-table-inspection-result.md
@@ -275,7 +275,7 @@ The `threshold-check` diagnostic rule checks whether the following metrics in th
| Component | Monitoring metric | Monitoring table | Expected value | Description |
| :---- | :---- | :---- | :---- | :---- |
| TiDB | tso-duration | pd_tso_wait_duration | < 50ms | The wait duration of getting the TSO of a transaction. |
-| TiDB | get-token-duration | tidb_get_token_duration | < 1ms | Queries the time it takes to get the token. The related TiDB configuration item is [`token-limit`](/command-line-flags-for-tidb-configuration.md#token-limit). |
+| TiDB | get-token-duration | tidb_get_token_duration | < 1ms | The time it takes for a query to get the token. The related TiDB configuration item is [`token-limit`](/command-line-flags-for-tidb-configuration.md#--token-limit). |
| TiDB | load-schema-duration | tidb_load_schema_duration | < 1s | The time it takes for TiDB to update the schema metadata.|
| TiKV | scheduler-cmd-duration | tikv_scheduler_command_duration | < 0.1s | The time it takes for TiKV to execute the KV `cmd` request. |
| TiKV | handle-snapshot-duration | tikv_handle_snapshot_duration | < 30s | The time it takes for TiKV to handle the snapshot. |
diff --git a/telemetry.md b/telemetry.md
index d492a187f1d71..af21f034b76ad 100644
--- a/telemetry.md
+++ b/telemetry.md
@@ -61,7 +61,7 @@ TIUP_CLUSTER_DEBUG=enable tiup cluster list
### Disable TiDB telemetry at deployment
-When deploying TiDB clusters, configure [`enable-telemetry = false`](/tidb-configuration-file.md#enable-telemetry) to disable the TiDB telemetry collection on all TiDB instances. You can also use this setting to disable telemetry in an existing TiDB cluster, which does not take effect until you restart the cluster.
+When deploying TiDB clusters, configure [`enable-telemetry = false`](/tidb-configuration-file.md#enable-telemetry-new-in-v402) to disable the TiDB telemetry collection on all TiDB instances. You can also use this setting to disable telemetry in an existing TiDB cluster; the change does not take effect until you restart the cluster.
Detailed steps to disable telemetry in different deployment tools are listed below.
@@ -78,7 +78,7 @@ enable-telemetry = false
Specify the `--config=tidb_config.toml` command-line parameter when starting TiDB for the configuration file above to take effect.
-See [TiDB Configuration Options](/command-line-flags-for-tidb-configuration.md#--config) and [TiDB Configuration File](/tidb-configuration-file.md#enable-telemetry) for details.
+See [TiDB Configuration Options](/command-line-flags-for-tidb-configuration.md#--config) and [TiDB Configuration File](/tidb-configuration-file.md#enable-telemetry-new-in-v402) for details.
@@ -154,7 +154,7 @@ See [Deploy TiDB Operator in Kubernetes](https://docs.pingcap.com/tidb-in-kubern
### Disable TiDB telemetry for deployed TiDB clusters
-In existing TiDB clusters, you can also modify the system variable [`tidb_enable_telemetry`](/tidb-specific-system-variables.md#tidb_enable_telemetry) to dynamically disable the TiDB telemetry collection:
+In existing TiDB clusters, you can also modify the system variable [`tidb_enable_telemetry`](/tidb-specific-system-variables.md#tidb_enable_telemetry-new-in-v402-version) to dynamically disable the TiDB telemetry collection:
{{< copyable "sql" >}}
diff --git a/tidb-binlog/handle-tidb-binlog-errors.md b/tidb-binlog/handle-tidb-binlog-errors.md
index 8ce3333670e7a..e7dd0ed8567d8 100644
--- a/tidb-binlog/handle-tidb-binlog-errors.md
+++ b/tidb-binlog/handle-tidb-binlog-errors.md
@@ -33,4 +33,4 @@ Solution: Clean up the disk space and then restart Pump.
Cause: When Pump is started, it notifies all Drainer nodes that are in the `online` state. If it fails to notify Drainer, this error log is printed.
-Solution: Use the [binlogctl tool](/tidb-binlog/maintain-tidb-binlog-cluster.md#binlog-guide) to check whether each Drainer node is normal or not. This is to ensure that all Drainer nodes that are in the `online` state are working normally. If the state of a Drainer node is not consistent with its actual working status, use the binlogctl tool to change its state and then restart Pump.
+Solution: Use the [binlogctl tool](/tidb-binlog/binlog-control.md) to check whether each Drainer node is normal or not. This is to ensure that all Drainer nodes that are in the `online` state are working normally. If the state of a Drainer node is not consistent with its actual working status, use the binlogctl tool to change its state and then restart Pump.
diff --git a/tidb-binlog/maintain-tidb-binlog-cluster.md b/tidb-binlog/maintain-tidb-binlog-cluster.md
index 5f6ace8338f48..95c52208110d9 100644
--- a/tidb-binlog/maintain-tidb-binlog-cluster.md
+++ b/tidb-binlog/maintain-tidb-binlog-cluster.md
@@ -43,7 +43,7 @@ Pump or Drainer state description:
* Pause: You can pause a Drainer process by using the `kill` command (not `kill -9`), pressing Ctrl+C or using the `pause-drainer` command in the binlogctl tool. After receiving the pause instruction, the Drainer node sets its state to `pausing` and stops pulling binlogs from Pump nodes. After all threads are safely exited, the Drainer node sets its state to `paused` and exits the process.
* Offline: You can close a Drainer process only by using the `offline-drainer` command in the binlogctl tool. After receiving the offline instruction, the Drainer node sets its state to `closing` and stops pulling binlogs from Pump nodes. After all threads are safely exited, the Drainer node updates its state to `offline` and exits the process.
-For how to pause, close, check, and modify the state of Drainer, see the [binlogctl guide](#binlogctl-guide) as follows.
+For how to pause, close, check, and modify the state of Drainer, see the [binlogctl guide](/tidb-binlog/binlog-control.md).
## Use `binlogctl` to manage Pump/Drainer
diff --git a/tidb-binlog/troubleshoot-tidb-binlog.md b/tidb-binlog/troubleshoot-tidb-binlog.md
index a7d1dd2b41cba..aef97dc542b8b 100644
--- a/tidb-binlog/troubleshoot-tidb-binlog.md
+++ b/tidb-binlog/troubleshoot-tidb-binlog.md
@@ -13,7 +13,7 @@ If you encounter errors while running TiDB Binlog, take the following steps to t
1. Check whether each monitoring metric is normal or not. Refer to [TiDB Binlog Monitoring](/tidb-binlog/monitor-tidb-binlog-cluster.md) for details.
-2. Use the [binlogctl tool](/tidb-binlog/maintain-tidb-binlog-cluster.md#binlogctl-guide) to check whether the state of each Pump or Drainer node is normal or not.
+2. Use the [binlogctl tool](/tidb-binlog/binlog-control.md) to check whether the state of each Pump or Drainer node is normal or not.
3. Check whether `ERROR` or `WARN` exists in the Pump log or Drainer log.
diff --git a/tidb-configuration-file.md b/tidb-configuration-file.md
index fc02ed7e9efdc..c4c6111e0a9cc 100644
--- a/tidb-configuration-file.md
+++ b/tidb-configuration-file.md
@@ -146,7 +146,7 @@ The TiDB configuration file supports more options than command-line parameters.
- Enables or disables the telemetry collection in TiDB.
- Default value: `true`
-- When this configuration is set to `false` on all TiDB instances, the telemetry collection in TiDB is disabled and the [`tidb_enable_telemetry`](/tidb-specific-system-variables.md#tidb_enable_telemetry) system variable does not take effect. See [Telemetry](/telemetry.md) for details.
+- When this configuration is set to `false` on all TiDB instances, the telemetry collection in TiDB is disabled and the [`tidb_enable_telemetry`](/tidb-specific-system-variables.md#tidb_enable_telemetry-new-in-v402-version) system variable does not take effect. See [Telemetry](/telemetry.md) for details.
## Log
diff --git a/tidb-lightning/deploy-tidb-lightning.md b/tidb-lightning/deploy-tidb-lightning.md
index cf2129d6e592e..37665e3969082 100644
--- a/tidb-lightning/deploy-tidb-lightning.md
+++ b/tidb-lightning/deploy-tidb-lightning.md
@@ -97,7 +97,7 @@ If the data source consists of CSV files, see [CSV support](/tidb-lightning/migr
This section describes two deployment methods of TiDB Lightning:
-- [Deploy TiDB Lightning using TiDB Ansible](#deploy-tidb-lightning-using-ansible)
+- [Deploy TiDB Lightning using TiDB Ansible](#deploy-tidb-lightning-using-tidb-ansible)
- [Deploy TiDB Lightning manually](#deploy-tidb-lightning-manually)
### Deploy TiDB Lightning using TiDB Ansible
diff --git a/tidb-lightning/tidb-lightning-faq.md b/tidb-lightning/tidb-lightning-faq.md
index 81c0723abcedb..caec21e727ee3 100644
--- a/tidb-lightning/tidb-lightning-faq.md
+++ b/tidb-lightning/tidb-lightning-faq.md
@@ -61,7 +61,7 @@ If `tikv-importer` needs to be restarted:
4. Start `tikv-importer`.
5. Start `tidb-lightning` *and wait until the program fails with CHECKSUM error, if any*.
* Restarting `tikv-importer` would destroy all engine files still being written, but `tidb-lightning` does not know about it. As of v3.0, the simplest way is to let `tidb-lightning` go on and retry.
-6. [Destroy the failed tables and checkpoints](/troubleshoot-tidb-lightning.md#checkpoint-for-has-invalid-status)
+6. [Destroy the failed tables and checkpoints](/troubleshoot-tidb-lightning.md#checkpoint-for--has-invalid-status-error-code)
7. Start `tidb-lightning` again.
## How to ensure the integrity of the imported data?
diff --git a/tidb-specific-system-variables.md b/tidb-specific-system-variables.md
index af8acb2837ccf..8978a7859fcc2 100644
--- a/tidb-specific-system-variables.md
+++ b/tidb-specific-system-variables.md
@@ -592,4 +592,4 @@ set tidb_query_log_max_len = 20
- Scope: GLOBAL
- Default value: 1
-- This variable dynamically controls whether the telemetry collection in TiDB is enabled. By setting the value to `0`, the telemetry collection is disabled. If the [`enable-telemetry`](/tidb-configuration-file.md#enable-telemetry) TiDB configuration item is set to `false` on all TiDB instances, the telemetry collection is always disabled and this system variable will not take effect. See [Telemetry](/telemetry.md) for details.
+- This variable dynamically controls whether the telemetry collection in TiDB is enabled. By setting the value to `0`, the telemetry collection is disabled. If the [`enable-telemetry`](/tidb-configuration-file.md#enable-telemetry-new-in-v402) TiDB configuration item is set to `false` on all TiDB instances, the telemetry collection is always disabled and this system variable will not take effect. See [Telemetry](/telemetry.md) for details.
diff --git a/tiflash/troubleshoot-tiflash.md b/tiflash/troubleshoot-tiflash.md
index 3abdc0ee6bb84..4c14e2049bbc9 100644
--- a/tiflash/troubleshoot-tiflash.md
+++ b/tiflash/troubleshoot-tiflash.md
@@ -91,6 +91,6 @@ In this case, you can balance the load pressure by adding more TiFlash nodes.
Take the following steps to handle the data file corruption:
-1. Refer to [Take a TiFlash node down](#take-a-tiflash-node-down) to take the corresponding TiFlash node down.
+1. Refer to [Take a TiFlash node down](/scale-tidb-using-tiup.md#scale-in-a-tiflash-node) to take the corresponding TiFlash node down.
2. Delete the related data of the TiFlash node.
3. Redeploy the TiFlash node in the cluster.
diff --git a/tiflash/tune-tiflash-performance.md b/tiflash/tune-tiflash-performance.md
index 3e71aefa7148d..c5128d60babfa 100644
--- a/tiflash/tune-tiflash-performance.md
+++ b/tiflash/tune-tiflash-performance.md
@@ -25,7 +25,7 @@ If you want to save machine resources and have no requirement on isolation, you
2. Enable the super batch feature:
- You can use the [`tidb_allow_batch_cop`](/tidb-specific-system-variables.md#tidb_allow_batch_cop) variable to set whether to merge Region requests when reading from TiFlash.
+ You can use the [`tidb_allow_batch_cop`](/tidb-specific-system-variables.md#tidb_allow_batch_cop-new-in-v40-version) variable to set whether to merge Region requests when reading from TiFlash.
When the number of Regions involved in the query is relatively large, try to set this variable to `1` (effective for coprocessor requests with `aggregation` operators that are pushed down to TiFlash), or set this variable to `2` (effective for all coprocessor requests that are pushed down to TiFlash).
diff --git a/tune-operating-system.md b/tune-operating-system.md
index ba4a24e6ce497..e6e6c2383b065 100644
--- a/tune-operating-system.md
+++ b/tune-operating-system.md
@@ -61,7 +61,7 @@ cpufreq is a module that dynamically adjusts the CPU frequency. It supports five
### NUMA CPU binding
-To avoid accessing memory across Non-Uniform Memory Access (NUMA) nodes as much as possible, you can bind a thread/process to certain CPU cores by setting the CPU affinity of the thread. For ordinary programs, you can use the `numactl` command for the CPU binding. For detailed usage, see the Linux manual pages. For network interface card (NIC) interrupts, see [tune network](#tune-network).
+To avoid accessing memory across Non-Uniform Memory Access (NUMA) nodes as much as possible, you can bind a thread/process to certain CPU cores by setting the CPU affinity of the thread. For ordinary programs, you can use the `numactl` command for the CPU binding. For detailed usage, see the Linux manual pages. For network interface card (NIC) interrupts, see [tune network](#network-tuning).
### Memory—transparent huge page (THP)