Add scripts to verify anchors in CI (#3128)
* add link anchor check in CI

* fix 29 anchors
yikeke authored Jul 3, 2020
1 parent ff871af commit a10d954
Showing 29 changed files with 61 additions and 35 deletions.
17 changes: 15 additions & 2 deletions .circleci/config.yml
@@ -3,15 +3,23 @@ version: 2
jobs:
lint:
docker:
-      - image: circleci/ruby:2.4.1-node
+      - image: circleci/node:lts
working_directory: ~/pingcap/docs
steps:
- checkout

+      - run:
+          name: Setup
+          command: |
+            mkdir ~/.npm-global
+            npm config set prefix '~/.npm-global'
+            echo 'export PATH=~/.npm-global/bin:$PATH' >> $BASH_ENV
+            echo 'export NODE_PATH=~/.npm-global/lib/node_modules:$NODE_PATH' >> $BASH_ENV
- run:
name: "Install markdownlint"
command: |
-            sudo npm install -g [email protected]
+            npm install -g [email protected]
- run:
name: "Lint README"
@@ -29,6 +37,11 @@ jobs:
command: |
scripts/verify-links.sh
+      - run:
+          name: "Check link anchors"
+          command: |
+            scripts/verify-link-anchors.sh
build:
docker:
- image: andelf/doc-build:0.1.9
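The new Setup step relies on CircleCI's `$BASH_ENV` mechanism: exports appended to that file in one step are sourced again at the start of every later step, which is how the `~/.npm-global/bin` prefix stays on `PATH` for the install and lint steps. A minimal local sketch of the same pattern (the temp file stands in for CircleCI's real `$BASH_ENV`):

```shell
#!/bin/bash
# Sketch of the $BASH_ENV pattern used in the config above: one step
# appends exports, later steps source the file before running.
BASH_ENV=$(mktemp)

# "Setup" step: persist the npm prefix on PATH for later steps.
echo 'export PATH=~/.npm-global/bin:$PATH' >> "$BASH_ENV"

# A later step: source the accumulated environment, then use it.
source "$BASH_ENV"
case ":$PATH:" in
  *".npm-global/bin:"*) echo "npm-global bin is on PATH" ;;
  *) echo "PATH not updated" >&2; exit 1 ;;
esac
```

This is why the `sudo` in the old `npm install -g` line can be dropped: globals now land in a user-writable prefix.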
2 changes: 1 addition & 1 deletion auto-random.md
@@ -11,7 +11,7 @@ aliases: ['/docs/dev/auto-random/','/docs/dev/reference/sql/attributes/auto-rand
>
> `AUTO_RANDOM` is still an experimental feature. It is **NOT** recommended that you use this attribute in the production environment. In later TiDB versions, the syntax or semantics of `AUTO_RANDOM` might change.
- Before using the `AUTO_RANDOM` attribute, set `allow-auto-random = true` in the `experimental` section of the TiDB configuration file. Refer to [`allow-auto-random`](/tidb-configuration-file.md#allow-auto-random) for details.
+ Before using the `AUTO_RANDOM` attribute, set `allow-auto-random = true` in the `experimental` section of the TiDB configuration file. Refer to [`allow-auto-random`](/tidb-configuration-file.md#allow-auto-random-new-in-v310) for details.

## User scenario

2 changes: 1 addition & 1 deletion br/backup-and-restore-tool.md
@@ -82,7 +82,7 @@ The SST file is named in the format of `storeID_regionID_regionEpoch_keyHash_cf`
- `regionID` is the Region ID;
- `regionEpoch` is the version number of the Region;
- `keyHash` is the Hash (sha256) value of the startKey of a range, which ensures the uniqueness of a key;
- - `cf` indicates the [Column Family](/tune-tikv-memory-performance.md#tune-tikv-performance) of RocksDB (`default` or `write` by default).
+ - `cf` indicates the [Column Family](/tune-tikv-memory-performance.md) of RocksDB (`default` or `write` by default).

### Restoration principle

4 changes: 2 additions & 2 deletions certificate-authentication.md
@@ -259,7 +259,7 @@ First, connect TiDB using the client to configure the login verification. Then,

The user certificate information can be specified by `require subject`, `require issuer`, `require san`, and `require cipher`, which are used to check the X509 certificate attributes.

- + `require subject`: Specifies the `subject` information of the client certificate when you log in. With this option specified, you do not need to configure `require ssl` or x509. The information to be specified is consistent with the entered `subject` information in [Generate client keys and certificates](#generate-client-keys-and-certificates).
+ + `require subject`: Specifies the `subject` information of the client certificate when you log in. With this option specified, you do not need to configure `require ssl` or x509. The information to be specified is consistent with the entered `subject` information in [Generate client keys and certificates](#generate-client-key-and-certificate).

To get this option, execute the following command:

@@ -502,4 +502,4 @@ Also replace the old CA certificate with the combined certificate so that the cl
sudo openssl x509 -req -in server-req.new.pem -days 365000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.new.pem
```

- 3. Configure the TiDB server to use the new server key and certificate. See [Configure TiDB server](#configure-tidb-server) for details.
+ 3. Configure the TiDB server to use the new server key and certificate. See [Configure TiDB server](#configure-tidb-and-the-client-to-use-certificates) for details.
2 changes: 1 addition & 1 deletion check-cluster-status-using-sql-statements.md
@@ -22,7 +22,7 @@ The `INFORMATION_SCHEMA` system database offers system tables as follows to quer
You can also use the following statements to obtain some useful information for troubleshooting and querying the TiDB cluster status.

- `ADMIN SHOW DDL`: obtains the ID of TiDB with the `DDL owner` role and `IP:PORT`.
- - The feature of `SHOW ANALYZE STATUS` is the same as that of [the `ANALYZE_STATUS` table](/system-tables/system-table-information-schema.md#analyze-status-table).
+ - The feature of `SHOW ANALYZE STATUS` is the same as that of [the `ANALYZE_STATUS` table](/system-tables/system-table-information-schema.md#analyze_status-table).
- Specific `EXPLAIN` statements
- `EXPLAIN ANALYZE`: obtains some detailed information for execution of a SQL statement.
- `EXPLAIN FOR CONNECTION`: obtains the execution plan for the query executed last in a connection. Can be used along with `SHOW PROCESSLIST`.
2 changes: 1 addition & 1 deletion dashboard/dashboard-faq.md
@@ -15,7 +15,7 @@ This document summarizes the frequently asked questions (FAQs) and answers about

When multiple Placement Driver (PD) instances are deployed in a cluster, only one of the PD instances actually runs the TiDB Dashboard service. If you access other PD instances instead of this one, your browser redirects you to another address. If the firewall or reverse proxy is not properly configured for accessing TiDB Dashboard, when you visit the Dashboard, you might be redirected to an internal address that is protected by the firewall or reverse proxy.

- - See [TiDB Dashboard Multi-PD Instance Deployment](/dashboard/dashboard-ops-deploy.md#) to learn the working principle of TiDB Dashboard with multiple PD instances.
+ - See [TiDB Dashboard Multi-PD Instance Deployment](/dashboard/dashboard-ops-deploy.md) to learn the working principle of TiDB Dashboard with multiple PD instances.
- See [Use TiDB Dashboard through a Reverse Proxy](/dashboard/dashboard-ops-reverse-proxy.md) to learn how to correctly configure a reverse proxy.
- See [Secure TiDB Dashboard](/dashboard/dashboard-ops-security.md) to learn how to correctly configure the firewall.

2 changes: 1 addition & 1 deletion dashboard/dashboard-statement-details.md
@@ -11,7 +11,7 @@ Click any item in the list to enter the detail page of the SQL statement to view

- The overview of SQL statements, which includes the SQL template, the SQL template ID, the current time range of displayed SQL executions, the number of execution plans and the database in which the SQL statement is executed (see area 1 in the image below).
- The execution plan list: If the SQL statement has multiple execution plans, this list is displayed. You can select different execution plans, and the details of the selected plans are displayed below the list. If there is only one execution plan, the list is not displayed (see area 2 below).
- - Execution detail of plans, which displays the detailed information of the selected execution plans. See [Execution plan in details](#execution-plan-in-details) (area 3 in the image below).
+ - Execution detail of plans, which displays the detailed information of the selected execution plans. See [Execution plan in details](#execution-details-of-plans) (area 3 in the image below).

![Details](/media/dashboard/dashboard-statement-detail.png)

2 changes: 1 addition & 1 deletion dashboard/dashboard-statement-list.md
@@ -56,7 +56,7 @@ On the setting page, you can disable or enable the SQL statements feature. When
- Collect interval: The length of period for each SQL statement analysis, which is 30 minutes by default. The SQL statements feature summarizes and counts all SQL statements within a period of time. If the period is too long, the granularity of the summary is coarse, which is not good for locating problems; if the period is too short, the granularity of the statistics is fine, which is good for locating problems, but this will result in more records and more memory usage within the same data retention duration. Therefore, you need to adjust this value based on the actual situation, and properly lower this value when locating problems.
- Data retain duration: The retention duration of summary information, which is 1 day by default. Data retained longer than this duration will be deleted from system tables.

- See [Configurations of Statement Summary Tables](/statement-summary-tables.md#configurations) for details.
+ See [Configurations of Statement Summary Tables](/statement-summary-tables.md#parameter-configuration) for details.

> **Note:**
>
2 changes: 1 addition & 1 deletion get-started-with-tidb-binlog.md
@@ -329,7 +329,7 @@ You should see the same rows that you inserted into TiDB when querying the Maria

## binlogctl

- Information about Pumps and Drainers that have joined the cluster is stored in PD. You can use the binlogctl tool to query and manipulate information about their states. See [binlogctl guide](/tidb-binlog/maintain-tidb-binlog-cluster.md#binlogctl-guide) for more information.
+ Information about Pumps and Drainers that have joined the cluster is stored in PD. You can use the binlogctl tool to query and manipulate information about their states. See [binlogctl guide](/tidb-binlog/binlog-control.md) for more information.

Use `binlogctl` to get a view of the current status of Pumps and Drainers in the cluster:

2 changes: 1 addition & 1 deletion glossary.md
@@ -13,7 +13,7 @@ aliases: ['/docs/dev/glossary/']

ACID refers to the four key properties of a transaction: atomicity, consistency, isolation, and durability. Each of these properties is described below.

- - **Atomicity** means that either all the changes of an operation are performed, or none of them are. TiDB ensures the atomicity of the [Region](#region) that stores the Primary Key to achieve the atomicity of transactions.
+ - **Atomicity** means that either all the changes of an operation are performed, or none of them are. TiDB ensures the atomicity of the [Region](#regionpeerraft-group) that stores the Primary Key to achieve the atomicity of transactions.

- **Consistency** means that transactions always bring the database from one consistent state to another. In TiDB, data consistency is ensured before writing data to the memory.

2 changes: 1 addition & 1 deletion pessimistic-transaction.md
@@ -41,7 +41,7 @@ The `BEGIN PESSIMISTIC;` and `BEGIN OPTIMISTIC;` statements take precedence over

## Behaviors

- Pessimistic transactions in TiDB behave similarly to those in MySQL. See the minor differences in [Difference with MySQL InnoDB](#difference-with-mysql-innoDB).
+ Pessimistic transactions in TiDB behave similarly to those in MySQL. See the minor differences in [Difference with MySQL InnoDB](#difference-with-mysql-innodb).

- When you perform the `SELECT FOR UPDATE` statement, transactions read the **latest** committed data and apply a pessimistic lock on the data being read.

2 changes: 1 addition & 1 deletion quick-start-with-tidb.md
@@ -127,7 +127,7 @@ The smallest TiDB cluster topology is as follows:
Other requirements for the target machine:

- The `root` user and its password is required
- - [Stop the firewall service of the target machine](/production-deployment-using-tiup.md#how-to-stop-the-firewall-service-of-deployment-machines), or open the port needed by the TiDB cluster nodes
+ - [Stop the firewall service of the target machine](/check-before-deployment.md#check-and-stop-the-firewall-service-of-target-machines), or open the port needed by the TiDB cluster nodes
- Currently, TiUP only supports deploying TiDB on the x86_64 (AMD64) architecture (the ARM architecture will be supported in TiDB 4.0 GA):

- It is recommended to use CentOS 7.3 or later versions on AMD64
2 changes: 1 addition & 1 deletion releases/release-2.1-ga.md
@@ -179,7 +179,7 @@ On November 30, 2018, TiDB 2.1 GA is released. See the following updates in this

- [calling `jq` to format the JSON output](/pd-control.md#jq-formatted-json-output-usage)

- - [checking the Region information of the specified store](/pd-control.md#region-store-store-id)
+ - [checking the Region information of the specified store](/pd-control.md#region-store-store_id)

- [checking topN Region list sorted by versions](/pd-control.md#region-topconfver-limit)

14 changes: 14 additions & 0 deletions scripts/verify-link-anchors.sh
@@ -0,0 +1,14 @@
#!/bin/bash
#
# In addition to verify-links.sh, this script also checks anchors.
#
# See https://docs.npmjs.com/resolving-eacces-permissions-errors-when-installing-packages-globally if you meet permission problems when executing npm install.

ROOT=$(unset CDPATH && cd $(dirname "${BASH_SOURCE[0]}")/.. && pwd)
cd $ROOT

npm install -g remark-cli remark-lint breeswish/remark-lint-pingcap-docs-anchor

echo "info: checking link anchors under $ROOT directory..."

remark --ignore-path .gitignore -u lint -u remark-lint-pingcap-docs-anchor . --frail --quiet
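Most of the 29 anchor fixes in this commit rewrite links to match how heading text is slugified: lowercased, punctuation other than hyphens and underscores dropped, spaces turned into hyphens — so a heading like "`allow-auto-random` New in v3.1.0" yields the anchor `allow-auto-random-new-in-v310`. A rough sketch of that transformation (an approximation, not the exact rules the remark plugin implements):

```shell
#!/bin/bash
# Approximate GitHub-style anchor slugification: lowercase, strip
# punctuation except "-" and "_", turn spaces into hyphens.
slugify() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed -e 's/[^a-z0-9 _-]//g' -e 's/ /-/g'
}

slugify '`ANALYZE_STATUS` table'; echo            # analyze_status-table
slugify '`allow-auto-random` New in v3.1.0'; echo # allow-auto-random-new-in-v310
```

This is why, for example, `#analyze-status-table` had to become `#analyze_status-table`: the underscore in the heading survives slugification.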
7 changes: 3 additions & 4 deletions scripts/verify-links.sh
@@ -10,13 +10,12 @@
# - When a file was moved, all other references are required to be updated for now, even if aliases are given
# - This is recommended because of less redirects and better anchors support.
#
# See https://docs.npmjs.com/resolving-eacces-permissions-errors-when-installing-packages-globally if you meet permission problems when executing npm install.

ROOT=$(unset CDPATH && cd $(dirname "${BASH_SOURCE[0]}")/.. && pwd)
cd $ROOT

- if ! which markdown-link-check &>/dev/null; then
-     sudo npm install -g [email protected]
- fi
+ npm install -g [email protected]

VERBOSE=${VERBOSE:-}
CONFIG_TMP=$(mktemp)
@@ -50,7 +49,7 @@ fi
while read -r tasks; do
for task in $tasks; do
(
-         output=$(markdown-link-check --color --config "$CONFIG_TMP" "$task" -q)
+         output=$(markdown-link-check --config "$CONFIG_TMP" "$task" -q)
if [ $? -ne 0 ]; then
printf "$output" >> $ERROR_REPORT
fi
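The loop above captures the checker's output and inspects its exit status before deciding whether to append to the error report. The pattern in isolation, with a stub standing in for `markdown-link-check` (the stub and file names are illustrative, not part of the script):

```shell
#!/bin/bash
# Capture a command's output and exit status, recording the output
# only when the command failed. "check" is a stub in place of
# markdown-link-check.
ERROR_REPORT=$(mktemp)

check() {
  echo "ERROR: dead link in $1"
  return 1
}

# For a plain assignment, $? is the command substitution's status.
output=$(check "sample.md")
if [ $? -ne 0 ]; then
  printf '%s\n' "$output" >> "$ERROR_REPORT"
fi

cat "$ERROR_REPORT"
```

Collecting failures into a report file rather than exiting on the first error lets the CI job surface every broken link in one run.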
2 changes: 1 addition & 1 deletion sql-plan-management.md
@@ -91,7 +91,7 @@ This statement removes a specified execution plan binding at the GLOBAL or SESSI

Generally, the binding in the SESSION scope is mainly used for test or in special situations. For a binding to take effect in all TiDB processes, you need to use the GLOBAL binding. A created SESSION binding shields the corresponding GLOBAL binding until the end of the SESSION, even if the SESSION binding is dropped before the session closes. In this case, no binding takes effect and the plan is selected by the optimizer.

- The following example is based on the example in [create binding](#create-binding) in which the SESSION binding shields the GLOBAL binding:
+ The following example is based on the example in [create binding](#create-a-binding) in which the SESSION binding shields the GLOBAL binding:

```sql
-- Drops the binding created in the SESSION scope.
2 changes: 1 addition & 1 deletion sql-statements/sql-statement-recover-table.md
@@ -63,7 +63,7 @@ When you use `RECOVER TABLE` in the upstream TiDB during TiDB Binlog replication

+ Latency occurs during replication between upstream and downstream databases. An error instance: `snapshot is older than GC safe point 2019-07-10 13:45:57 +0800 CST`.

- For the above three situations, you can resume data replication from TiDB Binlog with a [full import of the deleted table](/ecosystem-tool-user-guide.md#full-backup-and-restore-of-tidb-cluster-data-1).
+ For the above three situations, you can resume data replication from TiDB Binlog with a [full import of the deleted table](/ecosystem-tool-user-guide.md#backup-and-restore).

## Examples

2 changes: 1 addition & 1 deletion system-tables/system-table-inspection-result.md
@@ -275,7 +275,7 @@ The `threshold-check` diagnostic rule checks whether the following metrics in th
| Component | Monitoring metric | Monitoring table | Expected value | Description |
| :---- | :---- | :---- | :---- | :---- |
| TiDB | tso-duration | pd_tso_wait_duration | < 50ms | The wait duration of getting the TSO of transaction. |
- | TiDB | get-token-duration | tidb_get_token_duration | < 1ms | Queries the time it takes to get the token. The related TiDB configuration item is [`token-limit`](/command-line-flags-for-tidb-configuration.md#token-limit). |
+ | TiDB | get-token-duration | tidb_get_token_duration | < 1ms | Queries the time it takes to get the token. The related TiDB configuration item is [`token-limit`](/command-line-flags-for-tidb-configuration.md#--token-limit). |
| TiDB | load-schema-duration | tidb_load_schema_duration | < 1s | The time it takes for TiDB to update the schema metadata.|
| TiKV | scheduler-cmd-duration | tikv_scheduler_command_duration | < 0.1s | The time it takes for TiKV to execute the KV `cmd` request. |
| TiKV | handle-snapshot-duration | tikv_handle_snapshot_duration | < 30s | The time it takes for TiKV to handle the snapshot. |
6 changes: 3 additions & 3 deletions telemetry.md
@@ -61,7 +61,7 @@ TIUP_CLUSTER_DEBUG=enable tiup cluster list

### Disable TiDB telemetry at deployment

- When deploying TiDB clusters, configure [`enable-telemetry = false`](/tidb-configuration-file.md#enable-telemetry) to disable the TiDB telemetry collection on all TiDB instances. You can also use this setting to disable telemetry in an existing TiDB cluster, which does not take effect until you restart the cluster.
+ When deploying TiDB clusters, configure [`enable-telemetry = false`](/tidb-configuration-file.md#enable-telemetry-new-in-v402) to disable the TiDB telemetry collection on all TiDB instances. You can also use this setting to disable telemetry in an existing TiDB cluster, which does not take effect until you restart the cluster.

Detailed steps to disable telemetry in different deployment tools are listed below.

@@ -78,7 +78,7 @@ enable-telemetry = false

Specify the `--config=tidb_config.toml` command-line parameter when starting TiDB for the configuration file above to take effect.

- See [TiDB Configuration Options](/command-line-flags-for-tidb-configuration.md#--config) and [TiDB Configuration File](/tidb-configuration-file.md#enable-telemetry) for details.
+ See [TiDB Configuration Options](/command-line-flags-for-tidb-configuration.md#--config) and [TiDB Configuration File](/tidb-configuration-file.md#enable-telemetry-new-in-v402) for details.

</details>

@@ -154,7 +154,7 @@ See [Deploy TiDB Operator in Kubernetes](https://docs.pingcap.com/tidb-in-kubern

### Disable TiDB telemetry for deployed TiDB clusters

- In existing TiDB clusters, you can also modify the system variable [`tidb_enable_telemetry`](/tidb-specific-system-variables.md#tidb_enable_telemetry) to dynamically disable the TiDB telemetry collection:
+ In existing TiDB clusters, you can also modify the system variable [`tidb_enable_telemetry`](/tidb-specific-system-variables.md#tidb_enable_telemetry-new-in-v402-version) to dynamically disable the TiDB telemetry collection:

{{< copyable "sql" >}}

2 changes: 1 addition & 1 deletion tidb-binlog/handle-tidb-binlog-errors.md
@@ -33,4 +33,4 @@ Solution: Clean up the disk space and then restart Pump.

Cause: When Pump is started, it notifies all Drainer nodes that are in the `online` state. If it fails to notify Drainer, this error log is printed.

- Solution: Use the [binlogctl tool](/tidb-binlog/maintain-tidb-binlog-cluster.md#binlog-guide) to check whether each Drainer node is normal or not. This is to ensure that all Drainer nodes that are in the `online` state are working normally. If the state of a Drainer node is not consistent with its actual working status, use the binlogctl tool to change its state and then restart Pump.
+ Solution: Use the [binlogctl tool](/tidb-binlog/binlog-control.md) to check whether each Drainer node is normal or not. This is to ensure that all Drainer nodes that are in the `online` state are working normally. If the state of a Drainer node is not consistent with its actual working status, use the binlogctl tool to change its state and then restart Pump.
2 changes: 1 addition & 1 deletion tidb-binlog/maintain-tidb-binlog-cluster.md
@@ -43,7 +43,7 @@ Pump or Drainer state description:
* Pause: You can pause a Drainer process by using the `kill` command (not `kill -9`), pressing <kbd>Ctrl</kbd>+<kbd>C</kbd> or using the `pause-drainer` command in the binlogctl tool. After receiving the pause instruction, the Drainer node sets its state to `pausing` and stops pulling binlogs from Pump nodes. After all threads are safely exited, the Drainer node sets its state to `paused` and exits the process.
* Offline: You can close a Drainer process only by using the `offline-drainer` command in the binlogctl tool. After receiving the offline instruction, the Drainer node sets its state to `closing` and stops pulling binlogs from Pump nodes. After all threads are safely exited, the Drainer node updates its state to `offline` and exits the process.

- For how to pause, close, check, and modify the state of Drainer, see the [binlogctl guide](#binlogctl-guide) as follows.
+ For how to pause, close, check, and modify the state of Drainer, see the [binlogctl guide](/tidb-binlog/binlog-control.md).

## Use `binlogctl` to manage Pump/Drainer
