---
title: Precheck Errors, Migration Errors, and Alerts for Data Migration
summary: Learn how to resolve precheck errors, migration errors, and alerts when using Data Migration.
---
This document describes how to resolve precheck errors, troubleshoot migration errors, and subscribe to alerts when you use Data Migration to migrate data.
## Precheck errors and solutions

This section describes the precheck errors that you might encounter during data migration and the corresponding solutions. These errors are shown on the **Precheck** page when you migrate data using Data Migration.
The solutions vary depending on your upstream database.
### `server_id` is not set

- Amazon Aurora MySQL or Amazon RDS: `server_id` is configured by default. You do not need to configure it. Make sure you are using Amazon Aurora MySQL writer instances to support both full and incremental data migration.
- MySQL: to configure `server_id` for MySQL, see Setting the Replication Source Configuration.
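For self-managed MySQL, you can check and set `server_id` from a client session. This is a minimal sketch; the value `177` is a placeholder, and you should also persist the setting in the server configuration file (for example, `my.cnf`) so that it survives a restart:

```sql
-- Check the current value. A value of 0 means server_id is not set.
SHOW VARIABLES LIKE 'server_id';

-- Set a non-zero ID that is unique among all servers in the replication
-- topology. 177 is a placeholder value.
SET GLOBAL server_id = 177;
```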
### Binlog is not enabled in the upstream database

- Amazon Aurora MySQL: see How do I turn on binary logging for my Amazon Aurora MySQL-Compatible cluster. Make sure you are using Amazon Aurora MySQL writer instances to support both full and incremental data migration.
- Amazon RDS: see Configuring MySQL binary logging.
- Google Cloud SQL for MySQL: Google enables binary logging through point-in-time recovery for MySQL master databases. See Enable point-in-time recovery.
- MySQL: see Setting the Replication Source Configuration.
### `binlog_format` is not ROW

- Amazon Aurora MySQL: see How do I turn on binary logging for my Amazon Aurora MySQL-Compatible cluster. Make sure you are using Amazon Aurora MySQL writer instances to support both full and incremental data migration.
- Amazon RDS: see Configuring MySQL binary logging.
- MySQL: execute `SET GLOBAL binlog_format=ROW;`. See Setting The Binary Log Format.
### `binlog_row_image` is not FULL

- Amazon Aurora MySQL: `binlog_row_image` is not configurable. This precheck item does not fail for it. Make sure you are using Amazon Aurora MySQL writer instances to support both full and incremental data migration.
- Amazon RDS: the process is similar to setting the `binlog_format` parameter. The only difference is that the parameter you need to change is `binlog_row_image` instead of `binlog_format`. See Configuring MySQL binary logging.
- MySQL: execute `SET GLOBAL binlog_row_image = FULL;`. See Binary Logging Options and Variables.
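After changing these parameters, you can verify both settings from any MySQL client before rerunning the precheck. A minimal sketch for self-managed MySQL; note that `SET GLOBAL` takes effect for new sessions only, so persist the values in the server configuration file as well:

```sql
-- Both checks must pass for full and incremental migration.
SHOW VARIABLES LIKE 'binlog_format';     -- expected: ROW
SHOW VARIABLES LIKE 'binlog_row_image';  -- expected: FULL

-- On self-managed MySQL, both can be changed at runtime.
SET GLOBAL binlog_format = 'ROW';
SET GLOBAL binlog_row_image = 'FULL';
```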
### Migrated databases are filtered by binlog filter rules

Make sure that binlog has been enabled in the upstream database. See Check whether mysql binlog is enabled. After that, resolve the issue according to the message you get:

- If the message is similar to `These dbs xxx are not in binlog_do_db xxx`, make sure all the databases that you want to migrate are in the list. See `--binlog-do-db=db_name`.
- If the message is similar to `These dbs xxx are in binlog_ignore_db xxx`, make sure all the databases that you want to migrate are not in the ignore list. See `--binlog-ignore-db=db_name`.

For Amazon Aurora MySQL, this precheck item does not fail. Make sure you are using Amazon Aurora MySQL writer instances to support both full and incremental data migration.

For Amazon RDS, you need to change the following parameters: `replicate-do-db`, `replicate-do-table`, `replicate-ignore-db`, and `replicate-ignore-table`. See Configuring MySQL binary logging.
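To see which filters are currently in effect on a self-managed MySQL server, you can inspect the binlog status. A minimal sketch:

```sql
-- Confirm that binlog is enabled.
SHOW VARIABLES LIKE 'log_bin';  -- expected: ON

-- The Binlog_Do_DB and Binlog_Ignore_DB columns show the active filters.
SHOW MASTER STATUS;
```

Because `--binlog-do-db` and `--binlog-ignore-db` are server startup options, changing them requires editing the server configuration file and restarting MySQL.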
### Connection count exceeds `max_connections`

If the error occurs in the upstream database, set `max_connections` as follows:

- Amazon Aurora MySQL: the process is similar to setting the `binlog_format` parameter. The only difference is that the parameter you change is `max_connections` instead of `binlog_format`. See How do I turn on binary logging for my Amazon Aurora MySQL-Compatible cluster.
- Amazon RDS: the process is similar to setting the `binlog_format` parameter. The only difference is that the parameter you change is `max_connections` instead of `binlog_format`. See Configuring MySQL binary logging.
- MySQL: configure `max_connections` following the document max_connections.

If the error occurs in the TiDB Cloud cluster, configure `max_connections` following the document max_connections.
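For self-managed MySQL, you can compare the current connection count against the limit before raising it. A sketch; the new limit of `1024` is a placeholder:

```sql
-- Current limit and current usage.
SHOW VARIABLES LIKE 'max_connections';
SHOW STATUS LIKE 'Threads_connected';

-- Raise the limit at runtime (placeholder value); persist it in the
-- configuration file so it survives a restart.
SET GLOBAL max_connections = 1024;
```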
## Migration errors and solutions

This section describes the errors you might encounter during migration and their solutions. These error messages are shown on the **Migration Job Details** page.
Error message: "The required binary log for migration no longer exists on the source database. Please make sure binary log files are kept for long enough time for migration to succeed."
This error means that the binlogs required for migration have been purged. In this case, you can only create a new migration task.

Ensure that the binlogs required for incremental migration exist. It is recommended to configure `expire_logs_days` to extend the binlog retention period. Do not use `PURGE BINARY LOGS` to clean up binlogs that are still needed by a migration job.
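For self-managed MySQL, you can check and extend the binlog retention period. Note that MySQL 8.0 uses `binlog_expire_logs_seconds`, which supersedes the older `expire_logs_days`. A sketch; 7 days is a placeholder retention:

```sql
-- MySQL 8.0: retention is controlled in seconds.
SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';
SET GLOBAL binlog_expire_logs_seconds = 604800;  -- 7 days (placeholder)

-- MySQL 5.7: retention is controlled in days.
-- SET GLOBAL expire_logs_days = 7;
```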
Error message: "Failed to connect to the source database using given parameters. Please make sure the source database is up and can be connected using the given parameters."
This error means that the connection to the source database failed. Check whether the source database is started and can be connected to using the specified parameters. After confirming that the source database is available, you can try to resume the task by clicking **Restart**.
The migration task is interrupted and contains the error `driver: bad connection` or `invalid connection`

This error means that the connection to the downstream TiDB cluster failed. Check whether the downstream TiDB cluster is in a normal state (including **Available** and **Modifying**) and whether it can be connected to with the username and password specified by the job. After confirming that the downstream TiDB cluster is available, you can try to resume the task by clicking **Restart**.
Error message: "Failed to connect to the TiDB cluster using the given user and password. Please make sure TiDB Cluster is up and can be connected to using the given user and password."
Failed to connect to the TiDB cluster. It is recommended to check whether the TiDB cluster is in a normal state (including **Available** and **Modifying**) and whether it can be connected to with the username and password specified by the job. After confirming that the TiDB cluster is available, you can try to resume the task by clicking **Restart**.

The TiDB cluster storage is running low. It is recommended to increase the TiKV node storage and then resume the task by clicking **Restart**.
Error message: "Failed to connect to the source database. Please check whether the database is available or the maximum connections have been reached."
Failed to connect to the source database. It is recommended to check whether the source database is started, whether the number of database connections has reached the upper limit, and whether you can connect to it using the parameters specified by the job. After confirming that the source database is available, you can try to resume the job by clicking **Restart**.
Error message: "Error 1273: Unsupported collation when new collation is enabled: 'utf8mb4_0900_ai_ci'"
Failed to create a schema in the downstream TiDB cluster. This error means that the collation used by the upstream MySQL is not supported by the TiDB cluster.
To resolve this issue, you can create a schema in the TiDB cluster based on a supported collation, and then resume the task by clicking **Restart**.
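For example, assuming the affected schema is named `mydb` (a placeholder), you can recreate it in the TiDB cluster with a collation that TiDB supports, such as `utf8mb4_general_ci` or `utf8mb4_bin`:

```sql
-- Create the schema with a TiDB-supported collation instead of
-- utf8mb4_0900_ai_ci. "mydb" is a placeholder name.
CREATE DATABASE mydb CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
```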
## Alerts

You can subscribe to TiDB Cloud alert emails to be informed in time when an alert occurs.
The following are alerts about Data Migration:

- "Data migration job met error during data export"

    Recommended action: check the error message on the data migration page and see Migration errors and solutions for help.

- "Data migration job met error during data import"

    Recommended action: check the error message on the data migration page and see Migration errors and solutions for help.

- "Data migration job met error during incremental data migration"

    Recommended action: check the error message on the data migration page and see Migration errors and solutions for help.

- "Data migration job has been paused for more than 6 hours during incremental migration"

    Recommended action: resume the data migration job or ignore this alert.

- "Replication lag is larger than 10 minutes and still increasing for more than 20 minutes"

    Recommended action: contact TiDB Cloud Support for help.
If you need help to address these alerts, contact TiDB Cloud Support for consultation.
For more information about how to subscribe to alert emails, see TiDB Cloud Built-in Alerting.