diff --git a/.lycheeignore b/.lycheeignore
index ce919d2c77c0f..9c1e0c7f4165d 100644
--- a/.lycheeignore
+++ b/.lycheeignore
@@ -15,4 +15,19 @@ http://\{grafana-ip\}:3000
 http://\{pd-ip\}:2379/dashboard
 http://localhost:\d+/
 https://github\.com/\$user/(docs|docs-cn)
-https://linux.die.net/man.*
+https://linux\.die\.net/man.*
+https://dev\.mysql\.com/doc/.+/5\.7/en/.*
+https://dev\.mysql\.com/doc/.+/8\.0/en/.*
+https://dev\.mysql\.com/doc/.+/8\.4/en/.*
+https://dev\.mysql\.com/doc/[a-z\-]+/en/.*
+https://dev\.mysql\.com/doc/relnotes/[a-z\-]+/en/.*
+https://dev\.mysql\.com/doc/dev/mysql-server/.*
+https://dev\.mysql\.com/downloads/.*
+https://bugs\.mysql\.com/bug\.php.*
+https://www\.mysql\.com/products/.*
+https://platform\.openai\.com/docs/.*
+https://openai\.com/.*
+https://jwt\.io/
+https://typeorm\.io/.*
+https://dash\.cloudflare\.com/.*
+https://centminmod\.com/mydumper\.html
diff --git a/tidb-cloud/import-with-mysql-cli-serverless.md b/tidb-cloud/import-with-mysql-cli-serverless.md
index 87e03095d02f2..4818233915b69 100644
--- a/tidb-cloud/import-with-mysql-cli-serverless.md
+++ b/tidb-cloud/import-with-mysql-cli-serverless.md
@@ -67,7 +67,7 @@ Do the following to import data from an SQL file:
 2. Use the following command to import data from the SQL file:
 
     ```bash
-    mysql --comments --connect-timeout 150 -u '' -h -P 4000 -D test --ssl-mode=VERIFY_IDENTITY --ssl-ca= -p < product_data.sql
+    mysql --comments --connect-timeout 150 -u '' -h -P 4000 -D test --ssl-mode=VERIFY_IDENTITY --ssl-ca= -p < product_data.sql
     ```
 
 > **Note:**
diff --git a/tidb-cloud/import-with-mysql-cli.md b/tidb-cloud/import-with-mysql-cli.md
index bc42207d5f4b7..314783d437da4 100644
--- a/tidb-cloud/import-with-mysql-cli.md
+++ b/tidb-cloud/import-with-mysql-cli.md
@@ -20,9 +20,13 @@ Connect to your TiDB cluster.
 
 1. Navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page, and then click the name of your target cluster to go to its overview page.
 
-2. Click **Connect** in the upper-right corner. A connection dialog is displayed.
+2. In the left navigation pane, click **Networking**.
 
-3. Click **Allow Access from Anywhere**.
+3. On the **Networking** page, click **Add IP Address** in the **IP Access List** area.
+
+4. In the dialog, choose **Allow access from anywhere**, and then click **Confirm**.
+
+5. In the upper-right corner, click **Connect** to open the connection information dialog.
 
 For more details about how to obtain the connection string, see [Connect to TiDB Cloud Dedicated via Public Connection](/tidb-cloud/connect-via-standard-connection.md).
 
@@ -59,7 +63,7 @@ Do the following to import data from an SQL file:
 2. Use the following command to import data from the SQL file:
 
     ```bash
-    mysql --comments --connect-timeout 150 -u '' -h -P 4000 -D test --ssl-mode=VERIFY_IDENTITY --ssl-ca= -p < product_data.sql
+    mysql --comments --connect-timeout 150 -u '' -h -P 4000 -D test --ssl-mode=VERIFY_IDENTITY --ssl-ca= -p < product_data.sql
     ```
 
 > **Note:**
diff --git a/tidb-cloud/migrate-from-mysql-using-data-migration.md b/tidb-cloud/migrate-from-mysql-using-data-migration.md
index 392f9d0102143..6032d21a157e5 100644
--- a/tidb-cloud/migrate-from-mysql-using-data-migration.md
+++ b/tidb-cloud/migrate-from-mysql-using-data-migration.md
@@ -145,7 +145,14 @@ If your MySQL service is in a Google Cloud VPC, take the following steps:
 
 ### Enable binary logs
 
-To perform incremental data migration, make sure you have enabled binary logs of the upstream database, and the binary logs have been kept for more than 24 hours.
+To perform incremental data migration, make sure the following requirements are met:
+
+- Binary logs are enabled for the upstream database.
+- The binary logs are retained for at least 24 hours.
+- The binlog format for the upstream database is set to `ROW`. If it is not, update the format to `ROW` as follows to avoid the [format error](/tidb-cloud/tidb-cloud-dm-precheck-and-troubleshooting.md#error-message-check-whether-mysql-binlog_format-is-row):
+
+    - MySQL: execute the `SET GLOBAL binlog_format=ROW;` statement. To persist this change across reboots, execute the `SET PERSIST binlog_format=ROW;` statement instead.
+    - Amazon Aurora MySQL or RDS for MySQL: follow the instructions in the [AWS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_WorkingWithDBInstanceParamGroups.html) to create a new DB parameter group, set the `binlog_format=row` parameter in it, modify the instance to use the new DB parameter group, and then restart the instance for the change to take effect.
 
 ## Step 1: Go to the **Data Migration** page
 
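The `.lycheeignore` entries added above are regular expressions, one per line. A quick way to sanity-check additions like these before committing is to match each pattern against a representative URL it is meant to suppress. The sketch below uses Python's `re` module; the sample URLs are illustrative assumptions, and it assumes lychee applies each line as an unanchored regex:

```python
import re

# Patterns copied from the .lycheeignore additions in this diff.
patterns = [
    r"https://dev\.mysql\.com/doc/.+/8\.0/en/.*",
    r"https://dev\.mysql\.com/doc/relnotes/[a-z\-]+/en/.*",
    r"https://jwt\.io/",
    r"https://typeorm\.io/.*",
]

# Illustrative URLs (assumptions) that each pattern should cover.
samples = [
    "https://dev.mysql.com/doc/refman/8.0/en/binary-log.html",
    "https://dev.mysql.com/doc/relnotes/mysql/en/",
    "https://jwt.io/",
    "https://typeorm.io/data-source",
]

for pattern, url in zip(patterns, samples):
    assert re.search(pattern, url), f"{pattern} does not match {url}"
print("all ignore patterns match their sample URLs")
```

A check like this also catches over-broad patterns early; for example, an unescaped dot in `5.7` would match any character in that position, not just a literal dot.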