page_title | subcategory | description |
---|---|---|
tidbcloud_import Resource - terraform-provider-tidbcloud | | import resource |

# tidbcloud_import (Resource)

The import resource is used to import data into a TiDB Cloud cluster, either from a local file or from Amazon S3.
## Example Usage

```terraform
terraform {
  required_providers {
    tidbcloud = {
      source = "tidbcloud/tidbcloud"
    }
  }
}

provider "tidbcloud" {
  public_key  = "fake_public_key"
  private_key = "fake_private_key"
}

# Import a local CSV file into an existing table
resource "tidbcloud_import" "example_local" {
  project_id  = "fake_id"
  cluster_id  = "fake_id"
  type        = "LOCAL"
  data_format = "CSV"
  target_table = {
    schema = "test"
    table  = "t"
  }
  file_name = "fake_path"
}

# Import CSV files from Amazon S3
resource "tidbcloud_import" "example_s3_csv" {
  project_id   = "fake_id"
  cluster_id   = "fake_id"
  type         = "S3"
  data_format  = "CSV"
  aws_role_arn = "fake_arn"
  source_url   = "fake_url"
}

# Import Parquet files from Amazon S3
resource "tidbcloud_import" "example_s3_parquet" {
  project_id   = "1369847559691367867"
  cluster_id   = "1373933076658240623"
  type         = "S3"
  data_format  = "Parquet"
  aws_role_arn = "fake_arn"
  source_url   = "fake_url"
}
```
## Schema

### Required

- `cluster_id` (String) The ID of your cluster.
- `data_format` (String) The format of the data to import. Enum: `"SqlFile"`, `"AuroraSnapshot"`, `"CSV"`, `"Parquet"`.
- `project_id` (String) The ID of the project. You can get the project ID from the `tidbcloud_projects` data source.
- `type` (String) The type of data source. Enum: `"S3"`, `"LOCAL"`.

### Optional

- `aws_role_arn` (String) The ARN of the AWS IAM role, used when importing from S3.
- `csv_format` (Attributes) The CSV configuration. See https://docs.pingcap.com/tidbcloud/csv-config-for-import-data for more details. (see below for nested schema)
- `file_name` (String) The local file path, used when importing from a local file (`type = "LOCAL"`).
- `source_url` (String) The full S3 path that contains the data to import, used when importing from S3.
- `target_table` (Attributes) The target database and table to import data into, used when importing from a local file. (see below for nested schema)
### Read-Only

- `all_completed_tables` (List of Object) All tables completed by the import task. (see below for nested schema)
- `completed_percent` (Number) The completed percentage of the import task.
- `completed_tables` (Number) The number of tables the import task has completed.
- `created_at` (String) The creation time of the import task.
- `elapsed_time_seconds` (Number) The elapsed time of the import task, in seconds.
- `id` (String) The ID of the import.
- `message` (String) The message of the import task.
- `new_file_name` (String) The file name returned when generating the upload URL, used when importing from a local file.
- `pending_tables` (Number) The number of tables pending in the import task.
- `post_import_completed_percent` (Number) The completed percentage of the post-import phase.
- `processed_source_data_size` (String) The size of the source data processed by the import task.
- `status` (String) The status of the import task.
- `total_files` (Number) The total number of files in the import task.
- `total_size` (String) The total size of the import task.
- `total_tables_count` (Number) The total number of tables in the import task.
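These attributes are computed after the import task is created and can be referenced like any other resource attribute. A minimal sketch (assuming the `example_local` resource from the example above) that surfaces the task's progress as outputs:

```terraform
# Illustrative only: expose computed attributes of the import task.
output "import_status" {
  value = tidbcloud_import.example_local.status
}

output "import_completed_percent" {
  value = tidbcloud_import.example_local.completed_percent
}
```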
### Nested Schema for `csv_format`

Optional:

- `backslash_escape` (Boolean) Whether to parse backslashes inside CSV fields as escape characters (default: true).
- `delimiter` (String) The delimiter used for quoting in the CSV file (default: `"`).
- `header` (Boolean) Whether to treat the first row of the CSV file as a header (default: true).
- `separator` (String) The field separator of the CSV file (default: `,`).
- `trim_last_separator` (Boolean) Whether to treat the separator as the line terminator and trim all trailing separators (default: false).
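None of the examples above sets `csv_format`, so the following sketch shows how it might be combined with an S3 CSV import. The resource name and all values are illustrative placeholders, not recommended settings:

```terraform
# Illustrative sketch: import pipe-separated files that have no header row.
resource "tidbcloud_import" "example_s3_pipe_csv" {
  project_id   = "fake_id"
  cluster_id   = "fake_id"
  type         = "S3"
  data_format  = "CSV"
  aws_role_arn = "fake_arn"
  source_url   = "fake_url"

  csv_format = {
    separator           = "|"
    header              = false
    backslash_escape    = true
    delimiter           = "\""
    trim_last_separator = false
  }
}
```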
### Nested Schema for `target_table`

Optional:

- `database` (String) The target database in your cluster.
- `table` (String) The target table in your cluster.
### Nested Schema for `all_completed_tables`

Read-Only:

- `message` (String)
- `result` (String)
- `table_name` (String)
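Each element of `all_completed_tables` is an object, so the list can be iterated with a `for` expression. A small sketch (again assuming the `example_local` resource name) that collects the names of tables the import task has finished processing:

```terraform
# Illustrative only: list the table names reported by the completed import task.
output "completed_table_names" {
  value = [for t in tidbcloud_import.example_local.all_completed_tables : t.table_name]
}
```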