diff --git a/.changes/v3.11.0/1063-features.md b/.changes/v3.11.0/1063-features.md new file mode 100644 index 000000000..8a97de5ab --- /dev/null +++ b/.changes/v3.11.0/1063-features.md @@ -0,0 +1 @@ +* Add support for **Container Service Extension v4.1** by updating the installation guide [GH-1063] diff --git a/.changes/v3.11.0/1120-deprecations.md b/.changes/v3.11.0/1120-deprecations.md new file mode 100644 index 000000000..c899671aa --- /dev/null +++ b/.changes/v3.11.0/1120-deprecations.md @@ -0,0 +1,3 @@ +* Resource `vcd_org_vdc` deprecates `edge_cluster_id` in favor of new resource + `vcd_org_vdc_nsxt_network_profile` that can configure NSX-T Edge Clusters and default Segment + Profile Templates for NSX-T VDCs [GH-1120] diff --git a/.changes/v3.11.0/1120-features.md b/.changes/v3.11.0/1120-features.md new file mode 100644 index 000000000..203ec51c8 --- /dev/null +++ b/.changes/v3.11.0/1120-features.md @@ -0,0 +1,13 @@ +* **New Data Source:** `vcd_nsxt_segment_ip_discovery_profile` to read NSX-T IP Discovery Segment Profiles [GH-1120] +* **New Data Source:** `vcd_nsxt_segment_mac_discovery_profile` to read NSX-T MAC Discovery Segment Profiles [GH-1120] +* **New Data Source:** `vcd_nsxt_segment_spoof_guard_profile` to read NSX-T Spoof Guard Profiles [GH-1120] +* **New Data Source:** `vcd_nsxt_segment_qos_profile` to read NSX-T QoS Profiles [GH-1120] +* **New Data Source:** `vcd_nsxt_segment_security_profile` to read NSX-T Segment Security Profiles [GH-1120] +* **New Resource:** `vcd_nsxt_segment_profile_template` to manage NSX-T Segment Profile Templates [GH-1120] +* **New Data Source:** `vcd_nsxt_segment_profile_template` to read NSX-T Segment Profile Templates [GH-1120] +* **New Resource:** `vcd_nsxt_global_default_segment_profile_template` to manage NSX-T Global Default Segment Profile Templates [GH-1120] +* **New Data Source:** `vcd_nsxt_global_default_segment_profile_template` to read NSX-T Global Default Segment Profile Templates [GH-1120] +* **New Resource:** `vcd_org_vdc_nsxt_network_profile` to manage default Segment Profile Templates for NSX-T VDCs [GH-1120] +* **New Data Source:** `vcd_org_vdc_nsxt_network_profile` to read default Segment Profile Templates for NSX-T VDCs [GH-1120] +* **New Resource:** `vcd_nsxt_network_segment_profile` to manage individual Segment Profiles or Segment Profile Templates for NSX-T Org VDC Networks [GH-1120] +* **New Data Source:** `vcd_nsxt_network_segment_profile` to read individual Segment Profiles or Segment Profile Templates for NSX-T Org VDC Networks [GH-1120] diff --git a/examples/container-service-extension-3.1.x/3.8-cse-3.1.x-install.tf b/examples/container-service-extension-3.1.x/3.8-cse-3.1.x-install.tf index 48b02e9d8..4c613f640 100644 --- a/examples/container-service-extension-3.1.x/3.8-cse-3.1.x-install.tf +++ b/examples/container-service-extension-3.1.x/3.8-cse-3.1.x-install.tf @@ -1,3 +1,8 @@ +# ------------------------------------------------------------------------------------------------------------ +# WARNING: This CSE installation method is deprecated in favor of CSE v4.x. 
Please have a look at +# https://registry.terraform.io/providers/vmware/vcd/latest/docs/guides/container_service_extension_4_x_install +# ------------------------------------------------------------------------------------------------------------ + # ------------------------------------------------------------------------------------------------------------ # CSE 3.1.x installation example HCL: # diff --git a/examples/container-service-extension-3.1.x/terraform.tfvars.example b/examples/container-service-extension-3.1.x/terraform.tfvars.example index 05a75aeed..7527a9c59 100644 --- a/examples/container-service-extension-3.1.x/terraform.tfvars.example +++ b/examples/container-service-extension-3.1.x/terraform.tfvars.example @@ -1,3 +1,8 @@ +# ------------------------------------------------------------------------------------------------------------ +# WARNING: This CSE installation method is deprecated in favor of CSE v4.x. Please have a look at +# https://registry.terraform.io/providers/vmware/vcd/latest/docs/guides/container_service_extension_4_x_install +# ------------------------------------------------------------------------------------------------------------ + # Change configuration to your needs and rename to 'terraform.tfvars' # ------------------------------------------------ diff --git a/examples/container-service-extension-3.1.x/variables-cse.tf b/examples/container-service-extension-3.1.x/variables-cse.tf index b63b237c3..d0b6b9621 100644 --- a/examples/container-service-extension-3.1.x/variables-cse.tf +++ b/examples/container-service-extension-3.1.x/variables-cse.tf @@ -1,3 +1,8 @@ +# ------------------------------------------------------------------------------------------------------------ +# WARNING: This CSE installation method is deprecated in favor of CSE v4.x. Please have a look at +# https://registry.terraform.io/providers/vmware/vcd/latest/docs/guides/container_service_extension_4_x_install +# ------------------------------------------------------------------------------------------------------------ + # These variables are for configuring the CSE installation variable "org-name" { diff --git a/examples/container-service-extension-3.1.x/variables-provider.tf b/examples/container-service-extension-3.1.x/variables-provider.tf index 48dcd111a..34e265818 100644 --- a/examples/container-service-extension-3.1.x/variables-provider.tf +++ b/examples/container-service-extension-3.1.x/variables-provider.tf @@ -1,3 +1,8 @@ +# ------------------------------------------------------------------------------------------------------------ +# WARNING: This CSE installation method is deprecated in favor of CSE v4.x. 
Please have a look at +# https://registry.terraform.io/providers/vmware/vcd/latest/docs/guides/container_service_extension_4_x_install +# ------------------------------------------------------------------------------------------------------------ + # These variables are for configuring the VCD provider variable "admin-user" { diff --git a/examples/container-service-extension-4.0/install/step1/3.10-cse-4.0-install-step1.tf b/examples/container-service-extension-4.0/install/step1/3.10-cse-4.0-install-step1.tf deleted file mode 100644 index 75db4f1dc..000000000 --- a/examples/container-service-extension-4.0/install/step1/3.10-cse-4.0-install-step1.tf +++ /dev/null @@ -1,111 +0,0 @@ -# ------------------------------------------------------------------------------------------------------------ -# CSE 4.0 installation, step 1: -# -# * Please read the guide present at https://registry.terraform.io/providers/vmware/vcd/latest/docs/guides/container_service_extension_4_0_install -# before applying this configuration. -# -# * The installation process is split into two steps as Providers will need to generate an API token for the created -# CSE administrator user, in order to use it with the CSE Server that will be deployed in the second step. -# -# * This step will only create the required Runtime Defined Entity (RDE) Interfaces, Types, Role and finally -# the CSE administrator user. -# -# * Rename "terraform.tfvars.example" to "terraform.tfvars" and adapt the values to your needs. -# Other than that, this snippet should be applied as it is. -# You can check the comments on each resource/data source for more help and context. -# ------------------------------------------------------------------------------------------------------------ - -# VCD Provider configuration. It must be at least v3.10.0 and configured with a System administrator account. -terraform { - required_providers { - vcd = { - source = "vmware/vcd" - version = ">= 3.10" - } - } -} - -provider "vcd" { - url = "${var.vcd_url}/api" - user = var.administrator_user - password = var.administrator_password - auth_type = "integrated" - sysorg = var.administrator_org - org = var.administrator_org - allow_unverified_ssl = var.insecure_login - logging = true - logging_file = "cse_install_step1.log" -} - -# This is the interface required to create the "VCDKEConfig" Runtime Defined Entity Type. -resource "vcd_rde_interface" "vcdkeconfig_interface" { - vendor = "vmware" - nss = "VCDKEConfig" - version = "1.0.0" - name = "VCDKEConfig" -} - -# This resource will manage the "VCDKEConfig" RDE Type required to instantiate the CSE Server configuration. -# The schema URL points to the JSON schema hosted in the terraform-provider-vcd repository. -resource "vcd_rde_type" "vcdkeconfig_type" { - vendor = "vmware" - nss = "VCDKEConfig" - version = "1.0.0" - name = "VCD-KE RDE Schema" - schema_url = "https://raw.githubusercontent.com/vmware/terraform-provider-vcd/main/examples/container-service-extension-4.0/schemas/vcdkeconfig-type-schema.json" - interface_ids = [vcd_rde_interface.vcdkeconfig_interface.id] -} - -# This RDE Interface exists in VCD, so it must be fetched with a RDE Interface data source. This RDE Interface is used to be -# able to create the "capvcdCluster" RDE Type. -data "vcd_rde_interface" "kubernetes_interface" { - vendor = "vmware" - nss = "k8s" - version = "1.0.0" -} - -# This RDE Interface will create the "capvcdCluster" RDE Type required to create Kubernetes clusters. 
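# Reference sketch: once an RDE Type like this exists, other configurations can address it
# by its vendor/nss/version triple through the "vcd_rde_type" data source instead of
# re-creating it (the step 2 configuration further below relies on this pattern).
# The version shown matches the CSE v4.0 default; CSE v4.1 uses 1.2.0.
data "vcd_rde_type" "existing_capvcdcluster_type_example" {
  vendor  = "vmware"
  nss     = "capvcdCluster"
  version = "1.1.0"
}

output "capvcdcluster_type_id" {
  value = data.vcd_rde_type.existing_capvcdcluster_type_example.id
}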
-# The schema URL points to the JSON schema hosted in the terraform-provider-vcd repository. -resource "vcd_rde_type" "capvcdcluster_type" { - vendor = "vmware" - nss = "capvcdCluster" - version = var.capvcd_rde_version - name = "CAPVCD Cluster" - schema_url = "https://raw.githubusercontent.com/vmware/terraform-provider-vcd/main/examples/container-service-extension-4.0/schemas/capvcd-type-schema.json" - interface_ids = [data.vcd_rde_interface.kubernetes_interface.id] -} - -# This role is having only the minimum set of rights required for the CSE Server to function. -# It is created in the "System" provider organization scope. -resource "vcd_role" "cse_admin_role" { - org = var.administrator_org - name = "CSE Admin Role" - description = "Used for administrative purposes" - rights = [ - "API Tokens: Manage", - "${vcd_rde_type.vcdkeconfig_type.vendor}:${vcd_rde_type.vcdkeconfig_type.nss}: Administrator Full access", - "${vcd_rde_type.vcdkeconfig_type.vendor}:${vcd_rde_type.vcdkeconfig_type.nss}: Administrator View", - "${vcd_rde_type.vcdkeconfig_type.vendor}:${vcd_rde_type.vcdkeconfig_type.nss}: Full Access", - "${vcd_rde_type.vcdkeconfig_type.vendor}:${vcd_rde_type.vcdkeconfig_type.nss}: Modify", - "${vcd_rde_type.vcdkeconfig_type.vendor}:${vcd_rde_type.vcdkeconfig_type.nss}: View", - "${vcd_rde_type.capvcdcluster_type.vendor}:${vcd_rde_type.capvcdcluster_type.nss}: Administrator Full access", - "${vcd_rde_type.capvcdcluster_type.vendor}:${vcd_rde_type.capvcdcluster_type.nss}: Administrator View", - "${vcd_rde_type.capvcdcluster_type.vendor}:${vcd_rde_type.capvcdcluster_type.nss}: Full Access", - "${vcd_rde_type.capvcdcluster_type.vendor}:${vcd_rde_type.capvcdcluster_type.nss}: Modify", - "${vcd_rde_type.capvcdcluster_type.vendor}:${vcd_rde_type.capvcdcluster_type.nss}: View" - ] -} - -# This will allow to have a user with a limited set of rights that can access the Provider area of VCD. -# This user will be used by the CSE Server, with an API token that must be created afterwards. -resource "vcd_org_user" "cse_admin" { - org = var.administrator_org - name = var.cse_admin_username - password = var.cse_admin_password - role = vcd_role.cse_admin_role.name -} - -# This will output the username that you need to create an API token for. 
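# Reference sketch: instead of creating the API token by hand in the UI, recent provider
# versions expose a "vcd_api_token" resource that can create and persist it. The resource
# and argument names below are an assumption based on that resource, and it must run in a
# configuration authenticated as the CSE administrator itself (tokens belong to the user
# that creates them), so it is shown here only for orientation.
resource "vcd_api_token" "cse_admin_token" {
  name             = "cse-admin-token"          # Hypothetical token name
  file_name        = "cse_admin_api_token.json" # The token is written to this local file
  allow_token_file = true                       # Acknowledges that the file stores a secret
}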
-output "ask_to_create_api_token" { - value = "Please go to ${var.vcd_url}/provider/administration/settings/user-preferences, logged in as '${vcd_org_user.cse_admin.name}' and create an API token, as it will be required for step 2" -} diff --git a/examples/container-service-extension-4.0/install/step1/terraform.tfvars.example b/examples/container-service-extension-4.0/install/step1/terraform.tfvars.example deleted file mode 100644 index 55f3fb847..000000000 --- a/examples/container-service-extension-4.0/install/step1/terraform.tfvars.example +++ /dev/null @@ -1,27 +0,0 @@ -# Change configuration to your needs and rename to 'terraform.tfvars' -# For more details about the variables specified here, please read the guide first: -# https://registry.terraform.io/providers/vmware/vcd/latest/docs/guides/container_service_extension_4_0_install - -# ------------------------------------------------ -# VCD Provider config -# ------------------------------------------------ -vcd_url = "https://vcd.my-awesome-corp.com" -administrator_user = "administrator" -administrator_password = "change-me" -administrator_org = "System" -insecure_login = "false" - -# ------------------------------------------------ -# CSE administrator user configuration -# ------------------------------------------------ -# This user will be created by the Terraform configuration, so you can -# customise what its username and password will be. -cse_admin_username = "cse-admin" -cse_admin_password = "change-me" - -# ------------------------------------------------ -# # CSE Runtime Defined Entities setup -# ------------------------------------------------ -# Version that the CAPVCD Runtime Defined Entity Type will have when created. -# For CSE v4.0 it should be 1.1.0 -capvcd_rde_version = "1.1.0" diff --git a/examples/container-service-extension-4.0/install/step1/variables.tf b/examples/container-service-extension-4.0/install/step1/variables.tf deleted file mode 100644 index 6bd2c9a92..000000000 --- a/examples/container-service-extension-4.0/install/step1/variables.tf +++ /dev/null @@ -1,56 +0,0 @@ -# ------------------------------------------------ -# Provider config -# ------------------------------------------------ - -variable "vcd_url" { - description = "The VCD URL (Example: 'https://vcd.my-company.com')" - type = string -} - -variable "insecure_login" { - description = "Allow unverified SSL connections when operating with VCD" - type = bool - default = false -} - -variable "administrator_user" { - description = "The VCD administrator user (Example: 'administrator')" - default = "administrator" - type = string -} - -variable "administrator_password" { - description = "The VCD administrator password" - type = string - sensitive = true -} - -variable "administrator_org" { - description = "The VCD administrator organization (Example: 'System')" - type = string - default = "System" -} - -# ------------------------------------------------ -# CSE administrator user configuration -# ------------------------------------------------ - -variable "cse_admin_username" { - description = "The CSE administrator user that will be created (Example: 'cse-admin')" - type = string -} - -variable "cse_admin_password" { - description = "The password to set for the CSE administrator to be created" - type = string - sensitive = true -} - -# ------------------------------------------------ -# CSE Runtime Defined Entities setup -# ------------------------------------------------ -variable "capvcd_rde_version" { - type = string - description = "Version 
of the CAPVCD Runtime Defined Entity Type" - default = "1.1.0" -} diff --git a/examples/container-service-extension-4.0/entities/tkgmcluster-template.json b/examples/container-service-extension/v4.0/entities/tkgmcluster.json.template similarity index 100% rename from examples/container-service-extension-4.0/entities/tkgmcluster-template.json rename to examples/container-service-extension/v4.0/entities/tkgmcluster.json.template diff --git a/examples/container-service-extension-4.0/entities/vcdkeconfig-template.json b/examples/container-service-extension/v4.0/entities/vcdkeconfig.json.template similarity index 100% rename from examples/container-service-extension-4.0/entities/vcdkeconfig-template.json rename to examples/container-service-extension/v4.0/entities/vcdkeconfig.json.template diff --git a/examples/container-service-extension-4.0/schemas/capvcd-type-schema.json b/examples/container-service-extension/v4.0/schemas/capvcd-type-schema-v1.1.0.json similarity index 100% rename from examples/container-service-extension-4.0/schemas/capvcd-type-schema.json rename to examples/container-service-extension/v4.0/schemas/capvcd-type-schema-v1.1.0.json diff --git a/examples/container-service-extension-4.0/schemas/vcdkeconfig-type-schema.json b/examples/container-service-extension/v4.0/schemas/vcdkeconfig-type-schema-v1.0.0.json similarity index 100% rename from examples/container-service-extension-4.0/schemas/vcdkeconfig-type-schema.json rename to examples/container-service-extension/v4.0/schemas/vcdkeconfig-type-schema-v1.0.0.json diff --git a/examples/container-service-extension-4.0/cluster/3.9-cluster-creation.tf b/examples/container-service-extension/v4.1/cluster/3.9-cluster-creation.tf similarity index 100% rename from examples/container-service-extension-4.0/cluster/3.9-cluster-creation.tf rename to examples/container-service-extension/v4.1/cluster/3.9-cluster-creation.tf diff --git a/examples/container-service-extension-4.0/cluster/cluster-template-v1.22.9.yaml b/examples/container-service-extension/v4.1/cluster/cluster-template-v1.22.9.yaml similarity index 100% rename from examples/container-service-extension-4.0/cluster/cluster-template-v1.22.9.yaml rename to examples/container-service-extension/v4.1/cluster/cluster-template-v1.22.9.yaml diff --git a/examples/container-service-extension-4.0/cluster/terraform.tfvars.example b/examples/container-service-extension/v4.1/cluster/terraform.tfvars.example similarity index 100% rename from examples/container-service-extension-4.0/cluster/terraform.tfvars.example rename to examples/container-service-extension/v4.1/cluster/terraform.tfvars.example diff --git a/examples/container-service-extension-4.0/cluster/variables.tf b/examples/container-service-extension/v4.1/cluster/variables.tf similarity index 100% rename from examples/container-service-extension-4.0/cluster/variables.tf rename to examples/container-service-extension/v4.1/cluster/variables.tf diff --git a/examples/container-service-extension/v4.1/entities/vcdkeconfig.json.template b/examples/container-service-extension/v4.1/entities/vcdkeconfig.json.template new file mode 100644 index 000000000..9a3ef0523 --- /dev/null +++ b/examples/container-service-extension/v4.1/entities/vcdkeconfig.json.template @@ -0,0 +1,89 @@ +{ + "profiles": [ + { + "name": "production", + "active": true, + "serverConfig": { + "rdePollIntervalInMin": 1, + "heartbeatWatcherTimeoutInMin": 10, + "staleHeartbeatIntervalInMin": 30 + }, + "K8Config": { + "certificateAuthorities": [ + ${k8s_cluster_certificates} + ], + 
"cni": { + "name": "antrea", + "version": "" + }, + "cpi": { + "name": "cpi for cloud director", + "version": "${cpi_version}" + }, + "csi": [ + { + "name": "csi for cloud director", + "version": "${csi_version}" + } + ], + "mhc": { + "maxUnhealthyNodes": ${max_unhealthy_node_percentage}, + "nodeStartupTimeout": "${node_startup_timeout}", + "nodeNotReadyTimeout": "${node_not_ready_timeout}", + "nodeUnknownTimeout": "${node_unknown_timeout}" + }, + "rdeProjectorVersion": "0.6.0" + }, + "vcdConfig": { + "sysLogger": { + "host": "${syslog_host}", + "port": "${syslog_port}" + } + }, + "githubConfig": { + "githubPersonalAccessToken": "" + }, + "bootstrapClusterConfig": { + "capiEcosystem": { + "infraProvider": { + "name": "capvcd", + "version": "v${capvcd_version}", + "capvcdRde": { + "nss": "capvcdCluster", + "vendor": "vmware", + "version": "1.2.0" + } + }, + "coreCapiVersion": "v1.4.0", + "bootstrapProvider": { + "name": "CAPBK", + "version": "v1.4.0" + }, + "controlPlaneProvider": { + "name": "KCP", + "version": "v1.4.0" + }, + "certManagerVersion": "v1.11.1" + }, + "certificateAuthorities": [ + ${bootstrap_vm_certificates} + ], + "clusterctl": { + "version": "v1.4.0", + "clusterctlyaml": "" + }, + "dockerVersion": "", + "kindVersion": "v0.19.0", + "kindestNodeVersion": "v1.27.1", + "kubectlVersion": "", + "proxyConfig": { + "noProxy": "${no_proxy}", + "httpProxy": "${http_proxy}", + "httpsProxy": "${https_proxy}" + }, + "sizingPolicy": "${bootstrap_vm_sizing_policy}" + }, + "containerRegistryUrl": "${container_registry_url}" + } + ] +} diff --git a/examples/container-service-extension/v4.1/install/step1/3.11-cse-install-1-provider-config.tf b/examples/container-service-extension/v4.1/install/step1/3.11-cse-install-1-provider-config.tf new file mode 100644 index 000000000..ba3ef6103 --- /dev/null +++ b/examples/container-service-extension/v4.1/install/step1/3.11-cse-install-1-provider-config.tf @@ -0,0 +1,34 @@ +# ------------------------------------------------------------------------------------------------------------ +# CSE v4.1 installation, step 1: +# +# * Please read the guide present at https://registry.terraform.io/providers/vmware/vcd/latest/docs/guides/container_service_extension_4_x_install +# before applying this configuration. +# +# * The installation process is split into two steps as the first one creates a CSE admin user that needs to be +# used in a "provider" block in the second one. +# +# * Rename "terraform.tfvars.example" to "terraform.tfvars" and adapt the values to your needs. +# Other than that, this snippet should be applied as it is. +# ------------------------------------------------------------------------------------------------------------ + +# VCD Provider configuration. It must be at least v3.11.0 and configured with a System administrator account. 
+terraform { + required_providers { + vcd = { + source = "vmware/vcd" + version = ">= 3.11" + } + } +} + +provider "vcd" { + url = "${var.vcd_url}/api" + user = var.administrator_user + password = var.administrator_password + auth_type = "integrated" + sysorg = var.administrator_org + org = var.administrator_org + allow_unverified_ssl = var.insecure_login + logging = true + logging_file = "cse_install_step1.log" +} diff --git a/examples/container-service-extension/v4.1/install/step1/3.11-cse-install-2-cse-server-prerequisites.tf b/examples/container-service-extension/v4.1/install/step1/3.11-cse-install-2-cse-server-prerequisites.tf new file mode 100644 index 000000000..2da8e4909 --- /dev/null +++ b/examples/container-service-extension/v4.1/install/step1/3.11-cse-install-2-cse-server-prerequisites.tf @@ -0,0 +1,281 @@ +# ------------------------------------------------------------------------------------------------------------ +# CSE v4.1 installation, step 1: +# +# * Please read the guide present at https://registry.terraform.io/providers/vmware/vcd/latest/docs/guides/container_service_extension_4_x_install +# before applying this configuration. +# +# * The installation process is split into two steps as the first one creates a CSE admin user that needs to be +# used in a "provider" block in the second one. +# +# * This file contains the same resources created by the "Configure Settings for CSE Server > Set Up Prerequisites" step in the +# UI wizard. +# +# * Rename "terraform.tfvars.example" to "terraform.tfvars" and adapt the values to your needs. +# Other than that, this snippet should be applied as it is. +# You can check the comments on each resource/data source for more help and context. +# ------------------------------------------------------------------------------------------------------------ + +# This is the RDE Interface required to create the "VCDKEConfig" RDE Type. +# This should not be changed. +resource "vcd_rde_interface" "vcdkeconfig_interface" { + vendor = "vmware" + nss = "VCDKEConfig" + version = "1.0.0" + name = "VCDKEConfig" +} + +# This resource will manage the "VCDKEConfig" RDE Type required to instantiate the CSE Server configuration. +# The schema URL points to the JSON schema hosted in the terraform-provider-vcd repository. +# This should not be changed. +resource "vcd_rde_type" "vcdkeconfig_type" { + vendor = "vmware" + nss = "VCDKEConfig" + version = "1.1.0" + name = "VCD-KE RDE Schema" + schema_url = "https://raw.githubusercontent.com/vmware/terraform-provider-vcd/main/examples/container-service-extension/v4.1/schemas/vcdkeconfig-type-schema-v1.1.0.json" + interface_ids = [vcd_rde_interface.vcdkeconfig_interface.id] +} + +# This RDE Interface exists in VCD, so it must be fetched with a RDE Interface data source. This RDE Interface is used to be +# able to create the "capvcdCluster" RDE Type. +# This should not be changed. +data "vcd_rde_interface" "kubernetes_interface" { + vendor = "vmware" + nss = "k8s" + version = "1.0.0" +} + +# This is the interface required to create the "CAPVCD" Runtime Defined Entity Type. +# This should not be changed. +resource "vcd_rde_interface" "cse_interface" { + vendor = "cse" + nss = "capvcd" + version = "1.0.0" + name = "cseInterface" +} + +# This RDE Interface behavior is required to be able to obtain the Kubeconfig and other important information. +# This should not be changed. 
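# Reference sketch: once a cluster RDE exists, this "getFullEntity" Behavior is what can be
# invoked to read the complete cluster entity, including its kubeconfig. The data source
# name, its arguments and the "result" attribute below are assumptions, and the cluster RDE
# id is a hypothetical placeholder.
data "vcd_rde_behavior_invocation" "get_cluster_contents" {
  rde_id      = "urn:vcloud:entity:vmware:capvcdCluster:00000000-0000-0000-0000-000000000000" # Hypothetical cluster RDE id
  behavior_id = vcd_rde_interface_behavior.capvcd_behavior.id
}

output "cluster_full_entity" {
  value = data.vcd_rde_behavior_invocation.get_cluster_contents.result
}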
+resource "vcd_rde_interface_behavior" "capvcd_behavior" { + rde_interface_id = vcd_rde_interface.cse_interface.id + name = "getFullEntity" + execution = { + "type" : "noop" + "id" : "getFullEntity" + } +} + +# This RDE Interface will create the "capvcdCluster" RDE Type required to create Kubernetes clusters. +# The schema URL points to the JSON schema hosted in the terraform-provider-vcd repository. +# This should not be changed. +resource "vcd_rde_type" "capvcdcluster_type" { + vendor = "vmware" + nss = "capvcdCluster" + version = "1.2.0" + name = "CAPVCD Cluster" + schema_url = "https://raw.githubusercontent.com/vmware/terraform-provider-vcd/main/examples/container-service-extension/v4.1/schemas/capvcd-type-schema-v1.2.0.json" + interface_ids = [data.vcd_rde_interface.kubernetes_interface.id] + + depends_on = [vcd_rde_interface_behavior.capvcd_behavior] # Interface Behaviors must be created before any RDE Type +} + +# Access Level for the CAPVCD Type Behavior +# This should not be changed. +resource "vcd_rde_type_behavior_acl" "capvcd_behavior_acl" { + rde_type_id = vcd_rde_type.capvcdcluster_type.id + behavior_id = vcd_rde_interface_behavior.capvcd_behavior.id + access_level_ids = ["urn:vcloud:accessLevel:FullControl"] +} + +# This role is having only the minimum set of rights required for the CSE Server to function. +# It is created in the "System" provider organization scope. +# This should not be changed. +resource "vcd_role" "cse_admin_role" { + org = var.administrator_org + name = "CSE Admin Role" + description = "Used for administrative purposes" + rights = [ + "API Tokens: Manage", + "${vcd_rde_type.vcdkeconfig_type.vendor}:${vcd_rde_type.vcdkeconfig_type.nss}: Administrator Full access", + "${vcd_rde_type.vcdkeconfig_type.vendor}:${vcd_rde_type.vcdkeconfig_type.nss}: Administrator View", + "${vcd_rde_type.vcdkeconfig_type.vendor}:${vcd_rde_type.vcdkeconfig_type.nss}: Full Access", + "${vcd_rde_type.vcdkeconfig_type.vendor}:${vcd_rde_type.vcdkeconfig_type.nss}: Modify", + "${vcd_rde_type.vcdkeconfig_type.vendor}:${vcd_rde_type.vcdkeconfig_type.nss}: View", + "${vcd_rde_type.capvcdcluster_type.vendor}:${vcd_rde_type.capvcdcluster_type.nss}: Administrator Full access", + "${vcd_rde_type.capvcdcluster_type.vendor}:${vcd_rde_type.capvcdcluster_type.nss}: Administrator View", + "${vcd_rde_type.capvcdcluster_type.vendor}:${vcd_rde_type.capvcdcluster_type.nss}: Full Access", + "${vcd_rde_type.capvcdcluster_type.vendor}:${vcd_rde_type.capvcdcluster_type.nss}: Modify", + "${vcd_rde_type.capvcdcluster_type.vendor}:${vcd_rde_type.capvcdcluster_type.nss}: View" + ] +} + +# This will allow to have a user with a limited set of rights that can access the Provider area of VCD. +# This user will be used by the CSE Server, with an API token that must be created in Step 2. +# This should not be changed. +resource "vcd_org_user" "cse_admin" { + org = var.administrator_org + name = var.cse_admin_username + password = var.cse_admin_password + role = vcd_role.cse_admin_role.name +} + +# This resource manages the Rights Bundle required by tenants to create and consume Kubernetes clusters. +# This should not be changed. 
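# Reference sketch: the interpolated right names used below are derived from the RDE Type
# vendor and nss, so an entry like
#   "${vcd_rde_type.capvcdcluster_type.vendor}:${vcd_rde_type.capvcdcluster_type.nss}: Full Access"
# resolves to the right called "vmware:capvcdCluster: Full Access". The same list can be
# built programmatically, for illustration only:
locals {
  capvcdcluster_rights_example = formatlist(
    "%s:%s: %s",
    vcd_rde_type.capvcdcluster_type.vendor,
    vcd_rde_type.capvcdcluster_type.nss,
    ["Administrator Full access", "Administrator View", "Full Access", "Modify", "View"]
  )
}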
+resource "vcd_rights_bundle" "k8s_clusters_rights_bundle" { + name = "Kubernetes Clusters Rights Bundle" + description = "Rights bundle with required rights for managing Kubernetes clusters" + rights = [ + "API Tokens: Manage", + "Access All Organization VDCs", + "Catalog: View Published Catalogs", + "Certificate Library: Manage", + "Certificate Library: View", + "General: Administrator View", + "Organization vDC Gateway: Configure Load Balancer", + "Organization vDC Gateway: Configure NAT", + "Organization vDC Gateway: View Load Balancer", + "Organization vDC Gateway: View NAT", + "Organization vDC Gateway: View", + "Organization vDC Named Disk: Create", + "Organization vDC Named Disk: Edit Properties", + "Organization vDC Named Disk: View Properties", + "Organization vDC Shared Named Disk: Create", + "vApp: Allow All Extra Config", + "${vcd_rde_type.vcdkeconfig_type.vendor}:${vcd_rde_type.vcdkeconfig_type.nss}: View", + "${vcd_rde_type.capvcdcluster_type.vendor}:${vcd_rde_type.capvcdcluster_type.nss}: Administrator Full access", + "${vcd_rde_type.capvcdcluster_type.vendor}:${vcd_rde_type.capvcdcluster_type.nss}: Full Access", + "${vcd_rde_type.capvcdcluster_type.vendor}:${vcd_rde_type.capvcdcluster_type.nss}: Modify", + "${vcd_rde_type.capvcdcluster_type.vendor}:${vcd_rde_type.capvcdcluster_type.nss}: View", + "${vcd_rde_type.capvcdcluster_type.vendor}:${vcd_rde_type.capvcdcluster_type.nss}: Administrator View", + "vmware:tkgcluster: Full Access", + "vmware:tkgcluster: Modify", + "vmware:tkgcluster: View", + "vmware:tkgcluster: Administrator View", + "vmware:tkgcluster: Administrator Full access", + ] + publish_to_all_tenants = true # This needs to be published to all the Organizations +} + + +# With the Rights Bundle specified above, we need also a new Role for tenant users who want to create and manage +# Kubernetes clusters. +# This should not be changed. 
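# Reference sketch: since the Global Role below is published to every tenant, users that
# should be able to create clusters can later be given it with a plain "vcd_org_user".
# The organization name and credentials are hypothetical, and the tenant organization
# itself is only created in step 2.
resource "vcd_org_user" "cluster_author_example" {
  org      = "tenant_org"                # Hypothetical tenant organization
  name     = "k8s-cluster-author"        # Hypothetical user name
  password = "change-me"
  role     = "Kubernetes Cluster Author" # The Global Role defined below
}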
+resource "vcd_global_role" "k8s_cluster_author" { + name = "Kubernetes Cluster Author" + description = "Role to create Kubernetes clusters" + rights = [ + "API Tokens: Manage", + "Access All Organization VDCs", + "Catalog: Add vApp from My Cloud", + "Catalog: View Private and Shared Catalogs", + "Catalog: View Published Catalogs", + "Certificate Library: View", + "Organization vDC Compute Policy: View", + "Organization vDC Disk: View IOPS", + "Organization vDC Gateway: Configure Load Balancer", + "Organization vDC Gateway: Configure NAT", + "Organization vDC Gateway: View", + "Organization vDC Gateway: View Load Balancer", + "Organization vDC Gateway: View NAT", + "Organization vDC Named Disk: Create", + "Organization vDC Named Disk: Delete", + "Organization vDC Named Disk: Edit Properties", + "Organization vDC Named Disk: View Encryption Status", + "Organization vDC Named Disk: View Properties", + "Organization vDC Network: View Properties", + "Organization vDC Shared Named Disk: Create", + "Organization vDC: VM-VM Affinity Edit", + "Organization: View", + "UI Plugins: View", + "VAPP_VM_METADATA_TO_VCENTER", + "vApp Template / Media: Copy", + "vApp Template / Media: Edit", + "vApp Template / Media: View", + "vApp Template: Checkout", + "vApp: Allow All Extra Config", + "vApp: Copy", + "vApp: Create / Reconfigure", + "vApp: Delete", + "vApp: Download", + "vApp: Edit Properties", + "vApp: Edit VM CPU", + "vApp: Edit VM Compute Policy", + "vApp: Edit VM Hard Disk", + "vApp: Edit VM Memory", + "vApp: Edit VM Network", + "vApp: Edit VM Properties", + "vApp: Manage VM Password Settings", + "vApp: Power Operations", + "vApp: Sharing", + "vApp: Snapshot Operations", + "vApp: Upload", + "vApp: Use Console", + "vApp: VM Boot Options", + "vApp: View ACL", + "vApp: View VM and VM's Disks Encryption Status", + "vApp: View VM metrics", + "${vcd_rde_type.vcdkeconfig_type.vendor}:${vcd_rde_type.vcdkeconfig_type.nss}: View", + "${vcd_rde_type.capvcdcluster_type.vendor}:${vcd_rde_type.capvcdcluster_type.nss}: Administrator Full access", + "${vcd_rde_type.capvcdcluster_type.vendor}:${vcd_rde_type.capvcdcluster_type.nss}: Full Access", + "${vcd_rde_type.capvcdcluster_type.vendor}:${vcd_rde_type.capvcdcluster_type.nss}: Modify", + "${vcd_rde_type.capvcdcluster_type.vendor}:${vcd_rde_type.capvcdcluster_type.nss}: View", + "${vcd_rde_type.capvcdcluster_type.vendor}:${vcd_rde_type.capvcdcluster_type.nss}: Administrator View", + "vmware:tkgcluster: Full Access", + "vmware:tkgcluster: Modify", + "vmware:tkgcluster: View", + ] + + publish_to_all_tenants = true # This needs to be published to all the Organizations + + # As we use rights created by the CAPVCD Type created previously, we need to depend on it + depends_on = [ + vcd_rights_bundle.k8s_clusters_rights_bundle + ] +} + +# The VM Sizing Policies defined below MUST be created as they are specified in this HCL. These are the default +# policies required by CSE to create TKGm clusters. +# This should not be changed. 
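# Reference sketch: later configurations (step 2 does this for the cluster VDC) fetch these
# policies by name with the "vcd_vm_sizing_policy" data source rather than re-declaring them:
data "vcd_vm_sizing_policy" "existing_tkg_s" {
  name = "TKG small"
}

output "tkg_small_policy_id" {
  value = data.vcd_vm_sizing_policy.existing_tkg_s.id
}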
+resource "vcd_vm_sizing_policy" "tkg_xl" { + name = "TKG extra-large" + description = "Extra-large VM sizing policy for a Kubernetes cluster node (8 CPU, 32GB memory)" + cpu { + count = 8 + } + memory { + size_in_mb = "32768" + } +} + +resource "vcd_vm_sizing_policy" "tkg_l" { + name = "TKG large" + description = "Large VM sizing policy for a Kubernetes cluster node (4 CPU, 16GB memory)" + cpu { + count = 4 + } + memory { + size_in_mb = "16384" + } +} + +resource "vcd_vm_sizing_policy" "tkg_m" { + name = "TKG medium" + description = "Medium VM sizing policy for a Kubernetes cluster node (2 CPU, 8GB memory)" + cpu { + count = 2 + } + memory { + size_in_mb = "8192" + } +} + +resource "vcd_vm_sizing_policy" "tkg_s" { + name = "TKG small" + description = "Small VM sizing policy for a Kubernetes cluster node (2 CPU, 4GB memory)" + cpu { + count = 2 + } + memory { + size_in_mb = "4048" + } +} diff --git a/examples/container-service-extension/v4.1/install/step1/3.11-cse-install-3-cse-server-settings.tf b/examples/container-service-extension/v4.1/install/step1/3.11-cse-install-3-cse-server-settings.tf new file mode 100644 index 000000000..0df5a81d1 --- /dev/null +++ b/examples/container-service-extension/v4.1/install/step1/3.11-cse-install-3-cse-server-settings.tf @@ -0,0 +1,45 @@ +# ------------------------------------------------------------------------------------------------------------ +# CSE v4.1 installation, step 1: +# +# * Please read the guide present at https://registry.terraform.io/providers/vmware/vcd/latest/docs/guides/container_service_extension_install +# before applying this configuration. +# +# * The installation process is split into two steps as the first one creates a CSE admin user that needs to be +# used in a "provider" block in the second one. +# +# * This file contains the same resources created by the "Configure Settings for CSE Server > Set Configuration Parameters" step in the +# UI wizard. +# +# * Rename "terraform.tfvars.example" to "terraform.tfvars" and adapt the values to your needs. +# Other than that, this snippet should be applied as it is. +# You can check the comments on the resource for context. +# ------------------------------------------------------------------------------------------------------------ + +# This RDE configures the CSE Server. It can be customised through variables, and the bootstrap_cluster_sizing_policy +# can also be changed. +# Other than that, this should be applied as it is. +resource "vcd_rde" "vcdkeconfig_instance" { + org = var.administrator_org + name = "vcdKeConfig" + rde_type_id = vcd_rde_type.vcdkeconfig_type.id + resolve = true + input_entity = templatefile(var.vcdkeconfig_template_filepath, { + capvcd_version = var.capvcd_version + cpi_version = var.cpi_version + csi_version = var.csi_version + github_personal_access_token = var.github_personal_access_token + bootstrap_vm_sizing_policy = vcd_vm_sizing_policy.tkg_s.name # References the small VM Sizing Policy, it can be changed. 
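    # Each key passed to templatefile() above and below must match a "${...}" placeholder in
    # entities/vcdkeconfig.json.template (shown earlier in this diff). For instance, with the
    # default terraform.tfvars values, the rendered JSON contains:
    #   "cpi": { "name": "cpi for cloud director", "version": "1.4.0" }
    # and "${bootstrap_vm_sizing_policy}" becomes "TKG small", the policy referenced above.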
+ no_proxy = var.no_proxy + http_proxy = var.http_proxy + https_proxy = var.https_proxy + syslog_host = var.syslog_host + syslog_port = var.syslog_port + node_startup_timeout = var.node_startup_timeout + node_not_ready_timeout = var.node_not_ready_timeout + node_unknown_timeout = var.node_unknown_timeout + max_unhealthy_node_percentage = var.max_unhealthy_node_percentage + container_registry_url = var.container_registry_url + k8s_cluster_certificates = join(",", var.k8s_cluster_certificates) + bootstrap_vm_certificates = join(",", var.bootstrap_vm_certificates) + }) +} diff --git a/examples/container-service-extension/v4.1/install/step1/terraform.tfvars.example b/examples/container-service-extension/v4.1/install/step1/terraform.tfvars.example new file mode 100644 index 000000000..ffdb56ff3 --- /dev/null +++ b/examples/container-service-extension/v4.1/install/step1/terraform.tfvars.example @@ -0,0 +1,60 @@ +# Change configuration to your needs and rename to 'terraform.tfvars' +# For more details about the variables specified here, please read the guide first: +# https://registry.terraform.io/providers/vmware/vcd/latest/docs/guides/container_service_extension_4_x_install + +# ------------------------------------------------ +# VCD Provider config +# ------------------------------------------------ + +vcd_url = "https://vcd.my-awesome-corp.com" +administrator_user = "administrator" +administrator_password = "change-me" +administrator_org = "System" +insecure_login = "false" + +# ------------------------------------------------ +# CSE Server Pre-requisites +# ------------------------------------------------ + +# This user will be created by the Terraform configuration, so you can +# customise what its username and password will be. +# This user will have an API token that must be consumed by the CSE Server. +cse_admin_username = "cse_admin" +cse_admin_password = "change-me" + +# ------------------------------------------------ +# CSE Server Settings +# ------------------------------------------------ + +# These are required to create the Runtime Defined Entity that will contain the CSE Server configuration (vcdKeConfig) +# To know more about the specific versions, please refer to the CSE documentation. +# The values set here correspond to CSE v4.1: +vcdkeconfig_template_filepath = "../../entities/vcdkeconfig.json.template" +capvcd_version = "1.1.0" +cpi_version = "1.4.0" +csi_version = "1.4.0" + +# Optional but recommended to avoid rate limiting when configuring the TKGm clusters. +# Create this one in https://github.com/settings/tokens +github_personal_access_token = "" + +# Node will be considered unhealthy and remediated if joining the cluster takes longer than this timeout (seconds) +node_startup_timeout = "900" +# A newly joined node will be considered unhealthy and remediated if it cannot host workloads for longer than this timeout (seconds) +node_not_ready_timeout = "300" +# A healthy node will be considered unhealthy and remediated if it is unreachable for longer than this timeout (seconds) +node_unknown_timeout = "300" +# Remediation will be suspended when the number of unhealthy nodes exceeds this percentage. +# (100% means that unhealthy nodes will always be remediated, while 0% means that unhealthy nodes will never be remediated) +max_unhealthy_node_percentage = 100 + +# URL from where TKG clusters will fetch container images +container_registry_url = "projects.registry.vmware.com" + +# Certificate(s) to allow the ephemeral VM (created during cluster creation) to authenticate with. 
+# For example, when pulling images from a container registry. (Copy and paste .cert file contents) +k8s_cluster_certificates = [] + +# Certificate(s) to allow clusters to authenticate with. +# For example, when pulling images from a container registry. (Copy and paste .cert file contents) +bootstrap_vm_certificates = [] diff --git a/examples/container-service-extension/v4.1/install/step1/variables.tf b/examples/container-service-extension/v4.1/install/step1/variables.tf new file mode 100644 index 000000000..c5e5a2e5b --- /dev/null +++ b/examples/container-service-extension/v4.1/install/step1/variables.tf @@ -0,0 +1,158 @@ +# ------------------------------------------------ +# Provider config +# ------------------------------------------------ + +variable "vcd_url" { + description = "The VCD URL (Example: 'https://vcd.my-company.com')" + type = string +} + +variable "insecure_login" { + description = "Allow unverified SSL connections when operating with VCD" + type = bool + default = false +} + +variable "administrator_user" { + description = "The VCD administrator user (Example: 'administrator')" + default = "administrator" + type = string +} + +variable "administrator_password" { + description = "The VCD administrator password" + type = string + sensitive = true +} + +variable "administrator_org" { + description = "The VCD administrator organization (Example: 'System')" + type = string + default = "System" +} + +# ------------------------------------------------ +# CSE Server Pre-requisites +# ------------------------------------------------ + +variable "cse_admin_username" { + description = "The CSE administrator user that will be created (Example: 'cse-admin')" + type = string +} + +variable "cse_admin_password" { + description = "The password to set for the CSE administrator to be created" + type = string + sensitive = true +} + +# ------------------------------------------------ +# CSE Server Settings +# ------------------------------------------------ + +variable "vcdkeconfig_template_filepath" { + type = string + description = "Path to the VCDKEConfig JSON template" + default = "../../entities/vcdkeconfig.json.template" +} + +variable "capvcd_version" { + type = string + description = "Version of CAPVCD" + default = "1.1.0" +} + +variable "cpi_version" { + type = string + description = "VCDKEConfig: Cloud Provider Interface version" + default = "1.4.0" +} + +variable "csi_version" { + type = string + description = "VCDKEConfig: Container Storage Interface version" + default = "1.4.0" +} + +variable "github_personal_access_token" { + type = string + description = "VCDKEConfig: Prevents potential github rate limiting errors during cluster creation and deletion" + default = "" + sensitive = true +} + +variable "no_proxy" { + type = string + description = "VCDKEConfig: List of comma-separated domains without spaces" + default = "localhost,127.0.0.1,cluster.local,.svc" +} + +variable "http_proxy" { + type = string + description = "VCDKEConfig: Address of your HTTP proxy server" + default = "" +} + +variable "https_proxy" { + type = string + description = "VCDKEConfig: Address of your HTTPS proxy server" + default = "" +} + +variable "syslog_host" { + type = string + description = "VCDKEConfig: Domain for system logs" + default = "" +} + +variable "syslog_port" { + type = string + description = "VCDKEConfig: Port for system logs" + default = "" +} + +variable "node_startup_timeout" { + type = string + description = "VCDKEConfig: Node will be considered unhealthy and remediated if joining 
the cluster takes longer than this timeout (seconds)" + default = "900" +} + +variable "node_not_ready_timeout" { + type = string + description = "VCDKEConfig: A newly joined node will be considered unhealthy and remediated if it cannot host workloads for longer than this timeout (seconds)" + default = "300" +} + +variable "node_unknown_timeout" { + type = string + description = "VCDKEConfig: A healthy node will be considered unhealthy and remediated if it is unreachable for longer than this timeout (seconds)" + default = "300" +} + +variable "max_unhealthy_node_percentage" { + type = number + description = "VCDKEConfig: Remediation will be suspended when the number of unhealthy nodes exceeds this percentage. (100% means that unhealthy nodes will always be remediated, while 0% means that unhealthy nodes will never be remediated)" + default = 100 + validation { + condition = var.max_unhealthy_node_percentage >= 0 && var.max_unhealthy_node_percentage <= 100 + error_message = "The value must be a percentage, hence between 0 and 100" + } +} + +variable "container_registry_url" { + type = string + description = "VCDKEConfig: URL from where TKG clusters will fetch container images" + default = "projects.registry.vmware.com" +} + +variable "bootstrap_vm_certificates" { + type = list(string) + description = "VCDKEConfig: Certificate(s) to allow the ephemeral VM (created during cluster creation) to authenticate with. For example, when pulling images from a container registry. (Copy and paste .cert file contents)" + default = [] +} + +variable "k8s_cluster_certificates" { + type = list(string) + description = "VCDKEConfig: Certificate(s) to allow clusters to authenticate with. For example, when pulling images from a container registry. (Copy and paste .cert file contents)" + default = [] +} diff --git a/examples/container-service-extension/v4.1/install/step2/3.11-cse-install-4-provider-config.tf b/examples/container-service-extension/v4.1/install/step2/3.11-cse-install-4-provider-config.tf new file mode 100644 index 000000000..f3d622de6 --- /dev/null +++ b/examples/container-service-extension/v4.1/install/step2/3.11-cse-install-4-provider-config.tf @@ -0,0 +1,42 @@ +# ------------------------------------------------------------------------------------------------------------ +# CSE v4.1 installation, step 2: +# +# * Please read the guide present at https://registry.terraform.io/providers/vmware/vcd/latest/docs/guides/container_service_extension_4_x_install +# before applying this configuration. +# +# * The installation process is split into two steps as the first one creates a CSE admin user that needs to be +# used in a "provider" block in the second one. +# +# * Rename "terraform.tfvars.example" to "terraform.tfvars" and adapt the values to your needs. +# Other than that, this snippet should be applied as it is. +# ------------------------------------------------------------------------------------------------------------ + +# VCD Provider configuration. It must be at least v3.11.0 and configured with a System administrator account. 
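# Reference sketch: step 1 created the CSE administrator precisely so that this step can
# authenticate as that user where needed (for example, to create its API token). One
# possible shape is an aliased provider block like the one below; the alias and the
# cse_admin_* variable names are assumptions.
provider "vcd" {
  alias                = "cse_admin"
  url                  = "${var.vcd_url}/api"
  user                 = var.cse_admin_username
  password             = var.cse_admin_password
  auth_type            = "integrated"
  sysorg               = var.administrator_org
  org                  = var.administrator_org
  allow_unverified_ssl = var.insecure_login
}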
+terraform { + required_providers { + vcd = { + source = "vmware/vcd" + version = ">= 3.11" + } + time = { + source = "hashicorp/time" + version = "0.9.1" + } + local = { + source = "hashicorp/local" + version = "2.4.0" + } + } +} + +provider "vcd" { + url = "${var.vcd_url}/api" + user = var.administrator_user + password = var.administrator_password + auth_type = "integrated" + sysorg = var.administrator_org + org = var.administrator_org + allow_unverified_ssl = var.insecure_login + logging = true + logging_file = "cse_install_step2.log" +} diff --git a/examples/container-service-extension-4.0/install/step2/3.10-cse-4.0-install-step2.tf b/examples/container-service-extension/v4.1/install/step2/3.11-cse-install-5-infrastructure.tf similarity index 51% rename from examples/container-service-extension-4.0/install/step2/3.10-cse-4.0-install-step2.tf rename to examples/container-service-extension/v4.1/install/step2/3.11-cse-install-5-infrastructure.tf index f6cfa0740..a5e2d6cde 100644 --- a/examples/container-service-extension-4.0/install/step2/3.10-cse-4.0-install-step2.tf +++ b/examples/container-service-extension/v4.1/install/step2/3.11-cse-install-5-infrastructure.tf @@ -1,47 +1,16 @@ # ------------------------------------------------------------------------------------------------------------ -# CSE 4.0 installation, step 2: +# CSE v4.1 installation: # -# * Please read the guide present at https://registry.terraform.io/providers/vmware/vcd/latest/docs/guides/container_service_extension_4_0_install +# * Please read the guide present at https://registry.terraform.io/providers/vmware/vcd/latest/docs/guides/container_service_extension_4_x_install # before applying this configuration. # -# * Please apply "3.10-cse-4.0-install-step1.tf" first, located at -# https://github.com/vmware/terraform-provider-vcd/tree/main/examples/container-service-extension-4.0/install/step1 -# -# * Please review this HCL configuration before applying, to change the settings to the ones that fit best with your organization. -# For example, network settings such as firewall rules, network subnets, VDC allocation modes, ALB feature set, etc should be -# carefully reviewed. -# # * Rename "terraform.tfvars.example" to "terraform.tfvars" and adapt the values to your needs. +# +# * Please review this file carefully, as it shapes the structure of your organization, hence you should customise +# it to your needs. # You can check the comments on each resource/data source for more help and context. # ------------------------------------------------------------------------------------------------------------ -# VCD Provider configuration. It must be at least v3.10.0 and configured with a System administrator account. -# This is needed to build the minimum setup for CSE v4.0 to work, like Organizations, VDCs, Provider Gateways, etc. 
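# Reference sketch: one plausible use of the "hashicorp/local" provider pinned in the new
# step 2 configuration above is reading a previously generated API token file (for example,
# one written by a "vcd_api_token" resource) so other resources can consume the token.
# The file name and the "refresh_token" JSON key are assumptions.
data "local_sensitive_file" "cse_admin_token_file" {
  filename = "cse_admin_api_token.json" # Hypothetical token file produced earlier
}

locals {
  cse_admin_api_token_example = jsondecode(data.local_sensitive_file.cse_admin_token_file.content)["refresh_token"]
}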
-terraform { - required_providers { - vcd = { - source = "vmware/vcd" - version = ">= 3.10" - } - time = { - source = "hashicorp/time" - version = ">= 0.9" - } - } -} - -provider "vcd" { - url = "${var.vcd_url}/api" - user = var.administrator_user - password = var.administrator_password - auth_type = "integrated" - sysorg = var.administrator_org - org = var.administrator_org - allow_unverified_ssl = var.insecure_login - logging = true - logging_file = "cse_install_step2.log" -} - # The two resources below will create the two Organizations mentioned in the CSE documentation: # https://docs.vmware.com/en/VMware-Cloud-Director-Container-Service-Extension/index.html @@ -91,52 +60,6 @@ resource "vcd_org" "tenant_organization" { } } -# The VM Sizing Policies defined below MUST be created as they are specified in this HCL. These are the default -# policies required by CSE to create TKGm clusters, hence nothing should be modified here. -resource "vcd_vm_sizing_policy" "tkg_xl" { - name = "TKG extra-large" - description = "Extra-large VM sizing policy for a Kubernetes cluster node (8 CPU, 32GB memory)" - cpu { - count = 8 - } - memory { - size_in_mb = "32768" - } -} - -resource "vcd_vm_sizing_policy" "tkg_l" { - name = "TKG large" - description = "Large VM sizing policy for a Kubernetes cluster node (4 CPU, 16GB memory)" - cpu { - count = 4 - } - memory { - size_in_mb = "16384" - } -} - -resource "vcd_vm_sizing_policy" "tkg_m" { - name = "TKG medium" - description = "Medium VM sizing policy for a Kubernetes cluster node (2 CPU, 8GB memory)" - cpu { - count = 2 - } - memory { - size_in_mb = "8192" - } -} - -resource "vcd_vm_sizing_policy" "tkg_s" { - name = "TKG small" - description = "Small VM sizing policy for a Kubernetes cluster node (2 CPU, 4GB memory)" - cpu { - count = 2 - } - memory { - size_in_mb = "4048" - } -} - # This section will create one VDC per organization. To create the VDCs we need to fetch some elements like # Provider VDC, Edge Clusters, etc. data "vcd_provider_vdc" "nsxt_pvdc" { @@ -149,6 +72,23 @@ data "vcd_nsxt_edge_cluster" "nsxt_edgecluster" { name = var.nsxt_edge_cluster_name } +# Fetch the VM Sizing Policies created in step 1 +data "vcd_vm_sizing_policy" "tkg_s" { + name = "TKG small" +} + +data "vcd_vm_sizing_policy" "tkg_m" { + name = "TKG medium" +} + +data "vcd_vm_sizing_policy" "tkg_l" { + name = "TKG large" +} + +data "vcd_vm_sizing_policy" "tkg_xl" { + name = "TKG extra-large" +} + # The VDC that will host the Kubernetes clusters. resource "vcd_org_vdc" "tenant_vdc" { name = "tenant_vdc" @@ -186,13 +126,13 @@ resource "vcd_org_vdc" "tenant_vdc" { delete_force = true delete_recursive = true - # Make sure you specify the required VM Sizing Policies managed by the resources specified above. - default_compute_policy_id = vcd_vm_sizing_policy.tkg_s.id + # Make sure you specify the required VM Sizing Policies managed by the data sources specified above. + default_compute_policy_id = data.vcd_vm_sizing_policy.tkg_s.id vm_sizing_policy_ids = [ - vcd_vm_sizing_policy.tkg_xl.id, - vcd_vm_sizing_policy.tkg_l.id, - vcd_vm_sizing_policy.tkg_m.id, - vcd_vm_sizing_policy.tkg_s.id, + data.vcd_vm_sizing_policy.tkg_xl.id, + data.vcd_vm_sizing_policy.tkg_l.id, + data.vcd_vm_sizing_policy.tkg_m.id, + data.vcd_vm_sizing_policy.tkg_s.id, ] } @@ -234,184 +174,6 @@ resource "vcd_org_vdc" "solutions_vdc" { delete_recursive = true } -# In this section we create two Catalogs, one to host all CSE Server OVAs and another one to host TKGm OVAs. 
-# They are created in the Solutions organization and only the TKGm will be shared as read-only. This will guarantee -# that only CSE admins can manage OVAs. -resource "vcd_catalog" "cse_catalog" { - org = vcd_org.solutions_organization.name # References the Solutions Organization created previously - name = "cse_catalog" - - delete_force = "true" - delete_recursive = "true" - - # In this example, everything is created from scratch, so it is needed to wait for the VDC to be available, so the - # Catalog can be created. - depends_on = [ - vcd_org_vdc.solutions_vdc - ] -} - -resource "vcd_catalog" "tkgm_catalog" { - org = vcd_org.solutions_organization.name # References the Solutions Organization - name = "tkgm_catalog" - - delete_force = "true" - delete_recursive = "true" - - # In this example, everything is created from scratch, so it is needed to wait for the VDC to be available, so the - # Catalog can be created. - depends_on = [ - vcd_org_vdc.solutions_vdc - ] -} - -# We share the TKGm Catalog with the Tenant Organization created previously. -resource "vcd_catalog_access_control" "tkgm_catalog_ac" { - org = vcd_org.solutions_organization.name # References the Solutions Organization created previously - catalog_id = vcd_catalog.tkgm_catalog.id - shared_with_everyone = false - shared_with { - org_id = vcd_org.tenant_organization.id # Shared with the Tenant Organization - access_level = "ReadOnly" - } -} - -# We upload a minimum set of OVAs for CSE to work. Read the official documentation to check -# where to find the OVAs: -# https://docs.vmware.com/en/VMware-Cloud-Director-Container-Service-Extension/index.html -resource "vcd_catalog_vapp_template" "tkgm_ova" { - org = vcd_org.solutions_organization.name # References the Solutions Organization created previously - catalog_id = vcd_catalog.tkgm_catalog.id # References the TKGm Catalog created previously - - name = replace(var.tkgm_ova_file, ".ova", "") - description = replace(var.tkgm_ova_file, ".ova", "") - ova_path = format("%s/%s", var.tkgm_ova_folder, var.tkgm_ova_file) -} - -resource "vcd_catalog_vapp_template" "cse_ova" { - org = vcd_org.solutions_organization.name # References the Solutions Organization created previously - catalog_id = vcd_catalog.cse_catalog.id # References the CSE Catalog created previously - - name = replace(var.cse_ova_file, ".ova", "") - description = replace(var.cse_ova_file, ".ova", "") - ova_path = format("%s/%s", var.cse_ova_folder, var.cse_ova_file) -} - -# Fetch the RDE Type created in 3.10-cse-4.0-install-step1.tf. This is required to be able to create the following -# Rights Bundle. -data "vcd_rde_type" "existing_capvcdcluster_type" { - vendor = "vmware" - nss = "capvcdCluster" - version = var.capvcd_rde_version -} - -# This resource manages the Rights Bundle required by tenants to create and consume Kubernetes clusters. 
-resource "vcd_rights_bundle" "k8s_clusters_rights_bundle" { - name = "Kubernetes Clusters Rights Bundle" - description = "Rights bundle with required rights for managing Kubernetes clusters" - rights = [ - "API Tokens: Manage", - "vApp: Allow All Extra Config", - "Catalog: View Published Catalogs", - "Organization vDC Shared Named Disk: Create", - "Organization vDC Gateway: View", - "Organization vDC Gateway: View NAT", - "Organization vDC Gateway: Configure NAT", - "Organization vDC Gateway: View Load Balancer", - "Organization vDC Gateway: Configure Load Balancer", - "${data.vcd_rde_type.existing_capvcdcluster_type.vendor}:${data.vcd_rde_type.existing_capvcdcluster_type.nss}: Administrator Full access", - "${data.vcd_rde_type.existing_capvcdcluster_type.vendor}:${data.vcd_rde_type.existing_capvcdcluster_type.nss}: Full Access", - "${data.vcd_rde_type.existing_capvcdcluster_type.vendor}:${data.vcd_rde_type.existing_capvcdcluster_type.nss}: Modify", - "${data.vcd_rde_type.existing_capvcdcluster_type.vendor}:${data.vcd_rde_type.existing_capvcdcluster_type.nss}: View", - "${data.vcd_rde_type.existing_capvcdcluster_type.vendor}:${data.vcd_rde_type.existing_capvcdcluster_type.nss}: Administrator View", - "General: Administrator View", - "Certificate Library: Manage", - "Access All Organization VDCs", - "Certificate Library: View", - "Organization vDC Named Disk: Create", - "Organization vDC Named Disk: Edit Properties", - "Organization vDC Named Disk: View Properties", - "vmware:tkgcluster: Full Access", - "vmware:tkgcluster: Modify", - "vmware:tkgcluster: View", - "vmware:tkgcluster: Administrator View", - "vmware:tkgcluster: Administrator Full access", - ] - publish_to_all_tenants = true # This needs to be published to all the Organizations -} - -# With the Rights Bundle specified above, we need also a new Role for tenant users who want to create and manage -# Kubernetes clusters. 
-resource "vcd_global_role" "k8s_cluster_author" { - name = "Kubernetes Cluster Author" - description = "Role to create Kubernetes clusters" - rights = [ - "API Tokens: Manage", - "Access All Organization VDCs", - "Catalog: Add vApp from My Cloud", - "Catalog: View Private and Shared Catalogs", - "Catalog: View Published Catalogs", - "Certificate Library: View", - "Organization vDC Compute Policy: View", - "Organization vDC Gateway: Configure Load Balancer", - "Organization vDC Gateway: Configure NAT", - "Organization vDC Gateway: View", - "Organization vDC Gateway: View Load Balancer", - "Organization vDC Gateway: View NAT", - "Organization vDC Named Disk: Create", - "Organization vDC Named Disk: Delete", - "Organization vDC Named Disk: Edit Properties", - "Organization vDC Named Disk: View Properties", - "Organization vDC Network: View Properties", - "Organization vDC Shared Named Disk: Create", - "Organization vDC: VM-VM Affinity Edit", - "Organization: View", - "UI Plugins: View", - "VAPP_VM_METADATA_TO_VCENTER", - "vApp Template / Media: Copy", - "vApp Template / Media: Edit", - "vApp Template / Media: View", - "vApp Template: Checkout", - "vApp: Allow All Extra Config", - "vApp: Copy", - "vApp: Create / Reconfigure", - "vApp: Delete", - "vApp: Download", - "vApp: Edit Properties", - "vApp: Edit VM CPU", - "vApp: Edit VM Hard Disk", - "vApp: Edit VM Memory", - "vApp: Edit VM Network", - "vApp: Edit VM Properties", - "vApp: Manage VM Password Settings", - "vApp: Power Operations", - "vApp: Sharing", - "vApp: Snapshot Operations", - "vApp: Upload", - "vApp: Use Console", - "vApp: VM Boot Options", - "vApp: View ACL", - "vApp: View VM metrics", - "${data.vcd_rde_type.existing_capvcdcluster_type.vendor}:${data.vcd_rde_type.existing_capvcdcluster_type.nss}: Administrator Full access", - "${data.vcd_rde_type.existing_capvcdcluster_type.vendor}:${data.vcd_rde_type.existing_capvcdcluster_type.nss}: Full Access", - "${data.vcd_rde_type.existing_capvcdcluster_type.vendor}:${data.vcd_rde_type.existing_capvcdcluster_type.nss}: Modify", - "${data.vcd_rde_type.existing_capvcdcluster_type.vendor}:${data.vcd_rde_type.existing_capvcdcluster_type.nss}: View", - "${data.vcd_rde_type.existing_capvcdcluster_type.vendor}:${data.vcd_rde_type.existing_capvcdcluster_type.nss}: Administrator View", - "vmware:tkgcluster: Full Access", - "vmware:tkgcluster: Modify", - "vmware:tkgcluster: View", - "vmware:tkgcluster: Administrator View", - "vmware:tkgcluster: Administrator Full access", - ] - - publish_to_all_tenants = true # This needs to be published to all the Organizations - - # As we use rights created by the CAPVCD Type created previously, we need to depend on it - depends_on = [ - vcd_rights_bundle.k8s_clusters_rights_bundle - ] -} - # The networking setup specified below will configure one Provider Gateway + Edge Gateway + Routed network per # organization. You can customise this section according to your needs. @@ -685,112 +447,3 @@ resource "vcd_nsxt_firewall" "tenant_firewall" { ip_protocol = "IPV4_IPV6" } } - -# Fetch the RDE Type created in 3.10-cse-4.0-install-step1.tf, as we need to create the configuration instance. -data "vcd_rde_type" "existing_vcdkeconfig_type" { - vendor = "vmware" - nss = "VCDKEConfig" - version = "1.0.0" -} - -# This RDE should be applied as it is. 
-resource "vcd_rde" "vcdkeconfig_instance" { - org = var.administrator_org - name = "vcdKeConfig" - rde_type_id = data.vcd_rde_type.existing_vcdkeconfig_type.id - resolve = true - input_entity = templatefile(var.vcdkeconfig_template_filepath, { - capvcd_version = var.capvcd_version - capvcd_rde_version = var.capvcd_rde_version - cpi_version = var.cpi_version - csi_version = var.csi_version - github_personal_access_token = var.github_personal_access_token - bootstrap_cluster_sizing_policy = vcd_vm_sizing_policy.tkg_s.name # References the small VM Sizing Policy - no_proxy = var.no_proxy - http_proxy = var.http_proxy - https_proxy = var.https_proxy - syslog_host = var.syslog_host - syslog_port = var.syslog_port - }) -} - -resource "vcd_vapp" "cse_server_vapp" { - org = vcd_org.solutions_organization.name - vdc = vcd_org_vdc.solutions_vdc.name - name = "CSE Server vApp" - - lease { - runtime_lease_in_sec = 0 - storage_lease_in_sec = 0 - } -} - -resource "vcd_vapp_org_network" "cse_server_network" { - org = vcd_org.solutions_organization.name - vdc = vcd_org_vdc.solutions_vdc.name - - vapp_name = vcd_vapp.cse_server_vapp.name - org_network_name = vcd_network_routed_v2.solutions_routed_network.name - - reboot_vapp_on_removal = true -} - -resource "vcd_vapp_vm" "cse_server_vm" { - org = vcd_org.solutions_organization.name - vdc = vcd_org_vdc.solutions_vdc.name - - vapp_name = vcd_vapp.cse_server_vapp.name - name = "CSE Server VM" - - vapp_template_id = vcd_catalog_vapp_template.cse_ova.id - - network { - type = "org" - name = vcd_vapp_org_network.cse_server_network.org_network_name - ip_allocation_mode = "POOL" - } - - guest_properties = { - - # VCD host - "cse.vcdHost" = var.vcd_url - - # CSE Server org - "cse.vAppOrg" = vcd_org.solutions_organization.name - - # CSE admin account's Access Token - "cse.vcdRefreshToken" = var.cse_admin_api_token - - # CSE admin account's username - "cse.vcdUsername" = var.cse_admin_user - - # CSE admin account's org - "cse.userOrg" = var.administrator_org - } - - customization { - force = false - enabled = true - allow_local_admin_password = true - auto_generate_password = true - } - - depends_on = [ - vcd_rde.vcdkeconfig_instance - ] -} - -data "vcd_org" "system_org" { - name = var.administrator_org -} - -resource vcd_ui_plugin "k8s_container_clusters_ui_plugin" { - count = var.k8s_container_clusters_ui_plugin_path == "" ? 0 : 1 - plugin_path = var.k8s_container_clusters_ui_plugin_path - enabled = true - tenant_ids = [ - data.vcd_org.system_org.id, - vcd_org.solutions_organization.id, - vcd_org.tenant_organization.id, - ] -} diff --git a/examples/container-service-extension/v4.1/install/step2/3.11-cse-install-6-ovas.tf b/examples/container-service-extension/v4.1/install/step2/3.11-cse-install-6-ovas.tf new file mode 100644 index 000000000..6e4112601 --- /dev/null +++ b/examples/container-service-extension/v4.1/install/step2/3.11-cse-install-6-ovas.tf @@ -0,0 +1,75 @@ +# ------------------------------------------------------------------------------------------------------------ +# CSE v4.1 installation: +# +# * Please read the guide present at https://registry.terraform.io/providers/vmware/vcd/latest/docs/guides/container_service_extension_4_x_install +# before applying this configuration. +# +# * Rename "terraform.tfvars.example" to "terraform.tfvars" and adapt the values to your needs. +# Other than that, this snippet should be applied as it is. +# You can check the comments on each resource/data source for more help and context. 
+# ------------------------------------------------------------------------------------------------------------ + +# In this section we create two Catalogs, one to host all CSE Server OVAs and another one to host TKGm OVAs. +# They are created in the Solutions organization and only the TKGm will be shared as read-only. This will guarantee +# that only CSE admins can manage OVAs. +resource "vcd_catalog" "cse_catalog" { + org = vcd_org.solutions_organization.name # References the Solutions Organization created previously + name = "cse_catalog" + + delete_force = "true" + delete_recursive = "true" + + # In this example, everything is created from scratch, so it is needed to wait for the VDC to be available, so the + # Catalog can be created. + depends_on = [ + vcd_org_vdc.solutions_vdc + ] +} + +resource "vcd_catalog" "tkgm_catalog" { + org = vcd_org.solutions_organization.name # References the Solutions Organization + name = "tkgm_catalog" + + delete_force = "true" + delete_recursive = "true" + + # In this example, everything is created from scratch, so it is needed to wait for the VDC to be available, so the + # Catalog can be created. + depends_on = [ + vcd_org_vdc.solutions_vdc + ] +} + +# We share the TKGm Catalog with the Tenant Organization created previously. +resource "vcd_catalog_access_control" "tkgm_catalog_ac" { + org = vcd_org.solutions_organization.name # References the Solutions Organization created previously + catalog_id = vcd_catalog.tkgm_catalog.id + shared_with_everyone = false + shared_with { + org_id = vcd_org.tenant_organization.id # Shared with the Tenant Organization + access_level = "ReadOnly" + } +} + +# We upload a minimum set of OVAs for CSE to work. Read the official documentation to check +# where to find the OVAs: +# https://docs.vmware.com/en/VMware-Cloud-Director-Container-Service-Extension/index.html +resource "vcd_catalog_vapp_template" "tkgm_ova" { + for_each = toset(var.tkgm_ova_files) + org = vcd_org.solutions_organization.name # References the Solutions Organization created previously + catalog_id = vcd_catalog.tkgm_catalog.id # References the TKGm Catalog created previously + + name = replace(each.key, ".ova", "") + description = replace(each.key, ".ova", "") + ova_path = format("%s/%s", var.tkgm_ova_folder, each.key) +} + +resource "vcd_catalog_vapp_template" "cse_ova" { + org = vcd_org.solutions_organization.name # References the Solutions Organization created previously + catalog_id = vcd_catalog.cse_catalog.id # References the CSE Catalog created previously + + name = replace(var.cse_ova_file, ".ova", "") + description = replace(var.cse_ova_file, ".ova", "") + ova_path = format("%s/%s", var.cse_ova_folder, var.cse_ova_file) +} + diff --git a/examples/container-service-extension/v4.1/install/step2/3.11-cse-install-7-cse-server-init.tf b/examples/container-service-extension/v4.1/install/step2/3.11-cse-install-7-cse-server-init.tf new file mode 100644 index 000000000..41622cefe --- /dev/null +++ b/examples/container-service-extension/v4.1/install/step2/3.11-cse-install-7-cse-server-init.tf @@ -0,0 +1,105 @@ +# ------------------------------------------------------------------------------------------------------------ +# CSE v4.1 installation: +# +# * Please read the guide present at https://registry.terraform.io/providers/vmware/vcd/latest/docs/guides/container_service_extension_4_x_install +# before applying this configuration. +# +# * Rename "terraform.tfvars.example" to "terraform.tfvars" and adapt the values to your needs. 
+# Other than that, this snippet should be applied as it is. +# You can check the comments on each resource/data source for more help and context. +# ------------------------------------------------------------------------------------------------------------ + +# Log in to VCD with the cse_admin username created above. This will be used to provision +# an API token that must be consumed by the CSE Server. +# This should not be changed. +provider "vcd" { + alias = "cse_admin" + url = "${var.vcd_url}/api" + user = var.cse_admin_username + password = var.cse_admin_password + auth_type = "integrated" + org = var.administrator_org + allow_unverified_ssl = var.insecure_login + logging = true + logging_file = "cse_install_cse_admin.log" +} + +# Generates an API token for the CSE Admin user, that will be used to instantiate the CSE Server. +# This should not be changed. +resource "vcd_api_token" "cse_admin_token" { + provider = vcd.cse_admin + name = "CSE Admin API Token" + file_name = var.cse_admin_api_token_file + allow_token_file = true +} + +data "local_file" "cse_admin_token_file" { + filename = vcd_api_token.cse_admin_token.file_name +} + +# This is the CSE Server vApp +resource "vcd_vapp" "cse_server_vapp" { + org = vcd_org.solutions_organization.name + vdc = vcd_org_vdc.solutions_vdc.name + name = "CSE Server vApp" + + lease { + runtime_lease_in_sec = 0 + storage_lease_in_sec = 0 + } +} + +# The CSE Server vApp network that will consume an existing routed network from +# the solutions organization. +resource "vcd_vapp_org_network" "cse_server_network" { + org = vcd_org.solutions_organization.name + vdc = vcd_org_vdc.solutions_vdc.name + + vapp_name = vcd_vapp.cse_server_vapp.name + org_network_name = vcd_network_routed_v2.solutions_routed_network.name + + reboot_vapp_on_removal = true +} + +# The CSE Server VM. It requires guest properties to be introduced for it to work +# properly. You can troubleshoot it by checking the cse.log file. 
+resource "vcd_vapp_vm" "cse_server_vm" { + org = vcd_org.solutions_organization.name + vdc = vcd_org_vdc.solutions_vdc.name + + vapp_name = vcd_vapp.cse_server_vapp.name + name = "CSE Server VM" + + vapp_template_id = vcd_catalog_vapp_template.cse_ova.id + + network { + type = "org" + name = vcd_vapp_org_network.cse_server_network.org_network_name + ip_allocation_mode = "POOL" + } + + guest_properties = { + + # VCD host + "cse.vcdHost" = var.vcd_url + + # CSE Server org + "cse.vAppOrg" = vcd_org.solutions_organization.name + + # CSE admin account's Access Token + "cse.vcdRefreshToken" = jsondecode(data.local_file.cse_admin_token_file.content)["refresh_token"] + + # CSE admin account's username + "cse.vcdUsername" = var.cse_admin_username + + # CSE admin account's org + "cse.userOrg" = vcd_org.solutions_organization.name + } + + customization { + force = false + enabled = true + allow_local_admin_password = true + auto_generate_password = true + } +} diff --git a/examples/container-service-extension/v4.1/install/step2/3.11-cse-install-8-optionals.tf b/examples/container-service-extension/v4.1/install/step2/3.11-cse-install-8-optionals.tf new file mode 100644 index 000000000..79fe239b2 --- /dev/null +++ b/examples/container-service-extension/v4.1/install/step2/3.11-cse-install-8-optionals.tf @@ -0,0 +1,27 @@ +# ------------------------------------------------------------------------------------------------------------ +# CSE v4.1 installation: +# +# * Please read the guide present at https://registry.terraform.io/providers/vmware/vcd/latest/docs/guides/container_service_extension_4_x_install +# before applying this configuration. +# +# * Rename "terraform.tfvars.example" to "terraform.tfvars" and adapt the values to your needs. +# Other than that, this snippet should be applied as it is. +# You can check the comments on each resource/data source for more help and context. +# ------------------------------------------------------------------------------------------------------------ + +# This resource installs the UI Plugin. It can be useful for tenant users that are not familiar with +# Terraform. +resource "vcd_ui_plugin" "k8s_container_clusters_ui_plugin" { + count = var.k8s_container_clusters_ui_plugin_path == "" ? 0 : 1 + plugin_path = var.k8s_container_clusters_ui_plugin_path + enabled = true + tenant_ids = [ + data.vcd_org.system_org.id, + vcd_org.solutions_organization.id, + vcd_org.tenant_organization.id, + ] +} + +data "vcd_org" "system_org" { + name = var.administrator_org +} diff --git a/examples/container-service-extension-4.0/install/step2/terraform.tfvars.example b/examples/container-service-extension/v4.1/install/step2/terraform.tfvars.example similarity index 76% rename from examples/container-service-extension-4.0/install/step2/terraform.tfvars.example rename to examples/container-service-extension/v4.1/install/step2/terraform.tfvars.example index 18fff1fb3..204a477bf 100644 --- a/examples/container-service-extension-4.0/install/step2/terraform.tfvars.example +++ b/examples/container-service-extension/v4.1/install/step2/terraform.tfvars.example @@ -1,10 +1,11 @@ -# Change configuration to your needs and rename to 'terraform.tfvars'. 
+# Change configuration to your needs and rename to 'terraform.tfvars' # For more details about the variables specified here, please read the guide first: -# https://registry.terraform.io/providers/vmware/vcd/latest/docs/guides/container_service_extension_4_0_install +# https://registry.terraform.io/providers/vmware/vcd/latest/docs/guides/container_service_extension_4_x_install # ------------------------------------------------ # VCD Provider config # ------------------------------------------------ + vcd_url = "https://vcd.my-awesome-corp.com" administrator_user = "administrator" administrator_password = "change-me" @@ -12,27 +13,15 @@ administrator_org = "System" insecure_login = "false" # ------------------------------------------------ -# NSX-T VDC setup +# Infrastructure # ------------------------------------------------ + # These variables are required to create both the Solutions NSX-T VDC and Tenant NSX-T VDC # The values here need to exist already in your VCD appliance. provider_vdc_name = "change-me" # Name of an existing PVDC that can be used to create VDCs nsxt_edge_cluster_name = "change-me" # Name of an existing NSX-T Edge Cluster that can be used to create VDCs network_pool_name = "change-me" # Name of an existing network pool that can be used to create VDCs -# ------------------------------------------------ -# Catalog and OVAs -# ------------------------------------------------ -# These variables are required to upload the necessary OVAs to the Solutions Organization shared catalog. -# You can find the download links in the guide referenced at the top of this file. -tkgm_ova_folder = "/home/changeme/tkgm-folder" # An existing absolute path to a folder containing TKGm OVAs -tkgm_ova_file = "ubuntu-2004-kube-v1.22.9+vmware.1-tkg.1-2182cbabee08edf480ee9bc5866d6933.ova" # An existing TKGm OVA -cse_ova_folder = "/home/changeme/cse-folder" # An existing absolute path to a folder containing CSE Server OVAs -cse_ova_file = "VMware_Cloud_Director_Container_Service_Extension-4.0.1.ova" # An existing CSE Server OVA - -# ------------------------------------------------ -# Values to create a basic networking setup -# ------------------------------------------------ # These variables are used to build a basic networking setup to run the CSE Server # and the TKGm clusters nsxt_manager_name = "change-me" # Name of an existing NSX-T manager, required to create the Provider Gateways @@ -43,8 +32,8 @@ solutions_nsxt_tier0_router_name = "change-me" # The name of solutions_provider_gateway_gateway_ip = "10.20.30.250" # Gateway IP to use in the Solutions Provider Gateway solutions_provider_gateway_gateway_prefix_length = "19" # Prefix length to use in the Solutions Provider Gateway solutions_provider_gateway_static_ip_ranges = [ # IP ranges to use in the Solutions Provider Gateway - ["10.20.30.16", "10.20.30.16"], # Single IP - ["10.20.30.20", "10.20.30.25"], # Many IPs + ["10.20.30.16", "10.20.30.16"], # Single IP + ["10.20.30.20", "10.20.30.25"], # Many IPs ] # These are all required to create the Tenant Organization Provider Gateway. 
@@ -53,8 +42,8 @@ tenant_nsxt_tier0_router_name = "change-me" # The name of an tenant_provider_gateway_gateway_ip = "10.30.20.150" # Gateway IP to use in the Tenant Provider Gateway tenant_provider_gateway_gateway_prefix_length = "19" # Prefix length to use in the Tenant Provider Gateway tenant_provider_gateway_static_ip_ranges = [ # IP ranges to use in the Tenant Provider Gateway - ["10.30.20.14", "10.30.20.14"], # Single IP - ["10.30.20.30", "10.30.20.37"], # Many IPs + ["10.30.20.14", "10.30.20.14"], # Single IP + ["10.30.20.30", "10.30.20.37"], # Many IPs ] # These will configure the Routed network for the Solutions Organization VDC. @@ -79,9 +68,6 @@ tenant_snat_internal_network_cidr = "10.0.0.0/16" # Required. It shou tenant_routed_network_dns = "" # Optional, if you need DNS tenant_routed_network_dns_suffix = "" # Optional, if you need DNS -# ------------------------------------------------ -# Values to create ALB setup -# ------------------------------------------------ # These are required to create a new ALB setup in VCD that will be used by TKGm clusters. # Your VCD should have an existing ALB deployment that will be imported, the values below must correspond to # the existing controller to be imported into VCD: @@ -91,24 +77,25 @@ alb_controller_url = "https://alb-ctrl.my-awesome-corp.com" # URL of the alb_importable_cloud_name = "change-me" # Name of the Cloud to import to create a Service Engine Group # ------------------------------------------------ -# CSE Server configuration +# Catalog and OVAs # ------------------------------------------------ -# These are required to create the Runtime Defined Entity that will contain the CSE Server configuration (vcdKeConfig) -# To know more about the specific versions, please refer to the CSE documentation. -# The values set here correspond to CSE v4.0: -vcdkeconfig_template_filepath = "../../entities/vcdkeconfig-template.json" -capvcd_version = "1.0.0" -capvcd_rde_version = "1.1.0" # Should be the same as the capvcd_rde_version used in Step 1 -cpi_version = "1.2.0" -csi_version = "1.3.0" -# Optional but recommended to avoid rate limiting when configuring the TKGm clusters. -# Create this one in https://github.com/settings/tokens -github_personal_access_token = "" +# These variables are required to upload the necessary OVAs to the Solutions Organization shared catalog. +# You can find the download links in the guide referenced at the top of this file. +tkgm_ova_folder = "/home/changeme/tkgm-folder" # An existing absolute path to a folder containing TKGm OVAs +tkgm_ova_files = [ # Existing TKGm OVAs + "ubuntu-2004-kube-v1.25.7+vmware.2-tkg.1-8a74b9f12e488c54605b3537acb683bc.ova" +] +cse_ova_folder = "/home/changeme/cse-folder" # An existing absolute path to a folder containing CSE Server OVAs +cse_ova_file = "VMware_Cloud_Director_Container_Service_Extension-4.1.0.ova" # An existing CSE Server OVA + +# ------------------------------------------------ +# CSE Server initialization +# ------------------------------------------------ -# This user was created in Step 1. You need to provide a valid API token for it -cse_admin_user = "cse-admin" -cse_admin_api_token = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" +cse_admin_username = "cse_admin" # This must be the same user created in step 1 +cse_admin_password = "change-me" # This must be the same password of the user created in step 1 +cse_admin_api_token_file = "cse_admin_api_token.json" # This file will contain the API token of the CSE Admin user, store it carefully. 
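+# Note: this file does not need to exist beforehand. It is written during 'terraform apply' by the
+# 'vcd_api_token' resource of this step, and its 'refresh_token' field is injected into the CSE Server VM
+# as the 'cse.vcdRefreshToken' guest property.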
# ------------------------------------------------ # Other configuration @@ -116,4 +103,4 @@ cse_admin_api_token = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" # This path points to the .zip file that contains the bundled Kubernetes Container Clusters UI Plugin. # It is optional: if not set, it won't be installed. # Remember to remove older CSE UI plugins if present (for example 3.x plugins) before installing this one. -k8s_container_clusters_ui_plugin_path = "/home/change-me/container-ui-plugin 4.0.zip" +k8s_container_clusters_ui_plugin_path = "/home/change-me/container-ui-plugin-4.1.zip" diff --git a/examples/container-service-extension-4.0/install/step2/variables.tf b/examples/container-service-extension/v4.1/install/step2/variables.tf similarity index 73% rename from examples/container-service-extension-4.0/install/step2/variables.tf rename to examples/container-service-extension/v4.1/install/step2/variables.tf index f4b64e4cc..0836d3e69 100644 --- a/examples/container-service-extension-4.0/install/step2/variables.tf +++ b/examples/container-service-extension/v4.1/install/step2/variables.tf @@ -32,22 +32,7 @@ variable "administrator_org" { } # ------------------------------------------------ -# CSE administrator user details -# ------------------------------------------------ - -variable "cse_admin_user" { - description = "The CSE administrator user created in previous step (Example: 'cse-admin')" - type = string -} - -variable "cse_admin_api_token" { - description = "The CSE administrator API token that should have been created before running this installation step" - type = string - sensitive = true -} - -# ------------------------------------------------ -# VDC setup +# Infrastructure # ------------------------------------------------ variable "provider_vdc_name" { @@ -65,34 +50,6 @@ variable "network_pool_name" { type = string } -# ------------------------------------------------ -# Catalog and OVAs -# ------------------------------------------------ - -variable "tkgm_ova_folder" { - description = "Absolute path to the TKGm OVA file, with no file name (Example: '/home/bob/Downloads/tkgm')" - type = string -} - -variable "tkgm_ova_file" { - description = "TKGm OVA file name, with no path (Example: 'ubuntu-2004-kube-v1.22.9+vmware.1-tkg.1-2182cbabee08edf480ee9bc5866d6933.ova')" - type = string -} - -variable "cse_ova_folder" { - description = "Absolute path to the CSE OVA file, with no file name (Example: '/home/bob/Downloads/cse')" - type = string -} - -variable "cse_ova_file" { - description = "CSE OVA file name, with no path (Example: 'VMware_Cloud_Director_Container_Service_Extension-4.0.1.62-21109756.ova')" - type = string -} - -# ------------------------------------------------ -# Networking -# ------------------------------------------------ - variable "nsxt_manager_name" { description = "NSX-T manager name, required to create the Provider Gateways" type = string @@ -222,9 +179,6 @@ variable "tenant_routed_network_dns_suffix" { default = "" } -# ------------------------------------------------ -# ALB -# ------------------------------------------------ variable "alb_controller_username" { description = "The user to create an ALB Controller with" type = string @@ -246,77 +200,54 @@ variable "alb_importable_cloud_name" { } # ------------------------------------------------ -# CSE Server +# Catalog and OVAs # ------------------------------------------------ -variable "vcdkeconfig_template_filepath" { - type = string - description = "Path to the VCDKEConfig JSON template" - default = 
"../../entities/vcdkeconfig-template.json" -} -variable "capvcd_version" { - type = string - description = "Version of CAPVCD" - default = "1.0.0" -} - -variable "capvcd_rde_version" { - type = string - description = "Version of the CAPVCD Runtime Defined Entity Type" - default = "1.1.0" -} - -variable "cpi_version" { +variable "tkgm_ova_folder" { + description = "Absolute path to the TKGm OVA files, with no file name (Example: '/home/bob/Downloads/tkgm')" type = string - description = "VCDKEConfig: Cloud Provider Interface version" - default = "1.2.0" } -variable "csi_version" { - type = string - description = "VCDKEConfig: Container Storage Interface version" - default = "1.3.0" +variable "tkgm_ova_files" { + description = "A set of TKGm OVA file names, with no path (Example: 'ubuntu-2004-kube-v1.25.7+vmware.2-tkg.1-8a74b9f12e488c54605b3537acb683bc.ova')" + type = set(string) } -variable "github_personal_access_token" { +variable "cse_ova_folder" { + description = "Absolute path to the CSE OVA file, with no file name (Example: '/home/bob/Downloads/cse')" type = string - description = "VCDKEConfig: Prevents potential github rate limiting errors during cluster creation and deletion" - sensitive = true } -variable "no_proxy" { +variable "cse_ova_file" { + description = "CSE OVA file name, with no path (Example: 'VMware_Cloud_Director_Container_Service_Extension-4.0.1.62-21109756.ova')" type = string - description = "VCDKEConfig: List of comma-separated domains without spaces" - default = "localhost,127.0.0.1,cluster.local,.svc" } -variable "http_proxy" { - type = string - description = "VCDKEConfig: Address of your HTTP proxy server" - default = "" -} +# ------------------------------------------------ +# CSE Server initialization +# ------------------------------------------------ -variable "https_proxy" { +variable "cse_admin_username" { + description = "The CSE administrator user that was created in step 1" type = string - description = "VCDKEConfig: Address of your HTTPS proxy server" - default = "" } -variable "syslog_host" { +variable "cse_admin_password" { + description = "The password to set for the CSE administrator user that was created in step 1" type = string - description = "VCDKEConfig: Domain for system logs" - default = "" + sensitive = true } -variable "syslog_port" { +variable "cse_admin_api_token_file" { + description = "The file where the API Token for the CSE Administrator will be stored" type = string - description = "VCDKEConfig: Port for system logs" - default = "" + default = "cse_admin_api_token.json" } # ------------------------------------------------ # Other configuration # ------------------------------------------------ + variable "k8s_container_clusters_ui_plugin_path" { type = string description = "Path to the Kubernetes Container Clusters UI Plugin zip file" diff --git a/examples/container-service-extension/v4.1/schemas/capvcd-type-schema-v1.2.0.json b/examples/container-service-extension/v4.1/schemas/capvcd-type-schema-v1.2.0.json new file mode 100644 index 000000000..f033b1c0d --- /dev/null +++ b/examples/container-service-extension/v4.1/schemas/capvcd-type-schema-v1.2.0.json @@ -0,0 +1,472 @@ +{ + "definitions": { + "k8sNetwork": { + "type": "object", + "description": "The network-related settings for the cluster.", + "properties": { + "pods": { + "type": "object", + "description": "The network settings for Kubernetes pods.", + "properties": { + "cidrBlocks": { + "type": "array", + "description": "Specifies a range of IP addresses to use for Kubernetes 
pods.", + "items": { + "type": "string" + } + } + } + }, + "services": { + "type": "object", + "description": "The network settings for Kubernetes services", + "properties": { + "cidrBlocks": { + "type": "array", + "description": "The range of IP addresses to use for Kubernetes services", + "items": { + "type": "string" + } + } + } + } + } + } + }, + "type": "object", + "required": [ + "kind", + "metadata", + "apiVersion", + "spec" + ], + "properties": { + "kind": { + "enum": [ + "CAPVCDCluster" + ], + "type": "string", + "description": "The kind of the Kubernetes cluster.", + "title": "The kind of the Kubernetes cluster.", + "default": "CAPVCDCluster" + }, + "spec": { + "type": "object", + "properties": { + "capiYaml": { + "type": "string", + "title": "CAPI yaml", + "description": "User specification of the CAPI yaml; It is user's responsibility to embed the correct CAPI yaml generated as per instructions - https://github.com/vmware/cluster-api-provider-cloud-director/blob/main/docs/CLUSTERCTL.md#generate-cluster-manifests-for-workload-cluster" + }, + "yamlSet": { + "type": "array", + "items": { + "type": "string" + }, + "title": "User specified K8s Yaml strings", + "description": "User specified K8s Yaml strings to be applied on the target cluster. The component Projector will process this property periodically." + }, + "vcdKe": { + "type": "object", + "properties": { + "isVCDKECluster": { + "type": "boolean", + "title": "User's intent to have this specification processed by VCDKE", + "description": "Does user wants this specification to be processed by the VCDKE component of CSE stack?" + }, + "markForDelete": { + "type": "boolean", + "title": "User's intent to delete the cluster", + "description": "Mark the cluster for deletion", + "default": false + }, + "autoRepairOnErrors": { + "type": "boolean", + "title": "User's intent to let the VCDKE repair/recreate the cluster", + "description": "User's intent to let the VCDKE repair/recreate the cluster on any errors during cluster creation", + "default": true + }, + "forceDelete": { + "type": "boolean", + "title": "User's intent to delete the cluster forcefully", + "description": "User's intent to delete the cluster forcefully", + "default": false + }, + "defaultStorageClassOptions": { + "type": "object", + "properties": { + "vcdStorageProfileName": { + "type": "string", + "title": "Name of the VCD storage profile", + "description": "Name of the VCD storage profile" + }, + "k8sStorageClassName": { + "type": "string", + "title": "Name of the Kubernetes storage class to be created", + "description": "Name of the Kubernetes storage class to be created" + }, + "useDeleteReclaimPolicy": { + "type": "boolean", + "title": "Reclaim policy of the Kubernetes storage class", + "description": "Reclaim policy of the Kubernetes storage class" + }, + "fileSystem": { + "type": "string", + "title": "Default file System of the volumes", + "description": "Default file System of the volumes to be created from the default storage class" + } + }, + "title": "Default Storage class options to be set on the target cluster", + "description": "Default Storage class options to be set on the target cluster" + }, + "secure": { + "type": "object", + "x-vcloud-restricted": ["private", "secure"], + "properties": { + "apiToken": { + "type": "string", + "title": "API Token (Refresh Token) of the user", + "description": "API Token (Refresh Token) of the user." 
+ } + }, + "title": "Encrypted data", + "description": "Fields under this section will be encrypted" + } + }, + "title": "User specification for VCDKE component", + "description": "User specification for VCDKE component" + } + }, + "title": "User specification for the cluster", + "description": "User specification for the cluster" + }, + "metadata": { + "type": "object", + "properties": { + "orgName": { + "type": "string", + "description": "The name of the Organization in which cluster needs to be created or managed.", + "title": "The name of the Organization in which cluster needs to be created or managed." + }, + "virtualDataCenterName": { + "type": "string", + "description": "The name of the Organization data center in which the cluster need to be created or managed.", + "title": "The name of the Organization data center in which the cluster need to be created or managed." + }, + "name": { + "type": "string", + "description": "The name of the cluster.", + "title": "The name of the cluster." + }, + "site": { + "type": "string", + "description": "Fully Qualified Domain Name (https://VCD-FQDN.com) of the VCD site in which the cluster is deployed", + "title": "Fully Qualified Domain Name of the VCD site in which the cluster is deployed" + } + }, + "title": "User specification of the metadata of the cluster", + "description": "User specification of the metadata of the cluster" + }, + "status": { + "type": "object", + "x-vcloud-restricted": "protected", + "properties": { + "capvcd": { + "type": "object", + "properties": { + "phase": { + "type": "string" + }, + "kubernetes": { + "type": "string" + }, + "errorSet": { + "type": "array", + "items": { + "type": "object", + "properties": {} + } + }, + "eventSet": { + "type": "array", + "items": { + "type": "object", + "properties": {} + } + }, + "k8sNetwork": { + "$ref": "#/definitions/k8sNetwork" + }, + "uid": { + "type": "string" + }, + "parentUid": { + "type": "string" + }, + "useAsManagementCluster": { + "type": "boolean" + }, + "clusterApiStatus": { + "type": "object", + "properties": { + "phase": { + "type": "string", + "description": "The phase describing the control plane infrastructure deployment." 
+ }, + "apiEndpoints": { + "type": "array", + "description": "Control Plane load balancer endpoints", + "items": { + "host": { + "type": "string" + }, + "port": { + "type": "integer" + } + } + } + } + }, + "nodePool": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": { + "type": "string", + "description": "name of the node pool" + }, + "sizingPolicy": { + "type": "string", + "description": "name of the sizing policy used by the node pool" + }, + "placementPolicy": { + "type": "string", + "description": "name of the sizing policy used by the node pool" + }, + "diskSizeMb": { + "type": "integer", + "description": "disk size of the VMs in the node pool in MB" + }, + "nvidiaGpuEnabled": { + "type": "boolean", + "description": "boolean indicating if the node pools have nvidia GPU enabled" + }, + "storageProfile": { + "type": "string", + "description": "storage profile used by the node pool" + }, + "desiredReplicas": { + "type": "integer", + "description": "desired replica count of the nodes in the node pool" + }, + "availableReplicas": { + "type": "integer", + "description": "number of available replicas in the node pool" + } + } + } + }, + "clusterResourceSet": { + "properties": {}, + "type": "object" + }, + "clusterResourceSetBindings": { + "type": "array", + "items": { + "type": "object", + "properties": { + "clusterResourceSetName": { + "type": "string" + }, + "kind": { + "type": "string" + }, + "name": { + "type": "string" + }, + "applied": { + "type": "boolean" + }, + "lastAppliedTime": { + "type": "string" + } + } + } + }, + "capvcdVersion": { + "type": "string" + }, + "vcdProperties": { + "type": "object", + "properties": { + "organizations": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": { + "type": "string" + }, + "id": { + "type": "string" + } + } + } + }, + "site": { + "type": "string" + }, + "orgVdcs": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": { + "type": "string" + }, + "id": { + "type": "string" + }, + "ovdcNetworkName": { + "type": "string" + } + } + } + } + } + }, + "upgrade": { + "type": "object", + "description": "determines the state of upgrade. If no upgrade is issued, only the existing version is stored.", + "properties": { + "current": { + "type": "object", + "properties": { + "kubernetesVersion": { + "type": "string", + "description": "current kubernetes version of the cluster. If being upgraded, will represent target kubernetes version of the cluster." + }, + "tkgVersion": { + "type": "string", + "description": "current TKG version of the cluster. If being upgraded, will represent the tarkget TKG version of the cluster." + } + } + }, + "previous": { + "type": "object", + "properties": { + "kubernetesVersion": { + "type": "string", + "description": "the kubernetes version from which the cluster was upgraded from. If cluster upgrade is still in progress, the field will represent the source kubernetes version from which the cluster is being upgraded." + }, + "tkgVersion": { + "type": "string", + "description": "the TKG version from which the cluster was upgraded from. If cluster upgrade is still in progress, the field will represent the source TKG versoin from which the cluster is being upgraded." + } + } + }, + "ready": { + "type": "boolean", + "description": "boolean indicating the status of the cluster upgrade." 
+ } + } + }, + "private": { + "type": "object", + "x-vcloud-restricted": ["private", "secure"], + "description": "Placeholder for the properties invisible and secure to non-admin users.", + "properties": { + "kubeConfig": { + "type": "string", + "description": "Kube config to access the Kubernetes cluster." + } + } + }, + "vcdResourceSet": { + "type": "array", + "items": { + "type": "object", + "properties": {} + } + }, + "createdByVersion": { + "type": "string", + "description": "CAPVCD version used to create the cluster" + } + }, + "title": "CAPVCD's view of the current status of the cluster", + "description": "CAPVCD's view of the current status of the cluster" + }, + "vcdKe": { + "type": "object", + "properties": { + "state": { + "type": "string", + "title": "VCDKE's view of the current state of the cluster", + "description": "VCDKE's view of the current state of the cluster - provisioning/provisioned/error" + }, + "vcdKeVersion": { + "type": "string", + "title": "VCDKE/CSE product version", + "description": "The VCDKE version with which the cluster is originally created" + }, + "defaultStorageClass": { + "type": "object", + "properties": { + "vcdStorageProfileName": { + "type": "string", + "title": "Name of the VCD storage profile", + "description": "Name of the VCD storage profile" + }, + "k8sStorageClassName": { + "type": "string", + "title": "Name of the Kubernetes storage class to be created", + "description": "Name of the Kubernetes storage class to be created" + }, + "useDeleteReclaimPolicy": { + "type": "boolean", + "title": "Reclaim policy of the Kubernetes storage class", + "description": "Reclaim policy of the Kubernetes storage class" + }, + "fileSystem": { + "type": "string", + "title": "Default file System of the volumes", + "description": "Default file System of the volumes to be created from the default storage class" + } + }, + "title": "Default Storage class options to be set on the target cluster", + "description": "Default Storage class options to be set on the target cluster" + } + }, + "title": "VCDKE's view of the current status of the cluster", + "description": "Current status of the cluster from VCDKE's point of view" + }, + "cpi": { + "type": "object", + "properties": { + "name": { + "type": "string", + "title": "Name of the Cloud Provider Interface", + "description": "Name of the CPI" + }, + "version": { + "type": "string", + "title": "Product version of the CPI", + "description": "Product version of the CPI" + } + }, + "title": "CPI for VCD's view of the current status of the cluster", + "description": "CPI for VCD's view of the current status of the cluster" + } + }, + "title": "Current status of the cluster", + "description": "Current status of the cluster. 
The subsections are updated by various components of CSE stack - VCDKE, Projector, CAPVCD, CPI, CSI and Extensions" + }, + "apiVersion": { + "type": "string", + "default": "capvcd.vmware.com/v1.2", + "description": "The version of the payload format" + } + } +} \ No newline at end of file diff --git a/examples/container-service-extension/v4.1/schemas/vcdkeconfig-type-schema-v1.1.0.json b/examples/container-service-extension/v4.1/schemas/vcdkeconfig-type-schema-v1.1.0.json new file mode 100644 index 000000000..1f721919a --- /dev/null +++ b/examples/container-service-extension/v4.1/schemas/vcdkeconfig-type-schema-v1.1.0.json @@ -0,0 +1,323 @@ +{ + "type": "object", + "properties": { + "profiles": { + "type": "array", + "items": [ + { + "type": "object", + "properties": { + "name": { + "type": "string" + }, + "active": { + "type": "boolean" + }, + "vcdKeInstances": { + "type": "array", + "items": [ + { + "type": "object", + "properties": { + "name": { + "type": "string" + }, + "version": { + "type": "string", + "default": "4.1.0" + }, + "vcdKeInstanceId": { + "type": "string" + } + } + } + ] + }, + "serverConfig": { + "type": "object", + "properties": { + "rdePollIntervalInMin": { + "type": "integer", + "description": "Server polls and processes the RDEs for every #rdePollIntervalInMin minutes." + }, + "heartbeatWatcherTimeoutInMin": { + "type": "integer", + "description": "The watcher thread kills itself if it does not receive heartbeat with in #heartbeatWatcherTimeoutInMin from the associated worker thread. Eventually worker also dies off as it can no longer post to the already closed heartbeat channel." + }, + "staleHeartbeatIntervalInMin": { + "type": "integer", + "description": "New worker waits for about #staleHeartbeatIntervalinMin before it calls the current heartbeat stale and picks up the RDE. 
The value must always be greater than #heartbeatWatcherTimeoutInmin" + } + } + }, + "vcdConfig": { + "type": "object", + "properties": { + "sysLogger": { + "type": "object", + "properties": { + "host": { + "type": "string" + }, + "port": { + "type": "string" + } + }, + "required": [ + "host", + "port" + ] + } + } + }, + "githubConfig": { + "type": "object", + "properties": { + "githubPersonalAccessToken": { + "type": "string" + } + } + }, + "bootstrapClusterConfig": { + "type": "object", + "properties": { + "sizingPolicy": { + "type": "string" + }, + "dockerVersion": { + "type": "string" + }, + "kindVersion": { + "type": "string", + "default": "v0.19.0" + }, + "kindestNodeVersion": { + "type": "string", + "default": "v1.27.1", + "description": "Image tag of kindest/node container, used by KinD to deploy a cluster" + }, + "kubectlVersion": { + "type": "string" + }, + "clusterctl": { + "type": "object", + "properties": { + "version": { + "type": "string", + "default": "v1.4.0" + }, + "clusterctlyaml": { + "type": "string" + } + } + }, + "capiEcosystem": { + "type": "object", + "properties": { + "coreCapiVersion": { + "type": "string", + "default": "v1.4.0" + }, + "controlPlaneProvider": { + "type": "object", + "properties": { + "name": { + "type": "string" + }, + "version": { + "type": "string", + "default": "v1.4.0" + } + } + }, + "bootstrapProvider": { + "type": "object", + "properties": { + "name": { + "type": "string" + }, + "version": { + "type": "string", + "default": "v1.4.0" + } + } + }, + "infraProvider": { + "type": "object", + "properties": { + "name": { + "type": "string" + }, + "version": { + "type": "string", + "default": "v1.1.0" + }, + "capvcdRde": { + "type": "object", + "properties": { + "vendor": { + "type": "string" + }, + "nss": { + "type": "string" + }, + "version": { + "type": "string" + } + } + } + } + }, + "certManagerVersion": { + "type": "string", + "default": "v1.11.1" + } + } + }, + "proxyConfig": { + "type": "object", + "properties": { + "httpProxy": { + "type": "string" + }, + "httpsProxy": { + "type": "string" + }, + "noProxy": { + "type": "string" + } + } + }, + "certificateAuthorities": { + "type": "array", + "description": "Certificates to be used as the certificate authority in the bootstrap (ephemeral) VM", + "items": { + "type": "string" + } + } + } + }, + "K8Config": { + "type": "object", + "properties": { + "csi": { + "type": "array", + "items": [ + { + "type": "object", + "properties": { + "name": { + "type": "string" + }, + "version": { + "type": "string", + "default": "1.4.0" + } + }, + "required": [ + "name", + "version" + ] + } + ] + }, + "cpi": { + "type": "object", + "properties": { + "name": { + "type": "string" + }, + "version": { + "type": "string", + "default": "1.4.0" + } + }, + "required": [ + "name", + "version" + ] + }, + "cni": { + "type": "object", + "properties": { + "name": { + "type": "string" + }, + "version": { + "type": "string" + } + }, + "required": [ + "name", + "version" + ] + }, + "rdeProjectorVersion": { + "type": "string", + "default": "0.6.0" + }, + "mhc": { + "type": "object", + "description": "Parameters to configure MachineHealthCheck", + "properties": { + "maxUnhealthyNodes": { + "type": "number", + "default": 100, + "minimum": 1, + "maximum": 100, + "description": "Dictates whether MHC should remediate the machine if the given percentage of nodes in the cluster are down" + }, + "nodeStartupTimeout": { + "type": "string", + "default": "900s", + "description": "Determines how long a MachineHealthCheck should wait for a 
Node to join the cluster, before considering a Machine unhealthy." + }, + "nodeNotReadyTimeout": { + "type": "string", + "default": "300s", + "description": "Determines how long MachineHealthCheck should wait for before remediating Machines if the Node Ready condition is False" + }, + "nodeUnknownTimeout": { + "type": "string", + "default": "300s", + "description": "Determines how long MachineHealthCheck should wait for before remediating machines if the Node Ready condition is Unknown" + } + }, + "required": [ + "maxUnhealthyNodes", + "nodeStartupTimeout", + "nodeNotReadyTimeout", + "nodeUnknownTimeout" + ] + }, + "certificateAuthorities": { + "type": "array", + "description": "Certificates to be used as the certificate authority", + "items": { + "type": "string" + } + } + }, + "required": [ + "csi", + "cpi", + "cni" + ] + }, + "containerRegistryUrl": { + "type": "string", + "default": "projects.registry.vmware.com" + } + }, + "required": [ + "name", + "active" + ] + } + ] + } + }, + "required": [ + "profiles" + ] +} diff --git a/go.mod b/go.mod index a744be7ca..958c25d8d 100644 --- a/go.mod +++ b/go.mod @@ -7,7 +7,7 @@ require ( github.com/hashicorp/go-version v1.6.0 github.com/hashicorp/terraform-plugin-sdk/v2 v2.29.0 github.com/kr/pretty v0.2.1 - github.com/vmware/go-vcloud-director/v2 v2.22.0-alpha.9 + github.com/vmware/go-vcloud-director/v2 v2.22.0-alpha.10 ) require ( @@ -66,6 +66,6 @@ require ( google.golang.org/protobuf v1.31.0 // indirect ) -replace github.com/vmware/go-vcloud-director/v2 => github.com/dataclouder/go-vcloud-director/v2 v2.17.0-alpha.3.0.20231026182842-23ede48cc1e8 +replace github.com/vmware/go-vcloud-director/v2 => github.com/dataclouder/go-vcloud-director/v2 v2.17.0-alpha.3.0.20231108073534-99eceec2b52a // replace github.com/vmware/go-vcloud-director/v2 => ../go-vcloud-director diff --git a/go.sum b/go.sum index 14642b180..f70194924 100644 --- a/go.sum +++ b/go.sum @@ -15,8 +15,8 @@ github.com/bwesterb/go-ristretto v1.2.3/go.mod h1:fUIoIZaG73pV5biE2Blr2xEzDoMj7N github.com/cloudflare/circl v1.3.3 h1:fE/Qz0QdIGqeWfnwq0RE0R7MI51s0M2E4Ga9kq5AEMs= github.com/cloudflare/circl v1.3.3/go.mod h1:5XYMA4rFBvNIrhs50XuiBJ15vF2pZn4nnUKZrLbUZFA= github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E= -github.com/dataclouder/go-vcloud-director/v2 v2.17.0-alpha.3.0.20231026182842-23ede48cc1e8 h1:RCSWd5LsHzSNZZtxwGKXrsaVGuczajafq8E33qcx/1I= -github.com/dataclouder/go-vcloud-director/v2 v2.17.0-alpha.3.0.20231026182842-23ede48cc1e8/go.mod h1:QPxGFgrUcSyzy9IlpwDE4UNT3tsOy2047tJOPEJ4nlw= +github.com/dataclouder/go-vcloud-director/v2 v2.17.0-alpha.3.0.20231108073534-99eceec2b52a h1:MY46Uv/vsQMjaEYh80P2xZQntq5mZTiPq7cZfiaUYD0= +github.com/dataclouder/go-vcloud-director/v2 v2.17.0-alpha.3.0.20231108073534-99eceec2b52a/go.mod h1:QPxGFgrUcSyzy9IlpwDE4UNT3tsOy2047tJOPEJ4nlw= github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= diff --git a/scripts/skip-upgrade-tests.txt b/scripts/skip-upgrade-tests.txt index 50a9eeca4..ed081dc95 100644 --- a/scripts/skip-upgrade-tests.txt +++ b/scripts/skip-upgrade-tests.txt @@ -295,6 +295,7 @@ vcd.TestAccVcdVmPlacementPolicyWithoutDescription.tf v3.9.0 "Changed 'descriptio vcd.TestAccVcdVmPlacementPolicy.tf v3.9.0 "Changed 'description' to Computed in 'vcd_vm_placement_policy'" vcd.ResourceSchema-vcd_vdc_group.tf v3.10.0 
"Added new field 'force_delete'" vcd.ResourceSchema-vcd_nsxt_alb_pool.tf v3.10.0 "added field 'ssl_enabled'" +vcd.ResourceSchema-vcd_org_vdc.tf v3.10.0 "field 'edge_cluster_id' becomes computed" vcd.ResourceSchema-vcd_nsxt_edgegateway.tf v3.10.0 "Added support for Segment backed external networks" vcd.ResourceSchema-vcd_vapp_vm.tf v3.11.0 "added fields 'firmware' and 'boot_options'" vcd.ResourceSchema-vcd_vm.tf v3.11.0 "added fields 'firmware' and 'boot_options'" diff --git a/vcd/config_test.go b/vcd/config_test.go index 4863f6919..89a24663a 100644 --- a/vcd/config_test.go +++ b/vcd/config_test.go @@ -168,6 +168,11 @@ type TestConfig struct { RoutedNetwork string `json:"routedNetwork"` IsolatedNetwork string `json:"isolatedNetwork"` DirectNetwork string `json:"directNetwork"` + IpDiscoveryProfile string `json:"ipDiscoveryProfile"` + MacDiscoveryProfile string `json:"macDiscoveryProfile"` + SpoofGuardProfile string `json:"spoofGuardProfile"` + QosProfile string `json:"qosProfile"` + SegmentSecurityProfile string `json:"segmentSecurityProfile"` } `json:"nsxt"` VSphere struct { ResourcePoolForVcd1 string `json:"resourcePoolForVcd1,omitempty"` diff --git a/vcd/datasource_not_found_test.go b/vcd/datasource_not_found_test.go index 14e4363db..758ade6e7 100644 --- a/vcd/datasource_not_found_test.go +++ b/vcd/datasource_not_found_test.go @@ -35,6 +35,24 @@ func TestAccDataSourceNotFound(t *testing.T) { func testSpecificDataSourceNotFound(dataSourceName string, vcdClient *VCDClient) func(*testing.T) { return func(t *testing.T) { + + type skipAlways struct { + dataSourceName string + reason string + } + + skipAlwaysSlice := []skipAlways{ + { + dataSourceName: "vcd_nsxt_global_default_segment_profile_template", + reason: "Global Default Segment Profile Template configuration is always available", + }, + } + for _, skip := range skipAlwaysSlice { + if dataSourceName == skip.dataSourceName { + t.Skipf("Skipping: %s", skip.reason) + } + } + // Skip subtest based on versions type skipOnVersion struct { skipVersionConstraint string @@ -96,6 +114,16 @@ func testSpecificDataSourceNotFound(dataSourceName string, vcdClient *VCDClient) "vcd_resource_pool", "vcd_network_pool", "vcd_nsxt_edgegateway_qos_profile", + "vcd_nsxt_segment_ip_discovery_profile", + "vcd_nsxt_segment_mac_discovery_profile", + "vcd_nsxt_segment_spoof_guard_profile", + "vcd_nsxt_segment_qos_profile", + "vcd_nsxt_segment_security_profile", + "vcd_org_vdc_nsxt_network_profile", + "vcd_nsxt_global_default_segment_profile_template", + "vcd_nsxt_network_segment_profile", + "vcd_nsxt_segment_profile_template", + "vcd_nsxt_network_context_profile", } dataSourcesRequiringAlbConfig := []string{ "vcd_nsxt_alb_cloud", @@ -142,10 +170,17 @@ func testSpecificDataSourceNotFound(dataSourceName string, vcdClient *VCDClient) "DataSourceName": dataSourceName, "MandatoryFields": addedParams, } + if dataSourceName == "vcd_nsxv_distributed_firewall" { params["MandatoryFields"] = `vdc_id = "deadbeef-dead-beef-dead-beefdeadbeef"` } + if dataSourceName == "vcd_org_vdc_nsxt_network_profile" { + config := `org = "` + testConfig.VCD.Org + `"` + "\n" + config += `vdc = "non-existing"` + "\n" + params["MandatoryFields"] = config + } + params["FuncName"] = "NotFoundDataSource-" + dataSourceName // Adding skip directive as running these tests in binary test mode add no value binaryTestSkipText := "# skip-binary-test: data source not found test only works in acceptance tests\n" diff --git a/vcd/datasource_vcd_nsxt_global_default_segment_profile_template.go 
b/vcd/datasource_vcd_nsxt_global_default_segment_profile_template.go new file mode 100644 index 000000000..57084aff4 --- /dev/null +++ b/vcd/datasource_vcd_nsxt_global_default_segment_profile_template.go @@ -0,0 +1,23 @@ +package vcd + +import ( + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" +) + +func datasourceVcdGlobalDefaultSegmentProfileTemplate() *schema.Resource { + return &schema.Resource{ + ReadContext: resourceDataSourceVcdGlobalDefaultSegmentProfileTemplateRead, + Schema: map[string]*schema.Schema{ + "vdc_networks_default_segment_profile_template_id": { + Type: schema.TypeString, + Computed: true, + Description: "Global default NSX-T Segment Profile for Org VDC networks", + }, + "vapp_networks_default_segment_profile_template_id": { + Type: schema.TypeString, + Computed: true, + Description: "Global default NSX-T Segment Profile for vApp networks", + }, + }, + } +} diff --git a/vcd/datasource_vcd_nsxt_network_segment_profile.go b/vcd/datasource_vcd_nsxt_network_segment_profile.go new file mode 100644 index 000000000..94aa08c5a --- /dev/null +++ b/vcd/datasource_vcd_nsxt_network_segment_profile.go @@ -0,0 +1,69 @@ +package vcd + +import ( + "context" + + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" +) + +func datasourceVcdNsxtOrgVdcNetworkSegmentProfileTemplate() *schema.Resource { + return &schema.Resource{ + ReadContext: dataSourceVcdNsxtOrgVdcNetworkSegmentProfileRead, + + Schema: map[string]*schema.Schema{ + "org": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: "The name of organization to use, optional if defined at provider " + + "level. Useful when connected as sysadmin working across different organizations", + }, + "org_network_id": { + Type: schema.TypeString, + Required: true, + Description: "ID of the Organization Network that uses the Segment Profile Template", + }, + "segment_profile_template_id": { + Type: schema.TypeString, + Computed: true, + Description: "Segment Profile Template ID", + }, + "segment_profile_template_name": { + Type: schema.TypeString, + Computed: true, + Description: "Segment Profile Template Name", + }, + // Individual Segment Profiles + "ip_discovery_profile_id": { + Type: schema.TypeString, + Computed: true, + Description: "NSX-T IP Discovery Profile", + }, + "mac_discovery_profile_id": { + Type: schema.TypeString, + Computed: true, + Description: "NSX-T Mac Discovery Profile", + }, + "spoof_guard_profile_id": { + Type: schema.TypeString, + Computed: true, + Description: "NSX-T Spoof Guard Profile", + }, + "qos_profile_id": { + Type: schema.TypeString, + Computed: true, + Description: "NSX-T QoS Profile", + }, + "segment_security_profile_id": { + Type: schema.TypeString, + Computed: true, + Description: "NSX-T Segment Security Profile", + }, + }, + } +} + +func dataSourceVcdNsxtOrgVdcNetworkSegmentProfileRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + return resourceDataSourceVcdNsxtOrgVdcNetworkSegmentProfileRead(ctx, d, meta, "datasource") +} diff --git a/vcd/datasource_vcd_nsxt_segment_ip_discovery_profile.go b/vcd/datasource_vcd_nsxt_segment_ip_discovery_profile.go new file mode 100644 index 000000000..fb6712609 --- /dev/null +++ b/vcd/datasource_vcd_nsxt_segment_ip_discovery_profile.go @@ -0,0 +1,152 @@ +package vcd + +import ( + "context" + "fmt" + "net/url" + + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" 
+) + +func datasourceVcdNsxtSegmentIpDiscoveryProfile() *schema.Resource { + return &schema.Resource{ + ReadContext: datasourceNsxtSegmentIpDiscoveryProfileRead, + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + Description: "Name of Segment IP Discovery Profile", + }, + "nsxt_manager_id": { + Type: schema.TypeString, + Optional: true, + ExactlyOneOf: []string{"nsxt_manager_id", "vdc_id", "vdc_group_id"}, + Description: "ID of NSX-T Manager", + }, + "vdc_id": { + Type: schema.TypeString, + Optional: true, + ExactlyOneOf: []string{"nsxt_manager_id", "vdc_id", "vdc_group_id"}, + Description: "ID of VDC", + }, + "vdc_group_id": { + Type: schema.TypeString, + Optional: true, + ExactlyOneOf: []string{"nsxt_manager_id", "vdc_id", "vdc_group_id"}, + Description: "ID of VDC Group", + }, + "description": { + Type: schema.TypeString, + Computed: true, + Description: "Description of Segment IP Discovery Profile", + }, + "arp_binding_limit": { + Type: schema.TypeInt, + Computed: true, + Description: "Indicates the number of ARP snooped IP addresses to be remembered per logical port", + }, + "arp_binding_timeout": { + Type: schema.TypeInt, + Computed: true, + Description: "Indicates ARP and ND cache timeout (in minutes)", + }, + "is_arp_snooping_enabled": { + Type: schema.TypeBool, + Computed: true, + Description: "Defines whether ARP snooping is enabled", + }, + "is_dhcp_snooping_v4_enabled": { + Type: schema.TypeBool, + Computed: true, + Description: "Defines whether DHCP snooping for IPv4 is enabled", + }, + "is_dhcp_snooping_v6_enabled": { + Type: schema.TypeBool, + Computed: true, + Description: "Defines whether DHCP snooping for IPv6 is enabled", + }, + "is_duplicate_ip_detection_enabled": { + Type: schema.TypeBool, + Computed: true, + Description: "Indicates whether duplicate IP detection is enabled", + }, + "is_nd_snooping_enabled": { + Type: schema.TypeBool, + Computed: true, + Description: "Indicates whether neighbor discovery (ND) snooping is enabled", + }, + "is_tofu_enabled": { + Type: schema.TypeBool, + Computed: true, + Description: "Defines whether 'Trust on First Use (TOFU)' paradigm is enabled", + }, + "is_vmtools_v4_enabled": { + Type: schema.TypeBool, + Computed: true, + Description: "Indicates whether fetching IPv4 address using vm-tools is enabled", + }, + "is_vmtools_v6_enabled": { + Type: schema.TypeBool, + Computed: true, + Description: "Indicates whether fetching IPv6 address using vm-tools is enabled", + }, + "nd_snooping_limit": { + Type: schema.TypeInt, + Computed: true, + Description: "Maximum number of Neighbor Discovery (ND) snooped IPv6 addresses", + }, + }, + } +} + +func datasourceNsxtSegmentIpDiscoveryProfileRead(_ context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + vcdClient := meta.(*VCDClient) + profileName := d.Get("name").(string) + + contextFilterField, contextUrn, err := getContextFilterField(d) + if err != nil { + return diag.FromErr(err) + } + + queryFilter := url.Values{} + queryFilter.Add("filter", fmt.Sprintf("%s==%s", contextFilterField, contextUrn)) + + ipDiscoveryProfile, err := vcdClient.GetIpDiscoveryProfileByName(profileName, queryFilter) + if err != nil { + return diag.Errorf("could not find IP Discovery Profile by name '%s': %s", profileName, err) + } + + dSet(d, "description", ipDiscoveryProfile.Description) + dSet(d, "arp_binding_limit", ipDiscoveryProfile.ArpBindingLimit) + dSet(d, "arp_binding_timeout", ipDiscoveryProfile.ArpNdBindingTimeout) + dSet(d, 
"is_arp_snooping_enabled", ipDiscoveryProfile.IsArpSnoopingEnabled) + dSet(d, "is_dhcp_snooping_v4_enabled", ipDiscoveryProfile.IsDhcpSnoopingV4Enabled) + dSet(d, "is_dhcp_snooping_v6_enabled", ipDiscoveryProfile.IsDhcpSnoopingV6Enabled) + dSet(d, "is_duplicate_ip_detection_enabled", ipDiscoveryProfile.IsDuplicateIPDetectionEnabled) + dSet(d, "is_nd_snooping_enabled", ipDiscoveryProfile.IsNdSnoopingEnabled) + dSet(d, "is_tofu_enabled", ipDiscoveryProfile.IsTofuEnabled) + dSet(d, "is_vmtools_v4_enabled", ipDiscoveryProfile.IsVMToolsV4Enabled) + dSet(d, "is_vmtools_v6_enabled", ipDiscoveryProfile.IsVMToolsV6Enabled) + dSet(d, "nd_snooping_limit", ipDiscoveryProfile.NdSnoopingLimit) + + d.SetId(ipDiscoveryProfile.ID) + + return nil +} + +// getContextFilterField determines which field should be used for filtering +func getContextFilterField(d *schema.ResourceData) (string, string, error) { + switch { + case d.Get("nsxt_manager_id").(string) != "": + return "nsxTManagerRef.id", d.Get("nsxt_manager_id").(string), nil + case d.Get("vdc_id").(string) != "": + return "orgVdcId", d.Get("vdc_id").(string), nil + case d.Get("vdc_group_id").(string) != "": + return "vdcGroupId", d.Get("vdc_group_id").(string), nil + + } + + return "", "", fmt.Errorf("unknown filtering field") +} diff --git a/vcd/datasource_vcd_nsxt_segment_mac_discovery_profile.go b/vcd/datasource_vcd_nsxt_segment_mac_discovery_profile.go new file mode 100644 index 000000000..4ea0ef61c --- /dev/null +++ b/vcd/datasource_vcd_nsxt_segment_mac_discovery_profile.go @@ -0,0 +1,107 @@ +package vcd + +import ( + "context" + "fmt" + "net/url" + + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" +) + +func datasourceVcdNsxtSegmentMacDiscoveryProfile() *schema.Resource { + return &schema.Resource{ + ReadContext: datasourceNsxtSegmentMacDiscoveryProfileRead, + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + Description: "Name of Segment MAC Discovery Profile", + }, + "nsxt_manager_id": { + Type: schema.TypeString, + Optional: true, + ExactlyOneOf: []string{"nsxt_manager_id", "vdc_id", "vdc_group_id"}, + Description: "ID of NSX-T Manager", + }, + "vdc_id": { + Type: schema.TypeString, + Optional: true, + ExactlyOneOf: []string{"nsxt_manager_id", "vdc_id", "vdc_group_id"}, + Description: "ID of VDC", + }, + "vdc_group_id": { + Type: schema.TypeString, + Optional: true, + ExactlyOneOf: []string{"nsxt_manager_id", "vdc_id", "vdc_group_id"}, + Description: "ID of VDC Group", + }, + "description": { + Type: schema.TypeString, + Computed: true, + Description: "Description of Segment MAC Discovery Profile", + }, + "is_mac_change_enabled": { + Type: schema.TypeBool, + Computed: true, + Description: "Indcates whether source MAC address change is enabled", + }, + "is_mac_learning_enabled": { + Type: schema.TypeBool, + Computed: true, + Description: "Indicates whether source MAC address learning is enabled", + }, + "is_unknown_unicast_flooding_enabled": { + Type: schema.TypeBool, + Computed: true, + Description: "Indicates whether unknown unicast flooding rule is enabled", + }, + "mac_learning_aging_time": { + Type: schema.TypeInt, + Computed: true, + Description: "Indicates aging time in seconds for learned MAC address", + }, + "mac_limit": { + Type: schema.TypeInt, + Computed: true, + Description: "Indicates the maximum number of MAC addresses that can be learned on this port", + }, + "mac_policy": { + Type: schema.TypeString, + Computed: 
true, + Description: "Defines the policy after MAC Limit is exceeded. It can be either 'ALLOW' or 'DROP'", + }, + }, + } +} + +func datasourceNsxtSegmentMacDiscoveryProfileRead(_ context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + vcdClient := meta.(*VCDClient) + profileName := d.Get("name").(string) + + contextFilterField, contextUrn, err := getContextFilterField(d) + if err != nil { + return diag.FromErr(err) + } + + queryFilter := url.Values{} + queryFilter.Add("filter", fmt.Sprintf("%s==%s", contextFilterField, contextUrn)) + + macDiscoveryProfile, err := vcdClient.GetMacDiscoveryProfileByName(profileName, queryFilter) + if err != nil { + return diag.Errorf("could not find MAC Discovery Profile by name '%s': %s", profileName, err) + } + + dSet(d, "description", macDiscoveryProfile.Description) + dSet(d, "is_mac_change_enabled", macDiscoveryProfile.IsMacChangeEnabled) + dSet(d, "is_mac_learning_enabled", macDiscoveryProfile.IsMacLearningEnabled) + dSet(d, "is_unknown_unicast_flooding_enabled", macDiscoveryProfile.IsUnknownUnicastFloodingEnabled) + dSet(d, "mac_learning_aging_time", macDiscoveryProfile.MacLearningAgingTime) + dSet(d, "mac_limit", macDiscoveryProfile.MacLimit) + dSet(d, "mac_policy", macDiscoveryProfile.MacPolicy) + + d.SetId(macDiscoveryProfile.ID) + + return nil +} diff --git a/vcd/datasource_vcd_nsxt_segment_profile_template.go b/vcd/datasource_vcd_nsxt_segment_profile_template.go new file mode 100644 index 000000000..f4d6af32f --- /dev/null +++ b/vcd/datasource_vcd_nsxt_segment_profile_template.go @@ -0,0 +1,71 @@ +package vcd + +import ( + "context" + + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" +) + +func datasourceVcdSegmentProfileTemplate() *schema.Resource { + return &schema.Resource{ + ReadContext: datasourceVcdSegmentProfileTemplateRead, + + Schema: map[string]*schema.Schema{ + + "name": { + Type: schema.TypeString, + Required: true, + Description: "Name of Segment Profile Template", + }, + "description": { + Type: schema.TypeString, + Computed: true, + Description: "Description of Segment Profile Template", + }, + "nsxt_manager_id": { + Type: schema.TypeString, + Computed: true, + Description: "NSX-T Manager ID", + }, + "ip_discovery_profile_id": { + Type: schema.TypeString, + Computed: true, + Description: "Segment IP Discovery Profile ID", + }, + "mac_discovery_profile_id": { + Type: schema.TypeString, + Computed: true, + Description: "Segment MAC Discovery Profile ID", + }, + "spoof_guard_profile_id": { + Type: schema.TypeString, + Computed: true, + Description: "Segment Spoof Guard Profile ID", + }, + "qos_profile_id": { + Type: schema.TypeString, + Computed: true, + Description: "Segment QoS Profile ID", + }, + "segment_security_profile_id": { + Type: schema.TypeString, + Computed: true, + Description: "Segment Security Profile ID", + }, + }, + } +} + +func datasourceVcdSegmentProfileTemplateRead(_ context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + vcdClient := meta.(*VCDClient) + segmentProfileTemplate, err := vcdClient.GetSegmentProfileTemplateByName(d.Get("name").(string)) + if err != nil { + return diag.FromErr(err) + } + + setNsxtSegmentProfileTemplateData(d, segmentProfileTemplate.NsxtSegmentProfileTemplate) + d.SetId(segmentProfileTemplate.NsxtSegmentProfileTemplate.ID) + + return nil +} diff --git a/vcd/datasource_vcd_nsxt_segment_profiles_test.go b/vcd/datasource_vcd_nsxt_segment_profiles_test.go new file mode 100644 
index 000000000..750f540ef --- /dev/null +++ b/vcd/datasource_vcd_nsxt_segment_profiles_test.go @@ -0,0 +1,332 @@ +//go:build network || nsxt || ALL || functional + +package vcd + +import ( + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" +) + +func TestAccVcdDataSourceNsxtSegmentProfiles(t *testing.T) { + preTestChecks(t) + skipIfNotSysAdmin(t) + + // String map to fill the template + var params = StringMap{ + "TestName": t.Name(), + "OrgName": testConfig.VCD.Org, + "VdcName": testConfig.Nsxt.Vdc, + "VdcGroupName": testConfig.Nsxt.VdcGroup, + "NsxtManager": testConfig.Nsxt.Manager, + + "IpDiscoveryProfileName": testConfig.Nsxt.IpDiscoveryProfile, + "MacDiscoveryProfileName": testConfig.Nsxt.MacDiscoveryProfile, + "QosProfileName": testConfig.Nsxt.QosProfile, + "SpoofGuardProfileName": testConfig.Nsxt.SpoofGuardProfile, + "SegmentSecurityProfileName": testConfig.Nsxt.SegmentSecurityProfile, + + "Tags": "nsxt", + } + testParamsNotEmpty(t, params) + + configText1 := templateFill(testAccVcdDataSourceNsxtSegmentProfilesByNsxtManager, params) + debugPrintf("#[DEBUG] CONFIGURATION for step 1: %s", configText1) + + params["FuncName"] = t.Name() + "step2" + configText2 := templateFill(testAccVcdDataSourceNsxtSegmentProfilesByVdcId, params) + debugPrintf("#[DEBUG] CONFIGURATION for step 2: %s", configText2) + + params["FuncName"] = t.Name() + "step3" + configText3 := templateFill(testAccVcdDataSourceNsxtSegmentProfilesByVdcGroupId, params) + debugPrintf("#[DEBUG] CONFIGURATION for step 3: %s", configText3) + + if vcdShortTest { + t.Skip(acceptanceTestsSkipped) + return + } + + resource.Test(t, resource.TestCase{ + ProviderFactories: testAccProviders, + Steps: []resource.TestStep{ + { + Config: configText1, + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "id"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "description"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "arp_binding_limit"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "arp_binding_timeout"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "is_arp_snooping_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "is_dhcp_snooping_v4_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "is_dhcp_snooping_v6_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "is_duplicate_ip_detection_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "is_nd_snooping_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "is_tofu_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "is_vmtools_v4_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "is_vmtools_v6_enabled"), + + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_mac_discovery_profile.first", "id"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_mac_discovery_profile.first", "description"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_mac_discovery_profile.first", "is_mac_change_enabled"), + 
resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_mac_discovery_profile.first", "is_mac_learning_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_mac_discovery_profile.first", "is_unknown_unicast_flooding_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_mac_discovery_profile.first", "mac_learning_aging_time"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_mac_discovery_profile.first", "mac_limit"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_mac_discovery_profile.first", "mac_policy"), + + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_spoof_guard_profile.first", "id"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_spoof_guard_profile.first", "description"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_spoof_guard_profile.first", "is_address_binding_whitelist_enabled"), + + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "id"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "description"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "class_of_service"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "dscp_priority"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "dscp_trust_mode"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "egress_rate_limiter_avg_bandwidth"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "egress_rate_limiter_burst_size"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "egress_rate_limiter_peak_bandwidth"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "ingress_broadcast_rate_limiter_avg_bandwidth"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "ingress_broadcast_rate_limiter_burst_size"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "ingress_broadcast_rate_limiter_peak_bandwidth"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "ingress_rate_limiter_avg_bandwidth"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "ingress_rate_limiter_burst_size"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "ingress_rate_limiter_peak_bandwidth"), + + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "id"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "description"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "bpdu_filter_allow_list.#"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "is_bpdu_filter_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "is_dhcp_v4_client_block_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "is_dhcp_v6_client_block_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "is_dhcp_v4_server_block_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "is_dhcp_v6_server_block_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "is_non_ip_traffic_block_enabled"), + 
resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "is_ra_guard_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "is_rate_limitting_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "rx_broadcast_limit"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "rx_multicast_limit"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "tx_broadcast_limit"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "tx_multicast_limit"), + ), + }, + { + Config: configText2, + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "id"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "description"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "arp_binding_limit"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "arp_binding_timeout"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "is_arp_snooping_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "is_dhcp_snooping_v4_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "is_dhcp_snooping_v6_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "is_duplicate_ip_detection_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "is_nd_snooping_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "is_tofu_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "is_vmtools_v4_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "is_vmtools_v6_enabled"), + + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_mac_discovery_profile.first", "id"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_mac_discovery_profile.first", "description"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_mac_discovery_profile.first", "is_mac_change_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_mac_discovery_profile.first", "is_mac_learning_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_mac_discovery_profile.first", "is_unknown_unicast_flooding_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_mac_discovery_profile.first", "mac_learning_aging_time"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_mac_discovery_profile.first", "mac_limit"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_mac_discovery_profile.first", "mac_policy"), + + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_spoof_guard_profile.first", "id"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_spoof_guard_profile.first", "description"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_spoof_guard_profile.first", "is_address_binding_whitelist_enabled"), + + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "id"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "description"), + 
resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "class_of_service"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "dscp_priority"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "dscp_trust_mode"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "egress_rate_limiter_avg_bandwidth"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "egress_rate_limiter_burst_size"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "egress_rate_limiter_peak_bandwidth"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "ingress_broadcast_rate_limiter_avg_bandwidth"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "ingress_broadcast_rate_limiter_burst_size"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "ingress_broadcast_rate_limiter_peak_bandwidth"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "ingress_rate_limiter_avg_bandwidth"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "ingress_rate_limiter_burst_size"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "ingress_rate_limiter_peak_bandwidth"), + + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "id"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "description"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "bpdu_filter_allow_list.#"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "is_bpdu_filter_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "is_dhcp_v4_client_block_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "is_dhcp_v6_client_block_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "is_dhcp_v4_server_block_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "is_dhcp_v6_server_block_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "is_non_ip_traffic_block_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "is_ra_guard_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "is_rate_limitting_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "rx_broadcast_limit"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "rx_multicast_limit"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "tx_broadcast_limit"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "tx_multicast_limit"), + ), + }, + { + Config: configText3, + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "id"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "description"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "arp_binding_limit"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "arp_binding_timeout"), + 
resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "is_arp_snooping_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "is_dhcp_snooping_v4_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "is_dhcp_snooping_v6_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "is_duplicate_ip_detection_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "is_nd_snooping_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "is_tofu_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "is_vmtools_v4_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_ip_discovery_profile.first", "is_vmtools_v6_enabled"), + + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_mac_discovery_profile.first", "id"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_mac_discovery_profile.first", "description"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_mac_discovery_profile.first", "is_mac_change_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_mac_discovery_profile.first", "is_mac_learning_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_mac_discovery_profile.first", "is_unknown_unicast_flooding_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_mac_discovery_profile.first", "mac_learning_aging_time"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_mac_discovery_profile.first", "mac_limit"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_mac_discovery_profile.first", "mac_policy"), + + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_spoof_guard_profile.first", "id"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_spoof_guard_profile.first", "description"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_spoof_guard_profile.first", "is_address_binding_whitelist_enabled"), + + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "id"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "description"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "class_of_service"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "dscp_priority"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "dscp_trust_mode"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "egress_rate_limiter_avg_bandwidth"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "egress_rate_limiter_burst_size"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "egress_rate_limiter_peak_bandwidth"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "ingress_broadcast_rate_limiter_avg_bandwidth"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "ingress_broadcast_rate_limiter_burst_size"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "ingress_broadcast_rate_limiter_peak_bandwidth"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "ingress_rate_limiter_avg_bandwidth"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", 
"ingress_rate_limiter_burst_size"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_qos_profile.first", "ingress_rate_limiter_peak_bandwidth"), + + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "id"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "description"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "bpdu_filter_allow_list.#"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "is_bpdu_filter_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "is_dhcp_v4_client_block_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "is_dhcp_v6_client_block_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "is_dhcp_v4_server_block_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "is_dhcp_v6_server_block_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "is_non_ip_traffic_block_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "is_ra_guard_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "is_rate_limitting_enabled"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "rx_broadcast_limit"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "rx_multicast_limit"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "tx_broadcast_limit"), + resource.TestCheckResourceAttrSet("data.vcd_nsxt_segment_security_profile.first", "tx_multicast_limit"), + ), + }, + }, + }) +} + +const testAccVcdDataSourceNsxtSegmentProfilesByNsxtManager = ` +data "vcd_nsxt_manager" "nsxt" { + name = "{{.NsxtManager}}" +} + +data "vcd_nsxt_segment_ip_discovery_profile" "first" { + name = "{{.IpDiscoveryProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_mac_discovery_profile" "first" { + name = "{{.MacDiscoveryProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_spoof_guard_profile" "first" { + name = "{{.SpoofGuardProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_qos_profile" "first" { + name = "{{.QosProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_security_profile" "first" { + name = "{{.SegmentSecurityProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} +` + +const testAccVcdDataSourceNsxtSegmentProfilesByVdcId = ` +data "vcd_org_vdc" "nsxt" { + org = "{{.OrgName}}" + name = "{{.VdcName}}" +} + +data "vcd_nsxt_segment_ip_discovery_profile" "first" { + name = "{{.IpDiscoveryProfileName}}" + vdc_id = data.vcd_org_vdc.nsxt.id +} + +data "vcd_nsxt_segment_mac_discovery_profile" "first" { + name = "{{.MacDiscoveryProfileName}}" + vdc_id = data.vcd_org_vdc.nsxt.id +} + +data "vcd_nsxt_segment_spoof_guard_profile" "first" { + name = "{{.SpoofGuardProfileName}}" + vdc_id = data.vcd_org_vdc.nsxt.id +} + +data "vcd_nsxt_segment_qos_profile" "first" { + name = "{{.QosProfileName}}" + vdc_id = data.vcd_org_vdc.nsxt.id +} + +data "vcd_nsxt_segment_security_profile" "first" { + name = "{{.SegmentSecurityProfileName}}" + vdc_id = data.vcd_org_vdc.nsxt.id +} +` + +const 
testAccVcdDataSourceNsxtSegmentProfilesByVdcGroupId = ` +data "vcd_vdc_group" "nsxt" { + org = "{{.OrgName}}" + name = "{{.VdcGroupName}}" +} + +data "vcd_nsxt_segment_ip_discovery_profile" "first" { + name = "{{.IpDiscoveryProfileName}}" + vdc_group_id = data.vcd_vdc_group.nsxt.id +} + +data "vcd_nsxt_segment_mac_discovery_profile" "first" { + name = "{{.MacDiscoveryProfileName}}" + vdc_group_id = data.vcd_vdc_group.nsxt.id +} + +data "vcd_nsxt_segment_spoof_guard_profile" "first" { + name = "{{.SpoofGuardProfileName}}" + vdc_group_id = data.vcd_vdc_group.nsxt.id +} + +data "vcd_nsxt_segment_qos_profile" "first" { + name = "{{.QosProfileName}}" + vdc_group_id = data.vcd_vdc_group.nsxt.id +} + +data "vcd_nsxt_segment_security_profile" "first" { + name = "{{.SegmentSecurityProfileName}}" + vdc_group_id = data.vcd_vdc_group.nsxt.id +} +` diff --git a/vcd/datasource_vcd_nsxt_segment_qos_profile.go b/vcd/datasource_vcd_nsxt_segment_qos_profile.go new file mode 100644 index 000000000..fdf24bba7 --- /dev/null +++ b/vcd/datasource_vcd_nsxt_segment_qos_profile.go @@ -0,0 +1,143 @@ +package vcd + +import ( + "context" + "fmt" + "net/url" + + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" +) + +func datasourceVcdNsxtSegmentQosProfile() *schema.Resource { + return &schema.Resource{ + ReadContext: datasourceNsxtSegmentQosProfileRead, + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + Description: "Name of Segment QoS Profile", + }, + "nsxt_manager_id": { + Type: schema.TypeString, + Optional: true, + ExactlyOneOf: []string{"nsxt_manager_id", "vdc_id", "vdc_group_id"}, + Description: "ID of NSX-T Manager", + }, + "vdc_id": { + Type: schema.TypeString, + Optional: true, + ExactlyOneOf: []string{"nsxt_manager_id", "vdc_id", "vdc_group_id"}, + Description: "ID of VDC", + }, + "vdc_group_id": { + Type: schema.TypeString, + Optional: true, + ExactlyOneOf: []string{"nsxt_manager_id", "vdc_id", "vdc_group_id"}, + Description: "ID of VDC Group", + }, + "description": { + Type: schema.TypeString, + Computed: true, + Description: "Description of Segment QoS Profile", + }, + "class_of_service": { + Type: schema.TypeInt, + Computed: true, + Description: "Groups similar types of traffic in the network and each type of traffic is treated as a class with its own level of service priority", + }, + "dscp_priority": { + Type: schema.TypeInt, + Computed: true, + Description: "Differentiated Services Code Point priority", + }, + "dscp_trust_mode": { + Type: schema.TypeString, + Computed: true, + Description: "Differentiated Services Code Point trust mode", + }, + "egress_rate_limiter_avg_bandwidth": { + Type: schema.TypeInt, + Computed: true, + Description: "Average bandwidth in Mb/s", + }, + "egress_rate_limiter_burst_size": { + Type: schema.TypeInt, + Computed: true, + Description: "Burst size in bytes", + }, + "egress_rate_limiter_peak_bandwidth": { + Type: schema.TypeInt, + Computed: true, + Description: "Peak bandwidth in Mb/s", + }, + "ingress_broadcast_rate_limiter_avg_bandwidth": { + Type: schema.TypeInt, + Computed: true, + Description: "Average bandwidth in Mb/s", + }, + "ingress_broadcast_rate_limiter_burst_size": { + Type: schema.TypeInt, + Computed: true, + Description: "Burst size in bytes", + }, + "ingress_broadcast_rate_limiter_peak_bandwidth": { + Type: schema.TypeInt, + Computed: true, + Description: "Peak bandwidth in Mb/s", + }, + "ingress_rate_limiter_avg_bandwidth": { + Type: 
schema.TypeInt, + Computed: true, + Description: "Average bandwidth in Mb/s", + }, + "ingress_rate_limiter_burst_size": { + Type: schema.TypeInt, + Computed: true, + Description: "Burst size in bytes", + }, + "ingress_rate_limiter_peak_bandwidth": { + Type: schema.TypeInt, + Computed: true, + Description: "Peak bandwidth in Mb/s", + }, + }, + } +} + +func datasourceNsxtSegmentQosProfileRead(_ context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + vcdClient := meta.(*VCDClient) + profileName := d.Get("name").(string) + + contextFilterField, contextUrn, err := getContextFilterField(d) + if err != nil { + return diag.FromErr(err) + } + + queryFilter := url.Values{} + queryFilter.Add("filter", fmt.Sprintf("%s==%s", contextFilterField, contextUrn)) + + qosProfile, err := vcdClient.GetQoSProfileByName(profileName, queryFilter) + if err != nil { + return diag.Errorf("could not find QoS Profile by name '%s': %s", profileName, err) + } + + dSet(d, "description", qosProfile.Description) + dSet(d, "class_of_service", qosProfile.ClassOfService) + dSet(d, "dscp_priority", qosProfile.DscpConfig.Priority) + dSet(d, "dscp_trust_mode", qosProfile.DscpConfig.TrustMode) + dSet(d, "egress_rate_limiter_avg_bandwidth", qosProfile.EgressRateLimiter.AvgBandwidth) + dSet(d, "egress_rate_limiter_burst_size", qosProfile.EgressRateLimiter.BurstSize) + dSet(d, "egress_rate_limiter_peak_bandwidth", qosProfile.EgressRateLimiter.PeakBandwidth) + dSet(d, "ingress_broadcast_rate_limiter_avg_bandwidth", qosProfile.IngressBroadcastRateLimiter.AvgBandwidth) + dSet(d, "ingress_broadcast_rate_limiter_burst_size", qosProfile.IngressBroadcastRateLimiter.BurstSize) + dSet(d, "ingress_broadcast_rate_limiter_peak_bandwidth", qosProfile.IngressBroadcastRateLimiter.PeakBandwidth) + dSet(d, "ingress_rate_limiter_avg_bandwidth", qosProfile.IngressRateLimiter.AvgBandwidth) + dSet(d, "ingress_rate_limiter_burst_size", qosProfile.IngressRateLimiter.BurstSize) + dSet(d, "ingress_rate_limiter_peak_bandwidth", qosProfile.IngressRateLimiter.PeakBandwidth) + + d.SetId(qosProfile.ID) + + return nil +} diff --git a/vcd/datasource_vcd_nsxt_segment_security_profile.go b/vcd/datasource_vcd_nsxt_segment_security_profile.go new file mode 100644 index 000000000..e37b3b439 --- /dev/null +++ b/vcd/datasource_vcd_nsxt_segment_security_profile.go @@ -0,0 +1,158 @@ +package vcd + +import ( + "context" + "fmt" + "net/url" + + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" +) + +func datasourceVcdNsxtSegmentSecurityProfile() *schema.Resource { + return &schema.Resource{ + ReadContext: datasourceNsxtSegmentSecurityProfileRead, + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + Description: "Name of Segment Security Profile", + }, + "nsxt_manager_id": { + Type: schema.TypeString, + Optional: true, + ExactlyOneOf: []string{"nsxt_manager_id", "vdc_id", "vdc_group_id"}, + Description: "ID of NSX-T Manager", + }, + "vdc_id": { + Type: schema.TypeString, + Optional: true, + ExactlyOneOf: []string{"nsxt_manager_id", "vdc_id", "vdc_group_id"}, + Description: "ID of VDC", + }, + "vdc_group_id": { + Type: schema.TypeString, + Optional: true, + ExactlyOneOf: []string{"nsxt_manager_id", "vdc_id", "vdc_group_id"}, + Description: "ID of VDC Group", + }, + "description": { + Type: schema.TypeString, + Computed: true, + Description: "Description of Segment Security Profile", + }, + "bpdu_filter_allow_list": { + Type: schema.TypeSet, + 
Optional: true, + Description: "Indicates pre-defined list of allowed MAC addresses to be excluded from BPDU filtering", + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "is_bpdu_filter_enabled": { + Type: schema.TypeBool, + Computed: true, + Description: "Indicates whether BPDU filter is enabled", + }, + "is_dhcp_v4_client_block_enabled": { + Type: schema.TypeBool, + Computed: true, + Description: "Indicates whether DHCP Client block IPv4 is enabled", + }, + "is_dhcp_v6_client_block_enabled": { + Type: schema.TypeBool, + Computed: true, + Description: "Indicates whether DHCP Client block IPv6 is enabled", + }, + "is_dhcp_v4_server_block_enabled": { + Type: schema.TypeBool, + Computed: true, + Description: "Indicates whether DHCP Server block IPv4 is enabled", + }, + "is_dhcp_v6_server_block_enabled": { + Type: schema.TypeBool, + Computed: true, + Description: "Indicates whether DHCP Server block IPv6 is enabled", + }, + "is_non_ip_traffic_block_enabled": { + Type: schema.TypeBool, + Computed: true, + Description: "Indicates whether non IP traffic block is enabled", + }, + "is_ra_guard_enabled": { + Type: schema.TypeBool, + Computed: true, + Description: "Indicates whether Router Advertisement Guard is enabled", + }, + "is_rate_limitting_enabled": { + Type: schema.TypeBool, + Computed: true, + Description: "Indicates whether Rate Limiting is enabled", + }, + "rx_broadcast_limit": { + Type: schema.TypeInt, + Computed: true, + Description: "Incoming broadcast traffic limit in packets per second", + }, + "rx_multicast_limit": { + Type: schema.TypeInt, + Computed: true, + Description: "Incoming multicast traffic limit in packets per second", + }, + "tx_broadcast_limit": { + Type: schema.TypeInt, + Computed: true, + Description: "Outgoing broadcast traffic limit in packets per second", + }, + "tx_multicast_limit": { + Type: schema.TypeInt, + Computed: true, + Description: "Outgoing multicast traffic limit in packets per second", + }, + }, + } +} + +func datasourceNsxtSegmentSecurityProfileRead(_ context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + vcdClient := meta.(*VCDClient) + profileName := d.Get("name").(string) + + contextFilterField, contextUrn, err := getContextFilterField(d) + if err != nil { + return diag.FromErr(err) + } + + queryFilter := url.Values{} + queryFilter.Add("filter", fmt.Sprintf("%s==%s", contextFilterField, contextUrn)) + + segmentSecurityProfile, err := vcdClient.GetSegmentSecurityProfileByName(profileName, queryFilter) + if err != nil { + return diag.Errorf("could not find Segment Security Profile by name '%s': %s", profileName, err) + } + + dSet(d, "description", segmentSecurityProfile.Description) + + bpduAllowList := convertStringsToTypeSet(segmentSecurityProfile.BpduFilterAllowList) + err = d.Set("bpdu_filter_allow_list", bpduAllowList) + if err != nil { + return diag.Errorf("error storing 'bpdu_filter_allow_list': %s", err) + } + + dSet(d, "is_bpdu_filter_enabled", segmentSecurityProfile.IsBpduFilterEnabled) + dSet(d, "is_dhcp_v4_client_block_enabled", segmentSecurityProfile.IsDhcpClientBlockV4Enabled) + dSet(d, "is_dhcp_v6_client_block_enabled", segmentSecurityProfile.IsDhcpClientBlockV6Enabled) + dSet(d, "is_dhcp_v4_server_block_enabled", segmentSecurityProfile.IsDhcpServerBlockV4Enabled) + dSet(d, "is_dhcp_v6_server_block_enabled", segmentSecurityProfile.IsDhcpServerBlockV6Enabled) + dSet(d, "is_non_ip_traffic_block_enabled", segmentSecurityProfile.IsNonIPTrafficBlockEnabled) + dSet(d, "is_ra_guard_enabled", 
segmentSecurityProfile.IsRaGuardEnabled) + dSet(d, "is_rate_limitting_enabled", segmentSecurityProfile.IsRateLimitingEnabled) + dSet(d, "rx_broadcast_limit", segmentSecurityProfile.RateLimits.RxBroadcast) + dSet(d, "rx_multicast_limit", segmentSecurityProfile.RateLimits.RxMulticast) + dSet(d, "tx_broadcast_limit", segmentSecurityProfile.RateLimits.TxBroadcast) + dSet(d, "tx_multicast_limit", segmentSecurityProfile.RateLimits.TxMulticast) + + d.SetId(segmentSecurityProfile.ID) + + return nil +} diff --git a/vcd/datasource_vcd_nsxt_segment_spoof_guard_profile.go b/vcd/datasource_vcd_nsxt_segment_spoof_guard_profile.go new file mode 100644 index 000000000..d93b47d7b --- /dev/null +++ b/vcd/datasource_vcd_nsxt_segment_spoof_guard_profile.go @@ -0,0 +1,77 @@ +package vcd + +import ( + "context" + "fmt" + "net/url" + + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" +) + +func datasourceVcdNsxtSegmentSpoofGuardProfile() *schema.Resource { + return &schema.Resource{ + ReadContext: datasourceNsxtSegmentSpoofGuardProfileRead, + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + Description: "Name of Segment Spoof Guard Profile", + }, + "nsxt_manager_id": { + Type: schema.TypeString, + Optional: true, + ExactlyOneOf: []string{"nsxt_manager_id", "vdc_id", "vdc_group_id"}, + Description: "ID of NSX-T Manager", + }, + "vdc_id": { + Type: schema.TypeString, + Optional: true, + ExactlyOneOf: []string{"nsxt_manager_id", "vdc_id", "vdc_group_id"}, + Description: "ID of VDC", + }, + "vdc_group_id": { + Type: schema.TypeString, + Optional: true, + ExactlyOneOf: []string{"nsxt_manager_id", "vdc_id", "vdc_group_id"}, + Description: "ID of VDC Group", + }, + "description": { + Type: schema.TypeString, + Computed: true, + Description: "Description of Segment Spoof Guard Profile", + }, + "is_address_binding_whitelist_enabled": { + Type: schema.TypeBool, + Computed: true, + Description: "Indicates whether Spoof Guard is enabled", + }, + }, + } +} + +func datasourceNsxtSegmentSpoofGuardProfileRead(_ context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + vcdClient := meta.(*VCDClient) + profileName := d.Get("name").(string) + + contextFilterField, contextUrn, err := getContextFilterField(d) + if err != nil { + return diag.FromErr(err) + } + + queryFilter := url.Values{} + queryFilter.Add("filter", fmt.Sprintf("%s==%s", contextFilterField, contextUrn)) + + spoofGuardProfile, err := vcdClient.GetSpoofGuardProfileByName(profileName, queryFilter) + if err != nil { + return diag.Errorf("could not find Spoof Guard Profile by name '%s': %s", profileName, err) + } + + dSet(d, "description", spoofGuardProfile.Description) + dSet(d, "is_address_binding_whitelist_enabled", spoofGuardProfile.IsAddressBindingWhitelistEnabled) + + d.SetId(spoofGuardProfile.ID) + + return nil +} diff --git a/vcd/datasource_vcd_org_vdc_network_profile.go b/vcd/datasource_vcd_org_vdc_network_profile.go new file mode 100644 index 000000000..fd2535d1e --- /dev/null +++ b/vcd/datasource_vcd_org_vdc_network_profile.go @@ -0,0 +1,49 @@ +package vcd + +import ( + "context" + + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" +) + +func datasourceVcdNsxtOrgVdcNetworkProfile() *schema.Resource { + return &schema.Resource{ + ReadContext: dataSourceVcdNsxtOrgVdcNetworkProfileRead, + + Schema: map[string]*schema.Schema{ + "org": { + Type: schema.TypeString, + 
Optional: true, + ForceNew: true, + Description: "The name of organization to use, optional if defined at provider " + + "level. Useful when connected as sysadmin working across different organizations", + }, + "vdc": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: "The name of VDC to use, optional if defined at provider level", + }, + "edge_cluster_id": { + Type: schema.TypeString, + Computed: true, + Description: "ID of NSX-T Edge Cluster (provider vApp networking services and DHCP capability for Isolated networks)", + }, + "vdc_networks_default_segment_profile_template_id": { + Type: schema.TypeString, + Computed: true, + Description: "Default NSX-T Segment Profile for Org VDC networks", + }, + "vapp_networks_default_segment_profile_template_id": { + Type: schema.TypeString, + Computed: true, + Description: "Default NSX-T Segment Profile for vApp networks", + }, + }, + } +} + +func dataSourceVcdNsxtOrgVdcNetworkProfileRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + return resourceDataSourceVcdNsxtOrgVdcNetworkProfileRead(ctx, d, meta, "datasource") +} diff --git a/vcd/provider.go b/vcd/provider.go index 55e562a89..4aacb014c 100644 --- a/vcd/provider.go +++ b/vcd/provider.go @@ -3,10 +3,11 @@ package vcd import ( "context" "fmt" - "github.com/vmware/go-vcloud-director/v2/govcd" "os" "regexp" + "github.com/vmware/go-vcloud-director/v2/govcd" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" @@ -33,216 +34,229 @@ func Resources(nameRegexp string, includeDeprecated bool) (map[string]*schema.Re } var globalDataSourceMap = map[string]*schema.Resource{ - "vcd_org": datasourceVcdOrg(), // 2.5 - "vcd_org_group": datasourceVcdOrgGroup(), // 3.6 - "vcd_org_user": datasourceVcdOrgUser(), // 3.0 - "vcd_org_vdc": datasourceVcdOrgVdc(), // 2.5 - "vcd_catalog": datasourceVcdCatalog(), // 2.5 - "vcd_catalog_media": datasourceVcdCatalogMedia(), // 2.5 - "vcd_catalog_item": datasourceVcdCatalogItem(), // 2.5 - "vcd_edgegateway": datasourceVcdEdgeGateway(), // 2.5 - "vcd_external_network": datasourceVcdExternalNetwork(), // 2.5 - "vcd_external_network_v2": datasourceVcdExternalNetworkV2(), // 3.0 - "vcd_independent_disk": datasourceVcIndependentDisk(), // 2.5 - "vcd_network_routed": datasourceVcdNetworkRouted(), // 2.5 - "vcd_network_direct": datasourceVcdNetworkDirect(), // 2.5 - "vcd_network_isolated": datasourceVcdNetworkIsolated(), // 2.5 - "vcd_vapp": datasourceVcdVApp(), // 2.5 - "vcd_vapp_vm": datasourceVcdVAppVm(), // 2.6 - "vcd_lb_service_monitor": datasourceVcdLbServiceMonitor(), // 2.4 - "vcd_lb_server_pool": datasourceVcdLbServerPool(), // 2.4 - "vcd_lb_app_profile": datasourceVcdLBAppProfile(), // 2.4 - "vcd_lb_app_rule": datasourceVcdLBAppRule(), // 2.4 - "vcd_lb_virtual_server": datasourceVcdLbVirtualServer(), // 2.4 - "vcd_nsxv_dnat": datasourceVcdNsxvDnat(), // 2.5 - "vcd_nsxv_snat": datasourceVcdNsxvSnat(), // 2.5 - "vcd_nsxv_firewall_rule": datasourceVcdNsxvFirewallRule(), // 2.5 - "vcd_nsxv_dhcp_relay": datasourceVcdNsxvDhcpRelay(), // 2.6 - "vcd_nsxv_ip_set": datasourceVcdIpSet(), // 2.6 - "vcd_vapp_network": datasourceVcdVappNetwork(), // 2.7 - "vcd_vapp_org_network": datasourceVcdVappOrgNetwork(), // 2.7 - "vcd_vm_affinity_rule": datasourceVcdVmAffinityRule(), // 2.9 - "vcd_vm_sizing_policy": datasourceVcdVmSizingPolicy(), // 3.0 - "vcd_nsxt_manager": datasourceVcdNsxtManager(), // 3.0 - "vcd_nsxt_tier0_router": datasourceVcdNsxtTier0Router(), // 
3.0 - "vcd_portgroup": datasourceVcdPortgroup(), // 3.0 - "vcd_vcenter": datasourceVcdVcenter(), // 3.0 - "vcd_resource_list": datasourceVcdResourceList(), // 3.1 - "vcd_resource_schema": datasourceVcdResourceSchema(), // 3.1 - "vcd_nsxt_edge_cluster": datasourceVcdNsxtEdgeCluster(), // 3.1 - "vcd_nsxt_edgegateway": datasourceVcdNsxtEdgeGateway(), // 3.1 - "vcd_storage_profile": datasourceVcdStorageProfile(), // 3.1 - "vcd_vm": datasourceVcdStandaloneVm(), // 3.2 - "vcd_network_routed_v2": datasourceVcdNetworkRoutedV2(), // 3.2 - "vcd_network_isolated_v2": datasourceVcdNetworkIsolatedV2(), // 3.2 - "vcd_nsxt_network_imported": datasourceVcdNsxtNetworkImported(), // 3.2 - "vcd_nsxt_network_dhcp": datasourceVcdOpenApiDhcp(), // 3.2 - "vcd_right": datasourceVcdRight(), // 3.3 - "vcd_role": datasourceVcdRole(), // 3.3 - "vcd_global_role": datasourceVcdGlobalRole(), // 3.3 - "vcd_rights_bundle": datasourceVcdRightsBundle(), // 3.3 - "vcd_nsxt_ip_set": datasourceVcdNsxtIpSet(), // 3.3 - "vcd_nsxt_security_group": datasourceVcdNsxtSecurityGroup(), // 3.3 - "vcd_nsxt_app_port_profile": datasourceVcdNsxtAppPortProfile(), // 3.3 - "vcd_nsxt_nat_rule": datasourceVcdNsxtNatRule(), // 3.3 - "vcd_nsxt_firewall": datasourceVcdNsxtFirewall(), // 3.3 - "vcd_nsxt_ipsec_vpn_tunnel": datasourceVcdNsxtIpSecVpnTunnel(), // 3.3 - "vcd_nsxt_alb_importable_cloud": datasourceVcdAlbImportableCloud(), // 3.4 - "vcd_nsxt_alb_controller": datasourceVcdAlbController(), // 3.4 - "vcd_nsxt_alb_cloud": datasourceVcdAlbCloud(), // 3.4 - "vcd_nsxt_alb_service_engine_group": datasourceVcdAlbServiceEngineGroup(), // 3.4 - "vcd_nsxt_alb_settings": datasourceVcdAlbSettings(), // 3.5 - "vcd_nsxt_alb_edgegateway_service_engine_group": datasourceVcdAlbEdgeGatewayServiceEngineGroup(), // 3.5 - "vcd_library_certificate": datasourceLibraryCertificate(), // 3.5 - "vcd_nsxt_alb_pool": datasourceVcdAlbPool(), // 3.5 - "vcd_nsxt_alb_virtual_service": datasourceVcdAlbVirtualService(), // 3.5 - "vcd_vdc_group": datasourceVdcGroup(), // 3.5 - "vcd_nsxt_distributed_firewall": datasourceVcdNsxtDistributedFirewall(), // 3.6 - "vcd_nsxt_network_context_profile": datasourceVcdNsxtNetworkContextProfile(), // 3.6 - "vcd_nsxt_route_advertisement": datasourceVcdNsxtRouteAdvertisement(), // 3.7 - "vcd_nsxt_edgegateway_bgp_configuration": datasourceVcdEdgeBgpConfig(), // 3.7 - "vcd_nsxt_edgegateway_bgp_neighbor": datasourceVcdEdgeBgpNeighbor(), // 3.7 - "vcd_nsxt_edgegateway_bgp_ip_prefix_list": datasourceVcdEdgeBgpIpPrefixList(), // 3.7 - "vcd_nsxt_dynamic_security_group": datasourceVcdDynamicSecurityGroup(), // 3.7 - "vcd_org_ldap": datasourceVcdOrgLdap(), // 3.8 - "vcd_vm_placement_policy": datasourceVcdVmPlacementPolicy(), // 3.8 - "vcd_provider_vdc": datasourceVcdProviderVdc(), // 3.8 - "vcd_vm_group": datasourceVcdVmGroup(), // 3.8 - "vcd_catalog_vapp_template": datasourceVcdCatalogVappTemplate(), // 3.8 - "vcd_subscribed_catalog": datasourceVcdSubscribedCatalog(), // 3.8 - "vcd_task": datasourceVcdTask(), // 3.8 - "vcd_nsxv_distributed_firewall": datasourceVcdNsxvDistributedFirewall(), // 3.9 - "vcd_nsxv_application_finder": datasourceVcdNsxvApplicationFinder(), // 3.9 - "vcd_nsxv_application": datasourceVcdNsxvApplication(), // 3.9 - "vcd_nsxv_application_group": datasourceVcdNsxvApplicationGroup(), // 3.9 - "vcd_rde_interface": datasourceVcdRdeInterface(), // 3.9 - "vcd_rde_type": datasourceVcdRdeType(), // 3.9 - "vcd_rde": datasourceVcdRde(), // 3.9 - "vcd_nsxt_edgegateway_qos_profile": datasourceVcdNsxtEdgeGatewayQosProfile(), // 3.9 - 
"vcd_nsxt_edgegateway_rate_limiting": datasourceVcdNsxtEdgegatewayRateLimiting(), // 3.9 - "vcd_nsxt_network_dhcp_binding": datasourceVcdNsxtDhcpBinding(), // 3.9 - "vcd_ip_space": datasourceVcdIpSpace(), // 3.10 - "vcd_ip_space_uplink": datasourceVcdIpSpaceUplink(), // 3.10 - "vcd_ip_space_ip_allocation": datasourceVcdIpAllocation(), // 3.10 - "vcd_ip_space_custom_quota": datasourceVcdIpSpaceCustomQuota(), // 3.10 - "vcd_nsxt_edgegateway_dhcp_forwarding": datasourceVcdNsxtEdgegatewayDhcpForwarding(), // 3.10 - "vcd_nsxt_edgegateway_dhcpv6": datasourceVcdNsxtEdgegatewayDhcpV6(), // 3.10 - "vcd_org_saml": datasourceVcdOrgSaml(), // 3.10 - "vcd_org_saml_metadata": datasourceVcdOrgSamlMetadata(), // 3.10 - "vcd_nsxt_distributed_firewall_rule": datasourceVcdNsxtDistributedFirewallRule(), // 3.10 - "vcd_nsxt_edgegateway_static_route": datasourceVcdNsxtEdgeGatewayStaticRoute(), // 3.10 - "vcd_resource_pool": datasourceVcdResourcePool(), // 3.10 - "vcd_network_pool": datasourceVcdNetworkPool(), // 3.10 - "vcd_ui_plugin": datasourceVcdUIPlugin(), // 3.10 - "vcd_service_account": datasourceVcdServiceAccount(), // 3.10 - "vcd_rde_interface_behavior": datasourceVcdRdeInterfaceBehavior(), // 3.10 - "vcd_rde_type_behavior": datasourceVcdRdeTypeBehavior(), // 3.10 - "vcd_rde_type_behavior_acl": datasourceVcdRdeTypeBehaviorAccessLevel(), // 3.10 - "vcd_nsxt_edgegateway_l2_vpn_tunnel": datasourceVcdNsxtEdgegatewayL2VpnTunnel(), // 3.11 - "vcd_rde_behavior_invocation": datasourceVcdRdeBehaviorInvocation(), // 3.11 + "vcd_org": datasourceVcdOrg(), // 2.5 + "vcd_org_group": datasourceVcdOrgGroup(), // 3.6 + "vcd_org_user": datasourceVcdOrgUser(), // 3.0 + "vcd_org_vdc": datasourceVcdOrgVdc(), // 2.5 + "vcd_catalog": datasourceVcdCatalog(), // 2.5 + "vcd_catalog_media": datasourceVcdCatalogMedia(), // 2.5 + "vcd_catalog_item": datasourceVcdCatalogItem(), // 2.5 + "vcd_edgegateway": datasourceVcdEdgeGateway(), // 2.5 + "vcd_external_network": datasourceVcdExternalNetwork(), // 2.5 + "vcd_external_network_v2": datasourceVcdExternalNetworkV2(), // 3.0 + "vcd_independent_disk": datasourceVcIndependentDisk(), // 2.5 + "vcd_network_routed": datasourceVcdNetworkRouted(), // 2.5 + "vcd_network_direct": datasourceVcdNetworkDirect(), // 2.5 + "vcd_network_isolated": datasourceVcdNetworkIsolated(), // 2.5 + "vcd_vapp": datasourceVcdVApp(), // 2.5 + "vcd_vapp_vm": datasourceVcdVAppVm(), // 2.6 + "vcd_lb_service_monitor": datasourceVcdLbServiceMonitor(), // 2.4 + "vcd_lb_server_pool": datasourceVcdLbServerPool(), // 2.4 + "vcd_lb_app_profile": datasourceVcdLBAppProfile(), // 2.4 + "vcd_lb_app_rule": datasourceVcdLBAppRule(), // 2.4 + "vcd_lb_virtual_server": datasourceVcdLbVirtualServer(), // 2.4 + "vcd_nsxv_dnat": datasourceVcdNsxvDnat(), // 2.5 + "vcd_nsxv_snat": datasourceVcdNsxvSnat(), // 2.5 + "vcd_nsxv_firewall_rule": datasourceVcdNsxvFirewallRule(), // 2.5 + "vcd_nsxv_dhcp_relay": datasourceVcdNsxvDhcpRelay(), // 2.6 + "vcd_nsxv_ip_set": datasourceVcdIpSet(), // 2.6 + "vcd_vapp_network": datasourceVcdVappNetwork(), // 2.7 + "vcd_vapp_org_network": datasourceVcdVappOrgNetwork(), // 2.7 + "vcd_vm_affinity_rule": datasourceVcdVmAffinityRule(), // 2.9 + "vcd_vm_sizing_policy": datasourceVcdVmSizingPolicy(), // 3.0 + "vcd_nsxt_manager": datasourceVcdNsxtManager(), // 3.0 + "vcd_nsxt_tier0_router": datasourceVcdNsxtTier0Router(), // 3.0 + "vcd_portgroup": datasourceVcdPortgroup(), // 3.0 + "vcd_vcenter": datasourceVcdVcenter(), // 3.0 + "vcd_resource_list": datasourceVcdResourceList(), // 3.1 + "vcd_resource_schema": 
datasourceVcdResourceSchema(), // 3.1 + "vcd_nsxt_edge_cluster": datasourceVcdNsxtEdgeCluster(), // 3.1 + "vcd_nsxt_edgegateway": datasourceVcdNsxtEdgeGateway(), // 3.1 + "vcd_storage_profile": datasourceVcdStorageProfile(), // 3.1 + "vcd_vm": datasourceVcdStandaloneVm(), // 3.2 + "vcd_network_routed_v2": datasourceVcdNetworkRoutedV2(), // 3.2 + "vcd_network_isolated_v2": datasourceVcdNetworkIsolatedV2(), // 3.2 + "vcd_nsxt_network_imported": datasourceVcdNsxtNetworkImported(), // 3.2 + "vcd_nsxt_network_dhcp": datasourceVcdOpenApiDhcp(), // 3.2 + "vcd_right": datasourceVcdRight(), // 3.3 + "vcd_role": datasourceVcdRole(), // 3.3 + "vcd_global_role": datasourceVcdGlobalRole(), // 3.3 + "vcd_rights_bundle": datasourceVcdRightsBundle(), // 3.3 + "vcd_nsxt_ip_set": datasourceVcdNsxtIpSet(), // 3.3 + "vcd_nsxt_security_group": datasourceVcdNsxtSecurityGroup(), // 3.3 + "vcd_nsxt_app_port_profile": datasourceVcdNsxtAppPortProfile(), // 3.3 + "vcd_nsxt_nat_rule": datasourceVcdNsxtNatRule(), // 3.3 + "vcd_nsxt_firewall": datasourceVcdNsxtFirewall(), // 3.3 + "vcd_nsxt_ipsec_vpn_tunnel": datasourceVcdNsxtIpSecVpnTunnel(), // 3.3 + "vcd_nsxt_alb_importable_cloud": datasourceVcdAlbImportableCloud(), // 3.4 + "vcd_nsxt_alb_controller": datasourceVcdAlbController(), // 3.4 + "vcd_nsxt_alb_cloud": datasourceVcdAlbCloud(), // 3.4 + "vcd_nsxt_alb_service_engine_group": datasourceVcdAlbServiceEngineGroup(), // 3.4 + "vcd_nsxt_alb_settings": datasourceVcdAlbSettings(), // 3.5 + "vcd_nsxt_alb_edgegateway_service_engine_group": datasourceVcdAlbEdgeGatewayServiceEngineGroup(), // 3.5 + "vcd_library_certificate": datasourceLibraryCertificate(), // 3.5 + "vcd_nsxt_alb_pool": datasourceVcdAlbPool(), // 3.5 + "vcd_nsxt_alb_virtual_service": datasourceVcdAlbVirtualService(), // 3.5 + "vcd_vdc_group": datasourceVdcGroup(), // 3.5 + "vcd_nsxt_distributed_firewall": datasourceVcdNsxtDistributedFirewall(), // 3.6 + "vcd_nsxt_network_context_profile": datasourceVcdNsxtNetworkContextProfile(), // 3.6 + "vcd_nsxt_route_advertisement": datasourceVcdNsxtRouteAdvertisement(), // 3.7 + "vcd_nsxt_edgegateway_bgp_configuration": datasourceVcdEdgeBgpConfig(), // 3.7 + "vcd_nsxt_edgegateway_bgp_neighbor": datasourceVcdEdgeBgpNeighbor(), // 3.7 + "vcd_nsxt_edgegateway_bgp_ip_prefix_list": datasourceVcdEdgeBgpIpPrefixList(), // 3.7 + "vcd_nsxt_dynamic_security_group": datasourceVcdDynamicSecurityGroup(), // 3.7 + "vcd_org_ldap": datasourceVcdOrgLdap(), // 3.8 + "vcd_vm_placement_policy": datasourceVcdVmPlacementPolicy(), // 3.8 + "vcd_provider_vdc": datasourceVcdProviderVdc(), // 3.8 + "vcd_vm_group": datasourceVcdVmGroup(), // 3.8 + "vcd_catalog_vapp_template": datasourceVcdCatalogVappTemplate(), // 3.8 + "vcd_subscribed_catalog": datasourceVcdSubscribedCatalog(), // 3.8 + "vcd_task": datasourceVcdTask(), // 3.8 + "vcd_nsxv_distributed_firewall": datasourceVcdNsxvDistributedFirewall(), // 3.9 + "vcd_nsxv_application_finder": datasourceVcdNsxvApplicationFinder(), // 3.9 + "vcd_nsxv_application": datasourceVcdNsxvApplication(), // 3.9 + "vcd_nsxv_application_group": datasourceVcdNsxvApplicationGroup(), // 3.9 + "vcd_rde_interface": datasourceVcdRdeInterface(), // 3.9 + "vcd_rde_type": datasourceVcdRdeType(), // 3.9 + "vcd_rde": datasourceVcdRde(), // 3.9 + "vcd_nsxt_edgegateway_qos_profile": datasourceVcdNsxtEdgeGatewayQosProfile(), // 3.9 + "vcd_nsxt_edgegateway_rate_limiting": datasourceVcdNsxtEdgegatewayRateLimiting(), // 3.9 + "vcd_nsxt_network_dhcp_binding": datasourceVcdNsxtDhcpBinding(), // 3.9 + "vcd_ip_space": 
datasourceVcdIpSpace(), // 3.10 + "vcd_ip_space_uplink": datasourceVcdIpSpaceUplink(), // 3.10 + "vcd_ip_space_ip_allocation": datasourceVcdIpAllocation(), // 3.10 + "vcd_ip_space_custom_quota": datasourceVcdIpSpaceCustomQuota(), // 3.10 + "vcd_nsxt_edgegateway_dhcp_forwarding": datasourceVcdNsxtEdgegatewayDhcpForwarding(), // 3.10 + "vcd_nsxt_edgegateway_dhcpv6": datasourceVcdNsxtEdgegatewayDhcpV6(), // 3.10 + "vcd_org_saml": datasourceVcdOrgSaml(), // 3.10 + "vcd_org_saml_metadata": datasourceVcdOrgSamlMetadata(), // 3.10 + "vcd_nsxt_distributed_firewall_rule": datasourceVcdNsxtDistributedFirewallRule(), // 3.10 + "vcd_nsxt_edgegateway_static_route": datasourceVcdNsxtEdgeGatewayStaticRoute(), // 3.10 + "vcd_resource_pool": datasourceVcdResourcePool(), // 3.10 + "vcd_network_pool": datasourceVcdNetworkPool(), // 3.10 + "vcd_ui_plugin": datasourceVcdUIPlugin(), // 3.10 + "vcd_service_account": datasourceVcdServiceAccount(), // 3.10 + "vcd_rde_interface_behavior": datasourceVcdRdeInterfaceBehavior(), // 3.10 + "vcd_rde_type_behavior": datasourceVcdRdeTypeBehavior(), // 3.10 + "vcd_rde_type_behavior_acl": datasourceVcdRdeTypeBehaviorAccessLevel(), // 3.10 + "vcd_nsxt_edgegateway_l2_vpn_tunnel": datasourceVcdNsxtEdgegatewayL2VpnTunnel(), // 3.11 + "vcd_rde_behavior_invocation": datasourceVcdRdeBehaviorInvocation(), // 3.11 + "vcd_nsxt_segment_ip_discovery_profile": datasourceVcdNsxtSegmentIpDiscoveryProfile(), // 3.11 + "vcd_nsxt_segment_mac_discovery_profile": datasourceVcdNsxtSegmentMacDiscoveryProfile(), // 3.11 + "vcd_nsxt_segment_spoof_guard_profile": datasourceVcdNsxtSegmentSpoofGuardProfile(), // 3.11 + "vcd_nsxt_segment_qos_profile": datasourceVcdNsxtSegmentQosProfile(), // 3.11 + "vcd_nsxt_segment_security_profile": datasourceVcdNsxtSegmentSecurityProfile(), // 3.11 + "vcd_nsxt_segment_profile_template": datasourceVcdSegmentProfileTemplate(), // 3.11 + "vcd_nsxt_global_default_segment_profile_template": datasourceVcdGlobalDefaultSegmentProfileTemplate(), // 3.11 + "vcd_org_vdc_nsxt_network_profile": datasourceVcdNsxtOrgVdcNetworkProfile(), // 3.11 + "vcd_nsxt_network_segment_profile": datasourceVcdNsxtOrgVdcNetworkSegmentProfileTemplate(), // 3.11 } var globalResourceMap = map[string]*schema.Resource{ - "vcd_network_routed": resourceVcdNetworkRouted(), // 2.0 - "vcd_network_direct": resourceVcdNetworkDirect(), // 2.0 - "vcd_network_isolated": resourceVcdNetworkIsolated(), // 2.0 - "vcd_vapp_network": resourceVcdVappNetwork(), // 2.1 - "vcd_vapp": resourceVcdVApp(), // 1.0 - "vcd_edgegateway": resourceVcdEdgeGateway(), // 2.4 - "vcd_edgegateway_vpn": resourceVcdEdgeGatewayVpn(), // 1.0 - "vcd_edgegateway_settings": resourceVcdEdgeGatewaySettings(), // 3.0 - "vcd_vapp_vm": resourceVcdVAppVm(), // 1.0 - "vcd_org": resourceOrg(), // 2.0 - "vcd_org_vdc": resourceVcdOrgVdc(), // 2.2 - "vcd_org_user": resourceVcdOrgUser(), // 2.4 - "vcd_catalog": resourceVcdCatalog(), // 2.0 - "vcd_catalog_item": resourceVcdCatalogItem(), // 2.0 - "vcd_catalog_media": resourceVcdCatalogMedia(), // 2.0 - "vcd_inserted_media": resourceVcdInsertedMedia(), // 2.1 - "vcd_independent_disk": resourceVcdIndependentDisk(), // 2.1 - "vcd_external_network": resourceVcdExternalNetwork(), // 2.2 - "vcd_lb_service_monitor": resourceVcdLbServiceMonitor(), // 2.4 - "vcd_lb_server_pool": resourceVcdLBServerPool(), // 2.4 - "vcd_lb_app_profile": resourceVcdLBAppProfile(), // 2.4 - "vcd_lb_app_rule": resourceVcdLBAppRule(), // 2.4 - "vcd_lb_virtual_server": resourceVcdLBVirtualServer(), // 2.4 - "vcd_nsxv_dnat": 
resourceVcdNsxvDnat(), // 2.5 - "vcd_nsxv_snat": resourceVcdNsxvSnat(), // 2.5 - "vcd_nsxv_firewall_rule": resourceVcdNsxvFirewallRule(), // 2.5 - "vcd_nsxv_dhcp_relay": resourceVcdNsxvDhcpRelay(), // 2.6 - "vcd_nsxv_ip_set": resourceVcdIpSet(), // 2.6 - "vcd_vm_internal_disk": resourceVmInternalDisk(), // 2.7 - "vcd_vapp_org_network": resourceVcdVappOrgNetwork(), // 2.7 - "vcd_org_group": resourceVcdOrgGroup(), // 2.9 - "vcd_vapp_firewall_rules": resourceVcdVappFirewallRules(), // 2.9 - "vcd_vapp_nat_rules": resourceVcdVappNetworkNatRules(), // 2.9 - "vcd_vapp_static_routing": resourceVcdVappNetworkStaticRouting(), // 2.9 - "vcd_vm_affinity_rule": resourceVcdVmAffinityRule(), // 2.9 - "vcd_vapp_access_control": resourceVcdAccessControlVapp(), // 3.0 - "vcd_external_network_v2": resourceVcdExternalNetworkV2(), // 3.0 - "vcd_vm_sizing_policy": resourceVcdVmSizingPolicy(), // 3.0 - "vcd_nsxt_edgegateway": resourceVcdNsxtEdgeGateway(), // 3.1 - "vcd_vm": resourceVcdStandaloneVm(), // 3.2 - "vcd_network_routed_v2": resourceVcdNetworkRoutedV2(), // 3.2 - "vcd_network_isolated_v2": resourceVcdNetworkIsolatedV2(), // 3.2 - "vcd_nsxt_network_imported": resourceVcdNsxtNetworkImported(), // 3.2 - "vcd_nsxt_network_dhcp": resourceVcdOpenApiDhcp(), // 3.2 - "vcd_role": resourceVcdRole(), // 3.3 - "vcd_global_role": resourceVcdGlobalRole(), // 3.3 - "vcd_rights_bundle": resourceVcdRightsBundle(), // 3.3 - "vcd_nsxt_ip_set": resourceVcdNsxtIpSet(), // 3.3 - "vcd_nsxt_security_group": resourceVcdSecurityGroup(), // 3.3 - "vcd_nsxt_firewall": resourceVcdNsxtFirewall(), // 3.3 - "vcd_nsxt_app_port_profile": resourceVcdNsxtAppPortProfile(), // 3.3 - "vcd_nsxt_nat_rule": resourceVcdNsxtNatRule(), // 3.3 - "vcd_nsxt_ipsec_vpn_tunnel": resourceVcdNsxtIpSecVpnTunnel(), // 3.3 - "vcd_nsxt_alb_cloud": resourceVcdAlbCloud(), // 3.4 - "vcd_nsxt_alb_controller": resourceVcdAlbController(), // 3.4 - "vcd_nsxt_alb_service_engine_group": resourceVcdAlbServiceEngineGroup(), // 3.4 - "vcd_nsxt_alb_settings": resourceVcdAlbSettings(), // 3.5 - "vcd_nsxt_alb_edgegateway_service_engine_group": resourceVcdAlbEdgeGatewayServiceEngineGroup(), // 3.5 - "vcd_library_certificate": resourceLibraryCertificate(), // 3.5 - "vcd_nsxt_alb_pool": resourceVcdAlbPool(), // 3.5 - "vcd_nsxt_alb_virtual_service": resourceVcdAlbVirtualService(), // 3.5 - "vcd_vdc_group": resourceVdcGroup(), // 3.5 - "vcd_nsxt_distributed_firewall": resourceVcdNsxtDistributedFirewall(), // 3.6 - "vcd_security_tag": resourceVcdSecurityTag(), // 3.7 - "vcd_nsxt_route_advertisement": resourceVcdNsxtRouteAdvertisement(), // 3.7 - "vcd_org_vdc_access_control": resourceVcdOrgVdcAccessControl(), // 3.7 - "vcd_nsxt_dynamic_security_group": resourceVcdDynamicSecurityGroup(), // 3.7 - "vcd_nsxt_edgegateway_bgp_neighbor": resourceVcdEdgeBgpNeighbor(), // 3.7 - "vcd_nsxt_edgegateway_bgp_ip_prefix_list": resourceVcdEdgeBgpIpPrefixList(), // 3.7 - "vcd_nsxt_edgegateway_bgp_configuration": resourceVcdEdgeBgpConfig(), // 3.7 - "vcd_org_ldap": resourceVcdOrgLdap(), // 3.8 - "vcd_vm_placement_policy": resourceVcdVmPlacementPolicy(), // 3.8 - "vcd_catalog_vapp_template": resourceVcdCatalogVappTemplate(), // 3.8 - "vcd_catalog_access_control": resourceVcdCatalogAccessControl(), // 3.8 - "vcd_subscribed_catalog": resourceVcdSubscribedCatalog(), // 3.8 - "vcd_nsxv_distributed_firewall": resourceVcdNsxvDistributedFirewall(), // 3.9 - "vcd_rde_interface": resourceVcdRdeInterface(), // 3.9 - "vcd_rde_type": resourceVcdRdeType(), // 3.9 - "vcd_rde": resourceVcdRde(), // 3.9 - 
"vcd_nsxt_edgegateway_rate_limiting": resourceVcdNsxtEdgegatewayRateLimiting(), // 3.9 - "vcd_nsxt_network_dhcp_binding": resourceVcdNsxtDhcpBinding(), // 3.9 - "vcd_ip_space": resourceVcdIpSpace(), // 3.10 - "vcd_ip_space_uplink": resourceVcdIpSpaceUplink(), // 3.10 - "vcd_ip_space_ip_allocation": resourceVcdIpAllocation(), // 3.10 - "vcd_ip_space_custom_quota": resourceVcdIpSpaceCustomQuota(), // 3.10 - "vcd_nsxt_edgegateway_dhcp_forwarding": resourceVcdNsxtEdgegatewayDhcpForwarding(), // 3.10 - "vcd_nsxt_edgegateway_dhcpv6": resourceVcdNsxtEdgegatewayDhcpV6(), // 3.10 - "vcd_org_saml": resourceVcdOrgSaml(), // 3.10 - "vcd_nsxt_distributed_firewall_rule": resourceVcdNsxtDistributedFirewallRule(), // 3.10 - "vcd_nsxt_edgegateway_static_route": resourceVcdNsxtEdgeGatewayStaticRoute(), // 3.10 - "vcd_provider_vdc": resourceVcdProviderVdc(), // 3.10 - "vcd_cloned_vapp": resourceVcdClonedVApp(), // 3.10 - "vcd_ui_plugin": resourceVcdUIPlugin(), // 3.10 - "vcd_api_token": resourceVcdApiToken(), // 3.10 - "vcd_service_account": resourceVcdServiceAccount(), // 3.10 - "vcd_rde_interface_behavior": resourceVcdRdeInterfaceBehavior(), // 3.10 - "vcd_rde_type_behavior": resourceVcdRdeTypeBehavior(), // 3.10 - "vcd_rde_type_behavior_acl": resourceVcdRdeTypeBehaviorAccessLevel(), // 3.10 - "vcd_network_pool": resourceVcdNetworkPool(), // 3.11 - "vcd_nsxt_edgegateway_l2_vpn_tunnel": resourceVcdNsxtEdgegatewayL2VpnTunnel(), // 3.11 + "vcd_network_routed": resourceVcdNetworkRouted(), // 2.0 + "vcd_network_direct": resourceVcdNetworkDirect(), // 2.0 + "vcd_network_isolated": resourceVcdNetworkIsolated(), // 2.0 + "vcd_vapp_network": resourceVcdVappNetwork(), // 2.1 + "vcd_vapp": resourceVcdVApp(), // 1.0 + "vcd_edgegateway": resourceVcdEdgeGateway(), // 2.4 + "vcd_edgegateway_vpn": resourceVcdEdgeGatewayVpn(), // 1.0 + "vcd_edgegateway_settings": resourceVcdEdgeGatewaySettings(), // 3.0 + "vcd_vapp_vm": resourceVcdVAppVm(), // 1.0 + "vcd_org": resourceOrg(), // 2.0 + "vcd_org_vdc": resourceVcdOrgVdc(), // 2.2 + "vcd_org_user": resourceVcdOrgUser(), // 2.4 + "vcd_catalog": resourceVcdCatalog(), // 2.0 + "vcd_catalog_item": resourceVcdCatalogItem(), // 2.0 + "vcd_catalog_media": resourceVcdCatalogMedia(), // 2.0 + "vcd_inserted_media": resourceVcdInsertedMedia(), // 2.1 + "vcd_independent_disk": resourceVcdIndependentDisk(), // 2.1 + "vcd_external_network": resourceVcdExternalNetwork(), // 2.2 + "vcd_lb_service_monitor": resourceVcdLbServiceMonitor(), // 2.4 + "vcd_lb_server_pool": resourceVcdLBServerPool(), // 2.4 + "vcd_lb_app_profile": resourceVcdLBAppProfile(), // 2.4 + "vcd_lb_app_rule": resourceVcdLBAppRule(), // 2.4 + "vcd_lb_virtual_server": resourceVcdLBVirtualServer(), // 2.4 + "vcd_nsxv_dnat": resourceVcdNsxvDnat(), // 2.5 + "vcd_nsxv_snat": resourceVcdNsxvSnat(), // 2.5 + "vcd_nsxv_firewall_rule": resourceVcdNsxvFirewallRule(), // 2.5 + "vcd_nsxv_dhcp_relay": resourceVcdNsxvDhcpRelay(), // 2.6 + "vcd_nsxv_ip_set": resourceVcdIpSet(), // 2.6 + "vcd_vm_internal_disk": resourceVmInternalDisk(), // 2.7 + "vcd_vapp_org_network": resourceVcdVappOrgNetwork(), // 2.7 + "vcd_org_group": resourceVcdOrgGroup(), // 2.9 + "vcd_vapp_firewall_rules": resourceVcdVappFirewallRules(), // 2.9 + "vcd_vapp_nat_rules": resourceVcdVappNetworkNatRules(), // 2.9 + "vcd_vapp_static_routing": resourceVcdVappNetworkStaticRouting(), // 2.9 + "vcd_vm_affinity_rule": resourceVcdVmAffinityRule(), // 2.9 + "vcd_vapp_access_control": resourceVcdAccessControlVapp(), // 3.0 + "vcd_external_network_v2": 
resourceVcdExternalNetworkV2(), // 3.0 + "vcd_vm_sizing_policy": resourceVcdVmSizingPolicy(), // 3.0 + "vcd_nsxt_edgegateway": resourceVcdNsxtEdgeGateway(), // 3.1 + "vcd_vm": resourceVcdStandaloneVm(), // 3.2 + "vcd_network_routed_v2": resourceVcdNetworkRoutedV2(), // 3.2 + "vcd_network_isolated_v2": resourceVcdNetworkIsolatedV2(), // 3.2 + "vcd_nsxt_network_imported": resourceVcdNsxtNetworkImported(), // 3.2 + "vcd_nsxt_network_dhcp": resourceVcdOpenApiDhcp(), // 3.2 + "vcd_role": resourceVcdRole(), // 3.3 + "vcd_global_role": resourceVcdGlobalRole(), // 3.3 + "vcd_rights_bundle": resourceVcdRightsBundle(), // 3.3 + "vcd_nsxt_ip_set": resourceVcdNsxtIpSet(), // 3.3 + "vcd_nsxt_security_group": resourceVcdSecurityGroup(), // 3.3 + "vcd_nsxt_firewall": resourceVcdNsxtFirewall(), // 3.3 + "vcd_nsxt_app_port_profile": resourceVcdNsxtAppPortProfile(), // 3.3 + "vcd_nsxt_nat_rule": resourceVcdNsxtNatRule(), // 3.3 + "vcd_nsxt_ipsec_vpn_tunnel": resourceVcdNsxtIpSecVpnTunnel(), // 3.3 + "vcd_nsxt_alb_cloud": resourceVcdAlbCloud(), // 3.4 + "vcd_nsxt_alb_controller": resourceVcdAlbController(), // 3.4 + "vcd_nsxt_alb_service_engine_group": resourceVcdAlbServiceEngineGroup(), // 3.4 + "vcd_nsxt_alb_settings": resourceVcdAlbSettings(), // 3.5 + "vcd_nsxt_alb_edgegateway_service_engine_group": resourceVcdAlbEdgeGatewayServiceEngineGroup(), // 3.5 + "vcd_library_certificate": resourceLibraryCertificate(), // 3.5 + "vcd_nsxt_alb_pool": resourceVcdAlbPool(), // 3.5 + "vcd_nsxt_alb_virtual_service": resourceVcdAlbVirtualService(), // 3.5 + "vcd_vdc_group": resourceVdcGroup(), // 3.5 + "vcd_nsxt_distributed_firewall": resourceVcdNsxtDistributedFirewall(), // 3.6 + "vcd_security_tag": resourceVcdSecurityTag(), // 3.7 + "vcd_nsxt_route_advertisement": resourceVcdNsxtRouteAdvertisement(), // 3.7 + "vcd_org_vdc_access_control": resourceVcdOrgVdcAccessControl(), // 3.7 + "vcd_nsxt_dynamic_security_group": resourceVcdDynamicSecurityGroup(), // 3.7 + "vcd_nsxt_edgegateway_bgp_neighbor": resourceVcdEdgeBgpNeighbor(), // 3.7 + "vcd_nsxt_edgegateway_bgp_ip_prefix_list": resourceVcdEdgeBgpIpPrefixList(), // 3.7 + "vcd_nsxt_edgegateway_bgp_configuration": resourceVcdEdgeBgpConfig(), // 3.7 + "vcd_org_ldap": resourceVcdOrgLdap(), // 3.8 + "vcd_vm_placement_policy": resourceVcdVmPlacementPolicy(), // 3.8 + "vcd_catalog_vapp_template": resourceVcdCatalogVappTemplate(), // 3.8 + "vcd_catalog_access_control": resourceVcdCatalogAccessControl(), // 3.8 + "vcd_subscribed_catalog": resourceVcdSubscribedCatalog(), // 3.8 + "vcd_nsxv_distributed_firewall": resourceVcdNsxvDistributedFirewall(), // 3.9 + "vcd_rde_interface": resourceVcdRdeInterface(), // 3.9 + "vcd_rde_type": resourceVcdRdeType(), // 3.9 + "vcd_rde": resourceVcdRde(), // 3.9 + "vcd_nsxt_edgegateway_rate_limiting": resourceVcdNsxtEdgegatewayRateLimiting(), // 3.9 + "vcd_nsxt_network_dhcp_binding": resourceVcdNsxtDhcpBinding(), // 3.9 + "vcd_ip_space": resourceVcdIpSpace(), // 3.10 + "vcd_ip_space_uplink": resourceVcdIpSpaceUplink(), // 3.10 + "vcd_ip_space_ip_allocation": resourceVcdIpAllocation(), // 3.10 + "vcd_ip_space_custom_quota": resourceVcdIpSpaceCustomQuota(), // 3.10 + "vcd_nsxt_edgegateway_dhcp_forwarding": resourceVcdNsxtEdgegatewayDhcpForwarding(), // 3.10 + "vcd_nsxt_edgegateway_dhcpv6": resourceVcdNsxtEdgegatewayDhcpV6(), // 3.10 + "vcd_org_saml": resourceVcdOrgSaml(), // 3.10 + "vcd_nsxt_distributed_firewall_rule": resourceVcdNsxtDistributedFirewallRule(), // 3.10 + "vcd_nsxt_edgegateway_static_route": resourceVcdNsxtEdgeGatewayStaticRoute(), // 
3.10 + "vcd_provider_vdc": resourceVcdProviderVdc(), // 3.10 + "vcd_cloned_vapp": resourceVcdClonedVApp(), // 3.10 + "vcd_ui_plugin": resourceVcdUIPlugin(), // 3.10 + "vcd_api_token": resourceVcdApiToken(), // 3.10 + "vcd_service_account": resourceVcdServiceAccount(), // 3.10 + "vcd_rde_interface_behavior": resourceVcdRdeInterfaceBehavior(), // 3.10 + "vcd_rde_type_behavior": resourceVcdRdeTypeBehavior(), // 3.10 + "vcd_rde_type_behavior_acl": resourceVcdRdeTypeBehaviorAccessLevel(), // 3.10 + "vcd_nsxt_edgegateway_l2_vpn_tunnel": resourceVcdNsxtEdgegatewayL2VpnTunnel(), // 3.11 + "vcd_nsxt_segment_profile_template": resourceVcdSegmentProfileTemplate(), // 3.11 + "vcd_nsxt_global_default_segment_profile_template": resourceVcdGlobalDefaultSegmentProfileTemplate(), // 3.11 + "vcd_org_vdc_nsxt_network_profile": resourceVcdNsxtOrgVdcNetworkProfile(), // 3.11 + "vcd_nsxt_network_segment_profile": resourceVcdNsxtOrgVdcNetworkSegmentProfileTemplate(), // 3.11 + "vcd_network_pool": resourceVcdNetworkPool(), // 3.11 } // Provider returns a terraform.ResourceProvider. diff --git a/vcd/remove_leftovers_test.go b/vcd/remove_leftovers_test.go index 61d4cb5ea..225287a05 100644 --- a/vcd/remove_leftovers_test.go +++ b/vcd/remove_leftovers_test.go @@ -90,7 +90,15 @@ var alsoDelete = entityList{ var isTest = regexp.MustCompile(`^[Tt]est`) // alwaysShow lists the resources that will always be shown -var alwaysShow = []string{"vcd_provider_vdc", "vcd_network_pool", "vcd_org", "vcd_catalog", "vcd_org_vdc", "vcd_nsxt_alb_controller"} +var alwaysShow = []string{ + "vcd_provider_vdc", + "vcd_network_pool", + "vcd_org", + "vcd_catalog", + "vcd_org_vdc", + "vcd_nsxt_alb_controller", + "vcd_nsxt_segment_profile_template", +} func removeLeftovers(govcdClient *govcd.VCDClient, verbose bool) error { if verbose { @@ -475,6 +483,31 @@ func removeLeftovers(govcdClient *govcd.VCDClient, verbose bool) error { } } + // -------------------------------------------------------------- + // Segment Profile Templates can be used in: + // * Global Default Segment Profiles (Infrastructure resources -> Segment Profile Templates -> Global Defaults) + // * VDC defaults (Cloud Resources -> Organization VDCs -> _any NSX-T vdc_ -> Segment Profile Templates) + // * Org VDC Networks (Org VDC networks do not show ) + // It is best to attempt cleanup at the end, when all the other artifacts that can consume them + // are already removed + // -------------------------------------------------------------- + if govcdClient.Client.IsSysAdmin { + allSpts, err := govcdClient.GetAllSegmentProfileTemplates(nil) + if err != nil { + return fmt.Errorf("error retrieving all Segment Profile Templates: %s", err) + } + for _, spt := range allSpts { + // This will delete all Segment Profile Templates that match the `isTest` regex. 
+ toBeDeleted := shouldDeleteEntity(alsoDelete, doNotDelete, spt.NsxtSegmentProfileTemplate.Name, "vcd_nsxt_segment_profile_template", 0, verbose) + if toBeDeleted { + err = spt.Delete() + if err != nil { + return fmt.Errorf("error deleting Segment Profile Template '%s': %s", spt.NsxtSegmentProfileTemplate.Name, err) + } + } + } + } + return nil } diff --git a/vcd/resource_vcd_nsxt_global_default_segment_profile_template.go b/vcd/resource_vcd_nsxt_global_default_segment_profile_template.go new file mode 100644 index 000000000..93334675f --- /dev/null +++ b/vcd/resource_vcd_nsxt_global_default_segment_profile_template.go @@ -0,0 +1,109 @@ +package vcd + +import ( + "context" + "fmt" + + "github.com/vmware/go-vcloud-director/v2/types/v56" + + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" +) + +const ( + globalDefaultSegmentProfileId = "global-default-segment-profile" +) + +func resourceVcdGlobalDefaultSegmentProfileTemplate() *schema.Resource { + return &schema.Resource{ + CreateContext: resourceVcdGlobalDefaultSegmentProfileTemplateCreateUpdate, + ReadContext: resourceDataSourceVcdGlobalDefaultSegmentProfileTemplateRead, + UpdateContext: resourceVcdGlobalDefaultSegmentProfileTemplateCreateUpdate, + DeleteContext: resourceVcdGlobalDefaultSegmentProfileTemplateDelete, + Importer: &schema.ResourceImporter{ + StateContext: resourceVcdGlobalDefaultSegmentProfileTemplateImport, + }, + + Schema: map[string]*schema.Schema{ + "vdc_networks_default_segment_profile_template_id": { + Type: schema.TypeString, + Optional: true, + Description: "Global default NSX-T Segment Profile for Org VDC networks", + }, + "vapp_networks_default_segment_profile_template_id": { + Type: schema.TypeString, + Optional: true, + Description: "Global default NSX-T Segment Profile for vApp networks", + }, + }, + } +} + +func resourceVcdGlobalDefaultSegmentProfileTemplateCreateUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + vcdClient := meta.(*VCDClient) + + globalDefaultSegmentProfileConfig := &types.NsxtGlobalDefaultSegmentProfileTemplate{} + + if d.Get("vapp_networks_default_segment_profile_template_id").(string) != "" { + globalDefaultSegmentProfileConfig.VappNetworkSegmentProfileTemplateRef = &types.OpenApiReference{ID: d.Get("vapp_networks_default_segment_profile_template_id").(string)} + } + + if d.Get("vdc_networks_default_segment_profile_template_id").(string) != "" { + globalDefaultSegmentProfileConfig.VdcNetworkSegmentProfileTemplateRef = &types.OpenApiReference{ID: d.Get("vdc_networks_default_segment_profile_template_id").(string)} + } + + _, err := vcdClient.UpdateGlobalDefaultSegmentProfileTemplates(globalDefaultSegmentProfileConfig) + if err != nil { + return diag.Errorf("error updating Global Default Segment Profile Template configuration: %s", err) + } + + d.SetId(globalDefaultSegmentProfileId) + + return resourceDataSourceVcdGlobalDefaultSegmentProfileTemplateRead(ctx, d, meta) +} + +func resourceDataSourceVcdGlobalDefaultSegmentProfileTemplateRead(_ context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + vcdClient := meta.(*VCDClient) + + defaults, err := vcdClient.GetGlobalDefaultSegmentProfileTemplates() + if err != nil { + return diag.Errorf("error reading Global Default Segment Profile Template configuration: %s", err) + } + + dSet(d, "vdc_networks_default_segment_profile_template_id", "") + if defaults.VdcNetworkSegmentProfileTemplateRef != nil { + dSet(d, 
"vdc_networks_default_segment_profile_template_id", defaults.VdcNetworkSegmentProfileTemplateRef.ID) + } + + dSet(d, "vapp_networks_default_segment_profile_template_id", "") + if defaults.VappNetworkSegmentProfileTemplateRef != nil { + dSet(d, "vapp_networks_default_segment_profile_template_id", defaults.VappNetworkSegmentProfileTemplateRef.ID) + } + + d.SetId(globalDefaultSegmentProfileId) + + return nil +} + +func resourceVcdGlobalDefaultSegmentProfileTemplateDelete(_ context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + vcdClient := meta.(*VCDClient) + + _, err := vcdClient.UpdateGlobalDefaultSegmentProfileTemplates(&types.NsxtGlobalDefaultSegmentProfileTemplate{}) + if err != nil { + return diag.Errorf("error deleting Global Default Segment Profile Template configuration: %s", err) + } + + return nil +} + +func resourceVcdGlobalDefaultSegmentProfileTemplateImport(_ context.Context, d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + vcdClient := meta.(*VCDClient) + + _, err := vcdClient.GetGlobalDefaultSegmentProfileTemplates() + if err != nil { + return nil, fmt.Errorf("error finding Global Segment Profile Template: %s", err) + } + + d.SetId(globalDefaultSegmentProfileId) + return []*schema.ResourceData{d}, nil +} diff --git a/vcd/resource_vcd_nsxt_network_segment_profile.go b/vcd/resource_vcd_nsxt_network_segment_profile.go new file mode 100644 index 000000000..e2b2f4ca3 --- /dev/null +++ b/vcd/resource_vcd_nsxt_network_segment_profile.go @@ -0,0 +1,273 @@ +package vcd + +import ( + "context" + "fmt" + "strings" + + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/vmware/go-vcloud-director/v2/govcd" + "github.com/vmware/go-vcloud-director/v2/types/v56" +) + +func resourceVcdNsxtOrgVdcNetworkSegmentProfileTemplate() *schema.Resource { + return &schema.Resource{ + CreateContext: resourceVcdNsxtOrgVdcNetworkSegmentProfileCreateUpdate, + ReadContext: resourceVcdNsxtOrgVdcNetworkSegmentProfileRead, + UpdateContext: resourceVcdNsxtOrgVdcNetworkSegmentProfileCreateUpdate, + DeleteContext: resourceVcdNsxtOrgVdcNetworkSegmentProfileDelete, + Importer: &schema.ResourceImporter{ + StateContext: resourceVcdNsxtOrgVdcNetworkSegmentProfileImport, + }, + + Schema: map[string]*schema.Schema{ + "org": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: "The name of organization to use, optional if defined at provider " + + "level. 
Useful when connected as sysadmin working across different organizations", + }, + "org_network_id": { + Type: schema.TypeString, + Required: true, + Description: "ID of the Organization Network that will have the segment profile", + }, + // One can set either Segment Profile Template (which is composed of multiple Segment Profiles), or individual Segment Profiles + "segment_profile_template_id": { + Type: schema.TypeString, + Optional: true, + Description: "Segment Profile Template ID", + ConflictsWith: []string{"ip_discovery_profile_id", "mac_discovery_profile_id", "spoof_guard_profile_id", "qos_profile_id", "segment_security_profile_id"}, + }, + "segment_profile_template_name": { + Type: schema.TypeString, + Computed: true, + Description: "Segment Profile Template Name", + }, + "ip_discovery_profile_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + Description: "NSX-T IP Discovery Profile", + ConflictsWith: []string{"segment_profile_template_id"}, + }, + "mac_discovery_profile_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + Description: "NSX-T Mac Discovery Profile", + ConflictsWith: []string{"segment_profile_template_id"}, + }, + "spoof_guard_profile_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + Description: "NSX-T Spoof Guard Profile", + ConflictsWith: []string{"segment_profile_template_id"}, + }, + "qos_profile_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + Description: "NSX-T QoS Profile", + ConflictsWith: []string{"segment_profile_template_id"}, + }, + "segment_security_profile_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + Description: "NSX-T Segment Security Profile", + ConflictsWith: []string{"segment_profile_template_id"}, + }, + }, + } +} + +func resourceVcdNsxtOrgVdcNetworkSegmentProfileCreateUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + vcdClient := meta.(*VCDClient) + vcdClient.lockParentOrgNetwork(d) + defer vcdClient.unLockParentOrgNetwork(d) + + org, err := vcdClient.GetOrgFromResource(d) + if err != nil { + return diag.Errorf("[Org VDC Network Segment Profile configuration] error retrieving Org: %s", err) + } + + orgNetworkId := d.Get("org_network_id").(string) + orgVdcNet, err := org.GetOpenApiOrgVdcNetworkById(orgNetworkId) + if err != nil { + return diag.Errorf("[Org VDC Network Segment Profile configuration] error retrieving Org VDC network with ID '%s': %s", orgNetworkId, err) + } + + if !orgVdcNet.IsNsxt() { + return diag.Errorf("[Org VDC Network Segment Profile configuration] only NSX-T Org VDC networks support Segment Profiles") + } + + ipDiscoveryProfileId := d.Get("ip_discovery_profile_id").(string) + macDiscoveryProfileId := d.Get("mac_discovery_profile_id").(string) + spoofGuardProfileId := d.Get("spoof_guard_profile_id").(string) + qosProfileId := d.Get("qos_profile_id").(string) + segmentSecurityProfileId := d.Get("segment_security_profile_id").(string) + + segmentProfileTemplateId := d.Get("segment_profile_template_id").(string) + + switch { + // Setting `segment_profile_template_id` requires modifying Org VDC Network structure. + // It can only be set (PUT/POST) using Org VDC network structure, but cannot be read (GET). + // To read its value one must use orgVdcNet.GetSegmentProfile() function. 
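+ // An illustrative sketch of that read path (the same go-vcloud-director calls as in the Read function below): + //   profiles, err := orgVdcNet.GetSegmentProfile() + //   if err == nil && profiles.SegmentProfileTemplate != nil { id := profiles.SegmentProfileTemplate.TemplateRef.ID } // the template ID is not present in a plain GET of the network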
+ case segmentProfileTemplateId != "": + orgVdcNet.OpenApiOrgVdcNetwork.SegmentProfileTemplate = &types.OpenApiReference{ID: segmentProfileTemplateId} + _, err = orgVdcNet.Update(orgVdcNet.OpenApiOrgVdcNetwork) + if err != nil { + return diag.Errorf("[Org VDC Network Segment Profile configuration] error setting Segment Profile Template for Org VDC Network: %s", err) + } + case ipDiscoveryProfileId != "" || macDiscoveryProfileId != "" || spoofGuardProfileId != "" || qosProfileId != "" || segmentSecurityProfileId != "": + // Individual segment profiles should be applied using a dedicated Segment Profile orgVdcNet.UpdateSegmentProfile + segmentProfileConfig := &types.OrgVdcNetworkSegmentProfiles{ + IPDiscoveryProfile: &types.Reference{ID: ipDiscoveryProfileId}, + MacDiscoveryProfile: &types.Reference{ID: macDiscoveryProfileId}, + SpoofGuardProfile: &types.Reference{ID: spoofGuardProfileId}, + QosProfile: &types.Reference{ID: qosProfileId}, + SegmentSecurityProfile: &types.Reference{ID: segmentSecurityProfileId}, + } + _, err = orgVdcNet.UpdateSegmentProfile(segmentProfileConfig) + if err != nil { + return diag.Errorf("[Org VDC Network Segment Profile configuration] error configuring Segment Profile for Org VDC Network: %s", err) + } + default: + return diag.Errorf("[Org VDC Network Segment Profile configuration] invalid configuration provided") + } + + d.SetId(orgVdcNet.OpenApiOrgVdcNetwork.ID) + + return resourceVcdNsxtOrgVdcNetworkSegmentProfileRead(ctx, d, meta) +} + +func resourceVcdNsxtOrgVdcNetworkSegmentProfileRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + return resourceDataSourceVcdNsxtOrgVdcNetworkSegmentProfileRead(ctx, d, meta, "resource") +} + +func resourceDataSourceVcdNsxtOrgVdcNetworkSegmentProfileRead(ctx context.Context, d *schema.ResourceData, meta interface{}, origin string) diag.Diagnostics { + vcdClient := meta.(*VCDClient) + org, err := vcdClient.GetOrgFromResource(d) + if err != nil { + return diag.Errorf("[Org VDC Network Segment Profile configuration read] error retrieving Org: %s", err) + } + + orgNetworkId := d.Get("org_network_id").(string) + orgVdcNet, err := org.GetOpenApiOrgVdcNetworkById(orgNetworkId) + if err != nil { + if origin == "resource" && govcd.ContainsNotFound(err) { + d.SetId("") + return nil + } + return diag.Errorf("[Org VDC Network Segment Profile configuration read] error retrieving Org VDC network with ID '%s': %s", orgNetworkId, err) + } + + segmentProfileConfig, err := orgVdcNet.GetSegmentProfile() + if err != nil { + return diag.Errorf("[Org VDC Network Segment Profile configuration read] error retrieving Segment Profile configuration for Org VDC Network: %s", err) + } + + dSet(d, "segment_profile_template_name", "") + dSet(d, "segment_profile_template_id", "") + if segmentProfileConfig.SegmentProfileTemplate != nil && segmentProfileConfig.SegmentProfileTemplate.TemplateRef != nil { + dSet(d, "segment_profile_template_id", segmentProfileConfig.SegmentProfileTemplate.TemplateRef.ID) + dSet(d, "segment_profile_template_name", segmentProfileConfig.SegmentProfileTemplate.TemplateRef.Name) + } + + dSet(d, "ip_discovery_profile_id", "") + if segmentProfileConfig.IPDiscoveryProfile != nil { + dSet(d, "ip_discovery_profile_id", segmentProfileConfig.IPDiscoveryProfile.ID) + } + + dSet(d, "mac_discovery_profile_id", "") + if segmentProfileConfig.MacDiscoveryProfile != nil { + dSet(d, "mac_discovery_profile_id", segmentProfileConfig.MacDiscoveryProfile.ID) + } + + dSet(d, "spoof_guard_profile_id", "") + if 
segmentProfileConfig.SpoofGuardProfile != nil { + dSet(d, "spoof_guard_profile_id", segmentProfileConfig.SpoofGuardProfile.ID) + } + + dSet(d, "qos_profile_id", "") + if segmentProfileConfig.QosProfile != nil { + dSet(d, "qos_profile_id", segmentProfileConfig.QosProfile.ID) + } + + dSet(d, "segment_security_profile_id", "") + if segmentProfileConfig.SegmentSecurityProfile != nil { + dSet(d, "segment_security_profile_id", segmentProfileConfig.SegmentSecurityProfile.ID) + } + + d.SetId(orgVdcNet.OpenApiOrgVdcNetwork.ID) + + return nil +} + +func resourceVcdNsxtOrgVdcNetworkSegmentProfileDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + vcdClient := meta.(*VCDClient) + vcdClient.lockParentOrgNetwork(d) + defer vcdClient.unLockParentOrgNetwork(d) + + org, err := vcdClient.GetOrgFromResource(d) + if err != nil { + return diag.Errorf("[Org VDC Network Segment Profile configuration delete] error retrieving Org: %s", err) + } + + orgNetworkId := d.Get("org_network_id").(string) + + orgVdcNet, err := org.GetOpenApiOrgVdcNetworkById(orgNetworkId) + if err != nil { + return diag.Errorf("[Org VDC Network Segment Profile configuration delete] error retrieving Org VDC network with ID '%s': %s", orgNetworkId, err) + } + + // Attempt to remove Segment Profile Template using main network structure (it is the only way, if it is set) + if orgVdcNet.OpenApiOrgVdcNetwork != nil && orgVdcNet.OpenApiOrgVdcNetwork.SegmentProfileTemplate != nil { + orgVdcNet.OpenApiOrgVdcNetwork.SegmentProfileTemplate = &types.OpenApiReference{} + _, err := orgVdcNet.Update(orgVdcNet.OpenApiOrgVdcNetwork) + if err != nil { + return diag.Errorf("[Org VDC Network Segment Profile configuration delete] error resetting Segment Profile Template ID for Org VDC Network: %s", err) + } + } + + // Attempt to cleanup any custom segment profiles + _, err = orgVdcNet.UpdateSegmentProfile(&types.OrgVdcNetworkSegmentProfiles{}) + if err != nil { + return diag.Errorf("[Org VDC Network Segment Profile configuration delete] error resetting Segment Profile: %s", err) + } + + return nil +} + +func resourceVcdNsxtOrgVdcNetworkSegmentProfileImport(ctx context.Context, d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + resourceURI := strings.Split(d.Id(), ImportSeparator) + if len(resourceURI) != 3 { + return nil, fmt.Errorf("resource name must be specified as org-name.vdc-org-vdc-group-name.org_network_name") + } + orgName, vdcOrVdcGroupName, orgVdcNetworkName := resourceURI[0], resourceURI[1], resourceURI[2] + + vcdClient := meta.(*VCDClient) + vdcOrVdcGroup, err := lookupVdcOrVdcGroup(vcdClient, orgName, vdcOrVdcGroupName) + if err != nil { + return nil, err + } + + if !vdcOrVdcGroup.IsNsxt() { + return nil, fmt.Errorf("[Org VDC Network Segment Profile configuration import] Segment Profile configuration is only supported for NSX-T networks") + } + + orgVdcNet, err := vdcOrVdcGroup.GetOpenApiOrgVdcNetworkByName(orgVdcNetworkName) + if err != nil { + return nil, fmt.Errorf("[Org VDC Network Segment Profile configuration import] error retrieving Org VDC network with name '%s': %s", orgVdcNetworkName, err) + } + + dSet(d, "org", orgName) + dSet(d, "org_network_id", orgVdcNet.OpenApiOrgVdcNetwork.ID) + d.SetId(orgVdcNet.OpenApiOrgVdcNetwork.ID) + + return []*schema.ResourceData{d}, nil +} diff --git a/vcd/resource_vcd_nsxt_network_segment_profile_test.go b/vcd/resource_vcd_nsxt_network_segment_profile_test.go new file mode 100644 index 000000000..c5483c83e --- /dev/null +++
b/vcd/resource_vcd_nsxt_network_segment_profile_test.go @@ -0,0 +1,561 @@ +//go:build nsxt || ALL || functional + +package vcd + +import ( + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" +) + +func TestAccVcdNsxtNetworkSegmentProfileCustom(t *testing.T) { + preTestChecks(t) + skipIfNotSysAdmin(t) + + // String map to fill the template + var params = StringMap{ + "TestName": t.Name(), + "Org": testConfig.VCD.Org, + "NsxtVdc": testConfig.Nsxt.Vdc, + "EdgeGw": testConfig.Nsxt.EdgeGateway, + "NsxtManager": testConfig.Nsxt.Manager, + "IpDiscoveryProfileName": testConfig.Nsxt.IpDiscoveryProfile, + "MacDiscoveryProfileName": testConfig.Nsxt.MacDiscoveryProfile, + "QosProfileName": testConfig.Nsxt.QosProfile, + "SpoofGuardProfileName": testConfig.Nsxt.SpoofGuardProfile, + "SegmentSecurityProfileName": testConfig.Nsxt.SegmentSecurityProfile, + + "Tags": "nsxt ", + } + + configText1 := templateFill(testAccVcdNsxtNetworkSegmentProfileCustom, params) + debugPrintf("#[DEBUG] CONFIGURATION for step 1: %s", configText1) + + params["FuncName"] = t.Name() + "step2" + configText2DS := templateFill(testAccVcdNsxtNetworkSegmentProfileCustomDS, params) + debugPrintf("#[DEBUG] CONFIGURATION for step 2: %s", configText2DS) + + params["FuncName"] = t.Name() + "step4" + configText4 := templateFill(testAccVcdNsxtNetworkSegmentProfileCustomUpdate, params) + debugPrintf("#[DEBUG] CONFIGURATION for step 4: %s", configText4) + + if vcdShortTest { + t.Skip(acceptanceTestsSkipped) + return + } + + resource.Test(t, resource.TestCase{ + ProviderFactories: testAccProviders, + Steps: []resource.TestStep{ + { + Config: configText1, + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrSet("vcd_nsxt_network_segment_profile.custom-prof", "id"), + resource.TestCheckResourceAttrPair("data.vcd_nsxt_segment_ip_discovery_profile.first", "id", "vcd_nsxt_network_segment_profile.custom-prof", "ip_discovery_profile_id"), + resource.TestCheckResourceAttrPair("data.vcd_nsxt_segment_mac_discovery_profile.first", "id", "vcd_nsxt_network_segment_profile.custom-prof", "mac_discovery_profile_id"), + resource.TestCheckResourceAttrPair("data.vcd_nsxt_segment_spoof_guard_profile.first", "id", "vcd_nsxt_network_segment_profile.custom-prof", "spoof_guard_profile_id"), + resource.TestCheckResourceAttrPair("data.vcd_nsxt_segment_qos_profile.first", "id", "vcd_nsxt_network_segment_profile.custom-prof", "qos_profile_id"), + resource.TestCheckResourceAttrPair("data.vcd_nsxt_segment_security_profile.first", "id", "vcd_nsxt_network_segment_profile.custom-prof", "segment_security_profile_id"), + ), + }, + { + ResourceName: "vcd_nsxt_network_segment_profile.custom-prof", + ImportState: true, + ImportStateVerify: true, + ImportStateIdFunc: importStateIdOrgNsxtVdcObject(t.Name() + "-routed"), + }, + { + Config: configText2DS, + Check: resource.ComposeAggregateTestCheckFunc( + resourceFieldsEqual("data.vcd_nsxt_network_segment_profile.custom-prof", "vcd_nsxt_network_segment_profile.custom-prof", nil), + ), + }, + { + Config: configText4, + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrSet("vcd_nsxt_network_segment_profile.custom-prof", "id"), + resource.TestCheckResourceAttrPair("data.vcd_nsxt_segment_ip_discovery_profile.first", "id", "vcd_nsxt_network_segment_profile.custom-prof", "ip_discovery_profile_id"), + resource.TestCheckResourceAttrPair("data.vcd_nsxt_segment_mac_discovery_profile.first", "id", "vcd_nsxt_network_segment_profile.custom-prof", 
"mac_discovery_profile_id"), + resource.TestCheckResourceAttrPair("data.vcd_nsxt_segment_spoof_guard_profile.first", "id", "vcd_nsxt_network_segment_profile.custom-prof", "spoof_guard_profile_id"), + resource.TestCheckResourceAttrPair("data.vcd_nsxt_segment_qos_profile.first", "id", "vcd_nsxt_network_segment_profile.custom-prof", "qos_profile_id"), + resource.TestCheckResourceAttrPair("data.vcd_nsxt_segment_security_profile.first", "id", "vcd_nsxt_network_segment_profile.custom-prof", "segment_security_profile_id"), + ), + }, + }, + }) + postTestChecks(t) +} + +const testAccVcdNsxtNetworkSegmentProfileCustom = ` +data "vcd_nsxt_manager" "nsxt" { + name = "{{.NsxtManager}}" +} + +data "vcd_nsxt_segment_ip_discovery_profile" "first" { + name = "{{.IpDiscoveryProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_mac_discovery_profile" "first" { + name = "{{.MacDiscoveryProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_spoof_guard_profile" "first" { + name = "{{.SpoofGuardProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_qos_profile" "first" { + name = "{{.QosProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_security_profile" "first" { + name = "{{.SegmentSecurityProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_edgegateway" "existing" { + org = "{{.Org}}" + name = "{{.EdgeGw}}" +} + +resource "vcd_network_routed_v2" "net1" { + org = "{{.Org}}" + name = "{{.TestName}}-routed" + + edge_gateway_id = data.vcd_nsxt_edgegateway.existing.id + + gateway = "1.1.1.1" + prefix_length = 24 + + static_ip_pool { + start_address = "1.1.1.10" + end_address = "1.1.1.20" + } +} + +resource "vcd_nsxt_network_segment_profile" "custom-prof" { + org = "{{.Org}}" + org_network_id = vcd_network_routed_v2.net1.id + + ip_discovery_profile_id = data.vcd_nsxt_segment_ip_discovery_profile.first.id + mac_discovery_profile_id = data.vcd_nsxt_segment_mac_discovery_profile.first.id + spoof_guard_profile_id = data.vcd_nsxt_segment_spoof_guard_profile.first.id + qos_profile_id = data.vcd_nsxt_segment_qos_profile.first.id + segment_security_profile_id = data.vcd_nsxt_segment_security_profile.first.id +} +` + +const testAccVcdNsxtNetworkSegmentProfileCustomDS = testAccVcdNsxtNetworkSegmentProfileCustom + ` +data "vcd_nsxt_network_segment_profile" "custom-prof" { + org = "{{.Org}}" + org_network_id = vcd_network_routed_v2.net1.id + + depends_on = [vcd_nsxt_network_segment_profile.custom-prof] +} +` + +const testAccVcdNsxtNetworkSegmentProfileCustomUpdate = ` +data "vcd_nsxt_manager" "nsxt" { + name = "{{.NsxtManager}}" +} + +data "vcd_nsxt_segment_ip_discovery_profile" "first" { + name = "{{.IpDiscoveryProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_mac_discovery_profile" "first" { + name = "{{.MacDiscoveryProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_spoof_guard_profile" "first" { + name = "{{.SpoofGuardProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_qos_profile" "first" { + name = "{{.QosProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_security_profile" "first" { + name = "{{.SegmentSecurityProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_edgegateway" "existing" { + org = "{{.Org}}" + name = "{{.EdgeGw}}" 
+} + +resource "vcd_network_routed_v2" "net1" { + org = "{{.Org}}" + name = "{{.TestName}}-routed" + description = "{{.TestName}}-description" + + edge_gateway_id = data.vcd_nsxt_edgegateway.existing.id + + gateway = "1.1.1.1" + prefix_length = 24 + + static_ip_pool { + start_address = "1.1.1.10" + end_address = "1.1.1.20" + } +} + +resource "vcd_nsxt_network_segment_profile" "custom-prof" { + org = "{{.Org}}" + org_network_id = vcd_network_routed_v2.net1.id + + ip_discovery_profile_id = data.vcd_nsxt_segment_ip_discovery_profile.first.id + mac_discovery_profile_id = data.vcd_nsxt_segment_mac_discovery_profile.first.id + spoof_guard_profile_id = data.vcd_nsxt_segment_spoof_guard_profile.first.id + qos_profile_id = data.vcd_nsxt_segment_qos_profile.first.id + segment_security_profile_id = data.vcd_nsxt_segment_security_profile.first.id +} +` + +func TestAccVcdNsxtNetworkSegmentProfileTemplate(t *testing.T) { + preTestChecks(t) + skipIfNotSysAdmin(t) + + // String map to fill the template + var params = StringMap{ + "TestName": t.Name(), + "Org": testConfig.VCD.Org, + "NsxtVdc": testConfig.Nsxt.Vdc, + "EdgeGw": testConfig.Nsxt.EdgeGateway, + "NsxtImportSegment": testConfig.Nsxt.NsxtImportSegment, + "NsxtManager": testConfig.Nsxt.Manager, + "IpDiscoveryProfileName": testConfig.Nsxt.IpDiscoveryProfile, + "MacDiscoveryProfileName": testConfig.Nsxt.MacDiscoveryProfile, + "QosProfileName": testConfig.Nsxt.QosProfile, + "SpoofGuardProfileName": testConfig.Nsxt.SpoofGuardProfile, + "SegmentSecurityProfileName": testConfig.Nsxt.SegmentSecurityProfile, + + "Tags": "nsxt ", + } + + configText1 := templateFill(testAccVcdNsxtNetworkSegmentProfileTemplateStep1, params) + debugPrintf("#[DEBUG] CONFIGURATION for step 1: %s", configText1) + + params["FuncName"] = t.Name() + "step2" + configText2DS := templateFill(testAccVcdNsxtNetworkSegmentProfileTemplateDS, params) + debugPrintf("#[DEBUG] CONFIGURATION for step 2: %s", configText2DS) + + params["FuncName"] = t.Name() + "step4" + configText4 := templateFill(testAccVcdNsxtNetworkSegmentProfileTemplateStep2, params) + debugPrintf("#[DEBUG] CONFIGURATION for step 4: %s", configText4) + + if vcdShortTest { + t.Skip(acceptanceTestsSkipped) + return + } + + resource.Test(t, resource.TestCase{ + ProviderFactories: testAccProviders, + Steps: []resource.TestStep{ + { + Config: configText1, + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrSet("vcd_nsxt_segment_profile_template.complete", "id"), + resource.TestCheckResourceAttrSet("vcd_nsxt_network_segment_profile.custom-prof-routed", "id"), + resource.TestCheckResourceAttrSet("vcd_nsxt_network_segment_profile.custom-prof-isolated", "id"), + resource.TestCheckResourceAttrSet("vcd_nsxt_network_segment_profile.custom-prof-imported", "id"), + ), + }, + { + ResourceName: "vcd_nsxt_network_segment_profile.custom-prof-routed", + ImportState: true, + ImportStateVerify: true, + ImportStateIdFunc: importStateIdOrgNsxtVdcObject(t.Name() + "-routed"), + }, + { + Config: configText2DS, + Check: resource.ComposeAggregateTestCheckFunc( + resourceFieldsEqual("data.vcd_nsxt_network_segment_profile.custom-prof-routed", "vcd_nsxt_network_segment_profile.custom-prof-routed", nil), + ), + }, + { + // This step checks that updating Org VDC network does not compromise its Segment Profile configuration + // after updating Org VDC networks + Config: configText4, + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrSet("vcd_nsxt_segment_profile_template.complete", "id"), + 
resource.TestCheckResourceAttrSet("vcd_nsxt_network_segment_profile.custom-prof-routed", "id"), + resource.TestCheckResourceAttrSet("vcd_nsxt_network_segment_profile.custom-prof-isolated", "id"), + resource.TestCheckResourceAttrSet("vcd_nsxt_network_segment_profile.custom-prof-imported", "id"), + ), + }, + }, + }) + postTestChecks(t) +} + +const testAccVcdNsxtNetworkSegmentProfileTemplateStep1 = ` +data "vcd_org_vdc" "nsxt" { + org = "{{.Org}}" + name = "{{.NsxtVdc}}" +} + +data "vcd_nsxt_manager" "nsxt" { + name = "{{.NsxtManager}}" +} + +data "vcd_nsxt_segment_ip_discovery_profile" "first" { + name = "{{.IpDiscoveryProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_mac_discovery_profile" "first" { + name = "{{.MacDiscoveryProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_spoof_guard_profile" "first" { + name = "{{.SpoofGuardProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_qos_profile" "first" { + name = "{{.QosProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_security_profile" "first" { + name = "{{.SegmentSecurityProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +resource "vcd_nsxt_segment_profile_template" "complete" { + name = "{{.TestName}}-complete" + description = "description" + + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id + ip_discovery_profile_id = data.vcd_nsxt_segment_ip_discovery_profile.first.id + mac_discovery_profile_id = data.vcd_nsxt_segment_mac_discovery_profile.first.id + spoof_guard_profile_id = data.vcd_nsxt_segment_spoof_guard_profile.first.id + qos_profile_id = data.vcd_nsxt_segment_qos_profile.first.id + segment_security_profile_id = data.vcd_nsxt_segment_security_profile.first.id +} + +data "vcd_nsxt_edgegateway" "existing" { + org = "{{.Org}}" + name = "{{.EdgeGw}}" +} + +resource "vcd_network_routed_v2" "net1" { + org = "{{.Org}}" + name = "{{.TestName}}-routed" + + edge_gateway_id = data.vcd_nsxt_edgegateway.existing.id + + gateway = "1.1.1.1" + prefix_length = 24 + + static_ip_pool { + start_address = "1.1.1.10" + end_address = "1.1.1.20" + } +} + +resource "vcd_nsxt_network_segment_profile" "custom-prof-routed" { + org = "{{.Org}}" + org_network_id = vcd_network_routed_v2.net1.id + + segment_profile_template_id = vcd_nsxt_segment_profile_template.complete.id +} + +resource "vcd_network_isolated_v2" "nsxt-backed" { + org = "{{.Org}}" + owner_id = data.vcd_org_vdc.nsxt.id + + name = "{{.TestName}}-isolated" + + gateway = "1.1.1.1" + prefix_length = 24 + + static_ip_pool { + start_address = "1.1.1.10" + end_address = "1.1.1.20" + } + + static_ip_pool { + start_address = "1.1.1.100" + end_address = "1.1.1.103" + } +} + +resource "vcd_nsxt_network_segment_profile" "custom-prof-isolated" { + org = "{{.Org}}" + org_network_id = vcd_network_isolated_v2.nsxt-backed.id + + segment_profile_template_id = vcd_nsxt_segment_profile_template.complete.id +} + +resource "vcd_nsxt_network_imported" "net1" { + org = "{{.Org}}" + owner_id = data.vcd_org_vdc.nsxt.id + name = "{{.TestName}}-imported" + + nsxt_logical_switch_name = "{{.NsxtImportSegment}}" + + gateway = "8.1.1.1" + prefix_length = 24 + + static_ip_pool { + start_address = "8.1.1.10" + end_address = "8.1.1.20" + } +} + +resource "vcd_nsxt_network_segment_profile" "custom-prof-imported" { + org = "{{.Org}}" + org_network_id = vcd_nsxt_network_imported.net1.id + + segment_profile_template_id = 
vcd_nsxt_segment_profile_template.complete.id +} +` + +const testAccVcdNsxtNetworkSegmentProfileTemplateDS = testAccVcdNsxtNetworkSegmentProfileTemplateStep1 + ` +data "vcd_nsxt_network_segment_profile" "custom-prof-routed" { + org = "{{.Org}}" + org_network_id = vcd_network_routed_v2.net1.id + + depends_on = [vcd_nsxt_network_segment_profile.custom-prof-routed] +} +` + +const testAccVcdNsxtNetworkSegmentProfileTemplateStep2 = ` +data "vcd_org_vdc" "nsxt" { + org = "{{.Org}}" + name = "{{.NsxtVdc}}" +} + +data "vcd_nsxt_manager" "nsxt" { + name = "{{.NsxtManager}}" +} + +data "vcd_nsxt_segment_ip_discovery_profile" "first" { + name = "{{.IpDiscoveryProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_mac_discovery_profile" "first" { + name = "{{.MacDiscoveryProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_spoof_guard_profile" "first" { + name = "{{.SpoofGuardProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_qos_profile" "first" { + name = "{{.QosProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_security_profile" "first" { + name = "{{.SegmentSecurityProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +resource "vcd_nsxt_segment_profile_template" "complete" { + name = "{{.TestName}}-complete" + description = "description" + + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id + ip_discovery_profile_id = data.vcd_nsxt_segment_ip_discovery_profile.first.id + mac_discovery_profile_id = data.vcd_nsxt_segment_mac_discovery_profile.first.id + spoof_guard_profile_id = data.vcd_nsxt_segment_spoof_guard_profile.first.id + qos_profile_id = data.vcd_nsxt_segment_qos_profile.first.id + segment_security_profile_id = data.vcd_nsxt_segment_security_profile.first.id +} + +data "vcd_nsxt_edgegateway" "existing" { + org = "{{.Org}}" + name = "{{.EdgeGw}}" +} + +resource "vcd_network_routed_v2" "net1" { + org = "{{.Org}}" + name = "{{.TestName}}-routed" + description = "{{.TestName}}-added-description" + + edge_gateway_id = data.vcd_nsxt_edgegateway.existing.id + + gateway = "1.1.1.1" + prefix_length = 24 + + static_ip_pool { + start_address = "1.1.1.10" + end_address = "1.1.1.20" + } + + static_ip_pool { + start_address = "1.1.1.40" + end_address = "1.1.1.50" + } +} + +resource "vcd_nsxt_network_segment_profile" "custom-prof-routed" { + org = "{{.Org}}" + org_network_id = vcd_network_routed_v2.net1.id + + segment_profile_template_id = vcd_nsxt_segment_profile_template.complete.id +} + +resource "vcd_network_isolated_v2" "nsxt-backed" { + org = "{{.Org}}" + owner_id = data.vcd_org_vdc.nsxt.id + + name = "{{.TestName}}-isolated" + description = "My isolated Org VDC network backed by NSX-T" + + gateway = "1.1.1.1" + prefix_length = 24 + + static_ip_pool { + start_address = "1.1.1.10" + end_address = "1.1.1.20" + } + +} + +resource "vcd_nsxt_network_segment_profile" "custom-prof-isolated" { + org = "{{.Org}}" + org_network_id = vcd_network_isolated_v2.nsxt-backed.id + + segment_profile_template_id = vcd_nsxt_segment_profile_template.complete.id +} + +resource "vcd_nsxt_network_imported" "net1" { + org = "{{.Org}}" + owner_id = data.vcd_org_vdc.nsxt.id + name = "{{.TestName}}-imported" + description = "{{.TestName}}-imported" + + nsxt_logical_switch_name = "{{.NsxtImportSegment}}" + + gateway = "8.1.1.1" + prefix_length = 24 + + static_ip_pool { + start_address = "8.1.1.10" + end_address = "8.1.1.20" + } +} + +resource 
"vcd_nsxt_network_segment_profile" "custom-prof-imported" { + org = "{{.Org}}" + org_network_id = vcd_nsxt_network_imported.net1.id + + segment_profile_template_id = vcd_nsxt_segment_profile_template.complete.id +} +` diff --git a/vcd/resource_vcd_nsxt_segment_profile_template.go b/vcd/resource_vcd_nsxt_segment_profile_template.go new file mode 100644 index 000000000..8667a4e47 --- /dev/null +++ b/vcd/resource_vcd_nsxt_segment_profile_template.go @@ -0,0 +1,198 @@ +package vcd + +import ( + "context" + "fmt" + + "github.com/vmware/go-vcloud-director/v2/govcd" + + "github.com/vmware/go-vcloud-director/v2/types/v56" + + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" +) + +func resourceVcdSegmentProfileTemplate() *schema.Resource { + return &schema.Resource{ + CreateContext: resourceVcdSegmentProfileTemplateCreate, + ReadContext: resourceVcdSegmentProfileTemplateRead, + UpdateContext: resourceVcdSegmentProfileTemplateUpdate, + DeleteContext: resourceVcdSegmentProfileTemplateDelete, + Importer: &schema.ResourceImporter{ + StateContext: resourceVcdSegmentProfileTemplateImport, + }, + + Schema: map[string]*schema.Schema{ + "nsxt_manager_id": { + Type: schema.TypeString, + Required: true, + Description: "NSX-T Manager ID", + }, + "name": { + Type: schema.TypeString, + Required: true, + Description: "Name of Segment Profile Template", + }, + "description": { + Type: schema.TypeString, + Optional: true, + Description: "Description of Segment Profile Template", + }, + "ip_discovery_profile_id": { + Type: schema.TypeString, + Optional: true, + Description: "Segment IP Discovery Profile ID", + }, + "mac_discovery_profile_id": { + Type: schema.TypeString, + Optional: true, + Description: "Segment MAC Discovery Profile ID", + }, + "spoof_guard_profile_id": { + Type: schema.TypeString, + Optional: true, + Description: "Segment Spoof Guard Profile ID", + }, + "qos_profile_id": { + Type: schema.TypeString, + Optional: true, + Description: "Segment QoS Profile ID", + }, + "segment_security_profile_id": { + Type: schema.TypeString, + Optional: true, + Description: "Segment Security Profile ID", + }, + }, + } +} + +func resourceVcdSegmentProfileTemplateCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + vcdClient := meta.(*VCDClient) + + segmentProfileTemplateCfg := getNsxtSegmentProfileTemplateType(d) + createdSegmentProfileTemplate, err := vcdClient.CreateSegmentProfileTemplate(segmentProfileTemplateCfg) + if err != nil { + return diag.Errorf("error creating NSX-T Segment Profile Template '%s': %s", segmentProfileTemplateCfg.Name, err) + } + + d.SetId(createdSegmentProfileTemplate.NsxtSegmentProfileTemplate.ID) + + return resourceVcdSegmentProfileTemplateRead(ctx, d, meta) +} + +func resourceVcdSegmentProfileTemplateUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + vcdClient := meta.(*VCDClient) + + spt, err := vcdClient.GetSegmentProfileTemplateById(d.Id()) + if err != nil { + return diag.Errorf("unable to find NSX-T Segment Profile Template: %s", err) + } + + updateSegmentProfileTemplateConfig := getNsxtSegmentProfileTemplateType(d) + updateSegmentProfileTemplateConfig.ID = d.Id() + _, err = spt.Update(updateSegmentProfileTemplateConfig) + if err != nil { + return diag.Errorf("error updating NSX-T Segment Profile Template: %s", err) + } + + return resourceVcdSegmentProfileTemplateRead(ctx, d, meta) +} + +func resourceVcdSegmentProfileTemplateRead(_ 
context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + vcdClient := meta.(*VCDClient) + + spt, err := vcdClient.GetSegmentProfileTemplateById(d.Id()) + if err != nil { + if govcd.ContainsNotFound(err) { + d.SetId("") + return nil + } + return diag.Errorf("unable to find NSX-T Segment Profile Template: %s", err) + } + + setNsxtSegmentProfileTemplateData(d, spt.NsxtSegmentProfileTemplate) + + return nil +} + +func resourceVcdSegmentProfileTemplateDelete(_ context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + vcdClient := meta.(*VCDClient) + + spt, err := vcdClient.GetSegmentProfileTemplateById(d.Id()) + if err != nil { + return diag.Errorf("unable to find NSX-T Segment Profile Template: %s", err) + } + + err = spt.Delete() + if err != nil { + return diag.Errorf("error deleting NSX-T Segment Profile Template: %s", err) + } + + return nil +} + +func resourceVcdSegmentProfileTemplateImport(_ context.Context, d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + vcdClient := meta.(*VCDClient) + + resourceURI := d.Id() + spt, err := vcdClient.GetSegmentProfileTemplateByName(resourceURI) + if err != nil { + return nil, fmt.Errorf("error finding NSX-T Segment Profile Template with Name '%s': %s", d.Id(), err) + } + + d.SetId(spt.NsxtSegmentProfileTemplate.ID) + return []*schema.ResourceData{d}, nil +} + +func getNsxtSegmentProfileTemplateType(d *schema.ResourceData) *types.NsxtSegmentProfileTemplate { + + config := &types.NsxtSegmentProfileTemplate{ + Name: d.Get("name").(string), + Description: d.Get("description").(string), + IPDiscoveryProfile: &types.Reference{ID: d.Get("ip_discovery_profile_id").(string)}, + MacDiscoveryProfile: &types.Reference{ID: d.Get("mac_discovery_profile_id").(string)}, + QosProfile: &types.Reference{ID: d.Get("qos_profile_id").(string)}, + SegmentSecurityProfile: &types.Reference{ID: d.Get("segment_security_profile_id").(string)}, + SpoofGuardProfile: &types.Reference{ID: d.Get("spoof_guard_profile_id").(string)}, + SourceNsxTManagerRef: &types.OpenApiReference{ID: d.Get("nsxt_manager_id").(string)}, + } + + return config +} + +func setNsxtSegmentProfileTemplateData(d *schema.ResourceData, config *types.NsxtSegmentProfileTemplate) { + dSet(d, "name", config.Name) + dSet(d, "description", config.Description) + + dSet(d, "nsxt_manager_id", "") + if config.SourceNsxTManagerRef != nil { + dSet(d, "nsxt_manager_id", config.SourceNsxTManagerRef.ID) + } + + dSet(d, "ip_discovery_profile_id", "") + if config.IPDiscoveryProfile != nil { + dSet(d, "ip_discovery_profile_id", config.IPDiscoveryProfile.ID) + } + + dSet(d, "mac_discovery_profile_id", "") + if config.MacDiscoveryProfile != nil { + dSet(d, "mac_discovery_profile_id", config.MacDiscoveryProfile.ID) + } + + dSet(d, "qos_profile_id", "") + if config.QosProfile != nil { + dSet(d, "qos_profile_id", config.QosProfile.ID) + } + + dSet(d, "segment_security_profile_id", "") + if config.SegmentSecurityProfile != nil { + dSet(d, "segment_security_profile_id", config.SegmentSecurityProfile.ID) + } + + dSet(d, "spoof_guard_profile_id", "") + if config.SpoofGuardProfile != nil { + dSet(d, "spoof_guard_profile_id", config.SpoofGuardProfile.ID) + } + +} diff --git a/vcd/resource_vcd_nsxt_segment_profile_template_test.go b/vcd/resource_vcd_nsxt_segment_profile_template_test.go new file mode 100644 index 000000000..40c313737 --- /dev/null +++ b/vcd/resource_vcd_nsxt_segment_profile_template_test.go @@ -0,0 +1,260 @@ +//go:build nsxt || ALL || functional + 
+package vcd + +import ( + "fmt" + "regexp" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" + "github.com/vmware/go-vcloud-director/v2/govcd" +) + +func TestAccVcdNsxtSegmentProfileTemplate(t *testing.T) { + preTestChecks(t) + skipIfNotSysAdmin(t) + + // String map to fill the template + var params = StringMap{ + "TestName": t.Name(), + "NsxtManager": testConfig.Nsxt.Manager, + "IpDiscoveryProfileName": testConfig.Nsxt.IpDiscoveryProfile, + "MacDiscoveryProfileName": testConfig.Nsxt.MacDiscoveryProfile, + "QosProfileName": testConfig.Nsxt.QosProfile, + "SpoofGuardProfileName": testConfig.Nsxt.SpoofGuardProfile, + "SegmentSecurityProfileName": testConfig.Nsxt.SegmentSecurityProfile, + + "Tags": "nsxt ", + } + + configText1 := templateFill(testAccVcdNsxtSegmentProfileTemplate, params) + debugPrintf("#[DEBUG] CONFIGURATION for step 1: %s", configText1) + + params["FuncName"] = t.Name() + "step2" + configText2DS := templateFill(testAccVcdNsxtSegmentProfileTemplateDS, params) + debugPrintf("#[DEBUG] CONFIGURATION for step 2: %s", configText2DS) + + params["FuncName"] = t.Name() + "step3" + configText3 := templateFill(testAccVcdNsxtSegmentProfileTemplateGlobalDefault, params) + debugPrintf("#[DEBUG] CONFIGURATION for step 3: %s", configText3) + + params["FuncName"] = t.Name() + "step4" + configText4DS := templateFill(testAccVcdNsxtSegmentProfileTemplateGlobalDefaultDS, params) + debugPrintf("#[DEBUG] CONFIGURATION for step 4: %s", configText4DS) + + params["FuncName"] = t.Name() + "step7" + configText7 := templateFill(testAccVcdNsxtSegmentProfileTemplateGlobalDefaultNoValues, params) + debugPrintf("#[DEBUG] CONFIGURATION for step 7: %s", configText7) + + if vcdShortTest { + t.Skip(acceptanceTestsSkipped) + return + } + + resource.Test(t, resource.TestCase{ + ProviderFactories: testAccProviders, + CheckDestroy: resource.ComposeAggregateTestCheckFunc( + testAccCheckVcdSegmentProfileTemplateDestroy("vcd_nsxt_segment_profile_template.empty"), + testAccCheckVcdSegmentProfileTemplateDestroy("vcd_nsxt_segment_profile_template.complete"), + testAccCheckVcdSegmentProfileTemplateDestroy("vcd_nsxt_segment_profile_template.half-complete"), + ), + Steps: []resource.TestStep{ + { + Config: configText1, + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttr("vcd_nsxt_segment_profile_template.empty", "name", t.Name()+"-empty"), + resource.TestCheckResourceAttr("vcd_nsxt_segment_profile_template.empty", "description", "description"), + resource.TestCheckResourceAttr("vcd_nsxt_segment_profile_template.empty", "ip_discovery_profile_id", ""), + resource.TestCheckResourceAttr("vcd_nsxt_segment_profile_template.empty", "mac_discovery_profile_id", ""), + resource.TestCheckResourceAttr("vcd_nsxt_segment_profile_template.empty", "spoof_guard_profile_id", ""), + resource.TestCheckResourceAttr("vcd_nsxt_segment_profile_template.empty", "qos_profile_id", ""), + resource.TestCheckResourceAttr("vcd_nsxt_segment_profile_template.empty", "segment_security_profile_id", ""), + + resource.TestCheckResourceAttr("vcd_nsxt_segment_profile_template.complete", "name", t.Name()+"-complete"), + resource.TestMatchResourceAttr("vcd_nsxt_segment_profile_template.complete", "ip_discovery_profile_id", regexp.MustCompile(`[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}$`)), + resource.TestMatchResourceAttr("vcd_nsxt_segment_profile_template.complete", "mac_discovery_profile_id", 
regexp.MustCompile(`[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}$`)), + resource.TestMatchResourceAttr("vcd_nsxt_segment_profile_template.complete", "spoof_guard_profile_id", regexp.MustCompile(`[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}$`)), + resource.TestMatchResourceAttr("vcd_nsxt_segment_profile_template.complete", "qos_profile_id", regexp.MustCompile(`[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}$`)), + resource.TestMatchResourceAttr("vcd_nsxt_segment_profile_template.complete", "segment_security_profile_id", regexp.MustCompile(`[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}$`)), + + resource.TestCheckResourceAttr("vcd_nsxt_segment_profile_template.half-complete", "name", t.Name()+"-half-complete"), + resource.TestCheckResourceAttr("vcd_nsxt_segment_profile_template.half-complete", "description", ""), + resource.TestMatchResourceAttr("vcd_nsxt_segment_profile_template.half-complete", "ip_discovery_profile_id", regexp.MustCompile(`[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}$`)), + resource.TestMatchResourceAttr("vcd_nsxt_segment_profile_template.half-complete", "mac_discovery_profile_id", regexp.MustCompile(`[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}$`)), + resource.TestMatchResourceAttr("vcd_nsxt_segment_profile_template.half-complete", "spoof_guard_profile_id", regexp.MustCompile(`[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}$`)), + resource.TestCheckResourceAttr("vcd_nsxt_segment_profile_template.half-complete", "qos_profile_id", ""), + resource.TestCheckResourceAttr("vcd_nsxt_segment_profile_template.half-complete", "segment_security_profile_id", ""), + ), + }, + { + ResourceName: "vcd_nsxt_segment_profile_template.complete", + ImportState: true, + ImportStateVerify: true, + ImportStateId: t.Name() + "-complete", + }, + { + Config: configText2DS, + Check: resource.ComposeAggregateTestCheckFunc( + resourceFieldsEqual("data.vcd_nsxt_segment_profile_template.empty", "vcd_nsxt_segment_profile_template.empty", nil), + resourceFieldsEqual("data.vcd_nsxt_segment_profile_template.half-complete", "vcd_nsxt_segment_profile_template.half-complete", nil), + resourceFieldsEqual("data.vcd_nsxt_segment_profile_template.complete", "vcd_nsxt_segment_profile_template.complete", nil), + ), + }, + { + Config: configText3, + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttr("vcd_nsxt_global_default_segment_profile_template.singleton", "id", globalDefaultSegmentProfileId), + ), + }, + { + Config: configText4DS, + Check: resource.ComposeAggregateTestCheckFunc( + resourceFieldsEqual("data.vcd_nsxt_global_default_segment_profile_template.singleton", "vcd_nsxt_global_default_segment_profile_template.singleton", nil), + ), + }, + { + ResourceName: "vcd_nsxt_global_default_segment_profile_template.singleton", + ImportState: true, + ImportStateVerify: true, + ImportStateId: "", // It does not need a value for ID as it is global VCD configuration + }, + { + ResourceName: "vcd_nsxt_global_default_segment_profile_template.singleton", + ImportState: true, + ImportStateVerify: true, + ImportStateId: "dummy", // Attempt to perform import with a dummy ID + }, + { + Config: configText7, + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttr("vcd_nsxt_global_default_segment_profile_template.singleton", "vdc_networks_default_segment_profile_template_id", ""), + resource.TestCheckResourceAttr("vcd_nsxt_global_default_segment_profile_template.singleton", 
"vapp_networks_default_segment_profile_template_id", ""), + ), + }, + }, + }) + postTestChecks(t) +} + +const testAccVcdNsxtSegmentProfileTemplate = ` +data "vcd_nsxt_manager" "nsxt" { + name = "{{.NsxtManager}}" +} + +resource "vcd_nsxt_segment_profile_template" "empty" { + name = "{{.TestName}}-empty" + description = "description" + + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +resource "vcd_nsxt_segment_profile_template" "complete" { + name = "{{.TestName}}-complete" + description = "description" + + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id + ip_discovery_profile_id = data.vcd_nsxt_segment_ip_discovery_profile.first.id + mac_discovery_profile_id = data.vcd_nsxt_segment_mac_discovery_profile.first.id + spoof_guard_profile_id = data.vcd_nsxt_segment_spoof_guard_profile.first.id + qos_profile_id = data.vcd_nsxt_segment_qos_profile.first.id + segment_security_profile_id = data.vcd_nsxt_segment_security_profile.first.id +} + +resource "vcd_nsxt_segment_profile_template" "half-complete" { + name = "{{.TestName}}-half-complete" + + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id + ip_discovery_profile_id = data.vcd_nsxt_segment_ip_discovery_profile.first.id + mac_discovery_profile_id = data.vcd_nsxt_segment_mac_discovery_profile.first.id + spoof_guard_profile_id = data.vcd_nsxt_segment_spoof_guard_profile.first.id +} + +data "vcd_nsxt_segment_ip_discovery_profile" "first" { + name = "{{.IpDiscoveryProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_mac_discovery_profile" "first" { + name = "{{.MacDiscoveryProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_spoof_guard_profile" "first" { + name = "{{.SpoofGuardProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_qos_profile" "first" { + name = "{{.QosProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_security_profile" "first" { + name = "{{.SegmentSecurityProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} +` + +const testAccVcdNsxtSegmentProfileTemplateDS = testAccVcdNsxtSegmentProfileTemplate + ` +data "vcd_nsxt_segment_profile_template" "empty" { + name = vcd_nsxt_segment_profile_template.empty.name + + depends_on = [vcd_nsxt_segment_profile_template.empty] +} + +data "vcd_nsxt_segment_profile_template" "half-complete" { + name = vcd_nsxt_segment_profile_template.half-complete.name + + depends_on = [vcd_nsxt_segment_profile_template.half-complete] +} + +data "vcd_nsxt_segment_profile_template" "complete" { + name = vcd_nsxt_segment_profile_template.complete.name + + depends_on = [vcd_nsxt_segment_profile_template.complete] +} +` + +const testAccVcdNsxtSegmentProfileTemplateGlobalDefault = testAccVcdNsxtSegmentProfileTemplate + ` +resource "vcd_nsxt_global_default_segment_profile_template" "singleton" { + vdc_networks_default_segment_profile_template_id = vcd_nsxt_segment_profile_template.complete.id + vapp_networks_default_segment_profile_template_id = vcd_nsxt_segment_profile_template.empty.id +} +` + +const testAccVcdNsxtSegmentProfileTemplateGlobalDefaultDS = testAccVcdNsxtSegmentProfileTemplateGlobalDefault + ` +data "vcd_nsxt_global_default_segment_profile_template" "singleton" { + + depends_on = [vcd_nsxt_global_default_segment_profile_template.singleton] +} +` + +const testAccVcdNsxtSegmentProfileTemplateGlobalDefaultNoValues = testAccVcdNsxtSegmentProfileTemplate + ` +resource "vcd_nsxt_global_default_segment_profile_template" 
"singleton" { +} +` + +func testAccCheckVcdSegmentProfileTemplateDestroy(identifier string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[identifier] + if !ok { + return fmt.Errorf("not found: %s", identifier) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("no Segment Profile Template ID is set") + } + + conn := testAccProvider.Meta().(*VCDClient) + + _, err := conn.GetSegmentProfileTemplateById(rs.Primary.ID) + + if err == nil || !govcd.ContainsNotFound(err) { + return fmt.Errorf("%s not deleted yet", identifier) + } + return nil + + } +} diff --git a/vcd/resource_vcd_org_vdc.go b/vcd/resource_vcd_org_vdc.go index 5fa02146e..951b71b61 100644 --- a/vcd/resource_vcd_org_vdc.go +++ b/vcd/resource_vcd_org_vdc.go @@ -275,7 +275,9 @@ func resourceVcdOrgVdc() *schema.Resource { "edge_cluster_id": { Type: schema.TypeString, Optional: true, + Computed: true, Description: "ID of NSX-T Edge Cluster (provider vApp networking services and DHCP capability for Isolated networks)", + Deprecated: "Please use 'vcd_org_vdc_nsxt_network_profile' resource to manage Edge Cluster and Segment Profile Templates", }, "enable_nsxv_distributed_firewall": { Type: schema.TypeBool, diff --git a/vcd/resource_vcd_org_vdc_network_profile.go b/vcd/resource_vcd_org_vdc_network_profile.go new file mode 100644 index 000000000..1fad109b3 --- /dev/null +++ b/vcd/resource_vcd_org_vdc_network_profile.go @@ -0,0 +1,144 @@ +package vcd + +import ( + "context" + + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/vmware/go-vcloud-director/v2/govcd" + "github.com/vmware/go-vcloud-director/v2/types/v56" +) + +func resourceVcdNsxtOrgVdcNetworkProfile() *schema.Resource { + return &schema.Resource{ + CreateContext: resourceVcdNsxtOrgVdcNetworkProfileCreateUpdate, + ReadContext: resourceVcdNsxtOrgVdcNetworkProfileRead, + UpdateContext: resourceVcdNsxtOrgVdcNetworkProfileCreateUpdate, + DeleteContext: resourceVcdNsxtOrgVdcNetworkProfileDelete, + Importer: &schema.ResourceImporter{ + StateContext: resourceVcdVdcAccessControlImport, + }, + + Schema: map[string]*schema.Schema{ + "org": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: "The name of organization to use, optional if defined at provider " + + "level. 
Useful when connected as sysadmin working across different organizations", + }, + "vdc": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: "The name of VDC to use, optional if defined at provider level", + }, + "edge_cluster_id": { + Type: schema.TypeString, + Optional: true, + Description: "ID of NSX-T Edge Cluster (provider vApp networking services and DHCP capability for Isolated networks)", + }, + "vdc_networks_default_segment_profile_template_id": { + Type: schema.TypeString, + Optional: true, + Description: "Default NSX-T Segment Profile for Org VDC networks", + }, + "vapp_networks_default_segment_profile_template_id": { + Type: schema.TypeString, + Optional: true, + Description: "Default NSX-T Segment Profile for vApp networks", + }, + }, + } +} + +func resourceVcdNsxtOrgVdcNetworkProfileCreateUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + vcdClient := meta.(*VCDClient) + + _, vdc, err := vcdClient.GetOrgAndVdcFromResource(d) + if err != nil { + return diag.Errorf("error when retrieving VDC: %s", err) + } + + if !vdc.IsNsxt() { + return diag.Errorf("network profile configuration is only supported on NSX-T VDCs") + } + + vdcNetworkProfileConfig := &types.VdcNetworkProfile{} + + if d.Get("edge_cluster_id").(string) != "" { + vdcNetworkProfileConfig.ServicesEdgeCluster = &types.VdcNetworkProfileServicesEdgeCluster{BackingID: d.Get("edge_cluster_id").(string)} + } + + if d.Get("vdc_networks_default_segment_profile_template_id").(string) != "" { + vdcNetworkProfileConfig.VdcNetworkSegmentProfileTemplateRef = &types.OpenApiReference{ID: d.Get("vdc_networks_default_segment_profile_template_id").(string)} + } + + if d.Get("vapp_networks_default_segment_profile_template_id").(string) != "" { + vdcNetworkProfileConfig.VappNetworkSegmentProfileTemplateRef = &types.OpenApiReference{ID: d.Get("vapp_networks_default_segment_profile_template_id").(string)} + } + + _, err = vdc.UpdateVdcNetworkProfile(vdcNetworkProfileConfig) + if err != nil { + return diag.Errorf("error updating VDC network profile configuration: %s", err) + } + + d.SetId(vdc.Vdc.ID) + return resourceVcdNsxtOrgVdcNetworkProfileRead(ctx, d, meta) +} + +func resourceVcdNsxtOrgVdcNetworkProfileRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + return resourceDataSourceVcdNsxtOrgVdcNetworkProfileRead(ctx, d, meta, "resource") +} + +func resourceDataSourceVcdNsxtOrgVdcNetworkProfileRead(ctx context.Context, d *schema.ResourceData, meta interface{}, origin string) diag.Diagnostics { + vcdClient := meta.(*VCDClient) + + _, vdc, err := vcdClient.GetOrgAndVdcFromResource(d) + if err != nil { + if origin == "resource" && govcd.ContainsNotFound(err) { + d.SetId("") + return nil + } + return diag.Errorf("error when retrieving VDC: %s", err) + } + + netProfile, err := vdc.GetVdcNetworkProfile() + if err != nil { + return diag.Errorf("error getting VDC Network Profile: %s", err) + } + + dSet(d, "edge_cluster_id", "") + if netProfile.ServicesEdgeCluster != nil && netProfile.ServicesEdgeCluster.BackingID != "" { + dSet(d, "edge_cluster_id", netProfile.ServicesEdgeCluster.BackingID) + } + + dSet(d, "vapp_networks_default_segment_profile_template_id", "") + if netProfile.VappNetworkSegmentProfileTemplateRef != nil { + dSet(d, "vapp_networks_default_segment_profile_template_id", netProfile.VappNetworkSegmentProfileTemplateRef.ID) + } + + dSet(d, "vdc_networks_default_segment_profile_template_id", "") + if 
netProfile.VdcNetworkSegmentProfileTemplateRef != nil { + dSet(d, "vdc_networks_default_segment_profile_template_id", netProfile.VdcNetworkSegmentProfileTemplateRef.ID) + } + + d.SetId(vdc.Vdc.ID) + return nil +} + +func resourceVcdNsxtOrgVdcNetworkProfileDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + vcdClient := meta.(*VCDClient) + + _, vdc, err := vcdClient.GetOrgAndVdcFromResource(d) + if err != nil { + return diag.Errorf("error when retrieving VDC: %s", err) + } + + _, err = vdc.UpdateVdcNetworkProfile(&types.VdcNetworkProfile{}) + if err != nil { + return diag.Errorf("error deleting VDC network profile configuration: %s", err) + } + + return nil +} diff --git a/vcd/resource_vcd_org_vdc_network_profile_test.go b/vcd/resource_vcd_org_vdc_network_profile_test.go new file mode 100644 index 000000000..54e37593c --- /dev/null +++ b/vcd/resource_vcd_org_vdc_network_profile_test.go @@ -0,0 +1,193 @@ +//go:build vdc || nsxt || ALL || functional + +package vcd + +import ( + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" +) + +func TestAccVcdOrgVdcNsxtNetworkProfile(t *testing.T) { + preTestChecks(t) + skipIfNotSysAdmin(t) + + var params = StringMap{ + "VdcName": testConfig.Nsxt.Vdc, + "OrgName": testConfig.VCD.Org, + + "EdgeCluster": testConfig.Nsxt.NsxtEdgeCluster, + + "TestName": t.Name(), + "NsxtManager": testConfig.Nsxt.Manager, + "IpDiscoveryProfileName": testConfig.Nsxt.IpDiscoveryProfile, + "MacDiscoveryProfileName": testConfig.Nsxt.MacDiscoveryProfile, + "QosProfileName": testConfig.Nsxt.QosProfile, + "SpoofGuardProfileName": testConfig.Nsxt.SpoofGuardProfile, + "SegmentSecurityProfileName": testConfig.Nsxt.SegmentSecurityProfile, + + "Tags": "vdc", + } + testParamsNotEmpty(t, params) + + configText1 := templateFill(testAccVcdOrgVdcNsxtNetworkProfile, params) + params["FuncName"] = t.Name() + "-step2DS" + configText2 := templateFill(testAccVcdOrgVdcNsxtNetworkProfileDS, params) + + params["FuncName"] = t.Name() + "-step4" + configText4 := templateFill(testAccVcdOrgVdcNsxtNetworkProfileRemove, params) + + debugPrintf("#[DEBUG] CONFIGURATION - Step1: %s", configText1) + debugPrintf("#[DEBUG] CONFIGURATION - Step2: %s", configText2) + debugPrintf("#[DEBUG] CONFIGURATION - Step4: %s", configText4) + + if vcdShortTest { + t.Skip(acceptanceTestsSkipped) + return + } + + resource.Test(t, resource.TestCase{ + ProviderFactories: testAccProviders, + CheckDestroy: resource.ComposeAggregateTestCheckFunc( + testAccCheckVdcDestroy, + ), + Steps: []resource.TestStep{ + { + Config: configText1, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrSet("vcd_org_vdc_nsxt_network_profile.nsxt", "vdc_networks_default_segment_profile_template_id"), + resource.TestCheckResourceAttrSet("vcd_org_vdc_nsxt_network_profile.nsxt", "vapp_networks_default_segment_profile_template_id"), + resource.TestCheckResourceAttrSet("vcd_org_vdc_nsxt_network_profile.nsxt", "edge_cluster_id"), + resource.TestCheckResourceAttrPair("vcd_org_vdc_nsxt_network_profile.nsxt", "edge_cluster_id", "data.vcd_org_vdc.nsxt2", "edge_cluster_id"), + ), + }, + { + Config: configText2, + Check: resource.ComposeTestCheckFunc( + resourceFieldsEqual("vcd_org_vdc_nsxt_network_profile.nsxt", "vcd_org_vdc_nsxt_network_profile.nsxt", nil), + ), + }, + { + ResourceName: "vcd_org_vdc_nsxt_network_profile.nsxt", + ImportState: true, + ImportStateVerify: true, + ImportStateId: testConfig.VCD.Org + "." 
+ testConfig.Nsxt.Vdc, + }, + { + Config: configText4, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("vcd_org_vdc_nsxt_network_profile.nsxt", "vdc_networks_default_segment_profile_template_id", ""), + resource.TestCheckResourceAttr("vcd_org_vdc_nsxt_network_profile.nsxt", "vapp_networks_default_segment_profile_template_id", ""), + resource.TestCheckResourceAttr("vcd_org_vdc_nsxt_network_profile.nsxt", "edge_cluster_id", ""), + resource.TestCheckResourceAttrPair("vcd_org_vdc_nsxt_network_profile.nsxt", "edge_cluster_id", "data.vcd_org_vdc.nsxt2", "edge_cluster_id"), + ), + }, + }, + }) + postTestChecks(t) +} + +const testAccVcdOrgVdcNsxtNetworkProfileCommon = ` +data "vcd_nsxt_manager" "nsxt" { + name = "{{.NsxtManager}}" +} + +data "vcd_nsxt_segment_ip_discovery_profile" "first" { + name = "{{.IpDiscoveryProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_mac_discovery_profile" "first" { + name = "{{.MacDiscoveryProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_spoof_guard_profile" "first" { + name = "{{.SpoofGuardProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_qos_profile" "first" { + name = "{{.QosProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_security_profile" "first" { + name = "{{.SegmentSecurityProfileName}}" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +resource "vcd_nsxt_segment_profile_template" "complete" { + name = "{{.TestName}}-complete" + description = "description" + + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id + ip_discovery_profile_id = data.vcd_nsxt_segment_ip_discovery_profile.first.id + mac_discovery_profile_id = data.vcd_nsxt_segment_mac_discovery_profile.first.id + spoof_guard_profile_id = data.vcd_nsxt_segment_spoof_guard_profile.first.id + qos_profile_id = data.vcd_nsxt_segment_qos_profile.first.id + segment_security_profile_id = data.vcd_nsxt_segment_security_profile.first.id +} + +` + +const testAccVcdOrgVdcNsxtNetworkProfile = testAccVcdOrgVdcNsxtNetworkProfileCommon + ` +data "vcd_org_vdc" "nsxt" { + org = "{{.OrgName}}" + name = "{{.VdcName}}" +} + +data "vcd_nsxt_edge_cluster" "first" { + org = "{{.OrgName}}" + vdc_id = data.vcd_org_vdc.nsxt.id + name = "{{.EdgeCluster}}" +} + +resource "vcd_org_vdc_nsxt_network_profile" "nsxt" { + org = "{{.OrgName}}" + vdc = "{{.VdcName}}" + + edge_cluster_id = data.vcd_nsxt_edge_cluster.first.id + vdc_networks_default_segment_profile_template_id = vcd_nsxt_segment_profile_template.complete.id + vapp_networks_default_segment_profile_template_id = vcd_nsxt_segment_profile_template.complete.id +} + +data "vcd_org_vdc" "nsxt2" { + org = "{{.OrgName}}" + name = "{{.VdcName}}" + + depends_on = [vcd_org_vdc_nsxt_network_profile.nsxt] +} +` + +const testAccVcdOrgVdcNsxtNetworkProfileDS = testAccVcdOrgVdcNsxtNetworkProfile + ` +data "vcd_org_vdc_nsxt_network_profile" "nsxt" { + org = vcd_org_vdc_nsxt_network_profile.nsxt.org + vdc = vcd_org_vdc_nsxt_network_profile.nsxt.vdc +} +` + +const testAccVcdOrgVdcNsxtNetworkProfileRemove = testAccVcdOrgVdcNsxtNetworkProfileCommon + ` +data "vcd_org_vdc" "nsxt" { + org = "{{.OrgName}}" + name = "{{.VdcName}}" +} + +data "vcd_nsxt_edge_cluster" "first" { + org = "{{.OrgName}}" + vdc_id = data.vcd_org_vdc.nsxt.id + name = "{{.EdgeCluster}}" +} + +resource "vcd_org_vdc_nsxt_network_profile" "nsxt" { + org = "{{.OrgName}}" + vdc = "{{.VdcName}}" +} + +data "vcd_org_vdc" "nsxt2" { 
+ org = "{{.OrgName}}" + name = "{{.VdcName}}" + + depends_on = [vcd_org_vdc_nsxt_network_profile.nsxt] +} +` diff --git a/vcd/resource_vcd_org_vdc_nsxt_edge_cluster_test.go b/vcd/resource_vcd_org_vdc_nsxt_edge_cluster_test.go index 1da9cc169..0a1412506 100644 --- a/vcd/resource_vcd_org_vdc_nsxt_edge_cluster_test.go +++ b/vcd/resource_vcd_org_vdc_nsxt_edge_cluster_test.go @@ -27,16 +27,8 @@ func TestAccVcdOrgVdcNsxtEdgeCluster(t *testing.T) { params["FuncName"] = t.Name() + "-step2DS" configText2 := templateFill(testAccVcdOrgVdcNsxtEdgeClusterDataSource, params) - params["FuncName"] = t.Name() + "-Update" - configText3 := templateFill(testAccVcdOrgVdcNsxtEdgeCluster_update, params) - - params["FuncName"] = t.Name() + "-UpdateDS" - configText4 := templateFill(testAccVcdOrgVdcNsxtEdgeClusterDataSource_update, params) - debugPrintf("#[DEBUG] CONFIGURATION - Step1: %s", configText1) debugPrintf("#[DEBUG] CONFIGURATION - Step2: %s", configText2) - debugPrintf("#[DEBUG] CONFIGURATION - Step3: %s", configText3) - debugPrintf("#[DEBUG] CONFIGURATION - Step4: %s", configText4) if vcdShortTest { t.Skip(acceptanceTestsSkipped) @@ -67,18 +59,6 @@ func TestAccVcdOrgVdcNsxtEdgeCluster(t *testing.T) { resourceFieldsEqual("vcd_org_vdc.with-edge-cluster", "data.vcd_org_vdc.ds", []string{"delete_recursive", "delete_force", "%"}), ), }, - { - Config: configText3, - Check: resource.ComposeTestCheckFunc( - resource.TestCheckResourceAttr("vcd_org_vdc.with-edge-cluster", "edge_cluster_id", ""), - ), - }, - { - Config: configText4, - Check: resource.ComposeTestCheckFunc( - resourceFieldsEqual("vcd_org_vdc.with-edge-cluster", "data.vcd_org_vdc.ds", []string{"delete_recursive", "delete_force", "%"}), - ), - }, }, }) postTestChecks(t) @@ -140,50 +120,3 @@ resource "vcd_org_vdc" "with-edge-cluster" { ` const testAccVcdOrgVdcNsxtEdgeClusterDataSource = testAccVcdOrgVdcNsxtEdgeCluster + testAccVcdOrgVdcNsxtEdgeClusterDS - -const testAccVcdOrgVdcNsxtEdgeCluster_update = ` -data "vcd_provider_vdc" "pvdc" { - name = "{{.ProviderVdc}}" -} - -data "vcd_nsxt_edge_cluster" "ec" { - provider_vdc_id = data.vcd_provider_vdc.pvdc.id - name = "{{.EdgeCluster}}" -} - -resource "vcd_org_vdc" "with-edge-cluster" { - name = "{{.VdcName}}" - org = "{{.OrgName}}" - - allocation_model = "ReservationPool" - network_pool_name = "{{.NetworkPool}}" - provider_vdc_name = data.vcd_provider_vdc.pvdc.name - - compute_capacity { - cpu { - allocated = 1024 - limit = 1024 - } - - memory { - allocated = 1024 - limit = 1024 - } - } - - storage_profile { - name = "{{.ProviderVdcStorageProfile}}" - enabled = true - limit = 10240 - default = true - } - - enabled = true - enable_thin_provisioning = true - enable_fast_provisioning = true - delete_force = true - delete_recursive = true -} -` - -const testAccVcdOrgVdcNsxtEdgeClusterDataSource_update = testAccVcdOrgVdcNsxtEdgeCluster_update + testAccVcdOrgVdcNsxtEdgeClusterDS diff --git a/website/docs/d/nsxt_global_default_segment_profile_template.html.markdown b/website/docs/d/nsxt_global_default_segment_profile_template.html.markdown new file mode 100644 index 000000000..8444b671a --- /dev/null +++ b/website/docs/d/nsxt_global_default_segment_profile_template.html.markdown @@ -0,0 +1,34 @@ +--- +layout: "vcd" +page_title: "VMware Cloud Director: vcd_nsxt_global_default_segment_profile_template" +sidebar_current: "docs-vcd-data-source-nsxt-segment-profile-template" +description: |- + Provides a data source to read Global Default NSX-T Segment Profile Templates. 
+--- + +# vcd\_nsxt\_global\_default\_segment\_profile\_template + +Provides a data source to read Global Default NSX-T Segment Profile Templates. + +Supported in provider *v3.11+* and VCD 10.4.0+ with NSX-T. Requires System Administrator privileges. + +## Example Usage + +```hcl +data "vcd_nsxt_global_default_segment_profile_template" "singleton" { +} +``` + +## Argument Reference + +No arguments are required because this is a global VCD configuration. + +## Attribute Reference + +The following attributes are exported: + +* `vdc_networks_default_segment_profile_template_id` - Global Default Segment Profile + Template ID for all VDC Networks +* `vapp_networks_default_segment_profile_template_id` - Global Default Segment Profile + Template ID for all vApp Networks + + diff --git a/website/docs/d/nsxt_network_segment_profile.html.markdown b/website/docs/d/nsxt_network_segment_profile.html.markdown new file mode 100644 index 000000000..74ddab2e4 --- /dev/null +++ b/website/docs/d/nsxt_network_segment_profile.html.markdown @@ -0,0 +1,35 @@ +--- +layout: "vcd" +page_title: "VMware Cloud Director: vcd_nsxt_network_segment_profile" +sidebar_current: "docs-vcd-datasource-nsxt-network-segment-profile" +description: |- + Provides a data source to read Segment Profile configuration for NSX-T Org VDC networks. +--- + +# vcd\_nsxt\_network\_segment\_profile + +Provides a data source to read Segment Profile configuration for NSX-T Org VDC networks. + +Supported in provider *v3.11+* and VCD 10.4.0+ with NSX-T. + +## Example Usage + +```hcl +data "vcd_nsxt_network_segment_profile" "custom-prof" { + org            = "my-org" + org_network_id = vcd_network_routed_v2.net1.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `org` - (Optional) The name of organization to use, optional if defined at provider level +* `org_network_id` - (Required) Org VDC Network ID + +## Attribute Reference + +All the arguments and attributes defined in +[`vcd_nsxt_network_segment_profile`](/providers/vmware/vcd/latest/docs/resources/nsxt_network_segment_profile) +resource are available. diff --git a/website/docs/d/nsxt_segment_ip_discovery_profile.html.markdown b/website/docs/d/nsxt_segment_ip_discovery_profile.html.markdown new file mode 100644 index 000000000..10a9016b2 --- /dev/null +++ b/website/docs/d/nsxt_segment_ip_discovery_profile.html.markdown @@ -0,0 +1,66 @@ +--- +layout: "vcd" +page_title: "VMware Cloud Director: vcd_nsxt_segment_ip_discovery_profile" +sidebar_current: "docs-vcd-data-source-nsxt-segment-ip-discovery-profile" +description: |- + Provides a VMware Cloud Director NSX-T IP Discovery Profile data source. This can be used to read NSX-T Segment Profile definitions. +--- + +# vcd\_nsxt\_segment\_ip\_discovery\_profile + +Provides a VMware Cloud Director NSX-T IP Discovery Profile data source. This can be used to read NSX-T Segment Profile definitions. + +Supported in provider *v3.11+*. + +## Example Usage (IP Discovery Profile) + +```hcl +data "vcd_nsxt_manager" "nsxt" { + name = "nsxManager1" +} + +data "vcd_nsxt_segment_ip_discovery_profile" "first" { + name            = "ip-discovery-profile-0" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} +``` + + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name of Segment Profile +* `nsxt_manager_id` - (Optional) Segment Profile search context. Use when searching by NSX-T manager +* `vdc_id` - (Optional) Segment Profile search context.
Use when searching by VDC +* `vdc_group_id` - (Optional) Segment Profile search context. Use when searching by VDC group + +-> Note: only one of `nsxt_manager_id`, `vdc_id`, `vdc_group_id` can be used + + +## Attribute reference + +* `description` - Description of IP Discovery Profile +* `arp_binding_limit` - Indicates the number of ARP snooped IP addresses to be remembered per + logical port +* `arp_binding_timeout` - ARP and ND (Neighbor Discovery) cache timeout (in minutes) +* `is_arp_snooping_enabled` - Defines whether ARP snooping is enabled +* `is_dhcp_snooping_v4_enabled` - Defines whether DHCP snooping for IPv4 is enabled +* `is_dhcp_snooping_v6_enabled` - Defines whether DHCP snooping for IPv6 is enabled +* `is_duplicate_ip_detection_enabled` - Defines whether duplicate IP detection is enabled. Duplicate + IP detection is used to determine if there is any IP conflict with any other port on the same + logical switch. If a conflict is detected, then the IP is marked as a duplicate on the port where + the IP was discovered last +* `is_nd_snooping_enabled` - Defines whether ND (Neighbor Discovery) snooping is enabled. If true, + this method will snoop the NS (Neighbor Solicitation) and NA (Neighbor Advertisement) messages in + the ND (Neighbor Discovery Protocol) family of messages which are transmitted by a VM. From the NS + messages, we will learn about the source which sent this NS message. From the NA message, we will + learn the resolved address in the message which the VM is a recipient of. Addresses snooped by + this method are subject to TOFU +* `is_tofu_enabled` - Defines whether `Trust on First Use(TOFU)` paradigm is enabled +* `is_vmtools_v4_enabled` - Defines whether fetching IPv4 address using vm-tools is enabled. This + option is only supported on ESX where vm-tools is installed +* `is_vmtools_v6_enabled` - Defines whether fetching IPv6 address using vm-tools is enabled. This + will learn the IPv6 addresses which are configured on interfaces of a VM with the help of the + VMTools software +* `nd_snooping_limit` - Maximum number of ND (Neighbor Discovery Protocol) snooped IPv6 addresses diff --git a/website/docs/d/nsxt_segment_mac_discovery_profile.html.markdown b/website/docs/d/nsxt_segment_mac_discovery_profile.html.markdown new file mode 100644 index 000000000..45665949d --- /dev/null +++ b/website/docs/d/nsxt_segment_mac_discovery_profile.html.markdown @@ -0,0 +1,50 @@ +--- +layout: "vcd" +page_title: "VMware Cloud Director: vcd_nsxt_segment_mac_discovery_profile" +sidebar_current: "docs-vcd-data-source-nsxt-segment-mac-discovery-profile" +description: |- + Provides a VMware Cloud Director NSX-T MAC Discovery Profile data source. This can be used to read NSX-T Segment Profile definitions. +--- + +# vcd\_nsxt\_segment\_mac\_discovery\_profile + +Provides a VMware Cloud Director NSX-T MAC Discovery Profile data source. This can be used to read NSX-T Segment Profile definitions. + +Supported in provider *v3.11+*. + +## Example Usage (MAC Discovery Profile) + +```hcl +data "vcd_nsxt_manager" "nsxt" { + name = "nsxManager1" +} + +data "vcd_nsxt_segment_mac_discovery_profile" "first" { + name = "mac-discovery-profile-0" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} +``` + + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name of Segment Profile +* `nsxt_manager_id` - (Optional) Segment Profile search context. Use when searching by NSX-T manager +* `vdc_id` - (Optional) Segment Profile search context. 
Use when searching by VDC +* `vdc_group_id` - (Optional) Segment Profile search context. Use when searching by VDC group + +-> Note: only one of `nsxt_manager_id`, `vdc_id`, `vdc_group_id` can be used + +## Attribute reference + +* `description` - Description of MAC Discovery Profile +* `is_mac_change_enabled` - Defines whether source MAC address change is enabled +* `is_mac_learning_enabled` - Defines whether source MAC address learning is enabled +* `is_unknown_unicast_flooding_enabled` - Defines whether unknown unicast flooding rule is enabled + This allows flooding for unlearned MAC for ingress traffic +* `mac_learning_aging_time` - Aging time in seconds for learned MAC address. Indicates how long + learned MAC address remain +* `mac_limit` - The maximum number of MAC addresses that can be learned on this port +* `mac_policy` - The policy after MAC Limit is exceeded. It can be either `ALLOW` or `DROP` \ No newline at end of file diff --git a/website/docs/d/nsxt_segment_profile_template.html.markdown b/website/docs/d/nsxt_segment_profile_template.html.markdown new file mode 100644 index 000000000..d1956ca14 --- /dev/null +++ b/website/docs/d/nsxt_segment_profile_template.html.markdown @@ -0,0 +1,32 @@ +--- +layout: "vcd" +page_title: "VMware Cloud Director: vcd_nsxt_segment_profile_template" +sidebar_current: "docs-vcd-data-source-nsxt-segment-profile-template" +description: |- + Provides a data source to read NSX-T Segment Profile Templates. +--- + +# vcd\_nsxt\_segment\_profile\_template + +Provides a data source to read NSX-T Segment Profile Templates. + +Supported in provider *v3.11+* and VCD 10.4.0+ with NSX-T. Requires System Administrator privileges. + +## Example Usage (Complete example with all Segment Profiles) + +```hcl +data "vcd_nsxt_segment_profile_template" "complete" { + name = "my-segment-profile-template-name" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) Name of existing Segment Profile Template + +## Attribute reference + +All properties defined in [vcd_nsxt_segment_profile_template](/providers/vmware/vcd/latest/docs/resources/nsxt_segment_profile_template) +resource are available. diff --git a/website/docs/d/nsxt_segment_qos_profile.html.markdown b/website/docs/d/nsxt_segment_qos_profile.html.markdown new file mode 100644 index 000000000..b12c16f87 --- /dev/null +++ b/website/docs/d/nsxt_segment_qos_profile.html.markdown @@ -0,0 +1,61 @@ +--- +layout: "vcd" +page_title: "VMware Cloud Director: vcd_nsxt_segment_qos_profile" +sidebar_current: "docs-vcd-data-source-nsxt-segment-qos-profile" +description: |- + Provides a VMware Cloud Director NSX-T QoS Profile data source. This can be used to read NSX-T Segment Profile definitions. +--- + +# vcd\_nsxt\_segment\_qos\_profile + +Provides a VMware Cloud Director NSX-T QoS Profile data source. This can be used to read NSX-T Segment Profile definitions. + +Supported in provider *v3.11+*. + +## Example Usage (QoS Profile) + +```hcl +data "vcd_nsxt_manager" "nsxt" { + name = "nsxManager1" +} + +data "vcd_nsxt_segment_qos_profile" "first" { + name = "qos-profile-0" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} +``` + + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name of Segment Profile +* `nsxt_manager_id` - (Optional) Segment Profile search context. Use when searching by NSX-T manager +* `vdc_id` - (Optional) Segment Profile search context. 
Use when searching by VDC +* `vdc_group_id` - (Optional) Segment Profile search context. Use when searching by VDC group + +-> Note: only one of `nsxt_manager_id`, `vdc_id`, `vdc_group_id` can be used + +## Attribute reference + +* `description` - Description of QoS Profile +* `class_of_service` - Class of service groups similar types of traffic in the network and each type + of traffic is treated as a class with its own level of service priority. The lower priority + traffic is slowed down or in some cases dropped to provide better throughput for higher priority + traffic. +* `dscp_priority` - A Differentiated Services Code Point (DSCP) priority + Profile. +* `dscp_trust_mode` - A Differentiated Services Code Point (DSCP) trust mode. Values are below: + * `TRUSTED` - With Trusted mode the inner header DSCP value is applied to the outer IP header for + IP/IPv6 traffic. For non IP/IPv6 traffic, the outer IP header takes the default value. + * `UNTRUSTED` - Untrusted mode is supported on overlay-based and VLAN-based logical port. +* `egress_rate_limiter_avg_bandwidth` - Average egress bandwidth in Mb/s. +* `egress_rate_limiter_burst_size` - Egress burst size in bytes. +* `egress_rate_limiter_peak_bandwidth` - Peak egress bandwidth in Mb/s. +* `ingress_broadcast_rate_limiter_avg_bandwidth` - Average ingress broadcast bandwidth in Mb/s. +* `ingress_broadcast_rate_limiter_burst_size` - Ingress broadcast burst size in bytes. +* `ingress_broadcast_rate_limiter_peak_bandwidth` - Peak ingress broadcast bandwidth in Mb/s. +* `ingress_rate_limiter_avg_bandwidth` - Average ingress bandwidth in Mb/s. +* `ingress_rate_limiter_burst_size` - Ingress burst size in bytes. +* `ingress_rate_limiter_peak_bandwidth` - Peak ingress broadcast bandwidth in Mb/s. diff --git a/website/docs/d/nsxt_segment_security_profile.html.markdown b/website/docs/d/nsxt_segment_security_profile.html.markdown new file mode 100644 index 000000000..fee30de6e --- /dev/null +++ b/website/docs/d/nsxt_segment_security_profile.html.markdown @@ -0,0 +1,56 @@ +--- +layout: "vcd" +page_title: "VMware Cloud Director: vcd_nsxt_segment_security_profile" +sidebar_current: "docs-vcd-data-source-nsxt-segment-security-profile" +description: |- + Provides a VMware Cloud Director NSX-T Segment Security Profile data source. This can be used to read NSX-T Segment Profile definitions. +--- + +# vcd\_nsxt\_segment\_security\_profile + +Provides a VMware Cloud Director NSX-T Segment Security Profile data source. This can be used to read NSX-T Segment Profile definitions. + +Supported in provider *v3.11+*. + +## Example Usage (Segment Security Profile) + +```hcl +data "vcd_nsxt_manager" "nsxt" { + name = "nsxManager1" +} + +data "vcd_nsxt_segment_security_profile" "first" { + name = "segment-security-profile-0" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} +``` + + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name of Segment Profile +* `nsxt_manager_id` - (Optional) Segment Profile search context. Use when searching by NSX-T manager +* `vdc_id` - (Optional) Segment Profile search context. Use when searching by VDC +* `vdc_group_id` - (Optional) Segment Profile search context. Use when searching by VDC group + +-> Note: only one of `nsxt_manager_id`, `vdc_id`, `vdc_group_id` can be used + + +## Attribute reference + +* `description` - Description of Segment Security Profile +* `bpdu_filter_allow_list` - Pre-defined list of allowed MAC addresses to be excluded from BPDU filtering. 
+* `is_bpdu_filter_enabled` - Defines whether BPDU filter is enabled. +* `is_dhcp_v4_client_block_enabled` - Defines whether DHCP Client block IPv4 is enabled. This filters DHCP Client IPv4 traffic. +* `is_dhcp_v6_client_block_enabled` - Defines whether DHCP Client block IPv6 is enabled. This filters DHCP Client IPv6 traffic. +* `is_dhcp_v4_server_block_enabled` - Defines whether DHCP Server block IPv4 is enabled. This filters DHCP Server IPv4 traffic. +* `is_dhcp_v6_server_block_enabled` - Defines whether DHCP Server block IPv6 is enabled. This filters DHCP Server IPv6 traffic. +* `is_non_ip_traffic_block_enabled` - Defines whether non IP traffic block is enabled. If true, it blocks all traffic except IP/(G)ARP/BPDU. +* `is_ra_guard_enabled` - Defines whether Router Advertisement Guard is enabled. This filters DHCP Server IPv6 traffic. +* `is_rate_limitting_enabled` - Defines whether Rate Limiting is enabled. +* `rx_broadcast_limit` - Incoming broadcast traffic limit in packets per second. +* `rx_multicast_limit` - Incoming multicast traffic limit in packets per second. +* `tx_broadcast_limit` - Outgoing broadcast traffic limit in packets per second. +* `tx_multicast_limit` - Outgoing multicast traffic limit in packets per second. diff --git a/website/docs/d/nsxt_segment_spoof_guard_profile.html.markdown b/website/docs/d/nsxt_segment_spoof_guard_profile.html.markdown new file mode 100644 index 000000000..c1f3406aa --- /dev/null +++ b/website/docs/d/nsxt_segment_spoof_guard_profile.html.markdown @@ -0,0 +1,43 @@ +--- +layout: "vcd" +page_title: "VMware Cloud Director: vcd_nsxt_segment_spoof_guard_profile" +sidebar_current: "docs-vcd-data-source-nsxt-segment-spoof-guard-profile" +description: |- + Provides a VMware Cloud Director NSX-T Spoof Guard Profile data source. This can be used to read NSX-T Segment Profile definitions. +--- + +# vcd\_nsxt\_segment\_spoof\_guard\_profile + +Provides a VMware Cloud Director NSX-T Spoof Guard Profile data source. This can be used to read NSX-T Segment Profile definitions. + +Supported in provider *v3.11+*. + +## Example Usage (Spoof Guard Profile) + +```hcl +data "vcd_nsxt_manager" "nsxt" { + name = "nsxManager1" +} + +data "vcd_nsxt_segment_spoof_guard_profile" "first" { + name = "spoof-guard-profile-0" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name of Segment Profile +* `nsxt_manager_id` - (Optional) Segment Profile search context. Use when searching by NSX-T manager +* `vdc_id` - (Optional) Segment Profile search context. Use when searching by VDC +* `vdc_group_id` - (Optional) Segment Profile search context. Use when searching by VDC group + +-> Note: only one of `nsxt_manager_id`, `vdc_id`, `vdc_group_id` can be used + +## Attribute reference + +* `description` - Description of Spoof Guard profile +* `is_address_binding_whitelist_enabled` - Whether Spoof Guard is enabled.
If true, it only allows + VMs to send traffic with the IPs in the whitelist diff --git a/website/docs/d/org_vdc_nsxt_network_profile.html.markdown b/website/docs/d/org_vdc_nsxt_network_profile.html.markdown new file mode 100644 index 000000000..c4d1adfdb --- /dev/null +++ b/website/docs/d/org_vdc_nsxt_network_profile.html.markdown @@ -0,0 +1,44 @@ +--- +layout: "vcd" +page_title: "VMware Cloud Director: vcd_org_vdc_nsxt_network_profile" +sidebar_current: "docs-vcd-data-source-nsxt-segment-profile-template" +description: |- + Provides a data source to read Network Profile for NSX-T VDCs. +--- + +# vcd\_org\_vdc\_nsxt\_network\_profile + +Provides a data source to read Network Profile for NSX-T VDCs. + +Supported in provider *v3.11+* and VCD 10.4.0+ with NSX-T. + +## Example Usage + +```hcl +data "vcd_org_vdc_nsxt_network_profile" "nsxt" { + org = "my-org" + vdc = "my-vdc" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `org` - (Optional) The name of organization to use, optional if defined at provider level +* `vdc` - (Optional) The name of VDC to use, optional if defined at provider level + +## Attribute reference + +* `edge_cluster_id` - An ID of NSX-T Edge Cluster which should provide vApp + Networking Services or DHCP for Isolated Networks. This value **might be unavailable** in data + source if user has insufficient rights. +* `vdc_networks_default_segment_profile_template_id` - Default Segment Profile ID for all Org VDC + networks within this VDC +* `vapp_networks_default_segment_profile_template_id` - Default Segment Profile ID for all vApp + networks within this VDC + +All other attributes defined in the [organization VDC network profile +resource](/providers/vmware/vcd/latest/docs/resources/org_vdc_nsxt_network_profile.html#attribute-reference) +are supported. + diff --git a/website/docs/guides/container_service_extension_3_1_x.html.markdown b/website/docs/guides/container_service_extension_3_1_x.html.markdown index 5b7d83519..c18bb26c9 100644 --- a/website/docs/guides/container_service_extension_3_1_x.html.markdown +++ b/website/docs/guides/container_service_extension_3_1_x.html.markdown @@ -8,6 +8,9 @@ description: |- # Container Service Extension v3.1.x +~> This CSE installation method is **deprecated** in favor of CSE v4.x.
Please have a look at the new guide +[here](https://registry.terraform.io/providers/vmware/vcd/latest/docs/guides/container_service_extension_4_x_install) + ## About This guide describes the required steps to configure VCD to install the Container Service Extension (CSE) v3.1.x, that diff --git a/website/docs/guides/container_service_extension_4_0_install.html.markdown b/website/docs/guides/container_service_extension_4_x_install.html.markdown similarity index 55% rename from website/docs/guides/container_service_extension_4_0_install.html.markdown rename to website/docs/guides/container_service_extension_4_x_install.html.markdown index 4572924a4..9b992ba11 100644 --- a/website/docs/guides/container_service_extension_4_0_install.html.markdown +++ b/website/docs/guides/container_service_extension_4_x_install.html.markdown @@ -1,19 +1,19 @@ --- layout: "vcd" -page_title: "VMware Cloud Director: Container Service Extension v4.0 installation" -sidebar_current: "docs-vcd-guides-cse-4-0-install" +page_title: "VMware Cloud Director: Container Service Extension v4.1 installation" +sidebar_current: "docs-vcd-guides-cse-4-x-install" description: |- - Provides guidance on configuring VCD to be able to install and use Container Service Extension v4.0 + Provides guidance on configuring VCD to be able to install and use Container Service Extension v4.1 --- -# Container Service Extension v4.0 installation +# Container Service Extension v4.1 installation ## About -This guide describes the required steps to configure VCD to install the Container Service Extension (CSE) v4.0, that +This guide describes the required steps to configure VCD to install the Container Service Extension (CSE) v4.1, that will allow tenant users to deploy **Tanzu Kubernetes Grid Multi-cloud (TKGm)** clusters on VCD using Terraform or the UI. -To know more about CSE v4.0, you can visit [the documentation][cse_docs]. +To know more about CSE v4.1, you can visit [the documentation][cse_docs]. ## Pre-requisites @@ -21,108 +21,146 @@ To know more about CSE v4.0, you can visit [the documentation][cse_docs]. In order to complete the steps described in this guide, please be aware: -* CSE v4.0 is supported from VCD v10.4.0 or above, make sure your VCD appliance matches the criteria. -* Terraform provider needs to be v3.10.0 or above. +* CSE v4.1 is supported from VCD v10.4.2 or above, as specified in the [Product Interoperability Matrix][product_matrix]. + Please check that the target VCD appliance matches the criteria. +* Terraform provider needs to be v3.11.0 or above. * Both CSE Server and the Bootstrap clusters require outbound Internet connectivity. -* CSE v4.0 makes use of [ALB](/providers/vmware/vcd/latest/docs/guides/nsxt_alb) capabilities. +* CSE v4.1 makes use of [ALB](/providers/vmware/vcd/latest/docs/guides/nsxt_alb) capabilities. ## Installation process --> To install CSE v4.0, this guide will make use of the ready-to-use Terraform configuration located [here](https://github.com/vmware/terraform-provider-vcd/tree/main/examples/container-service-extension-4.0/install). +-> To install CSE v4.1, this guide will make use of the example Terraform configuration located [here](https://github.com/vmware/terraform-provider-vcd/tree/main/examples/container-service-extension/v4.1/install). You can check it, customise it to your needs and apply. However, reading this guide first is recommended to understand what it does and how to use it. 
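As noted in the pre-requisites above, CSE v4.1 needs the Terraform provider to be v3.11.0 or newer. A minimal version constraint (a sketch only, not part of the example configuration) could look like this:

```hcl
# Hypothetical snippet: pin the VCD provider to the minimum version required for CSE v4.1
terraform {
  required_providers {
    vcd = {
      source  = "vmware/vcd"
      version = ">= 3.11.0"
    }
  }
}
```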
-The installation process is split in two independent steps that should be run separately: +The installation process is split in two independent steps that must be run one after the other: -- The first step creates the [Runtime Defined Entity Interfaces][rde_interface] and [Types][rde_type] that are required for CSE to work, a new [Role][role] - and a CSE Administrator [User][user] that will be referenced later on in second step. -- The second step will configure remaining resources, like [Organizations][org], [VDCs][vdc], [Catalogs][catalog], Networks and [VMs][vm]. +- The first step installs the same elements as the _"Configure Settings for CSE Server"_ section in UI wizard, that is, creates the + [RDE Interfaces][rde_interface], [RDE Types][rde_type], [RDE Interface Behaviors][rde_interface_behavior] and the [RDE][rde] that + are required for the CSE Server to work, in addition to a new [Role][role], new [VM Sizing Policies][sizing] and a CSE Administrator [User][user] that will be + referenced later on in the second step. +- The second step will configure the remaining resources, like [Organizations][org], [VDCs][vdc], [Catalogs and OVAs][catalog], Networks, and the CSE Server [VM][vm]. -The reason for such as split is that Providers require to generate an [API token][api_token] -for the CSE Administrator user. This operation needs to be done outside the Terraform realm for security reasons, and it's -up to the Providers to decide the most ideal way to generate such a token for its CSE Administrator in their particular scenarios. +The reason for such as split is that the CSE Administrator created during the first step is used to configure a new `provider` block in +the second one, so that it can provision a valid [API token][api_token]. This operation must be done separately as a `provider` block +can't log in with a user created in the same run. -### Step 1: Create RDEs and the CSE Administrator user +### Step 1: Configure Settings for CSE Server --> This step of the installation refers to the Terraform configuration present [here][step1]. +-> This step of the installation refers to the [step 1 of the example Terraform configuration][step1]. -In the [given configuration][step1] you can find a file named `terraform.tfvars.example`, you need to rename it to `terraform.tfvars` -and change the values present there to the ones that fit with your needs. +This step will create the same elements as the _"Configure Settings for CSE Server"_ section in UI wizard. The subsections +below can be helpful to understand all the building blocks that are described in the proposed example of Terraform configuration. -This step will create the following: +In the directory there is also a file named `terraform.tfvars.example`, which needs to be renamed to `terraform.tfvars` +and its values to be set to the correct ones. In general, for this specific step, the proposed HCL files (`.tf`) should not be +modified and be applied as they are. -- The required `VCDKEConfig` [RDE Interface][rde_interface] and [RDE Type][rde_type]. These two resources specify the schema of the CSE Server - configuration (called "VCDKEConfig") that will be instantiated in next step with a [RDE][rde]. -- The required `capvcdCluster` [RDE Type][rde_type]. Its version is specified by the `capvcd_rde_version` variable, that **must be "1.1.0" for CSE v4.0**. - This resource specifies the schema of the [TKGm clusters][tkgm_docs]. 
-- The **CSE Admin [Role][role]**, that specifies the required rights for the CSE Administrator to manage provider-side elements of VCD. -- The **CSE Administrator [User][user]** that will administrate the CSE Server and other aspects of VCD that are directly related to CSE. - Feel free to add more attributes like `description` or `full_name` if needed. +#### RDE Interfaces, Types and Behaviors -Once reviewed and applied with `terraform apply`, one **must login with the created CSE Administrator user to -generate an API token** that will be used in the next step. In UI, the API tokens can be generated in the CSE Administrator -user preferences in the top right, then go to the API tokens section, add a new one. -Or you can visit `/provider/administration/settings/user-preferences` at your VCD URL as CSE Administrator. +CSE v4.1 requires a set of Runtime Defined Entity items, such as [Interfaces][rde_interface], [Types][rde_type] and [Behaviors][rde_interface_behavior]. +In the [step 1 configuration][step1] you can find the following: -### Step 2: Install CSE +- The required `VCDKEConfig` [RDE Interface][rde_interface] and [RDE Type][rde_type]. These two resources specify the schema of the **CSE Server + configuration** that will be instantiated with a [RDE][rde]. + +- The required `capvcd` [RDE Interface][rde_interface] and `capvcdCluster` [RDE Type][rde_type]. + These two resources specify the schema of the [TKGm clusters][tkgm_docs]. + +- The required [RDE Interface Behaviors][rde_interface_behavior] used to retrieve critical information from the [TKGm clusters][tkgm_docs], + for example, the resulting **Kubeconfig**. + +#### RDE (CSE Server configuration / VCDKEConfig) + +The CSE Server configuration lives in a [Runtime Defined Entity][rde] that uses the `VCDKEConfig` [RDE Type][rde_type]. +To customise it, the [step 1 configuration][step1] asks for the following variables that you can set in `terraform.tfvars`: + +- `vcdkeconfig_template_filepath` references a local file that defines the `VCDKEConfig` [RDE][rde] contents. + It should be a JSON file with template variables that Terraform can interpret, like + [the RDE template file for CSE v4.1](https://github.com/vmware/terraform-provider-vcd/tree/main/examples/container-service-extension/v4.1/entities/vcdkeconfig.json.template) + used in the step 1 configuration, that can be rendered correctly with the Terraform built-in function `templatefile`. + (Note: In `terraform.tfvars.example` the path for the CSE v4.1 RDE contents is already provided). +- `capvcd_version`: The version for CAPVCD. The default value is **"1.1.0"** for CSE v4.1. + (Note: Do not confuse with the version of the `capvcdCluster` [RDE Type][rde_type], + which **must be "1.2.0"** for CSE v4.1 and cannot be changed through a variable). +- `cpi_version`: The version for CPI (Cloud Provider Interface). The default value is **"1.4.0"** for CSE v4.1. +- `csi_version`: The version for CSI (Cloud Storage Interface). The default value is **"1.4.0"** for CSE v4.1. +- `github_personal_access_token`: Create this one [here](https://github.com/settings/tokens), + this will avoid installation errors caused by GitHub rate limiting, as the TKGm cluster creation process requires downloading + some Kubernetes components from GitHub. + The token should have the `public_repo` scope for classic tokens and `Public Repositories` for fine-grained tokens. +- `http_proxy`: Address of your HTTP proxy server. Optional in the step 1 configuration. 
+- `https_proxy`: Address of your HTTPS proxy server. Optional in the step 1 configuration. +- `no_proxy`: A list of comma-separated domains without spaces that indicate the targets that must **not** go through the configured proxy. Optional in the step 1 configuration. +- `syslog_host`: Domain where to send the system logs. Optional in the step 1 configuration. +- `syslog_port`: Port where to send the system logs. Optional in the step 1 configuration. +- `node_startup_timeout`: A node will be considered unhealthy and remediated if joining the cluster takes longer than this timeout (seconds, defaults to 900 in the step 1 configuration). +- `node_not_ready_timeout`: A newly joined node will be considered unhealthy and remediated if it cannot host workloads for longer than this timeout (seconds, defaults to 300 in the step 1 configuration). +- `node_unknown_timeout`: A healthy node will be considered unhealthy and remediated if it is unreachable for longer than this timeout (seconds, defaults to 300 in the step 1 configuration). +- `max_unhealthy_node_percentage`: Remediation will be suspended when the number of unhealthy nodes exceeds this percentage. + (100% means that unhealthy nodes will always be remediated, while 0% means that unhealthy nodes will never be remediated). Defaults to 100 in the step 1 configuration. +- `container_registry_url`: URL from where TKG clusters will fetch container images, useful for VCD appliances that are completely isolated from Internet. Defaults to "projects.registry.vmware.com" in the step 1 configuration. +- `bootstrap_vm_certificates`: Certificate(s) to allow the ephemeral VM (created during cluster creation) to authenticate with. + For example, when pulling images from a container registry. Optional in the step 1 configuration. +- `k8s_cluster_certificates`: Certificate(s) to allow clusters to authenticate with. + For example, when pulling images from a container registry. Optional in the step 1 configuration. + +#### Rights, Roles and VM Sizing Policies + +CSE v4.1 requires a set of new [Rights Bundles][rights_bundle], [Roles][role] and [VM Sizing Policies][sizing] that are also created +in this step of the [step 1 configuration][step1]. Nothing should be customised here, except for the "CSE Administrator" +account to be created, where you can provide a username of your choice (`cse_admin_username`) and its password (`cse_admin_password`). + +This account will be used in the next step to provision an [API Token][api_token] to deploy the CSE Server. + +Once all variables are reviewed and set, you can start the installation with `terraform apply`. When it finishes successfully, you can continue with the **step 2**. + +### Step 2: Create the infrastructure and deploy the CSE Server -> This step of the installation refers to the Terraform configuration present [here][step2]. -~> Be sure that previous step is successfully completed and the API token for the CSE Administrator user was created. +~> Make sure that the previous step is successfully completed. -This step will create all the remaining elements to install CSE v4.0 in VCD. You can read subsequent sections -to have a better understanding of the building blocks that are described in the [proposed Terraform configuration][step2]. +This step will create all the remaining elements to install CSE v4.1 in VCD. You can read the subsequent sections +to have a better understanding of the building blocks that are described in the [step 2 Terraform configuration][step2]. 
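To illustrate the split described above, the second step typically declares an extra aliased `provider` block that authenticates with the CSE Administrator created in step 1, so that a valid [API token][api_token] can be provisioned for the CSE Server. The snippet below is only a sketch with assumed names: `cse_admin_username` and `cse_admin_password` come from step 1, while the URL and organization are placeholders.

```hcl
# Sketch only: aliased provider that logs in as the CSE Administrator created in step 1
provider "vcd" {
  alias                = "cse_admin"                   # referenced by resources needing CSE Administrator rights
  url                  = "https://vcd.example.com/api" # placeholder VCD API endpoint
  org                  = "System"                      # assumed organization where the CSE Administrator lives
  user                 = var.cse_admin_username        # CSE Administrator created in step 1
  password             = var.cse_admin_password        # password chosen in step 1
  allow_unverified_ssl = true                          # only for environments with self-signed certificates
}
```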
-In this [configuration][step2] you can also find a file named `terraform.tfvars.example`, you need to rename it to `terraform.tfvars`
-and change the values present there to the correct ones. You can also modify the proposed resources so they fit better to your needs.
+In this [configuration][step2] you can also find a file named `terraform.tfvars.example` that needs to be renamed to `terraform.tfvars`
+and updated with values that match your environment. You can also modify the proposed resources to better suit your needs.

 #### Organizations

-The [proposed configuration][step2] will create two new [Organizations][org], as specified in the [CSE documentation][cse_docs]:
+The [step 2 configuration][step2] will create two new [Organizations][org], as specified in the [CSE documentation][cse_docs]:

 - A Solutions [Organization][org], which will host all provider-scoped items, such as the CSE Server.
   It should only be accessible to the CSE Administrator and Providers.
 - A Tenant [Organization][org], which will host the [TKGm clusters][tkgm_docs] for the users of this tenant to consume them.

 -> If you already have these two [Organizations][org] created and you want to use them instead,
-you can leverage customising the [proposed configuration][step2] to use the Organization [data source][org_d] to fetch them.
-
-#### VM Sizing Policies
-
-The [proposed configuration][step2] will create four VM Sizing Policies:
-
-- `TKG extra-large`: 8 CPU, 32GB memory.
-- `TKG large`: 4 CPU, 16GB memory.
-- `TKG medium`: 2 CPU, 8GB memory.
-- `TKG small`: 2 CPU, 4GB memory.
-
-These VM Sizing Policies should be applied as they are, so nothing should be changed here as these are the exact same
-VM Sizing Policies created during CSE installation in UI. They will be assigned to the Tenant
-Organization's VDC to be able to dimension the created [TKGm clusters][tkgm_docs] (see section below).
+you can customise the [step 2 configuration][step2] to use the Organization [data source][org_d] to fetch them.

 #### VDCs

-The [proposed configuration][step2] will create two [VDCs][vdc], one for the Solutions Organization and another one for the Tenant Organization.
+The [step 2 configuration][step2] will create two [VDCs][vdc], one for the Solutions Organization and another one for the Tenant Organization.
 You need to specify the following values in `terraform.tfvars`:

 - `provider_vdc_name`: This is used to fetch an existing [Provider VDC][provider_vdc], that will be used to create the two VDCs.
-  If you are going to use more than one [Provider VDC][provider_vdc], please consider modifying the proposed configuration.
+  If you are going to use more than one [Provider VDC][provider_vdc], please consider modifying the step 2 configuration.
   In UI, [Provider VDCs][provider_vdc] can be found in the Provider view, inside _Cloud Resources_ menu.
 - `nsxt_edge_cluster_name`: This is used to fetch an existing [Edge Cluster][edge_cluster], that will be used to create the two VDCs.
-  If you are going to use more than one [Edge Cluster][edge_cluster], please consider modifying the proposed configuration.
+  If you are going to use more than one [Edge Cluster][edge_cluster], please consider modifying the step 2 configuration.
   In UI, [Edge Clusters][edge_cluster] can be found in the NSX-T manager web UI.
 - `network_pool_name`: This references an existing Network Pool, which is used to create both VDCs.
- If you are going to use more than one Network Pool, please consider modifying the proposed configuration. + If you are going to use more than one Network Pool, please consider modifying the step 2 configuration. -In the [proposed configuration][step2] the Tenant Organization's VDC has all the required VM Sizing Policies assigned, with the `TKG small` being the default one. -You can customise it to make any other TKG policy the default one. +In the [step 2 configuration][step2] the Tenant Organization's VDC has all the required VM Sizing Policies from the first step assigned, +with the `TKG small` being the default one. You can customise it to make any other TKG policy the default one. You can also leverage changing the storage profiles and other parameters to fit the requirements of your organization. Also, if you already have usable [VDCs][vdc], you can change the configuration to fetch them instead. #### Catalog and OVAs -The [proposed configuration][step2] will create two catalogs: +The [step 2 configuration][step2] will create two catalogs: - A catalog to host CSE Server OVA files, only accessible to CSE Administrators. This catalog will allow CSE Administrators to organise and manage all the CSE Server OVAs that are required to run and upgrade the CSE Server. @@ -131,29 +169,20 @@ The [proposed configuration][step2] will create two catalogs: Then it will upload the required OVAs to them. The OVAs can be specified in `terraform.tfvars`: - `tkgm_ova_folder`: This will reference the path to the TKGm OVA, as an absolute or relative path. It should **not** end with a trailing `/`. -- `tkgm_ova_file`: This will reference the file name of the TKGm OVA, like `ubuntu-2004-kube-v1.22.9+vmware.1-tkg.1-2182cbabee08edf480ee9bc5866d6933.ova`. +- `tkgm_ova_files`: This will reference the file names of the TKGm OVAs, like `[ubuntu-2004-kube-v1.25.7+vmware.2-tkg.1-8a74b9f12e488c54605b3537acb683bc.ova, ubuntu-2004-kube-v1.24.11+vmware.1-tkg.1-2ccb2a001f8bd8f15f1bfbc811071830.ova]`. - `cse_ova_folder`: This will reference the path to the CSE OVA, as an absolute or relative path. It should **not** end with a trailing `/`. -- `cse_ova_file`: This will reference the file name of the CSE OVA, like `VMware_Cloud_Director_Container_Service_Extension-4.0.1.ova`. +- `cse_ova_file`: This will reference the file name of the CSE OVA, like `VMware_Cloud_Director_Container_Service_Extension-4.1.0.ova`. --> To download the required OVAs, please refer to the [CSE documentation][cse_docs]. +-> To download the required OVAs, please refer to the [CSE documentation][cse_docs]. +You can also check the [Product Interoperability Matrix][product_matrix] to confirm the appropriate version of TKGm. ~> Both CSE Server and TKGm OVAs are heavy. Please take into account that the upload process could take more than 30 minutes, depending on upload speed. You can tune the `upload_piece_size` to speed up the upload. Another option would be uploading them manually in the UI. In case you're using a pre-uploaded OVA, leverage the [vcd_catalog_vapp_template][catalog_vapp_template_ds] data source (instead of the resource). -If you need to upload more than one OVA, please modify the [proposed configuration][step2]. - -### "Kubernetes Cluster Author" global role - -Apart from the role to manage the CSE Server created in [step 1][step1], we also need a [Global Role][global_role] -for the [TKGm clusters][tkgm_docs] consumers (it would be similar to the concept of "vApp Author" but for [TKGm clusters][tkgm_docs]). 
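+
+For example, a TKGm OVA that was already uploaded through the UI could be fetched with a data source similar to the
+following sketch (the catalog reference and the vApp Template name below are placeholders, adjust them to your setup):
+
+```hcl
+# Reads an existing vApp Template instead of uploading the OVA with Terraform
+data "vcd_catalog_vapp_template" "preuploaded_tkgm_ova" {
+  org        = vcd_org.solutions_organization.name # Organization that hosts the catalogs
+  catalog_id = vcd_catalog.tkgm_catalog.id         # Hypothetical reference to the catalog holding the TKGm OVAs
+  name       = "ubuntu-2004-kube-v1.25.7+vmware.2-tkg.1-8a74b9f12e488c54605b3537acb683bc"
+}
+```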
- -In order to create this [Global Role][global_role], the [proposed configuration][step2] first -creates a new [Rights Bundle][rights_bundle] and publishes it to all the tenants, then creates the [Global Role][global_role]. - -### Networking +#### Networking -The [proposed configuration][step2] prepares a basic networking layout that will make CSE v4.0 work. However, it is +The [step 2 configuration][step2] prepares a basic networking layout that will make CSE v4.1 work. However, it is recommended that you review the code and adapt the different parts to your needs, specially for the resources like `vcd_nsxt_firewall`. The configuration will create the following: @@ -168,13 +197,13 @@ The configuration will create the following: In this setup, we just provide a routed network per organization, so the CSE Server is inside its own network, isolated from the [TKGm clusters][tkgm_docs] network. - Two [SNAT rules][nat_rule] that will allow outbound access. Feel free to adjust or replace these rules with other ways of providing outbound access. -~> SNAT rules is just a proposal to give the CSE Server and the clusters outbound access. Please review the [proposed configuration][step2] +~> SNAT rules is just a proposal to give the CSE Server and the clusters outbound access. Please review the [step 2 configuration][step2] first. -In order to create all the items listed above, the [proposed configuration][step2] asks for the following variables that you can customise in `terraform.tfvars`: +In order to create all the items listed above, the [step 2 configuration][step2] asks for the following variables that you can customise in `terraform.tfvars`: - `nsxt_manager_name`: It is the name of an existing [NSX-T Manager][nsxt_manager], which is needed in order to create the [Provider Gateways][provider_gateway]. - If you are going to use more than one [NSX-T Manager][nsxt_manager], please consider modifying the proposed configuration. + If you are going to use more than one [NSX-T Manager][nsxt_manager], please consider modifying the step 2 configuration. In UI, [NSX-T Managers][nsxt_manager] can be found in the Provider view, inside _Infrastructure Resources > NSX-T_. - `solutions_nsxt_tier0_router_name`: It is the name of an existing [Tier-0 Router][nsxt_tier0_router], which is needed in order to create the [Provider Gateway][provider_gateway] in the Solutions Organization. In UI, [Tier-0 Routers][nsxt_tier0_router] can be found in the NSX-T manager web UI. @@ -231,42 +260,30 @@ In order to create all the items listed above, the [proposed configuration][step Organization [Routed network][routed_network]. - `tenant_routed_network_dns`: DNS Server for the Tenant Organization [Routed network][routed_network]. It can be left blank if it's not needed. -If you wish to have a different networking setup, please modify the [proposed configuration][step2]. +If you wish to have a different networking setup, please modify the [step 2 configuration][step2]. -### CSE Server +#### CSE Server -There is also a set of resources created by the [proposed configuration][step2] that correspond to the CSE Server vApp. -The generated VM makes use of the uploaded CSE OVA and some required guest properties. +There is also a set of resources created by the [step 2 configuration][step2] that correspond to the CSE Server vApp. 
+The generated VM makes use of the uploaded CSE OVA and some required guest properties: -In order to do so, the [configuration][step2] asks for the following variables that you can customise in `terraform.tfvars`: +- `cse_admin_username`: This must be the same CSE Administrator user created in the first step. +- `cse_admin_password`: This must be the same CSE Administrator user's password created in the first step. +- `cse_admin_api_token_file`: This specifies the path where the API token is saved and consumed. -- `vcdkeconfig_template_filepath` references a local file that defines the `VCDKEConfig` [RDE][rde] contents. It should be a JSON template, like - [the one used in the configuration](https://github.com/vmware/terraform-provider-vcd/tree/main/examples/container-service-extension-4.0/entities/vcdkeconfig-template.json). - (Note: In `terraform.tfvars.example` the correct path is already provided). -- `capvcd_version`: The version for CAPVCD. It should be "1.0.0" for CSE v4.0. -- `capvcd_rde_version`: The version for the CAPVCD [RDE Type][rde_type]. It should be the same version used in Step 1. -- `cpi_version`: The version for CPI. It should be "1.2.0" for CSE v4.0. -- `csi_version`: The version for CSI. It should be "1.3.0" for CSE v4.0. -- `github_personal_access_token`: Create this one [here](https://github.com/settings/tokens), - this will avoid installation errors caused by GitHub rate limiting, as the TKGm cluster creation process requires downloading - some Kubernetes components from GitHub. - The token should have the `public_repo` scope for classic tokens and `Public Repositories` for fine-grained tokens. -- `cse_admin_user`: This should reference the CSE Administrator [User][user] that was created in Step 1. -- `cse_admin_api_token`: This should be the API token that you created for the CSE Administrator after Step 1. +#### UI plugin installation -### UI plugin installation +-> If the old CSE 3.x UI plugin is installed, you will need to remove it before installing the new one. -The final resource created by the [proposed configuration][step2] is the [`vcd_ui_plugin`][ui_plugin] resource. +The final resource created by the [step 2 configuration][step2] is the [`vcd_ui_plugin`][ui_plugin] resource. -This resource is optional, it will be only created if the variable `k8s_container_clusters_ui_plugin_path` is not empty, +This resource is **optional**, it will be only created if the variable `k8s_container_clusters_ui_plugin_path` is not empty, so you can leverage whether your tenant users or system administrators will need it or not. It can be useful for troubleshooting, or if your tenant users are not familiar with Terraform, they will be still able to create and manage their clusters with the UI. If you decide to install it, `k8s_container_clusters_ui_plugin_path` should point to the -[Kubernetes Container Clusters UI plug-in v4.0][cse_docs] ZIP file that you can download in the [CSE documentation][cse_docs]. - --> If the old CSE 3.x plugin is installed, you will need to remove it also. +[Kubernetes Container Clusters UI plug-in v4.1][cse_docs] ZIP file that you can download in the [CSE documentation][cse_docs]. ### Final considerations @@ -297,7 +314,7 @@ resource "vcd_nsxt_nat_rule" "solutions_nat" { Once you gain access to the CSE Server, you can check the `cse.log` file, the configuration file or check Internet connectivity. If something does not work, please check the **Troubleshooting** section below. 
-#### Troubleshooting +## Troubleshooting To evaluate the correctness of the setup, you can check the _"Verifying that the setup works"_ section above. @@ -320,12 +337,134 @@ The most common issues are: - Cluster creation is failing: - Please visit the [CSE documentation][cse_docs] to learn how to monitor the logs and troubleshoot possible problems. -## Update CSE Server +## Upgrade from CSE v4.0 to v4.1 + +In this section you can find the required steps to update from CSE v4.0 to v4.1. + +~> This section assumes that the old CSE v4.0 installation was done with Terraform by following the v4.0 guide steps. +Also, you need to meet [the pre-requisites criteria](#pre-requisites). + +### Create the new RDE elements + +A new [RDE Interface][rde_interface] needs to be created, which is required by the new v4.1 version: + +```hcl +resource "vcd_rde_interface" "cse_interface" { + vendor = "cse" + nss = "capvcd" + version = "1.0.0" + name = "cseInterface" +} +``` + +CSE v4.1 also requires the usage of [RDE Interface Behaviors][rde_interface_behavior] and +[RDE Behavior Access Controls][rde_type_behavior_acl] that can be created with the following snippets (these can +also be found in the [step 1 configuration][step1]): + +```hcl +resource "vcd_rde_interface_behavior" "capvcd_behavior" { + rde_interface_id = vcd_rde_interface.cse_interface.id + name = "getFullEntity" + execution = { + "type" : "noop" + "id" : "getFullEntity" + } +} + +resource "vcd_rde_type_behavior_acl" "capvcd_behavior_acl" { + rde_type_id = vcd_rde_type.capvcdcluster_type_v120.id # This definition is below + behavior_id = vcd_rde_interface_behavior.capvcd_behavior.id + access_level_ids = ["urn:vcloud:accessLevel:FullControl"] +} +``` + +Create a new version of the [RDE Types][rde_type] that were used in v4.0. This will allow them to co-exist with the old ones, +so we can perform a smooth upgrade. + +```hcl +resource "vcd_rde_type" "vcdkeconfig_type_v110" { + # Same attributes as v4.1, except for: + version = "1.1.0" # New version + # New schema: + schema_url = "https://raw.githubusercontent.com/vmware/terraform-provider-vcd/main/examples/container-service-extension/v4.1/schemas/vcdkeconfig-type-schema-v1.1.0.json" +} + +resource "vcd_rde_type" "capvcdcluster_type_v120" { + # Same attributes as v4.1, except for: + version = "1.2.0" # New version + # New schema: + schema_url = "https://raw.githubusercontent.com/vmware/terraform-provider-vcd/main/examples/container-service-extension/v4.1/schemas/capvcd-type-schema-v1.2.0.json" + # Notice that the new interface cse:capvcd:1.0.0 is used + interface_ids = [data.vcd_rde_interface.kubernetes_interface.id, vcd_rde_interface.cse_interface.id] + # Behaviors need to be created before any RDE Type + depends_on = [vcd_rde_interface_behavior.capvcd_behavior] +} +``` + +### Upgrade the VCDKEConfig RDE (CSE Server configuration) + +With the new [RDE Types][rde_type] in place, you need to perform an upgrade of the existing `VCDKEConfig` [RDE][rde], which +stores the CSE Server configuration. 
By using the v3.11.0 of the VCD Terraform Provider, you can do this update without forcing +a replacement: + +```hcl +resource "vcd_rde" "vcdkeconfig_instance" { + # Same values as before, except: + rde_type_id = vcd_rde_type.vcdkeconfig_type_v110.id # Update to the new RDE Type + input_entity = templatefile(var.vcdkeconfig_template_filepath, { + # Same values as before, except: + node_startup_timeout = var.node_startup_timeout + node_not_ready_timeout = var.node_not_ready_timeout + node_unknown_timeout = var.node_unknown_timeout + max_unhealthy_node_percentage = var.max_unhealthy_node_percentage + container_registry_url = var.container_registry_url + k8s_cluster_certificates = join(",", var.k8s_cluster_certificates) + bootstrap_vm_certificates = join(",", var.bootstrap_vm_certificates) + }) +} +``` + +You can find the meaning of these values in the section ["RDE (CSE Server configuration / VCDKEConfig)"](#rde-cse-server-configuration--vcdkeconfig). +Please notice that you need to upgrade the CAPVCD, CPI and CSI versions. The new values are stated in the same section. + +### Update Rights and Roles + +There are differences between the rights needed in v4.0 and v4.1. You can check the resources `vcd_rights_bundle.k8s_clusters_rights_bundle` and +`vcd_global_role.k8s_cluster_author` in the [step 1 configuration][step1] to see the new required set of rights. + +### Upload the new CSE v4.1 OVA + +You need to upload the new CSE v4.1 OVA to the `cse_catalog` that already hosts the CSE v4.0 one. +To download the required OVAs, please refer to the [CSE documentation][cse_docs]. + +```hcl +resource "vcd_catalog_vapp_template" "cse_ova_v4_1" { + org = vcd_org.solutions_organization.name # References the Solutions Organization that already exists from v4.0 + catalog_id = vcd_catalog.cse_catalog.id # References the CSE Catalog that already exists from v4.0 + + name = "VMware_Cloud_Director_Container_Service_Extension-4.1.0" + description = "VMware_Cloud_Director_Container_Service_Extension-4.1.0" + ova_path = "VMware_Cloud_Director_Container_Service_Extension-4.1.0.ova" +} +``` + +### Update CSE Server + +To update the CSE Server, just change the referenced OVA: + +```hcl +resource "vcd_vapp_vm" "cse_server_vm" { + # All values remain the same, except: + vapp_template_id = vcd_catalog_vapp_template.cse_ova_v4_1.id # Reference the v4.1 OVA +} +``` + +This will re-deploy the VM with the new CSE v4.1 Server. -### Update Configuration +## Update CSE Server Configuration To make changes to the existing server configuration, you should be able to locate the [`vcd_rde`][rde] resource named `vcdkeconfig_instance` -in the [proposed configuration][step2] that was created during the installation process. To update its configuration, you can +in the [step 2 configuration][step2] that was created during the installation process. To update its configuration, you can **change the variable values that are referenced**. For this, you can review the **"CSE Server"** section in the Installation process to see how this can be done. @@ -357,19 +496,19 @@ This must be done as a 2-step operation. To upgrade the CSE Server appliance, first you need to upload a new CSE Server OVA to the CSE catalog and then replace the reference to the [vApp Template][catalog_vapp_template] in the CSE Server VM. 
-In the [proposed configuration][step2], you can find the `cse_ova` [vApp Template][catalog_vapp_template] and the +In the [step 2 configuration][step2], you can find the `cse_ova` [vApp Template][catalog_vapp_template] and the `cse_server_vm` [VM][vm] that were applied during the installation process. Then you can create a new `vcd_catalog_vapp_template` and modify `cse_server_vm` to reference it: ```hcl -# Uploads a new CSE Server OVA. In the example below, we upload version 4.0.2 +# Uploads a new CSE Server OVA. In the example below, we upload version 4.1.0 resource "vcd_catalog_vapp_template" "new_cse_ova" { org = vcd_org.solutions_organization.name # References the Solutions Organization catalog_id = vcd_catalog.cse_catalog.id # References the CSE Catalog - name = "VMware_Cloud_Director_Container_Service_Extension-4.0.2" - description = "VMware_Cloud_Director_Container_Service_Extension-4.0.2" - ova_path = "/home/bob/cse/VMware_Cloud_Director_Container_Service_Extension-4.0.2.ova" + name = "VMware_Cloud_Director_Container_Service_Extension-4.1.0" + description = "VMware_Cloud_Director_Container_Service_Extension-4.1.0" + ova_path = "/home/bob/cse/VMware_Cloud_Director_Container_Service_Extension-4.1.0.ova" } # ... @@ -392,7 +531,7 @@ Please read the specific guide on that topic [here][cse_cluster_management_guide Once all clusters are removed in the background by CSE Server, you may destroy the remaining infrastructure with Terraform command. [alb]: /providers/vmware/vcd/latest/docs/guides/nsxt_alb -[api_token]: https://docs.vmware.com/en/VMware-Cloud-Director/10.4/VMware-Cloud-Director-Tenant-Portal-Guide/GUID-A1B3B2FA-7B2C-4EE1-9D1B-188BE703EEDE.html +[api_token]: /providers/vmware/vcd/latest/docs/resources/api_token [catalog]: /providers/vmware/vcd/latest/docs/resources/catalog [catalog_vapp_template_ds]: /providers/vmware/vcd/latest/docs/data-sources/catalog_vapp_template [cse_cluster_management_guide]: /providers/vmware/vcd/latest/docs/guides/container_service_extension_4_0_cluster_management @@ -405,17 +544,20 @@ Once all clusters are removed in the background by CSE Server, you may destroy t [nsxt_tier0_router]: /providers/vmware/vcd/latest/docs/data-sources/nsxt_tier0_router [org]: /providers/vmware/vcd/latest/docs/resources/org [org_d]: /providers/vmware/vcd/latest/docs/data-sources/org +[product_matrix]: https://interopmatrix.vmware.com/Interoperability?col=659,&row=0 [provider_gateway]: /providers/vmware/vcd/latest/docs/resources/external_network_v2 [provider_vdc]: /providers/vmware/vcd/latest/docs/data-sources/provider_vdc [rights_bundle]: /providers/vmware/vcd/latest/docs/resources/rights_bundle [rde]: /providers/vmware/vcd/latest/docs/resources/rde [rde_interface]: /providers/vmware/vcd/latest/docs/resources/rde_interface [rde_type]: /providers/vmware/vcd/latest/docs/resources/rde_type +[rde_interface_behavior]: /providers/vmware/vcd/latest/docs/resources/rde_interface_behavior +[rde_type_behavior_acl]: /providers/vmware/vcd/latest/docs/resources/rde_type_behavior_acl [role]: /providers/vmware/vcd/latest/docs/resources/role [routed_network]: /providers/vmware/vcd/latest/docs/resources/network_routed_v2 [sizing]: /providers/vmware/vcd/latest/docs/resources/vm_sizing_policy -[step1]: https://github.com/vmware/terraform-provider-vcd/tree/main/examples/container-service-extension-4.0/install/step1 -[step2]: https://github.com/vmware/terraform-provider-vcd/tree/main/examples/container-service-extension-4.0/install/step2 +[step1]: 
https://github.com/vmware/terraform-provider-vcd/tree/main/examples/container-service-extension/v4.1/install/step1 +[step2]: https://github.com/vmware/terraform-provider-vcd/tree/main/examples/container-service-extension/v4.1/install/step2 [tkgm_docs]: https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/index.html [user]: /providers/vmware/vcd/latest/docs/resources/org_user [ui_plugin]: /providers/vmware/vcd/latest/docs/resources/ui_plugin diff --git a/website/docs/guides/container_service_extension_4_0_cluster_management.html.markdown b/website/docs/guides/container_service_extension_cluster_management.html.markdown similarity index 100% rename from website/docs/guides/container_service_extension_4_0_cluster_management.html.markdown rename to website/docs/guides/container_service_extension_cluster_management.html.markdown diff --git a/website/docs/r/nsxt_global_default_segment_profile_template.html.markdown b/website/docs/r/nsxt_global_default_segment_profile_template.html.markdown new file mode 100644 index 000000000..c02600304 --- /dev/null +++ b/website/docs/r/nsxt_global_default_segment_profile_template.html.markdown @@ -0,0 +1,52 @@ +--- +layout: "vcd" +page_title: "VMware Cloud Director: vcd_nsxt_global_default_segment_profile_template" +sidebar_current: "docs-vcd-resource-nsxt-segment-profile-template" +description: |- + Provides a resource to manage Global Default NSX-T Segment Profile Templates. +--- + +# vcd\_nsxt\_global\_default\_segment\_profile\_template + +Provides a resource to manage Global Default NSX-T Segment Profile Templates. + +Supported in provider *v3.11+* and VCD 10.4.0+ with NSX-T. Requires System Administrator privileges. + +-> This resource is a singleton - only one configuration exists in entire VCD instance. Having +multiple resource definitions will override each other. + +## Example Usage + +```hcl +resource "vcd_nsxt_global_default_segment_profile_template" "singleton" { + vdc_networks_default_segment_profile_template_id = vcd_nsxt_segment_profile_template.complete.id + vapp_networks_default_segment_profile_template_id = vcd_nsxt_segment_profile_template.empty.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `vdc_networks_default_segment_profile_template_id` - (Optional) Global Default Segment Profile + Template ID for all VDC Networks +* `vapp_networks_default_segment_profile_template_id` - (Optional) Global Default Segment Profile + Template ID for all vApp Networks + + +## Importing + +~> The current implementation of Terraform import can only import resources into the state. +It does not generate configuration. [More information.](https://www.terraform.io/docs/import/) + +An existing global default Segment Profile Template configuration can be [imported][docs-import] into this +resource via supplying path for it. An example is below: + +[docs-import]: https://www.terraform.io/docs/import/ + +``` +terraform import vcd_nsxt_global_default_segment_profile_template.imported optional-dummy-id +``` + +The above would import the global default Segment Profile Template configuration. **Note**: the +`optional-dummy-id` is not mandatory but it may be useful for `import` definitions. 
diff --git a/website/docs/r/nsxt_network_segment_profile.html.markdown b/website/docs/r/nsxt_network_segment_profile.html.markdown new file mode 100644 index 000000000..23ebea874 --- /dev/null +++ b/website/docs/r/nsxt_network_segment_profile.html.markdown @@ -0,0 +1,152 @@ +--- +layout: "vcd" +page_title: "VMware Cloud Director: vcd_org_vdc_nsxt_network_profile" +sidebar_current: "docs-vcd-resource-nsxt-network-segment-profile" +description: |- + Provides a resource to configure Segment Profiles for NSX-T Org VDC networks. +--- + +# vcd\_nsxt\_network\_segment\_profile + +Provides a resource to configure Segment Profiles for NSX-T Org VDC networks. + +Supported in provider *v3.11+* and VCD 10.4.0+ with NSX-T. + +## Example Usage (Segment Profile Template assignment to Org VDC Network) + +```hcl +data "vcd_nsxt_segment_profile_template" "complete" { + name = "complete-profile" +} + +data "vcd_nsxt_edgegateway" "existing" { + org = "my-org" + name = "my-gw" +} + +resource "vcd_network_routed_v2" "net1" { + org = "my-org" + name = "routed-net" + + edge_gateway_id = data.vcd_nsxt_edgegateway.existing.id + + gateway = "1.1.1.1" + prefix_length = 24 + + static_ip_pool { + start_address = "1.1.1.10" + end_address = "1.1.1.20" + } +} + +resource "vcd_nsxt_network_segment_profile" "custom-prof" { + org = "my-org" + org_network_id = vcd_network_routed_v2.net1.id + + segment_profile_template_id = data.vcd_nsxt_segment_profile_template.complete.id +} +``` + +## Example Usage (Custom Segment Profile assignment to Org VDC Network) + +```hcl +data "vcd_nsxt_manager" "nsxt" { + name = "nsxManager1" +} + +data "vcd_nsxt_segment_ip_discovery_profile" "first" { + name = "ip-discovery-profile-0" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_mac_discovery_profile" "first" { + name = "mac-discovery-profile-0" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_spoof_guard_profile" "first" { + name = "spoof-guard-profile-0" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_qos_profile" "first" { + name = "qos-profile-0" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_security_profile" "first" { + name = "segment-security-profile-0" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_edgegateway" "existing" { + org = "my-org" + name = "nsxt-gw-v40" +} + +resource "vcd_network_routed_v2" "net1" { + org = "my-org" + name = "routed-net" + + edge_gateway_id = data.vcd_nsxt_edgegateway.existing.id + + gateway = "1.1.1.1" + prefix_length = 24 + + static_ip_pool { + start_address = "1.1.1.10" + end_address = "1.1.1.20" + } +} + +resource "vcd_nsxt_network_segment_profile" "custom-prof" { + org = "my-org" + org_network_id = vcd_network_routed_v2.net1.id + + ip_discovery_profile_id = data.vcd_nsxt_segment_ip_discovery_profile.first.id + mac_discovery_profile_id = data.vcd_nsxt_segment_mac_discovery_profile.first.id + spoof_guard_profile_id = data.vcd_nsxt_segment_spoof_guard_profile.first.id + qos_profile_id = data.vcd_nsxt_segment_qos_profile.first.id + segment_security_profile_id = data.vcd_nsxt_segment_security_profile.first.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `org` - (Optional) The name of organization to use, optional if defined at provider level +* `org_network_id` - (Required) Org VDC Network ID +* `segment_profile_template_id` - (Optional) Segment Profile Template ID to be applied for this Org + VDC Network +* 
`ip_discovery_profile_id` - (Optional) IP Discovery Profile ID. Can be referenced using + [`vcd_nsxt_segment_ip_discovery_profile`](/providers/vmware/vcd/latest/docs/data-sources/nsxt_segment_ip_discovery_profile) + data source. +* `mac_discovery_profile_id` - (Optional) MAC Discovery Profile ID. Can be referenced using + [`vcd_nsxt_segment_mac_discovery_profile`](/providers/vmware/vcd/latest/docs/data-sources/nsxt_segment_mac_discovery_profile) + data source. +* `spoof_guard_profile_id` - (Optional) Spoof Guard Profile ID. Can be referenced using + [`vcd_nsxt_segment_spoof_guard_profile`](/providers/vmware/vcd/latest/docs/data-sources/nsxt_segment_spoof_guard_profile) + data source. +* `qos_profile_id` - (Optional) QoS Profile ID. Can be referenced using + [`vcd_nsxt_segment_qos_profile`](/providers/vmware/vcd/latest/docs/data-sources/nsxt_segment_qos_profile) + data source. +* `segment_security_profile_id` - (Optional) Segment Security Profile ID. Can be referenced using + [`vcd_nsxt_segment_security_profile`](/providers/vmware/vcd/latest/docs/data-sources/nsxt_segment_security_profile) + data source. + +## Importing + +~> **Note:** The current implementation of Terraform import can only import resources into the state. +It does not generate configuration. [More information.](https://www.terraform.io/docs/import/) + +An existing NSX-T Org VDC Network Segment Profile configuration can be [imported][docs-import] into +this resource via supplying the full dot separated path for Org VDC Network. An example is below: + +[docs-import]: https://www.terraform.io/docs/import/ + +``` +terraform import vcd_nsxt_network_segment_profile.my-profile org-name.vdc-org-vdc-group-name.org_network_name +``` + +NOTE: the default separator (.) can be changed using Provider.import_separator or variable VCD_IMPORT_SEPARATOR diff --git a/website/docs/r/nsxt_segment_profile_template.html.markdown b/website/docs/r/nsxt_segment_profile_template.html.markdown new file mode 100644 index 000000000..5ba1f3666 --- /dev/null +++ b/website/docs/r/nsxt_segment_profile_template.html.markdown @@ -0,0 +1,95 @@ +--- +layout: "vcd" +page_title: "VMware Cloud Director: vcd_nsxt_segment_profile_template" +sidebar_current: "docs-vcd-resource-nsxt-segment-profile-template" +description: |- + Provides a resource to manage NSX-T Segment Profile Templates. +--- + +# vcd\_nsxt\_segment\_profile\_template + +Provides a resource to manage NSX-T Segment Profile Templates. + +Supported in provider *v3.11+* and VCD 10.4.0+ with NSX-T. Requires System Administrator privileges. 
+ +## Example Usage (Example with all Segment Profiles) + +```hcl +data "vcd_nsxt_manager" "nsxt" { + name = "nsxManager1" +} + +data "vcd_nsxt_segment_ip_discovery_profile" "first" { + name = "ip-discovery-profile-0" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_mac_discovery_profile" "first" { + name = "mac-discovery-profile-0" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_spoof_guard_profile" "first" { + name = "spoof-guard-profile-0" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_qos_profile" "first" { + name = "qos-profile-0" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +data "vcd_nsxt_segment_security_profile" "first" { + name = "segment-security-profile-0" + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id +} + +resource "vcd_nsxt_segment_profile_template" "complete" { + nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id + + name = "my-first-segment-profile-template" + description = "my description" + + ip_discovery_profile_id = data.vcd_nsxt_segment_ip_discovery_profile.first.id + mac_discovery_profile_id = data.vcd_nsxt_segment_mac_discovery_profile.first.id + spoof_guard_profile_id = data.vcd_nsxt_segment_spoof_guard_profile.first.id + qos_profile_id = data.vcd_nsxt_segment_qos_profile.first.id + segment_security_profile_id = data.vcd_nsxt_segment_security_profile.first.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `nsxt_manager_id` - (Required) NSX-T Manager ID (can be referenced using + [`vcd_nsxt_manager`](/providers/vmware/vcd/latest/docs/data-sources/nsxt_manager) datasource) +* `name` - (Required) Name for Segment Profile Template +* `description` - (Optional) Description of Segment Profile Template +* `ip_discovery_profile_id` - (Optional) IP Discovery Profile ID. can be referenced using + [`vcd_nsxt_segment_ip_discovery_profile`](/providers/vmware/vcd/latest/docs/data-sources/nsxt_segment_ip_discovery_profile) +* `mac_discovery_profile_id` - (Optional) IP Discovery Profile ID. can be referenced using + [`vcd_nsxt_segment_mac_discovery_profile`](/providers/vmware/vcd/latest/docs/data-sources/nsxt_segment_mac_discovery_profile) +* `spoof_guard_profile_id` - (Optional) IP Discovery Profile ID. can be referenced using + [`vcd_nsxt_segment_spoof_guard_profile`](/providers/vmware/vcd/latest/docs/data-sources/nsxt_segment_spoof_guard_profile) +* `qos_profile_id` - (Optional) IP Discovery Profile ID. can be referenced using + [`vcd_nsxt_segment_qos_profile`](/providers/vmware/vcd/latest/docs/data-sources/nsxt_segment_qos_profile) +* `segment_security_profile_id` - (Optional) IP Discovery Profile ID. can be referenced using + [`vcd_nsxt_segment_security_profile`](/providers/vmware/vcd/latest/docs/data-sources/nsxt_segment_security_profile) + + +## Importing + +~> The current implementation of Terraform import can only import resources into the state. +It does not generate configuration. [More information.](https://www.terraform.io/docs/import/) + +An existing NSX-T Segment Profile Template configuration can be [imported][docs-import] into this +resource via supplying path for it. An example is below: + +[docs-import]: https://www.terraform.io/docs/import/ + +``` +terraform import vcd_nsxt_segment_profile_template.imported segment-profile-name +``` + +The above would import the `segment-profile-name` NSX-T Segment Profile Template. 
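+
+Note that the import only populates the Terraform state: a matching resource block must already exist in the
+configuration so that there is an address to import into. An illustrative minimal stub (the NSX-T Manager data source
+and the names are placeholders) could look like this:
+
+```hcl
+data "vcd_nsxt_manager" "nsxt" {
+  name = "nsxManager1"
+}
+
+# Minimal definition used only as an import target; expand it after the import
+resource "vcd_nsxt_segment_profile_template" "imported" {
+  nsxt_manager_id = data.vcd_nsxt_manager.nsxt.id
+  name            = "segment-profile-name"
+}
+```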
diff --git a/website/docs/r/org_vdc.html.markdown b/website/docs/r/org_vdc.html.markdown index f4ea4f920..4fb3a66fe 100644 --- a/website/docs/r/org_vdc.html.markdown +++ b/website/docs/r/org_vdc.html.markdown @@ -241,9 +241,10 @@ The following arguments are supported: * `default_vm_sizing_policy_id` - (Deprecated; Optional, *v3.0+*, *VCD 10.2+*) ID of the default Compute Policy for this VDC. It can be a VM Sizing Policy, a VM Placement Policy or a vGPU Policy. Deprecated in favor of `default_compute_policy_id`. * `vm_sizing_policy_ids` - (Optional, *v3.0+*, *VCD 10.2+*) Set of IDs of VM Sizing policies that are assigned to this VDC. This field requires `default_compute_policy_id` to be configured together. * `vm_placement_policy_ids` - (Optional, *v3.8+*, *VCD 10.2+*) Set of IDs of VM Placement policies that are assigned to this VDC. This field requires `default_compute_policy_id` to be configured together. -* `edge_cluster_id` - (Optional, *v3.8+*, *VCD 10.3+*) An ID of NSX-T Edge Cluster which should - provide vApp Networking Services or DHCP for isolated networks. Can be looked up using - `vcd_nsxt_edge_cluster` data source. +* `edge_cluster_id` - (Deprecated; Optional, *v3.8+*, *VCD 10.3+*) An ID of NSX-T Edge Cluster which + should provide vApp Networking Services or DHCP for isolated networks. Can be looked up using + `vcd_nsxt_edge_cluster` data source. This field is **deprecated** in favor of + [`vcd_org_vdc_nsxt_network_profile`](/providers/vmware/vcd/latest/docs/resources/org_vdc_nsxt_network_profile). * `enable_nsxv_distributed_firewall` - (Optional, *v3.9+*, *VCD 10.3+*) Enables or disables the NSX-V distributed firewall. diff --git a/website/docs/r/org_vdc_nsxt_network_profile.html.markdown b/website/docs/r/org_vdc_nsxt_network_profile.html.markdown new file mode 100644 index 000000000..2d441abc3 --- /dev/null +++ b/website/docs/r/org_vdc_nsxt_network_profile.html.markdown @@ -0,0 +1,73 @@ +--- +layout: "vcd" +page_title: "VMware Cloud Director: vcd_org_vdc_nsxt_network_profile +sidebar_current: "docs-vcd-resource-vcd-org-vdc-nsxt-network-profile" +description: |- + Provides a resource to manage NSX-T Org VDC Network Profile. +--- + +# vcd\_org\_vdc\_nsxt\_network\_profile + +Provides a resource to manage NSX-T Org VDC Network Profile. + +Supported in provider *v3.11+* and VCD 10.4.0+ with NSX-T. 
+ +-> This resource is a "singleton" per VDC as it modifies VDC property (network profile +configuration) + +## Example Usage + +```hcl +data "vcd_org_vdc" "nsxt" { + org = "my-org" + name = "my-vdc" +} + +data "vcd_nsxt_edge_cluster" "first" { + org = "my-org" + vdc_id = data.vcd_org_vdc.nsxt.id + name = "my-edge-cluster" +} + +resource "vcd_org_vdc_nsxt_network_profile" "nsxt" { + org = "my-org" + vdc = "my-vdc" + + edge_cluster_id = data.vcd_nsxt_edge_cluster.first.id + vdc_networks_default_segment_profile_template_id = vcd_nsxt_segment_profile_template.complete.id + vapp_networks_default_segment_profile_template_id = vcd_nsxt_segment_profile_template.complete.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `edge_cluster_id` - (Optional) - Edge Cluster ID to be used for this VDC +* `vdc_networks_default_segment_profile_template_id` - (Optional) - Default Segment Profile + Template ID for all VDC Networks in a VDC +* `vapp_networks_default_segment_profile_template_id` - (Optional) - Default Segment Profile + Template ID for all vApp Networks in a VDC + + +## Importing + + +~> **Note:** The current implementation of Terraform import can only import resources into the state. +It does not generate configuration. [More information.](https://www.terraform.io/docs/import/) + +An existing an organization VDC NSX-T Network Profile configuration can be [imported][docs-import] into +this resource via supplying the full dot separated path to VDC. An example is below: + +``` +terraform import vcd_org_vdc_nsxt_network_profile.my-cfg my-org.my-vdc +``` + +NOTE: the default separator (.) can be changed using Provider.import_separator or variable VCD_IMPORT_SEPARATOR + +[docs-import]:https://www.terraform.io/docs/import/ + +After that, you can expand the configuration file and either update or delete the VDC Network +Profile as needed. Running `terraform plan` at this stage will show the difference between the +minimal configuration file and the VDC's stored properties. + diff --git a/website/vcd.erb b/website/vcd.erb index 45668cfa1..dd927925b 100644 --- a/website/vcd.erb +++ b/website/vcd.erb @@ -31,11 +31,11 @@ > Container Service Extension v3.1.x - > - Container Service Extension v4.0 installation + > + Container Service Extension v4.1 installation - > - Container Service Extension v4.0 Kubernetes clusters management + > + Container Service Extension v4.1 Kubernetes clusters management > Catalog subscription and sharing @@ -373,6 +373,33 @@ > vcd_service_account + > + vcd_nsxt_segment_ip_discovery_profile + + > + vcd_nsxt_segment_mac_discovery_profile + + > + vcd_nsxt_segment_qos_profile + + > + vcd_nsxt_segment_security_profile + + > + vcd_nsxt_segment_spoof_guard_profile + + > + vcd_nsxt_segment_profile_template + + > + vcd_nsxt_global_default_segment_profile_template + + > + vcd_org_vdc_nsxt_network_profile + + > + vcd_nsxt_network_segment_profile + > @@ -666,8 +693,22 @@ > vcd_ui_plugin +<<<<<<< HEAD > vcd_network_pool +======= + > + vcd_nsxt_segment_profile_template + + > + vcd_nsxt_global_default_segment_profile_template + + > + vcd_org_vdc_nsxt_network_profile + + > + vcd_nsxt_network_segment_profile +>>>>>>> main