Fix typos
NathanBaulch committed Sep 3, 2024
1 parent 70e28e0 commit d145125
Showing 23 changed files with 36 additions and 36 deletions.
2 changes: 1 addition & 1 deletion .goreleaser.yml
@@ -34,7 +34,7 @@ checksum:
signs:
- artifacts: checksum
args:
# pass the batch flag to indicate its not interactive.
# pass the batch flag to indicate it's not interactive.
- "--batch"
- "--local-user"
- "{{ .Env.GPG_FINGERPRINT }}"
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -98,7 +98,7 @@

### Added
- [index] Add include_type_name for compatibility between ESv6/7
- [xpack license] Handle ackowledged only reponse.
- [xpack license] Handle acknowledged only response.
- [kibana alert] Fix storing actions, missing descriptions.

### Fixed
2 changes: 1 addition & 1 deletion docs/index.md
@@ -110,7 +110,7 @@ provider "elasticsearch" {
#### Assume role configuration

You can instruct the provider to assume a role in AWS before interacting with the cluster by setting the `aws_assume_role_arn` variable.
Optionnaly, you can configure the [External ID](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html) of IAM role trust policy by setting the `aws_assume_role_external_id` variable.
Optionally, you can configure the [External ID](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html) of IAM role trust policy by setting the `aws_assume_role_external_id` variable.

Example usage:
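
Since the example block itself is collapsed in this diff view, the following is only a minimal sketch of what such a configuration could look like. The `url` value, role ARN, and external ID are placeholders, not values taken from this repository; only the `aws_assume_role_arn` and `aws_assume_role_external_id` argument names come from the paragraph above.

```terraform
provider "elasticsearch" {
  # Placeholder cluster endpoint.
  url = "https://search-example.us-east-1.es.amazonaws.com"

  # Assume this IAM role before signing requests to the cluster.
  aws_assume_role_arn = "arn:aws:iam::123456789012:role/example-es-access"

  # Optional External ID required by the role's trust policy.
  aws_assume_role_external_id = "example-external-id"
}
```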

4 changes: 2 additions & 2 deletions docs/resources/cluster_settings.md
@@ -34,7 +34,7 @@ resource "elasticsearch_cluster_settings" "global" {
- **cluster_max_shards_per_node_frozen** (Number) The total number of primary and replica frozen shards, for the cluster; Ssards for closed indices do not count toward this limit, a cluster with no frozen data nodes is unlimited.
- **cluster_no_master_block** (String) Specifies which operations are rejected when there is no active master in a cluster (all, write)
- **cluster_persistent_tasks_allocation_enable** (String) Whether allocation for persistent tasks is active (all, none)
- **cluster_persistent_tasks_allocation_recheck_interval** (String) A time string controling how often assignment checks are performed to react to whether persistent tasks can be assigned to nodes
- **cluster_persistent_tasks_allocation_recheck_interval** (String) A time string controlling how often assignment checks are performed to react to whether persistent tasks can be assigned to nodes
- **cluster_routing_allocation_allow_rebalance** (String) Specify when shard rebalancing is allowed (always, indices_primaries_active, indices_all_active)
- **cluster_routing_allocation_awareness_attributes** (String) Use custom node attributes to take hardware configuration into account when allocating shards
- **cluster_routing_allocation_balance_index** (Number) Weight factor for the number of shards per index allocated on a node, increasing this raises the tendency to equalize the number of shards per index across all nodes
@@ -55,7 +55,7 @@ resource "elasticsearch_cluster_settings" "global" {
- **cluster_routing_rebalance_enable** (String) Allow rebalancing for specific kinds of shards (all, primaries, replicas, none)
- **indices_breaker_fielddata_limit** (String) The percentage of memory above which if loading a field into the field data cache would cause the cache to exceed this limit, an error is returned
- **indices_breaker_fielddata_overhead** (Number) A constant that all field data estimations are multiplied by
- **indices_breaker_request_limit** (String) The percentabge of memory above which per-request data structures (e.g. calculating aggregations) are prevented from exceeding
- **indices_breaker_request_limit** (String) The percentage of memory above which per-request data structures (e.g. calculating aggregations) are prevented from exceeding
- **indices_breaker_request_overhead** (Number) A constant that all request estimations are multiplied by
- **indices_breaker_total_limit** (String) The percentage of total amount of memory that can be used across all breakers
- **indices_recovery_max_bytes_per_sec** (String) Maximum total inbound and outbound recovery traffic for each node, in mb
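
For context, a hedged sketch of how a few of the settings listed above might be combined in one resource block; the resource type and name come from the hunk header above, while the values are purely illustrative.

```terraform
resource "elasticsearch_cluster_settings" "global" {
  # Illustrative values; any of the settings documented above can be managed here.
  cluster_max_shards_per_node_frozen         = 3000
  cluster_routing_allocation_allow_rebalance = "indices_all_active"
  indices_breaker_total_limit                = "70%"
}
```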
2 changes: 1 addition & 1 deletion docs/resources/opensearch_audit_config.md
@@ -12,7 +12,7 @@ Audit config lets you configure the security plugin audit log settings. See the

Note that when using with a managed AWS OpenSearch cluster, some values and permutations are not
allowed, and will result in a HTTP 409 (Conflict) error being returned. See the comments in the
example below for some know scenario's where this may occur.
example below for some known scenarios where this may occur.

## Example Usage

2 changes: 1 addition & 1 deletion docs/resources/opensearch_ism_policy_mapping.md
@@ -53,6 +53,6 @@
Import is supported using the following syntax:

```shell
# Import by poilcy_id
# Import by policy_id
terraform import elasticsearch_opensearch_ism_policy_mapping.test policy_1
```
2 changes: 1 addition & 1 deletion docs/resources/script.md
@@ -26,7 +26,7 @@ resource "elasticsearch_script" "test_script" {
The following arguments are supported:

* `script_id` - (Required) The name of the script.
* `lang` - Specifies the language the script is written in. Defaults to painless..
* `lang` - Specifies the language the script is written in. Defaults to painless.
* `source` - (Required) The source of the stored script.
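
As a rough illustration of the three arguments above (the resource type and label come from the hunk header; the script body is a made-up Painless expression, not taken from this commit):

```terraform
resource "elasticsearch_script" "test_script" {
  script_id = "my_script"                                  # required: name of the stored script
  lang      = "painless"                                   # optional: defaults to painless
  source    = "Math.log(_score * 2) + params.my_modifier"  # required: example script body
}
```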

## Attributes Reference
2 changes: 1 addition & 1 deletion docs/resources/xpack_role_mapping.md
@@ -46,7 +46,7 @@ resource "elasticsearch_xpack_role_mapping" "test" {

- **role_mapping_name** (String) The distinct name that identifies the role mapping, used solely as an identifier.
- **roles** (Set of String) A list of role names that are granted to the users that match the role mapping rules.
- **rules** (String) A list of mustache templates that will be evaluated to determine the roles names that should granted to the users that match the role mapping rules. This matches fields of users, rules can be grouped into `all` and `any` top level keys.
- **rules** (String) A list of mustache templates that will be evaluated to determine the roles names that should be granted to the users that match the role mapping rules. This matches fields of users, rules can be grouped into `all` and `any` top level keys.
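
A hedged sketch of how these required arguments might fit together; the mapping name, role name, and rule fields below are illustrative assumptions, not values taken from this commit.

```terraform
resource "elasticsearch_xpack_role_mapping" "test" {
  role_mapping_name = "example-mapping"
  roles             = ["kibana_user"]

  # Rules are grouped under `all` / `any` top level keys and match fields of users.
  rules = jsonencode({
    any = [
      { field = { username = "*" } },
      { field = { groups = "admins" } },
    ]
  })
}
```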

### Optional

2 changes: 1 addition & 1 deletion es/data_source_elasticsearch_host.go
@@ -34,7 +34,7 @@ func dataSourceElasticsearchHostRead(d *schema.ResourceData, m interface{}) erro

// The upstream elastic client does not export the property for the urls
// it's using. Presumably the URLS would be available where the client is
// intantiated, but in terraform, that's not always practicable.
// instantiated, but in terraform, that's not always practicable.
var err error
esClient, err := getClient(m.(*ProviderConf))
if err != nil {
2 changes: 1 addition & 1 deletion es/data_source_elasticsearch_opendistro_destination.go
@@ -20,7 +20,7 @@ var datasourceOpenDistroDestinationSchema = map[string]*schema.Schema{
"name": {
Type: schema.TypeString,
Required: true,
Description: "Name of the destrination to retrieve",
Description: "Name of the destination to retrieve",
},
"body": {
Type: schema.TypeMap,
4 changes: 2 additions & 2 deletions es/resource_elasticsearch_cluster_settings.go
@@ -95,7 +95,7 @@ func resourceElasticsearchClusterSettings() *schema.Resource {
"cluster_persistent_tasks_allocation_recheck_interval": {
Type: schema.TypeString,
Optional: true,
Description: "A time string controling how often assignment checks are performed to react to whether persistent tasks can be assigned to nodes",
Description: "A time string controlling how often assignment checks are performed to react to whether persistent tasks can be assigned to nodes",
},
"cluster_blocks_read_only": {
Type: schema.TypeBool,
@@ -225,7 +225,7 @@
"indices_breaker_request_limit": {
Type: schema.TypeString,
Optional: true,
Description: "The percentabge of memory above which per-request data structures (e.g. calculating aggregations) are prevented from exceeding",
Description: "The percentage of memory above which per-request data structures (e.g. calculating aggregations) are prevented from exceeding",
},
"indices_breaker_request_overhead": {
Type: schema.TypeFloat,
14 changes: 7 additions & 7 deletions es/resource_elasticsearch_kibana_alert_test.go
@@ -26,7 +26,7 @@ func TestAccElasticsearchKibanaAlert(t *testing.T) {
}
meta := provider.Meta()

// We use the elasticsearch version to check compatibilty, it'll connect to
// We use the elasticsearch version to check compatibility, it'll connect to
// kibana below
providerConf := meta.(*ProviderConf)
esClient, err := getClient(providerConf)
@@ -54,8 +54,8 @@
}

testConfig := testAccElasticsearchKibanaAlertV77(defaultActionID)
testParmsConfig := testAccElasticsearchKibanaAlertParamsJSONV77
testActionParmsConfig := testAccElasticsearchKibanaAlertJsonV77(defaultActionID)
testParamsConfig := testAccElasticsearchKibanaAlertParamsJSONV77
testActionParamsConfig := testAccElasticsearchKibanaAlertJsonV77(defaultActionID)
elasticVersion, err := resourceElasticsearchKibanaGetVersion(meta)
if err != nil {
t.Skipf("err: %s", err)
@@ -67,8 +67,8 @@
}
if elasticVersion.GreaterThanOrEqual(versionV711) {
testConfig = testAccElasticsearchKibanaAlertV711
testParmsConfig = testAccElasticsearchKibanaAlertParamsJSONV711
testActionParmsConfig = testAccElasticsearchKibanaAlertJsonV711(defaultActionID)
testParamsConfig = testAccElasticsearchKibanaAlertParamsJSONV711
testActionParamsConfig = testAccElasticsearchKibanaAlertJsonV711(defaultActionID)
}

log.Printf("[INFO] TestAccElasticsearchKibanaAlert %+v", elasticVersion)
@@ -89,13 +89,13 @@
),
},
{
Config: testParmsConfig,
Config: testParamsConfig,
Check: resource.ComposeTestCheckFunc(
testCheckElasticsearchKibanaAlertExists("elasticsearch_kibana_alert.test_params_json"),
),
},
{
Config: testActionParmsConfig,
Config: testActionParamsConfig,
Check: resource.ComposeTestCheckFunc(
testCheckElasticsearchKibanaAlertExists("elasticsearch_kibana_alert.test_action_json"),
),
4 changes: 2 additions & 2 deletions es/resource_elasticsearch_opendistro_ism_policy_test.go
@@ -14,7 +14,7 @@ import (
)

func TestAccElasticsearchOpenDistroISMPolicy(t *testing.T) {
opensearchVerionConstraints, _ := version.NewConstraint(">= 1.1, < 6")
opensearchVersionConstraints, _ := version.NewConstraint(">= 1.1, < 6")
provider := Provider()
diags := provider.Configure(context.Background(), &terraform.ResourceConfig{})
if diags.HasError() {
@@ -38,7 +38,7 @@ func TestAccElasticsearchOpenDistroISMPolicy(t *testing.T) {
if err != nil {
t.Skipf("err: %s", err)
}
if opensearchVerionConstraints.Check(v) {
if opensearchVersionConstraints.Check(v) {
config = testAccElasticsearchOpenDistroISMPolicyOpenSearch11
} else {
config = testAccElasticsearchOpenDistroISMPolicyV7
2 changes: 1 addition & 1 deletion es/resource_elasticsearch_opendistro_kibana_tenant.go
@@ -56,7 +56,7 @@ func resourceElasticsearchOpenDistroKibanaTenant() *schema.Resource {
Importer: &schema.ResourceImporter{
StateContext: schema.ImportStatePassthroughContext,
},
DeprecationMessage: "elasticsearch_opendistro_kibana_tentant is deprecated, please use elasticsearch_opensearch_kibana_tenant resource instead.",
DeprecationMessage: "elasticsearch_opendistro_kibana_tenant is deprecated, please use elasticsearch_opensearch_kibana_tenant resource instead.",
}
}

2 changes: 1 addition & 1 deletion es/resource_elasticsearch_opendistro_monitor.go
@@ -68,7 +68,7 @@ func resourceElasticsearchOpenDistroMonitorCreate(d *schema.ResourceData, m inte
log.Printf("[INFO] Object ID: %s", d.Id())

// Although we receive the full monitor in the response to the POST,
// OpenDistro seems to add default values to the ojbect after the resource
// OpenDistro seems to add default values to the object after the resource
// is saved, e.g. adjust_pure_negative, boost values
return resourceElasticsearchOpenDistroMonitorRead(d, m)
}
4 changes: 2 additions & 2 deletions es/resource_elasticsearch_opendistro_monitor_test.go
@@ -13,15 +13,15 @@ import (
)

func TestAccElasticsearchOpenDistroMonitor(t *testing.T) {
opensearchVerionConstraints, _ := version.NewConstraint(">= 1.1, < 6")
opensearchVersionConstraints, _ := version.NewConstraint(">= 1.1, < 6")
var config string
var check resource.TestCheckFunc
meta := testAccOpendistroProvider.Meta()
v, err := version.NewVersion(meta.(*ProviderConf).esVersion)
if err != nil {
t.Fatalf("err: %s", err)
}
if opensearchVerionConstraints.Check(v) {
if opensearchVersionConstraints.Check(v) {
config = testAccElasticsearchOpenDistroMonitorOpenSearch11
check = resource.ComposeTestCheckFunc(
testCheckElasticsearchOpenDistroMonitorExists("elasticsearch_opendistro_monitor.test_monitor1"),
6 changes: 3 additions & 3 deletions es/resource_elasticsearch_opendistro_role.go
@@ -276,11 +276,11 @@ func resourceElasticsearchPutOpenDistroRole(d *schema.ResourceData, m interface{
}
var tenantPermissionsBody []TenantPermissions
for _, tenant := range tenantPermissions {
putTeanant := TenantPermissions{
putTenant := TenantPermissions{
TenantPatterns: tenant.TenantPatterns,
AllowedActions: tenant.AllowedActions,
}
tenantPermissionsBody = append(tenantPermissionsBody, putTeanant)
tenantPermissionsBody = append(tenantPermissionsBody, putTenant)
}

rolesDefinition := RoleBody{
@@ -317,7 +317,7 @@ func resourceElasticsearchPutOpenDistroRole(d *schema.ResourceData, m interface{
// see https://github.com/opendistro-for-
// elasticsearch/security/issues/1095, this should return a 409, but
// retry on the 500 as well. We can't parse the message to only retry on
// the conlict exception becaues the elastic client doesn't directly
// the conflict exception because the elastic client doesn't directly
// expose the error response body
RetryStatusCodes: []int{http.StatusConflict, http.StatusInternalServerError},
Retrier: elastic7.NewBackoffRetrier(
2 changes: 1 addition & 1 deletion es/resource_elasticsearch_opendistro_roles_mapping.go
@@ -241,7 +241,7 @@ func resourceElasticsearchPutOpenDistroRolesMapping(d *schema.ResourceData, m in
// see https://github.com/opendistro-for-
// elasticsearch/security/issues/1095, this should return a 409, but
// retry on the 500 as well. We can't parse the message to only retry on
// the conlict exception becaues the elastic client doesn't directly
// the conflict exception because the elastic client doesn't directly
// expose the error response body
RetryStatusCodes: []int{http.StatusConflict, http.StatusInternalServerError},
Retrier: elastic7.NewBackoffRetrier(
2 changes: 1 addition & 1 deletion es/resource_elasticsearch_opendistro_user.go
@@ -234,7 +234,7 @@ func resourceElasticsearchPutOpenDistroUser(d *schema.ResourceData, m interface{
// see https://github.com/opendistro-for-
// elasticsearch/security/issues/1095, this should return a 409, but
// retry on the 500 as well. We can't parse the message to only retry on
// the conlict exception becaues the elastic client doesn't directly
// the conflict exception because the elastic client doesn't directly
// expose the error response body
RetryStatusCodes: []int{http.StatusConflict, http.StatusInternalServerError},
Retrier: elastic7.NewBackoffRetrier(
4 changes: 2 additions & 2 deletions es/resource_elasticsearch_xpack_license.go
@@ -215,13 +215,13 @@ func resourceElasticsearchPutEnterpriseLicense(l string, meta interface{}) (Lice
}

if !licenseResponse.Acknowledged {
return emptyLicense, errors.New("License waas not acknowledged")
return emptyLicense, errors.New("License was not acknowledged")
}

if len(licenseResponse.Licenses) > 0 {
return licenseResponse.Licenses[0], err
} else {
// The API can ackowledge a license, but not return it :|, so we parse what we PUTed
// The API can acknowledge a license, but not return it :|, so we parse what we PUTed
var license License
if err := json.Unmarshal([]byte(l), &license); err != nil {
return emptyLicense, fmt.Errorf("Error unmarshalling license: %+v: %+v", err, body)
2 changes: 1 addition & 1 deletion es/resource_elasticsearch_xpack_license_test.go
@@ -15,7 +15,7 @@ import (
)

// Note the tests run with a trial license enabled, so this test is
// "destructive" in that once deactivated, a trail license may not be re-
// "destructive" in that once deactivated, a trial license may not be re-
// activated. Restarting the docker compose container doesn't seem to work.
func TestAccElasticsearchXpackLicense_Basic(t *testing.T) {
resource.Test(t, resource.TestCase{
2 changes: 1 addition & 1 deletion es/resource_elasticsearch_xpack_role_mapping.go
@@ -36,7 +36,7 @@ func resourceElasticsearchXpackRoleMapping() *schema.Resource {
Type: schema.TypeString,
Required: true,
DiffSuppressFunc: suppressEquivalentJson,
Description: "A list of mustache templates that will be evaluated to determine the roles names that should granted to the users that match the role mapping rules. This matches fields of users, rules can be grouped into `all` and `any` top level keys.",
Description: "A list of mustache templates that will be evaluated to determine the roles names that should be granted to the users that match the role mapping rules. This matches fields of users, rules can be grouped into `all` and `any` top level keys.",
},
"roles": {
Type: schema.TypeSet,
@@ -1,2 +1,2 @@
# Import by poilcy_id
# Import by policy_id
terraform import elasticsearch_opensearch_ism_policy_mapping.test policy_1
