diff --git a/docs/Creating and editing rules/Creating rules.md b/docs/Creating and editing rules/Creating rules.md new file mode 100644 index 0000000..cd96782 --- /dev/null +++ b/docs/Creating and editing rules/Creating rules.md @@ -0,0 +1,79 @@ +# Creating RDK rules + +Below, we are going to explain metadata fields and how you can set them using RDK. +With RDK installed, all you need to do to create a new custom rule is to run the rdk create command with the required parameters. Answering the following questions will help you to provide the right arguments to rdk create: + +- What is the resource type(s) you want to evaluate? +- What should trigger the rule execution (e.g., configuration changes, periodic, hybrid)? +- What are the parameters that should be passed to the Lambda function? + +## Creating rules with rdk create command + +`rdk create` has many possible arguments, which include rule metadata arguments and remediation arguments (remediation arguments are covered in a later section in this guide). +You can see the full list of commands by running the command `rdk create --help`. +The following table includes AWS Config rule’s metadata fields and its corresponding `rdk create` options: + +| Config Metadata Field | `rdk create` option | +| --------------------- | ------------------- | +|_identifier_: ID for an AWS managed rule (written in all caps with underscores)| the first argument after `rdk create` is the rule name of your choice: `rdk create ` | +| _defaultName_: the name that instances of a rule will get by default. | rdk uses rule identifier as the defaultName. | +| _description_: provides context for what the rule evaluates. | `rdk create` does not accept an option for rule description and uses rule name/identifier. This can be modified manually by updating the parameters.json file. | +| _scope_: resource types targeted by the rule. 
| use `-r, --resource-types` with `rdk create` to specify resource types that trigger the rule (single resource or a comma delimited list of resources). | +| _sourceDetails_: rule's trigger type when detective evaluation occurs: `ConfigurationItemChangeNotification`, `OversizedConfigurationItemChangeNotification`, and `ScheduleNotification`. | When creating a rule with rdk, you don’t explicitly select rule trigger type, rather it is set when you use `-r, --resource-types` and/or `-m, --maximum-frequency` options. More on this in the following sections. | +| _compulsoryInputParameterDetails_: rule’s required parameters. | use `-i, --input-parameters` with `rdk create` to specify required parameters. Accepts JSON format inputs, for example `"{\"desiredInstanceType\":\"t2.micro\"}"` | +| _optionalInputParameterDetails_: parameters that are optional for a rule to do its evaluation. | use `--optional-parameters` with `rdk create` to specify optional parameters. Accepts JSON format inputs. | +| _labels_: used to optionally tag rules. | use `--tags` with `rdk create` to specify rule tags. Accepts JSON format input. | +| _supportedEvaluationModes_: could be `DETECTIVE` or `PROACTIVE`. We only cover detective rules in this guide. | rdk doesn’t support setting evaluation mode and will default to `DETECTIVE`. | + +The only argument required by `rdk create` is `` which is a positional argument. In addition, one of either `-r , --resource-types` or `-m , --maximum-frequency` is required to indicate the type of rule to create. So, in its simplest form, you can create your first rule by running: +`rdk create -r ` +By running the `rdk create` command, RDK creates a new directory with several files, including a skeleton of your Lambda code. 
RDK creates three files in a directory named after your rule:

- _.py_: skeleton of your Lambda code
- __test.py_: unit test framework for your rule
- _parameters.json_: holds rule metadata

The table below lists the parameters (rule metadata) included in the parameters.json file for the different types of rules, and how they map to `rdk create` arguments.

| Parameter | Description | Rule type | rdk create argument |
| --------- | ----------- | --------- | ------------------- |
| RuleName | Rule name | All | positional argument |
| Description | Rule description | All | Same as rule name by default |
| SourceRuntime | Lambda function runtime | All | `--runtime` |
| CodeKey | Name of zip file uploaded by RDK | All | N/A |
| InputParameters | Rule input parameters (JSON format) | Optional for all | `--input-parameters` |
| OptionalParameters | Rule optional parameters (JSON format) | Optional for all | `--optional-parameters` |
| SourceEvents | Resource type(s) | Configuration change/Hybrid | `--resource-types` |
| SourcePeriodic | Evaluation frequency | Periodic/Hybrid | `--maximum-frequency` |

Once you have created your rule, you should write your Lambda code and incorporate your compliance evaluation logic. We will cover the Lambda function in the next section.

## Examples

### Creating a configuration change triggered rule to assess IAM roles’ compliance

Run `rdk create IAM_ROLE --runtime python3.11 --resource-types AWS::IAM::Role`. This command creates a folder named IAM_ROLE in your working directory containing the rule files.
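For reference, here is a rough sketch of what the generated _parameters.json_ might contain for this rule, based on the parameter table above (illustrative only; the exact layout and any extra fields vary by RDK version):

```json
{
  "RuleName": "IAM_ROLE",
  "Description": "IAM_ROLE",
  "SourceRuntime": "python3.11",
  "CodeKey": "IAM_ROLE.zip",
  "InputParameters": "{}",
  "OptionalParameters": "{}",
  "SourceEvents": "AWS::IAM::Role"
}
```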
When you use the `-r, --resource-types` option, you are implicitly setting your rule’s trigger type to configuration changes, so when you deploy this rule, you will see _Oversized configuration changes_ and _Configuration changes_ under _Detective evaluation trigger type_ on your rule’s detail page in the AWS Config console:

![configuration change triggered rule](../../images/config_change_triggered.jpeg)

### Creating a periodically triggered rule to assess IAM roles’ compliance

Run `rdk create IAM_ROLE --runtime python3.11 --maximum-frequency Six_Hours`. Using the `-m, --maximum-frequency` option implicitly sets your rule’s trigger type to periodic, so when you deploy this rule, you will see _Periodic: 6 hours_ under _Detective evaluation trigger type_ on your rule’s detail page in the AWS Config console:

![periodically triggered rule](../../images/config_periodic.jpeg)

Note that _Scope of Changes_ is empty because this is a periodically triggered rule.

### Creating a hybrid rule with input parameters to assess multiple resource types’ compliance

Typically, you create rules that use either the `--resource-types` or the `--maximum-frequency` option, but not both. We have found that rules that try to be both event-triggered and periodic wind up being very complicated, so we do not recommend them as a best practice.

However, it is possible to create rules with a hybrid trigger type; just make sure that a hybrid trigger type rule is absolutely required to meet your compliance criteria. [Here](https://github.com/awslabs/aws-config-rules/tree/master/python/IAM_USER_USED_LAST_90_DAYS) is an example of a hybrid trigger type rule.

Run `rdk create IAM_ROLE --runtime python3.11 --maximum-frequency Six_Hours --resource-types AWS::IAM::Role,AWS::IAM::Policy --input-parameters "{\"WhitelistedRoleList\": \"\", \"NotUsedTimeOutInDays\": \"90\", \"NewRoleCooldownInDays\": \"7\"}"`.
This command creates a rule that is triggered both by configuration changes and periodically. It also takes three input parameters, where `NotUsedTimeOutInDays` and `NewRoleCooldownInDays` have default values of 90 and 7. You can specify `--optional-parameters` using the same format used here for `--input-parameters`. On Windows it is necessary to escape the double quotes when specifying input parameters; otherwise you don’t need to escape them.

This rule is triggered every six hours, and every time there is a change in the _AWS::IAM::Role_ or _AWS::IAM::Policy_ resource types. When you deploy this rule, you will see _Oversized configuration changes, Periodic: 6 hours_ and _Configuration changes_ under _Detective evaluation trigger type_ on your rule’s detail page in the AWS Config console. You should also see two different resource types under _Resource types_:

![Hybrid rule with input parameters](../../images/config_hybrid.jpeg)

diff --git a/docs/Creating and editing rules/Modifying rules.md b/docs/Creating and editing rules/Modifying rules.md
new file mode 100644
index 0000000..55ab63e
--- /dev/null
+++ b/docs/Creating and editing rules/Modifying rules.md
@@ -0,0 +1,3 @@

# Modifying rules

You can modify your rules either by editing _parameters.json_ or by running the `rdk modify` command, which takes the same arguments and options as the `rdk create` command. Note that modifying your rule locally does not modify the rule in your AWS account; you need to redeploy the rule to apply the changes.
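For example, to change a periodic rule’s evaluation frequency, you could edit the `SourcePeriodic` field in _parameters.json_ (field name from the parameters table earlier; an illustrative fragment, assuming `TwentyFour_Hours` is one of the accepted frequency values) and then redeploy:

```json
{
  "SourcePeriodic": "TwentyFour_Hours"
}
```

Equivalently, `rdk modify IAM_ROLE --maximum-frequency TwentyFour_Hours` should apply the same change; either way, the rule must be redeployed for the change to take effect in your account.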
diff --git a/docs/Creating and editing rules/RDK Lambda function/Lambda functions logic.md b/docs/Creating and editing rules/RDK Lambda function/Lambda functions logic.md
new file mode 100644
index 0000000..9ada22a
--- /dev/null
+++ b/docs/Creating and editing rules/RDK Lambda function/Lambda functions logic.md
@@ -0,0 +1,69 @@

# Introduction

When an AWS Config rule is triggered, AWS Config invokes the rule’s Lambda function, passing the triggering event as an argument to the Lambda function. An AWS Lambda event is a JSON-formatted document containing the data for the Lambda function to operate on. Visit [Example Events for AWS Config Rules](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_develop-rules_example-events.html) for more information.

## Lambda function’s logic

An AWS Config rule’s Lambda function includes a lambda_handler function that takes the event published by the AWS Config service and assesses the associated resource’s compliance. You might notice _context_ as the second argument of the lambda_handler function; in newer versions of Lambda, this argument is no longer needed.

The lambda_handler function performs two major tasks:

- Validating the event published by AWS Config and evaluating resource(s) compliance
- Validating compliance results, and reporting back to AWS Config

### Validating the event published by AWS Config and evaluating resource(s) compliance

![lambda_handler function algorithm](../../images/lambda_logic1.png)

Here are the highlights of what the lambda_handler function does:

- Invokes the check_defined function to make sure the event passed to lambda_handler is not empty.
- Runs `invoking_event = json.loads(event["invokingEvent"])` to extract the invoking event element of the event and store it in the invoking_event variable.
- Checks whether there are any rule parameters in the event, and stores them in the rule_parameters variable.
- Invokes the evaluate_parameters function to check that the input parameters are valid. If your rule has input parameters, you need to modify evaluate_parameters to do input sanitization and validation.
- If the input parameters are valid, checks the rule’s trigger type by extracting `invoking_event["messageType"]` and comparing it against the possible trigger types:
  - `"ConfigurationItemChangeNotification"`
  - `"ScheduledNotification"`
  - `"OversizedConfigurationItemChangeNotification"`
- If the rule trigger type is valid, runs the get_configuration_item function. A configuration item represents a point-in-time view of the various attributes of a supported AWS resource that exists in your account.
  - For change-triggered rules, it extracts the configuration item from the invoking event.
  - For periodic rules, it returns `None`.
  - For invalid message types (trigger types), it returns an _"Unexpected message type"_ error.
- Finally, invokes the is_applicable function to discard events triggered by a deleted resource; for all other events, it calls the evaluate_compliance function to do the compliance evaluation.

### Validating compliance results, and reporting back to AWS Config

When you create your AWS Config rule files using the `rdk create` command, your Lambda function file has an empty evaluate_compliance function, which you need to populate with your compliance evaluation logic so that it returns the compliance result (see [Writing an evaluate_compliance function](Writing%20an%20evaluate_compliance%20function.md) for guidance on updating this function).
The compliance result is expected to be a string, a dictionary, or a list of dictionaries containing the following keys:

- `ComplianceResourceType`
- `ComplianceResourceId`
- `ComplianceType`
- `OrderingTimestamp`

The Lambda skeleton file includes the following helper functions to create compliance result dictionaries:

- The build_evaluation function, generally used for periodically triggered rules, returns a compliance result dictionary and accepts the following arguments:
  - `resource_id`
  - `compliance_type`: can be `COMPLIANT`, `NON_COMPLIANT`, or `NOT_APPLICABLE`
  - `event`
  - `resource_type=DEFAULT_RESOURCE_TYPE`: you can set the `DEFAULT_RESOURCE_TYPE` variable (defaults to `'AWS::::Account'`) in the skeleton file and avoid passing it every time if your rule is scoped to only one type of resource
  - `annotation=None`: an optional annotation string providing more information about the compliance status of the resource being assessed. Annotations are shown in the AWS Config console and in the responses of AWS CLI or AWS SDK calls.
- The build_evaluation_from_config_item function, generally used for configuration change triggered rules, returns a compliance result dictionary and accepts the following arguments:
  - `configuration_item`: for configuration-change-based rules, the configuration item describing the resource that changed state.
  - `compliance_type`: a string with the evaluated compliance status, one of `COMPLIANT`, `NON_COMPLIANT`, or `NOT_APPLICABLE`
  - `annotation=None`: an optional string providing more information about the compliance status of the resource being assessed. Annotations are shown in the AWS Config console and in the responses of AWS CLI or AWS SDK calls. These are particularly recommended for `NON_COMPLIANT` resources to explain why a resource is non-compliant.

The returned value from the evaluate_compliance function is stored in the `compliance_result` variable.
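To make the expected dictionary shape concrete, here is a simplified stand-in for build_evaluation (an illustrative sketch, not RDK's actual implementation; the real skeleton derives the timestamp from the invoking event):

```python
import json

# Illustrative stand-in for the skeleton's DEFAULT_RESOURCE_TYPE variable.
DEFAULT_RESOURCE_TYPE = 'AWS::::Account'

def build_evaluation(resource_id, compliance_type, event,
                     resource_type=DEFAULT_RESOURCE_TYPE, annotation=None):
    # Simplified sketch: builds a compliance result dictionary with the
    # four required keys described above (plus an optional Annotation).
    evaluation = {
        'ComplianceResourceType': resource_type,
        'ComplianceResourceId': resource_id,
        'ComplianceType': compliance_type,
        # The real skeleton derives this timestamp from the invoking event.
        'OrderingTimestamp': json.loads(event['invokingEvent'])['notificationCreationTime'],
    }
    if annotation:
        evaluation['Annotation'] = annotation
    return evaluation

# Example: a NON_COMPLIANT evaluation for an IAM role, with an annotation.
event = {'invokingEvent': json.dumps({'notificationCreationTime': '2024-01-01T00:00:00.000Z'})}
result = build_evaluation('AROAEXAMPLEID', 'NON_COMPLIANT', event,
                          resource_type='AWS::IAM::Role',
                          annotation='Role has no attached policies')
```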
![compliance evaluation function logic](../../images/compliance_evaluation.png)

Once the compliance results are returned, lambda_handler follows this logic:

- If the `compliance_result` variable is empty, the results are recorded as NOT_APPLICABLE.
- If `compliance_result` is a list of dictionaries, lambda_handler checks that all the required keys are included in each member, and upon successful verification it reports the results of the evaluation.
- If `compliance_result` is a dictionary, lambda_handler checks that all the required keys are included, and upon successful verification it reports the results of the evaluation.
- If `compliance_result` is a string, lambda_handler calls:
  - the build_evaluation_from_config_item function if a configuration_item exists, converting `compliance_result` to a dictionary and reporting the results of the evaluation.
  - the build_evaluation function if a configuration_item does not exist, converting `compliance_result` to a dictionary and reporting the results of the evaluation.

The rest of the Lambda function takes care of error handling and should not be modified.

diff --git a/docs/Creating and editing rules/RDK Lambda function/Writing an evaluate_compliance function.md b/docs/Creating and editing rules/RDK Lambda function/Writing an evaluate_compliance function.md
new file mode 100644
index 0000000..d8e0811
--- /dev/null
+++ b/docs/Creating and editing rules/RDK Lambda function/Writing an evaluate_compliance function.md
@@ -0,0 +1,140 @@

# Introduction

RDK creates the evaluate_compliance function for you, but you don’t need to keep the default structure; you can even create multiple functions to evaluate compliance. We’re going to start with the default structure and keep building on top of it in the following examples.

## Compliance evaluation function for evaluations triggered by periodic frequency

One of the evaluate_compliance function’s inputs is `event`.
See [Example Events for AWS Config Rules](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_develop-rules_example-events.html) for more information. Events carry different types of information, depending on the Config rule type, that is required to evaluate the compliance of AWS resources.

For periodic trigger type rules, the _messageType_ element of the _invokingEvent_ has the value _ScheduledNotification_. Scheduled compliance validation usually checks numerous resources of the same type, and the published event has no configuration item, so you should use an AWS SDK (e.g., [boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html) for Python) to gather the information required for compliance evaluation.

Imagine you want to scan your account for IAM roles with no policies attached to them and report them as non-compliant resources. Once you have created your rule and set the `-m, --maximum-frequency` option of the `rdk create` command to the desired value, AWS Config triggers your rule at the set frequency and your Lambda function calls the evaluate_compliance function to report the results.
Let’s build the logic:

1. Initiate an empty evaluations list: `evaluations = []`
2. Create a boto3 client representing AWS Identity and Access Management (IAM): `iam_client = get_client('iam', event)`. We use the get_client function, defined in the skeleton file, to create the client.
3. Get a list of roles: `roles_list = iam_client.list_roles()`
4. Create a loop and, for every role, gather the lists of attached inline policies and managed policies.
    1. Inline policies: `role_policies_inline = iam_client.list_role_policies(RoleName=role_name)['PolicyNames']`
    2. Managed policies: `role_policies_managed = iam_client.list_attached_role_policies(RoleName=role_name)['AttachedPolicies']`
5.
Finally, we’re going to check the lengths of the inline and managed policy lists; if their combined length is zero, the role does not have any attached policies and is noncompliant.
6. We use the build_evaluation function, already defined in the skeleton file, to create an evaluation dictionary and append it to the evaluations list initiated in step 1.

Fully developed evaluate_compliance function for this example:

```python
def evaluate_compliance(event, configuration_item, valid_rule_parameters):
    evaluations = []
    iam_client = get_client('iam', event)

    roles_list = iam_client.list_roles()

    while True:
        for role in roles_list['Roles']:
            role_name = role['RoleName']
            role_policies_inline = iam_client.list_role_policies(RoleName=role_name)['PolicyNames']
            role_policies_managed = iam_client.list_attached_role_policies(RoleName=role_name)['AttachedPolicies']
            if len(role_policies_inline) + len(role_policies_managed) == 0:
                evaluations.append(build_evaluation(role['RoleId'], 'NON_COMPLIANT', event, resource_type='AWS::IAM::Role'))
                continue

            evaluations.append(build_evaluation(role['RoleId'], 'COMPLIANT', event, resource_type='AWS::IAM::Role'))

        # Marker is used for pagination, in cases where the API call returns too many results to return at once
        if "Marker" in roles_list:
            roles_list = iam_client.list_roles(Marker=roles_list["Marker"])
        else:
            break

    if not evaluations:
        evaluations.append(build_evaluation(event['accountId'], 'NOT_APPLICABLE', event, resource_type='AWS::::Account'))
    return evaluations
```

Make sure to read the boto3 documentation for each class you are using to understand its capabilities and limitations. In our case, the `list_roles` method might not return a complete list of roles in one call, so we use a `while` loop and check for `Marker` in the results to make subsequent calls in case we receive a truncated role list.
Read more about Marker in the `list_roles` method [documentation](https://boto3.amazonaws.com/v1/documentation/api/1.9.42/reference/services/iam.html#IAM.Client.list_roles).

Notes:

- For compliant resources we also create an evaluation dictionary and append it to the evaluations list.
- You can remove any unused arguments from the evaluate_compliance function definition, as long as you also remove them from the call to evaluate_compliance in the lambda_handler function.
- The build_evaluation function returns an evaluation dictionary (refer to the previous section for more information).

## Compliance evaluation function for evaluations triggered by configuration changes

For configuration change type rules, the _messageType_ element of the _invokingEvent_ has the value _ConfigurationItemChangeNotification_. If the returned _messageType_ value is _OversizedConfigurationItemChangeNotification_, the helper functions will automatically pull the resource’s configuration item, so you don’t need to do anything. The _invokingEvent_ also contains a configuration item, which provides information on the changed resource. We are going to recreate the previous function, this time using the information included in the configuration item. To view an example of the information included in a configuration item, you can run `rdk sample-ci `, or check the AWS Config Resource Schema repository on GitHub. Every time an AWS resource of a type indicated during rule creation is changed, this rule is triggered and an event is published (e.g., in our case, if an IAM role is changed, an event is published).

We can recreate the example in the previous section using the configuration item. An IAM role configuration item includes the `rolePolicyList` (inline policies) and `attachedManagedPolicies` (managed policies) keys in its `configuration` element.
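To make this concrete, here is an abridged, purely illustrative slice of such a configuration item (real configuration items contain many more fields; see the `rdk sample-ci` output for a full example):

```python
# Abridged, illustrative IAM role configuration item (all values made up).
configuration_item = {
    'resourceType': 'AWS::IAM::Role',
    'resourceId': 'AROAEXAMPLEID',
    'configuration': {
        'roleName': 'example-role',
        'rolePolicyList': [],            # inline policies
        'attachedManagedPolicies': [],   # managed policies
    },
}

# The assessment boils down to counting these two lists.
inline = configuration_item['configuration']['rolePolicyList']
managed = configuration_item['configuration']['attachedManagedPolicies']
has_no_policies = len(inline) + len(managed) == 0
```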
With this information, our assessment logic can be done in one step:

- Check the lengths of the `rolePolicyList` and `attachedManagedPolicies` arrays, and return `NON_COMPLIANT` if both are empty.

Fully developed evaluate_compliance function for this example:

```python
def evaluate_compliance(event, configuration_item, valid_rule_parameters):

    if len(configuration_item['configuration']['attachedManagedPolicies']) + len(configuration_item['configuration']['rolePolicyList']) == 0:
        return 'NON_COMPLIANT'

    return 'COMPLIANT'
```

Notes:

- Once you deploy the rule, AWS Config evaluates all the resources in scope using the already available configuration items (it does not create a new configuration item unless the resource is changed).
- After the initial evaluation, Config runs the compliance evaluation for configuration change triggered rules one resource at a time, whenever the resource changes.
- If the configuration_item does not provide all the necessary information for the compliance evaluation, you can use boto3 to gather any extra information you require to complete the evaluation.
- In this example, the evaluate_compliance function returns the compliance status as a string. If lambda_handler receives a string from the evaluate_compliance function, it uses the build_evaluation_from_config_item function to build the compliance results.
  - The build_evaluation_from_config_item function returns an evaluation dictionary (refer to the previous section for more information).
- If you need to add annotations to your compliance results, instead of returning a string you can call the build_evaluation_from_config_item function yourself and pass an annotation string.
Modified evaluate_compliance function to include annotations in the compliance evaluation:

```python
def evaluate_compliance(event, configuration_item, valid_rule_parameters):

    if len(configuration_item['configuration']['attachedManagedPolicies']) + len(configuration_item['configuration']['rolePolicyList']) == 0:
        return build_evaluation_from_config_item(configuration_item, 'NON_COMPLIANT', annotation='Your custom annotation')

    return 'COMPLIANT'
```

## Compliance evaluation function for hybrid trigger type rules

Writing evaluation logic for these types of rules is rather complicated and needs to be well thought out before execution. It’s best not to create hybrid triggered rules unless you can’t accomplish the compliance evaluation using periodic or change-triggered rules.

Imagine a scenario where you need to assess your resources both periodically and upon any resource change. For example, you want a Config rule that checks for unused IAM roles but ignores newly created roles for a few days (a role cooldown period). In this scenario, if you rely only on configuration change triggers, your new roles will be marked non-compliant upon creation (technically, they have never been used before), so you need another mechanism to check them regularly and assess their compliance. In this case you can modify your evaluation logic to accommodate both trigger types.
One way to do this is to modify the evaluate_compliance function to take an extra argument:

```python
def evaluate_compliance(event, configuration_item, valid_rule_parameters, message_type):
    if message_type in ["ConfigurationItemChangeNotification", "OversizedConfigurationItemChangeNotification"]:
        # Add evaluation logic for the configuration change trigger type
        pass
    else:
        # Add evaluation logic for the scheduled trigger type
        pass
```

When calling the evaluate_compliance function from the lambda_handler function, pass `invoking_event['messageType']` as the message type:

```python
compliance_result = evaluate_compliance(event, configuration_item, valid_rule_parameters, invoking_event['messageType'])
```

Another way would be to create two separate functions, one for periodic evaluations and one for configuration change triggered evaluations, and modify the lambda_handler function to call the proper function based on `invoking_event['messageType']` (the trigger type):

```python
...
if invoking_event['messageType'] in ["ConfigurationItemChangeNotification", "OversizedConfigurationItemChangeNotification"]:
    compliance_result = evaluate_changetrigger_compliance(event, configuration_item, rule_parameters)
elif invoking_event['messageType'] == 'ScheduledNotification':
    compliance_result = evaluate_scheduled_compliance(event, configuration_item, rule_parameters)
else:
    return {'internalErrorMessage': 'Unexpected message type ' + str(invoking_event)}
...
```

Once you pick the best approach for your evaluation logic, the rest is similar to what we covered in the previous sections for periodic and configuration change triggered rules.

diff --git a/docs/Remediation.md b/docs/Remediation.md
new file mode 100644
index 0000000..628e0cb
--- /dev/null
+++ b/docs/Remediation.md
@@ -0,0 +1,49 @@

# Remediating noncompliant resources

You can set up manual or automatic remediation for your rules to remediate noncompliant resources that are evaluated by AWS Config rules.
AWS Config uses [AWS Systems Manager Automation Documents](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation.html) to apply remediation. You can use one of the more than 100 pre-configured documents included in AWS Systems Manager, or [create](https://docs.aws.amazon.com/systems-manager/latest/userguide/documents-creating-content.html#writing-ssm-doc-content) your own Systems Manager document to remediate non-compliant resources.

Under the hood, when you create or modify a rule with remediation actions, RDK creates an _AWS::Config::RemediationConfiguration_ CloudFormation resource and associates it with your rule. To learn more about this resource, see _AWS::Config::RemediationConfiguration_ in the AWS CloudFormation documentation.

You can set _AWS::Config::RemediationConfiguration_ resource properties when creating or modifying a rule by including RDK rule remediation arguments. The following table lists the arguments that you can pass to `rdk create` or `rdk modify` to configure a remediation action, and how they map to _AWS::Config::RemediationConfiguration_ properties.
| `rdk create`/`rdk modify` argument | RemediationConfiguration property | Description |
| ---------------------------------- | --------------------------------- | ----------- |
| `--remediation-action` | [TargetId](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-config-remediationconfiguration.html#cfn-config-remediationconfiguration-targetid) | SSM Document name |
| `--remediation-action-version` | [TargetVersion](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-config-remediationconfiguration.html#cfn-config-remediationconfiguration-targetversion) | SSM Document version |
| `--auto-remediate` | [Automatic](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-config-remediationconfiguration.html#cfn-config-remediationconfiguration-automatic) | Whether the remediation is triggered automatically. |
| `--auto-remediation-retry-attempts` | [MaximumAutomaticAttempts](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-config-remediationconfiguration.html#cfn-config-remediationconfiguration-maximumautomaticattempts) | The maximum number of failed attempts for auto-remediation. |
| `--auto-remediation-retry-time` | [RetryAttemptSeconds](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-config-remediationconfiguration.html#cfn-config-remediationconfiguration-retryattemptseconds) | Maximum time in seconds that AWS Config runs auto-remediation. |
| `--remediation-concurrent-execution-percent` | [ExecutionControls.SsmControls.ConcurrentExecutionRatePercentage](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-config-remediationconfiguration-ssmcontrols.html#cfn-config-remediationconfiguration-ssmcontrols-concurrentexecutionratepercentage) | The maximum percentage of remediation actions allowed to run in parallel on the non-compliant resources. |
| `--remediation-error-rate-percent` | [ExecutionControls.SsmControls.ErrorPercentage](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-config-remediationconfiguration-ssmcontrols.html#cfn-config-remediationconfiguration-ssmcontrols-errorpercentage) | The percentage of errors that are allowed before SSM stops running automations on non-compliant resources. |
| `--remediation-parameters` | [Parameters](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-config-remediationconfiguration.html#cfn-config-remediationconfiguration-parameters) | SSM Document parameters. |

Some SSM Documents require input parameters to work properly. When setting up rule remediation, you can use `--remediation-parameters` to pass parameters to the selected Document. This argument takes a JSON string containing the Document parameters, with the following structure:

```json
{
  "SSMDocumentParameterX": {
    "StaticValue": {
      "Values": [
        "StaticValue1"
      ]
    }
  },
  "SSMDocumentParameterY": {
    "ResourceValue": {
      "Value": "RESOURCE_ID"
    }
  }
}
```

Note that there are two types of values: static values and resource values. A static value can take a list of values, whereas a resource value can only take one value, and it should be `RESOURCE_ID`. When you pass a resource value as an input parameter, the actual value is determined at runtime: it is the resource ID of the noncompliant resource evaluated by AWS Config.

Imagine you want to add a remediation action for the rule we created in the previous section and delete all the noncompliant IAM roles with no policies. First, check the list of AWS managed Documents (available on the [Systems Manager console](https://console.aws.amazon.com/systems-manager/documents)) to see if a Document meeting our goal already exists.
Matching our need, AWS Systems Manager offers a managed Document named [AWSConfigRemediation-DeleteIAMRole](https://console.aws.amazon.com/systems-manager/documents/AWSConfigRemediation-DeleteIAMRole). Navigate to the Document’s [Detail](https://console.aws.amazon.com/systems-manager/documents/AWSConfigRemediation-DeleteIAMRole/details) tab and review the required parameters. This Document requires two parameters, `AutomationAssumeRole` and `IAMRoleID`. First, you need to create an IAM role for the SSM Document to assume while completing its steps. Review the step inputs for each step of the Document under the Description tab to determine the permissions required for the `AutomationAssumeRole` role. For `IAMRoleID` we are going to pass the resource ID of the noncompliant resources dynamically. Finally, you can issue the following command to modify your rule and specify the `AWSConfigRemediation-DeleteIAMRole` Document as the remediation action with its required parameters:

```bash
rdk modify IAM_ROLE --runtime python3.9 --remediation-action AWSConfigRemediation-DeleteIAMRole --remediation-parameters '{"AutomationAssumeRole":{"StaticValue":{"Values":["arn:aws:iam::123456789012:role/managed/DocumentRole"]}},"IAMRoleID":{"ResourceValue":{"Value":"RESOURCE_ID"}}}'
```

Note that remediation actions for AWS Config rules are only supported in certain [regions](https://docs.aws.amazon.com/config/latest/developerguide/remediation.html#region-support-config-remediation).

diff --git a/docs/Writing test units.md b/docs/Writing test units.md
new file mode 100644
index 0000000..f546d65
--- /dev/null
+++ b/docs/Writing test units.md
@@ -0,0 +1,142 @@

# Introduction

If you are creating a new rule using RDK, the `rdk create` command will automatically create a unit test script file (named with "_test" appended to your Config rule name).
+If you are adding a unit test script to an existing rule, you can run `rdk create` with the same parameters as if you were creating a new rule, then copy just the `*_test.py` file it creates into your existing python-code folder (you can delete any other files it creates). For example, the following command will create `MyCoolNewRule.py` and `MyCoolNewRule_test.py`, as well as a `parameters.json` file containing some metadata about your rule.
+
+```bash
+rdk create MyCoolNewRule --runtime python3.9-lib --resource-types AWS::EC2::Instance --input-parameters '{"desiredInstanceType":"t2.micro"}'
+```
+
+## Developing a unit test
+
+Your unit tests will look very different depending on whether they are Configuration change-based or frequency-based. For Configuration change-based Config rules, you may only need to write a few sample configuration items. For frequency-based rules, and for Configuration change-based rules that make boto3 calls, you will need to define mock boto3 responses to create testing scenarios.
+
+### Example (for a Frequency-based Config rule)
+
+Frequency-based Config rules do not take a CI (configuration item) as a parameter. Instead, a frequency-based Config rule queries specific resources in the account to determine whether the rule is compliant. To create a unit test for a frequency-based Config rule, the unit test must define mock responses that replace the actual boto3 responses during unit testing. Update the Boto3Mock definition to include any boto3 clients that your Config rule invokes.
+
+By default, `rdk create` will create Config and STS mock clients for you. Any additional clients must be added.
+This is an example that adds an EMR client:
+
+```python
+CONFIG_CLIENT_MOCK = MagicMock()
+EMR_CLIENT_MOCK = MagicMock()  # Added
+STS_CLIENT_MOCK = MagicMock()
+
+class Boto3Mock:
+    @staticmethod
+    def client(client_name, *args, **kwargs):
+        if client_name == "emr":  # Added
+            return EMR_CLIENT_MOCK  # Added
+        if client_name == "config":
+            return CONFIG_CLIENT_MOCK
+        if client_name == "sts":
+            return STS_CLIENT_MOCK
+        raise Exception("Attempting to create an unknown client")
+```
+
+Add your mock responses, mock event, and unit test method to the ComplianceTest class.
+
+This is an example referencing the same EMR client as defined above.
+
+```python
+class ComplianceTest(unittest.TestCase):
+
+    # This dict will act as the response for the EMR boto3 method
+    # get_block_public_access_configuration(). This content was adapted from
+    # https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/emr.html#EMR.Client.get_block_public_access_configuration.
+    mock_bpa_response = {
+        "BlockPublicAccessConfiguration": {
+            "PermittedPublicSecurityGroupRuleRanges": [],
+            "BlockPublicSecurityGroupRules": True
+        }
+    }
+
+    # Some Config rules expect to be invoked with events that have certain
+    # properties on them, such as 'invokingEvent' or 'executionRoleArn'.
+    # You may need to provide dummy event values to avoid KeyErrors in your Config rule evaluation.
+    mock_event = {
+        "invokingEvent": '{"awsAccountId":"123456789012","notificationCreationTime":"2016-07-13T21:50:00.373Z","messageType":"ScheduledNotification","recordVersion":"1.0"}',
+        "ruleParameters": '{"myParameterKey":"myParameterValue"}',
+        "resultToken": "myResultToken",
+        "eventLeftScope": False,
+        "executionRoleArn": "arn:aws:iam::dummyAccount:role/dummyRole",
+        "configRuleName": "periodic-config-rule",
+        "configRuleId": "config-rule-6543210",
+        "accountId": "123456789012",
+        "version": "1.0",
+    }
+
+    def test_mock_bpa_configured_response(self):
+        sts_mock()  # This function creates mock responses for the STS client's methods (such as AssumeRole)
+        # This tells the mock EMR client to replace the boto3 method
+        # get_block_public_access_configuration with the mock response specified above.
+        EMR_CLIENT_MOCK.get_block_public_access_configuration = MagicMock(
+            return_value=self.mock_bpa_response
+        )
+        response = RULE.evaluate_compliance(
+            self.mock_event,
+            None,  # Frequency-based rules are not passed a configuration item
+            {}     # No rule parameters are needed for this scenario
+        )
+        expected_response = [
+            build_expected_response(
+                compliance_type="COMPLIANT",
+                compliance_resource_id="test",
+                compliance_resource_type=DEFAULT_RESOURCE_TYPE,
+                annotation="EMR Block Public access in this account"
+            )
+        ]
+        assert_successful_evaluation(self, response, expected_response, len(response))
+
+```
+
+### Example (for a Configuration change-based Config rule)
+
+This is an example of a unit test for an API Gateway Stage. It uses a CI-based approach.
+
+```python
+class ComplianceTest(unittest.TestCase):
+    test1id = "access_log_unit_test_1"
+    # This is a configuration item definition that will be used to test the Config rule.
+    mock_ci_no_access_log_settings = {
+        "resourceType": "AWS::ApiGateway::Stage",
+        "resourceId": test1id,
+        "configurationItemCaptureTime": "2021-10-07T04:34:52.542Z",
+        "configuration": {
+            "stageName": "Dev",
+            "restApiId": "test",
+        }
+    }  # Note that not all fields of an actual configuration item need to be included in the mock CI.
+
+    def test_no_access_log_settings(self):
+        # Notice that the CI parameter is given the mock CI built above.
+        response = RULE.evaluate_compliance({}, self.mock_ci_no_access_log_settings, {})
+        # Define the response you expect from your Config rule for the given CI.
+        expected_response = build_expected_response(
+            compliance_type="NON_COMPLIANT",
+            compliance_resource_id=self.test1id,
+            compliance_resource_type=DEFAULT_RESOURCE_TYPE,
+            annotation="AccessLogSettings are not defined for this stage."  # The exact annotation you expect should be provided.
+        )
+        # This function verifies that the expected and actual responses match.
+        assert_successful_evaluation(self, response, expected_response, len(response))
+
+    # More unit tests can be added to the same ComplianceTest class.
+    mock_ci_correct_access_log_settings = {"fillThisIn": "withRealData"}
+
+    def test_correct_access_log_settings(self):
+        pass  # Implementation omitted for brevity
+
+```
+
+## Running Unit Tests
+
+```bash
+rdk test-local
+```
+
+## Debugging
+
+The easiest way to debug a unit test is to add the following at the bottom of your `*_test.py` file:
+
+```python
+if __name__ == '__main__':
+    unittest.main()
+```
+
+Then run your IDE's debugger (e.g., 'Start Debugging' in VSCode). This will run your unit tests and stop at any breakpoints so you can see what the data looks like.
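If you want to sanity-check the mocking pattern itself, the examples above boil down to replacing a boto3 client with a `MagicMock`. This self-contained sketch (the client and response are illustrative) runs without any AWS credentials:

```python
from unittest.mock import MagicMock

# Illustrative stand-in for a boto3 EMR client; no AWS credentials required.
emr_client_mock = MagicMock()
emr_client_mock.get_block_public_access_configuration = MagicMock(
    return_value={
        "BlockPublicAccessConfiguration": {"BlockPublicSecurityGroupRules": True}
    }
)

# Code under test calls the mock exactly as it would call the real client.
response = emr_client_mock.get_block_public_access_configuration()
print(response["BlockPublicAccessConfiguration"]["BlockPublicSecurityGroupRules"])  # prints True

# MagicMock also records calls, which is handy when a test fails unexpectedly.
emr_client_mock.get_block_public_access_configuration.assert_called_once()
```

Because `MagicMock` records every call, assertions such as `assert_called_once()` let you confirm the rule actually invoked the API you mocked.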
diff --git a/docs/index.md b/docs/index.md
deleted file mode 120000
index 32d46ee..0000000
--- a/docs/index.md
+++ /dev/null
@@ -1 +0,0 @@
-../README.md
\ No newline at end of file
diff --git a/docs/index.md b/docs/index.md
new file mode 100644
index 0000000..0ad2aa4
--- /dev/null
+++ b/docs/index.md
@@ -0,0 +1,97 @@
+# Getting Started
+
+RDK uses Python 3.7+ and is installed via pip. It requires you to have
+an AWS account and sufficient permissions to manage the Config service,
+and to create S3 Buckets, Roles, and Lambda Functions. An AWS IAM Policy
+Document that describes the minimum necessary permissions can be found
+at `policy/rdk-minimum-permissions.json`.
+
+Under the hood, rdk uses boto3 to make API calls to AWS, so you can set
+your credentials any way that boto3 recognizes (options 3 through 8
+[here](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html#guide-credentials))
+or pass them in with the command-line parameters `--profile`,
+`--region`, `--access-key-id`, or `--secret-access-key`.
+
+If you just want to use the RDK, go ahead and install it using pip.
+
+```bash
+pip install rdk
+```
+
+Alternatively, if you want to see the code and/or contribute, you can
+clone the git repo and then use pip to install the package from the repo
+directory. Use the `-e` flag to generate symlinks so that any edits you
+make will be reflected when you run the installed package.
+
+```bash
+pip install -e .
+```
+
+If you are going to author your Lambda functions using Java, you will
+need to have Java 8 and gradle installed. If you are going to author
+your Lambda functions in C#, you will need to have the dotnet CLI and
+the .NET Core Runtime 1.08 installed.
+
+To make sure the rdk is installed correctly, running the package from
+the command line without any arguments should display help information.
+
+```bash
+rdk
+usage: rdk [-h] [-p PROFILE] [-k ACCESS_KEY_ID] [-s SECRET_ACCESS_KEY]
+           [-r REGION] [-f REGION_FILE] [--region-set REGION_SET]
+           [-v] ...
+rdk: error: the following arguments are required: <command>, <command_parameters>
+```
+
+## Usage
+
+### Configure your env
+
+To use the RDK, it's recommended to create a directory that will be your
+working directory. This should be committed to a source code repo, and
+ideally created as a python virtualenv. In that directory, run the
+`init` command to set up your AWS Config environment.
+
+```bash
+rdk init
+Running init!
+Creating Config bucket config-bucket-780784666283
+Creating IAM role config-role
+Waiting for IAM role to propagate
+Config Service is ON
+Config setup complete.
+Creating Code bucket config-rule-code-bucket-780784666283ap-southeast-1
+```
+
+Running `init` subsequent times will validate your AWS Config setup and
+re-create any S3 buckets or IAM resources that are needed.
+
+- If your Config delivery bucket already exists in another AWS account, use the `--config-bucket-exists-in-another-account` argument.
+
+```bash
+rdk init --config-bucket-exists-in-another-account
+```
+
+- If you use AWS Organizations/Control Tower in your AWS environment, additionally use the `--control-tower` argument.
+
+```bash
+rdk init --control-tower --config-bucket-exists-in-another-account
+```
+
+- If the bucket for custom Lambda code is already present in the current account, use the `--skip-code-bucket-creation` argument.
+
+```bash
+rdk init --skip-code-bucket-creation
+```
+
+- If you want rdk to create/update and upload the rdklib-layer for you, use the `--generate-lambda-layer` argument. In supported regions, rdk will deploy the layer using the Serverless Application Repository; otherwise it will build a local Lambda layer archive and upload it for use.
+
+```bash
+rdk init --generate-lambda-layer
+```
+
+- If you want rdk to give a custom name to the Lambda layer, use the `--custom-layer-name` argument.
+The Serverless Application Repository currently cannot be used for custom Lambda layers.
+
+```bash
+rdk init --generate-lambda-layer --custom-layer-name
+```
diff --git a/images/compliance_evaluation.png b/images/compliance_evaluation.png
new file mode 100644
index 0000000..cb467fd
Binary files /dev/null and b/images/compliance_evaluation.png differ
diff --git a/images/config_change_triggered.jpeg b/images/config_change_triggered.jpeg
new file mode 100644
index 0000000..819f0a6
Binary files /dev/null and b/images/config_change_triggered.jpeg differ
diff --git a/images/config_hybrid.jpeg b/images/config_hybrid.jpeg
new file mode 100644
index 0000000..351caaa
Binary files /dev/null and b/images/config_hybrid.jpeg differ
diff --git a/images/config_periodic.jpeg b/images/config_periodic.jpeg
new file mode 100644
index 0000000..651ec4d
Binary files /dev/null and b/images/config_periodic.jpeg differ
diff --git a/images/lambda_logic1.png b/images/lambda_logic1.png
new file mode 100644
index 0000000..d693d01
Binary files /dev/null and b/images/lambda_logic1.png differ