diff --git a/docs/Technical-Documentation/cloud-foundry-db-upgrade.md b/docs/Technical-Documentation/cloud-foundry-db-upgrade.md
new file mode 100644
index 000000000..466b562f6
--- /dev/null
+++ b/docs/Technical-Documentation/cloud-foundry-db-upgrade.md
@@ -0,0 +1,120 @@
+# Cloud Foundry, Cloud.gov AWS RDS Database Upgrade
+
+## Process
+
+If you are performing this process for the staging or production environments, ensure you are making the changes through the [HHS](https://github.com/HHS/TANF-app) repo and not the [Raft](https://github.com/raft-tech/TANF-app) repo.
+
+
+### 1. SSH into a backend app in your desired environment
+```bash
+cf ssh tdp-backend-<app name>
+```
+
+
+### 2. Create a backup of all the databases in the ENV's RDS instance
+Note: you can get the required field values from `VCAP_SERVICES`.
+```bash
+/home/vcap/deps/0/apt/usr/lib/postgresql/<version>/bin/pg_dump -h <host> -p <port> -d <database name> -U <username> -F c --no-acl --no-owner -f <backup file name>.pg
+```
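+
+For example, you can read `VCAP_SERVICES` from your local machine with `cf env`, or print it from inside the SSH session (same `<app name>` placeholder as above):
+```bash
+# From your local machine
+cf env tdp-backend-<app name>
+
+# Or, from inside the cf ssh session
+echo $VCAP_SERVICES
+```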
+
+
+### 3. Copy the backup(s) to your local machine
+Note: This assumes you ran the backup command above in the home directory of the app. As an added bonus for later steps, you should execute this command from somewhere within the `tdrs-backend` directory! Make sure not to commit the files/directories that are copied to your local directory.
+```bash
+cf ssh tdp-backend-<app name> -c 'tar cfz - ~/app/*.pg' | tar xfz - -C .
+```
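+
+For example, one hedged way to keep the copied backups out of version control, run from the `tdrs-backend` directory (the ignore pattern is just an illustration):
+```bash
+echo "*.pg" >> .gitignore
+```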
+
+
+### 4. Verify backup file size(s) match the backup size(s) in the app
+```bash
+ls -lh /home/vcap/app
+```
+As an added verification step, you should consider restoring the backups into a local server and verifying the contents with `psql` or `pgAdmin`.
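+
+A minimal sketch of that local verification, assuming Docker is available; the container name, local port, and password here are arbitrary:
+```bash
+# Throwaway local Postgres server matching the new version
+docker run -d --name pg-verify -e POSTGRES_PASSWORD=postgres -p 5433:5432 postgres:15.7
+
+# Restore the dump into it (--create recreates the original database)
+PGPASSWORD=postgres pg_restore -h localhost -p 5433 -U postgres -d postgres --create --no-acl --no-owner <backup file name>.pg
+
+# Spot-check the restored tables
+PGPASSWORD=postgres psql -h localhost -p 5433 -U postgres -d <database name> -c '\dt'
+```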
+
+
+### 5. Update the `version` key in the `json_params` field of the `database` resource in `main.tf` to the new database server version for each environment you're upgrading
+```terraform
+json_params = "{\"version\": \"<new version>\"}"
+```
+
+
+### 6. Update the `postgresql-client` version to the new version in `tdrs-backend/apt.yml`
+```yaml
+- postgresql-client-<new version>
+```
+Note: if the underlying OS for Cloud Foundry is no longer `cflinuxfs4`, you may also need to update the repo we point to for the Postgres client binaries.
+
+
+### 7. Update the postgres container version in `tdrs-backend/docker-compose.yml`
+```yaml
+postgres:
+  image: postgres:<new version>
+```
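+
+If you want a quick local sanity check of the new image tag (a sketch; the `postgres` service name comes from the compose file):
+```bash
+cd tdrs-backend
+docker-compose up -d postgres
+docker-compose logs postgres | tail
+```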
+
+
+### 8. Update Terraform state to delete then re-create RDS instance
+Follow the instructions in `terraform/README.md` and proceed from there. Modify the `main.tf` file in the `terraform/<env>` directory to inform TF of the changes. To delete the existing RDS instance, you can simply comment out the entire database `resource` in the file (even though you made changes to it in the steps above). TF will see that the resource is no longer there, delete it, and update its state accordingly. Then uncomment the database `resource` (with the changes you made in previous steps). TF will create the new RDS instance with your updates and also update the state in S3.
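+
+A hedged sketch of that flow, assuming you've already generated `variables.tfvars` and `backend_config.tfvars` per `terraform/README.md`:
+```bash
+cd terraform/<env>
+terraform init -backend-config backend_config.tfvars
+
+# With the database resource commented out, TF plans a destroy of the old RDS instance
+terraform plan -out tfapply -var-file variables.tfvars
+terraform apply "tfapply"
+
+# Uncomment the database resource (with the new version), then plan/apply again to create it
+terraform plan -out tfapply -var-file variables.tfvars
+terraform apply "tfapply"
+```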
+
+
+### 9. Bind backend to the new RDS instance to get credentials
+```bash
+cf bind-service tdp-backend-<app name> tdp-db-<env>
+```
+Be sure to re-stage the app when prompted.
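+
+If you aren't prompted, you can trigger the re-stage yourself (same `<app name>` placeholder as above):
+```bash
+cf restage tdp-backend-<app name>
+```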
+
+
+### 10. Apply the backend manifest to begin the restore process
+If you copied the backups as described in the note from step 3, the command below will copy them to the app instance for you. If not, you will need to use `scp` to copy the backups to the app instance after running the command below.
+```bash
+cf push tdp-backend-<app name> --no-route -f manifest.buildpack.yml -t 180 --strategy rolling
+```
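+
+If you do need to copy the backups up manually, one hedged alternative to `scp` (mirroring the tar pipe from step 3) is to stream each file over `cf ssh` from your local machine:
+```bash
+cat <backup file name>.pg | cf ssh tdp-backend-<app name> -c 'cat > ~/app/<backup file name>.pg'
+```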
+
+
+### 11. SSH into the app you just pushed
+```bash
+cf ssh tdp-backend-<app name>
+```
+
+
+### 12. Create the appropriate database(s) in the new RDS server
+Note: you can get the required field values from `VCAP_SERVICES`.
+```bash
+/home/vcap/deps/0/apt/usr/lib/postgresql/<version>/bin/createdb -U <username> -h <host> <database name>
+```
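+
+To confirm the database(s) were created, a quick check is to list them with `psql` (same placeholder values as above):
+```bash
+/home/vcap/deps/0/apt/usr/lib/postgresql/<version>/bin/psql -h <host> -U <username> -d <database name> -c '\l'
+```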
+
+
+### 13. Restore the backup(s) to the appropriate database(s)
+Note: you can get the required field values from `VCAP_SERVICES`.
+```bash
+/home/vcap/deps/0/apt/usr/lib/postgresql/<version>/bin/pg_restore -p <port> -h <host> -U <username> -d <database name> <backup file name>.pg
+```
+During this step, you may see errors similar to the message below. Note that `<role>` is substituted in the message to avoid leaking environment-specific usernames/roles.
+```bash
+pg_restore: from TOC entry 215; 1259 17313 SEQUENCE users_user_user_permissions_id_seq <role>
+pg_restore: error: could not execute query: ERROR: role "<role>" does not exist
+Command was: ALTER TABLE public.users_user_user_permissions_id_seq OWNER TO <role>;
+```
+All of the errors should follow this pattern, and the total count reported at the end of the restore should be:
+```bash
+pg_restore: warning: errors ignored on restore: 68
+```
+If this is what you see, everything is OK. This happens because `pg_dump` doesn't remove owner associations on sequences for some reason, but `pg_restore` correctly leaves the sequence owner set to the new database user.
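+
+If you want to double-check that, a hedged query for sequence ownership (same placeholder values as above) is:
+```bash
+/home/vcap/deps/0/apt/usr/lib/postgresql/<version>/bin/psql -h <host> -p <port> -U <username> -d <database name> \
+  -c "SELECT relname, pg_get_userbyid(relowner) AS owner FROM pg_class WHERE relkind = 'S' LIMIT 5;"
+```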
+
+
+### 14. Use `psql` to get into the database to check state
+Note: you can get the required field values from `VCAP_SERVICES`.
+```bash
+/home/vcap/deps/0/apt/usr/lib/postgresql/<version>/bin/psql -h <host> -p <port> -U <username> -d <database name>
+```
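+
+Some hedged examples of the kind of checks to run once connected; the table name below is an assumption based on this project's Django naming (e.g. `users_user`):
+```bash
+PSQL=/home/vcap/deps/0/apt/usr/lib/postgresql/<version>/bin/psql
+
+# List tables in the restored database
+$PSQL -h <host> -p <port> -U <username> -d <database name> -c '\dt'
+
+# Row count for a table that should exist after the restore
+$PSQL -h <host> -p <port> -U <username> -d <database name> -c 'SELECT COUNT(*) FROM users_user;'
+```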
+
+
+### 15. Re-deploy or Re-stage the backend and frontend apps
+Depending on your environment, you can do this with GitHub labels or you can re-stage the apps from Cloud.gov.
+
+
+### 16. Access the re-deployed/re-staged apps and run a smoke test
+- Log in
+- Submit a few datafiles
+- Make sure new and existing submission histories populate correctly
+- Check out the DAC's data
+
diff --git a/tdrs-backend/apt.yml b/tdrs-backend/apt.yml
index f07aee4a3..cbcb0edf4 100644
--- a/tdrs-backend/apt.yml
+++ b/tdrs-backend/apt.yml
@@ -2,8 +2,8 @@ cleancache: true
keys:
- https://www.postgresql.org/media/keys/ACCC4CF8.asc
repos:
- - deb http://apt.postgresql.org/pub/repos/apt/ bookworm-pgdg main
+ - deb http://apt.postgresql.org/pub/repos/apt/ jammy-pgdg main
packages:
- - postgresql-client-12
+ - postgresql-client-15
- libjemalloc-dev
- redis
diff --git a/tdrs-backend/docker-compose.yml b/tdrs-backend/docker-compose.yml
index 07d014fb5..81d7065c4 100644
--- a/tdrs-backend/docker-compose.yml
+++ b/tdrs-backend/docker-compose.yml
@@ -12,7 +12,7 @@ services:
- ../scripts/zap-hook.py:/zap/scripts/zap-hook.py:ro
postgres:
- image: postgres:11.6
+ image: postgres:15.7
environment:
- PGDATA=/var/lib/postgresql/data/
- POSTGRES_DB=tdrs_test
diff --git a/tdrs-backend/tdpservice/data_files/test/test_api.py b/tdrs-backend/tdpservice/data_files/test/test_api.py
index 5f177721d..9ae1a408e 100644
--- a/tdrs-backend/tdpservice/data_files/test/test_api.py
+++ b/tdrs-backend/tdpservice/data_files/test/test_api.py
@@ -100,8 +100,10 @@ def assert_error_report_tanf_file_content_matches_with_friendly_names(response):
assert ws.cell(row=1, column=1).value == "Please refer to the most recent versions of the coding " \
+ "instructions (linked below) when looking up items and allowable values during the data revision process"
- assert ws.cell(row=8, column=COL_ERROR_MESSAGE).value == "Every T1 record should have at least one " + \
- "corresponding T2 or T3 record with the same RPT_MONTH_YEAR and CASE_NUMBER."
+ assert ws.cell(row=8, column=COL_ERROR_MESSAGE).value == ("if Cash Amount :873 validator1 passed then Cash and "
+ "Cash Equivalents: Number of Months T1 Item -1 (Cash "
+ "and Cash Equivalents: Number of Months): 0 is not "
+ "larger than 0.")
@staticmethod
def assert_error_report_ssp_file_content_matches_with_friendly_names(response):
@@ -112,8 +114,8 @@ def assert_error_report_ssp_file_content_matches_with_friendly_names(response):
assert ws.cell(row=1, column=1).value == "Please refer to the most recent versions of the coding " \
+ "instructions (linked below) when looking up items and allowable values during the data revision process"
- assert ws.cell(row=7, column=COL_ERROR_MESSAGE).value == "TRAILER: record length is 15 characters " + \
- "but must be 23."
+ assert ws.cell(row=7, column=COL_ERROR_MESSAGE).value == ("M1 Item 11 (Receives Subsidized Housing): 3 is "
+ "not larger or equal to 1 and smaller or equal to 2.")
@staticmethod
def assert_error_report_file_content_matches_without_friendly_names(response):
@@ -132,9 +134,9 @@ def assert_error_report_file_content_matches_without_friendly_names(response):
assert ws.cell(row=1, column=1).value == "Please refer to the most recent versions of the coding " \
+ "instructions (linked below) when looking up items and allowable values during the data revision process"
- assert ws.cell(row=8, column=COL_ERROR_MESSAGE).value == ("Every T1 record should have at least one "
- "corresponding T2 or T3 record with the same "
- "RPT_MONTH_YEAR and CASE_NUMBER.")
+ assert ws.cell(row=8, column=COL_ERROR_MESSAGE).value == ("if CASH_AMOUNT :873 validator1 passed then "
+ "NBR_MONTHS T1 Item -1 (NBR_MONTHS): 0 is not "
+ "larger than 0.")
@staticmethod
def assert_data_file_exists(data_file_data, version, user):
diff --git a/tdrs-backend/tdpservice/scheduling/db_backup.py b/tdrs-backend/tdpservice/scheduling/db_backup.py
index 05f51ad09..48d0da749 100644
--- a/tdrs-backend/tdpservice/scheduling/db_backup.py
+++ b/tdrs-backend/tdpservice/scheduling/db_backup.py
@@ -29,7 +29,7 @@ def get_system_values():
sys_values['SPACE'] = json.loads(OS_ENV['VCAP_APPLICATION'])['space_name']
# Postgres client pg_dump directory
- sys_values['POSTGRES_CLIENT_DIR'] = "/home/vcap/deps/0/apt/usr/lib/postgresql/12/bin/"
+ sys_values['POSTGRES_CLIENT_DIR'] = "/home/vcap/deps/0/apt/usr/lib/postgresql/15/bin/"
# If the client directory and binaries don't exist, we need to find them.
if not (os.path.exists(sys_values['POSTGRES_CLIENT_DIR']) and
diff --git a/tdrs-backend/tdpservice/settings/cloudgov.py b/tdrs-backend/tdpservice/settings/cloudgov.py
index 67f2c5b60..0da7a63d0 100644
--- a/tdrs-backend/tdpservice/settings/cloudgov.py
+++ b/tdrs-backend/tdpservice/settings/cloudgov.py
@@ -44,9 +44,8 @@ class CloudGov(Common):
cloudgov_space = cloudgov_app.get('space_name', 'tanf-dev')
cloudgov_space_suffix = cloudgov_space.strip('tanf-')
cloudgov_name = cloudgov_app.get('name').split("-")[-1] # converting "tdp-backend-name" to just "name"
- services_basename = cloudgov_name if (
- cloudgov_name == "develop" and cloudgov_space_suffix == "staging"
- ) else cloudgov_space_suffix
+ # TODO: does this break prod?
+ services_basename = cloudgov_space_suffix
database_creds = get_cloudgov_service_creds_by_instance_name(
cloudgov_services['aws-rds'],
@@ -68,10 +67,10 @@ class CloudGov(Common):
###
# Dynamic Database configuration based on cloud.gov services
#
- env_based_db_name = f'tdp_db_{cloudgov_space_suffix}_{cloudgov_name}'
+ env_based_db_name = f'tdp_db_{cloudgov_name}'
logger.debug("css: " + cloudgov_space_suffix)
- if (cloudgov_space_suffix in ["prod", "staging"]):
+ if (cloudgov_space_suffix == "prod"):
db_name = database_creds['db_name']
else:
db_name = env_based_db_name
diff --git a/terraform/README.md b/terraform/README.md
index 72bec143b..7513b17b5 100644
--- a/terraform/README.md
+++ b/terraform/README.md
@@ -28,7 +28,7 @@ We use an S3 bucket created by Cloud Foundry in Cloud.gov as our remote backend
Note that a single S3 bucket maintains the Terraform State for both the development and staging environments, and this instance is deployed in the development space.
-| | development | staging | production |
+| | development | staging | production |
|---|---|---|---|
| S3 Key | `terraform.tfstate.dev` | `terraform.tfstate.staging` | `terraform.tfstate.prod` |
| Service Space | `tanf-dev` | `tanf-dev` | `tanf-prod` |
@@ -45,11 +45,11 @@ Sometimes a developer will need to run Terraform locally to perform manual opera
1. **Install Cloud Foundry CLI**
- On macOS: `brew install cloudfoundry/tap/cf-cli`
- On other platforms: [Download and install cf][cf-install]
-
+
1. **Install CircleCI local CLI**
- On macOS: `brew install circleci`
- On other platforms: [Download and install circleci][circleci]
-
+
1. **Install jq CLI**
- On macOS: `brew install jq`
- On other platforms: [Download and install jq][jq]
@@ -59,10 +59,10 @@ Sometimes a developer will need to run Terraform locally to perform manual opera
# login
cf login -a api.fr.cloud.gov --sso
# Follow temporary authorization code prompt.
-
- # Select the target org (probably `hhs-acf-prototyping`),
+
+ # Select the target org (probably `hhs-acf-prototyping`),
# and the space within which you want to provision infrastructure.
-
+
# Spaces:
# dev = tanf-dev
# staging = tanf-staging
@@ -75,7 +75,7 @@ Sometimes a developer will need to run Terraform locally to perform manual opera
```bash
./create_tf_vars.sh
-
+
# Should generate a file `variables.tfvars` in the `/terraform/dev` directory.
# Your file should look something like this:
#
@@ -83,32 +83,32 @@ Sometimes a developer will need to run Terraform locally to perform manual opera
# cf_password = "some-dev-password"
# cf_space_name = "tanf-dev"
```
-
+
## Test Deployment in Development
1. Follow the instructions above and ensure the `variables.tfvars` file has been generated with proper values.
1. `cd` into `/terraform/dev`
1. Prepare terraform backend:
-
+
**Remote vs. Local Backend:**
-
+
If you merely wish to test some new changes without regards to the currently deployed state stored in the remote TF state S3 bucket, you may want to use a "local" backend with Terraform.
```terraform
terraform {
backend "local" {}
}
```
-
+
With this change to `main.tf`, you should be able to run `terraform init` successfully.
**Get Remote S3 Credentials:**
-
+
In the `/terraform` directory, you can run the `create_backend_vars.sh` script which can be modified with details of your current environment, and will yield a `backend_config.tfvars` file which must be later passed in to Terraform. For more on this, check out [terraform variable definitions][tf-vars].
```bash
./create_backend_vars.sh
-
+
# Should generate a file `backend_config.tfvars` in the current directory.
# Your file should look something like this:
#
@@ -116,21 +116,21 @@ Sometimes a developer will need to run Terraform locally to perform manual opera
# secret_key = "some-secret-key"
# region = "us-gov-west-1"
```
-
+
You can now run `terraform init -backend-config backend_config.tfvars` and load the remote state stored in S3 into your local Terraform config.
1. Run `terraform init` if using a local backend, or `terraform init -backend-config backend_config.tfvars` with the remote backend.
-1. Run `terraform destroy -var-file variables.tfvars` to clear the current deployment (if there is one).
+1. Run `terraform destroy -var-file variables.tfvars` to clear the current deployment (if there is one).
- If the current deployment isn't destroyed, `terraform apply` will fail later because the unique service instance names are already taken.
- Be cautious and weary of your target environment when destroying infrastructure.
-1. Run `terraform plan -out tfapply -var-file variables.tfvars` to create a new execution plan.
+1. Run `terraform plan -out tfapply -var-file variables.tfvars` to create a new execution plan. When prompted for the `cf_app_name`, you should provide the value `tanf-<env>` where `<env>` is one of: `dev`, `staging`, `prod`.
1. Run `terraform apply "tfapply"` to create the new infrastructure.
A similar test deployment can also be executed from the `/scripts/deploy-infrastructure-dev.sh` script, albeit without the `destroy` step.
### Terraform State S3 Bucket
-These instructions describe the creation of a new S3 bucket to hold Terraform's state. _This need only be done once per environment_ (note that currently development and staging environments share a single S3 bucket that exists in the development space). This is the only true manual steps that needs to be taken upon the initial application deployment in new environments. This should only need to be done at the beginning of a deployed app's lifetime.
+These instructions describe the creation of a new S3 bucket to hold Terraform's state. _This need only be done once per environment_ (note that currently development and staging environments share a single S3 bucket that exists in the development space). This is the only truly manual step that needs to be taken upon the initial application deployment in new environments. It should only need to be done at the beginning of a deployed app's lifetime.
1. **Create S3 Bucket for Terraform State**
@@ -139,7 +139,7 @@ These instructions describe the creation of a new S3 bucket to hold Terraform's
```
1. **Create service key**
-
+
Now we need a new service key with which to authenticate to our Cloud.gov S3 bucket from CircleCI.
```bash
@@ -175,7 +175,7 @@ Below, we will use an example change that has been done on cloud.gov UI. Assume
If we try to run plan or deploy at this point, then it will fail since the state doesn't have new "es-dev" elastic search service, so it assumes this is a new deployment and tries to deploy the new instance, which will fail since the name is already taken.
-2. grab the id of remote change (in this case elastic service) by running ```cf``` commands.
+2. Grab the ID of the remote change (in this case the elastic search service) by running ```cf``` commands.
for the case of our example, we can run ```cf services```, and then run ```cf service es-dev --guid ``` which will show guid of newly created elasticsearch service instance, which is required for updating state with ES instance.
3. run this command to update state: ```terraform import cloudfoundry_service_instance.elasticsearch ```
@@ -183,13 +183,13 @@ If we try to run plan or deploy at this point, then it will fail since the state
You should change ```cloudfoundry_service_instance.elasticsearch``` to your instance/service you added and trying to update the state file with.
#### Security
-
- The Terraform State S3 instance is set to be encrypted (see `main.tf#backend`). Amazon S3 [protects data at rest][s3] using 256-bit Advanced Encryption Standard.
+
+ The Terraform State S3 instance is set to be encrypted (see `main.tf#backend`). Amazon S3 [protects data at rest][s3] using 256-bit Advanced Encryption Standard.
> **Rotating credentials:**
- >
+ >
> The S3 service creates unique IAM credentials for each application binding or service key. To rotate credentials associated with an application binding, unbind and rebind the service instance to the application. To rotate credentials associated with a service key, delete and recreate the service key.
-
+
diff --git a/terraform/create_backend_vars.sh b/terraform/create_backend_vars.sh
index 72e6d107a..11de0340c 100755
--- a/terraform/create_backend_vars.sh
+++ b/terraform/create_backend_vars.sh
@@ -1,5 +1,15 @@
#!/usr/bin/env bash
+if [[ $# -eq 0 ]] ; then
+ echo "You need to pass the env you are configuring: 'dev', 'staging', or 'production'."
+ exit 1
+fi
+
+if [[ "$1" != "dev" && "$1" != "staging" && "$1" != "production" ]] ; then
+ echo "The first argument to this script must be one of: 'dev', 'staging', or 'production'."
+ exit 1
+fi
+
S3_CREDENTIALS=$(cf service-key tdp-tf-states tdp-tf-key | tail -n +2)
if [ -z "$S3_CREDENTIALS" ]; then
echo "Unable to get service-keys, you may need to login to Cloud.gov first"
@@ -8,15 +18,14 @@ if [ -z "$S3_CREDENTIALS" ]; then
fi
# Requires installation of jq - https://stedolan.github.io/jq/download/
-ACCESS_KEY=$(echo "${S3_CREDENTIALS}" | jq -r '.access_key_id')
-SECRET_KEY=$(echo "${S3_CREDENTIALS}" | jq -r '.secret_access_key')
-REGION=$(echo "${S3_CREDENTIALS}" | jq -r '.region')
-BUCKET=$(echo "${S3_CREDENTIALS}" | jq -r '.bucket')
+ACCESS_KEY=$(echo "${S3_CREDENTIALS}" | jq -r '.credentials.access_key_id')
+SECRET_KEY=$(echo "${S3_CREDENTIALS}" | jq -r '.credentials.secret_access_key')
+REGION=$(echo "${S3_CREDENTIALS}" | jq -r '.credentials.region')
+BUCKET=$(echo "${S3_CREDENTIALS}" | jq -r '.credentials.bucket')
{
echo "access_key = \"$ACCESS_KEY\""
echo "secret_key = \"$SECRET_KEY\""
echo "region = \"$REGION\""
echo "bucket = \"$BUCKET\""
- echo "prefix = \"dev\""
-} >> ./dev/backend_config.tfvars
+} > ./$1/backend_config.tfvars
diff --git a/terraform/create_tf_vars.sh b/terraform/create_tf_vars.sh
index 042fcbd71..55beafb71 100755
--- a/terraform/create_tf_vars.sh
+++ b/terraform/create_tf_vars.sh
@@ -1,5 +1,15 @@
#!/usr/bin/env bash
+if [[ $# -eq 0 ]] ; then
+ echo "You need to pass the env you are configuring: 'dev', 'staging', or 'production'."
+ exit 1
+fi
+
+if [[ "$1" != "dev" && "$1" != "staging" && "$1" != "production" ]] ; then
+ echo "The first argument to this script must be one of: 'dev', 'staging', or 'production'."
+ exit 1
+fi
+
KEYS_JSON=$(cf service-key tanf-keys deployer | grep -A4 "{")
if [ -z "$KEYS_JSON" ]; then
echo "Unable to get service-keys, you may need to login to Cloud.gov first"
@@ -8,8 +18,8 @@ if [ -z "$KEYS_JSON" ]; then
fi
# Requires installation of jq - https://stedolan.github.io/jq/download/
-CF_USERNAME_DEV=$(echo "$KEYS_JSON" | jq -r '.username')
-CF_PASSWORD_DEV=$(echo "$KEYS_JSON" | jq -r '.password')
+CF_USERNAME_DEV=$(echo "$KEYS_JSON" | jq -r '.credentials.username')
+CF_PASSWORD_DEV=$(echo "$KEYS_JSON" | jq -r '.credentials.password')
CF_SPACE="tanf-dev"
@@ -17,4 +27,4 @@ CF_SPACE="tanf-dev"
echo "cf_password = \"$CF_PASSWORD_DEV\""
echo "cf_user = \"$CF_USERNAME_DEV\""
echo "cf_space_name = \"$CF_SPACE\""
-} >> ./dev/variables.tfvars
+} > ./$1/variables.tfvars
diff --git a/terraform/dev/main.tf b/terraform/dev/main.tf
index 98e9d422b..0b81b8114 100644
--- a/terraform/dev/main.tf
+++ b/terraform/dev/main.tf
@@ -51,8 +51,13 @@ resource "cloudfoundry_service_instance" "database" {
name = "tdp-db-dev"
space = data.cloudfoundry_space.space.id
service_plan = data.cloudfoundry_service.rds.service_plans["micro-psql"]
- json_params = "{\"version\": \"12\"}"
+ json_params = "{\"version\": \"15\"}"
recursive_delete = true
+ timeouts {
+ create = "60m"
+ update = "60m"
+ delete = "2h"
+ }
}
###
diff --git a/terraform/production/main.tf b/terraform/production/main.tf
index 7b8b6850a..c9ecf505e 100644
--- a/terraform/production/main.tf
+++ b/terraform/production/main.tf
@@ -51,8 +51,13 @@ resource "cloudfoundry_service_instance" "database" {
name = "tdp-db-prod"
space = data.cloudfoundry_space.space.id
service_plan = data.cloudfoundry_service.rds.service_plans["medium-psql"]
- json_params = "{\"version\": \"12\"}"
+ json_params = "{\"version\": \"15\"}"
recursive_delete = true
+ timeouts {
+ create = "60m"
+ update = "60m"
+ delete = "2h"
+ }
}
###
diff --git a/terraform/staging/main.tf b/terraform/staging/main.tf
index 39d3a65e9..0c4cc2576 100644
--- a/terraform/staging/main.tf
+++ b/terraform/staging/main.tf
@@ -51,8 +51,13 @@ resource "cloudfoundry_service_instance" "database" {
name = "tdp-db-staging"
space = data.cloudfoundry_space.space.id
service_plan = data.cloudfoundry_service.rds.service_plans["micro-psql"]
- json_params = "{\"version\": \"12\"}"
+ json_params = "{\"version\": \"15\"}"
recursive_delete = true
+ timeouts {
+ create = "60m"
+ update = "60m"
+ delete = "2h"
+ }
}
###