Docker containers
The `values.yaml` file lists the container images that VRO creates; these are ultimately deployed to Kubernetes.
(Packages in Github Container Registry)
Currently, there are more packages associated with this repo. Packages are listed on the side panel of the VRO repo.
UPDATE: PR #65 makes packages automatically associated with this repo, but each package still needs to be manually set to "Inherit access from source repository" as instructed by the LHDI doc.
- OBSOLETE: If more are created, go to the VA Organization's Packages page, search for the package, and manually connect it to this repo -- see the LHDI Development Guide.
Due to Docker.com rate limits, PR #67 and PR #68 pull container images (e.g., `postgres` and `rabbitmq`) from Docker.com (i.e., DockerHub, docker.io), then tag and push them to Github Container Registry (`ghcr`). LHDI has ideas to create mirrors of commonly used (and well-vetted) images so that people can access them without getting rate-limited, but the timeline for that is unclear. Until then, we push the images to `ghcr` so that LHDI can pull them without any limit.
These packages (unchanged from Docker.com) need "Inherit access from source repository" to be set manually (instructions in the LHDI Development Guide), and must be manually connected to this repo because we do not modify the Dockerfile `LABEL` (`LABEL org.opencontainers.image.source=https://github.com/department-of-veterans-affairs/abd-vro`), which would automatically associate the image with the repo.
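The mirror flow can be sketched as follows. The `mirror_target` naming scheme and the `docker` invocations are assumptions based on the `mirror-*` package names, not copied from PR #67/#68:

```shell
#!/usr/bin/env bash
# Sketch of mirroring a Docker Hub image to GHCR (naming scheme is hypothetical).
mirror_target() {
  local name="${1%%:*}" tag="${1#*:}"
  echo "ghcr.io/department-of-veterans-affairs/abd-vro-mirror-${name}:${tag}"
}

src="postgres:14"               # illustrative image and tag
dst="$(mirror_target "$src")"
echo "$dst"

# The actual pull/tag/push steps (commented out so this sketch runs offline):
# docker pull "$src"
# docker tag "$src" "$dst"
# docker push "$dst"
```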
To build the container images, this project uses Palantir's Gradle Docker plugin -- https://github.com/palantir/gradle-docker.
The following subsections describe uses of container images.
Used in LHDI's Kubernetes clusters, the images are retrieved and deployed to dev, and separately deployed to production.
In the Kubernetes clusters, the `app` (or `abd_vro-app`) Docker container depends on init containers, which run and exit before the `app` container is started:
- `container-init`: `init_pg.sql`
- `db-init` (or `abd_vro-db-init`): flyway DB migrations
- `opa-init`: rego policies and permissions
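In Kubernetes terms, that ordering is expressed with `initContainers` in the deployment spec. A hypothetical sketch (the container names, image paths, and tags here are illustrative, not copied from the actual helm chart):

```yaml
spec:
  template:
    spec:
      initContainers:
        - name: db-init    # runs flyway DB migrations, then exits
          image: ghcr.io/department-of-veterans-affairs/abd_vro-db-init:latest
        - name: opa-init   # loads rego policies and permissions, then exits
          image: abd_vro-opa-init:latest   # hypothetical image name
      containers:
        - name: app        # starts only after all init containers succeed
          image: ghcr.io/department-of-veterans-affairs/abd_vro-app:latest
```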
As mentioned in the packages section above, the application-specific container images (`abd_vro-app` and `abd_vro-db-init`) are packaged, tagged, and pushed to Github Container Registry by VRO's Github Actions.
The other (non-`abd_vro`) container images (such as `pg-ready`, `istio-init`, and `istio-proxy`) are provided by LHDI.
Used by the `publish.yml` Github Action to push container images ("packages") to the Github Container Registry -- see Github Actions.
Used to set up a local environment for code development and testing. The `dockerComposeUp` task in `app/build.gradle` starts Docker containers locally using your local code modifications. See Development process for details.
When deploying an image from Docker, use Docker official images. When defining a new image with a `Dockerfile`, use a base image from Docker official images. Why? See LHDI docs. Either use a Docker official image, or have it scanned and signed in the Secure Release process by including it in `.github/secrel/config.yml`.
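For example, a minimal `Dockerfile` built on an official image might look like this (the base image, tag, and customization are illustrative, not from this repo):

```dockerfile
# postgres is a Docker official image (docker.io/library/postgres)
FROM postgres:14

# Hypothetical customization: seed SQL that the official image's
# entrypoint runs on first startup
COPY init_pg.sql /docker-entrypoint-initdb.d/
```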
- Add a new Gradle subproject
  - Include your new service in `settings.gradle`
  - Include your new service in `build.gradle`
- For local development, add your container to `app/docker-compose.yml` so that your container is started when running VRO. This file has several helpers and placeholder vars to ensure consistency between services and containers (rabbitmq, redis, etc.). Note that the service and image names will be prefixed with `svc` and `abd_vro` respectively, e.g.:

  ```yaml
  svc-foo:
    image: va/abd_vro-foo:latest
  ```
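A slightly fuller sketch of such an entry follows; the `depends_on` list and environment variable are hypothetical placeholders, so follow the helpers and patterns already present in `app/docker-compose.yml`:

```yaml
svc-foo:
  image: va/abd_vro-foo:latest
  depends_on:
    - rabbitmq
  environment:
    # hypothetical placeholder var; reuse the helpers defined in the file
    RABBITMQ_HOST: rabbitmq
```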
After creating and updating the files above, run these commands:

```shell
./gradlew build check docker
# to start VRO
./gradlew :dockerComposeUp :app:dockerComposeUp
```

You should see `Container docker-svc-foo-1 Started` in the output as all the containers start up.
To add a new Python microservice with a Docker container, a few steps and files need to be updated.
First, create the new microservice files inside of `service-python`. It's okay if these are stubbed to begin with; full functionality is not needed in order to create the container.
For a microservice named `foo`, you will need to add these files:
- `service-python/foo/__init__.py`
- `service-python/foo/build.gradle` -- This Gradle file can be barebones; all that is needed is `plugins { id 'local.python.container-service-conventions' }` to re-use the common settings for all Python services. Refer to the other `service-python/*/build.gradle` files.
- `service-python/foo/src/requirements.txt` -- the dependency requirements for your service
- `service-python/foo/src/lib/` -- the directory where your Python code will actually live. This will have another `__init__.py` and likely a `main.py`, `utils.py`, `settings.py`, etc., as required by your service.
- (Optional) `service-python/foo/docker-compose.yml` with the necessary services listed (rabbitmq, redis, etc.)
- `service-python/Dockerfile` will be used to create the container image for the service, using files in the `build/docker/` subfolder (populated after running `./gradlew service-python:foo:docker`). Using a custom Dockerfile is possible but will require special Gradle and SecRel `config.yml` configurations.
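The layout above for the hypothetical `foo` service can be scaffolded with stub files like so (a sketch; adjust names to your actual service):

```shell
#!/usr/bin/env bash
# Scaffold stub files for a new python microservice named "foo" (hypothetical).
mkdir -p service-python/foo/src/lib

touch service-python/foo/__init__.py
touch service-python/foo/src/requirements.txt
touch service-python/foo/src/lib/__init__.py
touch service-python/foo/src/lib/main.py

# Barebones build.gradle that reuses the shared python-service conventions
cat > service-python/foo/build.gradle <<'EOF'
plugins { id 'local.python.container-service-conventions' }
EOF
```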
Outside of your specific service, a few other files need to be updated:
- Include your requirements in `service-python/requirements.txt`
- Add your container to VRO configs -- see section above
- Add your container to deployment configs -- see section below
- Update `scripts/image-names.sh`:
  - add the folder's basename to the `IMAGES` Bash array
  - add to the various Bash functions' `case` statements if the folder is not at the project root level
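Conceptually, `scripts/image-names.sh` maps each image basename to its folder, roughly like this simplified sketch (hypothetical; not the real script):

```shell
#!/usr/bin/env bash
# Simplified sketch of the pattern in scripts/image-names.sh (not the real script).
IMAGES=( app db-init foo )   # add your new image's basename here

# Map an image basename to its folder when it isn't at the project root.
image_folder() {
  case "$1" in
    foo) echo "service-python/foo" ;;  # nested folders need a case entry
    *)   echo "$1" ;;                  # default: folder at the project root
  esac
}

image_folder foo   # prints "service-python/foo"
```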
- Run `scripts/image-names.sh` and review the changes
  - If the changes look good, update the files manually or like so:

    ```shell
    cp .github/secrel/config-updated.yml .github/secrel/config.yml
    cp helmchart/values-updated.yaml helmchart/values.yaml
    ```

  - Add the changes, including `scripts/image_vars.src`, and commit them to git
- Update `helmchart/templates/api/deployment.yaml`, which prescribes how VRO is to be deployed to LHDI
- Commit all changes, push, and create a PR
- Run the Deploy-Dev action on the PR's branch
- Once the action completes:
  - Verify that the image ("package") was pushed to GHCR: https://github.com/orgs/department-of-veterans-affairs/packages?tab=packages&q=vro-. Click on your image and note there are 0 downloads.
  - Manually set the new container image to "Inherit access from source repository" as instructed by the LHDI doc.
  - Check for a successful LHDI deployment to the DEV namespace by:
    - looking for non-zero downloads of your package (LHDI should have downloaded it), or
    - connecting to LHDI's EKS cluster, e.g., using the Lens app
GHCR images are associated with the `abd-vro-internal` repo -- see the abd-vro-internal packages:

- `mirror-*` images: added when the Publish 3rd-party image workflow is manually run in the abd-vro-internal repo; these images are unmodified mirrors of the originals
- `vro-*` images: added as part of the SecRel workflow when no prefix is selected
The SecRel workflow has an option to sign the images, which is needed for certain deployment environments. Deployment environments (LHDI clusters) pull images from GHCR:

- DEV and QA do NOT require signed images
- SANDBOX, PROD-TEST, and PROD require signed images. LHDI does not enforce any specific usage of these namespaces other than resource quotas and image signing for the prod and sandbox environments.
In the internal repo, when a GitHub Release is created, the SecRel workflow is triggered; it signs the image in GHCR if the image passes SecRel checks. These signed images can then be deployed to the LHDI environments that require signed images.
Why not use signed images for all environments so that there are fewer images to manage? Because if SecRel fails (for various reasons), the images are not signed. To allow testing to continue despite SecRel failures, unsigned images remain available for deployment to the environments that permit them.