- Run through the setup at least once.
- Make changes to the pipeline image, if desired.
- Make changes to the Tekton pipelines and/or tasks, if desired.
- Test all impacted pipelines.
- Submit a pull request with your changes.
- Git leaks detection
- Prepare a development environment
- Prepare a certification project
- Prepare an Operator bundle
- Prepare your `ci.yaml`
- Create a bundle pull request (optional)
- Required for testing hosted or release pipelines
- Create an API key (optional)
- Required for testing submission with the CI pipeline
- Prepare the CI to run from your fork (optional)
- Required to run integration testing on forks of this repo.
Since the repository contains secrets in the form of encrypted Ansible Vault
files, there is a high chance that a developer may push a commit with
decrypted secrets by mistake. To avoid this problem we recommend using the
Gitleaks tool, which prevents you from committing secrets to the git history.
The repository is already pre-configured, but each developer has to make final configuration changes in their own environment.
Follow the documentation to configure Gitleaks on your computer.
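As a sketch, a local git pre-commit hook can run Gitleaks against staged changes before every commit (the `protect --staged` invocation assumes Gitleaks v8; this illustrates the idea and is not the repo's prescribed setup):
#!/usr/bin/env bash
# Save as .git/hooks/pre-commit and make executable: chmod +x .git/hooks/pre-commit
# Scans only the staged changes; a non-zero exit code aborts the commit.
gitleaks protect --staged --verbose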
You may use any OpenShift 4.7+ cluster (including CodeReady Containers).
The hosted and release pipelines require a considerable number of dependencies which are tedious to configure manually. Luckily, these steps have been automated and can be executed by anyone with access to the Ansible vault password.
Before running this you should ensure you're logged into the correct OpenShift
cluster using `oc`. If already logged into the OpenShift console, an `oc login`
command can be obtained by clicking on your username in the upper right corner
and selecting "Copy login command".
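For example (placeholder values throughout; the token and server URL come from your own cluster, and the variables match the playbook invocation below):
oc login --token=<token> --server=https://api.<cluster-domain>:6443
export ENV=<env>                       # picks ansible/inventory/operator-pipeline-$ENV
export NAMESPACE=<namespace>           # OpenShift project to deploy into
export VAULT_PASSWORD_PATH=<path-to-vault-password-file>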
ansible-playbook -v \
-i "ansible/inventory/operator-pipeline-$ENV" \
-e "oc_namespace=$NAMESPACE" \
-e "ocp_token=`oc whoami -t`" \
--vault-password-file $VAULT_PASSWORD_PATH \
ansible/playbooks/deploy.yml
Cleanup can be performed by specifying the `absent` state for some of the resources.
ansible-playbook -v \
-i "ansible/inventory/operator-pipeline-$ENV" \
-e "oc_namespace=$NAMESPACE" \
-e "ocp_token=`oc whoami -t`" \
-e "namespace_state=absent" \
-e "github_webhook_state=absent" \
--vault-password-file $VAULT_PASSWORD_PATH \
ansible/playbooks/deploy.yml
See the integration tests section in `ci-cd.md`.
You should install the `tkn` CLI which corresponds to the version of the cluster you're using.
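Running `tkn version` prints both the client version and the Pipelines version installed on the cluster, which makes the comparison easy:
tkn version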
It's possible to deploy and test the pipelines from a CodeReady Containers (CRC) cluster for development/testing purposes.
- Install CodeReady Containers
- Install OpenShift Pipelines
- Log in to your cluster with the `oc` CLI. You can run `crc console --credentials` to get the admin login command.
- Create a test project in your cluster:
  oc new-project playground
- Grant the `privileged` SCC to the default `pipeline` service account. The `buildah` task requires the `privileged` security context constraint in order to call `newuidmap`/`newgidmap`. This is only necessary because `runAsUser: 0` is defined in `templates/crc-pod-template.yml`.
  oc adm policy add-scc-to-user privileged -z pipeline
It may be necessary to pass the following `tkn` CLI arg to avoid
permission issues with the default CRC PersistentVolumes.
--pod-template templates/crc-pod-template.yml
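For instance, a pipeline run on CRC might be started like this (a sketch only; the pipeline name and params are illustrative and should be replaced with the ones you are actually testing):
tkn pipeline start operator-ci-pipeline \
  --param git_repo_url=<your-bundle-repo> \
  --param git_branch=<your-branch> \
  --pod-template templates/crc-pod-template.yml \
  --showlog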
A certification project is required for executing all pipelines. In order to avoid collisions with other developers, it's best to create a new one in the corresponding Pyxis environment.
The pipelines depend on the following certification project fields:
{
"project_status": "active",
"type": "Containers",
// Arbitrary name for the project - can be almost anything
"name": "<insert-project-name>",
/*
Either "connect", "marketplace" or "undistributed".
This maps to the `organization` field in the bundle submission repo's config.yaml.
connect -> certified-operators
marketplace -> redhat-marketplace
undistributed -> certified-operators (certified against, but not distributed to)
*/
"operator_distribution": "<insert-distribution>",
// Must correspond to a containerVendor record with the same org_id value.
"org_id": <insert-org-id>,
"container": {
"type": "operator bundle image",
"build_catagories":"Operator bundle",
// Required but always "rhcc"
"distribution_method": "rhcc",
// Always set to true to satisfy the publishing checklist
"distribution_approval": true,
// Must match the github user(s) which opened the test pull requests
"github_usernames": ["<insert-github-username>"],
// Must be unique for the vendor
"repository_name": "<insert-repo-name>"
}
}
You can use a utility script to copy an existing bundle. By default it will copy
a bundle that should avoid common failure conditions such as digest pinning.
View all the customization options by passing `-h` to this command.
./scripts/copy-bundle.sh
You may wish to tweak the generated output to influence the behavior of the pipelines. For example, Red Hat Marketplace Operator bundles may require additional annotations. The pipeline should provide sufficient error messages to indicate what is missing. If such errors are unclear, that is likely a bug which should be fixed.
At the root of your operator package directory (note: not the bundle
version directory) there needs to be a `ci.yaml` file. For development
purposes, it should follow this format in most cases.
---
# Copy this value from the _id field of the certification project in Pyxis.
cert_project_id: <pyxis-cert-project-id>
# Set this to true to allow the hosted pipeline to merge pull requests.
merge: false
It's recommended to open bundle pull requests against the operator-pipelines-test repo. The pipeline GitHub bot account has permissions to manage it.
Note: This repository is only configured for testing certified operators, NOT Red Hat Marketplace operators (see `config.yaml`).
# Check out the pipelines test repo
git clone https://github.com/redhat-openshift-ecosystem/operator-pipelines-test
cd operator-pipelines-test
# Create a new branch
git checkout -b <insert-branch-name>
# Copy your package directory
cp -R <package-dir>/ operators/
# Commit changes
git add -A
# Use this commit pattern so it defaults to the pull request title.
# This is critical to the success of the pipelines.
git commit -m "operator <operator-package-name> (<bundle-version>)"
# Push your branch. Open the pull request using the output.
git push origin <insert-branch-name>
Note: You may need to merge the pull request to use it for testing the release pipeline.
File a ticket with Pyxis admins to assist with this request. It must
correspond to the `org_id` for the certification project under test.
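Once you receive the key, it is typically exposed to the CI pipeline as an OpenShift secret (a sketch; the secret and key names are assumptions, so check the pipeline's documentation for the exact names it expects):
oc create secret generic pyxis-api-secret --from-literal pyxis_api_key=<your-api-key>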
- Avoid including credentials within Task scripts.
- Avoid the use of `set -x` in shell scripts which could expose credentials to the console.
- Don't use workspaces for passing secrets. Use `secretKeyRef` and `volumeMount` with secret and key names instead (see the sketch after this list).
  - Reason: It adds unnecessary complexity to `tkn` commands.
- Use images from trusted registries/namespaces.
  - registry.redhat.io
  - registry.access.redhat.com
  - quay.io/redhat-isv
  - quay.io/opdev
- Use image pull specs with digests instead of tags wherever possible.
- Tasks must implement their own skipping behavior, if needed.
  - Reason: If a task is not executed, any dependent tasks will not be executed either.
- Don't use ClusterTasks or upstream tasks. All tasks are defined in this repo.
- Document all params, especially pipeline params.
- Output human readable logs.
- Use reasonable defaults for params wherever possible.
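To illustrate the secrets guideline above, a Task step can consume a secret via `secretKeyRef` and `volumeMount` roughly as follows (a minimal sketch; the secret name, key, and mount path are placeholders):
steps:
  - name: example-step
    image: registry.access.redhat.com/ubi9/ubi-minimal
    env:
      # Inject a single key from a secret as an environment variable.
      - name: PYXIS_API_KEY
        valueFrom:
          secretKeyRef:
            name: pyxis-api-secret
            key: pyxis_api_key
    volumeMounts:
      # Alternatively, mount the whole secret as files.
      - name: api-secret
        mountPath: /etc/secrets
        readOnly: true
volumes:
  - name: api-secret
    secret:
      secretName: pyxis-api-secret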
You can use the following command to apply all local changes to your OCP project. It will add all the Tekton resources used across all the pipelines.
oc apply -R -f ansible/roles/operator-pipeline/templates/openshift
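Once applied, you can verify that the Tekton resources exist:
tkn pipeline list
tkn task list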
The Operator pipelines project is configured to automatically manage Python
dependencies using the PDM tool. PDM automates the definition, installation,
upgrades, and the whole lifecycle of dependencies in a project. All dependencies
are stored in the `pyproject.toml` file in groups that correspond to the individual
applications within the Operator pipelines project.
Adding, removing, and updating dependencies always needs to be done
using the `pdm` CLI.
pdm add -G operator-pipelines gunicorn==20.1.0
After a dependency is installed, it is added to the `pdm.lock` file. The lock file is always part of the git repository.
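Removing or updating a dependency follows the same pattern (standard `pdm` subcommands):
pdm remove -G operator-pipelines gunicorn
pdm update -G operator-pipelines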
If you want to install a specific group of dependencies, use the following command:
pdm install -G operator-pipelines
Dependencies are installed into a virtual environment (`.venv`) which is automatically
created after `pdm install`. If `.venv` wasn't created, configure pdm to
automatically create it during installation with `pdm config python.use_venv true`.
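If you ever need to recreate the environment from scratch, the following sequence works (using pdm's `venv` subcommand):
pdm config python.use_venv true
pdm venv create 3.12
pdm install
source .venv/bin/activate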
Before running the tests locally, the environment needs to be prepared. Choose the preparation process according to your operating system.
sudo dnf -y install hadolint
python3 -m pip install pdm
pdm venv create 3.12
pdm install
source .venv/bin/activate
python3 -m pip install ansible-lint
Before starting, make sure you have installed the Homebrew package manager.
brew install hadolint
python3 -m pip install pdm
pdm venv create 3.12
pdm install
source .venv/bin/activate
python3 -m pip install ansible-lint
To run unit tests and code style checkers:
tox
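Tox runs all configured environments by default; a single environment can be selected with `-e` (the environment name below is a placeholder, list the real ones with `tox -l`):
tox -l             # list available environments
tox -e <env-name>  # run one environment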
Set up a Python virtual environment using pdm.
pdm venv create 3.12
pdm install
source .venv/bin/activate
- Ensure you have buildah installed.
- Build the image:
  buildah bud
- Push the image to a remote registry, e.g. Quay.io. This step may require login, e.g. `buildah login quay.io`.
  buildah push <image-digest-from-build-step> <remote-repository>
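Putting the steps together (a sketch; the image repository and tag are placeholders rather than the project's published image):
buildah bud -t quay.io/<your-namespace>/operator-pipeline-images:dev .
buildah login quay.io
buildah push quay.io/<your-namespace>/operator-pipeline-images:dev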