Functionality demonstrated in this repository

While it would be possible to limit the konflux-sample repositories to illustrating a single piece of functionality or a single tip, this repository combines many different changes in order to make it easier to understand and maintain. The key customizations and changes made are:

Using a pipelineRef to unify on a common PipelineDefinition

By default, Konflux will push two Tekton PipelineRun files (one for pull requests, one for pushes) for PAC to trigger on the cluster. Since these files share a large portion of their specification, maintenance and customization can become harder, especially if multiple artifacts in a repository are built with effectively the same pipeline.

There are two common pipelines defined in this repository, single-arch-build-pipeline and multi-arch-build-pipeline. Each of the PipelineRuns used by PAC then refers to one of the pipeline definitions with a pipelineRef, for example:

  pipelineRef:
    name: multi-arch-build-pipeline
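
As a rough sketch (the component name, image reference, and parameters below are placeholders rather than values taken from this repository), a PipelineRun built on this pattern keeps only the component-specific pieces:

apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: example-component-pull-request      # placeholder component name
spec:
  pipelineRef:
    name: multi-arch-build-pipeline         # the shared Pipeline defined once under .tekton/
  params:
    - name: output-image                    # component-specific values stay in the PipelineRun
      value: quay.io/example/example-component:on-pr-{{revision}}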

Building artifacts on multiple platforms

Tekton Pipelines are collections of Tasks which run in pods on the cluster. The default Tekton PipelineRun that is pushed to repositories only builds images for a single architecture -- the cluster's. With the default docker-build-oci-ta pipeline, image artifacts are therefore built and pushed to the registry only as an Image Manifest. These images do not get an Image Index by default, which is what enables a container execution environment to pull the image that matches the running system.

NOTE: An image index can be generated in the default pipelines by setting the ALWAYS_BUILD_INDEX parameter to "true".
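
For example, the parameter can be set in the params section of the PipelineRun (only the relevant entry is shown):

  params:
    - name: ALWAYS_BUILD_INDEX
      value: "true"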

Since it is not currently possible to build on multiple architectures natively from within Tekton, a custom controller was developed for native building on multiple platforms in Konflux.

The previous version of multi-arch support was added in PR#4, but was converted to be based on the system default pipeline in PR#69 (i.e. using a matrix).

The pipeline definition supporting these builds in this repository is multi-arch-build-pipeline. Build tasks can be triggered on any remote virtual machines that are configured for the Konflux deployment via the PLATFORM parameter. A remote build is triggered for each specific architecture and then an OCI Image Index is generated referencing each of the architecture-specific Image Manifests.

NOTE: The platforms available for building container images are defined by the Konflux deployment. Since this repository uses a Red Hat deployment, the available options are listed in the host configuration. A value from the local-platforms runs the build in a pod in the cluster, a value from the dynamic-platforms runs the build in an appropriate ephemeral VM, and the static hosts run the build in an isolated namespace on a shared VM.
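
As a sketch of how this looks inside a pipeline definition (the task name and platform strings are illustrative; the valid values come from the deployment's host configuration), the build task is fanned out with a Tekton matrix over the PLATFORM parameter:

    - name: build-images
      matrix:
        params:
          - name: PLATFORM
            value:
              - linux/x86_64
              - linux/arm64
      taskRef:
        name: buildah-remote-oci-ta   # remote build task; name shown for illustration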

We now produce a multi-platform pipeline in konflux-ci/build-definitions and push a Tekton bundle to quay.io/konflux-ci.

NOTE: Once the version of Tekton in the Konflux deployment has been updated to include a fix for size 1 matrices, it will be possible to use the same pipeline definition for single-arch and multi-arch pipelines by setting ALWAYS_BUILD_INDEX to "false" for single-arch builds.

Trusted artifacts, removing the need for PVCs

The initial onboarding PRs from Konflux proposed a Tekton Pipeline that utilized PersistentVolumeClaims (PVCs) for a shared workspace between tasks (see gatekeeper-operator for an example). While these pipelines work for building artifacts, the approach has a few primary limitations:

  • PVCs often have a quota within a workspace due to the infrastructure backing them. Removing the dependence on PVCs (and therefore on their quota) increases the number of PipelineRuns that can progress in parallel.
  • All TaskRuns utilizing a shared PVC-backed workspace need to run on the same node. This prevents k8s from scheduling each TaskRun in a way that effectively distributes workloads, especially when those workloads are long-running pipelines.
  • When content is shared between tasks in a PVC, the only way to ensure that the data hasn't been tampered with is to disallow any custom tasks that might access the workspace. This means that users who want to continue passing Enterprise Contract policy verification would be unable to add custom tasks to the pipeline, such as running unit tests against the repository or the built artifacts.

Since data still needs to be shared between tasks, however, an alternative to PVCs is needed. Konflux enables pipelines to leverage OCI-backed trusted artifacts. Instead of storing data in a PVC, tasks in Konflux can use the trusted artifact image to create and consume OCI images that pass data between tasks. Since the digests of these images are recorded in the generated provenance, the integrity of the data's chain of custody can be verified.
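
A sketch of the pattern follows; the task and parameter names mirror the -oci-ta conventions used in konflux-ci/build-definitions but should be treated as illustrative here. The producing task records its data as an OCI artifact and emits a reference, which the consuming task takes as a parameter instead of mounting a shared workspace:

    - name: clone-repository
      taskRef:
        name: git-clone-oci-ta                 # produces a SOURCE_ARTIFACT result (an OCI reference)
      params:
        - name: ociStorage                     # repository where the trusted artifact is stored
          value: $(params.output-image).git
    - name: build-container
      taskRef:
        name: buildah-oci-ta
      params:
        - name: SOURCE_ARTIFACT                # consumes the artifact by digest, no PVC involved
          value: $(tasks.clone-repository.results.SOURCE_ARTIFACT)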

Utilization of on-cel-expressions for reducing redundant builds

After properly configuring the cel expressions, a PR which updates content that only affects a single component (for example, the only single-arch component) should not trigger builds for the rest of the components. This can be achieved by modifying the default on-cel-expressions to filter events based on the files changed. For example, the expression for the gatekeeper-operator-bundle component built above is:

    pipelinesascode.tekton.dev/on-cel-expression: event == "pull_request" && target_branch == "main" && 
      (".tekton/single-arch-build-pipeline.yaml".pathChanged() || 
      ".tekton/gatekeeper-operator-bundle-pull-request.yaml".pathChanged() || 
      ".tekton/gatekeeper-operator-bundle-push.yaml".pathChanged() || 
      "Containerfile.gatekeeper-operator-bundle".pathChanged() ||
      "gatekeeper-operator".pathChanged())

When adding a filter for a directory, use the "directory/***".pathChanged() format. When adding a filter for a git submodule, use the "submodule".pathChanged() format.
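
For example, a clause covering a hypothetical vendored directory and the gatekeeper-operator submodule would look like this (the directory path is illustrative):

      ("config/***".pathChanged() ||
      "gatekeeper-operator".pathChanged())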

This indicates that the PipelineRun will only be executed if the event is a PR targeting the main branch and one of the five specified paths was included in the changeset. Documentation for filtering events can be found in Pipelines as Code.

While this results in improved resource utilization (lower cloud spend), it does mean that not all artifacts will be built from every commit. If you need to identify the specific commit which produced an artifact, you can query the artifact's attestations for the build parameters.

Customization of files for component nudging

The gatekeeper and gatekeeper-operator "push" pipelines both specify the file in which to nudge component references via the build.appstudio.openshift.io/build-nudge-files annotation. Since the component references live in an atypical location (i.e. not a Containerfile or yaml file), we need to configure this annotation. Once it is set, newly built operand and operator images will trigger a pull request against the update_bundle.sh file (i.e. #21 and #22).
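
The annotation on a push PipelineRun looks roughly like this (only the relevant metadata is shown):

  metadata:
    annotations:
      build.appstudio.openshift.io/build-nudge-files: "update_bundle.sh"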

Preventing build process drift in submodules

Git submodules are a convenient way to vendor source code, especially if you want to build those repositories without maintaining a fork and carrying your own build-specific configuration through every sync. One disadvantage of submodules, however, is that the updates they pull in can be opaque, which can easily result in drift between your build process and that of the original repository. This becomes harder to track when an external tool like Renovate automatically suggests updates to the submodules.

This functionality is documented further in #71 as well as in konflux-onboarding.md.

Add Renovate configuration

When onboarding to Konflux, a Renovate instance helps keep all of your references up to date. You might not want all of the functionality that is enabled by default. The best practice is to extend the original configuration while disabling the undesired functionality with as narrow a scope as possible. This ensures that you still get timely updates for any dependencies, including those that resolve CVEs. This is described further in konflux-onboarding.md.