Version 1.x.x is about to be released soon; read more about migrating from an older version here.
While 1.x.x is out as a pre-release, we will maintain 2 branches:
- `master` - the previous version (version < 1.0.0). We will keep maintaining it (bug fixes, security issues). This is the version installed when installing Venona on MacOS using brew; `codefresh/venona:latest` will refer to this branch.
- `release-1.0` - the new release, which will be used when running the Codefresh CLI to install the agent.

We highly suggest using the official Codefresh CLI to install the agent:
```
kubectl create namespace codefresh
codefresh install agent --kube-namespace codefresh --install-runtime
```
The last command will:
- Install the agent in the `codefresh` namespace
- Install the runtime in the same namespace
- Attach the runtime to the agent
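To verify the installation went through, you can check that the agent and runtime pods are up (a minimal sketch; it assumes you kept the default `codefresh` namespace from the commands above):

```
kubectl get pods -n codefresh
```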
For advanced users, it is still possible to install everything manually; one Venona process can manage multiple runtime environments.
NOTE: Please make sure that the process where Venona is installed has a network connection to the clusters where the runtimes will be installed. A quick connectivity check is sketched below, followed by a full manual installation example.
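One way to sanity-check that connectivity before starting (a sketch; the context names are placeholders, and this only proves your workstation can reach each cluster - the Venona pod itself must also be able to reach the runtime clusters' API endpoints from inside the agent cluster):

```
# List the contexts configured in your kubeconfig
kubectl config get-contexts
# Probe each runtime cluster's API server through its context
kubectl --context runtime-cluster-1 cluster-info
kubectl --context runtime-cluster-2 cluster-info
```

A full manual installation, for example: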
```
# 1. Create namespace for the agent:
kubectl create namespace codefresh-agent
# 2. Install the agent on the namespace (give your agent a unique name):
# The create command prints a token that the Venona process will be using.
codefresh create agent $NAME
codefresh install agent --token $TOKEN --kube-namespace codefresh-agent
# 3. Create namespace for the first runtime:
kubectl create namespace codefresh-runtime-1
# 4. Install the first runtime on the namespace
# 5. The runtime name is printed
codefresh install runtime --kube-namespace codefresh-runtime-1
# 6. Attach the first runtime to agent:
codefresh attach runtime --agent-name $AGENT_NAME --agent-kube-namespace codefresh-agent --runtime-name $RUNTIME_NAME --runtime-kube-namespace codefresh-runtime-1
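# (Sketch, not part of the original steps: one way to set $VENONA_POD for
# step 7 below. The label selector app=venona is an assumption - verify the
# actual label on the agent pod first.)
VENONA_POD=$(kubectl get pods -n codefresh-agent -l app=venona -o jsonpath='{.items[0].metadata.name}')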
# 7. Restart the Venona pod in namespace codefresh-agent:
kubectl delete pods $VENONA_POD --namespace codefresh-agent
# 8. Create namespace for the second runtime
kubectl create namespace codefresh-runtime-2
# 9. Install the second runtime on the namespace
codefresh install runtime --kube-namespace codefresh-runtime-2
# 10. Attach the second runtime to the agent and restart the Venona pod automatically:
codefresh attach runtime --agent-name $AGENT_NAME --agent-kube-namespace codefresh-agent --runtime-name $RUNTIME_NAME --runtime-kube-namespace codefresh-runtime-2 --restart-agent
```
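Afterwards both runtimes should be listed for your account; a quick check (assuming the CLI's listing command is available in your version):

```
codefresh get runtime-environments
```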
Migrating from Venona < 1.0.0 to >= 1.0.0 is not done automatically. Please use the migration script to do that, and check which environment variables are required to run it:
```
# This script migrates old versions of the Venona installation (version < 1.0.0) to the new version (version >= 1.0.0).
# Please read carefully what the script does.
# There will be a "downtime" in terms of your builds targeted to this runtime environment.
# Once the script has finished, all the builds submitted during the downtime will start.
# The script will:
# 1. Create a new agent entity in Codefresh using the Codefresh CLI - give it a name with $CODEFRESH_AGENT_NAME, default is "codefresh"
# 2. Install the agent on your cluster - pass the variables:
#    a. $VENONA_KUBE_NAMESPACE - required
#    b. $VENONA_KUBE_CONTEXT - default is the current context
#    c. $VENONA_KUBECONFIG_PATH - default is $HOME/.kube/config
# 3. Attach the runtime to the new agent (downtime ends) - pass $CODEFRESH_RUNTIME_NAME - required
```
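An invocation might look like the following (a sketch: the script path `./migration.sh` and the value `my-runtime` are placeholders; the variable names come from the comments above):

```
export CODEFRESH_AGENT_NAME="codefresh"                          # optional, this is the default
export VENONA_KUBE_NAMESPACE="codefresh-runtime-1"               # required
export VENONA_KUBE_CONTEXT="$(kubectl config current-context)"   # optional
export VENONA_KUBECONFIG_PATH="$HOME/.kube/config"               # optional
export CODEFRESH_RUNTIME_NAME="my-runtime"                       # required, hypothetical value
./migration.sh   # placeholder - point this at the actual migration script
```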
- Kubernetes - used to create resources in your K8S cluster
  - Kube version > 1.10 (instructions to install on cluster versions < 1.10)
  - Disk size 50GB per node
- Codefresh - used to create resources in Codefresh
  - An authenticated context must exist under `$HOME/.cfconfig`, or authenticate with the Codefresh CLI
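A quick pre-flight check along these lines can confirm both prerequisites (a sketch; the API key placeholder is hypothetical and comes from your Codefresh account settings):

```
# Confirm the cluster version meets the > 1.10 requirement
kubectl version --short
# Authenticate the Codefresh CLI if $HOME/.cfconfig has no context yet
codefresh auth create-context --api-key <API_KEY>
```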
- Download Venona's binary. With Homebrew:

```
brew tap codefresh-io/venona
brew install venona
```
- Make sure the `PersistentLocalVolumes` feature gate is turned on.
- Venona's agent tries to load the available APIs using the `/openapi/v2` endpoint. Add this endpoint to the ClusterRole `system:discovery` under `rules[0].nonResourceURLs`, as in the example below (a patch sketch follows it):
```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:discovery
rules:
- nonResourceURLs:
  - ...other_resources
  - /openapi
  - /openapi/*
  verbs:
  - get
```
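One way to apply this without hand-editing the YAML is a JSON patch (a sketch; it assumes the nonResourceURLs live in `rules[0]` as in the example above, and note that the API server may auto-reconcile `system:discovery` unless its `rbac.authorization.kubernetes.io/autoupdate` annotation is set to "false"):

```
kubectl patch clusterrole system:discovery --type='json' -p='[
  {"op": "add", "path": "/rules/0/nonResourceURLs/-", "value": "/openapi"},
  {"op": "add", "path": "/rules/0/nonResourceURLs/-", "value": "/openapi/*"}
]'
```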
- Make sure your user has the `Kubernetes Engine Cluster Admin` role in the Google console.
- Bind your user to the `cluster-admin` Kubernetes ClusterRole:

```
kubectl create clusterrolebinding NAME --clusterrole cluster-admin --user YOUR_USER
```
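For example, binding the account that gcloud is currently authenticated with (a sketch; the binding name `cluster-admin-binding` is arbitrary):

```
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole cluster-admin \
  --user "$(gcloud config get-value account)"
```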
Installing Venona on a Kubernetes cluster creates 2 groups of objects. Each group has its own RBAC needs, and therefore its own roles (and cluster-roles) are created. The resource descriptors are available here. List of the resources that will be created:
- Agent (grouped by `/.*.venona.yaml/`)
  - `service-account.venona.yaml` - the service account that the Venona pod will use to create resources in the runtime namespace (the resources installed in the runtime namespace)
  - `role.venona.yaml` - allows GET, CREATE and DELETE on pods and persistentvolumeclaims
  - `role-binding.venona.yaml` - the agent spins up pods and PVCs; this binding binds `role.venona.yaml` to `service-account.venona.yaml`
  - `cluster-role-binding.venona.yaml` - the agent discovers K8S APIs by calling the `/openapi/v2` endpoint; this ClusterRoleBinding binds the ClusterRole bootstrapped by Kubernetes, `system:discovery`, to `service-account.venona.yaml`. This role only has permission to make GET calls to non-resource URLs.
- Runtime-environment (grouped by `/.*.re.yaml/`) - a Kubernetes controller that spins up all the resources required to provide a good caching experience during pipeline execution
  - `service-account.dind-volume-provisioner.re.yaml` - the service account that the controller will use
  - `cluster-role.dind-volume-provisioner.re.yaml` - defines all the permissions needed for the controller to operate correctly
  - `cluster-role-binding.dind-volume-provisioner.re.yaml` - binds the ClusterRole to `service-account.dind-volume-provisioner.re.yaml`
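To review what was actually created after an installation, something along these lines works (a sketch; the namespaces assume the names used earlier in this README, and the grep pattern assumes the resource names contain the descriptor names):

```
kubectl get serviceaccounts,roles,rolebindings -n codefresh-agent
kubectl get serviceaccounts,roles,rolebindings -n codefresh-runtime-1
kubectl get clusterroles,clusterrolebindings | grep -iE 'venona|dind-volume-provisioner'
```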