diff --git a/README.md b/README.md
index 68c2f7f..8a2d1b1 100644
--- a/README.md
+++ b/README.md
@@ -6,44 +6,45 @@ Create an [OpenShift 4.X](https://github.com/openshift/installer) cluster in a s
 Images are built and maintained in openshift-gce-devel GCE project. If you are an OpenShift developer and have access to openshift-gce-devel GCE project, all you need to get started is the [gcloud CLI tool](https://cloud.google.com/sdk/docs/downloads-yum)
-For developers not in OpenShift organization, see [here](https://github.com/ironcladlou/openshift4-libvirt-gcp/blob/rhel8/IMAGES.md)
-for information on images, and substitute `your-gce-project` for `openshift-gce-devel` in all scripts.
+
+For developers not in the OpenShift organization, see [the centos8-okd4 branch](https://github.com/ironcladlou/openshift4-libvirt-gcp/blob/centos8-okd4/README.md).
+For information on images, see [IMAGES.md](https://github.com/ironcladlou/openshift4-libvirt-gcp/blob/rhel8/IMAGES.md).

 ### Create GCP instance

-First, create network and firewall rules in GCP and then the GCP instance.
 ```
-Note: this script uses scp to copy pull-secret to gcp instance. Alternative is to
+Note: This script uses scp to copy the pull-secret to the gcp instance. An alternative is to
 add pull-secret to metadata when creating the instance. However, metadata is
 printed in the gcp console. This is why this setup uses scp instead.
 ```
-You can either run the commands from `create-gcp-resources.sh` individually or run the script like so:
+You'll need a network and firewall rules to connect to your instance. This is already set up for developers in
+`openshift-gce-devel`: the network is `ocp4-libvirt-dev`. The script `create-gcp-instance.sh` will launch an instance using this network.
+If you prefer, you can run `create-network-and-subnet.sh` and then the commands from `create-gcp-instance.sh` to create an instance
+in a different network. To use the preconfigured network `ocp4-libvirt-dev`, run the script like so:

 ```shell
 $ export INSTANCE=mytest
 $ export GCP_USER=, used to scp pull-secret to $HOME/pull-secret in gcp instance
 $ export PULL_SECRET=/path/to/pull-secret-one-liner.json
-$ ./create-gcp-resources.sh
+$ ./create-gcp-instance.sh
 ```

 ### Find an available release payload image

-You need to provide an OVERRIDE release payload that you have access to.
+You need to provide a release payload that you have access to. This setup will extract the installer binary from the
+`RELEASE_IMAGE` you provide.
 For public images see [ocp-dev-preview](https://mirror.openshift.com/pub/openshift-v4/clients/ocp-dev-preview/) and [quay.io/ocp-dev-preview](https://quay.io/repository/openshift-release-dev/ocp-release?tab=tags)
 or for internal images see [CI release images](https://openshift-release.svc.ci.openshift.org/)

-example release images:
-* quay.io/openshift-release-dev/ocp-release:4.4.0-rc.2-x86_64 (public)
-* registry.svc.ci.openshift.org/ocp/release:4.5.0-0.ci-2020-03-16-194422 (internal)

 ### Create nested libvirt cluster - 3 masters, 2 workers

 Connect to the instance using SSH and create a cluster named `$CLUSTER_NAME` using latest payload built from CI.
-Install directory will be populated at `$HOME/clusters/$CLUSTER_NAME`
+Install directory will be populated at `$HOME/clusters/$CLUSTER_NAME`.

 ```shell
 $ gcloud beta compute ssh --zone "us-east1-c" $INSTANCE --project "openshift-gce-devel"
-$ OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE=registry.svc.ci.openshift.org/ocp/release:whatever create-cluster $CLUSTER_NAME
+$ RELEASE_IMAGE=registry.svc.ci.openshift.org/ocp/release:whatever create-cluster $CLUSTER_NAME
 ```

 ### Tear Down Cluster
@@ -56,10 +57,9 @@ $ openshift-install destroy cluster --dir ~/clusters/$ClUSTER_NAME && rm -rf ~/c

 ### Tear Down and Clean Up GCP.

-Clean up your GCP resources when you are done with your development cluster.
-Check out `teardown-gcp.sh` for individual commands or run the script like so:
+Clean up your GCP instance when you are done with your development cluster. To delete the instance:
 ```shell
-$ INSTANCE= ./teardown-gcp.sh
+$ gcloud compute instances delete INSTANCE_NAME
 ```

 Interact with your cluster with `oc` while connected via ssh to your gcp instance.
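The hunk above drops the list of example release payloads from the README. For reference only (not part of the patch), here is a hedged example of the new `create-cluster` invocation using the public payload that used to be listed; `mycluster` is a placeholder cluster name, and any payload your pull secret can access works:

```shell
# On the GCP instance, after ssh'ing in:
$ RELEASE_IMAGE=quay.io/openshift-release-dev/ocp-release:4.4.0-rc.2-x86_64 create-cluster mycluster

# During the CI transition, tools/create-cluster (see its hunk below) still
# accepts the old variable and maps it to RELEASE_IMAGE:
$ OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE=quay.io/openshift-release-dev/ocp-release:4.4.0-rc.2-x86_64 create-cluster mycluster
```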
diff --git a/create-gcp-resources.sh b/create-gcp-resources.sh
index 5ab4f2b..7931f2b 100755
--- a/create-gcp-resources.sh
+++ b/create-gcp-resources.sh
@@ -20,20 +20,7 @@ set -euo pipefail
 export ZONE=$(gcloud config get-value compute/zone)
 export PROJECT=$(gcloud config get-value project)

-echo_bright "Creating network ${INSTANCE}"
-gcloud compute networks create "${INSTANCE}" \
-  --subnet-mode=custom \
-  --bgp-routing-mode=regional
-
-echo_bright "Creating subnet for network ${INSTANCE}"
-gcloud compute networks subnets create "${INSTANCE}" \
-  --network "${INSTANCE}" \
-  --range=10.0.0.0/9
-
-echo_bright "Creating firewall rules for network ${INSTANCE}"
-gcloud compute firewall-rules create "${INSTANCE}" \
-  --network "${INSTANCE}" \
-  --allow tcp:22,icmp
+export NETWORK=ocp4-libvirt-dev

 echo_bright "Creating instance ${INSTANCE} in project ${PROJECT}"
 gcloud compute instances create "${INSTANCE}" \
@@ -42,8 +29,8 @@ gcloud compute instances create "${INSTANCE}" \
   --min-cpu-platform "Intel Haswell" \
   --machine-type n1-standard-16 \
   --boot-disk-type pd-ssd --boot-disk-size 256GB \
-  --network "${INSTANCE}" \
-  --subnet "${INSTANCE}"
+  --network "${NETWORK}" \
+  --subnet "${NETWORK}"

 echo_bright "Using scp to copy pull-secret to /home/${GCP_USER}/pull-secret in instance ${INSTANCE}"
 timeout 45s bash -ce 'until \
@@ -56,5 +43,5 @@ echo "${bold}All resources successfully created${reset}"
 echo "${bold}Use this command to ssh into the VM:${reset}"
 echo_bright "gcloud beta compute ssh --zone ${ZONE} ${INSTANCE} --project ${PROJECT}"
 echo ""
-echo "${bold}To clean up all resources from this script, run:${reset}"
-echo_bright "INSTANCE=${INSTANCE} ./teardown-gcp.sh"
+echo "${bold}To delete the instance, run:${reset}"
+echo_bright "gcloud compute instances delete ${INSTANCE}"
diff --git a/create-network-and-subnet.sh b/create-network-and-subnet.sh
new file mode 100755
index 0000000..0f807de
--- /dev/null
+++ b/create-network-and-subnet.sh
@@ -0,0 +1,36 @@
+#!/bin/bash
+
+bold=$(tput bold)
+bright=$(tput setaf 14)
+reset=$(tput sgr0)
+
+echo_bright() {
+  echo "${bold}${bright}$1${reset}"
+}
+
+echo "${bold}Creating GCP resources${reset}"
+if [[ -z "$ID" ]]; then
+  echo the following environment variables must be provided:
+  echo "\$ID to name gcp network and subnet"
+  exit 1
+fi
+set -euo pipefail
+
+export ZONE=$(gcloud config get-value compute/zone)
+export PROJECT=$(gcloud config get-value project)
+echo_bright "Creating network ${ID}"
+gcloud compute networks create "${ID}" \
+  --subnet-mode=custom \
+  --bgp-routing-mode=regional
+
+echo_bright "Creating subnet for network ${ID}"
+gcloud compute networks subnets create "${ID}" \
+  --network "${ID}" \
+  --range=10.0.0.0/9
+
+echo_bright "Creating firewall rules for network ${ID}"
+gcloud compute firewall-rules create "${ID}" \
+  --network "${ID}" \
+  --allow tcp:22,icmp
+
+echo "${bold}Network and subnet successfully created${reset}"
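A short sketch (not part of the patch) of using the new `create-network-and-subnet.sh` for a non-default network; `mynet` is a hypothetical name, and the cleanup commands mirror the ones removed from `teardown-gcp.sh` further down, since that script now deletes only the instance:

```shell
# Create a dedicated network, subnet, and firewall rule, all named "mynet":
$ ID=mynet ./create-network-and-subnet.sh

# Launch the instance with the gcloud command from create-gcp-resources.sh,
# substituting --network "mynet" --subnet "mynet" for the ${NETWORK} defaults.

# When finished, remove the network resources by hand:
$ gcloud compute firewall-rules delete mynet --quiet
$ gcloud compute networks subnets delete mynet --quiet
$ gcloud compute networks delete mynet --quiet
```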
diff --git a/provision.sh b/provision.sh
index dad9c75..fc4cd2c 100755
--- a/provision.sh
+++ b/provision.sh
@@ -91,9 +91,6 @@ rm -fr oc.tar.gz
 sudo mv $HOME/oc /usr/local/bin
 sudo ln -s /usr/local/bin/oc /usr/local/bin/kubectl

-# Install a default installer
-update-installer
-
 sudo bash -c 'cat >> /etc/bashrc' << EOF
 export KUBECONFIG=\$HOME/clusters/nested/auth/kubeconfig
 export PATH=$PATH:/usr/local/go/bin
diff --git a/teardown-gcp.sh b/teardown-gcp.sh
index 26234db..b7b20b1 100755
--- a/teardown-gcp.sh
+++ b/teardown-gcp.sh
@@ -12,7 +12,7 @@ echo_bright "Cleaning up GCP"
 if [[ -z "$INSTANCE" ]]; then
   echo "\$INSTANCE must be provided"
 fi
-echo "This script will remove all ${bright}${bold}$INSTANCE${reset} GCP resources"
+echo "This script will remove the ${bright}${bold}$INSTANCE${reset} GCP instance."
 echo "${bold}Do you want to continue (Y/n)?${reset}"
 read x
 if [ "$x" != "Y" ]; then
@@ -20,6 +20,3 @@ if [ "$x" != "Y" ]; then
 fi
 set -x
 gcloud compute instances delete "${INSTANCE}" --quiet
-gcloud compute firewall-rules delete "${INSTANCE}" --quiet
-gcloud compute networks subnets delete "${INSTANCE}" --quiet
-gcloud compute networks delete "${INSTANCE}" --quiet
diff --git a/tools/create-cluster b/tools/create-cluster
index de218b8..6f182cc 100755
--- a/tools/create-cluster
+++ b/tools/create-cluster
@@ -6,16 +6,27 @@ if [ -z "$NAME" ]; then
   exit 1
 fi

+# TODO: only need RELEASE_IMAGE, but temporarily need both while we transition in CI
+if [[ -z "$RELEASE_IMAGE" ]] && [[ -z "$OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE" ]]; then
+  echo "either \$RELEASE_IMAGE or \$OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE must be provided"
+  exit 1
+fi
+
+if [[ ! -z "$OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE" ]]; then
+  export RELEASE_IMAGE="${OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE}"
+  unset OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE
+fi
+
+# extract libvirt installer from release image
+oc adm release extract -a ~/pull-secret --command openshift-baremetal-install "${RELEASE_IMAGE}"
+sudo mv openshift-baremetal-install /usr/local/bin/openshift-install
+
 CLUSTER_DIR="${HOME}/clusters/${NAME}"
 if [ -d "${CLUSTER_DIR}" ]; then
   echo "WARNING: cluster ${NAME} already exists at ${CLUSTER_DIR}"
 else
   mkdir -p ${CLUSTER_DIR}
 fi
-if [[ -z "$OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE" ]]; then
-  echo "\$OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE must be provided"
-  exit 1
-fi
 # Generate a default SSH key if one doesn't exist
 SSH_KEY="${HOME}/.ssh/id_rsa"
 if [ ! -f $SSH_KEY ]; then
diff --git a/tools/update-installer b/tools/update-installer
index 28d5f22..f35e1f5 100755
--- a/tools/update-installer
+++ b/tools/update-installer
@@ -4,6 +4,8 @@ set -u
 set -o pipefail

 echo "Building installer"
+echo "You should always extract the installer from the release payload."
+echo "Building the installer from source may result in a failed launch."

 REPO_OWNER="${1:-openshift}"
 BRANCH="${2:-master}"
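For reference, a hedged sketch of what `tools/create-cluster` now does to obtain the installer, run by hand on the GCP instance; `~/pull-secret` is where the create script copies your pull secret, and `$RELEASE_IMAGE` is assumed to be exported:

```shell
# Extract the baremetal installer binary from the release payload, install it
# as openshift-install, and confirm which version it reports:
$ oc adm release extract -a ~/pull-secret --command openshift-baremetal-install "${RELEASE_IMAGE}"
$ sudo mv openshift-baremetal-install /usr/local/bin/openshift-install
$ openshift-install version
```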