extract installer binary, instead of OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE and use pre-configured network.
sallyom committed Jun 27, 2020
1 parent e1e2ea9 commit abf1af5
Showing 7 changed files with 74 additions and 41 deletions.
27 changes: 15 additions & 12 deletions README.md
@@ -6,29 +6,33 @@ Create an [OpenShift 4.X](https://github.com/openshift/installer) cluster in a s

Images are built and maintained in openshift-gce-devel GCE project. If you are an OpenShift developer and have access to openshift-gce-devel GCE project,
all you need to get started is the [gcloud CLI tool](https://cloud.google.com/sdk/docs/downloads-yum)
-For developers not in OpenShift organization, see [here](https://github.com/ironcladlou/openshift4-libvirt-gcp/blob/rhel8/IMAGES.md)
-for information on images, and substitute `your-gce-project` for `openshift-gce-devel` in all scripts.

+For developers not in OpenShift organization, see [the centos8-okd4 branch](https://github.com/ironcladlou/openshift4-libvirt-gcp/blob/centos8-okd4/README.md).
+For information on images, see [IMAGES.md](https://github.com/ironcladlou/openshift4-libvirt-gcp/blob/rhel8/IMAGES.md).

### Create GCP instance

First, create network and firewall rules in GCP and then the GCP instance.
```
-Note: this script uses scp to copy pull-secret to gcp instance. Alternative is to
+Note: This script uses scp to copy the pull-secret to the gcp instance. An alternative is to
add pull-secret to metadata when creating the instance. However, metadata is printed
in the gcp console. This is why this setup uses scp instead.
```
-You can either run the commands from `create-gcp-resources.sh` individually or run the script like so:
+You'll need a network and firewall rules to connect to your instance. For developers in
+`openshift-gce-devel`, the `ocp4-libvirt-dev` network is already set up. The script `create-gcp-instance.sh` launches an instance in this network.
+If you prefer, run `create-network-and-subnet.sh` and then the commands from `create-gcp-instance.sh` to create an instance
+in a different network. To use the preconfigured network `ocp4-libvirt-dev`, run the script like so:

```shell
$ export INSTANCE=mytest
$ export GCP_USER=<name you log in as on the gcp instance>  # used to scp pull-secret to $HOME/pull-secret on the instance
$ export PULL_SECRET=/path/to/pull-secret-one-liner.json
-$ ./create-gcp-resources.sh
+$ ./create-gcp-instance.sh
```
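If the scp step in the script fails or times out, a manual copy along these lines should work. This is a sketch, not the script's exact command: the destination filename and flags are assumptions, so adjust them to match your setup.

```shell
# Copy the pull secret to the instance over gcloud's ssh wrapper.
# (Hypothetical sketch; zone/project are read from your gcloud config.)
gcloud compute scp "${PULL_SECRET}" "${GCP_USER}@${INSTANCE}:pull-secret" \
  --zone "$(gcloud config get-value compute/zone)" \
  --project "$(gcloud config get-value project)"
```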

### Find an available release payload image

-You need to provide an OVERRIDE release payload that you have access to.
+You need to provide a release payload that you have access to. This setup will extract the installer binary from the
+RELEASE_IMAGE you provide.
For public images see [ocp-dev-preview](https://mirror.openshift.com/pub/openshift-v4/clients/ocp-dev-preview/) and
[quay.io/ocp-dev-preview](https://quay.io/repository/openshift-release-dev/ocp-release?tab=tags)
or for internal images see [CI release images](https://openshift-release.svc.ci.openshift.org/)
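Before launching, you can sanity-check that the payload is reachable with your credentials. A sketch, assuming `oc` is installed and your pull secret is at `~/pull-secret` (the `:whatever` tag is a placeholder, as in the example below):

```shell
# Print metadata for a candidate release payload; failure here usually
# means the image name is wrong or the pull secret lacks access.
oc adm release info -a ~/pull-secret \
  registry.svc.ci.openshift.org/ocp/release:whatever
```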
@@ -39,11 +43,11 @@ example release images:
### Create nested libvirt cluster - 3 masters, 2 workers

Connect to the instance using SSH and create a cluster named `$CLUSTER_NAME` using latest payload built from CI.
-Install directory will be populated at `$HOME/clusters/$CLUSTER_NAME`
+Install directory will be populated at `$HOME/clusters/$CLUSTER_NAME`.

```shell
$ gcloud beta compute ssh --zone "us-east1-c" $INSTANCE --project "openshift-gce-devel"
-$ OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE=registry.svc.ci.openshift.org/ocp/release:whatever create-cluster $CLUSTER_NAME
+$ RELEASE_IMAGE=registry.svc.ci.openshift.org/ocp/release:whatever create-cluster $CLUSTER_NAME
```
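Because `create-cluster` now extracts the installer from the payload you pass (see the `tools/create-cluster` changes in this commit), you can confirm afterwards which binary it installed. A quick check:

```shell
# create-cluster moves the extracted binary to /usr/local/bin/openshift-install;
# its reported version should correspond to the payload you passed.
which openshift-install
openshift-install version
```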

### Tear Down Cluster
@@ -56,10 +60,9 @@ $ openshift-install destroy cluster --dir ~/clusters/$CLUSTER_NAME && rm -rf ~/c

### Tear Down and Clean Up GCP.

-Clean up your GCP resources when you are done with your development cluster.
-Check out `teardown-gcp.sh` for individual commands or run the script like so:
+Clean up your GCP instance when you are done with your development cluster. To delete the instance:
```shell
-$ INSTANCE=<your gcp instance name> ./teardown-gcp.sh
+$ gcloud compute instances delete INSTANCE_NAME
```
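Note that this only deletes the instance. If you created your own network with `create-network-and-subnet.sh`, you can remove those resources too, mirroring the creation steps (`$ID` is the name you used when creating them):

```shell
# Delete the firewall rule, subnet, and network created by create-network-and-subnet.sh.
gcloud compute firewall-rules delete "${ID}" --quiet
gcloud compute networks subnets delete "${ID}" --quiet
gcloud compute networks delete "${ID}" --quiet
```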

Interact with your cluster with `oc` while connected via ssh to your gcp instance.
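For example, once the cluster is up (this assumes the install-directory layout `$HOME/clusters/$CLUSTER_NAME` described above):

```shell
# Point oc at the nested cluster's kubeconfig and check its status.
export KUBECONFIG="$HOME/clusters/$CLUSTER_NAME/auth/kubeconfig"
oc get nodes
oc get clusterversion
```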
23 changes: 5 additions & 18 deletions create-gcp-resources.sh
@@ -20,20 +20,7 @@ set -euo pipefail

export ZONE=$(gcloud config get-value compute/zone)
export PROJECT=$(gcloud config get-value project)
-echo_bright "Creating network ${INSTANCE}"
-gcloud compute networks create "${INSTANCE}" \
-  --subnet-mode=custom \
-  --bgp-routing-mode=regional
-
-echo_bright "Creating subnet for network ${INSTANCE}"
-gcloud compute networks subnets create "${INSTANCE}" \
-  --network "${INSTANCE}" \
-  --range=10.0.0.0/9
-
-echo_bright "Creating firewall rules for network ${INSTANCE}"
-gcloud compute firewall-rules create "${INSTANCE}" \
-  --network "${INSTANCE}" \
-  --allow tcp:22,icmp
+export NETWORK=ocp4-libvirt-dev

echo_bright "Creating instance ${INSTANCE} in project ${PROJECT}"
gcloud compute instances create "${INSTANCE}" \
Expand All @@ -42,8 +29,8 @@ gcloud compute instances create "${INSTANCE}" \
  --min-cpu-platform "Intel Haswell" \
  --machine-type n1-standard-16 \
  --boot-disk-type pd-ssd --boot-disk-size 256GB \
-  --network "${INSTANCE}" \
-  --subnet "${INSTANCE}"
+  --network "${NETWORK}" \
+  --subnet "${NETWORK}"

echo_bright "Using scp to copy pull-secret to /home/${GCP_USER}/pull-secret in instance ${INSTANCE}"
timeout 45s bash -ce 'until \
@@ -56,5 +43,5 @@ echo "${bold}All resources successfully created${reset}"
echo "${bold}Use this command to ssh into the VM:${reset}"
echo_bright "gcloud beta compute ssh --zone ${ZONE} ${INSTANCE} --project ${PROJECT}"
echo ""
-echo "${bold}To clean up all resources from this script, run:${reset}"
-echo_bright "INSTANCE=${INSTANCE} ./teardown-gcp.sh"
+echo "${bold}To delete the instance, run:${reset}"
+echo_bright "gcloud compute instances delete ${INSTANCE}"
36 changes: 36 additions & 0 deletions create-network-and-subnet.sh
@@ -0,0 +1,36 @@
#!/bin/bash

bold=$(tput bold)
bright=$(tput setaf 14)
reset=$(tput sgr0)

echo_bright() {
  echo "${bold}${bright}$1${reset}"
}

echo "${bold}Creating GCP resources${reset}"
if [[ -z "$ID" ]]; then
  echo "the following environment variable must be provided:"
  echo "\$ID to name gcp network and subnet"
  exit 1
fi
set -euo pipefail

export ZONE=$(gcloud config get-value compute/zone)
export PROJECT=$(gcloud config get-value project)
echo_bright "Creating network ${ID}"
gcloud compute networks create "${ID}" \
  --subnet-mode=custom \
  --bgp-routing-mode=regional

echo_bright "Creating subnet for network ${ID}"
gcloud compute networks subnets create "${ID}" \
  --network "${ID}" \
  --range=10.0.0.0/9

echo_bright "Creating firewall rules for network ${ID}"
gcloud compute firewall-rules create "${ID}" \
  --network "${ID}" \
  --allow tcp:22,icmp

echo "${bold}Network and subnet successfully created${reset}"
3 changes: 0 additions & 3 deletions provision.sh
@@ -91,9 +91,6 @@ rm -fr oc.tar.gz
sudo mv $HOME/oc /usr/local/bin
sudo ln -s /usr/local/bin/oc /usr/local/bin/kubectl

-# Install a default installer
-update-installer

sudo bash -c 'cat >> /etc/bashrc' << EOF
export KUBECONFIG=\$HOME/clusters/nested/auth/kubeconfig
export PATH=$PATH:/usr/local/go/bin
5 changes: 1 addition & 4 deletions teardown-gcp.sh
@@ -12,14 +12,11 @@ echo_bright "Cleaning up GCP"
if [[ -z "$INSTANCE" ]]; then
  echo "\$INSTANCE must be provided"
  exit 1
fi
-echo "This script will remove all ${bright}${bold}$INSTANCE${reset} GCP resources"
+echo "This script will remove the ${bright}${bold}$INSTANCE${reset} GCP instance."
echo "${bold}Do you want to continue (Y/n)?${reset}"
read x
if [ "$x" != "Y" ]; then
  exit 0
fi
set -x
gcloud compute instances delete "${INSTANCE}" --quiet
-gcloud compute firewall-rules delete "${INSTANCE}" --quiet
-gcloud compute networks subnets delete "${INSTANCE}" --quiet
-gcloud compute networks delete "${INSTANCE}" --quiet
19 changes: 15 additions & 4 deletions tools/create-cluster
@@ -6,16 +6,27 @@ if [ -z "$NAME" ]; then
exit 1
fi

+# TODO: only need RELEASE_IMAGE, but temporarily need both while we transition in CI
+if [[ -z "$RELEASE_IMAGE" ]] && [[ -z "$OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE" ]]; then
+  echo "either \$RELEASE_IMAGE or \$OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE must be provided"
+  exit 1
+fi
+
+if [[ ! -z "$OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE" ]]; then
+  export RELEASE_IMAGE="${OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE}"
+  unset OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE
+fi
+
+# extract libvirt installer from release image
+oc adm release extract -a ~/pull-secret --command openshift-baremetal-install "${RELEASE_IMAGE}"
+sudo mv openshift-baremetal-install /usr/local/bin/openshift-install

CLUSTER_DIR="${HOME}/clusters/${NAME}"
if [ -d "${CLUSTER_DIR}" ]; then
  echo "WARNING: cluster ${NAME} already exists at ${CLUSTER_DIR}"
else
  mkdir -p "${CLUSTER_DIR}"
fi
-if [[ -z "$OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE" ]]; then
-  echo "\$OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE must be provided"
-  exit 1
-fi
# Generate a default SSH key if one doesn't exist
SSH_KEY="${HOME}/.ssh/id_rsa"
if [ ! -f $SSH_KEY ]; then
2 changes: 2 additions & 0 deletions tools/update-installer
@@ -4,6 +4,8 @@ set -u
set -o pipefail

echo "Building installer"
+echo "You should always extract the installer from the release payload."
+echo "Building the installer from source may result in a failed launch."

REPO_OWNER="${1:-openshift}"
BRANCH="${2:-master}"
