The `backports` branch of this fork contains open pull requests from upstream to fix bugs and add features.
Feel free to use it until the pending PRs are merged and this branch becomes obsolete.
Included Patches:
- fix(rancher-common): Helm error, could not download chart
- feat(rancher/aws): Use AWS env vars, use null defaults
- feat(rancher/aws): Bump rancher, rke2, k3s, cert-manager version
- feat(rancher): Use latest SLES 15 SP6 PAYG AMI
- feat(rancher): AWS - Switch Rancher Server public ip to EIP
- fix(rancher): AWS - destroy helm_release.cert_manager timeout
- feat(rancher): AWS - Variable to adjust security group ingress cidr
- fix: windows ami does not exist (with changes: e38d45f)
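As a rough illustration of the security group ingress patch above, the quickstart exposes a variable along these lines. This is a sketch only; the variable and resource names here are hypothetical, not the exact ones in the patch:

```hcl
# Hypothetical sketch of an adjustable ingress CIDR variable.
variable "ingress_cidr_blocks" {
  type        = list(string)
  description = "CIDR blocks allowed to reach the Rancher server"
  default     = ["0.0.0.0/0"] # previously hardcoded; overridable after the patch
}

resource "aws_security_group" "rancher" {
  name = "rancher-quickstart"

  ingress {
    description = "Rancher UI/API over HTTPS"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = var.ingress_cidr_blocks
  }
}
```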
Quickly stand up an HA-style installation of SUSE Rancher products on your infrastructure provider of choice.
Intended for experimentation/evaluation ONLY.
You will be responsible for any and all infrastructure costs incurred by these resources. As a result, this repository minimizes costs by standing up only the minimum required resources for a given provider. Use Vagrant to run Rancher locally and avoid cloud costs.
Rancher Management Server Quickstarts are provided for:
- Amazon Web Services (`aws`)
- Microsoft Azure Cloud (`azure`)
- DigitalOcean (`do`)
- Google Cloud Platform (`gcp`)
- Harvester (`harvester`)
- Hetzner Cloud (`hcloud`)
- Linode (`linode`)
- Scaleway (`scw`)
- Outscale (`outscale`)
You will be responsible for any and all infrastructure costs incurred by these resources.
Each quickstart installs Rancher on a single-node K3s cluster, then provisions a separate single-node RKE2 workload cluster as a Custom cluster in Rancher. This setup provides easy access to the core Rancher functionality while establishing a foundation that can be easily expanded to a full HA Rancher server.
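For orientation, registering such a workload cluster through Rancher's Terraform provider looks roughly like the following sketch, assuming the `rancher2` provider is already configured; the cluster name and Kubernetes version are illustrative, not the quickstart's actual values:

```hcl
# Sketch only: a rancher2_cluster_v2 resource without machine pools defines
# a Custom cluster; nodes join it later by running the registration command.
resource "rancher2_cluster_v2" "workload" {
  name               = "quickstart-workload" # hypothetical name
  kubernetes_version = "v1.28.9+rke2r1"      # illustrative RKE2 version
}

# The registration command Rancher generates for joining nodes.
output "registration_command" {
  value     = rancher2_cluster_v2.workload.cluster_registration_token[0].node_command
  sensitive = true
}
```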
A local quickstart is provided in the form of Vagrant configuration.
The Vagrant quickstart does not currently follow Rancher best practices for installing a Rancher management server. Use this configuration only to evaluate the features of Rancher. See cloud provider quickstarts for an HA foundation according to Rancher installation best practices.
NeuVector Quickstarts are provided for:

- Amazon Web Services (`aws`)
You will be responsible for any and all infrastructure costs incurred by these resources.
Each quickstart installs NeuVector on a single-node RKE2 cluster. Optionally, a Rancher Management Server can be deployed as well. This setup provides easy access to the core NeuVector functionality while establishing a foundation that can be easily expanded to a full HA NeuVector installation.
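For reference, installing NeuVector onto such a cluster with Terraform's Helm provider might look roughly like this minimal sketch; the quickstart's actual wiring may differ:

```hcl
# Minimal sketch: install the NeuVector core chart with the Helm provider.
resource "helm_release" "neuvector" {
  name             = "neuvector"
  repository       = "https://neuvector.github.io/neuvector-helm/"
  chart            = "core"
  namespace        = "neuvector"
  create_namespace = true
}
```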
The local Vagrant quickstart requires:

- Vagrant
- VirtualBox
- 6GB unused RAM

See the `/vagrant` folder for details on usage and settings.
The cloud quickstarts require:

- Terraform >= 1.0.0
- Credentials for the cloud provider used for the quickstart
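For reference, the version requirement is typically enforced in the configuration itself, and credentials are usually picked up from the provider's standard environment variables rather than written into the files. A sketch, using AWS as the example (the variable name and default are illustrative):

```hcl
terraform {
  required_version = ">= 1.0.0"
}

variable "aws_region" {
  type        = string
  description = "AWS region for all resources"
  default     = "us-east-1" # illustrative default
}

# With no credentials set here, the AWS provider falls back to the standard
# environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) or a
# shared credentials file.
provider "aws" {
  region = var.aws_region
}
```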
To begin with any quickstart, perform the following steps:
- Clone or download this repository to a local folder
- Choose a cloud provider and navigate into the provider's folder
- Copy or rename `terraform.tfvars.example` to `terraform.tfvars` and fill in all required variables (see the example after this list)
- Run `terraform init`
- Run `terraform apply`
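As an example, a filled-in `terraform.tfvars` for the AWS quickstart might look roughly like the following; exact variable names differ per provider, so treat each folder's `terraform.tfvars.example` as authoritative. With the env-var patch above, AWS credentials can come from the environment instead of this file:

```hcl
# Illustrative values only; each provider folder's terraform.tfvars.example
# lists the authoritative set of variables.
aws_region                    = "us-east-1"
prefix                        = "quickstart"
rancher_server_admin_password = "use-a-long-unique-password"
```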
When provisioning has finished, Terraform will output the URL to connect to the Rancher server. Two sets of Kubernetes configurations will also be generated:

- `kube_config_server.yaml` contains credentials to access the cluster supporting the Rancher server
- `kube_config_workload.yaml` contains credentials to access the provisioned workload cluster
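These files are typically written out by the configuration itself with the `local_file` resource; a sketch, where the module name and output attribute are hypothetical:

```hcl
# Sketch: persist the server cluster's kubeconfig next to the configuration.
# The module name and output attribute here are hypothetical.
resource "local_file" "kube_config_server" {
  content         = module.rancher_common.kube_config
  filename        = "${path.module}/kube_config_server.yaml"
  file_permission = "0600"
}
```

Point `kubectl` at either file with the `--kubeconfig` flag or the `KUBECONFIG` environment variable.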
For more details on each cloud provider, refer to the documentation in their respective folders.
When you're finished exploring the Rancher server, use Terraform to tear down all resources in the quickstart.
NOTE: Any resources not provisioned by the quickstart are not guaranteed to be destroyed when tearing down the quickstart. Make sure you tear down any resources you provisioned manually before running the destroy command.
Run `terraform destroy -auto-approve` to remove all resources without prompting for confirmation.