diff --git a/deployer/README.md b/deployer/README.md
index 2c415799dd..db285d9441 100644
--- a/deployer/README.md
+++ b/deployer/README.md
@@ -114,8 +114,8 @@ This directory contains the tests and assets used by them. It is called by `depl
 │   └── test_hub_health.py
 ```
 
-## The deployer's main sub-commands commandline usage
-This section descripts all the subcommands the `deployer` can carry out and their commandline parameters.
+## The deployer's main sub-commands
+This section describes some of the subcommands the `deployer` can carry out.
 
 **Command line usage:**
 
@@ -159,47 +159,12 @@ These deployment related commands are all stored in `deployer/commands/deployer.
 This function is used to deploy changes to a hub (or list of hubs), or install it if it does not yet exist.
 It takes a name of a cluster and a name of a hub (or list of names) as arguments, gathers together the config files under `/config/clusters` that describe the individual hub(s), and runs `helm upgrade` with these files passed as `--values` arguments.
 
-**Command line usage:**
-
-```bash
- Usage: deployer deploy [OPTIONS] CLUSTER_NAME [HUB_NAME]
-
- Deploy one or more hubs in a given cluster
-
-╭─ Arguments ──────────────────────────────────────────────────────────────────╮
-│ *  cluster_name  TEXT        Name of cluster to operate on [default: None] [required]              │
-│    hub_name      [HUB_NAME]  Name of hub to operate deploy. Omit to deploy all hubs on the cluster │
-│                              [default: None]                                                       │
-╰──────────────────────────────────────────────────────────────────────────────╯
-╭─ Options ────────────────────────────────────────────────────────────────────╮
-│ --dask-gateway-version  TEXT  Version of dask-gateway to install CRDs for [default: 2023.1.0]            │
-│ --debug                       When present, the `--debug` flag will be passed to the `helm upgrade` command.   │
-│ --dry-run                     When present, the `--dry-run` flag will be passed to the `helm upgrade` command. │
-│ --help                        Show this message and exit.                                                      │
-╰──────────────────────────────────────────────────────────────────────────────╯
-```
 
 #### `deploy-support`
 
 This function deploys changes to the support helm chart on a cluster, or installs it if it's not already present.
 This command only needs to be run once per cluster, not once per hub.
 
-**Command line usage:**
-
-```bash
- Usage: deployer deploy-support [OPTIONS] CLUSTER_NAME
-
- Deploy support components to a cluster
-
-╭─ Arguments ──────────────────────────────────────────────────────────────────╮
-│ *  cluster_name  TEXT  Name of cluster to operate on [default: None] [required] │
-╰──────────────────────────────────────────────────────────────────────────────╯
-╭─ Options ────────────────────────────────────────────────────────────────────╮
-│ --cert-manager-version  TEXT  Version of cert-manager to install [default: v1.8.2] │
-│ --help                        Show this message and exit.                          │
-╰──────────────────────────────────────────────────────────────────────────────╯
-```
-
 #### `use-cluster-credentials`
 
 This function provides quick command line/`kubectl` access to a cluster.
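+
+For example, a minimal invocation that runs a one-off `kubectl` command against a
+cluster (the cluster name and command line here are illustrative):
+
+```bash
+deployer use-cluster-credentials 2i2c "kubectl get pods -A"
+```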
@@ -207,29 +172,6 @@ Running this command will spawn a new shell with all the appropriate environment
 It uses the deployer credentials saved in the repository and does not authenticate a user with their own profile - it will be a service account and may have different permissions depending on the cloud provider.
 Remember to close the opened shell after you've finished by using the `exit` command or typing `Ctrl`+`D`/`Cmd`+`D`.
 
-**Command line usage:**
-
-```bash
-
- Usage: deployer use-cluster-credentials [OPTIONS] CLUSTER_NAME COMMANDLINE
-
- Pop a new shell or execute a command after authenticating to the given cluster
- using the deployer's credentials
-
-╭─ Arguments ──────────────────────────────────────────────────────────────────╮
-│ *  cluster_name  TEXT  Name of cluster to operate on [default: None]          │
-│                        [required]                                             │
-│ *  commandline   TEXT  Optional shell command line to run after               │
-│                        authenticating to this cluster                         │
-│                        [default: None]                                        │
-│                        [required]                                             │
-╰──────────────────────────────────────────────────────────────────────────────╯
-╭─ Options ────────────────────────────────────────────────────────────────────╮
-│ --help  Show this message and exit.                                           │
-╰──────────────────────────────────────────────────────────────────────────────╯
-
-```
-
 #### `run-hub-health-check`
 
 This function checks that a given hub is healthy by:
@@ -240,65 +182,16 @@ This function checks that a given hub is healthy by:
 
 For daskhubs, there is an optional check to verify that the user can scale dask workers.
 
-**Command line usage:**
-
-```bash
- Usage: deployer run-hub-health-check [OPTIONS] CLUSTER_NAME HUB_NAME
-
- Run a health check on a given hub on a given cluster. Optionally check scaling of dask workers if the hub is a
- daskhub.
-
-╭─ Arguments ──────────────────────────────────────────────────────────────────╮
-│ *  cluster_name  TEXT  Name of cluster to operate on [default: None] [required] │
-│ *  hub_name      TEXT  Name of hub to operate on [default: None] [required]     │
-╰──────────────────────────────────────────────────────────────────────────────╯
-╭─ Options ────────────────────────────────────────────────────────────────────╮
-│ --check-dask-scaling  --no-check-dask-scaling  Check that dask workers can be scaled │
-│                                                [default: no-check-dask-scaling]      │
-│ --help                                         Show this message and exit.           │
-╰──────────────────────────────────────────────────────────────────────────────╯
-```
 
 #### Support helper tools: `decrypt-age`
 
 Decrypts information sent to 2i2c by community representatives using [age](https://age-encryption.org/) according to instructions in [2i2c documentation](https://docs.2i2c.org/en/latest/support.html?highlight=decrypt#send-us-encrypted-content).
 
-**Command line usage:**
-```
- Usage: deployer decrypt-age [OPTIONS]
-
- Decrypt secrets sent to `support@2i2c.org` via `age`
-
-╭─ Options ────────────────────────────────────────────────────────────────────╮
-│ --encrypted-file-path  TEXT  Path to age-encrypted file sent by user. Leave empty to read from stdin. │
-│ --help                       Show this message and exit.                                              │
-╰──────────────────────────────────────────────────────────────────────────────╯
-```
 
 ### The `generate` sub-command
 
 This deployer sub-command is used to generate various types of file assets.
 Currently, it can generate the cost billing table, initial cluster infrastructure files and the helm upgrade jobs.
 
-**Command line usage:**
-
-```bash
-
- Usage: deployer generate [OPTIONS] COMMAND [ARGS]...
-
- Generate various types of assets. It currently supports generating files related to billing or new, dedicated
- clusters.
-
-╭─ Options ────────────────────────────────────────────────────────────────────╮
-│ --help  Show this message and exit.                                           │
-╰──────────────────────────────────────────────────────────────────────────────╯
-╭─ Commands ───────────────────────────────────────────────────────────────────╮
-│ cost-table         Generate table with cloud costs for all GCP projects we pass costs through for.                  │
-│ dedicated-cluster  Generate the initial files needed for a new cluster on GCP or AWS.                               │
-│ helm-upgrade-jobs  Analyze added or modified files from a GitHub Pull Request and decide which clusters and/or hubs │
-│                    require helm upgrades to be performed for their *hub helm charts or the support helm chart.      │
-╰──────────────────────────────────────────────────────────────────────────────╯
-```
-
 #### `generate helm-upgrade-jobs`
 
 This function consumes a list of files that have been added or modified, and from that deduces which hubs on which clusters require a helm upgrade, and whether the support chart also needs upgrading.
@@ -307,43 +200,10 @@ It constructs a human-readable table of the hubs that will be upgraded as a resu
 This function is primarily used in our CI/CD infrastructure and, on top of the human-readable output, JSON objects are also set as outputs that can be interpreted by GitHub Actions as matrix jobs. This allows us to optimise and parallelise the automatic deployment of our hubs.
 
-**Command line usage:**
-
-```bash
-
- Usage: deployer generate helm-upgrade-jobs [OPTIONS] CHANGED_FILEPATHS
-
- Analyze added or modified files from a GitHub Pull Request and decide which clusters and/or hubs require helm upgrades
- to be performed for their *hub helm charts or the support helm chart.
-
-╭─ Arguments ──────────────────────────────────────────────────────────────────╮
-│ *  changed_filepaths  TEXT  Comma delimited list of files that have changed [default: None] [required] │
-╰──────────────────────────────────────────────────────────────────────────────╯
-╭─ Options ────────────────────────────────────────────────────────────────────╮
-│ --help  Show this message and exit.                                           │
-╰──────────────────────────────────────────────────────────────────────────────╯
-```
-
 #### `generate dedicated-cluster`
 
 This generate sub-command can be used to create the initial files needed for a new cluster on GCP or AWS.
 
-**Command line usage:**
-```bash
-
- Usage: deployer generate dedicated-cluster [OPTIONS] COMMAND [ARGS]...
-
- Generate the initial files needed for a new cluster on GCP or AWS.
-
-╭─ Options ────────────────────────────────────────────────────────────────────╮
-│ --help  Show this message and exit.                                           │
-╰──────────────────────────────────────────────────────────────────────────────╯
-╭─ Commands ───────────────────────────────────────────────────────────────────╮
-│ aws  Automatically generate the files required to setup a new cluster on AWS           │
-│ gcp  Automatically generates the initial files, required to setup a new cluster on GCP │
-╰──────────────────────────────────────────────────────────────────────────────╯
-```
-
 ##### `generate dedicated-cluster aws`
 
 This function generates the required files for an AWS cluster based on a few input fields,
@@ -362,20 +222,6 @@ The files are generated based on the jsonnet templates in:
 
 - [`eksctl/template.jsonnet`](https://github.com/2i2c-org/infrastructure/blob/master/eksctl/template.jsonnet)
 - [`terraform/aws/projects/basehub-template.tfvars`](https://github.com/2i2c-org/infrastructure/blob/master/terraform/aws/projects/basehub-template.tfvars)
 
-**Command line usage:**
-
-```bash
- Usage: deployer generate dedicated-cluster aws [OPTIONS]
-
- Automatically generate the files required to setup a new cluster on AWS
-
-╭─ Options ────────────────────────────────────────────────────────────────────╮
-│ *  --cluster-name    TEXT  [default: None] [required]                         │
-│ *  --hub-type        TEXT  [default: None] [required]                         │
-│ *  --cluster-region  TEXT  [default: None] [required]                         │
-│    --help                  Show this message and exit.                        │
-╰──────────────────────────────────────────────────────────────────────────────╯
-```
 
 ##### `generate dedicated-cluster gcp`
 
@@ -406,69 +252,17 @@ These defaults are described in each file template.
 The cluster configuration directory is generated based on the templates in:
 
 - [`config/clusters/templates/gcp`](https://github.com/2i2c-org/infrastructure/blob/master/config/clusters/templates/gcp)
 
-**Command line usage:**
-
-```bash
- Usage: deployer generate dedicated-cluster gcp [OPTIONS]
-
- Automatically generates the initial files, required to setup a new cluster on GCP
-
-╭─ Options ────────────────────────────────────────────────────────────────────╮
-│ *  --cluster-name    TEXT  [default: None] [required]                         │
-│ *  --project-id      TEXT  [default: None] [required]                         │
-│ *  --hub-name        TEXT  [default: None] [required]                         │
-│    --cluster-region  TEXT  [default: us-central1]                             │
-│    --hub-type        TEXT  [default: basehub]                                 │
-│    --help                  Show this message and exit.                        │
-╰──────────────────────────────────────────────────────────────────────────────╯
-```
-
 ### The `grafana` sub-command
 
 This deployer sub-command manages all of the available functions related to Grafana.
 
-**Command line usage:**
-```bash
-
- Manages Grafana related workflows.
-
-╭─ Options ────────────────────────────────────────────────────────────────────╮
-│ --help  Show this message and exit.                                           │
-╰──────────────────────────────────────────────────────────────────────────────╯
-╭─ Commands ───────────────────────────────────────────────────────────────────╮
-│ deploy-dashboards           Deploy the latest official JupyterHub dashboards to a cluster's grafana instance. This │
-│                             is done via Grafana's REST API, authorized by using a previously generated Grafana     │
-│                             service account's access token.                                                        │
-│ new-token                   Generate an API token for the cluster's Grafana `deployer` service account and store it │
-│                             encrypted inside a `enc-grafana-token.secret.yaml` file.                                │
-│ update-central-datasources  Update the central grafana with datasources for all clusters prometheus instances       │
-╰──────────────────────────────────────────────────────────────────────────────╯
-```
-
 #### `grafana deploy-dashboards`
 
 This function uses [`jupyterhub/grafana-dashboards`](https://github.com/jupyterhub/grafana-dashboards) to create a set of grafana dashboards for monitoring hub usage across all hubs on a cluster.
 The support chart **must** be deployed before trying to install the dashboards, since the support chart installs prometheus and grafana.
 
-**Command line usage:**
-
-```bash
-
- Usage: deployer grafana deploy-dashboards [OPTIONS] CLUSTER_NAME
-
- Deploy the latest official JupyterHub dashboards to a cluster's grafana instance. This is done via Grafana's REST API,
- authorized by using a previously generated Grafana service account's access token.
- The official JupyterHub dashboards are maintained in https://github.com/jupyterhub/grafana-dashboards along with a
- python script to deploy them to Grafana via a REST API.
-
-╭─ Arguments ──────────────────────────────────────────────────────────────────╮
-│ *  cluster_name  TEXT  Name of cluster to operate on [default: None] [required] │
-╰──────────────────────────────────────────────────────────────────────────────╯
-╭─ Options ────────────────────────────────────────────────────────────────────╮
-│ --help  Show this message and exit.                                           │
-╰──────────────────────────────────────────────────────────────────────────────╯
-```
 
 #### `grafana new-token`
+
 This function uses the admin credentials located in `helm-charts/support/enc-support.secret.values.yaml` to check if a [Grafana Service Account](https://grafana.com/docs/grafana/latest/administration/service-accounts/) named `deployer` exists for a cluster's Grafana, and creates it if it doesn't.
 For this service account, it then generates a Grafana token named `deployer`.
 This token will be used by the [`deploy-grafana-dashboards` workflow](https://github.com/2i2c-org/infrastructure/tree/HEAD/.github/workflows/deploy-grafana-dashboards.yaml) to authenticate with [Grafana’s HTTP API](https://grafana.com/docs/grafana/latest/developers/http_api/).
@@ -489,185 +283,24 @@ If such a file doesn't already exist, it will be created by this function.
 
 Updates:
 - the content of `enc-grafana-token.secret.yaml` with the new token if one already existed
 
-**Command line usage:**
-```bash
-
- Usage: deployer grafana new-token [OPTIONS] CLUSTER
-
- Generate an API token for the cluster's Grafana `deployer` service account and store it encrypted inside a
- `enc-grafana-token.secret.yaml` file.
-
-╭─ Arguments ──────────────────────────────────────────────────────────────────╮
-│ *  cluster  TEXT  Name of cluster for who's Grafana deployment to generate a new deployer token │
-│                   [default: None]                                                               │
-│                   [required]                                                                    │
-╰──────────────────────────────────────────────────────────────────────────────╯
-╭─ Options ────────────────────────────────────────────────────────────────────╮
-│ --help  Show this message and exit.                                           │
-╰──────────────────────────────────────────────────────────────────────────────╯
-```
 
 #### `grafana update-central-datasources`
 
 Ensures that the central grafana at https://grafana.pilot.2i2c.cloud is configured to use the authenticated prometheus instances of all the clusters that we run as datasources.
 
-**Command line usage:**
-```bash
-
- Usage: deployer grafana update-central-datasources [OPTIONS]
-
- Update the central grafana with datasources for all clusters prometheus instances
-
-╭─ Options ────────────────────────────────────────────────────────────────────╮
-│ --central-grafana-cluster  TEXT  Name of cluster where the central grafana lives [default: 2i2c] │
-│ --help                           Show this message and exit.                                     │
-╰──────────────────────────────────────────────────────────────────────────────╯
-```
-
 ### The `validate` sub-command
 
 This function is used to validate the values files for each of our hubs against their helm chart's values schema.
 This allows us to validate that all required values are present and have the correct type before we attempt a deployment.
 
-**Command line usage:**
-
-```bash
- Validate configuration files such as helm chart values and cluster.yaml files.
-
-╭─ Options ────────────────────────────────────────────────────────────────────╮
-│ --help  Show this message and exit.                                           │
-╰──────────────────────────────────────────────────────────────────────────────╯
-╭─ Commands ───────────────────────────────────────────────────────────────────╮
-│ all                   Validate cluster.yaml and non-encrypted helm config for given hub                               │
-│ authenticator-config  For each hub of a specific cluster: - It asserts that when the JupyterHub GitHubOAuthenticator  │
-│                       is used, then `Authenticator.allowed_users` is not set. An error is raised otherwise.           │
-│ cluster-config        Validates cluster.yaml configuration against a JSONSchema.                                      │
-│ hub-config            Validates the provided non-encrypted helm chart values files for each hub of a specific         │
-│                       cluster.                                                                                        │
-│ support-config        Validates the provided non-encrypted helm chart values files for the support chart of a         │
-│                       specific cluster.                                                                               │
-╰──────────────────────────────────────────────────────────────────────────────╯
-```
-
 ### The `cilogon-client` sub-command for CILogon OAuth client management
 
 Deployer sub-command for managing CILogon clients for 2i2c hubs.
 
-**Command line usage:**
-```bash
- Usage: deployer cilogon-client [OPTIONS] COMMAND [ARGS]...
-
- Manage cilogon clients for hubs' authentication.
-
-╭─ Options ────────────────────────────────────────────────────────────────────╮
-│ --help  Show this message and exit.                                           │
-╰──────────────────────────────────────────────────────────────────────────────╯
-╭─ Commands ───────────────────────────────────────────────────────────────────╮
-│ create   Create a CILogon client for a hub.                                   │
-│ delete   Delete an existing CILogon client. This deletes both the CILogon client application, and the client credentials from the configuration file. │
-│ get      Retrieve details about an existing CILogon client.                   │
-│ get-all  Retrieve details about all existing 2i2c CILogon clients.            │
-│ update   Update the CILogon client of a hub.                                  │
-╰──────────────────────────────────────────────────────────────────────────────╯
-```
-
 #### `cilogon-client create/delete/get/get-all/update`
 
 Create/delete/get/get-all/update CILogon clients using the 2i2c administrative client provided by CILogon.
 
-**Command line usage:**
-
-- `cilogon-client create`
-
-  ```bash
-  Usage: deployer cilogon-client create [OPTIONS] CLUSTER_NAME HUB_NAME
-                                        [HUB_TYPE] HUB_DOMAIN
-
-  Create a CILogon client for a hub.
-
-  ╭─ Arguments ────────────────────────────────────────────────────────────────╮
-  │ *  cluster_name  TEXT        Name of cluster to operate on [default: None] [required]                   │
-  │ *  hub_name      TEXT        Name of the hub for which we'll create a CILogon client [default: None]    │
-  │                              [required]                                                                 │
-  │    hub_type      [HUB_TYPE]  Type of hub for which we'll create a CILogon client (ex: basehub, daskhub) │
-  │                              [default: basehub]                                                         │
-  │ *  hub_domain    TEXT        The hub domain, as specified in `cluster.yaml` (ex: staging.2i2c.cloud)    │
-  │                              [default: None]                                                            │
-  │                              [required]                                                                 │
-  ╰────────────────────────────────────────────────────────────────────────────╯
-  ╭─ Options ──────────────────────────────────────────────────────────────────╮
-  │ --help  Show this message and exit.                                         │
-  ╰────────────────────────────────────────────────────────────────────────────╯
-  ```
-
-- `cilogon-client delete`
-
-  ```bash
-  Usage: deployer cilogon-client delete [OPTIONS] CLUSTER_NAME HUB_NAME
-
-  Delete an existing CILogon client. This deletes both the CILogon client application, and the client credentials from
-  the configuration file.
-
-  ╭─ Arguments ────────────────────────────────────────────────────────────────╮
-  │ *  cluster_name  TEXT  Name of cluster to operate [default: None] [required]                             │
-  │ *  hub_name      TEXT  Name of the hub for which we'll delete the CILogon client details [default: None] │
-  │                        [required]                                                                        │
-  ╰────────────────────────────────────────────────────────────────────────────╯
-  ╭─ Options ──────────────────────────────────────────────────────────────────╮
-  │ --client-id  TEXT  (Optional) Id of the CILogon client to delete of the form `cilogon:/client_id/`. If the │
-  │                    id is not passed, it will be retrieved from the configuration file                      │
-  │ --help             Show this message and exit.                                                             │
-  ╰────────────────────────────────────────────────────────────────────────────╯
-  ```
-
-- `cilogon-client get`
-
-  ```bash
-  Usage: Usage: deployer cilogon-client get [OPTIONS] CLUSTER_NAME HUB_NAME
-
-  Retrieve details about an existing CILogon client.
-
-  ╭─ Arguments ────────────────────────────────────────────────────────────────╮
-  │ *  cluster_name  TEXT  Name of cluster to operate on [default: None] [required]                            │
-  │ *  hub_name      TEXT  Name of the hub for which we'll retrieve the CILogon client details [default: None] │
-  │                        [required]                                                                          │
-  ╰────────────────────────────────────────────────────────────────────────────╯
-  ╭─ Options ──────────────────────────────────────────────────────────────────╮
-  │ --help  Show this message and exit.                                         │
-  ╰────────────────────────────────────────────────────────────────────────────╯
-  ```
-
-- `cilogon-client-get-all`
-
-  ```bash
-  Usage: deployer cilogon-client get-all [OPTIONS]
-
-  Retrieve details about all existing 2i2c CILogon OAuth clients.
-
-  ╭─ Options ──────────────────────────────────────────────────────────────────╮
-  │ --help  Show this message and exit.                                         │
-  ╰────────────────────────────────────────────────────────────────────────────╯
-  ```
-
-- `cilogon-client update`
-  ```bash
-  Usage: deployer cilogon-client update [OPTIONS] CLUSTER_NAME HUB_NAME
-                                        HUB_DOMAIN
-
-  Update the CILogon client of a hub.
-
-  ╭─ Arguments ────────────────────────────────────────────────────────────────╮
-  │ *  cluster_name  TEXT  Name of cluster to operate on [default: None] [required]                                │
-  │ *  hub_name      TEXT  Name of the hub for which we'll update a CILogon client [default: None] [required]      │
-  │ *  hub_domain    TEXT  The hub domain, as specified in `cluster.yaml` (ex: staging.2i2c.cloud) [default: None] │
-  │                        [required]                                                                              │
-  ╰────────────────────────────────────────────────────────────────────────────╯
-  ╭─ Options ──────────────────────────────────────────────────────────────────╮
-  │ --help  Show this message and exit.                                         │
-  ╰────────────────────────────────────────────────────────────────────────────╯
-  ```
-
 ### The `exec` sub-command for executing shells and debugging commands
 
 This deployer `exec` sub-command can be used to
@@ -680,38 +313,9 @@ setup or an outage), or when taking down a hub.
 
 All these commands take a cluster and hub name as parameters, and perform appropriate
 authentication before performing their function.
 
-**Command line usage:**
-
-```bash
- Usage: deployer exec [OPTIONS] COMMAND [ARGS]...
-
- Execute a shell in various parts of the infra. It can be used for poking around, or debugging issues.
-
-╭─ Options ────────────────────────────────────────────────────────────────────╮
-│ --help  Show this message and exit.                                           │
-╰──────────────────────────────────────────────────────────────────────────────╯
-╭─ Commands ───────────────────────────────────────────────────────────────────╮
-│ debug                                                                         │
-│ shell                                                                         │
-╰──────────────────────────────────────────────────────────────────────────────╯
-```
 
 #### `exec debug`
 
 This sub-command is useful for debugging.
 
-**Command line usage:**
-
-```bash
- Usage: deployer exec debug [OPTIONS] COMMAND [ARGS]...
-
-╭─ Options ────────────────────────────────────────────────────────────────────╮
-│ --help  Show this message and exit.                                           │
-╰──────────────────────────────────────────────────────────────────────────────╯
-╭─ Commands ───────────────────────────────────────────────────────────────────╮
-│ component-logs      Display logs from a particular component on a hub on a cluster │
-│ start-docker-proxy  Proxy a docker daemon from a remote cluster to local port 23760. │
-│ user-logs           Display logs from the notebook pod of a given user        │
-╰──────────────────────────────────────────────────────────────────────────────╯
-```
 
 ##### `exec debug component-logs`
 
 This displays live updating logs of a particular component on the hub. You can
 pass `--no-follow` to provide logs up to the current point in time and then stop.
 If the component pod has restarted due to an error, you can pass `--previous`
 to look at the logs of the pod prior to the last restart.
 
-```bash
- Usage: deployer exec debug component-logs [OPTIONS] CLUSTER_NAME HUB_NAME
-                                           COMPONENT:{hub|proxy|dask-gateway-
-                                           api|dask-gateway-controller|dask-
-                                           gateway-traefik}
-
- Display logs from a particular component on a hub on a cluster
-
-╭─ Arguments ──────────────────────────────────────────────────────────────────╮
-│ *  cluster_name  TEXT                                           Name of cluster to operate on [default: None] │
-│                                                                 [required]                                    │
-│ *  hub_name      TEXT                                           Name of hub to operate on [default: None]     │
-│                                                                 [required]                                    │
-│ *  component     COMPONENT:{hub|proxy|dask-gateway-api|dask-ga  Component to display logs of [default: None]  │
-│                  teway-controller|dask-gateway-traefik}         [required]                                    │
-╰──────────────────────────────────────────────────────────────────────────────╯
-╭─ Options ────────────────────────────────────────────────────────────────────╮
-│ --follow    --no-follow    Live update new logs as they show up [default: follow]                 │
-│ --previous  --no-previous  If component pod has restarted, show logs from just before the restart │
-│                            [default: no-previous]                                                 │
-│ --help                     Show this message and exit.                                            │
-╰──────────────────────────────────────────────────────────────────────────────╯
-```
 
 ##### `exec debug user-logs`
 
 This subcommand displays live updating logs of a particular user on a hub if
 it is currently running. If sidecar containers are present (such as per-user db)
 they are ignored and only the notebook logs are provided. You can pass
 `--no-follow` to provide logs up to the current point only.
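+
+For example, a sketch of streaming one user's notebook logs (the cluster, hub and
+username are all illustrative):
+
+```bash
+deployer exec debug user-logs 2i2c staging a-jupyterhub-user
+```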
-```bash
- Usage: deployer exec debug user-logs [OPTIONS] CLUSTER_NAME HUB_NAME USERNAME
-
- Display logs from the notebook pod of a given user
-
-╭─ Arguments ──────────────────────────────────────────────────────────────────╮
-│ *  cluster_name  TEXT  Name of cluster to operate on [default: None] [required]               │
-│ *  hub_name      TEXT  Name of hub to operate on [default: None] [required]                   │
-│ *  username      TEXT  Name of the JupyterHub user to get logs for [default: None] [required] │
-╰──────────────────────────────────────────────────────────────────────────────╯
-╭─ Options ────────────────────────────────────────────────────────────────────╮
-│ --follow    --no-follow    Live update new logs as they show up [default: follow]            │
-│ --previous  --no-previous  If user pod has restarted, show logs from just before the restart │
-│                            [default: no-previous]                                            │
-│ --help                     Show this message and exit.                                       │
-╰──────────────────────────────────────────────────────────────────────────────╯
-```
-
 ##### `exec debug start-docker-proxy`
 
 Building docker images locally can be *extremely* slow and frustrating. We run a central docker daemon
@@ -777,56 +339,15 @@ in our 2i2c cluster that can be accessed via this command, and speeds up image b
 Once you run this command, run `export DOCKER_HOST=tcp://localhost:23760` in another terminal
 to use the faster remote docker daemon.
 
-```bash
- Usage: deployer exec debug start-docker-proxy [OPTIONS]
-                                               [DOCKER_DAEMON_CLUSTER]
-
- Proxy a docker daemon from a remote cluster to local port 23760.
-
-╭─ Arguments ──────────────────────────────────────────────────────────────────╮
-│ docker_daemon_cluster  [DOCKER_DAEMON_CLUSTER]  Name of cluster where the docker daemon lives [default: 2i2c] │
-╰──────────────────────────────────────────────────────────────────────────────╯
-╭─ Options ────────────────────────────────────────────────────────────────────╮
-│ --help  Show this message and exit.                                           │
-╰──────────────────────────────────────────────────────────────────────────────╯
-```
-
 #### `exec shell`
 
 This exec sub-command can be used to acquire a shell in various places of the infrastructure.
 
-```bash
-Usage: deployer exec shell [OPTIONS] COMMAND [ARGS]...
-
-╭─ Options ────────────────────────────────────────────────────────────────────╮
-│ --help  Show this message and exit.                                           │
-╰──────────────────────────────────────────────────────────────────────────────╯
-╭─ Commands ───────────────────────────────────────────────────────────────────╮
-│ aws    Exec into a shall with appropriate AWS credentials (including MFA)                   │
-│ homes  Pop an interactive shell with the home directories of the given hub mounted on /home │
-│ hub    Pop an interactive shell in the hub pod                                              │
-╰──────────────────────────────────────────────────────────────────────────────╯
-```
-
 ##### `exec shell hub`
 
 This subcommand gives you an interactive shell on the hub pod itself, so you
 can poke around to see what's going on. Particularly useful if you want to
 peek at the hub db with the `sqlite` command.
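+
+For example (cluster and hub names are illustrative):
+
+```bash
+deployer exec shell hub 2i2c staging
+```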
-```bash
- Usage: deployer exec shell hub [OPTIONS] CLUSTER_NAME HUB_NAME
-
- Pop an interactive shell in the hub pod
-
-╭─ Arguments ──────────────────────────────────────────────────────────────────╮
-│ *  cluster_name  TEXT  Name of cluster to operate on [default: None] [required] │
-│ *  hub_name      TEXT  Name of hub to operate on [default: None] [required]     │
-╰──────────────────────────────────────────────────────────────────────────────╯
-╭─ Options ────────────────────────────────────────────────────────────────────╮
-│ --help  Show this message and exit.                                           │
-╰──────────────────────────────────────────────────────────────────────────────╯
-```
-
 ##### `exec shell homes`
 
 This subcommand gives you a shell with the home directories of all the
@@ -836,39 +357,10 @@ such as renames.
 
 When you exit the shell, the temporary pod spun up is removed.
 
-```bash
- Usage: deployer exec shell homes [OPTIONS] CLUSTER_NAME HUB_NAME
-
- Pop an interactive shell with the home directories of the given hub mounted on /home
-
-╭─ Arguments ──────────────────────────────────────────────────────────────────╮
-│ *  cluster_name  TEXT  Name of cluster to operate on [default: None] [required] │
-│ *  hub_name      TEXT  Name of hub to operate on [default: None] [required]     │
-╰──────────────────────────────────────────────────────────────────────────────╯
-╭─ Options ────────────────────────────────────────────────────────────────────╮
-│ --help  Show this message and exit.                                           │
-╰──────────────────────────────────────────────────────────────────────────────╯
-```
-
 ##### `exec shell aws`
 
 This sub-command can exec into a shell with appropriate AWS credentials (including MFA).
 
-```bash
- Usage: deployer exec shell aws [OPTIONS] PROFILE MFA_DEVICE_ID AUTH_TOKEN
-
- Exec into a shall with appropriate AWS credentials (including MFA)
-
-╭─ Arguments ──────────────────────────────────────────────────────────────────╮
-│ *  profile        TEXT     Name of AWS profile to operate on [default: None] [required]                        │
-│ *  mfa_device_id  TEXT     Full ARN of MFA Device the code is from [default: None] [required]                  │
-│ *  auth_token     INTEGER  6 digit 2 factor authentication code from the MFA device [default: None] [required] │
-╰──────────────────────────────────────────────────────────────────────────────╯
-╭─ Options ────────────────────────────────────────────────────────────────────╮
-│ --help  Show this message and exit.                                           │
-╰──────────────────────────────────────────────────────────────────────────────╯
-```
-
 ## Running Tests
 
 To execute tests on the `deployer`, you will need to install the development requirements and then invoke `pytest` from the root of the repository.
diff --git a/docs/howto/bill.md b/docs/howto/bill.md
index d6eb56ee41..804b387702 100644
--- a/docs/howto/bill.md
+++ b/docs/howto/bill.md
@@ -25,12 +25,12 @@ that has monthly costs for all the clusters that are configured to have
 [bigquery export](new-gcp-project:billing-export).
 
 This sheet is currently manually updated. You can update it by running
-`deployer generate-cost-table --output 'google-sheet'`. It will by default
+`deployer generate cost-table --output 'google-sheet'`. It will by default
 update the sheet to provide information for the last 12 months. You can
 control the period by passing in the `start_month` and `end_month` parameters.
 
 If you just want to take a look at the costs in the terminal, you can also run
-`deployer generate-cost-table --output 'terminal'` instead.
+`deployer generate cost-table --output 'terminal'` instead.
 
 ## Caveats
diff --git a/docs/howto/upgrade-cluster/aws.md b/docs/howto/upgrade-cluster/aws.md
index 4511ba1d3f..4f7a4c556c 100644
--- a/docs/howto/upgrade-cluster/aws.md
+++ b/docs/howto/upgrade-cluster/aws.md
@@ -52,7 +52,7 @@ cluster is unused or that the maintenance is communicated ahead of time.
    git status
 
    # generates a few new files
-   deployer generate-aws-cluster --cluster-name=$CLUSTER_NAME --cluster-region=$CLUSTER_REGION --hub-type=$HUB_TYPE
+   deployer generate dedicated-cluster aws --cluster-name=$CLUSTER_NAME --cluster-region=$CLUSTER_REGION --hub-type=$HUB_TYPE
 
    # overview changed files
    git status
diff --git a/docs/hub-deployment-guide/configure-auth/cilogon.md b/docs/hub-deployment-guide/configure-auth/cilogon.md
index f805d9f410..a6816937cd 100644
--- a/docs/hub-deployment-guide/configure-auth/cilogon.md
+++ b/docs/hub-deployment-guide/configure-auth/cilogon.md
@@ -7,14 +7,14 @@ The steps to enable the JupyterHub CILogonOAuthenticator for a hub are similar
 to the ones for enabling [GitHubOAuthenticator](auth:github-orgs):
 
 ### Create a CILogon OAuth client
-This can be achieved by using the `deployer cilogon-client-create` command.
+This can be achieved by using the `deployer cilogon-client create` command.
 
 The command needs to be passed the cluster and hub name for which a client id and secret will be generated, but also the hub type, and the hub domain, as specified in `cluster.yaml` (ex: staging.2i2c.cloud).
 
 Example script invocation that creates a CILogon OAuth client for the 2i2c dask-staging hub:
 
 ```bash
-deployer cilogon-client-create 2i2c dask-staging daskhub dask-staging.2i2c.cloud
+deployer cilogon-client create 2i2c dask-staging daskhub dask-staging.2i2c.cloud
 ```
 
 ````{note}
diff --git a/docs/hub-deployment-guide/deploy-support/register-central-grafana.md b/docs/hub-deployment-guide/deploy-support/register-central-grafana.md
index f37c6b440f..838472995e 100644
--- a/docs/hub-deployment-guide/deploy-support/register-central-grafana.md
+++ b/docs/hub-deployment-guide/deploy-support/register-central-grafana.md
@@ -49,4 +49,4 @@ deployer deploy-support $CLUSTER_NAME
 
 ## Link the cluster's Prometheus server to the central Grafana
 
-Run `deployer update-central-grafana-datasources` to register the new prometheus with the default central grafana.
+Run `deployer grafana update-central-datasources` to register the new prometheus with the default central grafana.
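+
+For example, using the documented default central Grafana cluster, `2i2c`:
+
+```bash
+deployer grafana update-central-datasources --central-grafana-cluster 2i2c
+```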
diff --git a/docs/hub-deployment-guide/deploy-support/setup-grafana-dashboards.md b/docs/hub-deployment-guide/deploy-support/setup-grafana-dashboards.md
index 9fcd72d207..8a70c6cfcb 100644
--- a/docs/hub-deployment-guide/deploy-support/setup-grafana-dashboards.md
+++ b/docs/hub-deployment-guide/deploy-support/setup-grafana-dashboards.md
@@ -33,7 +33,7 @@ export CLUSTER_NAME=
 ```
 
 ```bash
-deployer new-grafana-token $CLUSTER_NAME
+deployer grafana new-token $CLUSTER_NAME
 ```
 
 If the command succeeded, it should have created:
@@ -58,7 +58,7 @@ This key will be used by the [`deploy-grafana-dashboards` workflow](https://gith
 You can deploy the dashboards locally using the deployer:
 
 ```bash
-deployer deploy-grafana-dashboards $CLUSTER_NAME
+deployer grafana deploy-dashboards $CLUSTER_NAME
 ```
 
 ## Deploying the Grafana Dashboards from CI/CD
diff --git a/docs/hub-deployment-guide/hubs/other-hub-ops/delete-hub.md b/docs/hub-deployment-guide/hubs/other-hub-ops/delete-hub.md
index 387405163f..85d24172ff 100644
--- a/docs/hub-deployment-guide/hubs/other-hub-ops/delete-hub.md
+++ b/docs/hub-deployment-guide/hubs/other-hub-ops/delete-hub.md
@@ -22,7 +22,7 @@ Especially if we think that users will want this information in the future (or i
 
 ### 1.2. Delete data
 
-Delete user home directories using the [deployer `exec-homes-shell`](https://github.com/2i2c-org/infrastructure/blob/master/deployer/README.md#exec-homes-shell) option.
+Delete user home directories using the `deployer exec shell homes` command.
 
 ```bash
 export CLUSTER_NAME=
@@ -30,7 +30,7 @@ export HUB_NAME=
 ```
 
 ```bash
-deployer exec-homes-shell $CLUSTER_NAME $HUB_NAME
+deployer exec shell homes $CLUSTER_NAME $HUB_NAME
 ```
 
 This should get you a shell with the home directories of all the users on the given hub. Delete all user home directories with:
@@ -53,19 +53,19 @@ The naming convention followed when creating these apps is: `$CLUSTER_NAME-$HUB_
 
 ### CILogon OAuth application
 
-Similarly, for each hub that uses CILogon, we dynamically create an OAuth [client application](https://cilogon.github.io/oa4mp/server/manuals/dynamic-client-registration.html) in CILogon using the `deployer cilogon-client-create` command.
-Use the `deployer cilogon-client-delete` command to delete this CILogon client when a hub is removed:
+Similarly, for each hub that uses CILogon, we dynamically create an OAuth [client application](https://cilogon.github.io/oa4mp/server/manuals/dynamic-client-registration.html) in CILogon using the `deployer cilogon-client create` command.
+Use the `deployer cilogon-client delete` command to delete this CILogon client when a hub is removed:
 
 You'll need to get all clients with:
 
 ```bash
-deployer cilogon-client-get-all
+deployer cilogon-client get-all
 ```
 
 And then identify the client of the hub and delete based on its id with:
 
 ```bash
-deployer cilogon-client-delete --client-id cilogon:/client_id/ $CLUSTER_NAME $HUB_NAME
+deployer cilogon-client delete --client-id cilogon:/client_id/ $CLUSTER_NAME $HUB_NAME
 ```
 
 This will clean up some of the hub values related to auth and must be done prior to removing the hub files.
diff --git a/docs/hub-deployment-guide/new-cluster/aws.md b/docs/hub-deployment-guide/new-cluster/aws.md
index 96875a2ac6..6e21d149f8 100644
--- a/docs/hub-deployment-guide/new-cluster/aws.md
+++ b/docs/hub-deployment-guide/new-cluster/aws.md
@@ -59,7 +59,7 @@ export HUB_TYPE=
 ```
 
 ```bash
-deployer generate-aws-cluster --cluster-name=$CLUSTER_NAME --cluster-region=$CLUSTER_REGION --hub-type=$HUB_TYPE
+deployer generate dedicated-cluster aws --cluster-name=$CLUSTER_NAME --cluster-region=$CLUSTER_REGION --hub-type=$HUB_TYPE
 ```
 
 This will generate the following files:
diff --git a/docs/sre-guide/common-problems-solutions.md b/docs/sre-guide/common-problems-solutions.md
index ebc39dfeb8..f3b8df36cc 100644
--- a/docs/sre-guide/common-problems-solutions.md
+++ b/docs/sre-guide/common-problems-solutions.md
@@ -173,7 +173,7 @@ Read more about [](cicd)
 
 Sometimes we need to inspect the job matrices the deployer generates for
 correctness. We can do this either by [inspecting the deployment plan that is posted to PRs](cicd/hub/pr-comment)
-or by running the `generate-helm-upgrade-jobs` command of the deployer [locally](tutorials:setup).
+or by running the `generate helm-upgrade-jobs` command of the deployer [locally](tutorials:setup).
 
 This will output the same deployment plan that is used in the PR comment, which
 is a table formatted by [`rich`](https://rich.readthedocs.io). However, we sometimes
@@ -186,7 +186,7 @@ export CI=true
 ```
 
 This will trigger the deployer to behave as if it is running in a CI environment.
-Principally, this means that executing `generate-helm-upgrade-jobs` will write
+Principally, this means that executing `generate helm-upgrade-jobs` will write
 two files to your local environment. The first file is called `pr-number.txt`
 and can be ignored (it is used by the workflow that posts the deployment plan
 as a comment and therefore requires the PR number). The second file we set the
@@ -203,7 +203,7 @@ our JSON formatted job matrices will be written to.
 Now we're set up, we can run:
 
 ```bash
-deployer generate-helm-update-jobs {comma separated list of changed files}
+deployer generate helm-upgrade-jobs {comma separated list of changed files}
 ```
 
 where the list of changed files you can either provide yourself or you can copy-paste
diff --git a/docs/sre-guide/support/build-image-remotely.md b/docs/sre-guide/support/build-image-remotely.md
index 60386bdc6f..27e87f1c87 100644
--- a/docs/sre-guide/support/build-image-remotely.md
+++ b/docs/sre-guide/support/build-image-remotely.md
@@ -9,10 +9,10 @@ scale upload / download speeds.
 
 ## Building images remotely
 
-1. From a clone of the `infrastructure` repository, use the `start-docker-proxy` command.
+1. From a clone of the `infrastructure` repository, use the `exec debug start-docker-proxy` command.
 
    ```bash
-   deployer start-docker-proxy
+   deployer exec debug start-docker-proxy
   ```
 
   This will forward your *local* computer's port `23760` to the port `2376` running
diff --git a/docs/sre-guide/support/home-dir.md b/docs/sre-guide/support/home-dir.md
index 13d315afba..09e82b1bd3 100644
--- a/docs/sre-guide/support/home-dir.md
+++ b/docs/sre-guide/support/home-dir.md
@@ -10,9 +10,7 @@ Sample notebook log from non-starting pod due to a dotfile that doesn't have cor
 /srv/start: line 23: exec: jupyterhub-singleuser: not found
 ```
 
-The
-[`exec-homes-shell`](https://github.com/2i2c-org/infrastructure/blob/master/deployer/README.md#exec-homes-shell)
-subcommand of the deployer can help us here.
+The `exec shell homes` subcommand of the deployer can help us here.
 
 ```bash
 export CLUSTER_NAME=
@@ -20,7 +18,7 @@ export HUB_NAME=
 ```
 
 ```bash
-deployer exec-homes-shell $CLUSTER_NAME $HUB_NAME
+deployer exec shell homes $CLUSTER_NAME $HUB_NAME
 ```
 
 Will open a bash shell with all the home directories of all the users of `$HUB_NAME`
diff --git a/docs/topic/access-creds/cloud-auth.md b/docs/topic/access-creds/cloud-auth.md
index 7e6d908442..3953ad4b28 100644
--- a/docs/topic/access-creds/cloud-auth.md
+++ b/docs/topic/access-creds/cloud-auth.md
@@ -166,13 +166,13 @@ are used to provide access to the AWS account from your terminal.
   - `arn-of-the-mfa-device` can be found by visiting the 'Security Credentials'
     page when you're logged into the web console, after
   - `code-from-token` is a 6 digit integer code generated by your MFA device
 
-  Alternatively, the deployer has a convenience command - `exec-aws-shell`
+  Alternatively, the deployer has a convenience command - `exec shell aws`
   to simplify this, purely implementing the suggestions from
   [the AWS docs](https://repost.aws/knowledge-center/authenticate-mfa-cli).
   You can execute it like so:
 
   ```bash
-  $ deployer exec-aws-shell
+  $ deployer exec shell aws
  ```
 
  where `<profile_name>` must match the name of the profile in `~/.aws/credentials`
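+
+  For reference, the command's full signature takes the AWS profile name, the MFA
+  device ARN and the 6 digit token, in that order (the values below are
+  illustrative placeholders):
+
+  ```bash
+  $ deployer exec shell aws <profile_name> <arn-of-the-mfa-device> <code-from-token>
+  ```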