Merge pull request 2i2c-org#4972 from sgibson91/rename-to-jupyterhub-home-nfs

Update references to renamed jupyterhub-home-nfs chart
sgibson91 authored Oct 17, 2024
2 parents 712b646 + e7373a9 commit b389dff
Showing 7 changed files with 21 additions and 19 deletions.
3 changes: 3 additions & 0 deletions .github/workflows/validate-clusters.yaml
@@ -37,6 +37,9 @@ on:
- "**"
workflow_dispatch:

permissions:
packages: read

jobs:
# This job inspects changed files in order to determine which cluster files
# should be validated. If helm-chart files change, then all clusters will be
2 changes: 1 addition & 1 deletion config/clusters/nasa-veda/staging.values.yaml
@@ -210,7 +210,7 @@ basehub:
      server: *url
      username: *username

-  jupyter-home-nfs:
+  jupyterhub-home-nfs:
    enabled: true
    eks:
      enabled: true
17 changes: 8 additions & 9 deletions docs/howto/features/storage-quota.md
@@ -26,18 +26,17 @@ ebs_volumes = {

This will create a disk with a size of 100GB for the `staging` hub that we can reference when configuring the NFS server.

-## Enabling jupyter-home-nfs
+## Enabling jupyterhub-home-nfs

-To be able to configure per-user storage quotas, we need to run an in-cluster NFS server using [`jupyter-home-nfs`](https://github.com/sunu/jupyter-home-nfs). This can be enabled by setting `jupyter-home-nfs.enabled` to `true` in the hub's values file.
+To be able to configure per-user storage quotas, we need to run an in-cluster NFS server using [`jupyterhub-home-nfs`](https://github.com/sunu/jupyterhub-home-nfs). This can be enabled by setting `jupyterhub-home-nfs.enabled` to `true` in the hub's values file.

-jupyter-home-nfs expects a reference to a pre-provisioned disk. Here's an example of how to configure that on AWS and GCP.
+jupyterhub-home-nfs expects a reference to a pre-provisioned disk. Here's an example of how to configure that on AWS and GCP.

`````{tab-set}
````{tab-item} AWS
:sync: aws-key
```yaml
-jupyter-home-nfs:
+jupyterhub-home-nfs:
  enabled: true
  eks:
    enabled: true
@@ -48,7 +47,7 @@ jupyter-home-nfs:
````{tab-item} GCP
:sync: gcp-key
```yaml
-jupyter-home-nfs:
+jupyterhub-home-nfs:
  enabled: true
  gke:
    enabled: true
@@ -63,7 +62,7 @@ These changes can be deployed by running the following command:
deployer deploy <cluster_name> <hub_name>
```

-Once these changes are deployed, we should have a new NFS server running in our cluster through the `jupyter-home-nfs` Helm chart. We can get the IP address of the NFS server by running the following commands:
+Once these changes are deployed, we should have a new NFS server running in our cluster through the `jupyterhub-home-nfs` Helm chart. We can get the IP address of the NFS server by running the following commands:

```bash
# Authenticate with the cluster
@@ -120,10 +119,10 @@ deployer deploy <cluster_name> <hub_name>

Now we can set quotas for each user and configure the path to monitor for storage quota enforcement.

-This can be done by updating `basehub.jupyter-home-nfs.quotaEnforcer` in the hub's values file. For example, to set a quota of 10GB for all users on the `staging` hub, we would add the following to the hub's values file:
+This can be done by updating `basehub.jupyterhub-home-nfs.quotaEnforcer` in the hub's values file. For example, to set a quota of 10GB for all users on the `staging` hub, we would add the following to the hub's values file:

```yaml
-jupyter-home-nfs:
+jupyterhub-home-nfs:
  quotaEnforcer:
    hardQuota: "10" # in GB
    path: "/export/staging"
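Putting the renamed pieces of this diff together, a hub's values file after the rename might look like the following sketch. The volume ID is a placeholder, and the exact nesting of these keys can differ between deployments:

```yaml
# Sketch of a post-rename hub configuration on AWS; the volume ID below is
# a hypothetical placeholder, not a real disk.
jupyterhub-home-nfs:
  enabled: true
  eks:
    enabled: true
    volumeId: vol-0123456789abcdef0 # hypothetical pre-provisioned EBS volume
  quotaEnforcer:
    hardQuota: "10" # in GB
    path: "/export/staging"
```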
8 changes: 4 additions & 4 deletions helm-charts/basehub/Chart.yaml
@@ -25,7 +25,7 @@ dependencies:
version: "2024.1.0"
repository: "https://helm.dask.org/"
condition: dask-gateway.enabled
- name: jupyter-home-nfs
version: 0.0.5
repository: oci://ghcr.io/sunu/jupyter-home-nfs
condition: jupyter-home-nfs.enabled
- name: jupyterhub-home-nfs
version: 0.0.6
repository: oci://ghcr.io/2i2c-org/jupyterhub-home-nfs
condition: jupyterhub-home-nfs.enabled
4 changes: 2 additions & 2 deletions helm-charts/basehub/values.schema.yaml
@@ -267,7 +267,7 @@ properties:
    type: object
    additionalProperties: true

-  jupyter-home-nfs:
+  jupyterhub-home-nfs:
    type: object
    additionalProperties: true
    required:
@@ -276,7 +276,7 @@ properties:
      enabled:
        type: boolean
        description: |
-          Enable using jupyter-home-nfs to provide persistent storage for
+          Enable using jupyterhub-home-nfs to provide persistent storage for
          user home directories on an in-cluster NFS server with storage
          quota enforcement.
2 changes: 1 addition & 1 deletion helm-charts/basehub/values.yaml
@@ -1655,5 +1655,5 @@ jupyterhub:
    else:
      print("dask-gateway service not found, this should not happen!")
-jupyter-home-nfs:
+jupyterhub-home-nfs:
  enabled: false
4 changes: 2 additions & 2 deletions terraform/aws/variables.tf
@@ -304,7 +304,7 @@ variable "ebs_volumes" {
  description = <<-EOT
    Deploy one or more AWS ElasticBlockStore volumes.
-    This provisions a managed EBS volume that can be used by jupyter-home-nfs server
-    to store home directories for users.
+    This provisions a managed EBS volume that can be used by jupyterhub-home-nfs
+    server to store home directories for users.
  EOT
}
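For context, the `ebs_volumes` variable changed above is set in a cluster's terraform values. A hedged sketch follows, with field names assumed from the storage-quota docs excerpt earlier in this diff rather than verified against the full variable schema:

```hcl
# Hypothetical sketch: provision a 100GB disk for the staging hub's
# jupyterhub-home-nfs server. Field names are assumptions.
ebs_volumes = {
  "staging" = {
    size        = 100     # in GB
    type        = "gp3"   # assumed volume type
    name_suffix = "staging"
    tags        = {}
  }
}
```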
