Dynamic Host Volumes #24479

Draft · wants to merge 14 commits into main
Conversation

@tgross (Member) commented Nov 18, 2024

Feature integration branch for dynamic host volumes.

Closes: #15489
Ref: https://hashicorp.atlassian.net/browse/NET-11259

@tgross added this to the 1.10.0 milestone Nov 18, 2024
tgross added a commit that referenced this pull request Nov 20, 2024
When making a request to create a dynamic host volume, users can pass a node
pool and constraints instead of a specific node ID.

This changeset implements node scheduling logic by instantiating a filter by
node pool and a constraint checker borrowed from the scheduler package. Because
host volumes with the same name can't land on the same host, we don't need to
support `distinct_hosts`/`distinct_property`; this would be challenging anyway
without building out a much larger node iteration mechanism to keep track of
usage across multiple hosts.

Ref: #24479
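For illustration, here is a minimal, self-contained Go sketch of the placement idea this commit describes (filter candidate nodes by node pool, then check constraints). The `Node`, `Constraint`, and `filterNodes` names are stand-ins for this sketch, not Nomad's scheduler types.

```go
package main

import "fmt"

// Illustrative stand-ins; Nomad's real scheduler types differ.
type Node struct {
	ID    string
	Pool  string
	Attrs map[string]string
}

type Constraint struct {
	Attr, Operand, Value string
}

// filterNodes keeps only nodes in the requested pool that satisfy every
// constraint. Only the equality operand is modeled here.
func filterNodes(nodes []Node, pool string, constraints []Constraint) []Node {
	var out []Node
	for _, n := range nodes {
		if pool != "" && n.Pool != pool {
			continue
		}
		ok := true
		for _, c := range constraints {
			if c.Operand == "=" && n.Attrs[c.Attr] != c.Value {
				ok = false
				break
			}
		}
		if ok {
			out = append(out, n)
		}
	}
	return out
}

func main() {
	nodes := []Node{
		{ID: "node-a", Pool: "default", Attrs: map[string]string{"os.name": "linux"}},
		{ID: "node-b", Pool: "gpu", Attrs: map[string]string{"os.name": "linux"}},
	}
	eligible := filterNodes(nodes, "default",
		[]Constraint{{Attr: "os.name", Operand: "=", Value: "linux"}})
	fmt.Println(eligible) // only node-a remains
}
```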
tgross added a commit that referenced this pull request Nov 20, 2024
When dynamic host volumes are created, they're written to the state store in a
"pending" state. Once the client fingerprints the volume it's eligible for
scheduling, so we mark the state as ready at that point.

Because the fingerprint could potentially be returned before the RPC handler has
a chance to write to the state store, this changeset adds test coverage to
verify that upserts of pending volumes check the node for a
previously-fingerprinted volume as well.

Ref: #24479
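A rough sketch of the ordering concern described above, with illustrative types rather than Nomad's state-store schema: a volume written as "pending" should be promoted to "ready" immediately if the target node has already fingerprinted it.

```go
package main

import "fmt"

// Illustrative type only; not Nomad's state-store schema.
type HostVolume struct {
	ID    string
	Node  string
	State string // "pending" or "ready"
}

// upsertVolume sketches the check: if the node has already reported the volume
// by the time the server writes it, store it as ready rather than pending.
func upsertVolume(vol HostVolume, fingerprinted map[string]bool) HostVolume {
	if vol.State == "pending" && fingerprinted[vol.ID] {
		vol.State = "ready"
	}
	return vol
}

func main() {
	fingerprinted := map[string]bool{"vol-1": true} // fingerprint arrived first
	v := upsertVolume(HostVolume{ID: "vol-1", Node: "node-a", State: "pending"}, fingerprinted)
	fmt.Println(v.State) // "ready"
}
```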
@tgross marked this pull request as ready for review November 20, 2024 21:26
@tgross requested review from a team as code owners November 20, 2024 21:26
@tgross marked this pull request as draft November 20, 2024 21:26
tgross and others added 9 commits November 20, 2024 17:03
This changeset implements the ACLs required for dynamic host volumes RPCs:
* `host-volume-write` is a coarse-grained policy that implies all operations.
* `host-volume-register` is the highest fine-grained privilege because it
  potentially bypasses quotas.
* `host-volume-create` is implicitly granted by `host-volume-register`.
* `host-volume-delete` is implicitly granted only by `host-volume-write`.
* `host-volume-read` is implicitly granted by `policy = "read"`.

These are namespaced operations, so the testing here is predominantly around
parsing and granting of implicit capabilities rather than the well-tested
`AllowNamespaceOperation` method.

This changeset does not include any changes to the `host_volumes` policy which
we'll need for claiming volumes on job submit. That'll be covered in a later PR.

Ref: https://hashicorp.atlassian.net/browse/NET-11549
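The implication rules listed in this commit message can be sketched as a small Go function. This is only an illustration of the stated rules, not Nomad's `acl` package code.

```go
package main

import "fmt"

// expandCapabilities encodes the implications described above:
// host-volume-write implies all operations, and host-volume-register implies
// host-volume-create. host-volume-read comes from a namespace `policy = "read"`
// and is not modeled as a capability implication here.
func expandCapabilities(granted []string) map[string]bool {
	caps := map[string]bool{}
	for _, c := range granted {
		caps[c] = true
		switch c {
		case "host-volume-write":
			caps["host-volume-register"] = true
			caps["host-volume-create"] = true
			caps["host-volume-delete"] = true
			caps["host-volume-read"] = true
		case "host-volume-register":
			caps["host-volume-create"] = true
		}
	}
	return caps
}

func main() {
	fmt.Println(expandCapabilities([]string{"host-volume-register"}))
	// map[host-volume-create:true host-volume-register:true]
}
```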
This changeset implements the state store schema for Dynamic Host Volumes, and
methods used to query the state for RPCs.

Ref: https://hashicorp.atlassian.net/browse/NET-11549
This changeset implements the RPC handlers for Dynamic Host Volumes, including
the plumbing needed to forward requests to clients. The client-side
implementation is stubbed and will be done under a separate PR.

Ref: https://hashicorp.atlassian.net/browse/NET-11549
This changeset implements the HTTP API endpoints for Dynamic Host Volumes.

The `GET /v1/volumes` endpoint is shared between CSI and DHV with a query
parameter for the type. In the interest of getting some working handlers
available for use in development (and minimizing the size of the diff to
review), this changeset doesn't do any sort of refactoring of how the existing
List Volumes CSI endpoint works. That will come in a later PR, as will the
corresponding `api` package updates we need to support the CLI.

Ref: https://hashicorp.atlassian.net/browse/NET-11549
This changeset implements a first pass at the CLI for Dynamic Host Volumes.

Ref: https://hashicorp.atlassian.net/browse/NET-11549
The `HostVolumeByID` state store method didn't add a watch channel to the
watchset, which meant that blocking queries against it would never unblock. The
tests missed this because they were racy, so move the updates for unblocking
tests into a `time.After` call to ensure the queries are blocked before the
update happens.
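For context, here is a sketch of the go-memdb blocking-query pattern the fix restores. The table and index names ("host_volumes", "id") are assumptions for the sketch, not Nomad's actual schema.

```go
package sketch

import (
	"fmt"

	memdb "github.com/hashicorp/go-memdb"
)

// hostVolumeByID shows the pattern: FirstWatch returns a watch channel
// alongside the object, and the method must add that channel to the caller's
// WatchSet; otherwise queries blocking on this volume never fire when it changes.
func hostVolumeByID(ws memdb.WatchSet, txn *memdb.Txn, ns, id string) (interface{}, error) {
	watchCh, obj, err := txn.FirstWatch("host_volumes", "id", ns, id)
	if err != nil {
		return nil, fmt.Errorf("host volume lookup failed: %w", err)
	}
	ws.Add(watchCh) // the missing step: register interest so blocking queries unblock
	return obj, nil
}
```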
Add several validation steps in the create/register RPCs for dynamic host
volumes. We first check that submitted volumes are self-consistent (e.g. max
capacity is more than min capacity), then that any updates we've made are
valid. We also validate against state: preventing claimed volumes from being
updated and preventing placement requests for nodes that don't exist.

Ref: #15489
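A minimal sketch of the self-consistency portion of that validation, with an illustrative request shape (the field names here are assumptions, not Nomad's structs). The checks against existing state (claimed volumes, unknown nodes) need a state store handle and are only noted in comments.

```go
package main

import (
	"errors"
	"fmt"
)

// Illustrative request shape for the sketch.
type volumeRequest struct {
	Name          string
	CapacityMinMB int64
	CapacityMaxMB int64
}

// validate collects self-consistency errors; state-dependent checks
// (claimed volume, nonexistent node) would happen in a later step.
func validate(req volumeRequest) error {
	var errs []error
	if req.Name == "" {
		errs = append(errs, errors.New("volume name is required"))
	}
	if req.CapacityMaxMB != 0 && req.CapacityMaxMB < req.CapacityMinMB {
		errs = append(errs, fmt.Errorf(
			"max capacity (%d) must be at least min capacity (%d)",
			req.CapacityMaxMB, req.CapacityMinMB))
	}
	return errors.Join(errs...)
}

func main() {
	err := validate(volumeRequest{Name: "data", CapacityMinMB: 100, CapacityMaxMB: 50})
	fmt.Println(err) // max capacity (50) must be at least min capacity (100)
}
```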
* mkdir: HostVolumePluginMkdir: just creates a directory
* example-host-volume: HostVolumePluginExternal: a plugin script that runs
  mkfs and mounts a loopback device

Co-authored-by: Tim Gross <[email protected]>
tgross added a commit that referenced this pull request Nov 21, 2024
When creating a dynamic host volume, set up an optional monitor that waits for
the node to fingerprint the volume as healthy.

Ref: #24479
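A generic sketch of such a monitor loop, assuming a polling callback that stands in for a volume-status lookup; this illustrates the waiting behavior, not Nomad's actual monitor code.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitForReady polls the volume's state until the node has fingerprinted it as
// healthy ("ready") or the caller's deadline expires. getState stands in for a
// status API call.
func waitForReady(ctx context.Context, getState func() (string, error), interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		state, err := getState()
		if err != nil {
			return err
		}
		if state == "ready" {
			return nil
		}
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for volume to become ready")
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	calls := 0
	err := waitForReady(ctx, func() (string, error) {
		calls++
		if calls > 2 {
			return "ready", nil // volume became healthy on the third poll
		}
		return "pending", nil
	}, 100*time.Millisecond)
	fmt.Println(err) // <nil>
}
```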
tgross added a commit that referenced this pull request Nov 21, 2024
Add support for dynamic host volumes to the search endpoint. Like many other
objects with UUID identifiers, we're not supporting fuzzy search here, just
prefix search on the fuzzy search endpoint.

Because the search endpoint only returns IDs, we need to separate CSI volumes
and host volumes for it to be useful. The new context is called `"host_volumes"`
to disambiguate it from `"volumes"`. In future versions of Nomad we should
consider deprecating the `"volumes"` context in favor of a `"csi_volumes"`
context.

Ref: #24479
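As a usage sketch, a prefix search against the new `"host_volumes"` context might look like the following, assuming a local agent at the default address and the request/response shape of the existing `/v1/search` endpoint; the prefix value is made up.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]string{
		"Prefix":  "7a3b",         // prefix of a host volume UUID (made up)
		"Context": "host_volumes", // the new context added in this changeset
	})
	resp, err := http.Post("http://127.0.0.1:4646/v1/search", "application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("search request failed:", err)
		return
	}
	defer resp.Body.Close()

	// Decode only the Matches map from the search response.
	var out struct {
		Matches map[string][]string
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Println(out.Matches["host_volumes"])
}
```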
tgross added a commit that referenced this pull request Nov 21, 2024
Adds dynamic host volumes to argument autocomplete for the `volume status` and
`volume delete` commands. Adds flag autocompletion for those commands plus
`volume create`.

Ref: #24479
Also ensure that the volume ID is UUID-shaped, so that user-provided input
like `id = "../../../"`, which is used as part of the target directory, cannot
find its way very far into the volume submission process.
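A small sketch of why that check matters: the ID becomes a path component, so rejecting anything that isn't UUID-shaped stops traversal input before it reaches the filesystem. The base path here is illustrative.

```go
package main

import (
	"fmt"
	"path/filepath"
	"regexp"
)

// uuidRe matches the canonical 8-4-4-4-12 hex UUID layout.
var uuidRe = regexp.MustCompile(`^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$`)

// volumePath joins the ID into a host path only after confirming it is
// UUID-shaped, so input like "../../../" is rejected up front.
func volumePath(base, id string) (string, error) {
	if !uuidRe.MatchString(id) {
		return "", fmt.Errorf("volume ID %q is not a UUID", id)
	}
	return filepath.Join(base, id), nil
}

func main() {
	if _, err := volumePath("/opt/nomad/volumes", "../../../"); err != nil {
		fmt.Println(err) // rejected before touching the filesystem
	}
}
```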
tgross added a commit that referenced this pull request Nov 22, 2024
Most Nomad upsert RPCs accept a single object, with the notable exception of
CSI. But in CSI we don't actually expose the multi-volume form to users except
through the Go API. It deeply complicates how we present errors to users,
especially once Sentinel policy enforcement enters the mix.

Refactor the `HostVolume.Create` and `HostVolume.Register` RPCs to take a single
volume instead of a slice of volumes.

Add a stub function for Enterprise policy enforcement. This requires splitting
out placement from the `createVolume` function so that we can ensure we've
completed placement before trying to enforce policy.

Ref: #24479
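The shape of that refactor can be sketched with illustrative request structs (not Nomad's actual `structs` package): the create RPC takes one volume rather than a slice, so an error applies unambiguously to a single submission.

```go
package sketch

// HostVolume is a placeholder for the volume payload; fields omitted.
type HostVolume struct{}

// Before: callers submitted a batch and errors had to be mapped back to entries.
type HostVolumeCreateRequestOld struct {
	Volumes []*HostVolume
}

// After: one volume per request, which simplifies error reporting and
// per-volume policy enforcement.
type HostVolumeCreateRequestNew struct {
	Volume *HostVolume
}
```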