add placement spread assertions
The `assert.placement` field of a `gdt-kube` test Spec allows a test author to
specify the expected scheduling outcome for a set of Pods returned by the
Kubernetes API server from the result of a `kube.get` call.

Suppose you have a Deployment resource with `TopologySpreadConstraints` specifying
that the Pods in the Deployment must land on different hosts:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
       - name: nginx
         image: nginx:latest
         ports:
          - containerPort: 80
      topologySpreadConstraints:
       - maxSkew: 1
         topologyKey: kubernetes.io/hostname
         whenUnsatisfiable: DoNotSchedule
         labelSelector:
           matchLabels:
             app: nginx
```

You can create a `gdt-kube` test case that verifies that your `nginx`
Deployment's Pods are evenly spread across all available hosts:

```yaml
tests:
 - kube:
     get: deployments/nginx
   assert:
     placement:
       spread: kubernetes.io/hostname
```

If there are more hosts than the `spec.replicas` in the Deployment, `gdt-kube`
will ensure that each Pod landed on a unique host. If there are fewer hosts
than the `spec.replicas` in the Deployment, `gdt-kube` will ensure that there
is an even spread of Pods to hosts, with any host having no more than one more
Pod than any other.
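In other words, the spread assertion passes when the per-host Pod counts differ by
at most one. A minimal sketch of that evenness rule (illustrative only, not
gdt-kube's actual implementation; the function name and inputs are invented for
this example):

```go
package main

import (
    "fmt"
    "math"
)

// spreadIsEven reports whether the busiest topology domain (e.g. host) has at
// most one more Pod than the least busy one, which is the "no host having more
// than one more Pod than any other" rule described above.
func spreadIsEven(podsPerDomain map[string]int) bool {
    if len(podsPerDomain) == 0 {
        return true
    }
    min, max := math.MaxInt, 0
    for _, n := range podsPerDomain {
        if n < min {
            min = n
        }
        if n > max {
            max = n
        }
    }
    return max-min <= 1
}

func main() {
    fmt.Println(spreadIsEven(map[string]int{"host-a": 2, "host-b": 2, "host-c": 2})) // true
    fmt.Println(spreadIsEven(map[string]int{"host-a": 3, "host-b": 2, "host-c": 1})) // false
}
```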

Debug/trace output includes information on what the placement spread looked
like to the gdt-kube placement spread asserter:

```
jaypipes@lappie:~/src/github.com/gdt-dev/kube$ go test -v -run TestPlacementSpread ./eval_test.go
=== RUN   TestPlacementSpread
=== RUN   TestPlacementSpread/placement-spread
[gdt] [placement-spread] kube: create [ns: default]
[gdt] [placement-spread] create-deployment (try 1 after 1.254µs) ok: true
[gdt] [placement-spread] using timeout of 40s (expected: false)
[gdt] [placement-spread] kube: get [ns: default]
[gdt] [placement-spread] deployment-ready (try 1 after 2.482µs) ok: false
[gdt] [placement-spread] deployment-ready (try 1 after 2.482µs) failure: assertion failed: match field not equal: $.status.readyReplicas not present in subject
[gdt] [placement-spread] kube: get [ns: default]
[gdt] [placement-spread] deployment-ready (try 2 after 307.618472ms) ok: false
[gdt] [placement-spread] deployment-ready (try 2 after 307.618472ms) failure: assertion failed: match field not equal: $.status.readyReplicas not present in subject
[gdt] [placement-spread] kube: get [ns: default]
[gdt] [placement-spread] deployment-ready (try 3 after 1.245091704s) ok: false
[gdt] [placement-spread] deployment-ready (try 3 after 1.245091704s) failure: assertion failed: match field not equal: $.status.readyReplicas not present in subject
[gdt] [placement-spread] kube: get [ns: default]
[gdt] [placement-spread] deployment-ready (try 4 after 2.496969168s) ok: false
[gdt] [placement-spread] deployment-ready (try 4 after 2.496969168s) failure: assertion failed: match field not equal: $.status.readyReplicas had different values. expected 6 but found 3
[gdt] [placement-spread] kube: get [ns: default]
[gdt] [placement-spread] deployment-ready (try 5 after 3.785007183s) ok: true
[gdt] [placement-spread] kube: get [ns: default]
[gdt] [placement-spread/assert-placement-spread] domain: topology.kubernetes.io/zone, unique nodes: 3
[gdt] [placement-spread/assert-placement-spread] domain: topology.kubernetes.io/zone, pods per node: [2 2 2]
[gdt] [placement-spread] deployment-spread-evenly-across-hosts (try 1 after 3.369µs) ok: true
[gdt] [placement-spread] kube: delete [ns: default]
[gdt] [placement-spread] delete-deployment (try 1 after 1.185µs) ok: true

--- PASS: TestPlacementSpread (4.98s)
    --- PASS: TestPlacementSpread/placement-spread (4.96s)
PASS
ok  	command-line-arguments	4.993s
```

Issue #7

Signed-off-by: Jay Pipes <[email protected]>
jaypipes committed Jun 1, 2024
1 parent cd9bc31 commit d778db4
Showing 16 changed files with 609 additions and 441 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/gate-tests.yml
@@ -13,7 +13,7 @@ jobs:
  test-skip-kind:
    strategy:
      matrix:
        go: ['1.19', '1.20', '1.21']
        go: ['1.22']
        os: [macos-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    steps:
@@ -42,7 +42,7 @@ jobs:
  test-all:
    strategy:
      matrix:
        go: ['1.19', '1.20', '1.21']
        go: ['1.22']
        os: [ubuntu-latest]
    runs-on: ${{ matrix.os }}
    steps:
106 changes: 106 additions & 0 deletions README.md
@@ -184,6 +184,16 @@ matches some expectation:
`ConditionType` should have
* `reason` which is the exact string that should be present in the
`Condition` with the `ConditionType`
* `assert.placement`: (optional) an object describing assertions to make about
the placement (scheduling outcome) of Pods returned in the `kube.get` result.
* `assert.placement.spread`: (optional) a single string or array of strings
for topology keys that the Pods returned in the `kube.get` result should be
spread evenly across, e.g. `topology.kubernetes.io/zone` or
`kubernetes.io/hostname`.
* `assert.placement.pack`: (optional) a single string or array of strings for
topology keys that the Pods returned in the `kube.get` result should be
bin-packed within, e.g. `topology.kubernetes.io/zone` or
`kubernetes.io/hostname`.
* `assert.json`: (optional) object describing the assertions to make about
resource(s) returned from the `kube.get` call to the Kubernetes API server.
* `assert.json.len`: (optional) integer representing the number of bytes in the
@@ -450,6 +460,102 @@ tests:
reason: NewReplicaSetAvailable
```

### Asserting scheduling outcomes using `assert.placement`

The `assert.placement` field of a `gdt-kube` test Spec allows a test author to
specify the expected scheduling outcome for a set of Pods returned by the
Kubernetes API server from the result of a `kube.get` call.

#### Asserting even spread of Pods across a topology

Suppose you have a Deployment resource with `TopologySpreadConstraints` specifying
that the Pods in the Deployment must land on different hosts:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
       - name: nginx
         image: nginx:latest
         ports:
          - containerPort: 80
      topologySpreadConstraints:
       - maxSkew: 1
         topologyKey: kubernetes.io/hostname
         whenUnsatisfiable: DoNotSchedule
         labelSelector:
           matchLabels:
             app: nginx
```

You can create a `gdt-kube` test case that verifies that your `nginx`
Deployment's Pods are evenly spread across all available hosts:

```yaml
tests:
 - kube:
     get: deployments/nginx
   assert:
     placement:
       spread: kubernetes.io/hostname
```

If there are more hosts than the `spec.replicas` in the Deployment, `gdt-kube`
will ensure that each Pod landed on a unique host. If there are fewer hosts
than the `spec.replicas` in the Deployment, `gdt-kube` will ensure that there
is an even spread of Pods to hosts, with any host having no more than one more
Pod than any other.
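Because `assert.placement.spread` accepts either a single string or an array of
strings (see the field descriptions above), you can also assert an even spread
across more than one topology domain in a single test case. For example, a
sketch that checks spread across both zones and hosts, assuming the cluster
spans multiple zones and the Deployment is configured to spread across them:

```yaml
tests:
 - kube:
     get: deployments/nginx
   assert:
     placement:
       spread:
        - topology.kubernetes.io/zone
        - kubernetes.io/hostname
```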

#### Asserting bin-packing of Pods

Suppose you have configured your Kubernetes scheduler to bin-pack Pods onto
hosts by scheduling Pods to hosts with the most allocated CPU resources:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
 - pluginConfig:
    - args:
        scoringStrategy:
          resources:
           - name: cpu
             weight: 100
          type: MostAllocated
      name: NodeResourcesFit
```

You can create a `gdt-kube` test case that verifies that your `nginx`
Deployment's Pods are packed onto the fewest unique hosts:

```yaml
tests:
 - kube:
     get: deployments/nginx
   assert:
     placement:
       pack: kubernetes.io/hostname
```

`gdt-kube` will examine the total number of hosts that meet the nginx
Deployment's scheduling and resource constraints and then assert that the
number of hosts the Deployment's Pods landed on is the minimum number that
would fit the total requested resources.
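The core of that check is a lower bound on host count: the fewest hosts that
could hold the Deployment's total resource requests given what each candidate
host has free. A rough sketch of that bound (illustrative only; the real
assertion examines the hosts that meet the Deployment's scheduling and resource
constraints, as described above):

```go
package main

import "fmt"

// minHostsNeeded returns a lower bound on how many hosts are required to fit
// totalRequested resource units (e.g. CPU millicores) when each candidate host
// has perHostFree units available: ceil(totalRequested / perHostFree).
func minHostsNeeded(totalRequested, perHostFree int64) int64 {
    if perHostFree <= 0 || totalRequested <= 0 {
        return 0
    }
    return (totalRequested + perHostFree - 1) / perHostFree
}

func main() {
    // Example: 6 replicas requesting 500m CPU each (3000m total) on hosts with
    // 2000m free each should bin-pack onto 2 hosts rather than 3 or more.
    fmt.Println(minHostsNeeded(6*500, 2000)) // 2
}
```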

### Asserting resource fields using `assert.json`

The `assert.json` field of a `gdt-kube` test Spec allows a test author to
2 changes: 1 addition & 1 deletion action.go
@@ -100,7 +100,7 @@ func (a *Action) Do(
) error {
    cmd := a.getCommand()

    debug.Println(ctx, t, "kube: %s [ns: %s]", cmd, ns)
    debug.Println(ctx, "kube: %s [ns: %s]", cmd, ns)
    switch cmd {
    case "get":
        return a.get(ctx, t, c, ns, out)
52 changes: 48 additions & 4 deletions assertions.go
@@ -5,6 +5,7 @@
package kube

import (
"context"
"encoding/json"
"errors"
"fmt"
@@ -158,6 +159,8 @@ type Expect struct {
    // reason: NewReplicaSetAvailable
    // ```
    Conditions map[string]*ConditionMatch `yaml:"conditions,omitempty"`
    // Placement describes expected Pod scheduling spread or pack outcomes.
    Placement *PlacementAssertion `yaml:"placement,omitempty"`
}

// conditionMatch is a struct with fields that we will match a resource's
@@ -196,8 +199,21 @@ func (m *ConditionMatch) UnmarshalYAML(node *yaml.Node) error {
    return nil
}

// PlacementAssertion describes an expectation for Pod scheduling outcomes.
type PlacementAssertion struct {
    // Spread contains zero or more topology keys that gdt-kube will assert an
    // even spread across.
    Spread *gdttypes.FlexStrings `yaml:"spread,omitempty"`
    // Pack contains zero or more topology keys that gdt-kube will assert
    // bin-packing of resources within.
    Pack *gdttypes.FlexStrings `yaml:"pack,omitempty"`
}

// assertions contains all assertions made for the exec test
type assertions struct {
    // c is the connection to the Kubernetes API for when the assertions need
    // to query for things like placement outcomes or Node resources.
    c *connection
    // failures contains the set of error messages for failed assertions
    failures []error
    // exp contains the expected conditions to assert against
@@ -226,7 +242,7 @@ func (a *assertions) Failures() []error {

// OK checks all the assertions against the supplied arguments and returns true
// if all assertions pass.
func (a *assertions) OK() bool {
func (a *assertions) OK(ctx context.Context) bool {
    exp := a.exp
    if exp == nil {
        if a.err != nil {
@@ -247,7 +263,10 @@
    if !a.conditionsOK() {
        return false
    }
    if !a.jsonOK() {
    if !a.jsonOK(ctx) {
        return false
    }
    if !a.placementOK(ctx) {
        return false
    }
    return true
@@ -426,7 +445,7 @@ func (a *assertions) conditionsOK() bool {

// jsonOK returns true if the subject matches the JSON conditions, false
// otherwise
func (a *assertions) jsonOK() bool {
func (a *assertions) jsonOK(ctx context.Context) bool {
    exp := a.exp
    if exp.JSON != nil && a.hasSubject() {
        var err error
@@ -438,7 +457,7 @@ func (a *assertions) jsonOK() bool {
            }
        }
        ja := gdtjson.New(exp.JSON, b)
        if !ja.OK() {
        if !ja.OK(ctx) {
            for _, f := range ja.Failures() {
                a.Fail(f)
            }
@@ -448,6 +467,29 @@
return true
}

// placementOK returns true if the subject matches the Placement conditions,
// false otherwise
func (a *assertions) placementOK(ctx context.Context) bool {
    exp := a.exp
    if exp.Placement != nil && a.hasSubject() {
        // TODO(jaypipes): Handle list returns...
        res, ok := a.r.(*unstructured.Unstructured)
        if !ok {
            panic("expected result to be unstructured.Unstructured")
        }
        spread := exp.Placement.Spread
        if spread != nil {
            ok = a.placementSpreadOK(ctx, res, spread.Values())
        }
        pack := exp.Placement.Pack
        if pack != nil {
            ok = ok && a.placementPackOK(ctx, res, pack.Values())
        }
        return ok
    }
    return true
}

// hasSubject returns true if the assertions `r` field (which contains the
// subject of which we inspect) is not `nil`.
func (a *assertions) hasSubject() bool {
@@ -465,11 +507,13 @@ func (a *assertions) hasSubject() bool {
// newAssertions returns an assertions object populated with the supplied http
// spec assertions
func newAssertions(
    c *connection,
    exp *Expect,
    err error,
    r interface{},
) gdttypes.Assertions {
    return &assertions{
        c: c,
        failures: []error{},
        exp: exp,
        err: err,
3 changes: 2 additions & 1 deletion connect.go
@@ -7,6 +7,7 @@ package kube
import (
"context"
"fmt"
"os"

gdtcontext "github.com/gdt-dev/gdt/context"
"k8s.io/apimachinery/pkg/api/meta"
@@ -191,7 +192,7 @@ func (s *Spec) connect(ctx context.Context) (*connection, error) {
    }
    disco := discocached.NewMemCacheClient(discoverer)
    mapper := restmapper.NewDeferredDiscoveryRESTMapper(disco)
    expander := restmapper.NewShortcutExpander(mapper, disco)
    expander := restmapper.NewShortcutExpander(mapper, disco, func(s string) { fmt.Fprint(os.Stderr, s) })

    return &connection{
        mapper: expander,
8 changes: 4 additions & 4 deletions eval.go
@@ -65,10 +65,10 @@ func (s *Spec) Eval(ctx context.Context, t *testing.T) *result.Result {
                return result.New(result.WithRuntimeError(err))
            }
        }
        a = newAssertions(s.Assert, err, out)
        success = a.OK()
        a = newAssertions(c, s.Assert, err, out)
        success = a.OK(ctx)
        debug.Println(
            ctx, t, "%s (try %d after %s) ok: %v",
            ctx, "%s (try %d after %s) ok: %v",
            s.Title(), attempts, after, success,
        )
        if success {
@@ -77,7 +77,7 @@ func (s *Spec) Eval(ctx context.Context, t *testing.T) *result.Result {
        }
        for _, f := range a.Failures() {
            debug.Println(
                ctx, t, "%s (try %d after %s) failure: %s",
                ctx, "%s (try %d after %s) failure: %s",
                s.Title(), attempts, after, f,
            )
        }