Fast Deploy (Experimental)
OVERVIEW

This patch adds support for the Fast Deploy feature, i.e. the ability
to quickly provision a VM as a linked clone. The feature is
experimental and must be enabled manually, and many aspects of it may
change before it is ready for production.

ACTIVATION

Enabling the experimental Fast Deploy feature requires setting the
environment variable `FSS_WCP_VMSERVICE_FAST_DEPLOY` to `true` in the
VM Operator deployment.

Please note that even when the feature is activated, it is possible to
bypass it altogether for a given VM by specifying the following
annotation on that VM: `vmoperator.vmware.com/fast-deploy: "false"`.
This annotation is ignored unless the feature is already activated via
the environment variable described above.
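
For example, with the FSS enabled, a single VM can be opted out either
with `kubectl annotate` or programmatically. The following is a minimal
sketch of the latter; the v1alpha3 API import path and the helper name
are assumptions for illustration, while the annotation key itself comes
from this patch:

```go
package main

import (
	"context"

	vmopv1 "github.com/vmware-tanzu/vm-operator/api/v1alpha3"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// optOutOfFastDeploy sets the opt-out annotation on an existing VM. The
// annotation is only honored when FSS_WCP_VMSERVICE_FAST_DEPLOY is enabled.
func optOutOfFastDeploy(ctx context.Context, c client.Client, namespace, name string) error {
	var vm vmopv1.VirtualMachine
	if err := c.Get(ctx, client.ObjectKey{Namespace: namespace, Name: name}, &vm); err != nil {
		return err
	}
	if vm.Annotations == nil {
		vm.Annotations = map[string]string{}
	}
	vm.Annotations["vmoperator.vmware.com/fast-deploy"] = "false"
	return c.Update(ctx, &vm)
}
```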

SUPPORT

The following is true of the feature in its current, experimental
state:

* Supports all VirtualMachineImage and ClusterVirtualMachineImage
  resources that are of type OVF, without any changes to the images.

* Supports stretched Supervisors.

* Supports vSAN.

CONSTRAINTS

The following is a list of constraints related to this feature at this
time:

* Is not compatible with VM encryption, either via a vTPM or an
  encryption storage policy.

  Child disks can only be encrypted if their parent disks are encrypted,
  and more importantly, child disks must use the same encryption key as
  the parent disk.

  The first constraint could be tolerable -- users could simply build
  encrypted VMs and then publish them as images.

  However, the second constraint is not tenable given the support for
  the upcoming Bring Your Own Key (BYOK) provider feature.

* Is not compatible with backup/restore of VM Service VMs.

  The qualified backup/restore workflows for VM Service VMs have never
  been validated with linked clones, as linked clones have not been
  supported by VM Service until now.

  Due to how the linked clones are created in this feature, users should
  not expect existing backup/restore software to work with VMs
  provisioned with Fast Deploy at this time.

* May complicate datastore maintenance/migration.

  Existing datastore maintenance/migration workflows may not be aware of
  or know how to handle the top-level `.contentlib-cache` directories
  created by VM Operator when Content Library Item disks are cached on
  recommended datastores.

CREATE VM WORKFLOW

The changes to the "create VM" workflow for the Fast Deploy feature can
be summarized as follows:

1. The ConfigSpec used to create/place the VM now includes:

   a. The image's disks and the controllers used by those disks.

      The disks also specify the VM spec's storage class's underlying
      storage policy ID.

   b. The image's guest ID if none was specified by the VM class or VM
      spec.

   c. The root `VMProfile` now specifies the VM spec's storage class's
      underlying storage policy ID.

2. A placement recommendation for datastores is now always required;
   the storage policies specified in the ConfigSpec are used to
   recommend a compatible datastore.

3. The path(s) to the image's VMDK file(s) from the underlying Content
   Library Item are retrieved.

4. A special, top-level directory named `.contentlib-cache` is created,
   if it does not exist, at the root of the recommended datastore.

   Please note that vSAN is supported, in which case the top-level
   directory may actually be a UUID that resolves to
   `.contentlib-cache`.

5. A path is constructed that points to where the disk(s) for the
   library item are expected to be cached on the recommended datastore,
   ex.:
   `[<DATASTORE>] .contentlib-cache/<LIB_ITEM_ID>/<LIB_ITEM_CONTENT_VERSION>`

   If this path does not exist, it is created.

6. The following occurs for each of the library item's VMDK files:

    a. The first 17 characters of a SHA-1 sum of the VMDK file name are
       used to build the expected path to the VMDK file's cached
       location on the recommended datastore (see the first sketch
       following this list), ex.:
       `[<DATASTORE>] .contentlib-cache/<LIB_ITEM_ID>/<LIB_ITEM_CONTENT_VERSION>/<17_CHAR_SHA1_SUM>.vmdk`

    b. If there is no VMDK at the above path, the VMDK file is copied to
       the above path.

7. The `VirtualDisk` devices in the ConfigSpec used to create the VM are
   updated with `VirtualDiskFlatVer2BackingInfo` backings that specify a
   parent backing (see the second sketch following this list).

   This parent backing points to the appropriate, cached base disk from
   above.

8. The `CreateVM_Task` VMODL1 API is used to create the VM. Because the
   VM's disks have parent backings, this new VM is effectively a linked
   clone.
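
For concreteness, here is a minimal sketch of the cache-path
construction described in steps 5 and 6. The helper name is
hypothetical, and the assumption that the SHA-1 sum is computed over
the bare VMDK file name follows the wording above rather than the
actual implementation:

```go
package main

import (
	"crypto/sha1"
	"fmt"
	"path"
)

// cachedDiskPath returns the expected cached location of a library item VMDK
// on the recommended datastore, e.g.
// "[ds1] .contentlib-cache/<LIB_ITEM_ID>/<LIB_ITEM_CONTENT_VERSION>/<17_CHAR_SHA1_SUM>.vmdk".
func cachedDiskPath(datastore, itemID, contentVersion, vmdkFileName string) string {
	sum := fmt.Sprintf("%x", sha1.Sum([]byte(vmdkFileName)))
	return fmt.Sprintf("[%s] %s", datastore,
		path.Join(".contentlib-cache", itemID, contentVersion, sum[:17]+".vmdk"))
}

func main() {
	fmt.Println(cachedDiskPath("ds1", "lib-item-id", "v1", "photon-disk1.vmdk"))
}
```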

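And a sketch of the ConfigSpec pieces from steps 1, 7, and 8: the VM
profile carries the storage class's underlying storage policy ID, and
each `VirtualDisk` receives a flat backing whose parent names the
cached base disk, so `CreateVM_Task` effectively produces a linked
clone. The disk mode, thin provisioning, and placeholder device keys
below are illustrative assumptions, not values taken from this patch:

```go
package sketch

import (
	vimtypes "github.com/vmware/govmomi/vim25/types"
)

// linkedCloneDiskSpec adds a disk whose flat backing points at the cached
// base disk (e.g. a path like the one built in the previous sketch).
func linkedCloneDiskSpec(parentPath, storagePolicyID string) vimtypes.BaseVirtualDeviceConfigSpec {
	return &vimtypes.VirtualDeviceConfigSpec{
		Operation:     vimtypes.VirtualDeviceConfigSpecOperationAdd,
		FileOperation: vimtypes.VirtualDeviceConfigSpecFileOperationCreate,
		Profile: []vimtypes.BaseVirtualMachineProfileSpec{
			&vimtypes.VirtualMachineDefinedProfileSpec{ProfileId: storagePolicyID},
		},
		Device: &vimtypes.VirtualDisk{
			VirtualDevice: vimtypes.VirtualDevice{
				Key:           -100, // placeholder device key
				ControllerKey: -101, // placeholder controller key
				Backing: &vimtypes.VirtualDiskFlatVer2BackingInfo{
					DiskMode:        string(vimtypes.VirtualDiskModePersistent),
					ThinProvisioned: vimtypes.NewBool(true),
					Parent: &vimtypes.VirtualDiskFlatVer2BackingInfo{
						VirtualDeviceFileBackingInfo: vimtypes.VirtualDeviceFileBackingInfo{
							FileName: parentPath,
						},
					},
				},
			},
		},
	}
}

// configSpecForCreate shows the root VM profile and guest ID from step 1; the
// caller would append the controller and disk device changes.
func configSpecForCreate(name, guestID, storagePolicyID string) vimtypes.VirtualMachineConfigSpec {
	return vimtypes.VirtualMachineConfigSpec{
		Name:    name,
		GuestId: guestID,
		VmProfile: []vimtypes.BaseVirtualMachineProfileSpec{
			&vimtypes.VirtualMachineDefinedProfileSpec{ProfileId: storagePolicyID},
		},
	}
}
```
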
CACHE CLEANUP

The cached disks and the entire cache folder structure are
automatically removed once no VMs deployed as linked clones are using a
cached disk.

This will likely change in the future to prevent the need to re-cache a
disk just because the VMs deployed from it are no longer using it.
Otherwise, disks may need to be re-cached repeatedly, which reduces the
value this feature provides.
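
As a rough illustration of the condition above (an assumption about its
shape, not code from this patch), a cached library item directory may
only be removed once no deployed VM's disk backing chain still
references a file beneath it:

```go
package sketch

import (
	"strings"

	vimtypes "github.com/vmware/govmomi/vim25/types"
)

// cacheDirInUse reports whether any disk backing chain still references a
// file under the cached library item directory, e.g.
// "[ds1] .contentlib-cache/<LIB_ITEM_ID>/<LIB_ITEM_CONTENT_VERSION>".
func cacheDirInUse(cacheDir string, backings []*vimtypes.VirtualDiskFlatVer2BackingInfo) bool {
	for _, b := range backings {
		for p := b; p != nil; p = p.Parent {
			if strings.HasPrefix(p.FileName, cacheDir) {
				return true
			}
		}
	}
	return false
}
```
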
akutz committed Dec 12, 2024
1 parent 66000bf commit a45d441
Showing 24 changed files with 1,884 additions and 139 deletions.
6 changes: 6 additions & 0 deletions config/wcp/vmoperator/manager_env_var_patch.yaml
@@ -112,6 +112,12 @@
    name: FSS_WCP_SUPERVISOR_ASYNC_UPGRADE
    value: "<FSS_WCP_SUPERVISOR_ASYNC_UPGRADE_VALUE>"

- op: add
  path: /spec/template/spec/containers/0/env/-
  value:
    name: FSS_WCP_VMSERVICE_FAST_DEPLOY
    value: "<FSS_WCP_VMSERVICE_FAST_DEPLOY_VALUE>"

#
# Feature state switch flags beneath this line are enabled on main and only
# retained in this file because it is used by internal testing to determine the
@@ -8,6 +8,7 @@ import (
"errors"
"fmt"
"reflect"
"strconv"
"strings"
"time"

@@ -288,9 +289,26 @@ func (r *Reconciler) Reconcile(ctx context.Context, req ctrl.Request) (_ ctrl.Re
return ctrl.Result{}, client.IgnoreNotFound(err)
}

logger := ctrl.Log.WithName("VirtualMachine").WithValues("name", vm.NamespacedName())

if pkgcfg.FromContext(ctx).Features.FastDeploy {
// Allow the use of an annotation to control whether fast-deploy is used
// per-VM to deploy the VM.
if val := vm.Annotations["vmoperator.vmware.com/fast-deploy"]; val != "" {
if ok, _ := strconv.ParseBool(val); !ok {
// Create a copy of the config so the feature-state for
// FastDeploy can also be influenced by a VM annotation.
cfg := pkgcfg.FromContext(ctx)
cfg.Features.FastDeploy = false
ctx = pkgcfg.WithContext(ctx, cfg)
logger.Info("Disabled fast-deploy for this VM")
}
}
}

vmCtx := &pkgctx.VirtualMachineContext{
Context: ctx,
Logger: ctrl.Log.WithName("VirtualMachine").WithValues("name", vm.NamespacedName()),
Logger: logger,
VM: vm,
}

4 changes: 2 additions & 2 deletions go.mod
@@ -36,10 +36,10 @@ require (
github.com/vmware-tanzu/vm-operator/external/tanzu-topology v0.0.0-00010101000000-000000000000
github.com/vmware-tanzu/vm-operator/pkg/backup/api v0.0.0-00010101000000-000000000000
github.com/vmware-tanzu/vm-operator/pkg/constants/testlabels v0.0.0-00010101000000-000000000000
github.com/vmware/govmomi v0.31.1-0.20241031174243-82b4ad661180
github.com/vmware/govmomi v0.47.0-alpha.0.0.20241211161205-382c6c002844
golang.org/x/exp v0.0.0-20230515195305-f3d0a9c9a5cc
// * https://github.com/vmware-tanzu/vm-operator/security/dependabot/24
golang.org/x/text v0.19.0
golang.org/x/text v0.21.0
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d
k8s.io/api v0.31.0
k8s.io/apiextensions-apiserver v0.31.0
12 changes: 6 additions & 6 deletions go.sum
@@ -108,16 +108,16 @@ github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UV
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/vmware-tanzu/image-registry-operator-api v0.0.0-20240509202721-f6552612433a h1:DWa7KUbaOs89ggmKDjiwBBuuR1ewUbN/U071O79W6v4=
github.com/vmware-tanzu/image-registry-operator-api v0.0.0-20240509202721-f6552612433a/go.mod h1:zn/ponkeFUViyBDhYp9OKFPqEGWYrsR71Pn9/aTCvSI=
github.com/vmware-tanzu/net-operator-api v0.0.0-20240523152550-862e2c4eb0e0 h1:ymNjvIbvYrk+hyNw6+Gat7XI/8z/15eqSD7CLG7VkOI=
github.com/vmware-tanzu/net-operator-api v0.0.0-20240523152550-862e2c4eb0e0/go.mod h1:w6QJGm3crIA16ZIz1FVQXD2NVeJhOgGXxW05RbVTSTo=
github.com/vmware-tanzu/nsx-operator/pkg/apis v0.0.0-20241112044858-9da8637c1b0d h1:z9lrzKVtNlujduv9BilzPxuge/LE2F0N1ms3TP4JZvw=
github.com/vmware-tanzu/nsx-operator/pkg/apis v0.0.0-20241112044858-9da8637c1b0d/go.mod h1:Q4JzNkNMvjo7pXtlB5/R3oME4Nhah7fAObWgghVmtxk=
github.com/vmware/govmomi v0.31.1-0.20241031174243-82b4ad661180 h1:EnF983cbd8pmpi1tvADVIQst/YMlU6tEZYF192gWsho=
github.com/vmware/govmomi v0.31.1-0.20241031174243-82b4ad661180/go.mod h1:uoLVU9zlXC4p4GmLVG+ZJmBC0Gn3Q7mytOJvi39OhxA=
github.com/vmware/govmomi v0.47.0-alpha.0.0.20241211161205-382c6c002844 h1:B3EnWXQWLH31Om5+J5aybtppJMC5QYc+Zk1sEgtVe0A=
github.com/vmware/govmomi v0.47.0-alpha.0.0.20241211161205-382c6c002844/go.mod h1:bYwUHpGpisE4AOlDl5eph90T+cjJMIcKx/kaa5v5rQM=
github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=
github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
@@ -155,8 +155,8 @@ golang.org/x/term v0.21.0 h1:WVXCp+/EBEHOj53Rvu+7KiT/iElMrO8ACK16SMZ3jaA=
golang.org/x/term v0.21.0/go.mod h1:ooXLefLobQVslOqselCNF4SxFAaoS6KujMbsGzSDmX0=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.19.0 h1:kTxAhCbGbxhK0IwgSKiMO5awPoDQ0RpfiVYBfK860YM=
golang.org/x/text v0.19.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY=
golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/time v0.3.0 h1:rg5rLMjNzMS1RkNLzCG38eapWhnYLFYXDXj2gOlr8j4=
golang.org/x/time v0.3.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
2 changes: 2 additions & 0 deletions pkg/config/config.go
@@ -147,6 +147,8 @@ type FeatureStates struct {
VMIncrementalRestore bool // FSS_WCP_VMSERVICE_INCREMENTAL_RESTORE
BringYourOwnEncryptionKey bool // FSS_WCP_VMSERVICE_BYOK
SVAsyncUpgrade bool // FSS_WCP_SUPERVISOR_ASYNC_UPGRADE
// TODO(akutz) This FSS is placeholder.
FastDeploy bool // FSS_WCP_VMSERVICE_FAST_DEPLOY
}

type InstanceStorage struct {
2 changes: 1 addition & 1 deletion pkg/config/env.go
@@ -63,7 +63,7 @@ func FromEnv() Config {
setBool(env.FSSVMImportNewNet, &config.Features.VMImportNewNet)
setBool(env.FSSVMIncrementalRestore, &config.Features.VMIncrementalRestore)
setBool(env.FSSBringYourOwnEncryptionKey, &config.Features.BringYourOwnEncryptionKey)

setBool(env.FSSFastDeploy, &config.Features.FastDeploy)
setBool(env.FSSSVAsyncUpgrade, &config.Features.SVAsyncUpgrade)
if !config.Features.SVAsyncUpgrade {
// When SVAsyncUpgrade is enabled, we'll later use the capability CM to determine if
4 changes: 3 additions & 1 deletion pkg/config/env/env.go
@@ -58,7 +58,7 @@ const (
FSSVMIncrementalRestore
FSSBringYourOwnEncryptionKey
FSSSVAsyncUpgrade

FSSFastDeploy
_varNameEnd
)

@@ -176,6 +176,8 @@ func (n VarName) String() string {
return "FSS_WCP_VMSERVICE_BYOK"
case FSSSVAsyncUpgrade:
return "FSS_WCP_SUPERVISOR_ASYNC_UPGRADE"
case FSSFastDeploy:
return "FSS_WCP_VMSERVICE_FAST_DEPLOY"
}
panic("unknown environment variable")
}
2 changes: 2 additions & 0 deletions pkg/config/env_test.go
@@ -101,6 +101,7 @@ var _ = Describe(
Expect(os.Setenv("FSS_WCP_VMSERVICE_INCREMENTAL_RESTORE", "true")).To(Succeed())
Expect(os.Setenv("FSS_WCP_VMSERVICE_BYOK", "true")).To(Succeed())
Expect(os.Setenv("FSS_WCP_SUPERVISOR_ASYNC_UPGRADE", "false")).To(Succeed())
Expect(os.Setenv("FSS_WCP_VMSERVICE_FAST_DEPLOY", "true")).To(Succeed())
Expect(os.Setenv("CREATE_VM_REQUEUE_DELAY", "125h")).To(Succeed())
Expect(os.Setenv("POWERED_ON_VM_HAS_IP_REQUEUE_DELAY", "126h")).To(Succeed())
})
@@ -150,6 +151,7 @@ var _ = Describe(
BringYourOwnEncryptionKey: true,
SVAsyncUpgrade: false, // Capability gate so tested below
WorkloadDomainIsolation: true,
FastDeploy: true,
},
CreateVMRequeueDelay: 125 * time.Hour,
PoweredOnVMHasIPRequeueDelay: 126 * time.Hour,
14 changes: 6 additions & 8 deletions pkg/providers/vsphere/contentlibrary/content_library_utils.go
@@ -83,16 +83,14 @@ func initImageStatusFromOVFVirtualSystem(
}

// Use operating system info from the first os section in the VM image, if one exists.
if os := ovfVirtualSystem.OperatingSystem; len(os) > 0 {
o := os[0]

if os := ovfVirtualSystem.OperatingSystem; os != nil {
osInfo := &imageStatus.OSInfo
osInfo.ID = strconv.Itoa(int(o.ID))
if o.Version != nil {
osInfo.Version = *o.Version
osInfo.ID = strconv.Itoa(int(os.ID))
if os.Version != nil {
osInfo.Version = *os.Version
}
if o.OSType != nil {
osInfo.Type = *o.OSType
if os.OSType != nil {
osInfo.Type = *os.OSType
}
}

@@ -85,12 +85,10 @@ var _ = Describe("UpdateVmiWithOvfEnvelope", func() {
},
},
},
OperatingSystem: []ovf.OperatingSystemSection{
{
OSType: ptr.To("dummy_os_type"),
ID: int16(100),
Version: ptr.To("dummy_version"),
},
OperatingSystem: &ovf.OperatingSystemSection{
OSType: ptr.To("dummy_os_type"),
ID: int16(100),
Version: ptr.To("dummy_version"),
},
VirtualHardware: []ovf.VirtualHardwareSection{
{
111 changes: 96 additions & 15 deletions pkg/providers/vsphere/placement/cluster_placement.go
@@ -8,37 +8,107 @@ import (
"fmt"
"strings"

"github.com/vmware/govmomi/find"
"github.com/vmware/govmomi/object"
"github.com/vmware/govmomi/vim25"
vimtypes "github.com/vmware/govmomi/vim25/types"

pkgcfg "github.com/vmware-tanzu/vm-operator/pkg/config"
pkgctx "github.com/vmware-tanzu/vm-operator/pkg/context"
"github.com/vmware-tanzu/vm-operator/pkg/util"
)

// Recommendation is the info about a placement recommendation.
type Recommendation struct {
PoolMoRef vimtypes.ManagedObjectReference
HostMoRef *vimtypes.ManagedObjectReference
// TODO: Datastore, whatever else as we need it.
PoolMoRef vimtypes.ManagedObjectReference
HostMoRef *vimtypes.ManagedObjectReference
Datastores []DatastoreResult
}

func relocateSpecToRecommendation(relocateSpec *vimtypes.VirtualMachineRelocateSpec) *Recommendation {
func relocateSpecToRecommendation(
ctx context.Context,
relocateSpec *vimtypes.VirtualMachineRelocateSpec) *Recommendation {

// Instance Storage requires the host.
if relocateSpec == nil || relocateSpec.Pool == nil || relocateSpec.Host == nil {
return nil
}

return &Recommendation{
r := Recommendation{
PoolMoRef: *relocateSpec.Pool,
HostMoRef: relocateSpec.Host,
}

if pkgcfg.FromContext(ctx).Features.FastDeploy {
if ds := relocateSpec.Datastore; ds != nil {
r.Datastores = append(r.Datastores, DatastoreResult{
MoRef: *ds,
})
}
for i := range relocateSpec.Disk {
d := relocateSpec.Disk[i]
r.Datastores = append(r.Datastores, DatastoreResult{
MoRef: d.Datastore,
ForDisk: true,
DiskKey: d.DiskId,
})
}
}

return &r
}

func clusterPlacementActionToRecommendation(action vimtypes.ClusterClusterInitialPlacementAction) *Recommendation {
return &Recommendation{
func clusterPlacementActionToRecommendation(
ctx context.Context,
finder *find.Finder,
action vimtypes.ClusterClusterInitialPlacementAction) (*Recommendation, error) {

r := Recommendation{
PoolMoRef: action.Pool,
HostMoRef: action.TargetHost,
}

if pkgcfg.FromContext(ctx).Features.FastDeploy {
if cs := action.ConfigSpec; cs != nil {
//
// Get the recommended datastore for the VM.
//
if cs.Files != nil {
if dsn := util.DatastoreNameFromStorageURI(cs.Files.VmPathName); dsn != "" {
ds, err := finder.Datastore(ctx, dsn)
if err != nil {
return nil, fmt.Errorf("failed to get datastore for %q: %w", dsn, err)
}
if ds != nil {
r.Datastores = append(r.Datastores, DatastoreResult{
Name: dsn,
MoRef: ds.Reference(),
})
}
}
}

//
// Get the recommended datastores for each disk.
//
for i := range cs.DeviceChange {
dcs := cs.DeviceChange[i].GetVirtualDeviceConfigSpec()
if disk, ok := dcs.Device.(*vimtypes.VirtualDisk); ok {
if bbi, ok := disk.Backing.(vimtypes.BaseVirtualDeviceFileBackingInfo); ok {
if bi := bbi.GetVirtualDeviceFileBackingInfo(); bi.Datastore != nil {
r.Datastores = append(r.Datastores, DatastoreResult{
MoRef: *bi.Datastore,
ForDisk: true,
DiskKey: disk.Key,
})
}
}
}
}
}
}

return &r, nil
}

func CheckPlacementRelocateSpec(spec *vimtypes.VirtualMachineRelocateSpec) error {
@@ -109,7 +179,7 @@ func CloneVMRelocateSpec(

// PlaceVMForCreate determines the suitable placement candidates in the cluster.
func PlaceVMForCreate(
ctx context.Context,
vmCtx pkgctx.VirtualMachineContext,
cluster *object.ClusterComputeResource,
configSpec vimtypes.VirtualMachineConfigSpec) ([]Recommendation, error) {

@@ -118,11 +188,15 @@
ConfigSpec: &configSpec,
}

resp, err := cluster.PlaceVm(ctx, placementSpec)
vmCtx.Logger.V(4).Info("PlaceVMForCreate request", "placementSpec", vimtypes.ToString(placementSpec))

resp, err := cluster.PlaceVm(vmCtx, placementSpec)
if err != nil {
return nil, err
}

vmCtx.Logger.V(6).Info("PlaceVMForCreate response", "resp", vimtypes.ToString(resp))

var recommendations []Recommendation

for _, r := range resp.Recommendations {
@@ -132,7 +206,7 @@

for _, a := range r.Action {
if pa, ok := a.(*vimtypes.PlacementAction); ok {
if r := relocateSpecToRecommendation(pa.RelocateSpec); r != nil {
if r := relocateSpecToRecommendation(vmCtx, pa.RelocateSpec); r != nil {
recommendations = append(recommendations, *r)
}
}
@@ -146,9 +220,10 @@
func ClusterPlaceVMForCreate(
vmCtx pkgctx.VirtualMachineContext,
vcClient *vim25.Client,
finder *find.Finder,
resourcePoolsMoRefs []vimtypes.ManagedObjectReference,
configSpec vimtypes.VirtualMachineConfigSpec,
needsHost bool) ([]Recommendation, error) {
needHostPlacement, needDatastorePlacement bool) ([]Recommendation, error) {

// Work around PlaceVmsXCluster bug that crashes vpxd when ConfigSpec.Files is nil.
configSpec.Files = new(vimtypes.VirtualMachineFileInfo)
@@ -160,17 +235,18 @@
ConfigSpec: configSpec,
},
},
HostRecommRequired: &needsHost,
HostRecommRequired: &needHostPlacement,
DatastoreRecommRequired: &needDatastorePlacement,
}

vmCtx.Logger.V(4).Info("PlaceVmsXCluster request", "placementSpec", placementSpec)
vmCtx.Logger.V(4).Info("PlaceVmsXCluster request", "placementSpec", vimtypes.ToString(placementSpec))

resp, err := object.NewRootFolder(vcClient).PlaceVmsXCluster(vmCtx, placementSpec)
if err != nil {
return nil, err
}

vmCtx.Logger.V(6).Info("PlaceVmsXCluster response", "resp", resp)
vmCtx.Logger.V(6).Info("PlaceVmsXCluster response", "resp", vimtypes.ToString(resp))

if len(resp.Faults) != 0 {
var faultMgs []string
@@ -194,7 +270,12 @@

for _, a := range info.Recommendation.Action {
if ca, ok := a.(*vimtypes.ClusterClusterInitialPlacementAction); ok {
if r := clusterPlacementActionToRecommendation(*ca); r != nil {
r, err := clusterPlacementActionToRecommendation(vmCtx, finder, *ca)
if err != nil {
return nil, fmt.Errorf(
"failed to translate placement action to recommendation: %w", err)
}
if r != nil {
recommendations = append(recommendations, *r)
}
}