
Hard to Inject Provider-specific Logic into Quota Check on Provision Requests #22641

Open
miyamotoh opened this issue Jul 31, 2023 · 7 comments · May be fixed by #22680

Comments

@miyamotoh (Contributor)

While working on adding proper quota checks for CPU during VM provisioning on IBM Power, it became obvious that it's not easy to inject provider-specific logic into the workflow. This likely stems from the fact that the IBM Power cloud manager doesn't use the Flavor field to indicate how much CPU a provision request needs, which is what the core classes MiqProvisionRequest and ServiceTemplateProvisionRequest assume. Also, IBM Power allows fractional, rather than integer, CPU allocations.

So this may well go much wider, but to get my point across: I think we want some or all of ./content/automate/ManageIQ/System/CommonMethods/QuotaMethods.class (in manageiq-content) to make fewer assumptions about how certain ManageIQ constructs (e.g. Flavor) are used, and to become friendlier to provider overrides (e.g. used, requested).
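To make the mismatch concrete, here is a minimal, purely illustrative sketch. The method names are hypothetical (not the actual QuotaMethods code): the point is that a quota path which coerces a Flavor-derived CPU count to an integer silently discards a fractional IBM Power entitlement.

```ruby
# Hypothetical illustration of the mismatch described above; these are not
# the real QuotaMethods methods.

# Core-style assumption: CPUs come from a Flavor as a whole number.
def quota_cpus_from_flavor(flavor_cpus)
  Integer(flavor_cpus) # truncates fractional values toward zero
end

# What IBM Power actually needs: fractional CPU entitlements.
def quota_cpus_power(entitlement)
  Float(entitlement)
end

quota_cpus_from_flavor(0.25) # => 0 (the 0.25-core entitlement is lost)
quota_cpus_power(0.25)       # => 0.25
```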

@jaywcarman (Member)

See ManageIQ/manageiq-providers-ibm_cloud#454 for details specific to the IBM Cloud PowerVS provider.

@jaywcarman (Member)

Here's an example of logic that should be pluggable in the provider repos:
https://github.com/ManageIQ/manageiq/blob/master/app/models/mixins/miq_provision_quota_mixin.rb#L394

@Fryguy (Member) commented Jul 31, 2023

cc @agrare

@agrare agrare self-assigned this Aug 2, 2023
@agrare (Member) commented Aug 10, 2023

The key is going to be finding something provider-specific to key off of, then moving the quota logic somewhere a provider can override.

In miq_provision_quota_mixin.rb there is a vendor method which checks request.source.vendor. This request.source is very likely the source VM/template object, which would be of a specific provider subclass. Whatever logic needs to be overridden can then be moved into that class, with the "default" behavior moved into the base VmOrTemplate class.
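A rough sketch of this suggestion, with stand-in class names rather than the real ManageIQ models: because request.source is loaded as a provider-specific subclass, moving the logic onto the class hierarchy lets ordinary method dispatch select the provider behavior, and the mixin no longer needs vendor-specific branching.

```ruby
# Illustrative stand-ins only; not the actual ManageIQ class hierarchy.

class SourceTemplate # stand-in for the base VmOrTemplate default behavior
  def quota_vendor
    "unknown"
  end
end

class PowerVsSourceTemplate < SourceTemplate # stand-in for a provider Template subclass
  def quota_vendor
    "ibm_power_vs"
  end
end

Request = Struct.new(:source)

# The mixin-level helper stops inspecting request.source.vendor itself and
# simply delegates to whatever class the source was loaded as.
def vendor(request)
  request.source.quota_vendor
end

vendor(Request.new(PowerVsSourceTemplate.new)) # => "ibm_power_vs"
```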

@jaywcarman (Member)

> This request.source is very likely the source vm/template object which would be of a specific provider subclass.

I dropped a byebug in the vm_quota_values method here and can confirm that the "source" is the template subclass.

(byebug) l =

[288, 297] in /home/jwcarman/src/github.com/ManageIQ/manageiq/app/models/mixins/miq_provision_quota_mixin.rb
   288:      })
   289:   end
   290: 
   291:   def vm_quota_values(pr, result)
   292:     byebug
=> 293:     num_vms_for_request = number_of_vms(pr)
   294:     return if num_vms_for_request.zero?
   295:     flavor_obj = flavor(pr)
   296:     result[:count] += num_vms_for_request
   297:     result[:memory] += memory(pr, cloud?(pr), vendor(pr), flavor_obj) * num_vms_for_request
(byebug) pr.source
  VmOrTemplate Load (1.0ms)  SELECT "vms".* FROM "vms" WHERE "vms"."id" = $1 LIMIT $2  [["id", 8], ["LIMIT", 1]]
  VmOrTemplate Inst Including Associations (0.4ms - 1rows)
#<ManageIQ::Providers::IbmCloud::PowerVirtualServers::CloudManager::Template id: 8, vendor: "ibm_power_vs", format: "", version: nil, name: "CentOS-Stream-8", description: "stock", location: "unknown", config_xml: nil, autostart: nil, host_id: nil, last_sync_on: nil, created_on: "2023-08-18 16:02:12.022095000 +0000", updated_on: "2023-08-18 16:02:12.022095000 +0000", storage_id: nil, guid: "e8844402-543c-43da-a916-bf7e5af54d9b", ems_id: 3, last_scan_on: nil, last_scan_attempt_on: nil, uid_ems: "31e22a48-6dee-4e2b-89f1-e669b936acd1", retires_on: nil, retired: nil, boot_time: nil, tools_status: nil, standby_action: nil, power_state: "never", state_changed_on: "2023-08-18 16:02:12.012183000 +0000", previous_state: nil, connection_state: nil, last_perf_capture_on: nil, registered: nil, busy: nil, smart: nil, memory_reserve: nil, memory_reserve_expand: nil, memory_limit: nil, memory_shares: nil, memory_shares_level: nil, cpu_reserve: nil, cpu_reserve_expand: nil, cpu_limit: nil, cpu_shares: nil, cpu_shares_level: nil, cpu_affinity: nil, ems_created_on: nil, template: true, evm_owner_id: nil, miq_group_id: 1, linked_clone: nil, fault_tolerance: nil, type: "ManageIQ::Providers::IbmCloud::PowerVirtualServers...", ems_ref: "31e22a48-6dee-4e2b-89f1-e669b936acd1", ems_cluster_id: nil, retirement_warn: nil, retirement_last_warn: nil, vnc_port: nil, flavor_id: nil, availability_zone_id: nil, cloud: true, retirement_state: nil, cloud_network_id: nil, cloud_subnet_id: nil, cloud_tenant_id: nil, raw_power_state: "never", publicly_available: nil, orchestration_stack_id: nil, retirement_requester: nil, tenant_id: 1, resource_group_id: nil, deprecated: nil, storage_profile_id: nil, cpu_hot_add_enabled: nil, cpu_hot_remove_enabled: nil, memory_hot_add_enabled: nil, memory_hot_add_limit: nil, memory_hot_add_increment: nil, hostname: nil, ems_ref_type: nil, restart_needed: nil, ancestry: nil, placement_group_id: nil>

> Whatever logic is needed to be overridden can then be moved into that class and the "default" behavior moved into the base VmOrTemplate class.

I'll start working on a set of PRs to move number_of_cpus from the MiqProvisionQuotaMixin module into the ManageIQ::Providers::CloudManager::Template class and the IbmCloud::PowerVirtualServers plugin's ManageIQ::Providers::IbmCloud::PowerVirtualServers::CloudManager::Template subclass.
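A sketch of what that set of PRs could look like, using simplified stand-in classes and assumed option keys (:number_of_sockets, :cores_per_socket, :entitled_processors are illustrative, not the verified ManageIQ request options): number_of_cpus becomes a template-class method with a default in the cloud base class and a fractional override in the PowerVS subclass.

```ruby
# Hypothetical sketch of the planned refactor; all names are stand-ins.

class CloudTemplate # stand-in for ManageIQ::Providers::CloudManager::Template
  # Assumed default: whole CPUs derived from sockets x cores per socket.
  def number_of_cpus(options)
    options[:number_of_sockets].to_i * options[:cores_per_socket].to_i
  end
end

class PowerVsTemplate < CloudTemplate # stand-in for the PowerVS Template subclass
  # PowerVS requests carry a fractional processor entitlement instead of
  # sockets/cores; :entitled_processors is an assumed key for illustration.
  def number_of_cpus(options)
    options[:entitled_processors].to_f
  end
end

CloudTemplate.new.number_of_cpus(:number_of_sockets => 2, :cores_per_socket => 4) # => 8
PowerVsTemplate.new.number_of_cpus(:entitled_processors => 0.5)                   # => 0.5
```

The design point is the same one agrare raised above: once the logic lives on the template class, the quota mixin needs no provider conditionals at all.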

@miq-bot (Member) commented May 13, 2024

This issue has been automatically marked as stale because it has not been updated for at least 3 months.

If you can still reproduce this issue on the current release or on master, please reply with all of the information you have about it in order to keep the issue open.

1 similar comment
@miq-bot (Member) commented Aug 19, 2024 with the same stale notice.
