From 91789c55b079899cbf2c7bcd5cec52cbb8e3033c Mon Sep 17 00:00:00 2001 From: kubevirt-bot Date: Thu, 14 Nov 2024 09:53:38 +0000 Subject: [PATCH] Postsubmit site update from e89accc2f9d65a2056436b80d9dd360a7c11148b Signed-off-by: kubevirt-bot --- search/search_index.json | 2 +- user_workloads/creating_vms/index.html | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/search/search_index.json b/search/search_index.json index 05ab4725..5aa1cd95 100644 --- a/search/search_index.json +++ b/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-,:!=\\[\\]\\(\\)\"/]+|\\.(?!\\d)","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Welcome","text":"

The KubeVirt User Guide is divided into the following sections:

"},{"location":"#try-it-out","title":"Try it out","text":""},{"location":"#kubevirt-labs","title":"KubeVirt Labs","text":""},{"location":"#getting-help","title":"Getting help","text":""},{"location":"#developer","title":"Developer","text":""},{"location":"#privacy","title":"Privacy","text":""},{"location":"architecture/","title":"Architecture","text":"

KubeVirt is built using a service oriented architecture and a choreography pattern.

"},{"location":"architecture/#stack","title":"Stack","text":"
  +---------------------+\n  | KubeVirt            |\n~~+---------------------+~~\n  | Orchestration (K8s) |\n  +---------------------+\n  | Scheduling (K8s)    |\n  +---------------------+\n  | Container Runtime   |\n~~+---------------------+~~\n  | Operating System    |\n  +---------------------+\n  | Virtual(kvm)        |\n~~+---------------------+~~\n  | Physical            |\n  +---------------------+\n

Users requiring virtualization services speak to the Virtualization API (see below), which in turn speaks to the Kubernetes cluster to schedule requested Virtual Machine Instances (VMIs). Scheduling, networking, and storage are all delegated to Kubernetes, while KubeVirt provides the virtualization functionality.

"},{"location":"architecture/#additional-services","title":"Additional Services","text":"

KubeVirt provides additional functionality to your Kubernetes cluster to perform virtual machine management.

If we recall how Kubernetes handles Pods, we remember that Pods are created by posting a Pod specification to the Kubernetes API Server. This specification is then transformed into an object inside the API Server. That object is of a specific type, or kind, as it is called in the specification. A Pod is of the kind Pod. Controllers within Kubernetes know how to handle these Pod objects. Thus, once a new Pod object is seen, those controllers perform the necessary actions to bring the Pod alive and to match the required state.

This same mechanism is used by KubeVirt. Thus KubeVirt delivers three things to provide the new functionality:

  1. Additional types - so-called Custom Resource Definitions (CRDs) - are added to the Kubernetes API
  2. Additional controllers for cluster-wide logic associated with these new types
  3. Additional daemons for node-specific logic associated with these new types

Once all three steps have been completed, you are able to manage VirtualMachineInstances through the Kubernetes API.

One final note: both controllers and daemons run as Pods (or similar) on top of the Kubernetes cluster and are not installed alongside it. The types are - as said before - defined inside the Kubernetes API server. This allows users to speak to Kubernetes while managing VMIs.

The following diagram illustrates how the additional controllers and daemons communicate with Kubernetes and where the additional types are stored:

And a simplified version:

"},{"location":"architecture/#application-layout","title":"Application Layout","text":"

VirtualMachineInstance (VMI) is the custom resource that represents the basic ephemeral building block of an instance. In many cases this object won't be created directly by the user, but by a higher-level resource. Higher-level resources for a VMI can be:

"},{"location":"architecture/#native-workloads","title":"Native Workloads","text":"

KubeVirt is deployed on top of a Kubernetes cluster. This means that you can continue to run your Kubernetes-native workloads next to the VMIs managed through KubeVirt.

Furthermore, if you can run native workloads and have KubeVirt installed, you should be able to run VM-based workloads, too. For example, Application Operators should not require additional permissions to use a cluster feature for VMs, compared to using that feature with a plain Pod.

Security-wise, installing and using KubeVirt must not grant users any permission they do not already have regarding native workloads. For example, a non-privileged Application Operator must never gain access to a privileged Pod by using a KubeVirt feature.

"},{"location":"architecture/#the-razor","title":"The Razor","text":"

We love virtual machines, think that they are very important and work hard to make them easy to use in Kubernetes. But even more than VMs, we love good design and modular, reusable components. Quite frequently, we face a dilemma: should we solve a problem in KubeVirt in a way that is best optimized for VMs, or should we take a longer path and introduce the solution to Pod-based workloads too?

To decide these dilemmas we came up with the KubeVirt Razor: \"If something is useful for Pods, we should not implement it only for VMs\".

For example, we debated how we should connect VMs to external network resources. The quickest way seemed to be introducing KubeVirt-specific code that attaches a VM to a host bridge. However, we chose the longer path of integrating with Multus and CNI and improving them.

"},{"location":"architecture/#virtualmachine","title":"VirtualMachine","text":"

A VirtualMachine provides additional management capabilities to a VirtualMachineInstance inside the cluster. That includes:

It focuses on a 1:1 relationship between the controller instance and a virtual machine instance. In many ways it is very similar to a StatefulSet with spec.replicas set to 1.

"},{"location":"architecture/#how-to-use-a-virtualmachine","title":"How to use a VirtualMachine","text":"

A VirtualMachine will make sure that a VirtualMachineInstance object with an identical name is present in the cluster whenever the VirtualMachine is in a Running state, which is controlled via the spec.runStrategy field. For more information regarding run strategies, please refer to Run Strategies.
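As a minimal sketch of where this field lives in a manifest (the name vm-example is illustrative, not from the guide):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-example          # illustrative name
spec:
  runStrategy: Always       # keep a VMI with the same name running at all times
  template:
    spec:
      domain:
        devices: {}         # VirtualMachineInstance template continues here
```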

"},{"location":"architecture/#starting-and-stopping","title":"Starting and stopping","text":"

Virtual Machines can be turned on or off in an imperative or a declarative manner. Setting a spec.runStrategy such as Always or Halted means that the system will continuously try to ensure the Virtual Machine is turned on or off, respectively:

# Start the virtual machine:\nkubectl patch virtualmachine vm --type merge -p \\\n    '{\"spec\":{\"runStrategy\": \"Always\"}}'\n\n# Stop the virtual machine:\nkubectl patch virtualmachine vm --type merge -p \\\n    '{\"spec\":{\"runStrategy\": \"Halted\"}}'\n

However, with the Manual runStrategy, the user would imperatively choose when to turn the VM on or off, without the system performing any automatic actions:

# Start the virtual machine:\nvirtctl start vm\n\n# Stop the virtual machine:\nvirtctl stop vm\n

Find more details about a VM's life cycle in the relevant section.

"},{"location":"architecture/#controller-status","title":"Controller status","text":"

Once a VirtualMachineInstance is created, its state will be tracked via status.created and status.ready fields of the VirtualMachine. If a VirtualMachineInstance exists in the cluster, status.created will equal true. If the VirtualMachineInstance is also ready, status.ready will equal true too.

If a VirtualMachineInstance reaches a final state but spec.runStrategy indicates it should be running, the VirtualMachine controller will set status.ready to false and re-create the VirtualMachineInstance.

Additionally, the status.printableStatus field provides high-level summary information about the state of the VirtualMachine. This information is also displayed when listing VirtualMachines using the CLI:

$ kubectl get virtualmachines\nNAME     AGE   STATUS    VOLUME\nvm1      4m    Running\nvm2      11s   Stopped\n

Here's the list of currently supported states and their meanings. Note that states may be added or removed in future releases, so automated programs consuming them should use caution.
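To illustrate how the fields above fit together, the status stanza of a running VirtualMachine might look like the following sketch (values are examples, not authoritative output):

```yaml
status:
  created: true              # a VirtualMachineInstance object exists in the cluster
  ready: true                # the VirtualMachineInstance is ready
  printableStatus: Running   # high-level summary shown by "kubectl get virtualmachines"
```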

"},{"location":"architecture/#restarting","title":"Restarting","text":"

A VirtualMachineInstance restart can be triggered by deleting the VirtualMachineInstance. This will also propagate configuration changes from the template in the VirtualMachine:

# Restart the virtual machine (you delete the instance!):\nkubectl delete virtualmachineinstance vm\n

To restart a VirtualMachine named vm using virtctl:

$ virtctl restart vm\n

This would perform a normal restart of the VirtualMachineInstance and would reschedule it onto a new virt-launcher Pod.

To force restart a VirtualMachine named vm using virtctl:

$ virtctl restart vm --force --grace-period=0\n

This would attempt a normal restart, but would also delete the virt-launcher Pod of the VirtualMachineInstance, setting GracePeriodSeconds to the value passed on the command line.

Currently, only setting grace-period=0 is supported.

Note

Force restart can cause data corruption and should only be used in cases of kernel panic or when the VirtualMachine is unresponsive to normal restarts.

"},{"location":"architecture/#fencing-considerations","title":"Fencing considerations","text":"

A VirtualMachine will never restart or re-create a VirtualMachineInstance until the current instance of the VirtualMachineInstance is deleted from the cluster.

"},{"location":"architecture/#exposing-as-a-service","title":"Exposing as a Service","text":"

A VirtualMachine can be exposed as a service. The actual service will be available once the VirtualMachineInstance starts without additional interaction.

For example, exposing the SSH port (22) as a ClusterIP service using virtctl after the VirtualMachine was created, but before it started:

$ virtctl expose virtualmachine vmi-ephemeral --name vmiservice --port 27017 --target-port 22\n

All service exposure options that apply to a VirtualMachineInstance apply to a VirtualMachine.

See Service Objects for more details.
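Under the hood, virtctl expose creates a regular Kubernetes Service. A hand-written equivalent might look like the sketch below; the selector label kubevirt.io/vm: vm-cirros is an assumption for illustration and must match a label actually set on your VMI:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: vmiservice
spec:
  type: ClusterIP
  ports:
  - port: 27017               # port exposed by the Service
    targetPort: 22            # SSH port inside the guest
  selector:
    kubevirt.io/vm: vm-cirros # assumed label; must match the VMI's labels
```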

"},{"location":"architecture/#when-to-use-a-virtualmachine","title":"When to use a VirtualMachine","text":""},{"location":"architecture/#when-api-stability-is-required-between-restarts","title":"When API stability is required between restarts","text":"

A VirtualMachine makes sure that VirtualMachineInstance API configurations are consistent between restarts. A classic example is licenses that are bound to the firmware UUID of a virtual machine. The VirtualMachine makes sure that the UUID always stays the same without the user having to take care of it.

One of the main benefits is that a user can still make use of defaulting logic, even though a stable API is needed.
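For illustration, the firmware UUID mentioned above can also be pinned explicitly in the template; the UUID value below is a made-up example, and when the field is omitted the VirtualMachine keeps the generated UUID stable across restarts:

```yaml
spec:
  template:
    spec:
      domain:
        firmware:
          uuid: 5d307ca9-b3ef-428c-8861-06e72d69f223   # example value, not authoritative
```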

"},{"location":"architecture/#when-config-updates-should-be-picked-up-on-the-next-restart","title":"When config updates should be picked up on the next restart","text":"

Use a VirtualMachine if the VirtualMachineInstance configuration should be modifiable inside the cluster, with changes picked up on the next VirtualMachineInstance restart. This means that no hotplug is involved.

"},{"location":"architecture/#when-you-want-to-let-the-cluster-manage-your-individual-virtualmachineinstance","title":"When you want to let the cluster manage your individual VirtualMachineInstance","text":"

Kubernetes, as a declarative system, can help you manage the VirtualMachineInstance. You tell it that you want this VirtualMachineInstance with your application running, and the VirtualMachine will try to make sure it stays running.

Note

The current belief is that if it is defined that the VirtualMachineInstance should be running, it should be running. This is different from many classical virtualization platforms, where VMs stay down if they were switched off. Restart policies may be added if needed. Please provide your use case if you need this!

"},{"location":"architecture/#example","title":"Example","text":"
apiVersion: kubevirt.io/v1\nkind: VirtualMachine\nmetadata:\n  labels:\n    kubevirt.io/vm: vm-cirros\n  name: vm-cirros\nspec:\n  runStrategy: Halted\n  template:\n    metadata:\n      labels:\n        kubevirt.io/vm: vm-cirros\n    spec:\n      domain:\n        devices:\n          disks:\n          - disk:\n              bus: virtio\n            name: containerdisk\n          - disk:\n              bus: virtio\n            name: cloudinitdisk\n        machine:\n          type: \"\"\n        resources:\n          requests:\n            memory: 64M\n      terminationGracePeriodSeconds: 0\n      volumes:\n      - name: containerdisk\n        containerDisk:\n          image: kubevirt/cirros-container-disk-demo:latest\n      - cloudInitNoCloud:\n          userDataBase64: IyEvYmluL3NoCgplY2hvICdwcmludGVkIGZyb20gY2xvdWQtaW5pdCB1c2VyZGF0YScK\n        name: cloudinitdisk\n

Saving this manifest into vm.yaml and submitting it to Kubernetes will create the controller instance:

$ kubectl create -f vm.yaml\nvirtualmachine \"vm-cirros\" created\n

Since spec.runStrategy is set to Halted, no VMI will be created:

$ kubectl get vmis\nNo resources found.\n

Let's start the VirtualMachine:

$ virtctl start vm-cirros\n

As expected, a VirtualMachineInstance called vm-cirros got created:

$ kubectl describe vm vm-cirros\nName:         vm-cirros\nNamespace:    default\nLabels:       kubevirt.io/vm=vm-cirros\nAnnotations:  <none>\nAPI Version:  kubevirt.io/v1\nKind:         VirtualMachine\nMetadata:\n  Cluster Name:\n  Creation Timestamp:  2018-04-30T09:25:08Z\n  Generation:          0\n  Resource Version:    6418\n  Self Link:           /apis/kubevirt.io/v1/namespaces/default/virtualmachines/vm-cirros\n  UID:                 60043358-4c58-11e8-8653-525500d15501\nSpec:\n  Running:  true\n  Template:\n    Metadata:\n      Creation Timestamp:  <nil>\n      Labels:\n        Kubevirt . Io / Ovmi:  vm-cirros\n    Spec:\n      Domain:\n        Devices:\n          Disks:\n            Disk:\n              Bus:        virtio\n            Name:         containerdisk\n            Volume Name:  containerdisk\n            Disk:\n              Bus:        virtio\n            Name:         cloudinitdisk\n            Volume Name:  cloudinitdisk\n        Machine:\n          Type:\n        Resources:\n          Requests:\n            Memory:                      64M\n      Termination Grace Period Seconds:  0\n      Volumes:\n        Name:  containerdisk\n        Registry Disk:\n          Image:  kubevirt/cirros-registry-disk-demo:latest\n        Cloud Init No Cloud:\n          User Data Base 64:  IyEvYmluL3NoCgplY2hvICdwcmludGVkIGZyb20gY2xvdWQtaW5pdCB1c2VyZGF0YScK\n        Name:                 cloudinitdisk\nStatus:\n  Created:  true\n  Ready:    true\nEvents:\n  Type    Reason            Age   From                              Message\n  ----    ------            ----  ----                              -------\n  Normal  SuccessfulCreate  15s   virtualmachine-controller  Created virtual machine: vm-cirros\n
"},{"location":"architecture/#kubectl-commandline-interactions","title":"Kubectl commandline interactions","text":"

Whenever you want to manipulate the VirtualMachine through the command line, you can use the kubectl command. The following examples demonstrate how to do it.

    # Define a virtual machine:\n    kubectl create -f vm.yaml\n\n    # Start the virtual machine:\n    kubectl patch virtualmachine vm --type merge -p \\\n        '{\"spec\":{\"runStrategy\":\"Always\"}}'\n\n    # Look at virtual machine status and associated events:\n    kubectl describe virtualmachine vm\n\n    # Look at the now created virtual machine instance status and associated events:\n    kubectl describe virtualmachineinstance vm\n\n    # Stop the virtual machine instance:\n    kubectl patch virtualmachine vm --type merge -p \\\n        '{\"spec\":{\"runStrategy\":\"Halted\"}}'\n\n    # Restart the virtual machine (you delete the instance!):\n    kubectl delete virtualmachineinstance vm\n\n    # Implicit cascade delete (first deletes the virtual machine and then the virtual machine instance)\n    kubectl delete virtualmachine vm\n\n    # Explicit cascade delete (first deletes the virtual machine and then the virtual machine instance)\n    kubectl delete virtualmachine vm --cascade=true\n\n    # Orphan delete (The running virtual machine is only detached, not deleted)\n    # Recreating the virtual machine would lead to the adoption of the virtual machine instance\n    kubectl delete virtualmachine vm --cascade=false\n
"},{"location":"contributing/","title":"Contributing","text":"

Welcome! And thank you for taking the first step toward contributing to the KubeVirt project. On this page you should be able to find all the information required to get started on your contribution journey, as well as information on how to become a community member and grow into roles of responsibility.

If you think something might be missing from this page, please help us by raising a bug!

"},{"location":"contributing/#prerequisites","title":"Prerequisites","text":"

Reviewing the following will prepare you for contributing:

For code contributors:

"},{"location":"contributing/#your-first-contribution","title":"Your first contribution","text":"

The following will help you decide where to start:

"},{"location":"contributing/#important-community-resources","title":"Important community resources","text":"

You should familiarize yourself with the following documents, which are critical to being a member of the community:

"},{"location":"contributing/#other-ways-to-contribute","title":"Other ways to contribute","text":""},{"location":"quickstarts/","title":"Quickstarts","text":""},{"location":"quickstarts/#quickstart-guides","title":"Quickstart Guides","text":"

Killercoda provides an interactive environment for exploring KubeVirt scenarios:

Guides for deploying KubeVirt with different Kubernetes tools:

"},{"location":"release_notes/","title":"KubeVirt release notes","text":""},{"location":"release_notes/#v140","title":"v1.4.0","text":"

Released on: Wed Nov 13 2024

KubeVirt v1.4 is built for Kubernetes v1.31 and additionally supported for the previous two versions. See the KubeVirt support matrix for more information.

To see the list of very excellent people who contributed to this release, see the KubeVirt release tag for v1.4.0.

"},{"location":"release_notes/#api-change","title":"API change","text":""},{"location":"release_notes/#bug-fix","title":"Bug fix","text":""},{"location":"release_notes/#deprecation","title":"Deprecation","text":""},{"location":"release_notes/#sig-compute","title":"SIG-compute","text":""},{"location":"release_notes/#sig-storage","title":"SIG-storage","text":""},{"location":"release_notes/#sig-network","title":"SIG-network","text":""},{"location":"release_notes/#sig-scale","title":"SIG-scale","text":""},{"location":"release_notes/#monitoring","title":"Monitoring","text":""},{"location":"release_notes/#uncategorized","title":"Uncategorized","text":""},{"location":"release_notes/#v130","title":"v1.3.0","text":"

Released on: Wed Jul 17 2024

KubeVirt v1.3 is built for Kubernetes v1.30 and additionally supported for the previous two versions. See the KubeVirt support matrix for more information.

To see the list of fine folks who contributed to this release, see the KubeVirt release tag for v1.3.0.

"},{"location":"release_notes/#api-change_1","title":"API change","text":""},{"location":"release_notes/#bug-fix_1","title":"Bug fix","text":""},{"location":"release_notes/#deprecation_1","title":"Deprecation","text":""},{"location":"release_notes/#sig-compute_1","title":"SIG-compute","text":""},{"location":"release_notes/#sig-storage_1","title":"SIG-storage","text":""},{"location":"release_notes/#sig-network_1","title":"SIG-network","text":""},{"location":"release_notes/#sig-scale_1","title":"SIG-scale","text":""},{"location":"release_notes/#monitoring_1","title":"Monitoring","text":""},{"location":"release_notes/#uncategorized_1","title":"Uncategorized","text":""},{"location":"release_notes/#v120","title":"v1.2.0","text":"

Released on: Tue Mar 05 2024

KubeVirt v1.2 is built for Kubernetes v1.29 and additionally supported for the previous two versions. See the KubeVirt support matrix for more information.

"},{"location":"release_notes/#api-change_2","title":"API change","text":""},{"location":"release_notes/#bug-fix_2","title":"Bug fix","text":""},{"location":"release_notes/#deprecation_2","title":"Deprecation","text":""},{"location":"release_notes/#sig-compute_2","title":"SIG-compute","text":""},{"location":"release_notes/#sig-storage_2","title":"SIG-storage","text":""},{"location":"release_notes/#sig-network_2","title":"SIG-network","text":""},{"location":"release_notes/#sig-infra","title":"SIG-infra","text":""},{"location":"release_notes/#monitoring_2","title":"Monitoring","text":""},{"location":"release_notes/#uncategorized_2","title":"Uncategorized","text":""},{"location":"release_notes/#v110","title":"v1.1.0","text":"

Released on: Tue Nov 07 2023

"},{"location":"release_notes/#api-change_3","title":"API change","text":""},{"location":"release_notes/#bug-fixes","title":"Bug fixes:","text":""},{"location":"release_notes/#deprecation_3","title":"Deprecation","text":""},{"location":"release_notes/#sig-compute_3","title":"SIG-compute","text":""},{"location":"release_notes/#sig-storage_3","title":"SIG-storage","text":""},{"location":"release_notes/#sig-network_3","title":"SIG-network","text":""},{"location":"release_notes/#sig-infra_1","title":"SIG-infra","text":""},{"location":"release_notes/#sig-scale_2","title":"SIG-scale","text":""},{"location":"release_notes/#uncategorized_3","title":"Uncategorized","text":""},{"location":"release_notes/#v100","title":"v1.0.0","text":"

Released on: Thu Jul 11 17:39:42 2023 +0000

"},{"location":"release_notes/#api-changes","title":"API changes","text":""},{"location":"release_notes/#bug-fixes_1","title":"Bug fixes","text":"