Retrieving Kubevirt Virtual Machines from Kubernetes Clusters using Kubernetes Visibility Agent

Introduction

Starting from Kubernetes Visibility Agent 3.13.x and informer version 2.6.x, we support retrieval of KubeVirt virtual machines from Kubernetes clusters.

What is KubeVirt?

In one sentence: KubeVirt is an open-source project that makes it possible to run traditional virtual machines as pods in Kubernetes.

A bit more, from https://kubevirt.io/:

KubeVirt technology addresses the needs of development teams that have adopted or want to adopt Kubernetes but possess existing Virtual Machine-based workloads that cannot be easily containerized. More specifically, the technology provides a unified development platform where developers can build, modify, and deploy applications residing in both Application Containers as well as Virtual Machines in a common, shared environment. Benefits are broad and significant. Teams with a reliance on existing virtual machine-based workloads are empowered to rapidly containerize applications. With virtualized workloads placed directly in development workflows, teams can decompose them over time while still leveraging remaining virtualized components as is comfortably desired.
Pre-Requisites

- Minimum version of the "Discovery and Service Mapping Patterns" plugin: 1.29.x
- Minimum version of the Kubernetes Visibility Agent plugin: 3.13.x
- Minimum version of the informer pod: 2.6.x

Configuration

To enable the feature:

On the informer side:

With the Helm chart, use the Helm parameter:

  --set kubevirt.enabled=true

With k8s_informer.yaml, change the "data" part in the ConfigMap section to be as follows:

  data:
    # Sample values for the resources and mappings.
    # resources : '[{"apiGroups":["example.com"],"apiVersions":["v1"],"resources":["mycustomresources"]}]'
    # mappings : '{"v1/configmaps":{"filter":[{"jsonPath":".metadata.name","regex":"^kube-root-ca.crt$"}]}}'
    resources: |
      [
        {
          "apiGroups": [ "kubevirt.io" ],
          "apiVersions": [ "v1" ],
          "resources": [ "virtualmachines", "virtualmachineinstances" ],
          "verbs": [ "get", "list", "watch" ]
        },
        {
          "apiGroups": [ "cdi.kubevirt.io" ],
          "apiVersions": [ "v1beta1" ],
          "resources": [ "datavolumes" ],
          "verbs": [ "get", "list", "watch" ]
        }
      ]
    mappings: ''
    mappings_oob: |
      {
        "kubevirt.io/v1/virtualmachines": {
          "fields": [
            { "name": "running", "fieldExtractor": { "jsonPath": ".status.printableStatus" } },
            { "name": "guest_os", "fieldExtractor": { "jsonPath": ".spec.template.metadata.annotations.vm\\.kubevirt\\.io/os" } },
            { "name": "cpus", "fieldExtractor": { "jsonPath": ".spec.template.spec.domain.cpu" } },
            { "name": "memory", "fieldExtractor": { "jsonPath": ".spec.template.spec.domain.memory.guest" } },
            { "name": "disks", "fieldExtractor": { "jsonPath": ".spec.template.spec.domain.devices.disks" } },
            { "name": "nics", "fieldExtractor": { "jsonPath": ".spec.template.spec.domain.devices.interfaces" } }
          ],
          "relatedEntries": [
            {
              "targetTable": "cmdb_kubevirt_virtual_machine",
              "referenceField": "configuration_item",
              "arrayJsonPath": "",
              "fields": [
                { "name": "running", "targetColumn": "running", "fieldExtractor": { "jsonPath": ".status.printableStatus" } },
                { "name": "k8s_uid", "targetColumn": "k8s_uid", "fieldExtractor": { "jsonPath": ".metadata.uid" } }
              ]
            }
          ]
        },
        "kubevirt.io/v1/virtualmachineinstances": {
          "relatedEntries": [
            {
              "targetTable": "cmdb_kubevirt_virtual_machine_instance",
              "referenceField": "configuration_item",
              "arrayJsonPath": "",
              "fields": [
                { "name": "k8s_uid", "targetColumn": "k8s_uid", "fieldExtractor": { "jsonPath": ".metadata.uid" } },
                { "name": "node_name", "targetColumn": "node_name", "fieldExtractor": { "jsonPath": ".metadata.labels.kubevirt\\.io/nodeName" } }
              ]
            }
          ],
          "relations": [
            {
              "targetResourceUid": { "jsonPath": ".metadata.ownerReferences[:].uid" },
              "targetKind": { "jsonPath": ".metadata.ownerReferences[:].kind" },
              "targetApiVersion": { "jsonPath": ".metadata.ownerReferences[:].apiVersion" },
              "relationType": "Owns::Owned by",
              "relationDirection": false
            }
          ],
          "fields": [
            { "name": "phase", "fieldExtractor": { "jsonPath": ".status.phase" } },
            { "name": "interfaces", "fieldExtractor": { "jsonPath": ".status.interfaces" } },
            { "name": "nodeName", "fieldExtractor": { "jsonPath": ".status.nodeName" } },
            { "name": "volumeStatus", "fieldExtractor": { "jsonPath": ".status.volumeStatus" } },
            { "name": "guestOSInfo", "fieldExtractor": { "jsonPath": ".status.guestOSInfo" } },
            { "name": "ownerReferences", "fieldExtractor": { "jsonPath": ".metadata.ownerReferences" } },
            { "name": "specDomain", "fieldExtractor": { "jsonPath": ".spec.domain" } }
          ]
        },
        "cdi.kubevirt.io/v1beta1/datavolumes": {
          "relatedEntries": [
            {
              "targetTable": "cmdb_kubevirt_data_volume",
              "referenceField": "configuration_item",
              "arrayJsonPath": "",
              "fields": [
                { "name": "requested_storage", "targetColumn": "requested_storage", "fieldExtractor": { "jsonPath": ".spec.storage.resources.requests.storage" } },
                { "name": "k8s_uid", "targetColumn": "k8s_uid", "fieldExtractor": { "jsonPath": ".metadata.uid" } }
              ]
            }
          ],
          "relations": [
            {
              "targetResourceUid": { "jsonPath": ".metadata.ownerReferences[:].uid" },
              "targetKind": { "jsonPath": ".metadata.ownerReferences[:].kind" },
              "targetApiVersion": { "jsonPath": ".metadata.ownerReferences[:].apiVersion" },
              "relationType": "Owns::Owned by",
              "relationDirection": false
            }
          ]
        }
      }

Then add this part to the "rules" in the ClusterRole section (the ClusterRole named servicenow):

  - apiGroups:
      - "kubevirt.io"
    resources:
      - virtualmachines
      - virtualmachineinstances
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "cdi.kubevirt.io"
    resources:
      - datavolumes
    verbs:
      - get
      - list
      - watch

On the instance side:

The system will create the KubeVirt-specific records (see below) without further configuration. However, if there is a need to also create a cmdb_ci_vm_instance, add the property sn_acc_visibility.kubevirt.create_cmdb_ci_vm_instance with the value 'true'. If the property sn_acc_visibility.kubevirt.create_server is available with the value 'true', the system will also create a cmdb_ci_server with an "instantiates" relation to the cmdb_ci_vm_instance. By default, this CI will not be created. If a more specific server CI already exists (e.g. cmdb_ci_linux_server or cmdb_ci_win_server), the CI coming from Kubernetes will be reconciled and will maintain its class.

Outcomes

The following Kubernetes resources will be reflected in the database:

Kubevirt Virtual Machine:

Record in cmdb_ci_kubernetes_component:
- Fields populated (labels): Name, Namespace, Kubernetes UID, Kubernetes Cluster, Kind: VirtualMachine, API Version: kubevirt.io/v1
- The CI has relations to the Kubernetes cluster and the namespace. It has relations to the Virtual Machine Instance and the Data Volume if it owns those.
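To illustrate how the fieldExtractor jsonPath expressions in the mappings_oob configuration above resolve against a resource, here is a minimal Python sketch. It is an illustration only, not the informer's actual implementation; the `extract` helper and the abbreviated VirtualMachine dict are hypothetical. Note how dots escaped with a backslash (as in the annotation key vm.kubevirt.io/os) stay inside a single key.

```python
import re

def extract(resource, json_path):
    # Resolve a dotted jsonPath like the mappings_oob fieldExtractor entries.
    # A backslash-escaped dot (e.g. in vm.kubevirt.io/os) is part of one key,
    # not a path separator.
    keys = [k.replace("\\.", ".") for k in re.split(r"(?<!\\)\.", json_path) if k]
    value = resource
    for key in keys:
        if not isinstance(value, dict):
            return None
        value = value.get(key)
        if value is None:
            return None
    return value

# Hypothetical, abbreviated VirtualMachine resource for illustration.
vm = {
    "metadata": {"uid": "abc-123"},
    "status": {"printableStatus": "Running"},
    "spec": {"template": {"metadata": {
        "annotations": {"vm.kubevirt.io/os": "fedora"},
    }}},
}

print(extract(vm, ".status.printableStatus"))  # Running
print(extract(vm, ".spec.template.metadata.annotations.vm\\.kubevirt\\.io/os"))  # fedora
print(extract(vm, ".metadata.uid"))  # abc-123
```

The same escaping convention applies to the label path .metadata.labels.kubevirt\.io/nodeName used for the Virtual Machine Instance's node_name column.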
Record in cmdb_kubevirt_virtual_machine:
Fields populated:
- Kubernetes Resource CI: reference to the cmdb_ci_kubernetes_component
- Virtual Machine Instance: reference to cmdb_ci_vm_instance
- Running: taken from .status.printableStatus
The default list view shows the following fields:
- Name, Namespace, Kubernetes Cluster, Kubernetes UID, Most recent discovery: from the referenced cmdb_ci_kubernetes_component
- CPUs, Disks, Memory (MB): from the referenced cmdb_ci_vm_instance, if one is created.

Kubevirt Virtual Machine Instance:

Record in cmdb_ci_kubernetes_component:
- Fields populated (labels): Name, Namespace, Kubernetes UID, Kubernetes Cluster, Kind: VirtualMachineInstance, API Version: kubevirt.io/v1
- The CI has relations to the Kubernetes cluster and the namespace. It also has a relation to the Virtual Machine that owns it.

Record in cmdb_kubevirt_virtual_machine_instance:
Fields populated:
- Kubernetes Resource CI: reference to the cmdb_ci_kubernetes_component
- Virtual Machine Instance: reference to cmdb_ci_vm_instance
- Running: taken from .status.printableStatus
The default list view shows the following fields:
- Name, Namespace, Kubernetes Cluster, Kubernetes UID, Most recent discovery: from the referenced cmdb_ci_kubernetes_component
- Node Name: taken from .metadata.labels.kubevirt.io/nodeName
- CPUs, Disks, Memory (MB): from the referenced cmdb_ci_vm_instance, if one is created.

Kubevirt Data Volume:

Record in cmdb_ci_kubernetes_component:
- Fields populated: Name, Namespace, Kubernetes UID, Kubernetes Cluster, Kind: DataVolume, API Version: cdi.kubevirt.io/v1beta1
- The CI has relations to the Kubernetes cluster and the namespace. It also has a relation to the Virtual Machine that owns it.

Record in cmdb_kubevirt_data_volume:
Fields populated:
- Kubernetes Resource CI: reference to the cmdb_ci_kubernetes_component
- Requested Storage: taken from .spec.storage.resources.requests.storage
The default list view shows the following fields:
- Name, Namespace, Kubernetes Cluster, Kubernetes UID, Most recent discovery: from the referenced cmdb_ci_kubernetes_component
- Requested Storage: taken from .spec.storage.resources.requests.storage
- CPUs, Disks, Memory (MB): from the referenced cmdb_ci_vm_instance, if one is created.

Virtual Machine Instance (cmdb_ci_vm_instance):

This CI will be created if the customer creates the sys_property sn_acc_visibility.kubevirt.create_cmdb_ci_vm_instance with the value "true". The CI will have a 'Managed by' relation to the Kubernetes cluster CI.
The following fields will be populated (labels):
- Name: from the KubeVirt Virtual Machine name
- Object ID: the Kubernetes UID of the KubeVirt Virtual Machine
- CPUs: from the KubeVirt Virtual Machine data: spec.template.spec.domain.cpu.cores x spec.template.spec.domain.cpu.sockets x spec.template.spec.domain.cpu.threads
- Disks: from the KubeVirt Virtual Machine data: count of spec.template.spec.domain.devices.disks
- Memory (MB): from the KubeVirt Virtual Machine data: spec.template.spec.domain.memory.guest (converted to MB)
- Network Adapters: from the KubeVirt Virtual Machine data: count of spec.template.spec.domain.devices.interfaces
- State: from the KubeVirt Virtual Machine 'running' field: Running -> on; otherwise: off
- Guest OS Name: from the KubeVirt Virtual Machine: spec.template.metadata.annotations.[vm.kubevirt.io/os]

Lifecycle of cmdb_ci_vm_instance:

When the property sn_acc_visibility.kubevirt.create_cmdb_ci_vm_instance is 'true' and the cmdb_ci_kubernetes_component CI with kind VirtualMachine is marked as 'absent' (or as the status defined in the property sn_acc_visibility.absent_install_status), the corresponding cmdb_ci_vm_instance will be marked as 'retired'.
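The derived-field arithmetic described above for cmdb_ci_vm_instance (CPUs as cores x sockets x threads, disk and interface counts, memory converted to MB, state from the 'running' field) can be sketched in Python. This is a simplified illustration of the described behavior, not the plugin's actual code; the helper names are hypothetical, and treating 1 MB as 1 MiB (1024 * 1024 bytes) is an assumption of this sketch.

```python
# Sketch of the derived cmdb_ci_vm_instance fields described above.
# Hypothetical helpers for illustration; not the plugin's implementation.

_BINARY = {"Ki": 1024, "Mi": 1024 ** 2, "Gi": 1024 ** 3, "Ti": 1024 ** 4}

def memory_mb(quantity: str) -> int:
    # Convert a Kubernetes memory quantity such as "2Gi" to MB.
    # Only the common binary suffixes are handled; 1 MB is treated as
    # 1 MiB here, which is an assumption of this sketch.
    for suffix, factor in _BINARY.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor // (1024 * 1024)
    return int(quantity) // (1024 * 1024)  # plain byte count

def vm_instance_fields(vm: dict) -> dict:
    # Compute the cmdb_ci_vm_instance fields from a VirtualMachine dict.
    domain = vm["spec"]["template"]["spec"]["domain"]
    cpu = domain.get("cpu", {})
    devices = domain.get("devices", {})
    running = vm.get("status", {}).get("printableStatus")
    return {
        # CPUs: cores x sockets x threads (unset values assumed to be 1)
        "cpus": cpu.get("cores", 1) * cpu.get("sockets", 1) * cpu.get("threads", 1),
        "disks": len(devices.get("disks", [])),
        "network_adapters": len(devices.get("interfaces", [])),
        "memory_mb": memory_mb(domain["memory"]["guest"]),
        # State: Running -> on; otherwise off
        "state": "on" if running == "Running" else "off",
    }

# Hypothetical, abbreviated VirtualMachine for illustration.
vm = {
    "spec": {"template": {"spec": {"domain": {
        "cpu": {"cores": 2, "sockets": 1, "threads": 1},
        "memory": {"guest": "2Gi"},
        "devices": {"disks": [{"name": "rootdisk"}],
                    "interfaces": [{"name": "default"}]},
    }}}},
    "status": {"printableStatus": "Running"},
}

print(vm_instance_fields(vm))
# {'cpus': 2, 'disks': 1, 'network_adapters': 1, 'memory_mb': 2048, 'state': 'on'}
```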