In this section we present several enhancements introduced with Rocks 6.2 which allow greater control over the virtualization functionality.
The KVM roll allows users to define custom plugins which are invoked at each VM startup and shutdown. The plugins can be used to re-allocate the VM to a different physical container, to mount and unmount a storage system for the VM images, or to change hardware features of the VM before it boots.
There are two different startup plugins, triggered at different stages of the startup process. They both reside in /opt/rocks/lib/python2.6/site-packages/rocks/commands/start/host/vm and they must be named as indicated below.
plugin_allocate.py
This script is invoked before the virtual machine is started, and at this stage it is still possible to make modifications to the Rocks database; for example, the physical container of the virtual machine or its networking configuration can still be changed.
plugin_preboot.py
This script is invoked before the virtual machine is started, but at this stage it is no longer possible to make modifications to the Rocks database; the virtual machine can no longer be relocated to a different VM container. If this script returns a string, it will be used as the libvirt startup XML, so it is still possible to make temporary changes to the hardware of the virtual machine.
There is one shutdown plugin, which should be placed in /opt/rocks/lib/python2.6/site-packages/rocks/commands/stop/host/vm and must be named as indicated below.
plugin_disallocate.py
This script is invoked after the virtual machine is shut down, and it should release the resources allocated by the two previous scripts.
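As an illustration, the following is a minimal sketch of a plugin_preboot.py. It assumes the standard Rocks command-plugin interface (a Plugin class derived from rocks.commands.Plugin exposing provides() and run()); the exact arguments passed to run() are an assumption here, so compare with the plugins shipped in the directories above before relying on it.

# /opt/rocks/lib/python2.6/site-packages/rocks/commands/start/host/vm/plugin_preboot.py
#
# Minimal sketch of a preboot plugin.  The Plugin base class and the
# run() signature are assumptions based on the standard Rocks
# command-plugin interface; verify against the plugins shipped with
# the KVM roll.

import rocks.commands

class Plugin(rocks.commands.Plugin):

    def provides(self):
        # Name under which this plugin registers itself
        return 'preboot'

    def run(self, args):
        # Invoked just before the VM boots; the Rocks database can no
        # longer be modified at this point.  Per the documentation
        # above, returning a string replaces the libvirt startup XML
        # for this boot only, which allows temporary hardware changes.
        # Returning None (assumed) leaves the generated XML untouched.
        return None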
Several attributes can be used to customize the virtual hardware of the VM.
Since attribute values will be used inside an XML file during kickstart generation, they need to be properly escaped following XML escaping rules (only the characters >, < and & are escaped). For this reason an attribute value might appear different when running rocks list host attr (e.g. < is displayed as &lt;).
Using the attribute cpu_mode it is now possible to configure a guest CPU to be as close to the host CPU as possible. The attribute can take one of two values (descriptions taken from the Libvirt Documentation):
host-model: The host-model mode is essentially a shortcut to copying the host CPU definition from the capabilities XML into the domain XML. Since the CPU definition is copied just before starting a domain, exactly the same XML can be used on different hosts while still providing the best guest CPU each host supports. Use this mode if you need to migrate virtual machines.
host-passthrough: With this mode, the CPU visible to the guest is exactly the same as the host CPU, even in aspects that libvirt does not understand. The downside of this mode is that the guest environment cannot be reproduced on different hardware. This is the default mode; if you do not need migration capabilities but just speed, use this mode.
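For example, to request the host-model mode for a VM (following the same rocks set host attr syntax used throughout this section):

rocks set host attr hosted-vm-0-2-0 cpu_mode value=host-model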
The attribute cpu_mode can also be used to specify a particular topology or model type for the CPU. If the value of the cpu_mode attribute contains a colon, the second part of the value (the part after the colon) will be inserted verbatim inside the cpu tag. To better understand which tags can be used inside the cpu tag, see the Libvirt Documentation. So for example, if the cpu_mode attribute value is:
exact: <model fallback='allow'>core2duo</model><vendor>Intel</vendor><topology sockets='1' cores='2'/>
then the XML passed to libvirt will be:
<cpu mode='exact'>
  <model fallback='allow'>core2duo</model><vendor>Intel</vendor><topology sockets='1' cores='2'/>
</cpu>
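Such a value could be set, for example, with a command of this form (double quotes are used here so the shell preserves the inner single quotes; the XML special characters will then be stored escaped, as noted above):

rocks set host attr hosted-vm-0-2-0 cpu_mode value="exact: <model fallback='allow'>core2duo</model><vendor>Intel</vendor><topology sockets='1' cores='2'/>"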
Using the attribute kvm_cpu_pinning it is now possible to pin virtual CPUs to physical CPUs. If the value of the attribute is pin_all, each virtual CPU will automatically be pinned to the corresponding physical CPU. This mode is useful only if you have one VM per physical machine, since the pinning always starts from physical core 0. Any other value of this attribute will be copied into the final libvirt XML as a child of the <domain> root tag.
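For instance, the automatic pin_all mode described above can be selected with:

rocks set host attr hosted-vm-0-2-0 kvm_cpu_pinning value=pin_all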
For a manual pinning, the following command pins virtual CPU 0 to physical CPU 4, virtual CPU 1 to physical CPU 5, virtual CPU 2 to physical CPU 6, and virtual CPU 3 to physical CPU 7 on the VM called hosted-vm-0-2-0.
rocks set host attr hosted-vm-0-2-0 kvm_cpu_pinning value='<cputune>\
<vcpupin vcpu="0" cpuset="4"/>\
<vcpupin vcpu="1" cpuset="5"/>\
<vcpupin vcpu="2" cpuset="6"/>\
<vcpupin vcpu="3" cpuset="7"/>\
</cputune>'
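The value stored in the database can be checked with the rocks list host attr command mentioned earlier (the XML characters will appear escaped):

rocks list host attr hosted-vm-0-2-0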
Using attributes named kvm_device_%d, where %d is an integer starting from 0, it is possible to add <devices> lines to a VM in order to fine tune the hardware devices presented to the VM (for more information on the syntax which can be used, please refer to the Libvirt Documentation).
For example, the following command assigns the PCI device at bus 0x06, slot 0x02, function 0x0 to the VM hosted-vm-0-2-0.
rocks set host attr hosted-vm-0-2-0 kvm_device_0 value='<hostdev mode="subsystem" type="pci" managed="yes">\
<source>\
<address bus="0x06" slot="0x02" function="0x0"/>\
</source>\
</hostdev>'
Using the command rocks set host vm cdrom it is now possible to attach a CDROM to a VM. The path specified in the cdrom attribute must exist on the physical container of the virtual machine. When a CDROM is attached, the boot order of the machine is changed so that the CDROM is tried first, then the network, and then the hard disk. After the CDROM path has been changed with rocks set host vm cdrom, the virtual machine has to be powered off and restarted with rocks start host vm in order for the changes to take effect.
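As an illustration, attaching an ISO image and restarting the VM might look as follows (the ISO path is hypothetical, and the cdrom= parameter syntax is an assumption; check the command usage on your frontend):

rocks set host vm cdrom hosted-vm-0-2-0 cdrom=/state/partition1/image.iso
rocks stop host vm hosted-vm-0-2-0
rocks start host vm hosted-vm-0-2-0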
If a virtual machine has the attribute kvm_autostart defined with a value of true, it will automatically be restarted when its physical container is rebooted. If the physical container is properly shut down, the virtual machine will be paused and saved to disk, and when the physical container restarts the VM will automatically be restored. If the physical container is unplugged from the power (a hard shutdown), the virtual machine will crash along with the physical container and will automatically be restarted with a normal boot when the physical container comes back up.
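For example, autostart can be enabled for a VM with:

rocks set host attr hosted-vm-0-2-0 kvm_autostart value=true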
Starting with Rocks 6.2 it is possible to run virtual machines on any type of node; for example, compute nodes can run virtual machines. VM containers are enabled automatically, but to enable a generic node to run virtual machines the user must set its kvm attribute to true and then re-install the node. After the re-installation the node will be able to host virtual machines.
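For example, to enable a compute node (the host name compute-0-0 is illustrative), set the attribute and mark the node for re-installation; one common way to trigger the re-install is to set the boot action and reboot the node:

rocks set host attr compute-0-0 kvm value=true
rocks set host boot compute-0-0 action=install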