Why are the Nova hypervisor statistics not updating after renaming a host while instances are running?

The nova-compute service periodically updates hardware (VCPU, RAM, disk) statistics for a host and uses the host name (check with hostname -f on Linux) to update the database with the available resources.

In cases where the host name has been changed while instances are running, all existing instances still reference the old host name in the node column of the nova.instances table. All those entries in the nova MySQL database need to be updated in order for Nova to report the correct amount of available resources:

UPDATE nova.instances SET node = '<new host name>' WHERE node = '<old host name>';

Other columns, such as host and launched_on, should be updated with a subsequent SQL statement.
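
A sketch of such a follow-up statement, using the same placeholders as above (verify the column names against your schema and release before running it):

UPDATE nova.instances SET host = '<new host name>', launched_on = '<new host name>' WHERE host = '<old host name>';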

Ever wondered why Windows guests come up with the wrong time when running inside OpenStack?

When using KVM, OpenStack starts instances with the simulated guest hardware clock set to UTC. This is independent of the host clock setting.

Windows assumes the hardware clock is set to local time, so it boots up in UTC until time synchronization against time.microsoft.com finishes and corrects the clock to the desired time zone.

To make Windows interpret the hardware clock as UTC, you can add this registry entry:

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation]
"RealTimeIsUniversal"=dword:00000001

That will boot the instance with the correct time, since the guest and the host then interpret the hardware clock using the same time zone information.
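
If you prefer the command line over editing a .reg file, the same value can be set with reg.exe; a quick sketch:

reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation" /v RealTimeIsUniversal /t REG_DWORD /d 1 /f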

Additionally, please note that Microsoft Windows Server 2008 and Windows 7 had a high CPU usage issue when changing to DST.

That has been fixed with the following hotfix:

2800213 High CPU usage during DST changeover in Windows Server 2008, Windows 7, or Windows Server 2008 R2

https://support.microsoft.com/en-us/kb/2687252

Update per 6/25/2015: This hotfix is only applicable to older OpenStack releases (Havana and lower). Newer OpenStack releases start the guest in the local time zone of the host. Additionally, OpenStack needs to be aware that the image you're using is a Windows guest, so you have to set the os_type image property to windows. But beware of errors around those Glance properties: there are known issues where image properties are not retained if you create new images from existing Nova instances.
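
Setting the property can be done with the glance client, for example; a sketch using a placeholder image ID:

glance image-update --property os_type=windows <image-id>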

KVM Live Migration (RedHat)

Live Migration using shared storage

I really love the ability to migrate running VMs from one Linux hypervisor to another without the burden of pooling or the necessity of having some sort of shared storage attached, although migrations using shared storage, e.g. NFS, are faster and easier to accomplish. The migration can be initiated using the virt-manager GUI tool or, even simpler, at the virsh CLI. As a requirement I installed libvirtd and opened the network communication for libvirtd (listen_tcp = 1 in /etc/libvirt/libvirtd.conf). Also check the firewall settings on the hypervisor to make sure the libvirtd port is open (as root: netstat -ntlp | grep libvirtd).
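
A minimal sketch of the relevant configuration, assuming plain TCP transport as in the example below (choose an auth_tcp mechanism that fits your environment):

# /etc/libvirt/libvirtd.conf
listen_tcp = 1
auth_tcp = "none"    # no authentication; consider "sasl" outside a lab

# On RHEL/CentOS libvirtd additionally needs the --listen flag,
# e.g. in /etc/sysconfig/libvirtd:
LIBVIRTD_ARGS="--listen"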

The following example shows how to migrate a VM over the network using libvirtd. You should always enable TLS for libvirtd, but encryption is not always supported by 3rd-party products like CloudStack:

sudo virsh migrate --live --persistent --p2p --tunnelled <VM> qemu+tcp://<hypervisor>/system
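
Where TLS is set up for libvirtd, the same migration works over the encrypted transport; a sketch assuming the certificates are already deployed on both hypervisors:

sudo virsh migrate --live --persistent --p2p --tunnelled <VM> qemu+tls://<hypervisor>/system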

Important is the --persistent option, which ensures the VM stays defined on the target hypervisor. If you don't use the option, the VM configuration will automatically be removed from the target hypervisor when the VM shuts down, and you have to start the VM on the old machine again.
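
You can verify this on the target hypervisor after the migration; a quick check using the placeholder VM name:

sudo virsh dominfo <VM> | grep -i persistent
Persistent:     yes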

I usually use a temporary live migration during hardware maintenance or overload situations, with the intention of running the VM on the old metal again afterwards.


Live Migration using local storage

KVM allows you to live migrate a VM from local storage to local storage. The only requirements are that you have enough RAM and a destination disk image of the same size available on the target. This image needs to reside at the same path and have the same file name.
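
You can check the required size on the source host up front; a quick look using the example path from the steps below (the output shown is illustrative):

sudo qemu-img info /var/lib/libvirt/images/<VM>.img | grep 'virtual size'
virtual size: 2.0G (2147483648 bytes)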

  • Create a new disk on the destination KVM host
sudo qemu-img create -f qcow2 /var/lib/libvirt/images/<VM>.img 2G
  • Start migration on the source KVM host
sudo virsh migrate --live --p2p --tunnelled --persistent --copy-storage-all <VM> qemu+tcp://<hypervisor>/system


Adding Linux VLAN and bridge interfaces using libvirt

Ever wanted to know how to add interfaces (VLANs or bridges) to your Linux hypervisor to serve guest networks, without dealing with the distribution-specific network configuration?

Just use libvirt and its command-line tool virsh, as this tutorial shows.

First create an XML file containing your physical network layout. In this example I have a bonded Ethernet interface (bond0) and create a new interface bond0.10, which tags the Ethernet traffic with VLAN ID 10. The VLAN ID is an arbitrary number in this example, but I always suggest tagging all VM guest traffic and serving it through a bridge. Ideally those bridges run on top of a bonding interface, which is sometimes called teaming. Using the Linux bonding driver you can aggregate multiple interfaces into one logical interface, which can enhance bandwidth. Your switch should support IEEE 802.3ad aggregation protocols like LACP; otherwise I recommend active-passive bonding to enhance reliability against NIC or switch failures.

<interface type='bridge' name='br10'> 
  <start mode='onboot'/> 
  <bridge> 
    <interface type='vlan' name='bond0.10'> 
      <vlan tag='10'> 
        <interface name='bond0'/> 
      </vlan> 
    </interface> 
  </bridge> 
</interface>

Finally, create your libvirt/Linux interface:

sudo virsh iface-define br10.xml
sudo virsh iface-start br10

Now add a libvirt network using this XML file. I just create a network called vlan10 and connect it to the previously created bridge.

<network>
 <name>vlan10</name>
 <forward mode='bridge'/>
 <bridge name='br10' />
</network>

Time to assemble your libvirt network:

sudo virsh net-define vlan10.xml
sudo virsh net-start vlan10
sudo virsh net-autostart vlan10

If everything is done right, just check it using virsh again:

virsh # iface-list
 Name                 State      MAC Address
--------------------------------------------
 bond0                active     00:1d:09:70:a5:a2
 br10                 active     00:1d:09:70:a5:a2
 lo                   active     00:00:00:00:00:00

virsh # net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 vlan10               active     yes           yes

virsh # net-info vlan10
Name:           vlan10
UUID:           a19fa2be-161a-f7cc-a776-e645a990eee2
Active:         yes
Persistent:     yes
Autostart:      yes
Bridge:         br10

For the RedHat or CentOS folks who want to know how bonding interfaces can be created: just add the file ifcfg-bond0 under /etc/sysconfig/network-scripts (the number must be incremented with every new bonding interface):

DEVICE=bond0
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="mode=1 miimon=100"

Finally, assign multiple Ethernet interfaces to this bonding device, at least two for mode 1 (active-passive) to provide actual failover, by adding the following lines to each ifcfg-ethX file:

SLAVE="yes"
MASTER="bond0"
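
Put together, a complete slave interface file could look like this minimal sketch (eth0 stands in for your actual NIC name):

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MASTER="bond0"
SLAVE="yes"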