Live Migration - NEVER TURN THEM OFF!!
Today I succeeded in performing a live migration of a VM from one host to another. From the source host:

virsh migrate --live --verbose --domain myguest --desturi qemu+ssh://newhost/system

It took all of about 8 seconds to complete the migration. No downtime. It is said this is done in such a way that connections are not even interrupted. I would like to test that for myself. :-)
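One simple way to actually test that claim is to ping the guest from a third machine while the migration runs, then look for gaps in the timestamps. A sketch (the guest address and host names are placeholders for your environment):

```shell
# From a third machine: continuous timestamped ping against the guest
ping -D 10.17.0.200 | tee ping.log &

# On the source host, kick off the migration as before:
virsh migrate --live --verbose --domain myguest \
    --desturi qemu+ssh://newhost/system

# Afterwards, stop the ping and inspect the log for dropped or
# delayed replies around the cutover moment.
kill %1
less ping.log
```

An established SSH or TCP session into the guest during the migration would be an even stronger test than ICMP.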
Bonding, Bridging, and DNS
Ubuntu 12.04 now requires that DNS resolver directives reside in the /etc/network/interfaces file, or else are placed somewhere I haven't found out about yet. Here are the key directives, which should go with the adapter definition (as in, under iface eth0 inet static):
dns-search my.local.domain my.other.local.domain
dns-nameservers 192.168.4.4 192.168.4.5
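Putting it all together, a complete static stanza with the resolver directives in place looks like this (the address, netmask, and gateway here are placeholders; the dns-* lines are the ones from above):

```
auto eth0
iface eth0 inet static
    address 192.168.4.10
    netmask 255.255.255.0
    gateway 192.168.4.1
    dns-search my.local.domain my.other.local.domain
    dns-nameservers 192.168.4.4 192.168.4.5
```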
From 10.04 onward, bonding and bridging seem to be a lot easier. Unfortunately, the documentation for these features SUCKS (meaning it's almost non-existent, and forget about configuration examples, too). Here I configure a two-NIC bond using mode 6 (balance-alb, which requires no switch support). I also stuff the bond into a bridge (br0), since this host is also serving VMs:
auto eth0 eth1 br0 bond0

iface eth0 inet manual
    bond-master bond0

iface eth1 inet manual
    bond-master bond0

iface bond0 inet manual
    bond-miimon 100
    bond-slaves none
    bond-mode 6

iface br0 inet static
    bridge-ports bond0
    address 10.17.0.124
    netmask 255.240.0.0
    gateway 10.16.0.1
    bridge-stp on
    bridge-fd 0
    bridge-maxwait 0
Some things to note about the above:
- bond-slaves none was the recommendation of the forums and docs - the slaves are defined by specifying their master for each interface. This seems to work rather well.
- The per-interface bond-mode option suggests it's now even easier to set up different kinds of bonds on the same machine. Previously you had to alias the bonding driver multiple times with different options. Not terrible, just a little more clunky.
- The bridge-* options used to be bridge_*; note the dash is now used instead of the underscore. Moreover, the forward delay (fd) and maxwait are set to zero here. Trying to set them manually, or via any method other than this, seems to fail terribly - they always revert to their defaults! This is the first time I've been able to get my bridge to skip the 15-second forwarding delay on new interfaces, though I'd like to validate that beyond any doubt. The configuration tells me that's the way it is, so hopefully it tells the truth.
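A quick way to check that those values actually stuck (a sketch; assumes the ifenslave and bridge-utils packages are installed):

```shell
# Bond status: mode, link monitoring interval, and enslaved interfaces
cat /proc/net/bonding/bond0

# Bridge STP parameters: forward delay should read 0.00 if bridge-fd took
brctl showstp br0 | grep -i 'forward delay'

# Bridge membership: bond0 should appear as a port of br0
brctl show br0
```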
I tied the VM to the bridge in the NIC configuration of the VM definition (libvirt-style):
Note: Remember that when creating new VMs, just omit the <mac> and the <target> tags, and they will be auto-filled with appropriate or auto-generated values.
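For reference, a libvirt NIC definition tied to br0 looks something like this (the MAC address and target device name are illustrative, not values from my setup):

```xml
<interface type='bridge'>
  <source bridge='br0'/>
  <mac address='52:54:00:12:34:56'/>  <!-- illustrative; omit to auto-generate -->
  <target dev='vnet0'/>               <!-- illustrative; omit to auto-generate -->
  <model type='virtio'/>
</interface>
```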