I use iSCSI from a giant ZFS storage pool to set up virtual machines. There are a couple of advantages to this: you get the power of ZFS snapshots and the ability to grow your virtual machines' disks fairly easily. To make administration a bit easier, I use one large iSCSI target and put LVM on top of it. So really this article applies to anyone who wants to use logical volumes for their disks, which is much preferred over using files.
I run all of my servers headless, so I will be using virsh and virt-install rather than virt-manager. Also, I will be using Fedora 12 for the virtualization host, as this is where KVM is actively developed and generally where you will find the best performance and stability.
Installing the required software
Fedora has a nice "Virtualization" package group which will get you set up to run KVM-based virtual machines.
# yum groupinstall "Virtualization" -y
Since I want my virtual machines to have direct access to the network, I will put one of my NICs into a bridge. You will want to do the same unless you prefer the default NAT'd mode.
Network configuration
To configure your network you need to create /etc/sysconfig/network-scripts/ifcfg-br0
for a bridged interface. Use the following as an example:
# cat ifcfg-br0
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
IPADDR=10.10.0.3
NETMASK=255.255.0.0
ONBOOT=yes
Then, in the same directory, edit the file for the interface you want attached to the bridge. For my configuration I'm using eth3.
# cat ifcfg-eth3
DEVICE=eth3
HWADDR=00:15:17:D6:35:69
ONBOOT=yes
TYPE=Ethernet
IPV6INIT=no
USERCTL=no
BRIDGE=br0
That's it. I highly recommend removing NetworkManager entirely. Don't just disable it; it's a waste of space and complicates things.
# yum remove -y NetworkManager
# chkconfig network on
# service network restart
# ifconfig br0
br0       Link encap:Ethernet  HWaddr 00:15:17:D6:35:69
          inet addr:10.10.0.3  Bcast:10.10.255.255  Mask:255.255.0.0
          inet6 addr: fe80::215:17ff:fed6:3569/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:68994 errors:0 dropped:0 overruns:0 frame:0
          TX packets:25910 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:59178792 (56.4 MiB)  TX bytes:1746255 (1.6 MiB)
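A typo in an ifcfg file means your host drops off the network when you restart it, which is painful on a headless box. A minimal sketch of checking your work first (the check_bridge_cfg helper is hypothetical, not part of any Fedora tooling; it writes a sample file under /tmp so the example is self-contained):

```shell
# Hypothetical helper: verify an ifcfg file actually declares the bridge
# before you restart the network on a remote machine.
check_bridge_cfg() {
    grep -q '^TYPE=Bridge$' "$1" && grep -q '^DEVICE=br0$' "$1"
}

# Sample file for demonstration; on a real host you would point the
# helper at /etc/sysconfig/network-scripts/ifcfg-br0 instead.
cat > /tmp/ifcfg-br0 <<'EOF'
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
IPADDR=10.10.0.3
NETMASK=255.255.0.0
ONBOOT=yes
EOF

check_bridge_cfg /tmp/ifcfg-br0 && echo "bridge config looks sane"
```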
Now you have br0 to attach virtual NICs (those created by KVM) to, giving them direct access to your network.
Disk configuration
Since LVM is useful in this situation, we're going to use it. I'm starting with a freshly partitioned disk, so I will go over creating a proper LVM configuration. You need to know which disk and partition you will be creating this on, so if you don't know, do not just blindly run these commands. Consult fdisk -l and the man pages.
# fdisk -l /dev/sdc

Disk /dev/sdc: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xae668fde

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       26108   209712478+  8e  Linux LVM

# pvcreate /dev/sdc1
# vgcreate zfs_lun /dev/sdc1
# lvcreate -L 20G zfs_lun -n guest1
# lvdisplay /dev/zfs_lun/guest1
  --- Logical volume ---
  LV Name                /dev/zfs_lun/guest1
  VG Name                zfs_lun
  LV UUID                pVt1tv-OZSr-3joc-xaO1-iklH-z2eS-FEAbRf
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                20.00 GB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2
You now have a 20GB logical volume that you can use for installing a virtual machine. To briefly go over the commands above: first we needed to create the physical volume on top of /dev/sdc1. From there you create a volume group, which is a collection of logical volumes. You can have more than one of these on a disk, but for my purposes I just let the volume group consume the entire physical volume. Finally, we set up a logical volume with 20GB of space. The last command is useful for double checking your work, which is always a good idea.
libvirt Configuration
Since I'm on an isolated, trusted network, I like to have full access to the VNC consoles that are made available. By default these only listen on localhost, and ssh port forwarding becomes tedious when you are trying to manage more than one at a time. So to open up VNC to the world, you need to uncomment this line in /etc/libvirt/qemu.conf:
# VNC is configured to listen on 127.0.0.1 by default.
# To make it listen on all public interfaces, uncomment
# this next option.
#
# NB, strong recommendation to enable TLS + x509 certificate
# verification when allowing public access
#
vnc_listen = "0.0.0.0"
Afterwards, restart libvirtd, and make sure it starts on boot while we're at it:
# service libvirtd restart
# chkconfig libvirtd on
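It's easy to uncomment the wrong line, or to leave the setting commented out by mistake, and libvirtd won't complain either way. A quick grep confirms the option is actually active; this sketch writes a miniature copy to /tmp so it's self-contained, but against the real host you would grep /etc/libvirt/qemu.conf:

```shell
# Self-contained demo: a miniature qemu.conf with one commented and one
# active vnc_listen line. The anchored grep only counts the active one,
# so a count of 1 means the option really took effect.
cat > /tmp/qemu.conf <<'EOF'
# vnc_listen = "0.0.0.0"
vnc_listen = "0.0.0.0"
EOF

grep -c '^vnc_listen = "0.0.0.0"' /tmp/qemu.conf   # prints 1
```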
virt-install, it's easy!
This command line utility is great. I strongly recommend reading its man page, as you will find useful information for installing guests the way you would like. I find it much more powerful than virt-manager, as last I checked virt-manager leaves out a few useful features from the libvirt library. (Disclaimer: I haven't used virt-manager in a long, long time, at least in development years.)
# virt-install --name guest1 \
    --ram 1024 \
    --location http://mirror.crazy.lan/Fedora/12/Fedora/x86_64/os \
    --file /dev/zfs_lun/guest1 \
    --vnc \
    --bridge=br0
The long option names should make this fairly easy to understand. --location refers to the location of your installation media; this can be a variety of sources (check the man page!), but http is my personal choice. --file of course specifies the disk that the guest will use, --vnc tells qemu-kvm to allow VNC access, and --bridge specifies the network bridge to attach to.
An interesting option that I once played around with was --pxe
. This allows your VM to boot over the network and install from there. This is a great little option if you've already got the infrastructure in place for netbooting, and possibly well worth setting up if you're tired of carrying around install CDs.
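A hedged sketch of what a PXE-based install looks like (guest2 and its logical volume are made up for the example; this assumes DHCP and TFTP netboot infrastructure already answers on the bridged network):

```shell
# Same shape as the install above, but the guest boots from the network
# instead of an HTTP install tree. --pxe replaces --location, and the
# guest name/volume here are illustrative, not from the article.
virt-install --name guest2 \
    --ram 1024 \
    --pxe \
    --file /dev/zfs_lun/guest2 \
    --vnc \
    --bridge=br0
```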
virsh
Your virtual shell. This is where you can modify your VMs, and again, it is well worth reading the man page. If you're too lazy, just type virsh help and you'll be presented with a list of commands. So, earlier we started an install; how can we check that it's running and connect to its console?
# virsh
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # list
 Id Name                 State
----------------------------------
  1 guest1               running

virsh # vncdisplay guest1
:0
This tells you which VMs are running and what state they are in. The vncdisplay command lets you find out which display to connect to over VNC. So from your management station you can run the following to get in:
vncviewer virt02.crazy.lan:0
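If a firewall sits between you and the virtualization host, it helps to know that VNC display numbers are just an offset from a base TCP port:

```shell
# VNC display :N listens on TCP port 5900 + N, so the ":0" reported by
# vncdisplay above means port 5900; a second guest on :1 would be 5901.
display=0
port=$(( 5900 + display ))
echo "$port"   # prints 5900
```

So opening 5900-5910 on the host covers your first eleven guests.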
Now you have a VNC window open and will be taken through a normal installation process. That's it. There's a ton you can do with libvirt; it really just requires some reading. At some point I may look more into virt-manager, but honestly it leaves something to be desired compared to the already powerful command line tools.