QEMU
From the QEMU about page: "QEMU is a generic and open source machine emulator and virtualizer."
When used as a machine emulator, QEMU can run operating systems and programs made for one architecture (e.g. ARM) on a different architecture (e.g. an x86 PC). By using dynamic translation, it achieves very good performance.
When used as a virtualizer, QEMU can use other hypervisors such as Xen or KVM to make use of CPU extensions (HVM) for virtualization, achieving near-native performance by executing the guest code directly on the host CPU.
Contents
- 1 Installation
- 2 Graphical front-ends for QEMU
- 3 Creating a new virtualized system
- 4 Running a virtualized system
- 5 Sharing data between host and guest
- 6 Networking
- 7 Graphics
- 8 Installing virtio drivers
- 9 Tips and tricks
- 10 Troubleshooting
- 10.1 Mouse cursor is jittery or erratic
- 10.2 No visible Cursor
- 10.3 Keyboard seems broken or the arrow keys do not work
- 10.4 Virtual machine runs too slowly
- 10.5 Guest display stretches on window resize
- 10.6 ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy
- 10.7 libgfapi error message
- 10.8 Kernel panic on LIVE-environments
- 11 See also
Installation
Install the qemu package (or qemu-headless for the version without a GUI) and, as needed, the following optional packages:
- qemu-arch-extra - extra architectures support
- qemu-block-gluster - Glusterfs block support
- qemu-block-iscsi - iSCSI block support
- qemu-block-rbd - RBD block support
- samba - SMB/CIFS server support
Graphical front-ends for QEMU
Unlike other virtualization programs such as VirtualBox and VMware, QEMU does not provide a GUI to manage virtual machines (other than the window that appears when running a virtual machine), nor does it provide a way to create persistent virtual machines with saved settings. All parameters to run a virtual machine must be specified on the command line at every launch, unless you have created a custom script to start your virtual machine(s).
Libvirt provides a convenient way to manage QEMU virtual machines. See the list of libvirt clients for available front-ends.
Other graphical front-ends:
- virt-manager
- gnome-boxes
- qemu-launcherAUR
- qtemuAUR
- aqemuAUR
Creating a new virtualized system
Creating a hard disk image
Unless you are booting directly from a CD-ROM or the network (and not installing the system to a local disk), a hard disk image is needed to run QEMU. A hard disk image is a file which stores the contents of the emulated hard disk.
A hard disk image can be raw, meaning it is literally byte-by-byte the same as what the guest sees, and will always use the full capacity of the guest hard drive on the host. This method provides the least I/O overhead, but can waste a lot of space, as space not used by the guest cannot be used on the host.
Alternatively, the hard disk image can be in a format such as qcow2, which only allocates space to the image file when the guest system actually writes to it. The disk appears at full size to the guest, even though it may take up only a very small amount of space on the host. This image format also supports the QEMU snapshotting functionality (see #Creating and managing snapshots via the monitor console for details). However, using this format instead of raw will likely affect performance.
QEMU provides the qemu-img command to create hard disk images. For example, to create a 4 GiB image in the raw format:
$ qemu-img create -f raw image_file 4G
You may instead use -f qcow2 to create a qcow2 disk.
A raw image of the required size can also be created with dd or fallocate.
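As a sketch (image_file is just an example name), either of the following commands creates a sparse 4 GiB raw image:

```shell
# Create a sparse 4 GiB raw image named image_file with fallocate:
fallocate -l 4G image_file
# Equivalent with dd: seek to 4096 MiB and truncate there, writing no data.
dd if=/dev/null of=image_file bs=1M seek=4096 status=none
```

Both files are sparse, so they occupy almost no space on the host until the guest writes to them.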
Overlay storage images
You can create a storage image once (the "backing" image) and have QEMU keep mutations to this image in an overlay image. This allows you to revert to a previous state of this storage image. You could revert by creating a new overlay image at the time you wish to revert, based on the original backing image.
To create an overlay image, issue a command like:
$ qemu-img create -o backing_file=img1.raw,backing_fmt=raw -f qcow2 img1.cow
After that you can run your QEMU VM as usual (see #Running virtualized system):
$ qemu-system-i386 img1.cow
The backing image will then be left intact and mutations to this storage will be recorded in the overlay image file.
When the path to the backing image changes, repair is required.
Make sure that the original backing image's path still leads to this image. If necessary, make a symbolic link at the original path to the new path. Then issue a command like:
$ qemu-img rebase -b /new/img1.raw /new/img1.cow
At your discretion, you may alternatively perform an 'unsafe' rebase where the old path to the backing image is not checked:
$ qemu-img rebase -u -b /new/img1.raw /new/img1.cow
Resizing an image
The qemu-img executable has the resize option, which enables easy resizing of a hard drive image. It works for both raw and qcow2. For example, to increase image space by 10 GiB, run:
$ qemu-img resize disk_image +10G
After enlarging the disk image, you must use the partitioning tools of the system inside the virtual machine to partition and format the new space before you can actually start using it. When shrinking a disk image, you must first reduce the allocated partition sizes using the partitioning tools inside the guest and then shrink the disk image accordingly; otherwise, shrinking the disk image will result in data loss!
Preparing the installation media
To install an operating system into your disk image, you need the installation medium (e.g. optical disc, USB drive, or ISO image) for the operating system. The installation medium should not be mounted, because QEMU accesses the media directly.
Running a virtualized system
qemu-system-* binaries (for example qemu-system-i386 or qemu-system-x86_64, depending on the guest's architecture) are used to run the virtualized guest. The usage is:
$ qemu-system-i386 options disk_image
Options are the same for all qemu-system-* binaries; see qemu(1) for documentation of all options.
By default, QEMU will show the virtual machine's video output in a window. One thing to keep in mind: when you click inside the QEMU window, the mouse pointer is grabbed. To release it, press Ctrl+Alt.
Enabling KVM
KVM must be supported by your processor and kernel, and the necessary kernel modules must be loaded. See KVM for more information.
To start QEMU in KVM mode, append -enable-kvm to the startup options. To check if KVM is enabled for a running VM, enter the QEMU Monitor using Ctrl+Alt+Shift+2, and type info kvm.
- If you start your VM with a GUI tool and experience very bad performance, you should check for proper KVM support, as QEMU may be falling back to software emulation.
- KVM needs to be enabled in order to start Windows 7 and Windows 8 properly without a blue screen.
Enabling IOMMU (Intel VT-d/AMD-Vi) support
Using an IOMMU opens up features like PCI passthrough and memory protection against faulty or malicious devices.
To enable the IOMMU:
- Ensure that your CPU supports AMD-Vi/Intel VT-d and that support is enabled in the BIOS settings.
- Add intel_iommu=on to the kernel parameters if you have an Intel CPU, or amd_iommu=on if you have an AMD CPU.
- Reboot and verify that the IOMMU is enabled by checking dmesg for DMAR: [0.000000] DMAR: IOMMU enabled
- Depending on the -machine type, add iommu=on or q35,iommu=on.
Sharing data between host and guest
Network
Data can be shared between the host and guest OS using any network protocol that can transfer files, such as NFS, SMB, NBD, HTTP, FTP, or SSH, provided that you have set up the network appropriately and enabled the appropriate services.
The default user-mode networking allows the guest to access the host OS at the IP address 10.0.2.2. Any servers that you are running on your host OS, such as an SSH server or SMB server, will be accessible at this IP address. So on the guests, you can mount directories exported on the host via SMB or NFS, or you can access the host's HTTP server, etc. It will not be possible for the host OS to access servers running on the guest OS, but this can be done with other network configurations (see #Tap networking with QEMU).
QEMU's built-in SMB server
QEMU's documentation says it has a "built-in" SMB server, but actually it just starts up Samba with an automatically generated smb.conf
file located at /tmp/qemu-smb.pid-0/smb.conf
and makes it accessible to the guest at a different IP address (10.0.2.4 by default). This only works for user networking, and this is not necessarily very useful since the guest can also access the normal Samba service on the host if you have set up shares on it.
To enable this feature, start QEMU with a command like:
$ qemu-system-i386 disk_image -net nic -net user,smb=shared_dir_path
where shared_dir_path
is a directory that you want to share between the guest and host.
Then, in the guest, you will be able to access the shared directory on the host 10.0.2.4 with the share name "qemu". For example, in Windows Explorer you would go to \\10.0.2.4\qemu
.
- If you use the sharing option multiple times, like -net user,smb=shared_dir_path1 -net user,smb=shared_dir_path2 or -net user,smb=shared_dir_path1,smb=shared_dir_path2, then only the last defined one will be shared.
- If you cannot access the shared folder and the guest system is Windows, check that the NetBIOS protocol is enabled and that a firewall does not block the ports used by the NetBIOS protocol.
Mounting a virtual hard disk image
When the virtual machine is not running, it is possible to mount partitions that are inside a raw disk image file by setting them up as loopback devices. This does not work with disk images in special formats, such as qcow2, although those can be mounted using qemu-nbd.
With manually specifying byte offset
One way to mount a disk image partition is to mount the disk image at a certain offset using a command like the following:
# mount -o loop,offset=32256 disk_image mountpoint
The offset=32256
option is actually passed to the losetup
program to set up a loopback device that starts at byte offset 32256 of the file and continues to the end. This loopback device is then mounted. You may also use the sizelimit
option to specify the exact size of the partition, but this is usually unnecessary.
Depending on your disk image, the needed partition may not start at offset 32256. Run fdisk -l disk_image
to see the partitions in the image. fdisk gives the start and end offsets in 512-byte sectors, so multiply by 512 to get the correct offset to pass to mount
.
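For example, the multiplication can be done in the shell (the start sector below is a hypothetical value taken from fdisk -l output):

```shell
# Hypothetical start sector reported by `fdisk -l disk_image`
start_sector=63
# Multiply by the 512-byte sector size to get the byte offset for mount
echo $(( start_sector * 512 ))   # prints 32256
```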
With loop module autodetecting partitions
The Linux loop driver actually supports partitions in loopback devices, but it is disabled by default. To enable it, do the following:
- Get rid of all your loopback devices (unmount all mounted images, etc.).
- Unload the loop kernel module, and load it with the max_part=15 parameter set. Additionally, the maximum number of loop devices can be controlled with the max_loop parameter.
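As a sketch, the module option can be made persistent with a modprobe.d fragment (the file name here is an assumption):

```
# /etc/modprobe.d/loop.conf (file name is an example)
options loop max_part=15
```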
You can write an entry in /etc/modprobe.d to load the loop module with max_part=15 every time, or you can put loop.max_part=15 on the kernel command line, depending on whether you have the loop.ko module built into your kernel or not.
Set up your image as a loopback device:
# losetup -f -P disk_image
Then, if the device created was /dev/loop0
, additional devices /dev/loop0pX
will have been automatically created, where X is the number of the partition. These partition loopback devices can be mounted directly. For example:
# mount /dev/loop0p1 mountpoint
To mount the disk image with udisksctl, see Udisks#Mount loop devices.
With kpartx
kpartx from the multipath-tools package can read a partition table on a device and create a new device for each partition. For example:
# kpartx -a disk_image
This will set up the loopback device and create the necessary partition device(s) in /dev/mapper/.
Mounting a qcow2 image
You can mount a qcow2 image using qemu-nbd. See Wikipedia:Qcow#Mounting_qcow2_images.
Using any real partition as the single primary partition of a hard disk image
Sometimes, you may wish to use one of your system partitions from within QEMU. Using a raw partition for a virtual machine will improve performance, as the read and write operations do not go through the file system layer on the physical host. Such a partition also provides a way to share data between the host and guest.
In Arch Linux, device files for raw partitions are, by default, owned by root and the disk group. If you would like to have a non-root user be able to read and write to a raw partition, you need to change the owner of the partition's device file to that user.
- Although it is possible, it is not recommended to allow virtual machines to alter critical data on the host system, such as the root partition.
- You must not mount a file system on a partition read-write on both the host and the guest at the same time. Otherwise, data corruption will result.
After doing so, you can attach the partition to a QEMU virtual machine as a virtual disk.
However, things are a little more complicated if you want to have the entire virtual machine contained in a partition. In that case, there would be no disk image file to actually boot the virtual machine since you cannot install a bootloader to a partition that is itself formatted as a file system and not as a partitioned device with a MBR. Such a virtual machine can be booted either by specifying the kernel and initrd manually, or by simulating a disk with a MBR by using linear RAID.
By specifying kernel and initrd manually
QEMU supports loading Linux kernels and init ramdisks directly, thereby circumventing bootloaders such as GRUB. It then can be launched with the physical partition containing the root file system as the virtual disk, which will not appear to be partitioned. This is done by issuing a command similar to the following:
It may be a good idea to attach /dev/sda3 read-only (to protect the file system from the host) and to specify the /full/path/to/images, or to use some kexec hackery in the guest to reload the guest's kernel (which extends boot time).
$ qemu-system-i386 -kernel /boot/vmlinuz-linux -initrd /boot/initramfs-linux.img -append root=/dev/sda /dev/sda3
In the above example, the physical partition being used for the guest's root file system is /dev/sda3
on the host, but it shows up as /dev/sda
on the guest.
You may, of course, specify any kernel and initrd that you want, and not just the ones that come with Arch Linux.
When there are multiple kernel parameters to be passed to the -append
option, they need to be quoted using single or double quotes. For example:
... -append 'root=/dev/sda1 console=ttyS0'
Simulate virtual disk with MBR using linear RAID
A more complicated way to have a virtual machine use a physical partition, while keeping that partition formatted as a file system and not just having the guest partition the partition as if it were a disk, is to simulate a MBR for it so that it can boot using a bootloader such as GRUB.
You can do this using software RAID in linear mode (you need the linear.ko
kernel driver) and a loopback device: the trick is to dynamically prepend a master boot record (MBR) to the real partition you wish to embed in a QEMU raw disk image.
Suppose you have a plain, unmounted /dev/hdaN
partition with some file system on it you wish to make part of a QEMU disk image. First, you create some small file to hold the MBR:
$ dd if=/dev/zero of=/path/to/mbr count=32
Here, a 16 KB (32 * 512 bytes) file is created. It is important not to make it too small (even if the MBR only needs a single 512 bytes block), since the smaller it will be, the smaller the chunk size of the software RAID device will have to be, which could have an impact on performance. Then, you setup a loopback device to the MBR file:
# losetup -f /path/to/mbr
Let us assume the resulting device is /dev/loop0
, because we would not already have been using other loopbacks. Next step is to create the "merged" MBR + /dev/hdaN
disk image using software RAID:
# modprobe linear
# mdadm --build --verbose /dev/md0 --chunk=16 --level=linear --raid-devices=2 /dev/loop0 /dev/hdaN
The resulting /dev/md0
is what you will use as a QEMU raw disk image (do not forget to set the permissions so that the emulator can access it). The last (and somewhat tricky) step is to set the disk configuration (disk geometry and partitions table) so that the primary partition start point in the MBR matches the one of /dev/hdaN
inside /dev/md0
(an offset of exactly 16 * 512 = 16384 bytes in this example). Do this using fdisk
on the host machine, not in the emulator: the default raw disk detection routine from QEMU often results in non-kilobyte-roundable offsets (such as 31.5 KB, as in the previous section) that cannot be managed by the software RAID code. Hence, from the host:
# fdisk /dev/md0
Press X
to enter the expert menu. Set number of 's'ectors per track so that the size of one cylinder matches the size of your MBR file. For two heads and a sector size of 512, the number of sectors per track should be 16, so we get cylinders of size 2x16x512=16k.
Now, press R
to return to the main menu.
Press P
and check that the cylinder size is now 16k.
Now, create a single primary partition corresponding to /dev/hdaN. It should start at cylinder 2 and end at the end of the disk (note that the number of cylinders now differs from what it was when you entered fdisk).
Finally, 'w'rite the result to the file: you are done. You now have a partition you can mount directly from your host, as well as part of a QEMU disk image:
$ qemu-system-i386 -hdc /dev/md0 [...]
You can, of course, safely set any bootloader on this disk image using QEMU, provided the original /dev/hdaN
partition contains the necessary tools.
Networking
The performance of virtual networking should be better with tap devices and bridges than with user-mode networking or vde because tap devices and bridges are implemented in-kernel.
In addition, networking performance can be improved by assigning virtual machines a virtio network device rather than the default emulation of an e1000 NIC. See #Installing virtio drivers for more information.
Link-level address caveat
By giving the -net nic
argument to QEMU, it will, by default, assign a virtual machine a network interface with the link-level address 52:54:00:12:34:56
. However, when using bridged networking with multiple virtual machines, it is essential that each virtual machine has a unique link-level (MAC) address on the virtual machine side of the tap device. Otherwise, the bridge will not work correctly, because it will receive packets from multiple sources that have the same link-level address. This problem occurs even if the tap devices themselves have unique link-level addresses because the source link-level address is not rewritten as packets pass through the tap device.
Make sure that each virtual machine has a unique link-level address, but it should always start with 52:54:. Use the following option, replacing each X with an arbitrary hexadecimal digit:
$ qemu-system-i386 -net nic,macaddr=52:54:XX:XX:XX:XX -net vde disk_image
Generating unique link-level addresses can be done in several ways:
- Manually specify a unique link-level address for each NIC. The benefit is that the DHCP server will assign the same IP address each time the virtual machine is run, but this is unusable for a large number of virtual machines.
- Generate a random link-level address each time the virtual machine is run. This has practically zero probability of collisions, but the downside is that the DHCP server will assign a different IP address each time. You can use the following command in a script to generate a random link-level address in a macaddr variable:
printf -v macaddr "52:54:%02x:%02x:%02x:%02x" $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff ))
qemu-system-i386 -net nic,macaddr="$macaddr" -net vde disk_image
- Use the following script qemu-mac-hasher.py to generate the link-level address from the virtual machine name using a hashing function. Given that the names of virtual machines are unique, this method combines the benefits of the aforementioned methods: it generates the same link-level address each time the script is run, yet it preserves the practically zero probability of collisions.
qemu-mac-hasher.py
#!/usr/bin/env python
import sys
import zlib

if len(sys.argv) != 2:
    print("usage: %s <VM Name>" % sys.argv[0])
    sys.exit(1)

crc = zlib.crc32(sys.argv[1].encode("utf-8")) & 0xffffffff
# Zero-pad to eight hex digits so the MAC always has six full octets
crc = "{:08x}".format(crc)
print("52:54:%s%s:%s%s:%s%s:%s%s" % tuple(crc))
In a script, you can use for example:
vm_name="VM Name"
qemu-system-i386 -name "$vm_name" -net nic,macaddr=$(qemu-mac-hasher.py "$vm_name") -net vde disk_image
User-mode networking
By default, without any -netdev arguments, QEMU will use user-mode networking with a built-in DHCP server. Your virtual machines will be assigned an IP address when they run their DHCP client, and they will be able to access the physical host's network through IP masquerading done by QEMU.
This default configuration allows your virtual machines to easily access the Internet, provided that the host is connected to it, but the virtual machines will not be directly visible on the external network, nor will virtual machines be able to talk to each other if you start up more than one concurrently.
QEMU's user-mode networking can offer more capabilities such as built-in TFTP or SMB servers, redirecting host ports to the guest (for example to allow SSH connections to the guest), or attaching guests to VLANs so that they can talk to each other. See the QEMU documentation on the -net user flag for more details.
However, user-mode networking has limitations in both utility and performance. More advanced network configurations require the use of tap devices or other methods.
Tap networking with QEMU
Tap devices are a Linux kernel feature allowing you to create virtual network interfaces that appear as real network interfaces. Packets sent to a tap interface are delivered to a userspace program, such as QEMU, that has bound itself to the interface.
QEMU can use tap networking for a virtual machine so that packets sent to the tap interface will be sent to the virtual machine and appear as coming from a network interface (usually an Ethernet interface) in the virtual machine. Conversely, everything that the virtual machine sends through its network interface will appear on the tap interface.
Tap devices are supported by the Linux bridge drivers, so it is possible to bridge tap devices with each other and possibly with other host interfaces such as eth0. This is desirable if you want your virtual machines to be able to talk to each other, or if you want other machines on your LAN to be able to talk to the virtual machines.
As indicated in the user-mode networking section, tap devices offer higher networking performance than user mode. If the operating system in the virtual machine supports the virtio network driver, networking performance will be increased considerably as well. Supposing the use of the tap0 device, that the virtio driver is used on the guest, and that no scripts are used to help start/stop networking, the qemu command would include:
-net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no
But if already using a tap device with the virtio networking driver, you can even boost the networking performance by enabling vhost, for example:
-net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no,vhost=on
For details, see http://www.linux-kvm.com/content/how-maximize-virtio-net-performance-vhost-net
Host-only networking
If the bridge is given an IP address and traffic destined for it is allowed, but no real interface (e.g. eth0) is connected to the bridge, then the virtual machines will be able to talk to each other and to the host system. However, if you have not set up IP masquerading on the physical host, they will not be able to talk to anything on the external network. This configuration is called host-only networking by other virtualization software such as VirtualBox.
- If you want to set up IP masquerading, e.g. NAT for the virtual machines, see Internet sharing#Enable NAT.
- You may want to have a DHCP server running on the bridge interface to service the virtual network. For example, to use the 172.20.0.1/16 subnet with dnsmasq as the DHCP server:
# ip addr add 172.20.0.1/16 dev br0
# ip link set br0 up
# dnsmasq --interface=br0 --bind-interfaces --dhcp-range=172.20.0.2,172.20.255.254
Internal networking
If you do not give the bridge an IP address and add an iptables rule in the INPUT chain to drop all traffic to the bridge, then the virtual machines will be able to talk to each other, but not to the physical host or to the outside network. This configuration is called internal networking by other virtualization software such as VirtualBox. You will need to either assign static IP addresses to the virtual machines or run a DHCP server on one of them.
By default, iptables would drop packets in the bridge network. You may need to use an iptables rule like the following to allow packets in a bridged network:
# iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
Bridged networking using qemu-bridge-helper
This method does not require a start-up script and readily accommodates multiple taps and multiple bridges. It uses /usr/lib/qemu/qemu-bridge-helper, which allows the creation of tap devices on an existing bridge.
First, create a configuration file containing the names of all bridges to be used by QEMU:
/etc/qemu/bridge.conf
allow bridge0
allow bridge1
...
Now start the VM:
$ qemu-system-i386 -net nic -net bridge,br=bridge0 [...]
With multiple tap devices, the most basic usage requires specifying the VLAN for all additional NICs:
$ qemu-system-i386 -net nic -net bridge,br=bridge0 -net nic,vlan=1 -net bridge,vlan=1,br=bridge1 [...]
Creating a bridge manually
The following describes how to bridge a virtual machine to a host interface such as eth0, which is probably the most common configuration. This configuration makes it appear that the virtual machine is located directly on the external network, on the same Ethernet segment as the physical host machine.
We will replace the normal Ethernet adapter with a bridge adapter and bind the normal Ethernet adapter to it.
- Install bridge-utils, which provides brctl to manipulate bridges.
- Enable IPv4 forwarding:
# sysctl net.ipv4.ip_forward=1
To make the change permanent, change net.ipv4.ip_forward = 0 to net.ipv4.ip_forward = 1 in /etc/sysctl.d/99-sysctl.conf.
- Load the tun module and configure it to be loaded on boot. See Kernel modules for details.
- Now create the bridge. See Bridge with netctl for details. Remember to name your bridge br0, or change the scripts below to your bridge's name.
- Create the script that QEMU uses to bring up the tap adapter, with root:kvm 750 permissions:
#!/bin/sh
echo "Executing /etc/qemu-ifup"
echo "Bringing up $1 for bridged mode..."
sudo /usr/bin/ip link set $1 up promisc on
echo "Adding $1 to br0..."
sudo /usr/bin/brctl addif br0 $1
sleep 2
- Create the script that QEMU uses in /etc/qemu-ifdown to bring down the tap adapter, with root:kvm 750 permissions:
/etc/qemu-ifdown
#!/bin/sh
echo "Executing /etc/qemu-ifdown"
sudo /usr/bin/ip link set $1 down
sudo /usr/bin/brctl delif br0 $1
sudo /usr/bin/ip link delete dev $1
- Using visudo, add the following to your sudoers file:
Cmnd_Alias      QEMU=/usr/bin/ip,/usr/bin/modprobe,/usr/bin/brctl
%kvm     ALL=NOPASSWD: QEMU
- You can launch QEMU using the following run-qemu script:
run-qemu
#!/bin/bash
USERID=$(whoami)

# Get name of newly created TAP device; see https://bbs.archlinux.org/viewtopic.php?pid=1285079#p1285079
precreationg=$(/usr/bin/ip tuntap list | /usr/bin/cut -d: -f1 | /usr/bin/sort)
sudo /usr/bin/ip tuntap add user $USERID mode tap
postcreation=$(/usr/bin/ip tuntap list | /usr/bin/cut -d: -f1 | /usr/bin/sort)
IFACE=$(comm -13 <(echo "$precreationg") <(echo "$postcreation"))

# This line creates a random MAC address. The downside is the DHCP server will assign a different IP address each time
printf -v macaddr "52:54:%02x:%02x:%02x:%02x" $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff ))
# Instead, uncomment and edit this line to set a static MAC address. The benefit is that the DHCP server will assign the same IP address.
# macaddr='52:54:be:36:42:a9'

qemu-system-i386 -net nic,macaddr=$macaddr -net tap,ifname="$IFACE" $*

sudo ip link set dev $IFACE down &> /dev/null
sudo ip tuntap del $IFACE mode tap &> /dev/null
Then, to launch a VM, do something like this:
$ run-qemu -hda myvm.img -m 512 -vga std
- For performance and security reasons, it is recommended to disable the firewall on the bridge [2]:
/etc/sysctl.d/10-disable-firewall-on-bridge.conf
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
Run sysctl -p /etc/sysctl.d/10-disable-firewall-on-bridge.conf to apply the changes immediately.
See the libvirt wiki and Fedora bug 512206. If you get errors from sysctl during boot about non-existing files, make the bridge module load at boot. See Kernel modules#Automatic module handling.
Alternatively, you can configure iptables to allow all traffic to be forwarded across the bridge by adding a rule like this:
-I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
Network sharing between physical device and a tap device through iptables
Bridged networking works fine between a wired interface (e.g. eth0) and is easy to set up. However, if the host gets connected to the network through a wireless device, then bridging is not possible.
See Network bridge#Wireless interface on a bridge.
One way to overcome that is to set up a tap device with a static IP, making Linux automatically handle the routing for it, and then forward traffic between the tap interface and the device connected to the network through iptables rules.
See Internet sharing.
There you can find what is needed to share the network between devices, including tap and tun ones. The following just hints further on some of the host configuration required. As indicated above, the guest needs to be configured with a static IP, using the IP assigned to the tap interface as the gateway. The caveat is that the DNS servers on the guest might need to be manually edited if they change when switching from one host device connected to the network to another.
To allow IP forwarding on every boot, add the following lines to a sysctl configuration file inside /etc/sysctl.d:
net.ipv4.ip_forward = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
The iptables rules can look like:
# Forwarding from/to outside
iptables -A FORWARD -i ${INT} -o ${EXT_0} -j ACCEPT
iptables -A FORWARD -i ${INT} -o ${EXT_1} -j ACCEPT
iptables -A FORWARD -i ${INT} -o ${EXT_2} -j ACCEPT
iptables -A FORWARD -i ${EXT_0} -o ${INT} -j ACCEPT
iptables -A FORWARD -i ${EXT_1} -o ${INT} -j ACCEPT
iptables -A FORWARD -i ${EXT_2} -o ${INT} -j ACCEPT

# NAT/Masquerade (network address translation)
iptables -t nat -A POSTROUTING -o ${EXT_0} -j MASQUERADE
iptables -t nat -A POSTROUTING -o ${EXT_1} -j MASQUERADE
iptables -t nat -A POSTROUTING -o ${EXT_2} -j MASQUERADE
The rules above suppose there are 3 devices connected to the network sharing traffic with one internal device, for example:
INT=tap0
EXT_0=eth0
EXT_1=wlan0
EXT_2=tun0
This shows a forwarding setup that allows sharing wired and wireless connections with the tap device.
The forwarding rules shown are stateless and for pure forwarding. You could think of restricting specific traffic, putting a firewall in place to protect the guest and others. However, those would hurt networking performance, while a simple bridge does not include any of that.
Bonus: whether the connection is wired or wireless, if the host gets connected through a VPN to a remote site with a tun device, supposing tun0 is the tun device opened for that connection and the prior iptables rules are applied, then the remote connection also gets shared with the guest. This avoids the need for the guest to also open a VPN connection. Again, as the guest networking needs to be static, then if connecting the host remotely this way, you most likely will need to edit the DNS servers on the guest.
Networking with VDE2
What is VDE?
VDE stands for Virtual Distributed Ethernet. It started as an enhancement of uml_switch. It is a toolbox to manage virtual networks.
The idea is to create virtual switches, which are basically sockets, and to "plug" both physical and virtual machines into them. The configuration we show here is quite simple; however, VDE is much more powerful than this: it can plug virtual switches together, run them on different hosts and monitor the traffic in the switches. You are invited to read the documentation of the project.
The advantage of this method is you do not have to add sudo privileges to your users. Regular users should not be allowed to run modprobe.
Basics
VDE support can be installed via the vde2 package in the official repositories.
In our config, we use tun/tap to create a virtual interface on the host. Load the tun module (see Kernel modules for details):
# modprobe tun
Now create the virtual switch:
# vde_switch -tap tap0 -daemon -mod 660 -group users
This line creates the switch, creates tap0
, "plugs" it, and allows the users of the group users
to use it.
The interface is plugged in but not configured yet. To configure it, run this command:
# ip addr add 192.168.100.254/24 dev tap0
Now, you just have to run KVM with these -net
options as a normal user:
$ qemu-system-i386 -net nic -net vde -hda [...]
Configure networking for your guest as you would do in a physical network.
Startup scripts
Example of main script starting VDE:
/etc/systemd/scripts/qemu-network-env
#!/bin/sh
# QEMU/VDE network environment preparation script
#
# The IP configuration for the tap device that will be used for
# the virtual machine network:

TAP_DEV=tap0
TAP_IP=192.168.100.254
TAP_MASK=24
TAP_NETWORK=192.168.100.0

# Host interface
NIC=eth0

case "$1" in
  start)
    echo -n "Starting VDE network for QEMU: "

    # If you want the tun kernel module to be loaded by the script, uncomment here
    #modprobe tun 2>/dev/null
    ## Wait for the module to be loaded
    #while ! lsmod | grep -q "^tun"; do echo "Waiting for tun device"; sleep 1; done

    # Start tap switch
    vde_switch -tap "$TAP_DEV" -daemon -mod 660 -group users

    # Bring tap interface up
    ip address add "$TAP_IP"/"$TAP_MASK" dev "$TAP_DEV"
    ip link set "$TAP_DEV" up

    # Start IP forwarding
    echo "1" > /proc/sys/net/ipv4/ip_forward
    iptables -t nat -A POSTROUTING -s "$TAP_NETWORK"/"$TAP_MASK" -o "$NIC" -j MASQUERADE
    ;;
  stop)
    echo -n "Stopping VDE network for QEMU: "

    # Delete the NAT rules
    iptables -t nat -D POSTROUTING -s "$TAP_NETWORK"/"$TAP_MASK" -o "$NIC" -j MASQUERADE

    # Bring tap interface down
    ip link set "$TAP_DEV" down

    # Kill VDE switch
    pgrep -f vde_switch | xargs kill -TERM
    ;;
  restart|reload)
    $0 stop
    sleep 1
    $0 start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart|reload}"
    exit 1
esac
exit 0
Example of systemd service using the above script:
/etc/systemd/system/qemu-network-env.service
[Unit]
Description=Manage VDE Switch

[Service]
Type=oneshot
ExecStart=/etc/systemd/scripts/qemu-network-env start
ExecStop=/etc/systemd/scripts/qemu-network-env stop
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
Change permissions for qemu-network-env
to be executable
# chmod u+x /etc/systemd/scripts/qemu-network-env
You can start qemu-network-env.service
as usual.
Alternative method
If the above method does not work or you do not want to mess with kernel configs, TUN, dnsmasq, and iptables you can do the following for the same result.
# vde_switch -daemon -mod 660 -group users
# slirpvde --dhcp --daemon
Then, to start the VM with a connection to the network of the host:
$ qemu-system-i386 -net nic,macaddr=52:54:00:00:EE:03 -net vde disk_image
VDE2 Bridge
Based on quickhowto: qemu networking using vde, tun/tap, and bridge graphic. Any virtual machine connected to vde is externally exposed. For example, each virtual machine can receive DHCP configuration directly from your ADSL router.
Basics
Remember that you need tun
module and bridge-utils package.
Create the vde2/tap device:
# vde_switch -tap tap0 -daemon -mod 660 -group users
# ip link set tap0 up
Create bridge:
# brctl addbr br0
Add devices:
# brctl addif br0 eth0
# brctl addif br0 tap0
And configure bridge interface:
# dhcpcd br0
Startup scripts
All devices must be set up. And only the bridge needs an IP address. For physical devices on the bridge (e.g. eth0
), this can be done with netctl using a custom Ethernet profile with:
/etc/netctl/ethernet-noip
Description='A more versatile static Ethernet connection'
Interface=eth0
Connection=ethernet
IP=no
The following custom systemd service can be used to create and activate a VDE2 tap interface for use in the users
user group.
/etc/systemd/system/vde2@.service
[Unit]
Description=Network Connectivity for %i
Wants=network.target
Before=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/vde_switch -tap %i -daemon -mod 660 -group users
ExecStart=/usr/bin/ip link set dev %i up
ExecStop=/usr/bin/ip addr flush dev %i
ExecStop=/usr/bin/ip link set dev %i down

[Install]
WantedBy=multi-user.target
And finally, you can create the bridge interface with netctl.
Graphics
QEMU can use the following graphics outputs: std, cirrus, vmware, qxl, xenfb and vnc. With the vnc option, you can run your guest standalone and connect to it via VNC. Other options are using std, vmware or cirrus:
std
With -vga std you can get a resolution of up to 2560 x 1600 pixels. This is the default since QEMU 2.2.
qxl
QXL is a paravirtual graphics driver with 2D support. To use it, pass the -vga qxl
option and install drivers in the guest. You may want to use SPICE for improved graphical performance when using QXL.
On Linux guests, the qxl
and bochs_drm
kernel modules must be loaded in order to gain a decent performance.
SPICE
The SPICE project aims to provide a complete open source solution for remote access to virtual machines in a seamless way.
SPICE can only be used when using QXL as the graphical output.
The following is example of booting with SPICE as the remote desktop protocol:
$ qemu-system-i386 -vga qxl -spice port=5930,disable-ticketing
Connect to the guest by using a SPICE client. At the moment spice-gtk is recommended; however, other clients, including for other platforms, are available:
$ spicy -h 127.0.0.1 -p 5930
Using Unix sockets instead of TCP ports does not involve using network stack on the host system, so it is reportedly better for performance. Example:
$ qemu-system-x86_64 -vga qxl -device virtio-serial-pci -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent -spice unix,addr=/tmp/vm_spice.socket,disable-ticketing,playback-compression=off
$ spicy --uri="spice+unix:///tmp/vm_spice.socket"
For improved support for multiple monitors, clipboard sharing, etc. the following packages should be installed on the guest:
- spice-vdagent: Spice agent xorg client that enables copy and paste between client and X-session and more
- xf86-video-qxl xf86-video-qxl-gitAUR: Xorg X11 qxl video driver
- For other operating systems, see the Guest section on SPICE-Space download page.
vmware
Although it is a bit buggy, this option performs better than std and cirrus. On the guest, install the packages xf86-video-vmware and xf86-input-vmmouse.
virtio
virtio-vga
/ virtio-gpu
is a paravirtual 3D graphics driver based on virgl. Currently a work in progress, supporting only very recent (>= 4.4) Linux guests.
cirrus
The cirrus graphical adapter was the default before 2.2. It should not be used on modern systems.
none
This is like a PC that has no VGA card at all. You would not even be able to access it with the -vnc
option. Also, this is different from the -nographic
option which lets QEMU emulate a VGA card, but disables the SDL display.
vnc
Given that you used the -nographic
option, you can add the -vnc display
option to have QEMU listen on display
and redirect the VGA display to the VNC session. There is an example of this in the #Starting QEMU virtual machines on boot section's example configs.
$ qemu-system-i386 -vga std -nographic -vnc :0
$ gvncviewer :0
When using VNC, you might experience keyboard problems described (in gory details) here. The solution is not to use the -k
option on QEMU, and to use gvncviewer
from gtk-vnc. See also this message posted on libvirt's mailing list.
Installing virtio drivers
QEMU offers guests the ability to use paravirtualized block and network devices using the virtio drivers, which provide better performance and lower overhead.
- A virtio block device requires the option -drive instead of the simple -hd*, plus if=virtio:
$ qemu-system-i386 -boot order=c -drive file=disk_image,if=virtio
-boot order=c is absolutely necessary when you want to boot from it. There is no auto-detection as with -hd*.
- Almost the same goes for the network:
$ qemu-system-i386 -net nic,model=virtio
Preparing an (Arch) Linux guest
To use virtio devices in an Arch Linux guest, the following modules must be loaded in the guest: virtio, virtio_pci, virtio_blk, virtio_net, and virtio_ring. For 32-bit guests, the specific "virtio" module is not required.
If you want to boot from a virtio disk, the initial ramdisk must contain the necessary modules. By default, this is handled by mkinitcpio's autodetect hook. Otherwise, include the necessary modules in the MODULES array in /etc/mkinitcpio.conf and rebuild the ramdisk.
/etc/mkinitcpio.conf
MODULES="virtio virtio_blk virtio_pci virtio_net"
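If you prefer to make the edit scriptable, it can be done with sed. A minimal sketch, operating on a sample file rather than the real /etc/mkinitcpio.conf (file names here are placeholders; edit the real file as root and rebuild the ramdisk afterwards, e.g. with mkinitcpio -p linux):

```shell
# Stand-in for /etc/mkinitcpio.conf (placeholder file, so this sketch is safe to run)
printf 'MODULES=""\n' > mkinitcpio.conf.sample
# Replace the MODULES line with the virtio modules needed to boot from a virtio disk
sed -i 's/^MODULES=.*/MODULES="virtio virtio_blk virtio_pci virtio_net"/' mkinitcpio.conf.sample
grep '^MODULES=' mkinitcpio.conf.sample
```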
Virtio disks are recognized with the prefix v (e.g. vda, vdb, etc.). Therefore, changes must be made in /etc/fstab and /boot/grub/grub.cfg when booting from a virtio disk.
When disks are referenced by UUID in both /etc/fstab and the bootloader, nothing needs to be done. For more information on paravirtualization with KVM, see Boot_from_virtio_block_device.
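If the disks are referenced by device path instead, the rename can be scripted. A minimal sketch against a sample file (entries and paths are placeholders; always work on a backup copy of the real /etc/fstab):

```shell
# Stand-in for /etc/fstab with a single placeholder entry
printf '/dev/sda1 / ext4 rw,relatime 0 1\n' > fstab.sample
# Rewrite /dev/sdX device references to their virtio names /dev/vdX
sed -i 's|/dev/sd|/dev/vd|g' fstab.sample
cat fstab.sample   # prints /dev/vda1 / ext4 rw,relatime 0 1
```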
You can install qemu-guest-agent to implement support for QMP commands, which enhances hypervisor management capabilities. After installing the package, enable and start qemu-ga.service.
Preparing a Windows guest
Block device drivers
New Install of Windows
Windows does not come with the virtio drivers. Therefore, you will need to load them during installation. There are basically two ways to do this: via Floppy Disk or via ISO files. Both images can be downloaded from the Fedora repository.
The floppy disk option is difficult because you will need to press F6 (Shift-F6 on newer Windows) at the very beginning of powering on the virtual machine. This is difficult since you need time to connect your VNC console window. You can attempt to add a delay to the boot sequence. See qemu(1) for more details about applying a delay at boot.
The ISO option to load drivers is the preferred way, but it is available only on Windows Vista and Windows Server 2008 and later. The procedure is to load the image with virtio drivers in an additional cdrom device along with the primary disk device and Windows installer:
$ qemu-system-i386 ... \
 -drive file=/path/to/primary/disk.img,index=0,media=disk,if=virtio \
 -drive file=/path/to/installer.iso,index=2,media=cdrom \
 -drive file=/path/to/virtio.iso,index=3,media=cdrom \
 ...
During the installation, the Windows installer will ask you for your Product key and perform some additional checks. When it gets to the "Where do you want to install Windows?" screen, it will give a warning that no disks are found. Follow the example instructions below (based on Windows Server 2012 R2 with Update).
- Select the option
Load Drivers
. - Uncheck the box for "Hide drivers that aren't compatible with this computer's hardware".
- Click the Browse button and open the CDROM for the virtio iso, usually named "virtio-win-XX".
- Now browse to
E:\viostor\[your-os]\amd64
, select it, and press OK. - Click Next
You should now see your virtio disk(s) listed here, ready to be selected, formatted and installed to.
Change Existing Windows VM to use virtio
Modifying an existing Windows guest for booting from virtio disk is a bit tricky.
You can download the virtio disk driver from the Fedora repository.
Now you need to create a new disk image, which will force Windows to search for the driver. For example:
$ qemu-img create -f qcow2 fake.qcow2 1G
Run the original Windows guest (with the boot disk still in IDE mode) with the fake disk (in virtio mode) and a CD-ROM with the driver.
$ qemu-system-i386 -m 512 -vga std -drive file=windows_disk_image,if=ide -drive file=fake.qcow2,if=virtio -cdrom virtio-win-0.1-81.iso
Windows will detect the fake disk and try to find a driver for it. If it fails, go to the Device Manager, locate the SCSI drive with an exclamation mark icon (should be open), click Update driver and select the virtual CD-ROM. Do not forget to select the checkbox which says to search for directories recursively.
When the installation is successful, you can turn off the virtual machine and launch it again, now with the boot disk attached in virtio mode:
$ qemu-system-i386 -m 512 -vga std -drive file=windows_disk_image,if=virtio
Do not forget the -m parameter, and do not boot with virtio instead of ide for the system drive before the drivers are installed.
Network drivers
Installing virtio network drivers is a bit easier; simply add the -net argument as described above.
$ qemu-system-i386 -m 512 -vga std -drive file=windows_disk_image,if=virtio -net nic,model=virtio -cdrom virtio-win-0.1-74.iso
Windows will detect the network adapter and try to find a driver for it. If it fails, go to the Device Manager, locate the network adapter with an exclamation mark icon (open it by double-clicking), switch to the Driver tab, click Update Driver and select the virtual CD-ROM. Do not forget to select the checkbox which says to search directories recursively.
Preparing a FreeBSD guest
Install the emulators/virtio-kmod port if you are using FreeBSD 8.3 or later (up until 10.0-CURRENT, where the drivers are included in the kernel). After installation, add the following to your /boot/loader.conf file:
virtio_load="YES"
virtio_pci_load="YES"
virtio_blk_load="YES"
if_vtnet_load="YES"
virtio_balloon_load="YES"
Then modify your /etc/fstab
by doing the following:
sed -i .bak "s/ada/vtbd/g" /etc/fstab
And verify that /etc/fstab
is consistent. If anything goes wrong, just boot into a rescue CD and copy /etc/fstab.bak
back to /etc/fstab
.
Tips and tricks
Starting QEMU virtual machines on boot
With libvirt
If a virtual machine is set up with libvirt, it can be configured through the virt-manager GUI to start at host boot by going to the Boot Options for the virtual machine and selecting "Start virtual machine on host boot up".
Custom script
To run QEMU VMs on boot, you can use following systemd unit and config.
/etc/systemd/system/qemu@.service
[Unit]
Description=QEMU virtual machine

[Service]
Environment="type=system-x86_64" "haltcmd=kill -INT $MAINPID"
EnvironmentFile=/etc/conf.d/qemu.d/%i
ExecStart=/usr/bin/env qemu-${type} -name %i -nographic $args
ExecStop=/bin/sh -c ${haltcmd}
TimeoutStopSec=30
KillMode=none

[Install]
WantedBy=multi-user.target
According to the systemd.service(5) and systemd.kill(5) man pages, it is necessary to use the KillMode=none option. Otherwise the main qemu process will be killed immediately after the ExecStop command quits (it simply echoes one string) and your guest system will not be able to shut down correctly.
Then create per-VM configuration files, named /etc/conf.d/qemu.d/vm_name, with the following variables set:
- type: QEMU binary to call. If specified, it will be prepended with /usr/bin/qemu- and that binary will be used to start the VM, i.e. you can boot qemu-system-arm images with type="system-arm".
- args: QEMU command line to start with. It will always be prepended with -name ${vm} -nographic.
- haltcmd: Command to shut down a VM safely. This example uses -monitor telnet:.. and powers off the VM via ACPI by sending system_powerdown to the monitor. You can use SSH or some other way.
Example configs:
/etc/conf.d/qemu.d/one
type="system-x86_64"
args="-enable-kvm -m 512 -hda /dev/mapper/vg0-vm1 -net nic,macaddr=DE:AD:BE:EF:E0:00 \
 -net tap,ifname=tap0 -serial telnet:localhost:7000,server,nowait,nodelay \
 -monitor telnet:localhost:7100,server,nowait,nodelay -vnc :0"
haltcmd="echo 'system_powerdown' | nc localhost 7100"  # or netcat/ncat

# You can use other ways to shut down your VM correctly
#haltcmd="ssh powermanager@vm1 sudo poweroff"
/etc/conf.d/qemu.d/two
args="-enable-kvm -m 512 -hda /srv/kvm/vm2.img -net nic,macaddr=DE:AD:BE:EF:E0:01 \
 -net tap,ifname=tap1 -serial telnet:localhost:7001,server,nowait,nodelay \
 -monitor telnet:localhost:7101,server,nowait,nodelay -vnc :1"
haltcmd="echo 'system_powerdown' | nc localhost 7101"
To set which virtual machines will start on boot-up, enable the qemu@vm_name.service
systemd unit.
Mouse integration
To prevent the mouse from being grabbed when clicking on the guest operating system's window, add the option -usbdevice tablet
. This means QEMU is able to report the mouse position without having to grab the mouse. This also overrides PS/2 mouse emulation when activated. For example:
$ qemu-system-i386 -hda disk_image -m 512 -vga std -usbdevice tablet
If that does not work, try the tip at #Mouse cursor is jittery or erratic.
Pass-through host USB device
To access physical USB device connected to host from VM, you can start QEMU with following option:
$ qemu-system-i386 -usbdevice host:vendor_id:product_id disk_image
You can find vendor_id
and product_id
of your device with the lsusb
command.
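The two IDs can also be extracted with a short sed filter. A sketch using a sample lsusb line (the device shown is hypothetical; pipe real lsusb output through the same filter):

```shell
# Sample line as printed by lsusb (placeholder device)
line='Bus 003 Device 007: ID 0781:5567 SanDisk Corp. Cruzer Blade'
# Pull out the vendor_id:product_id pair following "ID "
ids=$(printf '%s\n' "$line" | sed -n 's/.*ID \([0-9a-f]*:[0-9a-f]*\).*/\1/p')
echo "$ids"   # prints 0781:5567
```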
Enabling KSM
Kernel Samepage Merging (KSM) is a feature of the Linux kernel that allows for an application to register with the kernel to have its pages merged with other processes that also register to have their pages merged. The KSM mechanism allows for guest virtual machines to share pages with each other. In an environment where many of the guest operating systems are similar, this can result in significant memory savings.
To enable KSM, simply run
# echo 1 > /sys/kernel/mm/ksm/run
To make it permanent, you can use systemd's temporary files:
/etc/tmpfiles.d/ksm.conf
w /sys/kernel/mm/ksm/run - - - - 1
If KSM is running, and there are pages to be merged (i.e. at least two similar VMs are running), then /sys/kernel/mm/ksm/pages_shared
should be non-zero. See https://www.kernel.org/doc/Documentation/vm/ksm.txt for more information.
$ grep . /sys/kernel/mm/ksm/*
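The counters can also be read individually in a script. A sketch that falls back to 0 on systems where KSM is unavailable (the /sys files are then absent):

```shell
# Read the KSM counters, defaulting to 0 when the files do not exist
ksm=/sys/kernel/mm/ksm
shared=$(cat "$ksm/pages_shared" 2>/dev/null || echo 0)
sharing=$(cat "$ksm/pages_sharing" 2>/dev/null || echo 0)
echo "pages_shared=$shared pages_sharing=$sharing"
```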
Multi-monitor support
The Linux QXL driver supports four heads (virtual screens) by default. This can be changed via the qxl.heads=N
kernel parameter.
The default VGA memory size for QXL devices is 16M (VRAM size is 64M). This is not sufficient if you would like to enable two 1920x1200 monitors since that requires 2 × 1920 × 4 (color depth) × 1200 = 17.6 MiB VGA memory. This can be changed by replacing -vga qxl
by -vga none -device qxl-vga,vgamem_mb=32
. If you ever increase vgamem_mb beyond 64M, then you also have to increase the vram_size_mb
option.
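The arithmetic above can be checked quickly in the shell. A sketch (integer MiB, so the ~17.6 MiB figure rounds down to 17):

```shell
# VGA memory needed for two 1920x1200 monitors at 4 bytes per pixel
monitors=2 width=1920 height=1200 bpp=4
bytes=$(( monitors * width * height * bpp ))
echo $(( bytes / 1024 / 1024 ))   # prints 17, i.e. just under 18 MiB needed
```

Since 17.6 MiB exceeds the 16M default, vgamem_mb=32 is the next comfortable size.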
Copy and paste
To enable copy and paste between the host and the guest, you need to enable the spice agent communication channel. This requires adding a virtio-serial device to the guest and opening a port for the spice vdagent. It is also required to install the spice vdagent in the guest (spice-vdagent for Arch guests, Windows guest tools for Windows guests). Make sure the agent is running (and, for the future, started automatically).
Start QEMU with the following options:
$ qemu-system-i386 -vga qxl -spice port=5930,disable-ticketing -device virtio-serial-pci -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent
The -device virtio-serial-pci
option adds the virtio-serial device, -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0
opens a port for spice vdagent in that device and -chardev spicevmc,id=spicechannel0,name=vdagent
adds a spicevmc chardev for that port.
It is important that the chardev=
option of the virtserialport
device matches the id=
option given to the chardev
option (spicechannel0
in this example). It is also important that the port name is com.redhat.spice.0
, because that is the namespace where vdagent is looking for in the guest. And finally, specify name=vdagent
so that spice knows what this channel is for.
Windows-specific notes
QEMU can run any version of Windows from Windows 95 through Windows 10.
It is possible to run Windows PE in QEMU.
Fast startup
For Windows 8 (or later) guests it is better to disable "Fast Startup" from the Power Options of the Control Panel, as it causes the guest to hang during every other boot.
Fast Startup may also need to be disabled for changes to the -smp
option to be properly applied.
Remote Desktop Protocol
If you use a MS Windows guest, you might want to use RDP to connect to your guest VM. If you are using a VLAN or are not in the same network as the guest, use:
$ qemu-system-i386 -nographic -net user,hostfwd=tcp::5555-:3389
Then connect with either rdesktop or freerdp to the guest. For example:
$ xfreerdp -g 2048x1152 localhost:5555 -z -x lan
Troubleshooting
Mouse cursor is jittery or erratic
If the cursor jumps around the screen uncontrollably, entering this on the terminal before starting QEMU might help:
$ export SDL_VIDEO_X11_DGAMOUSE=0
If this helps, you can add this to your ~/.bashrc
file.
No visible Cursor
Add -show-cursor
to QEMU's options to see a mouse cursor.
Keyboard seems broken or the arrow keys do not work
Should you find that some of your keys do not work or "press" the wrong key (in particular, the arrow keys), you likely need to specify your keyboard layout as an option. The keyboard layouts can be found in /usr/share/qemu/keymaps
.
$ qemu-system-i386 -k keymap disk_image
Virtual machine runs too slowly
There are a number of techniques that you can use to improve the performance of your virtual machine. For example:
- Use the
-cpu host
option to make QEMU emulate the host's exact CPU. If you do not do this, it may be trying to emulate a more generic CPU. - If the host machine has multiple CPUs, assign the guest more CPUs using the
-smp
option. - Make sure you have assigned the virtual machine enough memory. By default, QEMU only assigns 128 MiB of memory to each virtual machine. Use the
-m
option to assign more memory. For example,-m 1024
runs a virtual machine with 1024 MiB of memory. - Use KVM if possible: add
-machine type=pc,accel=kvm
to the QEMU start command you use. - If supported by drivers in the guest operating system, use virtio for network and/or block devices. For example:
$ qemu-system-i386 -net nic,model=virtio -net tap,ifname=tap0,script=no -drive file=disk_image,media=disk,if=virtio
- Use TAP devices instead of user-mode networking. See #Tap networking with QEMU[broken link: invalid section].
- If the guest OS is doing heavy writing to its disk, you may benefit from certain mount options on the host's file system. For example, you can mount an ext4 file system with the option
barrier=0
. You should read the documentation for any options that you change because sometimes performance-enhancing options for file systems come at the cost of data integrity. - If you have a raw disk image, you may want to disable the cache:
$ qemu-system-i386 -drive file=disk_image,if=virtio,cache=none
- Use the native Linux AIO:
$ qemu-system-i386 -drive file=disk_image,if=virtio,aio=native
- If you are running multiple virtual machines concurrently that all have the same operating system installed, you can save memory by enabling kernel same-page merging:
# echo 1 > /sys/kernel/mm/ksm/run
- In some cases, memory can be reclaimed from running virtual machines by running a memory ballooning driver in the guest operating system and launching QEMU with the
-balloon virtio
option.
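The tips above can be combined into a single invocation. A sketch that only assembles and prints the command line (disk_image is a placeholder; drop any option your guest does not support):

```shell
# Collect the tuning options discussed above into one string
opts="-machine type=pc,accel=kvm -cpu host -smp 2 -m 1024"
opts="$opts -net nic,model=virtio -drive file=disk_image,if=virtio,cache=none,aio=native"
# Show the full command; run it without the echo once disk_image is real
echo qemu-system-x86_64 $opts
```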
See http://www.linux-kvm.org/page/Tuning_KVM for more information.
Guest display stretches on window resize
To restore default window size, press Ctrl+Alt+u
.
ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy
If an error message like this is printed when starting QEMU with -enable-kvm
option:
ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy
failed to initialize KVM: Device or resource busy
that means another hypervisor is currently running. It is not recommended or possible to run several hypervisors in parallel.
libgfapi error message
The error message displayed at startup:
Failed to open module: libgfapi.so.0: cannot open shared object file: No such file or directory
is not a problem, it just means that you are lacking the optional GlusterFS dependency.
Kernel panic on LIVE-environments
If you start a live-environment (or better: booting a system) you may encounter this:
[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown block(0,0)
or some other boot-hindering error (e.g. cannot unpack initramfs, can't start service foo).
Try starting the VM with the -m VALUE switch and an appropriate amount of RAM; if the amount of RAM is too low, you will probably encounter issues similar to the ones above, with or without the memory switch.
See also
- Official QEMU website
- Official KVM website
- QEMU Emulator User Documentation
- QEMU Wikibook
- Hardware virtualization with QEMU by AlienBOB (last updated in 2008)
- Building a Virtual Army by Falconindy
- Latest docs
- QEMU on Windows
- Wikipedia
- QEMU - Debian Wiki
- QEMU Networking on gnome.org
- Networking QEMU Virtual BSD Systems
- QEMU on gnu.org
- QEMU on FreeBSD as host