Guides

Non-root GPU passthrough setup


If you want to use Linux as your main operating system and don’t want to make compromises like a dual-boot solution with Windows, there is an alternative called GPU passthrough. You basically pass your GPU through to a virtual machine, such that your guest can fully utilize it. The performance overhead of doing this is negligible (in my tests the 3DMark benchmark results were almost equal).

There are already a lot of guides and tutorials about how to make a GPU passthrough setup with QEMU. Among the best resources I could find are:

Even after considering all of those and putting a lot of effort into my own setup, I had to deal with many annoying and very time-consuming issues.
One of those issues is that most of the available guides just start QEMU with root privileges. This means a QEMU breakout directly leads to full control over your host system.

This article is meant as a frequently updated guide (last update: 09.07.2016) and (hopefully) complete walkthrough for everyone who is considering a non-root GPU passthrough setup with QEMU.
It was successfully tested with an up-to-date Archlinux and Xubuntu 15.10 (Wily), but should be compatible with any other distribution with a reasonably new kernel (preferably versions >= 3.9).

GPU passthrough with QEMU

Under normal circumstances there should be no risks in doing this; however, I don’t take any responsibility whatsoever in case you still manage to somehow brick your system by following this guide.

Hardware requirements

This setup has the following requirements on your hardware:

  • IOMMU compatible hardware (referred to as VT-d for Intel and AMD-Vi for AMD):
    • CPU
    • Mainboard and BIOS (reading the mainboard’s online manual can help).
  • Two GPUs: one will be used for the host system and the other for the guest. An integrated GPU (iGPU) will do for the host.
  • One monitor with two different inputs like DP and HDMI. Two inputs are necessary, because you need to switch between your host and guest video output. You can also use a KVM switch or utilize a second monitor (the latter is recommended).

Basic preparations

For this guide we will use QEMU (which makes use of KVM). One graphics card will be exclusively allocated to the guest system while the other takes care of running our host system. To achieve this we can make use of the Linux VFIO driver (available in kernel versions >= 3.9).

Make sure your system is up to date:
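    # Archlinux
    sudo pacman -Syu
    # Debian/Ubuntu based systems
    sudo apt-get update && sudo apt-get dist-upgrade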

Enable IOMMU support in your bootloader: ensure the default GRUB config contains “intel_iommu=on” if you have an Intel CPU and “amd_iommu=on” for an AMD CPU:
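    # /etc/default/grub (Intel example; append to the options already present, use amd_iommu=on on AMD systems)
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"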

then do
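    # Debian/Ubuntu
    sudo update-grub
    # Archlinux
    sudo grub-mkconfig -o /boot/grub/grub.cfg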

and ensure /boot/grub/grub.cfg is properly updated.

Dedicated GPU isolation

Dedicated GPU isolation can be done by using pci-stub with VFIO (available since 3.9+) or directly using vfio-pci (available since 4.1+). The latter is recommended since you don’t need to rebind the GPU with VFIO after each boot. Ensure it is installed by running:
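    # loads the module; this prints nothing when vfio-pci is available
    sudo modprobe vfio-pci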

No output means everything is ok. If there is an error you need to install pci-stub (cf. the related sections in Puget Systems’ guide).

At this point there are at least two possibilities to load the correct modules:

  • Archlinux:
    You need to modify “/etc/mkinitcpio.conf” and update the initial ramdisk environment. Please refer to the VFIO-PCI section here: https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF#vfio-pci
  • Debian based systems:
    You can load all necessary passthrough modules like VFIO by adding them to the modules file, as sketched right after this list:

    • Attention:
      if you don’t have vfio-pci support don’t forget to add pci_stub here, too!
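For example, the Debian/Ubuntu “/etc/modules” file could contain entries like these (module names as shipped with kernels of that era):

    # /etc/modules
    vfio
    vfio_iommu_type1
    vfio_pci
    vfio_virqfd
    # pci_stub   (only if your kernel lacks vfio-pci support)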

Assuming vfio-pci is available you can then get the ID of your dedicated GPU and also its audio device (used for DP/HDMI):
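For example, assuming an Nvidia card (the vendor:device IDs appear in square brackets at the end of each line):

    lspci -nn | grep -i nvidia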

In this case the important ids are: 10de:17c8 and 10de:0fb0. Further, we can then add those ids to the VFIO modprobe config so that the VFIO-PCI drivers will be bound to those devices on boot:
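For example (the file name vfio.conf is just a common choice):

    # /etc/modprobe.d/vfio.conf
    options vfio-pci ids=10de:17c8,10de:0fb0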

  • Attention:
    in case you have two GPUs with the same IDs this won’t work. “Another issue that users encounter when sequestering devices is what to do when there are multiple devices with the same vendor:device ID and some are intended to be used for the host.” – please consider reading and following this specific part here: http://vfio.blogspot.de/2015/05/vfio-gpu-how-to-series-part-3-host.html

Finally, for Debian run:
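    sudo update-initramfs -u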

and for Archlinux with a normal linux kernel run:
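    sudo mkinitcpio -p linux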

in order for the changes to be applied on boot. At this point you should restart your system and verify with “lspci -nnk” if the entry “Kernel driver in use: vfio-pci” is available for the dedicated GPU.

In case you have made it this far most things should be ready for the next step: setting up QEMU and telling it to forward our dedicated GPU.

QEMU setup

Install QEMU (assuming you have an x86-64 system):
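    # Archlinux
    sudo pacman -S qemu
    # Debian/Ubuntu
    sudo apt-get install qemu-system-x86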

First we will need a QEMU image for our VM (unless you want to forward a SATA controller or use another storage device, which I won’t go into detail on at this point). Many guides use the virtio controller and virtio drivers. Although they improve performance, my advice is to avoid them for now since they unnecessarily complicate things. You can create a disk with 60 GB of space in the qcow2 format (which only allocates space as it is actually used) with:
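For example (the image name qemu_vm.qcow2 is reused later in this guide):

    qemu-img create -f qcow2 qemu_vm.qcow2 60G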

Alternatively, you can use a raw image here; however, doing so won’t improve performance much and will eat up more space than necessary (I have to verify this claim first though). The next thing is taking care of the boot process. To be able to boot disks with GPT partitions we want UEFI firmware instead of the legacy BIOS that QEMU provides by default. UEFI support is provided by using OVMF, which can be downloaded pre-compiled from Gerd Hoffmann’s site https://www.kraxel.org/repos/jenkins/edk2/. Install it using:

If you use a Debian based distribution like Ubuntu you can use the following commands to extract it:
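A sketch (the exact RPM file name depends on the build you downloaded from the site above):

    rpm2cpio edk2.git-ovmf-x64-*.noarch.rpm | cpio -idmv
    sudo cp -R ./usr/share/edk2.git /usr/share/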

We assume that /usr/share/edk2.git/ovmf-x64/ is available at this point. Then copy the OVMF settings file to the current directory:
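For example (the exact file name may differ slightly between builds):

    cp /usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd .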

A basic QEMU start script can look like this:
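A reconstructed sketch (memory size, SMP topology, PCI addresses and the disk image and ISO file names are placeholders to adjust):

    #!/bin/bash
    OPTS=""
    OPTS="$OPTS -enable-kvm"
    OPTS="$OPTS -m 8G"
    OPTS="$OPTS -cpu host,kvm=off"
    OPTS="$OPTS -smp cores=4,sockets=1,threads=1"
    # UEFI firmware (code) and its writable variable store (copied next to the script)
    OPTS="$OPTS -drive if=pflash,format=raw,readonly=on,file=/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd"
    OPTS="$OPTS -drive if=pflash,format=raw,file=$(pwd)/OVMF_VARS-pure-efi.fd"
    # the dedicated GPU and its HDMI/DP audio function
    OPTS="$OPTS -device vfio-pci,host=02:00.0,multifunction=on"
    OPTS="$OPTS -device vfio-pci,host=02:00.1"
    # emulated VGA window, handy for the installation phase
    OPTS="$OPTS -vga qxl"
    OPTS="$OPTS -hda $(pwd)/qemu_vm.qcow2"
    OPTS="$OPTS -cdrom $(pwd)/windows10.iso"
    qemu-system-x86_64 $OPTS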

  • Adjust the number of virtual cores, physical cores and threads with the “-smp” option for your CPU accordingly.
  • We use “kvm=off” to hide the hypervisor from Nvidia cards so that they believe they are not running in a VM (otherwise the driver might refuse to work with error code 43).
  • In the above configuration my dedicated GPU’s PCI addresses were 02:00.0 and 02:00.1; you should look up yours by executing “lspci” and adjust them accordingly.
  • Using “-vga qxl” makes QEMU use the QXL paravirtual graphics card as the main emulated VGA device. This gives us a window where we can directly see what’s going on in the VM. Many guides recommend directly making use of the passthrough and switching to the dedicated GPU’s output. However, I had no signal, i.e. my screen just stayed black. This method makes it way easier to set up the OS first, which will then in turn use the correct drivers. Further, we can avoid passing through any USB devices yet, since they would lead to deadlocks during a failure within the VM.
  • Instead of an iso image you can also directly pass any device (e.g. passing a USB stick with -cdrom /dev/sdX where sdX is the correct device name works just fine).
  • Hint:
    Already at this point you can verify if the passthrough is working by starting QEMU and writing “info pci” into the QEMU console:

Here you can see that my device with the id 10de:17c8 was correctly mapped into the vm.

If you have fully installed your desired OS and booted into it you should make sure that the passed-through GPU is correctly identified (e.g. in the device manager of Windows or by using lspci in Linux). In my case I had to download and install the newest Nvidia GPU drivers in order for the GPU to be correctly identified in Windows 10. Once this step was done I was able to switch to the other monitor port and was finally able to see the guest’s dedicated GPU output :).

  • Hint:
    If you want to manage the VM with your dedicated GPU’s video output you can just click into the QXL window and then switch to your other monitor input.
  • Attention:
    A Linux guest can refuse to show output on the dedicated GPU while QXL is running (you can verify this by using “xrandr”). In order to fix this you can deactivate the QXL device so that only the dedicated GPU is known to the guest, e.g. by replacing “-vga qxl” with “-vga none” in the start script.

    However, doing so will require passing through a mouse/keyboard to control the vm.

    • Update:
      By appending “-device qxl” you can disable the QXL video output while still being able to control the VM exactly the same way as in the given hint above (thx to Reddit user /u/SxxxX for pointing this out).

The next step is to forward the keyboard and mouse for better responsiveness (QXL is too slow). Here you can either pass through a complete USB controller or rather pass through a specific USB device. Since I was unable to pass through a USB 3.0 controller I had to pass through specific devices (if you want to try to pass through a complete USB controller you can refer to Step 7 in Puget system’s guide).

To forward the usb devices just get their ids:
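    # lists all USB devices with their vendor:product IDs
    lsusb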

Once you have all ids you can forward those devices in our QEMU startup script with:
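For example (the vendor:product ID below is only a placeholder):

    OPTS="$OPTS -usb"
    OPTS="$OPTS -usbdevice host:046d:c52b"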

Sound

We will now take care of getting working sound. To find out more about QEMU’s supported sound devices consider:

Adjust our startup script to use the Intel HD Audio (hda) device:
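    OPTS="$OPTS -soundhw hda"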

We further need to specify an audio driver we would like to use. Let’s give ALSA a try:
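This is done via an environment variable in the startup script, for example:

    export QEMU_AUDIO_DRV=alsa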

Alternatively you can also use Pulseaudio (it doesn’t have the best reputation though and I don’t recommend using it unless really necessary…):
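    export QEMU_AUDIO_DRV=pa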

To get more information about the above used sound parameters and some further sound driver information consider:

Finally, we need to rewrite the QEMU startup script to consider the sound environment settings:
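A sketch of what the top of the script could look like (the buffer and period sizes are example values only):

    export QEMU_AUDIO_DRV=alsa
    export QEMU_ALSA_DAC_BUFFER_SIZE=512
    export QEMU_ALSA_DAC_PERIOD_SIZE=170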

If you have crackles in your sound using ALSA try adjusting the buffer and period sizes.

  • Question:
    Do you have any advice about how to improve the input (microphone) quality at this point?

Whereas the above configuration worked instantly on Xubuntu, using Archlinux led to horrible sound, seemingly because of permission issues while setting the ALSA buffer and period sizes.
To solve this you can alternatively use the ac97 device and leave out any ALSA parameters in the environment variables, for example:
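    export QEMU_AUDIO_DRV=alsa
    OPTS="$OPTS -soundhw ac97"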

  • Attention:
    Windows won’t detect your sound device correctly. You have to use exactly the right drivers which are Intel drivers in this case (there are many drivers and you can waste a lot of time by choosing wrong ones here). You can find the correct AC 97 drivers for QEMU here: Intel 82801AA AC97 Audio (don’t download the first driver since this is adware. Use the second link with the zip file instead).

Network

The network seemed to work directly out of the box i.e. I could utilize about 5-6 MB/s with my 50 Mbit/s internet connection so I didn’t investigate further here. However, there are some things you can do to improve the performance if necessary (c.f. Performance Tweaks Section).

Synergy

Synergy allows you to share the mouse and keyboard across multiple operating systems which is very helpful when we pass through USB devices. I recommend using Synergy as a server within the guest system while running it as a client on the host system.

  • Attention:
    Don’t use Synergy versions <1.7 (they have a bug that doesn’t allow you to press multiple keys at once. This is very annoying when typing capitalized letters…). You should download a version >= 1.7 (for your host and guest) at http://www.synergy-project.org/nightly.

Make the guest’s Synergy server accessible by forwarding its default port (24800):
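With the default user-mode networking this can be done with a host forward rule, for example:

    OPTS="$OPTS -net nic"
    OPTS="$OPTS -net user,hostfwd=tcp:127.0.0.1:24800-:24800"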

  • Hint:
    In my case I had to add “setxkbmap de nodeadkeys” to fix an issue with the German keyboard layout and Synergy.

Now you can configure the Synergy server and client by setting a proper client name. Finally, you should be able to connect to localhost on the client side while Synergy is running on both systems.

  • Hint:
    During gaming you want to lock the mouse to one specific monitor. You can do this by pressing the Scroll-Lock button. If you don’t have a Scroll-Lock button on your keyboard you can specify another button in the Synergy server settings. In my case I have bound a mouse button for this.

If you are using Synergy as a server on your host and as a client in your guest (e.g. when you don’t want to use USB passthrough) and experience weird mouse movements ingame try setting “Send relative mouse moves” in the Synergy server.

Congratulations if you have made it this far!

Your basic GPU passthrough setup should work by now. We can now take care of removing the requirements of QEMU to be started as the root user.

Permissions for non-root GPU passthrough

At this point you should have a setup that is working as root. The next steps in this chapter will help to improve the overall QEMU security. Please keep in mind that using a passthrough setup even as non-root is risky since the GPU can have a lot of control over the system. Further, we are granting the VM access to other devices like USB devices which doesn’t really improve the situation. Thus, please don’t expect this chapter to be a guarantee for a really bulletproof setup. All we will do here is to decrease the attack surface as much as possible.

Basic permissions

We’ll create the user “qemu_vga” with a disabled password (such that you have to use “su”/”sudo” as an admin) for the purpose of running QEMU:
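    # useradd leaves the new account's password disabled/locked by default
    sudo useradd -m qemu_vga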

We can now add the qemu_vga user to the audio and kvm groups:
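    sudo usermod -aG audio,kvm qemu_vga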

Let’s allow our current user (in my case evonide) sudo access to the qemu_vga user (this way we don’t need to type any passwords to start our vm later):
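For example, via “sudo visudo” (replace evonide with your own user name):

    evonide ALL=(qemu_vga) NOPASSWD: ALL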

Ensure that our user has write access to our vm image and the OVMF variables file:
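For example (group permissions or ACLs would work just as well):

    sudo chown qemu_vga $(pwd)/qemu_vm.qcow2 $(pwd)/OVMF_VARS-pure-efi.fd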

You have at least two options regarding the QEMU startup script:

  1. Make sure the QEMU startup script is non writable by the qemu_vga user.
  2. Never execute the QEMU startup script with your main user, but rather only execute it with “sudo -Hu qemu_vga /path/to/the/qemu_script.sh”.

If you neglect those points, a possible breakout would allow the qemu_vga user to write arbitrary commands into the startup script, which would then be executed by your main user the next time you start the VM. For this guide I assume you use option 1; however, using option 2 would be okay, too. Just don’t forget to remove the redundant “sudo” command in the QEMU script if you decide to go with it.

Next, if you want to use QXL (recommended for testing) you need to give our qemu_vga user access to the XServer by modifying our QEMU script:
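For example by running xhost before switching to the qemu_vga user:

    xhost +si:localuser:qemu_vga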

  • Attention:
    Granting XServer access to the user qemu_vga is a bad idea from many points of view regarding security. You should only do this while testing and using QXL. You can avoid any windows by using “-nographic” together with the “-vga none” option (cf. Miscellaneous Section).

Device permissions

To avoid permission errors when QEMU opens the VFIO and USB device nodes, we need to set proper permissions by granting all users in the kvm group access. We will add a VFIO rule and our USB device ids (cf. USB passthrough in the previous chapter) to the Linux udev rules like this:
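For example (the rule file name is arbitrary and the USB IDs are placeholders):

    # /etc/udev/rules.d/10-qemu-hw-users.rules
    SUBSYSTEM=="vfio", OWNER="root", GROUP="kvm", MODE="0660"
    SUBSYSTEM=="usb", ATTRS{idVendor}=="046d", ATTRS{idProduct}=="c52b", GROUP="kvm", MODE="0660"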

You can then reload all udev rules with:
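    sudo udevadm control --reload-rules
    sudo udevadm trigger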

If this doesn’t work you might have to restart your system at this point. Please verify if the rights are correctly set with:
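For example:

    # VFIO device nodes
    ls -l /dev/vfio/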

and
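    # USB device nodes (bus/device numbers correspond to the lsusb output)
    ls -l /dev/bus/usb/*/*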

In both listings your devices should now show up with the kvm group.

Further, we want QEMU to start as our new user, so we modify the startup script once again:
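For example, the last line of the script could become:

    sudo -Hu qemu_vga qemu-system-x86_64 $OPTS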

If you prefer setting a password for qemu_vga and using “su” instead, you can go with:
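    su - qemu_vga -c "qemu-system-x86_64 $OPTS"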

  • Attention:
    At this point it is important to notice the “-” after su. This takes care of creating a new environment for the user. Avoiding this can lead to several XServer permission errors.

Memory permissions

If you started QEMU at this point you would get an error complaining that QEMU is not allowed to allocate or lock enough memory for the guest.

Our qemu_vga user must be able to lock enough memory for the VM. To fix this we need to modify the limits file:
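For example, in “/etc/security/limits.conf” (20 GiB expressed in KiB):

    qemu_vga hard memlock 20971520
    qemu_vga soft memlock 20971520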

Here I have allowed the user qemu_vga to lock at most 20 GiB. Since we will switch to the user with “sudo” or “su” we need to ensure those limits are applied there, too:
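For example by appending the pam_limits module to “/etc/pam.d/sudo” and “/etc/pam.d/su”:

    session    required   pam_limits.so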

Verify the locked memory limit with:
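    # prints the max locked memory limit (in KiB) as seen by the qemu_vga user
    sudo -Hu qemu_vga bash -c 'ulimit -l'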

Sound permissions

The next important step is to get the sound running. One option is to use the ac97 device with ALSA and no extra parameters (as described in the first chapter). However, I wanted to be able to further customize ALSA with parameters to improve the sound quality. This was definitely the most difficult part of the setup. Unfortunately, the following steps only worked for Xubuntu. When starting our setup at this point with the ALSA driver you might get the following error:

After a lot of research and even after reading the QEMU ALSA source code I was unable to find the reason behind it. Alternatively trying Pulseaudio (as described in the previous chapter) led to the following error:

This problem can be solved by allowing anonymous authentication and sharing Pulseaudio access via a socket file like /tmp/pulse. Update the existing entry in the following way:
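For example, in “/etc/pulse/default.pa” (extending the already present module-native-protocol-unix line):

    load-module module-native-protocol-unix auth-anonymous=1 socket=/tmp/pulse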

Further, we need to tell the clients to connect to this socket:
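For example, in “/etc/pulse/client.conf”:

    default-server = unix:/tmp/pulse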

  • Attention:
    You might need to add “export PULSE_SERVER=unix:/tmp/pulse” to the QEMU startup script in addition (thanks to Sarnex).

Don’t forget to kill and restart the Pulseaudio daemon:
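    pulseaudio -k
    pulseaudio -D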

  • Attention:
    I later noticed that after rebooting my system there was a bug where the Pulseaudio daemon wouldn’t startup. Adding “pulseaudio -D” to the Application Autostart in Xubuntu solved this problem though.

By doing so I got Pulseaudio running. What absolutely stunned me was the fact that solving the Pulseaudio issue also solved the ALSA issue described before. I know that most Ubuntu versions (including Xubuntu) automatically come with Pulseaudio, too. However, the connection between ALSA, Pulseaudio and this error was very unclear to me.

While the only working solution on Archlinux was using ALSA with ac97, all sound device and driver combinations worked successfully under Xubuntu.

If you have reached this point you should have a basic GPU passthrough setup working with our qemu_vga user.

Performance tweaks

Hyper-V enlightenments

You can activate several paravirtualization features within Windows by setting some CPU parameters referred to as Hyper-V enlightenments.

Consider: Cole Robinson’s blog – Enabling Hyper-V enlightenments with KVM

You can adjust your CPU settings like this:
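For example (a sketch; hv_vendor_id needs QEMU >= 2.5, see below):

    OPTS="$OPTS -cpu host,kvm=off,hv_time,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_vendor_id=Nvidia43FIX"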

However, setting hv parameters will lead to an NVIDIA error 43 with newer driver versions! Hence, we can use hv_vendor_id=Nvidia43FIX as a workaround. The latter only works with a newer QEMU version like 2.5.0 though.

“If you have an Intel CPU with APIC virtualization (starting with Ivy Bridge-E and Haswell), you should avoid using hv_vapic. It has more overhead than APICv, causing about 10-12% more VM exits.” – according to Reddit user /u/glowtape.

Since the version in my repository didn’t support this feature yet I compiled QEMU 2.5.0 on my own. You can find some very rudimentary advice on this in the Miscellaneous Section below.

Unfortunately, I don’t have any information on the real performance gain here. If you have any please feel free to contribute them here.

Disk tweaks

You can improve the disk IO performance by making use of the virtio drivers. You can either install them already during Windows installation by providing an ISO with the drivers as a secondary CDROM drive or by installing them afterwards.

You can download the latest drivers on the Fedora wiki – Windows Virtio Drivers site.

The QEMU options for specifying a virtio drive (for the qcow2 file $(pwd)/qemu_vm.qcow2) would look like this:
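A sketch of these options (they replace the earlier “-hda” line and need a reasonably new QEMU):

    OPTS="$OPTS -object iothread,id=iothread0"
    OPTS="$OPTS -device virtio-scsi-pci,id=scsi0,iothread=iothread0"
    OPTS="$OPTS -drive if=none,id=drive0,format=qcow2,file=$(pwd)/qemu_vm.qcow2"
    OPTS="$OPTS -device scsi-hd,drive=drive0,bus=scsi0.0"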

This creates an iothread to move the IO away from the main QEMU thread, then specifies a disk drive without an interface and ties it to a virtio SCSI controller. The relevant virtio driver to install in Windows is the SCSI Passthrough one.

  • Many thanks to Reddit user /u/glowtape for pointing this out and providing the basic parameters!

In my tests the speed performance jumped from roughly 300 MB/s reading speed to about 600-700 MB/s with a qcow2 image file.

Memory tweaks

Preallocation

Similar to VMware or other virtualization software you can preallocate enough memory for the vm:
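For example with the “-mem-prealloc” switch (on some QEMU versions this has to be combined with “-mem-path”, see the hugepages section below):

    OPTS="$OPTS -mem-prealloc"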

Hugepages

You can make use of hugepages to further improve ram performance. Please consider:

Invoke them in the QEMU script like this:
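A sketch (reserving 4096 hugepages of 2 MiB each, i.e. 8 GiB, and pointing QEMU at the hugetlbfs mount, which is usually available at /dev/hugepages on systemd systems):

    sudo sysctl vm.nr_hugepages=4096
    OPTS="$OPTS -mem-path /dev/hugepages"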

I haven’t tested this yet. Nevertheless, this should be one of the tweaks with the biggest performance gain.

Other
  • You should disable paging in the guest since the host system already takes care of that.

V-CPU pinning

You can pin the VM to specific physical CPU cores to further improve performance. I am starting my VM like this:
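For example, pinning the whole QEMU process with taskset (the core list is just an example):

    taskset -c 0-3 qemu-system-x86_64 $OPTS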

Consider: Linux KVM – Running your VM on specific CPUs for more info (thanks to the Archlinux forum user nbhs for his guide at this point)!

QEMU q35 machine architecture

By default QEMU uses pc-i440fx-2.1. Nevertheless, I often came across guides using the QEMU q35 machine architecture. It can be invoked like this:
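    OPTS="$OPTS -machine type=q35,accel=kvm"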

I don’t use this architecture and I didn’t detect any real performance gains by using it. Please convince me of the opposite if you have made different experiences.

Network

To improve the network performance you can make use of virtio-net. In particular, you have to specify something like this:
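For example (a sketch):

    OPTS="$OPTS -netdev tap,id=net0,ifname=tap0,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown"
    OPTS="$OPTS -device virtio-net-pci,netdev=net0"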

Where qemu-ifup and qemu-ifdown are helper scripts. To make use of this you further need a bridge called br0 on your interface with internet access.

  • Big thanks to Reddit user /u/glowtape for providing the scripts and information necessary for this section!

More

  • Please let me know if you know of any further tricks to improve the machine performance here.

Miscellaneous

Rebinding

Apparently, it is also possible to assign the GPU to your host again once it has been used in the VM. I haven’t tested this yet but according to Reddit user /u/glowtape you can just bind and unbind the VFIO drivers. This has the drawback that you have to stop XServer for rebinding.

Please refer to his startup script to see how it can be done (links to the qemu-ifup and qemu-ifdown scripts can be found in the Network section above).

OVMF

You can adjust the standard resolution during boot (e.g. when using QXL) in the OVMF settings (press the Del key during boot). Go to “Device Manager” – “OVMF Platform Configuration” – “Change Preferred” and commit the changes. This is helpful when using QXL for tasks like OS management or OS installation.

QEMU settings

In case you don’t want the qemu_vga user to start a QEMU window or just don’t want to grant it any XServer rights you can use:
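    # no emulated graphics device and no QEMU window
    OPTS="$OPTS -nographic"
    OPTS="$OPTS -vga none"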

Monitor switching

I don’t use it since I have to press a button on the monitor to switch the input. However, there are enough guides describing how you can utilize “xrandr” if your monitor supports automatic signal switching.

  • Request:
    Please feel free to contribute if you have a working solution.

Debugging

You can add the following option to see some OVMF debug messages:
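For example:

    OPTS="$OPTS -debugcon file:ovmf_debug.log -global isa-debugcon.iobase=0x402"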

Using libvirt and virt-manager

If you want a graphical interface for managing the vm you can try some of the following steps.

  • Attention:
    This is an unfinished section. I had too many problems doing this.

Consider the following resources:

First install virt-manager (this automatically will install dependencies like libvirt):
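    # Debian/Ubuntu
    sudo apt-get install virt-manager
    # Archlinux
    sudo pacman -S virt-manager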

We then need to convert our whole QEMU CLI startup script to an XML representation:
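For example by dumping the fully expanded command line into a single-line file (a sketch):

    echo "/usr/bin/qemu-system-x86_64 $OPTS" > vm.args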

Already at this point I had to play around with the parameters since virsh was not able to convert all of them correctly. Once you have created the “vm.args” file though, you can create an XML file by doing:
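    virsh domxml-from-native qemu-argv vm.args > vm.xml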

You can then load this XML into the virt-manager by doing:
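    virsh define vm.xml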

Further, having installed libvirt and virt-manager might require you to set up some privileges:

Apparmor settings:

then restart Apparmor with:
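    sudo systemctl restart apparmor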

QEMU permissions (make sure the number for the VFIO device in the last line is correct):
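A sketch of the relevant part of “/etc/libvirt/qemu.conf”:

    # run libvirt guests as our dedicated user
    user = "qemu_vga"
    group = "kvm"
    cgroup_device_acl = [
        "/dev/null", "/dev/full", "/dev/zero",
        "/dev/random", "/dev/urandom",
        "/dev/ptmx", "/dev/kvm", "/dev/rtc", "/dev/hpet",
        "/dev/vfio/vfio", "/dev/vfio/1"
    ]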

Finally, you should have an entry in your virt-manager or virsh. Nevertheless, several bugs with passing through the GPU device at this point ruined the fun for me here.

Compile QEMU 2.5.0

  • Make sure all QEMU dependencies are met.
  • Lookup the correct make parameters for your distribution, e.g. on Launchpad like here.
  • Remove any existing QEMU packages with your package manager.
  • Compile everything and run “make install”.
    • Attention:
      There is no “make uninstall”. You can make use of checkinstall to create a deb package first and install this one instead.

Share directories and files

Install samba and use the QEMU parameter:
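For example (the shared directory path is a placeholder; “-smb” requires user-mode networking):

    OPTS="$OPTS -smb $HOME/qemu_share"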

Then you can access the shared directory via the network share at the location: \\10.0.2.4\qemu

Saving the machine state

Apparently there is no way, as in VMware or VirtualBox, to save the machine state. Executing “savevm test” in the QEMU console led to the following error:

This error can be fixed by converting the OVMF variables file into the qcow2 format:
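For example (afterwards the corresponding pflash drive entry has to use format=qcow2):

    qemu-img convert -f raw -O qcow2 OVMF_VARS-pure-efi.fd OVMF_VARS-pure-efi.qcow2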

However, this only led to a new error stating that the state of our passed-through VFIO device couldn’t be saved:

I guess the VFIO save state support is lacking. The question is whether there is any kind of feature request so far?

  • Update:
    The Reddit user /u/SxxxX has pointed out that this seems to be a very complex issue both on the GPU side and software side of things. Nevertheless, we can hope that a clever solution / workaround to this problem can and will be found.

Automatic setup script

In addition to this guide an automatic script for guidance through the complete setup would be awesome. Please leave a comment or contact me if you are interested in setting something up.

Discussion

Other virtualization software

  • Using VMware for passthrough is only possible when you use VMware vSphere and special Nvidia cards like Quadro cards.
    I could run VMware (unlike VirtualBox) with no problems in parallel with QEMU though.
  • There is unRAID which also uses QEMU and apparently can be used for passthrough purposes.
  • I have read some articles about XEN successfully supporting GPU passthrough, too.

Complicated solutions

During my research I have seen many things being done in a very complicated way. One example is that you don’t need pci-stub if your kernel already supports vfio-pci. Further, keep things simple by using short parameters like:

  • -hda (only if you don’t need performance gains through using virtio drivers)
  • -cdrom
  • -smb

Issues

Overall the VM is really feeling nice and responsive. However, there are some issues that I encountered so far and am trying to fix.

  • Mouse microlags:
    My mouse stutters in some games although it is passed through (also without Synergy). This happens with both Windows and Linux guests.
  • Running QEMU in parallel with VirtualBox is a no-go:
    This seems to be an issue since VirtualBox and KVM both need exclusive access to the CPU’s virtualization extensions. No solution in sight here?

Conclusion

This whole setup was definitely one of the most frustrating things I have done so far. However, we need more articles and scripts to decrease the global frustration level while making awesome things like this. I hope this guide helps you to reduce your stress level and to create your own setup.

In conclusion I can only say: It is absolutely worth the struggle. It can really make up for not using dual-boot and gets you a step further away from using Windows as the main OS.

You can find my newest and optimized start script/s here:

Github – Evonide’s GPU passthrough

Last but not least:
Many THANKS, KUDOS and SHOUTOUTS to all people who have created those awesome guides, have provided useful information or have struggled and felt the pain of setting up something like this. Finally, my biggest thanks go to my colleague Dario Weißer for doing the setup with Archlinux and helping me with many issues.

Please feel free to comment if you find any mistakes, if you want to share your own experiences or if you know about any other missing cool things.

About the author

Ruslan Habalov

Has been dealing with information security issues for more than 10 years. Likes sophisticated challenges and is additionally interested in Artificial General Intelligence.

37 Comments


  • qemu 3.1 (which ships with debian buster) has deprecated several statements used in these scripts.

    “-usb” now is an option for the machine, i.e.:

    OPTS=”$OPTS -machine pc-i440fx-2.7,accel=kvm,usb=on”

    and “-usbdevice” has new keyword and option syntax, i.e.:

    OPTS=”$OPTS -device usb-host,vendorid=0x046d,productid=0xc52b”

  • I may be pretty late, but can you get surround audio from this setup? I’ve been using JACK to solve this question.

    • Hi Romulo, unfortunately I haven’t tried out using surround audio for this setup. Please let us know if you find out any more information on this matter.

  • For the rootless part of the guide, have you considered or tried creating and using a system account instead of a regular user account? Just seems it would be more appropriate for that use case.

    • Hey BlueBit, unfortunately I don’t understand your question. From my point of view a system account is a root account. Using a root account for a rootless setup seems to be pointless. Do you mean some kind of chroot or something similar?

  • Using this guide I got everything working fine enough with windows 10. Didn’t actually use the rootless part yet.

    One thing I did do to avoid blacklisting the radeon driver:

    /etc/modprobe.d/radeon.conf

    softdep radeon pre: vfio-pci

    From: https://superuser.com/questions/1043330/how-to-use-vga-passthrough-in-ubuntu-15-10-with-two-amd-graphic-cards-using-the

    However I did have problems when trying to switch to the disk IO performance boost: using -drive instead of -hda, Windows 10 wouldn’t boot, I got the BIOS command line instead. Did I have to use this from install?

    And using:

    OPTS=”$OPTS -vga none”
    OPTS=”$OPTS -nographic”

    Prevents Windows 10 from loading, my graphics card doesn’t spring to life as it normally does with qxl enabled. Still trying to figure that one out.

    I had also tried using hugepages, but that just caused instability in my system resulting in odd system crashes.

    Using Fur-Mark my AMD Radeon R9 on 1920×1080, 8x MSAA, Dynamic camera and Post-FX averages 53 FPS at 83Deg C

    If memory serves I was getting about 58ish FPS on my old Windows 7 install

    • Thanks for your feedback dugite-code. You should be able to boot your disk normally after changing to the “-drive” option. I didn’t need to install anything during the Windows installation. As mentioned in the article installing the drivers after having Windows installed should be enough (in addition to setting the -drive option of course).
      I had a very similar issue with “nographic”. Strangely, changing/decreasing the memory size of the vm was enough to fix this problem. Currently, I am just tolerating a QXL window since this isn’t really bothering me and seems to have no visible impact on the vm performance. Unfortunately, I haven’t used hugepages yet so I can’t comment on that.

  • I’d really appreciate seeing Fedora based instructions for places like loading the modules.

    • Hey there sorry I currently have no time to research everything that is necessary for Fedora. However, the differences should be minor. Please feel free to contribute here in case you make progress.

  • For audio on nonroot, I had to add export PULSE_SERVER=unix:/tmp/pulse to my qemu wrapper script.

  • Great work, I just found out I’ve been working on a very similar configuration recently.

    ————-
    First we will need a QEMU image for our VM (unless you want to forward a SATA controller or use another storage device for which I won’t go into detail at this point).
    ————-
    This part… I passed through an entire SSD (with only the VM on it) with the virtio drivers. Sequential R/W performance is OK-ish but the 4K transfer rate is absolutely appalling (<1MB/s), about 20x lower than it should be. Do you think passing an entire SATA controller would give better performance?

    I read that enabling iothreads(aka x-data-plane) for the disk could really improve performance but I'm unable to do this through libvirt (I will try the QEMU-only option)…

    • Thanks Wei. I don’t think that passing through the complete SATA controller will make a big difference, but you should definitely try it out. I haven’t used direct storage passthrough for my storage devices that much yet. However, as you suggested adjusting virtio settings in the non-libvirt startup script seems to be the best approach here.

  • Hi there. Thanks for the guide, it helped me to convert from virt-manager to a qemu script, which offers much more flexibility.

    Problem: Everything is working great, passing through a Nvidia GTX960 to a Windows 10 guest, except that with a few minutes of use the video begins corrupting and tearing, eventually leading to a video card driver restart and general guest-OS Windows instability.

    Has anyone encountered this? It’s like the video memory isn’t getting cleared effectively, you can see artifacts from previous screens appearing within the tears.

    The Nvidia card looks good otherwise, and reports healthy in Windows (no strange error code 43 or 31 or anything), but the system is so unstable that it’s pretty unusable.

    Here is a paste of the qemu script i’m using, which is just a modified version of your starter script. http://pastebin.com/raw/kRty2SH4

    Works well, besides the graphics corruption.

    • Hey seg, thanks for posting.
      During testing and until now I had no problems with my graphics card whatsoever. To me this rather sounds like a hardware problem. You should try to benchmark it further on your host system and you should also maybe check the HDMI/DP cable. Further, you could try your graphics card on a Linux guest just to exclude any driver errors in Windows.

  • Really great guide!
    I just have a question regarding CPU usage. That is, when running my guest machine under Ubuntu, the CPU usage is very high on host.
    This sounds about right if I were running something very CPU stressful but this happens when all I am doing is watching a Youtube video. Does this seem right to you, or is it just me? Thank you!

    • Thanks Kyler. That’s definitively unusual. As you stated, if your guest machine is idling there shouldn’t be much CPU load on your host. Either you have a “weak” CPU or something else isn’t working as it should.

  • Now that you say you get 98-99%, I think the problem in my case may be the old and noisy Samsung 80gb sata I hdd (from my first pc, around 2006) that I used for the gpu passthrough experiment, while my native windows sits on a modern WD caviar blue.

    • The performance was pretty much the same even after moving the image to a modern HDD.
      I’ve been thinking lately, what if I get the performance hit because of the pci-e lanes?
      I have the asus Z87-K, which has one pcie 3.0 x16 slot (x16 mode) and one pcie 2.0 x16 (x4 mode). I placed the gtx750 ti in the 3.0 slot and the R7 240 in the 2.0 slot.
      My Xeon 1231 v3 has only 16 lanes (1×16, 2×8, 1×8/2×4), so my gtx750 ti actually runs in x8 mode and the r7 240 in x4 mode. Could this configuration affect my performance, or is my 750 ti not powerful enough in order to feel the difference between x16 and x8 mode?
      Also, I only have 8 gigs of ram, so I can only allocate 6 gigs max to the vm. I don’t know if this lowers the performance in cinebench gpu test and metro last light, but I’l definitely buy another 8 gb stick.

  • Just as a review

    I got this thing working on Xubuntu 15.10 (but the sound worked only with the ac97 driver method).
    My system:
    Xeon E3 1231-v3 (quad-core HT, 3.4Ghz)
    Asus Z87-K
    Crucial Balistix Sport 8gb 1600mhz (another 8gb soon to be added)
    Asus R7 240 2gb ddr3 – Linux
    EVGA gtx750 ti – Windows 10 VM

    I had to install the proprietary nvidia drivers and blacklist them (don’t really know if necessary blacklisting them, I did it to be sure) in order for the vfio to grab my gpu.

    I gave the VM 6gb ram and 3 cores plus hyperthreading and installed cinebench. Did the gpu benchmark and obtained 92 fps, while in the native windows 10 I obtained 110 fps, so I get 80% performance on the VM.
    Keep in mind that I haven’t done yet any performance tweak, memory permissions, nothing, just installed the VM, fixed the sound and that’s it.

    • Thanks for the review. That’s interesting and unfortunately I can’t really explain why you have such a drastic performance drawback. With my setup and 3DMark I get about 98-99% of the points compared to the results on a native setup even without doing any optimizations.

  • Hey, I got it working, only problem is I made the qcow2 file only 20 gb, since this is only a test, and installed windows 10, which occupied nearly all of it. Can I extend the qcow2 file without reinstalling windows or I need to start all over again?

  • This guide was very useful to make sense of all of this, thanks.
    Though I think it may not work on my system… I can install Windows10 in the qxl dialogue and my GTX 980 will show up in device manager named “NVIDIA GeForce GTX 980” with device status: “A driver(service)for this device has been disabled. An alternate driver may be providing this functionality. (Code 32)” does not detect any other displays, installing the driver causes windows to bluescreen with system_service_exception and system_thread_exception until I disable my GPU passthrough in the bootscript, boot in, uninstall drivers from add/remove programs, and then re-enable passthrough in boot script

    • Hey David,
      thank you for your comment. A quick search showed that code 32 seems to be GPU passthrough unrelated. At least I couldn’t find any comments regarding this error in connection to passthrough setups. Hence, it might be the case that the issue can be found elsewhere. Please keep us updated if you make any progress on this problem.

    • the solution was to use OVMF. The problem I was having with OVMF before was that no drives would show up, no boot devices. This was fixed by getting a different OVMF and running with -pflash OVMF.fd

  • Hi and thanks so much for this guide. It took me about 30-40 minutes to setup my Windows 7 with passthrough following your steps one by one. Everything worked fine, even sound and mouse/keyboard. This VM lets me play games like NFS Most Wanted 2012 and Rivals or Dying Light with max settings without any problems whatsoever. Also no stability problems at all.

    Hardware is Asus Z170 MoBo, i7-6700k Skylake CPU with Noctua Cooling, 32GB DDR4 RAM, Geforce 750Ti and SSD-only drives.

    Works fantastic.

    Again, thanks !

    • Hello Normy,

      it is great to hear that everything worked out fine on the first try! My setup also runs perfectly stable. I can strongly recommend checking out the performance tweaks section. For example using VIRTIO SCSI is a must do when using SSDs :).

    • Hi Ruslan,

      Yes, thats exactly what I’m doing right now: optimizing 😉
      I’ll let you know how it went.

  • Really nice how-to, I spent my last days optimizing my VM. I found a problem with the virtio-scsi drive: it was not possible to install Windows 10 with the latest driver version, I had to use the virtio-win-0.1.110.iso

    • Thanks I have just recently successfully tested the virtio drivers “virtio-win-0.1.102.iso” from the Fedora page (https://fedoraproject.org/wiki/Windows_Virtio_Drivers#Direct_download as linked in the article) with Windows 10. Here I just had to select “Win 8.1” during the manual driver installation in the device manager (this was post Windows installation). In my tests the speed performance jumped from roughly 300 MB/s reading speed to about 600-700 MB/s with a qcow2 image file. Currently, the latest virtio-scsi driver is provided by “virtio-win-0.1.112.iso” though.

  • There are two things that may replace snapshots for some use-cases. First, if you are going to do something potentially harmful in the VM, you can create a QCOW2 overlay image on top of your existing disk (-b option): all reads fall through to the real disk (HDD, partition or other QCOW image), while writes go into the overlay. Also you can always just stop/cont the VM anytime, and when it’s not used you can even unload it from RAM on the host just by changing CGroup rules.

  • About your issues with the mouse. If you have lags when passing a real USB device this is a really weird issue, but the first thing worth trying is to tell QEMU to emulate a USB mouse/keyboard instead of the PS/2 ones it uses by default: “-device usb-kbd -device usb-mouse”.

    Also, unless you want extra mouse buttons working, you can be fine without passing an actual USB device and use the QEMU window instead. If you run with “-vga none” mouse capture via the window is disabled, but you can run with “-vga none -device qxl” instead so mouse capture keeps working while the emulated display device won’t work.

    And yeah, another option to improve VM responsiveness is to recompile the kernel with “Low Latency Desktop” as the Preemption Model and “1000 Hz” for the Timer Frequency.

    My current configuration available on gist:
    https://gist.github.com/ArseniyShestakov/dc152d080c65ebaa6781
    There is also dirty script to pass USB mouse to VM in runtime on hotkey:
    https://gist.github.com/ArseniyShestakov/fcd91b9235f0c2b0cff8

    • I would be very curious if this works as well. I am having the same issue in spite of the fact I am forwarding an entire USB host PCI device which all my inputs are plugged into.
      The lag is not enough to cause issues with regular gaming, but on my Oculus Rift it is very noticeable.
      Possibly related: the USB DAC on the Rift and an external DAC both have noticeable static that they don’t have natively.

      Here is my script/config:
      https://github.com/lrvick/dotfiles/blob/master/.local/bin/win
      3DMark Benchmark is pretty win at least. Only 0.64% lower than native:
      http://www.3dmark.com/compare/fs/8402818/fs/8403766

    • Thanks for posting lrvick.
      Unfortunately, I haven’t tried out SXX’s suggestions yet (fixing a problem like this can be quite time consuming).
      Please be kind enough to post again if you find a working solution to this problem.