Qemu setting "-cpu host" seems broken with Windows vms
From: xtec
Subject: Qemu setting "-cpu host" seems broken with Windows vms
Date: Thu, 28 Dec 2023 11:45:18 -0600
User-agent: Roundcube Webmail/1.6.0
I noticed something weird when using "-cpu host" with Windows vms.
Note that I always use it together with ",hv_passthrough".
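For reference, the full command line I use looks roughly like the sketch below (the disk image path, memory size, and machine options are placeholders, not my exact setup):

```shell
# Hypothetical invocation; only the -cpu option matters for this report.
qemu-system-x86_64 \
  -machine q35,accel=kvm \
  -cpu host,hv_passthrough \
  -m 8G \
  -drive file=win11.qcow2,if=virtio
```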
First, performance: for years now, from before qemu 6.2 up to the latest 8.2, win10 and win11 VMs have always run slower than expected. This could be noticed by comparing boot/startup times between the VM and a bare-metal installation, but I measured it specifically when installing Windows cumulative updates through Windows Update. In the VM, from downloading to finishing the reboot, it always took circa 1.5 hours, versus just 40 minutes on bare metal.
Second, and more recently, the newer Windows 11 23h2 seems to have a big problem with "-cpu host".
When trying to update from 22h2 to 23h2 I got either a black screen or a BSOD after rebooting.
I got the same result when trying to install 23h2 from scratch.
This happened on both qemu 7.1 and 8.2.
Did a long search, and finally found the cause which also solved the
problem for me:
https://forum.proxmox.com/threads/new-windows-11-vm-fails-boot-after-update.137543/
I found similar problems and similar solution in other forums as well.
So in my case, the physical host CPU is an 11th-gen Intel Core; I tried libvirt's "virsh capabilities" to see which qemu CPU model matched it best, and for some reason it reported Broadwell instead of the newer Skylake...
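To check what libvirt reports for the host, something like the following can be used (the exact XML layout may vary by libvirt version):

```shell
# The <model> element under <host><cpu> in the capabilities XML is the
# named CPU model that libvirt matched the physical CPU to.
virsh capabilities | grep -A 3 '<cpu>'
```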
Anyway, I tried "-cpu <Broadwell_model>,hv_passthrough", and this solved *both* problems: performance finally matched bare metal in all aspects, and the Windows 23h2 problem was gone.
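Concretely, the working invocation looked something like this (Broadwell-noTSX-IBRS is just one of several Broadwell variants qemu offers; substitute whichever one "virsh capabilities" suggested):

```shell
# Same placeholder machine/disk options as before; only -cpu changed.
qemu-system-x86_64 \
  -machine q35,accel=kvm \
  -cpu Broadwell-noTSX-IBRS,hv_passthrough \
  -m 8G \
  -drive file=win11.qcow2,if=virtio
```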
On IRC, it was suggested to try "-cpu host" and "disabling CPU bits" one
by one until finding the culprit. But I don't know how to do this...
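As far as I understand the suggestion, it would go roughly like this (the specific feature names below are only examples, not the ones anyone suspects):

```shell
# List the CPU models and feature flag names qemu knows about:
qemu-system-x86_64 -cpu help

# Then keep "-cpu host" but mask individual features with a leading "-",
# retrying the 23h2 install after each change until the culprit is found:
qemu-system-x86_64 -cpu host,hv_passthrough,-hle,-rtm ...
```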
Could someone look into this?
Thanks.