Fix starting HVMs with PCI devices during early system boot and later

1. Make sure VMs are started only after dom0's actual memory usage has been
reported to qmemman; otherwise dom0 keeps holding 4GB, even if only a little
over 1GB is needed at that time.
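
Concretely, this is an ordering dependency on the dom0 meminfo-writer service
in the VM startup unit (unit names taken from the hunk below); the resulting
[Unit] section looks roughly like this:

    [Unit]
    Description=Start Qubes VM %i
    Before=systemd-user-sessions.service
    After=qubesd.service qubes-meminfo-writer-dom0.service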

2. Request only vm.memory MB from qmemman, instead of vm.maxmem. While HVMs
with PCI devices indeed do not support populate-on-demand, that is already
handled in the libvirt XML.
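
A minimal sketch of the resulting request path, assuming a hypothetical
request_startup_memory() helper and a request_memory()/close() API on
QMemmanClient (only the QMemmanClient constructor appears in the hunk below):

    import qubes.qmemman.client

    def request_startup_memory(vm, stubdom_mem=0):
        # Illustrative helper (not part of this commit): after the change,
        # the amount requested from qmemman is derived from vm.memory (MB)
        # and no longer bumped to vm.maxmem; populate-on-demand for HVMs
        # with PCI devices is left to the libvirt XML.
        mem_required = int(vm.memory + stubdom_mem) * 1024 * 1024  # bytes
        client = qubes.qmemman.client.QMemmanClient()
        # request_memory()/close() are assumed to behave as in qubesvm.py
        if not client.request_memory(mem_required):
            client.close()
            raise RuntimeError('not enough memory to start {}'.format(vm.name))
        return client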

The latter often causes VM startup to fail on systems with 8GB of memory,
because maxmem is 4GB there and, with dom0 keeping the other 4GB (see
point 1), there is not enough memory to start any such VM.
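
Rough arithmetic for that case (the vm.memory value below is assumed purely
for illustration):

    # 8 GB system from the paragraph above; all values in MB
    total_ram = 8 * 1024
    dom0_hold = 4 * 1024   # held by dom0 until its usage is reported (point 1)
    vm_maxmem = 4 * 1024   # maxmem on such a system
    vm_memory = 1 * 1024   # assumed vm.memory value, for illustration only

    free_ram = total_ram - dom0_hold     # 4096 MB nominally left
    print(free_ram - vm_maxmem)          # 0 MB slack: stubdomain/Xen overhead
                                         # does not fit, so the old request fails
    print(free_ram - vm_memory)          # 3072 MB slack with the new request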

Fixes QubesOS/qubes-issues#3462
Author: Marek Marczykowski-Górecki
Date:   2018-01-29 22:57:32 +01:00
parent 2a8fd9399e
commit 86026e364f
GPG Key ID: 063938BA42CFA724
2 changed files with 1 addition and 5 deletions


@@ -1,7 +1,7 @@
 [Unit]
 Description=Start Qubes VM %i
 Before=systemd-user-sessions.service
-After=qubesd.service
+After=qubesd.service qubes-meminfo-writer-dom0.service

 [Service]
 Type=oneshot


@@ -1244,10 +1244,6 @@ class QubesVM(qubes.vm.mix.net.NetVMMixin, qubes.vm.BaseVM):
             stubdom_mem = 0

         initial_memory = self.memory
-        if self.virt_mode == 'hvm' and self.devices['pci'].persistent():
-            # HVM with PCI devices does not support populate-on-demand on
-            # Xen
-            initial_memory = self.maxmem
         mem_required = int(initial_memory + stubdom_mem) * 1024 * 1024
         qmemman_client = qubes.qmemman.client.QMemmanClient()