Fix starting HVMs with PCI devices at early system boot and later

1. Make sure VMs are started only after dom0's actual memory usage is
reported to qmemman; otherwise dom0 keeps holding 4GB, even though just a
little over 1GB is needed at that time.

2. Request only vm.memory MB from qmemman, instead of vm.maxmem. HVMs
with PCI devices indeed do not support populate-on-demand, but that is
already handled in the libvirt XML.

The latter often caused VM startup to fail on systems with 8GB of memory,
because maxmem is 4GB there and, with dom0 holding the other 4GB (see
point 1), there is not enough memory left to start any such VM.
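
A quick back-of-the-envelope sketch in Python of why 8GB systems hit this;
all numbers below are illustrative assumptions based on the description
above, not measurements:

    # illustrative values only (assumed, not measured)
    total_ram    = 8192  # MB on an 8GB system
    xen_overhead = 200   # MB reserved by Xen/stubdoms (rough guess)
    dom0_hold    = 4096  # MB dom0 keeps until its usage is reported (point 1)
    old_request  = 4096  # MB: vm.maxmem was requested before this change (point 2)
    new_request  = 400   # MB: vm.memory is requested after this change

    # before: the request cannot be satisfied, VM startup fails
    assert dom0_hold + old_request > total_ram - xen_overhead
    # after: the request fits even before dom0 usage is reported
    assert dom0_hold + new_request <= total_ram - xen_overhead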

Fixes QubesOS/qubes-issues#3462
This commit is contained in:
Marek Marczykowski-Górecki 2018-01-29 22:57:32 +01:00
parent 2a8fd9399e
commit 86026e364f
No known key found for this signature
GPG key ID: 063938BA42CFA724
2 changed files with 1 addition and 5 deletions

@@ -1,7 +1,7 @@
 [Unit]
 Description=Start Qubes VM %i
 Before=systemd-user-sessions.service
-After=qubesd.service
+After=qubesd.service qubes-meminfo-writer-dom0.service
 [Service]
 Type=oneshot

@@ -1244,10 +1244,6 @@ class QubesVM(qubes.vm.mix.net.NetVMMixin, qubes.vm.BaseVM):
         stubdom_mem = 0
         initial_memory = self.memory
-        if self.virt_mode == 'hvm' and self.devices['pci'].persistent():
-            # HVM with PCI devices does not support populate-on-demand on
-            # Xen
-            initial_memory = self.maxmem
         mem_required = int(initial_memory + stubdom_mem) * 1024 * 1024
         qmemman_client = qubes.qmemman.client.QMemmanClient()
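
For context, a minimal sketch of how the computed mem_required is then
handed to qmemman. It only roughly mirrors the method this hunk touches;
the helper name and the exact error handling are assumptions for
illustration, not part of this diff:

    import qubes.exc
    import qubes.qmemman.client

    def request_memory_sketch(vm, stubdom_mem=0):
        # after this change only vm.memory (not vm.maxmem) is requested
        mem_required = int(vm.memory + stubdom_mem) * 1024 * 1024
        qmemman_client = qubes.qmemman.client.QMemmanClient()
        try:
            # ask qmemman to free up that much memory for the new domain
            got_memory = qmemman_client.request_memory(mem_required)
        except IOError:
            got_memory = False
        if not got_memory:
            qmemman_client.close()
            raise qubes.exc.QubesMemoryError(vm)
        # keep the connection open; the caller closes it once the domain
        # has actually been created
        return qmemman_client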