qmemman: request VMs balloon down with 16MB safety margin

It looks like the Linux balloon driver does not always precisely respect
the requested target memory, but performs some rounding. Also, in some
cases (HVM domains), the VM does not see all the memory that Xen has
assigned to it - there are some additional Xen pools for internal usage.
Include a 16MB safety margin in memory requests to account for those two
things. This will avoid setting the "no_response" flag for most VMs.

QubesOS/qubes-issues#3265
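The margin arithmetic can be sketched as follows. This is an illustrative helper, not the actual qmemman code: the function name and constant are made up, but the computation mirrors the patched `xs.write` call, which takes a byte value, converts it to KB for xenstore, and subtracts a 16MB margin (16 * 1024 KB).

```python
# 16 MB expressed in KB; xenstore memory/target values are in KB.
# Constant name is illustrative - the patch inlines "16 * 1024".
MEM_MARGIN_KB = 16 * 1024

def target_with_margin(val_bytes):
    """Hypothetical helper: KB value written to memory/target,
    reduced by the 16MB margin for balloon-driver rounding and
    Xen-internal memory pools."""
    return int(val_bytes / 1024 - MEM_MARGIN_KB)

# e.g. a 400 MB request becomes a 384 MB xenstore target:
val = 400 * 1024 * 1024
print(target_with_margin(val))  # 393216 KB = 384 MB
```

The margin is subtracted only from the xenstore target the guest balloons toward; `domain_setmaxmem` and `domain_set_target_mem` still use the unadjusted value, so Xen's own accounting keeps the full allocation.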
Marek Marczykowski-Górecki 2018-01-11 03:35:46 +01:00
parent bf4306b815
commit 4bca631350


@@ -161,10 +161,13 @@ class SystemState(object):
         # apparently xc.lowlevel throws exceptions too
         try:
             self.xc.domain_setmaxmem(int(id), int(val/1024) + 1024) # LIBXL_MAXMEM_CONSTANT=1024
-            self.xc.domain_set_target_mem(int(id), int(val/1024))
+            self.xc.domain_set_target_mem(int(id), int(val / 1024))
         except:
             pass
-        self.xs.write('', '/local/domain/' + id + '/memory/target', str(int(val/1024)))
+        # VM sees about 16MB memory less, so adjust for it here - qmemman
+        # handle Xen view of memory
+        self.xs.write('', '/local/domain/' + id + '/memory/target',
+            str(int(val/1024 - 16 * 1024)))
         # this is called at the end of ballooning, when we have Xen free mem already
         # make sure that past mem_set will not decrease Xen free mem