This value needs to be set to the actual static max for qmemman to work
properly. If it is set higher than the real static-max, qmemman will try to
assign more memory to dom0 than dom0 can use - that memory will be wasted.
Since this script is executed before any VM is started, simply
take the current dom0 memory usage instead of parsing the dom0_mem Xen
argument. There doesn't seem to be a nice API to get this value from Xen
directly.
Fixes QubesOS/qubes-issues#4891
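A minimal sketch of the idea, with an illustrative helper name (the
approximation assumed here is that MemTotal in dom0's /proc/meminfo
reflects the memory currently populated for dom0):

```python
def dom0_current_memory():
    # Report dom0's currently populated memory (in bytes) to qmemman,
    # instead of parsing the dom0_mem= Xen command line option.
    with open('/proc/meminfo') as meminfo:
        for line in meminfo:
            if line.startswith('MemTotal:'):
                return int(line.split()[1]) * 1024  # kB -> bytes
    raise RuntimeError('MemTotal not found in /proc/meminfo')
```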
qubes-vm@.service would already cause this ordering, but not every user
has any VMs with autostart=True.
Also needed to maybe fix QubesOS/qubes-issues#3149 at some point.
When expiring rules are present, the firewall needs to be reloaded
when those rules expire. Previously a systemd timer was used to trigger
this action, but since we have our own daemon now, the timer isn't
necessary anymore - use the daemon for that.
Additionally, automatic removal of expired rules was completely broken
in R4.0.
Fixes QubesOS/qubes-issues#1173
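Roughly, the daemon can arm a timer in its own event loop for the nearest
expiry; a sketch under assumed names (rules, rule.expire and
reload_firewall are illustrative, not the actual daemon interface):

```python
import time

def schedule_expiry_reload(loop, rules, reload_firewall):
    # Find the earliest expiry timestamp among the current rules and arm a
    # single timer for it; the callback reloads the firewall, which drops
    # the expired rules and can re-schedule itself for the next expiry.
    expires = [rule.expire for rule in rules if getattr(rule, 'expire', None)]
    if not expires:
        return None
    delay = max(0, min(expires) - time.time())
    return loop.call_later(delay, reload_firewall)
```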
1. Make sure VMs are started after dom0's actual memory usage is reported
to qmemman; otherwise dom0 will hold on to 4GB even if just a little over
1GB is needed at that time.
2. Request only vm.memory MB from qmemman, instead of vm.maxmem (see the
sketch below). While HVMs with PCI devices indeed do not support
populate-on-demand, this is already handled in the libvirt XML.
The latter (requesting vm.maxmem) would often cause VM startup to fail on
systems with 8GB of memory, because maxmem is 4GB there, and with dom0
keeping the other 4GB (see point 1) there is not enough memory to start
any such VM.
Fixes QubesOS/qubes-issues#3462
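A rough sketch of point 2 (the helper name and attribute access are
illustrative, not the actual qmemman client API):

```python
def initial_memory_request(vm):
    # Ask qmemman only for the initial allocation (vm.memory, in MB);
    # whether populate-on-demand is used (and that it is not, for HVMs
    # with PCI devices) is already expressed in the generated libvirt
    # XML, so requesting the full vm.maxmem up front is unnecessary.
    return vm.memory * 1024 * 1024  # MB -> bytes
```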
Forward-ported from qubes-core-agent-linux:
commit aad6fa6d190d24393e326a4c2ff7ebc3b5921641
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date: Sat Sep 30 04:56:02 2017 +0200
Hint shellcheck where to look for sourced files, if run in the repository.
This will make it easier to run shellcheck from the repository.
The service needs to be started after qubesd and qubes-qmemman to be
able to start domains. The same applies on shutdown - it should be
ordered to stop before qubesd is shut down.
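In unit-file terms this is an ordering dependency roughly like the
following (unit names as described above; systemd stops units in the
reverse of their start ordering, so the same After= also yields the
required shutdown order):

```ini
[Unit]
# started after (and therefore stopped before) qubesd and qubes-qmemman
After=qubesd.service qubes-qmemman.service
```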
The qubesd service is critical for Qubes usage, so even if a critical
error crashes the whole service, make sure it is restarted.
Set the restart delay to 1s (default 100ms), to allow other services to
restart too, if the crash was caused by some other service (like a
libvirtd crash).
QubesOS/qubes-issues#2960
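In the unit file this corresponds to something along these lines (the
exact Restart= policy is assumed here; the 1s delay is the one described
above):

```ini
[Service]
Restart=on-failure
# default RestartSec is 100ms; 1s gives other crashed services
# (e.g. libvirtd) a moment to come back first
RestartSec=1s
```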
Since qubesd properly handles chained startup of sys-net -> sys-firewall
etc., we don't need a separate service to explicitly start the netvm earlier.
Fixes QubesOS/qubes-issues#2533
This service currently does more harm (desynchronizing the libvirt state
from reality) than good. Since we have qubesd, we may come back to
implementing it properly using libvirt events.
All of this has either been migrated to core3, or is not needed anymore.
There is still a qvm-tools directory with a few tools that need to be
migrated, or installed as is.
This way it will work regardless of where the qrexec-policy tool is
called from (in most cases - from a system service, as root).
This is also a very similar architecture to what we'll need when moving
to a GUI domain - there the GUI part will also be separated from the
policy evaluation logic.
QubesOS/qubes-issues#910
Prior to this commit, qubes-core.service inherited systemd's default
stop timeout of 90 seconds. With slow hard disk drives, this caused the
dom0 shutdown sequence to proceed even if some VMs were still not fully
shut down at that point.
This commit aims to avoid this issue by setting the service stop timeout to
180 seconds.
Signed-off-by: M. Vefa Bicakci <m.v.b@runbox.com>
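In unit-file terms the change amounts to a stop-timeout override like
this (directive shown for illustration):

```ini
[Service]
TimeoutStopSec=180
```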
Have a dm-snapshot of a dm-snapshot. The first layer is to "cache" changes
done by the base volume holder (the TemplateVM in case of root.img), the
second layer is to hold changes done by the snapshot volume holder (the
AppVM in case of root.img). In case of Linux VMs the second layer is
normally done inside the VM (the original volume is exposed read-only).
But this does not work for non-Linux VMs, or even Linux VMs without
qubes-specific startup scripts.
This is the first part of the change - the actual construction of the two
dm-snapshot layers, not yet plugged into the core scripts.
QubesOS/qubes-issues#2256
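A rough sketch of the two layers using dmsetup (the device paths, chunk
size and the helper itself are illustrative, not the actual script):

```python
import subprocess

def create_two_layer_snapshot(name, size_sectors, base_dev, owner_cow, user_cow):
    # Layer 1: base image plus the changes made by the volume owner
    # (the TemplateVM in the root.img case), kept in owner_cow.
    subprocess.check_call([
        'dmsetup', 'create', name + '-origin', '--table',
        '0 {} snapshot {} {} P 256'.format(size_sectors, base_dev, owner_cow)])
    # Layer 2: a snapshot of layer 1 holding the changes made by the
    # snapshot user (the AppVM in the root.img case), kept in user_cow.
    subprocess.check_call([
        'dmsetup', 'create', name, '--table',
        '0 {} snapshot /dev/mapper/{}-origin {} P 256'.format(
            size_sectors, name, user_cow)])
```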
Otherwise qvm-create-default-dvm may fail to include it in
saved-cows.tar, which would lead to the DispVM not being really disposable.
Fixes QubesOS/qubes-issues#2200
This is no longer necessary since volatile.img is formatted inside the
VM. This also fixes DispVM creation if the user sets a restrictive umask
for root. Maybe related to #2200.
The following list is bollocks. There were many, many more.
Conflicts:
core-modules/003QubesTemplateVm.py
core-modules/005QubesNetVm.py
core/qubes.py
core/storage/__init__.py
core/storage/xen.py
doc/qvm-tools/qvm-pci.rst
doc/qvm-tools/qvm-prefs.rst
qubes/tools/qmemmand.py
qvm-tools/qvm-create
qvm-tools/qvm-prefs
qvm-tools/qvm-start
tests/__init__.py
vm-config/xen-vm-template-hvm.xml
This commit took 2 days (26-27.01.2016) and put our friendship to the test.
--Wojtek and Marek
systemd-user-sessions.service is specifically for that; do not use a hack
(plymouth-quit.service), which doesn't work when that service is
disabled.
Fixes QubesOS/qubes-issues#1250
It may happen (especially when a VM doesn't shut down cleanly and needs
to be killed) that qubesdb-daemon will not notice the VM shutdown
immediately. Normally it would stop after a 60s timeout, but speed this
up in case of system shutdown.
QubesOS/qubes-issues#1425
This is part of fixing qvm-start.
qmemman was moved with minimal changes, mainly to module names.
Also moved the function parsing human-readable sizes from core2. This
function is wrong, because it treats k/M/G as 1024-based, but leave it
for now.
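For reference, the behaviour in question is roughly the following (a
sketch, not the verbatim core2 function):

```python
import re

_UNITS = {'': 1, 'K': 1024, 'M': 1024 ** 2, 'G': 1024 ** 3}

def parse_size(size):
    # Note: deliberately keeps the core2 quirk of treating k/M/G as
    # 1024-based rather than 1000-based.
    match = re.match(r'^\s*(\d+)\s*([kKmMgG]?)(i?[bB])?\s*$', size)
    if match is None:
        raise ValueError('invalid size: {!r}'.format(size))
    value, unit = match.group(1), match.group(2).upper()
    return int(value) * _UNITS[unit]
```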